Hi, I have a stored proc -
takes a long time to execute, so recompilation isn't an issue.
I pass a comma-separated varchar param to the stored procedure:
exec sp_somename @CSV = '1,2,3'
If I build the SQL statement in the SP and then execute the statement
dynamically:
EXEC ('SELECT * FROM table1, ... other tables WHERE ID IN (1,2,3,4) ... rest
of sql')
as opposed to this,
having populated a temp table #t with the @CSV values:
SELECT * FROM
table1, #t, ... other tables
WHERE
table1.ID = #t.ID
... rest of sql
I used both a function and an SP to populate #t; both took negligible time.
The dynamic SQL performs twice as fast as the temp table/variable method.
I'm surprised - I thought SELECT ... IN (...) was converted into a join and would
be just as fast.
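To make the two variants concrete, here is a minimal sketch (the procedure and
table names are assumed for illustration, and note that splicing an unvalidated
@CSV into the statement text is a SQL injection risk):

```sql
-- Variant 1: dynamic SQL - the CSV list is spliced into the statement text.
CREATE PROCEDURE usp_Demo_Dynamic @CSV VARCHAR(8000)
AS
    EXEC ('SELECT * FROM table1 WHERE table1.ID IN (' + @CSV + ')');
GO

-- Variant 2: split the CSV into a temp table first, then join to it.
CREATE PROCEDURE usp_Demo_TempTable @CSV VARCHAR(8000)
AS
    CREATE TABLE #t (ID INT NOT NULL);
    -- ... populate #t from @CSV (split function, WHILE loop, etc.) ...
    SELECT t1.* FROM table1 AS t1 JOIN #t ON t1.ID = #t.ID;
GO
```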
Don't like dynamic SQL if I can avoid it.|||Do you have any indexes on the temp table? If not, would one help on the
joined column?
David Gugick
Imceda Software
www.imceda.com
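A sketch of the kind of index being suggested, assuming #t holds the integer
IDs being joined on (the index name is made up):

```sql
-- Declare the key at creation time (gives a clustered index for free)...
CREATE TABLE #t (ID INT NOT NULL PRIMARY KEY CLUSTERED);

-- ...or, on an already-existing temp table, index it after it is populated:
-- CREATE CLUSTERED INDEX IX_t_ID ON #t (ID);
```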
"Yitzak" <terryshamir@bob.com> wrote in message
news:prG2e.903$VN1.310@newsfe1-win.ntli.net...
|||I thought because the table was so small - at most 10 rows - there would
be no benefit, but I'll give it a go.
Thanks
"David Gugick" <davidg-nospam@imceda.com> wrote in message
news:#DuCivYNFHA.3192@TK2MSFTNGP10.phx.gbl...
|||Put some indexes on and that did improve performance.
Changed some other stored procedures too, but dynamic SQL still wins out. That's
not to mention its clear advantage that you can easily optimise for no IDs
(e.g. @param = '') by not including the IN clause in the SQL string
executed.
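That optimisation can be sketched like this, inside a proc that takes the
@CSV parameter (variable and table names assumed):

```sql
DECLARE @sql VARCHAR(8000);
SET @sql = 'SELECT * FROM table1 WHERE 1 = 1';  -- base query
IF @CSV <> ''                                   -- only filter when IDs were passed
    SET @sql = @sql + ' AND table1.ID IN (' + @CSV + ')';
EXEC (@sql);
```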
Just don't like dynamic SQL - sp_depends won't work, and there are all the security issues -
but looks like I gotta use it.|||>> I pass a comma separated varchar param to the stored procedure. <<
Ever hear of First Normal Form (1NF)?
Pardon the fact that this "cut & paste" is in Standard SQL and not your
dialect. Passing a list of parameters to a stored procedure can be done
by putting them into a string with a separator. I like to use the
traditional comma. Let's assume that you have a whole table full of
such parameter lists:
CREATE TABLE InputStrings
(keycol CHAR(10) NOT NULL PRIMARY KEY,
input_string VARCHAR(255) NOT NULL);
INSERT INTO InputStrings VALUES ('first', '12,34,567,896');
INSERT INTO InputStrings VALUES ('second', '312,534,997,896');
...
This will be the table that gets the outputs, in the form of the
original key column and one parameter per row.
CREATE TABLE Parmlist
(keycol CHAR(10) NOT NULL PRIMARY KEY,
parm INTEGER NOT NULL);
It makes life easier if the lists in the input strings start and end
with a comma. You will also need a table of sequential numbers - a
standard SQL programming trick. Now, the real query, in SQL-92 syntax:
INSERT INTO ParmList (keycol, parm)
SELECT keycol,
       CAST (SUBSTRING (I1.input_string
                        FROM S1.seq
                        FOR MIN(S2.seq) - S1.seq - 1)
             AS INTEGER)
  FROM InputStrings AS I1, Sequence AS S1, Sequence AS S2
 WHERE SUBSTRING (',' || I1.input_string || ',' FROM S1.seq FOR 1) = ','
   AND SUBSTRING (',' || I1.input_string || ',' FROM S2.seq FOR 1) = ','
   AND S1.seq < S2.seq
 GROUP BY I1.keycol, I1.input_string, S1.seq;
The S1 and S2 copies of Sequence are used to locate bracketing pairs of
commas, and the entire set of substrings located between them is
extracted and cast as integers in one non-procedural step. The trick
is to be sure that the right hand comma of the bracketing pair is the
closest one to the first comma.
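The query assumes a Sequence table of consecutive integers covering at least
the padded string length (here 255 characters plus the two added commas). One
way to populate it, sketched in T-SQL (the doubling loop is just one
convenient trick; any source of numbers works):

```sql
CREATE TABLE Sequence (seq INTEGER NOT NULL PRIMARY KEY);
INSERT INTO Sequence (seq) VALUES (1);
-- Double the row count each pass until it covers VARCHAR(255) + 2 commas.
WHILE (SELECT MAX(seq) FROM Sequence) < 257
    INSERT INTO Sequence (seq)
    SELECT seq + (SELECT MAX(seq) FROM Sequence) FROM Sequence;
```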
You can then write:
SELECT *
  FROM Foobar
 WHERE x IN (SELECT parm FROM Parmlist WHERE keycol = :something);
Of course the right way to do this would be with a base table that
holds the list, or a longer parameter list:
WHERE x IN (@p1, COALESCE (@p2, @p1), ... COALESCE (@p99, @p1));
The reason for the COALESCE() is to guarantee you have no NULLs. You
must have a value for @p1; all the other parameters default to NULL.
Alternatively, you can have a local variable, @p0, which is set to some
impossible value as a sentinel.|||Actually I executed
set @param = '1,2,3'
insert into #t
exec ('SELECT ' + REPLACE(@param, ',', ' UNION ALL SELECT '))
which gives me
insert into #t
exec ('SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3')
Each row produced by executing this becomes one row (and one column) of the temp table.
Is that what you meant by 1NF (every field must be atomic)?
I must have explained myself badly.
The point is that using this CSV param dynamically in an SP is much faster than
breaking the CSV param down into a temp table/variable or using a
table-returning function.
SP recompile time is not an issue. Maybe if the CSV param gets very large
then using the temp table or function may be quicker.
Point is I don't like dynamic SQL, but this time I'm forced to use it.
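Put end to end, that pattern looks something like this sketch (with the
missing comma in the REPLACE call restored and a guard for the empty string;
@param and #t as in the posts above):

```sql
DECLARE @param VARCHAR(8000);
SET @param = '1,2,3';

CREATE TABLE #t (ID INT NOT NULL PRIMARY KEY);

IF @param <> ''                   -- empty list -> leave #t empty
    INSERT INTO #t
    EXEC ('SELECT ' + REPLACE(@param, ',', ' UNION ALL SELECT '));
```

Note that with very long lists the generated UNION ALL statement grows
quickly, which may be where a proper split into a temp table catches up.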
"--CELKO--" <jcelko212@earthlink.net> wrote in message
news:1112318927.848861.167290@g14g2000cwa.googlegroups.com...
|||Yitzak wrote:
> I pass a comma separated varchar param to the stored procedure.
> exec sp_somename @CSV = '1,2,3'
>
You have several options, which are explained at
http://www.sommarskog.se/arrays-in-sql.html
--
Microsoft MVP - ASP/ASP.NET
Please reply to the newsgroup. This email account is my spam trap so I
don't check it very often. If you must reply off-line, then remove the
"NO SPAM"|||Thanks
Tried the different options - all better than dynamic SQL for maintenance,
debugging and security - and essentially all of them use a table to join.
The dynamic SQL in my case is always double the speed of the similar table-join version.
Recompilation ain't an issue on an SP that takes 40 secs to run.
Can't understand it, as I thought MSSQL turned a SELECT ... IN into a join on a temp
table, but creating my own indexed temp table/variable from an SP or function
takes twice the time of the SELECT ... IN.
"Bob Barrows [MVP]" <reb01501@NOyahoo.SPAMcom> wrote in message
news:#EOF$xwNFHA.2252@TK2MSFTNGP15.phx.gbl...