Friday, February 24, 2012

intra-query parallelism

Hi,
I am running SQL Server 7.0 with SP3.
I got an error when running a stored procedure.
The error message says: "Intra-query parallelism caused your server
command (process ID #30) to deadlock. Rerun the query without intra-query
parallelism by using the query hint option (maxdop 1)."
Any ideas?
Thanks,|||Can you upgrade to Service Pack 4 and see if that remedies the issue?
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
--
|||I do have sp4 installed
|||"metoonyc" <metoonyc@.yahoo.com> wrote in message
news:%23zvTGa35EHA.3756@.TK2MSFTNGP14.phx.gbl...
> I do have sp4 installed
Okay, your original post said SP3...
Did you try the MAXDOP hint? If it doesn't decrease performance
significantly, go with that. Otherwise, you're going to have to call PSS.
--
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
--|||What I did was increase the server's value for intra-query parallelism.
Is that OK instead of trying the MAXDOP hint?
|||"metoonyc" <metoonyc@.yahoo.com> wrote in message
news:uqE72i35EHA.2316@.TK2MSFTNGP15.phx.gbl...
> What I did was increase the server's value for intra-query parallelism.
> Is that OK instead of trying the MAXDOP hint?
Well, that will affect your entire server. If you're okay with that then I
guess it's not a problem, but it could have performance implications on
other queries that aren't having issues with parallelism. The MAXDOP hint
would only affect the one query.
--
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
--|||Thanks,
How do I use the MAXDOP hint
>that would only affect the one query.
|||"metoonyc" <metoonyc@.yahoo.com> wrote in message
news:%23lNE$r35EHA.824@.TK2MSFTNGP11.phx.gbl...
> Thanks,
> > How do I use the MAXDOP hint
> >that would only affect the one query.
SELECT ...
FROM ...
WHERE ...
OPTION (MAXDOP 1)
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
--|||Thanks,

Intra-query parallelism

I saw this error for the first time this morning from a process which runs
nightly:
Server: Msg 8650, Level 13, State 1, Line 1 Intra-query parallelism caused
your server command (process ID #55) to deadlock. Rerun the query without
intra-query parallelism by using the query hint option (maxdop 1).
I read KB 837983 "You may receive error message 8650 when you run a query
that uses intra-query parallelism" and KB 315662 "FIX: Parallel query may
encounter undetected deadlock with itself" referenced by that article, as
well as the article mentioned by a MVP in this group in a message dated
07/26/2007. I also read up on Degrees of parallelism in BOL. What isn't
clear to me is, what exactly is meant by parallel "query" execution? Is this
referring to a single "query" being executed by multiple threads
simultaneously, or multiple threads executing separate "queries" (even
though the statements may be identical)? If it is the former then I see no
reason why I shouldn't go ahead and disable it, since only this one process
which runs after hours has generated the error. If it is the latter, then
that would seem to indicate that disabling parallel query execution
effectively would make my four CPU server a one CPU server. What I have read
seems to suggest the former, but nowhere is it stated outright. Can anyone
clear this up for me?|||Intra-query parallelism uses multiple threads to service a single query.
You can just add the OPTION (MAXDOP 1) to your SELECT/INSERT/UPDATE/DELETE
statement to suppress parallelism for that one query.
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau

Intra-query parallelism caused your query to deadlock . . .

We often get the above message from our SQL Server 2000 Transactional Replication Distribution service. How do we implement the OPTION (MAXDOP 1) hint as directed? It seems I would have to modify the Microsoft-supplied procedures, and I can't do that. Any suggestions?

Michael

I originally posted this in the Replication forum but was ignored so I thought that perhaps a more "general" forum would attract a different kind of contributor . . .

If you can't give the OPTION (MAXDOP 1) query hint, you can change the Max Degree of Parallelism server setting to 1, which in effect gives the same result.|||

Are you suggesting that the max degree of parallelism for the entire server be changed to 1? Is that the solution? Will setting this option affect other queries in the system? What are other DBAs doing to solve this problem? I am concerned about the performance impact on the other queries on the server.

Thanks a lot Roji.
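For reference, the server-wide setting discussed above is changed with sp_configure. This is only a sketch (it assumes sysadmin rights), and note that, unlike the per-query hint, it limits every query on the instance:

```sql
-- Server-wide alternative to the per-query OPTION (MAXDOP 1) hint.
-- Caution: this caps parallelism for ALL queries on the instance.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE WITH OVERRIDE;
```

Setting the value back to 0 restores the default behavior (use all available CPUs).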

Intranet/Internet and Reporting Services

We have a situation where we are going to have our IIS server with two NIC
cards. One for the internet and one for the intranet. We are planning to have
our SQL Server connected to the IIS server via the intranet.
My question is... if a user on the internet requests a report and we render
it programmatically via visual basic/visual C# (Using a reporting services
object in ASP) on the web will they have access to the report even though
they do not have direct access to the SQL Reporting Services server? Will I
be able to take what is returned to that object and display it on the
Intranet?
TIA,
Patrick|||Yes. You can use the Response.BinaryWrite() method to direct the report
content to a webform or ASP page. There are limitations, however, with
images and interactive report features when using this approach.

Intranet, windows authentication, Sql Server Login Failed for user (null)

Hi--

I am building an intRAnet website using windows authentication for website access and SQL Server access ( Trusted_Connection = true ).

In IIS I have these settings:

Allow Anonymous = unchecked (false)
Windows Authentication = checked (true)
Digest Windows Authentication = checked (true)

In my Web.Config file:

<authentication mode="Windows" />
<identity impersonate="true" />
<allow users="*" />

When I pull up the page these are my credentials:

Security.Principal.Windows: mydomain\myuserid (this is correct what it shows on my page)

Me.User.Identity: mydomain\myuserid (this is correct what it shows on my page)

Threading.currentThread.currentUser mydomain\myuserid (this is correct what it shows on my page)

So the ASP.NET page recognizes it is me and my domain. However, when i click a button to pull some data from a database I get the error message: Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.

My data access on SQL Server works fine. The weird thing is when I debug on my machine it pulls data fine. but when I copy the files to the windows 2003 server it doesn't work.

Do i have to do something with delegation?

Some colleagues and I did some research, and we think that Kerberos delegation is not enabled on our domain in Active Directory.

When I turn off Digest and select Basic authentication, the credentials pass... but I don't want that.

Intranet Set up with SQL server and ASP

Dear all,

I am trying to make an intranet connection with SQL Server 2000 and ASP,
but I am stuck on a serious problem with the connection string. Please can you help me with this issue?

I am using following connection string.

(Windows authentication)

Set conn=Server.CreateObject("ADODB.Connection")

conn.Mode=adModeReadWrite
conn.Open="PROVIDER=SQLOLEDB;Integrated Security=SSPI;Initial cataloge =nrth;DataSource=server"

System dsn =nrth
server name =server

This string makes following error

Microsoft OLE DB Provider for SQL Server error '80040e4d'

Invalid connection string attribute

//global.asa, line 17

Microsoft OLE DB Provider for SQL Server error '80040e4d'

Invalid connection string attribute

/test2.asp, line

can you correct this or suggest a valid connection string

Thank you|||The DSN probably only exists on the one server. Try replacing it with
Server=servername.|||Originally posted by Satya
The DSN probably only exists on the one server. Try replacing it with
Server=servername.

still having error,

I am testing it on the server itself.
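Incidentally, the "Invalid connection string attribute" error in the original post is most likely caused by two misspelled OLE DB keywords: "Initial cataloge" should be "Initial Catalog", and "DataSource" should be "Data Source" (with a space). A corrected sketch, keeping the server and database names from the post:

```vbscript
' Corrected ADO connection -- note "Initial Catalog" and "Data Source"
' (both misspelled in the original string), and Open called as a method
' rather than assigned to.
Set conn = Server.CreateObject("ADODB.Connection")
conn.Mode = adModeReadWrite
conn.Open "PROVIDER=SQLOLEDB;Integrated Security=SSPI;" & _
          "Initial Catalog=nrth;Data Source=server"
```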

Intranet CAL license

Hello,
I know that you need a SQL Server license for the reporting services server
installation but what about the report viewers/developers?... just a couple
of questions:
1) Do you need a SQL Server CAL license for every report developer using
visual studio .NET?
2) Do ALL report viewers need a CAL license? Or does this depend upon how
you deploy the reports?
Any advice would be appreciated.
Thanks|||If you are developing reports using VS.NET then you will need a client license.
If people are accessing the reports over the web then I'm not aware that
you need a licence (for a start, it would be impossible to monitor).
|||The licensing for RS is the same as for SQL Server. For instance, if
internet users are accessing the reports, it would be the same as internet users
accessing SQL Server. I believe MS has special per-processor licensing for
internet use. For intranet use, every client that accesses the report server
needs to have a CAL (or you can license per processor). At
least that is my understanding of how the licensing works. The important
thing is that RS licensing and SQL Server licensing are the same thing.
Bruce Loehle-Conger
MVP SQL Server Reporting Services

Intra Query Error

Hi
I seem to be getting the following error, when running a SP which populates
a table:
***********************************
Server: Msg 8650, Level 13, State 127, Line 1
Intra-query parallelism caused your server command (process ID #62) to deadlock.
Rerun the query without intra-query parallelism by using the query hint
option (maxdop 1).
************************************
The SP is the following:
*********************************************************
ALTER PROCEDURE sp_crosstab_NOV AS
DECLARE @Year VARCHAR(5)
DECLARE @Month VARCHAR(20)
DECLARE @select VARCHAR(8000)
DECLARE @sumfunc VARCHAR(100)
DECLARE @pivot VARCHAR(100)
DECLARE @table VARCHAR(100)
DECLARE @sql VARCHAR(8000), @delim VARCHAR(1)
SET @Year = YEAR(GETDATE())
SET @Month = MONTH(GETDATE()) - 1
SET @select = 'INSERT INTO NUCosstab SELECT [Policy No], [Section Sequence],
Insured, [Effective Date], [Expiry Date], [Trans Code], Trade, [Reporting Month],
[Reporting Year], SUM(GrossPremium) AS [Total Premium],
SUM(BrokerComm) AS [Broker Comm], SUM(IPT) AS IPT FROM NUPremium
WHERE [Reporting Year] = ' + @Year + ' AND [Reporting Month] = ' + @Month + '
GROUP BY [Policy No], [Section Sequence], Insured, [Effective Date], [Expiry Date],
[Trans Code], Trade, [Reporting Month], [Reporting Year]'
SET @sumfunc = 'Sum(GrossPremium)'
SET @pivot = '[Section Type Code]'
SET @table = 'NUPremium'
TRUNCATE TABLE NUCosstab
SET NOCOUNT ON
SET ANSI_WARNINGS OFF
EXEC ('SELECT ' + @pivot + ' AS pivot INTO ##pivot FROM ' + @table + ' WHERE 1=2')
EXEC ('INSERT INTO ##pivot SELECT DISTINCT ' + @pivot + ' FROM ' + @table +
      ' WHERE ' + @pivot + ' Is Not Null')
SELECT @sql = '', @sumfunc = STUFF(@sumfunc, LEN(@sumfunc), 1, ' END)')
SELECT @delim = CASE SIGN(CHARINDEX('char', data_type) + CHARINDEX('date', data_type))
                WHEN 0 THEN '' ELSE '''' END
FROM tempdb.information_schema.columns
WHERE table_name = '##pivot' AND column_name = 'pivot'
SELECT @sql = @sql + '''' + CONVERT(VARCHAR(100), pivot) + ''' = ' +
       STUFF(@sumfunc, CHARINDEX('(', @sumfunc) + 1, 0,
             ' CASE ' + @pivot + ' WHEN ' + @delim + CONVERT(VARCHAR(100), pivot) +
             @delim + ' THEN ') + ', '
FROM ##pivot
DROP TABLE ##pivot
SELECT @sql = LEFT(@sql, LEN(@sql) - 1)
SELECT @select = STUFF(@select, CHARINDEX(' FROM ', @select) + 1, 0, ', ' + @sql + ' ')
EXEC (@select)
SET ANSI_WARNINGS ON
*****************************
Any ideas?
Kind Regards
Ricky|||Hi Ricky
This article may help:
http://support.microsoft.com/?kbid=837983
--
Jack Vamvas
__________________________________________________________________
Receive free SQL tips - register at www.ciquery.com/sqlserver.htm
SQL Server Performance Audit - check www.ciquery.com/sqlserver_audit.htm
New article by Jack Vamvas - SQL and Markov Chains - www.ciquery.com/articles/art_04.asp

Into cursor problem

My code in the SP:
DECLARE @sql nvarchar(4000)
SELECT @sql = 'SELECT accttype, cocode, coname INTO #tempTable from #myTable'
print @sql
EXEC (@sql)
I got the error "Invalid object name '#tempTable'". Does anyone know the
reason? Thanks a lot|||Unless #tempTable is created within the calling SP, it only exists within
the scope of the dynamic sql and will be destroyed when it completes:
DECLARE @sql nvarchar(4000)
CREATE TABLE #tempTable ...
SELECT @sql = 'INSERT #tempTable SELECT ... from #myTable'
print @sql
EXEC (@sql)
Mr Tea
|||Try this
SELECT @sql = 'SELECT accttype, cocode, coname INTO #tempTable from #myTable'
print @sql
EXEC ('' + @sql + '')
Madhivanan|||> Try this
> SELECT @sql = 'SELECT accttype, cocode, coname INTO #tempTable from #myTable'
> print @sql
> EXEC ('' + @sql + '')
?
Will that actually solve the problem?
Roji. P. Thomas
Net Asset Management
https://www.netassetmanagement.com
|||Thomas
I tested that in the query analyser. The text displayed is (9 row(s)
affected). But actually the table was not created. Can you tell me
where the problem is?
Madhivanan|||Madhivanan,

> I tested that in the query analyser
Tested what? The code you posted?
Roji. P. Thomas
Net Asset Management
https://www.netassetmanagement.com
|||Yes, Thomas
Madhivanan|||The code you posted will not work on its own.
(It was even missing the DECLARE statement.)
Roji. P. Thomas
Net Asset Management
https://www.netassetmanagement.com
|||Thomas,
Run this and see the result
create table #temp (no int, name varchar(100))
insert into #temp values(1,'Name1')
insert into #temp values(2,'Name2')
insert into #temp values(3,'Name3')
insert into #temp values(4,'Name4')
insert into #temp values(5,'Name5')
Declare @sql varchar(100)
SELECT @sql = 'SELECT no,name INTO #tempTable from #temp'
print @sql
EXEC ('' + @sql + '')
drop table #temp
Actually it is running without any error. But the table #tempTable is
not created.
Can you find out the reason?
Madhivanan|||Madhivanan,
The #tempTable is not visible because EXEC() has its own scope.
The following are the limitations of EXEC:
a. Within the SQL batch you cannot access local variables or parameters
of the calling stored procedure.
b. Any USE statement will not affect the calling stored procedure.
c. Temp tables created in the SQL batch will not be available to the
calling procedure, since they are dropped when the batch exits - just like
when you exit a stored procedure. The batch can, however, access tables
created in the calling procedure.
d. If you issue a SET command in the dynamic SQL batch, the effect of the
SET command lasts for the duration of the dynamic SQL batch only.
e. The query plan for the batch is not part of the plan for the calling
procedure. Cache-wise, the query is just as good as a bare SQL statement sent
from the client.
f. If the batch results in a condition that aborts the batch, for instance
a rollback in a trigger, not only is the batch of dynamic SQL terminated,
but also the calling procedure (and its caller, and so on).
It's taken from this excellent article by Erland Sommarskog:
http://www.sommarskog.se/dynamic_sql.html
Roji. P. Thomas
Net Asset Management
https://www.netassetmanagement.com
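A minimal repro of point (c) above, sketched rather than taken from anyone's production code: create the temp table in the calling scope and only populate it inside the dynamic batch, and the table (with its rows) survives the EXEC():

```sql
-- Create the table in the caller's scope so it outlives the EXEC() batch
CREATE TABLE #tempTable (no int, name varchar(100));

DECLARE @sql varchar(200);
SET @sql = 'INSERT INTO #tempTable VALUES (1, ''Name1'')';
EXEC (@sql);   -- the dynamic batch can see tables created by its caller

SELECT * FROM #tempTable;   -- the row inserted by the batch is still here

DROP TABLE #tempTable;
```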

into clause

Hi, how do I define a temp table as local, so that only the current user can see it:
select * into temp?|||A single # implies a local temp table for the current connection/context. No
other user (on another connection) can access this #temp.
e.g.
select *
into #temp
from tb
-oj
|||Do:
SELECT * INTO #temp FROM tbl ;
Anith|||select c1, ..., cn
into #local_temp_table
from t1
If you use one # then it is local. Use ## for global temporary.
AMB
|||how about if I don't use, global? Thks.
|||> how about if I don't use, global? Thks.
Can you re-phrase that question?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
|||Sorry,
> If you use one # then it is local. Use ## for global temporary.
How about if I don't use # or ## (select * into temp from tb1)? Is it global?
|||Then it is a permanent table (a normal one).
AMB
|||In SQL Query Analyzer, if I open a new query window, is it a new
connection/context?
|||Thanks for the help...
How do I check whether #temp already exists in the database?
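For what it's worth, the last question is usually answered by probing tempdb for the object. A sketch:

```sql
-- A local temp table lives in tempdb; OBJECT_ID finds it for this connection
IF OBJECT_ID('tempdb..#temp') IS NOT NULL
    PRINT '#temp exists'
ELSE
    PRINT '#temp does not exist'
```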

Interview Questions Feb 24, 2006

What is Native Image Generator (Ngen.exe)?
The Native Image Generator utility (Ngen.exe) allows you to run the JIT
compiler on your assembly's MSIL and generate native machine code which
is cached to disk. After the image is created .NET runtime will use the
image to run the code rather than from the hard disk. Running Ngen.exe
on an assembly potentially allows the assembly to load and execute
faster, because it restores code and data structures from the native
image cache rather than generating them dynamically.Native images load
faster than MSIL because JIT compilation and type-safety verification
is eliminated.
Full Interview Questions for .NET and SQL Server
http://www.geocities.com/dotnetinterviews/
Help the community to make job search easier mail your questions to
jobatyourdoorstep@.yahoo.co.in
Looking for a onsite job mail your resumes at
jobatyourdoorstep@.yahoo.co.in
Shiprasad Koirala|||"Jobs" <jobatyourdoorstep@.yahoo.co.in> wrote:
> The Native Image Generator utility (Ngen.exe) allows you to run the JIT
> compiler on your assembly's MSIL and generate native machine code which
> is cached to disk. After the image is created .NET runtime will use the
> image to run the code rather than from the hard disk.
There's a little contradiction in the two sentences above. The native image
is stored in a special area of the GAC on the disk (typically harddisk).
BTW: In general Microsoft's public newsgroups are a place for peer support
and not an advertising platform.
--
M S Herfried K. Wagner
M V P <URL:http://dotnet.mvps.org/>
V B <URL:http://classicvb.org/petition/>

Interview Questions Feb 24, 2006

What is Native Image Generator (Ngen.exe)?
The Native Image Generator utility (Ngen.exe) allows you to run the JIT
compiler on your assembly's MSIL and generate native machine code which
is cached to disk. After the image is created .NET runtime will use the
image to run the code rather than from the hard disk. Running Ngen.exe
on an assembly potentially allows the assembly to load and execute
faster, because it restores code and data structures from the native
image cache rather than generating them dynamically.Native images load
faster than MSIL because JIT compilation and type-safety verification
is eliminated.
Full Interview Questions for .NET and SQL Server
http://www.geocities.com/dotnetinterviews/
Help the community to make job search easier mail your questions to
jobatyourdoorstep@.yahoo.co.in
Looking for a onsite job mail your resumes at
jobatyourdoorstep@.yahoo.co.inShiprasad Koirala|||"Jobs" <jobatyourdoorstep@.yahoo.co.in> schrieb:
> The Native Image Generator utility (Ngen.exe) allows you to run the JIT
> compiler on your assembly's MSIL and generate native machine code which
> is cached to disk. After the image is created .NET runtime will use the
> image to run the code rather than from the hard disk.
There's a little contradiction in the two sentences above. The native image
is stored in a special area of the GAC on the disk (typically harddisk).
BTW: In general Microsoft's public newsgroups are a place for peer support
and not an advertising platform.
M S Herfried K. Wagner
M V P <URL:http://dotnet.mvps.org/>
V B <URL:http://classicvb.org/petition/>
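For reference, the workflow the answer above describes might be sketched from a command prompt as follows (the assembly name is illustrative; the subcommand syntax shown is the .NET 2.0 form of the tool):

```
REM Compile the assembly's MSIL to native code and store it in the native image cache
ngen install MyAssembly.dll

REM List the cached native image(s) for the assembly
ngen display MyAssembly

REM Remove the native image from the cache
ngen uninstall MyAssembly
```

At load time the runtime then picks up the cached native image instead of JIT-compiling the MSIL.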


Interview questions and answers

Hi,
Can someone please point me to a link which includes SQL Server interview
questions and answers on database administration, performance tuning at
expert level (400)?
Thanks,
Einat|||Try these:
SQL Server Interview Questions
http://vyaskn.tripod.com/iq.htm
Arnie Rowland, Ph.D.
Westwood Consulting, Inc
Most good judgment comes from experience.
Most experience comes from bad judgment.
- Anonymous
You can't help someone get up a hill without getting a little closer to the
top yourself.
- H. Norman Schwarzkopf
"Gil" <gillapid@.google.com> wrote in message
news:%23N2%23kSw%23GHA.4376@.TK2MSFTNGP03.phx.gbl...
> Hi,
> Can someone please point me to a link which include SQL Server interview
> questions and answers on database administration, performance tuning at
> expert level (400)?
> Thanks,
> Einat
>
>


Interview questions

Hello,

I have an interview coming up for a VB.NET programmer position, and the work involves Reporting Services. What questions/tests could they possibly ask me?

Thanks in advance

Hi,

The forum is basically a place where you can ask your doubts/queries etc. For interview questions, it would be better if you search Google. There are a lot of interview question sites too.

Regards,

Vivek

Interview Questions

Ok .. I was just thinking about all the interviews I've had, and those I've conducted ...

What are the questions you would ask a DBA if you are allowed to ask him only 3 questions and have to judge him on that ?

My favourites would be Disaster recovery, Nested transactions and Linked server concepts !!!

Gurus !! Your Ideas please ?|||I have to restrict it to just three questions, eh? This would depend on how many years of experience the person claims on their resume.

What is your strongest area of knowledge?
This is an easy one to lure them in. It should tell you what they want to sell, too. Also, you can expand on this to see if you think they really got all of those years of experience, or backdated a bit of their resume.

Why are you applying for this job?
Think about your own answer to this one. After all, it is better to be working with folks who share some goals (money, training, etc.)

What is your biggest screw-up, and what did you do about it?
If they say "I covered it up" tell them Enron is down the hall.|||What is the level of dba ? In previous interviews, I gave them a quiz to test specifics before the interview. During the interview, I ask open-ended questions (Department Z needs a database installed - what questions do you need answered before starting) - similar to MCrowley. I like the screw-up question. Anyway, if you ask open-ended questions, after they have finished answering, I normally follow-up with give me an example ... Another useful question is - what have you learned about sql server that you could not find in a book - the nuances most of us have experienced.

interview questions

hi friends,
i want some advanced interview questions, that is, questions giving some
situation, and also some interesting questions.
thanks
vanitha|||Search Google for "SQL Interview Questions"
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"vanitha" <vanitha@.discussions.microsoft.com> wrote in message
news:A3B00B7A-B0F3-4804-A033-3F3CDE1BBFAD@.microsoft.com...
> hi friends,
> i want some advanced interview questions that is questions, giving some
> situation. and also some intresting questions.
> thanks
> vanitha|||For question go here:
http://www.sqlservercentral.com/testcenter/qod.asp
For answers... study. :)
ML
http://milambda.blogspot.com/|||Has anyone here ever actually been "quizzed" during an interview?
I've been sent to online testing sites before, but only once have I ever
actually been asked a "knowledge testing" question during an interview... and
I actually guessed the right answer then.
"ML" wrote:

> For question go here:
> http://www.sqlservercentral.com/testcenter/qod.asp
> For answers... study. :)
>
> ML
> --
> http://milambda.blogspot.com/

interview question?

How many tables can be updated with a single update?|||I would answer only one,
but I'm curious about the exact answer|||even I support one, but it is correct|||One|||I think one|||zero (where clause finds no rows)

one (duh)

many (updated row cascades to related tables)|||Cascading updates ... hmm ... why didn't I think of that ...

Well .. seems like I lost any chance of getting this job|||What does n mean ... Mr BK|||brett probably means m, not n

m is n + 1

:cool: :cool: :cool: :cool: :cool: :cool: :cool:|||I thought n meant any number...

m to me means many, but infers a relationship as in 1-m

either way...

However, here's a poll type question...how many people use cascade...

I've never...always wanted more control...prefer to delete and insert...

(And keep history)|||Never !!!|||when updates are performed, if you're writing audit records or something, those triggers should fire no matter whether the table being updated is the target table or a related table, right?

oh, and to me, m implies 0 to many, not 1 to many

my answer to the question was "zero, one, or many"

;)|||Originally posted by r937
when updates are performed, if you're writing audit records or something, those triggers should fire no matter whether the table being updated is the target table or a related table, right?

oh, and to me, m implies 0 to many, not 1 to many

my answer to the question was "zero, one, or many"

;)

fair enough...but now we trapse down the logical data modeling path...

things are not always 0-m...could be required to be 1-m...

EDIT: And you didn't answer the cascade question...|||sorry, i assumed my answer to the cascade question would be obvious

i use as much RI (http://evolt.org/RI) as the database supports

unless i'm doing consulting work at a client site where there's a DBA with veto on changes or exceptions to his database guidelines...

but that's politics ;)|||That's a great link...love the VW...

And yes, I get the cascade answer...you must be a big IDENTITY kind of guy...

I never thought of cascading as a function of RI though...

but it seems related...

no pun intended|||I use cascading for RI whenever possible. There have been a few times where I have modeled relationships with multiple update paths, and then Cascading fails and I had to resort to triggers.|||Originally posted by Brett Kaiser
That's a great link...love the VW... thanks

actually, when it comes to the natural versus surrogate primary key debate, i tend to favour natural keys, insofar as any candidate key can be called "natural"

however, i have been known to use IDENTITY and its cousins AUTONUMBER, AUTO_INCREMENT, and SEQUENCE from time to time

one of the best articles on the subject, long but very worth the time to read, is Key Points About Surrogate Keys (http://www.rationalcommerce.com/resources/surrogates.htm)|||i think guid is coming up ;)|||You know...|||why I oughta...Moe, Larry build a database...
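A minimal T-SQL sketch of the cascading case the thread settles on (table and column names are hypothetical): a single UPDATE statement against the parent table also rewrites the matching child rows.

```sql
CREATE TABLE Parent (ParentCode varchar(10) NOT NULL PRIMARY KEY);
CREATE TABLE Child (
    ChildId    int         NOT NULL PRIMARY KEY,
    ParentCode varchar(10) NOT NULL
        REFERENCES Parent (ParentCode) ON UPDATE CASCADE
);

-- One UPDATE statement, but rows in both tables change:
UPDATE Parent SET ParentCode = 'NEW' WHERE ParentCode = 'OLD';
```

So the literal answer is "zero, one, or many": zero when the WHERE clause matches nothing, one in the plain case, many when the change cascades.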

Interview question stored procedure

Can anyone answer this question please.

If your page is running slow, what would you check specifically with the stored procedure? What is it that you might think is giving a problem, or how would you go about seeing what is making your stored procedure run slow?

Give these articles a try:

http://www.sql-server-performance.com/tips/stored_procedures_p1.aspx

http://articles.techrepublic.com.com/5100-22-1045447.html

http://www.mssqlcity.com/Tips/stored_procedures_optimization.htm

|||

Absolute basics: check the execution plan, look for things like table scans which could be turned into index seeks by adding indexes, and perhaps use the Database Engine Tuning Advisor.

Check the SQL for subselects where joins could be used instead, multiple predicates where an IN or BETWEEN clause would be better, unnecessary ORDER BY or DISTINCT instructions, possibly split up stored procedures which have the format

if inputvar = true

do this select

else

do that select

into separate sub-stored procedures for each select with the top level sp just doing the logic and then calling the appropriate sub proc (having multiple optional queries in the same proc means that the query plan may have to be re-generated each time, which costs performance)

Lots of other things you can do too, which I'm sure are mentioned in the above answer's links.
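The "split the optional queries into sub-procedures" suggestion above might be sketched like this (procedure and parameter names are made up for illustration): each sub-procedure then gets its own cached plan instead of one plan having to serve both branches.

```sql
CREATE PROCEDURE dbo.GetOrders
    @ByCustomer bit,
    @CustomerId int = NULL
AS
BEGIN
    -- Top-level proc only does the branching logic...
    IF @ByCustomer = 1
        EXEC dbo.GetOrdersByCustomer @CustomerId;
    ELSE
        EXEC dbo.GetOrdersAll;
    -- ...and each sub-proc (defined separately) contains exactly one of the SELECTs.
END
```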

Interview Question

Hello all,
i'm aware of oracle, but i had a question on SQL Server today in an interview, which i didn't understand well. The question is "We have a cursor in SQL Server and what should we do to drop the cursor and keep the logic". Actually i didn't understand the question well, but those are the exact words. Please let me know if any of you have any idea about this.

Actually when i was talking to someone, he said that we need to use "temp tables" for that. I have no idea about that. I would really appreciate it if someone can clear this up for me.|||A short answer would be to change processing logic from row-based to set-based. However, if the requirement of keeping the logic was intended to be interpreted literally, then a cursor can be replaced with a WHILE loop where the key that uniquely identifies each row from the table (or tables) that you're processing can be represented in one field.|||You would need to look at the requirement for the process and rewrite it as a set based solution.

It may or may not benefit from a temp table or a loop.

Would also review the rest of the processing, design, architecture as the presence of cursors usually indicates that someone who is not very experienced at developing on relational databases has been involved.|||I'm still new to all this, but I was wondering:

Won't using cursors be faster and save database storage space sometimes during a calculation than having to go through creating, inserting, selecting from a temp table and later dropping it?|||cursors place locks on your tables and therefore cause waits for other users that want to modify data.|||Cursors are very fast in Oracle (providing it's not sitting on Windows), but in MSSQL it's the method of "last resort".|||Originally posted by Patrick Chua
I'm still new to al this, but I was wondering:

Won't using cursors be faster and save database storage space sometimes during a calculation than having to go through creating, inserting, selecting from a temp table and later dropping it?

SQL Server is efficient at actions that work on sets of data, which is the sort of processing that SQL and relational databases are built for.
It is inefficient at procedural processing, which is the sort of thing you would find in a client.

Sometimes a cursor can be faster than the equivalent set based sql (especially with correlated subqueries) but I still wouldn't use one.

With Yukon and the proposed common language runtime then maybe procedural code can be embedded in t-sql and things may change.|||Thanks for the insights.

Will try not to use cursors, but it's always the easiest way out. :)

Yukon!, almost didn't really get what u mean, but I remember reading it in SQL Mag as a code name for the next SQL Server edition ...just letting others who are as blur as I am know.
er..right?|||Yep.

>> Will try not to use cursors, but it always the easiest way out.
It's the easiest only if you think of processing row by row.
I always ban cursors from code to force people to find a set based solution.|||can't wait till yukon comes out.

As for cursors, - their usage starts at design time. The old school designers are the ones to blame for what developers and DBA's have to live with :)
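The two cursor replacements this thread mentions can be sketched as follows (table and column names are hypothetical). First the preferred set-based rewrite, then the keyed WHILE loop for cases where row-by-row logic really is required:

```sql
-- Set-based: one statement replaces the whole cursor loop
UPDATE Employees SET Salary = Salary * 1.05 WHERE DeptCode = 'SALES';

-- Keyed WHILE loop: walk the unique key without holding a cursor open
DECLARE @id int;
SELECT @id = MIN(EmployeeId) FROM Employees;
WHILE @id IS NOT NULL
BEGIN
    -- per-row logic for the row identified by @id goes here
    SELECT @id = MIN(EmployeeId) FROM Employees WHERE EmployeeId > @id;
END
```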

interview question

when does sql use update lock?

see BOL for that answer

interview question

here is a question asked in an interview:
you created the sales database with the initial size of the transaction log of 50 MB with autogrow, but no autoshrink.
you do not make any backup and do not set the truncate log on checkpoint option to on.
what will be the transaction log size after the next checkpoint has passed?|||Umm... >= 50mb|||You are in a job interview where the person hiring asks you inane and pointless questions that have little or no value in either database administration or database design, and for which the answers could easily be looked up in Books Online. At the end of the interview, you are offered the job at a substantial salary bump. Do you work for the dolt, or do you continue seeking an employer that has actual competence?|||lol@.blindman. In the instance above, you tell him he hasn't provided you enough information (which he hasn't btw). If he could do some research though, create a business proposition for why he should waste your time answering stupid questions, get business signoff, send it through QA and user-acceptance testing, then test the release, you'll have a production answer for his highness right away.|||First, there's no "Truncate on checkpoint" in 2K, which means they are talking about 7.0 or 6.5, which is bad in itself.

Second, unless a user process kicked in within less than 0.00000001 millisecond after the "create database..." statement completed, peterlemonjello's answer is almost absolutely correct, - except it will be 50MB exactly, not more or equal to.|||hmmm what an interesting way of asking for clarification... I like it. :D
by the way, it wasn't me who was being interviewed, it was a colleague of mine. I wasn't sure about the question itself so thought to get some expert comments from the gurus here.
anyways thanks for the replies :p

Interview question

I was asked in an interview what was the fastest way to count the rows in a
table?
I assume that Select count(*) from Tablename is (or is not?) the fastest way.
Is there another way that is faster, or did I miss something?
thanks|||Your query would result in a full table scan. You can
also do this:
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2
Robbe Morris - 2004/2005 Microsoft MVP C#
http://www.masterado.net
"brian" <brian@.discussions.microsoft.com> wrote in message
news:585DEB83-4414-4FD6-BDEA-2CF49C47FAB3@.microsoft.com...
>I was asked in an interview what was the fastest way to count the rows in a
> table?
> I assume that: Select count(*) from Tablename - is or is not the fastest
> way.
> Is there another way that is faster or did I miss something.
> thanks|||The count returned by this method can be used as an approximation but might
not be accurate. The SELECT COUNT(*) method should be used in normal
production code.
Hope this helps.
Dan Guzman
SQL Server MVP
"Robbe Morris [C# MVP]" <info@.eggheadcafe.com> wrote in message
news:e1WlYqQ2FHA.3228@.TK2MSFTNGP15.phx.gbl...
> Your query would result in a full table scan. You can
> also do this:
> SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid <
> 2
>
> --
> Robbe Morris - 2004/2005 Microsoft MVP C#
> http://www.masterado.net
>
>
> "brian" <brian@.discussions.microsoft.com> wrote in message
> news:585DEB83-4414-4FD6-BDEA-2CF49C47FAB3@.microsoft.com...
>|||> Your query would result in a full table scan.
Not necessarily. SELECT COUNT(*) FROM tblname can use (scan) any index on the table. Preferably
there is an index on a narrow column, and SQL Server can scan that index (less pages to read).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Robbe Morris [C# MVP]" <info@.eggheadcafe.com> wrote in message
news:e1WlYqQ2FHA.3228@.TK2MSFTNGP15.phx.gbl...
> Your query would result in a full table scan. You can
> also do this:
> SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid <
2
>
> --
> Robbe Morris - 2004/2005 Microsoft MVP C#
> http://www.masterado.net
>
>
> "brian" <brian@.discussions.microsoft.com> wrote in message
> news:585DEB83-4414-4FD6-BDEA-2CF49C47FAB3@.microsoft.com...
>|||"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com>
wrote in message news:uT63vfT2FHA.892@.TK2MSFTNGP12.phx.gbl

> Not necessarily. SELECT COUNT(*) FROM tblname can use (scan) any
> index on the table. Preferably there is an index on a narrow column,
> and SQL Server can scan that index (less pages to read).
Wouldn't a clustered index be the fastest possible, so that the rows are lined up for a quickie?
I doubt, therefore I might be.|||> Wouldn't a clustered index be the fastest possible, so that the rows are
> lined up for a quickie?
Not sure what you mean by 'lined up' but the issue with the clustered index
is that the clustered index leaf nodes are the data rows. Consequently, the
scan of a 10GB table with 100M rows will require scanning 10GB of data. A
scan of a non-clustered index on an int column of that table will only scan
about 400MB data (plus overhead) in the non-clustered leaf nodes. This is
why the optimizer chooses the narrowest available index for the COUNT(*)
operation.
Hope this helps.
Dan Guzman
SQL Server MVP
"Kim Noer" <kn@.nospam.dk> wrote in message
news:uDKVnIW2FHA.1148@.tk2msftngp13.phx.gbl...
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com>
> wrote in message news:uT63vfT2FHA.892@.TK2MSFTNGP12.phx.gbl
>
> Wouldn't a clustered index be the fastest possible, so that the rows are
> lined up for a quickie?
> --
> I doubt, therefore I might be.|||"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:Ooe7gxW2FHA.2216@.TK2MSFTNGP15.phx.gbl

> in the non-clustered leaf nodes. This is why the optimizer chooses
> the narrowest available index for the COUNT(*) operation.
Thanks for the clarification!
I doubt, therefore I might be.|||"Robbe Morris [C# MVP]" <info@.eggheadcafe.com> wrote in message
news:e1WlYqQ2FHA.3228@.TK2MSFTNGP15.phx.gbl...
> Your query would result in a full table scan. You can
> also do this:
> SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid <
> 2
>
You don't have to do a table scan. A scan of a secondary index will work
too. Moreover the value in sysindexes is not guaranteed to be correct. So
"Select count(*)" is correct, although the "right" answer probably touches
on all these issues.
David|||No, this will not always need to do a full table scan. Since every
nonclustered index has a pointer to every row, a leaf scan of any nc index
will also give an accurate result.
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
"Robbe Morris [C# MVP]" <info@.eggheadcafe.com> wrote in message
news:e1WlYqQ2FHA.3228@.TK2MSFTNGP15.phx.gbl...
> Your query would result in a full table scan. You can
> also do this:
> SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid <
> 2
>
> --
> Robbe Morris - 2004/2005 Microsoft MVP C#
> http://www.masterado.net
>
>
> "brian" <brian@.discussions.microsoft.com> wrote in message
> news:585DEB83-4414-4FD6-BDEA-2CF49C47FAB3@.microsoft.com...
>
>
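Summarizing the thread in code (the table name is illustrative): the accurate count and the fast-but-approximate metadata lookup, side by side.

```sql
-- Accurate: the optimizer scans the narrowest available index,
-- falling back to a table scan only when no index exists
SELECT COUNT(*) FROM dbo.Orders;

-- Approximate: reads the row count stored in sysindexes;
-- may be stale, so don't rely on it in production code
SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('dbo.Orders') AND indid < 2;
```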

Interview question

Hi,
I had a job interview yesterday and they gave me a small test to complete.
One of the questions was the following... I was not sure what to answer...
If you have the following table:
CREATE TABLE [Customers] (
[FirstName] [varchar] (50) NOT NULL ,
[LastName] [varchar] (50) NOT NULL
) ON [PRIMARY]
And all the queries that you will have for this table are like these:
1. LastName ='Simpson'
2. FirstName ='John' and LastName='Smith'
3. LastName='Parker' and FirstName like 'J%'
4. LastName ='Owen'
5. FirstName ='Danny' and LastName='Jackson'
6. LastName='Owen' and FirstName= 'Michael'
7. LastName like 'A%'
.
(with different values for FirstName and LastName):
What kind of index (only one) do you think would be most effective for
running those queries faster? Please explain why.
Any ideas?
Thanks!!|||On Thu, 10 Mar 2005 10:07:23 -0500, Star wrote:
(snip)
>What kind of index (only one) do you think it would be more effective for
>running those queries faster? Please, explain why.
Hi Star,
A composite index on (lastname, firstname) would be best. Since there
are no other columns in this table, might as well make it clustered.
Reason: all queries presented include at least the last name (or the
start of the last name); some include (part of) the first name as well,
so each query can use this index to immediately jump to the part of the
index where matches are.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||Clustered index on (lastname, firstname).
The common column in all queries is Lastname so it should appear first in
the index. Clustered because if you have only one index it usually makes
sense to cluster it.
David Portas
SQL Server MVP
--|||create nonclustered index ix_nc_customers_lastname_firstname on
customers(lastname, firstname)
go
- nonclustered
- composite
- first column should be LastName in order to allow 1, 4 and 7
AMB
"Star" wrote:

> Hi,
> I had a job interview yesterday and they gave me a small test to complete.
> One of the questions was the following... I was not sure what to answer...
> If you have the following table:
> CREATE TABLE [Customers] (
> [FirstName] [varchar] (50) NOT NULL ,
> [LastName] [varchar] (50) NOT NULL
> ) ON [PRIMARY]
> And all the queries that you will have for this table are like these:
> 1. LastName ='Simpson'
> 2. FirstName ='John' and LastName='Smith'
> 3. LastName='Parker' and FirstName like 'J%'
> 4. LastName ='Owen'
> 5. FirstName ='Danny' and LastName='Jackson'
> 6. LastName='Owen' and FirstName= 'Michael'
> 7. LastName like 'A%'
> ..
> (with different values for FirstName and LastName):
>
> What kind of index (only one) do you think it would be more effective for
> running those queries faster? Please, explain why.
>
> Any ideas?
> Thanks!!
>
An index on LastName, FirstName would be the most optimal as there is no
occurrence of the FirstName column alone.
The above index will be used by both criteria: those which have
LastName, FirstName and just
LastName.
Gopi
"Star" <nospam@.nospam.com> wrote in message
news:e88xfLYJFHA.1396@.TK2MSFTNGP10.phx.gbl...
> Hi,
> I had a job interview yesterday and they gave me a small test to complete.
> One of the questions was the following... I was not sure what to answer...
> If you have the following table:
> CREATE TABLE [Customers] (
> [FirstName] [varchar] (50) NOT NULL ,
> [LastName] [varchar] (50) NOT NULL
> ) ON [PRIMARY]
> And all the queries that you will have for this table are like these:
> 1. LastName ='Simpson'
> 2. FirstName ='John' and LastName='Smith'
> 3. LastName='Parker' and FirstName like 'J%'
> 4. LastName ='Owen'
> 5. FirstName ='Danny' and LastName='Jackson'
> 6. LastName='Owen' and FirstName= 'Michael'
> 7. LastName like 'A%'
> .
> (with different values for FirstName and LastName):
>
> What kind of index (only one) do you think it would be more effective for
> running those queries faster? Please, explain why.
>
> Any ideas?
> Thanks!!
>|||Thank you, guys!
It helped a lot... now I know for the next time :(|||this interview wasn't in Orlando (or Celebration) Florida was it ?
Greg Jackson
PDX, Oregon|||definitely Clustered on LastName, FirstName
GAJ
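The consensus in this thread, a composite clustered index led by LastName, can be written out as a sketch (the index name is illustrative):

```sql
-- LastName leads because every sample query filters on it (or its prefix);
-- FirstName is added for the queries that also filter on first name.
CREATE CLUSTERED INDEX IX_Customers_LastName_FirstName
    ON Customers (LastName, FirstName);
```

Clustered makes sense here because the table has no other columns and no other index, so the index leaf level is simply the data itself.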

Interview a Database Administrator for MBA Class

Hello, I am a graduate student at Baker College and am working on my MBA. I
am taking a database fundamentals class about how to utilize database
technologies to optimize success in business environments. My homework this
weekend is to interview a database administrator/programmer and learn a
little about what they do on a daily basis. If anyone would please fill out
this short form I would greatly appreciate it.
Matt Anderson
Baker College Student in class BUS 562
Which type of projects are you currently working on?
What preparation did you have for this work?
Which parts do you enjoy/not enjoy?
How closely do you work with the end users?
What software programs are in use?
What issues (ie security) do you deal with?
What do you see in the future in this area?|||Hi Matt,
I'm currently getting my MBA at George Fox University in Portland, Oregon so
I feel your pain....:-)
Answers Below:
> Which type of projects are you currently working on?
I currently work on large eFinance client software implementations. Banking
customers purchase our software; I'm an engineer/DBA that helps implement
and customize per client.
> What preparation did you have for this work?
Requires excellent .NET skills and Database Admin and Architecture skills

> Which parts do you enjoy/not enjoy?
I enjoy the custom implementations. Don't enjoy cutover events where we are
usually up and working on rolling the new system into production from a Friday
night to a Saturday midday (all-nighters)...

> How closely do you work with the end users?
I don't typically work with end users, who are the bank's online banking
customers.

> What software programs are in use?
Microsoft .NET and SQL Server 2000

> What issues (ie security) do you deal with?
Security is a huge issue in online banking and eFinance. Loss of data,
fraudulent activity, phishing, etc. are all very significant threats.

> What do you see in the future on this area?
This industry is growing and security will be a primary area of growth.
hope this helps,
Cheers,
Greg Jackson
PDX, Oregon|||"pdxJaxon" <GregoryAJackson@.Hotmail.com> wrote in message
news:ejqDvJ%23OGHA.984@.tk2msftngp13.phx.gbl...
> Hi Matt,
>

> hope this helps,
> Cheers,
> Greg Jackson
> PDX, Oregon
>
It does! Thanks!
Matt

Interval/Bucket Dimension

I am trying to find the most efficient way of handling discrete buckets over a continuous/discrete set of values.

In English: I am looking to have a dimension that says if age is between 0 and 5 they are an infant, 6 to 18 a child, and > 18 an adult. I know I can do this in a CASE statement, but I have similar scenarios that require multiple levels of nesting and the ability to easily change the banding criteria with many fact tables sharing the same banding information.

I have done this in the past by preprocessing the fact table and updating the key value based on between joins, but this is very inefficient on large datasets (upwards of 30 million rows).

I also need to be able to do this across continuous values such as monetary amounts.

Can anyone help

Philip Coupar

Dear Philip,

Sorry for butting in. Actually, I'm facing the same problem as you did. Have you solved this by any means? Please share with us if you have. Thanks!

Regards,
Alex|||

Take a look at the DiscretizationMethod property of an attribute

http://msdn2.microsoft.com/en-US/library/ms174810.aspx

Try and build a new attribute that discretizes the continuous set of values.

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||Dear Edward,

I've tried looking into the DiscretizationMethod before. However, it only has 3 modes: Automatic, EqualAreas and Clusters. On the other hand, what I am trying to do is use a customizable bucket, say 0~49, 50~99, 100~149 and so on. I can't find any method to control the DiscretizationMethod in this way. Thanks!

Regards,
Alex|||

Your custom buckets fit into the EqualAreas scenario :)

On the other hand you can create column in the relational database and create custom mapping yourself. Alternatively you can create a named calculation in DSV and use Case statement to map into the values you'd like. http://msdn2.microsoft.com/en-us/library/ms181765.aspx

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||Dear Edward,

I don't think EqualAreas can do what I want. The area ranges are calculated by SSAS and I have no control over them. So, it's impossible to group in my way. It would make sense to me if it provided an option for me to specify the size of each area.

By the way, I'm currently using the CASE method to handle my problem. The drawback is a long SQL statement, as I need to break down from 0 to 1000 with an interval of 50. Moreover, I need to insert some dummy records into the Fact table. Otherwise, I will get a discontinuous list if there are no such values in the Fact table. I don't want my user to see the range jumping from 0~49 to 500~549.

Regards,
Alex|||

I agree it is possible to solve this issue as a case statement. It is also possible to solve this problem by using a table that specifies the start and end point of each bucket and allocates the surrogate key necessary to have a nice dimension table.

However if we take the example where the fact table contains someone's age at the time of an event, we can see that we may first of all break this down by Adult/Child, then have further age bandings under this. The discretization functionality does not accommodate this as it will not build multiple levels, and you cannot assert the start and end ages for each bucket, required for this type of analysis.

I do not really want to find the surrogate key by doing a between join on a dimension table as this would lead to something quite inefficient over very large fact tables. It also does not make sense to me, in terms of performance, to have the case statement in the DSV as this will require a large amount of processing over a degenerate dimension attribute, a significant issue if you need to reprocess 250 Million fact rows.

The original post was asking for ideas on the most efficient ways of achieving this. Edward's posts may help some people who have different decretization requirements, or are handling smaller datasets. Alex is looking at the same issue I referred to in my original post. It may well be that there are no more effcient ways of doing this at the moment, if anyone else has any ideas of how to improve on this kind of requirement I am sure we are all looking forward to hearing from you.

|||

Check out my articles doing aging buckets this way:

... In SSAS with Named Calculations:

http://www.databasejournal.com/features/mssql/article.php/10894_3590866_7

... And a slightly different approach in MSAS 2000 (same logic applies in SSAS 2005):

http://www.databasejournal.com/features/mssql/article.php/3525516

I've done it other ways for clients with specific needs, too. Let me know if you wish further amplification, etc.

Good Luck!

William E. Pearson III
CPA, CMA, CIA, MCSE, MCDBA
Island Technologies Inc.
931 Monroe Drive
Suite 102-321
Atlanta, GA 30308

404.872.5972 Office

wep3@.islandtechnologies.com
wep3@.msas-architect.com

www.msas-architect.com
-- -- --

Publisher Sites:

http://www.databasejournal.com/article.php/1459531

http://www.sql-server-performance.com/bill_pearson.asp

http://www.informit.com/authors/bio.asp?a=862acd62-4662-49ae-879d-541c8b4d656f

http://www.2000trainers.com/section.aspx?sectionID=17

|||

I like the idea of adding the aging logic to the time dimension, this combines the best of the surrogate key approach with the simplicity of the embeded case statements, and would overcome the performance issues around large fact tables.

However this has a very limited application, as within a single UDM model there may be many different custom range requirements, so we may have both transaction aging and the person's age each of which would be banded very differently. and this approach would mean exposing both aging structures on every use of the time dimension.

Interval/Bucket Dimension

I am trying to find the most efficient way of handling discrete buckets over a continuous/discrete set of values.

In English: I am looking to have a dimension that says if age is between 0 and 5 they are an infant, 6 to 18 a child, and > 18 an adult. I know I can do this in a case statement, but I have similar scenarios that require multiple levels of nesting and the ability to easily change the banding criteria, with many fact tables sharing the same banding information.

I have done this in the past by preprocessing the fact table and updating the key value based on between joins, but this is very inefficient on large datasets (upwards of 30 million rows).

I also need to be able to do this across continuous values such as monetary amounts.

Can anyone help

Philip Coupar

Dear Philip,

Sorry for interrupting. Actually, I'm facing the same problem as you did. Have you solved this by any means? Please share with us if you have. Thanks!

Regards,
Alex|||

Take a look at the DiscretizationMethod property of an attribute

http://msdn2.microsoft.com/en-US/library/ms174810.aspx

Try building a new attribute that discretizes the continuous set of values.

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||Dear Edward,

I've tried looking into DiscretizationMethod before. However, it only has 3 modes: Automatic, EqualAreas and Clusters. On the other hand, what I am trying to do is use customizable buckets, say 0~49, 50~99, 100~149 and so on. I can't find any way to control the DiscretizationMethod like this. Thanks!

Regards,
Alex|||

Your custom buckets fit into the EqualAreas scenario :)

On the other hand, you can create a column in the relational database and create the custom mapping yourself. Alternatively, you can create a named calculation in the DSV and use a Case statement to map into the values you'd like. http://msdn2.microsoft.com/en-us/library/ms181765.aspx

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||Dear Edward,

I don't think EqualAreas can do what I want. The area ranges are calculated by SSAS and I have no control over them, so it's impossible to group my way. It would make sense to me if it provided an option for me to specify the size of each area.

By the way, I'm currently using the CASE method to handle my problem. The drawbacks are a long SQL statement, as I need to break down 0 to 1000 with an interval of 50. Moreover, I need to insert some dummy records into the Fact table; otherwise, I get a discontinuous list if there is no such value in the Fact table. I don't want my users to see the range jumping from 0~49 to 500~549.

Regards,
Alex|||

I agree it is possible to solve this issue with a case statement. It is also possible to solve it by using a table that specifies the start and end point of each bucket and allocates the surrogate key necessary to have a clean dimension table.

However, if we take the example where the fact table contains someone's age at the time of an event, we may first of all break this down by Adult/Child, then have further age bandings under this. The discretization functionality does not accommodate this, as it will not build multiple levels, and you cannot assert the start and end ages for each bucket, which is required for this type of analysis.

I do not really want to find the surrogate key by doing a between join on a dimension table, as this would be quite inefficient over very large fact tables. It also does not make sense to me, in terms of performance, to have the case statement in the DSV, as this requires a large amount of processing over a degenerate dimension attribute, a significant issue if you need to reprocess 250 million fact rows.

The original post was asking for ideas on the most efficient ways of achieving this. Edward's posts may help some people who have different discretization requirements, or are handling smaller datasets. Alex is looking at the same issue I referred to in my original post. It may well be that there are no more efficient ways of doing this at the moment; if anyone else has ideas of how to improve on this kind of requirement, I am sure we are all looking forward to hearing from you.
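One observation on the fixed-width bands discussed here (0~49, 50~99, ...): when every bucket has the same width, the band key can be computed arithmetically instead of enumerating every range in a long CASE expression. A minimal sketch of the idea using Python's sqlite3 with a hypothetical fact table, purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fact (amount INTEGER);        -- hypothetical fact table
INSERT INTO fact (amount) VALUES (12), (47), (50), (512), (999);
""")

# Fixed-width buckets of 50: integer division derives the band directly,
# so no CASE arm per range and no dummy fact rows are needed.
rows = conn.execute("""
    SELECT (amount / 50) * 50       AS band_start,
           (amount / 50) * 50 + 49  AS band_end,
           COUNT(*)                 AS n
    FROM fact
    GROUP BY amount / 50
    ORDER BY band_start
""").fetchall()

for start, end, n in rows:
    print(f"{start}~{end}: {n}")
```

The same `(amount / 50) * 50` expression could serve as a named calculation or a computed column; pre-populating a small bucket dimension with every band from 0 to 950 would then avoid the dummy-record workaround for gaps.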

|||

Check out my articles doing aging buckets this way:

... In SSAS with Named Calculations:

http://www.databasejournal.com/features/mssql/article.php/10894_3590866_7

... And a slightly different approach in MSAS 2000 (same logic applies in SSAS 2005):

http://www.databasejournal.com/features/mssql/article.php/3525516

I've done it other ways for clients with specific needs, too. Let me know if you wish further amplification, etc.

Good Luck!

William E. Pearson III
CPA, CMA, CIA, MCSE, MCDBA
Island Technologies Inc.
931 Monroe Drive
Suite 102-321
Atlanta, GA 30308

404.872.5972 Office

wep3@.islandtechnologies.com
wep3@.msas-architect.com

www.msas-architect.com
-- -- --

Publisher Sites:

http://www.databasejournal.com/article.php/1459531

http://www.sql-server-performance.com/bill_pearson.asp

http://www.informit.com/authors/bio.asp?a=862acd62-4662-49ae-879d-541c8b4d656f

http://www.2000trainers.com/section.aspx?sectionID=17

|||

I like the idea of adding the aging logic to the time dimension; this combines the best of the surrogate key approach with the simplicity of the embedded case statements, and would overcome the performance issues around large fact tables.

However, this has a very limited application, as within a single UDM model there may be many different custom range requirements: we may have both transaction aging and a person's age, each of which would be banded very differently, and this approach would mean exposing both aging structures on every use of the time dimension.

interval dates

i want to create a user-defined function that accepts 2 dates and returns
the time difference in days, hours, minutes, and seconds-
for example, a possible result would be something like:
2 days, 17 hours, 46 minutes, 12 seconds
has anyone out there done this before in sql server? any help would be much
appreciated.
thanks,
jtjtl
CREATE FUNCTION dbo.dates_range
(
    @date1 DATETIME,
    @date2 DATETIME
)
RETURNS VARCHAR(32)
AS
BEGIN
    DECLARE @sD INT, @sR INT, @mD INT, @mR INT, @hR INT
    SET @sD = DATEDIFF(SECOND, @date1, @date2)
    SET @sR = @sD % 60
    SET @mD = (@sD - @sR) / 60
    SET @mR = @mD % 60
    SET @hR = (@mD - @mR) / 60
    RETURN CONVERT(VARCHAR, @hR)
        + ':' + RIGHT('00' + CONVERT(VARCHAR, @mR), 2)
        + ':' + RIGHT('00' + CONVERT(VARCHAR, @sR), 2)
END
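The follow-up question about days is answered by one more step of the same modulo arithmetic: take the hours modulo 24 and carry the quotient into a days term. A sketch in Python mirroring that logic (in T-SQL you would add the days variable in the same style as the existing ones):

```python
from datetime import datetime

def dates_range(date1: datetime, date2: datetime) -> str:
    """Same modulo arithmetic as the T-SQL function, extended to break out days."""
    total_seconds = int((date2 - date1).total_seconds())
    seconds = total_seconds % 60
    total_minutes = total_seconds // 60
    minutes = total_minutes % 60
    total_hours = total_minutes // 60
    hours = total_hours % 24     # the extra step: hours modulo 24 ...
    days = total_hours // 24     # ... and the quotient becomes days
    return f"{days} days, {hours} hours, {minutes} minutes, {seconds} seconds"

# Matches the format asked for in the original post.
print(dates_range(datetime(2005, 1, 1, 2, 0), datetime(2005, 1, 3, 19, 46, 12)))
# 2 days, 17 hours, 46 minutes, 12 seconds
```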
"JTL" <jliautaud@.hotmail.com> wrote in message
news:uo90dQFDGHA.2908@.TK2MSFTNGP09.phx.gbl...
>i want to create a user-defined function that accepts 2 dates and returns
>the time difference in days, hours, minutes, and seconds-
> for example, a possible result would be something like:
> 2 days, 17 hours, 46 minutes, 12 seconds
> has anyone out there done this before in sql server? any help would be
> much appreciated.
> thanks,
> jt
>|||thanks!
do you know how i can return days as well?
jtl
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:%23rZKdUFDGHA.1312@.TK2MSFTNGP09.phx.gbl...
> jtl
> CREATE FUNCTION dbo.dates_range
> (
> @date1 DATETIME,
> @date2 DATETIME
> )
> RETURNS VARCHAR(32)
> AS
> BEGIN
> DECLARE @sD INT, @sR INT, @mD INT, @mR INT, @hR INT
> SET @sD = DATEDIFF(SECOND, @date1, @date2)
> SET @sR = @sD % 60
> SET @mD = (@sD - @sR) / 60
> SET @mR = @mD % 60
> SET @hR = (@mD - @mR) / 60
> RETURN CONVERT(VARCHAR, @hR)
> +':'+RIGHT('00'+CONVERT(VARCHAR, @mR), 2)
> +':'+RIGHT('00'+CONVERT(VARCHAR, @sR), 2)
> END
>
>
> "JTL" <jliautaud@.hotmail.com> wrote in message
> news:uo90dQFDGHA.2908@.TK2MSFTNGP09.phx.gbl...
>|||You could do something like this with my TTimeSpan UDT at [0]:
declare @ts TTimeSpan
set @ts = TTimeSpan::FromDates('1/1/2005 2:00', '1/12/2005 14:6')
select @ts.ToString(), @ts.ToLongString()
Output:
-- ---
11.12:6:0 11 Days 12 Hours 6 Minutes 0 Seconds 0 Milliseconds
[0] - http://channel9.msdn.com/ShowPost.aspx?PostID=147390
William Stacey [MVP]
"JTL" <jliautaud@.hotmail.com> wrote in message
news:%23APwXiFDGHA.2700@.TK2MSFTNGP14.phx.gbl...
> thanks!
> do you know how i can return days as well?
> jtl
> "Uri Dimant" <urid@.iscar.co.il> wrote in message
> news:%23rZKdUFDGHA.1312@.TK2MSFTNGP09.phx.gbl...
>|||i didn't see where to get the source for TTimeSpan- can you help?
jt
"William Stacey [MVP]" <william.stacey@.gmail.com> wrote in message
news:uctoCXQDGHA.2436@.TK2MSFTNGP15.phx.gbl...
> You could do something like this with my TTimeSpan UDT at [0]:
> declare @ts TTimeSpan
> set @ts = TTimeSpan::FromDates('1/1/2005 2:00', '1/12/2005 14:6')
> select @ts.ToString(), @ts.ToLongString()
> Output:
> -- ---
--
> 11.12:6:0 11 Days 12 Hours 6 Minutes 0 Seconds 0 Milliseconds
> [0] - http://channel9.msdn.com/ShowPost.aspx?PostID=147390
> --
> William Stacey [MVP]
> "JTL" <jliautaud@.hotmail.com> wrote in message
> news:%23APwXiFDGHA.2700@.TK2MSFTNGP14.phx.gbl...
>|||JTL (jliautaud@.hotmail.com) writes:
> i didn't see where to get the source for TTimeSpan- can you help?
What about reading William's post in full?
> "William Stacey [MVP]" <william.stacey@.gmail.com> wrote in message
> news:uctoCXQDGHA.2436@.TK2MSFTNGP15.phx.gbl...
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

interupt JDBC SQL query connection

Hi

I'm using Java and JDBC to connect to MS SQL Server 2000 (using the MS
drivers msbase, msutil and mssqlserver jars).

Sometimes it takes a long time for statement.executeQuery
to return and start returning the resultset
(a full DB scan can take 30-40 minutes).

Does anyone know if it's possible to interrupt/halt the query before
it eventually comes back? The setQueryTimeout works OK, but it has to be
set quite high to allow real queries to run correctly. What I'm trying
to do is allow users to halt their query if they've realised they
want to alter it or it's taking too long. The query is running in a
separate thread, and I've tried using the main thread to close the
connection or stop the query, but this still waits until the resultset
is returned.

Maybe it's a driver issue or ....

Thanks
Mike|||Hi

I am not sure about the JDBC driver itself, but you could use a timer thread
to kill it off.

John

"Mike Read" <mar@.roe.ac.uk> wrote in message
news:Pine.OSF.4.58.0406091501320.263861@.reaxp06.ro e.ac.uk...
> Hi
> I using Java and JDBC to connect to MS SQL server 2000 (using the MS
> drivers msbase,msutil and mssqlserver.jars).
> Sometimes it takes a long time for statement.executeQuery
> to return and start returning the resultset
> (full DB scan can take 30-40 minutes).
> Does anyone know if it's possible to interupt/halt the query before
> it eventually comes back. The setQueryTimeout works OK but this has to be
> set quite high to allow real queries to run correctly. What I'm trying
> to do is allow users to halt their query if they've realised they
> want to alter it or its taking too long. The query is running in a
> separate thread and I've tried using the main thread to close the
> connection or stop the query but this still waits until the resultset
> is returned.
> Maybe it's a driver issue or ....
> Thanks
> Mike|||how many people here get the luxury of creating and maintaining apps in
jbuilder and together everyday? care to share any stories, either great
or not so great! i just got an eval of the latest and greatest installed
and after using previous versions for years, what i see now looks really
great. just was wondering about your experiences.

cheers

- perry|||I'm just new to the Java world and also the JBuilder X Enterprise IDE. I'm
making a servlet client, using Struts, because the MVC pattern is just what
we need for our application.
Until now this is working just fine. I like the IDE: autocompletion, colors,
indentation, etc.
The program gives lots of tips and help using classes, packages, etc.

Greetings....

In comp.lang.java perry <perry@.cplusplus.org> wrote:
: how many people here get the luxery of creating and maintaining apps in
: jbuilder and together everyday? care to share any stories, either great
: or not so great! i just got an eval of the latest and greatest installed
: and after using previous versions for years, what i see now looks really
: great. just was wondering about your experiences.

: cheers

: - perry

--
Alfredo Diaz
================
School of Engineering and Science, University of Chile
Beaucheff 850, P.O. Box 2777, Santiago, CHILE
mailto:aadiaz@.dcc.uchile.cl.nospam|||Mike Read (mar@.roe.ac.uk) writes:
> I using Java and JDBC to connect to MS SQL server 2000 (using the MS
> drivers msbase,msutil and mssqlserver.jars).
> Sometimes it takes a long time for statement.executeQuery
> to return and start returning the resultset
> (full DB scan can take 30-40 minutes).
> Does anyone know if it's possible to interupt/halt the query before
> it eventually comes back. The setQueryTimeout works OK but this has to be
> set quite high to allow real queries to run correctly. What I'm trying
> to do is allow users to halt their query if they've realised they
> want to alter it or its taking too long. The query is running in a
> separate thread and I've tried using the main thread to close the
> connection or stop the query but this still waits until the resultset
> is returned.

You will have to examine the documentation for the JDBC driver. Most
client APIs for SQL Server supply functions for asynchronous calls.
With an asynchronous call, you could then issue a cancel request (again,
this is offered by most client APIs).

microsoft.public.sqlserver.jdbcdriver may offer more exact answers with
regards to that driver.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
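For what it's worth, in JDBC specifically, java.sql.Statement.cancel() is designed to be called from a second thread while executeQuery blocks on the first. The run-then-cancel pattern looks like this sketch, which uses Python's sqlite3 (Connection.interrupt()) as a stand-in purely for illustration:

```python
import sqlite3
import threading

# Pattern sketch: run the query on one thread, cancel it from another.
# JDBC exposes java.sql.Statement.cancel() for this; sqlite3's
# Connection.interrupt() plays the same role in this illustration.
conn = sqlite3.connect(":memory:", check_same_thread=False)

# Fire the cancel after a short delay, as a user's "stop" button would.
timer = threading.Timer(0.2, conn.interrupt)
timer.start()

cancelled = False
try:
    # A deliberately long-running query: counts 100 million generated rows.
    conn.execute("""
        WITH RECURSIVE c(x) AS (
            SELECT 1 UNION ALL SELECT x + 1 FROM c WHERE x < 100000000
        )
        SELECT count(*) FROM c
    """).fetchone()
except sqlite3.OperationalError:
    cancelled = True   # sqlite raises "interrupted" when interrupt() lands
finally:
    timer.cancel()

print("query cancelled:", cancelled)
```

The key point of the pattern is that the cancel call comes from a thread other than the one blocked in the query, which is exactly how Statement.cancel() is meant to be used.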

Intersolv ODBC Driver

Help me!
I am trying to import some data from an Informix 5.0 table to SQL Server 2000 with
a DTS package.
I use the Intersolv ODBC driver version 03.011.0021 to connect to the database.
It works well, but I can't define a parameter in the SQL statement. For
example, if I try to execute:
select *
from table
where field = ?
the DTS environment gives me an "unspecified syntax error...."
Where is the problem?
Does the driver not support parameters, or must they be defined with another
syntax?
Bye
Daniele|||Can you post a code snippet?
SQL Server T-SQL doesn't support anything close to that syntax so I'm not
sure what you're doing exactly...
From T-SQL, you would need to build dynamic SQL or use sp_executesql to do
the parameterization you're talking about.
Brian Moran
Principal Mentor
Solid Quality Learning
SQL Server MVP
http://www.solidqualitylearning.com
"Nonno Oreste" <dffdfd@.dfd.dfd> wrote in message
news:OSsw2unuDHA.1596@.TK2MSFTNGP10.phx.gbl...
quote:

> Help me!
> I try to import some data from an informix 5.0 table to Sql Server 2000

with
quote:

> a DTS package.
> I use Intersolv ODBC driver version 03.011.0021 to connect to the

database.
quote:

> It works well, but i can't define a parameter in the sql statement. For
> example if a try to execute :
> select *
> from table
> where field = ?
> The dts environment give me an "unspecified syntax error...."
> Where is the problem?
> The driver don't support parameters or these must be defined with another
> syntax?
> Bye
> Daniele
>
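Whether the `?` marker works depends entirely on the driver's support for parameter binding. For comparison, here is what a working marker looks like with a driver that does bind it, sketched with Python's sqlite3 and a hypothetical table, purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (field TEXT, payload TEXT);   -- hypothetical staging table
INSERT INTO t VALUES ('a', 'keep'), ('b', 'skip');
""")

# The ODBC-style '?' marker: the driver binds the value at execute time,
# so the statement text never contains the literal.
rows = conn.execute("SELECT payload FROM t WHERE field = ?", ("a",)).fetchall()
print(rows)   # [('keep',)]
```

If the Intersolv driver rejects the marker, the fallback is the one Brian describes: build the statement text yourself (dynamic SQL) rather than relying on driver-side binding.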

intersection on a single table

Hi all !

I have a table with no keys (temp table) which looks like this :
col1|col2|col3
001|A|.087
001|B|.032
001|C|.345
002|A|.324
002|B|.724
003|A|.088
003|C|.899
001|A|.087
001|A|.234
001|B|.032

As you see, there are some duplicate entries in it. I would like to get a list of all the rows that have the same col1 and col2 BUT a different col3 value. The result should return col1=001 col2=A but NOT col1=001 col2=B. I tried a lot of queries with EXISTS, HAVING, etc... but nothing seems to work.

Anyone have an idea how I can do it?|||How about something like this?

select distinct test.col1, test.col2, test.col3
from test
inner join
(select col1, col2, count(1) as colcount
from test
group by col1, col2
having count(1) > 1) a on test.col1 = a.col1 and test.col2 = a.col2

Have some fun.|||select col1, col2, col3
from yourtable as t
where 1 < ( select count(distinct col3)
            from yourtable
            where col1 = t.col1
              and col2 = t.col2 )|||Both solutions work!

Thanks a lot !
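Both posted queries hinge on the same test: a (col1, col2) pair qualifies only when it has more than one distinct col3. A sketch of that condition against the sample data, using Python's sqlite3 for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col1 TEXT, col2 TEXT, col3 REAL);
INSERT INTO t VALUES
 ('001','A',.087),('001','B',.032),('001','C',.345),
 ('002','A',.324),('002','B',.724),
 ('003','A',.088),('003','C',.899),
 ('001','A',.087),('001','A',.234),('001','B',.032);
""")

# Pairs whose col3 is not unique: exact duplicates ('001','B') do not
# qualify, genuinely conflicting values ('001','A') do.
pairs = conn.execute("""
    SELECT col1, col2
    FROM t
    GROUP BY col1, col2
    HAVING COUNT(DISTINCT col3) > 1
""").fetchall()
print(pairs)   # [('001', 'A')]
```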

Intersection of N sets

Imagine a table that enumerates membership of items to some set:
create table sets
(
setId int not null,
itemId int not null
);
some data:
set 1 = {1,3,5,7,9}
set 2 = {1,2,3,4,5}
set 3 = {4,5,6,7}
translated to this model:
setId, itemId
1,1
1,3
1,5
1,7
1,9
2,1
2,2
2,3
2,4
2,5
3,4
3,5
3,6
3,7
Consider another table containing an enumeration of sets to calculate the
intersection of:
create table setIntersection
(
setId int not null
);
If the setIntersection table contains the records {1,2,3}, I'd like to calculate
the intersection of the items contained in the sets 1, 2 and 3.
In the example above, this would be {5}.
While this is trivial to do in a typical imperative fashion (looping inside
an SP), I'm wondering if it's possible to perform this operation in a single
query, perhaps using a CTE.|||Essentially, it would be a join on the tables, each representing a set.
SELECT t1.itemid -- or t2.itemid or t3.itemid
FROM tbl t1,
tbl t2,
tbl t3
WHERE t1.itemid = t2.itemid
AND t2.itemid = t3.itemid
AND t1.setid = 1
AND t2.setid = 2
AND t3.setid = 3 ;
Anith|||Taras Tielkes (taras.tielkes@.gmail.com) writes:
> Imagine a table that enumerates membership of items to some set:
> create table sets
> (
> setId int not null,
> itemId int not null
> );
> some data:
> set 1 = {1,3,5,7,9}
> set 2 = {1,2,3,4,5}
> set 3 = {4,5,6,7}
>...
> Consider another table containing an enumeration of sets to calculate the
> intersection of:
> create table setIntersection
> (
> setId int not null
> );
> If the setIntersection table contains the records {1,2,3}, I'd like to
> calculate the intersection of the items contained in the sets 1, 2 and
> 3. In the example above, this would be the {5}.
SELECT itemId
FROM sets s
WHERE EXISTS (SELECT *
FROM setIntersection sI
WHERE sI.setId = s.setId)
GROUP BY itemId
HAVING COUNT(*) = (SELECT COUNT(*) FROM setIntersection)
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Ah, but I meant to ask the question in a generic way: "how can I writer a
query that will perform the required operation for an arbitrary number of
sets in the example model. Say 100 defined sets :-)

> Essentially, it would be a join on the the tables, each representing a
> set.
> SELECT t1.itemid -- or t2.itemid or t3.itemid
> FROM tbl t1,
> tbl t2,
> tbl t3
> WHERE t1.itemid = t2.itemid
> AND t2.itemid = t3.itemid
> AND t1.setid = 1
> AND t2.setid = 2
> AND t3.setid = 3 ;|||In that case, Erland's response should help. You may also want to search the
archives for "relational division" to find some related examples.
Anith|||Taras Tielkes wrote:
> Imagine a table that enumerates membership of items to some set:
> create table sets
> (
> setId int not null,
> itemId int not null
> );
> some data:
> set 1 = {1,3,5,7,9}
> set 2 = {1,2,3,4,5}
> set 3 = {4,5,6,7}
> translated to this model:
> setId, itemId
> 1,1
> 1,3
> 1,5
> 1,7
> 1,9
> 2,1
> 2,2
> 2,3
> 2,4
> 2,5
> 3,4
> 3,5
> 3,6
> 3,7
> Consider another table containing an enumeration of sets to calculate the
> intersection of:
> create table setIntersection
> (
> setId int not null
> );
> If the setIntersection table contains the records {1,2,3}, I'd like to calculate
> the intersection of the items contained in the sets 1, 2 and 3.
> In the example above, this would be the {5}.
> While this is trivial to do in a typical imperative fashion (looping inside
> an SP), I'm wondering if it's possible to perform this operation in a single
> query, perhaps using an CTE.
It's relational division:
sets/setIntersection

intersection of dimension?

i have a query that counts the number of stores that exist in both dimensions:

nonemptycrossjoin(descendants(dimlocation.currentmember,9,leaves),descendants(dimtargetset.currentmember,9,leaves)).count

dimlocation is a parent child dimension while dimtargetset is a star schema...

later I realized that I'm only counting the stores with data and excluding stores with no data from the count... I tried crossjoin only but it doesn't return the correct count...

is there any way I could count them? I'm using AS 2000... thanks...

any ideas?|||

See this post: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2139906&SiteID=1

Best regards

- Jens

Intersecting Subtotals

I am trying to eliminate the value displayed in an intersecting
subtotal in a matrix. I have a subtotal setup for both the row and
column where the total of the column totals across the bottom isn't
appropriate to be totaled.
Is anyone aware of a way to identify this intersecting cell and change
it to a non-display field or to change the text color to white so it
isn't visible?
Thanks|||This will identify all 4 versions of a cell:
=iif(InScope("ColumnGroup1"), iif(InScope("RowGroup1"), "In Cell", "In
Subtotal of RowGroup1"), iif(InScope("RowGroup1"), "In Subtotal of
ColumnGroup1", "In Subtotal of entire matrix"))
You can use it to change the background colour too. I have a matrix with
both row and column subtotal, and like you, I can't get the calculation to
work out for the intersecting cell. My cell expression looks like this:
=iif(InScope("matrix2_DateWeek_Year"),
iif(InScope("matrix2_DateWeek_Week"),
SUM(Fields!Measures_Billable_Hours.Value),
SUM(Fields!Measures_Billable_Hours.Value)),
iif(InScope("matrix2_DateWeek_Week"),
(First(Fields!Measures_Billable_Hours.Value, "matrix2_DateWeek_Week") -
Last(Fields!Measures_Billable_Hours.Value, "matrix2_DateWeek_Week")),
"x"))
The x shows up in the bottom right corner.
Kaisa M. Lindahl
"GregR" <grinard@.mesanetworks.net> wrote in message
news:1137777096.339565.269760@.z14g2000cwz.googlegroups.com...
>I am trying to eliminate the value displayed in an intersecting
> subtotal in a matrix. I have a subtotal setup for both the row and
> column where the total of the column totals across the bottom isn't
> appropriate to be totaled.
> Is anyone aware of a way to identify this intersecting cell and change
> it to a non-display field or to change the text color to white so it
> isn't visible?
> Thanks
>