Sunday, February 19, 2012
Interpreting WaitResource on SysProcesses
The SPID at the head of the blocking chain = 226. SPID 226 has the following
from SYSPROCESSES:
Open_Tran = 2
WaitResource = 40:1:41356
WaitType = 0x0000
cmd = INSERT
LastWaitType = PAGELATCH_SH
The SPID getting blocked = 394. SPID 394 has the following from SYSPROCESSES:
Open_Tran = 2
WaitResource = TAB: 40:514100872
WaitType = 0x0008
cmd = INSERT
LastWaitType = LCK_M_IX
From the above, do I interpret:
The WaitResource on SPID 226 to be [DBID:INDEXID:PAGEID]?
The WaitResource on SPID 394 to be [DBID:OBJECTID]?
If the WaitResource on SPID 394 is [DBID:OBJECTID], is the object in this
case the locked object?
--
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200611/1

Does anyone want to take a stab at this?
cbrichards wrote:
>I have a blocking issue, running SQL 2000, SP4, on Windows 2003.
>The SPID at the head of the blocking chain = 226. SPID 226 has the following
>from SYSPROCESSES:
>Open_Tran = 2
>WaitResource = 40:1:41356
>WaitType = 0x0000
>cmd = INSERT
>LastWaitType = PAGELATCH_SH
>The SPID getting blocked = 394. SPID 394 has the following from SYSPROCESSES:
>Open_Tran = 2
>WaitResource = TAB: 40:514100872
>WaitType = 0x0008
>cmd = INSERT
>LastWaitType = LCK_M_IX
>From the above, do I interpret:
>The WaitResource on SPID 226 to be [DBID:INDEXID:PAGEID]?
>The WaitResource on SPID 394 to be [DBID:OBJECTID]?
>If the WaitResource on SPID 394 is [DBID:OBJECTID], is the object in this
>case the locked object?
--
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200611/1
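In case it helps anyone decoding these later: a page wait resource like 40:1:41356 is generally [DBID:FILEID:PAGEID] (the middle number is the file id, not the index id), and TAB: 40:514100872 is indeed [DBID:OBJECTID], i.e. the locked table. A sketch of how one might resolve the names, assuming the ids from the post (run in the database whose id is 40):

```sql
-- Resolve the TAB: 40:514100872 resource: dbid 40, object id 514100872.
-- OBJECT_NAME resolves in the current database, so USE the db whose
-- id is 40 first (DB_NAME(40) tells you which one that is).
SELECT DB_NAME(40) AS db_name,
       OBJECT_NAME(514100872) AS locked_table

-- For the page resource 40:1:41356 (dbid:fileid:pageid), the
-- undocumented DBCC PAGE can identify the owning object; trace flag
-- 3604 sends its output to the client instead of the error log:
-- DBCC TRACEON(3604)
-- DBCC PAGE(40, 1, 41356, 0)
```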
Interpreting the percentage in decision-tree model
I used a decision-tree mining-model to describe and predict fraud. The table contains 1039 records with 775 distinct value of A-number (the calling party). I used 9 columns in the model. SQL Server reports that only 3 columns are significant in predicting the fraud
- BPN_is_too_short (called party-number is too short)
- Duration_is_zero
- Invalid_area_code
The key column is A-number, and the predicted column is Is_Fraud, whose only values are 0 and 1. There is no record with NULL (a missing value) in the Is_Fraud column.
Mining Legend shows in the first split
[-] 625 cases of fraud
[-] 150 cases of non-fraud
[-] 0 cases of missing
In addition to that, Mining Legend shows
[-] 79.69% of fraud
[-] 19.64% of non-fraud
[-] 0.67% Missing
Now when I compare those values, they don't match.
(A) 625/775 is 80.645%, not 79.69%
(B) 150/775 is 19.355%, not 19.64%
(C) 0 cases of NULL (missing value) should imply 0% of missing, not 0.67% of missing
Furthermore, in one node (with the split on duration_is_zero) there are 541 cases of fraud and 0 cases of non-fraud, which implies the node is a leaf node. However, Mining Legend shows
[D] 514 cases of fraud, 99.35%
[E] 0 cases of non-fraud, 0.33%
[F] 0 cases of missing, 0.33%
My questions
(1) Why don't the values match in cases A through C?
(2) Why don't the values match in cases D through F, when there is no subtree at all?
I've searched explanation by reading the mathematical reasoning, entropy, Gini index; but it does not answer the discrepancies of those values and percentages in the Mining Legend.
Regards,
Bernaridho

Our DT algorithm uses a Bayesian prior in all calculations. This means that it assumes that all possible states have equal probability at the beginning. This prior gets distributed throughout the tree. Therefore, what you are seeing are the prior-adjusted probabilities and the actual support.
Hi,
I need more details on this for my thesis, which also uses DT.
Could you give an example of how to calculate this percentage? I have tried to calculate it using a Bayesian prior, but my result doesn't match the reported percentage.
Thank you for your answer.
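The reported figures can be reproduced, at least approximately, with a simple smoothing formula: P(state) = (support + m) / (total + m * number_of_states), where m is a per-state prior weight. This is an illustrative sketch, not the documented Analysis Services formula; the value m = 5.3 below is back-solved from the 0.67% missing figure in the post, so it is an assumption:

```sql
-- Illustrative only: add a per-state prior m to each of the 3 states
-- (fraud, non-fraud, missing). m = 5.3 is back-solved from the reported
-- 0.67% missing probability; it is NOT a documented product constant.
DECLARE @m float, @total float
SELECT @m = 5.3, @total = 775
SELECT (625 + @m) / (@total + 3 * @m) AS p_fraud,      -- ~0.7969
       (150 + @m) / (@total + 3 * @m) AS p_nonfraud,   -- ~0.1964
       (  0 + @m) / (@total + 3 * @m) AS p_missing     -- ~0.0067
```

This also shows why 0 cases can still carry a nonzero percentage: the prior contributes probability mass even to unobserved states.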
Interpreting the execution plan results
On Query Analyzer..
I can figure out a little of what the information on the execution plan means, but I'm not sure.
Where can I find a good description of each item that is shown in the execution plan?
e.g. estimated subtree cost, etc.
Thanks
Kenny M. (KennyM@.discussions.microsoft.com) writes:
> On Query Analyzer..
> I can figure out a little what the information on the execution plan means
> but I'm not sure
> Where can I find a good description about each item that is show in the
> execution plan?
>
> e.g estimated subtree cost etc..
SQL Server Books Online. Optimizing Database Performance->Query Tuning->
Analyzing a Query->Logical and Physical Operators.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
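Alongside Books Online, Query Analyzer can emit the same figures as text, which makes it easier to match each item against the documentation. A small sketch (the table is from the pubs sample database):

```sql
-- Estimated plan only: the query is compiled but NOT executed.
SET SHOWPLAN_ALL ON
GO
SELECT * FROM authors WHERE au_lname = 'Ringer'   -- pubs sample table
GO
SET SHOWPLAN_ALL OFF
GO
-- The TotalSubtreeCost column is the "estimated subtree cost" shown in
-- the graphical plan; PhysicalOp/LogicalOp name each operator, which is
-- how you look them up under Logical and Physical Operators in BOL.
```

SET STATISTICS PROFILE ON gives the same columns plus actual row counts after the query really runs.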
Interpreting Statistics IO
This is my original statistics io return:
Table 'MyTable'. Scan count 1, logical reads 4338, physical reads 0, read-ahead reads 0.
This is my revised statistics io return:
Table 'MyTable'. Scan count 117, logical reads 536, physical reads 0, read-ahead reads 0.
Do I view this as an improvement since the logical reads reduced? Profiler shows a large reduction in duration and reads when comparing the original to the revised. The Scan count grew substantially, yet the logical reads dropped substantially. Based upon Profiler my assumption is that the logical reads reduction is the best indicator. True?
Message posted via droptable.com
http://www.droptable.com/Uwe/Forum...server/200606/1

> Based upon Profiler my assumption is that the logical reads
> reduction is the best indicator. True?
Yes. Logical reads is as close as you can get to seeing how much SQL Server had to actually hit the hard drive. The only reason "Physical Reads" are zero is because the data your query needed is already in the SQL Server data cache.
You could run
DBCC DROPCLEANBUFFERS
to empty the cache, forcing SQL Server to actually hit the hard drive, and you'd see how slow your query could really be.
But hard drive access is everything.

Compare to a query that I wish I could make faster:
Table 'Worktable'. Scan count 155, logical reads 16039, physical reads 0,
read-ahead reads 0.
Table 'Patrons'. Scan count 4, logical reads 12, physical reads 0,
read-ahead reads 0.
Table 'Transactions'. Scan count 24242, logical reads 114968, physical reads
0, read-ahead reads 0.
Table 'Transactions'. Scan count 24242, logical reads 194430, physical reads
0, read-ahead reads 0.
Table 'Windows'. Scan count 1566, logical reads 3132, physical reads 0,
read-ahead reads 0.
Table 'Sessions'. Scan count 1566, logical reads 9396, physical reads 0,
read-ahead reads 0.
Table 'LCTTransactions'. Scan count 13084, logical reads 39312, physical
reads 0, read-ahead reads 0.
Table 'LCTTransactions'. Scan count 13084, logical reads 26412, physical
reads 0, read-ahead reads 0.
Table 'LCTs'. Scan count 4, logical reads 40, physical reads 0, read-ahead
reads 0.
Table 'LCTs'. Scan count 4, logical reads 372, physical reads 0, read-ahead
reads 92.
Table 'Worktable'. Scan count 135, logical reads 14779, physical reads 0,
read-ahead reads 0.

Thanks Ian.
For further clarification on the relation to Reads shown in Profiler and the logical and physical reads listed for the same procedure in Statistics IO, it appears the Reads shown in Profiler are greater than the sum of the logical and physical reads produced for the same procedure in Statistics IO. Is Profiler showing reads against System tables whereas Statistics IO does not?
Ian Boyd wrote:
>Yes. Logical reads is as close as you can get to seeing how much SQL Server
>had to actualy hit the hard drive. The only reason "Physical Reads" are zero
>is because the data your query needed is already in the SQL Server data
>cache.
>You could run
>DBCC DROPCLEANBUFFERS
>to empty the cache, forcing SQL Server to actually hit the hard drive, and
>you'd see how slow your query really could take.
>But hard drive access is everything.
Message posted via droptable.com
http://www.droptable.com/Uwe/Forum...server/200606/1

> Is
> Profiler showing reads against System tables whereas Statistics IO does not?
Yes. Also, STATISTICS IO doesn't show I/O performed by scalar UDFs.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"cbrichards via droptable.com" <u3288@.uwe> wrote in message news:616e5ea2a1085@.uwe...[vbcol
=seagreen]
> Thanks Ian.
> For further clarification on the relation to Reads shown in Profiler and t
he
> logical and physical reads listed for the same procedure in Statistics IO,
it
> appears the Reads shown in Profiler are greater than the sum of the logica
l
> and physical reads produced for the same procedure in Statistics IO. Is
> Profiler showing reads against System tables whereas Statistics IO does no
t?
> Ian Boyd wrote:
> --
> Message posted via droptable.com
> http://www.droptable.com/Uwe/Forum...server/200606/1
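For anyone reproducing this comparison: the per-table lines above come from SET STATISTICS IO, and DBCC DROPCLEANBUFFERS (mentioned in the replies) forces a cold-cache run. A sketch, using the MyTable name from the post; only run the cache flush on a test server:

```sql
SET STATISTICS IO ON

-- Warm-cache run: physical reads are usually 0 because pages are cached.
SELECT COUNT(*) FROM MyTable

-- Cold-cache run: write out dirty pages, then discard clean buffers so
-- physical/read-ahead reads become visible. Test servers only!
CHECKPOINT
DBCC DROPCLEANBUFFERS
SELECT COUNT(*) FROM MyTable

SET STATISTICS IO OFF
```

Comparing the two runs separates "pages touched" (logical reads, stable across runs) from "pages fetched from disk" (physical plus read-ahead reads, which depend on cache state).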
Interpreting SQLIO Results... What Now?
Hi all, I ran SQLIO against my SAN and got back some discouraging results.
It basically seems to point to optimal performance settings of 64KB I/O's
and 4096 KB Stripes. My question is this: how do I ensure that Windows and
SQL Server 2K use these settings? Is there something that needs to be set
in the registry or do I need to modify settings in SQL, or what?
Thanks
P.S. - We got even better results when we set the Buffering option to "All".
How can we make sure Windows, SQL, et al. are using the same options that
SQLIO used?
Thanks
"Michael C#" <xyz@.yomomma.com> wrote in message
news:ODOA5mMKFHA.3916@.TK2MSFTNGP14.phx.gbl...
> Hi all, I ran SQLIO against my SAN and got back some discouraging results.
> It basically seemns to point to optimal performance settings of 64KB I/O's
> and 4096 KB Stripes. My question is this: how do I ensure that Windows
> and SQL Server 2K use these settings? Is there something that needs to be
> set in the registry or do I need to modify settings in SQL, or what?
> Thanks
>
interpreting product A-2 >=1.978
Dear Jamie,
Thanks for the reply.
We have another problem to solve.
On the node we are getting: product A -2 >= 1.978.
What does the (-2) mean?
It is mentioned as two time slices ago. Please help me to understand this.
From
menik
That is exactly the meaning. For example, take the series
1, 3, 5, 6, 4, 2, 3
Say I have a tree that says "If (A-2) > 2 then result = (A-1 * 0.75) else result = (A-1 * 1.25)"
The next prediction in the series would be (3*1.25)=3.75 as (A-2) is 2 (not greater than two) and (A-1) is 3. The following prediction would be (3.75 * 0.75)=2.81 as (A-2) in this case is now 3, and (A-1) is the newly predicted value 3.75
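That worked example can be checked mechanically. A sketch of the two-step prediction, using the hypothetical rule from the reply:

```sql
-- Series: 1, 3, 5, 6, 4, 2, 3.
-- Hypothetical rule from the reply:
--   if (A-2) > 2 then result = (A-1) * 0.75 else result = (A-1) * 1.25
DECLARE @a2 float, @a1 float, @next float
SELECT @a2 = 2, @a1 = 3   -- last two actuals: A-2 = 2, A-1 = 3

-- Step 1: A-2 = 2 is NOT greater than 2, so take the "else" branch.
SELECT @next = CASE WHEN @a2 > 2 THEN @a1 * 0.75 ELSE @a1 * 1.25 END
SELECT @next AS step1      -- 3 * 1.25 = 3.75

-- Step 2: slide the window. A-2 is now 3 (the last actual), A-1 is the
-- freshly predicted 3.75, and 3 > 2 takes the "then" branch.
SELECT CASE WHEN 3 > 2 THEN @next * 0.75 ELSE @next * 1.25 END AS step2
-- 3.75 * 0.75 = 2.8125, i.e. the 2.81 in the reply
```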
Interpreting index statistics on SQL 2005
Of 48 indexes, 14 have a density of 0. Does this mean that the indexes are not selective enough? Does it mean they're garbage and I should toss them?
6 have a density of NULL. They are all primary keys. I suppose this just means that they're never used because these tables are rarely queried. Would this assumption be correct?
13 have a density of 1. I have no idea what this means.
The others have densities ranging from 0.01210491 to 0.5841165. I was told that the lower this number is, the more selective and thus more useful an index is. I think 0.5841165 is too high a number. Would this be correct?
Thanks in advance.

First you need to ask: good and bad for what? I can argue both ways equally well. Low density numbers are good for OLTP, and high density numbers are good for OLAP. Density is practically meaningless for compound keys, since they are almost always compound and the density only refers to the leftmost column.
Density is meant to convey one specific property of a table. I use it to get an initial feel for the table, but it doesn't mean much in the "big picture" to me... I really need more information than what the Density can convey by itself.
-PatP

First you need to ask good and bad for what. I can argue both ways equally well. Low density numbers are good for OLTP, and high density numbers are good for OLAP. Density is practically meaningless for compound keys, since they are almost always compound and the density only refers to the leftmost column.
Density is meant to convey one specific property of a table. I use it to get an initial feel for the table, but it doesn't mean much in the "big picture" to me... I really need more information than what the Density can convey by itself.
-PatP
Ummm kaaaaaaay. Didn't understand hardly a word of that.
Basically I have a bunch of giant indexes chewing up space on my DB...one table's indexes take up 3 times as much space as the table's data does. Another tables indexes take up 20% more space than the table's data. I want to get rid of indexes that aren't being used or that are garbage; I thought I could use statistics to help determine that.
I know next to nothing about indexing.

Ah, now we're getting somewhere!
First of all, I need to correct one typing error, I meant "natural keys, since they..." in the second paragraph above, I just didn't proofread it before I sent it.
Just FYI, the density has very little to do with how useful the index is to an application. There are lots of other things that factor in much more importantly than the density.
If you don't want to take the time to understand indexing, you really shouldn't try to make decisions about it... Kind of like if you don't know a distributor from a carburetor, you probably shouldn't work on your motorcycle... Just ride it and enjoy! When it needs a tune up, call a professional.
If you want to learn about indexing, there are lots of web pages that explain it at varying levels of detail. I'm not sure what your needs are, so I can't really recommend one over another, but if you snoop around ten or fifteen minutes I'm sure you can find several that suit your needs.
If you don't want to learn about indexing, you've still got several choices. You can hire someone to come in and do the job for you. This is quick, efficient, and gets you about as good a solution as you're likely to get. You can post your index script here, and maybe someone will help you online. This isn't as good or as reliable, but its cheaper and might be faster too.
-PatP

One thing you may try is the Index Tuning Wizard. It is not a perfect tool, but it may point out what indexes are not getting used. You will need to run a trace of the activity, and making sure you get enough activity is the big problem. If some of these indexes are used for the CEO's big report at the end of the quarter, and you drop that index, there could be a few uncomfortable explanations you will have to give. In short, have all the indexes scripted out, so you can add them back later on, if need be.

Ah, now we're getting somewhere!
First of all, I need to correct one typing error, I meant "natural keys, since they..." in the second paragraph above, I just didn't proofread it before I sent it.
Just FYI, the density has very little to do with how useful the index is to an application. There are lots of other things that factor in much more importantly than the density.
If you don't want to take the time to understand indexing, you really shouldn't try to make decisions about it... Kind of like if you don't know a distributor from a carburator, you probably shouldn't work on your motorcycle... Just ride it and enjoy! When it needs a tune up, call a professional.
If you want to learn about indexing, there are lots of web pages that explain it at varying levels of detail. I'm not sure what your needs are, so I can't really recommend one over another, but if you snoop around ten or fifteen minutes I'm sure you can find several that suit your needs.
If you don't want to learn about indexing, you've still got several choices. You can hire someone to come in and do the job for you. This is quick, efficient, and gets you about as good a solution as you're likely to get. You can post your index script here, and maybe someone will help you online. This isn't as good or as reliable, but its cheaper and might be faster too.
-PatP
*sigh*
I never said I didn't want to learn. I said that I don't know very much about it and that I didn't understand what you had said earlier, which is not the same thing.
I have read many web pages dealing with indexing. I've found them to be as useful as a bucket of pee.
I'm sorry I asked. I don't feel that I did anything to deserve that barrage of condescending comments. I certainly won't be back asking any more questions or taking up any more of your precious time.

One thing you may try is the Index Tuning Wizard. It is not a perfect tool, but it may point out what indexes are not getting used. You will need to run a trace of the activity, and making sure you get enough activity is the big problem. If some of these indexes are used for the CEO's big report at the end of the quarter, and you drop that index, there could be a few uncomfortable explanations you will have to give. In short, have all the indexes scripted out, so you can add them back later on, if need be.
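On the "have all the indexes scripted out" point, a quick way to capture what exists before dropping anything, with a hypothetical table and index name:

```sql
-- Hypothetical names; substitute your own table/index. Run per table
-- before dropping indexes, and save the output.
EXEC sp_helpindex 'dbo.MyBigTable'   -- lists each index's name and keys

-- The density figures discussed earlier come from the statistics pages:
-- "All density" is 1 / (number of distinct key values), so a value near
-- 1 means an almost constant key, and a value near 0 means nearly unique.
DBCC SHOW_STATISTICS ('dbo.MyBigTable', 'IX_MyBigTable_SomeColumn')
```

A density of NULL typically just means statistics have not been gathered for that index yet, not that the index is unused.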
We don't have access to that here; I did have the DBAs run it on a SQL Server 2000 database a short time ago and while they didn't let me see the actual output, they said that my indexes were "fine." Then they called me the next day griping about clustered index scans. Can't hit a moving target so why try? Seems nearly every time I ask, I get insulted and shouted down.

Well, I think you need to chill just a bit...I don't think Pat was trying to be condescending...
If the DBA's are griping about scans...that is a totally different issue...

Well, I think you need to chill just a bit...I don't think Pat was trying to be condescending...
If the DBA's are griping about scans...that is a totally different issue...
I'm trying to chill but it seems that indexing is a sacred subject because everywhere I've asked, I've basically been told to take a hike when I ask questions about that particular subject. The books I've read suck and I haven't found a decent website yet. I'm here asking for help and I'm getting fobbed off. My DBAs don't want to help me since they're about to be outsourced; you couldn't pay them to care about what's going on at the moment. Thing is, I still have a job and my database's performance is an issue. I'm trying to fix it. That's all I'm trying to do here.

How about posting one of the queries that seem to be giving you a problem? Not all queries can be helped by indexes. In fact, some can be outright hurt by indexes, so even the best of experts have to guess more often than you might think.

I've been watching Pat post for several years and he is just about the most even handed poster on here. I think you will probably find that he was trying to triangulate in on your experience and knowledge level to ensure his answer was appropriate and (it appears) simply missed the target a little.
And what is wrong with the UK anyway? Or have we covered that already before...?

How about posting one of the queries that seem to be giving you a problem? Not all queries can be helped by indexes. In fact, some can be outright hurt by indexes, so even the best of experts have to guess more often than you might think.
Well...I think some of the queries are problematic because they're pulling down a ton of rows. The queries drive reports in an application I wrote, and their WHERE clauses are dynamic. I've tried to index the columns used in the WHERE clauses, but the indexes just never seem to get used; I get hit with a clustered index scan unless I keep narrowing the focus of the query, at which point I get the index seeks the DBAs want to see.
For example, this query takes 24 seconds to run and returns 53,914 rows.
SELECT a.Pol_no
, a.OM_Available
, a.OM_Complete
, a.Agency
, a.Channel
, a.EDI_Flag
, a.NTU_Date
, a.PLCY_NAPI
, a.Plcy_Type_Cd
, PIE.Started AS 'Policy_Issue_Started'
, PR.Started AS 'PROPNTU_Started'
, Reg.Ended AS 'Registration_Ended'
, a.Team
, a.SumAssured
, a.RunDate
, a.MissingItems
, a.Distributor
, a.ParentCompany
, a.Site
, a.CI_Type
, a.ProcessOwner
, a.Status AS 'FullStatus'
, CriticalIllness = CASE WHEN a.CI_Type IS NULL THEN 'Life Only' ELSE 'Critical Illness' END
, QuartersOld = CAST(DATEDIFF(m, a.OM_Available, a.RunDate)/3 AS Int)
, DATEDIFF(d, a.OM_Available, a.RunDate) AS 'DaysElapsed'
FROM vwFESubset a
LEFT OUTER JOIN vwPIE PIE ON a.Pol_no = PIE.Pol_no
LEFT OUTER JOIN vwPROPNTU PR ON a.Pol_no = PR.Pol_no
LEFT OUTER JOIN vwReg Reg ON a.Pol_no = Reg.Pol_no
WHERE a.Status IN('Accepted','Registered','Pending')
AND a.OM_Complete IS NULL
We never run it on its own; we just query it from other reports. I get index scans in five different tables and index seeks in three, and sometimes multiple scans or seeks against the same table. A few tables with fewer than 12 rows in them always show clustered index scans. I've been doing this for less than a year, so I'm not always sure how to interpret the query plan, but our DBAs really hate seeing scans of any kind and they jump all over me whenever they see them.|||And what is wrong with the UK anyway? Or have we covered that already before...?
I don't like it.|||I am going to guess that the "vw" in the FROM clause means views. One of the first things to try here is to go through those views and see whether they involve tables you do not need. If so, you can rewrite the query to go against the base tables, and you will avoid all the reads and joins on those extra tables.
As a percentage, how many rows have a.Status IN ('Accepted','Registered','Pending')? I am going to guess it is a sizable percentage. If so, this query may not benefit from an index on that column at all.
Is this query run for every report? If so, is the data aggregated or filtered further up the chain? If you can get that extra filtering back down to the database level, you may stand a better chance of making your DBAs less unpleasant.
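As a side note on checking that advice: on SQL Server 2000 you can view the estimated plan as text without executing the query, which shows exactly which base tables behind the views get scanned. A minimal sketch, reusing the view and columns from the query posted above:

```sql
-- Display the estimated plan as text instead of running the statement
SET SHOWPLAN_TEXT ON
GO
SELECT a.Pol_no, a.Status
FROM vwFESubset a
WHERE a.Status IN ('Accepted', 'Registered', 'Pending')
  AND a.OM_Complete IS NULL
GO
SET SHOWPLAN_TEXT OFF
GO
```

Each Clustered Index Scan operator in the output names the table it touches, so you can compare that list against the tables the report actually needs.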
The views are all necessary, and that table is indexed. The data is sometimes filtered further, and the columns it is filtered on are indexed.
The Status IN list matches 106,800 rows out of a possible 124,727. I wouldn't think a filter that unselective could ever use an index seek, but I can't make my DBAs understand that. They want me to narrow the scope of the queries further, which would make the reports pretty much useless.
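For what it's worth, that selectivity (roughly 86% of rows matching) can be demonstrated to the DBAs with a quick count. This is a sketch against vwFESubset, the view from the query earlier in the thread:

```sql
-- About 86% of rows match the Status filter (106,800 / 124,727), so a
-- nonclustered index on Status is unlikely to be chosen: an index seek
-- plus ~107k bookmark lookups costs more than one clustered index scan.
SELECT
    COUNT(*) AS total_rows,
    SUM(CASE WHEN Status IN ('Accepted', 'Registered', 'Pending')
             THEN 1 ELSE 0 END) AS matching_rows
FROM vwFESubset
```

As a rough rule of thumb, the optimizer abandons a nonclustered seek-plus-lookup plan once more than a few percent of the rows qualify, so a scan here is the expected and cheaper plan, not a defect.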
Interpreting deadlock info
I have trace flag 1204 on my SQL 2000 box. It prints out deadlock info when
they occur. I got the dump below. My question is how to interpret
a line like
RID: 9:1:587432:0
I know how to interpret lines that start with KEY:, but not RID, and I
can't find any documentation.
Wait-for graph
Node:1
RID: 9:1:587432:0 CleanCnt:2 Mode: X Flags: 0x2
Grant List 3::
Owner:0x21e53740 Mode: X Flg:0x0 Ref:0 Life:02000000 SPID:193
ECID:0
RID is row ID; the format is DBID:FILEID:PAGEID:ROWNUM.
Using DBCC PAGE that I mentioned in my reply to your other post, you can see
the actual rows on the page.
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
"Frank Rizzo" <none@.none.com> wrote in message
news:OnTG6fhIGHA.3408@.TK2MSFTNGP12.phx.gbl...
>I have a trace flag 1204 on my sql2k box. It prints out deadlock info when
>they occur. I got the dump below. My question is how to interpret a line
>like
> RID: 9:1:587432:0
> I know how to interpret lines that start with KEY:, but not RID and I
> can't find any documentation.
> Wait-for graph
> Node:1
> RID: 9:1:587432:0 CleanCnt:2 Mode: X Flags: 0x2
> Grant List 3::
> Owner:0x21e53740 Mode: X Flg:0x0 Ref:0 Life:02000000 SPID:193
> ECID:0
>
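For reference, the RID in this thread decodes as database 9, file 1, page 587432, slot 0, so the DBCC PAGE call Kalen mentions would look roughly like the following. This is a sketch, not from the original post; DBCC PAGE is an undocumented command, and trace flag 3604 is needed to route its output to the client connection:

```sql
-- Send DBCC output to the current connection rather than the error log
DBCC TRACEON (3604)

-- DBCC PAGE (dbid, fileid, pageid, printopt)
-- printopt 3 prints the page header plus each row interpreted
DBCC PAGE (9, 1, 587432, 3)

DBCC TRACEOFF (3604)
```

Slot 0 in the printopt-3 output is the row the deadlock graph's ROWNUM refers to.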