Index Fragmentation - Am I interpreting the results correctly?
I'm not a DBA, but I'm responsible for a database that currently has hundreds of tables and ~5TB of data. I recently ran the following query, in hopes of determining index fragmentation:
Declare @DatabaseId Int = DB_ID('ODS')
SELECT
OBJECT_NAME(T.OBJECT_ID) as TableName,
T2.Name as IndexName,
T.index_id as IndexId,
index_type_desc as IndexType,
index_level as IndexLevel,
avg_fragmentation_in_percent as AverageFragmentationPercent,
avg_page_space_used_in_percent as AveragePageSpaceUsedPercent,
page_count as PageCount
FROM sys.dm_db_index_physical_stats (@DatabaseId, NULL, NULL, NULL, 'DETAILED') T
INNER JOIN [sys].[indexes] T2 ON T.index_id = T2.index_id And T.object_id = T2.object_id
ORDER BY avg_fragmentation_in_percent DESC
The first 30-40 rows of the result set look as follows (after importing into Excel):
This was very startling to me. Am I reading this correctly, that I have all these indexes (and actually many more) that are 100% fragmented? Is my query correct?
sql-server index dmv
asked Sep 23 '15 at 16:59 by Randy Minder
2 Answers
It's clearly visible that page_count for all the indexes shown in the figure you attached is < 1500. In that case, even if an index is 100% fragmented, it is NOT going to cause any performance issue.
Below is the recommendation on fragmentation from Microsoft in the SQL Server 2000 Books Online (BOL):
Fragmentation affects disk I/O. Therefore, focus on the larger indexes
because their pages are less likely to be cached by SQL Server. Use
the page count reported by DBCC SHOWCONTIG to get an idea of the size
of the indexes (each page is 8 KB in size). Generally, you should not
be concerned with fragmentation levels of indexes with less than 1,000
pages. In the tests, indexes containing more than 10,000 pages
realized performance gains, with the biggest gains on indexes with
significantly more pages (greater than 50,000 pages).
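To put those page counts in perspective using the 8 KB page size mentioned above: 1,000 pages is roughly 8 MB and 50,000 pages is roughly 400 MB, so the thresholds in that guidance describe indexes far larger than the ones in your screenshot.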
Below is a reply from the Microsoft team on an old Connect item (Connect has since been retired, and its bug and feature requests were not carried over), which was raised to understand why fragmentation did not decrease even after a rebuild:
For small tables, usually performance impact on fragmentation is
undetectable. The first 8 page allocation would be from mixed extents
and mixed extents could be anywhere in database files. Rebuilding
indexes would not change this nature.
If you have a small table, those mixed pages weight a lot during
fragmentation calculation; therefore, rebuilding index may not reduce
fragmentation. (As matter of fact, I could easily construct a case
that fragmentation increases after rebuild.) Those fragmentation would
not be a pain for your query performance; so basically you can ignore.
You should use the query below instead; it filters out the indexes with page_count < 1500, whose fragmentation you can safely ignore. It is generally advised to rebuild only indexes with page_count > 1500:
Declare @DatabaseId Int = DB_ID('ODS')
SELECT
OBJECT_NAME(T.OBJECT_ID) as TableName,
T2.Name as IndexName,
T.index_id as IndexId,
index_type_desc as IndexType,
index_level as IndexLevel,
avg_fragmentation_in_percent as AverageFragmentationPercent,
avg_page_space_used_in_percent as AveragePageSpaceUsedPercent,
page_count as PageCount
FROM sys.dm_db_index_physical_stats (@DatabaseId, NULL, NULL, NULL, 'DETAILED') T
INNER JOIN [sys].[indexes] T2 ON T.index_id = T2.index_id And T.object_id = T2.object_id
WHERE page_count > 1500 -- filter out small indexes whose fragmentation is irrelevant
ORDER BY avg_fragmentation_in_percent DESC
NOTE: The 1500-page threshold is not something Microsoft recommends as a hard and fast rule, but it is a widely accepted figure. On some forums you will see people using a value of 1000 instead. The core point is that if an index's page_count is quite small, you should not rebuild or reorganize it, because such an index is not actually going to cause any performance issue.
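If you do decide to act on the large, fragmented indexes, a common pattern (an editor's sketch, not part of the original answer) is to REORGANIZE moderately fragmented indexes and REBUILD heavily fragmented ones, using the often-quoted 5%/30% thresholds together with the page_count floor above. It assumes you run it while connected to the ODS database; adjust the thresholds to suit your environment.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id))
    + N'.' + QUOTENAME(OBJECT_NAME(ips.object_id))
    + CASE WHEN ips.avg_fragmentation_in_percent >= 30
           THEN N' REBUILD;'            -- heavy fragmentation: rebuild
           ELSE N' REORGANIZE;'         -- moderate fragmentation: reorganize
      END + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
INNER JOIN sys.indexes i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count > 1500                          -- same floor as above
  AND ips.avg_fragmentation_in_percent >= 5
  AND ips.alloc_unit_type_desc = 'IN_ROW_DATA'
  AND i.name IS NOT NULL;                            -- skip heaps

PRINT @sql;                       -- review the generated statements first
-- EXEC sys.sp_executesql @sql;   -- then execute once you are happy with them
For anything production-sized, a maintained solution such as Ola Hallengren's IndexOptimize already implements this kind of logic with far more safeguards.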
answered Sep 23 '15 at 17:10 by Shanky (edited)
Yes, that's what it looks like, unless there is some error happening with the data after it got moved to Excel.
However, these are all tiny, tiny tables. Stop caring about fragmentation on tables with less than, say, 1,000 pages.* And even then you probably shouldn't care too much until another order or two of magnitude, and even less if you are using SSD and/or your database fits into memory even with the fragmentation.
The work that you'll spend reorganizing them will not have the impact you expect, and the benefit from any changes you actually see, temporarily, will not justify it at all. I would simply filter such small tables out of your queries right from the start.
* 1,000 pages is just my ballpark, pull-a-number-out-of-your-you-know-what suggestion. There is no magic number for this. But seriously, for small tables, focus your efforts elsewhere.
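To illustrate the "filter such small tables out of your queries right from the start" suggestion (an editor's sketch, not Aaron's code), you can push the cutoff into the DMV call itself and, since each page is 8 KB, report an approximate size alongside. Using the 'LIMITED' mode instead of 'DETAILED' also keeps the scan far cheaper on a ~5TB database; the 1,000-page cutoff is just the ballpark from this answer.
DECLARE @DatabaseId Int = DB_ID('ODS');

SELECT
    OBJECT_NAME(T.object_id)        AS TableName,
    T2.name                         AS IndexName,
    T.avg_fragmentation_in_percent  AS AverageFragmentationPercent,
    T.page_count                    AS PageCount,
    T.page_count * 8 / 1024         AS ApproxSizeMB    -- 8 KB per page
FROM sys.dm_db_index_physical_stats(@DatabaseId, NULL, NULL, NULL, 'LIMITED') T
INNER JOIN sys.indexes T2
    ON T2.object_id = T.object_id AND T2.index_id = T.index_id
WHERE T.page_count >= 1000          -- ignore tiny indexes from the start
ORDER BY T.avg_fragmentation_in_percent DESC;
As with the original query, OBJECT_NAME resolves in the current database, so run this while connected to ODS.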
answered Sep 23 '15 at 17:08 by Aaron Bertrand♦