Cannot perform SELECT COUNT(*) on a table with over 1 billion rows
I have a table that contains over a billion rows. To get the total row count, I ran:

SELECT
    Total_Rows = SUM(st.row_count)
FROM
    sys.dm_db_partition_stats st
WHERE
    object_name(object_id) = 'audit_tb' AND (index_id < 2)

Output:

1055075195

I was asked to delete the records from 2014. As a test, I tried deleting the first three months of 2014, but it has been running for over an hour and is still executing. Any ideas? I suspect there are bad rows.

DELETE
FROM [audit_tb]
WHERE datetime >= '2014-01-01 00:00:00'
  AND datetime <= '2014-03-31 00:00:00' -- 3 months in 2014

The table also has a CLUSTERED INDEX.
sql-server t-sql delete

edited Jan 21 '17 at 18:13 by a_horse_with_no_name · asked Jan 20 '17 at 1:25 by Dennis M.
  • Can you add the table definition, including indexes? How many rows will be deleted by your query? You might need to delete in chunks. – AMtwo, Jan 20 '17 at 2:02

  • SQL Server? Version? Execution plan? Partitions? – David דודו Markovitz, Jan 20 '17 at 10:33

  • Evergreen: michaeljswart.com/2014/09/take-care-when-scripting-batches – Erik Darling, Jan 21 '17 at 18:38
2 Answers
Here is a script to delete in chunks. It deletes in batches of 4,500 records to stay below SQL Server's lock-escalation threshold, and it runs at most 100 loops per execution, so each run removes up to 450,000 rows.
If your database uses the FULL recovery model, you may want to back up the transaction log after each run to keep it from growing too large.



SET NOCOUNT ON;

DECLARE @continue INT
DECLARE @rowcount INT
DECLARE @loopCount INT
DECLARE @MaxLoops INT
DECLARE @TotalRows BIGINT
DECLARE @PurgeBeginDate DATETIME
DECLARE @PurgeEndDate DATETIME

SET @PurgeBeginDate = '2014-01-01 00:00:00'
SET @PurgeEndDate   = '2014-03-31 23:59:59'

SET @MaxLoops  = 100
SET @continue  = 1
SET @loopCount = 0

SELECT @TotalRows = (SELECT COUNT(*) FROM audit_tb (NOLOCK) WHERE datetime BETWEEN @PurgeBeginDate AND @PurgeEndDate)
PRINT 'Total Rows = ' + CAST(@TotalRows AS VARCHAR(20))
PRINT ''

WHILE @continue = 1
BEGIN
    SET @loopCount = @loopCount + 1
    PRINT 'Loop # ' + CAST(@loopCount AS VARCHAR(10))
    PRINT CONVERT(VARCHAR(20), GETDATE(), 120)

    BEGIN TRANSACTION
        DELETE TOP (4500) audit_tb WHERE datetime BETWEEN @PurgeBeginDate AND @PurgeEndDate
        SET @rowcount = @@rowcount
    COMMIT

    PRINT 'Rows Deleted: ' + CAST(@rowcount AS VARCHAR(10))
    PRINT CONVERT(VARCHAR(20), GETDATE(), 120)
    PRINT ''

    IF @rowcount = 0 OR @loopCount >= @MaxLoops
    BEGIN
        SET @continue = 0
    END
END

SELECT @TotalRows = (SELECT COUNT(*) FROM audit_tb (NOLOCK) WHERE datetime BETWEEN @PurgeBeginDate AND @PurgeEndDate)
PRINT 'Total Rows Remaining = ' + CAST(@TotalRows AS VARCHAR(20))
PRINT ''
GO


I have used this in a SQL Agent job, run repeatedly until the required number of rows was removed. I schedule the job to run every 15 minutes so that my transaction log backup job runs between executions of this job.
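For reference, a minimal sketch of such a between-run log backup; the database name and backup path are hypothetical placeholders, and this only matters under the FULL or BULK_LOGGED recovery model:

BACKUP LOG AuditDb                           -- hypothetical database name
TO DISK = N'D:\SQLBackups\AuditDb_log.trn'   -- hypothetical path
WITH INIT;                                   -- overwrite the file each run; use NOINIT to append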



I hope this helps.
– answered Jan 21 '17 at 18:05 by MillhouseD
  • Thank you MillhouseD and Paparazzi. I did a select count just referencing within a day and it was executing forever. Even if I delete in chunks, I'm not sure how long this would take. An application sits on top of this table; we will speak with the vendor this week and see what we can do. Can we move to a chat room? – Dennis M., Jan 24 '17 at 19:39

  • You are welcome @DennisM. You might have better results if you create a non-clustered index on the datetime column, and drop it after the deletes are complete. Sample syntax: CREATE NONCLUSTERED INDEX IX_audit_tb_datetime ON dbo.audit_tb (datetime ASC). I would be happy to move to a chat room. – MillhouseD, Jan 24 '17 at 20:06

  • Why are you performing a select count? It has nothing to do with the delete; DELETE TOP (X) does not need to perform a count. – paparazzo, Jan 27 '17 at 0:56

  • You are correct that it has nothing to do with the delete. It is for logging the before and after counts: if I have hundreds of millions of rows to delete, I like to see how many are left in the job history. – MillhouseD, Jan 27 '17 at 1:02
select 1  -- seeds @@rowcount so the loop body runs at least once
while (@@rowcount > 0)
begin
    DELETE top (100000)
    FROM [audit_tb]
    where datetime >= '2014-01-01 00:00:00'
    AND datetime < '2015-01-01 00:00:00'
end

You can adjust the 100000, but that is a good starting point.

– answered Jan 21 '17 at 19:15 by paparazzo
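If the table is under concurrent application load, a common variation on this loop (not part of the original answer) is to capture the batch row count in a variable and pause briefly between batches so other sessions can acquire locks; the 5-second delay is an arbitrary placeholder:

DECLARE @rc INT = 1;

WHILE (@rc > 0)
BEGIN
    DELETE TOP (100000)
    FROM [audit_tb]
    WHERE datetime >= '2014-01-01 00:00:00'
      AND datetime <  '2015-01-01 00:00:00';

    SET @rc = @@ROWCOUNT;      -- capture immediately, before another statement resets it
    WAITFOR DELAY '00:00:05';  -- breather between batches; tune to your workload
END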