Missing step in backup (DR) plan


























Unfortunately, as with most folks in this role, I assume, the title of DBA came to me by default rather than by design.



I am trying to manage our farm of five SQL Server 2016 servers, each with a warm standby for disaster recovery.



We have surveyed our user base and come to the decision that some downtime is acceptable as long as we minimize data loss. We do not need point-in-time recovery, only assurance that we have the most current data possible given the users' tolerance for data loss.



What we have decided on for our model is this: each database on each primary is backed up and restored on the appropriate standby (with the database left in standby/read-only mode), and a log shipping job then ships the log to the standby server every 30 seconds to 1 hour, depending on the data-loss sensitivity. On each standby server, once every 15 minutes to 1 hour, a restore job restores all the transactions shipped from the primary into the standby database. Once the transactions have been shipped to the standby and restored successfully, I no longer care about those transactions.
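For reference, the restore step that the standby-side job performs is roughly equivalent to the following (a sketch only; the database name and file paths are hypothetical, and the built-in log shipping jobs handle this for you):

```sql
-- Apply a shipped log backup on the standby while leaving the
-- database readable (Standby/Read Only mode). The undo (.tuf) file
-- lets SQL Server roll back uncommitted transactions for reads and
-- roll them forward again on the next restore.
RESTORE LOG [YourDatabase]                          -- hypothetical name
FROM DISK = N'X:\LogShip\YourDatabase_0001.trn'     -- hypothetical path
WITH STANDBY = N'X:\LogShip\YourDatabase.tuf';
```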



This works great as I can query the stand by databases and see the updated data as expected.



Where I am lacking information is here: my transaction log is growing larger than my data file on the primary. I assumed that log shipping would mark the log as backed up, but I am apparently wrong.



I am hesitant to create a job that takes full backups, as I do not want to break the log shipping chain, so this is where I need some advice on how to control my log sizes.



I have queried the internet for answers, but I get about as many different answers as there are results, so I am turning here for advice.



Thanks in advance,
Keith
































  • Transaction log backups should be clearing the log unless they are marked as COPY_ONLY. Are you using built-in log shipping or rolling your own?
    – Randolph West
    8 hours ago










  • @RandolphWest I'm using the built-in Log Shipping wizard. I don't see anywhere in the wizard setup where I can choose between copy-only or not.
    – Keith Clark
    8 hours ago










  • One of two things might be happening: a long-running transaction is not yet committed, so it's backing up all the changes and that could easily grow massive; or your log frequency isn't high enough. That said, if you're running every 30 seconds, the second option seems unlikely.
    – Randolph West
    8 hours ago










  • @RandolphWest ran a log_reuse_wait_desc and there is nothing holding the log =(
    – Keith Clark
    7 hours ago
















sql-server-2016 transaction-log
















asked 9 hours ago









Keith Clark






















2 Answers





















I am hesitant to create a job that takes full backups, as I do not want to break the log shipping chain, so this is where I need some advice on how to control my log sizes.




This is a misconception/myth. A full backup will never break the log backup chain. Only an ad-hoc log backup taken without COPY_ONLY will break the log backup chain.



In the full recovery model, only a transaction log backup will truncate (clear) the log; a full backup does not.
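You already ran this, but for completeness, a quick way to see what (if anything) is holding up log truncation (the database name is a placeholder):

```sql
-- NOTHING or LOG_BACKUP are the usual values; anything else
-- (ACTIVE_TRANSACTION, REPLICATION, ...) points at the culprit.
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';   -- placeholder
```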




ran a log_reuse_wait_desc and there is nothing holding the log




Check your autogrowth settings. Make sure the log file is not set to percent-based autogrowth and that the fixed-MB growth increment is sensible.
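One way to review (and, if needed, correct) the growth settings, sketched with a hypothetical database and logical file name:

```sql
-- List size and growth settings for every log file on the instance.
SELECT DB_NAME(database_id)       AS database_name,
       name                       AS logical_name,
       size * 8 / 1024            AS size_mb,        -- size is in 8 KB pages
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END                        AS growth_setting
FROM sys.master_files
WHERE type = 1;  -- 1 = log file

-- Switch a log file to a fixed-MB increment (names are placeholders).
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 512MB);
```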



I have a script that checks this across all databases.



Also, you should read: Why Does the Transaction Log Keep Growing or Run Out of Space?






answered 6 hours ago by Kin












































Where I am lacking information is here: my transaction log is growing larger than my data file on the primary. I assumed that log shipping would mark the log as backed up, but I am apparently wrong.




Log shipping utilises transaction log backups on the SQL Server. A T-log backup marks the portions of the log that contain committed transactions as inactive and available for re-use once the backup completes. If your backup frequency is as low as once an hour, then your transaction log file could conceivably grow large due to the low frequency of backups and a high volume of transactions. Does it grow constantly, or only at certain times (weekends, overnight, etc.)? Regardless of the cause, increasing the frequency of backups will likely fix this.
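To confirm how often log backups are actually completing (and how large they are), msdb's backup history can help; a sketch, with a placeholder database name:

```sql
-- Most recent transaction log backups for one database.
SELECT TOP (20)
       bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       bs.backup_size / 1048576 AS backup_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'L'                       -- 'L' = transaction log backup
  AND bs.database_name = N'YourDatabase'  -- placeholder
ORDER BY bs.backup_start_date DESC;
```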




I am hesitant to create a job that takes full backups, as I do not want to break the log shipping chain, so this is where I need some advice on how to control my log sizes.




Full backups will not break your log chain. You should still run regular full backups even when using log shipping; the key is to avoid ad-hoc log backups, as those will break the log chain.
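If you ever do need an out-of-band log backup (say, to give a copy to a developer), COPY_ONLY keeps the log shipping chain intact; a sketch with placeholder names:

```sql
-- A copy-only log backup does not truncate the log and does not
-- disturb the log backup sequence that log shipping depends on.
BACKUP LOG [YourDatabase]                   -- placeholder name
TO DISK = N'X:\AdHoc\YourDatabase_log.trn'  -- placeholder path
WITH COPY_ONLY, INIT;
```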



There is a question on Stack Exchange whose answers provide very good information on restore chains in SQL Server; have a read through to get a better understanding of how they work.



You also need to consider scenarios in your DR plan beyond your primary server going offline. Not taking full backups means that if you have a user-initiated bad-data event (someone accidentally drops a table, deletes records, etc.), your options for recovering the correct data state will be limited by the retention of your original full backup.



What happens if that full backup file becomes corrupt?



Are you prepared to restore potentially months' worth of log files to reinitialise log shipping if need be?



Do you retain your log backups long enough to reinitialise log shipping from the original full backup, or will you take a fresh full backup, potentially during business hours, should you need to reinitialise?



Given that some of your databases have such low RPOs (30 seconds), have you considered other technology, such as Availability Groups, to provide a much lower RPO/RTO without the overhead of shipping log backups?






answered 1 hour ago by HandyD



























































































































