Determine size of mysqldump prior to dumping












This question is a follow-up to this previous question.



Q: How can I determine or estimate the size of the SQL dump file prior to using something like mysqldump?



I have tried using the most upvoted answer, but it doesn't seem to be working in my case. I am actually trying to determine a specific database's size prior to dumping, so I am using this query:



SELECT
Data_BB / POWER(1024,1) Data_KB,
Data_BB / POWER(1024,2) Data_MB,
Data_BB / POWER(1024,3) Data_GB
FROM (SELECT SUM(data_length) Data_BB FROM information_schema.tables
WHERE table_schema ='db_name') A;


I am getting a size of ~4.5 MB, whereas my (uncompressed) dump takes up less than 2 MB. I have also tried excluding VIEWs from the estimate, like so:



SELECT
Data_BB / POWER(1024,1) Data_KB,
Data_BB / POWER(1024,2) Data_MB,
Data_BB / POWER(1024,3) Data_GB
FROM (SELECT SUM(data_length) Data_BB FROM information_schema.tables
WHERE table_schema ='db_name' AND engine IS NOT NULL) A;


but this made no difference. Am I doing something wrong here?



(Most of my tables are InnoDB, except for two that are MyISAM but contain almost no data.)










mysql mysqldump

edited Apr 13 '17 at 12:42 by Community
asked Feb 3 '17 at 15:15 by sergafts





























2 Answers
































Actually, the numbers you have indicate something specific: your table data are fragmented.



Why such fragmentation? InnoDB data and index pages are 16 KB, and row data will never completely fill every 16 KB page.
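
To see roughly how much allocated-but-unused space each table is carrying, a query along these lines should do it (data_free reports the unused bytes; 'db_name' is a placeholder for your schema):

SELECT table_name,
       ROUND(data_length / POWER(1024,2), 2) AS data_mb,
       ROUND(data_free   / POWER(1024,2), 2) AS free_mb
FROM information_schema.tables
WHERE table_schema = 'db_name'
  AND engine = 'InnoDB'
ORDER BY data_free DESC;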



I am sure that if you reload the dump into another MySQL instance, the data will take up less space, with no fragmentation.



What you could also do is shrink all the InnoDB tables. How?



To shrink mydb.mytable, run



OPTIMIZE TABLE mydb.mytable;

or, equivalently for InnoDB,

ALTER TABLE mydb.mytable ENGINE=InnoDB;


You could probably script it as follows:



# Build a script of ALTER TABLE ... ENGINE=InnoDB statements for every user InnoDB table
SQLSTMT="SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=InnoDB;') FROM information_schema.tables WHERE engine = 'InnoDB' AND table_schema NOT IN ('information_schema','mysql','performance_schema','sys')"
INNODB_CONV_SCRIPT=Shrunk_InnoDB.sql
echo "SET SQL_LOG_BIN = 0;" > ${INNODB_CONV_SCRIPT}
mysql -h... -u... -p... -ANe"${SQLSTMT}" >> ${INNODB_CONV_SCRIPT}
# Run the generated statements to rebuild (and shrink) each table
mysql -h... -u... -p... < ${INNODB_CONV_SCRIPT}


After running this, all your tables will be smaller.



To answer your actual question: no, you are not doing anything wrong.



Views take up no actual disk space except for the view definition itself.
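
You can check this in information_schema itself: a view is listed with a NULL engine and a NULL data_length, and since SUM() skips NULLs, filtering on engine IS NOT NULL cannot change the total. A quick look (again with 'db_name' as a placeholder):

SELECT table_name, table_type, engine, data_length
FROM information_schema.tables
WHERE table_schema = 'db_name'
  AND table_type = 'VIEW';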






answered Feb 3 '17 at 16:43 by RolandoMySQLDBA

• I tried dumping the database and importing it into another MySQL instance. The query returns the same size for both databases, so the issue remains. Also, I guess that by defragmenting (OPTIMIZE or ANALYZE) I would probably affect performance, right? – sergafts, Feb 6 '17 at 9:41



































Or, your data could be inefficiently stored.



If you have a BIGINT with the number 1 in it, it takes 8 bytes to store in the database, but only 2 bytes in the dump (including the comma): ,1



There are other situations where the dump will be bigger than the table: a TINYINT (1 byte) holding 127 (4 bytes in the dump), or strings that need quotes and a comma in the dump, etc. Plus there is the overhead of each statement, such as



          INSERT INTO `your_long_table_name` (`your_long_column_name`) VALUES (1);
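
If the information_schema estimate is not close enough for your purposes, one pragmatic option is to stream the dump through a byte counter instead of writing it to disk; this does all the dump work, but reports the exact size (user and db_name are placeholders):

mysqldump --single-transaction -u user -p db_name | wc -c

Piping through gzip -c before wc -c would give the compressed size instead.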





answered Feb 3 '17 at 18:17 by Rick James

• Hmm. I guess this could be it. We have multiple VARCHAR(255) fields which usually contain fewer than 64 characters. Probably this affects the size, if the above query takes this into consideration. – sergafts, Feb 6 '17 at 9:45











• That varchar takes 2 bytes (length) plus the actual number of bytes needed. Hardly any different than the 3 bytes in the dump for quotes and a comma. – Rick James, Feb 6 '17 at 19:44










