What can cause a rapid drop in RDS MySQL Database free storage space?
How could my MySQL database on Amazon RDS have recently gone from 10.5 GB free to "storage-full" status within about 1.5 hours?



It's a 15 GB MySQL 5.6.27 database running on a db.t2.micro instance. Normally only a few hundred kilobytes are added to it per day.



About a day ago the free storage space went from 10.5 GB to basically 0 GB in about 1.5 hours. The Write IOPS chart shows only my regular low-volume traffic during that time span, so apparently the data must have been generated server-side.
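For scale, that consumption rate is far beyond the normal few-hundred-KB-per-day workload; a quick back-of-the-envelope check (assuming the 10.5 GB / 1.5 hour figures above):

```python
# Back-of-the-envelope: how fast was storage being consumed during the incident?
gb_consumed = 10.5   # free space lost
minutes = 90         # ~1.5 hours

mb_per_minute = gb_consumed * 1024 / minutes
mb_per_second = mb_per_minute / 60
print(f"{mb_per_minute:.0f} MB/min, ~{mb_per_second:.1f} MB/s sustained")
```

That sustained rate is what makes client traffic implausible as the source and points to something generated server-side.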



One potentially relevant note: the database has about 7,000 tables, and innodb_file_per_table is set to 1.
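With innodb_file_per_table=1, each of those ~7,000 tables lives in its own .ibd file, and every tablespace carries some fixed page overhead even before any rows are stored. That can't explain a sudden 10 GB drop, but it can account for a chunk of "missing" space. A rough sketch, assuming roughly 96 KiB of minimum overhead per tablespace (the exact figure varies by MySQL version and row format):

```python
# Rough, hypothetical estimate of fixed .ibd overhead under innodb_file_per_table=1.
# Assumption: each per-table tablespace occupies at least ~6 pages of 16 KiB
# (file header, metadata, allocation bitmaps) before holding any row data.

MIN_IBD_BYTES = 6 * 16 * 1024  # ~96 KiB per tablespace; approximate

def tablespace_overhead_bytes(table_count: int) -> int:
    """Lower bound on disk consumed just by having this many per-table tablespaces."""
    return table_count * MIN_IBD_BYTES

overhead_mb = tablespace_overhead_bytes(7000) / (1024 * 1024)
print(f"~{overhead_mb:.0f} MB of fixed overhead for 7,000 tables")
```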



A similar incident apparently happened 8 days ago, but was not as severe and I didn't even notice it because it didn't fill up the storage space.



[Screenshot: the incident 8 days ago plus the storage-filling incident a day ago]



[Screenshot: a detailed view of the storage-filling incident]



I'm not a database expert and this is a hobby project of mine, so I'm struggling a bit to figure out even how to start troubleshooting this!



EDIT 1



I'm starting to look at the answer provided by @RolandoMySQLDBA, and I realize I left out some very useful details.



The only systems that write to the database are two EC2 instances which write to it every 30 minutes, which corresponds to the storage reduction seen in the graph.



Both of these systems collect the same data from the web, and both attempt to write that data to my database at the same time, on the half-hour. I use two data collection systems simply for redundancy. My write routines use INSERT IGNORE INTO, so whichever system writes a given piece of data first wins, and the second system's insert attempt is simply ignored.



During the write that occurs every 30 minutes, one row is inserted into each of the thousands of tables in the database, except one table. Nothing is inserted into that table, but each of its (roughly) 2000 rows gets updated, one at a time.



EDIT 2



I restored an instance of the database from a point after it had about 2.5GB of data added (the event on 8/16 as seen in the first screenshot), so that I could run commands without hitting "storage full" errors.



With help from @RolandoMySQLDBA I was able to see how much InnoDB and MyISAM data was in use (How to monitor MySQL space?). Here's the output:



rudy      InnoDB  761.72 MB  0.00 B  761.72 MB
rudy      Total   761.72 MB  0.00 B  761.72 MB
sys       InnoDB   16.00 KB  0.00 B   16.00 KB
sys       Total    16.00 KB  0.00 B   16.00 KB
Database  Total   761.73 MB  0.00 B  761.73 MB


I also ran the following command to check the Data_length of every table in my database:



show table status from rudy;


I exported the output of that command to a CSV file, imported it into a spreadsheet, then summed all the data lengths; the total was 798,720,000 bytes.
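Instead of a spreadsheet, the same sum can be done directly from the exported CSV. Including Index_length and Data_free alongside Data_length also gives a fuller picture, since space allocated to a table but not currently holding rows still occupies its .ibd file on disk. A sketch with made-up sample rows (column names follow SHOW TABLE STATUS):

```python
import csv
import io

# Hypothetical two-row sample of a SHOW TABLE STATUS export; a real run
# would read the exported CSV file instead of this string.
sample = """Name,Data_length,Index_length,Data_free
t1,114688,16384,0
t2,131072,0,4194304
"""

def total_bytes(csv_text: str) -> int:
    """Sum data, index, and allocated-but-free space across all tables."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(
        int(row["Data_length"]) + int(row["Index_length"]) + int(row["Data_free"])
        for row in reader
    )

print(total_bytes(sample))
```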



So at this point I'm confused. If there's about 798MB in the tables, and about 761MB in the whole database according to your command's output, what else is there that could be taking up about 4.5GB (15GB instance, ~10.5GB storage free)?



Is there some other way I can see what else could be taking up space on my RDS instance?



EDIT 3



I simplified my test scenario by using only one system to write to the database and by removing all update statements, so now all my code does on the database is essentially this (using Python 3 with PyMySQL):



query = "INSERT IGNORE INTO {tn} (Timestamp, Price, Flags, Sales, Total) VALUES(%s,%s,%s,%s,%s)".format(tn=table_name)
self.cursor.execute(query, (timestamp, price, flags, sales, total))
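One caveat with this pattern: PyMySQL can bind values (the %s placeholders) but not identifiers, so the table name has to be interpolated with .format. If table_name ever comes from untrusted input, it's worth guarding that interpolation; a hypothetical validator:

```python
import re

# Table names cannot be bound as query parameters, so they are interpolated
# into the SQL string. This hypothetical guard only admits names made of
# letters, digits, and underscores, which blocks injection via the identifier.
VALID_TABLE = re.compile(r"^[A-Za-z0-9_]+$")

def build_insert(table_name: str) -> str:
    """Build the INSERT IGNORE statement after validating the table name."""
    if not VALID_TABLE.match(table_name):
        raise ValueError(f"suspicious table name: {table_name!r}")
    return (
        "INSERT IGNORE INTO {tn} (Timestamp, Price, Flags, Sales, Total) "
        "VALUES(%s,%s,%s,%s,%s)"
    ).format(tn=table_name)
```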


And here is the DDL for the tables I'm inserting into:



query = "CREATE TABLE IF NOT EXISTS {tn} (Timestamp INT PRIMARY KEY, Price BIGINT, Flags INT, Sales INT, Total INT)".format(tn=table_name)
self.cursor.execute(query)


In my simplified code I'm only inserting into around 2000 tables of this type, and each table has between 1,000 and 11,000 rows.



I can reproduce the issue consistently with the above test setup.



Aurora Works Fine!



I also tried migrating a snapshot to Aurora and running the test scenario there, and the issue does not occur! I'd like to stick with RDS MySQL since it's less expensive, but if nobody can help me solve this, I may just move to Aurora permanently.










  • I have five(5) questions : 1) Is your general log or slow log enabled ??? 2) Are you using Aurora ??? 3) You have Snapshots enabled without a specific preference of time ??? 4) Do you perform mass updates ??? 5) Do you have large update transactions ???

    – RolandoMySQLDBA
    Aug 24 '17 at 21:45













  • @RolandoMySQLDBA 1) How do I check for this on an RDS instance? 2) Not using Aurora. 3) Backup retention period is 10 days if that's what you're asking. 4) I don't perform mass updates, just add/update a few hundred KB each day. 5) Small update transactions I think, not sure what is considered large though...

    – user5071535
    Aug 24 '17 at 21:52













  • Please run SELECT @@global.general_log; SELECT @@global.slow_log;. Are either one of them on ???

    – RolandoMySQLDBA
    Aug 24 '17 at 21:54











  • Another question: Run call mysql.rds_show_configuration; Is your binlog retention NULL or a number ???

    – RolandoMySQLDBA
    Aug 24 '17 at 21:56











  • SELECT @@global.general_log LIMIT 0, 20000 1 row(s) returned 0.096 sec / 0.000 sec SELECT @@global.slow_log LIMIT 0, 20000 Error Code: 1193. Unknown system variable 'slow_log' 0.094 sec - Also, binlog retention hours is NULL

    – user5071535
    Aug 24 '17 at 21:56


















mysql innodb amazon-rds






asked Aug 24 '17 at 21:05 by user5071535
edited Aug 31 '17 at 4:31 by user5071535
3 Answers
Here are the folders you are writing to on a MySQL RDS server:



mysql> select * from information_schema.global_variables where variable_name in
    -> ('innodb_log_group_home_dir','innodb_data_home_dir','innodb_data_file_path');
+---------------------------+------------------------+
| VARIABLE_NAME             | VARIABLE_VALUE         |
+---------------------------+------------------------+
| INNODB_LOG_GROUP_HOME_DIR | /rdsdbdata/log/innodb  |
| INNODB_DATA_FILE_PATH     | ibdata1:12M:autoextend |
| INNODB_DATA_HOME_DIR      | /rdsdbdata/db/innodb   |
+---------------------------+------------------------+
3 rows in set (0.00 sec)


Your ibdata1 file lives in /rdsdbdata/db/innodb and your redo logs live in /rdsdbdata/log/innodb.



What worries me is your ibdata1 file. Since innodb_file_per_table is enabled, and assuming you have no MyISAM tables, the only thing that could cause this growth is MVCC: lots of selects and writes can cause InnoDB to create large amounts of rollback information, and that information can stretch the ibdata1 file. I have discussed this over the years:





  • Apr 23, 2013 : How can Innodb ibdata1 file grows by 5X even with innodb_file_per_table set?


  • Mar 31, 2014 : mysql directory grow to 246G after one query, which failed due to table is full


  • Jun 04, 2014 : Can I move the undo log outside of ibdata1 in MySQL 5.6 on an existing server?


  • Jun 16, 2014 : MySQL Index creation failing on table is full


  • Aug 21, 2015 : Transactional DDL workflow for MySQL
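One way to watch this MVCC/rollback pressure is the "History list length" line in SHOW ENGINE INNODB STATUS, which counts undo log records not yet purged. A sketch that extracts it (the status text here is a fabricated sample, not real output from this instance):

```python
import re

# Fabricated, abridged sample of SHOW ENGINE INNODB STATUS output.
status = """
------------
TRANSACTIONS
------------
Trx id counter 513702
Purge done for trx's n:o < 513690 undo n:o < 0
History list length 184523
"""

def history_list_length(status_text: str) -> int:
    """Extract the unpurged-undo counter; 0 if the line is absent."""
    match = re.search(r"History list length (\d+)", status_text)
    return int(match.group(1)) if match else 0

print(history_list_length(status))
```

A steadily climbing value during the half-hourly writes would support the rollback-growth theory.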


You could run OPTIMIZE TABLE against all your InnoDB tables to get some shrinkage. See my five-year-old post Why does InnoDB store all databases in one file? for ideas on how to shrink your tables.
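A sketch of generating those statements for a list of table names (e.g. pulled from information_schema.tables); note that on InnoDB, OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE, which rebuilds the table and its tablespace:

```python
# Hypothetical helper: emit one OPTIMIZE TABLE statement per table name.
# The backticks guard against table names that collide with reserved words.

def optimize_statements(tables):
    """Return OPTIMIZE TABLE statements for each table in `tables`."""
    return [f"OPTIMIZE TABLE `{table}`;" for table in tables]

for stmt in optimize_statements(["prices_01", "prices_02"]):
    print(stmt)
```

With ~7,000 tables you would feed these to the server in batches rather than all at once, since each rebuild needs temporary disk space.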



Unfortunately, you cannot do that in your present state. See this YouTube video. As for not being able to list your databases, please note this:



mysql> show global variables like 'tmpdir';
+---------------+----------------+
| Variable_name | Value          |
+---------------+----------------+
| tmpdir        | /rdsdbdata/tmp |
+---------------+----------------+
1 row in set (0.00 sec)


Metacommands like SHOW create temp tables. The whole disk is just full.



BAD NEWS



Creating a read replica will not shrink anything. RDS will just take a snapshot and setup replication.



Doing the ALTER TABLE trick will shrink tables, not ibdata1.



Spinning up a new RDS instance and loading from scratch will start with a fresh ibdata1.



UPDATE 2017-08-25 12:21 EDT



Looking back at your graphs, I can see that you are sending in too much data every 30 minutes. Try updating 500 rows at a time instead of 2,000. Please keep in mind that heavy updates are just as bad as heavy inserts in terms of ibdata1 growth.
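The batching idea can be sketched as a simple chunking helper; the per-row UPDATE and COMMIT calls are placeholders, since the actual statements depend on your schema:

```python
import time

def chunked(seq, size):
    """Split seq into consecutive batches of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def update_in_batches(rows, batch_size=500, delay_s=0.0):
    """Apply updates in small transactions, with an optional pause between them.

    Returns the number of batches, so callers can verify the split.
    """
    batches = list(chunked(rows, batch_size))
    for batch in batches:
        # hypothetical: run one UPDATE per row in `batch`, then COMMIT
        time.sleep(delay_s)
    return len(batches)
```

Smaller transactions mean less undo data held at once, which is exactly the pressure on ibdata1 described above.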






  • As I'm a real novice, I'll have to go through all of this over some time. Quick question for now though, if I'd disabled innodb_file_per_table from the start would this issue have been avoided?

    – user5071535
    Aug 24 '17 at 22:39











  • Nope, that will make this problem hundreds of times worse

    – RolandoMySQLDBA
    Aug 24 '17 at 22:42











  • I edited my question with relevant details regarding some hints you gave in your answer

    – user5071535
    Aug 25 '17 at 1:24











  • If I read your update correctly, the problem is that I'm hammering the DB with too many updates and/or inserts in a tight time span? So if I add a delay between operations it should avoid the problem?

    – user5071535
    Aug 25 '17 at 17:05











  • Adding a delay may help. Updating fewer rows at a time would also help. Try a combination of both, perhaps updating 100 rows per operation along with a delay.

    – RolandoMySQLDBA
    Aug 25 '17 at 17:07





















Study your queries. Perhaps you have a cross join (a JOIN without saying how the tables are related) that generated a huge intermediate table. Can you discover the size of ibdata1? Did it become huge? If not, which other file did?






  • I use no JOIN statements at all in any of my queries. Not sure how to check the file size of ibdata1 since I'm using RDS, but I can try to see if that's possible to check

    – user5071535
    Aug 25 '17 at 15:59











  • Beat on Amazon to help you find out what file is growing so suddenly. That will be an important clue.

    – Rick James
    Aug 25 '17 at 16:04



















A similar thing happened to us. It turned out to be a funky execution plan causing a SELECT query to generate a massive intermediate table for a merge sort, which got written to disk... and doesn't get cleaned up until the server restarts.



If that's what happened to you, that's probably why your Aurora replica was fine - any replica would've been fine as it would've deleted the temporary table on starting :)



If you don't mind a minute of downtime, I recommend turning it off and on again.






– Alex
    3 Answers
    2














    Here are the directories you are writing to on a MySQL RDS server:



    mysql> select * from information_schema.global_variables where variable_name in
        -> ('innodb_log_group_home_dir','innodb_data_home_dir','innodb_data_file_path');
    +---------------------------+------------------------+
    | VARIABLE_NAME             | VARIABLE_VALUE         |
    +---------------------------+------------------------+
    | INNODB_LOG_GROUP_HOME_DIR | /rdsdbdata/log/innodb  |
    | INNODB_DATA_FILE_PATH     | ibdata1:12M:autoextend |
    | INNODB_DATA_HOME_DIR      | /rdsdbdata/db/innodb   |
    +---------------------------+------------------------+
    3 rows in set (0.00 sec)


    Your ibdata1 file lives in /rdsdbdata/db/innodb and your redo logs live in /rdsdbdata/log/innodb.



    What worries me is your ibdata1 file. Since innodb_file_per_table is enabled and assuming you have no MyISAM tables, the only thing that could cause growth is MVCC. Lots of selects and writes can cause InnoDB to create lots of rollback information, and that information can stretch the ibdata1 file. I have discussed this over the years:





    • Apr 23, 2013 : How can Innodb ibdata1 file grows by 5X even with innodb_file_per_table set?


    • Mar 31, 2014 : mysql directory grow to 246G after one query, which failed due to table is full


    • Jun 04, 2014 : Can I move the undo log outside of ibdata1 in MySQL 5.6 on an existing server?


    • Jun 16, 2014 : MySQL Index creation failing on table is full


    • Aug 21, 2015 : Transactional DDL workflow for MySQL


    You could run OPTIMIZE TABLE against all your InnoDB tables to reclaim some space. See my five-year-old post Why does InnoDB store all databases in one file? for ideas on how to shrink your tables.
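With roughly 7,000 tables on the instance, typing OPTIMIZE TABLE by hand is impractical, so scripting the statements helps. A minimal sketch in Python; the schema name `appdb` and the hard-coded table list are illustrative assumptions (in practice the list would come from a query against information_schema.tables filtered on engine = 'InnoDB'), and only the statement-building is shown, not the driver call that would run them:

```python
# Sketch: generate one OPTIMIZE TABLE statement per InnoDB table.
# The table list is hard-coded for illustration; on a real server it
# would be fetched from information_schema.tables.

def optimize_statements(tables, schema="appdb"):
    """Return backtick-quoted OPTIMIZE TABLE statements for each table."""
    return [
        "OPTIMIZE TABLE `%s`.`%s`;" % (schema, table)
        for table in tables
    ]

for stmt in optimize_statements(["events", "readings"]):
    print(stmt)
```

Feeding the generated statements to the server one at a time (rather than all at once) keeps the extra temporary-space pressure bounded while each table is rebuilt.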



    Unfortunately, you cannot do that in your present state. See this YouTube video. As for your not being able to list your databases, note this:



    mysql> show global variables like 'tmpdir';
    +---------------+----------------+
    | Variable_name | Value          |
    +---------------+----------------+
    | tmpdir        | /rdsdbdata/tmp |
    +---------------+----------------+
    1 row in set (0.00 sec)


    Metacommands like SHOW create temporary tables, and the whole disk is simply full.



    BAD NEWS



    Creating a read replica will not shrink anything. RDS will just take a snapshot and setup replication.



    Doing the ALTER TABLE trick will shrink tables, not ibdata1.



    Spinning up a new RDS instance and loading from scratch will start with a fresh ibdata1.



    UPDATE 2017-08-25 12:21 EDT



    Looking back at your graphs, I can see that you are sending in too much data every 30 minutes. Try updating 500 rows at a time instead of 2,000. Keep in mind that heavy updates are just as bad as heavy inserts in terms of ibdata1 growth.
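The advice above (push fewer rows per statement, with a pause between statements) can be sketched as a small batching helper. This is illustrative only: `execute_batch` is a stand-in for whatever client call performs the OP's INSERT IGNORE, not a real API, so the sketch shows just the chunking and the delay:

```python
import time

def batches(rows, size=500):
    """Split rows into chunks of at most `size` (500 instead of 2,000,
    as suggested, to limit per-transaction undo-log growth)."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def write_in_batches(rows, execute_batch, size=500, delay=1.0):
    """Run execute_batch on each chunk, sleeping between chunks so the
    half-hourly write burst is spread out instead of hitting the
    database all at once. execute_batch is a hypothetical callback."""
    for chunk in batches(rows, size):
        execute_batch(chunk)
        time.sleep(delay)
```

Committing after each chunk, rather than in one large transaction, is what actually keeps the rollback information in ibdata1 from ballooning.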






    answered Aug 24 '17 at 22:33, edited Aug 25 '17 at 16:21 – RolandoMySQLDBA
    • As I'm a real novice, I'll have to go through all of this over some time. Quick question for now, though: if I'd disabled innodb_file_per_table from the start, would this issue have been avoided?

      – user5071535
      Aug 24 '17 at 22:39

    • Nope, that would make this problem hundreds of times worse.

      – RolandoMySQLDBA
      Aug 24 '17 at 22:42

    • I edited my question with relevant details regarding some hints you gave in your answer.

      – user5071535
      Aug 25 '17 at 1:24

    • If I read your update correctly, the problem is that I'm hammering the DB with too many updates and/or inserts in a tight time span? So if I add a delay between operations, it should avoid the problem?

      – user5071535
      Aug 25 '17 at 17:05

    • Adding a delay may help. Updating fewer rows at a time would also help. Try a combination of both, perhaps updating 100 rows per operation along with a delay.

      – RolandoMySQLDBA
      Aug 25 '17 at 17:07


















    0














    Study your queries. Perhaps you have a "cross join" (a JOIN without saying how the tables are related) and this generated a huge intermediate table. Can you discover the size of ibdata1? Did it become huge? If not, what other file did?
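Since RDS gives no filesystem access, you cannot stat ibdata1 directly, but you can total up what the tables themselves account for and compare that against CloudWatch's FreeStorageSpace; a large gap hints at growth in ibdata1, logs, or temp files. A hedged sketch of such a query (the data_length/index_length/data_free figures in information_schema are estimates, not exact file sizes):

```python
# Sketch: a query that sums apparent table sizes per schema.
# Comparing its total against the instance's consumed storage
# (allocated storage minus FreeStorageSpace) exposes overhead
# that lives outside the per-table files, such as ibdata1.

SIZE_QUERY = """
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS mb_used,
       ROUND(SUM(data_free) / 1024 / 1024, 1) AS mb_free_inside
FROM information_schema.tables
GROUP BY table_schema
ORDER BY mb_used DESC
""".strip()

print(SIZE_QUERY)
```

Note that with ~7,000 file-per-table tablespaces this query itself can be slow on 5.6, so run it during a quiet period.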






    answered Aug 25 '17 at 15:56 – Rick James
    • I use no JOIN statements at all in any of my queries. I'm not sure how to check the file size of ibdata1 since I'm using RDS, but I'll try to see if that's possible to check.

      – user5071535
      Aug 25 '17 at 15:59

    • Beat on Amazon to help you find out which file is growing so suddenly. That will be an important clue.

      – Rick James
      Aug 25 '17 at 16:04
















    0














    A similar thing happened to us; it turned out to be a funky execution plan causing a SELECT query to generate a massive intermediate table for a merge sort, which got written to disk... and doesn't get cleaned up until the server restarts.

    If that's what happened to you, that's probably why your Aurora replica was fine - any replica would have been fine, as it would have deleted the temporary table on starting :)

    If you don't mind a minute's downtime, I recommend turning it off and on again.
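If you do opt for the restart, RDS exposes it as a reboot. A command-line sketch, assuming the AWS CLI is installed and configured; the instance identifier below is a placeholder, not from the question:

```shell
# Reboot the RDS instance; server-side temporary tables are dropped
# when MySQL restarts. "mydb-instance" is a placeholder identifier.
aws rds reboot-db-instance --db-instance-identifier mydb-instance
```

The same reboot is available from the RDS console via the instance's "Reboot" action if you prefer not to use the CLI.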






    answered 12 mins ago – Alex

    Alex is a new contributor to this site.
























