How to move an 80GB MySQL database with the least effort and the least time offline?












I'm a software developer with four years of experience, but brand new at my company, and my first job is to move an 80GB MySQL database from a very small, very old database server to a brand-new one.
The problem: it is the always-in-use working database for nearly everything in my company... I can't reduce the data to fewer GB, because all of it is needed.
So: how do I do this with the least time offline, so that around 500 people can get back to work quickly?



Information requested for help:



SELECT COUNT(*) AS '# TABLES',
       CONCAT(ROUND(SUM(data_length) / (1024 * 1024 * 1024), 2), 'G') AS DATA,
       CONCAT(ROUND(SUM(index_length) / (1024 * 1024 * 1024), 2), 'G') AS INDEXES,
       CONCAT(SUM(ROUND((data_length + index_length) / (1024 * 1024 * 1024), 2)), 'G') AS TOTAL,
       engine AS ENGINE
FROM information_schema.TABLES
GROUP BY engine;


Result:



# TABLES  DATA    INDEXES  TOTAL   ENGINE
3         NULL    NULL     NULL    NULL
13        1.36G   1.70G    3.05G   InnoDB
13        0.00G   0.00G    0.00G   MEMORY
1159      44.04G  34.89G   78.90G  MyISAM









  • Are you using mainly InnoDB?

    – jynus
    Apr 20 '15 at 10:52











  • I have no idea; I've never worked with MySQL before. I'm a Windows developer with MS SQL Server, so this is a very hard job for me :) Where do I find this information in phpMyAdmin? I will look it up.

    – Kovu
    Apr 20 '15 at 10:53











  • Okay, found the information :) It's all MyISAM

    – Kovu
    Apr 20 '15 at 10:55











  • Execute this and tell us the results: SELECT COUNT(*) as '# TABLES', CONCAT(ROUND(sum(data_length) / ( 1024 * 1024 * 1024 ), 2), 'G') as DATA, CONCAT(ROUND(sum(index_length) / ( 1024 * 1024 * 1024 ), 2), 'G') as INDEXES, CONCAT(sum(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2)), 'G') as TOTAL, engine as ENGINE FROM information_schema.TABLES GROUP BY engine;

    – jynus
    Apr 20 '15 at 10:55











  • Done, thanks very much

    – Kovu
    Apr 20 '15 at 11:13
















mysql






edited Apr 20 '15 at 11:14 by jynus










asked Apr 20 '15 at 10:36 by Kovu
1 Answer
The main problem with MyISAM is that you cannot create a copy of the database without locking the tables (otherwise the copy may be inconsistent). I would also recommend researching a migration to InnoDB.



You will have to execute FLUSH TABLES WITH READ LOCK to lock all tables in read-only mode, then copy all the .MYD, .MYI and .frm files from the filesystem (basically, your entire datadir). Once you have copied those files, get your binary log coordinates with SHOW MASTER STATUS so you can set up replication. If you cannot set up replication, you will have to leave the old server in read_only mode while you set up the new one; otherwise, run UNLOCK TABLES;. Older versions of MySQL shipped the mysqlhotcopy script, which automated some of these tasks for MyISAM.
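As a rough sketch of that sequence (the datadir path and host name below are placeholders, not from the original answer):

```sql
-- Session 1: the read lock lasts only as long as this connection stays open,
-- so keep this client session connected until the copy is done.
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;  -- note the File and Position values for replication later

-- While session 1 stays connected, copy the whole datadir from a shell,
-- e.g.:  rsync -av /var/lib/mysql/ newserver:/var/lib/mysql/
-- (/var/lib/mysql and "newserver" are placeholders for your actual setup.)

-- Session 1, after the copy has finished:
UNLOCK TABLES;
```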



EDIT: As you have some InnoDB tables as well, you must be sure to copy the whole data directory (including the ib_logfile* and ibdata* files), and not only individual table files; otherwise the copy won't succeed.



Set up the new server by copying the files into the right place, start it, and then set up replication between the old server and the new one. Once you have double-checked that the setup is correct and the key_buffer/buffer pool is warm, put the old server in read_only mode and fail your applications over to the new one.
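On the new server, the replication step might then look like this (the host, user, password, log file and position are placeholders to be filled in from your SHOW MASTER STATUS output, not real values):

```sql
CHANGE MASTER TO
    MASTER_HOST     = 'old-server',        -- placeholder host name
    MASTER_USER     = 'repl',              -- a replication user you create
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000123',  -- File from SHOW MASTER STATUS
    MASTER_LOG_POS  = 456789;              -- Position from SHOW MASTER STATUS
START SLAVE;
-- Check Seconds_Behind_Master here before failing applications over:
SHOW SLAVE STATUS\G
```

Once the replica has caught up, putting the old server in read_only mode is SET GLOBAL read_only = ON; run as a privileged user.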



The idea is to avoid mysqldump, which is slower for exporting and, above all, for importing, even though it is easier to use; it is also what phpMyAdmin's "export/import database" feature executes internally.



I know this is a high-level overview, but I recommend you start by researching the links I mentioned and MySQL replication, and ask additional specific questions if needed.






  • Thanks very much, your help is much appreciated! Can you tell me how long you think we will need to keep the tables locked for these 80GB, assuming it works on the first try?

    – Kovu
    Apr 20 '15 at 11:15











  • @Kovu Granting the lock should take seconds, unless you have long SELECTs running or lots of buffered content. Obviously I recommend doing all of this at the time of lowest load (e.g. at night). The copy process (while you hold the lock) will take as long as copying 80 GB of files from disk. I recommend trying the process on a test system first.

    – jynus
    Apr 20 '15 at 11:22













edited Apr 20 '15 at 11:24
answered Apr 20 '15 at 11:13 by jynus












