How to restore a pg_dumpall dump without CREATE INDEX?
I am attempting to migrate from Postgres 9.6 to 10.3, and during the restore each index is recreated one by one, which is a problem.

So far I have assumed pg_dumpall is a good option:

pg_dumpall -U postgres -h localhost -p 5432 --clean --file=dumpall_clean.sql

Once this is done the file is around 1.2 TB in size, and I can load it into the new 10.3 instance with

psql -U postgres -h localhost -p 5433 < dumpall_clean.sql

Simple.
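(As a side note, psql can be told to stop at the first error rather than plough on past failures; a variant of the load command above using psql's ON_ERROR_STOP variable, same connection details:)

psql -U postgres -h localhost -p 5433 -v ON_ERROR_STOP=1 -f dumpall_clean.sql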
Problem
As I have learned, the indices are not backed up the way tables are; they are simply recreated, and that is my problem.
The cluster has thousands of partitions, each with several million rows and two indices (one BTREE and one GIST). This takes days, since each index is created one by one.
As I have enough resources and I know which indices have to be created, I would prefer to do this step after the dump has been restored. Initially I made 8 FOR loops (to run in parallel) to go through the partitions, creating the indexes for each by moving the partition to a faster tablespace (SSD), creating the indexes there, then moving the table and the indexes back to the default tablespace. So far this has worked for me; a sketch of the per-partition routine is below.
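A minimal sketch of that per-partition routine, assuming hypothetical partition and column names and a tablespace named ssd (none of these names are real); the actual version runs eight such loops in parallel over thousands of partitions:

# hypothetical partition names; the real loops iterate over thousands of them
for p in part_0001 part_0002; do
  psql -U postgres -h localhost -p 5433 -v ON_ERROR_STOP=1 <<SQL
-- build the indexes on the fast tablespace ("ssd" is an assumed name)
ALTER TABLE $p SET TABLESPACE ssd;
CREATE INDEX ${p}_btree_idx ON $p USING btree (some_column) TABLESPACE ssd;  -- hypothetical column
CREATE INDEX ${p}_gist_idx  ON $p USING gist  (some_geom)   TABLESPACE ssd;  -- hypothetical column
-- move the table and both indexes back to the default tablespace
ALTER TABLE $p SET TABLESPACE pg_default;
ALTER INDEX ${p}_btree_idx SET TABLESPACE pg_default;
ALTER INDEX ${p}_gist_idx  SET TABLESPACE pg_default;
SQL
done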
Question
How can I have the same result* of a pg_dumpall dump without recreating the indices when loading the dumpall_clean.sql file?

A pg_dumpall --without-index option would be nice.
"This currently includes information about database users and groups, tablespaces, and properties such as access permissions that apply to databases as a whole." - pg_dumpall manual
postgresql index migration postgresql-9.6 postgresql-10

asked Apr 20 '18 at 5:20 – Michael (edited Jun 29 '18 at 11:46 – Colin 't Hart)
Are the two Postgres installations on the same server? If yes, you could use pg_upgrade with the --link option, which will massively increase the speed of the migration. – a_horse_with_no_name Apr 20 '18 at 5:59

@a_horse_with_no_name, two separate machines. – Michael Apr 20 '18 at 6:22
2 Answers
I can see one workaround for this, using pg_dumpall in two steps:

pg_dumpall --schema-only ...

Then edit the file and extract the index definitions into a second file. You also need to extract the foreign keys, because you have to run them manually after the import (probably together with the index creation script).

Then run that script (without the indexes) to create the (empty) tables. Then:

pg_dumpall --data-only ...

Then run that script to import the data into the new database. After that, run the FK and index creation scripts.

– a_horse_with_no_name, answered Apr 20 '18 at 5:58
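A sketch of how the split and the load order could look; the filenames are hypothetical, and the grep patterns assume the plain-text layout pg_dump usually emits (CREATE INDEX on one line, foreign keys as two-line ALTER TABLE ... ADD CONSTRAINT pairs), so verify them against your own dump:

# dump globals, schema, and data separately (filenames are hypothetical)
pg_dumpall -U postgres -h localhost -p 5432 --globals-only --file=globals.sql
pg_dumpall -U postgres -h localhost -p 5432 --schema-only  --file=schema.sql
pg_dumpall -U postgres -h localhost -p 5432 --data-only    --file=data.sql

# pull the index and FK definitions out of the schema dump
grep '^CREATE INDEX' schema.sql > indexes.sql
grep -B1 'FOREIGN KEY' schema.sql | grep -v '^--' > fkeys.sql   # -B1 keeps the ALTER TABLE line above each constraint
grep -v '^CREATE INDEX' schema.sql > schema_noindex.sql         # schema without the indexes
# (the FK ALTER TABLE statements would also need removing from schema_noindex.sql)

# restore in order: globals, schema, data, then FKs and indexes at your leisure
psql -U postgres -h localhost -p 5433 -f globals.sql
psql -U postgres -h localhost -p 5433 -f schema_noindex.sql
psql -U postgres -h localhost -p 5433 -f data.sql
psql -U postgres -h localhost -p 5433 -f fkeys.sql
psql -U postgres -h localhost -p 5433 -f indexes.sql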
Thanks! Couple of questions: as I also need the roles and tablespaces, should I do pg_dumpall --globals-only ... --file=globals.sql, pg_dumpall --schema-only ... --file=schema.sql, and then pg_dumpall --data-only ... --file=data.sql? Then I can edit the schema.sql as you suggested and run (in this order?) globals.sql, schema.sql, data.sql, and then the index creation the way I want it? I am also thinking of using grep to find the lines with CREATE INDEX, write them to a new file, and comment them out in the original *.sql dump using sed. – Michael Apr 20 '18 at 6:46
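A possible shape for that grep/sed idea (an untested sketch; note that sed -i rewrites the whole file, which matters at 1.2 TB):

grep '^CREATE INDEX' dumpall_clean.sql > indexes.sql            # save the statements for later
sed -i 's/^CREATE INDEX/-- CREATE INDEX/' dumpall_clean.sql     # comment them out in place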
@Michael: yes, you would need globals as well. I was just focusing on the basic idea of separating the generation of the DDL from the actual data. – a_horse_with_no_name Apr 20 '18 at 6:50
It should be possible to just filter them out using grep:

grep -v '^CREATE INDEX [^t]*;$' dump.sql | psql

or

pg_dumpall "source db connection string" |
grep -v '^CREATE INDEX [^t]*;$' |
psql "destination db connection string"

This should be safe unless you have matching lines inside stored code.

In your specific case:

grep -v '^CREATE INDEX [^t]*;$' dumpall_clean.sql | psql -U postgres -h localhost -p 5433

– Jasen, answered Apr 20 '18 at 6:49, edited Apr 20 '18 at 6:54
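A possible refinement (a sketch, reusing the answer's own pattern): keep the filtered-out statements so they can be replayed after the data load:

grep    '^CREATE INDEX [^t]*;$' dumpall_clean.sql > indexes.sql   # the statements grep -v drops
grep -v '^CREATE INDEX [^t]*;$' dumpall_clean.sql | psql -U postgres -h localhost -p 5433
psql -U postgres -h localhost -p 5433 -f indexes.sql              # run once the data is in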
Yes, something like this is what I was thinking about too. Since there are procedures that contain the CREATE INDEX statement, I just have to match those lines a bit better. – Michael Apr 20 '18 at 7:09
Perhaps you can find those procedures and modify the lines so that they start with a space or tab, or end with a comment or space, etc.; then they won't match that regex. – Jasen Apr 20 '18 at 23:50
There's probably an SQL update you could run to do that :) – Jasen Apr 20 '18 at 23:51
UPDATE pg_catalog.pg_proc AS p
SET prosrc = regexp_replace(prosrc, 'CREATE INDEX ', ' CREATE INDEX ', 'g')
WHERE p.prolang = (SELECT oid FROM pg_catalog.pg_language WHERE lanname = 'plpgsql');

– Jasen Apr 21 '18 at 0:08