create a backup table with million records



























I want to move old records from TABLEA to TABLEA_AUDIT. TABLEA has around 1.5 million records and two nested tables.



No. of records: 1557951
Size: 1024 MB



Here is what I have tried:



1. Using CREATE TABLE ... AS SELECT * FROM:



CREATE TABLE TABLEA_AUDIT
  COLUMN TABLEA_STAGE NOT SUBSTITUTABLE AT ALL LEVELS
  NESTED TABLE TABLEA_STAGES STORE AS AUDIT_TABLEA_STAGES,
  NESTED TABLE TABLEA_MODELS STORE AS AUDIT_TABLEA_MODELS
AS
SELECT *
FROM   TABLEA
WHERE  COLUMN1 IS NOT NULL
AND    TRUNC(UPDATED_DT) < '01-MAR-18';


Result: two hours of waiting and it still had not finished.



2. Tried CREATE TABLE ... AS SELECT with NOLOGGING, following the blog below:
http://www.dba-oracle.com/t_fast_copy_data_oracle_table.htm



Result: two hours of waiting and it still had not finished.



3. Wrote a procedure to copy the records and scheduled it as a DBMS_JOB:



EXECUTE IMMEDIATE 'CREATE TABLE TABLEA_AUDIT AS SELECT * FROM TABLEA';


Result: the job ran for more than two hours with no result.



4. Created the table up front and wrote a PL/SQL block to bulk-insert the records:



set serveroutput on size unlimited
set timing on
DECLARE
  TYPE audit_type IS TABLE OF TABLEA%ROWTYPE;
  v_type audit_type;

  CURSOR temp_cur IS
    SELECT *
    FROM   TABLEA a
    WHERE  COLUMN1 IS NOT NULL
    AND    TRUNC(UPDATED_DT) < '01-MAR-18';
BEGIN
  OPEN temp_cur;
  /* collect the data into the collection */
  FETCH temp_cur BULK COLLECT INTO v_type;
  /* close the cursor */
  CLOSE temp_cur;

  FORALL i IN v_type.FIRST .. v_type.LAST
    INSERT INTO TABLEA_AUDIT VALUES v_type(i);

  FORALL i IN v_type.FIRST .. v_type.LAST
    DELETE FROM TABLEA WHERE PRIMARY_KEY_COL = v_type(i).PRIMARY_KEY_COL;

  COMMIT;
END;
/


PS: We have also tried the same without any indexes on the new TABLEA_AUDIT table.

Result: the block was still running after more than an hour.




Without the nested tables I can copy 7,716,204 records in less than a minute. Size: 960.28125 MB.




Thanks.










oracle backup oracle-12c plsql

edited Apr 30 '18 at 8:46
asked Apr 30 '18 at 4:42
User12345

  • Is this a one-off thing or do you intend to do this regularly? If it's just a one-off: is there no chance to simply let it run, overnight or over the weekend maybe? I mean, it seems like you've spent at least seven hours on it already; maybe it would have finished in that time. Maybe an index on TRUNC(UPDATED_DT), COLUMN1 could support the SELECT, but on the other hand, given your last statement, the retrieval doesn't seem to be the actual problem. Maybe consider looping over a cursor and committing after every so many rows (see the sketch after these comments). It could be the growing transaction log that hits the brakes here.

    – sticky bit
    Apr 30 '18 at 11:45













  • It's a one-time task and I have already checked the index.

    – User12345
    Apr 30 '18 at 12:26













  • What about the suggestion regarding the transaction resources growing?

    – sticky bit
    Apr 30 '18 at 13:07
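
For illustration, the chunked approach suggested in the first comment might look roughly like the sketch below. It is only a sketch: it reuses the column names from the question, the batch size of 10,000 is an arbitrary starting point, and the primary keys are copied into a separate collection because some PL/SQL releases do not allow referencing record fields directly inside FORALL.

DECLARE
  TYPE row_tab_t IS TABLE OF TABLEA%ROWTYPE;
  TYPE key_tab_t IS TABLE OF TABLEA.PRIMARY_KEY_COL%TYPE;

  v_rows row_tab_t;
  v_keys key_tab_t := key_tab_t();

  CURSOR old_rows_cur IS
    SELECT *
    FROM   TABLEA
    WHERE  COLUMN1 IS NOT NULL
    AND    TRUNC(UPDATED_DT) < '01-MAR-18';
BEGIN
  OPEN old_rows_cur;
  LOOP
    /* fetch at most 10,000 rows per round instead of all 1.5 million at once */
    FETCH old_rows_cur BULK COLLECT INTO v_rows LIMIT 10000;
    EXIT WHEN v_rows.COUNT = 0;

    /* copy the keys into a scalar collection for the DELETE */
    v_keys.DELETE;
    v_keys.EXTEND(v_rows.COUNT);
    FOR i IN 1 .. v_rows.COUNT LOOP
      v_keys(i) := v_rows(i).PRIMARY_KEY_COL;
    END LOOP;

    FORALL i IN 1 .. v_rows.COUNT
      INSERT INTO TABLEA_AUDIT VALUES v_rows(i);

    FORALL i IN 1 .. v_keys.COUNT
      DELETE FROM TABLEA WHERE PRIMARY_KEY_COL = v_keys(i);

    /* commit per batch so undo does not have to cover all 1.5 million rows */
    COMMIT;
  END LOOP;
  CLOSE old_rows_cur;
END;
/

Whether batching actually helps depends on where the time goes (the nested-table inserts, the deletes, or undo growth); the batch size can be tuned once that is known.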
















1 Answer
































In case you won't need the data immediately, you can try extracting the table structure, renaming the existing table to a backup name, and then creating a new table with the same structure and the original name.


ALTER TABLE table_name RENAME TO new_table_name;


This will show the table structure; you can use the result to create the new table:


>describe table_name


Many suggestions for copying via a SELECT statement are out there on Stack Overflow. One is:


create table table_name as select * from table_name_old where 1=0;


It creates the structure but won't copy any rows, because the condition is always false.





answered Apr 30 '18 at 5:20
Yılmaz Durmaz
























  • Sorry to say, but this will not help me in this case, as I'm moving the records by condition.

    – User12345
    Apr 30 '18 at 5:23













  • @User12345, thanks for the feedback; I hope you find a useful answer later. Just don't forget to make full backups while trying.

    – Yılmaz Durmaz
    Apr 30 '18 at 5:28










