SQL Server In-Memory Table Conversion

I have a table that I want to convert to a memory-optimized table.

I have 160 million rows in the table. According to this topic, it will need:



(24 B header + 8 B index pointer + 30 B data row) × 160 million rows = 9,460.45 MB of memory to store the table, and 4 B × 160 million = 610.35 MB of memory to store the index.

The total memory needed is about 10 GB.
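
As a quick sanity check of that arithmetic (plain T-SQL, no table access; 62 B = 24 + 8 + 30):

SELECT CAST(62 AS BIGINT) * 160000000 / 1048576.0 AS table_mb,  -- ~9460.45
       CAST(4  AS BIGINT) * 160000000 / 1048576.0 AS index_mb;  -- ~610.35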



I gave SQL Server 16,000 MB of memory, which I thought would be enough, but when I try to populate the table with data I get an "out of memory" error.



I use the following script to change my disk-based table to a memory-optimized one:



USE [master]
GO

-- Add a filegroup to hold memory-optimized data
ALTER DATABASE [BGD] ADD FILEGROUP [memory_optimized_filegroup_0] CONTAINS MEMORY_OPTIMIZED_DATA
GO

-- Add a container for the memory-optimized filegroup
ALTER DATABASE [BGD] ADD FILE ( NAME = N'memory_optimized_file_122569704', FILENAME = N'E:\Artash\Local\BGD\Data\memory_optimized_file_122569704' ) TO FILEGROUP [memory_optimized_filegroup_0]
GO

USE [BGD]
GO

-- Keep the original disk-based table under a new name
EXEC dbo.sp_rename @objname = N'[dbo].[household_mem]', @newname = N'household_mem_old', @objtype = N'OBJECT'
GO

USE [BGD]
GO

SET ANSI_NULLS ON
GO

-- Recreate the table as memory-optimized; SCHEMA_ONLY means the data is not
-- persisted across restarts, but it still has to fit in memory
CREATE TABLE [dbo].[household_mem]
(
[location2id] [SMALLINT] NULL,
[location3id] [SMALLINT] NULL,
[location4id] [SMALLINT] NULL,
[location5id] [SMALLINT] NULL,
[location6id] [INT] NULL,
[location7id] [INT] NULL,
[location8id] [INT] NULL,
[location9id] [INT] NULL,
[povertyscore] [SMALLINT] NULL,
[householdid] [INT] IDENTITY(1,1) NOT NULL,

CONSTRAINT [household_mem_primaryKey] PRIMARY KEY NONCLUSTERED
(
[householdid] ASC
)
) WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_ONLY )

GO

-- Copy the data across, preserving the identity values
SET IDENTITY_INSERT [BGD].[dbo].[household_mem] ON

GO

INSERT INTO [BGD].[dbo].[household_mem]
    ([location2id], [location3id], [location4id], [location5id],
     [location6id], [location7id], [location8id], [location9id],
     [povertyscore], [householdid])
SELECT [location2id], [location3id], [location4id], [location5id],
       [location6id], [location7id], [location8id], [location9id],
       [povertyscore], [householdid]
FROM [BGD].[dbo].[household_mem_old]

GO

SET IDENTITY_INSERT [BGD].[dbo].[household_mem] OFF

GO

Running the script produces:

Caution: Changing any part of an object name could break scripts and stored procedures.
The statement has been terminated.
Msg 701, Level 17, State 109, Line 47
There is insufficient system memory in resource pool 'default' to run this query.


I have also tried adding a resource pool for my database, but I still get the same error.
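
For reference, a minimal sketch of the resource-pool binding I attempted (the pool name Pool_IMOLTP matches the error quoted in my comments below; the 75% memory cap is an assumption):

-- Create a pool and cap how much of SQL Server's memory it may use (75% assumed)
CREATE RESOURCE POOL Pool_IMOLTP WITH (MAX_MEMORY_PERCENT = 75);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Bind the database to the pool; the binding only takes effect
-- after the database goes offline and comes back online
EXEC sp_xtp_bind_db_resource_pool N'BGD', N'Pool_IMOLTP';
ALTER DATABASE BGD SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE BGD SET ONLINE;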

sql-server sql-server-2016 memory-optimized-tables

asked Oct 4 '17 at 11:59 by Artashes Khachatryan
edited Oct 4 '17 at 12:06 by Paul White

  • Have you considered batching your inserts rather than doing them all in one shot?

    – Erik Darling
    Oct 4 '17 at 12:19

  • So you already have household_mem_old at that size, and now you are trying to create a SECOND memory-optimized table of the same size?

    – sepupic
    Oct 4 '17 at 12:43

  • @sepupic No, I just want to convert my household_mem table to memory-optimized. That is why I first do the rename, and then create the memory-optimized table and populate it with data.

    – Artashes Khachatryan
    Oct 4 '17 at 12:46

  • @sp_BlitzErik I tried to split my big insert into batches of 1 million rows each. 140 batches completed successfully, and then I got the error. Currently my table size is 8,544 MB and the index size is 1,715 MB.

    – Artashes Khachatryan
    Oct 4 '17 at 13:41
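
A minimal sketch of the batching loop I used (assuming householdid values are reasonably dense; the batch size of 1,000,000 matches what I described):

SET IDENTITY_INSERT [BGD].[dbo].[household_mem] ON;

DECLARE @start INT = 1,
        @batch INT = 1000000,
        @max   INT = (SELECT MAX(householdid) FROM dbo.household_mem_old);

WHILE @start <= @max
BEGIN
    INSERT INTO dbo.household_mem
        (location2id, location3id, location4id, location5id,
         location6id, location7id, location8id, location9id,
         povertyscore, householdid)
    SELECT location2id, location3id, location4id, location5id,
           location6id, location7id, location8id, location9id,
           povertyscore, householdid
    FROM dbo.household_mem_old
    WHERE householdid >= @start AND householdid < @start + @batch;

    SET @start += @batch;
END;

SET IDENTITY_INSERT [BGD].[dbo].[household_mem] OFF;

Memory use per table can be watched while the loop runs (a sketch using the per-table In-Memory DMV):

SELECT OBJECT_NAME(object_id)           AS table_name,
       memory_used_by_table_kb / 1024   AS table_mb,
       memory_used_by_indexes_kb / 1024 AS index_mb
FROM sys.dm_db_xtp_table_memory_stats;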

1 Answer

Microsoft recommends 2x the data/index size as a starting point for memory allocation, but that's a generalization.

Is any of the data and/or indexes in the source table compressed?

How much memory is on the server?

How much memory is allocated to SQL Server?

When you created the resource pool, did you bind the database to it, and then take the database offline/online to make the binding effective? What percentage did you allocate to the pool?

Please post the result of:

SELECT committed_target_kb
FROM sys.dm_os_sys_info;

And:

SELECT @@version;

I agree with @sp_BlitzErik that an atomic insert of 100 million rows into a memory-optimized table is not the best way to migrate data, though not necessarily because batching requires less memory; batching will simply perform much better.

answered Oct 4 '17 at 12:33 by NedOtter

  • 1. 16 GB of memory, all allocated to SQL Server. 2. Yes, I have bound the database to the resource pool. 3. Microsoft SQL Server 2016 (SP1-CU5) (KB4040714) - 13.0.4451.0 (X64) Sep 5 2017 16:12:34 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows 10 Pro 6.3 <X64> (Build 14393: ) 4. 8388608 - from sys.dm_os_sys_info 5. No compressed data or indexes.

    – Artashes Khachatryan
    Oct 4 '17 at 12:41

  • @NedOtter >>>Is any of the data and/or indexes in the source table compressed?<<< How is that relevant? The memory-optimized table's size is calculated not from the gigabytes being inserted but from the memory-optimized row size and the number of rows, and neither of those depends on the original table/index compression.

    – sepupic
    Oct 4 '17 at 13:38

  • @sepupic It's only relevant because I don't trust anyone's calculations; I trust only the size of the source data/indexes.

    – NedOtter
    Oct 4 '17 at 14:11

  • @artashes - if you have 16 GB of server memory, you cannot allocate all of it to SQL Server's 'max memory' setting, because you will starve the OS. Leave room for the OS, subtract that from the total server memory, and then decide what to allocate to SQL Server 'max memory'. Of that, 75% can be allocated to In-Memory OLTP. Link here: bit.ly/2rIT4um

    – NedOtter
    Oct 4 '17 at 14:15

  • @NedOtter - I know what you are talking about, but I would get a different error if the OS were the problem. Currently my table is 10 GB in size, so I still have 6 GB available to SQL Server, but when I insert 1 million more rows, I get "There is insufficient system memory in resource pool 'Pool_IMOLTP' to run this query."

    – Artashes Khachatryan
    Oct 4 '17 at 14:26
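
One way to see how much memory the pool is actually allowed and using (a sketch against the Resource Governor DMV; the pool names come from the errors above):

SELECT name, used_memory_kb, max_memory_kb, target_memory_kb
FROM sys.dm_resource_governor_resource_pools
WHERE name IN (N'default', N'Pool_IMOLTP');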