Index not being used in SELECT query
I have a table of about 3.25M rows with the following format in Postgres 9.4.1:
CREATE TABLE stats
(
  id serial NOT NULL,
  type character varying(255) NOT NULL,
  "references" jsonb NOT NULL,
  path jsonb,
  data jsonb,
  "createdAt" timestamp with time zone NOT NULL,
  CONSTRAINT stats_pkey PRIMARY KEY (id)
)
WITH (
  OIDS=FALSE
);
The type is a simple string no longer than 50 characters.
The "references" column is an object holding a list of key-value pairs. It can be any list of simple key-value pairs, only ever 1 level deep, and the values are always strings. It could be
{
  "fruit": "plum",
  "car": "toyota"
}
or it could be
{
  "project": "2532"
}
The createdAt timestamp is not always generated by the database (but it is by default if a value isn't supplied).
I'm currently using the table with only testing data. In this data every row has a project key as a reference, so there are 3.25M rows with a project key. There are exactly 400,000 distinct values for the project reference. There are only 5 distinct values for the type field; this would probably be no more than a few hundred in production.
So I'm trying to index the table to perform the following query quickly:
SELECT
  EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt")))
FROM
  stats
WHERE
  stats."references"::jsonb ? 'project' AND
  (
    stats."type" = 'event1' OR
    (
      stats."type" = 'event2' AND
      stats."createdAt" > '2015-11-02T00:00:00+08:00' AND
      stats."createdAt" < '2015-12-03T23:59:59+08:00'
    )
  )
GROUP BY stats."references"::jsonb ->> 'project'
The query returns the time distance between two events, based on two stats rows that share the same reference value (project in this case). There is only ever 1 row for each type and selected reference value, but there may also be no rows, in which case the result returned is 0 (this is averaged out later in a different part of a larger query).
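To make the shape of the data concrete (the rows below are hypothetical, not from the original post), a matching event1/event2 pair for one project would look like this, and the query would return their distance in seconds for that group:

```sql
INSERT INTO stats (type, "references", "createdAt") VALUES
  ('event1', '{"project": "2532"}', '2015-11-10 10:00:00+08'),  -- first event
  ('event2', '{"project": "2532"}', '2015-11-10 10:05:00+08');  -- second event, 5 min later

-- For this pair the SELECT above yields one group ('2532') with
-- EXTRACT(EPOCH FROM (MAX - MIN)) = 300 seconds.
```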
I've created an index on the createdAt, type and references columns, but the query execution plan appears to be doing a full scan instead.
The index:

CREATE INDEX "stats_createdAt_references_type_idx"
  ON stats
  USING btree
  ("createdAt", "references", type COLLATE pg_catalog."default");
Execution plan:
HashAggregate  (cost=111188.31..111188.33 rows=1 width=38)
               (actual time=714.499..714.499 rows=0 loops=1)
  Group Key: ("references" ->> 'project'::text)
  ->  Seq Scan on stats  (cost=0.00..111188.30 rows=1 width=38)
                         (actual time=714.498..714.498 rows=0 loops=1)
        Filter: ((("references" ? 'project'::text)
                  AND ((type)::text = 'event1'::text)) OR
                 (((type)::text = 'event2'::text)
                  AND ("createdAt" > '2015-11-02 05:00:00+13'::timestamp with time zone)
                  AND ("createdAt" < '2015-12-04 04:59:59+13'::timestamp with time zone)))
        Rows Removed by Filter: 3258680
Planning time: 0.163 ms
Execution time: 714.534 ms
I'm really not that clued up on indexing and query execution plans so if someone could point me in the right direction that'd be great.
Edit
As noted by Erwin, it appears as though even if I did have the correct indexes, a table scan would still occur, as the portion of the table returned by the query is very large. Does this mean that for this set of data this is the fastest query time I can get? I'm assuming that if I added 60M more unrelated rows without a project reference, it might use an index (if I had the correct indexes), but I don't see how adding more data could speed up the query. Maybe I'm missing something.
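A quick way to check how selective the project predicate really is (the core of Erwin's point) is to count matching rows; this is a sketch against the stats table above, using the aggregate FILTER clause available since Postgres 9.4:

```sql
SELECT count(*) FILTER (WHERE "references" ? 'project') AS with_project,
       count(*)                                         AS total
FROM   stats;
```

If with_project is close to total, as in this test data, no index on that predicate can beat a sequential scan; an index only starts to pay off once matching rows are a small fraction of the table.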
postgresql database-design index postgresql-performance
migrated from stackoverflow.com Jan 1 '16 at 12:59
This question came from our site for professional and enthusiast programmers.
Did you try an index on (type, created_at, references)?
– ypercubeᵀᴹ Jan 1 '16 at 13:14

@YperSillyCubeᵀᴹ I'm just going through now and trying out some indexes, so I'll try that one out next.
– James Hay Jan 1 '16 at 13:19

No, that one slowed it down to 26 seconds unfortunately :/
– James Hay Jan 1 '16 at 13:32

Oh, I misread the WHERE condition. Are you aware that the condition is equivalent to: WHERE (stats."references"::jsonb ? 'project' AND stats."type" = 'event1') OR (stats."type" = 'event2' AND stats."createdAt" > '2015-11-02T00:00:00+08:00' AND stats."createdAt" < '2015-12-03T23:59:59+08:00')?
– ypercubeᵀᴹ Jan 1 '16 at 14:06

Please provide your actual table definition showing exact data types and all constraints: a valid CREATE TABLE script or what you get with \d stats in psql. And always your version of Postgres. How many rows with a key references ->> 'project', and how many distinct values for references ->> 'project'?
– Erwin Brandstetter Jan 1 '16 at 16:42
edited Jan 2 '16 at 5:20
asked Jan 1 '16 at 8:48 – James Hay
1 Answer
According to your current explanation, indexes are not going to help much (if at all) with your current query.
So there are 3.25M rows with a project key.

That's also the total number of rows, so this predicate is true for (almost) every row ... and not selective at all. But there is no useful index for the jsonb column "references". Including it in the btree index on ("createdAt", "references", type) is just pointless.
Even if you had a generally more useful GIN index on "references" like:
CREATE INDEX stats_references_gix ON stats USING gin ("references");
... Postgres would still have no useful statistics about individual keys inside the jsonb column.
There are only 5 distinct values for the type

Your query selects all of one type and an unknown fraction of another type. That's an estimated 20 - 40 % of all rows. A sequential scan is most certainly going to be the fastest plan. Indexes start to make sense for around 5 % of all rows or less.
To test, you can force a possible index by setting for debugging purposes in your session:
SET enable_seqscan = off;
Reset with:
RESET enable_seqscan;
You'll see slower queries ...
You group by project values:
GROUP BY "references"->> 'project'
And:
There are exactly 400,000 distinct values for the project reference.
That's 8 rows per project on average. Depending on value frequencies we still have to retrieve an estimated 3 - 20 % of all rows if we only picked min and max per project in a LATERAL subquery ...
Try this index; it makes more sense than what you have now:
CREATE INDEX stats_special_idx ON stats (type, ("references" ->> 'project'), "createdAt")
WHERE "references" ? 'project';
Postgres might still fall back to a sequential scan ...
More might be done with a normalized schema, more selective criteria, or a smarter query that only picks min and max "createdAt" ...
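One way to sketch that "smarter query" (my own sketch with assumed aliases; the answer only hints at it, and the event2 date range is omitted here for brevity): derive the distinct project values, then pick min and max per project in a LATERAL subquery, so each group is resolved from the edges of the suggested index rather than by scanning all of its rows:

```sql
SELECT p.project,
       EXTRACT(EPOCH FROM (x.max_at - x.min_at)) AS seconds
FROM  (SELECT DISTINCT "references" ->> 'project' AS project
       FROM   stats
       WHERE  "references" ? 'project') p
CROSS  JOIN LATERAL (
   SELECT min("createdAt") AS min_at, max("createdAt") AS max_at
   FROM   stats
   WHERE  "references" ->> 'project' = p.project
   AND    type IN ('event1', 'event2')
   ) x;
```

This pays off mainly when the list of projects comes from a small lookup table instead of the DISTINCT subquery, which itself has to visit every matching row.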
Query
I would write your query like this:
SELECT EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt")))
FROM   stats
WHERE  "references" ? 'project'
AND   (type = 'event1' OR
       type = 'event2'
       AND "createdAt" >= '2015-11-02 00:00:00+08:00'  -- I guess you want this
       AND "createdAt" <  '2015-12-04 00:00:00+08:00'
      )
GROUP  BY "references" ->> 'project';  -- don't cast
Notes

- Don't cast here:

      stats."references"::jsonb ? 'project'

  The column is jsonb already; you gain nothing. If the predicate was selective, index usage might be prohibited by the cast.

- Your predicates on "createdAt" are probably incorrect at the lower and upper bound. To include whole days, consider my suggested alternative.

- references is a reserved word, so you always have to double-quote it. Don't use it as an identifier. The same goes for double-quoted CaMeL-case names like "createdAt": allowed, but error prone, a needless complication.
type

    type character varying(255) NOT NULL,

    The type is a simple string no longer than 50 characters.
    There are only 5 distinct values for the type field, this would probably be no more than a few hundred in production.

None of this seems to make sense.

- varchar(255) in itself hardly ever makes any sense. 255 characters is an arbitrary limit without significance in Postgres.
- If it's no longer than 50 chars, then the limit of 255 makes even less sense.
- In a properly normalized design you would have a small integer column type_id (referencing a small type table), which occupies only 4 bytes per row and makes indexes smaller and faster.
Ideally, you would have a project table listing all projects, and another small integer FK column project_id in stats. That would make any such query faster. And with selective criteria, much faster queries would be possible, even without the suggested normalization. Along these lines: Optimize GROUP BY query to retrieve latest record per user
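A minimal sketch of that normalized design (all table and column names here are my assumptions, not from the answer):

```sql
CREATE TABLE type (
  type_id serial PRIMARY KEY,
  name    text NOT NULL UNIQUE        -- e.g. 'event1', 'event2'
);

CREATE TABLE project (
  project_id serial PRIMARY KEY,
  name       text NOT NULL UNIQUE     -- e.g. '2532'
);

CREATE TABLE stats (
  id          serial PRIMARY KEY,
  type_id     int NOT NULL REFERENCES type,
  project_id  int REFERENCES project,          -- NULL when no project reference
  "createdAt" timestamptz NOT NULL DEFAULT now()
);

-- One narrow btree replaces the jsonb gymnastics:
CREATE INDEX ON stats (type_id, project_id, "createdAt");
```

With integer keys the index rows are much narrower, and the WHERE and GROUP BY clauses become plain equality on project_id.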
Thanks for the in-depth answer. Noted on all of the above: naming conventions, date predicates, unnecessary casting etc. I guess I've thrown myself off by using bad test data. On average the rows with project as a reference would be around 5 - 10 % of the table. I don't think the specialized index would work, as the "references" don't actually point to anything else in the database; they're used purely as a way to tie rows together for various queries, and the key (project in this case) is user defined.
– James Hay Jan 2 '16 at 5:10
Also, regarding varchar: it was automatically generated by a model definition using Sequelize. I've previously only worked with SQL Server, so that's the reason I wasn't aware of that problem. I will see if I can update the schema to use text.
– James Hay Jan 2 '16 at 5:35
@JamesHay: There is no problem with varchar(n) per se. But unlike SQL Server, there is also no magic benefit to limiting to varchar(255). More: dba.stackexchange.com/a/89433/3684. Sequelize may be optimized for SQL Server ...
– Erwin Brandstetter Jan 2 '16 at 14:42
1 Answer
1
active
oldest
votes
1 Answer
1
active
oldest
votes
active
oldest
votes
active
oldest
votes
According to your current explanation indexes are not going to help much (if at all) with your current query.
So there are 3.25M rows with a project key.
That's also the total number of rows, so this predicate is true
for (almost) every row ... and not selective at all. But there is no useful index for the jsonb
column "references"
. Including it in the btree index on ("createdAt", "references", type)
is just pointless.
Even if you had a generally more useful GIN index on "reference"
like:
CREATE INDEX stats_references_gix ON stats USING gin ("references");
... Postgres would still have no useful statistics about individual keys inside the jsonb
column.
There are only 5 distinct values for the
type
Your query selects all of one type and an unknown fraction of another type. That's an estimated 20 - 40 % of all rows. A sequential scan is most certainly going to be fastest plan. Indexes start to make sense for around 5 % of all rows or less.
To test, you can force a possible index by setting for debugging purposes in your session:
SET enable_seqscan = off;
Reset with:
RESET enable_seqscan;
You'll see slower queries ...
You group by project values:
GROUP BY "references"->> 'project'
And:
There are exactly 400,000 distinct values for the project reference.
That's 8 rows per project on average. Depending on value frequencies we still have to retrieve an estimated 3 - 20 % of all rows if we only picked min and max per project in a LATERAL subquery ...
Try this index, it makes more sense than what you have now:
CREATE INDEX stats_special_idx ON stats (type, ("references" ->> 'project'), "createdAt")
WHERE "references" ? 'project';
Postgres might still fall back to a sequential scan ...
More might be done with a normalized schema / more selective criteria / a smarter query that only picks min and max "createdAt"
...
Query
I would write your query like this:
SELECT EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt")))
FROM stats
WHERE "references" ? 'project'
AND (type = 'event1' OR
type = 'event2'
AND "createdAt" >= '2015-11-02 00:00:00+08:00' -- I guess you want this
AND "createdAt" < '2015-12-04 00:00:00+08:00'
)
GROUP BY "references"->> 'project'; -- don't cast
Notes
Don't cast here:
stats."references"
::jsonb? 'project'
The column is
jsonb
already, you gain nothing. If the predicate was selective, index usage might be prohibited by the cast.
Your predicates on
"createdAt"
are probably incorrect at lower and upper bound. To include whole days consider my suggested alternative.references
is a reserved word, so you have to always double-quote it. Don't use it as identifier. Similar for double-quoted CaMeL-case names like"createdAt"
either. Allowed, but error prone, needless complication.
type
type character varying(255) NOT NULL,
The type is a simple string no longer than 50 characters.
There are only 5 distinct values for the type field, this would probably be no more than a few hundred in production.
None of this seems to make sense.
varchar(255)
in itself hardly ever makes any sense. 255 characters is an arbitrary limit without significance in Postgres.- If it's no longer than 50 chars, then the limit of 255 makes even less sense.
- In a properly normalized design you would have a small
integer
columntype_id
(referencing a smalltype
table) which occupies only 4 bytes per row and makes indexes smaller and faster.
Ideally, you would have a
project
table, listing all projects and another small integer FK columnproject_id
instats
. Would make any such query faster. And for selective criteria, much faster queries would be possible - even without the suggested normalization. Along these lines:Optimize GROUP BY query to retrieve latest record per user
Thanks for the in depth answer. Noted on all of the above, naming conventions, date predicates, unecessary casting etc. I guess i've thrown myself off by using bad test data. On average the rows with project as a reference would be around 5 - 10 % of the table. I dont think the specialized index would work as the "references" dont actually point to anything else in the databse, theyre used purely as a way to tie rows together for various queries, and the key (project in this case) is user defined
– James Hay
Jan 2 '16 at 5:10
Also regarding varchar it was automatically generated by a model definition using sequelize. Ive previously only worked with SQL server so thats the reason I wasnt aware of that problem. I will see if i can update the schema to use text.
– James Hay
Jan 2 '16 at 5:35
@JamesHay: There is no problem withvarchar(n)
per se. But unlike SQL Server, there is also no magic benefit for limiting tovarchar(255)
. More: dba.stackexchange.com/a/89433/3684. Sequelize may be optimized for SQL Server ...
– Erwin Brandstetter
Jan 2 '16 at 14:42
add a comment |
According to your current explanation indexes are not going to help much (if at all) with your current query.
So there are 3.25M rows with a project key.
That's also the total number of rows, so this predicate is true
for (almost) every row ... and not selective at all. But there is no useful index for the jsonb
column "references"
. Including it in the btree index on ("createdAt", "references", type)
is just pointless.
Even if you had a generally more useful GIN index on "reference"
like:
CREATE INDEX stats_references_gix ON stats USING gin ("references");
... Postgres would still have no useful statistics about individual keys inside the jsonb
column.
There are only 5 distinct values for the
type
Your query selects all of one type and an unknown fraction of another type. That's an estimated 20 - 40 % of all rows. A sequential scan is most certainly going to be fastest plan. Indexes start to make sense for around 5 % of all rows or less.
To test, you can force a possible index by setting for debugging purposes in your session:
SET enable_seqscan = off;
Reset with:
RESET enable_seqscan;
You'll see slower queries ...
You group by project values:
GROUP BY "references"->> 'project'
And:
There are exactly 400,000 distinct values for the project reference.
That's 8 rows per project on average. Depending on value frequencies we still have to retrieve an estimated 3 - 20 % of all rows if we only picked min and max per project in a LATERAL subquery ...
Try this index, it makes more sense than what you have now:
CREATE INDEX stats_special_idx ON stats (type, ("references" ->> 'project'), "createdAt")
WHERE "references" ? 'project';
Postgres might still fall back to a sequential scan ...
More might be done with a normalized schema / more selective criteria / a smarter query that only picks min and max "createdAt"
...
Query
I would write your query like this:
SELECT EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt")))
FROM stats
WHERE "references" ? 'project'
AND (type = 'event1' OR
type = 'event2'
AND "createdAt" >= '2015-11-02 00:00:00+08:00' -- I guess you want this
AND "createdAt" < '2015-12-04 00:00:00+08:00'
)
GROUP BY "references"->> 'project'; -- don't cast
Notes
Don't cast here:
stats."references"::jsonb ? 'project'
The column is jsonb already; you gain nothing. If the predicate was selective, index usage might even be prohibited by the cast.
Your predicates on "createdAt" are probably incorrect at the lower and upper bound. To include whole days, consider my suggested alternative.

references is a reserved word, so you always have to double-quote it. Don't use it as an identifier. The same goes for double-quoted CaMeL-case names like "createdAt": allowed, but error-prone, a needless complication.
type
type character varying(255) NOT NULL,
The type is a simple string no longer than 50 characters.
There are only 5 distinct values for the type field, this would probably be no more than a few hundred in production.
None of this seems to make sense.
- varchar(255) in itself hardly ever makes any sense. 255 characters is an arbitrary limit without significance in Postgres.
- If it's no longer than 50 chars, then the limit of 255 makes even less sense.
- In a properly normalized design you would have a small integer column type_id (referencing a small type table), which occupies only 4 bytes per row and makes indexes smaller and faster.
Ideally, you would have a project table listing all projects, and another small integer FK column project_id in stats. That would make any such query faster. And with selective criteria, much faster queries would be possible - even without the suggested normalization. Along these lines:
Optimize GROUP BY query to retrieve latest record per user
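The suggested normalization could be sketched like this (my illustration; the table and column names type_id and project_id are assumptions, not from the question):

```sql
CREATE TABLE type (
   type_id serial PRIMARY KEY
 , type    text NOT NULL UNIQUE          -- 'event1', 'event2', ...
);

CREATE TABLE project (
   project_id serial PRIMARY KEY
 , project    text NOT NULL UNIQUE
);

CREATE TABLE stats (
   id          serial PRIMARY KEY
 , type_id     int NOT NULL REFERENCES type
 , project_id  int NOT NULL REFERENCES project
 , path        jsonb
 , data        jsonb
 , "createdAt" timestamptz NOT NULL DEFAULT now()
);

-- Matching index for the query pattern:
CREATE INDEX stats_type_project_created_idx
   ON stats (type_id, project_id, "createdAt");
```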
Thanks for the in-depth answer. Noted on all of the above: naming conventions, date predicates, unnecessary casting etc. I guess I've thrown myself off by using bad test data. On average the rows with project as a reference would be around 5 - 10 % of the table. I don't think the specialized index would work, as the "references" don't actually point to anything else in the database; they're used purely as a way to tie rows together for various queries, and the key (project in this case) is user-defined.
– James Hay
Jan 2 '16 at 5:10
Also, regarding varchar: it was automatically generated by a model definition using Sequelize. I've previously only worked with SQL Server, so that's the reason I wasn't aware of that problem. I will see if I can update the schema to use text.
– James Hay
Jan 2 '16 at 5:35
@JamesHay: There is no problem with varchar(n) per se. But unlike SQL Server, there is also no magic benefit to limiting to varchar(255). More: dba.stackexchange.com/a/89433/3684. Sequelize may be optimized for SQL Server ...
– Erwin Brandstetter
Jan 2 '16 at 14:42
answered Jan 2 '16 at 4:20
Erwin Brandstetter
Did you try an index on (type, created_at, references)?
– ypercubeᵀᴹ
Jan 1 '16 at 13:14
@YperSillyCubeᵀᴹ I'm just going through now and trying out some indexes, so I'll try that one out next.
– James Hay
Jan 1 '16 at 13:19
No, that one slowed it down to 26 seconds unfortunately :/
– James Hay
Jan 1 '16 at 13:32
Oh, I misread the WHERE condition. Are you aware that the condition is equivalent to this?
WHERE (stats."references"::jsonb ? 'project' AND stats."type" = 'event1') OR (stats."type" = 'event2' AND stats."createdAt" > '2015-11-02T00:00:00+08:00' AND stats."createdAt" < '2015-12-03T23:59:59+08:00')
– ypercubeᵀᴹ
Jan 1 '16 at 14:06
Please provide your actual table definition showing exact data types and all constraints: a valid CREATE TABLE script, or what you get with \d stats in psql. And always your version of Postgres. How many rows with a key references ->> 'project', and how many distinct values for references ->> 'project'?
– Erwin Brandstetter
Jan 1 '16 at 16:42