1/44
PostgreSQL
Optimisation of queries with grouping
Alexey Bashtanov, Brandwatch
28 Jan 2016
2/44
What is it all about?
This talk will cover optimisation of
Grouping
Aggregation
Unfortunately it will not cover optimisation of
Getting the data
Filtering
Joins
Window functions
Other data transformations
3/44
Outline
1 What is a grouping?
2 How does it work?
Aggregation functions under the hood
Grouping algorithms
3 Optimisation
Avoiding sorts
Summation
Denormalized data aggregation
Arg-maximum
4 Still slow?
4/44
What is a grouping?
5/44
What is a grouping?
What do we call a grouping/aggregation operation?
An operation that splits the input data into several classes and
then collapses each class into one row.
[Diagram: a mixed stream of rows tagged 1, 2 and 3 is split into three classes, and each class is collapsed into a single aggregated row.]
6/44
Examples
SELECT department_id,
avg(salary)
FROM employees
GROUP BY department_id
SELECT DISTINCT department_id
FROM employees
7/44
Examples
SELECT DISTINCT ON (department_id)
department_id,
employee_id,
salary
FROM employees
ORDER BY department_id,
salary DESC
8/44
Examples
SELECT max(salary)
FROM employees
SELECT salary
FROM employees
ORDER BY salary DESC
LIMIT 1
9/44
How does it work?
10/44
Aggregation functions under the hood
[Diagram: INITCOND → SFUNC(state, input) → state → SFUNC(state, input) → ... → FINALFUNC(state) → Result]
An aggregate function is defined by:
State, input and output types
Initial state (INITCOND)
Transition function (SFUNC)
Final function (FINALFUNC)
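For illustration, here is a minimal sketch of how these four pieces fit together: a hand-rolled average over int that keeps {count, sum} in a bigint[] state. The names my_avg, my_avg_sfunc and my_avg_ffunc are invented for this example, and unlike the built-in avg it does not skip NULL inputs.
CREATE FUNCTION my_avg_sfunc (state bigint[], val int)
RETURNS bigint[] IMMUTABLE LANGUAGE sql AS
$$ SELECT array[state[1] + 1, state[2] + val] $$;   -- transition: bump count, add value
CREATE FUNCTION my_avg_ffunc (state bigint[])
RETURNS numeric IMMUTABLE LANGUAGE sql AS
$$ SELECT state[2]::numeric / nullif(state[1], 0) $$;   -- final: sum / count
CREATE AGGREGATE my_avg (int) (
    SFUNC     = my_avg_sfunc,   -- transition function
    STYPE     = bigint[],       -- state type
    FINALFUNC = my_avg_ffunc,   -- final function
    INITCOND  = '{0,0}'         -- initial state: count = 0, sum = 0
);
SELECT my_avg(column1) FROM (VALUES (2), (3), (7)) _;   -- returns 4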
10/44
Aggregation functions under the hood
[Animation: state = 0 → input 2 → 2 → input 3 → 5 → input 7 → 12 ⇒ sum = 12]
SELECT sum(column1),
avg(column1)
FROM (VALUES (2), (3), (7)) _
10/44
Aggregation functions under the hood
[Animation: state (cnt=0, sum=0) → input 2 → (1, 2) → input 3 → (2, 5) → input 7 → (3, 12) → sum / cnt ⇒ avg = 4]
SELECT sum(column1),
avg(column1)
FROM (VALUES (2), (3), (7)) _
11/44
Aggregation functions under the hood
SFUNC and FINALFUNC functions can be written in
C — fast (SFUNC may modify input state and return it)
SQL
PL/pgSQL — SLOW!
any other language
SFUNC and FINALFUNC functions can be declared STRICT
(i.e. not called on null input)
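A quick sketch of what STRICT buys you (concat_nn and concat_nn_sfunc are invented names): with a STRICT transition function and a null initial state, PostgreSQL skips null inputs and seeds the state with the first non-null value, so the function body never has to handle NULLs itself.
CREATE FUNCTION concat_nn_sfunc (state text, val text)
RETURNS text IMMUTABLE STRICT LANGUAGE sql AS
$$ SELECT state || ',' || val $$;   -- never sees a NULL argument
CREATE AGGREGATE concat_nn (text) (
    SFUNC = concat_nn_sfunc,
    STYPE = text                    -- no INITCOND: initial state is NULL
);
SELECT concat_nn(x) FROM (VALUES ('a'), (NULL), ('b')) v(x);   -- returns 'a,b'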
12/44
Grouping algorithms
PostgreSQL uses two algorithms to feed aggregate functions with
grouped data:
GroupAggregate: get the data sorted and apply
aggregation function to groups one by one
HashAggregate: store state for each key in a hash table
13/44
GroupAgg
[Animation: the input arrives sorted by grouping key; a single state is updated row by row and, whenever the key changes, it is finalized, emitted and reset for the next group.]
14/44
HashAggregate
[Animation: the input arrives in arbitrary order; a hash table keeps one state per key, each row updates the state matching its key, and all results are emitted only after the last row has been consumed.]
15/44
GroupAggregate vs. HashAggregate
GroupAggregate
− Requires sorted data
+ Needs less memory
+ Returns sorted data
+ Returns data on the fly
+ Can perform
count(distinct ...),
array_agg(... order by ...)
etc.
HashAggregate
+ Accepts unsorted data
− Needs more memory
− Returns unsorted data
− Returns data at the end
− Can perform only basic
aggregation
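A simple way to see which of the two the planner picked for a given query is to EXPLAIN it and look at the top node (a sketch, reusing the employees table from the earlier examples):
EXPLAIN
SELECT department_id,
       count(*)
FROM employees
GROUP BY department_id;
-- the plan starts with either HashAggregate or GroupAggregate
-- (the latter sitting on top of a Sort or a sorted index scan)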
16/44
Optimisation
17/44
Avoiding sorts
Sorts are really slow. Prefer HashAggregation if possible.
What to do if you get something like this?
EXPLAIN
SELECT region_id,
avg(age)
FROM people
GROUP BY region_id
GroupAggregate (cost=149244.84..156869.46 rows=9969 width=10)
-> Sort (cost=149244.84..151744.84 rows=1000000 width=10)
Sort Key: region_id
-> Seq Scan on people (cost=0.00..15406.00 rows=1000000 width=10)
1504.474 ms
set enable_sort to off? No!
GroupAggregate (cost=10000149244.84..10000156869.46 rows=9969 width=10)
-> Sort (cost=10000149244.84..10000151744.84 rows=1000000 width=10)
Sort Key: region_id
-> Seq Scan on people (cost=0.00..15406.00 rows=1000000 width=10)
1497.167 ms
Increase work_mem instead: set work_mem to '100MB'
HashAggregate (cost=20406.00..20530.61 rows=9969 width=10)
-> Seq Scan on people (cost=0.00..15406.00 rows=1000000 width=10)
685.689 ms
Increase sanely to avoid OOM
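One way to keep the risk down (a sketch, not from the talk): scope the setting to the transaction that needs it, so it reverts automatically at COMMIT or ROLLBACK.
BEGIN;
SET LOCAL work_mem TO '100MB';
SELECT region_id,
       avg(age)
FROM people
GROUP BY region_id;
COMMIT;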
18/44
Avoiding sorts
How to spend less memory to allow HashAggregation?
Don’t aggregate joined
SELECT p.region_id,
d.region_description,
avg(age)
FROM people p
JOIN regions r using (region_id)
GROUP BY region_id,
region_description
Join aggregated instead
SELECT a.region_id,
r.region_description,
a.avg_age
FROM (
SELECT region_id,
avg(age) avg_age
FROM people p
GROUP BY region_id
) a
JOIN regions r using (region_id)
19/44
Avoiding sorts
How to avoid sorts for count(DISTINCT ...)?
SELECT date_trunc('month', visit_date),
count(DISTINCT visitor_id)
FROM visits
GROUP BY date_trunc('month', visit_date)
GroupAggregate (actual time=7685.972..10564.358 rows=329 loops=1)
-> Sort (actual time=7680.426..9423.331 rows=4999067 loops=1)
Sort Key: (date_trunc('month'::text, visit_date))
Sort Method: external merge Disk: 107496kB
-> Seq Scan on visits (actual time=10.941..2966.460 rows=4999067 loops=1)
20/44
Avoiding sorts
Two levels of HashAggregate could be faster!
SELECT visit_month,
count(*)
FROM (
SELECT DISTINCT
date_trunc('month', visit_date)
as visit_month,
visitor_id
FROM visits
) _
GROUP BY visit_month
HashAggregate (actual time=2632.322..2632.354 rows=329 loops=1)
-> HashAggregate (actual time=2496.010..2578.779 rows=329000 loops=1)
-> Seq Scan on visits (actual time=0.060..1569.906 rows=4999067 loops=1)
21/44
Avoiding sorts
How to avoid sorts for array_agg(...ORDER BY ...)?
SELECT
visit_date,
array_agg(visitor_id ORDER BY visitor_id)
FROM visits
GROUP BY visit_date
GroupAggregate (actual time=5433.658..8010.309 rows=10000 loops=1)
-> Sort (actual time=5433.416..6769.872 rows=4999067 loops=1)
Sort Key: visit_date
Sort Method: external merge Disk: 107504kB
-> Seq Scan on visits (actual time=0.046..581.672 rows=4999067 loops=1)
22/44
Avoiding sorts
It might be better to sort each row's array separately
SELECT
visit_date,
(
select array_agg(i ORDER BY i)
from unnest(visitors_u) i
)
FROM (
SELECT visit_date,
array_agg(visitor_id) visitors_u
FROM visits
GROUP BY visit_date
) _
Subquery Scan on _ (actual time=2504.915..3767.300 rows=10000 loops=1)
-> HashAggregate (actual time=2504.757..2555.038 rows=10000 loops=1)
-> Seq Scan on visits (actual time=0.056..397.859 rows=4999067 loops=1)
SubPlan 1
-> Aggregate (actual time=0.120..0.121 rows=1 loops=10000)
-> Function Scan on unnest i (actual time=0.033..0.055 rows=500 loops=10000)
23/44
Summation
There are three sum functions in PostgreSQL:
sum(int) returns bigint
sum(bigint) returns numeric — SLOW
(needs to convert every input value)
sum(numeric) returns numeric
Do not use bigint as the datatype for a value to be summed;
prefer numeric. By the way, small numeric values take less
space on disk than bigint.
It might be worth writing a custom aggregate function
sum(bigint) returns bigint, as sketched below.
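A minimal sketch (the name sum64 and its SQL-language transition function are invented here; to actually beat the built-in sum(bigint) the transition function would most likely have to be written in C, as noted earlier, and this version raises "bigint out of range" instead of widening to numeric):
CREATE FUNCTION sum64_sfunc (state bigint, val bigint)
RETURNS bigint IMMUTABLE STRICT LANGUAGE sql AS
$$ SELECT state + val $$;
CREATE AGGREGATE sum64 (bigint) (
    SFUNC = sum64_sfunc,
    STYPE = bigint   -- no INITCOND: with a STRICT SFUNC the first non-null value seeds the state,
                     -- so sum64 over an empty set is NULL, just like the built-in sum
);
SELECT sum64(val) FROM (VALUES (1::bigint), (2::bigint)) v(val);   -- returns 3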
24/44
Summation
Straightforward solution, to be used if there are few zero values:
SELECT sum(cat_cnt)
FROM cities
This can be up to 7 times faster; worth considering if more than 50% of the values are zeroes:
SELECT coalesce(sum(tiger_cnt), 0)
FROM cities
WHERE tiger_cnt <> 0
This helps only if the type is numeric and we cannot filter the rows out (here the query also sums cat_cnt over all rows):
SELECT coalesce(sum(nullif(tiger_cnt, 0)), 0),
sum(cat_cnt)
FROM cities
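To check whether the >50% threshold applies, you can count the zeroes first (a sketch; the FILTER clause needs PostgreSQL 9.4 or later):
SELECT count(*) FILTER (WHERE tiger_cnt = 0) AS zero_tigers,
       count(*)                              AS total
FROM cities;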
25/44
Summation
In any case, it is better to replace all zeroes with nulls:
UPDATE cities
SET cat_cnt = nullif(cat_cnt, 0),
tiger_cnt = nullif(tiger_cnt, 0);
VACUUM FULL cities;
Additionally, this will dramatically reduce the space occupied.
26/44
Denormalized data aggregation
Sometimes we need to aggregate denormalized data
The most common solution is
SELECT account_id,
account_name,
sum(payment_amount)
FROM payments
GROUP BY account_id,
account_name
The planner does not know that account_id and account_name
correlate, which can lead to wrong estimates and a suboptimal plan.
27/44
Denormalized data aggregation
A bit less-known approach is
SELECT account_id,
min(account_name),
sum(payment_amount)
FROM payments
GROUP BY account_id
This works only if the type of the "denormalized payload" supports
a comparison operator.
28/44
Denormalized data aggregation
We can also write a custom aggregate function
CREATE FUNCTION frst (text, text)
RETURNS text IMMUTABLE LANGUAGE sql AS
$$ select $1; $$;
CREATE AGGREGATE a (text) (
SFUNC=frst,
STYPE=text
);
SELECT account_id,
a(account_name),
sum(payment_amount)
FROM payments
GROUP BY account_id
29/44
Denormalized data aggregation
Or even write it in C
SELECT account_id,
anyold(account_name),
sum(payment_amount)
FROM payments
GROUP BY account_id
Sorry, no source code for anyold
30/44
Denormalized data aggregation
And what is the fastest?
It depends on the width of "denormalized payload":
1 10 100 1000 10000
dumb 366ms 374ms 459ms 1238ms 53236ms
min 375ms 377ms 409ms 716ms 16747ms
SQL 1970ms 1975ms 2031ms 2446ms 2036ms*
C 385ms 385ms 408ms 659ms 436ms*
* — The more data, the faster we proceed?
It is because we do not need to extract the TOASTed values.
31/44
Arg-maximum
Max
Population of the largest
city in each country
Date of last tweet by each
author
The highest salary in each
department
Arg-max
What is the largest city in
each country
What is the last tweet by
each author
Who gets the highest
salary in each department
32/44
Arg-maximum
Max is built-in. How to perform Arg-max?
Self-joins?
Window-functions?
Use DISTINCT ON() (PG-specific, not in SQL standard)
SELECT DISTINCT ON (author_id)
author_id,
twit_id
FROM twits
ORDER BY author_id,
twit_date DESC
But it still can be performed only by sorting, not by hashing :(
33/44
Arg-maximum
We can emulate Arg-max by ordinary max and dirty hacks
SELECT author_id,
(max(array[
twit_date,
date'epoch' + twit_id
]))[2] - date'epoch'
FROM twits
GROUP BY author_id;
But such type tweaking is not always possible.
34/44
Arg-maximum
It’s time to write more custom aggregate functions
CREATE TYPE amax_ty AS (key_date date, payload int);
CREATE FUNCTION amax_t (p_state amax_ty, p_key_date date, p_payload int)
RETURNS amax_ty IMMUTABLE LANGUAGE sql AS
$$
SELECT CASE WHEN p_state.key_date < p_key_date
OR (p_key_date IS NOT NULL AND p_state.key_date IS NULL)
THEN (p_key_date, p_payload)::amax_ty
ELSE p_state END
$$;
CREATE FUNCTION amax_f (p_state amax_ty) RETURNS int IMMUTABLE LANGUAGE sql AS
$$ SELECT p_state.payload $$;
CREATE AGGREGATE amax (date, int) (
SFUNC = amax_t,
STYPE = amax_ty,
FINALFUNC = amax_f,
INITCOND = '(,)'
);
SELECT author_id,
amax(twit_date, twit_id)
FROM twits
GROUP BY author_id;
35/44
Arg-maximum
Argmax is similar to amax, but written in C
SELECT author_id,
argmax(twit_date, twit_id)
FROM twits
GROUP BY author_id;
36/44
Arg-maximum
Who wins now?
100² 333² 1000² 3333² 5000²
DISTINCT ON 6ms 42ms 342ms 10555ms 30421ms
Max(array) 5ms 47ms 399ms 4464ms 10025ms
SQL amax 38ms 393ms 3541ms 39539ms 90164ms
C argmax 5ms 37ms 288ms 3183ms 7176ms
SQL amax finally outperforms DISTINCT ON on 10⁹-ish rows
37/44
Still slow?
38/44
Still slow?
Slow max, arg-max or distinct query?
Sometimes we can fetch the rows one by one using an index:
CREATE INDEX ON twits(author_id, twit_date DESC);
-- for the very first author_id fetch the row with latest date
SELECT twit_id,
twit_date,
author_id
FROM twits
ORDER BY author_id,
twit_date DESC
LIMIT 1;
-- find the next author_id and fetch the row with latest date
SELECT twit_id,
twit_date,
author_id
FROM twits
WHERE author_id > ?
ORDER BY author_id,
twit_date DESC
LIMIT 1;
...
The same approach, wrapped into a procedure:
CREATE FUNCTION f1by1() RETURNS TABLE (o_twit_id int, o_twit_date date) AS $$
DECLARE l_author_id int := -1; -- to keep the code a bit simpler
BEGIN
LOOP
SELECT twit_id,
twit_date,
author_id
INTO o_twit_id,
o_twit_date,
l_author_id
FROM twits
WHERE author_id > l_author_id
ORDER BY author_id,
twit_date DESC
LIMIT 1;
EXIT WHEN NOT FOUND;
RETURN NEXT;
END LOOP;
END;
$$ LANGUAGE plpgsql;
SELECT * FROM f1by1();
39/44
Still slow?
Let us use pure SQL instead; as usual, it is a bit faster
WITH RECURSIVE d AS (
(
SELECT array[author_id, twit_id] ids
FROM twits
ORDER BY author_id,
twit_date DESC
LIMIT 1
)
UNION
SELECT (
SELECT array[t.author_id, t.twit_id]
FROM twits t
WHERE t.author_id > d.ids[1]
ORDER BY t.author_id,
t.twit_date DESC
LIMIT 1
) q
FROM d
)
SELECT d.ids[1] author_id,
d.ids[2] twit_id
FROM d;
40/44
Still slow?
One-by-one retrieval by index
+ Incredibly fast unless it returns too many rows
− Needs an index
− SQL version needs tricks if the data types differ
Authors × Twits-per-author:
10⁶ × 10¹ 10⁵ × 10² 10⁴ × 10³ 10² × 10⁵
C argmax 3679ms 3081ms 2881ms 2859ms
1-by-1 proc 12750ms 1445ms 152ms 2ms
1-by-1 SQL 6250ms 906ms 137ms 2ms
And compared with the earlier Arg-max approaches:
100² 333² 1000² 3333² 5000²
DISTINCT ON 6ms 42ms 342ms 10555ms 30421ms
Max(array) 5ms 47ms 399ms 4464ms 10025ms
SQL amax 38ms 393ms 3541ms 39539ms 90164ms
C argmax 5ms 37ms 288ms 3183ms 7176ms
1-by-1 proc 2ms 6ms 12ms 42ms 63ms
1-by-1 SQL 1ms 4ms 11ms 29ms 37ms
41/44
Still slow?
Slow HashAggregate?
Use parallel aggregation extension:
http://www.cybertec.at/en/products/agg-parallel-aggregations-postgresql/
+ Up to 30 times faster
+ Speeds up SeqScan as well
− Mostly useful for complex row operations
− Requires PG 9.5+
− No magic: it loads up several of your cores
42/44
Still slow?
Slow count(DISTINCT ...)?
Use HyperLogLog: a reliable and efficient approximate algorithm
https://en.wikipedia.org/wiki/HyperLogLog
https://github.com/aggregateknowledge/postgresql-hll
Or fetch approximate values from pg_stats
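For example (a sketch; it assumes the visits table from earlier and up-to-date statistics, e.g. after ANALYZE visits), the planner's own distinct-value estimate can be read directly:
SELECT n_distinct
FROM pg_stats
WHERE tablename = 'visits'
  AND attname = 'visitor_id';
-- positive: estimated number of distinct values
-- negative: minus the fraction of rows that are distinct (-1 means all values distinct)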
43/44
Still slow?
Slow in typing? ;)
SELECT department_id,
avg(salary)
FROM employees
GROUP BY 1 -- same as GROUP BY department_id
SELECT count(*)
FROM employees
GROUP BY true -- same as HAVING count(*) > 0
-- or use MySQL
SELECT account_id,
account_name,
sum(payment_amount)
FROM payments
GROUP BY 1
44/44
Questions?