Changing your huge table's data types in production
Jimmy Angelakos, Senior PostgreSQL Architect, EDB
FOSDEM, 07/02/2021
Motivation
● Era of “Big Data” 🤦
● PostgreSQL seeing heavier usage
● PostgreSQL performance getting better
● Find your DB facing rapid growth
● Too rapid growth?
Why change types?
● Incorrect data type
  – VARCHAR(42) not enough → TEXT
● Non-optimal data type
  – TEXT id ‘12816750’ (9 bytes) vs INTEGER (4 bytes)
● Running out of IDs
  – Max INTEGER is +2,147,483,647
  – Whoops! (a headroom check is sketched below)
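A quick way to see how much INTEGER headroom an id column has left. This is an illustrative sketch, not from the original slides; it borrows the largetable/id names used in the demo later on:

-- Illustrative headroom check (largetable/id are the demo names used later)
SELECT max(id)                                AS current_max,
       2147483647 - max(id)                   AS ids_remaining,
       round(100.0 * max(id) / 2147483647, 2) AS pct_of_int_range_used
FROM largetable;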
But I can change types!
● Yes, if they are compatible (the pg_cast catalog can tell you, see below)
● “Binary coercible”
  – No conversion function invocation,
    e.g. XML → TEXT (but not TEXT → XML)
● “Binary compatible”
  – Same internal representation,
    e.g. TEXT ↔ VARCHAR
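If in doubt, the pg_cast catalog records how each cast is performed; castmethod = 'b' marks the binary-coercible ones. A small sketch (not part of the original slides):

-- List casts that need no conversion function (binary coercible)
SELECT castsource::regtype AS from_type,
       casttarget::regtype AS to_type
FROM pg_cast
WHERE castmethod = 'b'          -- 'b' = binary coercible, 'f' = uses a function
ORDER BY 1, 2;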
ALTER TABLE ALTER TYPE
● ALTER TABLE ALTER column_name TYPE data_type
● USING expression needed if there is no implicit cast (example below)
  – May need to DROP DEFAULT & add a new one after
● Needs index rebuild
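As a rough illustration (hypothetical table and sequence names, not from the talk), converting a TEXT id to INTEGER could look like this; the USING expression supplies the explicit cast, and the default is dropped and re-created around the type change:

-- Hypothetical example: TEXT id → INTEGER
ALTER TABLE mytable ALTER COLUMN id DROP DEFAULT;
ALTER TABLE mytable
    ALTER COLUMN id TYPE INTEGER
    USING id::INTEGER;
ALTER TABLE mytable
    ALTER COLUMN id SET DEFAULT nextval('mytable_id_seq');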
So what’s the problem?
● Requires an ACCESS EXCLUSIVE lock
  – No reads or writes allowed to other transactions
● Effectively prevents usage of the table in production
● Causes a table rewrite if the types are not binary coercible (the naive statement is sketched below)
  – S L O W
  – Requires double the disk space
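For the scenario that follows, the naive fix would be the single statement sketched here; it holds an ACCESS EXCLUSIVE lock for its whole duration and rewrites the entire table, which is exactly what we want to avoid:

-- The "obvious" one-liner: blocks all reads and writes while it
-- rewrites the whole table (INT → BIGINT is not binary coercible)
ALTER TABLE largetable
    ALTER COLUMN id TYPE BIGINT;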
Scenario
One possible scenario
● Huge table in production (1.7B rows)
● PK column is INT, rapidly approaching the 2.1B limit
● BIGINT is seen as the solution
● Not binary compatible (8 bytes vs 4 bytes)
● Cannot be taken offline
What now?
One possible concurrent solution
● Add new BIGINT column
● Write procedure to copy values to new column in batches
● Write trigger to replicate changes from old column
● Drop old column, rename new column
● Make new column PK
Small details
● Need to create sequence for new PK
● Need to create index for new PK
● After conversion, perform all DDL in one transaction
● Minimum possible locking/blocking
● Test system: Intel i7-9750H, 64 GB RAM, NVMe SSD
● Table with 1.7 × 10⁹ rows of 170 bytes each
Create example data (i)
CREATE TABLE largetable (id INT NOT NULL, content TEXT);
CREATE TABLE
INSERT INTO largetable
SELECT i, 'Lorem ipsum dolor sit amet, consectetur'
' adipiscing elit. Curabitur sodales arcu'
' non pulvinar venenatis. Morbi ut enim'
' efficitur.'
FROM generate_series(1,1700000000) AS i;
INSERT 0 1700000000
Time: 1945398.859 ms (32:25.399)
Create example data (ii)
CREATE SEQUENCE largetable_id_seq START 1700000001;
CREATE SEQUENCE
ALTER TABLE largetable
ALTER id SET DEFAULT nextval('largetable_id_seq');
ALTER TABLE
CREATE UNIQUE INDEX ON largetable(id);
CREATE INDEX
Time: 1585770.840 ms (26:25.771)
ALTER TABLE largetable
ADD PRIMARY KEY USING INDEX largetable_id_idx;
ALTER TABLE
Time: 8.534 ms
Data “in production” (i)
test=> \d largetable
Table "public.largetable"
Column | Type | Nullable | Default
---------+---------+----------+----------------------------------------
id | integer | not null | nextval('largetable_id_seq'::regclass)
content | text | |
Indexes:
"largetable_id_idx" PRIMARY KEY, btree (id)
test=> \dt+ largetable
List of relations
Schema | Name | Type | Owner | Size | Description
--------+------------+-------+-------+--------+-------------
public | largetable | table | test | 265 GB |
(1 row)
Data “in production” (ii)
test=> TABLE largetable LIMIT 5;
id | content
----+---------------------------------------------------------------------
1 | Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur s
2 | Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur s
3 | Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur s
4 | Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur s
5 | Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur s
(5 rows)
test=> SELECT n_live_tup
test-> FROM pg_stat_user_tables WHERE relname='largetable';
n_live_tup
------------
1700000000
(1 row)
Add the new column
● With zeros (instantaneous — since PostgreSQL 11, adding a column with a constant DEFAULT is a metadata-only change, so no table rewrite):
ALTER TABLE largetable
ADD COLUMN id_new BIGINT
NOT NULL
DEFAULT 0;
ALTER TABLE
Time: 13.249 ms
Build the trigger function
● Replicates incoming changes while conversion is running
CREATE FUNCTION largetable_trig_func()
RETURNS TRIGGER AS $$
BEGIN
NEW.id_new := NEW.id;
RETURN NEW;
END $$ LANGUAGE plpgsql;
CREATE FUNCTION
Time: 21.512 ms
Add the trigger
● Replicates incoming changes while conversion is running
CREATE TRIGGER largetable_trig
BEFORE INSERT OR UPDATE ON largetable
FOR EACH ROW
EXECUTE FUNCTION largetable_trig_func();
CREATE TRIGGER
Time: 12.576 ms
The conversion procedure
CREATE PROCEDURE largetable_sync_proc() AS $$
DECLARE r RECORD;
DECLARE count BIGINT := 0;
DECLARE batchsize BIGINT := 100000;
DECLARE cur CURSOR FOR SELECT id FROM largetable;  -- read-only cursor over the ids
BEGIN
  FOR r IN cur LOOP
    UPDATE largetable
      SET id_new = id
      WHERE id = r.id;
    count := count + 1;
    IF (count % batchsize = 0) THEN
      COMMIT;  -- commit every batch to keep transactions short
    END IF;
  END LOOP;
  COMMIT;  -- commit the final partial batch
  RETURN;
END $$ LANGUAGE plpgsql;
… with progress notices
… BEGIN
FOR r IN cur LOOP
UPDATE largetable
SET id_new = id
WHERE id = r.id;
count := count + 1;
IF (count % batchsize = 0) THEN
IF (count % (batchsize * 10) = 0) THEN
RAISE NOTICE '% rows done', count;
END IF;
COMMIT;
END IF;
END LOOP;
COMMIT;
RETURN;
END …
Do it! 😬
test=> CALL largetable_sync_proc();
NOTICE: 1000000 rows done
NOTICE: 2000000 rows done
NOTICE: 3000000 rows done
…
…
…
Is it blocking anything?
test=> SELECT id FROM largetable
test-> TABLESAMPLE bernoulli(1) LIMIT 1 \watch 1
id
---------
5600006
(1 row)
Thu 14 Jan 2021 09:09:43 GMT (every 1s)
id
---------
6900014
(1 row)
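Another way to check, not shown in the talk: ask pg_stat_activity whether any session is waiting on a lock and which backends are blocking it (a hedged sketch):

-- Sessions currently waiting on a lock, and the PIDs blocking them
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       left(query, 60)       AS query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';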
🥱
😅
test=> CALL largetable_sync_proc();
NOTICE: 1000000 rows done
NOTICE: 2000000 rows done
NOTICE: 3000000 rows done
…
…
…
NOTICE: 1698000000 rows done
NOTICE: 1699000000 rows done
NOTICE: 1700000000 rows done
CALL
Time: 25583914.664 ms (07:06:23.915)
Our table now looks like:
test=> TABLE largetable LIMIT 5;
id | content | id_new
--------+------------------------------------------+--------
100001 | Lorem ipsum dolor sit amet, consectetur… | 100001
100002 | Lorem ipsum dolor sit amet, consectetur… | 100002
100003 | Lorem ipsum dolor sit amet, consectetur… | 100003
100004 | Lorem ipsum dolor sit amet, consectetur… | 100004
100005 | Lorem ipsum dolor sit amet, consectetur… | 100005
(5 rows)
Create index for PK
CREATE UNIQUE INDEX
CONCURRENTLY largetable_id_new_idx
ON largetable(id_new);
CREATE INDEX
Time: 4662236.271 ms (01:17:42.236)
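A CREATE INDEX CONCURRENTLY that fails or is cancelled leaves an INVALID index behind, so it is worth verifying before relying on it for the primary key. A small check, not part of the original slides:

-- Any invalid indexes left on the table? (expect zero rows)
SELECT indexrelid::regclass AS index_name
FROM pg_index
WHERE indrelid = 'largetable'::regclass
  AND NOT indisvalid;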
And now all the DDL! 🤞
DO $$
DECLARE new_start BIGINT;
BEGIN
SELECT max(id) + 1 FROM largetable INTO new_start;
EXECUTE 'CREATE SEQUENCE largetable_id_bigint_seq '
'START ' || new_start;
ALTER TABLE largetable ALTER id_new
SET DEFAULT nextval('largetable_id_bigint_seq');
ALTER TABLE largetable DROP id;
ALTER TABLE largetable RENAME id_new TO id;
ALTER TABLE largetable ADD CONSTRAINT largetable_id_pkey
PRIMARY KEY USING INDEX largetable_id_new_idx;
DROP TRIGGER largetable_trig ON largetable;
COMMIT;
END $$ LANGUAGE plpgsql;
All done! 🏆
NOTICE: ALTER TABLE / ADD CONSTRAINT USING
INDEX will rename index
"largetable_id_new_idx" to
"largetable_id_pkey"
DO
Time: 451.049 ms
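One extra safety net worth considering, not part of the original demo: set a lock_timeout before running the DDL block, so that if it cannot acquire its (brief) locks promptly it errors out and can simply be retried, rather than queueing behind long-running queries and blocking every session that arrives after it.

-- Suggested precaution (assumed, not from the talk): fail fast if locks
-- cannot be acquired quickly, then retry the DO block later
SET lock_timeout = '2s';
-- ... then run the DO $$ ... $$ block from the previous slide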
Thank you =)
Twitter: @vyruss
Photo: River Broom Valley, Northwest Highlands, Scotland