Rollback a table to an earlier version
PERFORMANCE OPTIMIZATIONS
TIME TRAVEL
View table details
Delete old files with Vacuum
Clone a Delta Lake table
Interoperability with Python / DataFrames
Run SQL queries from Python
Modify data retention settings for Delta Lake table
-- RESTORE requires Delta Lake version 0.7.0+ & DBR 7.4+.
RESTORE tableName VERSION AS OF 0
RESTORE tableName TIMESTAMP AS OF "2020-12-18"
Delta Lake is an open source storage layer that brings ACID
transactions to Apache Spark™ and big data workloads.
delta.io | Documentation | GitHub | Delta Lake on Databricks
WITH SPARK SQL
UPDATE tableName SET event = 'click' WHERE event = 'clk'
DELETE FROM tableName WHERE date < '2017-01-01'
MERGE INTO logs
USING newDedupedLogs
ON logs.uniqueId = newDedupedLogs.uniqueId
WHEN NOT MATCHED
THEN INSERT *
-- Add "Not null" constraint:
ALTER TABLE tableName CHANGE COLUMN col_name SET NOT NULL
-- Add "Check" constraint:
ALTER TABLE tableName
ADD CONSTRAINT dateWithinRange CHECK (date > "1900-01-01")
-- Drop constraint:
ALTER TABLE tableName DROP CONSTRAINT dateWithinRange
ALTER TABLE tableName ADD COLUMNS (
col_name data_type
[FIRST|AFTER colA_name])
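For example, the template above filled in with a hypothetical event_source column:
-- add a nullable STRING column positioned after the existing event column
ALTER TABLE tableName ADD COLUMNS (event_source STRING AFTER event)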
MERGE INTO target
USING updates
ON target.Id = updates.Id
WHEN MATCHED AND target.delete_flag = "true" THEN
DELETE
WHEN MATCHED THEN
UPDATE SET * -- star notation means all columns
WHEN NOT MATCHED THEN
INSERT (date, Id, data) -- or, use INSERT *
VALUES (date, Id, data)
INSERT INTO TABLE tableName VALUES
  (8003, "Kim Jones", "2020-12-18", 3.875),
  (8004, "Tim Jones", "2020-12-20", 3.750);
-- Insert using SELECT statement
INSERT INTO tableName SELECT * FROM sourceTable
-- Atomically replace all data in table with new values
INSERT OVERWRITE loan_by_state_delta VALUES (...)
DELTA LAKE DDL/DML: UPDATE, DELETE, MERGE, ALTER TABLE
Update rows that match a predicate condition
Delete rows that match a predicate condition
Insert values directly into table
Upsert (update + insert) using MERGE
Alter table schema — add columns
Insert with Deduplication using MERGE
Alter table — add constraint
DESCRIBE DETAIL tableName
DESCRIBE FORMATTED tableName
-- logRetentionDuration -> how long transaction log history
-- is kept; deletedFileRetentionDuration -> how long ago a file
-- must have been deleted to be a candidate for VACUUM.
ALTER TABLE tableName
SET TBLPROPERTIES(
delta.logRetentionDuration = "interval 30 days",
delta.deletedFileRetentionDuration = "interval 7 days"
);
SHOW TBLPROPERTIES tableName;
spark.sql("SELECT * FROM tableName")
spark.sql("SELECT * FROM delta.`/path/to/delta_table`")
# Read name-based table from Hive metastore into DataFrame
df = spark.table("tableName")
# Read path-based table into DataFrame
df = spark.read.format("delta").load("/path/to/delta_table")
-- Deep clones copy data from source, shallow clones don't.
CREATE TABLE [dbName.] targetName
[SHALLOW | DEEP] CLONE sourceName [VERSION AS OF 0]
[LOCATION "path/to/table"]
-- specify location only for path-based tables
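For example, a shallow clone of version 0 of a table into a hypothetical dev copy:
-- hypothetical names; a shallow clone copies metadata only, no data files
CREATE TABLE dev.tableName_clone SHALLOW CLONE tableName VERSION AS OF 0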
VACUUM tableName [RETAIN num HOURS] [DRY RUN]
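For example, to preview which files a vacuum with the default 7-day (168-hour) retention window would remove, without deleting anything:
VACUUM tableName RETAIN 168 HOURS DRY RUN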
UTILITY METHODS
*Databricks Delta Lake feature
OPTIMIZE tableName
[ZORDER BY (colNameA, colNameB)]
*Databricks Delta Lake feature
ALTER TABLE [table_name | delta.`path/to/delta_table`]
SET TBLPROPERTIES (delta.autoOptimize.optimizeWrite = true)
*Databricks Delta Lake feature
CACHE SELECT * FROM tableName
-- or:
CACHE SELECT colA, colB FROM tableName WHERE colNameA > 0
Compact data files with Optimize and Z-Order
Auto-optimize tables
Cache frequently queried data in Delta Cache
DESCRIBE HISTORY tableName
SELECT * FROM tableName VERSION AS OF 12
EXCEPT ALL SELECT * FROM tableName VERSION AS OF 11
SELECT * FROM tableName VERSION AS OF 0
SELECT * FROM tableName@v0 -- equivalent to VERSION AS OF 0
SELECT * FROM tableName TIMESTAMP AS OF "2020-12-18"
View transaction log (aka Delta Log)
Query historical versions of Delta Lake tables
Find changes between 2 versions of table
-- A managed database is saved in the Hive metastore.
-- The default database is named "default".
DROP DATABASE IF EXISTS dbName;
CREATE DATABASE dbName;
USE dbName -- lets you refer to tables as tableName
-- instead of dbName.tableName
/* You can refer to Delta Tables by table name, or by
path. Table name is the preferred way, since named tables
are managed in the Hive Metastore (i.e., when you DROP a
named table, the data is dropped also — not the case for
path-based tables.) */
SELECT * FROM [dbName.] tableName
CREATE TABLE [dbName.] tableName
USING DELTA
AS SELECT * FROM tableName | parquet.`path/to/data`
[LOCATION `/path/to/table`]
-- using location = unmanaged table
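For example, creating a managed Delta table from an existing Parquet directory (hypothetical path):
CREATE TABLE dbName.tableName
USING DELTA
AS SELECT * FROM parquet.`/path/to/data`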
-- by table name
CONVERT TO DELTA [dbName.]tableName
[PARTITIONED BY (col_name1 col_type1, col_name2 col_type2)]
-- path-based tables
CONVERT TO DELTA parquet.`/path/to/table` -- note backticks
[PARTITIONED BY (col_name1 col_type1, col_name2 col_type2)]
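For example, converting a Parquet table partitioned by a hypothetical date column:
CONVERT TO DELTA parquet.`/path/to/table`
PARTITIONED BY (date DATE)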
SELECT * FROM delta.`path/to/delta_table` -- note backticks
CREATE TABLE [dbName.] tableName (
id INT [NOT NULL],
name STRING,
date DATE,
int_rate FLOAT)
USING DELTA
[PARTITIONED BY (time, date)] -- optional
COPY INTO [dbName.] targetTable
FROM "/path/to/table"
FILEFORMAT = DELTA -- or CSV, Parquet, ORC, JSON, etc.
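For example, loading new Parquet files into a hypothetical target table:
COPY INTO dbName.targetTable
FROM "/path/to/new_files"
FILEFORMAT = PARQUET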
CREATE AND QUERY DELTA TABLES
Create and use managed database
Query Delta Lake table by table name (preferred)
Query Delta Lake table by path
Convert Parquet table to Delta Lake format in place
Create table, define schema explicitly with SQL DDL
Create Delta Lake table as SELECT * with no upfront
schema definition
Copy new data into Delta Lake table (with idempotent retries)
TIME TRAVEL (CONTINUED)
Provided to the open source community by Databricks
© Databricks 2021. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are
trademarks of the Apache Software Foundation.
spark.sql("SELECT * FROM tableName")
spark.sql("SELECT * FROM delta.`/path/to/delta_table`")
spark.sql("DESCRIBE HISTORY tableName")
deltaTable.vacuum()    # vacuum files older than the default retention period (7 days)
deltaTable.vacuum(100) # vacuum files not required by versions more than 100 hours old
deltaTable.clone(target="/path/to/delta_table/",
isShallow=True, replace=True)
spark.sql("SELECT * FROM tableName")
spark.sql("SELECT * FROM delta.`/path/to/delta_table`")
UTILITY METHODS
WITH PYTHON
Convert Parquet table to Delta Lake format in place
Run Spark SQL queries in Python
Compact old files with Vacuum
Clone a Delta Lake table
Get DataFrame representation of a Delta Lake table
Run SQL queries on Delta Lake tables
fullHistoryDF = deltaTable.history()
# choose only one option: versionAsOf, or timestampAsOf
df = (spark.read.format("delta")
.option("versionAsOf", 0)
.option("timestampAsOf", "2020-12-18")
.load("/path/to/delta_table"))
TIME TRAVEL
View transaction log (aka Delta Log)
Query historical versions of Delta Lake tables
PERFORMANCE OPTIMIZATIONS
*Databricks Delta Lake feature
spark.sql("OPTIMIZE tableName [ZORDER BY (colA, colB)]")
*Databricks Delta Lake feature. For existing tables:
spark.sql("ALTER TABLE [table_name | delta.`path/to/delta_table`] SET TBLPROPERTIES (delta.autoOptimize.optimizeWrite = true)")
To enable auto-optimize for all new Delta Lake tables:
spark.sql("SET spark.databricks.delta.properties.defaults.autoOptimize.optimizeWrite = true")
*Databricks Delta Lake feature
spark.sql("CACHE SELECT * FROM tableName")
-- or:
spark.sql("CACHE SELECT colA, colB FROM tableName
WHERE colNameA > 0")
Compact data files with Optimize and Z-Order
Auto-optimize tables
Cache frequently queried data in Delta Cache
WORKING WITH DELTATABLES
# A DeltaTable is the entry point for interacting with tables
# programmatically in Python, e.g. to perform updates or deletes.
from delta.tables import *
deltaTable = DeltaTable.forName(spark, tableName)
deltaTable = DeltaTable.forPath(spark, "/path/to/delta_table")
CONVERT PARQUET TO DELTA LAKE
from delta.tables import *
deltaTable = DeltaTable.convertToDelta(spark,
"parquet.`/path/to/parquet_table`")
partitionedDeltaTable = DeltaTable.convertToDelta(spark,
"parquet.`/path/to/parquet_table`", "part int")
df1 = spark.read.format("delta").load("/path/to/delta_table")
df2 = spark.read.format("delta").option("versionAsOf",
2).load("/path/to/delta_table")
df1.exceptAll(df2).show()
deltaTable.restoreToVersion(0)
deltaTable.restoreToTimestamp('2020-12-01')
Find changes between 2 versions of a table
Rollback a table by version or timestamp
Delta Lake is an open source storage layer that brings ACID
transactions to Apache Spark™ and big data workloads.
delta.io | Documentation | GitHub | API reference | Databricks
df = spark.createDataFrame(pdf)
# where pdf is a pandas DF
# then save DataFrame in Delta Lake format as shown below
# read by path
df = (spark.read.format("parquet"|"csv"|"json"|etc.)
.load("/path/to/delta_table"))
# read by table name
df = spark.table("events")
# by path or by table name
df = (spark.readStream
.format("delta")
.schema(schema)
.table("events") | .load("/delta/events")
)
(df.writeStream.format("delta")
.outputMode("append"|"update"|"complete")
.option("checkpointLocation", "/path/to/checkpoints")
.trigger(once=True|processingTime="10 seconds")
.table("events") | .start("/delta/events")
)
(df.write.format("delta")
.mode("append"|"overwrite")
.partitionBy("date") # optional
.option("mergeSchema", "true") # option - evolve schema
.saveAsTable("events") | .save("/path/to/delta_table")
)
READS AND WRITES WITH DELTA LAKE
Read data from pandas DataFrame
Read data using Apache Spark™
Save DataFrame in Delta Lake format
Streaming reads (Delta table as streaming source)
Streaming writes (Delta table as a sink)
# predicate using SQL formatted string
deltaTable.delete("date < '2017-01-01'")
# predicate using Spark SQL functions
# (requires: from pyspark.sql.functions import col)
deltaTable.delete(col("date") < "2017-01-01")
# Available options for merges [see docs for details]:
#   .whenMatchedUpdate(...) | .whenMatchedUpdateAll(...) |
#   .whenNotMatchedInsert(...) | .whenMatchedDelete(...)
(deltaTable.alias("target").merge(
source = updatesDF.alias("updates"),
condition = "target.eventId = updates.eventId")
.whenMatchedUpdateAll()
.whenNotMatchedInsert(
values = {
"date": "updates.date",
"eventId": "updates.eventId",
"data": "updates.data",
"count": 1
}
).execute()
)
(deltaTable.alias("logs").merge(
newDedupedLogs.alias("newDedupedLogs"),
"logs.uniqueId = newDedupedLogs.uniqueId")
.whenNotMatchedInsertAll()
.execute()
)
# predicate using SQL formatted string
deltaTable.update(condition = "eventType = 'clk'",
set = { "eventType": "'click'" } )
# predicate using Spark SQL functions
# (requires: from pyspark.sql.functions import col, lit)
deltaTable.update(condition = col("eventType") == "clk",
                  set = { "eventType": lit("click") } )
DELTA LAKE DDL/DML: UPDATES, DELETES, INSERTS, MERGES
Delete rows that match a predicate condition
Update rows that match a predicate condition
Upsert (update + insert) using MERGE
Insert with Deduplication using MERGE
df = deltaTable.toDF()