zfs for developers
zfs, a modern file system built for 
large-scale data integrity
wikipedia to the rescue!
NFS 
Lustre 
OpenOffice 
Sun Microsystems
They also made a large amount of 
hardware
http://zfsonlinux.org/docs/LUG12_ZFS_Lustre_for_Sequoia.pdf
2005: integrated into the Solaris kernel 
2008: first commit to ZoL 
2010: illumos founded 
2013: OpenZFS founded 
ZoL grew from 45 commits to 1174 commits
-File systems should be large 
-Storage media is not to be trusted 
-Storage maintenance should be easy 
-Disk storage should be more like RAM
File systems should be large 
Our largest system was 144 TB of storage: 
disks × capacity 
36 × 4 TB = 144 TB 
ZFS can address hard drives so large they could not be 
stored on this planet.
File systems should be large 
ext4 1 EiB 
HFS+ 1 EiB 
BTRFS 16 EiB 
zfs 256×(1024^6) EiB (2^128 bytes)
File systems should be large 
who cares?
Storage media is not to be trusted 
-Spinning disks have a bit error rate 
-Sometimes the head writes to the wrong place 
-“Modern hard disks write so fast and so faint 
that they are only guessing that what they read 
is what you wrote” 
-Cables go bad 
-Cosmic rays (!!!)
Storage media is not to be trusted 
zfs overcomes these problems with checksumming. Every 
block is run through fletcher4 before it is written, and that 
checksum is combined with other metadata and written “far 
away” from the data when they are written out. 
also available: sha256 
future: Edon-R, Skein
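Switching to a stronger checksum is a one-line property change; a sketch, using the tank pool from later slides: 
zfs set checksum=sha256 tank 
Only blocks written after the change use sha256; existing blocks keep fletcher4 until they are rewritten.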
Storage media is not to be trusted 
Checksum failures do not happen too often; when they 
do, they are usually just a great early warning that the 
drive is failing
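You can make zfs read and verify every block on demand; a sketch, tank being illustrative: 
zpool scrub tank 
zpool status tank 
The CKSUM column counts blocks whose checksum did not match and had to be repaired from redundancy.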
Storage maintenance should be easy 
zpool create name disks 
zfs create filesystem 
zfs set compression=off filesystem 
zfs set sync=disabled filesystem 
zpool status 
zfs destroy
Storage maintenance should be easy 
Is it intuitive? 
zfs snapshot 
zfs send/receive 
zfs create/destroy
Storage maintenance should be easy 
Is it intuitive? 
zpool add VS zpool attach
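A sketch of why this trips people up, with hypothetical disk names: 
# adds a brand-new top-level vdev, growing the pool; 
# historically this could not be undone 
zpool add tank mirror diskC diskD 
# adds diskB as another mirror leg next to the existing diskA 
zpool attach tank diskA diskB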
Storage maintenance should be easy 
Is it easy? 
I think so
Disk storage should be more like RAM 
You should be able to open a computer up, throw 
some disks in there, and be running. Never need to 
mess with it, never need to tune it.
Disk storage should be more like RAM 
FAIL 
tuning is not recommended
Disk storage should be more like RAM 
“Tuning is evil, yes, in the way that doing 
something against the will of the creator is evil”
zfs sits above your hard drives and below your 
directories; it adds features you might like.
zfs sits above your hard drives and below your 
directories; it adds features you might like. 
data integrity 
transparent compression (LZ4) 
improved throughput 
snapshotting 
replication via snapshotting 
speed via ARC 
easy maintenance 
choice in RAID setup
Command overview 
zfs 
zpool 
zdb
Command overview 
zfs every week 
zpool every month 
zdb depends on the day
Command overview 
zfs Awesome man page 
zpool Awesome man page 
zdb meh...
zpool create
zpool create 
zpool create tank -o ashift=12 -O compression=lz4 mirror 
ata-WDC_WD1002FAEX-00Y9A0_WD-WCAW32714185 
ata-WDC_WD1002FAEX-00Z3A0_WD-WMATR0443468
zpool create 
zpool create tank -o ashift=12 -O compression=lz4 mirror 
ata-WDC_WD1002FAEX-00Y9A0_WD-WCAW32714185 
ata-WDC_WD1002FAEX-00Z3A0_WD-WMATR0443468 
/dev/disk/by-id/ata-* 
http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool
zpool status 
/home/sburgess > zpool status 
pool: tank 
state: ONLINE 
scan: scrub repaired 0 in 19h39m with 0 errors on Tue Jul 15 10:23:16 2014 
config: 
NAME STATE READ WRITE CKSUM 
tank ONLINE 0 0 0 
mirror-0 ONLINE 0 0 0 
ata-WDC_WD1002FAEX-00Y9A0_WD-WCAW32714185 ONLINE 0 0 0 
ata-WDC_WD1002FAEX-00Z3A0_WD-WMATR0443468 ONLINE 0 0 0
so far 
/home/sburgess > zpool get all tank 
NAME PROPERTY VALUE SOURCE 
tank size 928G - 
tank capacity 34% - 
tank health ONLINE -
so far 
/home/sburgess > zfs get all tank 
NAME PROPERTY VALUE SOURCE 
tank type filesystem - 
tank creation Thu Jan 3 15:55 2013 - 
tank used 325G - 
tank available 589G - 
tank referenced 184K - 
tank compressratio 1.54x - 
tank mounted yes - 
tank recordsize 128K default 
tank mountpoint /tank default 
tank compression lz4 local 
tank sync standard default 
tank refcompressratio 1.00x
so far 
/home/sburgess > ls /tank/
zfs create 
zfs create tank/home
zfs create 
zfs create -o mountpoint=/home/sburgess 
tank/home/sburgess
zfs create 
zfs create tank/home/sburgess/downloads 
zfs create tank/home/sburgess/projects 
zfs create tank/home/sburgess/tools
zfs create 
zfs create tank/home/sburgess/downloads 
zfs create tank/home/sburgess/projects 
zfs create tank/home/sburgess/tools 
chown -R sburgess: /home/sburgess
zfs create 
zfs list -o name,refer,used,compressratio -r tank/home/sburgess 
NAME REFER USED RATIO 
tank/home/sburgess 4.37G 114G 1.73x 
tank/home/sburgess/downloads 34.8G 36.0G 1.66x 
tank/home/sburgess/projects 2.08G 11.7G 1.30x 
tank/home/sburgess/tools 583M 635M 1.54x
zfs create 
mv Pictures pic 
zfs create tank/home/sburgess/Pictures 
chown -R sburgess: Pictures 
mv pic/* Pictures
zfs create 
/home/sburgess > zfs list -o name,refer,used,compressratio -r 
tank/home/sburgess 
NAME REFER USED RATIO 
tank/home/sburgess 4.36G 114G 1.73x 
tank/home/sburgess/Pictures 11.3M 11.3M 1.16x 
tank/home/sburgess/downloads 34.8G 36.0G 1.66x 
tank/home/sburgess/projects 2.08G 11.7G 1.30x 
tank/home/sburgess/tools 583M 635M 1.54x
zfs create 
shopt -s dotglob 
du -hs * 
2.9G .kde 
1.3G .cache
uberblock
uberblock 
The root of the zfs hash tree 
“A Merkle tree is a tree in which every non-leaf 
node is labelled with the hash of the labels of its 
children nodes.”
uberblock 
zdb -u poolName
uberblock 
zdb -u test 
Uberblock: 
magic = 0000000000bab10c 
version = 5000 
txg = 5 
guid_sum = 16411893724316372364 
timestamp = 1392754246 UTC = Tue Feb 18 15:10:46 2014
uberblock 
Uberblock: 
magic = 0000000000bab10c 
version = 5000 
txg = 5 
guid_sum = 16411893724316372364 
timestamp = 1392754246 UTC = Tue Feb 18 15:10:46 2014 
… cat /dev/urandom > file … 
Uberblock: 
magic = 0000000000bab10c 
version = 5000 
txg = 163 
guid_sum = 16411893724316372364 
timestamp = 1392755035 UTC = Tue Feb 18 15:23:55 2014
uberblock 
Uberblock: 
magic = 0000000000bab10c 
version = 5000 
txg = 163 
guid_sum = 16411893724316372364 
timestamp = 1392755035 UTC = Tue Feb 18 15:23:55 2014 
… zpool attach pool disk1 disk2… 
Uberblock: 
magic = 0000000000bab10c 
version = 5000 
txg = 197 
guid_sum = 16865875370843337150 
timestamp = 1392755190 UTC = Tue Feb 18 15:26:30 2014
uberblock 
Go back in time via 
zpool import -F
snapshotting
snapshotting 
zfs snapshot tank/home/sburgess@now
snapshotting 
zfs list -o name,creation,used -t all -r tank/home/sburgess
What to do with snapshots
.zfs directory 
Always there; whether or not it shows up in ls -a 
is controlled by 
zfs set snapdir=hidden|visible filesystem
.zfs directory 
Contains .zfs/snapshot, which has a directory 
for each snapshot. When you access any 
snapshot's directory, it is temporarily mounted 
read-only there.
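A sketch of browsing it, reusing the @now snapshot from these slides (notes.txt is a hypothetical file): 
zfs set snapdir=visible tank/home/sburgess 
ls /home/sburgess/.zfs/snapshot/ 
cp /home/sburgess/.zfs/snapshot/now/notes.txt ~/notes.txt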
.zfs directory 
Use case: 
-Test if/when a file was created 
-Easily restore a file or two; for large, 
complicated restores, use a clone.
zfs rollback 
zfs rollback tank/home/sburgess@then 
The target should be the most recent snapshot, but 
you can use -r to roll back further (destroying the 
snapshots in between)
zfs rollback 
Use case: 
Being too bold with tar -x
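A sketch of the pattern, with illustrative names: 
zfs snapshot tank/home/sburgess@pre-untar 
tar -xf sketchy-archive.tar 
# ...regret... 
zfs rollback tank/home/sburgess@pre-untar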
zfs clone 
zfs clone tank/home/sburgess@now tank/other 
tank/other is a 
read/write, snapshottable, cloneable file system. 
Initially it shares all blocks with the parent, takes 
0 space, and amplifies ARC hits
zfs clone 
Use case: 
Virtual Machine base images 
All configs, modules, programs and OS data 
shared
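A sketch using the tank/images@base snapshot that shows up later in these slides (the tank/vm/* names are hypothetical): 
zfs clone tank/images@base tank/vm/web01 
zfs clone tank/images@base tank/vm/db01 
Each VM starts out sharing every block with the base image and only pays for what it changes.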
zfs clone 
zfs clone 
-o readonly=on 
-o mountpoint=/tmp/ro 
tank/home/sburgess@now tank/other
zfs clone 
-safe (readonly) 
-0 time 
-0 space
zfs clone 
Use case: 
-large file restore 
-diffing files across both
zfs clone 
What clones of this snapshot exist? 
zfs get clones filesystem@snapshot 
What snapshot was this filesystem cloned 
from? 
zfs get origin filesystem
a note on - 
“-” is zfs's none/null/not-applicable value 
zfs get clones tank 
NAME PROPERTY VALUE SOURCE 
tank clones - - 
zfs get origin tank@now 
NAME PROPERTY VALUE SOURCE 
tank@now origin - -
a note on - 
“-” is zfs's none/null/not-applicable value 
zpool get version 
NAME PROPERTY VALUE SOURCE 
tank version - default
a note on 5000 
zpool version numbers no longer increase as 
features are added; the version stays at 5000
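Instead, capabilities appear as feature@ properties; a sketch, tank being illustrative: 
zpool get all tank | grep feature@ 
Each feature is reported as disabled, enabled, or active.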
zfs send
zfs send 
Original idea: 
Send the changes I made today across the 
ocean
zfs send 
Create a file detailing the changes that need to 
be made to transition a filesystem from one 
snapshot to another.
zfs send 
zfs send is a dictation, not a conversation
zfs send 
The sending and receiving pools can be configured 
completely differently: 
zpool create -O compression=off -O copies=2 -o ashift=12 
zpool create -O compression=lz4 -O checksum=sha256 -o ashift=9
zfs send 
zfs send tank/currr@1387825261 
Error: Stream can not be written to a terminal. 
You must redirect standard output.
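A sketch of the two usual fixes, either saving the stream to a file or piping it straight into zfs receive (backuphost and the paths are hypothetical): 
zfs send tank/currr@1387825261 > /backup/currr.zstream 
zfs send tank/currr@1387825261 | ssh backuphost zfs receive -F backup/currr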
zfs send 
-n (dry run) 
-v (verbose)
zfs send 
zfs send -n -v tank/home/sburgess@now
zfs send 
zfs send -n -v tank/home/sburgess@now 
send from @ to tank/home/sburgess@now 
total estimated size is 9.22G
zfs send 
zfs send tank/home/sburgess@now 
What does this send? What does it create 
when it's received?
zfs send 
zfs send tank/home/sburgess@now 
It sends a “full” filesystem: everything that is 
needed to create tank/home/sburgess@now. 
The receiving side gets a new FS with a single 
snapshot named now
zfs send 
Can be used with the -i and -I options to send 
incremental changes. Only send the blocks that 
changed between the first and second 
snapshots.
zfs send 
-i do not send intermediate snapshots 
-I send intermediate snapshots
zfs send 
-i do not send intermediate snapshots 
-I send intermediate snapshots 
zfs send -I early file/system/path@late
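A sketch of an incremental round trip, with illustrative snapshot names: 
zfs snapshot tank/home/sburgess@monday 
# ...a day of work... 
zfs snapshot tank/home/sburgess@tuesday 
zfs send -i @monday tank/home/sburgess@tuesday | ssh backuphost zfs receive backup/sburgess 
The receiving side must already have @monday; only blocks written between the two snapshots travel.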
zfs get vs zfs list
zfs get vs zfs list 
When working interactively use zfs list 
zfs list -t all -o name,written,used,mounted 
NAME WRITTEN USED MOUNTED 
tank/home/sburgess/tools@1387825261 0 0 - 
tank/images 590M 8.82G no 
tank/images@base 8.25G 369M - 
tank/other 8K 8K yes 
tank/trick 0 136K yes
zfs get vs zfs list 
zfs list is the same as 
zfs list -o name,used,avail,refer,mountpoint
zfs get vs zfs list 
zfs list is the same as 
zfs list -o name,used,avail,refer,mountpoint 
^^^^
zfs get vs zfs list 
zfs list | grep/awk/??
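zfs list takes the same scripting switches as zfs get; a sketch: 
zfs list -H -p -o name,used -r tank 
-H drops the header and -p prints exact byte counts, so the output splits cleanly on tabs.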
zfs get vs zfs list 
when looking at an FS or snapshot, I call 
zfs get all item | less
zfs get vs zfs list 
For programmatic use, use zfs get -H -p 
zfs get used tank 
NAME PROPERTY VALUE SOURCE 
tank used 484G - 
zfs get used -o value -H -p tank 
519265562624
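A sketch of capturing that value in a shell script: 
used=$(zfs get -H -p -o value used tank) 
echo "$used bytes used"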
Learn more 
read the zpool man page 
read the zfs man page 
subscribe to the ZoL mailing list, and just read 
new messages as they come in
