BITS Pilani
Pilani Campus
Data Storage Technologies & Networks
Dr. Virendra Singh Shekhawat
Department of Computer Science and Information Systems
BITS Pilani, Pilani Campus
Topics
• File Store Organization
• OS Support for I/O
– Device Drivers
– Interrupt handling
– Device Interface
– Buffering
Organization of the File Store
• Berkeley Fast File System (FFS) model:
– A file system is described by its superblock
• Located at the beginning of the file system
• Possibly replicated for redundancy
– A disk partition is divided into one or more cylinder
groups, i.e., sets of consecutive cylinders:
• Each group maintains book-keeping information, including
– a redundant copy of the superblock
– space for i-nodes
– a bitmap of available blocks
– summary usage information
• Cylinder groups are used to create clusters of
allocated blocks to improve locality.
Local File Store – Storage Utilization
• Data layout – performance requirement
– Large blocks of data should be transferable in a single disk operation
• So the logical block size must be large.
– But typical usage profiles consist primarily of small files.
• Internal Fragmentation:
– Increases from 1.1% for a 512-byte logical block size to 2.5% for 1 KB,
5.4% for 2 KB, and an unacceptable 61% for 16 KB
• I-nodes also add to the space overhead:
– The i-node overhead is about 6% for logical block sizes of 512 B, 1 KB,
and 2 KB, falls to about 3% for 4 KB, and to 0.8% for 16 KB.
• One way to balance internal fragmentation against
improved I/O performance is
– to maintain large logical blocks composed of smaller fragments
Local File Store – Layout [1]
• Global Policies:
– Use summary information to make decisions
regarding placement of i-nodes and disk blocks.
• Routines responsible for deciding placement of new
directories and files.
– Layout policies rely on locality to cluster information
for improved performance.
• E.g. Cylinder group clustering
• Local Allocation Routines:
– Use locally optimal schemes for data block layouts.
Local File Store – Layout [2]
• Local allocators use a multi-level allocation strategy:
1. Use the next available block that is rotationally
closest to the requested block on the same cylinder
2. If no blocks are available in the same cylinder use
a block within the same cylinder group
3. If the cylinder group is full, choose another group
by quadratic hashing.
4. Search exhaustively.
OS Support for I/O
• System calls form the interface between applications
and OS (kernel)
– File System and I/O system are responsible for
• Implementing system calls related to file management and handling
input/output.
• Device drivers form the interface between the OS
(kernel) and the hardware
I/O in UNIX - Example
• Application level operation
– E.g. printf call
• OS (kernel) level
– E.g. the buffer-cache routine bwrite, reached via the write system call
• Device Driver level
– Strategy entry point – code for write operation
• Device level
– E.g. SCSI protocol command for write
I/O in UNIX - Devices
• Besides file I/O, the I/O system is used for
– network communication and virtual memory (swap
space)
• Two types of devices
– Character devices
• Terminals, line printers, main memory
– Block devices
• Disks and Tapes
• Buffering done by kernel
I/O in UNIX – Device Drivers
• Device Driver Sections
– Auto-configuration and initialization routines
• Probe and initialize the device
– Routines for servicing I/O requests
• Invoked because of system calls or VM requests
• Referred to as the “top-half” of the driver
– Interrupt service routines
• Invoked by interrupt from a device
– Can’t depend on per-process state
– Can’t block
• Referred to as the “bottom-half” of the driver
I/O Queuing and Interrupt Handling
• Each device driver manages one or more queues
for I/O requests
– Shared among asynchronous routines – must be
synchronized.
– Multi-process synchronization also required.
• Interrupts are generated by devices
– They signal a status change (or completion of an operation)
– The driver's ISR is invoked through a glue routine that is
responsible for saving volatile registers.
– The ISR removes the request from the queue and notifies the
requestor that the command has completed.
I/O in UNIX – Block Devices and I/O
• Disk-sector-sized reads/writes
– Converting random-access requests into disk-sector
reads/writes is known as block I/O
– Kernel buffering reduces latency across multiple reads
and writes.
Device Driver to Disk Interface
• Disk Interface:
– Disk access requires an address:
• (device id, LBA), or
• (device id, cylinder #, head #, sector #)
– Device drivers need to be aware of disk details for
address translation:
• i.e., converting a logical address (say, a file-system-level address
such as an i-node number) to a physical address (i.e., CHS) on the
disk;
• they need to know the complete disk geometry if CHS
addressing is used.
– Early device drivers had hard-coded disk geometries.
• This reduces modularity:
– disks cannot be moved (the data mapping lives in the device driver);
– device driver upgrades would require shutdowns and data copies.
Disk Labels
• Disk geometry and data mapping are stored on
the disk itself.
– Known as the disk label.
– Must be stored in a fixed location – usually the 0th
block, i.e., sector 0 on head 0, cylinder 0.
– This is usually also where the bootstrap code is stored
• In that case the disk information is made part of the bootstrap
code, because booting may require disk access.
Buffering
• Buffer Cache
– A memory buffer for data being transferred to and from
disks
– A cache for recently used blocks
• An 85% hit rate is typical
• Typical buffer size = 64 KB of virtual memory
• Buffer pool – hundreds of buffers
• Consistency issues
– Each disk block is mapped to at most one buffer
– Buffers have dirty bits associated with them
– When a new buffer is allocated, if its disk blocks
overlap those of an existing buffer, the old buffer
must be purged.
Buffer Pool Management
• The buffer pool is maintained as a (separately chained) hash table
indexed by a buffer id
• The buffers in the pool are also kept on one of four lists:
– Locked list:
• buffers currently used for I/O; they are locked and cannot be
released until the operation completes
– LRU list:
• A queue of buffers – a recently used buffer is added at the rear of the
queue, and when a buffer is needed the one at the front is replaced.
• Buffers that stay in this queue long enough migrate to the Aged list
– Aged list:
• Maintained as a list; any element may be used for replacement.
– Empty list
• When a new buffer is needed, check in the following order:
– Empty list, Aged list, LRU list
Thank You!
