1
IT105
Operating Systems
Memory Management
Harris Chikunya
2
Introduction
• Memory management is the functionality of an operating system
which handles or manages primary memory and moves processes
back and forth between main memory and disk during execution.
• Memory management keeps track of each and every memory
location, regardless of whether it is allocated to some process or
free.
• It checks how much memory is to be allocated to processes.
• It decides which process will get memory at what time.
• It tracks whenever some memory gets freed or unallocated and
correspondingly it updates the status.
3
Objectives
• By the end of this unit, you should be able to:
 Describe the various activities handled by the operating system while
performing the memory management function;
 Identify the logical and physical memory organisation;
 Discuss the memory protection against unauthorised access and
sharing;
 Manage swapping between main memory and disk when main
memory is too small to hold all processes;
 Summarise the principles of memory management as applied to paging
and segmentation.
4
Memory Management Concepts
5
Logical & Physical Address Space
• Logical Address
 Logical address is the address at which a memory location appears to reside from the
perspective of an executing application program
 It is the address generated by the CPU.
 The set of all logical addresses generated by a program is a logical-address space
• Physical Address
 A physical address (also real address or binary address) is the memory
address that is presented electronically (as a binary number) on the
computer's address-bus circuitry in order to enable the data bus to
access a particular storage cell of main memory.
 It is the address as seen by the memory unit, i.e. the address the MMU
places on the memory bus.
 The set of all physical addresses corresponding to the logical addresses is a
physical-address space.
• Logical and physical addresses are the same in compile-time and load-time
address-binding schemes;
• Logical (virtual) and physical addresses differ in the execution-time
address-binding scheme
6
Memory management Unit (MMU)
• Hardware device that maps virtual address to physical address.
• In MMU scheme, the value in the relocation register is added to
every address generated by a user process at the time it is sent to
memory
• The user program deals with logical addresses; it never sees the real
physical addresses
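The relocation-register scheme above can be sketched in a few lines of Python; the base value 14000 is illustrative, not taken from the slides:

```python
RELOCATION_REGISTER = 14000  # base value loaded by the OS (illustrative)

def mmu_translate(logical_address):
    """Add the relocation register to every CPU-generated (logical)
    address, as the MMU does, to obtain the physical address."""
    return RELOCATION_REGISTER + logical_address

print(mmu_translate(346))  # -> 14346
```

The user program works only with logical addresses such as 346; the MMU produces the physical address 14346 transparently.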
7
Dynamic Relocation
8
Swapping
• Swapping is the act of moving processes between memory and a
backing store.
• This is done to free up main memory.
• Swapping is necessary when there are more processes than
available memory.
• To move a program from fast-access memory to a slow-access
memory is known as “swap out”, and the reverse operation is known
as “swap in”.
9
Benefits of using swapping
1. Allows a higher degree of multiprogramming;
2. Allows dynamic relocation, i.e. if address binding is done at
execution time, a process can be swapped back into a different
location; with compile-time or load-time binding, it must be swapped
back into the same location;
3. Better memory utilisation;
4. Less wastage of CPU time on compaction; and
5. Can easily be applied on priority-based scheduling algorithms to
improve performance
10
Memory Protection
• Memory protection is required
 To protect Operating System from the user processes and
 To protect user processes from one another
• This protection is done by the Relocation-register & Limit-register Scheme
• The relocation register contains the value of the smallest physical
address, i.e. the base value, e.g. 30004
• The limit register contains the range of logical addresses, e.g. 12090;
each logical address must be less than the limit register
11
Memory Protection (cont.)
• Hardware Support: The relocation-register scheme is used to protect
user processes from each other, and to keep them from changing
operating system code and data.
• If a logical address is greater than the limit register, then there is an
addressing error and it is trapped.
• The limit register hence offers memory protection.
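The combined relocation-register and limit-register check can be sketched as follows, reusing the example values from the slides (base 30004, limit 12090):

```python
RELOCATION = 30004  # smallest physical address (base value)
LIMIT = 12090       # range of legal logical addresses

def access(logical_address):
    """Trap if the logical address is not below the limit register;
    otherwise relocate it to a physical address."""
    if not (0 <= logical_address < LIMIT):
        raise MemoryError("addressing error: trap to the OS")
    return RELOCATION + logical_address

print(access(100))  # -> 30104
```

Any address at or beyond 12090 raises the trap instead of reaching memory, which is exactly the protection the limit register provides.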
12
Memory Allocation
• The main memory must accommodate both the operating system
and the various user processes.
• We need to allocate different parts of the main memory in the most
efficient way possible.
• Main memory usually has two partitions:
 Low Memory -- Operating system resides in this memory.
 High Memory -- User processes are held in high memory.
• There are two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Non contiguous memory allocation
13
Memory Allocation hierarchy
14
Contiguous memory allocation
• Each process is contained in a single contiguous section of memory
• There are two methods namely:
 Fixed-Sized Partition memory allocation
 Variable-Sized Partition memory allocation
15
Fixed-Sized Partition
• Divide memory into fixed size partitions, where each partition has
exactly one process
• Any process whose size is less than or equal to the partition size can
be loaded into any available partition.
• The block of available memory is known as a Hole.
16
Example
• Fixed-size partition scheme
• Any program, no matter how small, occupies an entire partition.
• In our example, process B takes 150K of partition 2 (200K in size).
• We are left with 50K sized hole. This phenomenon, in which there is
wasted space internal to a partition, is known as internal
fragmentation.
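A minimal Python sketch of this fixed-partition scheme; the partition sizes other than the 200K one are assumed for illustration:

```python
partitions = [100, 200, 300]     # fixed partition sizes in KB (illustrative)
allocated = [None] * len(partitions)

def load(process, size):
    """Place a process in the first free partition that can hold it and
    return the internal fragmentation (unused KB inside the partition)."""
    for i, psize in enumerate(partitions):
        if allocated[i] is None and size <= psize:
            allocated[i] = process
            return psize - size
    return None  # no free partition is large enough

print(load("B", 150))  # B lands in the 200K partition -> 50K wasted
```

Process B cannot fit in the 100K partition, so it takes the 200K one and leaves a 50K internal hole, matching the slide's example.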
17
Fixed-Sized Partition
• Advantages:
 Simple to implement and little operating system overhead.
• Disadvantage:
 Inefficient use of memory due to internal fragmentation, i.e. memory
space unused within a partition is wasted (e.g. when process size <
partition size).
 Maximum number of active processes is fixed.
18
Variable-Sized Partition
• Divide memory into variable size partitions, depending upon the size
of the incoming process.
• When the process terminates, the partition becomes available for
another process.
• As processes complete and leave they create holes in the main
memory
 Hole – block of available memory; holes of various size are scattered
throughout memory.
19
Variable-Sized Partition
• Advantages:
 No internal fragmentation and more efficient use of main memory.
• Disadvantages:
 Inefficient use of processor due to the need for compaction to counter
external fragmentation.
20
Fragmentation
• As processes are loaded and removed from memory, the free
memory space is broken into little pieces.
• Over time, it can happen that processes cannot be allocated to free
memory blocks because the blocks are too small, and those blocks
remain unused.
• This problem is known as Fragmentation.
• Fragmentation is of two types:
 External fragmentation: total memory space is enough to satisfy a
request or to hold a process, but it is not contiguous, so it cannot
be used.
 Internal fragmentation: the memory block assigned to a process is
bigger than requested; some portion of the block is left unused and
cannot be used by another process.
21
Fragmentation (Cont.)
• The following diagram shows how fragmentation can cause waste of
memory
• External fragmentation can be reduced by compaction, i.e. shuffling
memory contents to place all free memory together in one large block. To
make compaction feasible, relocation must be dynamic.
• Internal fragmentation can be reduced by assigning the smallest partition
that is still large enough for the process.
22
Fragmentation Solution
1. Coalescing:
 the process of merging a hole adjacent to a process that is about to
terminate and free its allocated space.
 The new hole and any existing adjacent holes can then be viewed as a
single large hole and efficiently utilised.
2. Storage Compaction
 For utilising scattered holes, shuffle all occupied areas of memory to
one end and leave all free memory space as a single large block, which
can further be utilised.
3. Permit the logical address space of a process to be non-
contiguous. This is achieved through paging and segmentation.
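Coalescing (solution 1 above) can be sketched as a single pass over a sorted free list; each hole is a hypothetical (start, size) pair:

```python
def coalesce(holes):
    """Merge free holes that are adjacent in memory.
    Each hole is a (start_address, size) pair."""
    holes = sorted(holes)
    merged = [holes[0]]
    for start, size in holes[1:]:
        last_start, last_size = merged[-1]
        if last_start + last_size == start:          # holes touch: merge
            merged[-1] = (last_start, last_size + size)
        else:
            merged.append((start, size))
    return merged

# Two adjacent holes become one 150-unit hole; the third stays separate.
print(coalesce([(0, 100), (100, 50), (300, 20)]))  # -> [(0, 150), (300, 20)]
```

Storage compaction goes further: rather than merging only touching holes, it relocates all occupied areas so every hole ends up in one block.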
23
Non-contiguous memory allocation
• Processes are stored in non-contiguous memory locations
• There are different techniques used to load processes into memory,
as follows:
 Paging
 Segmentation
 Virtual memory paging(Demand paging)
24
Paging
• In a paged system, logical memory is divided into a number of fixed-size
chunks called pages.
• Physical memory is also pre-divided into blocks of the same fixed size
(the page size), called page frames.
• Page sizes (and hence frame sizes) are always powers of 2, and typically
range from 512 bytes to 8192 bytes per page.
• The frame size is kept the same as the page size for optimum utilisation
of main memory and to avoid external fragmentation.
25
Principle of operation
• Each process page is loaded to some memory frame.
• These pages can be loaded into contiguous frames in memory or
into non-contiguous frames also as shown in the Figure
• External fragmentation is eliminated, since any free frame can be
allocated to a process that needs it.
26
Page Allocation
• In variable-sized partitioning of memory, every time a process of size
n is to be loaded, it is important to know the best location from the list
of available/free holes.
• This dynamic storage allocation is necessary to increase efficiency and
throughput of system.
• Most commonly used strategies to make such selection are:
1. Best-fit Policy
• Allocating the hole in which the process fits most “tightly” i.e., the
difference between the hole size and the process size is minimum.
2. First-fit Policy
• Allocating the first available hole (in memory order) that is big
enough to accommodate the new process.
3. Worst-fit Policy
• Allocating the largest hole that will leave maximum amount of unused
space i.e., leftover space is maximum after allocation.
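The three placement policies can be sketched as follows; `holes` is a hypothetical list of free-hole sizes in memory order:

```python
def first_fit(holes, n):
    """Index of the first hole (in memory order) that can hold size n."""
    for i, h in enumerate(holes):
        if h >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the hole with the smallest leftover space after allocation."""
    fits = [(h - n, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the hole with the largest leftover space after allocation."""
    fits = [(h - n, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free-hole sizes in KB, memory order
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# -> 1 3 4: first-fit takes the 500K hole, best-fit 300K, worst-fit 600K
```

Best-fit and worst-fit must scan the whole list (unless it is kept sorted), while first-fit can stop at the first match.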
27
Page allocation Example
28
Address Translation Scheme
• The page address, called the logical address, is divided into 2 parts:
1. A page number (p) in logical address space; and
2. The displacement (d) (or offset) within the page
• The frame address is called physical address and is divided into:
1. Frame number (f) in physical address space; and
2. Page offset (d)
• A data structure called the page map table is used to keep track of the
mapping from each page of a process to a frame in physical memory.
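Because the page size is a power of 2, the split into page number and offset is just a division and a remainder. The sketch below assumes a 1024-byte page and a hypothetical page map table:

```python
PAGE_SIZE = 1024                  # bytes; a power of 2 (assumed)
page_table = {0: 5, 1: 2, 2: 7}   # hypothetical page -> frame mapping

def translate(logical_address):
    p = logical_address // PAGE_SIZE   # page number
    d = logical_address % PAGE_SIZE    # offset within the page
    f = page_table[p]                  # frame number from the page map table
    return f * PAGE_SIZE + d           # physical address

print(translate(1100))  # page 1, offset 76 -> frame 2 -> 2124
```

In hardware the same split is done by taking the high-order bits of the address as p and the low-order bits as d.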
29
Hardware Support for Paging
• This can be done in the following ways:
1. Page-table base register (PTBR)
2. Translation Look-aside Buffer (TLB)
30
Page-table base register (PTBR)
• Aka Paging Address Translation by Direct mapping
• This is direct mapping: a page-table entry points directly to the
physical memory frame.
• Disadvantage
 Decreased speed of translation, because the page table is kept in
primary storage and its size can be considerably large, which increases
instruction-execution time
31
Translation Look-aside Buffer (TLB)
• A.K.A Paging Address Translation with Associative Mapping
• Each entry in the TLB consists of two parts:
 a key (or tag) and
 a value.
• The TLB is used with page tables in the following way.
 The TLB contains only a few of the page-table entries.
 When a logical address is generated by the CPU, its page number is
presented to the TLB.
 If the page number is found (known as a TLB Hit), its frame number is
immediately available and is used to access memory.
 It takes only one memory access.
 If the page number is not in the TLB (known as a TLB miss), a memory
reference to the page table must be made.
32
Translation Look-aside Buffer (TLB)
 When the frame number is obtained, we can use it to access memory.
 It takes two memory accesses.
 In addition, the page number and frame number are stored in the TLB,
so that they will be found quickly on the next reference.
 If the TLB is already full of entries, the operating system must select
one for replacement by using page replacement algorithms.
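The hit/miss behaviour described on these two slides can be simulated with a small TLB; the two-entry size and the LRU replacement policy used here are illustrative choices:

```python
from collections import OrderedDict

PAGE_TABLE = {0: 8, 1: 3, 2: 9, 3: 1}   # hypothetical page -> frame
TLB_SIZE = 2

tlb = OrderedDict()                      # page -> frame, in LRU order
hits = misses = 0

def lookup(page):
    """Return the frame for a page, consulting the TLB first."""
    global hits, misses
    if page in tlb:                      # TLB hit: one memory access
        hits += 1
        tlb.move_to_end(page)
        return tlb[page]
    misses += 1                          # TLB miss: extra page-table access
    frame = PAGE_TABLE[page]
    if len(tlb) >= TLB_SIZE:             # TLB full: replace the LRU entry
        tlb.popitem(last=False)
    tlb[page] = frame
    return frame

for p in [0, 1, 0, 2, 0]:
    lookup(p)
print(hits, misses)  # -> 2 3
```

Re-referencing page 0 hits in the TLB twice, while the first touch of each page misses and costs a page-table access.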
33
Memory protection in Paged Environment
1. read-write or read-only bits
 These bits are kept in the page table.
 One bit can define a page to be read-write or read-only. This
protection bit can be checked to verify that no writes are being made
to a read-only page.
 An attempt to write to a read-only page causes a hardware trap to the
operating system (or memory-protection violation).
2. valid-invalid bit.
 When this bit is set to "valid," this value indicates that the associated
page is in the process' logical address space, and is a legal (or valid)
page.
 If the bit is set to "invalid," this value indicates that the page is not in
the process' logical-address space. Illegal addresses are trapped by
using the valid-invalid bit.
34
Research
• Structure of Page Tables
 Hierarchical Page table
 Hashed Page Tables
 Inverted Page Table
• Shared Pages
 Shared code
 Private code and data
35
Segmentation
• This scheme divides the logical address space into variable length
chunks, called segments, with no proper ordering among them.
• Each segment has a name and a length.
• Thus, the logical addresses are expressed as a pair of segment
number and offset within segment. <segment-number, offset>
• It allows a program to be broken down into logical parts according to
the user view of the memory, which is then mapped into physical
memory.
• A program is a collection of segments.
• A segment is a logical unit such as: main program, procedure,
function, method, object, local variables, global variables, common
block, stack, symbol table, arrays etc.
36
Segmentation Hardware
• The segment table maps the two-dimensional logical addresses to physical
addresses; each entry in the table has:
 base – contains the starting physical address where the segments
reside in memory.
 limit – specifies the length of the segment.
• Segment-table base register (STBR) points to the segment table’s
location in memory.
• Segment-table length register (STLR) indicates number of segments
used by a program.
37
Segmentation Hardware
• The segment number is used as an index into the segment table.
• The offset d of the logical address must be between 0 and the segment
limit. If it is not, we trap to the operating system: the logical
addressing attempt is beyond the end of the segment.
• If this offset is legal, it is added to the segment base to produce the
address in physical memory of the desired byte.
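This base/limit translation can be sketched as follows; the segment-table values are hypothetical:

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment, offset):
    """Check the offset against the segment limit, then add the base."""
    base, limit = segment_table[segment]
    if not (0 <= offset < limit):
        raise MemoryError("trap: logical addressing attempt beyond "
                          "end of segment")
    return base + offset

print(translate(2, 53))  # byte 53 of segment 2 -> 4300 + 53 = 4353
```

A reference such as byte 400 of segment 1 (limit 400) fails the bounds check and traps instead of reaching memory.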
38
Segmentation Example
39
End


  • 2. 2 Introduction • Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. • Memory management keeps track of each and every memory location, regardless of either it is allocated to some process or it is free. • It checks how much memory is to be allocated to processes. • It decides which process will get memory at what time. • It tracks whenever some memory gets freed or unallocated and correspondingly it updates the status.
  • 3. 3 Objectives • By the end of this unit, you should be able to:  Describe the various activities handled by the operating system while performing the memory management function;  Identify the logical and physical memory organisation;  Discuss the memory protection against unauthorised access and sharing;  Manage swapping between main memory and disk in case main storage is small to hold all processes;  Summarise the principles of memory management as applied to paging and segmentation;
  • 5. 5 Logical & Physical Address Space • Logical Address  Logical address is the address at which a memory location appears to reside from the perspective of an executing application program  It is the address generated by the CPU.  The set of all logical addresses generated by a program is a logical-address space • Physical Address  A physical address, also real address, or binary address, is the memory address, that is electronically (binary number) presented on the computer address bus circuitry in order to enable the data bus to access a particular storage cell of main memory.  Is the address as seen by the MMU.  The set of all physical addresses corresponding to the logical addresses is a physical-address space. • Logical and physical addresses are the same in compile-time and load-time address-binding schemes; • logical (virtual) and physical addresses differ in execution-time address-binding scheme
  • 6. 6 Memory management Unit (MMU) • Hardware device that maps virtual address to physical address. • In MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory • The user program deals with logical addresses; it never sees the real physical addresses
  • 8. 8 Swapping • Swapping is the act of moving processes between memory and a backing store. • This is done to free up available memory. • Swapping is necessary when there are more processes than available memory. • To move a program from fast-access memory to a slow-access memory is known as “swap out”, and the reverse operation is known as “swap in”.
  • 9. 9 Benefits of using swapping 1. Allows higher of multiprogramming; 2. Allows dynamic relocation. i.e., if address binding at execution time is being used we can swap in different location else in case of compile and load time bindings processes have to be moved to same location only; 3. Better memory utilisation; 4. Less wastage of CPU time on compaction; and 5. Can easily be applied on priority-based scheduling algorithms to improve performance
  • 10. 10 Memory Protection • Memory protection is required  To protect Operating System from the user processes and  To protect user processes from one another • This protection is done by the Relocation-register & Limit-register Scheme • The relocation register contains the value of smallest physical address i.e. base value e.g. 30004 • Limit register contains range of logical addresses e.g 12090 – each logical address must be less than the limit register
  • 11. 11 Memory Protection (cont.) • Hardware Support: The relocation-register scheme used to protect user processes from each other, and from changing operating system code and data. • If a logical address is greater than the limit register, then there is an addressing error and it is trapped. • The limit register hence offers memory protection.
  • 12. 12 Memory Allocation • The main memory must accommodate both the operating system and the various user processes. • We need to allocate different parts of the main memory in the most efficient way possible. • Main memory usually has two partitions:  Low Memory -- Operating system resides in this memory.  High Memory -- User processes are held in high memory. • There are two ways to allocate memory for user processes: 1. Contiguous memory allocation 2. Non contiguous memory allocation
  • 14. 14 Contiguous memory allocation • Each process is contained in a single contiguous section of memory • There are two methods namely:  Fixed-Sized Partition memory allocation  Variable-Sized Partition memory allocation
  • 15. 15 Fixed-Sized Partition • Divide memory into fixed size partitions, where each partition has exactly one process • any process whose size is less than or equal to the partition size can be loaded into any available partition. • The block of available memory is known as a Hole.
  • 16. 16 Example • Fixed-size partition scheme • Any program, no matter how small, occupies entire partition. • In our example, process B takes 150K of partition2 (200K) size). • We are left with 50K sized hole. This phenomenon, in which there is wasted space internal to a partition, is known as internal fragmentation.
  • 17. 17 Fixed-Sized Partition • Advantages:  Simple to implement and little operating system overhead. • Disadvantage:  Inefficient use of memory due to internal fragmentation i.e memory space unused within a partition is wasted (e.g. when process size < partition size  Maximum number of active processes is fixed.
  • 18. 18 Variable-Sized Partition • Divide memory into variable size partitions, depending upon the size of the incoming process. • When the process terminates, the partition becomes available for another process. • As processes complete and leave they create holes in the main memory  Hole – block of available memory; holes of various size are scattered throughout memory.
  • 19. 19 Variable-Sized Partition • Advantages:  No internal fragmentation and more efficient use of main memory. • Disadvantages:  Inefficient use of processor due to the need for compaction to counter external fragmentation.
  • 20. 20 Fragmentation • As processes are loaded and removed from memory, the free memory space is broken into little pieces. • It happens after sometimes that processes cannot be allocated to memory blocks considering their small size and memory blocks remains unused. • This problem is known as Fragmentation. • Fragmentation is of two types:  External fragmentation Total memory space is enough to satisfy a request or to reside a process in it, but it is not contiguous, so it cannot be used.  Internal fragmentation Memory block assigned to process is bigger. Some portion of memory is left unused, as it cannot be used by another process.
  • 21. 21 Fragmentation (Cont.) • The following diagram shows how fragmentation can cause waste of memory • External fragmentation can be reduced by compaction or shuffle memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic. • The internal fragmentation can be reduced by effectively assigning the smallest partition but large enough for the process.
  • 22. 22 Fragmentation Solution 1. Coalescing:  process of merging existing hole adjacent to a process that will terminate and free its allocated space.  Thus, new adjacent holes and existing holes can be viewed as a single large hole and can be efficiently utilised. 2. Storage Compaction  For utilising scattered holes, shuffle all occupied areas of memory to one end and leave all free memory space as a single large block, which can further be utilised. 3. Permit the logical address space of a process to be non- contiguous. This is achieved through paging and segmentation.
  • 23. 23 Non-contiguous memory allocation • Processes are stored in non-contiguous memory locations • There are different techniques used to load processes into memory, as follows:  Paging  Segmentation  Virtual memory paging(Demand paging)
  • 24. 24 Paging • In a paged system, logical memory is divided into a number of fixed sizes chunks’ called pages. • The physical memory is also predivided into same fixed sized blocks (as is the size of pages) called page frames. • The pages sizes (also the frame sizes) are always powers of 2, and vary between 512 bytes to 8192 bytes per page. • the size of a frame is kept the same as that of a page to have optimum utilization of the main memory and to avoid external fragmentation
  • 25. 25 Principle of operation • Each process page is loaded to some memory frame. • These pages can be loaded into contiguous frames in memory or into non-contiguous frames also as shown in the Figure • The external fragmentation is alleviated since processes are loaded into separate holes.
  • 26. 26 Page Allocation • In variable sized partitioning of memory every time when a process of sizes n is to be loaded, it is important to know the best location from the list of available/free holes. • This dynamic storage allocation is necessary to increase efficiency and throughput of system. • Most commonly used strategies to make such selection are: 1. Best-fit Policy • Allocating the hole in which the process fits most “tightly” i.e., the difference between the hole size and the process size is minimum. 2. First-fit Policy • Allocating the hole first available hole (according to memory order), which is big enough to accommodate the new process. 3. Worst-fit Policy • Allocating the largest hole that will leave maximum amount of unused space i.e., leftover space is maximum after allocation.
  • 28. 28 Address Translation Scheme • The page address is called local address is divided into 2 parts: 1. A page number (p) in logical address space; and 2. The displacement (d) (or offset) within the page table • The frame address is called physical address and is divided into: 1. Frame number (f) in physical address space; and 2. Page offset (d) • A data structure called page map table is used to keep track of the relation between a page of a process to a frame in physical memory.
  • 29. 29 Hardware Support for Paging • This can be done in the following ways: 1. Page-table base register (PTBR) 2. Translation Look-aside Buffer (TLB)
  • 30. 30 Page-table base register (PTBR) • Aka Paging Address Translation by Direct mapping • This is the case of direct mapping as page table sends directly to physical memory page • Disadvantage  Decreased speed of translation because page table is kept in primary storage and its size can be considerably large which increases instruction execution time
  • 31. 31 Translation Look-aside Buffer (TLB) • A.K.A Paging Address Translation with Associative Mapping • Each entry in the TLB consists of two parts:  a key (or tag) and  a value. • The TLB is used with page tables in the following way.  The TLB contains only a few of the page-table entries.  When a logical address is generated by the CPU, its page number is presented to the TLB.  If the page number is found (known as a TLB Hit), its frame number is immediately available and is used to access memory.  It takes only one memory access.  If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made.
  • 32. 32 Translation Look-aside Buffer (TLB)  When the frame number is obtained, we can use it to access memory.  It takes two memory accesses.  In addition, it stores the page number and frame number to the TLB, so that they will be found quickly on the next reference.  If the TLB is already full of entries, the operating system must select one for replacement by using page replacement algorithms.
  • 33. 33 Memory protection in Paged Environment 1. read-write or read-only bits  These bits are kept in the page table.  One bit can define a page to be read-write or read-only. This protection bit can be checked to verify that no writes are being made to a read-only page.  An attempt to write to a read-only page causes a hardware trap to the operating system (or memory-protection violation). 2. valid-invalid bit.  When this bit is set to "valid," this value indicates that the associated page is in the process' logical address space, and is a legal (or valid) page.  If the bit is set to "invalid," this value indicates that the page is not in the process' logical-address space. Illegal addresses are trapped by using the valid-invalid bit.
  • 34. 34 Research • Structure of Page Tables  Hierarchical Page table  Hashed Page Tables  Inverted Page Table • Shared Pages  Shared code  Private code and data
  • 35. 35 Segmentation • This scheme divides the logical address space into variable length chunks, called segments, with no proper ordering among them. • Each segment has a name and a length. • Thus, the logical addresses are expressed as a pair of segment number and offset within segment. <segment-number, offset> • It allows a program to be broken down into logical parts according to the user view of the memory, which is then mapped into physical memory. • A program is a collection of segments. • A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays etc.
  • 36. 36 Segmentation Hardware • Segment table maps two-dimensional physical addresses and each entry in the table has:  base – contains the starting physical address where the segments reside in memory.  limit – specifies the length of the segment. • Segment-table base register (STBR) points to the segment table’s location in memory. • Segment-table length register (STLR) indicates number of segments used by a program.
  • 37. 37 Segmentation Hardware
• The segment number is used as an index into the segment table.
• The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system: logical addressing attempt beyond end of segment.
• If the offset is legal, it is added to the segment base to produce the physical memory address of the desired byte.
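The lookup above can be sketched directly in Python. The example values come from the notes for this unit: segment 2 is 400 bytes long and begins at location 4300, so byte 53 of segment 2 maps to 4300 + 53 = 4353. The function name and table layout are illustrative:

```python
def translate_segment(segment_table, s, d):
    """Map a logical address <s, d> to a physical address."""
    base, limit = segment_table[s]   # each entry holds (base, limit)
    if not (0 <= d < limit):
        raise MemoryError("trap: offset beyond end of segment")
    return base + d                  # legal offset: add it to the base

# Segment 2: base 4300, limit 400 (from the worked example in the notes)
segment_table = {2: (4300, 400)}
```

`translate_segment(segment_table, 2, 53)` returns 4353, while an offset of 400 or more traps.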

Editor's Notes

  • #1: In the previous units, we have studied about introductory concepts of the OS, process management and deadlocks. In this unit, we will go through another important function of the Operating System - the memory management.
  • #2: Memory is central to the operation of a modern computer system. The operating system must: keep track of which parts of memory are currently being used and by whom; decide which processes are to be loaded into memory when memory space becomes available; and allocate and deallocate memory space as needed.
  • #5: A memory address identifies a physical location in computer memory, somewhat similar to a street address in a town. The address points to the location where data is stored, just like your address points to where you live. Logical address is generated by CPU while a program is running. The logical address is virtual address as it does not exist physically, therefore, it is also known as Virtual Address.
  • #6: The concept of a logical address space that is bound to a separate physical address space is the central to proper memory management.
  • #7: The user program never sees the real physical address space; it always deals with logical addresses. Thus we have two different types of addresses: logical addresses in the range (0 to max) and physical addresses in the range (R to R+max), where R is the value of the relocation register.
  • #8: Any operating system has a fixed amount of physical memory available. Often, applications need more than the physical memory installed on the system; for that purpose the operating system uses a swap mechanism: instead of keeping all data in physical memory, it uses a disk file.
  • #10: If a logical address is greater than the limit register, then there is an addressing error and it is trapped. The limit register hence offers memory protection.
  • #14: Processes are stored in contiguous memory locations, i.e., next to or together in sequence.
  • #15: A simple memory management scheme is to divide memory into n (possibly unequal) fixed-sized partitions, each of which can hold exactly one process. Initially, the whole memory is available for user processes, like one large block of available memory. The operating system keeps details of available memory blocks and occupied blocks in tabular form, and also keeps track of the memory requirements of each process. As processes enter the input queue, and when sufficient space is available, a process is allocated space and loaded. After its execution is over, it releases its occupied space and the OS fills this space with another process from the input queue. A block of available memory is known as a hole. Holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole that is large enough to accommodate it.
  • #16: It occurs because initially a process is loaded into a partition that is large enough to hold it (i.e., it is allocated memory that is internal to a partition but is not in use).
  • #17: Storage fragmentation occurs either because user processes do not completely occupy the allotted partition, or because a partition remains unused if it is too small to hold any process from the input queue.
  • #18: This scheme is also known as dynamic partitioning. In this scheme, boundaries are not fixed. Processes are allocated memory according to their requirements. There is no wastage, as the partition size is exactly the same as the size of the user process. Initially, when processes start, this wastage can be avoided, but later, when they terminate, they leave holes in the main storage. Other processes can occupy these holes, but eventually the holes become too small to accommodate new jobs, as shown in Figure 5.9.
  • #19: As time goes on and processes are loaded into and removed from memory, fragmentation increases and memory utilisation declines. This wastage of memory, which is external to the partitions, is known as external fragmentation.
  • #22: There is another possibility that holes are distributed throughout the memory.
  • #26: Now the question arises: which strategy is likely to be used? In practice, best-fit and first-fit are better than worst-fit. Both are efficient in terms of time and storage requirements. First-fit requires the least overhead in its implementation because of its simplicity. Worst-fit, on the other hand, sometimes leaves large holes that can later be used to accommodate other processes. Thus all these policies have their own merits and demerits.
  • #27: 1. Best fit: The allocator places a process in the smallest block of unallocated memory in which it will fit. For example, suppose a process requests 12KB of memory and the memory manager currently has a list of unallocated blocks of 6KB, 14KB, 19KB, 11KB, and 13KB. The best-fit strategy allocates 12KB of the 13KB block to the process. 2. Worst fit: The memory manager places a process in the largest block of unallocated memory available. The idea is that this placement will create the largest hole after the allocation, thus increasing the possibility that, compared to best fit, another process can use the remaining space. Using the same example, worst fit allocates 12KB of the 19KB block to the process, leaving a 7KB block for future use. 3. First fit: There may be many holes in memory, so the operating system, to reduce the amount of time it spends analysing the available spaces, begins at the start of primary memory and allocates memory from the first hole it encounters that is large enough to satisfy the request. Using the same example, first fit allocates 12KB of the 14KB block to the process.
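The three placement strategies in this note can be sketched in a few lines of Python. Hole sizes are in KB, taken from the example above; the function names and the choice to return an index are illustrative:

```python
def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole large enough, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Return the index of the largest hole, if it is large enough."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= request]
    return max(candidates)[1] if candidates else None

holes = [6, 14, 19, 11, 13]   # unallocated blocks, in KB
```

For a 12KB request, best fit picks the 13KB block, worst fit the 19KB block, and first fit the 14KB block, matching the worked example in the note.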
  • #28: Process of translation from logical to physical addresses: Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
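With a power-of-two page size, the split into page number p and offset d is a shift and a mask. This is a sketch assuming a 4KB page size; the page-table contents below are made-up illustrative values:

```python
PAGE_SIZE = 4096     # 2**12 bytes, so the offset d is the low 12 bits
OFFSET_BITS = 12

def translate_page(page_table, logical_addr):
    """Map a logical address to a physical address via the page table."""
    p = logical_addr >> OFFSET_BITS       # page number: index into the page table
    d = logical_addr & (PAGE_SIZE - 1)    # page offset within the page
    frame = page_table[p]                 # frame number gives the base address
    return frame * PAGE_SIZE + d          # combine base address with the offset

page_table = {0: 5, 1: 2}   # illustrative page -> frame mapping
```

For example, logical address 4100 is page 1, offset 4; page 1 lives in frame 2, so the physical address is 2 * 4096 + 4 = 8196.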
  • #30: To overcome this additional hardware support of registers and buffers can be used. This is explained in next section.
  • #32: Here, the page number is matched against all associative registers simultaneously. The percentage of times the page is found in the TLB is called the hit ratio. If it is not found, it is searched for in the page table and added to the TLB.
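The TLB behaviour described in this note can be modelled as a small cache consulted before the page table. This is a sketch only: the capacity and the FIFO replacement policy are assumptions, and a real TLB searches its entries in parallel in hardware rather than with a dictionary lookup:

```python
from collections import OrderedDict

class TLB:
    def __init__(self, capacity=4):
        self.entries = OrderedDict()   # page -> frame (models associative registers)
        self.capacity = capacity
        self.hits = 0
        self.lookups = 0

    def lookup(self, page, page_table):
        """Return the frame for a page, counting TLB hits and misses."""
        self.lookups += 1
        if page in self.entries:
            self.hits += 1                       # found in the TLB
            return self.entries[page]
        frame = page_table[page]                 # miss: fall back to the page table
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)     # evict the oldest entry (FIFO)
        self.entries[page] = frame               # add the translation to the TLB
        return frame

    def hit_ratio(self):
        return self.hits / self.lookups
```

Referencing the same page repeatedly drives the hit ratio up, since only the first reference has to consult the page table.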
  • #33: Memory protection in a paged environment is accomplished by protection bits that are associated with each frame. One more bit is attached to each entry in the page table: the operating system sets this bit for each page to allow or disallow accesses to that page.
  • #35: In the earlier section we saw the memory management scheme called paging. In general, a user or programmer prefers to view system memory as a collection of variable-sized segments rather than as a linear array of words. Segmentation is a memory management scheme that supports this view of memory.
  • #38: Consider five segments numbered 0 through 4, stored in physical memory as shown in the figure. The segment table has a separate entry for each segment, giving its starting address in physical memory (the base) and the length of that segment (the limit). For example, segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.