BASICS OF MEMORY MANAGEMENT
Background
• A typical instruction-execution cycle first fetches an instruction from memory. The instruction is then decoded and may cause operands to be fetched from memory. After the instruction has been executed on the operands, results may be stored back in memory.
• A program must be brought (from disk) into memory and placed within a process for it to run.
• Main memory and the registers are the only storage the CPU can access directly.
• The memory unit sees only a stream of addresses: an address plus a read request, or an address plus data and a write request.
• Registers are accessible within one CPU clock cycle (or less).
• Accesses to main memory are comparatively slow and may take a number of clock ticks to complete. This would require intolerable waiting by the CPU were it not for an intermediary fast memory cache built into most modern CPUs. The basic idea of the cache is to transfer chunks of memory at a time from main memory to the cache, and then to access individual memory locations one at a time from the cache.
• Protection of memory is required to ensure correct operation.
Why Is Memory Management Required?
• To allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by processes.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity during process execution.
Base and Limit Registers
• User processes must be restricted so that they only access memory locations that
"belong" to that particular process. This is usually implemented using a base
register and a limit register for each process.
• Every memory access made by a user process is checked against these two registers; if a memory access is attempted outside the valid range, a fatal error is generated.
• Two registers are used for each process: base and limit.
• Together, the pair of base and limit registers defines the process's legal address range.
• Base holds the starting address of the process's memory range.
• Limit holds the length (size) of that range.
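The check described above can be sketched in a few lines (the register values below are invented example numbers, not from the text):

```python
# Hypothetical base/limit register values for one process.
BASE, LIMIT = 300040, 120900   # process starts at 300040 and is 120900 bytes long

def access_ok(address):
    """Every user-mode memory access is checked against the register pair."""
    return BASE <= address < BASE + LIMIT

print(access_ok(300040))   # True  - first legal address of the process
print(access_ok(420940))   # False - one past the end: the hardware traps to the OS
```

In real hardware this comparison is done by the MMU on every access, with no software in the loop; only the OS, running in kernel mode, may load the base and limit registers.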
Address Binding
• Address binding in OS refers to the process of associating the symbolic
addresses used by programs with the actual physical memory addresses at
runtime. When a program is written, it uses symbolic addresses (such as
variable names and function names) to refer to different parts of memory or
code. However, these symbolic addresses are not directly usable by the
underlying hardware.
• Address binding is responsible for translating these symbolic addresses into
actual memory addresses that can be understood and accessed by the
hardware. It ensures that the program can correctly and efficiently interact with
the system’s memory.
Types of Address Binding in the Operating Systems
In operating systems, there are three primary types of address binding:
1. compile-time binding
2. load-time binding
3. runtime binding.
Each type differs in when and how the association between symbolic addresses
and physical memory addresses occurs.
Let’s explore each type in detail:
• Compile-Time Binding (Static Binding):
Compile-time binding, also known as static binding, associates symbolic
addresses with physical memory addresses during the compilation phase of a
program. The addresses are determined and fixed before the program is
executed. This type of binding is commonly used for global variables and
functions that have a fixed memory location throughout the program’s
execution.
• Load-Time Binding:
Load-time binding defers the address-binding process until the program is loaded into memory for execution.
During the loading phase, the linker and loader allocate memory addresses for
variables and functions based on their requirements and the availability of
memory.
The linker resolves external references and updates the symbolic addresses with
actual physical addresses.
Load-time binding provides more flexibility than compile-time binding, since
the addresses can be adjusted to the specific runtime conditions.
• Runtime Binding (Dynamic Binding):
Runtime binding, also known as dynamic binding, performs the address binding
process during program execution.
This type of binding allows for greater flexibility as memory addresses can be
dynamically allocated and deallocated as needed.
Runtime binding is often used in dynamic and object-oriented programming
languages where the memory layout can change during program execution.
In runtime binding, the program resolves symbolic addresses at runtime based
on the current state of the program.
Logical vs. Physical Address Space
• The concept of a logical address space that is bound to a separate physical
address space is central to proper memory management
– Logical address – generated by the CPU; also referred to as virtual
address
– Physical address – address seen by the memory unit
• The compile-time and load-time address-binding schemes generate identical
logical (virtual) and physical addresses. However, execution-time address
binding results in different logical and physical addresses.
• The logical address space is the set of all logical addresses generated by a
program.
• The physical address space is the set of all physical addresses corresponding
to those logical addresses.
• The runtime mapping from virtual to physical addresses is done by a
hardware device called the memory-management unit (MMU).
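A minimal sketch of what the simplest MMU scheme does, using a single relocation (base) register; the register value 14000 is an invented example:

```python
# The relocation register holds the start of the process's physical memory.
RELOCATION_REGISTER = 14000

def mmu_translate(logical_address):
    """The user program deals only in logical addresses; the MMU adds the base."""
    return logical_address + RELOCATION_REGISTER

print(mmu_translate(346))   # 14346 - the physical address seen by the memory unit
```

The user program never sees the value 14346; it works entirely with logical addresses in the range 0 to the process size, which is what makes execution-time binding and later relocation possible.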
Memory Allocation Techniques
There are two types:
• Contiguous memory allocation
• Non-contiguous memory allocation
Single Contiguous Memory Management
Main memory is usually divided into two partitions:
• The resident operating system, usually held in low memory together with the interrupt vector.
• User processes, held in high memory.
• Each process is contained in a single contiguous section of memory.
Advantages:
• Simplicity
Disadvantages:
• Available memory is not fully utilised.
• Limited job size (a job must be smaller than available memory).
Contiguous Allocation (Cont.)
• Multiple-partition allocation
– Degree of multiprogramming limited by number of partitions
– Hole – block of available memory; holes of various size are
scattered throughout memory
– When a process arrives, it is allocated memory from a hole large
enough to accommodate it
– Process exiting frees its partition, adjacent free partitions
combined
– Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
(Diagram: successive memory snapshots. Starting from the OS plus processes 5, 8, and 2, process 8 exits and leaves a hole; process 9 is allocated into part of that hole; process 10 is later allocated into the remaining free space.)
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
• First fit:
– Allocate the first hole that is big enough.
• Best fit:
– Allocate the smallest hole that is big enough; the entire list must be searched, unless it is ordered by size.
– Produces the smallest leftover hole.
• Worst fit:
– Allocate the largest hole; the entire list must also be searched.
– Produces the largest leftover hole.
First fit and best fit are better than worst fit in terms of speed and storage utilization.
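The three strategies can be sketched as one small simulator (the hole list is the one used in the exercise that follows):

```python
def allocate(holes, request, strategy):
    """Return the index of the hole chosen for a request (in KB), or None."""
    fits = [i for i, h in enumerate(holes) if h >= request]
    if not fits:
        return None
    if strategy == "first":
        return fits[0]                               # first adequate hole
    if strategy == "best":
        return min(fits, key=lambda i: holes[i])     # smallest adequate hole
    return max(fits, key=lambda i: holes[i])         # worst fit: largest hole

def place_all(holes, requests, strategy):
    """Serve successive requests; return the size of each hole as it was taken."""
    holes = list(holes)
    taken = []
    for r in requests:
        i = allocate(holes, r, strategy)
        taken.append(holes[i])
        holes[i] -= r          # the leftover stays behind as a smaller hole
    return taken

holes = [10, 4, 20, 18, 7, 9, 12, 15]
print(place_all(holes, [12, 10, 9], "first"))   # [20, 10, 18]
print(place_all(holes, [12, 10, 9], "best"))    # [12, 10, 9]
print(place_all(holes, [12, 10, 9], "worst"))   # [20, 18, 15]
```

Note how best fit leaves no leftover here only because every request happens to match a hole exactly; in general it leaves the smallest leftover, and worst fit the largest.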
1. Consider a swapping system in which memory consists of the following hole sizes
in memory order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB, and 15KB. Which
hole is taken for successive segment requests of 12KB, 10KB, 9KB for first fit?
Now repeat the question for best fit, and worst fit.
Solution:
• First fit: the 12KB request takes the 20KB hole (leaving 8KB); the 10KB request takes the 10KB hole; the 9KB request takes the 18KB hole (leaving 9KB).
• Best fit: the 12KB request takes the 12KB hole; the 10KB request takes the 10KB hole; the 9KB request takes the 9KB hole (all exact fits).
• Worst fit: the 12KB request takes the 20KB hole (leaving 8KB); the 10KB request takes the 18KB hole (leaving 8KB); the 9KB request takes the 15KB hole (leaving 6KB).
2.Given memory partitions of 12KB, 7KB, 15KB, 20KB, 9KB,
4KB, 10KB, and 18KB (in order), how would each of the first-fit
and best-fit algorithms place processes of 10KB, 12KB, 6KB,
and 9KB (in order)?
Fragmentation
• External Fragmentation – total memory space
exists to satisfy a request, but it is not
contiguous
• Internal Fragmentation – allocated memory
may be slightly larger than requested
memory; this size difference is memory
internal to a partition, but not being used
• Statistical analysis of first fit reveals that, given N allocated blocks,
another 0.5N blocks are lost to fragmentation
– i.e., up to one-third of memory may be unusable → the 50-percent rule
Solution to the External Fragmentation
• Reduce external fragmentation by compaction
– Shuffle memory contents to place all free memory
together in one large block.
Constraints:
1. Compaction is possible only if relocation is dynamic and is done at execution time.
2. I/O problem: a job involved in I/O must be kept in place in memory, or I/O must be done only into OS buffers.
• Note that the backing store has the same fragmentation problems.
Relocation Partitioned Memory Management
Compaction / Burping / Recompaction / Reburping
• Periodically combine all free areas between partitions into one contiguous area.
• Move the contents of all allocated partitions so that the allocated space, and hence the free space, becomes contiguous.
Paging
• Paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages.
• Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory.
• The main idea behind paging is to divide each process into pages.
• Main memory is likewise divided into fixed-size blocks called frames.
• One page of the process is stored in one of the frames of memory.
• When a process requests memory, the operating system allocates one or more frames to the process and maps the process's logical pages to the physical frames.
• The mapping between logical pages and physical frames is maintained by the page table, which is used by the memory-management unit to translate logical addresses into physical addresses.
Address Translation
• A page address is called a logical address and is represented by a page number and an offset:
Logical address = (page number, page offset)
• A frame address is called a physical address and is represented by a frame number and an offset:
Physical address = (frame number, page offset)
Physical address space = M words
Logical address space = L words
Page size = P words
Physical address size = log2 M = m bits
Logical address size = log2 L = l bits
Page offset size = log2 P = p bits
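Splitting the logical address and substituting the frame number can be sketched as follows; the page size and the page-table contents are invented example values:

```python
PAGE_SIZE = 1024                       # P = 1024 words, so p = 10 offset bits
page_table = {0: 5, 1: 2, 2: 7, 3: 0}  # page number -> frame number (example)

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE   # high-order (l - p) bits
    offset      = logical_address %  PAGE_SIZE   # low-order p bits
    frame       = page_table[page_number]        # page-table lookup
    return frame * PAGE_SIZE + offset            # frame number concatenated with offset

print(translate(2060))   # page 2, offset 12 -> frame 7 -> 7*1024 + 12 = 7180
```

Because the page size is a power of two, the split is just a matter of taking the high-order and low-order bits of the address; no arithmetic beyond the concatenation is needed in hardware.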
Mapping from the Page Table to Main Memory
A data structure called the page map table is used to keep track of the relation between a page of a process and a frame in physical memory.
• When the system allocates a frame to a page, it translates the logical address into a physical address and creates an entry in the page table, to be used throughout execution of the program.
• When a process is to be executed, its pages are loaded into any available memory frames.
• When a computer runs out of RAM, the operating system (OS) moves idle or unwanted pages of memory to secondary storage to free up RAM for other processes, and brings them back when they are needed by the program.
Advantages and Disadvantages of Paging
• Paging reduces external fragmentation, but still suffers from internal fragmentation.
• Paging is simple to implement and is regarded as an efficient memory management technique.
• Because pages and frames are of equal size, swapping becomes very easy.
• The page table requires extra memory space, so paging may not be a good choice for a system with a small amount of RAM.
Segmentation
• Segmentation is a memory management technique in which memory is divided into variable-size parts. Each part is known as a segment, which can be allocated to a process.
• A program is a collection of segments.
– A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
• The details about each segment are stored in a table called the segment table. The segment table is itself stored in one (or more) of the segments.
• A segment table entry contains mainly two pieces of information about a segment:
• Base: the base address of the segment.
• Limit: the length of the segment.
Why Is Segmentation Required?
• Paging is closer to the operating system than to the user.
• It divides all processes into pages regardless of the fact that a process may have related parts of functions that need to be loaded in the same page.
• The operating system does not care about the user's view of the process.
• It may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of the system.
• It is better to have segmentation, which divides the process into segments.
• Each segment contains related items: for example, the main function can be placed in one segment and the library functions in another.
Advantages of Segmentation
• No internal fragmentation.
• The average segment size is larger than the typical page size.
• Less overhead.
• It is easier to relocate segments than the entire address space.
• The segment table is smaller than the page table used in paging.
Disadvantages
• It can suffer from external fragmentation.
• It is difficult to allocate contiguous memory to variable-sized partitions.
• Costly memory management algorithms.
Difference between Paging and Segmentation

Sr No. | Paging | Segmentation
1 | Non-contiguous memory allocation | Non-contiguous memory allocation
2 | Divides the program into fixed-size pages | Divides the program into variable-size segments
3 | The OS is responsible | The compiler is responsible
4 | Faster than segmentation | Slower than paging
5 | Closer to the operating system | Closer to the user
6 | Suffers from internal fragmentation | Suffers from external fragmentation
7 | No external fragmentation | No internal fragmentation
8 | Logical address is divided into page number and page offset | Logical address is divided into segment number and segment offset
9 | A page table maintains the page information | A segment table maintains the segment information
10 | A page table entry holds the frame number and some flag bits describing the page | A segment table entry holds the base address of the segment and some protection bits for the segment
Segmentation with Paging / Segmented Paging
• Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.
• In segmented paging, main memory is divided into variable-size segments, which are further divided into fixed-size pages.
• Pages are smaller than segments.
• Each segment has its own page table, which means every program has multiple page tables.
• The logical address is represented as a segment number, a page number, and a page offset.
• Segment number → selects the appropriate segment.
• Page number → selects the exact page within the segment.
• Page offset → used as an offset within the page frame.
• Each page table contains information about every page of its segment.
• The segment table contains information about every segment.
• Each segment table entry points to the base of that segment's page table, and every page table entry maps to one of the pages within the segment.
Mapping a Logical Address to a Physical Address
• The CPU generates a logical address, which is divided into two parts: segment number and segment offset.
• The segment offset must be less than the segment limit. The offset is further divided into a page number and a page offset.
• To locate the entry in the page table, the page number is added to the page table's base address.
• The frame number found there, combined with the page offset, is mapped to main memory to get the desired word in the page of that segment of the process.
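The steps above can be sketched as follows; the page size, segment limits, and all table contents are invented example values:

```python
PAGE_SIZE = 256

# Segment table: segment number -> (that segment's page table, segment limit).
segment_table = {
    0: ({0: 9, 1: 4}, 512),   # segment 0 spans two pages
    1: ({0: 1}, 200),         # segment 1 fits in one page
}

def translate(segment, offset):
    page_table, limit = segment_table[segment]
    if offset >= limit:                   # offset must be below the segment limit
        raise MemoryError("trap: segment-limit violation")
    page_number = offset // PAGE_SIZE     # the offset splits into a page number...
    page_offset = offset % PAGE_SIZE      # ...and a page offset
    frame = page_table[page_number]       # lookup in this segment's page table
    return frame * PAGE_SIZE + page_offset

print(translate(0, 300))   # page 1, offset 44 -> frame 4 -> 4*256 + 44 = 1068
```

The limit check happens before the paging step, which is how segmented paging preserves segmentation's protection semantics while still allocating physical memory in fixed-size frames.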
Advantages of Segmented Paging
• It reduces memory usage.
• Page table size is limited by the segment size.
• The segment table has only one entry per actual segment.
• There is no external fragmentation.
• It simplifies memory allocation.
Disadvantages of Segmented Paging
• Internal fragmentation is still present.
• The complexity level is much higher than with plain paging.
• Page tables need to be stored contiguously in memory.
Thrashing
• The state in which the system spends most of its time servicing page faults, while the actual execution of the process is negligible, is called thrashing.
• In the initial stage, as the degree of multiprogramming is increased, CPU utilization rises steeply up to a point λ. λ is the stage at which memory is full and swapping (in/out) is just about to start.
• Beyond point λ, if the degree of multiprogramming is increased further, CPU utilization falls sharply. This situation is thrashing.
• In simple words, beyond a certain point of CPU utilization, continuous page faulting is called thrashing.
Virtual Memory
• A computer can address more memory than the amount physically installed on the system. This extra memory is called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM.
• The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by using disk. Second, it allows us to have memory protection, because each virtual address is translated to a physical address.
• The operating system loads components of several processes into main memory, as opposed to loading a single large process there.
• By doing this, the degree of multiprogramming is enhanced, which increases CPU utilization.
Advantages of Virtual Memory
• The degree of multiprogramming is increased.
• Users can run large applications with less physical RAM.
• There is no need to buy more RAM.
Disadvantages of Virtual Memory
• The system becomes slower, since swapping takes time.
• Switching between applications takes more time.
• The user has less hard disk space available for other use.
Demand Paging
Demand paging is a technique used in virtual memory systems where pages are brought into main memory only when required, or demanded, by the CPU. The component that does this is hence also called a lazy swapper, because the swapping of pages is done only when required by the CPU.
A page is copied into main memory when it is demanded, i.e., when a page fault occurs. Various page replacement algorithms are used to determine which pages will be replaced.
Frame Allocation
• Demand paging is used to implement virtual memory, an essential component of operating systems. Demand paging requires both a page-replacement mechanism and a frame allocation algorithm. When there are multiple processes, frame allocation techniques determine how many frames to give each process.
• The CPU ultimately works with physical addresses, and each frame corresponds to a region of physical memory; for each resident page, a frame must be allocated.
Frame Allocation Constraints
• The number of frames allocated cannot exceed the total number of frames.
• Each process should be given a set minimum number of frames.
• When fewer frames are allocated, the page fault rate increases and process execution becomes less efficient.
Frame Allocation Algorithms
There are three types of frame allocation algorithms in operating systems:
1) Equal frame allocation
Take the number of frames and the number of processes, and divide the number of frames by the number of processes; each process is given that many frames.
2) Proportional frame allocation
The number of frames is chosen based on process size: a bigger process is allocated more frames, and a smaller process is allocated fewer frames by the operating system.
3) Priority frame allocation
Frames are distributed according to process priority: a high-priority process that needs more frames is given additional frames, while lower-priority processes are executed later.
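Proportional allocation can be sketched as giving process i with size s_i roughly (s_i / S) × m of the m available frames, where S is the total size; the sizes below are invented example values, and giving rounding leftovers to the largest process is just one possible policy:

```python
def proportional_allocation(sizes, total_frames):
    """Allocate frames in proportion to process size (a sketch, not an OS API)."""
    S = sum(sizes)
    alloc = [total_frames * s // S for s in sizes]    # floor of each proportion
    # Hand out frames lost to flooring, largest process first (one policy choice).
    for i in sorted(range(len(sizes)), key=lambda i: -sizes[i]):
        if sum(alloc) == total_frames:
            break
        alloc[i] += 1
    return alloc

print(proportional_allocation([10, 127], 62))   # [4, 58]
```

With sizes 10 and 127 and 62 frames, the raw proportions floor to 4 and 57; the one remaining frame goes to the larger process, so every frame is used.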
Page Replacement Algorithms
There are three types of page replacement algorithms:
– First In First Out (FIFO) page replacement
– Optimal (OPT) page replacement
– Least Recently Used (LRU) page replacement
• If the page being accessed is found among the frames, this is known as a page hit.
• If the page being accessed is not found among the frames, this is known as a page fault.
• When a page fault occurs and all frames are full, a page replacement algorithm comes into the picture.
• We want the lowest page-fault rate on both first access and re-access.
• An algorithm is evaluated by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.
– The string contains just page numbers, not full addresses.
– Repeated access to a page already in memory does not cause a page fault.
– Results depend on the number of frames available.
• In all our examples, the reference string of referenced page numbers is
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
First-In-First-Out (FIFO) Algorithm
• Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
• With 3 frames (3 pages can be in memory at a time per process): 15 page faults
• Performance can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
• Adding more frames can cause more page faults!
• This is Belady's Anomaly.
• How to track the ages of pages?
• Just use a FIFO queue.
Optimal Algorithm
• The principle: replace the page that will not be used for the longest period of time in the future.
• In other words, once all the frames are filled, look ahead at the future references and, among the pages currently in the frames, choose as the victim the one whose next use is farthest away (or that is never used again).
Least Recently Used (LRU) Algorithm
• Uses past knowledge rather than future knowledge.
• Replace the page that has not been used for the longest period of time.
• Associate a time of last use with each page.
• 12 faults: better than FIFO but worse than OPT.
• Generally a good algorithm and frequently used.
• But how to implement it?
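As a cross-check on the fault counts quoted in these slides, all three policies can be simulated on the reference string with 3 frames; every miss, including the initial fills, counts as a fault (a sketch, not how an OS actually implements them):

```python
def page_faults(refs, nframes, policy):
    """Count page faults for FIFO, LRU, or OPT on a reference string."""
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p in frames:
            if policy == "LRU":               # refresh recency on a hit
                frames.remove(p)
                frames.append(p)
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(p)                  # free frame: just fill it
            continue
        if policy in ("FIFO", "LRU"):
            frames.pop(0)    # FIFO: front = oldest arrival; LRU: front = least recent
        else:                # OPT: evict the page used farthest in the future
            future = refs[i + 1:]
            victim = max(frames, key=lambda q: future.index(q)
                         if q in future else len(future))
            frames.remove(victim)
        frames.append(p)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(page_faults(refs, 3, "FIFO"))   # 15
print(page_faults(refs, 3, "LRU"))    # 12
print(page_faults(refs, 3, "OPT"))    # 9
```

The same list doubles as the FIFO queue and the LRU stack: for FIFO, hits leave the order untouched, so the front is the oldest arrival; for LRU, hits move a page to the back, so the front is always the least recently used page.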
Exercise
• Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a memory with three frames, and calculate the number of page faults using the FIFO (First In First Out), Optimal, and LRU page replacement algorithms.