2. BACKGROUND
❖ A program must be brought (from disk) into memory and placed within a
process for it to be run.
❖ Main memory and registers are the only storage the CPU can access directly.
❖ The memory unit sees only a stream of addresses plus read requests, or
addresses plus data and write requests.
3. ADDRESS BINDING
❖ Usually a program resides on a disk as a binary executable file.
To be executed, the program must be brought into memory and
placed within a process.
❖ Depending on the memory management in use, the process may
be moved between disk and memory during its execution.
❖ The processes on the disk that are waiting to be brought into
memory for execution form the input queue.
❖ The normal procedure is to select one of the processes in the
input queue and to load that process into memory.
❖ As the process is executed, it accesses instructions and data
from memory.
❖ Eventually the process terminates and its memory space is
declared available.
❖ Most systems allow a user process to reside in any part of
physical memory.
4. ADDRESS BINDING
❖ Although the address space of the computer starts at 0000,
the first address of the user program need not be 0000.
❖ A user program goes through several steps before it is run.
❖ Addresses may be represented in different ways during these
steps.
1) Addresses in the source program are generally symbolic.
2) A compiler will bind these symbolic addresses to relocatable
addresses.
3) The linkage editor or loader will in turn bind the relocatable
addresses to absolute addresses.
4) Each binding is a mapping from one address space to another
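❖ For example (illustrative textbook values): the symbolic address count in the
source code might be bound by the compiler to the relocatable address "14 bytes
from the beginning of this module", and by the linkage editor or loader to the
absolute address 74014 if the module is loaded starting at address 74000.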
5. DYNAMIC LOADING
❖ The entire program and all data of a process must be in physical memory
for the process to execute.
❖ The size of a process is thus limited to the size of physical memory.
❖ To obtain better memory-space utilization, we can use dynamic loading.
❖ With dynamic loading, a routine is not loaded until it is called.
❖ All routines are kept on disk in a relocatable load format.
❖ The main program is loaded into memory and is executed.
❖ When a routine needs to call another routine, the calling routine first
checks to see whether the other routine has been loaded.
❖ If not, the relocatable linking loader is called to load the desired routine
into memory and to update the program's address tables to reflect this
change.
❖ Then control is passed to the newly loaded routine.
❖ The advantage of dynamic loading is that an unused routine is never
loaded.
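❖ A minimal sketch of this check-then-load pattern on a POSIX system, using
dlopen/dlsym as a stand-in for the relocatable linking loader (the file name
libroutine.so and the symbol name routine are assumptions for illustration):

    /* build with: cc main.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef void (*routine_fn)(void);

    static void *handle = NULL;        /* remembers whether the routine is loaded */

    void call_routine(void) {
        if (handle == NULL)            /* first check whether it has been loaded */
            handle = dlopen("./libroutine.so", RTLD_NOW);   /* load on first use */
        if (handle == NULL) { fprintf(stderr, "%s\n", dlerror()); return; }
        routine_fn routine = (routine_fn)dlsym(handle, "routine");
        if (routine)
            routine();                 /* control passes to the newly loaded routine */
    }

    int main(void) {
        call_routine();   /* first call: loads the library */
        call_routine();   /* second call: already loaded, no reload */
        return 0;
    }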
6. DYNAMIC LINKING
❖ The O.S. may support both dynamic linking and static linking.
❖ In static linking, the library routines needed for program
execution are combined by the loader into the binary program image.
❖ Every program on the system must then include a copy of its library
routines, so memory space is wasted.
❖ In dynamic linking, a stub is included in the image for each library-routine
reference.
❖ The stub is a small piece of code that indicates how to locate the
appropriate memory-resident library routine, or how to load the library if
the routine is not already present.
❖ If the routine is not in memory, the stub loads it into memory and
replaces itself with the address of the routine.
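❖ A minimal sketch of a stub in C: the call site goes through a function pointer
that initially points at the stub; on the first call the stub locates the routine
(here it is simply linked in, an assumption for illustration) and replaces itself
with the routine's address:

    #include <stdio.h>

    static void real_routine(void) { puts("library routine running"); }

    static void stub(void);
    static void (*routine_ptr)(void) = stub;   /* call site: routine_ptr() */

    static void stub(void) {
        /* locate (or load) the memory-resident routine, then patch the
           reference so later calls go directly to the routine */
        routine_ptr = real_routine;
        routine_ptr();                          /* forward the first call */
    }

    int main(void) {
        routine_ptr();   /* first call executes the stub, which patches itself */
        routine_ptr();   /* subsequent calls go straight to the routine */
        return 0;
    }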
7. OVERLAYS
❖ Many times the total program size exceeds the physical memory size;
we cannot run a program that needs more memory than is available.
❖ So we can make use of overlays. The instructions and data
that are currently required are kept in memory, and those that are
not required are kept in files called overlays.
❖ These files are loaded only when they are required. When an
overlay file is loaded, the contents of the previous overlay file are
lost, because the same memory locations are used to load the new
overlay file.
❖ The maximum memory requirement is determined by the size of the
largest overlay file.
❖ Overlays are used to reduce the main memory
requirement of a program.
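❖ A minimal data-only sketch of the idea, assuming two overlay files named
pass1.ovl and pass2.ovl (hypothetical names): both are loaded, in turn, into the
same fixed buffer, so loading the second destroys the contents of the first:

    #include <stdio.h>

    static char overlay_area[4096];   /* the shared memory region for overlays */

    /* load an overlay file into the shared region, overwriting whatever
       overlay occupied it before; returns bytes read, or -1 on error */
    long load_overlay(const char *path) {
        FILE *f = fopen(path, "rb");
        if (!f) return -1;
        long n = (long)fread(overlay_area, 1, sizeof overlay_area, f);
        fclose(f);
        return n;
    }

    int main(void) {
        load_overlay("pass1.ovl");    /* phase 1 runs using this overlay */
        load_overlay("pass2.ovl");    /* phase 2: pass1's contents are now lost */
        return 0;
    }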
8. LOGICAL VS. PHYSICAL ADDRESS
SPACE
❖ The address generated by the CPU is commonly
referred to as a logical address (virtual address).
❖ An address seen by the memory unit, i.e. the
address where the actual program and data are stored
in memory, is called a physical address.
❖ Logical and physical addresses are the same in
compile-time and load-time address-binding
schemes.
❖ Logical (virtual) and physical addresses differ in the
execution-time address-binding scheme.
9. ❖ The logical address space is the set of all logical
addresses generated by a program.
❖ The physical address space is the set of all physical
addresses corresponding to those logical addresses.
❖ The run-time mapping from virtual to physical
addresses is done by a hardware device called the
MMU (memory-management unit).
❖ The user program generates only logical addresses;
the logical addresses range from 0 to some maximum.
❖ The logical address is added to the base register
(relocation register) and the physical address is
generated.
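❖ A minimal sketch of the MMU's relocation step (the base value 14000 is an
assumed example): every logical address is added to the relocation register to
form the physical address, so logical address 346 maps to physical 14346:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t relocation_register = 14000;   /* assumed load address */

    uint32_t mmu_translate(uint32_t logical) {
        return relocation_register + logical;       /* physical = base + logical */
    }

    int main(void) {
        printf("%u\n", mmu_translate(346));         /* prints 14346 */
        return 0;
    }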
11. SWAPPING
❖ A process must be in memory to be executed.
❖ A process can be swapped temporarily out of memory to a backing store and then
brought back into memory for continued execution.
❖ E.g., assume a multiprogramming environment with a round-robin CPU scheduling
algorithm. When a time quantum expires, the memory manager will start to swap
out the process that just finished and to swap another process into the memory
space that has been freed.
12. ❖ Swapping requires a backing store. The backing store is
commonly a fast disk.
❖ It must be large enough to accommodate copies of all
memory images for all users, and it must provide direct
access to these memory images.
❖ The system maintains a ready queue consisting of all
processes whose memory images are on the backing store
or in memory and are ready to run.
❖ A special system variable is used to indicate which process is
in memory.
❖ When the scheduler selects a process to run, the dispatcher is
called. The dispatcher checks whether that process is in memory.
❖ If not, the current process in memory is swapped out
and the desired one is swapped in.
14. OVERLAPPED SWAPPING
❖ Generally, while swapping is going on, the CPU sits idle.
❖ To reduce the context-switch time, overlapped swapping may be
used.
❖ Swapping of one process is overlapped with execution of
another.
❖ In this scheme, while one program is executing, the previous
program is swapped out and the next program is swapped in.
16. CONTIGUOUS MEMORY ALLOCATION
❖ In multiprogramming, more than one program resides in the main
memory of the system. There are many jobs with different memory
requirements.
❖ Memory management has to select a few of them and allocate
memory to them. Memory management thus faces the problem of deciding
which jobs to select.
❖ Usually several user processes reside in memory at the same time.
We therefore need to consider how to allocate available memory to
the processes that are in the input queue waiting to be brought into
memory.
❖ Memory is divided into a number of regions, each of which may contain
one job.
❖ The number of regions in memory decides the degree of multiprogramming
(the degree of multiprogramming is the number of jobs that can be in main
memory at a time in a multiprogramming system).
17. CONTIGUOUS MEMORY ALLOCATION
❖ There are two approaches used for memory management:
1) Single partition allocation
2) Multiple partition allocation
❖ Depending upon how and when partitions are created, there may be two
types of approaches:
a) Multiple contiguous fixed partition allocation
Regions or partitions in this scheme are static or fixed and can never be
changed. The common scheme of this kind is MFT
(multiprogramming with a fixed number of tasks).
b) Multiple contiguous variable partition allocation
Partitions in this scheme are dynamic and can change, e.g. MVT
(multiprogramming with a variable number of tasks).
18. SINGLE PARTITION ALLOCATION
❖ The memory is divided into two sections:
1) one section for the O.S
2) a second section for the user program
❖ The resident operating system is usually held in low memory along with the
interrupt vector.
❖ User processes are then held in high memory.
[Figure: memory layout with the O.S at address 0 (low memory) and the user
program above it (high memory)]
19. SINGLE PARTITION ALLOCATION
❖ In order to provide a contiguous area of free storage for the user program, the O.S
is loaded at one end (usually the low, or bottom, part of memory).
❖ The relocation register points to the first location of the user's partition.
❖ The user's logical address is adjusted by hardware to produce the physical address.
❖ A new (user) program is loaded only when the O.S passes control to
it.
❖ After receiving control, it runs until it completes or terminates
due to I/O or some error.
❖ When this program has completed or terminated, the O.S may load another
program for execution.
❖ Protection of the O.S from the user program is achieved through a limit register.
❖ The limit register is set to the highest address occupied by O.S code. Every
memory address generated by the user process to access a memory location is
compared with the limit register.
❖ If the address does not exceed the limit register (i.e., it falls within the O.S
area), a trap is generated and permission is denied.
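❖ A minimal sketch of this protection check, assuming the O.S occupies
addresses 0..10240 so the limit register holds 10240: any user-generated address
at or below the limit raises a trap (modelled here as an error exit):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint32_t limit_register = 10240;   /* highest address used by O.S code */

    uint32_t check_access(uint32_t addr) {
        if (addr <= limit_register) {          /* user address falls in O.S area */
            fprintf(stderr, "trap: address %u denied\n", addr);
            exit(EXIT_FAILURE);                /* stand-in for the hardware trap */
        }
        return addr;                           /* access permitted */
    }

    int main(void) {
        check_access(20000);                   /* permitted: above the O.S area */
        check_access(512);                     /* traps: inside the O.S area */
        return 0;
    }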
21. SINGLE PARTITION ALLOCATION
❖ Advantages:
1) It is the simplest method.
2) Sharing of data and code is very easy.
❖ Disadvantages:
1) It only supports a single-process environment (e.g., MS-DOS).
2) Poor utilization of the CPU: the CPU sits idle during I/O
operations.
3) Poor utilization of memory: since only one program
resides in memory at a time, it may not occupy the whole
memory.
22. MULTIPLE PARTITION ALLOCATION
METHOD
❖ In a multiprogramming environment, several programs reside in
memory at a time.
❖ To support multiprogramming, main memory is divided into
several partitions, each of which is allocated to a single
process.
❖ The number of programs residing in memory is bounded by the
number of partitions.
❖ Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
❖ Depending upon how and when partitions are created there
may be two types of approaches.
a) Multiple contiguous fixed (static) partition allocation
b) Multiple contiguous variable (dynamic) partition allocation
23. A) MULTIPLE CONTIGUOUS FIXED PARTITION
ALLOCATION
❖ In this method memory is divided into several partitions of
fixed size; the sizes never change.
❖ Each partition holds one process.
❖ It is also known as multiprogramming with a fixed number of tasks
(MFT).
❖ Jobs are assigned to these fixed partitions by either of the
following methods:
1) There is a separate ready queue for each region.
2) All the jobs are placed in only one queue.
24. A) MULTIPLE CONTIGUOUS FIXED PARTITION
ALLOCATION
1) There is a separate ready queue for each region
❖ A region must be big enough to satisfy the memory requirement of every job in
its own queue.
❖ In this scheme the system will have to make a preliminary pass over the common
ready queue in which all the jobs were initially placed, and then assign each of
these jobs to the queue of the appropriate region.
❖ When a job arrives, it is put in the appropriate queue; it waits there if no
memory partition is available.
❖ If a job of 4K arrives and there is no partition of size 4K, it is put in
the queue of a larger partition (e.g., 6K); a sketch of this queue selection
follows this slide.
2) All the jobs are placed in only one queue
❖ Whenever a partition is free, a job selected by an FCFS, round-robin, or
priority algorithm is assigned to the partition.
❖ If, while one job is executing, a higher-priority job arrives, the executing job
in the appropriate partition is swapped out (rolled out) and the high-priority job
is swapped (rolled) in.
❖ After this job is over, the swapped-out job is generally brought back into the
same region, so that the job does not have to undergo relocation
and calculation of a new address space.
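❖ A minimal sketch of the queue selection described above, with assumed
partition sizes of 2K, 6K, and 12K: a job is queued at the smallest fixed
partition that can hold it, so a 4K job lands in the 6K partition's queue:

    #include <stdio.h>

    static const int partition_kb[] = {2, 6, 12};   /* fixed regions, ascending */
    enum { NPART = 3 };

    /* return the index of the smallest partition >= job size, or -1 if none */
    int pick_queue(int job_kb) {
        for (int i = 0; i < NPART; i++)
            if (partition_kb[i] >= job_kb)
                return i;
        return -1;                /* job is larger than every partition */
    }

    int main(void) {
        printf("4K job -> queue of the %dK partition\n",
               partition_kb[pick_queue(4)]);        /* prints 6K */
        return 0;
    }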
25. There is a separate ready queue for each region
27. A) MULTIPLE CONTIGUOUS FIXED PARTITION
ALLOCATION
❖ If, after memory is assigned to a job, the memory requirements of the job
increase, then one of the following situations takes place:
1) An error message is given to the user and the job is terminated.
2) Control returns to the user along with the error, and the user decides either
to modify the job or exit.
3) The system can:
- swap out the job,
- wait for a larger region,
- swap it into the larger region,
- and continue.
❖ Overall performance of MFT depends upon correct selection of region sizes. There
are two major things to be decided:
- how many regions to have;
- what the size of each region will be.
The decision regarding the size of the regions is generally based on an educated
guess of the memory requirements of the input programs.
28. A) MULTIPLE CONTIGUOUS FIXED PARTITION
ALLOCATION
❖ For this, one may have to study the kinds and types of jobs that will be
submitted to the system.
❖ Generally the set of regions is searched to determine which region is best to
allocate.
❖ Strategies used to select a free region from the set of available regions (a
sketch of these strategies follows this list):
1) First-fit: Allocate the first hole that is big enough.
2) Best-fit: Allocate the smallest hole that is big enough; must search the entire
list, unless it is ordered by size. Produces the smallest leftover hole.
3) Worst-fit: Allocate the largest hole; must also search the entire list. Produces
the largest leftover hole.
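❖ A minimal sketch of the three strategies over a free-hole list (hole sizes in KB
are illustrative assumptions); each function returns the index of the chosen hole,
or -1 if no hole is big enough:

    #include <stdio.h>

    typedef struct { int start_kb, size_kb; } Hole;

    int first_fit(const Hole *h, int n, int req) {
        for (int i = 0; i < n; i++)
            if (h[i].size_kb >= req) return i;   /* stop at first big-enough hole */
        return -1;
    }

    int best_fit(const Hole *h, int n, int req) {
        int best = -1;                           /* must scan the entire list */
        for (int i = 0; i < n; i++)
            if (h[i].size_kb >= req && (best < 0 || h[i].size_kb < h[best].size_kb))
                best = i;                        /* smallest leftover hole */
        return best;
    }

    int worst_fit(const Hole *h, int n, int req) {
        int worst = -1;                          /* must scan the entire list */
        for (int i = 0; i < n; i++)
            if (h[i].size_kb >= req && (worst < 0 || h[i].size_kb > h[worst].size_kb))
                worst = i;                       /* largest leftover hole */
        return worst;
    }

    int main(void) {
        Hole holes[] = {{100, 10}, {200, 4}, {300, 20}};
        printf("first=%d best=%d worst=%d\n",
               first_fit(holes, 3, 4), best_fit(holes, 3, 4),
               worst_fit(holes, 3, 4));          /* prints first=0 best=1 worst=2 */
        return 0;
    }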
❖ Problems with MFT:
1) A process cannot demand more memory than its allocated partition.
2) It does not support dynamic relocation.
3) It does not support a system having dynamically growing data structures.
29. B) MULTIPLE CONTIGUOUS VARIABLE PARTITION
ALLOCATION
❖ In this scheme the region or partition size is not fixed; it can vary
dynamically.
❖ Partitions are created according to the requirements of processes. It is also
known as multiprogramming with a variable number of tasks (MVT).
❖ The O.S maintains a table indicating which parts of memory are free and
which are allocated.
❖ Whenever a job requests memory, the table indicating the current
status of memory is consulted.
❖ If memory is available, the job is assigned to a partition according to the job
scheduling policy and the table is updated to reflect the new status.
❖ When a job terminates, it releases the memory occupied by it. Therefore
at any instant of time the snapshot of memory shows blocks of
allocated memory and free holes distributed all over the memory.
30. B) MULTIPLE CONTIGUOUS VARIABLE PARTITION
ALLOCATION
❖ Advantages
1) It increases the degree of multiprogramming.
2) It does not suffer from internal fragmentation.
3) The size of a process can grow or shrink dynamically.
❖ Disadvantages
1) It suffers from external fragmentation.
2) A lot of time is consumed by the system in searching holes when a memory
request arrives.
31. EXAMPLE
❖ Consider O.S = 128K and user space = 896K
❖ Allocate the processes:
P1 = 320K
P2 = 224K
P3 = 288K
P4 = 128K
32. EXAMPLE
□ A hole of 64K is left after loading 3 processes: not enough
room for another process
□ Eventually each process is blocked. The OS swaps out
process 2 to bring in process 4
□ Consider O.S = 128K and user space = 896K
□ P1 = 320K  P2 = 224K  P3 = 288K  P4 = 128K
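□ Worked arithmetic from the sizes above: after loading P1, P2, and P3,
896K − (320K + 224K + 288K) = 64K remains free, too small for P4 (128K),
which is why the OS must swap a process out.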
33. DYNAMIC PARTITIONING: AN
EXAMPLE
□ another hole of 96K is created
□ Eventually each process is blocked. The OS swaps out
process 1 to bring in again process 2 and another hole of
96K is created...
□ Compaction would produce a single hole of 256K
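□ Worked arithmetic: swapping out P2 (224K) for P4 (128K) leaves a hole of
224K − 128K = 96K; swapping out P1 (320K) for P2 (224K) leaves another hole of
320K − 224K = 96K; together with the original 64K hole, compaction yields
64K + 96K + 96K = 256K.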
34. FRAGMENTATION
❖ As jobs are allocated and deallocated, memory is fragmented into small chunks.
One reason for fragmentation is improper selection of region sizes.
❖ Internal Fragmentation – There will not always be a partition that exactly fits
the memory requirement of a job. Even if a partition is only a little bigger than
the job's requirement, internal fragmentation is created.
e.g. the memory requirement of a job is 7K and one of the partitions is of size
8K. If this job is assigned to the 8K partition, an internal fragmentation of 1K
results.
❖ External Fragmentation – As processes are loaded and removed, the free
memory is broken into little pieces.
- Total memory space exists to satisfy a request, but it is not contiguous.
- There may be some regions not big enough to satisfy the memory requirement of
any of the jobs. Such regions create external fragmentation.
- Sometimes it happens that a job makes a request for memory, but there is no
single memory hole that can satisfy the requirement.
- Two or more holes in memory together, however, may be able to satisfy
the memory requirement.
35. COMPACTION
❖ This means there is a need to collect all such free holes together. To
achieve this, compaction is used.
❖ Compaction is the process of collecting all free holes to form a single
big free hole: it shuffles memory contents to place all free memory together
in one large block.
❖ There are two main questions with compaction:
1) How to carry out compaction?
2) When to carry out compaction?
❖ Compaction is possible only with dynamic relocation, i.e. logical
addresses must be relocatable dynamically at execution time.
❖ If addresses are relocated only at load time, we cannot compact storage.
36. COMPACTION
❖ A simple compaction algorithm is to move all jobs towards one end of
memory and all free holes to the other end (a sketch follows at the end of this
slide). This method is very expensive: as almost all jobs change their address
space, they have to undergo relocation.
❖ A variation of this algorithm creates the free space in the middle of
memory.
❖ If swapping is part of the system, then it can be combined with
compaction.
❖ E.g., a job to be shifted is rolled out to the backing store and later
rolled in at a different address space.
❖ With this method, very little extra code is needed for compaction, since the
system already has code for roll-out and roll-in.
❖ One approach says compaction should be invoked immediately after the
execution of a job is over.
❖ Another approach says compaction should be invoked only when it is required,
or perhaps at some regular interval of time.
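❖ A minimal sketch of the simple algorithm above, over an illustrative block
table matching the earlier example (sizes in KB are assumptions): all allocated
blocks are slid toward low memory, leaving one free hole at the top:

    #include <stdio.h>

    typedef struct { int base_kb, size_kb, allocated; } Block;

    void compact(Block *b, int n) {
        int next = 0;                      /* next free address, from low memory */
        for (int i = 0; i < n; i++) {
            if (!b[i].allocated) continue;
            /* a real system would also copy size_kb of memory and update the
               process's relocation register (dynamic relocation is required) */
            b[i].base_kb = next;
            next += b[i].size_kb;
        }                                  /* everything above 'next' is one hole */
    }

    int main(void) {
        Block mem[] = {{0,320,1}, {320,96,0}, {416,224,1}, {640,96,0},
                       {736,128,1}, {864,64,0}};
        compact(mem, 6);
        for (int i = 0; i < 6; i++)
            if (mem[i].allocated)
                printf("job at %dK..%dK\n", mem[i].base_kb,
                       mem[i].base_kb + mem[i].size_kb);
        return 0;                          /* single free hole: 672K..928K = 256K */
    }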
37. COMPACTION
❖ Another solution to the external fragmentation problem is to
permit the logical address space of a process to be
non-contiguous.
❖ This allows a process to be allocated physical memory
wherever such space is available.
❖ Two techniques achieve this solution – paging and segmentation.
38. PAGING
❖ Paging is a memory management scheme that permits the physical address space of
a process to be non-contiguous. It avoids external fragmentation.
❖ Basic Method
1. It involves breaking physical memory into fixed-sized blocks called frames, and
logical memory into blocks of the same size called pages.
2. Every address generated by the CPU is divided into two parts: the page number
(p) and the page offset (d).
3. The page number (p) is used as an index into a page table, which maps each page
onto a frame.
4. The page table has an entry for each page: the base address of the frame
holding that page in physical memory. This base address is combined with the page
offset (d) to form the physical memory address, which is sent to the memory unit.
❖ E.g., a process P1 of 9 KB with a page size of 2 KB needs 5 pages (and 5 frames
of 2 KB each); the last page occupies only 1 KB of its 2 KB frame.
39. ADDRESS TRANSLATION SCHEME
□ The address generated by the CPU is divided into:
□ Page number (p) – used as an index into a page table, which contains
the base address of each page in physical memory
□ Page offset (d) – combined with the base address to define the physical
memory address that is sent to the memory unit
page number (p): m − n bits    page offset (d): n bits
(for a logical address space of size 2^m and a page size of 2^n)
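□ A minimal sketch of this translation in C, assuming 2 KB pages (n = 11) and a
hypothetical page table in which, e.g., page 2 resides in frame 9:

    #include <stdint.h>
    #include <stdio.h>

    enum { OFFSET_BITS = 11 };                   /* n = 11, so page size = 2 KB */
    #define PAGE_SIZE (1u << OFFSET_BITS)

    static uint32_t page_table[] = {3, 7, 9, 2, 5};   /* page -> frame (assumed) */

    uint32_t translate(uint32_t logical) {
        uint32_t p = logical >> OFFSET_BITS;     /* high m - n bits: page number */
        uint32_t d = logical & (PAGE_SIZE - 1);  /* low n bits: page offset */
        return page_table[p] * PAGE_SIZE + d;    /* frame base + offset */
    }

    int main(void) {
        uint32_t la = 2 * PAGE_SIZE + 100;       /* page 2, offset 100 */
        printf("logical %u -> physical %u\n", la, translate(la));
        return 0;                                /* physical = 9*2048 + 100 = 18532 */
    }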
42. SHARED PAGES
□ Shared code
□ One copy of read-only (reentrant) code shared among processes (i.e., text
editors, compilers, window systems)
□ Similar to multiple threads sharing the same process space
□ Also useful for interprocess communication if sharing of read-write pages
is allowed
□ Private code and data
□ Each process keeps a separate copy of the code and data
□ The pages for the private code and data can appear anywhere in the logical
address space