Operating Systems and Kernels
Operating System Concepts: Memory Management
Multiprogrammed CPUs timeshare between processes, allowing each to run for a set period of time before being descheduled. What happens when there is not enough memory to hold all the processes that want to run? The first solution that comes to mind is to move processes that have been blocked or descheduled out to disk, making room for other processes that want to run. When it is their turn to run again, they are brought back into memory. This technique of dynamically moving processes between memory and disk is called swapping.
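The idea can be illustrated with a toy model. This is only a sketch under invented assumptions: the process names, sizes, memory limit, and the naive victim-selection policy are all made up for illustration, not taken from any real kernel.

```python
MEMORY_LIMIT = 100  # assumed total memory units

memory = {}   # process name -> size, for processes resident in memory
disk = {}     # swapped-out process images

def used():
    return sum(memory.values())

def swap_out(name):
    """Move a resident process image out to disk."""
    disk[name] = memory.pop(name)

def swap_in(name):
    """Bring a process back into memory, evicting others if it is full."""
    size = disk.pop(name)
    while used() + size > MEMORY_LIMIT:
        victim = next(iter(memory))   # naive choice: evict the first resident process
        swap_out(victim)
    memory[name] = size

# A and B are in memory; C was previously swapped out.
memory.update(A=60, B=30)
disk.update(C=50)

swap_in("C")            # not enough room, so A is swapped out first
print(sorted(memory))   # → ['B', 'C']
print(disk)             # → {'A': 60}
```

A real memory manager would, of course, pick its victim with a proper scheduling policy rather than evicting whichever process it sees first.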
Partitions in swapping do not have to be fixed, as was the case with the previous method. One problem that arises from swapping with variable-sized partitions is the creation of holes in memory: areas of unused memory between partitions. They are usually too small to hold a process and too scattered to be of much use.
The solution is to combine all the holes into one large block of free memory, into which a process can then be moved. This technique is called memory compaction. Since memory compaction has a large overhead (it takes a considerable amount of time to complete), the memory manager instead uses various allocation algorithms to place processes into existing holes. These are described below:
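Compaction can be sketched as sliding every allocated segment down toward address 0 so the scattered holes coalesce into one free region at the top. The segment layout below is illustrative, not from any real allocator:

```python
# (base, size, owner) triples; owner None marks a hole
segments = [
    (0, 20, "A"), (20, 10, None), (30, 25, "B"), (55, 15, None), (70, 30, "C"),
]

def compact(segs):
    """Pack allocated segments at the bottom, leaving one hole at the top."""
    base = 0
    out = []
    for _start, size, owner in segs:
        if owner is not None:           # move the segment to the next free base
            out.append((base, size, owner))
            base += size
    total = sum(size for _b, size, _o in segs)
    if base < total:                    # everything left over is one combined hole
        out.append((base, total - base, None))
    return out

print(compact(segments))
# → [(0, 20, 'A'), (20, 25, 'B'), (45, 30, 'C'), (75, 25, None)]
```

The two 10- and 15-unit holes become a single 25-unit hole. The cost hidden by this sketch is the copying: every byte of every moved segment has to be relocated, which is exactly why compaction is expensive on real hardware.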
First fit: Scans the hole list and uses the first hole that is big enough for the process. The hole is broken into two pieces: one for the process and one for the leftover unused memory.
Next fit: Works like first fit, but resumes each search from where the previous search left off rather than starting from the beginning of the list.
Best fit: Searches the entire list and takes the smallest hole that is adequate, so as to waste as little space as possible.
Worst fit: Takes the largest available hole, on the theory that the leftover piece will be big enough to remain useful.
All of the above algorithms suffer from large overheads, and implementing them is expensive in terms of CPU time. In addition, every time a process exits, its segment (recall, a segment is a block of addresses containing a process) must be merged with any adjacent holes; otherwise memory fragments into lots of small holes.
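The merging step can be sketched as follows. Assuming the same invented (base, size) hole-list representation, freeing a segment inserts it into the list and coalesces it with any hole that touches it:

```python
def free_and_merge(holes, base, size):
    """Insert a freed (base, size) region and coalesce adjacent holes."""
    holes = sorted(holes + [(base, size)])   # keep holes ordered by base address
    merged = [holes[0]]
    for b, s in holes[1:]:
        prev_base, prev_size = merged[-1]
        if prev_base + prev_size == b:       # touches the previous hole: extend it
            merged[-1] = (prev_base, prev_size + s)
        else:
            merged.append((b, s))
    return merged

holes = [(0, 10), (30, 5)]
# A 20-unit process at base 10 exits; its segment bridges both holes.
print(free_and_merge(holes, 10, 20))  # → [(0, 35)]
```

Without this coalescing pass, the hole list would only ever grow, and the allocation algorithms above would spend ever more time walking past holes too small to use.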