# Dynamic Storage Management

Lecture Notes for CS 140
Spring 2019
John Ousterhout

• Readings for this topic from Operating Systems: Principles and Practice: none.
• How to manage a region of memory or storage to satisfy various needs?
• Two basic operations in dynamic storage management:
• Allocate a block with a given number of bytes
• Free a previously allocated block
• Two general approaches to dynamic storage allocation:
• Stack allocation (hierarchical): restricted, but simple and efficient.
• Heap allocation: more general, but more difficult to implement, less efficient.

## Stack Allocation

• A stack can be used when memory allocation and freeing are partially predictable: memory is freed in opposite order from allocation.
• Example: procedure call. X calls Y, which calls Y again (recursively): each call pushes a new activation record, and records are freed in reverse order as the calls return.
• Stacks are also useful for lots of other things: tree traversal, expression evaluation, top-down recursive descent parsers, etc.
• A stack-based organization keeps all the free space together in one place.
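A toy sketch of the idea (names like `StackAllocator` are illustrative, not from the notes): a bump pointer over a fixed arena, where frees must come in exactly the reverse order of allocations, so all free space stays in one contiguous region.

```python
class StackAllocator:
    """Toy stack allocator: a bump pointer over a fixed-size arena.

    Allocation advances the top-of-stack pointer; frees must be in
    LIFO order, which keeps all free space together in one place.
    """

    def __init__(self, size):
        self.size = size
        self.top = 0            # next free byte in the arena
        self.blocks = []        # offsets of live blocks, in allocation order

    def allocate(self, nbytes):
        if self.top + nbytes > self.size:
            raise MemoryError("arena exhausted")
        offset = self.top
        self.top += nbytes
        self.blocks.append(offset)
        return offset

    def free(self, offset):
        # Only the most recently allocated block may be freed.
        if not self.blocks or self.blocks[-1] != offset:
            raise ValueError("stack allocator requires LIFO frees")
        self.top = self.blocks.pop()

s = StackAllocator(1024)
a = s.allocate(100)   # offset 0
b = s.allocate(200)   # offset 100
s.free(b)             # LIFO: b first...
s.free(a)             # ...then a; the whole arena is free again
```

Freeing out of order raises an error here; in a real stack (procedure calls), the discipline is enforced by the call/return structure itself.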

## Heap Allocation

• Heap allocation must be used when allocation and release are unpredictable.
• Memory consists of allocated areas and free areas (or holes). Inevitably end up with lots of holes.
• Goal: Keep the number of holes small, keep their size large.
• Fragmentation: inefficient use of memory because of lots of small holes. Stack allocation is perfect: all free space is in one large hole.
• Heap allocators must keep track of the storage that is not in use: free list.
• Best fit: keep linked list of free blocks, search the whole list on each allocation, choose block that comes closest to matching the needs of the allocation, save the excess for later. During release operations, merge adjacent free blocks.
• First fit:
• Scan list (circularly) for the first hole that is large enough.
• Free excess.
• Merge on releases.
• Problem: over time, holes tend to fragment, approaching the size of the smallest objects allocated.
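The first-fit policy above can be sketched as a toy simulation (class and method names are illustrative): the free list is a sorted list of `(offset, length)` holes; allocation takes the first hole that fits and splits off the excess, and release reinserts the block and merges adjacent holes.

```python
class FreeListAllocator:
    """Toy first-fit allocator over an arena of `size` bytes."""

    def __init__(self, size):
        self.holes = [(0, size)]      # free list: (offset, length), sorted
        self.live = {}                # offset -> length of allocated blocks

    def allocate(self, nbytes):
        for i, (off, length) in enumerate(self.holes):
            if length >= nbytes:       # first hole that is large enough
                if length == nbytes:
                    del self.holes[i]
                else:                  # save the excess as a smaller hole
                    self.holes[i] = (off + nbytes, length - nbytes)
                self.live[off] = nbytes
                return off
        raise MemoryError("no hole large enough")

    def free(self, off):
        length = self.live.pop(off)
        self.holes.append((off, length))
        self.holes.sort()
        # Merge adjacent free blocks into larger holes.
        merged = [self.holes[0]]
        for o, l in self.holes[1:]:
            po, pl = merged[-1]
            if po + pl == o:
                merged[-1] = (po, pl + l)
            else:
                merged.append((o, l))
        self.holes = merged
```

Best fit differs only in the scan: instead of taking the first adequate hole, it scans the whole list for the hole whose size is closest to the request.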
• Bit map: alternate representation of the free list, useful if storage comes in fixed-size chunks (e.g. disk blocks).
• Keep a large array of bits, one for each chunk.
• If bit is 0 it means chunk is in use, if bit is 1 it means chunk is free.
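A minimal sketch of the bit map representation (illustrative names; a real implementation would pack the bits into words):

```python
class BitmapAllocator:
    """Toy bit map allocator for fixed-size chunks (e.g. disk blocks).

    One bit per chunk, following the convention in the notes:
    1 = chunk is free, 0 = chunk is in use.
    """

    def __init__(self, nchunks):
        self.bits = [1] * nchunks     # all chunks start out free

    def allocate(self):
        for i, bit in enumerate(self.bits):
            if bit:                   # first free chunk wins
                self.bits[i] = 0
                return i
        raise MemoryError("no free chunks")

    def free(self, i):
        assert self.bits[i] == 0, "chunk was not in use"
        self.bits[i] = 1
```

Because every chunk is the same size, there is no splitting, merging, or fragmentation to manage; the cost is the linear scan for a free bit.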
• Pools: keep a separate linked list for each popular size.
• Allocation and freeing are fast.
• If pool runs out, allocate more memory to that pool, usually in large chunks.
• What's wrong with this? Memory handed to one pool stays there: it can never be reused for requests of other sizes, so a burst of allocations at one size can strand memory that other sizes need.

## Storage Reclamation

• How do we know when dynamically-allocated memory can be freed?
• Easy when a chunk is only used in one place.
• Reclamation is hard when information is shared: it can't be recycled until all of the users are finished.
• Usage is indicated by the presence of pointers to the data. Without a pointer, can't access (can't find it).
• Two potential errors in reclamation:
• Dangling pointers: better not recycle storage while it's still being used.
• Memory leaks: storage gets "lost" because no one freed it even though it can't ever be used again.
• Reference counts: keep count of the number of outstanding pointers to each chunk of memory. When this becomes zero, free the memory. Example: early versions of JavaScript, inodes in Unix.
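A minimal sketch of reference counting (illustrative class, not from the notes): every new pointer to an object increments its count, every dropped pointer decrements it, and the storage is reclaimed when the count reaches zero.

```python
class RefCounted:
    """Toy reference-counted object.

    Call incref() when a new pointer to the object is created and
    decref() when one is dropped; the object is reclaimed as soon
    as the count of outstanding pointers reaches zero.
    """

    def __init__(self):
        self.refcount = 1             # the pointer held by the creator
        self.freed = False

    def incref(self):
        self.refcount += 1

    def decref(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True         # stand-in for returning the memory
```

Note that two objects pointing at each other keep both counts at one forever, which is why circular structures defeat reference counting.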
• Garbage collection: storage isn't freed explicitly (there is no free operation), but rather implicitly: the program just deletes pointers.
• When the system needs storage, it searches through all of the pointers (must be able to find them all!) and collects things that aren't used.
• If structures are circular then this is the only safe way to reclaim space.
• Garbage collectors typically compact memory, moving objects to coalesce all free space.
• One way to implement garbage collection: mark and copy:
• Must be able to find all objects.
• Must be able to find all pointers to objects.
• Pass 1: mark. Go through all statically-allocated and procedure-local variables, looking for pointers (roots). Mark each object pointed to, and recursively mark all objects it points to. The compiler has to cooperate by saving information about where the pointers are within structures.
• Pass 2: copy and compact. Go through all objects, copy live objects into contiguous memory; then free any remaining space.
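The two passes can be sketched as a toy simulation (the heap is modeled as a dict mapping each object's address to the addresses it points to; the forwarding-table approach is one common way to rewrite pointers during the copy):

```python
def mark_and_copy(objects, roots):
    """Toy mark-and-copy collector.

    `objects` maps an address to the list of addresses it points to;
    `roots` are the statically-known starting pointers. Pass 1 marks
    everything reachable; pass 2 copies the live objects into
    contiguous new addresses, leaving the rest as free space.
    """
    # Pass 1: mark. Recursively mark every object reachable from a root.
    marked = set()
    stack = list(roots)
    while stack:
        addr = stack.pop()
        if addr in marked:
            continue
        marked.add(addr)
        stack.extend(objects[addr])

    # Pass 2: copy and compact. Assign consecutive new addresses
    # (0, 1, 2, ...) and rewrite pointers via a forwarding table.
    forward = {old: new for new, old in enumerate(sorted(marked))}
    return {forward[a]: [forward[p] for p in objects[a]] for a in marked}
```

Unlike reference counting, a reachable cycle survives collection, while an unreachable one is reclaimed.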
• Garbage collection is expensive:
• 10-20% of all CPU time in systems that use it.
• Uses memory inefficiently: 2-5x overallocation.
• Long pauses during garbage collection.