Lab Handout 3: Parallel Processing
The lab checkoff sheet for all students can be found right here.
Problem 1: Analyzing parallel mergesort
Before starting, go ahead and clone the lab3 folder and run make; the folder contains a working implementation of mergesort.

git clone /usr/class/cs110/repos/lab3/shared lab3
Consider the architecturally interesting portion of the mergesort executable, which launches 128 peer processes to cooperatively sort an array of 128 randomly generated numbers. The implementations of createSharedRandomArray and freeSharedArray are omitted for the time being. You can also find this code, with comments (omitted below for brevity), in mergesort.cc.
static bool shouldKeepMerging(size_t startIndex, size_t reach, size_t length) {
  return startIndex % reach == 0 && reach <= length;
}

static void repeatedlyMerge(int numbers[], size_t length, size_t startIndex) {
  int *base = numbers + startIndex;
  for (size_t reach = 2; shouldKeepMerging(startIndex, reach, length); reach *= 2) {
    raise(SIGSTOP);
    inplace_merge(base, base + reach / 2, base + reach);
  }
  exit(0);
}

static void createMergers(int numbers[], pid_t workers[], size_t length) {
  for (size_t workerIndex = 0; workerIndex < length; workerIndex++) {
    workers[workerIndex] = fork();
    if (workers[workerIndex] == 0) {
      repeatedlyMerge(numbers, length, workerIndex);
    }
  }
}

static void orchestrateMergers(int numbers[], pid_t workers[], size_t length) {
  size_t step = 1;
  while (step <= length) {
    // Wait for all still-remaining workers to stop or terminate
    for (size_t start = 0; start < length; start += step) {
      waitpid(workers[start], NULL, WUNTRACED);
    }
    // Continue half of the workers
    step *= 2;
    for (size_t start = 0; start < length; start += step) {
      kill(workers[start], SIGCONT);
    }
  }
}

static void mergesort(int numbers[], size_t length) {
  pid_t workers[length];
  createMergers(numbers, workers, length);
  orchestrateMergers(numbers, workers, length);
}

int main(int argc, char *argv[]) {
  for (size_t trial = 1; trial <= kNumTrials; trial++) {
    int *numbers = createSharedRandomArray(kNumElements);
    mergesort(numbers, kNumElements);
    // Check if the resulting array is in fact sorted
    bool sorted = is_sorted(numbers, numbers + kNumElements);
    cout << "\rTrial #" << setw(5) << setfill('0') << trial << ": ";
    cout << (sorted ? "\033[1;34mSUCCEEDED!\033[0m" : "\033[1;31mFAILED! \033[0m") << flush;
    freeSharedArray(numbers, kNumElements);
    if (!sorted) {
      cout << endl << "mergesort is \033[1;31mBROKEN.... please fix!\033[0m" << flush;
      break;
    }
  }
  cout << endl;
  return 0;
}
The program presented above is a nod to concurrent programming and to the question of whether parallelism can reduce the asymptotic running time of an algorithm (in this case, mergesort). We'll lead you through a series of questions to reinforce your multiprocessing and signal skills and to understand why the asymptotic running time of an algorithm can sometimes be improved in a parallel programming world.
For reasons discussed below, this program works because the address stored in the numbers variable is cloned across the 128 fork calls, and that particular virtual address maps to the same set of physical addresses in all 128 processes (which is different from what usually happens).
The program successfully sorts any array of length 128 by relying on 128 independent processes. In a nutshell, the above program works because:

- All even-numbered workers (e.g. workers[0], workers[2], etc.) self-halt, while all odd-numbered workers terminate immediately.
- Once all even-numbered workers have self-halted, each is instructed to carry on and call inplace_merge (a C++ built-in) to potentially update the sequence so that numbers[0] <= numbers[1], numbers[2] <= numbers[3], etc. In general, inplace_merge(first, mid, last) assumes the two ranges [first, mid) and [mid, last) are already sorted in non-decreasing order, and places the merged result in [first, last).
- Once all neighboring pairs have been merged into sorted sub-arrays of length 2, workers[0], workers[4], workers[8], etc. all self-halt while workers[2], workers[6], workers[10], etc. all exit.
- Once all remaining workers self-halt, each is instructed to continue to merge the 64 sorted sub-arrays of length 2 into 32 sorted sub-arrays of length 4.
- The algorithm continues as above, where half of the remaining workers terminate while the other half continue to repeatedly merge larger and larger sub-arrays until only workers[0] remains, at which point workers[0] does one final merge before exiting. The end product is a sorted array of length 128, and that's pretty awesome. (The stop-and-continue handshake that coordinates all of this is sketched just below.)
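The coordination described above hinges on the raise(SIGSTOP) / waitpid-with-WUNTRACED / kill-with-SIGCONT handshake used by repeatedlyMerge and orchestrateMergers. Here is a minimal sketch of that handshake with a single child process; it isn't part of the lab code, and the printed messages are just for illustration.

// Minimal sketch (not part of the lab code) of the stop-and-continue handshake:
// the child self-halts with raise(SIGSTOP), the parent detects the stop via
// waitpid with WUNTRACED, resumes the child with SIGCONT, and then reaps it.
#include <iostream>     // cout, endl
#include <signal.h>     // raise, kill, SIGSTOP, SIGCONT
#include <sys/wait.h>   // waitpid, WUNTRACED, WIFSTOPPED, WIFEXITED
#include <unistd.h>     // fork
using namespace std;

int main() {
  pid_t pid = fork();
  if (pid == 0) {
    raise(SIGSTOP);                  // child halts itself, just like repeatedlyMerge does
    cout << "Child resumed; doing its (pretend) merge work." << endl;
    return 0;
  }

  int status;
  waitpid(pid, &status, WUNTRACED);  // returns once the child has stopped (or exited)
  if (WIFSTOPPED(status)) {
    kill(pid, SIGCONT);              // wake the child back up
  }
  waitpid(pid, &status, 0);          // wait for the child to terminate for real
  if (WIFEXITED(status)) {
    cout << "Child exited with status " << WEXITSTATUS(status) << "." << endl;
  }
  return 0;
}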
Truth be told, the mergesort algorithm we've implemented is of more theoretical than practical interest. But it's still a novel example of parallel programming that rings much more relevant and real-world than parent-and-pentuplets-go-to-disney.
Use the following short answer questions to guide the discussion.
- Why is the raise(SIGSTOP) line within the implementation of repeatedlyMerge necessary?
- When the implementation of orchestrateMergers executes the step *= 2; line for the very first time, all worker processes have either terminated or self-halted. Explain why that's guaranteed.
- The repeatedlyMerge function relies on a reach variable, and the orchestrateMergers function relies on a step variable. Each of the two variables doubles with each iteration. What are the two variables accomplishing?
- Had we replaced the one use of WUNTRACED with a 0, would the overall program still correctly sort an arbitrary array of length 128? Why or why not?
- Had we instead replaced the one use of WUNTRACED with WUNTRACED | WNOHANG, would the overall program still correctly sort an arbitrary array of length 128? Why or why not?
- Assume the following implementation of orchestrateMergers replaces the original version. Would the overall program always successfully sort an arbitrary array of length 128? Why or why not?
static void orchestrateMergers(int numbers[], pid_t workers[], size_t length) {
  for (size_t step = 1; step <= length; step *= 2) {
    for (size_t start = 0; start < length; start += step) {
      int status;
      waitpid(workers[start], &status, WUNTRACED);
      if (WIFSTOPPED(status)) {
        kill(workers[start], SIGCONT);
      }
    }
  }
}
- Now assume the following implementation of orchestrateMergers replaces the original version. Note the inner for loop counts down instead of up. Would the overall program always successfully sort an arbitrary array of length 128? Why or why not?
static void orchestrateMergers(int numbers[], pid_t workers[], size_t length) {
  for (size_t step = 1; step <= length; step *= 2) {
    for (ssize_t start = length - step; start >= 0; start -= step) {
      int status;
      waitpid(workers[start], &status, WUNTRACED);
      if (WIFSTOPPED(status)) {
        kill(workers[start], SIGCONT);
      }
    }
  }
}
The createSharedRandomArray function (defined in memory.h and memory.cc in your lab3 repo) sets aside space for an array of length integers and seeds it with random numbers. It does so using the mmap function you've seen in Assignments 1 and 2, and which you also saw a bunch of times while playing with strace during last week's discussion section.
int *createSharedRandomArray(size_t length) {
  // Allocate space for the array
  int *numbers = static_cast<int *>(mmap(NULL, length * sizeof(int),
      PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0));
  // Fill the array with random numbers
  RandomGenerator rgen;
  for (size_t i = 0; i < length; i++) {
    numbers[i] = rgen.getNextInt(kMinValue, kMaxValue);
  }
  return numbers;
}
The mmap function takes the place of malloc here, because it sets up space not in the heap, but in a separate, unnamed segment that other processes can see and touch: MAP_ANONYMOUS means the mapping isn't backed by any file, and MAP_SHARED means writes to the mapping are visible to every process sharing it, in particular to the children forked after the call.
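As a minimal illustration (not part of the lab code) of what MAP_SHARED | MAP_ANONYMOUS buys you, the sketch below has a child write into such a mapping and lets the parent observe that write after reaping the child. Had the integer come from malloc instead, the child's write would have landed in its own private copy-on-write page, and the parent would still see 0.

// Minimal sketch: a MAP_SHARED | MAP_ANONYMOUS mapping is visible to the child
// created by fork, and writes made by the child are seen by the parent.
#include <iostream>     // cout, endl
#include <sys/mman.h>   // mmap, munmap, PROT_*, MAP_*
#include <sys/wait.h>   // waitpid
#include <unistd.h>     // fork
using namespace std;

int main() {
  int *shared = static_cast<int *>(mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                                        MAP_SHARED | MAP_ANONYMOUS, -1, 0));
  *shared = 0;
  pid_t pid = fork();
  if (pid == 0) {
    *shared = 110;   // the child's write lands in the very same physical page
    return 0;
  }
  waitpid(pid, NULL, 0);
  cout << "Parent sees " << *shared << " (110, not 0)." << endl;
  munmap(shared, sizeof(int));
  return 0;
}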
- Normally virtual address spaces are private and inaccessible to other processes, but that's clearly not the case here. Given what we have discussed about virtual-to-physical address mapping, explain what the operating system must do to support this, so that only the mergers have shared access while arbitrary, unrelated processes don't.
- Virtual memory is one form of virtualization used to make the above program work. Describe one other form of virtualization you see.
- Assuming the implementation of inplace_merge is O(n), explain why the running time of our parallel mergesort is O(n) instead of the O(n log n) normally ascribed to the sequential version. (Your explanation should be framed mathematically; it's not enough to just say it's parallel.)
Problem 2: Virtual Memory and Memory Mapping
Assume the OS maps virtual memory to physical memory in 4096-byte pages (a page is the unit of memory the operating system manages and maps). Recall that one key benefit of this virtual memory mapping is that the OS can map lazily, as needed (i.e. it doesn't need to map every virtual address to a physical address from the outset).
Describe how virtual memory for a process undergoing an execvp transformation might be updated as:
- the assembly code instructions are loaded
- the initialized global variables, initialized global constants, and uninitialized global variables are loaded
- the heap is initialized
- the portion of the stack needed to call main is set up
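Before working through the address questions below, it may help to see the page arithmetic spelled out in code. The sketch below is ours (not part of the lab), assumes 4096-byte pages, and uses an arbitrary made-up address.

// Small sketch (assuming 4096-byte pages): every virtual address decomposes into
// a page base (the high bits) and an offset within that page (the low 12 bits).
#include <cstdint>
#include <iostream>
using namespace std;

static const uintptr_t kPageSize = 4096;             // 2^12 bytes per page

int main() {
  uintptr_t address  = 0x7f1234567abc;                // an arbitrary, hypothetical address
  uintptr_t pageBase = address & ~(kPageSize - 1);    // zero out the low 12 bits
  uintptr_t offset   = address &  (kPageSize - 1);    // keep only the low 12 bits
  cout << hex << "address:   0x" << address  << endl;
  cout << hex << "page base: 0x" << pageBase << endl;
  cout << hex << "offset:    0x" << offset   << endl;
  return 0;
}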
If the virtual address 0x7fffa2efc345 maps to the physical page in main memory whose base address is 0x12345aab8000, what range of virtual addresses around it would map to the same physical page?
What's the largest size a character array can be before it absolutely must map to three different physical pages?
What's the smallest size a character array can be and still map to three physical pages?
As fun, optional reading, take a look at these two documents (you needn't do this reading if you don't want to, since it goes beyond the scope of our discussion of virtual memory):
- http://www.cs.cmu.edu/afs/cs/academic/class/15213-f15/www/lectures/17-vm-concepts.pdf. These are lecture slides that Bryant, O'Hallaron, and their colleagues rely on while teaching the CMU equivalent of CS110 (they're on a 15-week semester, so they go into more depth than we do).
- http://www.informit.com/articles/article.aspx?p=29961&seqNum=2: This is an article written some 15 years ago by two senior research scientists at HP Labs who were charged with the task of porting Linux to IA-64.
As an optional fun activity, try the following experiment:
- ssh into any myth machine, and type ps u at the command prompt to learn the process id of your shell, as with:
myth64:~$ ps u
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
<USER>   22606  0.6  0.0  15716  5004 pts/18   Ss   18:14   0:00 -bash
<USER>   22827  0.0  0.0  30404  1516 pts/18   R+   18:14   0:00 ps u
- The pid of the bash session above is 22606, but yours will almost certainly be different. Assuming a pid of 22606, type in the following:
myth64:~$ cd /proc/22606
myth64:/proc/22606$ ls -lta maps
-r--r--r-- 1 <USER> operator 0 Jan 28 18:15 maps
myth64:/proc/22606$ more maps
00400000-004f4000 r-xp 00000000 08:01 2752634 /bin/bash
006f3000-006f4000 r--p 000f3000 08:01 2752634 /bin/bash
006f4000-006fd000 rw-p 000f4000 08:01 2752634 /bin/bash
006fd000-00703000 rw-p 00000000 00:00 0
01230000-01405000 rw-p 00000000 00:00 0 [heap]
7f7de7c51000-7f7de8069000 r--p 00000000 08:01 8921285 /usr/lib/locale/locale-archive
7f7de8069000-7f7de8229000 r-xp 00000000 08:01 6033975 /lib/x86_64-linux-gnu/libc-2.23.so
7f7de8229000-7f7de8429000 ---p 001c0000 08:01 6033975 /lib/x86_64-linux-gnu/libc-2.23.so
7f7de8429000-7f7de842d000 r--p 001c0000 08:01 6033975 /lib/x86_64-linux-gnu/libc-2.23.so
7f7de842d000-7f7de842f000 rw-p 001c4000 08:01 6033975 /lib/x86_64-linux-gnu/libc-2.23.so
7f7de842f000-7f7de8433000 rw-p 00000000 00:00 0
7f7de8433000-7f7de8436000 r-xp 00000000 08:01 6033979 /lib/x86_64-linux-gnu/libdl-2.23.so
7f7de8436000-7f7de8635000 ---p 00003000 08:01 6033979 /lib/x86_64-linux-gnu/libdl-2.23.so
7f7de8635000-7f7de8636000 r--p 00002000 08:01 6033979 /lib/x86_64-linux-gnu/libdl-2.23.so
7f7de8636000-7f7de8637000 rw-p 00003000 08:01 6033979 /lib/x86_64-linux-gnu/libdl-2.23.so
7f7de8637000-7f7de865c000 r-xp 00000000 08:01 6029424 /lib/x86_64-linux-gnu/libtinfo.so.5.9
7f7de865c000-7f7de885b000 ---p 00025000 08:01 6029424 /lib/x86_64-linux-gnu/libtinfo.so.5.9
7f7de885b000-7f7de885f000 r--p 00024000 08:01 6029424 /lib/x86_64-linux-gnu/libtinfo.so.5.9
7f7de885f000-7f7de8860000 rw-p 00028000 08:01 6029424 /lib/x86_64-linux-gnu/libtinfo.so.5.9
7f7de8860000-7f7de8886000 r-xp 00000000 08:01 6033971 /lib/x86_64-linux-gnu/ld-2.23.so
7f7de8a28000-7f7de8a5d000 r--s 00000000 00:13 713 /run/nscd/dbS6IIxD (deleted)
7f7de8a5d000-7f7de8a61000 rw-p 00000000 00:00 0
7f7de8a7e000-7f7de8a85000 r--s 00000000 08:01 8924731 /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache
7f7de8a85000-7f7de8a86000 r--p 00025000 08:01 6033971 /lib/x86_64-linux-gnu/ld-2.23.so
7f7de8a86000-7f7de8a87000 rw-p 00026000 08:01 6033971 /lib/x86_64-linux-gnu/ld-2.23.so
7f7de8a87000-7f7de8a88000 rw-p 00000000 00:00 0
7fff30343000-7fff30364000 rw-p 00000000 00:00 0 [stack]
7fff30382000-7fff30384000 r--p 00000000 00:00 0 [vvar]
7fff30384000-7fff30386000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
From the man page for proc: "The proc filesystem is a pseudo-filesystem which provides an interface to kernel data structures. It is commonly mounted at /proc. Most of it is read-only, but some files allow kernel variables to be changed."
Within proc is a subdirectory for every single process running on the machine, and within each of those are files and subdirectories that present information about various resources tapped by that process. In this case, the process subdirectory is named 22606, and the file of interest within it is maps, which lists all of the contiguous regions of virtual memory the process relies on for execution.
To find out what each row and column in the output means, consult this stackoverflow question and read through the accepted answer.
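If you'd like to poke at this programmatically, here's a small sketch (not required for the lab) in which a process prints its own memory map by reading /proc/self/maps.

// Small sketch: print the calling process's own memory map. Each line of
// /proc/self/maps describes one contiguous region of the virtual address space.
#include <fstream>
#include <iostream>
#include <string>
using namespace std;

int main() {
  ifstream maps("/proc/self/maps");
  if (!maps) {
    cerr << "Could not open /proc/self/maps (is this a Linux machine?)" << endl;
    return 1;
  }
  string line;
  while (getline(maps, line)) {
    cout << line << endl;
  }
  return 0;
}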
Problem 3: The Process Scheduler
The Linux Kernel is responsible for scheduling processes onto processor cores. The “process scheduler” is a component of the operating system that decides whether a running process should continue running and, if not, what process should run next. This scheduler maintains three different data structures to help manage the selection process:
The Running Queue
The running queue keeps track of all of the processes that are currently assigned to a CPU. The nodes in that queue needn't store very much, if anything at all, since the CPUs themselves house everything needed for execution. The running queue could have length 0 (meaning all processes are blocked), or its length can be as high as the number of CPUs.
The Ready Queue
The ready queue keeps track of all of the processes that aren't currently running but are qualified to run. The nodes in the queue store the state of the CPU at the moment the process was pulled off the processor. That information is used to restore the CPU when a process is promoted from ready to running again, so that the process can continue as if it had never been interrupted.
The Blocked Set
This set holds processes that cannot, at the moment, carry on without some external event happening: they may be waiting for user input, waiting for memory reads, waiting on the network, or waiting for another process to change state. The blocked set looks much like the ready queue, except that it contains processes that were forced off the processor even before their time slices ended. A process is demoted to the blocked set because it blocked on something (e.g. waitpid).
- Give an example of a system call (with arguments) that may or may not move a running process to the blocked set.
- Give an example of a system call (with arguments) that is 100% guaranteed to move a process to the blocked set.
- What do you think needs to happen for a process to be hoisted from the blocked set to the ready queue?