The goal of a general-purpose scheduler is to balance threads' different scheduling needs. Threads that perform a lot of I/O require a fast response time to keep input and output devices busy, but need little CPU time. On the other hand, compute-bound threads need to receive a lot of CPU time to finish their work, but have no requirement for fast response time. Other threads lie somewhere in between, with periods of I/O punctuated by periods of computation, and thus have requirements that vary over time. A well-designed scheduler can often accommodate threads with all these requirements simultaneously.
For project 1, you must implement the scheduler described in this appendix. Our scheduler resembles the one described in [McKusick], which is one example of a multilevel feedback queue scheduler. This type of scheduler maintains several queues of ready-to-run threads, where each queue holds threads with a different priority. At any given time, the scheduler chooses a thread from the highest-priority nonempty queue. If the highest-priority queue contains multiple threads, then they run in "round robin" order.
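As a minimal standalone sketch of this structure (Pintos's own ready list uses struct list from lib/kernel/list.h, so everything below is illustrative rather than the required implementation):

    #include <stddef.h>

    #define PRI_MIN 0
    #define PRI_MAX 63

    /* Simplified thread record with an intrusive singly linked
       queue pointer. */
    struct thread
      {
        int priority;             /* PRI_MIN..PRI_MAX. */
        struct thread *next;      /* Next thread in the same queue. */
      };

    /* One FIFO queue per priority. */
    static struct thread *head[PRI_MAX + 1];
    static struct thread *tail[PRI_MAX + 1];

    /* Appends T to the back of its priority's queue, giving
       round-robin order within a priority. */
    static void
    ready_push (struct thread *t)
    {
      t->next = NULL;
      if (tail[t->priority] != NULL)
        tail[t->priority]->next = t;
      else
        head[t->priority] = t;
      tail[t->priority] = t;
    }

    /* Pops a thread from the highest-priority nonempty queue,
       or returns NULL if all queues are empty. */
    static struct thread *
    ready_pop (void)
    {
      for (int pri = PRI_MAX; pri >= PRI_MIN; pri--)
        if (head[pri] != NULL)
          {
            struct thread *t = head[pri];
            head[pri] = t->next;
            if (head[pri] == NULL)
              tail[pri] = NULL;
            return t;
          }
      return NULL;
    }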
Multiple facets of the scheduler require data to be updated after a certain number of timer ticks. In every case, these updates should occur before any ordinary kernel thread has a chance to run, so that there is no chance that a kernel thread could see a newly increased timer_ticks() value but old scheduler data values.
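For example, the bookkeeping might be driven directly from the timer interrupt handler, along these lines. timer_ticks() and TIMER_FREQ are real Pintos names; the update helpers are hypothetical stand-ins for your own code, and updating load_avg before recent_cpu is one reasonable ordering (the new recent_cpu values depend on load_avg):

    #include <stdint.h>

    #define TIMER_FREQ 100            /* Pintos's default timer frequency. */

    /* Hypothetical helpers standing in for your own update code. */
    void increment_recent_cpu (void);
    void update_load_avg (void);
    void update_recent_cpu_all (void);
    void update_priority_all (void);

    /* Called on every tick, in interrupt context, so all updates are
       complete before any ordinary kernel thread runs again. */
    void
    scheduler_tick (int64_t ticks)
    {
      /* +1 for the running thread, unless the idle thread is
         running (check omitted here). */
      increment_recent_cpu ();

      if (ticks % TIMER_FREQ == 0)    /* Once per second. */
        {
          update_load_avg ();
          update_recent_cpu_all ();
        }

      if (ticks % 4 == 0)             /* Every fourth tick. */
        update_priority_all ();
    }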
The 4.4BSD scheduler does not include priority donation.
Thread priority is dynamically determined by the scheduler using a formula given below. However, each thread also has an integer nice value that determines how "nice" the thread should be to other threads. A nice of zero does not affect thread priority. A positive nice, to the maximum of 20, decreases the priority of a thread and causes it to give up some CPU time it would otherwise receive. On the other hand, a negative nice, to the minimum of -20, tends to take away CPU time from other threads.
The initial thread starts with a nice value of zero. Other threads start with a nice value inherited from their parent thread. You must implement the functions described below, which are for use by test programs. We have provided skeleton definitions for them in threads/thread.c.
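As a rough sketch (not the required implementation), the accessors might look like the following, assuming you add a nice field to struct thread and write a hypothetical refresh_priority() helper that reapplies the priority formula:

    #include "threads/thread.h"

    /* Hypothetical helper that reapplies the priority formula to T. */
    void refresh_priority (struct thread *t);

    /* Returns the current thread's nice value. */
    int
    thread_get_nice (void)
    {
      return thread_current ()->nice;   /* Assumes an added `nice' field. */
    }

    /* Sets the current thread's nice value and recomputes its
       priority.  If the running thread no longer has the highest
       priority, it should yield; that check is omitted here. */
    void
    thread_set_nice (int nice)
    {
      struct thread *t = thread_current ();
      t->nice = nice;                   /* Expected range: -20 to 20. */
      refresh_priority (t);
    }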
Our scheduler has 64 priorities and thus 64 ready queues, numbered 0 (PRI_MIN) through 63 (PRI_MAX). Lower numbers correspond to lower priorities, so that priority 0 is the lowest priority and priority 63 is the highest. Thread priority is calculated initially at thread initialization. It is also recalculated once every fourth clock tick, for every thread. In either case, it is determined by the formula

    priority = PRI_MAX - (recent_cpu / 4) - (nice * 2),
where recent_cpu is an estimate of the CPU time the
thread has used recently (see below) and nice is the thread's
nice value. The result should be rounded down to the nearest
integer (truncated).
The coefficients 1/4 and 2 on recent_cpu and nice, respectively, have been found to work well in practice but lack deeper meaning. The calculated priority is always adjusted to lie in the valid range PRI_MIN to PRI_MAX.
This formula gives a thread that has received CPU time recently lower priority for being reassigned the CPU the next time the scheduler runs. This is key to preventing starvation: a thread that has not received any CPU time recently will have a recent_cpu of 0, which barring a high nice value should ensure that it receives CPU time soon.
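As one concrete (and unofficial) rendering of the formula in C, using the 17.14 fixed-point representation described later in this section; calc_priority and F are our names, not part of the provided skeleton:

    #include <stdint.h>

    #define PRI_MIN 0
    #define PRI_MAX 63
    #define F (1 << 14)       /* Scale factor for 17.14 fixed point. */

    /* Evaluates PRI_MAX - (recent_cpu / 4) - (nice * 2), with
       recent_cpu in 17.14 fixed point and nice a plain integer, then
       truncates to an integer and clamps to PRI_MIN..PRI_MAX. */
    int
    calc_priority (int32_t recent_cpu, int nice)
    {
      int32_t fp = PRI_MAX * F - recent_cpu / 4 - nice * 2 * F;
      int priority = fp / F;    /* C division truncates. */
      if (priority < PRI_MIN)
        priority = PRI_MIN;
      if (priority > PRI_MAX)
        priority = PRI_MAX;
      return priority;
    }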
We wish recent_cpu to measure how much CPU time each process has received "recently." Furthermore, as a refinement, more recent CPU time should be weighted more heavily than less recent CPU time. One approach would use an array of n elements to track the CPU time received in each of the last n seconds. However, this approach requires O(n) space per thread and O(n) time per calculation of a new weighted average.
Instead, we use an exponentially weighted moving average, which takes this general form:

    x(0) = f(0),
    x(t) = a*x(t-1) + f(t),
    a = k/(k+1),

where x(t) is the moving average at integer time t >= 0, f(t) is the function being averaged, and k > 0 controls the rate of decay. We can iterate the formula over a few steps as follows:

    x(0) = f(0),
    x(1) = a*f(0) + f(1),
    x(2) = a**2*f(0) + a*f(1) + f(2),
    ...
    x(4) = a**4*f(0) + a**3*f(1) + a**2*f(2) + a*f(3) + f(4).

The value of f(t) has a weight of 1 at time t, a weight of a at time t+1, a**2 at time t+2, and so on. We can also relate x(t) to k: f(t) has a weight of approximately 1/e at time t+k, approximately 1/e**2 at time t+2*k, and so on. From the opposite direction, f(t) decays to weight w at time t + ln(w)/ln(a).
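The 1/e claim is easy to check numerically. The short standalone program below (ours, for illustration only) feeds a single unit sample into the recurrence with k = 10 and watches it decay:

    #include <math.h>
    #include <stdio.h>

    /* Checks the decay claim numerically: feed one unit sample into
       x(t) = a*x(t-1) + f(t) with a = k/(k+1), then let f stay 0.
       After k steps the sample's weight should be near 1/e. */
    int
    main (void)
    {
      int k = 10;
      double a = (double) k / (k + 1);
      double x = 1.0;                 /* x(0) = f(0) = 1. */
      for (int t = 1; t <= k; t++)
        x = a * x;                    /* f(t) = 0 from now on. */
      printf ("weight after %d steps: %.4f (1/e = %.4f)\n",
              k, x, exp (-1.0));      /* About 0.3855 vs. 0.3679. */
      return 0;
    }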
The initial value of recent_cpu is 0 in the first thread created, or the parent's value in other new threads. Each time a timer interrupt occurs, recent_cpu is incremented by 1 for the running thread only, unless the idle thread is running. In addition, once per second the value of recent_cpu is recalculated for every thread (whether running, ready, or blocked), using this formula:

    recent_cpu = (2*load_avg)/(2*load_avg + 1) * recent_cpu + nice,

where load_avg is a moving average of the number of threads ready to run (see below). If load_avg is 1, indicating that a single thread, on average, is competing for the CPU, then the current value of recent_cpu decays to a weight of .1 in ln(.1)/ln(2/3) = approx. 6 seconds; if load_avg is 2, the coefficient is 4/5, and decay to a weight of .1 takes ln(.1)/ln(4/5) = approx. 10 seconds. The effect is that recent_cpu estimates the amount of CPU time the thread has received "recently," with the rate of decay inversely proportional to the number of threads competing for the CPU.
Assumptions made by some of the tests require that these recalculations of recent_cpu be made exactly when the system tick counter reaches a multiple of a second, that is, when timer_ticks () % TIMER_FREQ == 0, and not at any other time.
The value of recent_cpu can be negative for a thread with a negative nice value. Do not clamp negative recent_cpu to 0.
You may need to think about the order of calculations in this formula. We recommend computing the coefficient of recent_cpu first, then multiplying. Some students have reported that multiplying load_avg by recent_cpu directly can cause overflow.
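A sketch of that recommended order of operations, assuming 17.14 fixed point as described later in this section; calc_recent_cpu is our name, not part of the skeleton:

    #include <stdint.h>

    #define F (1 << 14)       /* Scale factor for 17.14 fixed point. */

    /* recent_cpu = (2*load_avg)/(2*load_avg + 1) * recent_cpu + nice,
       computing the coefficient first to avoid overflow.  load_avg
       and recent_cpu are 17.14 fixed point; nice is an integer. */
    int32_t
    calc_recent_cpu (int32_t recent_cpu, int32_t load_avg, int nice)
    {
      int32_t twice_load = 2 * load_avg;
      /* Fixed-point division: widen the dividend to 64 bits.  The
         "+ F" is the real number 1 in fixed point. */
      int32_t coef = (int32_t) (((int64_t) twice_load) * F
                                / (twice_load + F));
      /* Fixed-point multiplication, again via 64 bits, then add nice
         converted to fixed point.  The result may legitimately be
         negative; do not clamp it to 0. */
      return (int32_t) (((int64_t) coef) * recent_cpu / F) + nice * F;
    }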
You must implement thread_get_recent_cpu(), for which there is a skeleton in threads/thread.c.
Finally, load_avg, often known as the system load average, estimates the average number of threads ready to run over the past minute. Like recent_cpu, it is an exponentially weighted moving average. Unlike priority and recent_cpu, load_avg is system-wide, not thread-specific. At system boot, it is initialized to 0. Once per second thereafter, it is updated according to the following formula:

    load_avg = (59/60)*load_avg + (1/60)*ready_threads,
where ready_threads is the number of threads that are either running or ready to run at time of update (not including the idle thread).
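The corresponding fixed-point sketch, again with names of our own choosing:

    #include <stdint.h>

    #define F (1 << 14)       /* Scale factor for 17.14 fixed point. */

    /* load_avg = (59/60)*load_avg + (1/60)*ready_threads, with
       load_avg in 17.14 fixed point and ready_threads an integer. */
    int32_t
    calc_load_avg (int32_t load_avg, int ready_threads)
    {
      int32_t c59_60 = (int32_t) (((int64_t) 59 * F) / 60);  /* 16,110 */
      int32_t c1_60 = F / 60;                                /* 273 */
      /* Fixed * fixed needs the 64-bit trick; fixed * int does not. */
      return (int32_t) (((int64_t) c59_60) * load_avg / F)
             + c1_60 * ready_threads;
    }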
Because of assumptions made by some of the tests, load_avg must be updated exactly when the system tick counter reaches a multiple of a second, that is, when timer_ticks () % TIMER_FREQ == 0, and not at any other time.
You must implement thread_get_load_avg(), for which there is a skeleton in threads/thread.c.
The following formulas summarize the calculations required to implement the scheduler. They are not a complete description of scheduler requirements.
Every thread has a nice value between -20 and 20 directly under its control. Each thread also has a priority, between 0 (PRI_MIN) and 63 (PRI_MAX), which is recalculated using the following formula every fourth tick:

    priority = PRI_MAX - (recent_cpu / 4) - (nice * 2).
recent_cpu measures the amount of CPU time a thread has received "recently." On each timer tick, the running thread's recent_cpu is incremented by 1. Once per second, every thread's recent_cpu is updated this way:

    recent_cpu = (2*load_avg)/(2*load_avg + 1) * recent_cpu + nice.
load_avg estimates the average number of threads ready to run over the past minute. It is initialized to 0 at boot and recalculated once per second as follows:

    load_avg = (59/60)*load_avg + (1/60)*ready_threads,
where ready_threads is the number of threads that are either running or ready to run at time of update (not including the idle thread).
In the formulas above, priority, nice, and ready_threads are integers, but recent_cpu and load_avg are real numbers. Unfortunately, Pintos does not support floating-point arithmetic in the kernel, because it would complicate and slow the kernel. Real kernels often have the same limitation, for the same reason. This means that calculations on real quantities must be simulated using integers. This is not difficult, but many students do not know how to do it. This section explains the basics; we have provided you with a file threads/fixedpoint.h that implements the techniques described here, and you can use those functions in the priority calculations for your scheduler.
The fundamental idea is to treat the rightmost bits of an integer as representing a fraction. For example, we can designate the lowest 14 bits of a signed 32-bit integer as fractional bits, so that an integer x represents the real number x/(2**14), where ** represents exponentiation. This is called a 17.14 fixed-point number representation, because there are 17 bits before the decimal point, 14 bits after it, and one sign bit. A number in 17.14 format represents, at maximum, a value of (2**31 - 1)/(2**14) = approx. 131,071.999.
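A tiny standalone program (ours, not part of Pintos) makes the representation concrete:

    #include <stdint.h>
    #include <stdio.h>

    #define F (1 << 14)       /* f = 2**q for the 17.14 format. */

    int
    main (void)
    {
      int32_t one = 1 * F;                                  /* 16,384 */
      int32_t fifty_nine_sixtieths =
        (int32_t) (((int64_t) 59 * F) / 60);                /* 16,110 */

      /* Dividing by f (as a double) recovers the real value. */
      printf ("1.0   -> %d (%f)\n", (int) one, (double) one / F);
      printf ("59/60 -> %d (%f)\n", (int) fifty_nine_sixtieths,
              (double) fifty_nine_sixtieths / F);
      printf ("max   -> %f\n", (double) INT32_MAX / F);     /* ~131,071.999 */
      return 0;
    }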
Suppose that we are using a p.q fixed-point format, and let f = 2**q. By the definition above, we can convert an integer or real number into p.q format by multiplying with f. For example, in 17.14 format the fraction 59/60 used in the calculation of load_avg, above, is 59/60*(2**14) = 16,110.
To convert a fixed-point value back to an integer, divide by f. (The normal / operator in C rounds toward zero, that is, it rounds positive numbers down and negative numbers up. To round to nearest, add f / 2 to a positive number, or subtract it from a negative number, before dividing.)
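In C, the two conversions look like this (a sketch; the functions in threads/fixedpoint.h may spell them differently):

    #define F (1 << 14)       /* Scale factor for 17.14 fixed point. */

    /* Fixed point to integer, rounding toward zero. */
    int
    fix_trunc (int x)
    {
      return x / F;
    }

    /* Fixed point to integer, rounding to nearest: bias by f/2 away
       from zero before the truncating division. */
    int
    fix_round (int x)
    {
      return x >= 0 ? (x + F / 2) / F : (x - F / 2) / F;
    }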
Many operations on fixed-point numbers are straightforward. Let x and y be fixed-point numbers, and let n be an integer. Then the sum of x and y is x + y and their difference is x - y. The sum of x and n is x + n * f; difference, x - n * f; product, x * n; quotient, x / n.
Multiplying two fixed-point values has two complications. First, the decimal point of the result is q bits too far to the left. Consider that (59/60)*(59/60) should be slightly less than 1, but 16,111*16,111 = 259,564,321 is much greater than 2**14 = 16,384. Shifting q bits right, we get 259,564,321/(2**14) = 15,842, or about 0.97, the correct answer. Second, the multiplication can overflow even though the answer is representable. For example, 64 in 17.14 format is 64*(2**14) = 1,048,576 and its square 64**2 = 4,096 is well within the 17.14 range, but 1,048,576**2 = 2**40, greater than the maximum signed 32-bit integer value 2**31 - 1. An easy solution is to do the multiplication as a 64-bit operation. The product of x and y is then ((int64_t) x) * y / f.
Dividing two fixed-point values has opposite issues. The decimal point will be too far to the right, which we fix by shifting the dividend q bits to the left before the division. The left shift discards the top q bits of the dividend, which we can again fix by doing the division in 64 bits. Thus, the quotient when x is divided by y is ((int64_t) x) * f / y.
This section has consistently used multiplication or division by f, instead of q-bit shifts, for two reasons. First, multiplication and division do not have the surprising operator precedence of the C shift operators. Second, multiplication and division are well-defined on negative operands, but the C shift operators are not. Take care with these issues in your implementation.
The following table summarizes how fixed-point arithmetic operations can be implemented in C. In the table, x and y are fixed-point numbers, n is an integer, fixed-point numbers are in signed p.q format where p + q = 31, and f is 1 << q:

    Convert n to fixed point:                     n * f
    Convert x to integer (rounding toward zero):  x / f
    Convert x to integer (rounding to nearest):   (x + f / 2) / f if x >= 0,
                                                  (x - f / 2) / f if x <= 0
    Add x and y:                                  x + y
    Subtract y from x:                            x - y
    Add x and n:                                  x + n * f
    Subtract n from x:                            x - n * f
    Multiply x by y:                              ((int64_t) x) * y / f
    Multiply x by n:                              x * n
    Divide x by y:                                ((int64_t) x) * f / y
    Divide x by n:                                x / n