Stanford Research Communication Program

Keeping Your Computer On Time

Frank O'Mahony
Electrical Engineering
Stanford University
November 2002

Microprocessors, the brains inside every computer, are always getting faster to keep pace with the computing needs of next-generation software. The speed of a microprocessor is measured by how many times a second it does a small computation, such as adding two numbers together. A microprocessor uses a type of clock to regulate its speed. When it hears a "tick" it does a computation, and when it hears a "tock" it stops. Unlike a regular clock, however, today's fastest microprocessors "tick" over one billion times every second. It may seem like a trivial task, but sending the clock signal to all of the different parts of a microprocessor is a very difficult problem and could become a serious limiting factor for the semiconductor industry in the near future. My research in electrical engineering examines why problems arise with fast microprocessor clocks and how to distribute a clock that "ticks" ten times faster than today's.

To understand how a microprocessor works and why it needs a clock, think of a concert band with many musicians playing in unison to create music. Each musician keeps time and plays in time with the band. If one musician is even a fraction of a beat off, the music just doesn't sound right. A microprocessor works in a similar way - it is composed of many separate parts, called "blocks", each with a different purpose. Just like musicians in a band, the blocks need to have a sense of timing so that they can work in unison to run software on your computer. A microprocessor uses a clock - like a digital metronome - to keep the blocks working in time. If one block is even a fraction of a beat off, the microprocessor will not work correctly.

The first part of my research looks at current problems involved in sending a fast clock signal around the microprocessor to its many blocks. I use software to simulate the circuits in a microprocessor and examine what happens when the clock gets faster. The clock physically sits in the center of the chip. Like a conductor with a baton, it sends out its signal to the blocks. Because it takes time for the clock signal to travel to the blocks, if one block is sitting a little further from the clock than all the others, it will always receive the "tick" late. This problem is called "clock skew" and is equivalent to one musician always playing a little ahead of or behind everyone else. The other problem that I research is called "clock jitter". As its name implies, jitter refers to a "tick" that is sometimes early and sometimes late. As long as the amounts of skew and jitter are small compared to the time between "ticks", the microprocessor will run correctly. But as the clock gets faster, the time between ticks gets shorter, so the amount of skew and jitter that can be tolerated decreases. This is like saying that the faster a band plays, the better their sense of timing needs to be. I simulate skew and jitter and use this information to figure out better ways to send the clock around the chip.
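The timing budget described above can be illustrated with a small simulation. This is a hypothetical sketch, not the author's actual circuit-simulation software: it generates clock "ticks" with a fixed skew and random jitter (the numbers below are made up for illustration) and checks whether every tick stays within a fraction of the clock period.

```python
import random

def ticks_within_margin(period_ps, skew_ps, jitter_ps,
                        n_ticks=10_000, margin=0.1, seed=0):
    """Return True if every tick arrives within `margin` (a fraction of
    the clock period) of its ideal time, given a fixed skew plus random
    jitter. All times are in picoseconds."""
    rng = random.Random(seed)
    budget = margin * period_ps  # how much total timing error we tolerate
    for i in range(n_ticks):
        ideal = i * period_ps
        # skew is a constant offset; jitter varies from tick to tick
        actual = ideal + skew_ps + rng.uniform(-jitter_ps, jitter_ps)
        if abs(actual - ideal) > budget:
            return False
    return True

# A 1 GHz clock (1000 ps period) tolerates 30 ps of skew and 50 ps of
# jitter: worst-case error is 80 ps, under the 100 ps budget.
print(ticks_within_margin(1000, 30, 50))  # True
# At 10 GHz (100 ps period) the same skew and jitter blow the 10 ps budget.
print(ticks_within_margin(100, 30, 50))   # False
```

The same absolute skew and jitter that are harmless at one speed become fatal when the period shrinks, which is exactly why faster clocks demand tighter timing.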

Once I developed an understanding of skew and jitter from these simulations, I devised a better way to distribute the clock signal around the microprocessor. My method uses many clocks situated around the chip instead of one clock at the center. Each clock is able to keep time pretty well, but left to itself it would vary quite a bit from the others. By looking at other nearby clocks, it can keep correcting itself and stay synchronized to a very high degree of accuracy. This is similar to how musicians actually keep time in a band - they have a rough idea of the timing, but they also listen to what their neighbors are playing and correct their timing if necessary. A major part of my research was building this idea into a microchip and testing it. The chip that I made using this technique is good enough to run a microprocessor that does ten billion computations every second, ten times the clock rate of today's microprocessors.
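The idea of many clocks correcting themselves against their neighbors can be sketched as a toy model. This is an illustration of the general principle, not the actual chip design: several clocks sit on a ring, each running at a slightly different natural rate, and every cycle each one nudges its phase toward the average of its two neighbors.

```python
def phase_spread(periods, n_cycles=200, coupling=0.5):
    """Toy model of neighbor-coupled clocks arranged on a ring.
    Each cycle, every clock advances by its own natural period, then
    moves its phase part way toward the average of its two neighbors.
    Returns the spread (max - min) of the final phases."""
    n = len(periods)
    phases = [0.0] * n
    for _ in range(n_cycles):
        # each clock ticks at its own (slightly different) rate
        phases = [p + per for p, per in zip(phases, periods)]
        # each clock "listens" to its neighbors and corrects itself
        neighbor_avg = [(phases[(i - 1) % n] + phases[(i + 1) % n]) / 2
                        for i in range(n)]
        phases = [p + coupling * (a - p)
                  for p, a in zip(phases, neighbor_avg)]
    return max(phases) - min(phases)

clocks = [99.0, 101.0, 100.0, 100.0]  # natural periods, illustrative units
print(phase_spread(clocks, coupling=0.0))  # no coupling: clocks drift apart
print(phase_spread(clocks, coupling=0.5))  # coupled: spread stays small
```

Left alone (coupling of zero), the fastest and slowest clocks drift steadily apart; with coupling, the disagreement settles at a small, bounded value, just as musicians who listen to their neighbors stay together even if their individual senses of tempo differ.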

The results of this research are promising, and my hope is that it will be incorporated into microprocessors five or ten years from now. Today, leading manufacturers of microprocessors are looking for new ideas that will alleviate the worsening problems of skew and jitter. My idea of using many clocks instead of a single clock could be in your computer not long from now, helping it to run faster and on time.