We've just finished grading the midterm exam; solutions and statistics are now available at the previous link. The exams themselves are available for review on Gradescope.
We've been spending the past several days reading project papers and will aim to get grades on the final project returned as soon as possible. We were absolutely blown away by the quality of the presentations we saw earlier in the week. Thanks for making this class so much fun to teach!
For those of you who are graduating, best of luck in the future, and please stay in touch! We'd love to hear about all the exciting work you've been up to. For those of you who will be returning next year, have a great summer, and we hope to see you around!
The take-home midterm exam is now available. It's due this Thursday at 2:30 PM, and information about the exam is available on the front page. We also have a LaTeX template you can use to type up your answers.
Best of luck on the exam! You can do this!
Here's a quick rundown of the office hours we'll hold each week:
Welcome to CS166, a course in the design, analysis, and implementation of data structures. We've got an exciting quarter ahead of us: the data structures we'll investigate are some of the most beautiful constructs I've ever come across, and I hope you're able to join us.
CS166 has two prerequisites: CS107 and CS161. From CS107, we'll assume that you're comfortable working from the command line; designing, testing, and debugging nontrivial programs; manipulating pointers and arrays; using bitwise operators; and reasoning about the memory hierarchy. From CS161, we'll assume you're comfortable designing and analyzing nontrivial algorithms; using O, o, Θ, Ω, and ω notation; solving recurrences; working through standard graph and sequence algorithms; and structuring proofs of correctness.
We'll update this site with more information as we get closer to the start of the quarter. In the meantime, feel free to email me at htiek@cs.stanford.edu if you have any questions about the class!
This syllabus is still under construction and is subject to change as we fine-tune the course. Stay tuned for more information and updates!
Lectures: Tuesday and Thursday

Fusion Trees, Part I
May 28
Memory inside a computer is split apart into individual machine words, with primitive operations like addition, subtraction, and multiplication taking constant time. This means that, if we can pack multiple pieces of data into a single machine word, we can essentially get a limited form of parallel computation. Through a variety of very clever tricks, we can use this to speed up searching and sorting. This first lecture explores word-level parallelism through a data structure for very small integers and a technique for computing most-significant bits in time O(1). It will lay the groundwork we'll use next time when exploring fusion trees.
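To give a flavor of the idea, here's a sketch of one classic word-packing trick (the field widths B and K below are my own illustrative parameters, not necessarily the lecture's exact construction): store several small values in one machine word, each behind a sentinel bit, and use a single subtraction to compare all of them against a query at once.

```python
B = 7          # bits per stored value
K = 8          # number of values packed into one 64-bit word
F = B + 1      # field width: one extra sentinel bit per value

def pack(values):
    """Pack K small integers into one word, each behind a sentinel 1 bit."""
    word = 0
    for i, v in enumerate(values):
        assert 0 <= v < (1 << B)
        word |= ((1 << B) | v) << (i * F)
    return word

def rank(word, x):
    """Count how many packed values are >= x using one big subtraction."""
    # Replicate x into every field (a constant number of shifts).
    spread = 0
    for i in range(K):
        spread |= x << (i * F)
    # Field i keeps its sentinel bit after subtracting iff value_i >= x.
    diff = word - spread
    sentinels = diff & pack([0] * K)   # mask away everything but sentinels
    return bin(sentinels).count("1")
```

A single `rank` call compares all K stored values against x simultaneously, which is exactly the kind of parallel comparison that makes the small-integer structures in this lecture fast.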

Fusion Trees, Part II
May 30
The sardine tree we developed in our last lecture gives a fast ordered dictionary data structure for small keys. By harnessing its key insight (B-tree lookups can be sped up by improving rank calculations at each node) and combining it with some insights about integers and Patricia tries, we can build the fusion tree, which works for any integers that fit into a machine word.
Approximate Membership Queries
May 21
Approximate membership query structures are ways of representing approximations of sets. They're used extensively in practice and are among the most popular randomized data structures in use today. Moreover, their analysis dives into a beautiful realm of information-theoretic lower bounds and fingerprinting techniques.
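One of the best-known AMQ structures is the Bloom filter. Here's a minimal sketch (the sizing and hashing choices are mine, purely for illustration): it may report a false positive, but it never reports a false negative.

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter, one popular approximate-membership structure.
    May report items it never saw (false positives), but never misses an
    item it did see (no false negatives)."""
    def __init__(self, num_bits, num_hashes):
        self.m = num_bits
        self.k = num_hashes
        self.bits = 0

    def _positions(self, item):
        # Derive k bit positions by salting a cryptographic hash; a real
        # implementation would use cheaper hash functions.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

The false-positive rate is what the information-theoretic lower bounds in this lecture let us reason about: for a given space budget, how low can it possibly go?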
xFast and yFast Tries
May 23
Balanced BSTs are excellent general-purpose data structures for storing sorted collections of elements. But what happens if we know those elements are integers? By taking advantage of their structure and some nice properties of machine words, we can exponentially improve their performance.
Linear Probing
May 14
Linear probing is one of the oldest and simplest strategies for building a hash table. It's easy to implement and, for a number of reasons, is extremely fast in practice. The analysis of linear probing dates back to Knuth, who assumed truly random hash functions. But what happens if we weaken this assumption and say that the hash functions aren't truly random but are, instead, just k-independent for some fixed k? Then the analysis gets a lot more challenging, but also a lot more interesting. In exploring how the math works out, we'll get a richer understanding of how to analyze randomized data structures.
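For reference, the mechanism itself is tiny. Here's a bare-bones sketch (insert and lookup only, no deletion or resizing, and using Python's built-in `hash` purely for illustration):

```python
class LinearProbingTable:
    """A minimal linear-probing hash table sketch: on a collision, just
    walk forward to the next free slot."""
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.slots = [None] * capacity   # each slot holds a (key, value) pair

    def _probe(self, key):
        """Yield slot indices in probe order: h(key), h(key)+1, ... (mod m)."""
        start = hash(key) % self.capacity
        for i in range(self.capacity):
            yield (start + i) % self.capacity

    def put(self, key, value):
        for idx in self._probe(key):
            if self.slots[idx] is None or self.slots[idx][0] == key:
                self.slots[idx] = (key, value)
                return
        raise RuntimeError("table full")

    def get(self, key):
        for idx in self._probe(key):
            if self.slots[idx] is None:
                return None                # empty slot: key was never inserted
            if self.slots[idx][0] == key:
                return self.slots[idx][1]
        return None
```

The interesting question in this lecture isn't the code; it's how long those probe sequences get when the hash function is only k-independent.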

Cuckoo Hashing
May 16
Most hash tables give expected O(1) lookups. Can we make hash tables with no collisions at all, and if so, can we do it efficiently? Amazingly, the answer is yes. There are many schemes for achieving this; one of them, cuckoo hashing, is surprisingly simple to implement.
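To back up that "surprisingly simple" claim, here's a sketch (illustrative hash functions; a real implementation would pick fresh hash functions and rehash when an insertion cycles): every key lives in one of exactly two possible slots, so lookups make at most two probes, worst case.

```python
class CuckooTable:
    """A minimal cuckoo hashing sketch: two tables, two hash functions,
    worst-case O(1) lookups."""
    def __init__(self, capacity=16):
        self.cap = capacity
        self.tables = [[None] * capacity, [None] * capacity]

    def _hash(self, which, key):
        # Illustrative only: derive two hash functions from Python's hash.
        return hash((which, key)) % self.cap

    def get(self, key):
        # At most two probes, ever: one slot per table.
        for which in (0, 1):
            slot = self.tables[which][self._hash(which, key)]
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

    def put(self, key, value):
        # Overwrite in place if the key is already stored somewhere.
        for which in (0, 1):
            idx = self._hash(which, key)
            slot = self.tables[which][idx]
            if slot is not None and slot[0] == key:
                self.tables[which][idx] = (key, value)
                return
        item, which = (key, value), 0
        for _ in range(2 * self.cap):      # bound the displacement chain
            idx = self._hash(which, item[0])
            # Place the item, kicking out whoever lived there ("cuckoo").
            self.tables[which][idx], item = item, self.tables[which][idx]
            if item is None:
                return
            which = 1 - which              # evicted item tries its other slot
        raise RuntimeError("insertion cycled; a real table would rehash")
```

The analysis in lecture is about why those eviction chains stay short with high probability.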

Splay Trees
May 7
Balanced binary search trees give worst-case O(log n) times on each tree operation. If we're trying to guarantee worst-case efficiency, this is as good as it gets. But worst-case efficiency doesn't capture everything. There are plenty of other properties we'd like our data structures to have. Sometimes, we get those properties by directly designing for them. Sometimes, we get those properties by having almost no structural guarantees at all.
Frequency Estimators
May 9
How can Google keep track of frequent search queries without storing all the queries it gets in memory? How can you estimate frequently occurring tweets without storing every tweet in RAM? As long as you're willing to trade off accuracy for space, you can get excellent approximations.
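One widely used frequency estimator is the count-min sketch. Here's a minimal version (the dimensions and hashing are my own illustrative choices): it stores counts in a small fixed grid, and its estimates can overcount due to collisions but never undercount.

```python
import hashlib

class CountMinSketch:
    """A minimal count-min sketch: frequency estimates in fixed space.
    Collisions can only inflate counts, so estimates never undercount."""
    def __init__(self, width, depth):
        self.width = width
        self.depth = depth
        self.counts = [[0] * width for _ in range(depth)]

    def _columns(self, item):
        # One column per row, derived from a salted hash (illustrative).
        for row in range(self.depth):
            h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.width

    def add(self, item):
        for row, col in enumerate(self._columns(item)):
            self.counts[row][col] += 1

    def estimate(self, item):
        # Every row overestimates; the minimum is the tightest answer.
        return min(self.counts[row][col]
                   for row, col in enumerate(self._columns(item)))
```

The space/accuracy trade-off in the lecture shows up directly in the choice of `width` and `depth`.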

Binomial Heaps
April 30
Binomial heaps are a simple and flexible priority queue structure that supports efficient melding of priority queues. The intuition behind binomial heaps is particularly elegant, and they'll serve as a building block toward the more complex Fibonacci heap data structure that we'll talk about on Thursday.

Fibonacci Heaps
May 2
Fibonacci heaps are a type of priority queue that efficiently supports decrease-key, an operation used as a subroutine in many graph algorithms (Dijkstra's algorithm, Prim's algorithm, the Stoer-Wagner min cut algorithm, etc.). They're formed by a clever transformation on a lazy binomial heap. Although Fibonacci heaps have a reputation for being ferociously complicated, they're a lot less scary than they might seem!

Balanced Trees, Part II
April 23
We've spent a lot of time trying to figure out how to build nice balanced trees. Now that we've got them, what else can we do with them? In this lecture, we'll see two more advanced techniques (tree augmentation and the split/join operations) that will make it possible to build significantly more complex data structures in the future.
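As a small preview of tree augmentation, here's a sketch: annotate each BST node with its subtree size, and order-statistic queries ("what's the k-th smallest element?") fall out immediately. (The tree below doesn't rebalance, purely to keep the sketch short; the real technique maintains the same fields through rotations.)

```python
class Node:
    """BST node augmented with its subtree size."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.size = 1          # number of nodes in this node's subtree

def size(node):
    return node.size if node else 0

def insert(node, key):
    """Plain BST insert, repairing the size augmentation on the way up."""
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    node.size = 1 + size(node.left) + size(node.right)
    return node

def select(node, k):
    """Return the k-th smallest key (0-indexed), guided by size fields."""
    left = size(node.left)
    if k < left:
        return select(node.left, k)
    if k == left:
        return node.key
    return select(node.right, k - left - 1)
```

Each `select` walks one root-to-node path, so on a balanced tree it runs in O(log n).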

Amortized Analysis
April 25
In many cases, we only care about the total time required to process a sequence of operations. In those cases, we can design data structures that make some individual operations more expensive in order to lower the total cost of the sequence as a whole. How do you analyze these structures?
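The classic warm-up example is the doubling dynamic array: an individual append can cost O(n) when the array grows, yet any sequence of n appends does only O(n) total copying, so each append is amortized O(1). Here's a sketch that tracks the copy work explicitly so the claim can be checked:

```python
class DynamicArray:
    """A doubling array sketch: occasional expensive appends, but O(1)
    amortized cost per append over any sequence of operations."""
    def __init__(self):
        self.capacity = 1
        self.length = 0
        self.storage = [None] * self.capacity
        self.copies = 0            # total elements copied, for the analysis

    def append(self, value):
        if self.length == self.capacity:
            # The expensive step: double the capacity and copy everything.
            self.capacity *= 2
            new_storage = [None] * self.capacity
            for i in range(self.length):
                new_storage[i] = self.storage[i]
                self.copies += 1
            self.storage = new_storage
        self.storage[self.length] = value
        self.length += 1
```

After n appends, total copying is 1 + 2 + 4 + ... < 2n, which is exactly the kind of aggregate bound this lecture's techniques formalize.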
Constructing Suffix Arrays
April 16
Suffix trees and suffix arrays are amazing structures, but they'd be much less useful if it weren't possible to construct them quickly. Fortunately, there are some great techniques for building suffix arrays and suffix trees. By using the fact that suffixes overlap and simulating what a multiway merge algorithm would do in certain circumstances, we can rapidly build these beautiful structures.
Balanced Trees, Part I
April 18
Balanced search trees are among the most versatile and flexible data structures. They're used extensively in theory and in practice. What sorts of balanced trees exist? How would you design them? And what can you do with them?

Suffix Trees
April 9
To kick off our discussion of string data structures, we'll be exploring tries, Patricia tries, and, most importantly, suffix trees. These data structures provide fast solutions to a number of algorithmic problems and are much more versatile than they might initially seem. What makes them so useful? What properties of strings do they capture? And what intuitions can we build from them?
Suffix Arrays
April 11
What makes suffix trees so useful as a data structure? Surprisingly, much of their utility and flexibility can be attributed purely to two facts: they keep the suffixes sorted, and they expose the branching words in the string. By representing this information in a different way, we can get much of the benefit of suffix trees without the huge space cost.
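To make the "keep the suffixes sorted" idea concrete, here's a sketch of a naive suffix array plus substring search over it (the O(n² log n) construction is for illustration only; this week's lectures build the same structure far faster). Because the suffixes are sorted, every occurrence of a pattern corresponds to one contiguous run of the array, which two binary searches can locate.

```python
def suffix_array(text):
    """Naive suffix array: the suffix start positions, sorted by the
    suffixes they name."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurrences(text, sa, pattern):
    """All positions where pattern occurs, via two binary searches."""
    lo, hi = 0, len(sa)
    while lo < hi:                      # leftmost suffix >= pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    hi = len(sa)
    while lo < hi:                      # end of the run of matching suffixes
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] == pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])
```

That's the whole structure: an array of integers, yet it already supports O(m log n) substring search.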
Range Minimum Queries, Part One
April 2
The range minimum query problem is the following: given an array, preprocess it so that you can efficiently determine the smallest value in any requested subrange. RMQ has tons of applications throughout computer science and is an excellent proving ground for a number of advanced algorithmic techniques.
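One standard stepping stone for this problem is the sparse table, which answers RMQ in O(1) after O(n log n) preprocessing by covering every query range with two overlapping power-of-two blocks. Here's a sketch:

```python
def build_sparse_table(arr):
    """Sparse-table RMQ: table[j][i] holds min(arr[i : i + 2**j]).
    O(n log n) space and preprocessing time."""
    n = len(arr)
    table = [list(arr)]
    j = 1
    while (1 << j) <= n:
        prev = table[j - 1]
        half = 1 << (j - 1)
        # A 2**j block is the min of its two 2**(j-1) halves.
        table.append([min(prev[i], prev[i + half])
                      for i in range(n - (1 << j) + 1)])
        j += 1
    return table

def range_min(table, lo, hi):
    """Minimum of arr[lo : hi], hi exclusive, in O(1): cover the range
    with two (possibly overlapping) power-of-two blocks."""
    span = hi - lo
    j = span.bit_length() - 1           # largest j with 2**j <= span
    return min(table[j][lo], table[j][hi - (1 << j)])
```

Overlap is harmless here because `min` is idempotent, which is precisely why this trick works for RMQ but not for, say, range sums.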
Range Minimum Queries, Part Two
April 4
Our last lecture took us very, very close to an ⟨O(n), O(1)⟩-time solution to RMQ. Using a new data structure called a Cartesian tree in conjunction with a technique called the Method of Four Russians, we can adapt our approach to end up with a linear-preprocessing-time, constant-query-time solution to RMQ. In doing so, we'll see a number of clever techniques that will appear time and time again in data structure design.
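To preview the Cartesian tree connection, here's a sketch of the key fact: two arrays answer every range minimum query identically exactly when their Cartesian trees have the same shape, and the classic stack-based scan reads that shape off in linear time as a push/pop bitstring.

```python
def cartesian_tree_shape(arr):
    """Encode the shape of arr's Cartesian tree as a bitstring
    (push = '1', pop = '0') using the standard stack-based scan.
    Arrays with equal encodings answer every RMQ identically, which
    is what lets the Four Russians technique share lookup tables
    across blocks."""
    stack = []      # indices on the rightmost spine, values increasing
    bits = []
    for i in range(len(arr)):
        # Pop everything larger than arr[i]; it becomes i's left subtree.
        while stack and arr[stack[-1]] > arr[i]:
            stack.pop()
            bits.append("0")
        stack.append(i)
        bits.append("1")
    return "".join(bits)
```

With this encoding, each small block of the array gets a short "type" string, and blocks of the same type can share one precomputed RMQ answer table.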
