Timeline of CS103 Results


This page provides a timeline of some of the major results from CS103, hopefully offering you a glimpse of how everything we've explored this quarter was developed and discovered. This timeline is surely incomplete and possibly incorrect in places, so please feel free to reach out to me with corrections or additions!

  • c. 1550 BCE: Ahmose, an Egyptian scribe, composes what’s now called the Rhind Mathematical Papyrus, which includes, among other things, a table of fractions of the form $\frac{2}{n}$ expressed as sums of unit fractions. This gives rise to the term Egyptian fractions, which you saw on the Optional Fun Problem for Problem Set Five.
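As an aside for the curious: one simple way to produce such a decomposition is the greedy method, often attributed to Fibonacci and later analyzed by Sylvester, which repeatedly subtracts the largest unit fraction that still fits. A minimal sketch (the function name is mine, not from the papyrus or the problem set):

```python
from fractions import Fraction

def egyptian_fractions(r):
    """Greedily write 0 < r < 1 as a sum of distinct unit fractions by
    repeatedly subtracting the largest unit fraction not exceeding r."""
    terms = []
    while r > 0:
        d = -(-r.denominator // r.numerator)  # ceil(1/r) via ceiling division
        terms.append(Fraction(1, d))
        r -= Fraction(1, d)
    return terms

print(egyptian_fractions(Fraction(2, 7)))  # [Fraction(1, 4), Fraction(1, 28)]
```

For $\frac{2}{7}$, the greedy method gives $\frac{1}{4} + \frac{1}{28}$; the papyrus's own table entries don't always agree with the greedy choice, though.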
  • c. 350 BCE: Aristotle discusses several modes of logical syllogism in Prior Analytics, considered by many to be the first treatise on formal logic. The four Aristotelian forms were first described in this work.
  • 1202: Leonardo of Pisa, better known as Fibonacci, publishes Liber Abaci, introducing the Hindu-Arabic numeral system to the Western world. The book also included a puzzle about population growth rates, whose solution is the Fibonacci sequence.
  • 1624: The first recorded instance of the pigeonhole principle appears in Récréation mathématique (probably written by Jean Leurechon). The example provided is an argument that there must be two people with the same number of hairs on their heads.
  • 1654: Although the general technique had been in use for some time, mathematical induction is used in its first modern form in Blaise Pascal’s Traité du triangle arithmétique, which also introduced Pascal’s triangle and binomial coefficients. (Take CS109 for details!)
  • 1735: Leonhard Euler lays the foundations for graph theory in his work on the Seven Bridges of Königsberg, asking whether it's possible to take a walk through the city that crosses each bridge exactly once.
  • 1770: Joseph Louis Lagrange proves the four-square theorem: every natural number can be written as the sum of four square numbers. You saw a special case of this on the first problem set.
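For small numbers, the four-square theorem is easy to check by brute force. A throwaway sketch (the function name and search bounds are my own, not Lagrange's):

```python
from itertools import product

def four_squares(n):
    """Search, in lexicographic order, for nonnegative integers (a, b, c, d)
    with a^2 + b^2 + c^2 + d^2 == n; Lagrange guarantees one exists."""
    r = int(n ** 0.5)
    for quad in product(range(r + 1), repeat=4):
        if sum(x * x for x in quad) == n:
            return quad

print(four_squares(7))  # (1, 1, 1, 2)
```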
  • 1798: Surveying millennia of work on modular congruences and adding in many of his own results, Carl Friedrich Gauss publishes Disquisitiones Arithmeticae, introducing the $\equiv$ symbol for modular congruence and giving a formal definition used from that point onward.
  • 1854: George Boole publishes An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, outlining a mathematical system for formal logic. His approach was based on representing logical notation as arithmetical statements involving addition and multiplication and ultimately gave rise (after some later modifications) to propositional logic.
  • 1874: Georg Cantor proves that $\abs{\naturals} \ne \abs{\reals}$ in his paper Über eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen. Interestingly, this paper did not primarily focus on cardinality arguments and instead looked at the algebraic irrational numbers (irrational numbers that are the roots of polynomials with integer coefficients) and the transcendental irrational numbers (irrational numbers that are not algebraic). The proof Cantor outlined in this paper is not his famous diagonal argument, but rather a more technical argument involving nested sequences of intervals.
  • 1879: Gottlob Frege publishes Begriffsschrift (“Concept Script”), a formal notation for logical reasoning that is considered the precursor of first-order logic and other modern logical systems. Frege's system was based on implications and negations only and contained one of the earliest references to quantifiers.
  • 1880: John Venn publishes On the Diagrammatic and Mechanical Representation of Propositions and Reasonings, containing a detailed catalog of ways to use overlapping regions to describe how different logical statements overlapped with one another. He referred to them as “Eulerian Circles,” but they’ve since been called Venn diagrams in honor of this paper.
  • 1888: Giuseppe Peano publishes Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann, introducing the symbols $\cup$ and $\cap$ for union and intersection. Yes, you read that right – people had worked out that different orders of infinities exist before there was a standardized notation for union and intersection.
  • 1891: Georg Cantor introduces the diagonal argument in his paper Über eine elementare Frage der Mannigfaltigkeitslehre. His paper proved that there are more functions from a set $S$ to the set $\Set{0, 1}$ than there are elements of $S$. Although Cantor didn't state it as such, this proof is essentially the proof of Cantor's theorem (treat each function $f : S \to \Set{0, 1}$ as an “indicator function” describing a particular subset of $S$: elements mapping to $0$ are not part of the subset of $S$ and elements mapping to $1$ are a part of the subset of $S$.)
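The diagonal construction is concrete enough to run, at least in the finite setting: given any attempted list pairing each index with a subset, flipping membership along the "diagonal" produces a subset the list misses. A hypothetical sketch (names are mine):

```python
def diagonal_set(subsets):
    """Given a list of n subsets of {0, ..., n-1}, one per index, return the
    set containing i exactly when the i-th subset does not. By construction,
    the result differs from every subset on the list."""
    return {i for i in range(len(subsets)) if i not in subsets[i]}

attempt = [set(), {0, 1}, {2}, {0, 1, 2, 3}]
print(diagonal_set(attempt))  # {0}
```

No matter what list you start with, the diagonal set disagrees with the $i$th entry on element $i$, which is exactly Cantor's argument that no pairing of $S$ with $\powerset{S}$ can be onto.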
  • 1891: Julius Petersen publishes Die Theorie der regulären graphs, outlining a number of foundational results about graphs and graph theory. This paper contains what might be the first proof that the bipartite graphs are precisely those graphs that do not contain odd-length cycles.
  • 1908: Ernst Zermelo publishes Untersuchungen über die Grundlagen der Mengenlehre, introducing the idea that sets could be defined via a small number of axioms describing their behavior. This paper introduced the now familiar notation of describing sets using curly braces and contains the first reference to Cantor’s theorem under that name.
  • 1917: David Hilbert distinguishes first-order logic from other sorts of logical systems and presents it in a class at the University of Göttingen.
  • 1928: David Hilbert poses the Entscheidungsproblem (“decision problem”), asking whether there is a mechanical procedure by which mathematical theorems can be automatically proven or disproven, setting up a challenge to define what a “mechanical procedure” means and how one might build such a procedure for automated theorem proving.
  • 1928: Bronislaw Knaster and Alfred Tarski publish a proof that any monotone function from $\powerset{A}$ to $\powerset{A}$ has a fixed point.
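Knaster and Tarski's theorem only asserts that a fixed point exists, but when $A$ is finite you can actually compute the least one by iterating from the empty set (a Kleene-style iteration, not Knaster and Tarski's original proof): monotonicity makes the iterates an increasing chain, which must stabilize. A sketch with an example $f$ of my own choosing:

```python
def least_fixed_point(f):
    """Iterate a monotone f : P(A) -> P(A) starting from the empty set.
    Since {} is a subset of f({}), monotonicity makes the iterates an
    increasing chain, so for finite A the chain stabilizes at the least
    fixed point. (For non-monotone f this loop may never terminate.)"""
    current = frozenset()
    while True:
        nxt = frozenset(f(current))
        if nxt == current:
            return current
        current = nxt

# Example: f(S) = {0} united with {x + 1 : x in S, x + 1 < 5} is monotone.
print(sorted(least_fixed_point(lambda s: {0} | {x + 1 for x in s if x + 1 < 5})))
# [0, 1, 2, 3, 4]
```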
  • 1930: Frank Ramsey publishes On a Problem of Formal Logic, giving an algorithm for a restricted case of the Entscheidungsproblem. A key step in his result was a theorem that says that for any number of colors $n$ and any numbers $c_1, c_2, \dots, c_n$, there is some $k$ such that any $k$-clique whose edges are each given one of the $n$ colors must have a $c_1$-clique of the first color, a $c_2$-clique of the second color, $\dots$, or a $c_n$-clique of the $n$th color. This theorem, later called Ramsey's Theorem, contains the Theorem on Friends and Strangers as a special case and launched much of modern combinatorics.
  • 1935: Gerhard Gentzen publishes Untersuchungen über das logische Schließen, which, among other things, introduces the symbol $\forall$ as the universal quantifier. Previously, authors had used a bunch of other symbols, such as $\Pi$ or $(x)$, to indicate universal quantification. The existential quantifier $\exists$ had been introduced at least thirty years before this. (Note that first-order logic was first formally explored about fifteen years before this!)
  • 1936: Oh, what a year…
    • Alonzo Church publishes An Unsolvable Problem of Elementary Number Theory, outlining the $\lambda$-calculus (a formalism for describing computation through recursive functions) as a model of computation and using a diagonal argument to find a particular function that cannot be computed by any $\lambda$-calculus formula. Church concluded that the Entscheidungsproblem is therefore undecidable.
    • Alan Turing, Church’s doctoral student, publishes his landmark paper On Computable Numbers, with an Application to the Entscheidungsproblem, introducing the Turing machine, the notion of TM encodings, the universal Turing machine, the undecidability of the halting problem, the equivalence of $\lambda$-calculus and Turing machines, and the undecidability of the Entscheidungsproblem.
    • Emil Post, working independently of Church and Turing, publishes Finite Combinatory Processes - Formulation 1, a paper outlining a model of general-purpose computation that shares many similarities with Turing's TMs. Post's formulation of TMs bears a bit more resemblance to the version of TMs we use in this class.
  • 1937: Lothar Collatz poses the Collatz Conjecture.
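The conjecture asserts that repeatedly applying the map $n \mapsto n / 2$ (for even $n$) or $n \mapsto 3n + 1$ (for odd $n$) eventually reaches $1$ from every positive integer. It remains open to this day, but it's easy to experiment with; a quick sketch (the function name is mine):

```python
def collatz_steps(n):
    """Count applications of the Collatz map (n/2 for even n, 3n + 1 for
    odd n) needed to go from n down to 1. Termination for every n >= 1 is
    exactly what the conjecture asserts; it is unproven!"""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111
```

Starting from 27 is a classic example: the sequence climbs past 9,000 before finally falling back to 1 after 111 steps.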
  • 1938: Stephen Cole Kleene publishes On Notation for Ordinal Numbers, which included as a technical lemma the result that would later be called Kleene's Second Recursion Theorem, establishing that computational models equal in power to Turing machines can compute functions on their own source code.
  • 1951: Stephen Cole Kleene invents the regular languages in an internal paper at the RAND corporation (this paper was republished in 1956 as Representation of Events in Nerve Nets and Finite Automata). This paper introduced the Kleene star operator, the regular languages, the regular expressions, and a theorem that proved that any type of finite automaton whose behavior depends only on its previous state must be equivalent in power to the regular expressions. (Note that at this point, our notions of DFAs and NFAs had not yet been invented.)
  • 1953: Two fun results about unavoidable structures in graphs:
    • H. G. Landau, a mathematical sociologist, publishes On dominance relations and the structure of animal societies: (III) The condition for a score structure, introducing the concept of a tournament champion and proving that every tournament has a tournament champion. His original work was on the structure of pecking orders in flocks of chickens. (I'm not kidding. Look it up.)
    • R. E. Greenwood and A. M. Gleason write Combinatorial Relations and Chromatic Graphs, proving that any $\ceil{e \cdot n!}$-clique whose edges are colored one of $n$ different colors has a monochrome triangle.
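Since $\ceil{e \cdot 2!} = 6$, the $n = 2$ case of Greenwood and Gleason's bound is exactly the Theorem on Friends and Strangers, and it's small enough to verify exhaustively. A brute-force sketch (function names are mine):

```python
from itertools import combinations

def has_monochrome_triangle(n, color):
    """Does some triangle of K_n have all three edges the same color?
    color maps each edge {a, b} (as a frozenset) to its color."""
    return any(color[frozenset((a, b))] == color[frozenset((b, c))]
               == color[frozenset((a, c))]
               for a, b, c in combinations(range(n), 3))

edges = [frozenset(e) for e in combinations(range(6), 2)]  # the 15 edges of K6
# Exhaustively check every one of the 2^15 two-colorings of K6's edges.
assert all(
    has_monochrome_triangle(6, {e: (mask >> i) & 1 for i, e in enumerate(edges)})
    for mask in range(1 << 15))
print("every 2-coloring of K6 has a monochrome triangle")
```

The bound is tight: coloring $K_5$'s edges by "pentagon" versus "pentagram" gives a two-coloring with no monochrome triangle.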
  • 1954: A group of French mathematicians working under the pseudonym “Nicolas Bourbaki” introduce the terms “injection,” “surjection,” and “bijection” in their highly influential textbook Théorie des ensembles. The Bourbaki group has since had a profound influence on the standard mathematics curriculum.
  • 1956: Kurt Gödel writes a letter to John von Neumann discussing the inherent computational complexity of theorem proving. The letter contains the idea that if there is a way of checking mechanically whether a short proof exists for some result, it would be possible to automate much of mathematics. This idea is essentially a primordial version of the $\plangs$ versus $\nplangs$ question, as the problem “does this theorem have a proof of length at most $n$?” can be reframed as an $\nplangs$-complete problem.
  • 1957: Two milestones:
    • Noam Chomsky invents context-free grammars in his book Syntactic Structures. At the time, he called them phrase-structure grammars and explored them in the context of modeling the syntax of natural languages.
    • Hao Wang publishes A Variant to Turing’s Theory of Computing Machines, introducing the B-machine. The version of TMs we’ve used in CS103 is heavily influenced by this work.
  • 1958: Anil Nerode publishes Linear Automaton Transformations, outlining what is now called the Myhill-Nerode theorem. His result was stated in terms of a different class of automata than the ones we saw (DFAs and NFAs had not yet been published).
  • 1959: Michael Rabin and Dana Scott publish Finite Automata and their Decision Problems, formally describing finite automata. Their paper invented DFAs and NFAs (and, in fact, invented the entire idea of nondeterministic computation), proved the Myhill-Nerode theorem (giving credit to Anil Nerode for his 1958 paper and to John Myhill for developing the more modern version of the theorem), demonstrated the equivalence of DFAs and NFAs via the subset construction, and explored the equivalence of finite automata and regular expressions. Rabin and Scott eventually won the Turing Award, the equivalent of the Nobel Prize for computer science, for this work.
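The subset construction itself fits in a few lines. Here's a minimal sketch for NFAs without ε-moves, using a dictionary-of-sets representation of my own choosing rather than Rabin and Scott's notation; each DFA state is a frozenset of NFA states:

```python
from collections import deque

def subset_construction(nfa_delta, nfa_start, nfa_accepting):
    """Convert an NFA without epsilon-moves into an equivalent DFA.
    nfa_delta maps (state, symbol) -> set of states. DFA states are
    frozensets of NFA states, discovered breadth-first from the start."""
    alphabet = {sym for (_, sym) in nfa_delta}
    start = frozenset({nfa_start})
    delta, accepting = {}, set()
    queue, seen = deque([start]), {start}
    while queue:
        S = queue.popleft()
        if S & nfa_accepting:          # accept if any NFA state accepts
            accepting.add(S)
        for sym in alphabet:
            T = frozenset(q for s in S for q in nfa_delta.get((s, sym), ()))
            delta[(S, sym)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return delta, start, accepting

# Example: an NFA for binary strings ending in "01".
nfa = {(0, '0'): {0, 1}, (0, '1'): {0}, (1, '1'): {2}}
delta, start, accepting = subset_construction(nfa, 0, {2})
```

Running the resulting DFA just means following delta from start one symbol at a time and accepting when the final frozenset intersects the NFA's accepting states.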
  • 1963: Janusz Brzozowski and Edward McCluskey publish Signal Flow Graph Techniques for Sequential Circuit State Diagrams, introducing the state elimination algorithm.
  • 1965: Three major papers were published that formed the basis for complexity theory:
    • Alan Cobham publishes The Intrinsic Computational Difficulty of Functions, outlining polynomial-time decidability as a criterion for efficient computation.
    • Jack Edmonds publishes Paths, Trees, and Flowers, giving a polynomial-time algorithm for maximum matching and arguing that efficient computation means polynomial-time computation. (This paper and the previous paper are the reason for the terminology “Cobham-Edmonds thesis.”)
    • Juris Hartmanis and Richard Stearns publish On the Computational Complexity of Algorithms, formally defining time complexity for Turing machines and proving the time hierarchy theorem.
  • 1968: Ken Thompson publishes Regular Expression Search Algorithm, describing a construction for converting regular expressions into NFAs. This paper is one of the first to describe efficient regular expression matching algorithms. (Ken Thompson is more famous for co-creating UNIX along with Dennis Ritchie, the inventor of the C programming language.)
  • 1971: Stephen Cook publishes The Complexity of Theorem-Proving Procedures, giving the first proof that SAT is $\nplangs$-complete. Cook's proof uses a slightly different definition of reducibility (the Turing reduction) than the type of reduction typically used nowadays for polynomial-time reducibility (the mapping reduction). Technically, Cook proved that the problem of checking whether a formula is a tautology is $\nplangs$-complete with respect to Turing reductions. (Under the standard definition of $\nplangs$-completeness, the tautology problem is known to be co-$\nplangs$-complete, and it's an open problem whether it is also $\nplangs$-complete.) Cook also proved that another problem, subgraph isomorphism, is $\nplangs$-complete. Stephen Cook later won a Turing Award for this work.
  • 1972: Richard Karp publishes Reducibility among Combinatorial Problems, giving 21 examples of “natural” $\nplangs$-complete problems and launching the exploration of the $\plangs$ vs $\nplangs$ problem. Karp's paper is actually quite accessible if you have a background in the definitions of $\plangs$, $\nplangs$, and $\nplangs$-completeness. Karp later won a Turing Award, in part because of this paper.
  • 1973: Working in the Soviet Union, Leonid Levin publishes Универсальные задачи перебора (“Universal Search Problems”), proving that SAT, exact cover, subgraph isomorphism, and a few other problems are $\nplangs$-complete. He also proved that these problems have an interesting property: for each problem, there is a single optimal algorithm that can only be improved upon by a constant factor, even though it's unclear exactly what those algorithms are. Although Levin's work was published in 1973, he apparently had worked out many of the results several years earlier. The theorem that SAT is $\nplangs$-complete is typically attributed to both Cook and Levin, even though the two never worked together and neither was aware of the other's work.
  • 1975: Theodore Baker, John Gill, and Robert Solovay publish Relativizations of the $\plangs = \nplangs$ Question, proving a result implying that proofs based purely on universality and self-reference cannot resolve $\plangs$ vs $\nplangs$.
  • 1979: Douglas Hofstadter publishes Gödel, Escher, Bach: An Eternal Golden Braid, coining the term “quine” for self-referential programs. This book is also the source of the MU puzzle we saw when exploring induction for the first time.
  • 1993: Stephen Yablo publishes Paradox without Self-Reference, introducing what is now called Yablo's Paradox.
  • 2000: The Clay Mathematics Institute announces the Millennium Prize Problems, seven open mathematical problems, establishing a million-dollar bounty for resolving the $\plangs$ vs $\nplangs$ problem.
  • 2019: Andrew Booker finds concrete integers $x$, $y$, and $z$ where $x^3 + y^3 + z^3 = 33$, settling a case of the sum-of-three-cubes problem that had been open for decades.