Since you asked ... An Abbreviated Personal History

Preamble: Every year I schedule time near the end of class for open discussion on whatever topics students want to talk about. At this late stage in the quarter, everyone is busy working on their final projects and we've been meeting in small groups to discuss project-related issues. The conversation in the open discussion inevitably turns to questions about how I got interested in neuroscience. I'm generally pretty open and I've found that students appreciate my candor and willingness to discuss details of my early education and crooked path forward from the time I was their age and in the midst of dealing with the same sort of difficult choices they are wrestling with. Several students over the years have urged me to write an account of my experience. This document is an initial attempt to do just that.

In the 1960s when I was in high school, the world was somewhat like it is today. John Kennedy, a charismatic and forward-looking president who inspired the country to shoot for the moon, had been assassinated. Richard Nixon, who would turn out to be a corrupt politician and narrowly escaped impeachment for his role in the Watergate scandal1, was in the White House, and the United States was embroiled in a war in Southeast Asia that it couldn't win. I was completely bored with high school and for the most part blew off all of my classes. I played guitar in a band, smoked marijuana, hung out with SDS protesters and hippies while working the late shift as a receptionist in a church rectory in Southeast Washington, DC, and tried to pretend I had a future as a musician and itinerant philosopher and writer.

Somehow I graduated from high school and ended up at Marquette University majoring in journalism. I wasn't interested in what was being offered and so I dropped out midway through the first semester and spent a year or so hitchhiking around the country, living in communes and camping out in the mountains near Taos, New Mexico reading strange books and taking hallucinogenic drugs. When I got called up for the draft, I went before the local draft board and argued the case for my being designated as a conscientious objector. Then, in a fit of what I can only explain as temporary insanity, I showed up at the Army Recruiting Command in Baltimore where I told the officer in charge that if they tried to send me to Vietnam I'd shoot myself in the foot; he sent me to a military psychiatrist who asked a few questions and then dismissed me with a wink and a nod as unfit for service with a 4F draft designation2.

By the early 70s, my wife Jo and I are living in a log cabin in Bumpass, Virginia on a 50-acre farm that I purchased with money I earned as a paperboy in my early teens. We are building an extension of the cabin with modern conveniences including indoor plumbing and insulated walls. The basic structure is a two-story octagon patterned roughly after Thomas Jefferson's private retreat located in Bedford, Virginia at some remove from his main residence at Monticello near Charlottesville. Most of the lumber we picked up at a junkyard in Washington DC that dismantled the temporary viewing stands built for Richard Nixon's inauguration as the country's 37th president. When not building houses, I made money as an apprentice craftsman at a shop in Louisa, Virginia specializing in antique reproductions. I learned to carve the distinctive ball-and-claw cabriole legs for Chippendale tables and chairs and sculpted statues out of large blocks of oak and walnut, some of which I was actually able to sell to collectors.

Figure 1:  Tom and Jo building our first house — a two-story octagon extension of an old pine-bark-beetle-infested log cabin — in Bumpass, Virginia.

Fast forward to the late 70s and we are living in Big Island, Virginia and building another custom house — also an octagon — using the money we made from selling the Bumpass farm. We own a 70-acre slice of Squirrel Mountain and are temporarily living with two cats and a dog in an eight-by-eight-by-twelve plywood box covered with a tarpaulin on the edge of a lake with a spectacular view of the Blue Ridge Mountains. The bottom has just dropped out of the market for vacation homes in the greater Washington area and all of the money we had left after purchasing the property is tied up in a huge pile of lumber and building materials sitting next to the building site with its half-finished foundation and retaining walls. The July 3, 2011 entry in this blog post includes a related anecdote featuring a pickup truck and a dairy tanker truck meeting on a narrow country road.

Figure 2:  Topographical map of an area near Big Island, Virginia in the Blue Ridge Mountains where we purchased a 70-acre lot that extended from the base to the peak of Squirrel Mountain. We bought the property to build a spec house just before the bottom dropped out of the second-home / vacation-house market.

By this time, our lifestyle is wearing thin; we stand to lose money even if we complete the house and sell it and the rest of the property. Jo picks up some brochures in the Lynchburg library advertising interesting courses and affordable tuition at the local community college. To make a long story short, we enroll in classes, I start taking courses in electrical engineering, begin breadboarding my own circuits and get a chance to experiment with my first microprocessor3. I find out that the incredibly useful BugBook series we've been using as a textbook was written by faculty in the Chemistry Department at Virginia Tech which is only fifty miles away in Blacksburg, Virginia.

Figure 3:  The Honeywell 6000 series of mainframes had an address space on the order of 10MB and were capable of around 2 MIPS (million instructions per second) — for comparison, your laptop probably has more than 10GB of memory and is capable of around 100 GFLOPS (billions of floating-point operations per second, each of which requires multiple instructions to carry out). Seems pretty lame compared with modern hardware, but not too shabby when you compare with the ENIAC of thirty years earlier4.

At some point my mathematics professor tells me that I'm good at math and should apply to Virginia Tech. We find out about Pell Grants and within a year and a half, we have finished the house, sold the property, bought a house in Blacksburg and I'm taking advanced courses in math and writing robot planning programs in Lisp on a Honeywell 6000 series mainframe running Multics — one of the first time-sharing operating systems. In the 1950s, mainframes were still relying on vacuum-tube logic circuits — often in combination with early transistors — but by the late 70s most mainframes were solid state.

Figure 5:  Tom and Chessie in our apartment in New Haven, Connecticut while in graduate school at Yale University. Chessie had been with us since Bumpass. She was a skilled hunter, didn't mind snow or cold temperatures and could fend for herself against wild dogs, raccoons and other small predators. She loved living on Squirrel Mountain, tolerated Blacksburg and hated New Haven.

I finished my PhD at Yale University in 3.5 years5 and started teaching and advising graduate students at Brown University as an assistant professor before the signatures on my thesis were dry. Over the next twenty years, I wrote four books and published about 150 technical papers in robotics, automated planning, Bayesian networks, computer vision and theoretical and applied machine learning among other topics. I also served as department chair for five years and as deputy provost for two more. Taking on the administrative positions was a mistake for me. I wasn't constitutionally well suited for academic administration. I was disappointed with where AI was going and ready for something new, but herding faculty wasn't it.

Figure 6:  I wrote four books between 1986 when I arrived at Brown University and 2006 when I joined Google: An Approach to Reasoning about Time for Planning and Problem Solving — Dean [3] (1985), Planning and Control — Dean and Wellman [14] (1991), Artificial Intelligence: Theory and Practice — Dean et al [10] (1995), and Talking With Computers — Dean [5] (2004).

In 2005 while still in the provost's office, I met David Mumford, a Fields Medal winner in the applied mathematics department at Brown who had become interested in computer vision and neuroscience. I read his paper with Tai Sing Lee, now a professor at CMU in the Center for the Neural Basis of Cognition, entitled "Hierarchical Bayesian Inference in the Visual Cortex", and decided to implement their theoretical model by applying my knowledge of probabilistic graphical models. Two papers resulted — A Computational Model of the Cerebral Cortex — Dean [4] and Learning Invariant Features Using Inertial Priors — Dean [6] — and I decided to follow the advice of Herbert Simon who counseled that a scientist should switch fields every ten years.

Figure 7:  While chair of the CS department, every year we'd escape to Prince Edward Island off New Brunswick and Nova Scotia. We stayed at the same old hotel where the cost of a room included breakfast and dinner in the dining room and a bag lunch to take to the beach. In addition to swimming and hiking, I wrote chapters for Talking With Computers and did some hacking in Scheme combined with PHP and SQL to develop interactive webpages to accompany the book. This photo shows me after stepping down from department chair and before being talked into taking on the vice provost job.

Figure 8:  Ramón y Cajal's (1852–1934) drawings of neural networks were the most comprehensive, histologically accurate renderings of neural tissue available until well into the 20th century. They are still used today to illustrate the structure of certain specialized circuits for which we have little ground truth [19]. Mario Galarreta, who with Shaul Hestrin at Stanford showed [18, 17] that gap junctions were not just an intermediate stage in development but rather a critically important feature of adult organisms, turned me on to Cajal's extraordinary legacy; it was his fellow countryman Cajal who inspired him to study neurobiology in the first place. We also shared an interest in meditation for both its mental health benefits and its interesting psychophysical and neurological consequences.

Peter Norvig, whom I knew from the time he was at NASA Ames, invited me to spend my next sabbatical at Google working on computational neuroscience, and I jumped at the opportunity. Given that three of my previous sabbaticals were spent at Stanford, I arranged a visiting faculty appointment with the computer science department. Nils Nilsson met us at the San José airport when we arrived in the Bay Area in the first week of January 2006. After an unplanned six months spent working with Dileep George and Jeff Hawkins at Numenta, I started working at Google in September 2006 as visiting faculty and converted to full time the following September. I contributed to a variety of interesting projects over the next 12 years, including a project with Dean Gaudet and other engineers in the platforms team working to build the first prototype of the multi-GPU server blades that would be critical in training deep networks with billions of weights [26].

I hired a talented computer vision PhD from Brown University Engineering by the name of Yong Zhao and together we contributed to the development of both Google Glass (gaze tracking) and Android (on-device face recognition, GPU-accelerated games and computer graphics). Later Yong returned to China and launched a successful startup, Deep Glint, that specializes in developing computer vision applications. At one point, Peter and I met with Greg Corrado who had recently completed his PhD in neuroscience at Stanford working with Bill Newsome. While working toward his doctorate, Greg completed the requirements for a master's degree in computer science. I hired him to work on biologically inspired computer vision using artificial neural networks — Greg is now a Distinguished Scientist and Senior Research Director involved with Google Health.

Jon Shlens, who did his PhD with E.J. Chichilnisky, later joined the team and among other joint projects Jon and I contributed to a CVPR paper demonstrating how to scale object recognition to handle 100,000 object classes using a variation on locality-sensitive hashing [22, 21]. The paper was also my opportunity to work more closely with Jay Yagnik, a superb engineer and computer scientist — it was primarily some of Jay's earlier algorithmic ideas [34] that enabled the computational advances reported in the CVPR paper — as well as a talented manager who would be promoted to Vice President and Engineering Fellow leading large parts of AI at Google.
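A minimal sketch of winner-take-all (WTA) hashing, the flavor of rank-order hashing behind Jay's idea as I recall it. The function and parameters below are illustrative, not the paper's exact algorithm:

```python
import random

def wta_hash(x, k, num_hashes, seed=0):
    """Winner-take-all hash: for each hash, take the first k entries of
    a random permutation of the feature vector and record the index of
    the largest one. Codes depend only on rank order, not magnitudes."""
    rng = random.Random(seed)
    n = len(x)
    code = []
    for _ in range(num_hashes):
        perm = rng.sample(range(n), k)          # first k of a random permutation
        window = [x[i] for i in perm]
        code.append(window.index(max(window)))  # argmax within the window
    return code

# Vectors with the same rank ordering get identical codes, so matching
# hash bands serve as a cheap proxy for an expensive dot product.
a = wta_hash([0.1, 0.9, 0.3, 0.7, 0.2, 0.5], k=3, num_hashes=8)
b = wta_hash([0.0, 1.0, 0.2, 0.8, 0.1, 0.6], k=3, num_hashes=8)
print(sum(int(x == y) for x, y in zip(a, b)), "of 8 bands agree")  # 8 of 8
```

Because the codes are small integers, matching classifier weights against image features reduces to hash-table lookups, which is what makes scanning 100,000 classes on one machine plausible.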

Figure 9:  One of the few published academic papers that I contributed to during my tenure at Google — Fast, Accurate Detection of 100,000 Object Classes on a Single Machine — Dean et al [13] — won the Best Paper Award at CVPR in 2013. Like most of the papers published by Google engineers and research scientists, it was a team effort from start to finish — several of the team members, including Jon Shlens and myself, are shown here accepting the award from Bill Freeman, then at MIT.

I was also working on a project related to what would become Google Assistant, but focusing on a new method for dialogue management that viewed language understanding and comprehension in terms of hierarchical goal-based planning — Interaction and Negotiation in Learning and Understanding Dialog — Dean [7]. The key idea was that rather than treating misunderstanding (thinking the user said one thing when he said another) and non-understanding (not having a clue what the user was talking about) as problems to be overcome, it makes more sense to think of such events as opportunities to learn something and a natural part of understanding that becomes essential when an agent trying to understand has a limited language comprehension facility6.

During the same period, I worked with Sebastian Thrun in helping to hire Andrew Ng as a part-time visiting faculty and contributed to starting Google Brain in [x], ultimately encouraging both Greg and Jon to join the Google Brain group since I was set on starting a new team to reconstruct the neural-network wiring diagrams — connectomes — of real brains. My involvement with structural connectomics began in 2013 with a chance encounter with Christof Koch, Clay Reid and Terry Sejnowski at a meeting organized by the Kavli Foundation [30] at Caltech in Pasadena, California7.

Christof was at the time — and still is as I write this note — the president and chief scientist of the Allen Institute for Brain Science in Seattle, and he and I talked about the engineering challenges facing the field of neuroscience in the coming decade, and about how Google and AIBS might collaborate on a project that leveraged our respective strengths. Clay Reid whom I also met at the Kavli symposium had recently moved from Harvard to head up a large group of researchers at AIBS working on scalable connectomics. Together we started thinking about how to convince Google to create an engineering team focused on automating the process of reconstructing neural circuitry.

I had been teaching my course on computational neuroscience in the CS department at Stanford since I arrived in the Bay Area in 2006. In 2013 the class set out to identify some of the major computational challenges facing research in neuroscience in the coming decade that might be accelerated by developing better technology. My students and I wrote a paper with an emphasis on predicting when key neural imaging and recording technologies would mature to an extent that they would benefit from the scale of computing that Google had to offer — On the Technology Prospects and Investment Opportunities for Scalable Neuroscience — Dean et al [9] (PDF).

One prediction was that structural connectomics at scale would benefit from new developments in electron microscopy spurred by the semiconductor industry. With help from Christof and other scientists whom I met through connections with the Kavli Foundation and the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, I managed to get approval and staffing from then-SVP for Engineering Alan Eustace to start a pilot project at Google. I already knew whom I wanted to hire as the technical lead for the project.

Viren Jain did his PhD with Sebastian Seung at MIT, spent time at Max Planck working with Winfried Denk and then ran a team working on structural connectomics at the Janelia Campus of the Howard Hughes Medical Institute. I hired Viren to lead a team at Google that we christened Neuromancer in homage to William Gibson's science fiction novel of the same name. Gerry Rubin, Viren's boss and HHMI Vice President and current Executive Director at Janelia, was sorry to see Viren leave Janelia but happy to have him as a future collaborator with access to engineering talent and computing power at Google that Janelia couldn't match. As Gerry predicted, that collaboration flowered over the intervening years.

Figure 10:  On the right is a photo of the unassuming — it looks like a refrigerator — Zeiss MultiSEM 505/506 line of high-speed multi-beam scanning electron microscopes employing as many as 91 parallel electron beams, which together are capable of imaging centimeter-size samples at nanometer resolution using a technology that has the potential to scale to even larger samples. On the left is an electron micrograph of a fixed, serial-sectioned mouse brain tissue sample.

Peter Li, who also got his PhD working with E.J., joined the team a year later. Art Pope joined us from Google Maps, contributing his broad knowledge of computer vision and Google infrastructure to take on the challenge of stitching together the millions of small patches that have to be imaged separately given the field of view of current electron microscopes. Michal Januszewski in the Zurich office joined the team after impressing everyone with his contributions volunteering his 20% time. Soon it was abundantly clear that Viren was a natural leader and talented engineer and, with my enthusiastic encouragement, he took over the joint roles of technical lead and project manager for Neuromancer.

Figure 11:  Improving connectomics by an order of magnitude: Automated Reconstruction of Zebra Finch Area X with Flood-filling Networks — Januszewski et al [23] (VIDEO) (POSTING)

Neuromancer has continued to enable great science by developing novel machine learning architectures and creating infrastructure to handle larger tissue samples. Both the infrastructure and new algorithms have proved useful for other image processing applications in medicine and satellite reconnaissance. The team developed a novel neural network architecture [24], building on Michal's initial conception, that can be trained to segment EM images of neural tissue with unprecedented accuracy, and optimized the software to take full advantage of acceleration hardware in Google datacenters, including the latest GPU and TPU technology. In a 2018 paper appearing in Nature Methods, they applied the new architectures and infrastructure to segment a region of the zebra finch brain called Area X, likely involved in the acquisition of new songs, improving on the prior state of the art by an order of magnitude [23].

Figure 12:  The largest synapse-resolution map of brain connectivity: A Connectome of the Adult Drosophila Melanogaster Central Brain — Xu et al [33] (VIDEO) (POSTING)

The team is also partnering with HHMI Janelia and a consortium of scientists from seven other institutions on the largest synapse-resolution map of brain connectivity to date [33]. The complete reconstruction of one hemisphere of a fly brain may seem like a rather modest achievement, but Drosophila melanogaster is an important experimental animal model in that roughly 60 percent of the fly's genes can also be found in humans in a similar (homologous) form, and fruit flies grow to maturity quickly and exhibit complex behaviors that can be used as markers in preliminary trials studying the dosage, toxicity, and efficacy of drugs for possible human use. A complete reconstruction of a mouse brain is considerably more challenging — as one measure of complexity, the data required for whole-mouse-brain reconstruction could easily exceed an exabyte.
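The exabyte figure is easy to sanity-check with a back-of-envelope calculation. The brain volume and voxel dimensions below are representative assumptions, not the specs of any particular imaging pipeline:

```python
# Back-of-envelope estimate of raw image data for a whole mouse brain.
# Assumed values (illustrative): ~500 mm^3 of tissue imaged as
# 4 x 4 x 40 nm voxels at one byte per voxel.
brain_volume_mm3 = 500
voxel_nm3 = 4 * 4 * 40                       # volume of one voxel in nm^3
nm3_per_mm3 = 1_000_000 ** 3                 # 1 mm = 10^6 nm
total_voxels = brain_volume_mm3 * nm3_per_mm3 / voxel_nm3
bytes_total = total_voxels                   # 1 byte per voxel
print(f"{bytes_total / 1e18:.2f} exabytes")  # prints "0.78 exabytes"
```

Under these assumptions the raw data alone approaches an exabyte before any alignment, segmentation, or derived data products are stored, which is why whole-mouse-brain connectomics is as much a systems problem as a biology problem.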

At one point, I got interested in alternative approaches to connectomics. Tony Zador at Cold Spring Harbor was working on a method to convert the problem of tracing connections in the brain to one of reading off the connectome using high-throughput DNA sequencing [28, 35]. Around the same time, Ed Boyden was developing a new approach to scalable superresolution microscopy using conventional (diffraction-limited) microscopes called expansion microscopy (ExM) [2]. Working with Boyden's lab, I devised a protocol using ExM-like technology to transform brain tissue into three-dimensional integrated circuits — interesting, but not practical.

Four years after starting Neuromancer I felt that it was time to start developing machine learning technology to drive functional connectomics. At the time, the state of the art in recording from neural circuits in awake behaving mice, flies and juvenile zebrafish was on the order of 10,000 neurons at around 10–20 frames per second. That was not yet on a scale that would require the sort of industrial-strength programming and computing infrastructure that Google was uniquely situated to provide, but the hope was that by the time we could put together a team and enlist partners at some of the labs we were working with on connectomics, state-of-the-art recording would have increased by at least an order of magnitude.

I spent a year working on a proposal and writing a white paper outlining a class of recurrent neural network models that could feasibly reveal interesting functional insights — Inferring Mesoscale Models of Neural Computation — Dean [8] (PDF). I visited and gave lectures at UC Berkeley, HHMI Janelia, Princeton, Columbia, Harvard, MIT, University College London, Cambridge University and Lawrence Livermore National Laboratory's Center for Applied Scientific Computing. I solicited letters of support from well-known neuroscientists with expertise in developing functional models and collected more than a dozen strong letters.

Figure 13:  In this white paper [8] I argue that the time is ripe for building an intermediate or "mesoscale" computational theory that can bridge between single-cell (microscale) accounts of neural function and behavioral (macroscale) accounts of animal cognition and environmental complexity. Just as digital accounts of computation in conventional computers abstract away the non-essential dynamics of the analog circuits that implement gates and registers, so too a computational account of animal cognition can afford to abstract away the non-essential dynamics of neurons. The above graphic — to be explained in class, time permitting — highlights the steps required to implement and train a particular class of mesoscale models.

David Cox from Harvard, Dan Yamins from Stanford and Nicolas Pinto, then at Apple, agreed to serve on an ad hoc scientific advisory committee and participated in an internal review during which I pitched the idea of forming a functional-connectomics team either integrated with or complementing Neuromancer. Alas, all that effort was for naught. In hindsight, the proposed project was probably too early and too ambitious to expect that within two years we would be in a position to collect both structural and functional data on a scale that would warrant Google investing effort.

I was reluctant to give up on the idea, but eventually conceded that the review committee made the right call. As a footnote to this episode in my career, a team consisting of Neuromancer and researchers at the Max Planck Institute of Neurobiology in Martinsried, Germany, and the McGovern Institute for Brain Research (MIT) has used automated connectomic reconstructions to show that the synaptic architecture of songbird basal ganglia supports local credit assignment using a variant of the node perturbation algorithm [16] proposed in a model of songbird reinforcement learning8. Though preliminary, this is one of the first papers demonstrating how connectomic reconstructions at scale can provide fundamental, circuit-specific functional and algorithmic insights.

That brings us to the present day, or at least to the point in time when I reconsidered my prior work in neuroscience at the cellular level and decided to look at what the cognitive sciences had to offer. The turning point was in October 2017 at another Kavli Futures Symposium, this time in Santa Monica. The working title of the symposium was "Toward Next-Gen, Open-Source Neurotechnology Dissemination" and I gave a somewhat over-the-top presentation in which I suggested that at the rate AI is accelerating, perhaps we could save time in the long run by focusing effort on the things we could do to expedite and prepare for that eventuality.

There were a lot of questions at the end of my talk, mostly about how AI would figure in such a future and whether AI would ever be able to take on complex cognitive tasks like programming. I summarized the current state of the art in deep coding and talked briefly about cognitive prosthetics and pair programming with an AI. Adam Marblestone introduced me to Bryan Johnson whose company Kernel is developing advanced neural interfaces. Ed Boyden and György Buzsáki are on Kernel's advisory team, and Adam is also working for Bryan in an advisory capacity. I got a chance to catch up with Christof about his recent talk on AI and the future of humanity.

Bryan, Christof and I would meet up again a week later at a "silicon-style-salon" dinner in San Francisco honoring Christof, hosted by Boom Capital and Mubadala Ventures. Our conversation continued over the next few months and ended up inspiring me to write a manifesto on how AI might play a constructive, symbiotic role in the future of humans. I also decided to follow my own advice and design a syllabus on how such a future might come to pass, focusing on human-inspired AI systems. The following spring I taught CS379C with a new syllabus reflecting the theme of my Kavli talk. Following the end of the academic year, I worked with a subset of the students to write a paper summarizing what we learned: Amanuensis: The Programmer's Apprentice — Dean et al [11] (PDF).

The following year, 2019, the basic focus of the class remained the same but with more emphasis on how the cognitive neurosciences might provide useful insights for designing advanced end-to-end architectures. There was no time during the quarter to write a joint paper, and the class was much larger than in 2018, complicating the process. Instead I reached out to a few students whose projects were particularly relevant to the paper that I had in mind; we started working together at the beginning of last September and are still at it. You can find the current draft here: Biological Blueprints for Next Generation AI Systems — Dean et al [12] (PDF).

Restless and frustrated in my attempts to start new projects, I retired from Google in July of 2019, reestablished my connections with Brown University where I am now Professor Emeritus, and broadened my connections to Stanford where I am currently a visiting scholar at the Wu Tsai Neurosciences Institute and a lecturer in the Computer Science Department. My twelve years at Google were among the richest of my intellectual career. In addition to those already mentioned, I have enjoyed working with Samy Bengio, Matt Botvinick, Glenn Carroll, Gal Chechik, Jeff Dean, David Duvenaud, Dumitru Erhan, Sanjay Ghemawat, Geoff Hinton, Andrej Karpathy, Ray Kurzweil, Quoc Le, Dick Lyon, Kevin Murphy, Babak Parviz, Fernando Pereira, Vincent Vanhoucke, Luc Vincent, Oriol Vinyals, Chris Uhlik, Rich Washington and Greg Wayne.

Coda: I've continued to keep in touch with my colleagues in AI, even though I seldom publish in the traditional venues for the discipline. I have a long history with AAAI and other professional organizations, e.g., I chaired AAAI 91, was elected to the Executive Council of AAAI in 1993, was elected as a Fellow of AAAI in 1994 and ACM in 2009, was appointed to the Computing Research Association (CRA) Board in 1995, and served on the IJCAI Board of Trustees and chaired IJCAI 99, before burning out on public service. I'm interested in the ethical and social issues surrounding the development and deployment of AI systems and participated in the first Asilomar conference on the future of AI (see Figure 14), but declined to attend the second, more highly publicized meeting.

Figure 14:  Attending the AAAI Presidential Panel On Long-Term AI Futures Meeting held at Asilomar, Pacific Grove, February 21–22, 2009 (left to right): Michael Wellman, Eric Horvitz, David Parkes, Milind Tambe, David Waltz, Thomas Dietterich, Edwina Rissland (front), Sebastian Thrun, David McAllester, Margaret Boden, Sheila McIlraith, Tom Dean, Greg Cooper, Bart Selman, Manuela Veloso, Craig Boutilier, Diana Spears (front), Tom Mitchell, Andrew Ng.

I wrote a manifesto of sorts a few years ago that came about from discussions with a wide range of friends and colleagues including Ed Boyden, Yoshua Bengio, Robert Burton, Bryan Johnson, Christof Koch, Adam Marblestone, Sandy Pentland, Bart Selman, Rahul Sukthankar and Jaan Tallinn. None of them can be held responsible for any of the controversial comments I made in the paper, but they did inspire me to think carefully about a wide range of relevant social and technological perspectives. The result is but one way forward ... obviously not the only way, certainly not the best or most practical, but food for thought as you pursue your careers in the disciplines that might make such a future possible. The document is here and a citation including the abstract appears in this footnote9.

# References

1 On July 27, 1974, the House Judiciary Committee recommended that America's 37th president, Richard M. Nixon, be impeached and removed from office. The impeachment proceedings resulted from a series of political scandals involving the Nixon administration that came to be collectively known as Watergate. On August 9th, under pressure from congressional leaders and declining public support, Nixon resigned from office before the full House could vote on the articles of impeachment; Vice President Gerald Ford became president, and on September 8, 1974 Ford issued a full and unconditional pardon for any crimes that Nixon might have committed against the United States as president. (SOURCE)

2 4-F is the classification given to someone trying to join the army and indicating that he or she is "not acceptable for service in the Armed Forces" due to medical, dental, or other reasons. The term 4-F originated in the Civil War and was used to disqualify army recruits who did not have four front teeth with which to tear open gunpowder packages.

3 As a reference point, the microprocessor was an Intel 8008 8-bit CPU with a 16KB address space, fabricated with a 10 μm process and employing 3.5K transistors at a maximum clock speed of 200 kHz to 800 kHz. Compare the 8008 with more recent Intel chips such as the i7-8086K with 64-bit instructions, a 128GB address space, a 14 nm process, and 3B transistors at a maximum clock speed of 5.00 GHz.
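Working out the scaling factors from the figures quoted above makes the comparison concrete. These are order-of-magnitude ratios only, not benchmarks:

```python
# Rough scaling factors between the Intel 8008 (1972) and a recent
# Intel i7, using the figures quoted in the footnote above.
clock_ratio = 5.0e9 / 800e3          # 5 GHz vs 800 kHz peak clock
transistor_ratio = 3e9 / 3500        # ~3B vs ~3.5K transistors
process_ratio = 10_000 / 14          # 10 um vs 14 nm feature size
print(f"clock: ~{clock_ratio:,.0f}x, "
      f"transistors: ~{transistor_ratio:,.0f}x, "
      f"feature size: ~{process_ratio:.0f}x")
```

Roughly a 6,000-fold clock speedup and nearly a million-fold increase in transistor count, over about forty-five years of Moore's law.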

4 One characteristic of my teaching style is to suggest books on science, scientists and the history of science and technology to individual students who I think might particularly benefit from finding both insight and motivation in such accounts. Recently I suggested a book written by George Dyson [15] on the origin of the first fully programmable computers. While the book provides a wonderful account of the relevant history, I suggested the book for its detailed description of the challenges engineers had to overcome in wrestling with the analog circuitry of the day to achieve the digital abstraction that we take for granted, and the legacy [29] of John von Neumann's treatise entitled Probabilistic logics and the synthesis of reliable organisms from unreliable components [32] (PDF).

Figure 4:  George Dyson's Turing's Cathedral: The Origins of the Digital Universe [15] provides an excellent account of the origins of the first fully programmable digital computers as well as the lives of the people who created them, including John Mauchly and J. Presper Eckert of the University of Pennsylvania Moore School of Electrical Engineering, John von Neumann and a host of eccentric characters at the Institute for Advanced Study (IAS) in Princeton, many of whom had escaped persecution and death at the hands of the Germans in the time leading up to World War II. ENIAC (Electronic Numerical Integrator and Computer) built at the Moore School is shown on the left and the IAS machine is shown on the right.

5 Being a graduate student was the best job I ever had. My yearly stipend was more than the total amount I had earned in the previous decade building houses. For the first time in my life, I had health insurance. There were all sorts of cultural and technological perks to being a graduate student. I had what seemed at the time a powerful workstation on my desk. The law school students screened interesting foreign and independent films for one dollar admission. The student price for season tickets to the Yale School of Music concert series was only \$25 and featured some of the best classical musicians in the world. And all I had to do was study for the qualifying exams and do research.

6 An early project on digital assistants for music recommendation led to a new approach to dialogue management focused on error mitigation and recovery, in which we treated language understanding as a dialog the assistant engages in with a user to narrow down the meaning of the user's utterances sufficiently to provide value, e.g., to play music that the user enjoys. The project led to a number of related patents and a fruitful collaboration with Quoc Le, Oriol Vinyals and Shalini Ghosh [3120], but was eclipsed when effort was poured into Google Assistant with its focus on achieving nearer-term, less research-focused engineering goals:

@techreport{DeanDIACRITICAL-14,
title = {Interaction and Negotiation in Learning and Understanding Dialog},
author = {Thomas Dean},
year = {2014},
howpublished = {{\tt{https://web.stanford.edu/class/cs379c/resources/dialogical/zanax_DOC.dir/index.html}}},
abstract = {Interaction and negotiation are an essential component of natural language understanding in conversation. We argue this is particularly the case in building artificial agents that rely primarily on language to interact with humans. Rather than thinking of misunderstanding (thinking the user said one thing when he said another) and non-understanding (not having a clue what the user was talking about) as problems to be overcome, it makes more sense to think of such events as opportunities to learn something and a natural part of understanding that becomes essential when the agent trying to understand has a limited language understanding capability. Moreover, many of the same strategies that are effective in situations in which the agent's limited language facility fails also apply to the agent actively engaging the user in an unobtrusive manner to collect data and ground truth in order to extend its repertoire of services that it can render and to improve its existing language understanding capabilities. In the case of developing an agent to engage users in conversations about music, actively soliciting information from users about their music interests and resolving misunderstandings on both sides about what services the application can offer and what service in particular the user wants now is already a natural part of the conversation. Data collected from thousands or millions of users would provide an invaluable resource for training NLP components that could be used to build more sophisticated conversational agents.}
}
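The error-mitigation idea in the abstract, treating non-understanding as a prompt for negotiation rather than as a failure, can be caricatured in a few lines. This is a hypothetical sketch (the catalog, function names, and matching rule are all invented here), not the project's actual dialogue manager:

```python
# Toy negotiation loop: an ambiguous request yields several candidate
# interpretations, and a clarifying answer narrows them to one.
CATALOG = {
    ("jazz", "miles davis"): "Kind of Blue",
    ("jazz", "john coltrane"): "A Love Supreme",
    ("rock", "the beatles"): "Abbey Road",
}

def interpret(utterance):
    """Return the set of catalog keys consistent with the utterance."""
    words = utterance.lower()
    return {k for k in CATALOG if k[0] in words or k[1] in words}

def narrow(candidates, answer):
    """Use the user's clarifying answer to shrink the candidate set."""
    return {k for k in candidates if answer.lower() in k[0] or answer.lower() in k[1]}

cands = interpret("play some jazz")   # ambiguous: two jazz artists match
assert len(cands) > 1                 # non-understanding is not a dead end...
cands = narrow(cands, "coltrane")     # ...it prompts a clarifying question
(best,) = cands
print("Playing", CATALOG[best])       # → Playing A Love Supreme
```

Each round of clarification shrinks the space of interpretations until the agent is confident enough to act, which is the narrowing-down behavior the tech report describes.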


7 The 10th Kavli Futures Symposium was held on January 17, 2013 in the Millikan Boardroom at Caltech. It was entitled “Data Deluge for the Brain Activity Map” and its stated goal was to “creatively explore the large-scale data and modeling problems that need to be solved over the next decade as BAM (Brain Activity Map) matures” PDF. The discussions at this meeting and its companion symposia would lead to the creation of the BRAIN Initiative, announced by the Obama administration on April 2, 2013, with the goal of “supporting the development and application of innovative technologies that can create a dynamic understanding of brain function” [1]. My involvement would include attending meetings at the White House working with the Office of Science and Technology Policy (OSTP), a collaboration with the Allen Institute for Brain Science, and the creation of the Neuromancer project at Google.

8 ... Kornfeld et al. [25] ...

@article{KornfeldetalBIORXIV-20,
author = {Kornfeld, Joergen M and Januszewski, Micha{\l} and Schubert, Philipp Johannes and Jain, Viren and Denk, Winfried and Fee, Michale S},
title = {An anatomical substrate of credit assignment in reinforcement learning},
journal = {bioRxiv},
publisher = {Cold Spring Harbor Laboratory},
year = {2020},
abstract = {Learning turns experience into better decisions. A key problem in learning is credit assignment: knowing how to change parameters, such as synaptic weights deep within a neural network, in order to improve behavioral performance. Artificial intelligence owes its recent bloom largely to the error-backpropagation algorithm, which estimates the contribution of every synapse to output errors and allows rapid weight adjustment. Biological systems, however, lack an obvious mechanism to backpropagate errors. Here we show, by combining high-throughput volume electron microscopy and automated connectomic analysis, that the synaptic architecture of songbird basal ganglia supports local credit assignment using a variant of the node perturbation algorithm proposed in a model of songbird reinforcement learning. We find that key predictions of the model hold true: first, cortical axons that encode exploratory motor variability terminate predominantly on dendritic shafts of striatal spiny neurons, while cortical axons that encode song timing terminate almost exclusively on spines. Second, synapse pairs that share a presynaptic cortical timing axon and a postsynaptic spiny dendrite are substantially more similar in size than expected, indicating Hebbian plasticity. Combined with numerical simulations, these findings provide strong evidence for a biologically plausible credit assignment mechanism.},
}
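For readers unfamiliar with node perturbation, the credit-assignment scheme the abstract refers to, the core update can be sketched for a single linear unit: inject noise at the node, compare the rewarded outcome with an unperturbed baseline, and nudge each weight by the reward difference times the noise times that weight's input. This is a minimal illustrative sketch (the learning rate, noise scale, and target weights are arbitrary choices here), not the songbird model itself:

```python
import random

random.seed(0)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

w_true = [0.5, -0.3, 0.8]        # arbitrary target weights for the demo
w = [0.0, 0.0, 0.0]              # weights to be learned
eta, sigma = 0.5, 0.2            # learning rate and perturbation scale

for step in range(3000):
    x = [random.gauss(0, 1) for _ in range(3)]
    target = dot(w_true, x)
    y0 = dot(w, x)
    r0 = -(y0 - target) ** 2     # baseline reward: no perturbation
    xi = random.gauss(0, sigma)  # exploratory noise injected at the node
    r = -(y0 + xi - target) ** 2 # reward with the perturbed output
    # Credit assignment without backprop: if the noise improved the
    # reward (r > r0), push each weight in the direction the noise
    # moved the output, scaled by that weight's input.
    for i in range(3):
        w[i] += eta * (r - r0) * xi * x[i]

print([round(wi, 2) for wi in w])
```

In expectation the update follows the reward gradient, so the weights drift toward the target using only a globally broadcast reward signal and locally available noise and inputs, which is what makes the scheme biologically plausible.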


... Xu et al. [33]

@article{XuetalBioRxiv-20,
author = {Xu, C. Shan and Januszewski, Michal and Lu, Zhiyuan and Takemura, Shin-ya and Hayworth, Kenneth J. and Huang, Gary and Shinomiya, Kazunori and Maitin-Shepard, Jeremy and Ackerman, David and Berg, Stuart and Blakely, Tim and Bogovic, John and Clements, Jody and Dolafi, Tom and Hubbard, Philip and Kainmueller, Dagmar and Katz, William and Kawase, Takashi and Khairy, Khaled A. and Leavitt, Laramie and Li, Peter H. and Lindsey, Larry and Neubarth, Nicole and Olbris, Donald J. and Otsuna, Hideo and Troutman, Eric T. and Umayam, Lowell and Zhao, Ting and Ito, Masayoshi and Goldammer, Jens and Wolff, Tanya and Svirskas, Robert and Schlegel, Philipp and Neace, Erika R. and Knecht, Christopher J. and Alvarado, Chelsea X. and Bailey, Dennis A. and Ballinger, Samantha and Borycz, Jolanta A and Canino, Brandon S. and Cheatham, Natasha and Cook, Michael and Dreher, Marisa and Duclos, Octave and Eubanks, Bryon and Fairbanks, Kelli and Finley, Samantha and Forknall, Nora and Francis, Audrey and Hopkins, Gary Patrick and Joyce, Emily M. and Kim, SungJin and Kirk, Nicole A. and Kovalyak, Julie and Lauchie, Shirley A. and Lohff, Alanna and Maldonado, Charli and Manley, Emily A. and McLin, Sari and Mooney, Caroline and Ndama, Miatta and Ogundeyi, Omotara and Okeoma, Nneoma and Ordish, Christopher and Padilla, Nicholas and Patrick, Christopher and Paterson, Tyler and Phillips, Elliott E. and Phillips, Emily M. and Rampally, Neha and Ribeiro, Caitlin and Robertson, Madelaine K and Rymer, Jon Thomson and Ryan, Sean M. and Sammons, Megan and Scott, Anne K. and Scott, Ashley L. and Shinomiya, Aya and Smith, Claire and Smith, Kelsey and Smith, Natalie L. and Sobeski, Margaret A. and Suleiman, Alia and Swift, Jackie and Takemura, Satoko and Talebi, Iris and Tarnogorska, Dorota and Tenshaw, Emily and Tokhi, Temour and Walsh, John J. and Yang, Tansy and Horne, Jane Anne and Li, Feng and Parekh, Ruchi and Rivlin, Patricia K. 
and Jayaraman, Vivek and Ito, Kei and Saalfeld, Stephan and George, Reed and Meinertzhagen, Ian and Rubin, Gerald M. and Hess, Harald F. and Scheffer, Louis K. and Jain, Viren and Plaza, Stephen M.},
title = {A Connectome of the Adult Drosophila Central Brain},
journal = {bioRxiv},
year = {2020},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The neural circuits responsible for behavior remain largely unknown. Previous efforts have reconstructed the complete circuits of small animals, with hundreds of neurons, and selected circuits for larger animals. Here we (the FlyEM project at Janelia and collaborators at Google) summarize new methods and present the complete circuitry of a large fraction of the brain of a much more complex animal, the fruit fly Drosophila melanogaster. Improved methods include new procedures to prepare, image, align, segment, find synapses, and proofread such large data sets; new methods that define cell types based on connectivity in addition to morphology; and new methods to simplify access to a large and evolving data set. From the resulting data we derive a better definition of computational compartments and their connections; an exhaustive atlas of cell examples and types, many of them novel; detailed circuits for most of the central brain; and exploration of the statistics and structure of different brain compartments, and the brain as a whole. We make the data public, with a web site and resources specifically designed to make it easy to explore, for all levels of expertise from the expert to the merely curious. The public availability of these data, and the simplified means to access it, dramatically reduces the effort needed to answer typical circuit questions, such as the identity of upstream and downstream neural partners, the circuitry of brain regions, and to link the neurons defined by our analysis with genetic reagents that can be used to study their functions. Note: In the next few weeks, we will release a series of papers with more involved discussions. One paper will detail the hemibrain reconstruction with more extensive analysis and interpretation made possible by this dense connectome. Another paper will explore the central complex, a brain region involved in navigation, motor control, and sleep.
A final paper will present insights from the mushroom body, a center of multimodal associative learning in the fly brain.},
}


... Li et al. [27] ...

@article{LietalBIORXIV-19,
author = {Li, Peter H. and Lindsey, Larry F. and Januszewski, Micha{\l} and Zheng, Zhihao and Bates, Alexander Shakeel and Taisz, Istv{\'a}n and Tyka, Mike and Nichols, Matthew and Li, Feng and Perlman, Eric and Maitin-Shepard, Jeremy and Blakely, Tim and Leavitt, Laramie and Jefferis, Gregory S.X.E. and Bock, Davi and Jain, Viren},
title = {Automated Reconstruction of a Serial-Section EM Drosophila Brain with Flood-Filling Networks and Local Realignment},
year = {2019},
journal = {bioRxiv},
publisher = {Cold Spring Harbor Laboratory},
abstract = {Reconstruction of neural circuitry at single-synapse resolution is an attractive target for improving understanding of the nervous system in health and disease. Serial section transmission electron microscopy (ssTEM) is among the most prolific imaging methods employed in pursuit of such reconstructions. We demonstrate how Flood-Filling Networks (FFNs) can be used to computationally segment a forty-teravoxel whole-brain Drosophila ssTEM volume. To compensate for data irregularities and imperfect global alignment, FFNs were combined with procedures that locally re-align serial sections and dynamically adjust image content. The proposed approach produced a largely merger-free segmentation of the entire ssTEM Drosophila brain, which we make freely available. As compared to manual tracing using an efficient skeletonization strategy, the segmentation enabled circuit reconstruction and analysis workflows that were an order of magnitude faster.},
}
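The "flood-filling" in Flood-Filling Networks refers to growing one segment at a time outward from a seed, with a 3D CNN deciding which voxels to add. A classical 2D flood fill conveys the control flow, though none of the learned part; the boundary image and seed points below are invented purely for illustration:

```python
# Toy classical flood fill on a 2D boundary map, to illustrate the
# seed-and-grow intuition behind FFN segmentation. Real FFNs replace
# the img[nr][nc] == 0 test with a learned 3D CNN membership decision.
from collections import deque

# 0 = cell interior, 1 = membrane/boundary (hypothetical 5x5 image)
IMG = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

def flood_fill(img, seed):
    """Grow a segment outward from a seed pixel, never crossing boundaries."""
    h, w = len(img), len(img[0])
    seg, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and img[nr][nc] == 0 and (nr, nc) not in seg:
                seg.add((nr, nc))
                frontier.append((nr, nc))
    return seg

seg_a = flood_fill(IMG, (0, 0))  # upper-left cell: 4 pixels
seg_b = flood_fill(IMG, (4, 4))  # lower-right cell: 4 pixels, disjoint from seg_a
print(len(seg_a), len(seg_b))
```

Growing one object at a time from a seed is what makes the approach largely merger-free: a segment can only claim territory reachable from its own seed without crossing a boundary.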


9 I wrote this paper in collaboration with more than a dozen friends and colleagues. Google wouldn't let me publish it, as it was deemed too controversial and select excerpts might easily be taken out of context and misinterpreted. When I retired from Google I attempted to post it on arXiv but was told by the editorial board that the content was inappropriate for arXiv. I have shared this draft with CS379C students to spur discussions in class. I don't have the time to find an alternative outlet, so for the time being I have removed the .htaccess file intended to restrict access to CS379C students only, since apparently at least one of my students has shared the document despite my request not to distribute it. In any case, here is the citation and abstract:

@misc{DeanMISCELLANEOUS-18,
author = {Thomas Dean},
title = {{The Inverted Matrix: A Vision of Hope and High Aspiration}},
howpublished = {{https://web.stanford.edu/class/cs379c/resources/inverted/paper.pdf}},
year = {2018},
abstract = {How might humans, cyborgs and artificial intelligence work together to create a shared vision for the future, and what could we do now to make this possible? Some experts believe it is inevitable that humans will one day share this planet with biologically enhanced humans, cyborgs and artificially intelligent robots. We've been told the age of AI might lead to massive unemployment, advanced weapon systems, pervasive surveillance and law enforcement, and even the rise of artificially intelligent overlords. These bleak visions of the future may be possible but they need not be inevitable. The technology of artificial intelligence could be applied to realize a future in which intelligent systems of many sorts create a global framework for planet-wide collective decision making, share in the governance and stewardship of our planet, amicably accommodate their differences and resolve disputes, and ultimately forge a path in which technology and shared purpose replace natural selection as the engines of human evolution.

We discuss some of the most relevant advances in cognitive and systems neuroscience, AI and machine learning research leading to advances in technology that could transform society and significantly improve the lot of human beings by unshackling us from instincts that evolved millions of years ago to enable us to survive a kill-or-be-killed jungle but that are now counterproductive to our happiness and wellbeing. Soon we will have technology that will enable physically unaltered humans to substantially extend their cognitive capabilities, and eventually physically augment their bodies to achieve even more efficient interfaces with machines and expand their capabilities still further. Looking beyond the next decade, unaltered humans, augmented (cybernetically enhanced) humans and nonbiological AI systems will populate earth, and it is (theoretically) within our capacity to ensure that such heterogeneous populations co-habit this planet amicably, learn from one another and govern themselves so as to allow all to prosper and realize their potential. This optimistic future is not certain, but it will have a greater chance of success if companies begin planning and producing products and services that create an environment in which relevant innovation will thrive, or, better yet, create consortia to independently pursue and freely disseminate the relevant enabling technology.

The truth is that without Human AI as partners, proxies and prosthetic thinking appliances, we, members of Homo Sapiens 1.0, cannot scale our innate social-bonding and decision-making capacity to solve the problems facing us today. Perhaps that seems overly pessimistic, but human history, psychology and neurobiology provide little evidence to discourage such a dour outlook. Already, it seems that we are ratcheting up our xenophobic and (parochially speaking) misanthropic distrust of strangers to vilify robots and AI systems. Shackled AI systems are like stockpiled nuclear weapons just waiting to be detonated, stolen, or used as threats to further the ends of dictators, despots and terrorists, and once we can deploy other AI systems to pick the locks and hack the boot ROMs we might as well just hand over the keys. We are warmongering the citizenry, stirring up the troops and rattling the sabers even as the enemy is still waddling around in diapers. It is time that we are honest with ourselves, admit our weaknesses, and, as we have done so many times before, invent technology to overcome our shortcomings and control our destiny.}
}