Neurobot is a collection of tools for analyzing connectomic datasets, and the HHMI Janelia Farm seven-column-medulla dataset in particular. The original inspiration came from experimenting with Pawel Dlotko's Neurotop software, which implements a class of simplicial complex called a directed flag complex, described in a paper published on arXiv [1]. The Dlotko et al. paper is self-contained in terms of explaining the relevant mathematics, but you might want to look at David Cox's excellent primer on clique topology for a painless introduction.
In addition to specialized graph and visualization tools, Neurobot includes a convolution operator that applies Dlotko's code to compute topologically invariant properties of the subgraphs embedded in spherical subvolumes, as defined by diameter and stride parameters. These properties are used to construct local feature vectors and classify regions of the connectome graph. This notebook introduces the reader to some of the most useful tools by demonstrating a typical workflow analyzing the Janelia dataset.
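To make the diameter-and-stride sweep concrete, here is a minimal sketch of how the centers of the spherical subvolumes might be laid out on a regular grid; `sphere_centers` is a hypothetical helper for illustration, not part of Neurobot's actual API.

```python
import numpy as np

def sphere_centers(bounds_min, bounds_max, stride):
    # One center every `stride` units along each axis; each center then
    # defines a spherical subvolume of the given diameter whose enclosed
    # subgraph is handed to the topological filter.
    axes = [np.arange(lo, hi + 1e-9, stride)
            for lo, hi in zip(bounds_min, bounds_max)]
    grid = np.meshgrid(*axes, indexing='ij')
    return np.stack(grid, axis=-1).reshape(-1, len(bounds_min))

centers = sphere_centers([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], stride=0.5)
print(centers.shape)  # (27, 3)
```

Overlap between adjacent subvolumes is controlled by choosing a stride smaller than the diameter.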
[1] Pawel Dlotko, Kathryn Hess, Ran Levi, Max Nolte, Michael Reimann, Martina Scolamiero, Katharine Turner, Eilif Muller, and Henry Markram. Topological analysis of the connectome of digital reconstructions of neural microcircuits. CoRR, arXiv: 1601.01580, 2016.
[2] Bratislav Misic and Olaf Sporns. From regions to connections and networks: new bridges between brain and behavior. Current Opinion in Neurobiology, 40: 1-7, 2016.
[3] Ann Sizemore, Chad Giusti, Richard F. Betzel, and Danielle S. Bassett. Closures and cavities in the human connectome. CoRR, arXiv: 1608.03520, 2016.
# -*- coding: utf-8 -*-
"""
Author: Tom Dean <tld@google.com> Date: October 16, 2016
"""
from __future__ import print_function
# Comment this out during debugging
import warnings
warnings.filterwarnings('ignore')
# Load commonly invoked dependencies
import numpy as np
import matplotlib.pyplot as plt
# Seaborn for configuring graphics
import seaborn as sns
# Make matplotlib inline graphics
%matplotlib notebook
if False:
    # Preferably enable SVG graphics if possible
    %config InlineBackend.figure_formats = {'svg',}
else:
    # Alternatively, enable high-resolution PNG
    %config InlineBackend.figure_formats = {'png', 'retina'}
# Default settings for notebooks
rc = {'lines.linewidth': 2,
      'axes.labelsize': 18,
      'axes.titlesize': 18,
      'axes.facecolor': '#DFDFE5'}
sns.set_style('darkgrid', rc=rc)
sns.set_context('notebook', rc=rc)
# Options to print arrays concisely
np.set_printoptions(precision=3, suppress=True)
# Most figures need to be larger
plt.rcParams['figure.figsize'] = [12.0, 9.0]
Flies have multi-faceted or compound eyes. The number of facets or ommatidia comprising an insect's compound eyes varies widely, from the dragonfly with its ~30,000 ommatidia to subterranean insects with around 20. Even within the phylogenetic order of so-called true flies, known as Diptera, there is significant variation, e.g., the common fruit fly Drosophila melanogaster has ~800, a house fly ~4,000, and a horse fly ~10,000 ommatidia.
The fly visual system is highly conserved and extraordinarily stereotyped, thereby facilitating structural studies involving multiple organisms of the same species. It is structurally divided into three successive visual neuropils: the lamina, the medulla and the lobula complex, which is further divided into the lobula and the lobula plate. Our focus is on the medulla, but see Alexander Borst's lab page for an excellent overview.
The number of ommatidia is directly related to the number of columns in the medulla. These columns are generally characterized as having ten layers, analogous to the six functionally distinct layers of the mammalian striate cortex. There are as many columns in the medulla as there are cartridges in the lamina and as many cartridges as there are ommatidia in the eye.
Each column consists of approximately 50 neurons. The Janelia dataset mentioned in the introduction includes seven complete columns and several additional partially completed columns. To estimate the number of neurons in seven columns of Drosophila medulla, multiply the total number of neurons in the medulla—approximately 40,000—by 7 and divide by the total number of columns: 40,000 × 7 / 800 = 350.
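The back-of-the-envelope estimate as a snippet:

```python
total_neurons = 40_000  # approximate neurons in the entire medulla
total_columns = 800     # one medulla column per ommatidium in Drosophila
seven_column_estimate = total_neurons * 7 // total_columns
print(seven_column_estimate)  # 350
```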
The Janelia seven-column-medulla dataset includes approximately 500 completely reconstructed and carefully annotated neurons. In addition to the 350 neurons in the seven columns, there are neurons in the next ring of columns adjoining the seven centrally located ones. All told, there are on the order of 10,000 partially reconstructed neurons, 200,000 individual synapses and 50,000 multi-synapse T-bar structures.
The voxel resolution of the original EM dataset is 10 x 10 x 10 nanometers. The imaged tissue is on the order of a 100-micron cube, while the entire adult Drosophila brain measures roughly 590 x 340 x 120 microns. Borst describes the fly brain as a supercomputer unrivaled by anything we can currently engineer.
# These modules provide a set of connectomic work-flow demonstrations.
from demo_visualize_simplices import visualize_simplices
from demo_visualize_skeletons import visualize_skeletons
from demo_visualize_workflows import visualize_workflows
from seven_column_medulla import cellbody_target_volume_parameters, intrinsic_medullar_neurons_columns
from homological_convolver import generate_subgraph_at_selected_point, cluster_topological_feature_vectors
Installing dictionaries and microcircuit graph from Janelia FlyEM dataset.
Attempting to load existing dictionaries and microcircuit graph from file.
Warning: existing model could not be loaded!
Failed to load existing dictionaries and microcircuit graph from file.
Generating dictionaries, adjacency matrix and cell-coordinate matrix.
# synapses:  1000  # bodies:  432  # rejects:  1031
# synapses:  2000  # bodies:  646  # rejects:  1964
# synapses:  3000  # bodies:  755  # rejects:  2819
...
# synapses: 51000  # bodies: 7280  # rejects: 80232
# synapses: 52000  # bodies: 7405  # rejects: 81490
# synapses: 53000  # bodies: 7513  # rejects: 83005
This section illustrates a typical workflow using some of the tools that we’ve built for exploring the Janelia seven-column-medulla and similar connectomic datasets. The presentation consists primarily of interactive plots, generally followed by a short explanation.
intrinsic_medullar_neurons_columns(show_adjoining_six_columns=True,tight=True)
This slide shows a cloud of points that reveal the outlines of seven neurons called type-one medullary intrinsic neurons or Mi1 for short. The points of the central column Mi1 are blue, and those of the surrounding six Mi1 neurons are shaded green. I’ve fit a line to each neuron: red for the central-column Mi1 and yellow lines for the adjacent Mi1 neurons.
cellbody_target_volume_parameters(force=True,quiet=False,verbose=True,out=False)
Distance: 965.260029662  Centroid: [ -471.033   299.204 -2067.339]  Nearest: [  460.174   358.532 -1820.234]
Distance: 605.073575809  Centroid: [-1140.963  -959.326  2673.861]  Nearest: [ -722.468  -562.891  2857.747]
Distance: 1066.70891792  Centroid: [ 1137.037  -466.326 -1930.139]  Nearest: [  487.353   379.707 -1927.742]
Distance: 886.523790082  Centroid: [ 1396.977   248.104 -1985.479]  Nearest: [  547.368   426.467 -2165.136]
Distance: 567.273410727  Centroid: [  681.037   953.674 -2002.139]  Nearest: [  541.554   421.937 -2142.137]
Distance: 363.693356925  Centroid: [-1084.373  -384.106  3448.421]  Nearest: [ -870.79   -678.451  3444.44 ]
Distance: 1292.36078615  Centroid: [  867.337  -876.536 -1807.929]  Nearest: [  425.179   331.266 -1681.81 ]
Distance: 1131.33530709  Centroid: [  343.297  -956.256  2921.971]  Nearest: [ -693.184  -540.075  2741.914]
Distance: 1467.91437732  Centroid: [  731.037  -793.326  2996.861]  Nearest: [ -680.522  -530.21   2691.831]
Distance: 1640.58209762  Centroid: [  851.037    -3.546  3003.991]  Nearest: [ -639.537  -498.277  2529.712]
Distance: 1653.55802649  Centroid: [  480.037   206.674  3822.861]  Nearest: [ -839.283  -653.904  3319.816]
Distance: 1476.66229082  Centroid: [   73.037   471.674  3633.861]  Nearest: [ -807.576  -629.2    3194.395]
Distance: 973.097600322  Centroid: [  568.747  1415.874 -2154.389]  Nearest: [  590.823   460.323 -2337.022]
Distance: 974.957731306  Centroid: [-1535.963    38.674  3609.861]  Nearest: [ -914.884  -712.806  3618.858]
Here I’ve used the central Mi1 axis to define a cylindrical volume for closer study. The blue and green lines orthogonal to the cylinder axis represent the distance from the central column to the centroids of the six adjacent-column Mi1 cells and to a sample of the more distant, partially reconstructed neurons that originate outside the centrally located seven. Using this tool, I can selectively turn on specific neurons, classes of neurons, or even neurons that participate in particular subgraphs of the full connectome graph.
The next few plots illustrate tools for analyzing skeletons, examining how pairs of highly connected neurons overlap, and looking at the distribution of synapses.
visualize_workflows(step=1.0)
Here we see a skeleton drawn with its original coordinates shifted to the centroid of the central-column Mi1 neuron. The circles marking each point along the skeleton are drawn with diameter proportional to the estimated diameter of the process—dendrite, axon or cell body—at that point.
visualize_workflows(step=1.1)
Here are the same skeleton coordinates as above but with tight x, y, z axis limits and the locations along the skeleton with largest estimated diameters highlighted as possible locations for the soma / cell body.
visualize_workflows(step=1.2)
Here is the same skeleton with coordinates shifted and scaled to the unit cube to facilitate alignment with other neurons and to simplify interpreting the results of running convolutions with nonlinear geometrical or topological filters.
visualize_workflows(step=1.3)
Finally, we scale and render the plot with tight axis limits, noting z-axis distortion due to each dimension being scaled independently.
visualize_skeletons(cellbody=19183,tight=False,scale=True,show=True)
We select two neurons that share a substantial number of synapses and display the skeleton in the unit cube centered at [0.5, 0.5, 0.5]. Given that we shifted all of the coordinates with respect to the central-column Mi1 neuron centroid, that Mi1 neuron is now located at the center of the unit cube. The first of the two neurons is the same one that we displayed in the previous plots.
visualize_workflows(step=2)
SYNAPSE COORDINATE BOUNDS MIN: [0.055, 0.132, 0.242] AND MAX: [0.919, 0.894, 1.000]
NEURONS COORDINATE BOUNDS MIN: [0.016, 0.079, 0.186] AND MAX: [1.000, 1.000, 1.000]
As a sanity check we compute the bounds for all the scaled coordinates. The scaling parameters were computed from coordinates of the estimated cell-body locations of all neurons—or all of those neurons within a selected cylindrical volume as constructed above—and so we expect that for each axis either the minimum will be 0.0 or the maximum will be 1.0 depending on the distribution of the points around the central column. Some of the synapse coordinates could fall outside the unit cube.
visualize_workflows(step=3)
MEAN # SYNAPTIC CONNECTIONS BETWEEN TWO NEURONS: 2.43
MAX # SYNAPTIC CONNECTIONS BETWEEN TWO NEURONS: 223
MAX # SYNAPSES FROM IDX 263 (TYPE: L2) TO IDX 771 (TYPE: Tm1) IS 223
We calculate the mean and the maximum number of synapses shared by pairs of neurons, select a pair with the maximum number of synapses, and then display the skeleton of the source neuron (SRC) as we did earlier, showing possible locations for the cell body.
visualize_workflows(step=4)
Here we show the point cloud for the destination neuron (DST) in a contrasting color, with marker size proportional to the estimated diameter at each point on the skeleton.
visualize_workflows(step=5)
We use a different marker shape to further distinguish visually between the SRC and DST neurons.
visualize_workflows(step=6)
THE # OF SKELETON PTS: SRC 1733, DST 2650
SYNAPSES: 223, CTR: [0.482, 0.359, 0.347]
Here we see the point clouds of the two neurons superimposed and their shared synapses visualized as red triangles.
visualize_workflows(step=7)
THERE ARE 223 CONNECTIONS FROM 263 to 771
THERE ARE 4 CONNECTIONS FROM 771 to 263
Return the AXES object, postponing rendering to allow more plots.
This graphic focuses on the DST neuron and shows the synapses for which the pre-synaptic neuron is SRC in purple and those for which the pre-synaptic neuron is DST in red.

We’re primarily interested in categorizing the local microstructure of the tissue sample. The cartoon in panel (a) depicts the connectome graph embedded in a 3D volume. We run a non-linear convolution filter over the 3D volume, illustrated as a 2D tiling in panel (b). Despite my inadequate rendering in panel (c), the enclosed subgraph for a 3D kernel spanning a sub-volume approximately 20 microns on a side is generally quite complex.

Sub-volume-enclosed subgraphs are defined by the positions of synapses and not cell bodies. Consider the simple network shown in panel (a). The subgraph with cell bodies as vertices, shown in panel (b), is not completely contained in the volume bounded by the two dashed horizontal lines, whereas the subgraph shown in panel (c), which employs synapses as vertices, is completely contained within the lines. In defining sub-circuits contained in local sub-volumes, we use the convention illustrated in panel (c).
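The panel (c) convention can be sketched as follows: keep an edge only if its synapse location falls inside the sub-volume, regardless of where the two cell bodies sit. The array layout here is an illustrative assumption, not the dataset's actual schema.

```python
import numpy as np

def edges_in_sphere(synapses, center, diameter):
    """synapses: rows of (src_id, dst_id, x, y, z) -- hypothetical layout.
    Returns the (src, dst) pairs whose synapse lies inside the sphere."""
    xyz = synapses[:, 2:5]
    inside = np.linalg.norm(xyz - np.asarray(center), axis=1) <= diameter / 2.0
    return synapses[inside, :2].astype(int)

synapses = np.array([
    [0, 1, 0.50, 0.50, 0.50],  # synapse at the sphere center: kept
    [1, 2, 0.90, 0.90, 0.90],  # synapse far outside: dropped
])
print(edges_in_sphere(synapses, center=[0.5, 0.5, 0.5], diameter=0.2))  # [[0 1]]
```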

There is a long history of work analyzing neural circuits in terms of their graph-theoretic properties, though most of it has been applied to fMRI data, where the resolution—the size of a voxel—is on the order of 5 millimeters, compared with 10 nanometers in the case of the Janelia dataset. Intuitively, a network motif is a repeating subgraph that defines a pattern of connectivity exhibiting some degree of functional specificity.
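As a toy illustration of motif counting—not Neurobot's implementation—here is a brute-force count of the classic feed-forward loop (a→b, b→c, a→c):

```python
from itertools import permutations

def count_feed_forward_loops(edges):
    # Count ordered triples (a, b, c) with edges a->b, b->c and a->c.
    adj = set(edges)
    nodes = {v for e in edges for v in e}
    return sum(1 for a, b, c in permutations(nodes, 3)
               if (a, b) in adj and (b, c) in adj and (a, c) in adj)

print(count_feed_forward_loops([(1, 2), (2, 3), (1, 3)]))  # 1
```

In practice, motif counts are compared against those of degree-matched random graphs to decide whether a motif is over-represented.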
[1] Yu Hu, James Trousdale, Krešimir Josić, and Eric Shea-Brown. Motif statistics and spike correlations in neuronal networks. CoRR, arXiv: 1206.3537, 2015.
[2] Marcus Kaiser. A tutorial in connectome analysis: Topological and spatial features of brain networks. CoRR, arXiv: 1105.4705, 2011.
[3] Arun S. Konagurthu and Arthur M. Lesk. On the origin of distribution patterns of motifs in biological networks. BMC Systems Biology, 2: 1-8, 2008.
[4] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon. Network motifs: simple building blocks of complex networks. Science, 298(5594): 824-827, 2002.

Here is an example subgraph. The darker green nodes and black edges give you some idea of just how complex the subgraphs are even in small volumes. This one subgraph involves 148 neurons and over 10,000 synapses. The simplicial complex consists of all k-simplices for k > 0, where a k-simplex is a complete or fully connected subgraph—also referred to as a clique—on k + 1 vertices of the underlying undirected graph that has a single sink vertex in the directed graph. This slide shows a 4-simplex, of which there are thousands in the simplicial complex associated with this subgraph, typically involving one of a few specialized types of neurons as the sink.
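The definition above—a clique in the underlying undirected graph with a single sink in the directed graph—can be made concrete with a brute-force enumerator; this is illustrative only, and far less efficient than Dlotko's code:

```python
from itertools import combinations

def k_simplices(edges, k):
    """All (k+1)-vertex subsets that are cliques in the undirected sense
    and contain exactly one sink (no outgoing edge within the subset)."""
    adj = set(edges)
    nodes = sorted({v for e in edges for v in e})
    simplices = []
    for subset in combinations(nodes, k + 1):
        pairs = combinations(subset, 2)
        if not all((a, b) in adj or (b, a) in adj for a, b in pairs):
            continue  # not a clique in the underlying undirected graph
        sinks = [v for v in subset
                 if not any((v, w) in adj for w in subset if w != v)]
        if len(sinks) == 1:
            simplices.append(subset)
    return simplices

# 2-simplex: 1 -> 2, 1 -> 3, 2 -> 3 has the unique sink 3.
print(k_simplices([(1, 2), (1, 3), (2, 3)], k=2))  # [(1, 2, 3)]
```

Note that a 2-cycle (1→2 and 2→1) is a clique with no sink, so it is excluded.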
visualize_simplices(diameter=0.1,center=[0.55,0.45,0.75],quiet=True)
THE # OF DISTINCT CELL TYPES IS 58
CELL TYPE Tm6/14, COUNT 7, PERCENTAGE 1% ; CELL TYPE T2a, COUNT 10, PERCENTAGE 2%
CELL TYPE Tm5Y, COUNT 6, PERCENTAGE 1% ; CELL TYPE Dm1, COUNT 5, PERCENTAGE 1%
CELL TYPE Dm2, COUNT 11, PERCENTAGE 2% ; CELL TYPE Dm3, COUNT 4, PERCENTAGE 0%
CELL TYPE Dm4, COUNT 8, PERCENTAGE 1% ; CELL TYPE Dm5, COUNT 3, PERCENTAGE 0%
CELL TYPE Dm6, COUNT 7, PERCENTAGE 1% ; CELL TYPE Dm7, COUNT 3, PERCENTAGE 0%
CELL TYPE Dm8, COUNT 11, PERCENTAGE 2% ; CELL TYPE Dm9, COUNT 7, PERCENTAGE 1%
CELL TYPE TmY13, COUNT 2, PERCENTAGE 0% ; CELL TYPE Tm23/24, COUNT 19, PERCENTAGE 4%
CELL TYPE Pm4, COUNT 3, PERCENTAGE 0% ; CELL TYPE Pm1, COUNT 12, PERCENTAGE 2%
CELL TYPE Pm2, COUNT 9, PERCENTAGE 1% ; CELL TYPE Pm3, COUNT 2, PERCENTAGE 0%
CELL TYPE Lawf2, COUNT 2, PERCENTAGE 0% ; CELL TYPE Tm25, COUNT 4, PERCENTAGE 0%
CELL TYPE Tm5b, COUNT 8, PERCENTAGE 1% ; CELL TYPE Tm5c, COUNT 5, PERCENTAGE 1%
CELL TYPE TmY3, COUNT 5, PERCENTAGE 1% ; CELL TYPE Tm20, COUNT 7, PERCENTAGE 1%
CELL TYPE C3, COUNT 9, PERCENTAGE 1% ; CELL TYPE C2, COUNT 8, PERCENTAGE 1%
CELL TYPE Tm28, COUNT 5, PERCENTAGE 1% ; CELL TYPE Tm1, COUNT 12, PERCENTAGE 2%
CELL TYPE Tm2, COUNT 12, PERCENTAGE 2% ; CELL TYPE R7, COUNT 12, PERCENTAGE 2%
CELL TYPE Tm4, COUNT 6, PERCENTAGE 1% ; CELL TYPE Tm3, COUNT 15, PERCENTAGE 3%
CELL TYPE Tm8, COUNT 1, PERCENTAGE 0% ; CELL TYPE R8, COUNT 9, PERCENTAGE 1%
CELL TYPE T4, COUNT 36, PERCENTAGE 7% ; CELL TYPE Tm5a, COUNT 9, PERCENTAGE 1%
CELL TYPE T2, COUNT 11, PERCENTAGE 2% ; CELL TYPE T3, COUNT 11, PERCENTAGE 2%
CELL TYPE TmY4, COUNT 5, PERCENTAGE 1% ; CELL TYPE T1, COUNT 9, PERCENTAGE 1%
CELL TYPE Mi9, COUNT 13, PERCENTAGE 2% ; CELL TYPE L4, COUNT 10, PERCENTAGE 2%
CELL TYPE L5, COUNT 11, PERCENTAGE 2% ; CELL TYPE L2, COUNT 11, PERCENTAGE 2%
CELL TYPE L3, COUNT 10, PERCENTAGE 2% ; CELL TYPE L1, COUNT 13, PERCENTAGE 2%
CELL TYPE Mi1, COUNT 15, PERCENTAGE 3% ; CELL TYPE Mi3, COUNT 13, PERCENTAGE 2%
CELL TYPE Mi2, COUNT 3, PERCENTAGE 0% ; CELL TYPE Mi4, COUNT 5, PERCENTAGE 1%
CELL TYPE Lawf1, COUNT 3, PERCENTAGE 0% ; CELL TYPE TmY10, COUNT 2, PERCENTAGE 0%
CELL TYPE Mi10, COUNT 1, PERCENTAGE 0% ; CELL TYPE TmY5, COUNT 4, PERCENTAGE 0%
CELL TYPE Y3/6, COUNT 6, PERCENTAGE 1% ; CELL TYPE Tm22, COUNT 1, PERCENTAGE 0%
CELL TYPE Tm16, COUNT 4, PERCENTAGE 0% ; CELL TYPE Tm9, COUNT 7, PERCENTAGE 1%
IDX = 183 SIMPLEX = [60 0 9 41 50]
SIMPLEX NODE CELLTYPES: ['Pm1' 'Pm2' 'Pm2' 'Tm1' 'Mi3']
TOTAL # NEURONS / MICROCIRCUIT GRAPH VERTICES = 7585
TOTAL # CONNECTIONS / MICROCIRCUIT GRAPH EDGES = 71050
TOTAL # SYNAPSES = 172672
TOTAL # SELECTED SUBGRAPH VERTICES = 231
TOTAL # SELECTED SUBGRAPH EDGES = 2605
Here is another method for visualizing subgraphs and simplices, this time in 3D, allowing interactive examination of the data and manipulation of the axes.

Happily, decades of painstaking bench work on Drosophila have helped us in sorting out possible, functionally discriminative motifs consisting of typed k-simplices. If the pattern of connectivity is more or less random, or the strength of the connections uncertain, then a topological or graph-theoretic analysis may not be particularly informative. The seven-column dataset does include a confidence field for each synapse, but it is only ever assigned 1.0 or 0.0. While the Janelia dataset does not include connection-strength metadata, one can infer synaptic weights from vesicle counts and synaptic cleft measurements and thereby enrich the microcircuit connectome graph. However, the preferred way to assign cell types and connection weights is to train an artificial neural network.
cluster_topological_feature_vectors(model_number=1,number_clusters=8)
We construct feature vectors consisting of $k$-simplex statistics and topological invariants including the Euler characteristic. In the case of simplicial complexes, the Euler characteristic $\chi$ is defined as the alternating sum $\chi = k_0 - k_1 + k_2 - k_3 + \cdots + (-1)^N k_N$, where $k_n$ denotes the number of $n$-simplices in the simplicial complex and $N$ is the largest integer for which at least one $N$-simplex exists in the simplicial complex. Good luck finding a satisfying interpretation.
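Given the simplex counts, the alternating sum is one line of Python; a filled triangle (3 vertices, 3 edges, 1 two-simplex) gives χ = 3 − 3 + 1 = 1:

```python
def euler_characteristic(simplex_counts):
    # simplex_counts[n] is the number of n-simplices in the complex.
    return sum((-1) ** n * k for n, k in enumerate(simplex_counts))

print(euler_characteristic([3, 3, 1]))  # 1  (a filled triangle)
print(euler_characteristic([3, 3]))     # 0  (a hollow triangle)
```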
We also use another class of topological invariants called Betti numbers $\{\beta_0, \beta_1, \ldots, \beta_N\}$ that are too complicated to define here, so a few examples will have to suffice: $\beta_0$ is the number of connected components, $\beta_1$ is the number of one-dimensional "holes", and $\beta_2$ is the number of two-dimensional "voids". Even relatively simple unsupervised algorithms like $k$-means can cluster the resulting feature vectors to reconstruct the layered, columnar structure of the medulla.
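A minimal NumPy sketch of the clustering step, assuming each row of `features` is the topological feature vector of one sub-volume; the synthetic data here merely stands in for the real feature vectors:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Plain Lloyd's algorithm: assign points to the nearest center,
    # then recompute each center as the mean of its members.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

# Two well-separated synthetic groups of feature vectors.
rng = np.random.default_rng(1)
features = np.vstack([rng.normal(0.0, 0.1, (50, 4)),
                      rng.normal(5.0, 0.1, (50, 4))])
labels = kmeans(features, k=2)
```

A production run would use a vetted implementation with smarter initialization (e.g., k-means++) and a convergence check.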
The sorts of analyses considered in the last few slides are necessary but not sufficient for building models of neural circuitry. They are necessary because machine learning technology isn't so advanced that it can be trusted to get things right without any supervision whatsoever. Machine learning often fails spectacularly, since it can easily miss the forest for the trees. Humans can often apply their common-sense reasoning and skill at pattern recognition to catch the most egregious errors.
Machine learning is necessary because complex brains constitute alternative universes in which the dominating laws of physics are different from those governing the sort of phenomena we can (directly) observe in the macroscale universe in which our physical intuitions evolved. Modern machine learning tools such as deep recurrent networks excel in modeling these alien universes because they have few built-in biases aside from those implicit in our selecting a network architecture.
I'm confident we can train an artificial neural network to significantly improve on my poor attempt to be clever by channeling algebraic topology and in particular homology theory. I might have learned less by using a neural network had I taken that route, but I'm not convinced that what I did learn from the exercise is particularly relevant to my primary interest in constructing mesoscale models. It all depends on your loss function.