Humanity Rising: Why Evolutionary Developmentalism Will Inherit the Future


Evo-Devo Universe – Exploring Models of Universal Evolution and Development

An expanded version of this post was published as:
Smart, John M., "Humanity Rising: Why Evolutionary Developmentalism Will Inherit the Future," World Future Review, November 2015: 116-130. doi:10.1177/1946756715601647 (SAGE: abstract). As of Oct 2016, the full PDF (21 pages) is available here.

For more on Evo-Devo, see Chapter 11 (Evo-Devo Foresight) of my online book, The Foresight Guide (2018).

What is evolutionary development (“evo-devo”)? It is a minority view of change in science, business, policy, foresight and philosophy today, a simultaneous application of both evolutionary and developmental thinking to the universe and its replicating subsystems. It is derived from evo-devo biology, a view of biological change that is redefining our thinking about evolution and development. As a big picture perspective on complex systems, I think evo-devo models will be critical to understanding our past, present, and future. The sixty-some scholars at Evo-Devo Universe, an interdisciplinary community I co-founded with philosopher Clement Vidal in 2008, are interested in arguing, critiquing and testing evolutionary and developmental models of the universe and its subsystems, and exploring their variations and implications.

Whatever else our universe is, and allowing that there are physical mysteries still to be uncovered, like dark matter, dark energy, the substructure of quarks, and the nature of black holes, reasonable analysis suggests that it is both evolutionary and developmental, or "evo-devo". Like a living organism, it undergoes experimental, stochastic, divergent, and unpredictable change, a process we can call evolution, and at the same time programmed, convergent, conservative, and predictable change, a process we can call development. Evo-devo thinking is practiced by anyone who realizes that parts of our future are unpredictable and creative, while other parts are predictable and conservative, and that in the universe, as in life, both processes are always operating at the same time.


Like living organisms, our universe may have a developmental life cycle.

Our universe builds intelligence in a developmental hierarchy as it unfolds, from physics, to chemistry, to biology, to biominds, to postbiological intelligence. As physicists like Lee Smolin (The Life of the Cosmos, 1999) have argued, our universe may also be chained to a developmental life cycle, like a living organism. Since almost every interesting complex system we know of within the universe, from solar systems to cells, undergoes some form of replication, inheritance, variation, and selection to build its complexity, it is parsimonious (conceptually the simplest model) to suspect this is how the universe built its complexity as well, within a still poorly understood environment that physicists call the multiverse.

An evo-devo model proposes that any physical system that has both evolutionary (divergence, variation) and developmental (convergence, replication, inheritance) features, and operates in a selective environment, will self-organize its own adaptive complexity as replication proceeds. Consider how replicating stars have advanced from the primitive Population III stars to the far more complex Population I solar systems, like our Sun and its complex planets, over galactic time. Replicating evo-devo chemicals built up from nucleic acids to cells, over billions of years. Replicating evo-devo cells created multicellular life with nervous systems, again over billions of years. Replicating evo-devo nervous systems forged hominids, over roughly 500 million years. Replicating languages, ideas, and behaviors in hominid brains birthed nonbiological computing systems, over something like 5 million years. Now computing and robotics systems, whose replication is presently aided by human culture, seem likely within the next few decades to be able to replicate, evolve, and develop autonomously.

The evo-devo model provides an intuitive, life-analogous, and conceptually parsimonious explanation for several nagging and otherwise improbable phenomena: the fine-tuned universe problem; the presumed great fecundity of terrestrial planets and life, when an evolution-only framework would lead us to predict a Rare Earth universe; the Gaia hypothesis; the surprisingly life-protective and geohomeostatic nature of Earth's environment; the unreasonably smooth, redundant, and resilient nature of accelerating change and leading-edge complexification on Earth; and other curiosities. If true, it should increasingly be able to demonstrate how and why such phenomena might self-organize as strategies to ensure a more adaptive and intelligence-permitting universe, in an ultimately simulation-testable model. It also provides a rejoinder to theologian William Paley's famous watchmaker argument, that only a God could have designed our planet's breathtaking complexity, with the curious counterexample of replicative self-organization of complexity, a phenomenon seen in a great variety of dissipative systems at multiple scales in our universe, and one we will increasingly understand, model, and test in coming years.

As much as some might find comfort in believing in a God who designed our universe, it is perhaps even more comforting to believe, tentatively and conditionally, in a universe with such incredible self-organizing and self-protecting features, and in the amazing history and abilities of evolutionary and developmental processes in living systems themselves. Evo-devo processes have apparently created both matter and mind, and have been astonishingly reliable at generating complexity and intelligence at ever-accelerating rates. Found throughout our universe, such information-protective processes may even transcend our universe, and may have determined the first replicator, if such a thing exists. Then again, perhaps our physics and information theory will never reach back that far, and such knowledge may forever remain metaphysics. In the meantime we can say that Big History, the science story of the universe so far, is sufficiently awe-inspiring, humbling, useful, and hopeful to give us guidance, once we place it in an evo-devo frame. As we'll suggest, we now know enough about evolution and development at the universal scale to begin relating these processes to our own lives, and most interestingly, to ask how we can make our values and goals more consistent with universal processes.

As our universe grows islands of accelerating local order and intelligence in a sea of ever-increasing entropy, physics tells us this process cannot continue forever. The universe's "body" is aging, and will end in heat death, a big rip, or some combination of the two. If our universe is indeed a replicating complex adaptive system that engages in both evolution and development, as it grows older it must package its intelligence into some kind of reproductive system, so its complexity can survive its death and begin again. Developmental models thus argue that intelligent civilizations throughout the universe are part of that reproductive system – protecting our complexity and ultimately reproducing the universe and further improving the intelligence it contains. In other words, growing, protecting, and reproducing personal, family, social, and universal intelligence may be the evolutionary developmental purpose of all intelligent beings, to the greatest extent that they are able.


Charles Darwin, On the Origin of Species, 1859

Beginning in 1859, Charles Darwin helped us to clearly see evolutionism in living systems for the first time. Discovering that humanity was an incremental, experimental product of the natural world was a revolutionary advance over our intellectually passive, antirational, and humanocentric religious beliefs. But until we also understand and accept developmentalism, recognizing that the universe not only evolves but develops, the purpose and values of the universe, and our place in it, will remain high mysteries about which science has little of interest to say. Our science will remain infantile, descriptive without also being prescriptive, and unable to deeply inform our morality and politics. That must and will change in coming decades.


Curiosity – A Discovery Channel TV Series

As an example of where we are today, I just watched a Discovery Channel program on evolution, Mankind Rising, available for $1.99 on YouTube. It is Season 2, Episode 8 of Curiosity, a new educational television series launched by Discovery founder and chairman John Hendricks. Curiosity is a five-year, multi-million dollar initiative to tackle fundamental questions and mysteries of science, technology, and society, in sixty episodes. There is also a commendable Curiosity initiative in American K-12 schools, to use the show to increase our children's engagement in STEM education.


Mankind Rising – Season 2, Episode 8 of Curiosity

Mankind Rising considers the question "How did we get here?" It tells the journey of humanity from the cooling of life's nursery, Earth, 4 billion years ago, and the emergence of the first cell 3.8 billion years ago, to the arrival of Homo erectus 1.8 million years ago. It does this in one 43-minute time-lapsed computer animation, the first time our biological history has received such a treatment, as far as I know. The animation is primitive, but it holds your interest enough to follow the story. And what an amazing story it is. We see a lovely visualization of the phylogenetic hypothesis of hiccups, which proposes that they are a holdover from our amphibian ancestry, when we gulped air at the surface across our gills, which are now vestigial (think of pharyngeal pouches in human embryos), before we grew lungs. Human babies do a lot of gulping-hiccuping both in utero and when born prematurely, and both amphibian gill-gulping and human hiccups are stopped by elevated carbon dioxide, hence the folk remedy of breathing into a bag to stop them.

We also get to see the rise of the first habitual tool users, Homo habilis, roughly 2.4 million years ago, in a dramatic sequence where an early human strikes one rock against another and is fascinated to discover a sharp rock in his hands. H. habilis's ability to hold sharp rocks and clubs in their hands, and to use them imitatively in groups to defend against other animals, was perhaps the original human event. The best definition of humanity, in my opinion, is any species that gains the ability to use technology creatively and socially to continually turn themselves into something more than their biological selves. We inevitably become a species with both greater mind (rationality, intellect) and greater heart (emotion, empathy, love), two core kinds of intelligence. I would predict that the first collaborative rock-users on any Earth-like planet must soon thereafter become its dominant species, as there are so many paths to further adaptiveness from the powerful developmental duo of creative tool use and socially imitative behavior.


Homo habilis, perhaps the first persistence hunters.

One clever thing that the first socially-adept rock- and club-holding animals on any Earth-like planet gain access to is pack hunting (and, if they are good at sweating, a form of pack hunting called persistence hunting). Learning both how to pack hunt and how to tame fire, as described in Richard Wrangham's Catching Fire: How Cooking Made Us Human, 2010, may have doubled our brain size by giving us our first reliable access to meat, a very high-energy fuel source. We may have begun with pack hunting by ambush, which chimpanzees do today, and then graduated to persistence hunting, or running down our prey, sometimes in combination with setting fires to flush out our prey. We primates sweat across our entire bodies, rather than cooling mainly by panting, as most other mammals do, and humans have developed this sweating and cooling ability furthest of all primates, by far. As a result, two or three of us working together can actually run to heat exhaustion any animals that can't sweat, if we hunt them in the mid-day sun. Some peoples persistence hunt even today, as seen in this amazing seven-minute Life on Earth clip of San Bushmen running down a kudu antelope (!!) in the Kalahari desert. Mankind Rising ends with Homo erectus ("upright man"), possibly the first language-using humans, 1.8 million years ago. We don't yet have fossil evidence that their larynx was anatomically modern, but there are indirect arguments. Language, both a form of socially imitative behavior and a fundamental tool for information encoding and processing, was very likely the final technology needed to push our species from the animal to the human level.


In Evolutionism, the Universe is a Massive Set of Random Events, Randomly Interacting.

Unfortunately, there are serious shortcomings to Mankind Rising as an educational device. The show's narrative, and the theory it represents, are the standard one-sided, dogmatically-presented story of life's evolution, with no hint of life's development. As a result, it treats humanity's history as one big series of unpredictable accidents. This is the perspective of universal evolutionism, also called "Universal Darwinism", which considers random variation and selection to be the only processes in universal change, ignoring the possibility of universal development. In evolutionism, all the great emergence events are told as happening randomly and contingently. The show even makes the extreme claim that life itself emerged "against the laws of probability." The emergence of humanlike animals is also presented as a stroke of blind luck, because the K-T meteorite wiped out our predators, the dinosaurs. All of this is true in part, but only from one set of perspectives: that of the individual organism or the individual event. In other words this story, and evolutionism in general, is a dangerously incomplete half-truth.

When we look at the same events from the perspective of the universal system, the environment, or distributions of events over time, we can easily argue that many particular forms and functions appear physically predetermined to emerge. Consider two genetically identical biological twins, or two snowflakes. Most of what happens to them up close, at the molecular scale, is randomly, contingently, unpredictably different. The microstructure of the twins' organs, including their brains, fingerprints, and many other molecular features, is as different as the designs on two snowflakes. But look at them from across the room, taking a system or environmental perspective, and you see that they achieve many of the same developmental endpoints over time. The twins have the same body and facial structure and many of the same personality traits, constrained by the organism's developmental genes and the shared environment. The snowflake's hexagonal structure is developmentally predetermined, constrained by the way water forms hydrogen bonds as it freezes.

Just like biological development, universal development happens because of the special initial conditions (physical laws, or "genes") of our universe, the time constancy and environmental sameness (isotropy) of physical laws throughout the universe/system/environment, and the apparent commonness (ubiquity) of Earth-like planets in our universe, a suspicion that will hopefully be proven by astrobiologists in coming years. Examples of developmental processes and structures are easy to propose. We can see developmental physics in the motions of the planets, which are highly future-predictable, as Isaac Newton discovered. Other physical processes, such as the production of black holes in general relativity, the acceleration of entropy production, and the acceleration of complexification in special locations, also appear highly predictable and universal. Other physics, by contrast, such as quantum physics, looks highly evolutionary and unpredictable. As we move up the complexity hierarchy from physics to chemistry to biology to society, our list of potential evolutionary and developmental forms and processes rapidly grows.


In Developmentalism, Certain Universal Forms and Functions are Statistically Fated to Emerge, as in Biological Development


Convergent form and function in placental and marsupial mammals – a famous example of convergent evolution, or better, convergent evolutionary development.

Other examples of inevitable, ubiquitous developments in our universe may include: organic chemistry as the only easy path to complex replicating autocatalytic molecular species; Earth-like planets with water, carbon and nitrogen cycles (plate tectonics), nucleic acids, lipid membranes, amino acids, and proteins as the only easy path to cells (testable by simulation); oxidative phosphorylation redox chemistry as the only easy path to high-energy chemical life; multicellular organisms, bilateral symmetry, eyes, skeletons, jointed limbs, land colonization, opposable thumbs, social brains, gestural and vocal language, and imitative behavior as the only easy path to runaway technology (tool use); and similar unique developmental advantages for written language, math, science, and various technology archetypes, from sharp rocks and clubs to levers, wheels, electricity, and computers. These potentially universal forms and functions may be destined to emerge, because of the particular initial conditions and laws of the universe in which evolution is occurring, and each may be destined to become optimal or dominant, for its time, in environments in which accelerating complexification and intelligence growth are occurring.

On Earth, we have seen a number of these forms, such as eyes, emerge and persist independently in various separate evolutionary lineages and environments. Independent emergence, convergence, and eventual optimization or dominance are good ways to separate developmental forms and processes from the much larger set of ongoing evolutionary experiments. Developmental forms and functions are those that will be more adaptive at each particular stage of environmental complexity, in more contexts and species. Think of two eyes for a predator, over three or one. Or four wheels for a car, over three or more than four. Or all the body form and function types that converged in placental and marsupial mammals. Australia separated early from the other continents, yet produced many similar mammal types via marsupials, plus a few new ones, like the kangaroo. This is a classic example of convergent evolution, or more accurately, convergent evolutionary development, when we examine biological change from the planet's and universe's perspective.

Evolution is destined to randomly, contingently, and creatively but inevitably discover these optimal developmental forms and functions, in most environments. For more on evolutionary developmentalism, feel free to read my 50-page precis, Evo Devo Universe?, 2008, and let me know your thoughts.


Would Raptors Have Led Inevitably to Dinosaur Humanoids (Dinosauroids), if the K-T Meteorite Hadn’t Hit 65 Mya? That is the Dinosauroid Hypothesis, a Developmentalist Proposal.

As another example ignored by the show, several evolutionary developmentalists have independently proposed that our very-easy-to-create-yet-general-purpose "humanoid form", a bilaterally symmetric bipedal tetrapod with two eyes and two opposable thumbs, is a very likely outcome for all biological intelligences that first achieve our level of sophistication on Earth-like planets. If we saw such "early" alien intelligences from across a dimly lit room, they would look roughly like us, as the astrophysicist Frank Drake, author of the Drake equation, has argued. But while we can use science and simulation to argue for their existence, in my view we have virtually no chance to meet other biological organisms in the flesh. Why? Because the universe does a very good job of separating all the evolutionary experiments by vast distances, uncrossable by biological beings. Our universe may have self-organized its current structure in order to maximize the diversity of intelligences created within it, as I argue in my 2008 paper. No intelligence is ever Godlike, so diversity is our best strategy for improving our lot. Another reason we'll likely never meet biological beings from other Earth-like planets is that the leading edge of life on Earth is now rapidly on the way to becoming postbiological, and postbiologicals are likely to have very different interests than traveling long distances through slow and boring outer space, when there are much better options apparently available in "inner space," as I argue in my 2012 paper.

In other words, our own particular mammalian pathway to higher intelligence has likely given us a unique evolutionary pathway to our developmental humanity, one with great universal value. Our civilization has likely discovered and created some things you won’t find anywhere else in the universe. But at the same time, if the K-T meteorite hadn’t struck us 65 mya, it is easy for an evolutionary developmentalist to argue that dinosaurs like Troodon would have inevitably discovered the humanoid form, rocks, language, and tools, and we might have looked today like that green-skinned humanoid in the picture above. Why?

If you have seen the movie Jurassic Park, or have read up on raptors like Troodon, you know that they had semi-opposable digits and hunted in packs, in both the day and the night. It is easy to bet that the first raptor descendants that also learned how to hold sharp rocks and clubs in their hands in close-quarters combat would have forever after owned the role of top biological species. It would be game over, and competitive exclusion, for all other species that wanted that niche. Once you are manipulating tools in your hands, and speaking with your larynx, it's easy to imagine that your body is forced upright, and your tail is no longer useful. You are engaged in runaway complexification of your social and technical intelligence – you've become human, and the leading edge of local planetary intelligence has jumped to a higher substrate. Dale Russell, author of the Dinosauroid hypothesis in 1982, was scoffed at by conventional evolutionists back then, and the model is still largely ignored today; see, for example, Wikipedia's short and evolutionist-biased paragraph on it. This response from the scientific community is predictable, given the hornet's nest of implications that evolutionary developmentalism introduces, including exposing the deficiencies of current cosmology in understanding the roles of information, computation, life, and mind.


The hand of Troodon inequalis, with a partly opposable digit.

Note the closeup of the hand of Stenonychosaurus (now called Troodon) inequalis, from Russell's paper, "Reconstructions of the small Cretaceous theropod Stenonychosaurus inequalis and a hypothetical dinosauroid," Dale A. Russell and Ron Séguin, Syllogeus, 37, 1982. The authors state that the structure of the carpal block on Troodon's hands argues that one of the three fingers partially opposed the other two, as shown. The shape of the ulna also suggests its forearms rotated. It probably used its hands to snatch small prey, and to grab hold of larger dinosaurs while ripping into them with the raptorial claw on the inside of each of its feet. Troodon was a member of a very successful and diverse clade of small bipedal dinosaurs with binocular vision and one free claw on each foot, the Deinonychosaurs ("fearsome claw lizards"). These animals lived over the last 100 million years of the 165 million years of dinosaur existence, and were among the smartest and most agile dinosaurs known, with the highest brain-to-body ratios of any animals in the Mesozoic era. Most Deinonychosaurs had arms that were a useful combination of small wings and crude hands consisting of three long claws. Troodon was in a special subfamily that had lost the wings but retained the three long digits on each hand. According to Russell, Troodon's brain-to-body ratio was the highest known for dinosaurs at the time. Because of their special abilities, I would argue that Deinonychosaurs were not only members of an evolutionarily successful niche, they also occupied an inevitably successful developmental niche.

The assumption here, made by a handful of anthropologists and evolutionary scholars over the years, is that trees are a key niche, the "developmental bottleneck," through which the first rock-throwing and club-wielding imitative hominids will very likely pass, on a typical Earth-like planet. Swinging from limb to limb requires very dexterous hands and, just as importantly, a cerebellum and forebrain that can predict where the body will go in space. With their manipulative hands, with or without wings, their big, strong legs and multipurpose feet, and their small size, Deinonychosaurs would have been impressive tree climbers, able to get rapidly up and down from considerable heights. If they were the largest and strongest animals physically capable of doing so, which seems likely, this argues that they would have permanently occupied the special niche that primates would later inhabit. Imagine primates trying to get into the all-important tree niche with Deinonychosaurs running about. Good luck! Deinonychosaurs would have achieved "competitive exclusion", the ability to permanently deny other species access to the critical transitional niche that was the gateway to significantly more intelligent and adaptive life. Much later, Homo sapiens achieved competitive exclusion by being the first to achieve runaway language and tool use, using these to deny all other primates access to more intelligent and adaptive social structures, including our closest competitors, Homo neanderthalensis and others.

So if tree climbing and swinging is the fastest and best way to build grasping hands and predictive brains good at simulating complex trajectories (a claim testable by future simulation), and eventually at modeling and imitating the mental states of others in the pack to enable imitative tool use (the next critical developmental bottleneck leading to planetary dominance for the first species to do so, also testable by future simulation), and if Deinonychosaurs dominated that niche, then it is reasonable to expect a Deinonychosaur to have been the first to make the jump to tool use. Troodon couldn't swing in the trees, but it would have been very agile among them, able to use them for escape and evasion. It had two manipulative hands that would have been very useful both in killing and in avoiding being killed. This looks to me like a potential case for competitive exclusion. The hypothesis to test is that tree environments are the dominant developmental place on land to breed smart, socially-imitative, tool-using species, just as land appears to be the dominant developmental place for the emergence of species that use built structures, on any Earth-like planet.

One might ask, couldn't tool use under water grow to reach competitive exclusion first? Apparently not. Unlike air, water is a very dense and forceful fluid relative to the muscles of species that operate within it, gravity doesn't hold down aqueous structures or animals very well, and language may not allow the same degree of phonetic articulation underwater as it does in air. But underwater tool-using collectives do exist. Dolphins use sponges in collectives, and the master observer Jacques Cousteau discovered in the 1980s that octopuses used rocks as tools, in large socially imitative groups. Like their eyes and brains, two of their eight appendages are prehensile with bilateral symmetry, meaning they are specially neurologically wired to oppose each other in grasping and wielding objects, just like human arms and hands (developmental convergence). Octopuses even occasionally built large groups of small huts for themselves out of rocks, but their collective rock use could not make them the dominant species under water, due to the harsher physics of water compared to air. Thus it seems very likely that runaway tool use must happen, with very high probability, on land first, on all Earth-like planets. Again, this developmental hypothesis will eventually be testable by simulation.

The universe, from this perspective, seems developmentally fated for the fastest-improving language-capable tool-using species to emerge on land, in a breathable atmosphere, not under water. This new selection environment of cultural evo-devo, selecting for more complex language and more useful social collectives, is sometimes called memetic evolutionary development, using Richard Dawkins's concept of the meme as any elemental mentally replicating behavior or idea.

We must recognize that memetic change is always accompanied by another selection environment, technological evo-devo, which starts out very weak at first but becomes increasingly dominant, because our social ideas always lead to ways to use technologies (things outside our bodies) to achieve our goals, and those technologies inevitably become smarter, more powerful and more efficient than biological processes, which are very limited by the fragile materials (peptide bonds) from which they are made.

Susan Blackmore calls any elemental socially replicating technological form or algorithm a teme. So we must realize that both memetic and temetic evo-devo always go together in leading animals, on any Earth-like planet. Once these new replicators (social ideas/behaviors and technologies/algorithms) emerge, biological evo-devo (genetic and even epigenetic change) soon becomes so slow and modest by comparison that its further changes become increasingly irrelevant to the future, relative to memes and temes. As much as we love our ecosystem and should strive to protect it, it is where ideas and technologies are taking us today, not biology, that drives the future of our civilization.


Simon Conway Morris – A Leading Evolutionary Developmentalist (but he might not use that term) 🙂

In the years since Russell's indecent proposal, hundreds of other scientists, including the paleontologist Simon Conway Morris (Life's Solution, 2003, and The Deep Structure of Biology, 2008), have proposed that humanity's most advanced features, including our morality, emotions, and tool use, have all been independently discovered, to varying degrees, in other vertebrate and invertebrate species on Earth. Let us at this point acknowledge but also set aside Conway Morris's Christianity, as his particular religious beliefs are his own business, and are not relevant to his scientific arguments, as his secular critics should honestly acknowledge. According to Conway Morris, if something catastrophic happened to Homo sapiens on Earth, it seems highly probable that another species would very quickly emerge to become the dominant "human" tool-users in our place. In other words, local runaway complexification seems well protected by the universe.

In evo-devo language, we can say there appears to be a developmental immune system operating, to ensure that human emergence, and re-emergence if catastrophes like the K-T meteorite occur, will be both a very highly probable and an accelerating universal event, on any Earth-like planet. Only the quality of our present transition to postbiological status seems evolutionary, based on the morality and wisdom of our actions. Our pathway to and our subtype of humanity may thus be special and unique, but our humanity itself, in many of its key features, seems to be a product of the universe, far more than a product of our own free choice. Learning to see, accept, and better manage all this hidden universal development, and in the process bringing our personal ego, fears, and illusions of control back down to fit historical reality, are among the greatest challenges humans face in understanding the true nature of the universe and our place in it.

Fortunately, these and other developmentalist hypotheses can increasingly be tested by computer simulation, as our computing technology, historical data, and scientific theory get progressively better. Run the universe simulation multiple times, and anything that appears environmentally dominant time and again, and any immunity that we see (statistical protection of accelerating complexity), is developmental. The rest, of course, is creative and evolutionary. To recap our earlier example, hexagonal snowflake structure will be developmental on all Earth-like planets with snow. But the pattern on each snowflake will be evolutionary, and unpredictably unique, both on Earth and everywhere else. Nature uses both types of processes to build intelligence.
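To make this replicate-run test a little more concrete, here is a minimal toy sketch in Python (my own illustration, not from the original post, and not a real universe simulation): run a constrained stochastic "snowflake" process many times, then flag features whose values barely vary across runs as developmental and the rest as evolutionary. The feature names, the symmetry constraint, and the tolerance threshold are all illustrative assumptions.

```python
# Toy sketch: classify features of a replicated stochastic process as
# "developmental" (convergent across runs) or "evolutionary" (divergent across
# runs) by measuring their spread over many replicate runs. Illustrative only.
import random
import statistics

def grow_snowflake(seed):
    """One replicate run: a toy 'snowflake' with physics-constrained symmetry
    (always 6 arms) and unconstrained micro-detail (random branch lengths)."""
    rng = random.Random(seed)
    arms = 6                                                      # constrained by hydrogen-bond 'physics'
    branch_lengths = [rng.uniform(0.1, 1.0) for _ in range(12)]   # contingent detail
    return {
        "arm_count": arms,                      # candidate developmental feature
        "mean_branch": statistics.mean(branch_lengths),
        "first_branch": branch_lengths[0],      # candidate evolutionary feature
    }

def classify_features(runs, tolerance=0.05):
    """Features with negligible spread across replicate runs are flagged
    developmental; everything else is treated as evolutionary."""
    labels = {}
    for feature in runs[0]:
        values = [run[feature] for run in runs]
        spread = statistics.pstdev(values)
        labels[feature] = "developmental" if spread <= tolerance else "evolutionary"
    return labels

replicates = [grow_snowflake(seed) for seed in range(500)]
print(classify_features(replicates))
# Expected (toy) result: arm_count -> developmental; branch details -> evolutionary.
```

The design point is simply that "developmental" is an across-runs, statistical property of the system, not something visible in any single run.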


In Evolutionary Development, the Universe is not just a Ladder of Nature (above), or a Random Experiment (standard Evolutionary theory), but some useful combination of the two simpler models.

Let me stress here that evolutionary development is no return to the Aristotelian scala naturae (Ladder of Nature, Great Chain of Being), where all important matter and process are predestined into some strict hierarchy of emergence. Only the developmental framework of universal complexification is statistically predetermined to emerge in evo-devo models, not the evolutionary painting itself, which is the bulk of the work of art. Remember the all-important differences in tissue microarchitecture and mental processes between two genetically (developmentally) identical twins. Nor is an evo-devo universe a Newtonian or Laplacian "clockwork universe" model, which proposes total physical predetermination, though it is a model with some statistically clockwork-like features, including the reliable timing of various hierarchical emergences throughout the universe's lifespan and death, just as we see in biological development.

Consider that both the Aristotelian and Laplacian models of the universe are not real models of development (statistically predetermined emergence and life cycle) but rather caricatures of development, one-sided views that allow no room or role for evolution. They are as incomplete in describing the whole of an evo-devo universe as neo-Darwinian theory is today.

Nor is an evo-devo universe the random, deaf-and-dumb Blind Tinkerer that universal evolutionists like Richard Dawkins (The Blind Watchmaker, 1996) or the writers of Mankind Rising portray. It appears that our universe is significantly more complex, intelligent, resilient, and interesting than all of these models suppose – it is predictable in certain critical parts that are necessary for its function and replication, and it is intrinsically unpredictable and creative in all the rest of its parts. Furthermore, unpredictable evolution and predictable development may be constrained to work together in ways that maximize intelligence and adaptation, both for leading-edge systems, and for the universe as a system.


A Good Overview of Evo-Devo Biology

Evo-devo biology is an academic community of several thousand theoretical and applied evolutionary and developmental biologists who seek to improve standard evolutionary theory by more rigorously modeling the way evolutionary and developmental processes interact in living systems to produce biological modules, morphologies, species, and ecosystems. Books like From Embryology to Evo-Devo, 2009, and Convergent Evolution: Limited Forms Most Beautiful, 2011, are great intros to this emerging field. I expect most evolutionary developmental biologists would agree with the statement that evolution and development are in many ways opposite and equally fundamental processes in complex living systems, and that neither can be properly understood without reference to its interaction with the other.


If you doubt the idea of universal development, read this great 2011 book!

The best of this work realizes there are two key forms of selection and fitness landscapes operating in natural selection – evolutionary selection, which is divergent and treelike, with chaotic attractors, and developmental selection, which is convergent and funnel-like, with standard attractors. Thus evolutionary developmentalism is an attempt to generalize the evo-devo biological perspective to nonliving replicating complex adaptive systems as well, including solar systems, prebiological chemistry, ideas, technology, and in particular, to the universe as a system.

Let’s close this overview with one revealing example of the interaction of evolution and development. In biological systems, the vast majority of our genes, roughly 95% of them, are evolutionary, meaning they change randomly and unpredictably over macroscopic time, continually recombining and varying as species reproduce. Only about 3-5% of our genes control our developmental processes, and those highly conserved genes, our “developmental genetic toolkit“, direct predictable changes in the organism as it traces a life cycle in its environment. As I’ve argued before, as a 95%/5% Evo/Devo Rule, roughly 95% of the processes or events in a wide variety of complex adaptive systems, including organizations, societies, species, and the universe may turn out be creative bottom-up and evolutionary, and only 5% may be predictable top-down, and developmental, though this evo-devo ratio must surely vary by system to some degree. The generic value of a 95/5 Rule in building and maintaining intelligent systems, if one exists, would explain why the vast majority of universal change appears to be bottom-up driven, evolutionary and unpredictable in complex systems, what systems theorist Kevin Kelly called Out of Control in his prescient 1994 book. Yet a critical subset of events and processes in these systems also appears to be top-down/systemically directed, developmental, and intrinsically predictable, if you have the right theory, computational resources, and data. Discovering that developmental subset, and differentiating it from the much larger evolutionary subset, will make our world vastly more understandable, and show how it is constrained to certain future destinies, even as creativity and experimentation keep growing within all the evolutionary domains.

So what do we gain from conditionally holding and exploring the hypothesis of universal evolutionary development? Quite a lot, I think:

First, we regain an open mind. Rather than telling humanity’s history from a dogmatic and one-sided perspective, and assuming that our past existence in the universe is predominantly a “random accident,” we remember that there are many highly predictable things about our universe, such as classical mechanics, the laws of thermodynamics, and accelerating change. This allows us to present life’s story as a mystery: What parts of its emergence are very highly probable, or statistically predetermined? What parts are improbable accidents? We lose our blind faith that neo-Darwinism explains all of life or the universe, and we realize that there appears to be a balance between evolutionary experiment and developmental predetermination in all things in the universe, as in life.

Second, we regain our humility. We no longer see ourselves as either miraculous creations or extremely improbable accidents. We recognize that there are likely vast numbers of humanlike communities in the universe, which has self-organized to produce complex systems like us, and our postbiological descendants. It is commonly suggested that we are incredibly unique in the universe, and that we emerged "against astronomical odds." On the contrary, developmentalists suspect that many or all of the things we hold most dear about humanity, including our brains, language, emotions, love, morality, consciousness, tools, technology, and scientific curiosity, are all highly likely or even inevitable developments on Earth-like planets all across the universe. This kind of thinking, looking for our universals as well as our uniquenesses, moves us from a Western exceptionalism frame of mind to one that also includes an Eastern or Buddhist perspective. We may not only be unique and individual experiments, but we may also be members of a type that is as common as sand grains on a beach, instruments of a larger cycle of universal development and replication.

Third, we lose our unjustified fearfulness of and pessimism toward the future, and replace it with courage and practical optimism. The evolutionary accident story of humanity teaches us to be ever vigilant for things that could end our species at any moment. Vigilance is adaptive, but fear is usually not. We are constantly reminded by evolutionists that 99% of all species that ever lived are extinct (yes, but they were all necessary experiments, and their useful information lives on), and we live in a random, hostile and purposeless universe (no). Evolutionists conveniently forget that the patterns of intelligence in those species that died are almost all highly redundantly backed up in the other surviving organisms on the planet. Life is very, very good at preserving relevant pattern, information, and complexity, and now with science and technology, it is getting far better still at complexity protection and resiliency. When we study how complexity has emerged in life’s history, we gain a new appreciation for the smoothness of the rise of complexity and intelligence on Earth. Every catastrophe we can point to appears to have primarily catalyzed further immediate jumps in life’s accelerating intelligence and adaptiveness at the leading edge. Life needs regular catastrophe to make it stronger, and it is resilient beyond all expectation. What causes this resilience? Apparently a combination of evolutionary diversity and developmental immune systems, and we still undervalue the former, and are mostly ignorant of the latter. If the universe is developmental, we can expect it has some kind of immune systems protecting its development, just as living systems do. The more we are willing to consider the idea that the universe may be both evolving and developing, the more we can open our eyes to hidden processes that are protecting and driving us toward a particular, predetermined future, even as each individual and civilization on Earth and in the universe will take its own partly unpredictable and creative evolutionary paths to that developmental future.

Fourth, we gain an understanding of universal purpose. Talk of purpose legitimately scares most scientists, who are so recently free of religion interfering in their work. They claim they don't want to return to a faith-based view of the world, but we all must have, and should constantly revise and keep parsimonious, our own personal set of faiths (for example, our scientific axioms), as human reason and intuition, no matter how powerful they become, will always be computationally incomplete. Unexamined faiths are of course the most dangerous kind. Evolutionists put a lot of unexamined and unrecognized faith in their purposeless universe model, so much that it can blind them to the value of admitting and exploring the unknown. Many scientists attack hypotheses of universal teleology wherever they find them – even as they live in a world that they clearly know is predictable in part. We must call that stance hypocrisy, as predictability is a basic form of teleology, or purpose. Evolutionary and behavioral psychologists are now proposing biologically-inspired scientific theories of human values. I recommend The Moral Landscape, by Sam Harris, 2011, which I've reviewed earlier. But most of this work is still not deeply biologically-inspired, as it remains focused on evolution, ignoring development. We must recognize that a better understanding of universal evolution and development can help science derive more useful and more universal evolutionary and developmental values. I believe it is both the best definition and the purpose of humanity to use technology to continually reshape ourselves, individually and collectively, into something more than our biological selves, and to do this in as deliberate and ethical a way as possible, using both evolutionary and developmental means. We can further realize that it appears to be our universal purpose to think, feel, act, and build in ways that maximize our intellectual and emotional intelligence, advancing our minds and hearts.

Fifth, we recognize that very important parts of the future are predictable. This benefit is the most useful to me as a professional futurist. Increasingly, we find foresight practitioners who accept the likelihood of developmental futures. Consider Pierre Wack at Royal Dutch/Shell's foresight group, who proposed the inevitable TINA (There Is No Alternative) trends in economic liberalization and globalization in the 1980s. Or Ron Inglehart and Christian Welzel, who have charted the inevitable developmental advance (with brief and partial evolutionary reversals) of evidence-based rationalism and personal freedom in all nations over the last 50 years. Some leading recent books arguing for the inevitability of certain kinds of social development are Robert Wright's Nonzero, 2000 (on positive-sum rulesets), Steven Pinker's The Better Angels of Our Nature, 2011 (on violence reduction), and Ian Morris's The Measure of Civilization, 2013 (on the predictable dominance of civilizations that are leaders in energy capture, social organization, war-making capacity, and information technology). There are still far too many professional futurists who confidently and ignorantly claim that the future is entirely evolutionary ("cannot be predicted"). But a growing number of leaders, strategists, and futurists see regionally and globally dominant trends and inevitable convergences, make good predictions, and use increasingly better data and feedback to improve their models.


Great New Book on Statistical Social Prediction

For a good recent book on this, read Nate Silver's excellent The Signal and the Noise: Why So Many Predictions Fail But Some Don't, 2012. As we learn to take an evolutionary developmentalist perspective, at first unconsciously and later consciously, we will greatly grow our predictive capacity in coming decades. More of us will foresee, accept, and start managing toward the ethical emergence of such inevitable coming technological developments as the conversational interface and big data, deeply biologically-inspired (evo and devo) machine intelligence and robotics, digital twins (intelligent software agents that model and represent us), the values-mapped web, lifelogs and peak experience summaries, the wearable web and augmented reality, teacherless education, internet television, and the metaverse. Professional futurists and forecasters are now developing our first really powerful tools and models that will keep expanding our prediction domains and horizons, and improving the reliability and accuracy of our forecasts. I believe evolutionary developmentalism is a foundational model that all long range forecasters and strategists need to embrace. Not only must we realize there are possible and preferable futures ahead of us, but we must be convinced that there are inevitable and highly probable futures as well, futures which can increasingly be uncovered as our intelligence, data, and methods improve. Such an effort, at a species level, is the only way we can map what remains truly unpredictable, at each level of our collective intelligence.

We’ve got a long way to go before modern science is willing to give the developmentalist perspective the same consideration and intellectual honesty that we presently give the evolutionist perspective. A lot of papers will have to be published. A lot of arguments will have to be made, and evidence marshaled. Courageous scientists will have to build the bridge from the developmentalist aspects of physics, chemistry, and biology to the highest aspects of our humanity, our ethics, consciousness, purpose, and spirituality. Convergent Evolution is one of several fields that will win lots of converts to developmentalism as it advances. Astrobiology will likely also play a big role, if it shows us just how common our type of life is in the universe, as many suspect it will.


A Classic in Business Foresight – Parts of the Future are Quite Predictable. Ignore at Your Peril.

Fortunately, as futurist Alvis Brigis noted to me in a recent conversation, many of the world's leading companies are already surprisingly developmentalist in their strategy and planning. We can trace this shift back at least to Pierre Wack's strategy group at Royal Dutch/Shell in the 1980s, as discussed in Peter Schwartz's The Art of the Long View, 1996, a classic in business foresight. Wack realized that in order to do good scenario planning (exploring "what could happen", and the best strategic responses to major uncertainties), one should first constrain the possibility space by understanding what is very likely to continue to happen in the larger environment. To restate this in evo-devo language, Wack recommended starting with developmental foresight (finding the apparently "inevitable" macrotrends), and then doing evolutionary foresight (exploring alternative futures) within a testable developmental frame. Treating both evo and devo foresight perspectives seriously is a key challenge for strategy leaders. Many management and foresight consultancies are good at one, but not the other, as it's a lot easier to pick one perspective as your dominant framework than to continually figure out how to integrate two opposing processes. Yet both are critical to understanding and managing change.

I do technology foresight consulting for several companies, and follow foresight work at the consultancies, and I’m convinced that those companies with the best predictions, forecasts, and foresight processes interfacing with their strategic planning groups are winning increasingly large advantages in their markets every year. All the most successful companies realize there are many highly predictable aspects of our future, and collectively our business and government leaders are now betting trillions annually on their predictions. A few are using good foresight processes, but most are still flying by the seat of their pants.

The executives leading our most successful companies don’t see the world as a random accident, like an evolutionist, or some naive and self-absorbed postmodernist who lives off the exponentiating wealth and leisure of the very same science and technology that he argues are “not uniquely privileged perspectives” on the universe. Let’s hope our young scientists in coming years have the courage to be as developmentalist in their research, strategy, and perspective as our leading corporations are today. And as our biologically-inspired intelligent machines, destined to be faster and better at pattern recognition than us, will be a few decades hence. Will modern science recognize the evolutionary developmental nature of the universe before human-surpassing machine intelligences arrive and definitively show it to us? That is hard to say. But I believe we can predict with high probability that as mankind continues its incredible rise, our leaders, planners, and builders must become evolutionary developmentalists if we are to learn to see reality through the universe’s eyes, not just our own.

Further Reading

For a more detailed treatment of evolutionary developmentalism, with references, you may enjoy my scholarly article, Evo Devo Universe?, 2008 (69 pp. PDF), and for applications of evo-devo thinking to foresight, Chapter 3, Evo-Devo Foresight (90 pp.), of my online book, The Foresight Guide, 2018.

For a speculative proposal on where accelerating change may take intelligence, as a universal developmental process, see my paper The Transcension Hypothesis, 2012 (and the lovely 2 min video summary by Jason Silva). This hypothesis speculates, and offers preliminary evidence and argument, that black holes may be developmental attractors for the future of intelligent life throughout the universe.

I wish you the best in your own foresight journey, and I hope that your thoughts and actions help you, your families, and your organizations to evolve and develop, every day, as well as this amazing universe will allow.

Preserving the Self for Later Emulation: What Brain Features Do We Need?

Let me propose to you four surprising statements about the future:

1. As I argue in this 2010 video, both chemical and cryogenic brain preservation technologies may soon be validated to inexpensively preserve the key features of our memories and identity at our biological death.

2. If either or both forms of brain preservation can be validated to reliably store retrievable and useful individual mental information, these medical procedures should be made available in all ethical societies as an option, for those who might want it, at biological death.

3. If neuroscience, biologically-inspired computer science, microscopy, scanning, and robotics keep improving as they have so far, preserved human memories and identity may be affordably reanimated by being "uploaded" into computer simulations, even before the end of this century. Such "uploading" is being attempted for specific memories in animals today, and new tools like optogenetics and expansion microscopy give leading neuroscientists reason to expect that they will soon decipher the neural code and be able to manipulate it (removing memories and introducing new ones).

4. In all societies where a significant minority (let’s say 100,000 people) have done brain preservation at biological death, significant positive social change may result in those societies today, regardless of how much information is eventually recovered from preserved brains.

These are all extraordinary claims, each requiring strong evidence. Many questions must be answered before we can believe any of them. Yet I provisionally believe all four of these statements, and that is why I co-founded the Brain Preservation Foundation in 2010 with the neuroscientist Ken Hayworth. BPF is a 501(c)(3) nonprofit, chartered to put the emerging science of brain preservation under the microscope. Check us out, and join our newsletter if you'd like to stay updated on our efforts.

[2018 Update: For the latest on the neural code, the physical storage of memories in the brain, see this excellent paper by Carrillo-Reid et al. (2018). It shows how we are now imaging and optically manipulating neural ensembles, small spatiotemporally connected networks of neurons that link to other ensembles and are the building blocks of neural circuits. Neuroscientists believe these ensembles and circuits store engrams (memories and models) in the brain. With new optogenetics tools, we can find and manipulate these ensembles, and possibly create new ones, which could then be recalled by selectively stimulating any individual cell in the ensemble. This would be a true demonstration of memory creation, and a major advance in deciphering the neural code.

More hints that neural synchrony in the claustrum is the root source of consciousness came in 2014, when a neurologist at GW School of Medicine, Mohamad Koubeissi, managed to turn off and on consciousness in a single patient by electrically stimulating her claustrum with deep brain electrodes. Electrical stimulation of brains has a long history in neurology and neuroscience, but it had never resulted in consciousness control, until now. Then in 2017, an fMRI study in awake and anesthetized rodents showed strong interhemispheric functional connectivity, via the claustrum, to both the prefrontal cortex and the thalamus in the awake state, dynamic connectivity that was lost in the anesthetic state. This was the first study to identify specific neural substrates by which the claustrum may “orchestrate” (to use Francis Crick’s term) consciousness. This work is a major vindication of Crick and Koch’s 2004 hypothesis that the claustrum is the central creator of consciousness. We are truly on the edge of understanding it as a fully mechanistic process. Understanding consciousness mechanistically will allow us to ask how we might go about instantiating it into nonbiological intelligence systems, perhaps even some time this century.

There are also some speculative hints, in 2018, of a possible epigenetic “backup” storage of memory. Some very old experiments by James McConnell in 1959, recently redone by Michael Levin at Tufts and David Glanzman at UCLA, suggest that our memories may not only exist in our synapses and dendrites; they may also be stored, to some degree, epigenetically in our neural nuclei, via modification to the histone proteins that wrap DNA and regulate its folding. Having such a digital memory backup system would be clever, if brains and evolution figured out how to do it. A readable “backup” could be used to reestablish the right connections if they are disrupted, and would make our memories particularly robust to biophysical stresses that deform the analog morphology of neural networks in a living, moving brain. If some degree of epigenetic memory backup (to neural nets) exists and is extensive, it might mean a lot of a person’s memory could be retrieved and uploaded in the future, even if a person was preserved using poor methods (like most cryonics patients today), or preserved many hours after death (as their neural morphology starts to decompose). More research will be needed to determine whether such a system exists, and if so, how many hours it might survive after death.]

For an excellent primer on the latest neuroscience theories and computer models of how the brain works, read the free wiki book, Computational Cognitive Neuroscience, O’Reilly and Munakata (2012). Chapter 8, Learning and Memory, is excellent. To understand how and why “you” might survive being uploaded into a computer or robotic body, if that were the easiest way to bring you back to the world in the future, see BPF Advisor Mark Walker’s article, “Personal Identity and Mind Uploading,” JET, 22:37-51, 2012.

In this post, I’d like to give you my best provisional answer to a question relevant to the first three statements above:

To preserve the self for later emulation in a computer simulation, what brain features do we need?

We can distinguish three distinct information processing layers in the brain:[1]

1. Electrical Activity (“Sensation, Thought, and Consciousness”)
These brain features are stored from milliseconds to seconds, in electrical circuits.

2. Short-term Chemical Activity (Short- & Intermediate-term Learning – “Synapse I”)
These brain features are stored from seconds to a few days in our neural synapses (synaptome), by temporary molecular changes made to preexisting neural signaling proteins and synapses, as well as temporary changes to the neural epigenome (DNA transcription and inhibition machinery, an overlay on the human genome).

3. Long-term Molecular Changes (Long-term Learning – “Nucleus and Synapse II”)
These are stored from years to a lifetime in our neuron’s connectome, synaptome, and nucleus (epigenome), by permanent molecular changes to neural DNA, the synthesis of new neural proteins and receptors in existing synapses, and the creation of new synapses.

All three of these brain processes involve a still-imperfectly understood combination of both digital (on or off, discrete states, including epigenetic DNA states and neural firing states) and analog (continuous, spatially related, wavelike) information and computational processes, which integrate in a statistical, associational, competitive, and massively parallel manner. At present, it is a reasonable assumption that only the third layer listed above, where long-term durable molecular changes occur, must be preserved for later memory and identity reanimation.

The following overview of each of these three key information processing layers should help explain this assumption.

1. Electrical Activity (“Sensation, Thought, and Consciousness”)

Our electrical brain includes short-distance ionic diffusion in and between neurons and their supporting cells (i.e., calcium wave communication in astrocytes), action potentials (how neurons send signals from their dendrites to their synapses), synaptic potentials (how signals cross the gaps between neurons), circuits (loops and networks) and synchrony (neurons that fire in unison, though they are widely separated). Electrical features operate at very fast timescales, from milliseconds to a few seconds, and are variable (not exact), volatile, and easily disrupted.

Neural Synchrony – Our Leading Model of Higher Perception and Consciousness . Image: Senkowski et.al., 2008

These features certainly feel very important to us. They include our sensations (sensory memory) and current thoughts (commonly called “short-term” memory by neuroscientists). Recurrent loops, special electrical circuits that cycle back on themselves, hold our current thoughts (when you rehearse some information to avoid forgetting it, you are literally keeping it “in the loop”). Neural synchrony is my favorite current theory (among several currently in competition) for the medium that creates our conscious perceptions. When it happens in the self-modeling areas of our brain, it gives us self-aware consciousness.

Yet electrical features are also fleeting. When you sleep, or are knocked unconscious, or are given an anesthetic, your consciousness disappears, only to be “rebooted” later, from more stable parts of your brain. Our memories aren’t even recalled with precision but are rather recreated, as volatile electrical processes, from these molecular long-term stores, in ways easily influenced by our mental state and cognitive priming (what else is on our mind). That’s why eyewitness testimony is so variable and unreliable.

The electrical features of our self are thus like the “foam” on the top of the wave of our long-term memories and personality. They make us unique for a moment, as they hold only our most immediate thinking processes.[2] Amazingly, people who undergo special surgeries that stop their heart, and some who drown in very cold water, can have no detectable EEG (electrical patterns) for more than thirty minutes, and their brains successfully reboot after rewarming them. Essentially, these individuals are recovering from clinical brain death. Not only do they not have consciousness during this period, they have no unconscious thoughts. Yet because their deeper layers aren’t too disrupted, they can restart their electrical activities.

An excellent though very technical book about neural spikes, loops, and synchrony is Rhythms of the Brain, Gyorgy Buzsaki, 2006. It explains the emergent properties and integrative functions of these “highest order” electrical features of our brain. See also this recent discovery of electric field coupling among neighboring neurons, by leading neuroscientists Henry Markram, Christof Koch, and others, and reported by Peter Hankins on his great cognitive science blog, Conscious Entities. There are some big mysteries still left to uncover regarding synchrony. Ephaptic coupling is a way for neurons to synchronize spike timing in neighboring neurons, via a mechanism completely independent of synapses. Neurons are much more versatile in both modes of communication and synchrony than previously thought.

My late mentor at UCSD, Francis Crick, and his brilliant Caltech collaborator, Christof Koch, call this topic the search for the Neural Correlates of Consciousness. It’s a great phrase. Consciousness is not a mystery we’ll never solve, but according to a number of neuroscientists it is a physical process of neural synchrony in particular regions of your brain. The region Crick and Koch were most interested in was the claustrum, a thin sheet just above the putamen, near the center of the brain, found in all mammals, which has many longitudinal tracts that connect to many areas of the cerebral cortex and to the thalamus, the central relay station in the brain. In 2005, the year after Crick’s death, Koch published their hypothesis that the claustrum is the central synchronizer of conscious experience.

These brief, rhythmic synchronizations, typically in the gamma frequency band, share information between highly different, specialized groups of neurons in distant regions of the brain by tightening up (“binding”) their interdependent sequences of action potentials. The synchronizations are controlled by the inhibitory neurons in our brain, which use the GABA neurotransmitter. Disrupt gamma synch, as with anesthesia, and you take away consciousness. Give a drug like zolpidem, which activates GABA neurons and increases gamma synch, to patients in a persistent vegetative state, and amazingly, you will wake 60% of them up from their comas, to varying degrees!
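As a purely illustrative sketch (not any lab’s actual analysis pipeline), the toy Python below generates two spike trains that share a common 40 Hz drive plus one independent train, then scores their synchrony by correlating spike counts in coarse bins. The rates, bin sizes, and the `spike_train`/`synchrony` helpers are all invented for illustration.

```python
import numpy as np

def spike_train(rate_hz, duration_s, dt=0.001, rng=None):
    """Generate a Poisson spike train as a 0/1 array with time step dt (seconds)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_bins = int(duration_s / dt)
    return (rng.random(n_bins) < rate_hz * dt).astype(float)

def synchrony(train_a, train_b, bin_size=25):
    """Correlate spike counts in coarse bins (here 25 ms) as a crude synchrony index."""
    counts_a = train_a.reshape(-1, bin_size).sum(axis=1)
    counts_b = train_b.reshape(-1, bin_size).sum(axis=1)
    return np.corrcoef(counts_a, counts_b)[0, 1]

rng = np.random.default_rng(1)
shared_drive = spike_train(40, 10.0, rng=rng)          # common gamma-band drive
a = np.clip(shared_drive + spike_train(5, 10.0, rng=rng), 0, 1)
b = np.clip(shared_drive + spike_train(5, 10.0, rng=rng), 0, 1)
c = spike_train(45, 10.0, rng=rng)                     # independent neuron

print("synchrony(a, b):", round(synchrony(a, b), 2))   # high: neurons share a drive
print("synchrony(a, c):", round(synchrony(a, c), 2))   # near zero: independent firing
```

The point of the toy is simply that “binding” shows up as correlated spike timing, which is something a downstream circuit (or an experimenter) can detect and exploit.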

Wikipedia doesn’t yet have a good explanation of the gamma synchrony model of consciousness, but they do have a good page on the claustrum. Laura Colgin at Kavli has found two reliable gamma synch mechanisms in rat hippocampus. She speculates that slow gamma makes stored memories available to current consciousness, and fast gamma integrates sensations to create conscious perceptions. 

In sum, though neuroscientists don’t yet all agree on many of the details, many have found neural correlates of sensations, thoughts, emotions, and consciousness in the electrical features of our brains. In conjunction with the short-term chemical changes we will describe next, these processes represent both our “highest” (most valued) and yet at the same time, our most volatile and least-unique self.

It is quite likely, in my view, that if we uploaded your brain into a computer, and then reestablished consciousness network states different from the ones that existed in your biology at death, states that were an “average” of typical human networks, you would still wake up feeling like essentially the same “you”, and others would describe you as the same as well. Consider that we all have different kinds of consciousness on a daily basis.

In other words, the dynamic specifics of our consciousness networks (which we don’t yet deeply understand), as opposed to their structural connectivity with the rest of the brain (which we are now identifying in areas like the claustrum, and know how to preserve), may be the least important contributors to our unique identity.

While for many of us, consciousness is the feature of our minds that we love the most, it appears its primary role is to be an “orchestra conductor” of much more carefully stored, and slower-changing “lower-level” layers of you. If you change orchestra conductors, you will still get a symphony that is beautiful, and largely the same.

Let’s look at those significantly more unique lower layers now.

2. Short-term Chemical Activity (Short- and Intermediate-term Learning – “Synapse I”)

Short-term chemical activity is the next layer down. It involves all our short- and intermediate term learning and memory, everything beyond our sensations, current thoughts, and consciousness, but not including our long-term memories. We can call this layer “Synapse I.”

As our electrical experiences and thoughts race around the various circuits in our heads, we make a number of short-term learning changes in our neural networks to capture, for the moment, what we’ve just experienced and learned. These involve changes to preexisting proteins in our preexisting synapses (communication junctions), changes that last for minutes (short-term) to days (intermediate-term). These are changes in both the mechanics of neurotransmitter release and short-term facilitation (strengthening) or depression (weakening) of synaptic effectiveness. Synapses are temporarily modified by the precise timing and frequency of electrical signals (action potentials) received by the postsynaptic neuron, a process called spike-timing dependent plasticity. There are short-term changes in signaling molecules (neurotransmitters, cAMP, Ca++, CamKII, PKA, MAPK), and membrane receptors (NMDA). Phosphorylation states (chemical tags) are altered on some of these molecules, and a temporary equilibrium between kinases (enzymes that add phosphates to key molecules) and phosphatases (enzymes that take them away) is established in the synapse.
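To make “spike-timing dependent plasticity” a bit more concrete, here is a minimal sketch of the textbook exponential STDP rule: a synapse strengthens when the presynaptic spike arrives just before the postsynaptic one, and weakens when the order is reversed. The amplitudes and time constants below are toy values, not measured ones.

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Classic exponential STDP curve.

    dt_ms = t_post - t_pre. Pre-before-post (dt > 0) potentiates the synapse;
    post-before-pre (dt < 0) depresses it. Amplitudes and time constants are toy values.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # potentiation (facilitation)
    return -a_minus * math.exp(dt_ms / tau_ms)       # depression

weight = 0.5
for dt in (+5, +15, -5, -40):   # ms between pre- and postsynaptic spikes
    dw = stdp_weight_change(dt)
    weight += dw
    print(f"dt = {dt:+4d} ms  ->  dw = {dw:+.4f}, weight = {weight:.4f}")
```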

[Note: In late 2012, Ye et al. showed in Aplysia how precise spatiotemporal signaling in the synapse involving PKA holds short-term memories in synaptic electrochemical networks, and the interaction of PKA and MAPK holds intermediate-term memories in these networks, in a process called synaptic facilitation.]

Throughout our day, and particularly when we sleep, our short- and intermediate-term brain writes important parts of its experiences to our long-term memory (the subject of our next section), building durable new synaptic connections, so this learning can stay with us for years to life, in a process called memory consolidation. Memory consolidation seems to happen best when we are in slow wave (deep and dreamless) sleep, which we get in cycles during the night (and especially well if our sleeping room is dark and quiet) and also during a good nap. That’s why a short nap after an intense learning session is considered a great way to “lock in” what you’ve learned.

When any of our short- or intermediate-term memories or thinking patterns are selected to be written into long-term memory, communication with the cell nucleus must occur, and new membrane proteins and synapses are then built, involving new or altered circuits in the connectome. If not, the new memory or pattern dies out.[3]

Again, long-term memory formation moves a subset of our recent learning and memories, apparently the most relevant parts, from temporary spatiotemporal signaling states to permanent new synaptic structures, anchored to the cytoskeleton of each neuron. We can think of the new proteins, synapses, and circuits established in long-term memory, in neural synapses and nuclei, as very roughly like DNA: they are long-term stable structures, encoded in a partly digital, partly analog form, that will endure all the flux and variability of the biochemistry within each neuron over a lifetime. It is these unique long-term synaptic and epigenetic networks that we must preserve, scan, and upload in creating neural emulations, as we will discuss.

Now consider a key insight about short- and intermediate-term (Synapse I) learning, which helps show that it is neither consciousness nor long-term memory, but something altogether different. Subjectively, we all know we can store both a far greater total number and a far greater temporal density of episodic (experiential) and declarative (factual) memories in short-term memory than in long-term memory. That’s what we do when we cram for a test, intensively read important information, engage in intense discussions, or pay for great experiences. We have learned we can greatly alter our short-term memory, and having a lot of relevant, motivating information “loaded” into it can be a great help as we manage both our long-term strategies and our daily tasks.

Most of this short-term information is not selected to be written into long-term memory, as far as both our subjective experience and neuroscience research knows today. It has a half-life of days, and it steadily decays after it is learned. But all that memory can still be highly useful in the moment. It plays a key role in regulating what we can think and do, on any particular day.

Given what we’ve seen so far, do you think our short-term (Synapse I) learning needs to be preserved for you to return to life as essentially the same person you were when you died? I don’t think so. Claiming it does seems to me to be an extreme claim.

Consider that if your NMDA receptor distributions aren’t recoverable in a future emulation, for example, you should lose only the last couple of days of your life experience prior to being preserved, at least in our present understanding of how these systems most likely evolved and operate.

I would expect any good reanimation program could use a baseline (species average) version of such receptors, and you’d wake up as almost entirely the same “you”. Perhaps we’ll need another decade or two of neuroscience to definitively answer such questions, but we can already form reasonable intuitions about them today.

Let’s look a bit closer at how neurons work to understand the amazing capacity of short-term memory in a bit more detail.

Neural dendrites, cell body,  action potential, and synapses. Image: Gallant’s Biology.

All our neurons work in circuits, and strengthen or weaken their connections based on chemical and electrical activity, in a process called Hebbian learning. Just like your muscles, which come in two sets that oppose each other around every joint, neural circuits are both excitatory and inhibitory at many decision points in the network. One of the most important decision points is the cell body of each neuron, where the nucleus is. The electrochemical current from all the dendrites (“roots”) of each neuron flows toward its cell body, and action potentials (current waves) flow from the cell body to its synapses (“branches”), along the axon (“trunk”) of each neuron. Glutamate is the main neurotransmitter we use to send excitatory current from a synapse to the dendrite of the next neuron in a circuit (the postsynaptic neuron). Glutamatergic synapses are thus called “positive” in sign, and they promote electrical activity throughout the brain. GABA is the main neurotransmitter we use to let inhibitory current leak out of a postsynaptic dendrite. GABAergic synapses are thus called “negative” in sign, and they depress circuits throughout the brain. With few exceptions, neurons use just one type of neurotransmitter, or the same small set of neurotransmitters, at all their synapses.

Electrically, each neuron sums the net result of the positive and negative inputs it receives from all its dendrites, over milliseconds to seconds. As part of this summing process, dendrites also make their own local and weak mini-action potentials, dendritic spikes, and we’ve recently learned they use these mini-spikes to do complex information processing before sending current to the cell body. This reminds us that computationalists still have a good ways to go before they can build neural network models sufficiently complex to honor reality. Then, if the current received at the cell body exceeds that neuron’s threshold, it sends a traditional action potential (depolarizing electrochemical signal, or “spike”) to all its synapses. As the brain learns, our synapses enlarge or shrink, giving them greater or lesser excitatory or inhibitory effect, and we will either grow new synapses or lose existing ones, depending on the value of the circuit.
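The summing-and-threshold behavior just described is essentially the textbook leaky integrate-and-fire model. Below is a minimal sketch with arbitrary parameters; the `integrate_and_fire` helper and its inputs are illustrative only, and real neurons add dendritic spikes, adaptation, and much more, as noted above.

```python
import numpy as np

def integrate_and_fire(exc_input, inh_input, threshold=1.0, leak=0.95, reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane 'voltage' sums excitatory (+)
    and inhibitory (-) input each step, leaks toward rest, and emits a spike (1)
    whenever it crosses threshold. All units are arbitrary."""
    v, spikes = 0.0, []
    for exc, inh in zip(exc_input, inh_input):
        v = leak * v + exc - inh
        if v >= threshold:
            spikes.append(1)
            v = reset
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
exc = rng.random(50) * 0.4          # glutamatergic ("positive") drive
inh = rng.random(50) * 0.2          # GABAergic ("negative") drive
print("spike train:", integrate_and_fire(exc, inh))
```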

The architecture of memory, thought, emotion, and consciousness may thus be reducible to a surprisingly simple set of algorithms, connections, weights, signaling molecules and electrical features in each neuron, working together in a massively parallel way to create computational networks that are far more complex than the individual parts.

Hippocampus and frontal lobes. Image: NIH

In higher animals, the neurons in our hippocampi (two c-shaped areas of ancient, primitive, three-layer cortex in each hemisphere of our brain), and the connections they make to the rest of our cerebral cortex (especially to our frontal cortex), store all kinds of episodic (experiential) and declarative (fact-based) information, all from our last few days of life. At the same time, neurons in our cerebellum (a more primitive, “little brain” at the base of our skull) store procedural learning and memory (how to move our bodies in space). Experiments with rats and primates tell us that each hippocampus makes perhaps tens of thousands of new neurons every day, from neural stem cells. Other than for repair after certain kinds of injury, no other part of the adult brain is able to use stem cells in detectable numbers, as far as we know. The rest of our brain is postmitotic (unable to use cell division to maintain its structure), as neuroscientists demonstrated in an elegant experiment in 2006. Our neurons must be maintained by our immune and repair systems, and as they die via natural aging, or kill themselves in apoptosis, memories start to die.

Hippocampal dendritic spines. Image: Fiala & Harris, 2000.

Throughout our adult life, our hippocampal neurons, with the uniquely dense synapses of this evolutionarily older cortex and their connections to the rest of our evolutionarily newer cortex, may be the primary place where we temporarily hold much of the information we have learned over the last day or two.

At right is a picture of a computer reconstruction of a small section of ten columns of synapse-rich “spiny dendrites”, from the CA1 (input) region of the hippocampus. CA1 contains areas like place cells, imprinted genetically with detailed maps of 3D space. Like the digestive cells lining our gut, and the skin cells at our fingertips, certain hippocampal neurons appear to get worn out on a regular basis by this demanding short-term memory holding function, and so some neuroscientists think new ones must regularly grow and mature to replace them.

People who have had both hippocampi surgically removed, like the memory disorder patient Henry Molaison, who had this done at the age of 27, can’t update their long-term episodic and declarative memories. H.M.’s long-term memory and personality were mostly “frozen” at 27. He could occasionally add bits of new information to long-term memories of the same type he’d built before the surgery, and he could learn new procedural (spatial and muscle) memories in his cerebellum, but he had no cerebral knowledge that he’d added these memories. H.M.’s amazing life suggests that if the brain preservation process damaged our hippocampi, but not the rest of our brain, we’d come back without our most recent experiences (two-day amnesia), but all our older memories and personality would still be intact. Ted Berger at USC managed to build a simple version of an artificial electronic hippocampus for mice in 2005, so there’s good reason to believe that this part of our brain, though important, isn’t irreplaceable. As long as you could install an artificial hippocampus in the computer emulation constructed from your scanned brain, you’d be back in business as a learning organism, with only some of your more recent memories and learning erased. This all helps us understand that what cognitive scientist Daniel Dennett would call our center of narrative gravity, our most unique self, is our long-term memory.

The fact that only special areas of our hippocampus can add new cells during life exposes a harsh reality about our biological brains. We are all born with a large but fixed memory capacity, both short-term and long-term, and this capacity gets increasingly used up, pruned and potentiated, the older we get. Anyone over 40, like myself, knows they are considerably less flexible at learning new things than they were at 20. That decreasing flexibility is simply a result of the physics of network formation in finite biological brains. We can still add new branches, and new connections, but it gets harder over time.

Now we arrive at our truest self, the part I will argue that we care most about preserving and sharing with our loved ones and society. It is this self that I expect will later merge with the Digital Twin (to be discussed shortly) that many of us may leave behind for our loved ones after our deaths in the 2020’s and beyond, as strange as that concept might sound today.

Experience-based learning. Image: Graham Paterson, Children’s Hospital Boston

3. Long-term Molecular Changes (Long-term Learning – “Nucleus and Synapse II”)

The production of long-term memory, personality, and identity requires all the short-term synaptic changes above, plus permanent molecular changes in the neuron’s Nucleus (DNA and its histones, or wrapping proteins), and the permanent creation of new cellular proteins, synapses, and circuits (Synapse II). Here’s a brief summary of our understanding of the process[4]:

3A. Nucleus (“Genome, Transcriptome, and Epigenome”)
1. Retrograde transport and signaling from the synapse to the nucleus
2. Activation of nuclear transcription factors and induction of gene expression
3. Chromatin alteration and epigenetic changes in gene expression (gene-protein networks)

3B. Synapse II (“Connectome and Synaptome”)
1. Synaptic capture of new gene products, local protein synthesis, and seeding of new synaptic sites
2. Permanent synaptic changes, activation of preexisting silent synapses, formation of new synapses.

We used several “-ome” words above. Let us briefly consider each. They are very roughly ordered below in terms of their likely contribution to our unique self, from least to most important:

The Genome. This is our inherited set of genes and gene regulatory networks, including those that control instinctual behaviors. Our genome includes the unique alleles we received from our parents. It is easy to preserve, as it is the same in all cells. With one tissue sample we can create a clone later, either physically, or far more likely, in a computer simulation. But this clone has only our inherited uniqueness. We’ll need contributions from the next four “omes” to add our life memories and learning to the emulation.

The Transcriptome. This is the set of proteins made (transcribed) by cells. While proteomics (another “ome” word) is in its infancy, scientists estimate each of our cells has the DNA to express ~20,000 basic protein types. Each type can be further modified after creation by adding or removing chemical tags like phosphate, methyl groups, ubiquitin, and other small molecules, so that more than a million protein subtypes may exist in a typical human body. Fortunately, each of our ~220 cell types only uses around 5,000 of these 20,000, and perhaps less than 2,000 of the 5,000 are unique to each cell type. Neurons and glia, the cell types we are most interested in, may use just a few hundred protein types to store our higher learning and memory in the nucleus and synapses. The other proteins are there to keep all of our cells alive, which is a critical precondition to being able to store long-term memories in a special subset of neural structures. All this suggests the proteomics of memory and identity, and of later memory and identity reconstruction from scanned brains, are not impossibly complex, but rather highly challenging, fascinating and eventually solvable problems.

The Epigenome. Our epigenome is a gene-regulatory layer that involves chemical changes, mostly methylation and acetylation, to DNA and to the histone proteins that wrap and expose DNA in the cell nucleus. These changes determine how DNA, RNA, and protein are expressed in the nucleus, and how neurons manage their synapses as they grow and learn. These are learning-based changes in gene-protein networks that occur during the life of the organism. Epigenetic changes during biological development are responsible for the different transcription patterns that emerge as cells divide, turning them into the various cell and tissue types in our bodies. The Dutch famine of 1944 and the Överkalix study in Sweden tell us that some epigenetic changes can be inherited in humans, so we all should seek good nutrition and avoid toxin exposure, as we may pass some of that to our children in the form of compromised and undermethylated epigenomes. There is a lot more to the epigenome story still to be uncovered, as this 2011 article on epigenetic regulation in learning and memory in Drosophila makes clear.

 [2015, 2018 Updates: Dr. David Sweatt at U. Alabama, Birmingham has for the last few decades been one of the leaders in researching the epigenetics of memory. See this brief article by Sweatt (Exploring the Building Blocks of Memory, UAB’s The Mix, Oct 2015), for a recent update. Unfortunately, as a species and in our leading countries we spend a ridiculously small amount of money trying to uncover those building blocks, so progress has been far slower than it otherwise could have been. Also, read this great 2018 ScienceNews article, and see Wikipedia’s page on the epigenetics in learning and memory to track this ongoing story. Perhaps you’d like to become a researcher in this area yourself? You’d do us all a great service.

The human brain is the most complex piece of matter in the known universe. The question of how memory is stored is arguably the most interesting of all brain questions, and one whose answer will improve our civilization in countless ways. You’d think it would be among our top priorities, but unfortunately, it still isn’t. We are stumbling and dragging our feet into a better world, not running toward it. 

How much of the epigenome will we need to preserve to be able to recreate higher human memories in computers? The answer isn’t yet clear. Epigenomic changes that direct permanent protein changes in the neural nucleus may well be a redundant form of memory storage. I would currently bet that some (low?) level of epigenomic data, in concert with connectomes and synaptomes (discussed next), may be necessary to recreate our higher memories. We shall see, as they say.]

The Connectome. This is a map of our neural cell types, and how they connect. Our connectomes and much of our dendrite structure are very similar in all of us. This shared developmental structure makes it easy for us to communicate as collectives, for ideas or “memes” to jump from brain to brain. Yet with 100 billion neurons making an average of 1,000 connections to other neurons, and most of these not being developmentally controlled, we’ve got the ability to make 100 trillion connections, the large majority of which will be unique to each individual.

The Synaptome. These are key features of the ~1,000 synapses that each neuron makes to others. They are the particular long-term molecular features that determine the strength and type of each synapse, its signaling states and electrical properties, as we’ve described them above. The synaptome is the weight and type of the 100 trillion connections described above, and this information may be the most important “recording” of our unique self. Fortunately, because memories are stored in a highly redundant, distributed, and associative manner in our synaptic connections, our synaptome is to some degree fault tolerant to cell death. Both artificial and biological neural networks experience graceful degradation (partial recall, incremental death) of higher memories as individual neurons die. We also know the molecular code of long term memory is fault tolerant to the noise, deformations, and chaos of wet biology. The feedback loops between the electrical and gene-protein network subsystems interact somehow to stabilize long term memories in a special subset of durable molecular changes, in spite of all the other biochemistry furiously going on to keep the cell alive. [2015: For more on synaptic diversity, a topic we still have not fully characterized, see synaptome researcher Dr. Stephen Smith’s excellent presentation, The Synaptome Meets the Connectome, YouTube, 2012. Understanding synaptic diversity is likely to be a key piece of the memory encoding puzzle. We finally have the research tools and a modicum of funding to investigate it, via such paths as the BRAIN initiative, which so far has been vastly underfunded and overhyped, and via much more effective yet relatively small independently-funded groups, like the excellent Allen Institute for Brain Science.]
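Hedged back-of-the-envelope arithmetic on those numbers: if each of the ~100 trillion synapses could be usefully summarized by a target neuron ID plus a few bytes of weight and type information (a big “if”, assumed purely for illustration), the raw synaptome description would be on the order of a petabyte.

```python
neurons = 100e9                  # ~10^11 neurons (figure used in the text)
synapses_per_neuron = 1_000      # average connections per neuron (text estimate)
bytes_per_synapse = 8            # assumed: target ID, weight, and type, crudely packed

total_synapses = neurons * synapses_per_neuron            # ~1e14 connections
total_bytes = total_synapses * bytes_per_synapse

print(f"synapses: {total_synapses:.1e}")                               # 1.0e+14
print(f"raw synaptome estimate: {total_bytes / 1e15:.1f} petabytes")   # ~0.8 PB
```

That is a large but not absurd number by modern data-center standards, which is part of why the scanning and storage side of this problem looks more tractable than the decoding side.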

Lifelong Learning, and Your Digital Twin 

Given all this, if we want to be lifelong learners in a world of accelerating technological and job change, it is critical to get an early education that is as categorically complete (global, cosmopolitan, and scientific), moral (socially good, positive sum) and evidence-based as possible. Our children need the best mental scaffolds they can get early on, or they’ll spend the rest of their lives trying to prune away harmful and untrue thoughts and beliefs acquired in their youth. Psychologists have long known that it is much easier to add increasing specificity to a neural network than it is to unlearn (depress) any branch, once it’s built. We need to be careful about what we allow into our memory palaces.

That said, children also benefit greatly from freedom, early on in life, to study what they themselves desire to learn, and to have a good degree of control over learning outcomes and style. This freedom, and appropriate rewards for effort of any kind, induce them to build intricate mental specializations in areas they are personally passionate about. For those who want to know how to implement a 50/50 balance of broad, state-mandated learning in future-critical STEM fields, analytical thinking, and civics (the “hilt of the sword”, technical ability and broad world knowledge), and a personalized program of student-directed specialized learning, creativity, and play in the other half of the time, mastering whatever they can convince their teachers is worth studying (the “blade of the sword”, passionate specialization), I strongly recommend The Finland Phenomenon, 2010 . This exceptional film (free on YouTube now too), along with Pasi Sahlberg’s Finnish Lessons: What Can the World Learn from Educational Change in Finland, 2011, and Tony Wagner’s Creating Innovators, 2012, all demonstrate key elements of the future of learning in enlightened societies, in my opinion. It may take 20 years for the evidence to be incontrovertible, but you can give it to your child now, if you find it appealing. The US will eventually realize that if the Finns did it, rejuvenating their previously failing education system over a twenty year period, we can too.

Digital Twins – Virtual Assistants (Smart Agents) With Simple Models of Our Interests, Will Be Useful for Many of Us By the Early 2020’s. Image: MyCyberTwin.com

It is also liberating to realize that while our biological brains are less able to learn fundamentally new things as they age, all the digital technologies we use, technologies which will bring our emulations back at an affordable price later this century, will continue to get exponentially more powerful every year. Most of us don’t realize this, but everyone who uses a social network, email, or any other technology to capture things they say, see, and write about is also creating a digital simulation of themselves.

Some time in the 2020s, I expect that many of us will be talking to and with our best search engines in complex sentences (the conversational interface), and will be using personalized smart agents, “Digital Twins” (hereafter, “Twins”), which will have crude maps of our interests and personality, so they can serve us better.

[2018 Note: I now call Digital Twins by a more generic name, Personal AIs (PAIs). See my Medium series on Personal AIs for more on this amazing, accelerating, and still-little-discussed aspect of our personal and social futures.]

Computational linguists know that if you capture what a person says for just two years, we are so repetitive about what we care about that a Twin could whisper into our ear the word that natural language processing algorithms predict we want when we are having a senior moment, and it will be right most of the time. That’s how repetitive we are, and how good natural language understanding will be by 2020. As I wrote in 2005, people who don’t run Twins will be much less productive, so Twins will be very popular, even though they’ll bring lots of new social problems in their first generation.
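A toy illustration of that word-prediction claim (not a description of any real Twin product): even a trivial bigram model trained on a person’s own writing will often guess the next word they reach for. The `train_bigrams`/`suggest` helpers and the tiny corpus below are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which word the writer most often uses next."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def suggest(following, prev_word):
    """Whisper the writer's own most frequent continuation, if we have one."""
    candidates = following.get(prev_word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

corpus = ("i think brain preservation matters . "
          "i think evidence matters . "
          "i think the future is worth planning for .")
model = train_bigrams(corpus)
print(suggest(model, "i"))      # 'think'
print(suggest(model, "think"))  # 'brain' (ties broken by first occurrence)
```

Real systems use far richer language models, but the principle, predicting you from your own record, is the same.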

Now here’s a kicker: These simulations likely won’t be turned off by our loved ones when we die. It will cost little to keep them running all the time, watching what we’re doing and compiling at-first-primitive “suggestions” for us, and after we die our children, friends, and colleagues will occasionally interact conversationally with these semantic simulations, in appropriate contexts, to keep the best of our thoughts, experiences, and personalities accessible to them when desired. Of course, many folks will be creeped out by this idea, but others will find it a way to reduce the great pain of losing our most loved individuals to biological death.

Once folks realize that their Twins really are a kind of “digital immortalization” of parts of themselves, and once neuroscience has proven that we can read (“upload”) simple memories from preserved and scanned animal brains, at that point preserving one’s brain for later uploading into a Twin may seem an increasingly obvious and responsible choice for dying individuals, especially if the cost to do so is quite affordable. It will eventually be covered by health care in our wealthiest societies.

What’s more, recent advances in molecular scale MRI scanning strongly suggest that future scanning technologies should be able to nondestructively scan entire preserved brains, to upload their molecular states, memories, and higher functions. So if the first scan isn’t perfect, it can always be updated later, from the preserved physical brain.

We can see that teaching our children and ourselves to be digital natives and digital activists, to use the social web and the first affordable commercial lifelogs when they arrive, is one important way for us to build an ever more capable Twin for ourselves and for our loved ones (after we die), even as our biological self naturally slows down and simplifies (prunes away branches of knowledge and memories we once had ready access to) with advancing age.

To Understand Intelligence and Learning – Start with Single-Celled Organisms

Single-celled animal. Image: Anthony Horth

To understand how these subsystems interact in a living organism, let’s start in as simple a model organism as we can find, single-celled animals, organisms that don’t even have nervous systems as we know them. Wetware, Dennis Bray, 2009 is a great tour of these animals. Single-celled eukaryotes like Stentor, Paramecium, and Amoeba do complex information processing, and hold short-term memories in their chemical networks. In 2008, we learned that Amoeba remember and anticipate cold shocks, for example. These networks include the cell’s genome, epigenome, cellular proteins, cytoskeleton, receptors, and cell membrane. They are true computational networks, with both neural-network like and Boolean logic properties. Genes and proteins integrate signals from other genes and proteins, and selectively switch and transmit signals, just like neurons do. The genes in each cell, via RNA, determine which proteins are made, when and where. Most protein changes are part of the short term computation being done in a cell, but a special few will lead to lasting changes in the epigenome and the cytoskeleton and receptors in and on the surface of the cell. These long-term changes are the ones we care most about, as they store the cell’s unique memory and identity.
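A hedged sketch of how a chemical network can hold a short-term “memory” without neurons: a single state variable, standing in for the active fraction of some signaling protein, is pushed up by a stimulus and then decays, so the cell’s behavior for a while afterward depends on its recent history. The `chemical_memory` helper and its kinetics are invented for illustration, not a model of any particular organism.

```python
def chemical_memory(stimulus, rise=0.5, decay=0.9):
    """One-variable 'memory' trace: jumps toward 1 when stimulated, decays otherwise.
    Think of it as the fraction of a signaling protein held in an active state."""
    trace, history = 0.0, []
    for s in stimulus:
        if s:
            trace += rise * (1.0 - trace)   # stimulus drives the protein pool active
        trace *= decay                      # phosphatases etc. relax it back
        history.append(round(trace, 3))
    return history

# A stimulus (1) at steps 2 and 3; the cell "remembers" it for several steps after.
print(chemical_memory([0, 0, 1, 1, 0, 0, 0, 0, 0, 0]))
```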

Until computational neuroscience[5] can predictively model how the gene-protein networks in a Paramecium allow these animals to evaluate options, assign priorities, regulate their moment-by-moment computational attention, continually vary strategies for chasing prey and avoiding toxins, and chemically store their representations, habituations, and memories in an intracellular environment, all within a single cell that has no proper nervous system, the field will be missing its Rosetta Stone. Electrical waves exist in these single-celled animals, but with the exception of mitochondrial energy production, they are of the most primitive, diffusion-based kind. All the considerable intelligence in these animals is coursing, moment by moment, through their gene-protein networks.

BPF Advisor Randal Koene likes to use the phrase “Substrate-Independent Minds” to talk about uploading. One big step to realizing how achievable uploading will likely be involves understanding the patternism hypothesis, and recognizing some of the ways nature has already built substrate-variant “minds” in complex organisms that arrived before brains. We’ve just discussed surprisingly smart neuron-independent “minds” in some single-celled organisms. Many species of plant also have very complex “thinking” abilities, all without the use of neurons.

Take a look at the Plant Intelligence page on Wikipedia for more. To understand the way nature builds intelligence from molecular biology on up, in an evo-devo manner, we will need to learn how gene-protein networks, the inheritable features of the cell, and the stable physical and chemical laws of the environment interact to store adaptive intelligence, and allow it to undergo both evolutionary variation and developmental conservation and replication. All of this happened long before neurons.

In multicellular organisms with neurons, the cytoskeleton and receptors have specialized into the synaptome, the pre- and post-synaptic molecular modification of our synapses, including phosphorylation of switching proteins like calmodulin kinase II. While there are over 50 known neuromodulators and 14 neurotransmitters in our brain, only six neurotransmitters have been implicated so far in long term learning and memory in our synaptome. It is these and their partner molecules in the synapse and nucleus that are probably most important to understand and model to crack the long-term memory code. Just as biochemistry is a small subset of all chemistry, learning and memory biochemistry is a small subset of cellular chemistry. Finding that subset, and how it reliably works in the wet, chaotic, messy conditions of the brain, is the greatest goal of modern neuroscience. Those algorithms will increasingly be imported into machines as well.

C. elegans connectome. Image: OpenWorm.org

Fortunately, even with our very partial molecular and functional maps today we have still managed to work out some basics of neural network interaction in very small neural ensembles, like the stomatogastric nervous system (~30 neurons) in lobsters. We’ve even created early maps of very small whole-animal neural systems, like the nematode worm C. elegans, with its 302 neurons and ~6,000 synapses. We mapped the C. elegans connectome in 1986, but we still know just pieces of its synaptome and transcriptome, and even less about its epigenome. Fabio Piano et al. give us an overview of the state of C. elegans gene-protein network knowledge in 2006. Note their subtitle is “A Beginning.” Jeff Kaufman has recently summarized the very early status today of whole brain emulation in nematodes. David Dalrymple in Ed Boyden’s lab at MIT is working on C. elegans simulation, and he is optimistic about new tools in neural state recording, optogenetics, and viral tagging for characterizing each neuron’s function. As Derya Unmatz reports in a blog post that sounds like science fiction, Sharad Ramanathan et al. at Harvard can now take control of C. elegans locomotion by firing precisely targeted lasers at individual neurons in an optogenetically modified worm’s brain, controlling its chemotactic behavior and convincing it that food is nearby.

A small international collaboration exists to emulate the C. elegans nervous system, called OpenWorm. There’s even a Whole (Human) Brain Emulation Roadmap, started in 2007 by Anders Sandberg and Nick Bostrom at Oxford, and a few other visionary folks in biology, computer science, and philosophy. These important projects are quite early and extremely underfunded at present. The biggest problem today is getting more funded people working on them.

To emulate how C. elegans, Drosophila, Aplysia, Danio, Mus, and other neural networks actually work, and to begin to extract even crude and partial memories from the scanned brains of any of these and other model organisms, we’ll need a better understanding of behavioral plasticity, and the way the synapse, the nucleus, and neuromodulators bias the pattern generators in neural circuits into a particular set of behavioral patterns. This may require not only better neural circuit maps, but better maps of several still partly-hidden intracellular systems involved in long-term memory formation: gene regulatory networks, the transcriptome, and the epigenome[6]. There are gene-protein networks controlling human neural development, neural evolution, and our long-term learning and memory. A special few of these regulatory networks, their proteins, and the epigenomic changes these networks store during a lifetime of human learning may be as important as the synapse, if not more, in determining how our brain encodes and stores useful information about the world.

A great textbook on gene regulatory networks is The Regulatory Genome: Gene Regulatory Networks in Development and Evolution, Eric Davidson, 2006. It will amaze you how much Davidson’s group has learned about these networks, primarily by studying the evolutionary development of one simple organism, the sea urchin, over several decades. Last month, Isabelle Peter and others in Davidson’s group at Caltech published the first highly predictive model of how these networks control all the steps in sea urchin embryo development over the first 30 hours of its life. 50 genes are involved, and their regulatory interactions can be fully described in Boolean logic. Now they want to model all of development, and some of the networks controlling its variational processes. Consider the magnitude of their achievement: Davidson et al. have reduced an incredibly complex biochemical process down to a far simpler algorithm. This is what must happen in long-term memory, if we are to use scanned brains to abstract the key subsets of molecular structures that reliably encode it in our neurons.
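To show what “regulatory interactions fully described in Boolean logic” looks like in practice, here is a minimal, made-up three-gene Boolean network (not Davidson’s actual sea urchin model): each gene’s next state is a logical function of the current states, and the network is stepped forward in discrete time.

```python
def step(state):
    """One synchronous update of a toy 3-gene Boolean regulatory network.
    These particular rules are invented for illustration only."""
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": a,                    # A is a maternal input: stays on once on
        "B": a and not c,          # A activates B, C represses it
        "C": b,                    # B activates C, closing a delayed negative feedback loop
    }

state = {"A": True, "B": False, "C": False}
for t in range(6):
    print(t, state)
    state = step(state)
```

Even this toy network produces a nontrivial temporal pattern (B and C switch on in sequence, then oscillate), which is the flavor of behavior the real, 50-gene sea urchin model captures quantitatively.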

Protein Microarrays – An Exciting New Tool. Image:  Eye-Research.org

Neural proteomics and the transcriptome are entering an exciting new phase as we use DNA and RNA microarrays, and now protein microarrays to catalog neural transcriptomes and compare them to other types of human cells, and to other primate and mammal neurons. In August, Genevieve Konopka and colleagues published an exciting paper comparing human, chimpanzee, and rhesus monkey neural transcriptomes. We’re finding genes and proteins unique to particular areas in human brains, especially our frontal lobes. We’re building our first maps of the critical differences in the gene and protein regulatory networks that allowed us to wake up, make tools, and walk out of Africa less than two million years ago.

Epigenome (methylated DNA and modified histones). Image: RoadmapEpigenomics.org

We recently learned that what was long called “Junk” DNA, the 98% of each cell’s non-exonic DNA (DNA that doesn’t code directly for proteins), participates at various levels in gene regulatory networks, and through epigenomics these networks can change to some degree over the life of the cell. We’re learning now to map gene-protein interactions in these networks, including epigenomic changes, using tools like Chromatin ImmunoPrecipitation and sequencing (ChIP-seq). Unfortunately, this work is also seriously underfunded. We’ve known about the importance of the epigenome for over a decade. Epigenomic changes can be inherited (watch what you do with your body, as your kids will inherit a record of some of your bad or good life habits in their epigenome), and thus record unique learning in each cell over its lifetime, in ways we are still uncovering.

The NIH started a Roadmap Epigenomics Project for mapping the human epigenome in 2008, but the funding is a pittance, roughly $40 million a year. There is also a global collaborative research database, ENCODE, for sharing what is presently known about all the functional elements in the human genome. We give it roughly $20M/year, barely life support. There are also various Human Proteome Projects under way, but no one seems to be funding any of these seriously, either. None of the politicians or key philanthropists who could make the Human Proteome and Epigenome into national research priorities have proposed any big initiatives, as far as I know. Even our science documentaries don’t adequately convey the promise of these fields.

Biologists are tooling along as best they can while policymakers, media, and the public still have no idea how much better medicine would truly be in ten years if we were spending a lot more money on these Big Life Sciences projects right now.

Recall by contrast the Human Genome Project, which began with fanfare in 1990 and produced a rough draft by 2000, for $3 billion, a price gladly paid by the U.S. and four other motivated nations. The Human Genome Project was, to put it in proper perspective, our planet’s Moon Shot in the 1990’s, our species’ latest great leap into “inner space.” As those who’ve read my Race to Inner Space post know, I think understanding the machinery of life and intelligence, and nanotechnology in general, is a destination far, far more valuable to us than outer and human scale (as opposed to cell and molecule-scale) space. We need an international Human Proteome and Epigenome Project race.

With good funding and leadership, we might nail our first good maps of the neural gene-protein interaction layer in a decade. With business-as-usual, it will likely take our species much longer to understand this critical aspect of ourselves.

As we learn the languages of gene regulatory networks, the transcriptome, and the epigenome in coming years, we should learn how to influence these networks in many powerful ways. Do you think the trillion dollar global pharmaceutical industry is big now? Wait for the therapeutics that may start to arrive in the late 2020s, as we begin to learn how to intervene in these networks. I think it is only when we have good maps of these gene-protein networks that we can finally expect medical advances like better learning and memory formation, elimination of a vast range of diseases including cancer and Alzheimer’s, immune system boosting, aging reduction (epigenomics repair), and perhaps even the uncovering of genetically latent skills like tissue regeneration and hibernation. We are not talking about gene modification (inserting new genes in the germline, or in an adult), but rather about improving dysfunctional gene network regulation, and learning how to assay and minimize important parts of the network dysregulation that goes wrong in each of us as we get older and get various diseases.

Ken Hayworth

There’s a nice analogy here, pointed out by my Brain Preservation Foundation co-founder, Ken Hayworth. The Human Genome Project gave the world affordable gene sequencing in the mid-2000’s, and ten years later, we are beginning to see the major fruits: the uncovering of the previously hidden worlds of gene regulation networks, the transcriptome, and the epigenome. Likewise, a much better funded Human Connectome Project and the still-unfunded Human Proteome and Epigenome Projects could get us affordable neural circuit tracing and functional gene regulatory network modeling in the late 2010s. Just as the Human Genome Project showed us we had a lot fewer genes than we thought (~21,000 rather than 100,000), the Human Epigenome Project may tell us that our gene regulatory networks are functionally simpler than we currently think, and that of the ~5,000 proteins in a typical cell, there are just a handful that matter to our long-term self. With luck, the remaining hidden layers of the neural transcriptome and epigenome will be functionally understood in the late 2020s. In that exciting time, our ability to understand memory and learning, to read memories from the scanned brains of model organisms, and to build biologically-inspired computer models, will all be greatly enhanced.

So to answer our original question, we need to find out if both chemical preservation and cryopreservation will preserve the connectome, the synaptome, and any long-term memory-related changes in the epigenome in a living brain.

Our Brain Preservation Technology Prize, which focuses on the connectome and many but not all features of the synaptome, is an important start down this road. As we understand better what molecular features in the synaptome and epigenome need to be preserved to capture and later retrieve memories, we’ll also need to find out if either chemical or cryopreservation, or ideally both, will reliably preserve those structures at the end of our biological lives, and whether it will be possible for future scanning algorithms to repair any damage done by the preservation process. It is too early to answer such questions today, but it is encouraging to remember that long-term memory is a very redundant, resilient, and distributed system. Extensive neural destruction can occur in brains via Alzheimer’s, stroke, and other diseases before our memories are substantially erased and cognitive reserve is no longer available.

Sixty years of histology practice tells us that good perfusion of special chemical fixatives such as formaldehyde and glutaraldehyde at death will immediately preserve everything we can see by electron microscopy in neurons. A great book on how this works is John Kiernan’s Histological and Histochemical Methods: Theory and Practice, 4th Ed., 2008. Kiernan has been publishing since 1964, and is a leader in the theory and practice of chemical fixation. There are even a few published fixation methods for whole mice brains. Here’s a 2005 paper by Kenneth Eichenbaum et al. demonstrating a whole brain fixation technique that claims “complete preservation of cellular ultrastructure”, “artifact-free brain fixation” and “no signs of cellular necrosis” in an entire mouse brain. Presumably these methods also protect DNA methylation and histone modification in the epigenome, the phosphorylation of dendritic proteins like CamKII, the anchoring of AMPA receptors in the synapse, and other processes of both intermediate-term and long-term memory formation. Presumably these molecules are protected today for years just by aldehyde fixation, if kept at low temperature (4 degrees Celsius). Companies like Biomatrica have even developed ways to store human and bacterial DNA and RNA at room temperature for years. Long term storage of whole brain connectomes, synaptomes and epigenomes at room temperature, an ideal outcome for simplicity and affordability, may work today via additional chemical fixation steps like osmium tetroxide, a process that crosslinks fats and cell membranes, and plastination, a process that draws all the water out of a preserved brain and replaces it with resin.

But all this remains to be proven. If you know of experts who have done work in this area who would be willing to help BPF write position papers on these topics, and who can envision research projects that will answer these questions more definitively, please let me know, in the comments or by email at johnsmart{at}gmail{dot}com. Thanks.


Footnotes:

1. There is a much older layer of unique learning in each of us that is also important, the intelligent behaviors that gene networks have recorded in each of us over evolutionary time, as instinctual programs, and the unique assortment and variants of genes we each received at birth. Such networks determine our inherited neural programs, instincts, and behaviors that are executed mostly unthinkingly and robustly, and during which other forms of learning, like short-term learning, often do not even occur. To preserve this layer we just need a DNA sample of the preserved person, and that particular uniqueness can be incorporated in any future emulation, assuming future computers are up to the task.

2. Some scientists working on brain emulation, like BPF Advisor Randal Koene, suspect that measuring and modeling the brain’s electrical processes, a topic called Computational Neurophysiology, will give us powerful new insights into artificial intelligence. There are new tools emerging for in situ functional recording of electrical features of the neuron. These may be critical to establish the “reference class” of normal electrical responses, for each type of neuron and neural architecture, the class of electrical representations of information. But if the model I’ve presented here is correct, we won’t need to record any electrical features of individual brains in order to successfully reanimate them later. We’ll see.

3. In Aplysia (sea slug), the sensory neuron neurotransmitter serotonin (5-HT) binds to postsynaptic receptors, activates adenylyl cyclase (AC) in the cell to make the second messenger cAMP, causing a short-term facilitation (STF) in strength of the sensory to motor neuron connection. More of the excitatory neurotransmitter glutamate is released by the neuron to its follower motor cells, and Aplysia pulls away harder from its shock. The neuron is also sensitized: K+ channels are depressed, more Ca++ enters the presynaptic terminal, and the action potential spike broadens. Kinases and phosphatases (phosphate adding and removing enzymes) including cAMP-dependent PK, PKA, PKC, and CamKII control duration and strength of these changes. In facilitation, the spike broadens temporarily, as both pre- and post-synaptic Ca++ and CamKII make molecular changes that temporarily strengthen the electrical signal across the synapse. In short-term depression (STD), the same mechanism temporarily weakens the signal. If water is gently shot at Aplysia’s gills ten times in a row, it temporarily learns not to withdraw them, via synaptic depression of motor circuits. This short-term memory lasts for ten minutes, and involves a short-term reduction in the number of glutamate vesicles that are docked at presynaptic release sites in sensory neurons (undocked vesicles can’t be immediately used). Repeat this training four times and the slug will turn this into an intermediate-term memory, making chemical and electrical changes in the synapse that now last for three weeks. Again, all this involves changes only to preexisting proteins and synaptic connections in neurons.

4. In rat and human hippocampus, the primary excitatory neurotransmitter is glutamate. It causes Ca++ influx through NMDA receptors at postsynaptic membranes, and activation of CamKII, PKC, and MAPK. Early-phase synaptic changes (early LTP) include increased insertion of AMPA receptors into the membrane, and phosphorylation of receptor proteins to change the properties of the channel. These receptors are anchored to the neural cytoskeleton, so they have reliable, lasting effects. Late LTP involves recruitment of pre- and postsynaptic molecules to create new synaptic sites. A few key gene-regulatory networks are involved, with transcriptional and translational control at both the nucleus and the synapse, and control molecules including BDNF, mTOR, CREB, and CPEB. We’ve recently found a memory-encoding master control gene, Npas4, which encodes a transcription factor (a protein that regulates the copying of other genes into messenger RNA) that acts in hippocampal neurons to encode episodic memory. When Npas4 is knocked out of mice, they can’t form these memories. We’ve also found RNA-binding proteins, like Orb2, that bind to transcripts involved in long-term memory. A great and reasonably current text on the molecular basis of memory and learning is Mechanisms of Memory, David Sweatt, 2009. We’re still figuring out the epigenomic regulation that occurs in long-term learning and memory, so you’ll need to go to the journals for most of that story, like this 2011 PLoS Biology paper on epigenetic regulation of learning and memory in Drosophila. The full size of the memory puzzle is becoming clearer every day. Now we just need to fund the work to complete it. We could certainly use this knowledge in all kinds of good ways today, if we had it. A helpful cartoon of long-term memory formation in both Aplysia and rat hippocampus appears in Learning and Memory, John Byrne (Ed.), 2008 (Vol. 4, David Sweatt, p. 14).
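In the same illustrative spirit, here is a toy sketch of the early vs. late LTP distinction. The threshold of four strong stimulations and the weight increments are assumptions I have chosen for clarity, not measured quantities.

```python
# A caricature of early vs. late LTP at a single hippocampal synapse.
# Thresholds and increments are illustrative assumptions, not measured values.

class Synapse:
    def __init__(self):
        self.weight = 1.0          # baseline strength from existing AMPA receptors
        self.strong_events = 0     # stand-in for accumulated CamKII/CREB/BDNF signaling
        self.consolidated = False  # late LTP: new synaptic sites built via gene expression

    def strong_stimulation(self):
        # Early LTP: phosphorylate and insert AMPA receptors (preexisting proteins only).
        self.weight += 0.2
        self.strong_events += 1
        # Late LTP: repeated strong stimulation recruits transcription and translation.
        if self.strong_events >= 4 and not self.consolidated:
            self.consolidated = True
            self.weight += 1.0     # new pre- and postsynaptic sites

syn = Synapse()
for i in range(5):
    syn.strong_stimulation()
    print(f"stimulation {i+1}: weight = {syn.weight:.1f}, consolidated = {syn.consolidated}")
```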

5. Computational Neuroscience seeks to model brain function at multiple spatiotemporal scales. The brain uses a vast range of different schemes for representing and manipulating information, and it passes some of this information from one system to another all the time. Consider the way neurons integrate signals from the receptors at their dendrites, the timing and shape of their action potentials, the way synapses interact with the postsynaptic dendrites of other neurons, how neurons encode and store associative memory, specialize for perceiving and storing certain types of information (edge detection, grandmother cells), perform inference and other calculations, work in functional subunits like cortical columns, and organize receptive fields. It all seems formidably complex, but useful simplifications exist, as we’ve described above.
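As one concrete example of such a simplification, here is a minimal leaky integrate-and-fire neuron, a standard abstraction in the field. The parameter values below are generic, illustrative choices, not fitted to any particular cell type.

```python
# A minimal leaky integrate-and-fire neuron, one of the standard "useful
# simplifications" of computational neuroscience.

dt, T = 0.1, 100.0              # time step and total simulation time (ms)
tau, v_rest = 10.0, -65.0       # membrane time constant (ms), resting potential (mV)
v_thresh, v_reset = -50.0, -65.0
R, I = 10.0, 2.0                # membrane resistance (MOhm) and input current (nA)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # The membrane potential leaks toward rest while the input current charges it.
    v += dt * (-(v - v_rest) + R * I) / tau
    if v >= v_thresh:           # threshold crossing: record a spike and reset
        spike_times.append(step * dt)
        v = v_reset

rate = 1000.0 * len(spike_times) / T
print(f"{len(spike_times)} spikes in {T:.0f} ms (about {rate:.0f} Hz)")
```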

6. Most folks in the neural emulation community don’t talk much about modeling gene regulatory networks or the epigenome and its interaction with the synaptome, and I think that’s their loss. Some focus only on the features that are easier to see, like electrical activity, and assume that might be enough to get a predictive model. But I think that’s like looking for your keys under the streetlight when they are in the shadows. If spikes, loops, and synchrony are a network layer that has grown on top of cell morphology and gene-protein networks, the way animals eventually grew neurons on top of single-celled chemical signaling, we may learn surprisingly little by measuring and modeling electrical features alone. Attempting to do so may be like trying to infer the structure of the hidden layers of a very large neural network [genome, epigenome, connectome, synaptome, and electrical features] by analyzing only its input/output layer, the electrical features. We need all the hidden layers if we expect to have enough computational complexity to predictively characterize learning, memory, and behavior.
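A tiny toy example makes the point. In the sketch below, which is my own illustration rather than anyone’s brain model, two networks with different hidden weights are indistinguishable from their input/output behavior alone.

```python
import numpy as np

# A toy version of the "hidden layers" analogy: two networks with different
# internal weights can produce identical input/output behavior, so recording
# only the I/O layer cannot distinguish them. Purely illustrative.

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))          # a handful of sample inputs

W1 = rng.normal(size=(3, 4))         # input -> hidden weights (the part we can't see)
W2 = rng.normal(size=(4, 2))         # hidden -> output weights

def forward(x, W1, W2):
    h = np.tanh(x @ W1)              # hidden layer activity
    return h @ W2                    # output layer (the part we record)

# Shuffle the hidden units: internally different, externally identical.
perm = [2, 0, 3, 1]
W1_alt, W2_alt = W1[:, perm], W2[perm, :]

print(np.allclose(forward(x, W1, W2), forward(x, W1_alt, W2_alt)))   # -> True
```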

Chemical Brain Preservation: How to Live “Forever” – A Personal View

Here’s my 45-minute talk on Chemical Brain Preservation at World Future Society 2012. Given the progress we’ve seen in the relevant science and technologies, it’s a topic I’m presently very optimistic about. I had a great audience with lots of questions at the end, but in the interest of brevity I’m just uploading the talk. Let me know your thoughts in the comments, thanks!



A number of neuroscientists, working today with simple model organisms, are investigating the hypothesis that chemical brain preservation may inexpensively preserve the organism’s memories and mental states after death. Chemically preserved brains can be stored at room temperature in cemeteries, contract storage facilities, or even private homes. Our 501(c)(3) nonprofit organization, the Brain Preservation Foundation, is offering a $100,000 prize to the first scientific team to demonstrate that the entire synaptic connectivity (“connectome”) of a mammalian brain can be perfectly preserved using either chemical preservation or more expensive cryopreservation techniques.

Such preserved brains may be “read” in the future, analogous to the way a computer hard drive is read today, so that either the memories or the complete identities of the preserved individuals can be restored or “uploaded” in computer form. Chemical preservation techniques are already being used to scan and upload the connectomes of very small animal brains (C. elegans, via the OpenWorm project; zebrafish; soon flies). These scans are not yet sufficiently detailed to extract memories from the uploaded organisms, but give them a little more time; we’re very close now to cracking long-term memory. We just need to know a bit more about this process at the protein/receptor/gene level: http://en.wikipedia.org/wiki/Long-term_potentiation

Amazingly, if information technologies continue to improve at historical rates, a person whose brain is chemically preserved in 2020 might have their memories read, or even fully return to the world in computer form, not centuries but just a few decades from now, while their children and loved ones are still alive. Given progress in electron microscopy and connectomics research to date, we can even foresee how this might be done as a fully automated and inexpensive process.

Today, only 1% of people in developed societies are interested in living beyond their biological death (see When I’m 164, David Ewing Duncan, 2012). With chemical brain preservation, this 1% may soon have a validated, low-cost method that allows them to do just that. Once it becomes a real option, and recovery of simple memories has been demonstrated in model organisms, that group may well grow larger.

I am particularly excited by chemical brain preservation’s potential to improve the social contract: the benefits we may reasonably expect from the universe and society when we choose to live a good and moral life. I believe that having the option of chemical brain preservation at death, if the science is validated, may help our societies become significantly more science-, future-, progress-, preservation-, sustainability-, truth- and justice-, and community-oriented in coming years.

Would you choose chemical brain preservation at death if it were widely available, validated, and inexpensive? If not, why not? Would you do it to donate your brain to science? To leave your memories to your children or others who might want them? Would you be willing to come back in person, if that turns out to be possible? If it is sufficiently inexpensive, would it be best to preserve your brain at death and let future society decide whether your memories or your identity are “worth” reanimating? Please let me know what you think in the comments, thank you.