Foresight: Your Hidden Superpower! (Interview Outline)

I have a new interview, Foresight: Your Hidden Superpower (YouTube, Spotify, Apple), with Nikola Danaylov of the Singularity Weblog. Nikola has done over 290 great interviews. They are a rich trove of future thinking and wisdom, with acceleration-aware folks like Ada Palmer, Melanie Mitchell, Cory Doctorow, Sir Martin Rees, Stuart Russell, Noam Chomsky, Marvin Minsky, Tim O’Reilly, and other luminaries. As with my first interview with him ten years ago, he asks great questions and shares many insights.

We cover a lot in 2 hrs and 20 mins. Below is an outline of a dozen key topics, for those who prefer to skim, or don’t have time to watch or listen:

  • We discuss humanity as being best defined by three very special things: Head (foresight), Hand (tool use), and Heart (prosociality). We talk about how these three things were critical to starting the human acceleration, with our first tool use (in cooperative groups), and why foresight, of all of these, is our greatest superpower. An author who really gets this view is the social activist David Goodhart. I recommend his book, below.
Goodhart, 2021
  • We discuss human society as an awesomely inventive and coopetitive network. We are selected by nature to cooperate first, and compete second, within an agreed-upon and always improving set of network rules, norms, and ethics. What’s more, open, bottom-up, empowering, democratic networks, like the ones we are seeing right now in the West’s fight in Ukraine, increasingly beat closed, top-down, autocratic networks the more transparent the world gets. We talk about lessons from Russia’s invasion of Ukraine for the West, Russia, and China.
  • We discuss why a decade of deep learning applications in AI gives us new evidence for the Natural Intelligence hypothesis, the old idea (see Gregory Bateson) that deep neuromimicry and biomimicry (embodied, self-replicating AI communities, under selection) will be necessary to get us to General AI, and that this is likely to be the only easily discoverable path to that highly adaptive future, given the strict limits of human minds.
  • We talk about what today’s deep learners are presently missing, including compositional logic, emotions, self- and world-models, collective empathy, and ethics, and why the module-by-module emulation approach of DeepMind is a good way to keep building more useful, trustable AI. Mitchell Waldrop’s great article in PNAS, What are the limits of deep learning? (2019), says more, for those interested.
  • We discuss the Natural Security hypothesis, that we’ll get security and goal alignment with our AIs in the same way we gained it with our domesticated animals, and with ourselves (we have self-domesticated over millennia). We will select for trustable, loyal AIs, just as we selected for trustable, loyal people and animals. The future of AI security, in other words, is identical to the future of human security. We will need well-adapted networks to police the bad actors, both AI and human. Fortunately, network security grows with transparency, testing, proven past safe behavior, and perennial selection of safer, more aligned AIs. There is no engineering shortcut to natural security. I feel strongly that human beings are not smart enough to find one. For more on this, you may enjoy the work of the late, great biologist Rafe Sagarin, summarized for a Homeland Security audience in the slide below.
Natural Security. Learning How Nature Creates Security, in a Complex, Dangerous World
  • We talk about the philosophy of Evolutionary Development (Evo-devo, ED), which looks at all complex replicating systems as being driven by both evolutionary creativity and developmental constraint. We discuss why both processes appear to be baked into the physics and informatics of our universe itself. Quantum physics, for example, tells us that if we act to precisely determine the value of one of a pair of conjugate variables at the quantum scale, the other becomes statistically uncertain (see the uncertainty relation sketched just after this list). Both predictability and unpredictability are fundamental to our universe, and they worked together, somehow, to create life, with all its goals and aspirations. How awesome is that?
  • We describe how this evo-devo model of complex systems tells us that the three most fundamental types of foresight we can engage in are the “Three Ps”: thinking and feeling about our Probable, Possible, and Preferable futures. We explore why it is often best to make these three future assessments in this order, at first, in order to get to our most adaptive goals, strategy, and plans.
The Three Actors, Functions, and Goals of Evo-Devo Systems
  • We talk about how, unlike what many rationalists think, our universe is only partly logical, partly deterministic, and partly mathematical. Turing-complete processes like deduction and rationality can only take us so far. We actually depend most on their opposite, induction, to continually make guesses about the new rules, correlations, and order that are constantly emerging as complexity grows. What’s more, we use deduction and induction together to do abduction: to create probabilistic models and to analogize (see the Bayesian sketch just after this list). Abduction is actually the most useful, high-value form of human thinking. Deduction and rationality are in perennial competition with induction and gut instinct, and the latter is usually more important. Both are critically necessary for better model making and visioning. If we live in an evo-devo universe, this will be true for our future AIs as well. It is always our vision, of both the preferred and the preventable future (protopias and dystopias), that helps or hurts us most of all.
  • We describe intelligence as being inherent in autopoiesis (self-maintenance, self-creation, and self-replication). Any autopoietic system is going to have, by definition, both evolutionary and developmental dynamics. The system’s evolutionary (creative, unpredictable) mechanisms will guide its exploration, creativity, diversity, and experimentation. The developmental (conservative, predictable) mechanisms will guide its constraint, convergence, conservation, and replication, on a life cycle. The interaction of both dynamics, under selection, creates adaptive (evo-devo) intelligence. Intelligence, and consciousness, work to “knit together” these two opposing dynamics in an adaptive network. In my view, machines will need to become autopoietic themselves if they are to reach any generality of intelligence. A cognitive neuroscientist who largely shares this view is Danko Nikolic. His concept of practopoiesis (though it does not yet include an evo-devo life cycle) is quite similar to my views on autopoiesis. I recommend his 2017 paper on the design limits of current AI, in terms of levels of learning networks. I’ll explore his work in my next book, Big Picture Foresight.
  • We talk about today’s foresight, and how natural selection has wired us to continually predict, imagine, and prefer, milliseconds to minutes ahead. The better we get at today’s foresight, the better we get at short-term, mid-term, and long-term foresight. Today’s foresight, the realm of our present action, is both the easiest to improve and the most important to practice. I go into the psychology and practice of foresight in my new book, Introduction to Foresight, which we discuss in this interview. If you get a chance to look it over, please tell me what you think, and how I can improve it. I greatly appreciate your feedback and reviews.
  • We also talk about a number of other future-important topics, including Predictive and Sentiment Contrasting, the Four Ps (our Modern Foresight Pyramid), Antiprediction Bias, Negativity Bias, why and how accelerating change occurs (Densification and Dematerialization), the Transcension Hypothesis, Existential Threats, why it makes sense to Delay Nuclear Power, to prevent a weapons proliferation dystopia (see my new Medium article on this topic), our potentially Childproof Universe, and the Timeline to the Singularity/GAI (2080, in my current guess).
  • My concluding message is that regardless of what you hear in the media (due to both negativity and antiprediction bias), our networked evo-devo future looks like it is going to be a lot more amazing and resilient than we expect, that in life’s history so far, well-built networks always win (and are immortal, unlike individuals), and that foresight is our greatest superpower. The more we practice it, the better our own lives and the world get. Don’t believe me? Are you worried about tough, long-term global problems like climate change? Watch evidence-based, helpful, and aspirational videos like the one below, from Kurzgesagt. Positive changes and great solutions are continually emerging in our global network. We all just need to better see those changes and solutions, so we can thrive. Never give up on evidence-seeking, hope, and vision!
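A quick aside on the quantum claim in the evo-devo bullet above: the standard formalization is Heisenberg’s uncertainty relation, a textbook result I offer here as my own illustration of the point, not a formalism from the interview. For a conjugate pair such as position and momentum:

```latex
% Heisenberg uncertainty relation for the conjugate pair (x, p):
% the product of the standard deviations of position and momentum
% is bounded below by hbar/2, so the more sharply we determine one
% variable, the more statistically uncertain the other must become.
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

This is the precise sense in which acting to determine one variable at the quantum scale makes its partner statistically uncertain.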
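Similarly, for the abduction bullet above, here is a minimal sketch of how abduction (inference to the best explanation) is often formalized with Bayes’ rule. This is a common idealization from probability theory, again my illustration rather than anything derived in the interview:

```latex
% Bayes' rule as a model of abduction: among candidate hypotheses
% H_i, prefer the one with the highest posterior probability given
% the evidence E. Induction supplies the priors and likelihoods;
% deduction checks what each hypothesis entails.
P(H_i \mid E) \;=\; \frac{P(E \mid H_i)\, P(H_i)}{\sum_j P(E \mid H_j)\, P(H_j)}
```

In this reading, abduction ranks candidate explanations by their posteriors, which is why it depends on both induction and deduction, while going beyond either alone.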

To say this all more simply: #ForesightMatters!

NOTE: This article can also be found on my Medium page, the best place to leave comments and continue the discussion. This site has become a legacy site, because WordPress still doesn’t pay its authors, and it still has very primitive software. For example, all the formatting errors on this post do not show up in the edit window of their new Gutenberg editing software, after pasting in good code from Medium, and I have no idea how to fix them. Sorry!


John Smart is a global futurist and scholar of foresight process, science and technology, life sciences, and complex systems. CEO of Foresight University, he teaches and consults with industry, government, academic, and nonprofit clients. His new book, Introduction to Foresight, 2022, is available on Amazon.

Why We Must Delay Nuclear Power

Most of us still ignore its disturbing weapons proliferation potential.

Everyone who thinks nuclear fission is sufficiently safe to use at increased scale today needs to upgrade their simulations. Unlike fusion power, it isn’t safe enough. Many ecomodernists who see next-gen reactors as a key part of the solution to climate change may not be convinced by my article, but most environmentalists, and many in the general public, might be. Fortunately, we all vote.

Nuclear power is too slow and expensive to move the needle on climate, versus wind and solar. But its biggest problem is rarely discussed, hence my article. Scaling nuclear power (including thorium and various other next-gen designs) will only exacerbate the global risk of nuclear weapons proliferation, and the nightmare scenario of rogue nuclear weapons in use by well-funded small groups, later this century. We have a long road ahead to increase equity and reduce fanaticism. Meanwhile, technological advances make it increasingly easy to produce clandestine nuclear weapons. We ignore that future at our own peril. History argues that a future with rogue nuclear weapons afflicting the world would greatly restrict our civil rights, and greatly reduce democratic representation and transnational cooperation. That’s a future we can work to avoid.

Growing nuclear power should be a nonstarter for our sustainability agenda. We need to delay, decommission, and increase oversight of the entire uranium mining, enrichment, and nuclear weapons and nuclear energy production chain, while we wait for the world’s political and security cooperation, capabilities, and networks to catch up with ongoing technical advances. My Medium article explains why.

Callaway Nuclear Reactor (Fulton, MO)

The Goodness of the Universe

In 2010, physicists Martin Dominik and John Zarnecki ran a Royal Society conference, Towards a Scientific and Societal Agenda on Extra-Terrestrial Life, addressing scientific, legal, ethical, and political issues around the search for extraterrestrial intelligence (SETI). Philosopher Clement Vidal and I both spoke at that conference. It was the first academic venue where I presented my Transcension Hypothesis, the idea that the more complex advanced intelligence becomes, anywhere it arises, the more it may be developmentally fated to venture into inner space, into increasingly local and miniaturized domains, with ever-greater density and interiority (simulation capacity, feelings, consciousness), rather than to expand into “outer space”. When this process is taken to its physical limit, we get black-hole-like domains, which a few astrophysicists have speculated may allow us to “instantly” connect with all the other advanced civilizations that have entered a similar domain. Presumably each of these intelligent civilizations will then compare and contrast its locally unique, finite, and incomplete science, experiences, and wisdom, and, if we are lucky, go on to make something even more complex and adaptive (a new network? a universe?) in the next cycle.

Clement and I co-founded our Evo-Devo Universe complexity research and discussion community in 2008 to explore the nature of our universe and its subsystems. Just as there are both evolutionary and developmental processes operating in living systems, with evolutionary processes being experimental, divergent, and unpredictable, and developmental processes being conservative, convergent, and predictable, we think that both evo and devo processes operate in our universe as well. If our universe is a replicating system, as several cosmologists believe, and if it exists in some larger environment, aka the multiverse, it is plausible that both evolutionary and developmental processes would self-organize, under selection, to be of use to the universe as a complex system. With respect to universal intelligence, it seems reasonable that both evolutionary diversity, with many unique local intelligences, and developmental convergence, with all such intelligences going through predictable hierarchical emergences and a life cycle, would emerge, just as both evolutionary and developmental processes regulate all living intelligences.

Once we grant that developmental processes exist, we can ask what kinds of convergences we might predict for all advanced civilizations. One of those processes, accelerating change, seems particularly obvious, even though we still don’t have a science of that acceleration. (In 2003 I started a small nonprofit, ASF, to make that case.) But what else might we expect? Does surviving universal intelligence become increasingly good, on average? Is there an “arc of progress” for the universe itself?

Developmental processes become increasingly regulated, predictable, and stable as a function of their complexity and developmental history. Think of how much more predictable an adult organism is than a youth (try to predict your young kids’ thinking or behavior!), or how many fewer developmental failures occur in an adult versus a newly fertilized embryo. Development uses local chaos and contingency to converge predictably on a large set of far-future forms and functions, including youth, maturity, replication, senescence, and death, so the next generation may best continue the journey. At its core, life has never been about either individual or group success. Instead, life’s processes have self-organized, under selection, to advance network success. Well-built networks, not individuals or even groups, always progress. As a network, life is immortal, increasingly diverse and complex, and always improving its stability, resiliency, and intelligence.

But does universal intelligence also become increasingly good, on average, at the leading edge of network complexity? We humans are increasingly able to use our accelerating S&T to create evil, with both increasing scale and intensity. But are we increasingly free to do so, or are we growing ever more self-regulated and societally constrained? Steven Pinker, Rutger Bregman, and many others argue we have become increasingly self- and socially constrained toward the good, for yet-unclear reasons, over our history. Read The Better Angels of Our Nature, 2012, and Humankind, 2021, for two influential books on that thesis. My own view is that we are increasingly constrained to be good because there is a largely hidden but ever-growing network of ethics and empathy holding human civilizations together. The subtlety, power, and value of our ethics and empathy grow incessantly in leading networks, apparently as a direct function of their complexity.

As a species, we are often unforesighted, coercive, and destructive. Individually, far too many of us are power-, possession- or wealth-oriented, zero-sum, cruel, selfish, and wasteful. Not seeing and valuing the big picture, we have created many new problems of progress, like climate change and environmental destruction, that we shamefully neglect. Yet we are also constantly progressing, always striving for positive visions of human empowerment, while imagining dystopias that we must prevent.

Ada Palmer’s science fiction debut, Too Like the Lightning, 2017, depicts a future world of both technological abundance and dehumanizing, centrally planned control over what individuals can say, do, or believe. I don’t think Palmer has written a probable future. But this combination of future abundance and overcontrol does seem plausible, under the wrong series of unfortunate and unforesighted future events, decisions, and actions. Imagining such dystopias, and asking ourselves how to prevent them, is surely as important to improving our adaptiveness as positive visions are. I am also convinced we are rapidly, and mostly unconsciously, creating a civilization that will be ever more organized around our increasingly life-like machines. We can already see that these machines will be far smarter, faster, more capable, more miniaturized, more resource-independent, and more sustainable than our biology. That fast-approaching future will be different from anything Earth’s amazing, nurturing environment has developed to date, and it is not yet well represented in science fiction, in my view.

On average, then, I strongly believe our human and technological networks grow increasingly good, the longer we survive, as some real function of their complexity. I also believe that postbiological life is an inevitable development, on all the presumably ubiquitous Earthlike planets in our universe. Not only will many of us merge with such life, it will be far smarter, stabler, more ethical, empathic, and self-constrained than biological life could ever be, as an adaptive network. There is little science today to prove or disprove such beliefs. But they are worth stating and arguing.

Arguing the goodness of advanced intelligence was the subtext of the main debate at the SETI conference mentioned above. The highlight of this event was a panel debate on whether it is a good idea not only to listen for signs of extraterrestrial intelligence (SETI), but also to send messages (METI), broadcasting our existence and, hopefully, increasing the chance that other advanced intelligences communicate with us earlier, rather than later.

One of the most forceful proponents of METI, Alexander Zaitsev, was at this conference. Clement and I had some good chats with him (see picture below). Since 1999, Zaitsev had been using a radio telescope in Ukraine, RT-70, to broadcast “Hello” messages to nearby stars of interest. He did not ask permission, or consult with others, before sending these messages. He simply acted on his belief that doing so would be a good thing, and that those able to receive them would not only be more advanced, but would be inherently more good (ethical, empathic) than us.

Alexander Zaitsev and John Smart, Royal Society SETI Conference, Chicheley Hall, UK, 2010

Sadly, Zaitsev has now passed away. Today, Paul Gilster wrote a beautiful elegy for him at his site on interstellar exploration, Centauri Dreams. It describes the 2010 conference, where Zaitsev debated others on the METI question, including David Brin. Brin advocates the most helpful position, one that asks for international and interdisciplinary debate prior to the sending of messages. Such debate, and any guidelines it might lead to, can only help us with these important and long-neglected questions.

It was great listening to these titans debate at the conference, yet I also realized how far we are from a science that tells us about the general Goodness of the Universe, to validate Zaitsev’s belief. We are a long way from his views being popular, or even discussed, today. Many scientists assume that we live in a randomness-dominated, “evolutionary” universe, when it seems much more likely that it is an evo-devo universe, with many things, both unpredictable and predictable, that we can say about the nature of advanced complexity. Also, far too many of us still believe we are headed for the stars, when our history to date shows that the most complex networks are always headed inward, into zones of ever-greater locality, miniaturization, complexity, consciousness, ethics, empathy, and adaptiveness. As I say in my books, it seems that our destiny is density, and dematerialization. Perhaps all of this will even be proven in some future network science. We shall see.

Note: This post can also be found on Medium, a platform that commendably pays its community for its writing and readership. Medium is also much easier to use than WordPress. I keep this site only as a legacy site at present. Please visit my Medium page to find and comment on my latest posts.

