The Necessary Noosphere

The Genius of Olaf Stapledon’s Star Maker

There is a great YouTube channel I recommend for sci-fi enthusiasts: Bookpilled. Its anonymous host, a “digital nomad”, has reviewed several hundred science-fiction novels in 86 videos over the last two years. Almost all come from what he calls “the pre-bloat era”, the 20th century. I really like the discernment, word choice, emotion, expressiveness, and insight he brings to his reviews. I also appreciate his thoughts on the evolution of sci-fi as a genre, and on the great value of literature in general. Check out his channel and see if you like it. Consider supporting his Patreon if you do. He makes $830 a month there at present. That is a testament to the value of his reviews, but not yet a living wage, which I hope he’ll get soon.

Bookpilled recently posted a five-minute review of Olaf Stapledon’s Star Maker, 1937. Star Maker is one of the foundational books of science fiction, and still one of the most visionary. In cosmic science fiction, it is the ur-book (the earliest, the original), the one from which all others sprang. What I particularly like about this polymathic and philosophical book is that it predicted the evo-devo nature of the universe, at least as I see it.

His review of Star Maker is the middle one in the set of three in the video below. Here is a link if you want to skim directly to it.

His review inspired me to share my own thoughts on this great book, here on my blog and over on Medium. I hope you like them. As always, please let me know what you think as well, either in the comments, or privately at johnsmart at gmail, as you prefer.


This is a beautiful and honest review of Star Maker, 1937, one of the foundational novels of science fiction. Every reader deserves to know that Arthur C. Clarke, one of the greatest sci-fi novelists of the 20th century, and also a scientist and a futurist, was deeply influenced by this book. He called it “probably the most powerful work of imagination ever written.”

I bet Clarke, a great world builder himself, realized that Stapledon’s central ideas in this sweeping book, that of a developing noosphere, for Earth and for the Cosmos, one that must progressively integrate and regulate all its diverse minds and cultures, and that of the multiverse as both the source of our present universe and the place intelligence will go when this universe dies, as it must, together form the most elegant model that philosophers have yet proposed for the nature and future of our universe.

In my view, the Stapledonian combination of the Noosphere, the Cosmosphere (his unique contribution), and the Multiverse remains the best model for the nature and future of our universe as a complex adaptive system, if it is both an evolutionary and developmental (“evo-devo”) system, replicating under selection in the multiverse. For one modern scientific hypothesis of our universe as a replicator in the multiverse, with early evidence from parametric tuning and black hole dynamics, see Lee Smolin’s Cosmological Natural Selection hypothesis, and his excellent book, The Life of the Cosmos, 1999.

Isaac Asimov, perhaps the only other contender for the 20th century’s best combination of sci-fi novelist and science writer, also thought Star Maker was truly special. I’d argue that Asimov’s most brilliant and famous short story, The Last Question, 1956, was influenced by this book. Apparently one of his last sci-fi novels, Nemesis, 1989, on networked consciousness, was also directly influenced by Stapledon’s book.

Thank you for sharing your view that the beginning and end of Star Maker were the best parts for you, and that most of the rest was a painful slog to finish. Such opinions are very helpful to “skim readers”, those of us who use skimming and scanning techniques to read our novels, rather than reading them word for word. See my Appendix to this article if you want some tips on skim reading, and thoughts on its future.

For those who don’t plan to read this important but difficult book, let me recommend the Wikipedia plot summary of Star Maker. For the vast majority of us, I think a plot summary is all we need from books like Star Maker, along with a few reviews of its impact and value, like this one. Also, as a wonderful recent development, we can now talk to a good large language model, like ChatGPT, about any book, asking the AI questions about it and requesting summaries of its key ideas. I also recommend donating to Wikipedia occasionally, and paying, if you can, for the best commercial LLMs (GPT-4, today).

Wikis and LLMs are presently very limited but also very important aspects of our emerging noosphere (group mind), one of the topics Stapledon writes so cosmically about.


If you’re curious to read more about the development of the noosphere, and want to start with the ur-book in nonfiction, let me recommend The Human Phenomenon (mistranslated in English as “The Phenomenon of Man”), 1955, by the Jesuit priest, philosopher, and paleontologist Pierre Teilhard de Chardin. Interpret his Christian beliefs figuratively, rather than literally, and you have a brilliant first take on the increasing empathy, ethics, and consciousness of adapted complexity in our universe, hypothesized as a plausible universal developmental process, under evolutionary selection.

Many other works have been written since, in fiction and nonfiction, that explore how this noosphere may develop. The futurist Gregory Stock’s Metaman: The Merging of Humans and Machines into a Global Superorganism, 1993, is a very good nonfiction update, with a good history of those who have speculated on the nature of the future human superorganism. So are the works of the great systems theorist Fritjof Capra, especially The Web of Life, 1997, and The Systems View of Life, 2014.

Sadly, few academics have secured funding to study this topic in the seven decades since Teilhard’s books. I think this is because the whole idea of a developmental noosphere seems just too positive and convenient for human minds to accept as likely. We’ve been selected by evolution to think first and deepest about the negatives, about all the ways things might fail. Dystopias outsell protopias by roughly ten to one, in my estimation.

In 2008, philosopher Clement Vidal and I founded a community, Evo-Devo Universe, to explore the idea of the universe as an evolutionary and developmental system. Much has been written about the universe as an evolutionary system, a universe that unpredictably explores, creates, and gets more diverse in many local ways. There is much less written, in recent decades, about the universe as a developmental system, a universe that constrains, conserves, integrates, grows, matures, and follows a predictable life cycle, as a system. We all both evolve and develop, as living systems. Doesn’t it make sense that the universe might do this as well?

If our universe both evolves and develops, then a coming noosphere on Earth, a great diversity of noospheres on other Earthlikes, an eventual cosmosphere that integrates these noospheres, under dynamics of cooperation, competition, and selection, an eventual end to this universe, and a new beginning in the multiverse all make sense, from a systems perspective. These are big ideas. They need a lot of testing, and better science, simulation, physics, and information theory than we have today.

Fortunately, there is a new conference, partly organized by my colleague Clement Vidal, that will explore many dimensions of the noosphere concept at UC Berkeley later this month. It is funded by a visionary philanthropist, Ben Kacyra, founder and president of the nonprofit research organization Human Energy.

The Noosphere at 100, November 17–19, 2023, honors the fact that it has been 100 years since Teilhard de Chardin first wrote about the noosphere idea. It features a number of fantastic speakers who have written about the idea, including Terrence Deacon, Francis Heylighen, Shiela Hughes, Kevin Kelly, Robert Kuhn, Jennifer Morgan, Greg Stock, Brian Swimme, Clement Vidal, David Sloan Wilson, Claire Webb, and Robert Wright. Best of all: you can watch it for free online. That’s noospheric thinking!

I’m happy to announce that I’ll be presenting a poster at this event. If you want an overview of the topics I’ll cover, let me recommend my talk, The Goodness of the Universe, 2022 (80 mins, YT). I also got into a great long conversation on this topic over on the space discussion community Centauri Dreams in 2021.

I consider the emergence of a global superorganism by far the most probable and necessary story for the future of intelligent life on Earth: a superorganism that incorporates all humans and their institutions as its “cells”, “tissues”, “organs”, and “networks”, integrating them into a diverse yet unified entity. The head of that superorganism must be a hive mind (collective intelligence), one that allows individuality, disagreement, and conflict, just as we argue with ourselves all the time, using diverse mindsets, yet a mind that also has an integrated and higher consciousness, with a unitary sense of self. Just like us.

How could it be otherwise? How else will we get the kind of safety and interdependence we need, in a world with continually accelerating individual power and ability?

The subtle value shifts happening now in our global public, as we all get more interconnected, are a weak signal of the coming collective consciousness. The modern generation puts empathy and ethics first, even as outdated political, economic, and social structures and traditions keep it from changing the system as rapidly as it should be changed. There are many new problems of progress that we must address, including corrupting levels of wealth, and digital systems that are still too dumb to guide us to truth and evidence. But we can overcome these problems, with hard work and vision. I’d love to hear your views.

This noospheric future has seemed highly likely to me ever since I first considered the nature of accelerating change in high school. In a key corollary, as advanced civilization minds inevitably meet other civilization minds in coming generations, through whatever mechanisms are most efficient (in my own papers I argue for a likely migration to inner space and black-hole-like environments), they’ll likely undergo the same merger process, increasingly unifying the Cosmos. Stapledon offers the first fictional version of a cosmosphere in Star Maker. It’s exhilarating to think that our drive to connect with others may one day reach such scale.

But it is also critical to recognize that even if a global superorganism is our highly likely fate, if we are to survive, this does not mean that our particular species will get there, or even if we do, that we will get to that positive emergence in a particularly ethical, empathic, and life-affirming way. The evolutionary paths we choose to take to this common developmental future will surely be quite different on each planet. While development is predictable, evolution is the opposite. It is the evolutionary paths that we take, not the developmental destiny we face, that are the essence of our individual and collective freedom, responsibility and moral choice.

Just knowing that a particular future, the global superorganism, is highly likely or inevitable in no way absolves us of the great challenges ahead in getting there in the most humanizing ways that we can. But at the same time, knowing where we are ultimately going is a huge foresight advance, as it can focus our strategies and efforts on the most productive and positive-sum goals, the ones the universe appears to be nudging us toward, by its very nature, as a developing system, based on fixed, finite, and highly-tuned physical and informational parameters. Again, just like us.


For those interested in evo-devo models of our universe, let me recommend our research and discussion community for publishing scholars in relevant disciplines, Evo-Devo Universe, which I co-founded with philosopher Clement Vidal in 2008.

My own views on the universe as an evo-devo system can be found in the following papers:

Answering the Fermi Paradox: Exploring the Mechanisms of Universal Transcension, 2002

Evo-Devo Universe? A Framework for Speculations on Cosmic Culture, 2008

The Transcension Hypothesis: Why and How Advanced Civilizations Leave Our Universe, 2012

Humanity Rising: Why Evolutionary Developmentalism Will Inherit the Future, 2015

Key Assumptions of the Transcension Hypothesis, 2016

Evolutionary Development and the VCRIS Model of Natural Selection, 2018

Evolutionary Development: A Universal Perspective, 2019 (Web) (PDF)

Exponential Progress: Thriving in an Era of Accelerating Change, 2020

To everyone reading this: Thanks for all you do!

Foresight: Your Hidden Superpower! (Interview Outline)

I have a new interview, Foresight: Your Hidden Superpower (YouTube, Spotify, Apple), with Nikola Danaylov of the Singularity Weblog. Nikola has done over 290 great interviews. They are a rich trove of future thinking and wisdom, with acceleration-aware folks like Ada Palmer, Melanie Mitchell, Cory Doctorow, Sir Martin Rees, Stuart Russell, Noam Chomsky, Marvin Minsky, Tim O’Reilly, and other luminaries. As with my first interview with him ten years ago, he asks great questions and shares many insights.

We cover a lot in 2 hrs and 20 mins. Below is an outline of a dozen key topics, for those who prefer to skim, or don’t have time to watch or listen:

  • We discuss humanity as being best defined by three very special things: Head (foresight), Hand (tool use), and Heart (prosociality). We talk about how these three things were critical to starting the human acceleration, with our first tool use (in cooperative groups), and why foresight, of all of these, is our greatest superpower. An author who really gets this view is the social activist David Goodhart. I recommend his book, below.
Goodhart, 2021
  • We discuss human society as an awesomely inventive and coopetitive network. We are selected by nature to try to cooperate first, and to compete second, within an agreed-upon and always improving set of network rules, norms, and ethics. What’s more, open, bottom-up empowering, democratic networks, like the ones we are seeing right now in the West’s fight in Ukraine, increasingly beat closed, top-down autocratic networks, the more transparent the world gets. We talk about lessons from Russia’s invasion of Ukraine for the West, Russia, and China.
  • We discuss why a decade of deep learning applications in AI gives us new evidence for the Natural Intelligence hypothesis, the old idea (see Gregory Bateson) that deep neuromimicry and biomimicry (embodied, self-replicating AI communities, under selection) will be necessary to get us to General AI, and are likely to be the only easily discoverable path to that highly adaptive future, given the strict limits of human minds.
  • We talk about what today’s deep learners are presently missing, including compositional logic, emotions, self- and world-models, and collective empathy and ethics, and why the module-by-module emulation approach of DeepMind is a good way to keep building more useful, trustable AI. Mitchell Waldrop’s great article in PNAS, What are the limits of deep learning?, 2019, says more, for those interested.
  • We discuss the Natural Security hypothesis, that we’ll get security and goal alignment with our AIs in the same way we gained it with our domesticated animals, and with ourselves (we have self-domesticated over millennia). We will select for trustable, loyal AIs, just as we selected for trustable, loyal people and animals. The future of AI security, in other words, is identical to the future of human security. We will need well-adapted networks to police the bad actors, both AI and human. Fortunately, network security grows with transparency, testing, proven past safe behavior, and perennial selection of safer, more aligned AIs. There is no engineering shortcut to natural security. I feel strongly that human beings are not smart enough to find one. For more on this, you may enjoy the work of the late, great biologist Rafe Sagarin, summarized for a Homeland Security audience in the slide below.
Natural Security. Learning How Nature Creates Security, in a Complex, Dangerous World
  • We talk about the philosophy of Evolutionary Development (Evo-devo, ED), which looks at all complex replicating systems as being driven by both evolutionary creativity and developmental constraint. We discuss why both processes appear to be baked into the physics and informatics of our universe itself. Quantum physics, for example, tells us that the more precisely we determine the value of one of a pair of conjugate variables at the quantum scale, the more statistically uncertain the other becomes. Both predictability and unpredictability are fundamental to our universe, and they worked together, somehow, to create life, with all its goals and aspirations. How awesome is that?
  • We describe how this evo-devo model of complex systems tells us that the three most fundamental types of foresight we can engage in are the “Three Ps”: thinking and feeling about our Probable, Possible, and Preferable futures. We explore why it is often best to make these three future assessments in this order, at least at first, in order to get to our most adaptive goals, strategies, and plans.
The Three Actors, Functions, and Goals of Evo-Devo Systems
  • We talk about how, unlike what many rationalists think, our universe is only partly logical, partly deterministic, and partly mathematical. Turing-complete processes like deduction and rationality can only take us so far. We actually depend most on their opposite, induction, to continually make guesses about the new rules, correlations, and order that are constantly emerging as complexity grows. What’s more, we use deduction and induction to do abduction, to create probabilistic models, and to analogize. Abduction is actually the most useful, high-value form of human thinking. Deduction and rationality are in perennial competition with induction and gut instinct, and the latter is usually more important. Both are critically necessary to better model making and visioning. If we live in an evo-devo universe, this will be true for our future AIs as well. It is always our vision of both the preferred and the preventable future (protopias and dystopias) that helps or hurts us most of all.
  • We describe intelligence as being inherent in autopoiesis (self-maintenance, self-creation, and self-replication). Any autopoietic system is going to have, by definition, both evolutionary and developmental dynamics. The system’s evolutionary (creative, unpredictable) mechanisms will guide its exploration, creativity, diversity, and experimentation. The developmental (conservative, predictable) mechanisms will guide its constraint, convergence, conservation, and replication, on a life cycle. The interaction of both dynamics, under selection, creates adaptive (evo-devo) intelligence. Intelligence, and consciousness, work to “knit together” these two opposing dynamics in an adaptive network. In my view, machines will need to become autopoietic themselves if they are to reach any generality of intelligence. A cognitive neuroscientist who largely shares this view is Danko Nikolic. His concept of practopoiesis (though it does not yet include an evo-devo life cycle) is quite similar to my views on autopoiesis. I recommend his 2017 paper on the design limits of current AI, in terms of levels of learning networks. I’ll explore his work in my next book, Big Picture Foresight.
  • We talk about today’s foresight, and how natural selection has wired us to continually predict, imagine, and set preferences milliseconds to minutes ahead. The better we get at today’s foresight, the better we get at short-term, mid-term, and long-term foresight. Today’s foresight, the realm of our present action, is both the easiest to improve and the most important to practice. I go into the psychology and practice of foresight in my new book, Introduction to Foresight, which we discuss in this interview. If you get a chance to look it over, please tell me what you think, and how I can improve. I greatly appreciate your feedback and reviews.
  • We also talk about a number of other future-important topics, including Predictive and Sentiment Contrasting, the Four Ps (our Modern Foresight Pyramid), Antiprediction Bias, Negativity Bias, why and how accelerating change occurs (Densification and Dematerialization), the Transcension Hypothesis, Existential Threats, why it makes sense to Delay Nuclear Power, to prevent a weapons proliferation dystopia (see my new Medium article on this topic), our potentially Childproof Universe, and the Timeline to the Singularity/GAI (2080, in my current guess).
  • My concluding message is that, regardless of what you hear in the media (due to both negativity and antiprediction bias), our networked evo-devo future looks like it is going to be a lot more amazing and resilient than we expect, that in life’s history so far, well-built networks always win (and are immortal, unlike individuals), and that foresight is our greatest superpower. The more we practice it, the better our own lives and the world get. Don’t believe me? Are you worried about tough, long-term global problems like climate change? Watch evidence-based, helpful, and aspirational videos like the one below, from Kurzgesagt. Positive changes and great solutions are continually emerging in our global network. We all just need to better see those changes and solutions, so we can thrive. Never give up on evidence-seeking, hope, and vision!

To say this all more simply: #ForesightMatters!

NOTE: This article can also be found on my Medium page, the best place to leave comments and continue the discussion. This site has become a legacy site, because WordPress still doesn’t pay its authors, and its software remains very primitive. For example, the formatting errors in this post do not show up in the edit window of WordPress’s new Gutenberg editor after pasting in good code from Medium, and I have no idea how to fix them. Sorry!


John Smart is a global futurist and scholar of foresight process, science and technology, life sciences, and complex systems. CEO of Foresight University, he teaches and consults with industry, government, academic, and nonprofit clients. His new book, Introduction to Foresight, 2022, is available on Amazon.

The Goodness of the Universe

In 2010, physicists Martin Dominik and John Zarnecki ran a Royal Society conference, Towards a Scientific and Societal Agenda on Extra-Terrestrial Life, addressing scientific, legal, ethical, and political issues around the search for extraterrestrial intelligence (SETI). Philosopher Clement Vidal and I both spoke at that conference. It was the first academic venue where I presented my Transcension Hypothesis, the idea that the more complex it becomes, advanced intelligence everywhere may be developmentally fated to venture into inner space, into increasingly local and miniaturized domains with ever-greater density and interiority (simulation capacity, feelings, consciousness), rather than to expand into “outer space”. When this process is taken to its physical limit, we get black-hole-like domains, which a few astrophysicists have speculated may allow us to “instantly” connect with all the other advanced civilizations that have entered a similar domain. Presumably each of these intelligent civilizations will then compare and contrast their locally unique, finite, and incomplete science, experiences, and wisdom, and if we are lucky, go on to make something even more complex and adaptive (a new network? a universe?) in the next cycle.

Clement and I co-founded our Evo-Devo Universe complexity research and discussion community in 2008 to explore the nature of our universe and its subsystems. Just as there are both evolutionary and developmental processes operating in living systems, with evolutionary processes being experimental, divergent, and unpredictable, and developmental processes being conservative, convergent, and predictable, we think that both evo and devo processes operate in our universe as well. If our universe is a replicating system, as several cosmologists believe, and if it exists in some larger environment, aka the multiverse, it is plausible that both evolutionary and developmental processes would self-organize, under selection, to be of use to the universe as a complex system. With respect to universal intelligence, it seems reasonable that both evolutionary diversity, with many unique local intelligences, and developmental convergence, with all such intelligences going through predictable hierarchical emergences and a life cycle, would emerge, just as both evolutionary and developmental processes regulate all living intelligences.

Once we grant that developmental processes exist, we can ask what kinds of convergences we might predict for all advanced civilizations. One of those processes, accelerating change, seems particularly obvious, even though we still don’t have a science of that acceleration. (In 2003 I started a small nonprofit, ASF, to make that case.) But what else might we expect? Does surviving universal intelligence become increasingly good, on average? Is there an “arc of progress” for the universe itself?

Developmental processes become increasingly regulated, predictable, and stable as a function of their complexity and developmental history. Think of how much more predictable an adult organism is than a youth (try to predict your young kids’ thinking or behavior!), or how many fewer developmental failures occur in an adult versus a newly fertilized embryo. Development uses local chaos and contingency to converge predictably on a large set of far-future forms and functions, including youth, maturity, replication, senescence, and death, so the next generation may best continue the journey. At its core, life has never been about either individual or group success. Instead, life’s processes have self-organized, under selection, to advance network success. Well-built networks, not individuals or even groups, always progress. As a network, life is immortal, increasingly diverse and complex, and always improving its stability, resiliency, and intelligence.

But does universal intelligence also become increasingly good, on average, at the leading edge of network complexity? We humans are increasingly able to use our accelerating S&T to create evil, with both increasing scale and intensity. But are we increasingly free to do so, or are we growing ever more self-regulated and societally constrained? Steven Pinker, Rutger Bregman, and many others argue we have become increasingly self- and socially-constrained toward the good, for yet-unclear reasons, over our history. Read The Better Angels of Our Nature, 2012, and Humankind, 2021, for two influential books on that thesis. My own view on why we are increasingly constrained to be good is that there is a largely hidden but ever-growing network ethics and empathy holding human civilizations together. The subtlety, power, and value of our ethics and empathy grow incessantly in leading networks, apparently as a direct function of their complexity.

As a species, we are often unforesighted, coercive, and destructive. Individually, far too many of us are power-, possession- or wealth-oriented, zero-sum, cruel, selfish, and wasteful. Not seeing and valuing the big picture, we have created many new problems of progress, like climate change and environmental destruction, that we shamefully neglect. Yet we are also constantly progressing, always striving for positive visions of human empowerment, while imagining dystopias that we must prevent.

Ada Palmer’s science fiction debut, Too Like the Lightning, 2016, depicts a future world of both technological abundance and dehumanizing, centrally-planned control over what individuals can say, do, or believe. I don’t think Palmer has written a probable future. But this combination of future abundance and overcontrol does seem plausible, under the wrong series of unfortunate and unforesighted future events, decisions, and actions. Imagining such dystopias, and asking ourselves how to prevent them, is surely as important to improving adaptiveness as imagining positive visions. I am also convinced we are rapidly and mostly unconsciously creating a civilization that will be ever more organized around our increasingly life-like machines. We can already see that these machines will be far smarter, faster, more capable, more miniaturized, more resource-independent, and more sustainable than our biology. That fast-approaching future will be different from anything Earth’s amazing, nurturing environment has developed to date, and it is not yet well represented in science fiction, in my view.

On average, then, I strongly believe our human and technological networks grow increasingly good, the longer we survive, as some real function of their complexity. I also believe that postbiological life is an inevitable development, on all the presumably ubiquitous Earthlike planets in our universe. Not only will many of us merge with such life, it will be far smarter, stabler, more ethical, empathic, and self-constrained than biological life could ever be, as an adaptive network. There is little science today to prove or disprove such beliefs. But they are worth stating and arguing.

Arguing the goodness of advanced intelligence was the subtext of the main debate at the SETI conference mentioned above. The highlight of this event was a panel debate on whether it is a good idea not only to listen for signs of extraterrestrial intelligence (SETI), but also to send messages (METI), broadcasting our existence and, hopefully, increasing the chance that other advanced intelligences communicate with us earlier, rather than later.

One of the most forceful proponents of METI, Alexander Zaitsev, was at this conference. Clement and I had some good chats with him (see picture below). Beginning in 1999, Zaitsev used a radio telescope in Ukraine, RT-70, to broadcast “Hello” messages to nearby interesting stars. He did not ask permission, or consult with others, before sending these messages. He simply acted on his belief that doing so would be a good thing, and that those able to receive them would not only be more advanced, but would be inherently more good (ethical, empathic) than us.

Alexander Zaitsev and John Smart, Royal Society SETI Conference, Chicheley Hall, UK, 2010

Sadly, Zaitsev has now passed away. Today, Paul Gilster wrote a beautiful elegy for him at Centauri Dreams, his site on interstellar exploration. It describes the 2010 conference, where Zaitsev debated others on the METI question, including David Brin. Brin advocates the most helpful position, one that asks for international and interdisciplinary debate prior to the sending of messages. Such debate, and any guidelines it might lead to, can only help us with these important and long-neglected questions.

It was great listening to these titans debate at the conference, yet I also realized how far we are from a science of the general Goodness of the Universe that could validate Zaitsev’s belief. We are a long way from his views being popular, or even discussed, today. Many scientists assume that we live in a randomness-dominated, “evolutionary” universe, when it seems much more likely that it is an evo-devo universe, with both unpredictable and predictable things we can say about the nature of advanced complexity. Also, far too many of us still believe we are headed for the stars, when our history to date shows that the most complex networks are always headed inward, into zones of ever-greater locality, miniaturization, complexity, consciousness, ethics, empathy, and adaptiveness. As I say in my books, it seems that our destiny is density, and dematerialization. Perhaps all of this will even be proven in some future network science. We shall see.

Note: This post can also be found on Medium, a platform that commendably pays its community for its writing and readership. Medium is also much easier to use than WordPress. I keep this site only as a legacy site at present. Please visit my Medium page to find and comment on my latest posts.


John Smart is a global futurist, and a scholar of foresight process, science and technology, life sciences, and complex systems. His new book, Introduction to Foresight, 2021, is now available on Amazon.