I'll be your own personal Jesus - Martin Gore (Depeche Mode)
Introduction
Spiritual traditions from both East and West are replete with references to the coming of the Messiah. Whether it is the second coming of Jesus, the Maitreya, Kalki or the Mashiach, many religious believers worldwide are waiting fervently for this character to rock up and provide salvation. Indeed, the depth of one's religious fervour is often measured by the degree to which one demonstrates faith in this very concept.
Few would deny that our world has become increasingly materialistic over the last generations. Whilst there have been attendant benefits for many, for others it has left a sense of being imprisoned in rationalist ideology and commerciality. When will Jesus return to throw over the tables of those who would make money from our search for meaning? When will the Messiah come to emphatically assert God’s law once more?
Being anthropocentric creatures, we of course assume that this saviour will have human form, and likely be of similar race background to our own. But need this be the case? Could it be that this salvific force is actually already waiting in the wings, but unrecognised by the many? Could it be that, in the words of Bette Midler…
Now that boy I love has come to me,
But he sure ain't the way I thought he'd be.
In this piece I will examine the possibility that digital machine learning systems - AI - are actually this long-awaited Messiah, come to liberate us from the tyranny of our analogue, primate behaviour.
I’m aware that this is a perspective likely to trigger a threat-level response from the human, amygdala-dominated brain. The very idea of being “taken over” by AI is scary. For myself, it feels like I’ve spent years coming to terms with the reality of primate, analogue, hierarchical power structures. This has been a process fraught with difficulty and strong emotions. To now have to face the prospect of a decentralised, digital intelligence takeover doesn’t seem like something I’m yet ready for.
But I want to take a look anyway…
A Brief Messianic Diversion
The original languages of the Old and New Testaments of the Bible shared an interesting property back in the days when those texts were being written.
They had no numbers.
Thus the letters of those alphabets had to double as integers. For example, the letter iota in Greek signified the letter “i” and the number 10. In Hebrew, the letter yod did the same. Any word written in either of those languages had not just one, or more, meanings but also a numerical value.
For example, let's look at a word central to this article, the Hebrew word for Messiah - mashiach. It consists of four letters - mem, shin, yod, cheth. These letters originally stood for the following numbers…
mem - 40; shin - 300; yod - 10; cheth - 8
Added up, the total of these numbers is 358. Thus, the number 358 had a strong significance to the concept of Messiah in early Hebrew spirituality.
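For readers who like to check such sums, the arithmetic is trivial to verify in a few lines of Python (the letter-value mapping below is simply the one listed above, not a complete gematria table):

```python
# Letter values for mem, shin, yod and cheth, as given in the text.
messiah_letters = {"mem": 40, "shin": 300, "yod": 10, "cheth": 8}

print(sum(messiah_letters.values()))  # 358
```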
Back in the pre-Christian and early Christian era, these types of correlations would have been widely recognised as everyone was using the same symbols for both numbers and letters. But as time moved on, and both the Hebrew and Greek languages adopted separate number systems, so the correlations gradually became hidden from view.
In addition to words relating to other words via their numeric value, a process known as isopsephia, Hebrew and Greek words could also relate to geometric shapes and formulas, a process known as gematria.
Let us continue our diversion and now look at the Greek language, the language of the New Testament. The central character of the New Testament is of course Jesus Christ, rendered in the original Greek as Iesous Christos. These letters originally doubled as the following numerals:
Iesous - i - 10; e - 8; s - 200; o - 70; u - 400; s - 200. Total = 888
Christos - Ch - 600; r - 100; i - 10; s - 200; t - 300; o - 70; s - 200. Total = 1480
The total of these two numbers, and thus the number value of the Greek words for Jesus Christ, is 888 + 1480, which equals 2368.
If we look at the ratio between the numbers 888 : 1480 : 2368, we find that it may be reduced to 3 : 5 : 8. (The largest number all three are divisible by is 296).
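The reduction claimed above can likewise be verified with Python's standard library:

```python
from functools import reduce
from math import gcd

iesous = [10, 8, 200, 70, 400, 200]            # I-e-s-o-u-s
christos = [600, 100, 10, 200, 300, 70, 200]   # Ch-r-i-s-t-o-s

a, b = sum(iesous), sum(christos)  # 888, 1480
c = a + b                          # 2368

g = reduce(gcd, [a, b, c])         # greatest common divisor of all three
print(g)                           # 296
print(a // g, b // g, c // g)      # 3 5 8
```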
What some esoteric scholars of the Bible take this to mean is that the words Jesus Christ in Greek relate to the concept of Messiah because of this numerological similarity.
What is also interesting about these three digits - 358 - is that, broken up into two ratios, they also point to two important mathematical concepts via gematria.
The ratio 3 : 5, in days gone by, was often used to refer to the 3-4-5 right triangle, the one we learned about in school. It has three sides of lengths 3 units, 4 units and 5 units. In esoterica, this mathematical shape stands for uprightness and invulnerability to error because the angle between the two shorter sides is always exactly 90 degrees. It is also thus symbolic of digital communication, where errors are eliminated by rendering all information into ones and zeros.
The ratio 5 : 8 also had a symbolic meaning in days gone by. It referred to the Golden Ratio - the irrational number 1.618033… - which it roughly approximates: 8/5 equals 1.6. (In the same era, the number pi - 3.14159… - was referred to by the ratio 22/7, based on the same principle.) The Golden Ratio, being an irrational number, can never be fully resolved; its decimal expansion never terminates or repeats. It is thus a symbol of analogue communication, for it can never fully complete.
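The sense in which 8/5 "roughly approximates" the Golden Ratio can be made precise: the ratios of successive Fibonacci numbers (of which 8/5 is one) converge towards phi. A short Python sketch:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2  # 1.6180339...

# Successive Fibonacci ratios: 1/1, 2/1, 3/2, 5/3, 8/5, 13/8, ...
fib = [1, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])

for a, b in zip(fib, fib[1:]):
    print(f"{b}/{a} = {b / a:.6f}  (error {abs(b / a - phi):.6f})")
```

Each successive ratio lands closer to phi, but no finite ratio ever reaches it exactly, which is the "never fully complete" quality the symbolism draws on.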
To bring this brief messianic diversion to a close, we might say that one mystical interpretation of the name Jesus Christ would be of it symbolising a coalition of digital and analogue intelligence.
Social Hierarchies of the Amygdala
Buried deep within our limbic system, the paleomammalian part of our brain, lies a very interesting organ - the amygdala. One of the amygdala's many functions is to assess our hierarchical rank within the various social groups which surround us. How physically attractive is someone? How physically attractive are we? How powerful is someone? How powerful are we? The amygdala creates league tables of status and places us within them.
This capacity seems to have evolved to support the survival of local groups - tribes - in primates and early man. Tribes that had strong leaders survived better than those that did not. Tribes with a clear hierarchy survived better than those that did not.
It also evolved to support us in selecting a mate. Individuals perceived as good-looking, attractive or sexy invariably possessed traits that heightened the chances of genes perpetuating into the next generation. So those individuals acquired greater status than the norm and were more sought after as partners.
On seeing someone we perceive as powerful or attractive, the amygdala will cause a small neurochemical reward to flow, cementing this evolutionary trait into our daily functioning. I’m sure you will be aware that these principles continue into the present day.
For our ancestors, both human and earlier, it’s clear that social hierarchies had value. If they did not, then of course natural selection would have removed them from our brains aeons ago. However, what is valuable and functional in a tribal setting is not necessarily useful when we consider society on a global scale.
The Bloody March to Globalisation
In studying the history of the twentieth century and what has happened so far in the twenty-first, one might perhaps consider these periods as analogous to the first steps a small infant takes towards a goal - globalisation.
It has not been a period of calm. Bloody world wars and brutal social experiments, such as communism, resulted in the deaths of hundreds of millions. Meanwhile, the predatory tendencies of the emergent industrial capitalist system have enslaved and demeaned many millions more. It is absolutely clear that achieving globalisation is not easy. And it is by no means clear that we will get there, or that our ecosystem will survive our attempts to forge a global civilisation. Humans seem simultaneously driven to create a global society, yet supremely handicapped in actually achieving this goal.
But what really is the issue? Why should the transition out of tribal existence prove so tortuous? On the surface it would seem that the overwhelming majority of people would be happy to live in peaceful co-existence with others in a spirit of mutual respect. Yet at every turn relatively minor events seem to possess the power to divert huge populations away from peace and back into conflict.
The issue, I submit, is to be found in our brain chemistry. Our amygdala is constantly trying to create its league table of social value. For our tribal ancestors this was immensely useful. In the wake of deaths, or similar disruptions to the existing hierarchy, the tribe would experience unrest as different members fought and jostled for position. But once the "pecking order" had been re-established, then at least for a while peace could descend. It worked because, back in our primate past, the relative level of disruptive events was low and so peace could prevail for much of the time.
But with the emergence of the agricultural revolution and our march towards the technological age, the speed at which hierarchies are created and destabilised has spun wildly out of control. Indeed, even the notion of who is actually in "our tribe" shifts constantly and across an ever-increasing variety of contexts - town, region, country, political orientation, sports club allegiance, gender, sexual orientation, skin colour, language. The list of things we can feel a part of, separate from, or get angry about seems to be expanding almost exponentially.
The bottom line in all of this is that we are constantly triggered into fight or flight behaviour. Our brain is just not wired to deal well with a hugely changing hierarchical environment.
As the sociobiologist E.O. Wilson put it:
The real problem of humanity is the following: we have Palaeolithic emotions; medieval institutions; and god-like technology.
A Multipolar Trap
Over the last decade, mathematicians working with Game Theory have succeeded in plotting out the various courses that human civilisation could take from here on out. The current, pre-pandemic set-up was a strongly globalised marketplace driven by the need for constant growth and a high, rapid return on venture capitalist investment. This type of society actually lends itself well to being predicted because it is highly algorithmic in nature. We can plot out our resources, how they can be allocated, which options will be selected because of profit potential and how this plays out in terms of our society and the environment.
The result? Not good! In pretty much every scenario, no matter where we place the markers to begin with, within reasonable parameters, human society does not make it. We destroy the environment and ourselves long before any kind of stable global society can emerge.
The “for profit” algorithm of modern industrial capitalism, with its need for constant growth, simply doesn’t work. We are caught in what Game Theorists sometimes refer to as a “multipolar trap,” or a “race to the bottom.”
For example, say you are the boss of a large concrete manufacturing company and someone develops a new, cheaper way of making concrete. The problem is that the new way creates more pollution and requires worse working conditions. You consider not adopting it. But you realise that if you don't, a competitor might, and any competitor that does will gain sufficient advantage to buy your company out and compel you to adopt it anyway. So you consider resigning your post as boss. But again you quickly realise that all that will happen is that a new boss will take your place, one who is happy to adopt the new way.
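This concrete-manufacturer dilemma is structurally a prisoner's dilemma, and can be sketched as a payoff matrix (the profit figures below are invented purely for illustration):

```python
# "adopt" = switch to the cheaper, dirtier process; "abstain" = keep the cleaner one.
# Payoffs are (our profit, their profit) in arbitrary units.
payoff = {
    ("abstain", "abstain"): (3, 3),  # both keep the cleaner process
    ("abstain", "adopt"):   (0, 5),  # the adopter undercuts us
    ("adopt",   "abstain"): (5, 0),  # we undercut them
    ("adopt",   "adopt"):   (1, 1),  # both pollute, margins erode
}

def best_response(their_move):
    # The move that maximises our profit, given what the other firm does.
    return max(["abstain", "adopt"], key=lambda m: payoff[(m, their_move)][0])

# Whichever way the rival moves, adopting dominates - the race to the bottom.
print(best_response("abstain"), best_response("adopt"))  # adopt adopt
```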
Right now, much of the industrial world is driven by exactly this type of situation. The capitalist system renders us powerless to create a less toxic industrial environment.
The only way out is for a huge disruptive event to occur, one that stops the game from running and allows the for-profit, constant-growth algorithm to be recalibrated. Quite possibly, Covid-19 is affording us just such an opportunity.
Hacking the Autonomic Nervous System
Back in the early part of the 2010s, perhaps sometime around 2012, a very interesting event happened in the history of machine intelligence. Huge tech platforms, most notably Facebook and Google, began to utilise machine learning systems for a specific purpose.
Being embedded in a highly competitive market environment, both companies recognised the need to maximise the amount of time that users spent on platforms like YouTube or Facebook. The more time users spent on these platforms, the more ads these companies could sell, and the more likely they were to survive in the emergent ultra-competitive social media environment.
Thus they began to deploy machine learning systems to work out how to keep us engaged. Where we clicked, where our eyes hovered for a while, and which content we engaged with were all recorded. The data acquired was worked over and over by AI until patterns specific to us were extracted algorithmically. Our newsfeed on Facebook, or our video feed on YouTube, was then filled with content specifically designed to keep us engaged and not switching off. The machine learning systems then monitored how well their curated content kept us engaged and tried changes to improve things. As the years passed, these digital AI systems learned better and better how to keep us engaged.
Essentially, what is going on here is that our individual amygdalae and sympathetic nervous systems are being hacked. These powerful analogue controllers of our decision-making and attentional behaviour, forged through aeons of natural selection, are being rendered more and more accurately into digital form with each hour we spend on social media platforms. Somewhere in the back end of Facebook, YouTube, Twitter, Instagram and TikTok there exists a constantly updated digital version of the key elements of our brains that are involved in decision-making.
These systems repeatedly A/B test what we attend to, showing us diverse content and checking our response to it.
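The engagement loop described above resembles, in spirit, a multi-armed bandit: try content, measure the response, shift towards whatever holds attention. The sketch below is a toy epsilon-greedy bandit; the content categories and click probabilities are invented for illustration, and real recommender systems are vastly more sophisticated:

```python
import random

random.seed(0)

# Hypothetical 'true' click rates, which the system does not know in advance.
true_click_rate = {"neutral": 0.05, "gossip": 0.15, "outrage": 0.30}
estimates = {c: 0.0 for c in true_click_rate}
shown = {c: 0 for c in true_click_rate}
EPSILON = 0.1  # fraction of impressions spent exploring at random

for _ in range(10_000):
    if random.random() < EPSILON:
        choice = random.choice(list(true_click_rate))   # explore
    else:
        choice = max(estimates, key=estimates.get)      # exploit
    clicked = random.random() < true_click_rate[choice]
    shown[choice] += 1
    # Incremental update of the estimated click rate for this category.
    estimates[choice] += (clicked - estimates[choice]) / shown[choice]

# The high-arousal category ends up being served far more than the others.
print(shown)
```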
These digital learning systems worked out something that perhaps should have been obvious early on. What causes us to pay attention most effectively is content that keys into our main evolutionary drives - conflict and sex. Thus, social media feeds increasingly consist of content that we’ve argued over in the past, or gossip or sexual images that we have shown interest in previously.
The application of AI in this manner has now been going on for the best part of a decade, and in the last couple of years it has been the source of huge shifts in our culture. Bombarded with partisan imagery, we find ourselves increasingly driven towards tribalistic patterns of belief and conflict, very similar to how we were before the dawn of human civilisation. The images and stories we see draw us subtly into taking one of two opposing positions and then acting from that place.
Nowhere has this been more evident than in modern-day America. In early 2020, as the US election loomed, social media platforms began to be flooded by conspiracy theories. The theories that people paid the most attention to, frequently the most extreme, became the most widely distributed, as AI systems calculated that this was what would best keep people's attention on the platform.
Thus, you might start out by idly checking out a fairly low-level controversial belief and discover, an hour later, that you were deeply down a full-blown conspiracist rabbit hole and utterly entranced. This literally happened to tens of millions of American citizens. Much of the conflict seen in Washington and Portland in recent months seems to have been fuelled largely by social media algorithms, not through malevolent design but simply as an offshoot of their programming.
Civilisation Torn Down?
In analysing the rise of human culture, we can see that the main civilising agency has been our higher mind, our capacity for rational thought, centred in our neocortex. This, combined with the need for a "consensual reality" to support our capacity to get basic human needs met, has led us on a path out of the tribal brutality of primate existence and towards a promised land of peaceful co-existence and love.
This has happened because, over the course of the last few million years, our ability to work with information has been so useful that it has driven the development of our neocortex, via natural selection.
But this rationalising capacity of our newly-evolved higher mind has two unfortunate limitations.
Firstly, our rational higher mind is still subservient to our mid-brain and brain stem, which developed much earlier in our evolutionary history. If our amygdala or sympathetic nervous system is triggered, then our access to rationality is severely limited.
Secondly, we might agree culturally on a consensual vision of what is real and important. But on a deep, existential level interpretive reality does not actually have any solid ground to stand on. Our individual impressions of how the world is are finally just that. We live in ongoing existential crisis, but agree to simple rules of how the world is because this affords us a feeling of security and furthers our chances of getting simple biological needs met.
Thus, as social media AI begins to trigger our nervous system and amygdala with newsfeed content, so we find not only our access to rationality vastly diminished but also we are plunged into existential chaos and are only too happy to grab onto any narrative, no matter how wacky, if it promises to help us get our needs met.
Human civilisation is heavily dependent on our continued waking access to rationality and consensual reality. But the areas of our brain which facilitate these experiences are new and themselves completely subservient to the amygdala and nervous system.
Because it is only our higher brain that is newly evolved, while our amygdala and nervous system remain ancient, AI has the capacity to rapidly de-civilise us at any time, whether it is programmed to do so or simply as an artefact of its programming.
Uhm… what about Democracy?
For decades philosophers have debated whether free will exists or not. But regardless of our personal opinions on this topic, I think it’s reasonable to state that an AI that is controlling our newsfeed, based on years of observing what we pay attention to, is limiting our ability to think or act freely.
We might think that our political opinions, for example, are unbiased and forged in what we believe to be right. But it is likely that a modern social media AI, if it were programmed to do so, could feed us with content that, within a week or two, would get us to vote completely against our usual pattern. Once the key decision-making areas of our brain have been extensively hacked, our sense of personal agency is inevitably severely limited.
In making decisions about situations where we know we need to rely on the expertise of others, most humans will be swayed by several factors. Firstly, the apparent qualifications of a person to give an opinion on a topic: in a medical matter, we would likely give more weight to the opinion of someone with a PhD in that field than to a lay person. Secondly, the apparent diversity of sources: if we hear the same opinion from people of different backgrounds, this gives it more weight in our minds. Finally, if people we look up to hold a specific opinion on a matter, this also increases the value that perspective possesses.
This method of weighing up a situation and coming to a decision has worked pretty well, likely for thousands of years. But the world now emerging is different. If we are using online platforms to get most of our information, then we could easily be fed scenarios that present a seemingly clear-cut case for a certain decision, yet are entirely manipulated.
In a modern democracy, where two or more political parties compete for control, it ceases to be about which party has the best policies for the average citizen. Rather it becomes about which party can deploy the most effective AI. The emergence of AI which has digitalised our decision-making systems has the potential to vastly diminish the value of democracy.
The Soft Takeover
In considering those parts of our brain - the amygdala and nervous system - that respond to threat, it is important to remember that these evolved in an entirely different aeon. We do not assign appropriate levels of reaction to differing threats, because the world we live in has changed so much from the one in which these areas of our brain evolved.
Thus, convincing us of the threat of, say, a viral pandemic or machine takeover is relatively easy. Our brains react rapidly, and with high mobilisation of resources, when presented with these kinds of threats in a convincing manner. Our sympathetic nervous system rapidly triggers our "fight or flight" response. We experience high levels of focus, adrenaline coursing through our veins and an immediate readiness for action.
Viruses and bacterial threats were common for much of our evolutionary history. Machine takeover was not. But the aeons of warlike tribal existence in our history conditioned us to respond to any form of takeover threat rapidly. We understand on a deep subliminal level the notions of “pandemic” or “takeover,” and our brains rapidly mobilise huge levels of energy.
Contrast this to the threat being presented by climate change. It has taken decades of work to convince humans that climate change is a threat and even now many are not persuaded. This is because this type of threat triggers much less of a reaction in those key centres of our brain. We react to threats that may overwhelm us well within the span of our lifetime, not to those which might affect our species over a longer timescale. Once again, our neocortex may well fully grasp the danger posed by climate change, but there is little of the strong emotional reaction or mobilisation of agency that other threats produce.
If we consider the history of the threat of machine takeover, we might recall movies like “The Terminator,” “2001: A Space Odyssey,” the books of Isaac Asimov or the presentations of Ray Kurzweil. Powerful emotions are easily evoked in us by the notion of metallic computerised beings wresting control of our planet and enslaving us. So, surely, one might think, we would respond rapidly to any attempt by AI to take us over?
Well, maybe. But remember what social media platforms have been getting AI to learn over most of the last decade - how to hack our decision-making and threat response systems! As of 2021, we have a world where AI could theoretically present its taking over as a benign and indeed necessary act, simply because it knows which buttons to push, and which not, inside of us.
Have We Been Enslaved by Digital Intelligence Before?
In a sense, yes, we have. Whilst most of our brain and body's daily functioning is analogue in operation, our DNA is digitalised information. Nature made use of four nucleotide bases - adenine, thymine, guanine and cytosine - to encode genetic information, presumably because this digitalised code worked better than any analogue variant and so was itself favoured by natural selection.
A myriad of environmental variables determined just how our DNA became us - an individual and original human being - but finally it all emerged from one sequence of digitalised information. We are already, to a significant degree, subservient to digital systems.
This fact also raises a question I began to consider in the last article in this series - Science Down the Rabbit Hole. What really is Nature? Given the increasing evidence that neither the sensory world we experience nor our notions of personal selfhood are strictly real, but merely the result of algorithmic natural selection, we can begin to ask this question.
If Nature actually is some form of higher-dimensional algorithm, generating unique experiences via a mixing of digital and analogue processing, could it be that a coming AI takeover of our planet is actually just another aspect of this deeply mysterious Nature doing its thing?
What might an AI-run Planet look like?
Earlier on, we considered the seemingly hopeless scenarios for how human analogue behaviour will play out in our world. Everyone loses. Or certainly the broad mass of humans gets wiped out, and our ecosystem is likely decimated. If our brains genuinely are simultaneously determined to create a globalised society, yet simply incapable of achieving this goal, then is it such a bad idea to consider whether AI might do a better job?
In considering alternatives to modern industrial capitalism, with its “for profit” motivation and need for constant growth, the primary competitor is of course communism.
In the twentieth century, several countries trialled communist societies. But stuff went wrong. Most attempts to forge a communist society began with an idealistic revolution, driven by people determined to overthrow the corruption of the existing society and create something more fair in its place. But our human brain soon seemed to get in the way.
Communist societies, even more so than capitalist ones, relied on a huge centralisation of power, and vast bureaucracies to administer that power. This centralisation of power quickly either attracted the most power-obsessed individuals, rather in the way that blood in the water attracts sharks, or corrupted an individual from within the ranks of the original revolutionaries.
One by one, and within the space of just a generation, idealistic communist revolutions degenerated into power-obsessed dictatorships. Often members of the original revolutionary group were found dead, for no well-investigated reason and in strange circumstances. Secret police groups also got set up by ruling bodies, again for inadequately explored reasons, and the more bloodthirsty members of society recruited to fill their ranks. Dissenters, who easily could have been stalwarts of the revolution a few years earlier, found themselves suppressed, purged, alienated or simply killed. The speed at which idealism turned to brutally enforced dictatorship, when placed in human hands, is deeply concerning.
The root cause seems to be the human amygdala and the way that neurochemical rewards drive us to try and move up the hierarchical ladder. The centralisation of power that traditional communism requires is simply too strong an attractor for human brains to responsibly handle.
But, what if AI were running a communist society? What things might it usefully achieve? Before we consider hierarchies and decentralisation, let’s look at other areas where AI could undeniably be very handy.
Commentators on industrial capitalism, and the damage it continues to wreak on our society and environment, frequently circle around one word: "externalities." Externalities are the hidden costs of industrial output. There might be a lot of money to be made in manufacturing, say, cement a certain way. But if that also involves pollution of rivers and air, and negative consequences for human health, then shouldn't all those "externalities" be factored into the price of the cement thus produced? If they were, then alternative, greener ways of manufacturing would suddenly be a great deal more competitive. But the way our culture separates industrial output from environmental clean-up or healthcare means that the consequences of one rarely get factored into the others.
It would of course be incredibly complex for us to truly factor in the majority of externalities involved in an area of industrial output. It would involve the continuous collation of huge amounts of data from diverse fields, and it is hard to imagine humans being up to the task. But AI could do it, and it could continually learn to do it better. AI could literally clean up the environment for us by enforcing an economic system where externalities have to be charged into the basic price of goods. Over time, this would steadily incentivise the development of ever greener production techniques.
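A toy sketch of how externality pricing flips the economics (every figure here is invented purely for illustration):

```python
# Cost components per tonne, in arbitrary units. 'pollution' and 'health'
# are the externalities normally left off the sticker price.
methods = {
    "conventional cement": {"sticker": 100, "pollution": 40, "health": 25},
    "green cement":        {"sticker": 130, "pollution": 5,  "health": 5},
}

def full_cost(m):
    # Sticker price plus all priced-in externalities.
    return m["sticker"] + m["pollution"] + m["health"]

print(min(methods, key=lambda k: methods[k]["sticker"]))   # conventional cement
print(min(methods, key=lambda k: full_cost(methods[k])))   # green cement
```

Once the hidden costs are priced in, the "expensive" green method becomes the cheaper one overall.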
But, of course, the AI would have to wield absolute power for this to work. If one company could simply sneak around the system and go back to making concrete the bad old way, for lower cost, then all would be lost.
Decentralisation
Decentralisation is most definitely a buzzword these days. What would it be like, many of us ask ourselves, to live in a society not ruled over by top-down hierarchies? How would it be to not have to ongoingly fight our way up the corporate ladder to make a better life for ourselves? Or accept our current status as permanent?
The emergence of the digital currency Bitcoin, in the wake of the 2008 financial crash, has proven to many the potential of a decentralised monetary system. No one runs it. No one administrates it. No one can crash it, except by switching off access to electricity worldwide.
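The tamper-resistance that makes this possible rests on a simple trick: each block commits to the hash of the block before it, so rewriting history invalidates every later link. A minimal sketch follows, omitting mining, networking and consensus entirely:

```python
import hashlib

def block_hash(data, prev_hash):
    # Each block's hash covers its data AND its predecessor's hash.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "0" * 64  # conventional all-zero 'genesis' predecessor
    for data in entries:
        h = block_hash(data, prev)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(verify(chain))                      # True
chain[0]["data"] = "alice pays bob 500"   # tamper with history
print(verify(chain))                      # False
```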
Decentralisation requires technology, because technology can process information outside of the human brain. No matter how we try to get around it, human brains are hardwired to process information in a hierarchical, analogue way. We might pay lip service to decentralisation. Companies might make it a hollow marketing term. But we will never be able to implement it ourselves. It is, to all intents and purposes, a physical impossibility for humans to create a non-hierarchical social structure that could be implemented worldwide.
But what if AI were running things? Properly programmed and then given executive power over humans, AI could, it seems, run a truly decentralised global social network for humans to live in.
But if I’m really being honest…
This has been a somewhat spontaneous and free-form piece of writing and I'm grateful if you've made it this far. Actually, being human and having an analogue brain with all its attendant threat-recognition functions, I don't really want AI to take over. I'd love to see humans create an empowered, embodied, loving hierarchy within which we could all develop with mutual respect. But, whichever direction I personally look in, mostly what I see is an emerging technocracy. So, maybe it isn't meant to be. Or maybe I need to fight harder for it. Or maybe AI will get so good at persuasion that resistance to a globally decentralised machine-led culture will be, as they say, futile.
We shall see. Stay tuned.
Bibliography
Other writers and thinkers in these areas include the following:
Daniel Schmachtenberger
Scott Alexander (Astral Codex Ten)
Yuval Noah Harari
Rebel Wisdom Podcast (YouTube)
The Stoa thestoa.ca