
Gods, Aliens and AIs - Not All Should Be Feared

Humans are the apex predator in this world, so it makes sense that we tend to fear the possibility of someone or something above us in the food chain. Not just the occasional shark or crocodile, but a pervasive and constant threat aimed at humans, either by design or collaterally.

An early example would be the gods: whether they exist or not, they do have these characteristics in most humans’ minds when they wage wars or have irate outbursts. Another example, well justified by its massive popularity, would be aliens launching a planet-wide invasion in which we become slave labour, a source of food, or a combination of both.

In order to imagine these scenarios, humans tend to anthropomorphize a lot. As it happens, this tendency can be found in almost everything we’ve cared to think about in our short history as sapiens sapiens. Speculation about the wrath of the gods or the master plan of the aliens is a clear example of how we coat unknowns with a special sauce that allows us to digest uncertain futures with the help of a more familiar flavour. AI is no exception to this recipe.

It is a very common argument, when speculating about the dangers of AI, to think in terms of “once it realizes that humans are not necessary...” or “if it comes to consider humans as a threat...”. The problem here is a full eschatological (the part of theology concerned with death, judgment, and the final destiny of the soul and of humankind) leap in terms of consciousness. To ‘realize’ or ‘consider’ anything, a machine first needs the capacity to do so. AI is not even remotely close to being able to realize or consider anything whatsoever, and it is likely it never will be. As a matter of fact, we have no clue at all how this works in our own brains and minds, let alone how to reproduce it artificially in a machine.

This is different from Bostrom’s argument that the risk or threat of AI to humans exists before any consciousness or realization, but rather at a programmatic level. When a machine is tasked with pursuing any endeavour with extreme efficiency, like manufacturing paper-clips, it may take all matter on the planet as a source of raw materials for paper-clip making, going after everything until there is nothing but paper-clips left.
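
To make the programmatic nature of that risk concrete, here is a toy sketch, entirely hypothetical and not Bostrom’s own formalism: a maximizer is just a loop over an objective, and the only thing that ever stops it is a bound designed in from the outside.

```python
# A toy sketch (hypothetical, not Bostrom's formalism) of an unconstrained
# maximizer: just a loop over an objective. No realizing, no considering.

def make_paperclips(resources, clips_per_unit, limit=None):
    """Consume resources into paperclips until they run out,
    or until an externally designed limit says 'enough'."""
    paperclips = 0
    while resources >= 1:
        if limit is not None and paperclips >= limit:
            break  # the safety bound is the only thing that ever stops it
        resources -= 1
        paperclips += clips_per_unit
    return paperclips, resources

# Unbounded: all 1,000,000 units of "matter" are consumed.
print(make_paperclips(1_000_000, clips_per_unit=10))           # (10000000, 0)
# Bounded: stops at the designed limit, leaving the rest untouched.
print(make_paperclips(1_000_000, clips_per_unit=10, limit=50)) # (50, 999995)
```

The machine in the second call is no less “intelligent” than in the first; it simply has a stopping condition it did not choose and cannot override.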

Or Musk’s argument of an AI tasked with making money in the stock market, where the machine could manipulate events affecting a company in order to make its stock more or less valuable. For example, this could mean crashing a plane by hacking its navigation systems, forcing the stock of that airline down. In these arguments, clearly, it is not the realization or intention of the machine that is the problem; it is the access it has to the resources needed to go after things or affect events in pursuit of its task. For this theoretical machine to be a threat, it would have to be connected to something, at least the internet. For Bostrom’s example, one would have to put an army of massive mining machines at its disposal. In both cases there would have to be no safety considerations in its design at all: no off command!
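
A minimal sketch of that last point, with names invented purely for illustration: the agent’s reach is exactly the set of actions it has been granted, and an off command that lives outside its objective ends it unconditionally.

```python
# A hypothetical sketch (names invented here for illustration) of access plus
# the off command: the agent can only touch what it is explicitly granted,
# and a shutdown outside its objective ends everything.

class GatedAgent:
    def __init__(self, allowed_actions):
        self.allowed = set(allowed_actions)  # the full extent of its "reach"
        self.running = True

    def act(self, action):
        if not self.running:
            raise RuntimeError("agent is shut down")
        if action not in self.allowed:
            return f"blocked: {action!r} was never granted"
        return f"executed: {action!r}"

    def shutdown(self):
        # The off command: not part of the task, not overridable by it.
        self.running = False

agent = GatedAgent(allowed_actions={"read_market_data", "place_order"})
print(agent.act("place_order"))       # executed: within its granted access
print(agent.act("hack_nav_systems"))  # blocked: no access, no threat
agent.shutdown()                      # the off command always wins
```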

Sure, uses of AI or strong programs, and the surrendering of massive resources to unchecked machines like the ones described in these scenarios, should be monitored and regulated, and common sense should prevail when designing safety mechanisms. But this is very different from fearing AI because of its awakening to ‘realizing’ or ‘considering’, and that is what I’d like to explore.

We could go into explanations of what AI is achieving in terms of weighing options and making decisions, and how this resembles the basic operational processes seen in the brain when compared to a network of specialized and trained neurons. However, I’d like to remind us that there is an abyssal gap between that neural-network resemblance and the perception of Self, with the decision-making processes that follow from it.
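
For concreteness, this is all that “weighing options” amounts to at the smallest scale: a generic textbook neuron (not any particular system) computes a weighted sum and squashes it through an activation function. Nothing in it perceives, considers or realizes anything.

```python
# A generic textbook neuron: the entirety of its "decision" is a weighted sum
# passed through an activation function.
import math

def neuron(inputs, weights, bias):
    """Weighted sum plus bias, squashed by a sigmoid to a value in (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# "Deciding" how strongly to fire, given three input signals.
print(neuron([0.9, 0.1, 0.4], weights=[0.8, -0.5, 0.3], bias=-0.2))  # ~0.64
```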

We are so used to the basic and primordial notion of Self based on the seeming experience of duality (Self and Other, that is) that we naturally believe all “thinking” in the universe happens exactly the same way. But that is, in both theory and practice, a faulty concept. It is very hard for us to realize that there is another, non-dualistic, way of perceiving. We have been perceiving dualistically for millennia, since way before we were erectus, let alone sapiens. So how can we break such an ingrained, hardened, basic, extremely subtle and habitual form of perception? Well, luckily for the intended purposes of this blog, we don’t need to, although this is known to be possible. But we do need to understand the concept, as we are erroneously applying it to AI (and to gods and aliens too), thus feeding fears unfounded in reality.

This dualistic bias is incredibly hard to grasp, as it is ridiculously hard even to notice and understand in the first place. But by having at least a tiny glimpse of the possibility that there is another way of perceiving, and thus of thinking, we can allow for the possibility that all-powerful, non-human minds are not necessarily worth fearing.

Not all movements of all minds start with Self and Other at the very outset, and this is true even for (some) humans. For AI in particular, duality would need to be coded explicitly, and to no useful end. We would need to program it specifically and on purpose, and we don’t know if we can actually code something we don’t fully understand.

So if we ever encounter an intelligent being as powerful as us sharing this tiny blue planet in any way, we must allow room to understand the underlying movement of its mind, as it might be so different from our dualistic habit that the only threat to a constructive outcome might be our own bias and fear.

We must remain open to different paradigms, not to embrace them necessarily, but just to be able to perceive them; otherwise it will be close to impossible for us to understand truly different beings such as gods, aliens or AIs.
