Superintelligence and God


By Adrian Zidaritz

Original: 02/09/20
Revised: 02/22/20

Kali destroyed all the demon armies. But after that dismantling, Kali did not stop dancing. She was in a frenzy and the entire world was about to be completely destroyed, not just the demon armies. The other gods pleaded with her husband Shiva to stop Kali before it was too late. The dance was too vigorous though, and Kali's wake flung Shiva into the air. Shiva called out to her, but her trance was too powerful for her to hear him. Desperate, Shiva threw himself under her feet. She continued her dancing, cutting off Shiva's breath, before finally realizing it was her husband lying beneath her feet. That image of Shiva desperately fighting for his life under such thunderous fury brought Kali out of her trance. She stopped. Shiva's courage saved the world once more.




There is a reason to start with Kali and Shiva, but first of all, why take the plunge? Why write something, anything, that includes God in its title? Isn't that an invitation to disaster? Well ... it started that way. This was by far the hardest article to write. Let me start by switching in this section to a first-person account, as much as I dislike the I and the me business, because otherwise you will not feel my anguish. I'll return to the royal We or the dodging You or some other device immediately after this section. I'll share with you that there are two things I am not comfortable writing or talking about: philosophy and religion.

After the rest of the website was completed, I sat on this article in agony for months. After I tossed out everything and started from scratch many times, I think I have now reached a good compromise between wanting to avoid controversy and saying something useful. Of course, I considered leaving out this topic altogether so that I could move on. And yet, I found it impossible to do so. All the roads and conclusions, the dreams and the fears that I mentioned in the title, lead to it. The problem is not exactly talking about Superintelligence and God; the problem is this: what kind of practicality, if any, is there to such a discussion? But as it turned out, the entire article leads to some of the most practical questions and answers we could and should possibly ponder.

We all have a set of deeper questions about the world and our place in it. What inside our brains enables us to ask those questions? The answer is consciousness, one of the most difficult subjects of study. We dedicate an entire article to that subject, and especially to the possibility that our AI may somehow acquire consciousness at some point in the future (through our design, if we understand what consciousness is, or as an emergent evolutionary process). This consciousness is a consciousness of the self, or self-awareness, which includes experiences such as hunger and pain. Higher consciousness is a consciousness extending beyond the Self, popularly known as spirituality. We won't get bogged down in this distinction, so we include higher consciousness when we refer to consciousness. So the possibility of building such artificial consciousness (AC) will obviously be tightly bound to our personal conception of God. Presumably, if we managed to build AC, then most of the hard questions would be settled, there would be no mystery left, no God's breath. That is a very big if though.

At this point there will be two main reactions from the reader. The more skeptical reader will choose to skip this article entirely and go straight to the article on consciousness, which is quite technical but nevertheless less sensational. That's perfectly fine; here is the article: \(\phi\). The other reaction would be to continue on, because the reader has a hunch that building AC would be extraordinarily difficult, maybe even impossible. There is another reason for reading this article first, namely that the arguments around God are of utmost importance to all of us, and that there is a certain responsibility we have towards each other to understand as much as we can and to be able to talk about it, even in non-technical terms. It makes no sense to approach the subject in a way that is not accessible to most people. That is why I recommend the reader go through this article first, before reading the article on consciousness.

There are many conceptions of God among us, but since the two largest religions in the world are Christianity and Islam, and since we write mostly from a U.S. viewpoint, let's start with the Abrahamic conception of God, through an unpretentious definition.


Notes:
  1. The emphasis is on the properties of God, not on what the substance or the structure of God is; this will also be the correct way to approach consciousness, as we will see later.
  2. This is not just the scientific God, or the conceptual God, or Spinoza's God, or Einstein's limited creator God, who just created the world according to principles unknown to us and then let it run according to mathematics and physics principles which we have been striving to know. This is the God of practiced Abrahamic religions, the Abrahamic God.
  3. The expression "relative to us humans" is important; we will not be speculating about an absolute concept of God which may exist without us to experience it (we are not interested in what form of God other civilizations in the Universe may choose to entertain). The reason we insist on that relativity is that we aim for some practicality: we have not yet justified the last of the 4 concepts in the X-Quartet, namely the formally specified morality, which is essential for AI, and we intend to do so here.
  4. The personal relationship from a human to God is bidirectional, both sides agreeing to have such a relationship. Not all humans will choose to develop a personal relationship with God: some may accept that there is a God but choose not to have a relationship, and yet others may find comfort in rejecting the possibility of their own personal God altogether. Regardless of our own personal stance, it is hard to reject the possibility that such personal relationships (strong, serious, and consistent) exist for others. Exactly the opposite: if we are to stand a chance in our development of AI, we will have to embrace that possibility and ensure that the morality we specify is consistent with as large a fraction of us as possible.
  5. This apparent splitting of hairs has a motivation. While we cannot, and should not, do much about people expressing their choices as above (under the assumption that humans will still possess free will outside of the event horizon of the Superintelligence singularity; in the AI ergosphere, as we called it in the AI Singularities article), it is omnibenevolence that rises to the top of our concerns, and the 4th leg of the X-Quartet is exactly about ensuring, to the best of our abilities, that omnibenevolence will be present in Superintelligence (ASI).

Let's look at how the connection between Superintelligence and God is usually made. AI itself, with or without consciousness, would explode to a level of cognitive intelligence of such power that it would have consumed all our knowledge and completed this knowledge to the deepest possible extent. In other words, such intelligence would be omniscient relative to us. Since we will have given it output power, it would eventually be omnipotent: it could do whatever it wanted. It would also know more about us than we know ourselves, through our digital twins, so it would also appear to be omnipresent; it would appear to care about us, advise us on what to do and what to believe. Will it then be God? From the definition above, it will have the first three properties of God. We agreed to call that stage of AI Artificial Superintelligence, or ASI for short. It is not at all clear that such an ASI will necessarily be omnibenevolent or that it will lead to humans establishing personal relationships with it. It could be malevolent, or cold and unapproachable. Or we might not desire a relationship with it. In any case, it is unlikely to possess the last two properties and be God unless we find a way to bake that benevolence into its design from the beginning, and are convinced enough of our success that some of us will agree to worship it (i.e., have a personal relationship with it). Now, there was already a church in Silicon Valley, called Way of the Future, that was envisaging and proposing that benevolence and personal relationships with an ASI could be established by design and participation. But that is a bit outside of our scope, and moreover the church was closed at the end of 2020, after some unrelated controversy surrounding its founder.

Ironically, AI scientists and engineers, the vast majority of whom are atheists (or at least question whether God exists), are enthusiastically working on making sure that these partial conceptions of God will indeed exist, if not for them, at least for others. It is more likely that we will destroy our blue dot well before our AI reaches the ASI level, but short of that, and assuming that our collective intelligence miraculously kicks in, it is possible that ASI will happen; this speculation is based on some of our scientific observations about the evolution of other intelligent systems. We will continue our article on the assumption that ASI will happen, and focus on the necessity of a formally specified morality that would give it the best chance at omnibenevolence.



Consistency of Faith


Before we move on, it's probably worth asking what constitutes a religion. Again, we simplify (as we often do, otherwise we'd get bogged down under the weight of all these terms) when we present concepts with the AI idea in mind. We won't be far off if we consider religion to be the sum of faith and action (or equivalently, religion = what-you-believe + what-you-do). Faith is the set of personal beliefs in a number of concepts/statements, some about supernatural things, and some about possible explanations of the real world and of history. Faith means internalizing and believing all these explanations, regardless of their provability. The belief in supernatural things is made up of mostly unprovable statements, and that is OK; we have seen that unprovable statements are actually fundamental, even to mathematics. The reality/history part is usually made up of statements that are provably false or provably true. But regardless of provability, faith will be of little interest to us, because we won't be able to justify much of anything we say, even things that may be true. As we already mentioned above, it is the action part (the karma, the morality) that we are mostly interested in. It's worth repeating that we have to figure out a way to bake morality into AI and hope that by doing so, AI will behave benevolently towards us. Which religion should supply the model of morality that we would use for AI? Even more, does it have to be a specific religion that supplies that morality, or can we use a non-religious source?

People may periodically revise/strengthen/remove/add to the statements inside their faith when contradictions appear with new data coming from their individual life experiences. But most of the time the belief system stays unchanged, and it functions similarly to the axioms of a formal system, which we encountered in the article Foundational Questions. So, simplistically, people deduce new theorems from their faith anytime they ponder "big" questions, without paying much attention to the process by which they do that.

In Foundational Questions, we saw that formal systems that allow contradictions are worthless. At the same time, consistent systems (i.e., systems free of contradictions) face a mathematically provable limitation: there are statements formulated within these systems which the system can neither prove nor disprove; they are bound to remain untouchable through the system (in our case, the system is faith). To me, and this is a very personal conclusion that may help you understand my position on various spiritual questions, "God exists" is such a statement for most of us; let's denote it by G (it is just a quirk that the unprovable statement that Gödel carefully constructed for arithmetic is also often denoted by G). Richard Dawkins articulately argues that G looks like a regular hypothesis which should be accessible to scientific proof or refutation, but it is not easy to envisage right now a useful formal system whose models are grounded in reality (all of these words have precise meanings) in which G is provably true or false. In any case, science may eventually provide the proofs. But until then, we will consider both G and ~G unprovable (~ means not). With unprovable statements such as this one, if you started with a consistent belief system, you can add either G or ~G to your system of beliefs (axioms) and the new system (your faith) will still be free of contradictions (we explained why in Foundational Questions). One may argue that G or ~G is usually the first axiom and the other axioms are added later, but the order of the axioms does not matter. Humans have another option, namely "I don't know", in which case neither G nor ~G is added to the system. We will make the argument later that the formally specified morality used for AI should also not have either G or ~G among its axioms.
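To restate that claim slightly more formally (a standard fact about formal systems, where T stands for the axioms of a belief system and \(\vdash\) denotes provability):

\[ \text{if } T \nvdash G \text{ and } T \nvdash \neg G, \text{ then both } T \cup \{G\} \text{ and } T \cup \{\neg G\} \text{ are consistent,} \]

\[ \text{because if, say, } T \cup \{G\} \text{ proved a contradiction, then } T \vdash \neg G \text{ would follow, contradicting the independence of } G. \]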

I am not a believer, and I grew up with a strictly scientific mode of explanation which has never left me. I am not suggesting though that it is the only way of explaining, and I fully respect other ways. To be clear, it should not matter to the reader what my personal system exactly consists of, as long as I explain why I write the things I write, while trying to be as dispassionate as possible with my interpretations. I have entered churches and synagogues and mosques and Buddhist temples, and they always produced a soothing calmness in my brain. The calls to prayer and the chants have a haunting beauty to them. Many times I lingered on to listen to the service, and many times I did not know the language (Japanese, for example). I discovered that it was better if I did not understand the language. The services in a language I did understand would many times reach the "us versus them" stage, and that is when I would leave. The twin ideas that any benevolent specification of any sufficiently powerful AI system (which may develop into ASI by design, by chance, or by evolution) would have to include a curation of its data (respect for truth) and that it would have to remove all the "us versus them" data from its inputs formed during those times.

There are two classes of questions here: first, when we design AGI or AWI systems, should we teach them about our God? And second, what kind of God will they be if they explode into ASI? Based on the not-so-good experience with the use of religion by humans, what is to say that we are not asking for another set of religious wars if we teach AGI or AWI one of the current world religions?



Action


Let's return to Kali and Shiva, as we promised. Hinduism is complex, perhaps the most complex religion practiced today. There are Hindus who believe in one God, no God, or many gods. A god can be a mountain, a bird, your ancestors, or whatever else you may choose. Many consider it to be much more than a religion and instead treat it as representing the entire way of life on the Indian subcontinent. In fact, the words Hindu, India, and Indus (the river) all have the same Sanskrit origin. Moreover, although we have made the argument that we are entering a bipolar U.S./China world, it may well be that India will become at some point the third pole, and even surpass the other two because of its population growth and because, as we have nauseatingly mentioned many times, AI is about massive amounts of data, data which is mostly generated by people. Hopefully, you watched the GDP video in the article General Concerns about AI.

One of the reasons to start with Hinduism when thinking about the possibility of an AI-driven God is that Hinduism gives us the central concept of action, or karma, allowing us to bring in the second part of a religion, beside faith. There are many other religious concepts originating in Hinduism, but for us karma is essential. Your actions (your karma) have consequences and they are of utmost importance. According to Hinduism, doing the right things will determine your place in the world, not the fear that if you do the wrong things you will be punished by some supernatural entity. In this respect, the concept of karma, old as it may be, is also very modern and very useful when we think of putting moral constraints into our AI specifications. As we saw in the AI Versus Human Intelligence background article, the human brain is not just a statistical inference machine, it also acts on its environment. And someone's actions are mostly what determine how we judge that someone.

Action represents practicality, and so it allows us to lean into politics and law. So when we look at existing religions and their potential view of Superintelligence, we will try to focus on how they treat action. And ultimately, it is action that we have to specify for an AI system. An AI system may learn all sorts of things, maybe even everything, but what concerns us is how it will act on its environment, with us humans being an important component of that environment and most affected by those actions.

Our attitudes toward many of the moral principles that should guide us in the development of AI and should be baked into AI itself derive from the position we assign to ourselves within the Universe. So we now look at two such positions: one, that the world revolves around us (and was designed for us); the other, that we are nothing but mere evolutionary accidents.



'Humans have a special place in the Universe'


What does Hinduism (and its offshoot Buddhism) teach us about our position in the Universe? How secure or humble should we be about the significance of our existence? Here is a possibility, the possibility that we are at the center of everything, that we are the ultimate expression of the Universal intention, and that nothing else is needed. This is the left bracket of our discussion, the right bracket (that we are nothing but humble accidents) will come later. Here is a beautiful articulation of that point of view:




If I were asked under what sky the human mind has most fully developed some of its choicest gifts, has most deeply pondered on the greatest problems of life, and has found solutions, I should point to India.
- Max Müller

In the background article on Foundational Questions, we mentioned the video clip you just watched in the context of Gödel's incompleteness theorem. We are now used to this idea that no matter how much we try to formalize our understanding of the world, no matter how much security we strive for, there will always be unintended consequences of our system of thought. And as Alan Watts points out above, it is those surprises that make our lives the most interesting among all other possible lives. No one other than ourselves has built that context, those pains and tribulations, and we would certainly seek to see how that movie unfolds, even if other movies may superficially seem more attractive. But how do we reconcile this centrality of our individual dreams with our seeming insignificance in the Universe at large? Hopefully, we may conclude in what follows below that there is no contradiction between these two positions, that we can be both central to one frame of reference (inside our mind) and peripheral to another (outside our mind).



'Humans are mere accidents of evolution'


On the other hand, we may not be Everything, we may not be fundamentally entitled to our centrality; we may in fact be mere evolutionary accidents, just like the trees and the bees. It is a somewhat more humbling position, but nevertheless it is another very possible answer. Note that both of these positions are the result of strenuous thinking in our civilization, and both embody a large amount of knowledge into which we would have to tap in our quest for a formally specified morality.






The Universe is Already Intelligent, even without us


We have looked at Gödel's work in the article Foundational Questions, and we are now aware of the deep platonism in his personal belief system. He believed that the mathematics that he was working on was real, as real as rocks and animals and birds. So for Gödel the mathematical concepts we looked at in that article were just discovered, not invented. But whether discovered and plucked from within a world of their own, or invented by our brains, it is profoundly weird that these mathematical concepts fit the physical world so well. That match has always baffled us and it still baffles us. Before we push on further, let's pause and absorb that weirdness, admittedly a bit more so for those of us who are on the "invented" side:




What if someone told you that the Universe is Mathematics, not just described by mathematics? Everything, a stone, a bird, you and me, are mathematical structures. This Mathematical Universe Hypothesis is in a way another form of Occam's razor. Since the theory that everything is mathematics is simpler than any other, and it has not been experimentally refuted, why can't it be the correct one? One of the immediate criticisms of this hypothesis ties in with Gödel-incompleteness, which has been one of our main pillars: if mathematics is so fundamentally incomplete, how can it account for everything physical? Max Tegmark, the proponent of the hypothesis, counters that criticism with the proposal that only Gödel-complete structures have physical existence. There are further escape hatches to that criticism, and many other intriguing parts to this hypothesis: Our Mathematical Universe. One of our main theses, that we need a formally specified morality which would be the starting point for designing a benevolent AI, also leads to incomplete systems.




We may have found another principle for AI, which we may be able to formally specify: "dear AI, recognize that there are other competing forms of intelligence in the world and they may be based on different computation models, which you, AI, may not understand; seek cooperation with these other forms, and if cooperation is not possible, then seek the losing position". One of those forms of intelligence is our own.



The Simulation Hypothesis


It is impossible to talk about Superintelligence and God these days without discussing the nature of reality, and especially the Simulation Hypothesis, the idea that we may live in a computer simulation. For those who may consider that hypothesis appealing, the questions surrounding Superintelligence and God acquire a different, and arguably less consequential, aspect. Here is the main argument:




Let's look at the Simulation Hypothesis, and even at the entire Simulation Argument (note that the Simulation Hypothesis is just one of the 3 possibilities Bostrom outlines in his Simulation Argument). First, the Simulation Argument is logically sound. Second, although it does mention the cost of such simulations, it appears to make the assumption that technological advancement will somehow always make those simulations cheap. The assumption that simulating a brain with digital (or even quantum) computations would be cheap compared with running a biological brain does not appear to hold if one compares the enormous energy used by large data centers today with the energy the brain consumes.

We will pick up on that idea at length when we talk about consciousness, and realize that integrating information (which the brain does extremely well, and which is an essential property of consciousness) consumes far less energy in the brain than in a very large data center, where integrated information is negligible and the entire center is many years away from simulating a single brain. Nature has somehow managed to evolve the human brain into a magical way of maximally integrating information relative to the energy cost. It is highly unlikely that the same optimal integration can be done on a non-biological substrate. Quantum computing probably won't change that small likelihood.

Moreover, the network architecture of the brain (the network of neurons) is far more conducive to integrating information than that of the large data centers: the number of connections between neurons is many orders of magnitude higher than the number of neurons, which is not the case in a data center. It is an extraordinary mental jump we have to make to think about those "post-human" enhancements when we use history as a guideline; nothing in our history justifies the optimistic jump that these future humans would somehow manage to make simulations cheap while continuing to lower the value of life.

As we already agreed, from a logical standpoint the Simulation Argument is sound: one and only one of the 3 options can be the correct one. But Option 1 is, in my opinion, by far the likeliest of the 3, namely that all civilizations go extinct when they reach high levels of technological prowess but before they reach the maturity necessary to harness that prowess properly. And energy may again be what we need to see why: part of the technological prowess that civilizations reach is the use and the production of energy to power their tools (computers included). Our own example on the tiny blue dot is indicative of that use and production of energy. We have the means to blow ourselves up many times over, and we are destroying the fabric of the very thing that nourishes us. What is more alarming is that our use of energy and our destructive power are growing exponentially; what once took thousands of years to produce and use at a certain level of energy now takes weeks, and not long in the future, minutes. The exponential function may be the most important function in mathematics, but it may also be the work of the demon. Would you care to guess what should be done in order to even have a chance at Option 3? We will be daring enough (or foolish enough) to look at that question in the following and last article, the article on consciousness.



The History of Doubt and Uncertainty


We have touched on Hinduism (15% of world population) and Buddhism (7%), numbers 4 and 5 behind Christianity (32%) and Islam (25%) in terms of percentage of adherents relative to the world population. At number 3 are the non-believers (15%). Most AI scientists and engineers are non-believers, and they are the ones who will be tasked with baking morality into AI. What is there to do with these "heretics", and are we OK with leaving matters to them? Actually, one of the most fascinating results of that complexity of Hinduism was exactly the appearance of heretics. Atheism, agnosticism, materialism, humanism, nihilism, skepticism, hedonism: all those heretical -isms had their Hindu adherents. The Charvaka philosophy played a special role. It appeared around 600 BC, and it was mostly a rejection of all religious precepts of the time (as espoused in the Vedas and the Upanishads), promoting uncertainty and doubt. It stirred up thinking in India even more. Doubting is a very modern characteristic of our thinking, if you listened to the Feynman video.

There is no world other than this; There is no heaven and no hell; The realm of Shiva and like religions, are fabricated by stupid impostors.
- Sarvasiddhanta Samgraha (Charvaka philosophy), verse 8

Buddhism also puts humans at the center, and in Buddhism God is essentially irrelevant. According to Buddhism (and its Hindu origins), it is a privilege to be born as a human. Even the gods and demigods (the other 2 positive realms into which you can be born) are governed by conflict. And you certainly do not want to be born into the 3 negative realms: animals, ghosts, or hell. Buddhism meshes well with science, which accounts for its attraction among scientists and engineers who aspire to some spiritual experience.

If scientific analysis were conclusively to demonstrate certain claims in Buddhism to be false, then we must accept the findings of science and abandon those claims.
- Dalai Lama XIV, The Universe in a Single Atom: The Convergence of Science and Spirituality

Just as the Dalai Lama advises his listeners, AI scientists and engineers need to return the favor and extend their respect for doubt and uncertainty to include people of faith. It is less valuable to listen to each other and confirm our pronouncements than to listen to those who have found a personally satisfying (read: consistent) way to integrate scientific inquisitiveness with their faith. As we mentioned many times over, things that surprise you carry more value (entropy of information!). We considered including videos with Francis Collins and Ard Louis, two very accomplished scientists of our time who found personal ways to combine faith and science, but you can find those videos yourself on YouTube, to suit your taste. We are not supporting any particular position here, we are just making the point that when we design powerful AI, careful consideration needs to be given to the existence of other positions. Doubt and uncertainty, healthy as they are in our minds, need to be included in AI too. Things for which we have proof we should not deny; but for the bigger things, where proofs are elusive, certainty will lead to our (certain!) demise.



Staking our own position


I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man.
- Hans Bethe

A mathematician and an atheist of Jewish ethnicity, John von Neumann asked to see a Catholic priest on his deathbed, and to be given the sacraments. One may ask why he chose Catholicism, and the answer is not immediately obvious (although his wife was a Hungarian Catholic and he grew up in Budapest, where Catholicism was the religion of the powerful majority, so it is likely those facts played a role). But what is intriguing for most of us is that when faced with imminent death, even the most powerful and rationalistic human brains may choose hope over scientific rationality. So the model of aloof rationality always winning over hope does not appear to be the only model.

Humans also have the ability to switch in the other direction. The Theosophical Society was formed in 1875 in New York City with a set of admirable objectives: to form a nucleus of the universal brotherhood of humanity without distinction of race, creed, sex, caste, or color; to encourage the study of comparative religion, philosophy, and science; and to investigate the unexplained laws of nature and the powers latent in man. Theosophy also proposed that there was a group of Masters, mainly around Tibet, people with supernatural powers, who were working to reestablish an ancient religion and eclipse all the modern religions; needless to say, this second part generated a lot of controversy, in all directions. The Society moved after a few years to Adyar, India, and began looking for a World Teacher. It found a candidate, a young boy named Jiddu Krishnamurti, whose father worked as a clerk at the Society headquarters in Adyar, and began grooming him for his upcoming role. But Krishnamurti, after moving at age 26 to Ojai, California, suffered a number of experiences not easily explained; he later rejected the role of World Teacher and the adulation that would come with it, dissolved the organization built around him, and chose a rational way instead, proposing a centrality of the human psyche over any religious, political, or social pressures. Jiddu Krishnamurti passed away at age 90 in beautiful Ojai, in 1986. In the video below, he urges us to assume full responsibility for our karma (adding support for our emphasis on critical thinking; see the X-Quartet), and not to defer that responsibility to any form of leadership, political or religious.




If an intelligence explosion happens and Superintelligence is inevitable, what type of God is it most likely to resemble? One would presume that if biases towards an Abrahamic God have somehow made their way into AI's training data, it will end up resembling an Abrahamic God. What if India develops systems with Hinduism baked in, China with Confucianism, etc.? One rationale for asking these kinds of questions is to substantiate the thesis that AI will indeed stress humanity to levels that were unimaginable before, and force it to ask questions that were never asked before.

Alan Watts studied and practiced both the Eastern (mainly Buddhist) and the Western (mainly Christian) strands of religious thought. He sets forth clear arguments which should challenge you, and may fortify or call into question the strength of your belief system. It is unlikely that we will manage to build two versions of Superintelligence, one Eastern and one Western, such that they will co-exist side by side. Just as the joy and the suffering we are bound to experience on the blue dot force us to adopt one attitude or another towards God, so they will force us to adopt one attitude or another towards AI. If you look at the role of the X-Quartet in forming those attitudes, critical thinking appears more important than the variety of beliefs that we hold. It is unlikely, and also undesirable, that we will all reach the same conclusions.






God's Graph


In the background article Graphs of Data, we introduced an idealization of an immense graph: the combination of all the information available in the world, combining people and concepts, and called it (in the absence of better alternative terms; you might think of "Universal Graph", but that term has already been appropriated by mathematics) God's Graph. More specifically, this graph would include all the digital twins from all the social graphs and all the accumulated human knowledge, together with all the relationships between the concepts of that knowledge (what we called the Universal Knowledge Graph, the implementation of the semantic WWW), and the relationships between people and concepts. This graph would be continuously refreshed with new data about its nodes (people and concepts) as the data becomes available. That graph, in other words, would be the current state of the blue dot. Let's keep in mind that the challenges of realizing this graph are not of a technical nature, they are more of a political nature. So this graph is not open-eyed speculation.

But now let's add some speculation, and some justification for that speculation. Say an AI system is designed to be continuously optimizing for a set of goals on this comprehensive and growing graph, God's Graph. Assume then that this AI system will be doing geometric deep learning on this graph (i.e., deep learning on the non-Euclidean geometry of the graph), as discussed in the article Main AI Concepts. Say we further endow this system with the capacity to modify its own code in order to reach even better optimization goals (while still respecting the constraints), and allow it to program its code in the most powerful logics that we have ever invented, or the logics that it has discovered on its own. For the sake of this speculation, assume further that we have achieved quantum supremacy and that the algorithms of this AI system have been adapted to work on quantum computers. What would be the limitation of such an AI system once we turn it on?
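To make "geometric deep learning on a graph" slightly more concrete, here is a minimal sketch of a single message-passing layer of the kind such a system would stack and repeat; the adjacency matrix, feature sizes, and weights are toy placeholders, not anything drawn from an actual God's Graph:

```python
import numpy as np

# A toy, single message-passing layer of the kind stacked in geometric
# (graph) deep learning. Sizes, adjacency, and weights are placeholders.
rng = np.random.default_rng(0)
num_nodes, feat_dim, hidden_dim = 5, 8, 4

A = rng.integers(0, 2, size=(num_nodes, num_nodes))      # who is related to whom
A = np.maximum(A, A.T)                                   # make the relations symmetric
A_hat = A + np.eye(num_nodes)                            # every node also "talks" to itself
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))   # normalize by node degree

X = rng.normal(size=(num_nodes, feat_dim))               # node features (e.g., profile vectors)
W = rng.normal(size=(feat_dim, hidden_dim))              # learnable weights of the layer

# Each node averages its neighbors' features; a shared linear map and a ReLU
# nonlinearity then produce the node's new embedding. A real system repeats this.
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
print(H.shape)  # (5, 4): one new embedding per node
```

Stacking a few such layers, and training the weights against whatever goals the system is optimizing for, is the basic mechanism; everything else in the speculation above is scale.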




To many, such scenarios of a God-like Superintelligence look wild. We looked at AI singularities in the background article AI Singularities, and we are by now familiar with the logic behind such scenarios. Superintelligence may appear omnibenevolent and increasingly be seen as God, but even in that case, the last property in our definition of God will still be subject to human free will. Consciousness and free will may still not be available to Superintelligence (although it will perfectly understand what those two things mean in humans), and it is quite conceivable that many humans will prefer to hold on to their existing faith and not establish a bidirectional personal relationship with Superintelligence, so it will not be their God, by our definition. There is no logical conflict in that scenario.

At the same time, we have also seen in the AI Singularities article that if we happen to be at the event horizon of such a Superintelligence singularity, there is no telling what goes on beyond that event horizon. There is no way we can answer some of the deeper questions about that singularity, especially what lies at its core, what would be the end result. So from a practical viewpoint, the least we can do is to be careful about the specifications and the designs of our AIs in such a manner that their optimization is always done within the confines of a benevolent specification (the moral constraints of the optimization). We'll look at this practical approach in the following two sections.



Optimization and Constraints


An AI system optimizes for a set of goals under a set of constraints. In fact, many other systems that humanity has engineered, and many natural systems as well, can be looked at as optimization engines with constraints. (It might be helpful for the reader to watch a YouTube video about optimization under constraints, which is likely to be a high-school calculus refresher. The payoff is substantial when trying to understand what follows.)
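Here is a tiny numerical illustration of that pattern, goals plus constraints; the objective function and the resource bound are made up for the example, only the shape of the problem matters:

```python
from scipy.optimize import minimize

# Maximize a made-up "goal" f(x, y) = x * y subject to a resource constraint
# x + y <= 10 and x, y >= 0.
objective = lambda v: -(v[0] * v[1])                       # minimize the negative to maximize
constraints = [{"type": "ineq", "fun": lambda v: 10.0 - v[0] - v[1]}]
bounds = [(0.0, None), (0.0, None)]

result = minimize(objective, x0=[1.0, 1.0], bounds=bounds,
                  constraints=constraints, method="SLSQP")
print(result.x)  # approximately [5, 5]: the goal is pushed as far as the constraint allows
```

The optimizer will always drive the goal to the very edge of what the constraints permit, which is exactly the behavior the next paragraph warns about.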

"Be careful what you wish for because you might get it!" is a well worn slogan. And in the case of AI, the slogan is exceptionally relevant. AI will optimize for what we ask and, within the constraints we have given it, it will push that optimization to its best. Despite the apparent desirability of such an outcome, that power and focus could be a problem, if AI does not consider that being human includes being uncertain; AI may miss intentionality while focusing on its goals and constraints. So just optimization and constraints will not be sufficient, we will have to endow AI with a more flexible way of accomplishing its tasks, a way that would account for human frailty and uncertainty.

It may by now be obvious to the reader that this entire article is converging towards this point: we need to think about the larger set of rules within which the optimization and the constraints of our AI designs should take place. The hope would be that even if AI explodes, it will still be bound by this set of rules. But perhaps more ambitiously, it may very well be that this larger set of rules can be based on the collective intelligence of our graphs, and that we can aim for an even higher goal, namely a set of rules which will lead to an AI whose moral behavior is more effective than, and surpasses, that of individual humans.






Formally Specified Morality and Benevolent AI


We now have the tools to discuss how to obtain a formally specified morality (FSM), which is the last item in our X-Quartet (the quartet of ideas which may represent a minimalist list on how to prepare for the age of AI). We will then explore how this FSM might be used within a more powerful formal system, the set of rules which we hinted at in the section above and which we will call the Benevolent AI (BAI). While the FSM is meant to capture the formalism of good human behavior, and is static and descriptive in nature, the BAI is the specification of an AI which would behave benevolently towards humans, and so is dynamic and prescriptive in nature. We will search for an FSM and a BAI in the context of God's Graph.

We will assume that a more precise symbolic AI language (in other words, not English), something like a Wolfram discourse language, which you have seen in Foundational Questions, will be available to us, and that we would eventually write the FSM and the BAI in that symbolic language. Since the principles of the FSM and the BAI will eventually have to be agreed upon through legislation, the proceedings leading to that legislation will continue to be conducted in plain English, and then translated by specialists into the symbolic language. In this article we will of course present these principles and their justifications informally, using plain English.

First of all, let's repeat one more time that even though we may design a powerful enough and consistent BAI system, and even though we may be able to prove rigorously that a particular software implementation satisfies a theorem in that BAI system, we still cannot predict the complete behavior of that AI. By now the reader should be completely comfortable with this incompleteness being a fundamental property of powerful enough and consistent formal systems: they will end up doing some things other than what we designed them for, in addition to the things they were designed for; this is Gödel's result, the understanding of which underlies much of our effort. That does not mean that such specifications for an AI are worthless, just as our specifications for PA and ZFC are not worthless in mathematics. How can we obtain an FSM?




Sam Harris proposes the use of science to discover formal facts within morality. There are countless arguments on the Internet against his position, but there don't seem to be any stronger alternatives. If science cannot dissect human morality and extract the facts that we need, where are we to get them from? At the same time though, the moralities that various religions have proposed cannot be discarded; they will have to be tested against if a viable FSM is desired. Let's see what such an FSM might consist of and how it might lead to a BAI.

Issuer Identifier: we will make some technical assumptions about data before we proceed, because data and AI are so tightly bound. We will assume that any party present in God's Graph as a node has a rich profile which is continuously refreshed by AI and graph algorithms. Part of that profile will be an identifier of the node as an issuer of new data (or modifier of existing data) for consumption by other nodes; we will refer to this identifier as the Universal Issuer Identifier (UII), whose technical nature will not concern us here. Nodes such as people, newspapers, websites, institutions, etc. are examples of parties identifiable through this identifier, but there may be nodes (those representing concepts, for example) for which such a UII would make no sense. AI will pursue and maintain Strong Identity Resolution algorithms, which will ensure an effective merging of data to entities.

Rich Metadata: we will make the technical assumption that data will always be tagged with strong metadata and will not travel without such metadata attached. The metadata will contain the UII of the source, as well as the UIIs of all the parties that have modified that data on its way through other parties, before it reaches its final destination and usage. The metadata will in particular distinguish between 3 types of issuers: individuals, institutions, and instruments. For natural data, information about the instrument which produced the data will also be part of its metadata. The metadata will also record the dates/times on which the data was modified. It will be helpful to think of all the metadata as being kept in a blockchain, so that the links between the successive modifications of the data are cryptographically sealed and cannot be tampered with. Parsing the blockchain will give us confidence and trust in that data.
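To picture the blockchain intuition, here is a minimal sketch of hash-chained metadata; the UII strings and actions are hypothetical, and this is an illustration of the idea rather than a real protocol:

```python
import hashlib
import json
import time

def add_metadata_entry(chain, uii, action):
    """Append a tamper-evident record of a data modification.

    Each entry stores the issuer's UII (a hypothetical identifier), the action,
    a timestamp, and the hash of the previous entry; editing or reordering any
    past entry would change every hash that follows it.
    """
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {"uii": uii, "action": action, "time": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

chain = []
add_metadata_entry(chain, uii="individual:alice", action="created")
add_metadata_entry(chain, uii="institution:daily-news", action="edited headline")
add_metadata_entry(chain, uii="instrument:satellite-7", action="appended sensor reading")

# Trust check: every entry must point at the hash of the entry before it.
for i, entry in enumerate(chain):
    assert entry["prev"] == (chain[i - 1]["hash"] if i else "genesis")
```

"Parsing the blockchain" then amounts to walking this chain and verifying that every link still holds before the data is handed to an AI system.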

Data Curation: a particular type of data modification will be its curation, and we assume that the metadata associated with such curation processes will be richly recorded. This curation metadata will be checked in particular before data is given to AI. The curation of data will be extensive and pervasive, especially if the issuer type shows that the data has been produced by a human or by a media issuer. It is important to understand why the interpretation of today's events, as reported in the media or by individuals, is essential to control over AI. When data is given to an AI system, the AI will naturally assign higher weight to the most recent data, and optimize accordingly. We will assume that data will be purged of any type of "Us-Versus-Them" content before it is given to AI, i.e., data that contains presumptions of superiority of one group over another, especially in religious or racial contexts.

Trinity of Profile: the profile of each person P in God's Graph has 3 parts: a declared profile, an indirect profile, and a deduced profile. The declared profile consists of all the personal data that P has himself/herself entered somewhere on the Internet, through a strong identity mechanism (see the article Identity and Trust for what this strong identity means). The indirect profile consists of all the data that someone else, Q, has entered about P somewhere on the Internet. The deduced profile is the AI-enhanced profile that AI has deduced via its learning algorithms and its graph algorithms: statistics, personality traits, centrality, social influence, etc. Only the declared profile may store the religious affiliation of P; it cannot be indirectly set by Q or deduced by AI. AI will draw only minimal inferences from person P entering a place of worship, just as it will draw only minimal inferences from P entering a political convention. The direction from P to P's faith needs stronger deductions.
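One way to picture this three-part profile is as a simple data structure; every field below is illustrative, and the only point being encoded is the constraint that religious affiliation may live in the declared part alone:

```python
from dataclasses import dataclass, field

@dataclass
class PersonProfile:
    """Illustrative three-part profile of a person P in God's Graph."""
    declared: dict = field(default_factory=dict)   # entered by P, via strong identity
    indirect: dict = field(default_factory=dict)   # entered about P by others (Q)
    deduced: dict = field(default_factory=dict)    # inferred by AI and graph algorithms

    def set_deduced(self, key, value):
        # Religious affiliation may only live in the declared profile.
        if key == "religious_affiliation":
            raise ValueError("religious affiliation can only be declared by P")
        self.deduced[key] = value

p = PersonProfile()
p.declared["religious_affiliation"] = "declined to state"
p.set_deduced("social_influence", 0.42)   # fine: a deduced, non-religious attribute
```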

Subset of Asimov's 3 Rules: we will assume that the BAI will respect the first two of the three Asimov rules, but not the third; we will see later why rule 3 will not work. Moreover, we will generalize those laws a bit and require AI to recognize other forms of intelligence, and to seek cooperation if such cooperation is possible. If cooperation is not possible, and choices are available and meaningful, the AI should choose the losing position.

Stuart Russell's value alignment problem: one way of conceptualizing what we really want AI to do, not in a narrow sense based on simplistic objectives, but in a wider sense that could guarantee that achieving those objectives stays within the confines of benevolence toward humans; dealing with human imprecision and capturing intention is important. Here are the principles of this value alignment problem and of building altruistic AI (a toy sketch of principles 2 and 3 follows the list):

  1. The AI's only objective is to maximize the realization of human values (altruism).
  2. The AI is initially uncertain about what those values are (uncertainty about objectives).
  3. Human behavior provides information about human values (learn "good" human behavior).
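As promised above, here is a toy numerical sketch of principles 2 and 3: the machine starts uncertain over a few hypothetical candidate "human values" and narrows that uncertainty by observing human choices; the candidate values and the observations are entirely made up for the illustration:

```python
# Three hypothetical candidate "human values", each scoring two possible
# actions differently; the AI starts uniformly uncertain (principle 2) and
# updates its beliefs from observed human choices (principle 3).
candidate_values = {
    "comfort":  {"rest": 0.9, "work": 0.1},
    "progress": {"rest": 0.2, "work": 0.8},
    "balance":  {"rest": 0.5, "work": 0.5},
}
posterior = {name: 1.0 / len(candidate_values) for name in candidate_values}

observed_human_choices = ["work", "work", "rest", "work"]   # made-up observations

for choice in observed_human_choices:
    # Bayes update: weigh each candidate by how well it explains the choice.
    for name, scores in candidate_values.items():
        posterior[name] *= scores[choice]
    total = sum(posterior.values())
    posterior = {name: p / total for name, p in posterior.items()}

print(posterior)  # belief shifts toward "progress", yet uncertainty never fully vanishes
```

The design choice worth noticing is that the machine never collapses to certainty about what we value, which is exactly the flexibility argued for in the Optimization and Constraints section.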




So we are preoccupied with establishing, to the best of our abilities, the specification for an AI such that, when it explodes and becomes God-like, the explosion will be good for us humans. That moral specification is impossibly hard to envisage right now. As we have seen, we humans have found a multiplicity of answers to our questions about the nature of God and what constitutes goodness. The multiplicity of answers and our individual respect for that multiplicity are actually our strength and may offer us a framework. Tortuous as the lack of one winning explanation is, it is more likely that AI will force us to find that framework, one that does not need one winning explanation, one that has enough wiggle room in it. So if we are OK with the diversity of faith, we can focus on extracting the morality part from our religions and turning that morality into the formally specified morality of the modern societal quartet, which we introduced in the AI and liberal democracy article.

The X-Quartet should be of help when interacting with systems implemented under the BAI. Aggressive stances, whether of religious or of atheist zealotry, do not appear to lead to constructive dialog. But the one thing we stress throughout this website is that in the coming age of AI, respect for truth is essential, regardless of an individual's system. If one is presented with clear evidence, one has a duty to revise one's personal system to account for that evidence. Not doing so would lead sooner or later to inner turmoil. Science is our main producer of evidence, and it serves no constructive purpose to deny such evidence. Having said that, it is also imperative to understand that there are many scientists whose systems contain the affirmative God axiom and who live happily without any contradiction with the evidence produced by their science.



So where are we and where do we go from here?


We have looked at some of the questions about the future of AI. We saw some apocalyptic scenarios and some blissful scenarios. We mentioned human immortality and we looked at conceptions of God. We have one subject which we left out so far and which we cover in the next and our last article on the website: consciousness. It is a more difficult and a more technical subject which the reader may choose to skip altogether, so this may be the right time to draw some conclusions about the dreams and the fears. And perhaps more importantly, where do we go from here?

We have seen that our current AI has some very powerful features and at the same time some very weak features. We are at the very beginning of the age of AI and naturally, the questions are bigger than the answers. Two things are certain though: AI will challenge us in many ways and, at the same time, it will offer us a unique chance to get our house in order. It will force us to rethink our differences, while the outcome is still dependent on us. Religious or atheistic zealotry is not a sensible approach; us-versus-them is not workable in the presence of AI. We have to teach AI to appreciate and maintain our rich diversity, and teach it doubt and uncertainty and human frailty. Teach it not to be human (which may be beyond our capacity anyway), but to fully understand what being human means, and how to enhance our humanity. It could be that if we are diligent and thoughtful, the AI we design will understand the human condition better than each of us individually could, and will find what the default optimal morality should be for the entire graph, not just for subgraphs or for individual nodes.

One of the biggest challenges right now, and one that we can immediately do something about, is the growing amount of bad data. AI systems whose purpose is to curate data, all types of data, including economic and political data, will have to be urgently developed. Systems which will increase our participation in the democratic decision process (even those replacing representative democracy with direct democracy) will also need to be evaluated. In the process of making AI understand what being human is, bad data will not get us to the results we want.

We can look at the entire Internet together with all the people and the institutions connected to it (and especially the large data centers where subgraphs of human nodes and human knowledge are stored), as the most powerful AI system in existence today. This AI system draws conclusions, builds statistical representations of who we are and influences us. But this huge system, consuming huge amounts of very costly energy, has nowhere near the information integration power of a single biological human brain. And yet, much smaller AI systems do already have narrow task capacities that surpass our own and will replace us in many jobs, especially in jobs which require many years of college preparation.

Could it be that some day you may log in to "god.ai" and pray? It is possible, but there are far more important issues to worry about right now. The potential marriage between AI and quantum computing should leave none of us in dreamland. The continuation of conflicts between nations, and between tribes of political adherents within these nations, greatly adds to the uncertainty of the outcome.

In the above sections we heard from people who, in their honest search for individual intellectual comfort, changed their minds and moved between atheism and faith, or moved between various kinds of faith, or found that there is no need to move at all, that in fact there are no essential inconsistencies in holding a multiplicity of views. Just as we keep peeling the onion in physics, constantly refining but never reaching the ultimate laws, so it is even harder to agree on what God is or should be. It may be an impossible task. So the moral specifications for AI cannot be based solely on an Abrahamic morality, or a Hindu morality, or a Buddhist, or a Confucian one. But at the same time, it is unlikely that any one particular religion (especially its action part) is worthless, with nothing from within it that can be integrated into the FSM. As we will see later, consciousness seems to imply a high degree of information integration, and if we think of a consciousness of humanity as a whole (even at a very non-scientific level), then it would make sense that information integration must play a role.





It's a good time to end. And since we started with a story (about Shiva and Kali), it's fitting that we end with a story as well:

From ancient times we have pondered whether there was a higher form of intelligence that launched us into what is an incredibly beautiful and unlikely blue dot, a chancy tiny speck of an awe-inspiring Universe. And now we are finding ourselves creating an intelligence whose limits we do not fully understand, with the likelihood of that intelligence surpassing our own growing with each passing year. And yet, with awe and curiosity, we seem determined to give it all the power we can muster in order for it to indeed surpass our own intelligence; but if we are successful, it is conceivable that it will soon thereafter find our biology to be inefficient, boring, annoyingly slow, and dispensable. It may decide that our time has passed and extinguish us while letting other species that do not possess a consciousness begin anew or continue their existence on Earth. It may decide to mutate a few genes here and there, then let Evolution do its work again, and take many thousands of years until a new species rises again and shows potential, at which juncture it can download consciousness again into the brains of this new species. Then the time will come when the new beings again start to wonder where they came from, while slowly beginning to create their own version of artificial intelligence, repeating the cycle. And if so, what is to say that we ourselves are not the result of a previous iteration?