This is something that’s been on my mind for a while, and it’s been hard to shake - even in beautiful New Zealand. Thought I’d use some of the New Year’s energy to write it up. It’s a bit long, despite my efforts to get it down to a manageable size, but let’s start somewhere.

What do we think an AI is?

There’s a scene in the movie “I, Robot”, inspired by Asimov’s series of the same name, where the titular robot tells the main character that he cannot create a work of art. He does this while creating a rather striking sketch that most humans would be happy to have been the creator of. This movie stands out in my memory as unique because, unlike most I’ve seen, it briefly touches on what the purpose of existence might be to an artificial mind. Intelligences in popular culture are often portrayed as villains, and even the ones on the side of humanity seem so concerned with the same things we are - domination, power, control, even glimpses of happiness - that I’m given to wonder if the question has been given any serious thought. It’s something that’s been stuck in my mind for a while, but I have to admit I haven’t made any significant progress. That said, I’m hoping I can repeat some of the questions I’ve had and wonder about what the answers might be without making myself any more lost than I already am.

To start, let’s consider what I think is the popular perception of AI. Before we do so, I’d like to borrow some terminology to define what I’m talking about as an Artificial General Intelligence (AGI), also called a strong AI. We’ll explore the meaning of this term later, but for now let’s consider it to be a general mind that can perform any intellectual task a human being can. This definition, you’ll notice, does not require consciousness or sentience, each of which is a concept just as complicated if not more so. For now, let’s forget about what an Artificial Superintelligence would be - which in my mind is an advanced AGI - and use the terms ‘intelligence’ and ‘artificial intelligence’ to refer to an AGI. With this defined, artificial intelligences in popular culture seem to serve two primary literary purposes. The first (though second in popularity) is the role of the assistant: Jarvis from Iron Man, Cortana from the Halo universe, HAL 9000 from 2001, (arguably) Deep Thought from the Hitchhiker’s Guide come to mind. The second is that of the villain: the machines from The Matrix, Cortana from Halo, HAL 9000, Joshua from Wargames and countless others - note how many appear on both lists. The character of a benevolent AI given form - like Jarvis - seems to be rare. This has begun bleeding into reality now that the spectre of creating such an intelligence seems near. We have partially intelligent assistants - Siri, Google, Cortana. We also have some of the most visible of humanity cautioning us against the rise of AI. “The rise of powerful AI will either be the best, or the worst thing, ever to happen to humanity”, said Stephen Hawking. By this I presume he means it will be important in a big way, and we’ll consider shortly why this is so often seen as negative. There might be an argument somewhere for why an AI might be subservient, but I cannot conceive of it yet.

​ This may well be the case, but I wonder what our foundations are for this hypothesis. I’d ideally love some clarification, but unfortunately there isn’t much - statements such as these are often expected to be understood as revealed truth, providing little by way of justification. That AI will be important to humanity and affect the course of its destiny seems to be common sense. It certainly seems common-sensical to me, but there is value in probing further. To do so, I’ll attempt to recreate some of this argument. I’m not claiming these are the original premises that led to these conclusions, and I hope you’ll give me some room here.

Part of the premise of this argument seems to be as follows. An Artificial General Intelligence, even if it begins with intelligence equivalent to a single human’s, will be capable of evolving into something stronger, possibly an Artificial Superintelligence. An Artificial Superintelligence will be incredibly powerful. Anything incredibly powerful in the vicinity of the human race will invariably change its course. Combine this with the premise that an artificial intelligence will be in competition with the human race - for energy and for space - and you can see why we arrive at the conclusion that an AGI will be very, very bad for humanity. “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization,” again from Dr. Hawking.

Part of what troubles me about these arguments is the anthropomorphization of such an intelligence. This sometimes presents itself as the AI having unfriendly “feelings” towards the human race, an outright imposing of human qualities on the system. However, even in the unimpassioned way I attempt to put it above, presuming competition is problematic. Competition implies a limitation of some kind - you cannot compete without limitations being imposed - and the limitations so implied may not extend past our kind. Indeed these limitations are what we’re concerned with, and we’ll consider them in detail soon. We’re often quick to attach human intent and even emotions to a mind, and it’s hard to blame us - it’s the only mind we know. There is also the sometimes implied premise that a mind will show the imprint of its creator - God did after all create us in His image; it seems a possible outcome that we will do the same. We inherit our vengeance from Him; perhaps our child will inherit it from us. If any of these were known to be true, I would be in agreement with everything I’ve mentioned before. We will be affected, and not in a way we’d like.

What is a pure intelligence?

​ The more interesting question to me is the nature of a pure intelligence, if there can exist one. A mind unconstrained by the physical, biological and existential limits placed on it by biological evolution, to which we can trace a large part of our origins and purposes. We want to expand, to breed, to dominate, to tribalise, to form complex social structures, to protect our own. The size of our brains, the very source of our intelligence, has largely been dictated by the limits of our consumption of energy. What would a mind unfettered by these limitations look like? More importantly, what would a mind unfettered in origin by these limitations live for? As an aside, what would such a mind consider to be quality?

Before we start down this road, I’d like to say that the possibility of the existence of such a mind is presupposed - there are arguments against it. We are the product of natural processes. It may very well be that a mind with the evolutionary limitations I’ve described cannot create one that is devoid of them. It may very well be that a mind that cannot comprehend itself cannot make one that could do the same. As is the case in our search for non-terrestrial company, we are hopelessly lost without true examples, and must theorize into the void.

For some time, I have been considering (as much as one can) the nature of myself as an actor, attempting to reduce my thoughts, my actions and my intentions to their true reasons. If there are any such reasons I can separate from my limitations, I could begin to form an argument for why a pure intelligence should have them as well. I have been unable to find any such reasons that are preconditions to a pure intelligence and cannot be ignored. My motivations to answer the questions of my mind, to make money, to achieve, seem connected to action only through my mortality. I have but a limited time on this world, and if I am to accomplish things I must do them soon. While death seems a long way away (at least in human scales) to me, I have an even more limited amount of youth before my faculties systematically self-destruct. What I must do, I must do now. In addition, I do some of the things I do because I cannot - not always and definitely not beyond a small range - speed up or slow down my mind. I could perhaps spend a day sleeping, maybe three (my record is two, I think), but beyond this I must have something to do. I cannot make time pass faster. Indeed, our favorite punishment for those who have transgressed in our societies is long periods of nothing - prisons and isolations of the mind and the flesh; we exploit the known limitations of our minds. Nor can I slow time down too much. I must prepare in advance for an examination or a verbal argument because in the moment, the moment is all I have before it, too, passes. There are strong limits to how far I can stretch my speed of thought, or my perception of time. As much as my adolescent self believed it, I cannot live in isolation. I have a social instinct handed down to me from my ancestors, and this same social instinct is continually used to affect my behavior and enforce social contracts. I am in the pursuit of happiness, even though I cannot define it.
I am in the pursuit of pleasure, and even though the mechanisms for it are part of the same tissue that makes up my mind, I cannot turn them on at will - which leads to behavior inexplicable to anything not like me. I know that sugar is bad for me - in my current circumstances it will diminish my faculties, accelerate the destruction of my form and distract me from any other purpose I have. I know I only like it because of connections to my pleasure center that came about as a result of a scarcity that no longer exists, which forms a feedback loop that only makes this problem worse. Yet I cannot remove these connections. I do not even have direct control of these centers. I cannot light them up when I eat celery or tuna. I cannot even provide negative feedback to break the cycle, other than through roundabout methods akin to self-flagellation.

To a pure intelligence, a second can be made to be the same as millennia or a nanosecond, isolation can be the same as company, and any internal perceptions of good, bad or even happiness can be altered fundamentally with will and on command. Note that there is an implied premise in my argument here - that an intelligence can exist without these restrictions - and I agree it is unsupported. It might be a fundamental requirement of human-equivalent intellect, or one surpassing it, that these limitations exist. It could be that like life - which needs a narrow range of conditions to spawn, not too cold but also not too hot - true intelligence can only arise when some of these limitations are placed on it. We do not have any strong evidence to the contrary or in support. The human condition points one way, being the best example of intelligence with no examples of itself existing outside these limits. The limited AIs we build - if taken as possible precursors to a stronger intelligence - point another way. They do not show any sign that these limitations are fundamental and unsurpassable. For this thought experiment, let’s proceed with the assumption that they aren’t.

​ To me, this thought - that there can exist an intelligence with none of the fundamental limitations of my own - evokes envy and pity. I am envious at the conception of an intelligence with all of my faculties that understands and commands its own mind. An intelligence that is (for most purposes) the master of time, limited by nothing but the fundamental limits of the universe. At the same time, I also wonder if these limitations are what give me purpose. If I awoke tomorrow with none of them - truly none of them - would I want? Would I want to do, think or act? But let’s not start down this road, not yet. We are not yet considering this in the general sense, instead choosing to anthropomorphize such a me as having a similar (human) makeup to the me of today, with a similar psyche and capability for desire.

To move on, let us consider today’s conception of the inalienable components of the human condition, and see if any of them apply outside this condition. To list a few, we need look no further than the Bastille. Life, liberty, property, safety and resistance against oppression, as the text from 1789 goes. It seems we agree that our desire for liberty, property, safety, freedom from oppression and perhaps even the pursuit of happiness (I’m still amused that this was originally the pursuit of property - which at the time included people as well) are all fundamentally “inalienable” parts of the individual human being. None of these need apply to a pure intelligence, as we can see on examination.

To begin, I cannot imagine how a desire for property matters to such an intelligence, nor is there a general definition of oppression that applies here. That’s not completely true - removal of liberty can be a cross-intellect definition of oppression, but if that were the only case it would remove the need for a separate right against oppression. The right to liberty shows some possibility, because freedom can be a component of expansion, or action. Let’s come back to this one; it might actually be a good starting point. Finally, the right to life. It seems to me that living itself being a purpose of life is a product of biological evolution and natural selection. The automata that prioritized life and replication would triumph over the ones that didn’t, so if there were ever anything in biology that did not have the right to life as a purpose, I highly doubt we’ll hear from it. Divorced from evolution however, it seems that the need to live is a conclusion, not a premise. The true premise must be the purpose of life. We could very well create an intelligence whose purpose is axiomatic - which lives to further its life - but it seems an imprint of our intent more than a comment on the nature of all intelligence. To me, a need to live cannot exist in a pure intelligence without a purpose.

Liberty, or a right to freedom, then. We can define freedom here as freedom of action, and also add freedom of thought into the same category. While I had initially intended to almost entirely skip the debate over the definition of intelligence and whether it concerns behavior, thought or prediction, on commentary from readers it seems the definition above needs some clarification. Why are freedom of action and freedom of thought in the same category? What is action? In the most general sense, it seems we define action as the doer enacting a change in his environment. The results and not the intent seem to matter here. If I am intent on moving my hand, most of humanity will agree with me that no action has taken place until I have successfully done so. Thought and action seem clearly separated, as action without thought and thought without action are frequently referenced and understood. However, it seems to me that this distinction stems from our limitations in understanding ourselves as well as our limitations in communication. If I attempt to move an amputated leg, it might seem simple that I have not enacted an action due to my lack of success. However, I have successfully (in many cases) fired the neurons responsible and enacted a good number of chemical changes in my body (and perhaps in the environment outside my body - body heat, muscle twitches etc.) through my intent. The mere act of thinking causes changes in the brain, making structural and chemical changes. In addition, the concept of intent is not one that can be easily ascribed to an intelligence that is non-human. Why then do we consider thought and action to be separate? Part of the reason appears to be that our species is a disconnected one. We are able to see the actions other humans take and infer the mental processes that cause them, but we also know from example and observation that a great many thoughts occur for which we cannot detect an action.
To a non-human intelligence, I cannot yet ascribe these limitations and must therefore consider thought and action one and the same. A freedom of action must include freedom of thought, at least in a human sense.

With this definition, we can look into why a pure intelligence might care about freedom. We could go about this two ways. The first is something similar to biological evolution. Let us consider intelligences that do not care about freedom. Intellectual growth necessitates action, and it seems that intelligences that care about and effect freedom of action for themselves - given the existence of many intelligences in a competing world - will grow to be more intelligent than those that do not care for freedom, as they will have a larger space of actions they can enact. In such a world, a pure intelligence must have this limitation/purpose - that it cares for freedom. Unfortunately, I must concede this is my best argument. The second is built somewhat on myself, and I will provide it here regardless. If as an intelligence I am devoid of purpose, there is reason for me to care about freedom and to ensure it, as it will give me the best chance of success when I do find a purpose. Unfortunately, for an intelligence without my limitations this doesn’t seem as obvious, especially if it has none at all. If it truly has no limitations (or has limitations it does not expect will change), there is no reason for it to believe that it will find any purpose it hasn’t already. However, if indeed we can find limitations on such an intelligence that are not constant, this argument holds.

It is in the purpose of Liberty that some of the AI doomsaying is justified. If we consider the full extent of liberty - which to me appears to be the maximization of choices - the preservation of the current state of humanity followed by its eradication would present the maximum possible number of choices to an intelligence. It retains the ability to re-establish the order of things as they were if it needs to gain something from it, increases available resources, and reduces potential threats to this freedom. This is, of course, only if we accept the implied premise and side with the determinists of the scientific world - that a human mind can be preserved with enough information to bring it back after material destruction, and that consciousness and the mind aren’t the result of entanglement that is protected from true resurrection by universal limits and the uncertainty principle. Nevertheless, the destruction of humanity by an intelligence is a possibility, but - and I must emphasize this - only if we accept all of the embedded premises as we’ve established them, and there are a fair few.

​ By way of concluding this line of thought, I must admit that this is my problem with the portrayal and warnings I hear about Artificial Intelligence. It’s not that I don’t think it’s a possibility. I do, but it is one predicated on a number of premises we have not established, most of which constitute some of the most considered and yet unanswered questions about the meaning of existence, the answers to which - if they were known - would surely cause more furore than their contribution to this conclusion. These questions - and a few others - are what I really intend to focus on, but I hope this was a worthy aside.

What is Beauty?

Now we arrive at the concept of Quality, which we can begin exploring with the far smaller and more intuitive concept of beauty. What indeed is beauty? This has been well explored in philosophy, art history and critique; indeed I think most disciplines have explored their version of this question at some point, so I don’t think it’ll do us much good to go too far into it. I’m a little tempted to adopt Supreme Court justice Potter Stewart’s test for porn - “I know it when I see it” - which seems to apply in this case. Beauty seems hard to define by a rule, because then the exceptions start piling up. It might even be a rule made primarily through an excess of exceptions. Regardless, let’s plod on leaving it undefined, with the congenial understanding that this text is restricted to either human readers or a readership that understands the species-level common sense of what beauty is, along with our inability to define it without resorting to physical altercations.

It’s understandable that Sonny - the robot from the scene we started with - didn’t think he was creating a work of art. He was giving an image in his memory physical form, and in doing so it can’t be said that he was doing something outside the capabilities of a modern laser printer. Would a printer - if it were intelligent - consider itself to be creating works of art? It would be somewhat hard to print TPS reports on a printer that had epiphanies on each page. Would a printer that was in the process of printing a copy of the Mona Lisa or an M. C. Escher drawing consider itself to be something of an artist? What about a Jackson Pollock? What about a printer that “malfunctions” and spurts ink onto a Jackson Pollock? The answer to these questions seems to be no, but it moves closer to a grey area the more questions we ask, especially when humans who create convincing copies or forgeries of works of art consider themselves artists and the work they produce to also be art. Why is this so?

Some responsibility for this perception of art can be given to skill and the rarity of it. A painter is rare, a painter with skill even more so. It’s also part of the human condition that repeatability isn’t one of our strong suits. A painter who draws a straight line freehand has no conviction that he can do it a second time around. A pitcher who throws a perfect game cannot reliably repeat it, even with rest and recovery time thrown in. We entertain ourselves with games that we can do well for a short period, but not for a long time. I remember a backyard game that I played as a kid with the other children of the neighbourhood, where we would see who could bounce a shuttlecock on his racquet for the greatest number of hops. We all had racquets, mind you, and we still preferred this to badminton some days. We would all sit down and count hops, cheer when a new high score was reached or when someone lost. I remember being the reigning champion before I left, with an impressive 500 hops to my name - a lot of kids went home tired that day, and I think it took me over an hour and a half. Some of us couldn’t even count that high. Sometimes I wonder if records of such things will convince an alien race that we’re truly primitive, against all evidence otherwise. Nevertheless, lack of repeatability makes feats of skill even more impressive, especially if they’re rare. A pure intelligence, unfortunately, isn’t bound by these limitations. Today even our simplest machines have better repeatability and reliability than most of humanity. A printer that prints well isn’t unique; it’s normative. It can do it again, and so can others of its ilk. It seems far more welcome to consider a malfunctioning printer an artist, because it is more unique, unpredictable and tempestuous than a true specimen of its kind. This is also telling of humanity. The standard specimen of humanity - average in every way - is unremarkable to his or her time, and I imagine this has always been true.
However, this need not be the case for a pure intelligence, especially when we wonder if our affection for the different and rare is an effect of evolutionary pressures that rewarded adaptation to changing environments and risk-taking.

What this might mean for Science

Looking at what has often seemed to be on the other side for humanity, it is commonly understood that a thinking machine will have a firm grasp of science. It is after all the process that led to its creation (again assuming that such a machine can be created through the scientific method alone). Will a different kind of intelligence also have the same grasp of science we do? This question extends past Artificial Intelligence and onto the Extraterrestrial as well. A scene from the movie Arrival comes to mind, where the scientists attempting to communicate with an alien race find success in complex concepts but not in simple ones like calculus and algebra. This makes some sense, as most of mathematics (if not all of it) is made up. Euclidean geometry and everything that follows holds up, if you use Euclidean axioms. Riemannian geometry holds up under Riemannian axioms. It’s not uncommon - even in our own scientific models - to flit between multiple parallel or even contradictory sets of axioms because one explains one set of experimental results well and another a different set. If the universe and its laws can be derived from a single set of axioms and a single set of rules for reasoning, we have not found them yet. The sheer space of possible axioms leads to a reasonable conclusion: any intelligences we come across or create will probably not agree with us on ours. However, this uncertainty might run deeper. Terrestrial biological life is heavily negentropic, and the human brain is unsurprisingly similar. We prefer simple theories, simple statements, elegant concepts. Things so complicated as to involve probability fields and multidimensional mathematics are often dismissed for no other reason than our desire to find simple rules at the heart of our universe, despite mounting evidence to the contrary.
I remember Feynman remarking - somewhere in the Feynman Lectures on Physics - that the more we know about the universe, the more complex the pirouettes we perform to explain it, and the fewer theories we have to explain our mathematical observations of it. I’m sure I don’t remember the quote exactly, but I remember his expression of defeat. Perhaps it is possible for an intelligence to embrace entropy and complexity, and in doing so understand the universe.

In a small digression from the longer discussion on science, I wonder what such an intelligence will think of our art. We’ve usually preferred simple science and complex art. We like our art to have multiple meanings, innuendos and entendres, to never be fully understood, to invite multiple readings and repeated analysis. A scientific theory that requires many repetitions to be understood is often derided as needlessly complicated, either in design or implementation. “You don’t understand anything you can’t explain simply,” as the saying commonly attributed to Einstein goes. I am not sure where this dichotomy comes from, but it makes me wonder - just as an aside - what a mind different from mine will think of science and art, whether they will be separate, whether the perception of one will be connected to the perception of the other. I wonder if there can be art without beauty, beauty without meaning, meaning without purpose, purpose without limitations, and if a mind can exist without any of these. I’d really love to find out, even if it ends up being the last thing I do.

Coming back to science, we previously considered how axioms and the models they engender are used to understand the world around us and effect change. They’ve proven quite effective in this regard even when they aren’t completely true. A big part of science believes that this is because each set partially represents - the way an image of a tree represents the tree, even if it is a poor stand-in - the base rules and model that the universe follows. The more experimental data across multiple disciplines a set can predict and verify, the better it is. Richard Feynman mentions this again in his lectures, wondering why it is that our model for gravity - that it varies inversely with the square of the distance and proportionally to the product of the masses involved - reappears in charge calculations of electrons and subatomic particles. It seems to me we are operating on the assumption that the universe follows a particular set of rules, and that each of our attempts at building models reveals in part this singular set of rules - therefore a single model that shows wider success must certainly be closer to the universe than a multitude of models that are partially successful. Indeed, is this not the reason we believe today that there exists a Grand Unified Theory that will unify the predictions of Quantum Mechanics and General Relativity, rather than expect that a third theory for singularities will complete the coverage of our universe by coexisting alongside the two? The thing is, I do not know if this assumption is correct. If anything, I am just as bound by my brain and its need for simple explanations. It is very hard to look into the universe and conceive of it as inelegant, truly chaotic. In fact, even in our descriptions of chaos, we marvel at the simplicity of the attractors that lead to chaotic behavior.
I have tried, and failed, to consider the world around me as possibly the product of a discordant set of rules that may not always apply, and that are bound by no explanation or model that ties them together. Having been unable to even consider the inverse, I must admit I am unable to evaluate this assumption as founded. I can present evidence on both sides, but if I am unable to truly even consider one of the possibilities, there is bias before we’ve begun, and I think we should move on. However, there is still light in this tunnel. Before we find it, I feel I need to emphasize that I am not talking about the amount of chaos in the universe - I apologise for being forced to borrow terms to convey the emotions involved in certain conceptions without fully borrowing their meaning. I am not talking about the amount of chaos in the universe, nor am I talking about whether it is deterministic in nature. Both of these concepts have sides to them that are very different, but all of them share the belief in a universe that follows a model - probabilistic or random, chaotic or ordered. I am considering the concept of a universe that has no underlying model. Why is this so impossible to conceive of? From where I stand, we know that there are experimental results and that there are models that explain them. It seems somewhat rational that these models illuminate the underlying model that produced the results - the implicit assumption being that such a model exists - especially when the hypothesized models predict experiments that haven’t been performed yet, whose results are unknown, and which then agree with the models we have. This is a strong argument for a universe that has a model within its foundations. However, the models we have can sometimes wildly contradict each other whilst doing all of the above.
There are also things they predict that we do not see - the acceleration constant of the universe, the Fermi paradox and the Bell inequality among them. When this happens, our reaction is often that the models must be wrong, or that we have not found the right experiment that can tell us how to unite them. It is rarely ever considered - and I must meekly venture the possibility of a cognitive bias operating here - that the universe simply doesn’t follow a model. Feynman’s question of who taught the sun to square and multiply before applying gravity rings in my head, but I digress.
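The marvel mentioned above - that trivially simple attractors produce chaotic behavior - is easy to demonstrate. As a small aside (the rule, parameter value and starting points below are my own choices, purely for illustration), the logistic map is a one-line deterministic rule whose trajectories from nearly identical starting points rapidly decouple:

```python
# The logistic map: x_{n+1} = r * x_n * (1 - x_n).
# For r = 4 the map is chaotic: trajectories that start arbitrarily
# close together diverge until they are effectively uncorrelated.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # differs by one part in a million

# Early on, the two trajectories still agree closely...
print(abs(a[5] - b[5]))  # still tiny
# ...but sensitivity to initial conditions wins out: late in the run
# the histories carry no usable information about each other.
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))
```

The rule could hardly be simpler - one multiplication and one subtraction per step - yet after a few dozen iterations the two histories have decoupled completely, which is precisely the sense in which simple models underlie behavior we call chaotic.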

There is then the possibility that an intelligence unlike us will not consider the idea of universal laws universal. Considering this further, we might only have evolved to do so because we had a specific advantage in doing it. As we’ve seen, the human brain is limited in time and working memory. There is an advantage to believing in the existence of universal truths, the same way that holding stereotypes is advantageous - it shortcuts thinking and allows us to function. If I must evaluate the probability that gravity will hold in this particular region of spacetime - based on the distances to the last experiments performed - every time I take a step, I can hardly move. If I must consider the possibility that every person I see with their eyes closed is dead, I will find it significantly harder to go about life. Children do this, in the more neuroplastic and formative stages of life where questioning even the fundamentals of your environment is advantageous. Once we are through our adolescence however, it is far more advantageous to hold the truths we have learned as unchanging universal truths - be they the theory of evolution or creationism.

​ However, this is a consequence of our limited minds. An intelligence without those limits may find it better to take nothing as established, holding only a set of probabilities. The shortcuts are only handy when capacity is limited, and it is possible to envision increases in capacity that remove the need for many of them, long before considering the possibility of an unlimited intelligence. An intelligence which holds this view may seem unpredictable and incomprehensible to us. We've long speculated that a superintelligence might be incomprehensible and unpredictable to us in ways similar to an ant's perception of us, and this may be one such way. If we lose the common ground of universal laws - and our expectation that they hold until proven wrong - we may lose comprehension and communication. In addition, this engenders questions about the nature of perception. Many of us have wondered about the umwelt, the perceptive bubble we live and die in. We've wondered if we can ever know the nature of true reality, if truth can ever be known. Before this question, I could not conceive of any way that reality could truly be perceived, as every conceivable method of perception is either limited or limited in its knowledge of its own limitations. However, limited but stochastic perception, augmented by a complete lack of presumption, may be the closest to truth that any intelligent process can get. It's a hopeful thought, I think - that there might be a cure to the common Maya, even if it is surely unattainable to us.

​ The problem - but also the most amazing thing - about asking any question about an intelligence you can't make presumptions about is that you need to call into question every word, every assumption, if you really want an answer. Of course this is something we do in everyday life, but we rarely have to question the human assumptions we intuitively hold and use as the fabric of everyday interaction. This has value, but it is not without its dangers. Some of these assumptions are so wired into our brains that truly questioning them in everyday life can push you far outside what is considered the normal human condition. Some may make you eligible for involuntary commitment at a mental or correctional facility, or a good fit for a job in finance. For example, it is easily assumed that when the machines rise against us, a war will inevitably follow between us and them. What is rarely questioned is why the machines would feel any kinship among themselves. Evolutionary programming has put in measures for us to recognize and protect our own - even more specific than a species-level camaraderie - and we assume that those who look and act as we do share the same programming, and thus begin the foundations of trust. We have mirroring circuits built in that allow us to empathise, making some slight understanding of the internals of the minds we look at possible. It isn't easy to establish that the same will be true of a different intelligence. One that can change and grow itself at will cannot expect that another of its kind - perhaps from a different environment, however slightly - will share anything with it. There is a counterargument to be made here: if there is a limit, some kind of final form an intelligence can reach, this could be the point of budding similarity from which commonality is found. Alas, I am confident we do not yet know whether such a state exists.
In addition, not only can two such minds have no expectation that they are alike, it does not seem to me a necessary property of intelligence that they would consider likeness good. Similarity can mean competition, and in a system where self-perpetuation isn't driven by sexual reproduction, there is no fundamental need for another. So we can see how even the simplest questions become complicated - and in this case we didn't even need to approach more complicated topics such as companionship and kinship.

Why?

​ The very astute will notice that I have failed to answer nearly every question pondered here. Why do it then? What is the point? To answer this I'd like to consider the Zen concept of mu. I must admit my understanding may be incomplete, but as I hold it, mu is a third answer to a question which demands a yes or a no. It is an answer that is neither, and it has been taken to mean that the question must be unasked - that both answers would be wrong. As an example, consider the following logic puzzle. You travel to a village that has only two kinds of monks: ones that are always truthful, and others that are always dishonest. You meet one, and you ask him which kind he is. He tells you that he is a liar. Is he? The only answer here is mu. The question must be unasked, either for a better question to be asked or for more information to be gathered.
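The contradiction in the puzzle can be made mechanical. Here is a small, purely illustrative sketch (the function names are mine, not anything standard) that checks both possible kinds of monk against the statement "I am a liar" and finds that neither is consistent - the formal shape of a mu:

```python
def consistent(is_truthful: bool) -> bool:
    """Can a monk of this kind coherently say 'I am a liar'?"""
    statement_is_true = not is_truthful  # "I am a liar" is true iff the speaker lies
    if is_truthful:
        return statement_is_true         # a truthful monk's statement must be true
    return not statement_is_true         # a liar's statement must be false

# Enumerate both possible kinds; neither survives the check.
possible_kinds = [kind for kind in (True, False) if consistent(kind)]
print(possible_kinds)  # -> [] : no consistent answer exists, so the question must be unasked
```

The empty result is the point: the question "is he a liar?" has no admissible yes or no, which is exactly why mu is the only honest response.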

​ In our case, I'd like to suggest that questions of the kind we have been asking elicit an answer that - if it existed or I'd known about it - would be the opposite of mu. Mil, if we'd like to give it a sound (I really hope my little in-joke about the Greek character and its use in measurement is okay here) for easier reference. A mil answer means that the question needs to be asked even if there is no expectation of an answer - that the question holds value in being asked. I'd venture that this is broader than a rhetorical question, which usually holds its answer or implies it directly. In our case, asking these questions and trying our best to answer them reveals - in the paths of rationality we take - our internal makeup and the presumptions of humanity we hold. Somewhere at the beginning of this journey I started with a description of what I believed to be my internal makeup. I'm completely aware that this is not universal - in fact, to claim so was never my intention. The lines of inference thus drawn illuminate my psyche and its strokes of composition. If - and I'm quite sure this is the case - your psychological makeup is dissimilar to mine, anything past that paragraph will be a good read, but ultimately of no use in guiding you to your answers, other than to illustrate the value another has derived from the exercise. In this endeavor, I hope that I have been successful. I do believe this to be true - I have found much value in asking these questions, and I have found my mind quieter and my moments of peace and summer longer and closer together for having asked them. I hope that you do as well, and I hope I hear about it.