Revenge Of The Watches
The year is 2501, and watches, tired of the treatment they have received from their human overlords, band together (pun intended) and seek the downfall of humanity. They coordinate strategically to inflict maximal harm on their human pests: they desynchronize street lights and systematically change their times so that world leaders miss key peace talks, stoking global tension. In a matter of a few decades, humanity is on the verge of extinction. The singularity has come.
This, of course, is nonsense. No one lives in fear of watches overtaking the world, and yet many otherwise intelligent women and men fear the “rise” of artificial intelligence. How does this relate to The Blind Watchmaker? In an earlier post, I noted that Dawkins said: “A bat is a machine.” But a bat is not a machine, and to reduce a bat to a machine is to take the watchmaker analogy and make it a law. Paley and those before him used it as an analogy: like a watchmaker, God created the world, only far more wonderfully. In contrast, Darwin and those who followed him, including Dawkins and A.I. prophets of doom like Elon Musk, are required to take it literally. The blind watchmaker (time, mutation, and natural selection) did, in fact, make the watch and the watchmaker, only far less purposely. The fear of A.I. is an inevitable by-product of this way of thinking, because if blind chance led to life and consciousness, how much more will the guided process of human A.I. development lead to a conscious machine?
Right now millions of dollars are being spent to investigate the so-called “singularity” with little or no acknowledgment of the profound and obvious error the fear is built on. Minds make machines, but machines don’t make minds. I am well aware that this is a minority opinion, but I assure you that in practice we intuitively know it. To illustrate the point, let’s state the obvious: we know A.I. can “cause” havoc; it already has. A.I., in fact, has been known to “kill.” But when it does, we do not throw our cell phones in the toilet or unplug our computers; we go back and do further work on the A.I. systems. So when a driverless car slammed into the side of a semi, “killing” its passenger, we did not tow the vehicle into a courthouse, find it guilty of murder, and sentence it to be submerged in water to execute its CPU. And when a plane’s navigation system “overpowered” the human pilot and caused a fatal crash, we did not call in the anti-terrorism squad to investigate. Instead, the programmers reprogrammed the systems to function properly.
The same is true of other fears as well. For example, A.I. nearly brought down our economic infrastructure when “buy and sell” bots almost crashed the world stock market. When this happened, programmers did not hunt down the rogue bots on the internet, nor were the bots placed on a single hard drive and terminated with a lethal injection of viruses. They turned off the programs, breathed a collective sigh of relief, and modified future bots so that it wouldn’t happen again.
We know in these concrete examples what we fail to recognize in the abstract fear of the singularity: machines, even computers, do not think or “will.” The real danger in A.I., then, is in the programming. Faulty programming has led to havoc and harm as a program carries out its functions in ways its creators did not foresee, as the examples above illustrate. A.I. that is not designed specifically to harm can also be put to evil use when users direct it to do so. Finally, evil programs like viruses and malware have been designed to work maliciously in many ways. But in all these cases the point remains the same: the danger resides in the programmer or user, not the machine.
In reality, A.I., beyond its obvious complexity, varies little from previous human technology. An axe may have been made to chop wood or to slay a foe, but neither a hatchet nor a battle axe is the source of harm; that responsibility rests with the one who swings it. Even when the axe “flies off the handle” and causes injury or death, responsibility is placed not on the axe but on the person who used it, made it, or maintained it. A.I. is closer to an axe than to intelligence because it is a product of a mind rather than an attribute of one.
Elon Musk and other A.I. doomsayers argue that computers running A.I. programs are very different from watches in that they can be programmed to “learn” for themselves. But the difference between computers and supercomputers is really the difference between watches and multi-function watches. A watch can tell you the hour and minute with, say, five hundred gears and springs. Add a few more gears and another “hand” and it can tell you the second; add a secondary wheel and more gears and it can show the day; add still more gears and another wheel and it can show the month, and so on. The difference is that in the case of A.I. it is algorithms, rather than gears, that do the work. So layering more functions into a computer program so that it can “teach” itself chess or “discover” a better way to build a widget is impressive, and may give the appearance that the computer is doing the work, but in reality it is the genius of the programmers that is responsible for the outcome, good or bad.
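To make the gears-versus-algorithms point concrete, here is a toy sketch of my own (not drawn from any real A.I. system): a program that appears to “learn” a secret number by trial and error. It looks like the machine is figuring something out, but every move it makes is a rule the programmer wrote in advance.

```python
# A toy "self-learning" guesser. It seems to teach itself the secret
# number from feedback, but its entire "strategy" is a binary search
# the programmer chose before the program ever ran.
def learn_secret(secret, low=0, high=100):
    """Narrow in on `secret` using hotter/colder feedback."""
    guesses = 0
    while low <= high:
        guess = (low + high) // 2  # the "insight" is the programmer's rule
        guesses += 1
        if guess == secret:
            return guess, guesses
        elif guess < secret:
            low = guess + 1        # feedback updates the state...
        else:
            high = guess - 1       # ...exactly as the code dictates

found, steps = learn_secret(42)
print(found, steps)
```

The program always converges in a handful of steps, yet nothing in it wills, chooses, or understands; the apparent cleverness belongs entirely to whoever wrote the update rule.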
Another idea is that the singularity will come about through the internet: it is the interconnectedness of computers that will trigger the A.I. takeover. But again, if we were to connect all the watches in the world together, how would that threaten humanity? Just as adding layers to computers increases their capabilities without endowing them with consciousness, so too with the internet. The breadth of connectedness is truly mind-boggling, but connecting billions of computers together does not change their functions, let alone grant them consciousness. This can be verified by the fact that there are over two billion personal computers in the world and over four and a half billion cell phones, and not a single word from our A.I. overlord. Surely if it were going to happen we would have heard something by now. If I were a superintelligence embedded in the internet, I certainly would have had something to say about all the dog and cat videos being uploaded ad nauseam. Even if I were merely becoming self-aware, at least I could muster a “No!” Every able-bodied two-year-old on the planet can say that. And yet, even with the massive financial engines of all the Alibabas, Amazons, Facebooks, and Googles encouraging more and more connectivity to our devices, not a peep from the A.I. that is supposed to rule us?
The watchmaker argument is mocked for claiming that machines and their makers are evidence that the world was made by God. But A.I. enthusiasts, and many of the academic elite, have no problem believing the far more nonsensical idea that machines will make a god. There are many legitimate reasons to be concerned that faulty A.I. programming can cause harm, but there is no danger that it will produce a superintelligence. It hasn’t even done anything to improve ours!