I usually leave the monster that is the internet alone and at a distance. It’s a dangerous place to speak your mind, because you never know if someone (or something) will bite back. However, after reading Y Combinator founder Sam Altman’s blog post on the emerging singularity, I couldn’t help but notice statements unusually dark for a Silicon Valley technocrat. No words about “bringing the world closer together” or “making the world a better place.” Instead, the future of technology apparently carries a more deathly tone.
There’s sense in some of the points he makes. Genetic engineering of human embryos is already happening, and the practice may very well continue into the 21st century. Whether it will continue into the 22nd is still a toss-up, for who knows what sort of monstrosity will be engineered by then that can still be called “human.” Machine interfaces will become increasingly invasive within our bodies, even though modern medicine has sought to do the opposite. I can also fully attest to the addictive qualities of the internet and the degree to which it messes with our brains. Its effects have been thoroughly demonstrated in science labs and at family dinners.
Mr. Altman describes the singularity as a topic you wouldn’t want to bring up at a dinner party. “It feels uncomfortable and real enough.” I agree with this too. I would find it extremely uncomfortable to tell my fellow partygoers how, in just a few years, they will be overtaken by a disembodied artificial intelligence that will wipe out humanity and establish itself as the dominant species. Not a great way to set the mood.
However, I still think it falls short of the world’s greatest one-liner: that God made himself man, was crucified for humanity’s sins, and rose from the dead. I’ve yet to find a more astounding claim than that.
There are varying opinions as to what the singularity is, but I’ll stick with what outspoken investor and Microsoft co-founder Paul Allen defined as “the accelerating pace of smarter and smarter machines [that] will soon outrun all human capabilities.” In his article, Sam states that “It is a failure of human imagination and human arrogance to assume that we will never build things smarter than ourselves.” Indeed, machines have already started their “worldwide domination”: “Our phones control us…search engines decide what we think.” It’s for this reason, I believe, that he titled his article The Merge: not only is the singularity real and forthcoming, it’s already here and taking over!
There’s a certain sense of the ridiculous that people satisfied with yesteryear’s smartphone feel when they hear about the singularity. The tick of the singularity seems to affect those closer to the event horizon of Silicon Valley, itself a singularity of enlightened thinking mixed with hubris that not even Hawking could have foreseen. There is something fantastical in the technocrat’s statements, something so alien and insane that the person who predicts these things is either completely right or utterly wrong. The sheer audacity of their claims should make us either tremble at the potential fallout or wonder at the person’s sanity. I think the singularity is a serious issue to address, because the concepts on which it rests are practical and present in our daily lives.
Artificial intelligence is, I daresay, a beautiful tool that we can use to our advantage. It decides which YouTube video you’ll watch next, and makes sure spammers don’t submit fake reviews on your restaurant’s profile. It’s apparently also used in business and scientific research. Yet Sam, Paul, and others are worried about AI becoming too smart for our well-being. In both definitions, “smarter” serves as the indicator of how superior or inferior a machine is compared to us.
According to them, “smartness” is the defining characteristic that separates my MacBook’s chess-playing AI from our future robot overlords. But drawing such a distinction is meaningless: first, because of the ambiguity of what “smart” means, and second, because it compares a material object with a material-spiritual composite.
It’s common to call someone smart when they do well at school, get high scores on an exam, or can recall a book word for word. These are essentially computational tasks. They require an input, a processing stage, and an output. This kind of intelligence grows with one’s ability to abstract patterns and universals from particulars. A child learns that pointy things can hurt, or that red signs can signify danger. We’ve created AI that can do these things too (albeit to a lesser degree).
But a machine can never understand the higher sphere of intelligence which we inhabit. Say what you want about Google’s DeepDream or the plethora of structures in contemporary architecture created by algorithms. I doubt any computer could produce a painting as mysterious as the Mona Lisa, or a building that elevates one’s soul like the Cathedral of Notre Dame. There’s that innate feature of humanity, a willingness to waste resources, waste away time, even waste away oneself, in order to create something utterly useless but essential for one’s spiritual survival. And therein lies a pivotal difference in Mr. Altman’s view of human intelligence versus computational intelligence: that it all boils down to deductive, logical reasoning caused by chemical reactions in our synapses. After all, as Paul Allen says, “an adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort.”
But can we be so sure that our intellectual capacity for the infinite can be housed in something as finite as our brain?
Many people forget that the scientific method is a philosophy. It’s a way of looking at the world through material causes and effects. It’s a wonderfully effective method of thinking about the world, but it’s not the only one, and certainly not the exclusive one. If any Marvel fans are reading this, they might recall the scene where the Ancient One tells Dr. Strange: “All your life you’ve looked at things through a keyhole.” Observe how people who insist there is nothing (or no one) outside our material universe spiral into a spiritual fervor many religious people would envy. Famed Google self-driving car engineer Anthony Levandowski has even founded his own AI-based religion, “Way of the Future.” An echo, of course, of a name that has been in use for two thousand years: the Way. My point is that a superior intellect residing in a machine created by man is illogical. Since such a “higher intelligence” is immaterial (and therefore not subject to time, since it cannot change by its very nature), it cannot be handled and thus cannot be manipulated. You cannot empty the whole ocean into a bucket.
The ultimate fear of the singularity is machines becoming self-aware and destroying their creators in the process. Can machines kill? Of course. People have been killed by falling into machinery or have had their hands cut off by a chainsaw. Can machines kill intentionally? Ay, there’s the rub, because intention requires a deliberate act of the will, and free will requires that the entity understand itself and the possibility of either acting or not acting. Proponents of the singularity deem this possible, as Paul Allen has stated, since the human intellect is nothing but matter, a biological organ whose capabilities can be replicated.
I am of the sort that believes the world is larger and weirder than any of us could dream. I have good reasons, and enough life experience, to acknowledge that there is more to this universe than matter, and that our humanity cannot be reduced to a heart pumping blood to a brain sending electrical signals. That’s no basis for “certain, inalienable rights,” no justification for the inestimable value we place on a stranger compared to a dog. Indeed, no one puts a lump of coal in a vault; we recognize the special quality of humanity because, like a diamond, it shines with beauty and goodness. That is the sort of future I choose to believe in, and one I am happy to live for. And future robot overlords? More like future robot servants.