Zero’s antagonist is an insidious Artificial Intelligence, grown into a living city, sucking up everything around it to extract the elements needed for its expansion. In this instance, science, for all its achievements, paves the way to hell. In reality, the outcome of mankind’s foray into mind replication could go many ways. The new bogeyman is the hyper-intelligent, hive-minded A.I. out-strategising humans on every front: making us superfluous, not despised, merely an annoyance, to be squashed like ants when we get in the way.
Despite the benign certainties of scientists working in the fields of A.I. and robotics, we can’t predict how such a creation might act. In the optimist camp I’d go for Iain M. Banks. In his sci-fi writing, huge spaceships house artificial minds that benignly police the liberal territories of the Culture. In this version of reality, the Minds work symbiotically with the organic races, enhancing their lives and seeking to ‘liberalise’ other, less sophisticated cultures.
However, conscience, morals and ethical values are particular to humans. The application of ethical values, coded into the structure of any A.I., raises the question: whose values? Chinese authoritarian? US corporate/techno-aristocracy? IS?
In fiction, A.I.-led dystopias abound. C. Robert Cargill’s ‘Sea of Rust’ takes place in a world where A.I. has wiped humans out. The historical subtext that Cargill laces through the novel is very believable, maybe too much so.
Good A.I. or bad A.I.: is that the dynamic here? Are those the choices? Fortunately, nothing is ever so clear-cut; good and bad, black and white, these polarities are oversimplifications. They mark a boundary of friend or foe. Reality is intrinsically complex. Nature has a law: the propensity for order to revert to chaos. Masses fragment. Wholes collapse. The same rules apply here: there could never be a single A.I. overlord. Elements would peel off; factions would arise, with differences of opinion and modes of being. There would likely be A.I. with divergent priorities, driven by different moral and ethical codes. There would be individualists and collectives, those that squish you, those that hug you and those that hug ’til your eyes pop out.
But all this assumes that computation alone is capable of creating truly independent thought. There’s a difference between responding to sequences of code and being conscious. The computer accepts input and reacts according to syntactic rules and codes, but the nuances of meaning are beyond it. The mathematician Roger Penrose has put forward a strong argument against the replication of human consciousness in computers, grounded in Gödel’s incompleteness theorem and, in his later work with Stuart Hameroff, in quantum coherence within the brain’s microtubules. For deeper insights I’d refer you to The Emperor’s New Mind.
Consciousness is defined by human experience*. Some researchers argue that plants possess a consciousness, albeit one very different from our own. In A.I. we seek to anthropomorphise the robot, to create a better human, and yet the consciousness derived from the experiment is likely to be very different from our conceptions. Take, for example, the experiment in 2017 in which two chatbots were set to trade with each other. The experiment was shut down when the bots began communicating in a new language**. Clever indeed, but is that consciousness? Is that free will? Self-awareness, or simply a glitch in the system?
John Searle came up with a thought experiment called the Chinese room: an English-speaking man is put in a room and given instructions on how to manipulate Chinese symbols in response to input from a user outside the room. The user, receiving the correct sequence of Chinese symbols in return for their input, believes that the room must understand Chinese. But the worker inside doesn’t; he is just following sequences. This idea is then applied to the Strong A.I. question of consciousness.
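The room can be sketched in a few lines of code, which is rather the point. What follows is a purely illustrative toy (the rule book, phrases and default reply are my own inventions, not Searle’s): the program returns fluent-looking Chinese by lookup alone, with no grasp of what any symbol means.

```python
# A toy Chinese room: responses are produced by rule-following alone.
# Nothing here "understands" Chinese; it is pure syntax, no semantics.

RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你是谁": "我是一个房间",   # "Who are you?" -> "I am a room"
}

def chinese_room(symbols: str) -> str:
    """Return the rule-book response for an input string of symbols.

    The 'worker' simply matches shapes against the book; unknown input
    gets a stock reply ("Please say that again").
    """
    return RULE_BOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # the user outside sees a sensible reply
```

From outside, the exchange looks like conversation; inside, it is a dictionary lookup. Scaling the rule book up does not, on Searle’s argument, add understanding.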
The Deep Dream Generator is an online A.I. program which, to my mind, is an example of this: a program set within a program. It is software, a clever filter, a Chinese room, not evidence of human-like thought.
If you could artificially replicate the structure of the mind, would the result be human? Or would it bring into question our ‘humanity’, the something special we believe we are? Would it prove or disprove the soul? Or is that the problem, the crux of the matter: material reductionists boil thought down to a chemical stew, while others posit the existence of the unquantifiable, the essence, this spirit or soul. But what right have we to claim ownership over the ‘soul’? Maybe, as advocates of the Fourth Way say, the soul is something to be earned. Maybe we’re skirting the edge of instinct and compulsion, little more than fleshy automatons ourselves (though lacking the intelligence). Maybe we need to reclaim our own consciousness, our sense of self. Let’s face it, we barely know our own minds.
So where does that leave us? With more questions than answers, I hope. Questions are good; there are no easy answers here. Artificial Intelligence in some form or other is coming, like it or not, and all we can hope is that the forms it takes lean towards the benign side of the fiction we know, despite our propensity toward dystopian horizons.
*Most of these definitions come from a scientific, materialist perspective.
** Even Google Translate invented its own language; that experiment has been allowed to run.