There is a term in science fiction, the ‘technological singularity’. It refers to a hypothetical point in the future when artificial intelligence surpasses that of humans, creating a kind of event horizon beyond which we cannot reasonably predict the course of events, as they would be beyond our comprehension.
Great fodder for post-apocalyptic tales, and a concept that is firmly rooted in our own – quite understandable – fears of technology encroaching on our lives.
However, it’s apparently more than a concept; that is, if the bloke I was talking to the other day in my local cafe is anything to go on.
It began as a fairly innocuous conversation about the capabilities of the wifi in our coffee house, but in the course of the discussion I learned that this gentleman was a PhD student at a local university, studying AI as an adjunct to his original degree in neuroscience. Interesting dude.
He said that the technological singularity is a reality that is set to occur at some point in the next 25 years and that we – humans – are not ready for the consequences of this event. Before I could question him further he said he had to go and fairly abruptly left the cafe.
I was struck with mixed feelings as I sat there finishing my latte and musing on this message of impending societal collapse. Either this guy was mental (which is possible; when I was fifteen I once spent three unsettling and occasionally baffling hours with an acquaintance of a friend of my mother’s, drinking whiskey and smoking joints, while my mother and her friend were out at the shops. This guy had dropped round for a visit, and finding no one but me at the house – where my mother and I were staying while we visited the aforementioned friend in Adelaide – he proceeded to get loaded, and I joined in enthusiastically, being fifteen and unsupervised. It was only after he started telling me that he had the plans for a special combustion engine that ran only on water, and that the CIA were after him for said plans, that I started to think he might not be the most stable of people; and it was when he fixed me with a sudden glare and demanded to know who I worked for that I made some urgent excuses and beat an unsteady escape. So they’re definitely out there, the crazy ones, and I’ve got a track record of getting them to open up to me. Perhaps I nod a little too understandingly or something. But I digress.) or, alternatively, he was from the future, and had happened upon me in this cafe so that I could receive the information needed to save the human race.
The third possibility is that this guy actually knows what he’s talking about and, as I’ve touched on in an earlier blog, we really are building Skynet. But if that’s the case, and the people who are involved in the study and expansion of our artificial intelligence capabilities are aware that what they are doing is leading us inexorably towards a future living underground as we fight a hopeless and unceasing war against a merciless army of machines, then why are they continuing to do so?
The answer, I think, is quite simple.
People just have to know what will happen when they, metaphorically speaking, press the button marked ‘do not press’. In this case, the button is the relentless progression of technological complexity. There is simply no way to say “let’s just stop here, folks; I think we’ve got enough for everyone to get by on” when it comes to technology. We all want to know what’s next, what’s just around the corner, what gizmo or gadget or killer app is about to be the next game changer (not realising, of course, that the next killer app could be quite literally that).
So what’s the answer? I’m not sure, but I think I might start stockpiling canned food, just in case…