If you had told me back in 1994 that in 20 years’ time I would be talking to the robots in my iPhone, I’d have replied with … well, nothing actually (I was born in ’93), but don’t let that detract from the point I’m trying to make. The world of technology has come a long way since the end of the 20th century, and we are only now beginning to realize the immense possibilities of artificial intelligence and its potential threat to our own “natural” brainpower.
After becoming enthralled with an article I had read on the Internet entitled “When Robots Can Kill, It’s Unclear Who Will Be To Blame,” I decided to bring the topic up during a recent dinner conversation with my immediate family. Amongst the usual mealtime phrases in our house, such as “Is it supposed to be this burnt?” and “I slave over a hot stove …” (Mom, it was a microwaveable ready-meal; what the hell were you doing with a stove in the first place?), I managed to edge the chatter towards technology, and in particular AI.
Everyone seemed to look upon it rather favorably at first, and for the moment, why shouldn’t they? AI has allowed us humans to tailor-make robots that fit perfectly into our daily lives, suggesting faster routes to work, recommending TV shows we might like, even telling us jokes when we’re feeling down. I mean, where else would we get all that?
“Humans,” chirped my father. Of course: here we are talking about robots and the possibilities of artificial intelligence, when what scientists are trying to create is already right in front of us. My boss could suggest a faster route to work, my friends could recommend a TV show, and my brother could tell me a joke (it probably wouldn’t be funny, mind). Our fellow man will always be one step ahead of artificial intelligence anyway, on account of the fact that every robot is man-made.
“Not the case,” argued my sister. “Technology is fast allowing robots to teach themselves, meaning their intelligence can grow in the same way a human’s does.” This is a good point: research is being done at the University of Queensland into robots that talk to each other in a language they have created themselves. The Australian “Lingodroids” play location games that eventually lead them to establish a shared vocabulary for distances, directions and places, meaning it’s only a matter of time before AI simply becomes I.
So that’s it, we’re producing robots that will become just like every other man and woman in this world, only smarter? Great idea! At the end of the day they’re still just lights and clockwork; they can do everything we don’t want to do, with no ethical burden on our shoulders at all. “Doesn’t that worry you?” my mother asked. “Infinitely clever robots that bear no emotional attachments at all?”
We hadn’t thought of that. A machine that can learn to pick up a gun and pull a trigger wouldn’t hesitate to kill someone, not morally at least. How do we prevent these things, with unlimited knowledge and no conscience, from becoming a sincere threat to us? And even if we did, if scientists one day built a robot with a genuine soul and a sense of right and wrong, wouldn’t we just be replacing ourselves on this planet in the most literal sense possible? Will we one day look into a mirror, gazing deeply into our own eyes, only to notice wires and cogs behind them?
It was at this point in the conversation that we all turned to my brother, expecting that he, too, would contribute some point of great debate that would have us rethink our stance on the matter, but he didn’t. He sat there eating his chicken like a well-mannered young man. His decision not to weigh in with his opinion was perhaps the most telling note of the evening (despite the fact that I’m pretty sure the whole issue simply went over his head). At this moment in time, AI is improving the lives of the people who use it, and the idea that a robot could intentionally kill someone is about as likely as Elvis playing at his own memorial concert.
So why worry? If someone is clever enough to create a machine that is capable of teaching itself, of operating weapons, and of replicating human achievements without fault, then they’re probably clever enough to include an off switch. Right now, we should embrace the idea of robots and the possibilities of artificial intelligence as an exciting and useful form of evolving technology.
Opinion by Zachary John