
Should we be considering coding ‘kindness’ into Artificial Intelligence?

There are countless examples of humanity envisaging a deus ex machina that arrives either to usher in a technological utopia or to wreak unmitigated destruction. At the end of Terminator 3, an artificial intelligence becomes self-aware and acts to eliminate its primary threat: humanity. In I, Robot, a similarly advanced artificial intelligence (AI) attempts to sequester humanity for its own good, preventing further human-on-human conflict. While most of the prominent scientists at a 2009 conference on futurism agreed that such far-fetched scenarios were unlikely, concerns were voiced about humanity's lack of capacity to respond to super-intelligent, possibly self-aware AIs. It's a big issue: such AIs have been the focus of development research for decades, and many scientists believe they are an imminent reality.

We’re going to take a brief look at the possible sources of such a super-intelligent AI, and consider some of the features scientists suggest it might have – for better or for worse.

The Internet

The Internet ticks a lot of boxes for becoming a human-like AI, and it has been a source of much discussion on the topic in the past. The possibility that the technology you use when searching for laptop deals, broadband and phone packages or the next meal voucher might become 'emergently self-aware' warrants consideration.

Such consideration, however, only needs to be brief: the Internet lacks some key features that would set it up as a future threat. For one, it is, for the most part, not autonomous; it needs human input to thrive. It is also not directed, having no coherent structure that could be used to reason about a particular task. So most scientists agree that the Internet is unlikely to evolve into an uncontrollable AI.

Speech-recognition software

Have you used Siri before? It's Apple technology, and though results can be a bit spotty, on the whole it demonstrates a field that has been evolving for a long time: computer speech recognition. At the same time, it's a bit unnerving, as some of the things Siri says can touch a uniquely human nerve. The blind insensitivity can be amusing, but do we want to retain that biting edge if Siri becomes much more competent?

Scientists have voiced particular concerns about the evolution of speech-synthesis software in conjunction with developing speech recognition. What could a criminal do with a machine that can understand and replicate any human's speech? Moreover, how easy would it be to rein in a malignant AI able to integrate itself fully into human society? Movies in which the good guys are chasing shapeshifters come to mind. As our society moves ever further into the digital, how would we be able to tell human from machine?

We're really just scratching the surface of the possibilities for future AI. Conferences like those held around DNA manipulation in the 1970s are drawing up ethical standards to which computer scientists may be compelled to adhere. The US Navy is drafting similar standards for its autonomous weaponry, which is capable of killing without human intervention, or compunction. The key question now is: should we be coding some form of 'kindness', or 'humanity', into machines? To my mind, our futurist scenarios assume some element of humanity on the part of these AIs; what actually emerges from our technological dabbling may not adhere to any of the psychological rules we profess to understand.

What do you think?

