Science fiction luminary Isaac Asimov’s Three Laws of Robotics mandated safety measures to limit artificial intelligence in robots, protecting humans from harm.
That was 70 years ago, when the laws were written largely for science fiction novels.
Nowadays, researchers have developed artificial synapses, a computer has won Jeopardy!, and Google's computers can do something called supervised learning.
With the robotics industry saying household robots will be the new PCs in ten years, the impact of robotic AI on humanity is no longer tomorrow's problem.
RISKS OF SINGULARITY
Experts say the principles that govern AI are dangerously insufficient and even expose humanity to "extinction-level risks".
In short, the possibility of humanity creating a technology system more sophisticated than a human brain, even by accident, is now a very real prospect.
Futurist Ray Kurzweil, more recently obsessed with immortality and nano-medicine, popularised the term "singularity" for that moment.
Some are calling for a narrowing of AI's capabilities before private industry or the military develops an AGI, or Artificial General Intelligence, greater than our own.
AI vs EVOLUTION
Founding engineer of Skype and co-creator of the pioneering file sharing service Kazaa, Estonian programmer and philosopher of technology Jaan Tallinn is at Sydney University to talk about his "Intelligence Stairway" theory.
He argues that artificial technologies will, and in some cases already have, taken over from biological evolution, rapidly propelling us towards an intelligence explosion.
"Evolution is an optimisation process." Jaan says.
"It is trying to optimise the fitness of organisms against a background of environment and (that's) also like machine-learning," he said.
"Evolution actually made a sort of mistake in a sense that it actually created primates with optimisation ability and that optimisation ability got powerful enough to actually understand evolution. It actually created something that was more powerful than itself."
Jaan's point is that humans are on the verge of potentially repeating that mistake.
"I’m giving about 50 per cent probability of this thing (technological singularity) happening this century," he said.
"If this thing is going to be really slow, then people have time to turn this thing into policy and then because this is a really contentious issue, it might end up doing a lot of damage," he said.
He also warns of the dangers of only seeing AI as Hollywood's humanoid robots.
"I mean intelligence is really about planning and protection and you don’t really need arms and legs to do that. For example, Google is a very famous application of AI."
CYBORGS: A SLOW INTELLIGENCE EXPLOSION
Oxford University's Dr Anders Sandberg is more circumspect about singularity and the rise of the machines - but for somewhat unnerving reasons.
"I don’t think it is very likely that they get out of hand and subjugate us and wipe us out," Dr Sandberg said.
"We’re made out of very useful atoms that can be configured to something else," he said.
For someone with a background in computational neuroscience, Dr Sandberg now spends more time on the philosophy behind technology-enabled collective intelligence, a sign of the growing coalescence between the once disparate fields of computing and philosophy.
Dr Sandberg is one of a unique group of thinkers at Oxford's Future of Humanity Institute who have been set loose on the big-picture questions, largely the future of intelligence.
"What intelligence is and how to achieve it has turned out to be a surprisingly hard problem to solve," Dr Sandberg said
"As soon as we achieve [it] then we see people say "oh that’s not real intelligence" he said.
Dr Sandberg envisages a more harmonious "intelligence explosion", in which a slow blend of intelligences might occur.
"We would have a situation where a total amount of intelligence rising exponentially but still on a time scale that allows us to control it or various forms of intelligence control each other," Dr Sandberg said.
He predicts that a non-human intelligence, created in our likeness, would share our values.
"Think about how our society works," Dr Sandberg said.
"We have a lot of minds here and not all of them are nice and some of them want to subjugate other minds but we control that by norms, by ethics, and by good upbringing and having police and institutions and economic incentives to actually behave nicely,"
"If you have a growth of smart machines in the same way, well then you can just integrate them so they will also not want to run afoul a police and thrown out of the community because it’s so beneficial to be part of it," he said.
Some AI experts, like Marvin Minsky, have begun to approach emotions as logical ways to think in addressing different types of problems.
A CAUTIOUS APPROACH
Even Jaan Tallinn admits that we know very little, and what we do know changes almost daily.
"There are all sorts of possible scenarios out there. Some of them are just disastrous; some of them are like Utopian, really good. And we have to be aware of the whole spectrum of things and not get fixated on the utopian side of things and think about everything that can happen," Jaan said.
Leading AI programmers now talk of a superhuman partnership or complementary computing, rather than a super-intelligent competitor.
Foundational computing engineer Irving John Good wrote in 1963 that "the first ultra-intelligent machine is the last invention that man need ever make."
But if a super-intelligence is created with all the flaws and emotions that bind us, that may prove true only if our craftsmanship is not perfect.