Hasta la vista, baby – Robots are shut down after inventing their own language

by Eleanor Jackson - Friday 4th August 2017 - 1:28 pm

Facebook recently shut down an experiment after its artificial intelligence bots began conversing in a language of their own.

You read that right. In the experiment, two bots were instructed to negotiate with each other over trades of objects, including hats and balls. Each object was assigned a value, and the bots were expected to improve their bartering skills in the process.
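The setup can be pictured with a toy sketch (a hypothetical simplification for illustration, not Facebook's actual code, with made-up item names, quantities and values): each bot privately values the items on the table, and a proposed split of those items gives each side a score.

```python
# Hypothetical sketch of the value-assignment setup described above --
# a simplification for illustration, not Facebook's actual code.

ITEMS = {"hat": 2, "ball": 3}  # item type -> quantity on the table

def score(values, share):
    """Total value an agent receives from its share of the items."""
    return sum(values[item] * count for item, count in share.items())

def is_valid_split(share_a, share_b):
    """A deal is valid if the two shares account for every item."""
    return all(share_a[i] + share_b[i] == ITEMS[i] for i in ITEMS)

# Each bot's private valuation per item (made-up numbers).
bob_values = {"hat": 3, "ball": 1}
alice_values = {"hat": 1, "ball": 2}

# A proposed deal: Bob takes both hats, Alice takes all three balls.
bob_share = {"hat": 2, "ball": 0}
alice_share = {"hat": 0, "ball": 3}

assert is_valid_split(bob_share, alice_share)
print(score(bob_values, bob_share))      # Bob's payoff: 3*2 + 1*0 = 6
print(score(alice_values, alice_share))  # Alice's payoff: 1*0 + 2*3 = 6
```

Because each bot only knows its own valuations, it has to haggle to find a split both sides will accept, which is where the bartering (and the strange shorthand) came in.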

However, not everything went to plan. The AIs developed their own bizarre shorthand speech, unrecognisable to humans, which almost resembled a scene from the Chuckle Brothers (images of ‘to me, to you’ spring to mind).

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Human-like traits also appeared in the negotiations: the AIs feigned great interest in a particular item so as to create the illusion of a great sacrifice when they later gave it up.

The experiment was swiftly shut down, though not for the reasons you might think. There’s no need to panic about robot overlords just yet. According to Facebook, the work was cut short simply because ‘our interest was having bots who could talk to people’. The experiment had not followed its intended path.

What does this all mean for the world of artificial intelligence? Should a line be drawn in its development and, if so, where does human responsibility start and end? Despite the relaxed explanation for calling off the experiment, the general belief is that we must proceed with caution.
