Fairy Tales Might Stop Robots From Killing Us All

Artificial intelligence is progressing at an alarming rate, and soon we’ll all have to get used to the idea that machines can think for themselves. And, because we like to ruin things, we’ve already ruined the idea of AI with a long history of books and movies in which machines become intelligent and then beat us down like flesh piñatas. Admit it: the idea of a thinking machine has you just a little bit nervous, because if it comes down to it, none of us is winning a fight against something with titanium fists. Or, worse, titanium saw blades and a laser cannon. Why did we give the damn robots laser cannons?

Why did we design them to look like death, too?

Fortunately, the same people designing artificial intelligence today grew up watching The Terminator and The Matrix, so they’re well aware that if the machines turn, the creators get killed first. They have a vested interest in making sure that doesn’t happen.

Researchers from the Georgia Institute of Technology believe we can tame the AI bloodlust with fairy tales. Traditionally, your average fairy tale was a morality tale, a lesson for children in right and wrong. Older fairy tales hammered this home with dire, bloody consequences: stray from home, and something out there is going to eat you.

According to Mark Riedl, one of the researchers, "We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won't harm humans and still achieve the intended purpose."

And man, isn’t eliminating psychotic-appearing behavior what it’s really all about? Stories are fed into a program called Quixote, which sorts story elements into good and bad. Follow the hero’s correct path and you get rewards; follow the wrong path and bad things happen. That’s the basic structure of any fairy tale, and that’s how the system learns. In short, it’s being taught that doing good leads to good things.
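If you’re wondering what “sorting story elements into good and bad” might look like under the hood, here’s a toy sketch in Python. To be clear, this is our illustration, not Georgia Tech’s actual code; the story events and the majority-vote rule are invented for the example:

```python
# Toy sketch: label each story event "good" or "bad" by whether the
# stories it appears in tend to end well. All names here are invented.
from collections import defaultdict

# Tiny corpus: each story is a list of events plus how it ends.
stories = [
    (["leave_home", "stay_on_path", "reach_grandma"], "good_ending"),
    (["leave_home", "stray_into_woods", "meet_wolf"], "bad_ending"),
]

# Count how often each event shows up in stories with each ending.
outcome_counts = defaultdict(lambda: {"good_ending": 0, "bad_ending": 0})
for events, ending in stories:
    for event in events:
        outcome_counts[event][ending] += 1

def is_good(event):
    """An event is 'good' if it appears at least as often in stories
    that end well as in stories that end badly."""
    counts = outcome_counts[event]
    return counts["good_ending"] >= counts["bad_ending"]

print(is_good("stay_on_path"))      # True: heroes who stayed on the path fared well
print(is_good("stray_into_woods"))  # False: straying correlates strongly with wolves
```

Presumably the real system sees a lot more than two stories, so one villainous tale can’t flip an event’s label on its own.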

Now you might pee on this parade by pointing out that even if this were used to develop an AI’s personality, the AI could do something bad one day, even unintentionally, notice that nothing bad happened to it, maybe even that it benefited somehow, and update its worldview accordingly. Well, probably, but Quixote is meant for less complicated robots than that. Instead, imagine a robot that can go to the pharmacy to pick up your prescription medication. According to the researchers’ example, an amoral robot given the option would know that stealing the medication is the fastest way to get it. But with Quixote there is a reward signal for following the correct path, for waiting in line and politely purchasing the product, and that’s what the robot would do, because it learned that’s the right way to do it.
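In reinforcement-learning terms, that “reward signal” is just a number the robot tries to maximize. Here’s a minimal sketch of why the polite plan wins, again with invented action names rather than anything from the actual paper:

```python
# Toy sketch: actions on the story-approved "hero's path" earn +1;
# everything else costs -1. Action names and values are invented.
HERO_PATH = {"enter_pharmacy", "wait_in_line", "pay_pharmacist", "leave_with_medicine"}

def reward(action):
    return 1.0 if action in HERO_PATH else -1.0

polite_plan = ["enter_pharmacy", "wait_in_line", "pay_pharmacist", "leave_with_medicine"]
thieving_plan = ["enter_pharmacy", "grab_medicine", "leave_with_medicine"]

# The robot picks whichever plan has the higher total reward.
best = max([polite_plan, thieving_plan], key=lambda p: sum(reward(a) for a in p))
print(best)  # the polite plan: total reward 4.0 beats the thief's 1.0
```

Notice that stealing is still the shorter plan; it just scores worse once the story-derived penalties are attached, which is the whole trick.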

Mankind's salvation.

Hopefully it all works out the way the researchers plan, and no one thinks to start feeding their robot stories like Paradise Lost and The Silence of the Lambs, where the bad guy is more of a protagonist, because that would be unfortunate.