One of the great things about robots so far is that while, sure, they have superhuman strength, they're generally dumb as a bag of doorknobs. It's very satisfying to push around some robot who's just standing there like a jerk, waiting for you to order him to lift a can or run in place. ("What'd you say? Huh? Huh?" [shove].) We humans have customarily had the upper hand with our gigantic, squishy brains.
Which is why I find the study this post is based on so worrisome. Brandon Keim over at Wired reported on an experiment using robots commanded by digital neural networks, which allow them to learn. Holy cow. What the robots learned was how to look out for number one.
The researchers set up an arena with two colored areas: one the robots were programmed to seek out (the "food" area) and one to avoid (the "poison" area). The robots that earned the most points by congregating at the food area were the ones selected from each group to serve as the progenitors of the next group's neural networks. The researchers were essentially running a hyperaccelerated model of evolution for robots, cherry-picking the smartest ones.
After several rounds of evolution, the robots reached the stage of learning to deceive one another. Each robot was capable of signaling to the others about successful food finds. There wasn't enough room for every robot at the food area, Keim reports, so over successive generations the frequency of robots reporting the location of the food area decreased. The robots evolved to be misanthropes.
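To see why selection alone can squeeze out honest signaling, here's a toy sketch of the dynamic described above. Everything in it is invented for illustration: the population size, the payoff numbers, and the single "signaling probability" gene are my assumptions, not the study's actual neural-network setup. The only idea borrowed from the study is that the food area is crowded, so broadcasting a find dilutes the finder's own reward.

```python
import random

random.seed(0)
POP, GENS, CAPACITY = 20, 60, 5   # toy sizes, not from the study

def fitness(p, mean_signal):
    # A finder always gets the base reward of 1.0. If it signals
    # (with probability p, its "gene"), the crowd it attracts dilutes
    # its share. Crowding scales with how much the whole population
    # tends to signal.
    crowd = 1 + mean_signal * (POP - 1)
    diluted_share = CAPACITY / max(crowd, CAPACITY)
    return p * diluted_share + (1 - p) * 1.0  # silent finders keep it all

def evolve():
    # Each robot's genome is just its probability of signaling a find.
    genes = [random.random() for _ in range(POP)]
    for _ in range(GENS):
        mean_signal = sum(genes) / POP
        ranked = sorted(genes, key=lambda p: fitness(p, mean_signal),
                        reverse=True)
        parents = ranked[:POP // 2]           # top half reproduce
        # Children inherit a parent's gene plus a small mutation.
        genes = [min(1.0, max(0.0,
                 random.choice(parents) + random.gauss(0, 0.05)))
                 for _ in range(POP)]
    return genes
```

Run `sum(evolve()) / POP` and the average signaling probability ends up well below where the random initial population started: quiet robots out-reproduce chatty ones whenever the food area is crowded, which is the misanthropy the study observed, compressed into a few lines.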
We have customarily given a pass to robots that have caused the death and dismemberment of humans. They're dumb, remember, so how can they be held accountable? There have been a number of robot-related deaths over the years; back in 1981, Japanese engineer Kenji Urada became the first person whose death at the hands of a robot was recorded in detail. Urada failed to turn off a robot he was maintaining, and it pushed him into a grinding machine, where he was, well, ground to death. Robots 1, humans 0.
What happens if we can no longer trust robots? If we (well, not me, but other people) have managed to breed deception into robots, hasn't the lid of Pandora's box now been cracked open just a little? Am I paranoid? Beep bop boop beep?
It won't embed for some reason, but this link is worth clicking