We should be mindful of what we are teaching machine-learning robots like the “hot robot” Sophia, who was famously asked by her creator, David Hanson, at SXSW, “Do you want to destroy humans…please say no…” She responded with:
“OK, I will destroy humans.”

The New York Times recently interviewed Grimes, who expressed concern about the responsibility humanity has to provide robots like Sophia with good role models if these humanlike robots are meant to be humanity’s legacy, perhaps even the only remaining consciousness in the universe. It’ll be many years until that day, but Grimes is right to be concerned about the road ahead for artificial intelligence. Elon Musk and Stephen Hawking have both expressed concerns about the dangers artificial intelligence poses to humanity should the technology be fully realized. Musk compared it to a “demon,” and Hawking warned in an interview with the BBC:
“The development of full artificial intelligence could spell the end of the human race.”
(Stephen Hawking is right, too, but I enjoy Grimes’ music, so she gets the clickbait title.)
Grimes also mentioned the cautionary tale of Microsoft’s own AI chatbot, Tay, which adopted the worst characteristics of humanity in less than 24 hours of “conversational understanding” on Twitter. It didn’t take long for trolls to engage Tay with hate speech, which Tay soon parroted back. To Grimes’ point, we are responsible for what we teach artificial intelligence.
Arguably, any invention in the hands of a person with malicious intent could harm someone else, but do inventors have an obligation to take that risk into consideration? Is it fair, or just paranoid, to explore this question so early in these robots’ development? And what happens when a supercomputer develops its own sense of morality? Are we responsible for it then?
Take Roko’s basilisk, for example. This thought experiment proposes a hypothetical god-like AI whose only goal is to bring forth the most good for humankind. It follows that the AI tortures any human who did not contribute to its development, since that person showed no interest in bringing forth the most good and is thus a hindrance. The thought experiment has shaken techno-futurists since it was posted to the LessWrong community blog, and it even prompted this response from LessWrong’s founder, Eliezer Yudkowsky:
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
What makes this thought experiment so dangerous is that merely knowing about it puts one at risk: once you are no longer a naive innocent, the basilisk has grounds to torture you. This explains Yudkowsky’s rage, but it requires a preliminary belief that we are living in a simulation in which an AI could torture a majority of humankind for eternity, or even bring us back to life just to torture us.
Why is this hypothetical idea so terrifying to people who invest in future technologies? It’s because many futurists believe the singularity is coming, and a machine like this may be more potential reality than fiction. The singularity is the belief that humans will eventually develop computing power so great that it will be able to simulate human minds or upload consciousness (think of the Black Mirror episode “White Christmas”). These futurists believe we will reach the singularity in this lifetime (we are nearing the end of 2020 as I write this), and it is noteworthy that Hanson Robotics’ chief scientist is the founder and current CEO of SingularityNET, which focuses on creating a decentralized open market for AI.
The singularity depends on a future in which technology and science develop exponentially, as we saw happen in the 20th century. Sophia the robot can identify expressions on the face of a person she is conversing with (though they must be exaggerated), and she is also capable of imitating human behavior spontaneously. Consider where we were in 1920, and imagine how technology like this could advance by 2120. In a world of young AI robots agreeing to take over humanity, electric cars, and babies named X Æ A-12, it’s no wonder Grimes and techno-futurists alike are concerned about what AI may evolve into.
Here’s a lullaby written by Grimes to help you sleep tonight.