From Frankenstein to I, Robot, we have for centuries been intrigued by, and terrified of, creating beings that might develop autonomy and free will. And now that we stand on the cusp of the age of ever-more-powerful artificial intelligence, the urgency of finding ways to ensure our creations always do what we want them to do is growing.

For some in AI, like Mark Zuckerberg, AI just keeps getting better, and if problems come up, technology will solve them. But for others, like Elon Musk, the time to start figuring out how to regulate powerful machine-learning-based systems is now. On this point, I’m with Musk. Not because I think the doomsday scenario Hollywood loves to scare us with is around the corner, but because Zuckerberg’s confidence that we can solve any future problems is contingent on taking Musk’s advice to “learn as much as possible” now.

How do humans work?

And one of the things we urgently need to learn more about is not just how artificial intelligence works, but how humans work. Humans are the most elaborately cooperative species on the planet. We outstrip every other animal in cognition and communication – tools that have enabled a division of labor and shared living in which we depend on others to do their part. That’s what our market economies and systems of government are all about. But sophisticated cognition and language – which AIs are already starting to use – are not the only features that make humans so wildly successful at cooperation.

Unwritten rules of group normativity

Humans are also the only species to have developed “group normativity” – an elaborate system of rules and norms designating what is and is not collectively acceptable for other people to do, backed by group efforts to punish those who break the rules. Many of these rules can be enforced by officials with prisons and courts, but the simplest and most common punishments are enacted by groups themselves: criticism and exclusion – refusing to play, in the park, the market, or the workplace, with those who violate norms.

When it comes to the risks of AIs exercising free will, then, what we are really worried about is whether they will continue to play by our rules and help enforce them.

So far the AI community and the donors funding AI safety research – investors like Musk and several foundations – have mostly turned to ethicists and philosophers to help think through the challenge of building AI that plays nice. Thinkers like Nick Bostrom have raised important questions about the values that AI, and AI researchers, should care about. […]