Research Culture Principle: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

Competition and secrecy are just part of doing business. Even in academia, researchers often keep ideas and impending discoveries to themselves until grants or publications are finalized. But sometimes even competing companies and research labs work together. It’s not uncommon for organizations to find that it’s in their best interests to cooperate in order to solve problems and address challenges that would otherwise result in duplicated costs and wasted time.

Reduce Redundancy

Such cooperation helps groups address regulation more efficiently, develop shared standards, and exchange best practices on safety. But even when companies or research labs (whether in artificial intelligence or any other field) cooperate on certain issues, each still aims to be the first to develop a new product or make a new discovery. How can organizations, especially those working on emerging technologies like artificial intelligence, draw the line between collaborating to ensure safety and competing to protect new ideas? Because the Research Culture Principle doesn’t distinguish between collaboration on AI safety and collaboration on AI development, it can be interpreted broadly, as seen in the responses of the AI researchers and ethicists who discussed this Principle with me.

Safe and ethical AI

A common theme among those I interviewed was that this Principle represents an important first step toward the development of safe and beneficial AI.
“I see this as a practical distillation of the Asilomar Principles,” said Harvard professor Joshua Greene. “They are not legally binding. At this early stage, it’s about creating a shared understanding that beneficial AI requires an active commitment to making it turn out well for everybody, which is not the default path. To ensure that this power is used well when it matures, we need to have already in place a culture, a set of norms, a set of expectations, a set of institutions that favor good outcomes. That’s what this is about — getting people together and committed to directing AI in a mutually beneficial way before anyone has a strong incentive to do otherwise.”

In fact, all of the people I interviewed agreed with the Principle. The questions and concerns they raised typically had more to do with the potential challenge of implementing it. […]