Responsible artificial intelligence (AI) must be embedded into a company’s DNA.

Copyright: venturebeat.com – “Responsible AI must be a priority — now”

“Why is bias in AI something that we all need to think about today? It’s because AI is fueling everything we do today,” Miriam Vogel, president and CEO of EqualAI, told a live stream audience during this week’s Transform 2022 event.

Vogel discussed the topics of AI bias and responsible AI in depth in a fireside chat led by Victoria Espinel of the trade group The Software Alliance.

Vogel has extensive experience in technology and policy, including at the White House, the U.S. Department of Justice (DOJ) and at the nonprofit EqualAI, which is dedicated to reducing unconscious bias in AI development and use. She also serves as chair of the recently launched National AI Advisory Committee (NAIAC), mandated by Congress to advise the President and the White House on AI policy.

As she noted, AI is becoming ever more significant to our daily lives — and greatly improving them — but at the same time, we have to understand the many inherent risks of AI. Everyone — builders, creators and users alike — must make AI “our partner,” as well as efficient, effective and trustworthy.

“You can’t build trust with your app if you’re not sure that it’s safe for you, that it’s built for you,” said Vogel.


Now is the time

We must address the issue of responsible AI now, said Vogel, as we are still establishing “the rules of the road.” What constitutes AI remains a sort of “gray area.”

And if it isn’t addressed? The consequences could be dire. People may be denied the right healthcare or employment opportunities as a result of AI bias, and “litigation will come, regulation will come,” warned Vogel.

When that happens, “We can’t unpack the AI systems that we’ve become so reliant on, and that have become intertwined,” she said. “Right now, today, is the time for us to be very mindful of what we’re building and deploying, making sure that we are assessing the risks, making sure that we are reducing those risks.”

Good ‘AI hygiene’

Companies must address responsible AI now by establishing strong governance practices and policies and by building a safe, collaborative, visible culture. This has to be “put through the levers” and handled mindfully and intentionally, said Vogel.

For example, in hiring, companies can begin simply by asking whether platforms have been tested for discrimination.[…]

Read more: www.venturebeat.com