Black History Month takes place during the month of February in the US, Canada and the UK.
The civil rights movement in the US and emerging technology are closely intertwined, especially from a justice-oriented perspective.
AI must be ethical and equitable in its approach to ensure it empowers communities and benefits society.
Copyright: weforum.org – “How can AI support diversity, equity and inclusion?”
Bias in algorithms is a costly human oversight, costly because of its immense impact on marginalized and underrepresented communities. As artificial intelligence (AI) continues to scale across industries and functions, it can carry unconscious biases that do more harm than good.
While AI ethicists and responsible AI practitioners regularly call for more transparency and accountability across the AI lifecycle, Black History Month is often the moment when organizations take stock of the tremendous work that has been done and the work that remains.
In AI We Trust, a podcast co-hosted by Miriam Vogel of EqualAI and Kay Firth-Butterfield of the World Economic Forum, explores the opportunities and challenges of scaling responsible AI. For this year’s Black History Month episode, How AI Does (& Should) Impact Our BHM Celebration, they were joined by Renee Cummings, an AI ethicist, criminologist, Columbia University community scholar, and founder of Urban AI.
The episode focuses on the importance of equity and inclusion in AI, but also its links to justice and civic engagement.
“So much of AI and data science is about civil rights. And when we think about Black History Month, we think about legacy, an American legacy that changed the world. As we think about AI, it’s that an algorithm can create a legacy.”
— Renee Cummings, AI Ethicist
History of technology in Black communities
Within communities of colour in the US, there is a history of distrust of technology: law enforcement, social services, housing, and healthcare have all displayed disparity and inequity, especially during the COVID-19 pandemic. Cummings explains that even in recent history, many communities were used as “guinea pigs” in research, so levels of distrust are generational and trauma-based. Deployment of new technologies, such as AI, requires the building and restoration of that trust and justice.[…]
Read more: www.weforum.org