Organizations around the globe are becoming more aware of the risks artificial intelligence (AI) may pose, including bias and potential job loss due to automation. At the same time, AI is providing many tangible benefits for organizations and society.
Copyright by www.weforum.or
For organizations, this creates a fine line between the potential harm AI might cause and the costs of not adopting the technology. Many of the risks associated with AI have ethical implications, but clear guidance can provide individuals and organizations with recommended ethical practices and actions.
Three emerging practices can help organizations navigate the complex world of moral dilemmas created by autonomous and intelligent systems.
1. Introducing ethical AI principles
AI risks continue to grow, but so does the number of public and private organizations releasing ethical principles to guide the development and use of AI. In fact, many consider this approach the most efficient proactive risk mitigation strategy. Establishing ethical principles can help organizations protect individual rights and freedoms while also augmenting wellbeing and the common good. Organizations can adopt these principles and translate them into norms and practices, which can then be governed.
An increasing number of public and private organizations, ranging from tech companies to religious institutions, have released ethical principles to guide the development and use of AI, with some even calling for expanding laws derived from science fiction. As of May 2019, 42 countries had adopted the Organisation for Economic Co-operation and Development's (OECD's) first set of ethical AI principles, with more likely to follow suit.
The landscape of ethical AI principles is vast, but there are some commonalities. We have consolidated more than 90 sets of ethical principles, containing over 200 individual principles, into nine core ethical AI principles. Tracking the principles by company, type of organization, sector and geography enables us to visualize the concerns around AI that they reflect, and how those concerns vary across these groups. These principles can then be translated and contextualized into norms and practices, which can then be governed.
These core ethical AI principles are derived from globally recognized fundamental human rights, international declarations and conventions or treaties — as well as a survey of existing codes of conduct and ethical principles from various organizations, companies and initiatives.
The nine core principles can be distilled into epistemic and general principles, and they provide a baseline for assessing and measuring the ethical validity of an AI system. The landscape of these principles is meant to be used to compare and contrast the AI practices currently adopted by organizations; the principles can then be embedded to help develop ethically aligned AI solutions and culture. […]