What does responsible AI look like—and who owns it? Will artificial intelligence (AI) help us or hinder us? AI as a problem-solving tool offers great promise. But cyberattacks, social manipulation, competing financial incentives, and more warn of AI’s dark side. For organizations expecting AI to transform their companies, ethical risks could be a chief concern.

Copyright by: https://www2.deloitte.com

What are AI ethics?
AI is a broad term encompassing technologies that can mimic intelligent human behavior.1 Four major categories of AI are in increasingly wide use today:
- Machine learning: The ability of statistical models to develop capabilities and improve their performance over time without following explicitly programmed instructions (a minimal sketch appears after this list).
- Deep learning: A complex form of machine learning used for image and speech recognition and involving neural networks with many layers of abstract variables.
- Natural language processing (NLP): A technology that powers voice-based interfaces for virtual assistants and chatbots, as well as natural-language querying of data sets, by extracting or generating meaning and intent from text in a readable, stylistically natural, and grammatically correct form.
- Computer vision: A technology that extracts meaning and intent out of visual elements, whether characters (in the case of document digitization) or the categorization of content in images such as faces, objects, scenes, and activities.2
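To make the machine-learning definition above concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The data set and its features are synthetic and invented for illustration; the point is simply that the model derives its behavior from labeled examples rather than from hand-written rules.

```python
# A minimal sketch of "learning from data": no hand-coded decision rules;
# the model derives its decision boundary from labeled examples.
# The data set is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic examples: two numeric features per record.
X = rng.normal(size=(200, 2))
# Labels follow a pattern in the features that the model must discover.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression()
model.fit(X, y)                # "training": capability comes from data alone

print(model.predict(X[:5]))    # predicted labels for the first five records
print(model.score(X, y))       # accuracy on the training sample
```

Nothing in the code specifies the decision rule; retraining on different data yields different behavior, which is exactly why the quality and representativeness of the data carry ethical weight.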
Ethics is “the discipline dealing with what is good and bad and with moral duty and obligation,” as well as “the principles of conduct governing an individual or a group.”3 In commerce, an ethical mindset supports values-based decision making. The aim is not only to do what’s good for business, but also what’s good for the organization’s employees, clients, and customers, and for the communities in which it operates.
Bringing together these two definitions, “AI ethics” refers to the organizational constructs that delineate right and wrong—think corporate values, policies, codes of ethics, and guiding principles applied to AI technologies. These constructs set goals and guidelines for AI throughout the product lifecycle, from research and design, to build and train, to change and operate.
Considerations for carrying out AI ethics
Conceptually, AI ethics applies both to the goal of an AI solution and to each of its parts. AI can be used to achieve an unethical business outcome even though its parts—machine learning, deep learning, NLP, and/or computer vision—were all designed to operate ethically.
For example, an automated mortgage loan application system might include computer vision and tools designed to read handwritten loan applications, analyze the information provided by the applicant, and make an underwriting decision based on parameters programmed into the solution. These technologies don’t process such data through an ethical lens—they just process data. Yet if the mortgage company inadvertently programs the system with goals or parameters that discriminate unfairly based on race, gender, or geographic information that acts as a proxy for such characteristics, the system could make discriminatory loan approvals or denials.
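That failure mode can be sketched in a few lines. The function, thresholds, and ZIP-code prefixes below are all hypothetical; the point is that a rule which never mentions race or gender can still discriminate through a hand-set “geographic” parameter acting as a proxy.

```python
# Hypothetical underwriting rule: it looks neutral, but the hard-coded
# ZIP-code penalty can act as a proxy for a protected characteristic
# and produce systematically discriminatory denials. All values invented.

PENALIZED_ZIP_PREFIXES = {"606", "482"}    # arbitrary illustrative prefixes

def underwrite(income: float, debt: float, zip_code: str) -> bool:
    score = income / max(debt, 1.0)
    if zip_code[:3] in PENALIZED_ZIP_PREFIXES:
        score *= 0.5             # the "geographic" parameter -- the proxy
    return score >= 2.0          # approve if the score clears the threshold

# Two applicants with identical finances get different outcomes
# purely because of where they live.
print(underwrite(80_000, 30_000, "60601"))   # False: penalized prefix
print(underwrite(80_000, 30_000, "94105"))   # True
```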
In contrast, an AI solution built for an ethical purpose can still rely on processes that undermine that purpose through poor integrity or accuracy. For example, a company may deploy an AI system with machine learning capabilities to support the ethical goal of non-discriminatory personnel recruiting. The company begins by using the AI capability to identify performance criteria based on the best performers in the organization’s past. Such a sample of past performers may reflect biases in past hiring (including discriminatory criteria such as gender, race, or ethnicity) rather than performance alone.
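How that sampling bias propagates into a model can also be illustrated. The data below is synthetic and deliberately exaggerated: because historical hiring favored one group, “top performer” labels correlate with group membership rather than with the skill feature, and a model trained on that sample learns to use the group itself as a predictor.

```python
# Synthetic, deliberately exaggerated recruiting data: historical hiring
# favored group 0, so "top performer" labels track group membership more
# than the skill feature. A model trained on this learns that correlation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
n = 1000
group = rng.integers(0, 2, size=n)       # 0 or 1: two hypothetical groups
skill = rng.normal(size=n)

# Biased labels: only members of group 0 were ever recorded as top
# performers, almost regardless of skill.
top_performer = ((group == 0) & (skill > -1.0)).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, top_performer)

# The large weight on the group column shows the model is using group
# membership itself, not just skill, to predict "top performer".
print(dict(zip(["skill", "group"], model.coef_[0])))
```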
In other words, the machine learns from the data it processes, and if the data sample isn’t representative or accurate, the lessons it learns won’t be accurate either and may lead to unethical outcomes. To understand where ethical issues with artificial intelligence could arise, and how those issues might be avoided in the future of work, it helps to organize AI along four primary dimensions of concern:
- Technology, data, and security. Look at the organization’s approach to the AI lifecycle from an ethical perspective, including the ways it builds and tests the data and models that go into AI-enabled solutions (one concrete form such a test can take is sketched after this list). Leadership in this dimension comes from the organization’s information, technology, data, security, and privacy chiefs.
- Risk management and compliance. Find out how the organization develops and enforces policies, procedures, and standards for AI solutions. See how they tie in with the organization’s mission, goals, and legal or regulatory requirements. The heads of risk, compliance, legal, and ethics play a role in this dimension.
- People, skills, organizational models, and training. Understand and monitor how the use of AI impacts the experiences of both employees and customers. Continuously assess how operating models, roles, and organizational models are evolving due to the use of AI. Educate all levels of the workforce and implement training initiatives to retool or upskill capabilities. Establish protocols to incentivize ethical behavior and encourage ethical decisions along the AI lifecycle. In this dimension, the human resources function shares responsibility with learning and development teams, ethics officers, and broader executive leadership.[…]
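Under the first of these dimensions, one concrete, hypothetical form that testing models from an ethical perspective can take is a disparate-impact check over model decisions: the approval rate for one group divided by the approval rate for a reference group, with values well below 1.0 flagging potential adverse impact. The function name and the 0.8 threshold (the common “four-fifths rule” heuristic) are assumptions for illustration; real audits use richer metrics and tooling.

```python
# A minimal, hypothetical fairness check comparing approval rates across
# groups with the "four-fifths rule" heuristic. Illustrative only.
from typing import Sequence

def disparate_impact(decisions: Sequence[int], groups: Sequence[str],
                     protected: str, reference: str) -> float:
    """Approval rate of the protected group over that of the reference group."""
    def rate(g: str) -> float:
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

decisions = [1, 0, 1, 1, 0, 0, 1, 1]             # 1 = approved
groups    = ["B", "B", "A", "A", "B", "B", "A", "A"]

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:                                   # four-fifths rule threshold
    print("potential adverse impact -- review the model and its training data")
```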
Read more: www2.deloitte.com