Corporations and educational institutions can work together to ensure that product development teams include diverse employees from a variety of disciplines, people capable of judging the social impact of new technologies and of ensuring those technologies are deployed responsibly.
copyright by www.bizjournals.com
“This new algorithm will need a lot of pictures of people. What if we use a morgue so we don’t have to worry about consent?” Although this is a fictitious example, modern-day tech workers often face similar questions.
Why? Because the rise of artificial intelligence based on machine learning has created a new class of sociotechnical challenges. Now is the time for industry and universities to acknowledge these new challenges and step up to meet them.
Since the beginning of the technology industry, educational institutions, legislatures, companies, and developers have worked to improve the quality of products and services. The resulting curricula, laws, corporate policies, standards, and development approaches have provided frameworks for engineers and product managers. Emerging technologies require the development of new frameworks.
In the early 2000s, industry had to get serious about computer security. Today, we have a new challenge: How do you turn the goal of responsible AI into code?
Specialized groups at Microsoft focus on translating ethics policy, research, and customer needs into actionable information for product teams across the company. This approach makes responsible AI real for employees and democratizes the ability to implement responsible AI across every product.
The concept is not novel. It adapts the horizontal approach the software industry has long used to achieve security, privacy, and accessibility. Addressing these challenges is hard. It requires dedication, a long-term focus, and the willingness to include intangibles when computing return on investment.
Addressing the range of challenges facing contemporary companies also requires an expansion of the talent profile of employees. Specifically, we need to ensure that those entering careers in technology have studied how technology can impact society.
Higher educational institutions should take this responsibility seriously. For example, a new initiative in ethics and transformative technologies at Seattle University, supported by Microsoft, is stimulating the development of new undergraduate courses in ethics and technology.
To implement responsible AI, new initiatives should focus on three contemporary shortcomings:
- Developers must learn how to apply responsible AI principles. For example, new approaches are needed so that data scientists can explain how their AI algorithms arrive at their decisions. Product managers should be able to raise concerns in product reviews early in the development cycle when it is still possible to make fundamental changes to the design. Access to information should be democratized with self-serve educational material.[…]
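The explainability goal above can be made concrete even for a very simple model: when a scorer is linear, each feature's weighted contribution can be reported alongside the decision, so a reviewer can see exactly what drove the outcome. The sketch below uses hypothetical feature names, weights, and a hypothetical approval threshold, purely for illustration; real explainability tooling for complex models is far more involved.

```python
# A minimal sketch of decision explainability for a linear scorer.
# All feature names, weights, and the threshold are hypothetical.

WEIGHTS = {"income": 0.5, "credit_history": 0.3, "debt_ratio": -0.4}
THRESHOLD = 0.2  # hypothetical approval cutoff

def score(features):
    """Compute the model's overall decision score."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features):
    """Break the score into per-feature contributions, so the
    reason for the decision is visible, not just the decision."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

applicant = {"income": 0.8, "credit_history": 0.9, "debt_ratio": 0.5}
print(explain(applicant))
print("approve" if score(applicant) > THRESHOLD else "decline")
```

Because the contributions sum exactly to the score, a product reviewer can audit any individual decision, which is the kind of early, inspectable checkpoint the bullet above calls for.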