
Artificial intelligence in Australia needs to get ethical, so we have a plan

The question of whether technology is good or bad depends on how it’s developed and used. Nowhere is that more topical than in technologies using artificial intelligence (AI).

When developed and used appropriately, artificial intelligence (AI) has the potential to transform the way we live, work, communicate and travel.

New AI-enabled medical technologies are being developed to improve patient care. There are persuasive indications that autonomous vehicles will improve safety and reduce the road toll. Machine learning and automation are streamlining workflows and allowing us to work smarter.

Around the world, AI-enabled technology is increasingly being adopted by individuals, governments, organisations and institutions. But along with the vast potential to improve our quality of life comes a risk to our basic human rights and freedoms.

Appropriate oversight, guidance and understanding of the way AI is used and developed in Australia must be prioritised.

AI gone wild may conjure images of the Terminator and Ex Machina films, but it is much simpler, more fundamental issues that need to be addressed at present, such as:

how data is used to develop AI
whether an AI system is being used fairly
in which situations we should continue to rely on human decision-making.

We have an ethics plan

That’s why, in partnership with government and industry, we’ve developed an ethics framework for AI in Australia. The aim is to catalyse the discussion around how AI should be used and developed in Australia.

The ethics framework looks at various case studies from around the world to discuss how AI has been used in the past and the impacts it has had. The case studies help us understand where things went wrong and how to avoid repeating past mistakes.

We also looked at what was being done around the world to address ethical concerns about AI development and use.

Based on the core issues and impacts of AI, eight principles were identified to support the ethical use and development of AI in Australia.

  1. Generates net benefits: The AI system must generate benefits for people that are greater than the costs.
  2. Do no harm: Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.
  3. Regulatory and legal compliance: The AI system must comply with all relevant international and Australian local, state/territory and federal government obligations, regulations and laws.
  4. Privacy protection: Any system, including AI systems, must ensure people’s private data is protected and kept confidential, and prevent data breaches that could cause reputational, psychological, financial, professional or other types of harm.
  5. Fairness: The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.[…]

read more – copyright by theconversation.com
