
Making artificial intelligence socially just: why the current focus on ethics is not enough

We are in the midst of an unprecedented surge of investment into artificial intelligence (AI) research and applications. Within that, discussions about ‘ethics’ are taking centre stage to offset some of the potentially negative impacts of AI on society.

In June 2018, the Mayor of London released a new report that identifies London’s ‘unique strengths as a global hub of AI’ and positions the capital as ‘The AI Growth Capital of Europe’. This plea coincides with the government’s focus on ‘AI & Data Economy’ as the first of four ‘Grand Challenges’ to put the UK ‘at the forefront of the industries of the future’. The AI Sector Deal of £1 billion, part of the Industrial Strategy, has seen private investment of £300 million, alongside £300 million government funding for AI research in addition to already committed funds.

Albeit significant, these investments are small compared to, for example, France’s pledge of €1.5 billion in pure government funding for AI until 2022, or Germany’s new ‘Cyber Valley’, which is receiving over €50 million from the state of Baden-Württemberg alone in addition to significant investments from companies such as Bosch, BMW, and Facebook. The EU Commission has pledged an investment into AI of €1.5 billion for the period 2018-2020 under Horizon 2020, expected to trigger an additional €2.5 billion of funding from existing public-private partnerships and eventually leading to an overall investment of at least €20 billion until 2020. This wave of funding is, in part, a reaction to Silicon Valley’s traditional domination of the industry as well as China’s aspiration to lead the field (focused on both soft- and hardware and comprising large-scale governmental initiatives and significant private investments).

Large-scale investments to boost (cross-)national competitiveness in emerging fields are hardly new. What is special about this surge of investment into AI is a central concern for ethical and social issues. In the UK, the AI Sector Deal entails a new Centre for Data Ethics, whilst a recent report by the House of Lords Select Committee on AI puts ethics front and centre for successful AI innovation in the UK. Relatedly, London-based AI heavyweight DeepMind launched its Ethics and Society research unit in late 2017 to focus on applied ethics within AI innovation, alongside a range of UK institutions embarking on similar missions (such as The Alan Turing Institute with their Data Ethics Group).

The UK is not alone in the race for ‘ethical AI’: the ‘Ethics of AI’ are a central element of France’s AI strategy; Germany released a report containing ethical rules for automated driving in 2017; Italy’s Agenzia per l’Italia Digitale published a White Paper on AI naming ‘ethics’ as its No.1 challenge; the European Commission held the high-level hearing ‘A European Union Strategy for AI’ in March 2018 and recently announced the members of its new High-Level Expert Group on AI, tasked with, among other things, drafting AI ethics guidelines for the EU Commission. A similar picture materialises outside Europe – in Canada and the United States, as well as in Singapore, India and China.

With these kinds of issues surfacing, specific concerns that cut across the international landscape are materialising. To address these, different strategies are being suggested, such as implementing re-training schemes for workers, algorithm auditing, re-framing the legal basis for AI in the context of human rights (including children’s rights in the digital age), calling for intelligibility, voicing concerns against privatisation and monopolisation, suggesting ‘human-centred AI’, proposing an AI citizen jury and calling for stronger and more coherent regulation. […]

  1. Laz

    @SwissCognitive You should re-read or read up on #gametheory — if one defects all kind of need to de… https://t.co/iKFMiDasGT

  2. Kozeseeks

    @SwissCognitive #WellSaid #SharingMyInOut #EyeDream when #AI changes implement #SoShallowJustEyeSee (so… https://t.co/GjXMJdPRQ8

  3. Michael Zeldich

I do not understand how programmable devices of any kind could pose an existential risk to humanity.
Truly autonomous artificial systems, which would be subjective rather than grounded in AI paradigms and thereby programmed, could pose such a risk if developers create them fully autonomous.
Everyone attempting to grant personal rights to these future machines should understand that human beings will be unable to control them through any kind of imposed moral rules, because they will not perceive us as members of their social groups, and our moral rules will have no value to them.
The only way to make control possible is to deprive them, by design, of the ability to produce their own interests. That will give us a guarantee of safety.

  4. Bogdan Micu 🇪🇺

    @SwissCognitive #AIgovernance should aim at #AIfairness and push for incorporating some necessary… https://t.co/4nrugiUdof

  5. Michael Zeldich

Controlling reasonable artificial agents by imposing human moral norms on them is impossible, because these agents will not perceive us as members of their social groups!
