The House of Lords has urged the government to get a grip on algorithmic bias and stop large technology companies from monopolising control of data in its wide-ranging report into the use and development of artificial intelligence in the UK.
After nearly ten months of collecting evidence from more than 200 witnesses, including government officials, academics and companies, the Select Committee on Artificial Intelligence called on the government to use the Competition and Markets Authority to stop large technology companies operating in the UK from monopolising the control of data. “We must make sure that [UK companies] do have access [to datasets] and it isn’t all stitched up by the big five, or whoever it might be,” says the chair of the committee, Lord Timothy Clement-Jones – pointing the finger at Amazon, Facebook, Google, Twitter and Microsoft.
The report also places strong emphasis on the UK’s role as an ethical leader in the AI world, calling for the creation of tools that can be used to identify algorithmic bias and make it easier for people to understand how AI systems reach their decisions. From an economic perspective, this makes a lot of sense, says Nick Srnicek, a lecturer in digital economy at King’s College London. “There’s a real challenge for the UK to be able to keep up with the US and China in terms of investment in AI,” he says. “Instead, you have to think about cheaper ways to take leadership, and the ethical part could be really useful there.”
In extreme cases, Clement-Jones says regulators should be prepared to reject an algorithm altogether if auditors cannot work out how it reaches its decisions. “We do think there could be circumstances where the decision that is made with the aid of an algorithm is so important that you may insist on that level of explainability or intelligibility from the outset,” he says. These rules would apply to any algorithm used to make decisions about UK citizens, not just algorithms developed within the UK.
The committee also recommends that responsibility for regulating AI systems should fall to existing regulators such as Ofcom, Ofgem and the Information Commissioner’s Office (ICO). Crucially, however, it doesn’t call for more funding for these bodies or set out how they should be equipped to carry out their new responsibilities. In the wake of the Cambridge Analytica scandal, the ICO was forced to wait four days before it received a court warrant to search the firm’s offices for evidence that it had retained Facebook data improperly acquired from the researcher Alexander Kogan.
The report namechecks a handful of newly created government bodies, including the Centre for Data Ethics and Innovation, the AI Council and the Government Office of AI, as well as the private sector Alan Turing Institute, but doesn’t detail how each of these organisations will inform and influence government AI strategy. “With those bodies, you wonder if they’re spreading too thinly,” says Michael Veale, a public sector machine learning researcher at University College London. […]