Following the Facebook–Cambridge Analytica data scandal, everyone is a little more conscious about their digital activity and about keeping their data secure. But with the rise of artificial intelligence (AI), what other factors may we need to consider?
Big Brother is watching you
Imagine a scenario where you are in the supermarket. Your eyes fix on a box of cereal for a few seconds; moments later, your mobile phone beeps and you receive a special offer for that particular product.
This futuristic-sounding scenario may not be far from becoming a reality, and it raises some important questions. Will we have any control over this type of monitoring? Will we be able to ‘opt out’, as we can unsubscribe from an email list, or ‘update our privacy settings’, as we can in a Facebook account?
It is easy to see how the unregulated development of these technologies could lead to breaches of basic human rights. Such developments also have the potential to fundamentally change society, with large-scale decisions made by an elite group of technical experts rather than by individuals.
In the article “Safeguarding human rights in the era of artificial intelligence”, Dunja Mijatović discusses some of the key considerations as to how AI may affect our human rights.
One key point discussed is that machines function on the basis of what humans tell them. If an AI system is fed data that reflects human biases, its results will inevitably be biased, which could reinforce discrimination and prejudice.
Criminal justice systems around the world are increasingly looking into the opportunities that AI provides, from policing to crime prediction and reducing reoffending. Making important decisions about people’s lives based on algorithms, without questioning the results, could have serious human rights implications.
Facial recognition technology may prove useful in locating suspected terrorists and criminals, but at what cost? Could this be a move towards a police state in which our basic rights to privacy and freedom of expression are denied?
A recent report commissioned by the Council of Europe states that “AI is unavoidably based on data processing. Therefore, algorithms necessarily have an impact on personal data use and pose questions about the adequacy of the existing data protection regulations in addressing the issues that these new paradigms raise.”
How are AI and data processing currently regulated?
The existing regulatory framework applicable to AI and data processing is grounded mainly in the Council of Europe Convention 108 – the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.
The Council of Europe report recommends that data-centric AI development should be based on the principles of Convention 108 as the foundation of a digital society. The key elements of this approach are: