The fundamental prerequisite for the acceptance of AI is that it can gain society’s trust. To develop that trust, a transparent approach to AI is imperative.
Such an approach includes imparting basic knowledge about how AI systems work and what their methods are – this must become part of the general public educational mandate. The basic components for building trust are transparency in terms of which data is generated and used, where AI is used, and how the AI works.
Internet services propose actions for users based on data from different types of sensors – wearables, smartphones, and other connected devices. Intelligent algorithms take this huge amount of private sensor data, evaluate it, compare it with private data from other people, and draw on general knowledge and experience to generate recommendations for action for users.
This can be very useful when it comes to making good decisions.
Intelligent algorithms with copious amounts of data and almost unlimited computing power are an optimal complement to the individual human being with his or her personal knowledge, experience, and intuition.
If the Internet services make this transparent, well-calculated recommendations for action are extremely helpful for arriving at an optimal decision. However, if the Internet services themselves earn money indirectly with such services, the calculated recommendation for action will be in the interest of the Internet service and its customers, rather than in the interest of the users. Every user will inevitably become a product. The problem here is that people can lose their self-determination. A modern society cannot want that.
I have also mentioned the risk of bias in algorithms previously. This can partly be counteracted through transparency, but what is also needed here is diversity. Companies developing algorithms and services based on AI need to put a greater focus on balancing the gender and cultural diversity of their development teams. However, it is also a question of which data is used to train an algorithm. The input data that document knowledge and experience also influence the ensuing results, so knowing which data were used is highly relevant for evaluating those results. If the input data contain prejudices and discriminatory views, the intelligent algorithms will reproduce them in their results.
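How a skew in the training data reappears in an algorithm’s output can be sketched in a few lines. The following toy model is purely illustrative – the groups, figures, and the 80/20 hiring skew are invented assumptions, not data from any real system:

```python
from collections import Counter

# Hypothetical historical decisions (group, hired) with a built-in skew:
# group "A" was hired 80% of the time, group "B" only 20%.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def train(records):
    """Estimate P(hired | group) from past decisions -- the 'knowledge
    and experience' encoded in the training data."""
    totals, hires = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired          # True counts as 1, False as 0
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)

# The model faithfully reproduces the historical skew: an equally
# qualified candidate from group B receives a far lower score.
print(model["A"])  # -> 0.8
print(model["B"])  # -> 0.2
```

The model contains no explicit prejudice; it simply learns the statistics of its input. That is precisely why transparency about the training data is as important as transparency about the algorithm itself.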
As Oliver Süme, Chair of the Board at the eco Association, has commented, “Trustworthy AI applications should be developed and used in such a way that they respect human autonomy, yet function securely, fairly, and transparently. With their developments, products, and services, digital companies are driving digital change and share responsibility for answering the associated societal questions that arise.”[…]