Of course, AI has the potential to increase the gross domestic product (GDP). Of course, AI can add value to the economy. But there is still a long way to go before that happens. After all, no technology heats up ethical tempers as much as AI does.
We talked with Dalith Steiger-Gablinger, who is familiar with entrepreneurial AI readiness, and not just in Switzerland. Dalith is the co-founder of SwissCognitive – The Global AI Hub.* “To achieve significant GDP growth,” she says, “AI has to become a sales hit for Switzerland like milk and chocolate already are. That’s where we need to get to.”
Where do we stand?
When asked where we stand right now with AI, Dalith says we still have a lot of learning to do. “We are still at the point of education. We need to focus on a company’s corporate digital responsibility and appeal to people’s responsibility and reflection. People must always think along and never outsource that. In addition to the potential dangers, such as discrimination, these are precisely the central aspects in the discussion about AI,” she says, emphasizing that this form of education never stops.
And that’s why Dalith never tires of lecturing about AI to academics, CEOs or professional groups – because that is what brings her ever closer to her project of a digital Switzerland, she says. “AI is a value-creation catalyst. It can act as an economic driver.” So she never gets bored with the project of ultimately modelling Switzerland as an AI hub.
But this form of education is not a static process. It must always adapt to technological and societal change. Of course, we are dealing with a relatively complex equation that consists of two unknowns: ethics and profit.
Corporate digital responsibility
How much can we afford to think about ethics in the old mindset of maximizing profits? How much are privacy, trust and freedom worth to us at the end of the day? A lot, if you take a closer look at the latest data-protection ambitions, especially in Europe with the GDPR.
But this is deceptive, because data protection covers the questions of ethics only partially – and the sub-issue of discrimination even less. If the freedom of one person restricts the freedom of another, the best data protection is of no use. This is where explanation and education are needed. We have to delve explicitly into the questions of what we will, what we can and what is impossible.
Questions upon questions
Questions upon questions follow. Let’s apply the pilpul from the 16th century – a method of critical analysis used in Talmudic study and quite helpful in discussing ethics. The question would be: Is an algorithm not discriminatory per se? Does discrimination-free AI actually not exist because the technology is designed with underlying discrimination? Or is it the other way around: Do cultural and historical factors mean that algorithms discriminate? And aren’t humans once again the source of their own problems?
It is impossible to ask too many questions on this topic. If AI is the glaring light that blinds us, our questions can focus that light so that we can see the picture before us clearly. By asking questions, we can immerse ourselves in the stream of new technologies. Our questions allow us to reflect, across disciplines, as human beings.
So if the question is: Isn’t an algorithm discriminatory per se? – one possible answer is this: It’s about the data used to train the algorithms. It’s about the parameters used to evaluate them. It’s about the benchmarks that are applied. And it’s about identifying your own biases.
Software tools can help detect such biases, but they are of limited use in solving structural problems in our societies. So if the question is: How do I train my algorithm to be bias-free? – anyone who is honest with their fellow human beings must answer: You cannot train it to be free of bias or values. After all, it is past data that is used for training in order to navigate future waters.
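The bias check described above can be made concrete. Here is a minimal, hypothetical sketch – the function name, the toy data and the two-group setup are all invented for illustration, not any company’s actual tooling – of one common way to quantify bias: measuring how differently positive outcomes fall across two groups in past data.

```python
# A minimal sketch of "identifying your own biases" in past data:
# the demographic-parity gap, i.e. the difference in positive-outcome
# rates between groups. All names and numbers here are invented.

def demographic_parity_gap(outcomes, groups):
    """Difference between the highest and lowest positive-outcome
    rate across the groups (0.0 means equal rates)."""
    rates = {}
    for g in set(groups):
        selected = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy "past data": 1 = positive decision (e.g. hired), 0 = negative,
# alongside a group label for each person.
outcomes = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_gap(outcomes, groups)
print(f"gap between groups: {gap:.2f}")  # group a: 0.60, group b: 0.20 → 0.40
```

A gap like this does not prove discrimination by itself, but it turns the vague question “is my algorithm biased?” into something that can be measured, discussed and challenged – which is exactly the point of the questions above.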
It will work
Dalith gives examples that inspire hope. She plays with visions but deals in the concrete: “I want to give my daughter a great surprise for her birthday party. I want to paint our house in her favourite colour – but only for about 24 hours. After that, the house should return to its original white. The painter of the future could use drone technology, for example.”
Our painter can operate globally from Switzerland using cognitive technologies – his drones can be controlled from anywhere in the world. And as a matter of credibility, the painter naturally buys paint, brushes and all other materials locally in his client’s country. Even the labour that operates the drones remains local. The example shows that a painter can build a global business out of Switzerland because he has business partners in the respective countries.
Vision becoming reality
And now Dalith’s vision of Switzerland as an AI hub becomes very tangible. Even if there are no such painters yet, other examples are already being implemented. Caru Home, for example, uses AI to emotionally reduce the physical distance between elderly people and their relatives during the pandemic. Speech serves as the interface: messages can be sent to a family group chat by voice command, and emergency calls can be made the same way. And a CO2 measuring device made in Switzerland uses AI to analyze patterns and deviations in room parameters. If the CO2 content of the air is no longer right, it triggers countermeasures.
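To make the pattern-and-deviation idea tangible, here is a hedged sketch of the simplest form such monitoring can take – this is not the actual Swiss device’s algorithm, and the readings, window size and threshold are invented: a reading is flagged when it drifts too far from the rolling average of the readings before it.

```python
# A hypothetical sketch of deviation detection on room parameters:
# flag any reading that differs from the rolling mean of the previous
# `window` readings by more than `threshold_ppm`. Values are invented.

def flag_deviations(readings_ppm, window=3, threshold_ppm=200):
    """Return the indices of readings that deviate from the rolling
    baseline of the preceding `window` readings by more than the
    threshold."""
    flagged = []
    for i in range(window, len(readings_ppm)):
        baseline = sum(readings_ppm[i - window:i]) / window
        if abs(readings_ppm[i] - baseline) > threshold_ppm:
            flagged.append(i)
    return flagged

readings = [420, 430, 440, 450, 900, 460, 455]  # CO2 in ppm; 900 is a spike
print(flag_deviations(readings))  # [4] – only the spike is flagged
```

Real devices would learn the baseline from far richer patterns (time of day, occupancy, ventilation), but the principle is the same: model the normal, then react to the deviation.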
One interesting note here: Switzerland holds first place in Europe for the highest number of AI companies per capita (source: ASGARD). Switzerland’s geopolitical location, its penchant for perfection and its high security standards offer an ideal basis, Dalith says. “Selling AI like milk and chocolate – that’s where we can get to.”
However, to ensure that the topic does not slip away from us in the final meters, all the good examples must stand up to ethical scrutiny. It should be noted that discrimination in algorithms is becoming institutionalized and formalized, and that therein lies the threat. Mass categorization is taking place simply because that is what the program says. This is where differences are cemented. “We really have to keep asking ourselves if our idea about the outcome of the algorithm matches with what we were trying to achieve,” Dalith says.
Don’t algorithms mercilessly show us what makes us tick as a society? Are we succumbing to bogus objectivity because algorithms are based on normative mathematics? Do humans have the opportunity to optimize themselves via AI? Well, AI can at least be a mirror that lets us look at our systems unemotionally, at a distance from ourselves.
* SwissCognitive – The Global AI Hub is a trusted network of industries, organizations, enterprises and startups that openly and transparently discusses the opportunities, impacts and development of artificial intelligence (AI). It is a value-driven on- and offline community that puts the spotlight on practical use cases and hands-on experiences, and transfers the hype around AI into real business.