People say AI is biased. But is it really the AI that is biased? It is actually the human being who is biased; the algorithm is just holding up a mirror to us. The data we bring into the system only mirrors how you and I think. Knowing that, we have a great chance to unbias the bias and make sure that we develop cognitive technologies that we can trust and rely on.


Interview with Dalith Steiger-Gablinger, co-founder of Swiss Cognitive, World-Leading AI Network


Swiss Cognitive co-founder Dalith Steiger-Gablinger talks about cognitive technologies, AI bias, and the possibilities and responsibilities of technology. Watch the 8-minute short interview, or listen to the full 20-minute podcast.



00:14 – Tell us about yourself

“I was born in Israel and grew up in Switzerland. As a child, I actually wanted to be a dentist for elephants, but it so happens that today I’m an entrepreneur and the co-founder of the award-winning start-up Swiss Cognitive, where we focus on AI.

School was actually very tough for me. I had to repeat classes, and I figured out that I’m not really the brilliant student. But at the end of the day, I suddenly felt very comfortable with mathematics and physics. I went into information technology, and even into computational biology and gene technology. And there I found out that technology is actually a great tool that helps human beings.”



01:31 – What exactly is your job about?

“We are running a start-up, Swiss Cognitive, which is a network that connects people and shares experience all around cognitive technology. And I’d rather talk about cognitive technology than AI, because we don’t talk about sci-fi. We don’t talk about terminators.

What can we do with technology? We have a lot of debates. We invite all the stakeholders in the ecosystem to discuss, to exchange, to learn from each other, and to make sure that we develop our future based on an intelligent AI.”

02:34 – “And I think with AI, I see it, or we see it, as our responsibility and duty to step in all around policymaking and make sure, as with human rights, that we have strong rules for AI. Within those rules, we can develop and use this technology.”

02:54 – “People say, ‘But AI is biased.’ What do you mean, AI is biased? It’s a human, it’s the human being who is biased. For me, it’s like we now have the chance to unbias the bias. The algorithm is just putting the mirror in front of us, telling us: ‘You know what, I got a biased solution or a biased result, because this is just based on the data I’m getting.’ And the data we bring into the system is, in the end, how you and I are thinking. So for me, it’s now a great chance to unbias the bias and make sure that we develop cognitive technologies towards a humane, responsible and trustworthy technology.”

03:43 – “What does that mean? If you have breast cancer and you go to the doctor, he tells you: ‘Okay, I’ve looked at the x-ray. I think it might be something. I have to talk to my friend in the US; in Australia there is a professor, he or she is a specialist in it.’ Now with AI, you can actually compare against thousands upon thousands of similar pictures. At the end of the day, it is still the doctor who is looking at me, still a human being talking to a human being. We still need that.“

04:32 – What challenges do you see for the future?

“Predicting the future is very, very difficult. But one of the most important things for us is to make sure that we do a lot of awareness campaigning around the topic of AI, bringing it down to a level where companies see their chances and how they can incorporate the technology in their business.

It is not only about research and development. It is also about simply making your products and services smart.”

05:30 – “What we need to see is which parts of our business we can disrupt with technology. Do I want to go to a doctor who is a human being but relies on technology that does all the statistical evaluation, that checks different kinds of x-rays, for example, against a huge database of other similar pictures? Or do I really want to be treated by a robot? Probably not.

Unfortunately, I had to spend quite a while in the children’s hospital with my daughter when she was small. I was sitting there, and I’m telling you, as a parent it’s really tough to sit there day by day and see how stressed the nurses are; they don’t really have time for the kids. Actually, I had a job, so I really had to take time off to be able to sit there. And I was thinking to myself: when the kid is in good shape and having a good day, if a robot came and brought her the food, brought her the pills to take, and would even joke with her, sing with her, play with her, what’s bad in that? But when she’s doing really badly, if she’s in pain, and if she’s afraid and crying, then she needs the nurse.
And then the nurse would have time, because she’s there when she needs her.

This is one of the things I see: we as human beings don’t need to fight against technology. We have to find a mutually beneficial way in which we can support each other, because the human being is not as fast and does not perform like the technology.

So we should outsource where we are weak, and keep the things in which we are strong.
And these are feelings, emotions, and, still, creativity.”

Interview with Dalith Steiger-Gablinger by UN Today

Listen to the 20-min podcast: