The United Nations Chief Information Technology Officer spoke with TechRepublic about the future of cybersecurity, social media, and how to fix the internet and build global technology for social good.
Artificial intelligence, said United Nations chief information technology officer Atefeh Riazi, might be the last innovation humans create.
From then on, said Riazi, “it will be the AI innovating. We need to think about our role as technologists and we need to think about the ramifications—positive and negative—and we need to transform ourselves as innovators.”
Appointed by Secretary-General Ban Ki-moon as CITO and Assistant Secretary-General of the Office of Information and Communications Technology in 2013, Riazi is also an innovator in her own right in the global security community.
Riazi was born in Iran and is a veteran of the information technology industry. She has a degree in electrical engineering from Stony Brook University in New York, spent over 20 years working in IT roles in the public and private sectors, and was the New York City Housing Authority’s Chief Information Officer from 2009 to 2013. She has also served as the executive director of CIOs Without Borders, a non-profit organization dedicated to using technology for the good of society—especially to support healthcare projects in the developing world.
Riazi and her UN staff meet with diplomats and world leaders, NGOs, and executives at private companies like Google and Facebook to craft technology policy that impacts governments and businesses around the world.
TechRepublic’s in-depth interview with her covered a broad range of important technology policy issues, including the digital divide, e-waste, cybersecurity, social media, and, of course, artificial intelligence.

The Digital Divide
TechRepublic: I know you are quite curious about artificial intelligence. Is there a UN policy with respect to AI?
UN CITO: AI is an amazing thing to talk about because now you can look at patterns much faster than humans [can]. Do we as technologists have the sophistication of addressing the moral and ethical issues of what’s good and bad?
I think this is what scares me when it comes to AI. Let’s say we as humans say, “we want people to be happy and with artificial intelligence, we should build systems for people to be happy.” What does that mean?
I’m looking at the machine language and the path we’re creating for 10, 20, or 30 years from now, without fully understanding the ethical programming we’re putting into these systems. IT people are creating the next world. The ethical programming they do is whatever is in their heads, and so policies are being written in lines of code, in the algorithms.
We look at artificial intelligence and machine learning, and the world we see as technologists 20 years from now is very different from the world we have today. Artificial intelligence is this super, super intelligent species that is not human. Humans have reached our limits.
That idea poses so many questions. If we create an artificial intelligence that can do 80% of the labor humans do, what are the changes? Social, cultural, economic. All of these big, big questions have to be talked about.
I’m hoping that’s the United Nations, but there’s so much political opposition to those conversations. So much political opposition because we are holding on to our physical borders, and we have forgotten that those physical borders are gone. The world is virtual. We sit here as heads of departments and ministers and talk about AI. We discuss the moral, the ethical issues that people are going to confront with AI technology—positive and negative. […]