Artificial intelligence has evolved from a mere concept into a serious societal topic. In fact, some experts argue that machine intelligence is already being used to shape who holds power in society, pointing to claims that the technology was used to influence the outcome of the most recent U.S. presidential election.

Yes, the impact of AI on society is so serious that experts cannot help worrying about possible misuse of the technology. Unfortunately, even before Elon Musk, the late Prof. Stephen Hawking, and other technology figures could pressure governments to regulate the coming robots that, some claim, might even harvest human organs, reports emerged that AI is already biased, and badly so.

Granted, the original intention of machine intelligence was to solve society's problems, and it goes without saying that AI is progressing quite well in solving medical issues. For some time now, the technology has also been seen as instrumental in tackling security issues, which has led the U.S., Israel, and China to employ it to curb criminal activity.

What About When AI Is Not Reliable?

Last month, at the AI Ethics and Society conference in New Orleans, a team of scientists presented an invention they thought would be welcomed. The team leader showed how a new algorithm could be used to predict retaliatory activity after a crime. Instead, a heated debate broke out, with critics pointing out how the same system might erroneously mark innocent people as criminals or create mistrust of the police.

The concerns were so plausible, and so likely to occur in real life, that the presenter received serious backlash from the audience. Astonishingly, the Harvard University engineer conceded that the concerns were valid but tried to dodge responsibility. "I'm just an engineer," he said, forgetting that engineers are the very people developing these algorithms for human use.

In other words, had the audience not spoken out, the project would have been considered safe when it is not. Another serious real-life occurrence: Harvard professor Dr. Latanya Sweeney is on record saying that she once paid a company to present her presumably clean citizenship record to a potential employer, but a Google search of her name turned up targeted ads suggesting she had previously been detained. […]