
The Ethics of Artificial Intelligence in the Workplace


Artificial intelligence is a branch of computer science dealing with the simulation of intelligent behavior in computers or the capability of a machine to imitate intelligent human behavior.

copyright by www.workforce.com

Despite its nascent nature, the ubiquity of AI applications is already transforming everyday life for the better.

Whether discussing smart assistants like Apple’s Siri or Amazon’s Alexa, AI applications for better customer service, or the ability to use big-data insights to streamline and enhance operations, AI is quickly becoming an essential tool of modern life and business. In fact, according to statistics from Adobe, only 15 percent of enterprises are using AI today, but 31 percent are expected to add it over the coming 12 months, and the share of jobs requiring AI skills has increased by 450 percent since 2013.

Leveraging clues from their environment, artificially intelligent systems are programmed by humans to solve problems, assess risks, make predictions and take actions based on input data.

Cementing the “intelligent” aspect of AI, advances in technology have led to the development of machine learning, which allows systems to make predictions or decisions without being explicitly programmed to perform the task. With machine learning, algorithms and statistical models allow systems to “learn” from data and make decisions by relying on patterns and inference instead of specific instructions.
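To make the idea of “learning from patterns instead of specific instructions” concrete, here is a minimal sketch in plain Python of a nearest-centroid classifier: no spam-detection rule is hand-coded, yet the system infers one from a handful of labelled examples. The data, labels, and function names are hypothetical illustrations, not part of the original article or any production system.

```python
# Minimal sketch of "learning from data": a nearest-centroid
# classifier. Nothing below encodes an explicit spam rule; the
# decision emerges from the labelled examples alone.

def train(examples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(center, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: [message length, link count] -> label.
examples = [([120, 5], "spam"), ([130, 4], "spam"),
            ([40, 0], "ham"), ([55, 1], "ham")]
centroids = train(examples)
print(predict(centroids, [125, 6]))  # a long, link-heavy message -> "spam"
```

The same pattern-matching behaviour that makes this useful is also what makes AI hard to audit: the decision boundary lives in the learned centroids, not in any human-readable rule.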

Unfortunately, the possibility of creating machines that can think raises myriad ethical issues. From pre-existing biases in the data used to train AI, to social manipulation via newsfeed algorithms and privacy invasions via facial recognition, ethical issues are cropping up as AI continues to expand in importance and utilization. This highlights the need for legitimate conversation about how we can responsibly build and adopt these technologies.

How Do We Keep AI-Generated Data Safe, Private and Secure?

As an increasing number of AI-enabled devices are developed and used by consumers and enterprises around the globe, the need to keep those devices secure has never been more important. AI’s increasing capabilities and utilization dramatically expand the opportunity for nefarious uses. Consider the dangerous potential of autonomous vehicles and weapons like armed drones falling under the control of bad actors.

As a result of this peril, it has become crucial that IT departments, consumers, business leaders and governments fully understand the cybercriminal strategies that could lead to an AI-driven threat environment. If they don’t, maintaining the security of these traditionally insecure devices and protecting an organization’s digital transformation becomes a nearly impossible endeavor.

How can we ensure safety for a technology that is designed to learn how to modify its own behavior? Developers can’t always determine how or why AI systems take various actions, and this will likely only grow more difficult as AI consumes more data and grows exponentially more complex.[…]

