Machine learning (ML) is going to be either a boon or a bane to the enterprise, depending on who you talk to. On one hand, it will bring a wide range of new capabilities to digital processes – everything from automated workflows to self-managing infrastructure. On the other, it will displace jobs and leave organizations powerless to make corrections when things go awry.
The truth is probably somewhere between these two extremes, but to really get a handle on what ML can and cannot do, it is necessary to dispel some of the myths that have grown up around the technology. (With so much to offer, why isn’t everyone using ML? Find out in 4 Roadblocks That Are Stalling Adoption of Machine Learning.)
Myth 1: Machine learning and artificial intelligence are one and the same.
While it is true that they both utilize the same fundamental technology, AI is an umbrella term that encompasses a wide range of disciplines. According to Dr. Michael J. Garbade, CEO of Education Ecosystem, AI encompasses not only ML, but also neural networks, natural language processing, speech recognition and a host of other emerging technologies. ML has the distinction of being able to adjust its own models based on experience, changes to its environment or the introduction of new objectives – this is essentially the “learning” aspect of machine learning.
“The intention of machine learning is to enable machines to learn by themselves using the provided data and make accurate predictions,” he said. “It is a method of training algorithms such that they can learn how to make decisions.”
Machine learning, therefore, is the way in which data systems become intelligent. But since learning is a process, knowledge workers will have to get used to the idea that future technologies will not offer full functionality right out of the box, but will steadily improve their performance as time goes by.
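To make the idea of "learning from provided data" concrete, here is a minimal, self-contained Python sketch. It is not any vendor's actual system – the data points, learning rate and loop counts are invented for illustration – but it shows the core mechanic: a model starts out knowing nothing, and its parameter is nudged toward better predictions each time it sees the data.

```python
# Toy illustration of "learning": a one-parameter model fit by gradient descent.
# The (input, target) pairs and the learning rate are invented for this example.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # roughly y = 2x

w = 0.0  # the model's single parameter; it starts out knowing nothing

for epoch in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        w -= 0.01 * error * x  # nudge the weight to shrink the error

# After training, w has settled near 2.0 -- the pattern hidden in the data.
print(round(w, 2))
```

Note that the program's accuracy is not written into its code; it emerges from repeated exposure to the data, which is exactly why such systems improve over time rather than arriving fully formed.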
Myth 2: Machine learning cannot be controlled.
This ability to “learn” has naturally given rise to the fear that ML-powered systems will start to make decisions and take actions beyond what users intended. But stories about killer robots running amok or computer overlords wiping out pesky humans are more science fiction than reality. What has been known to happen is that biases in the data that ML is exposed to can cause it to make poor decisions, as evidenced by the case of Tay, a Microsoft chatbot for Twitter that was manipulated into spouting racist views.
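The mechanism behind such failures can be shown with a deliberately simple sketch. The tiny "training set" and word-voting classifier below are invented for illustration, but they demonstrate the point: the model does not go rogue, it faithfully reproduces whatever skew its training data contains.

```python
# Toy illustration: a model trained on biased data reproduces the bias.
# The dataset is invented, and the skew in it is deliberate.

from collections import Counter

# Skewed training data: every mention of "cats" happens to be labeled negative.
training = [
    ("cats are bad", "neg"),
    ("cats ruin things", "neg"),
    ("dogs are great", "pos"),
    ("dogs are loyal", "pos"),
]

# "Training": count which label each word co-occurs with.
word_labels = {}
for text, label in training:
    for word in text.split():
        word_labels.setdefault(word, Counter())[label] += 1

def predict(text):
    """Vote using the labels each known word was seen with during training."""
    votes = Counter()
    for word in text.split():
        votes.update(word_labels.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

# The model has learned the bias in the data, not a truth about cats:
print(predict("cats are wonderful"))  # -> "neg"
```

Nothing in the code is malicious; the unwanted behavior comes entirely from what the system was fed, which is why curating training data is a control point rather than an afterthought.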
But as IV.AI CEO Vince Lynch noted on TechCrunch recently, this is not a lack of control, but a failure to implement the proper controls. By choosing the right learning models and data sets, and then subjecting the system to rigorous oversight, organizations should be able to deploy ML safely, without catastrophic consequences. […]