B2B software sales and marketing teams love the term “artificial intelligence” (AI). AI has a smoke-and-mirrors effect: it sounds impressive. But when we say “AI is doing this,” our buyers often know so little about AI that they don’t ask the hard questions.
In industries like the DevTools space, it is crucial that buyers understand both what products do and what their limitations are to ensure that these products meet their needs. If the purpose of AI is to make good decisions for humans, to accept that “AI is doing this” is to accept that we don’t really know how the product works or if it is making good decisions for us.
When we’re in the buyer role, we often don’t hold ourselves responsible for understanding AI and machine learning (ML) products because these technologies are intimidating. They’re incredibly complex.
This article addresses the limitations of AI and ML, so software buyers can ask the right questions to understand what they are buying.

The Test Oracle Problem
One limitation of some AI or ML products is that, for certain applications of the technology, there is no source of absolute truth against which to measure the accuracy of the output. For example, neither humans nor machines know how to produce the perfect set of end-to-end tests for any given application. This is the test oracle problem: there is no objective standard of truth. No one wants to introduce this kind of uncertainty into their sales process. Yet our buyers deserve well-informed answers about our products.
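To make the difference concrete, here is a minimal, hypothetical sketch (the task and data are invented for illustration): when a labeled source of truth exists, accuracy is simply the fraction of outputs that match it; for a task like generating the “perfect” end-to-end test suite, there is no such reference to score against.

```python
# Hypothetical sketch: accuracy can only be computed when an oracle exists.
def accuracy(outputs, oracle):
    """Fraction of outputs that match a known source of truth."""
    return sum(o == t for o, t in zip(outputs, oracle)) / len(oracle)

# Task WITH an oracle: labeled answers exist, so accuracy is measurable.
model_output = [True, False, False, True]
known_truth  = [True, False, True,  True]
print(accuracy(model_output, known_truth))  # 0.75

# Task WITHOUT an oracle: nobody can supply the "perfect" end-to-end test
# suite for an application, so there is nothing to pass as `oracle` and
# accuracy in this direct sense cannot be computed.
```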
As a buyer, you need to understand the intended advantage of your seller’s AI product before making a purchase decision. Is it meant to make a decision that is more accurate—against an objective standard—than a human? Is it meant to make a faster decision with less cost? Or introduce an alternative methodology that uses new data in a new way? Answers to these questions influence how you will use the product and what value it provides.
AI Versus ML
Though AI is commonly described as “any machine that uses math to make decisions,” true AI is self-taught. It has a neural net that mimics the neurons in a human brain, which allows it to teach, update and evolve itself. Because of this, true AI is difficult to build and is often experimental rather than commercial.
More often, what’s being described when we say AI is actually ML. ML is human-taught: machines learn through human feedback using a probabilistic decision-making process that improves via ongoing correction. A machine takes in data, runs algorithms against it and outputs a decision — or series of assertions — based on probabilities. Humans correct the machine by telling it whether its assessment was accurate, and the machine updates. As it receives accuracy feedback, the machine learns to make better decisions. And because ML is based on probabilities, it will sometimes make the wrong decision.
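As a rough illustration of that loop, the following sketch shows a toy probabilistic model that makes a decision and then adjusts its weights whenever a human marks the decision right or wrong. The class, features and update rule here are simplified assumptions for illustration, not any particular vendor’s method.

```python
import math

# Toy online learner: outputs a probability from numeric features and
# updates its weights from human right/wrong feedback (simplified sketch).
class FeedbackLearner:
    def __init__(self, n_features, learning_rate=0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = learning_rate

    def predict_proba(self, features):
        # Probability that the answer is "yes", via a logistic function.
        z = self.bias + sum(w * x for w, x in zip(self.weights, features))
        return 1 / (1 + math.exp(-z))

    def decide(self, features):
        return self.predict_proba(features) >= 0.5

    def feedback(self, features, correct_answer):
        # A human supplies the right answer; the machine nudges its weights
        # toward it (a standard online gradient step).
        error = (1.0 if correct_answer else 0.0) - self.predict_proba(features)
        self.bias += self.lr * error
        self.weights = [w + self.lr * error * x
                        for w, x in zip(self.weights, features)]

# The machine decides, a human corrects it, and decisions improve over time.
model = FeedbackLearner(n_features=2)
print(model.decide([0.8, 0.1]))                   # probabilistic decision
model.feedback([0.8, 0.1], correct_answer=True)   # human accuracy feedback
```

The point for a buyer is the last line: the product only gets better if someone keeps supplying the correct answers.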
Based on how you plan to use a product, you need to determine how accurate it must be. How often a machine can make the wrong decision and still serve its purpose is application-specific: self-driving vehicles must be nearly perfect to be adopted, while a paralegal ML toolset can likely tolerate lower accuracy. How accurate does your product need to be?
Asking the Right Questions
Regardless of how you plan to use a product, it’s important to ask the right questions to understand the product and build resiliency around its accuracy levels. The next time a seller tells you “AI is doing this,” you can ask the following:
- Is this product an ML product? Does it need to be ML to get a meaningful result? To be ML, a product needs to learn through human feedback, not just make decisions using probabilities. Do you just need a product that uses logic to make decisions, or a product that improves in accuracy over time?
- How is the accuracy of this product calculated? You won’t know if the machine is more accurate than humans if you don’t know the conditions used to calculate accuracy. If a machine is said to be 30% more accurate than humans, who assessed this and how did they determine it? (A sketch of what such a comparison involves follows this list.)
- How do you know when the product makes the wrong decisions? Any ML product will sometimes produce the wrong output. Typically, a seller’s most successful customers have already adopted business processes to build resiliency to this wrong output. If so, the seller can help you adopt them as well.
- In its current state, how often does the product make the wrong decisions? Knowing the frequency of mistakes and the stakes of those mistakes will be crucial to deciding how you use the product, and whether it’s safe to do so at this stage in its development.
- How many teaching hours have been put into this product? This number will provide a simple approximation of how much effort has gone into making the product more accurate. A low number can be fine, depending on the application.
- How does my usage improve the accuracy of this product? As a buyer, you are an integral part of the machine testing and teaching process. You should be willing to use your data to improve the product’s accuracy, because you want these products to improve in the future. […]
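For the accuracy question above, a claim such as “30% more accurate than humans” only means something once you know the evaluation conditions. The following is a minimal, hypothetical sketch of what an honest comparison involves; the labels and answers are invented for illustration.

```python
# Hypothetical sketch of the comparison behind a "more accurate than humans"
# claim: machine and humans must be scored on the SAME labeled, held-out data.
def accuracy(answers, labels):
    return sum(a == b for a, b in zip(answers, labels)) / len(labels)

labels         = ["bug", "ok", "bug", "ok", "bug"]   # agreed ground truth
machine_output = ["bug", "ok", "ok",  "ok", "bug"]   # the product's answers
human_output   = ["bug", "ok", "ok",  "ok", "ok"]    # reviewers' answers

machine_acc = accuracy(machine_output, labels)   # 0.8
human_acc   = accuracy(human_output, labels)     # 0.6
print(f"{(machine_acc - human_acc) / human_acc:.0%} more accurate here")  # 33%

# Questions the headline number hides: who produced the labels, how many
# examples were used, which humans were measured, and does the evaluation
# data resemble your own workload?
```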