There’s a lot of talk today about AI and the impact it will have on the future. But the term gets thrown around so much that it’s falling victim to the Kleenex syndrome: the label is so overused that its meaning gets diluted and applied to things that aren’t actually AI. What is AI, and what isn’t? Does it even really exist?
copyright by www.forbes.com
What is Artificial Intelligence?
Artificial intelligence was first defined in the 1950s as a machine performing any task that would require intelligence if a human did it. Simply put, AI acts on a situation the same way a human would. If a human would understand a conversation and give an answer, so would AI. If a human could analyze information and make future plans, so could AI.
AI is based on algorithms. It uses computing power to solve specific problems faster and often more accurately than humans can. Much of AI is based on statistics and on finding trends and patterns in data.
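To make "finding trends in data" concrete, here is a minimal sketch of one of the oldest statistical tools in the AI toolbox: fitting a straight-line trend to data points with least squares. The function name and the sample data are illustrative, not from any particular product.

```python
# Illustrative sketch: the statistical core of much "AI" is pattern
# finding, such as fitting a trend line to observed data points.

def trend_slope(points):
    """Return the least-squares slope of a list of (x, y) points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den

# Steadily rising data produces a positive slope.
rising = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(trend_slope(rising))  # → 1.0
```

A system that spots such a trend isn’t "thinking"; it is mechanically summarizing data, which is exactly the point the article makes below.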
AI can do a variety of things that a human would have to use intelligence to do, such as analyzing, planning, problem solving, learning and adapting. Pegasystems founder Alan Trefler says anything that makes a system clever is considered AI. Machine learning, a subset of AI, takes in information and learns and adapts as it gathers new data.
What isn’t Artificial Intelligence?
However, AI as we have it today isn’t truly intelligent on its own. Intelligence is often considered the ability to adapt to unknown circumstances, and applying that definition greatly cuts down on what can be considered AI. Most AI can’t really think on its own, but it can be programmed to learn and adapt. This is considered narrow AI. A machine can use AI-powered facial recognition to sort through photos. As the program sees more photos, it is programmed to expand what it can sort by. It may start by differentiating between 10 faces, but as it sees more faces, it is programmed to learn them; soon it may be able to differentiate between 25. The machine isn’t actually thinking on its own and learning those new faces; it has simply been programmed to do so.
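The face-sorting example above can be sketched as a toy program. The class, its methods, and the made-up "feature" numbers are all hypothetical, invented for illustration; real facial recognition is far more complex, but the key behavior is the same: the program only knows more faces because it was explicitly fed more labeled examples.

```python
# Toy sketch of "narrow AI": a sorter that expands its set of known
# faces only when it is explicitly taught labeled examples.
# Face "features" here are just made-up pairs of numbers.

class FaceSorter:
    def __init__(self):
        self.known = {}  # name -> list of feature vectors

    def learn(self, name, features):
        """Store a labeled example; new names expand what we can sort."""
        self.known.setdefault(name, []).append(features)

    def identify(self, features):
        """Return the known name whose examples are closest on average."""
        def avg_distance(name):
            examples = self.known[name]
            return sum(
                sum((a - b) ** 2 for a, b in zip(features, ex))
                for ex in examples
            ) / len(examples)
        return min(self.known, key=avg_distance)

sorter = FaceSorter()
sorter.learn("Alice", (0.1, 0.9))
sorter.learn("Bob", (0.8, 0.2))
print(sorter.identify((0.15, 0.85)))  # → Alice

# Teaching a third person expands the program's reach. It never
# "decided" to learn; it only does what it was programmed to do.
sorter.learn("Carol", (0.5, 0.5))
print(sorter.identify((0.48, 0.52)))  # → Carol
```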
Many systems can be programmed to do things automatically, but they can’t adapt to changing circumstances, which means they aren’t really AI. For example, object tracking on a camera is an automation feature, while recognizing a face and identifying the person is an AI feature. To truly be considered AI, a system needs to learn contextually and then apply that learning to change how it does things. This is the same way humans operate: we gather knowledge and then use it to change how we work.
Common Misconceptions About Artificial Intelligence
There are many common misconceptions when it comes to AI. Much of that has to do with things being labeled as AI when they actually aren’t. Without a strong understanding of the technology involved, people are left to believe marketers’ claims that AI is in nearly everything.
AI also doesn’t have to be an android or robot. When many people think of AI, they think of robots that will replace human jobs. This isn’t necessarily true. Yes, robots and other machines may use AI, but AI itself is much greater: it is the software and brains of the machine, not just the machine itself.
We may not yet have AI in its purest form, but we may be nearly as close as we can be for the time being. The next step from our current narrow AI is general AI, which is still in the early stages of development. Narrow AI can do a single task or a few tasks, such as Siri’s ability to recognize voice commands. General AI could handle a huge variety of tasks, similar to Iron Man’s JARVIS.