There are a lot of people talking about artificial intelligence these days, from business leaders to content creators to teachers to artists to, well, nearly everyone. Depending on whom you ask, AI generates excitement, apprehension or, often, a bit of both.

 

Copyright: forbes.com – “18 Tech Experts Discuss AI Myths That Should Be Debunked”

While there is genuine cause for both enthusiasm and wariness about the growing use and influence of Artificial Intelligence, some commonly floated dire warnings and bold assertions are unwarranted—are, indeed, mythical. Below, 18 members of Forbes Technology Council debunk some widely held myths about AI and explain what the truth really is.

1. True AI Exists Now

The biggest myth about AI is that it actually exists in the first place. AI is a great term to get people excited, but the reality is that actual artificial intelligence doesn’t meaningfully exist. What we’ve gotten good at is building larger statistical models, and the result of these models is groundbreaking advances in predictive algorithms. But we shouldn’t confuse this with AI, which will—when it emerges—mirror human intelligence. – Lewis Wynne-Jones, ThinkData Works
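To make the point concrete, here is a minimal sketch (our illustration, not Wynne-Jones’s) of the kind of statistical prediction behind much of today’s “AI”: fitting a small linear model to made-up data and asking it for a prediction.

```python
# Toy illustration of "larger statistical models": fit a linear model to
# synthetic data and predict. All data and parameters here are invented.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                       # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)    # noisy targets

# The "learning" is ordinary least squares: solving a linear system.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

x_new = np.array([1.0, 0.0, -2.0])
print("prediction:", x_new @ w_hat)                 # a prediction, not cognition
```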

2. AI Is ‘Like Magic’

AI isn’t magic; it’s just math. A solid AI algorithm is usually calculus with a dash of statistics (and sometimes linear algebra). And all good math can be checked by mathematical proof, showing that the underlying assumptions logically guarantee the conclusion. Everyone reading this learned the underlying principles of AI in high school. There’s nothing phenomenal about it. – Gentry Lane, ANOVA Intelligence
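As an illustration of the “calculus plus statistics plus linear algebra” claim (ours, not Lane’s), here is a tiny logistic regression trained by gradient descent on synthetic data: the matrix products are the linear algebra, the gradient is the calculus, and the loss over samples is the statistics.

```python
# Logistic regression by gradient descent on a toy, synthetic problem.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                  # linear algebra: feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # labels for a made-up task

w = np.zeros(2)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))           # statistics: predicted probabilities
    grad = X.T @ (p - y) / len(y)              # calculus: gradient of the log loss
    w -= lr * grad                             # gradient descent step

print("learned weights:", w)
```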

3. AI Will Achieve Free Will

The myth of AI achieving cognition and free will is not new; it has been the topic of sci-fi movies for decades. It has fed mankind’s fear of change and the unknown to the point that we reject the benefits of AI or leverage this disruptive technology improperly, holding onto the status quo instead. I would love to see a Hollywood movie where AI is leveraged by a human team to save the world. – Daniela Moody, Geosite

4. Non-Generative Models Can’t Reveal Personal Data

We know generative models can reveal personal data; a popular misconception is that non-generative models cannot. But they can—one powerful example is reidentification attacks. Before putting sensitive data at risk, it is crucial to consider what to share, to opt for more secure techniques such as differentially private synthetic data, and to employ governance frameworks to guarantee data safety. – Rehan Jalil, Securiti.ai
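Jalil’s point concerns full approaches such as differentially private synthetic data and governance frameworks; as a much simpler taste of the underlying idea, here is a toy Laplace-mechanism count (our sketch, not the Securiti.ai method), which adds calibrated noise so that any single person’s presence has only a bounded effect on the released number. The records and threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(records, predicate, epsilon=1.0):
    """Release an approximate count with epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one person changes
    it by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Hypothetical records, for illustration only.
people = [{"age": 34}, {"age": 61}, {"age": 29}, {"age": 47}]
print(dp_count(people, lambda r: r["age"] > 40, epsilon=0.5))
```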



5. LLMs Can Only Do Next-Token Prediction

One thing you often hear about large language models such as ChatGPT is “it’s just predicting the next token.” While next-token prediction is a critical foundational aspect of training LLMs, several other training steps dramatically change the quality of the responses. Training with chain-of-thought techniques, or applying them at inference time as a user, can further improve responses. – Matthew Wallace, Faction, Inc.[…]
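To see what “just predicting the next token” literally means, here is a toy greedy decoding loop over an invented bigram table. It is nothing like a real LLM and deliberately omits the later training stages Wallace mentions (and chain-of-thought), but it shows the basic predict-and-append loop.

```python
# Toy next-token prediction: repeatedly score candidate next tokens and
# append the most likely one. The "model" is a made-up bigram table.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]
# Invented transition scores: row = current token, column = next token.
logits = np.array([
    [0.0, 2.0, 0.1, 0.1, 1.5, 0.1],   # after "the": "cat" is most likely
    [0.1, 0.0, 2.0, 0.3, 0.1, 0.2],   # after "cat": "sat"
    [0.1, 0.1, 0.0, 2.0, 0.1, 0.3],   # after "sat": "on"
    [0.3, 0.2, 0.1, 0.0, 2.0, 0.1],   # after "on": "mat"
    [0.1, 0.1, 0.1, 0.1, 0.0, 2.0],   # after "mat": "."
    [0.5, 0.1, 0.1, 0.1, 0.1, 0.0],   # after ".": "the"
])

tokens = ["the"]
for _ in range(5):
    current = vocab.index(tokens[-1])
    probs = np.exp(logits[current]) / np.exp(logits[current]).sum()  # softmax
    tokens.append(vocab[int(np.argmax(probs))])                      # greedy pick

print(" ".join(tokens))   # prints "the cat sat on mat ."
```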

Read more: www.forbes.com