
How to get AI to sound less drunk: the GPT-3 case study


The much-hyped GPT-3 still lacks understanding of the world — but that may be coming.

Copyright by SwissCognitive

GPT-3 has created a lot of buzz since its release a few months ago. Deservedly so, as it's a big step forward in natural language generation.

The system can generate (almost) plausible conversations with the likes of Nietzsche, write op-eds for The Guardian, and was even used successfully to post undercover comments on Reddit for a week.

But even GPT-3 is still stuck in the uncanny valley. Its output feels human-written at first glance, but it isn't quite. On closer inspection, it lacks substance and coherence.

There are two reasons for this.

First, GPT-3 can't make a point. GPT-3 behaves as a stream of consciousness, which means each next sentence feeds off the last few sentences.

GPT-3 associates and builds on what it can hear itself saying. At the sentence level, GPT-3's text will usually make sense, maybe even at the paragraph level. Its shortcomings become obvious in longer texts (as you can see in this blog post written by GPT-3).
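To see why purely local generation drifts, here is a toy sketch (not GPT-3 itself, which is vastly larger): a bigram model that, like any autoregressive generator, picks each next word by conditioning only on recent context. The corpus and model are hypothetical stand-ins for illustration.

```python
# Toy autoregressive generator: each word depends only on the word before it,
# so the output is locally plausible but has no global plan or point.
import random
from collections import defaultdict

corpus = ("the model writes a sentence and the sentence sounds fine and "
          "the model writes another sentence and the point never arrives").split()

# Count which word follows which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n_words, seed=0):
    """Sample a continuation word by word, conditioning only on the last word."""
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the", 12))
```

Each step is locally sensible (every word pair occurs in the training text), yet nothing steers the sequence toward a conclusion, which is the drunk-at-the-bar quality the author describes, exaggerated to the extreme.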

GPT-3, like a drunk, cannot make its point because it has no point to make. In the context of entertainment (or a bar), GPT-3 offers up neat tricks. But it will not write your grant proposal or business plan anytime soon.

Second, even if you gave it a point to make, it wouldn't really know how to get to that point.

Say we give GPT-3 the purpose it needs, for instance: 'we need to increase sales by 15% in 2021'. GPT-3 would not be able to move logically from one point to the next to arrive at the conclusion that we do need to increase sales by 15%. Instead, it will bulldoze its way through. It will do a sort-of-okay job, but not necessarily well enough to convince your sales and marketing division.

A recent criticism of models like GPT-3 is that they don't seem to understand language. One big tell: GPT-3 cannot distinguish between the sentences 'Marijuana causes cancer' and 'Cancer causes marijuana'.

But the problem is deeper than that: GPT-3 just doesn’t understand the world — it has no concept of reality.

A reality check for GPT-3: building a 'world model'

Understanding the world the way humans do would help GPT-3 overcome the two weaknesses described above. A 'world model', a broad, intuitive understanding of what is realistic and what isn't, would allow it to make a clear point, and it would help it build a logical argument to arrive at that point.
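What might such a check look like in the crudest possible form? Here is a hypothetical sketch: a hand-coded knowledge base of plausible cause-and-effect pairs consulted before a claim is committed to. A real world model would be learned and vastly richer; everything below is invented for illustration.

```python
# Hypothetical 'world model' stub: a set of cause->effect pairs the system
# considers realistic. Real systems would learn this, not hard-code it.
PLAUSIBLE_CAUSES = {
    ("smoking", "cancer"),
    ("marijuana", "cancer"),
    ("rain", "wet streets"),
}

def is_realistic(cause: str, effect: str) -> bool:
    """Return True only for causal claims the world model deems plausible."""
    return (cause, effect) in PLAUSIBLE_CAUSES

print(is_realistic("marijuana", "cancer"))  # True
print(is_realistic("cancer", "marijuana"))  # False: direction matters
```

Even this toy version captures the asymmetry that pure text statistics miss: 'marijuana causes cancer' passes, its reversal does not.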
