The twin tracks of AI progress: Ad hype vs tech realities

You could have said it last year or the year before, but it’s still true. Artificial Intelligence (AI) is the biggest tech bandwagon since blockchain. But it’s confusing because there are two independent stories about what’s happening.

First, there’s the hype. Today, AI is more of a “brandwagon,” with the likes of Coca-Cola shouting from the rooftops about their AI credentials. 

Coke’s Christmas ads come—provocatively—with the label “GENERATED BY A.I.” The happy families and trucks rolling through snowy streets are all AI fantasies, no doubt carefully curated by highly paid executives. So, is the end product really any different from good old-fashioned animation? 

Even more brazenly, last year, the company produced something called “Coca-Cola Y3000,” which was supposed to demonstrate “what a Coke from the future might taste like.” And, of course, this was—as the writing on the can boasted—“Co-Created with AI.” Perlease!

There’s no reason why Coke shouldn’t use AI. It’s just that it feels a bit like a 132-year-old brand trying to get down with the kids. Of course, most of its ad executives are probably kids, and good luck to them. 

As well as putting it at the ‘cutting edge’ of tech, AI is likely to save Coke money, because nothing needs to be filmed and no actors need to be paid. But anyone who bothers to notice or be outraged—as many have—is falling for the oldest trick in the advertising book: “all publicity is good publicity.”

Coke is just one example of AI serving as a kind of calling card to prove one’s contemporary credentials. Try getting a startup off the ground without AI as the key to its pitch, and you’ll see what I mean.

The second, less ephemeral story about AI is how it’s growing up fast in technical terms. 

Last year, ChatGPT used to frustrate me by answering requests for academic sources with interesting-sounding reading lists, which would have been perfect if the papers on them hadn’t been invented for the occasion. This year, the system comes up with the same kind of answers, but everything on them actually exists, which I much prefer. Its ability to transcribe handwriting has also improved markedly.

Some of these changes may be due to the abilities of the latest ChatGPT model, called o1, which boasts of using “advanced reasoning.” The phrase raises a whole lot of questions, and any claims by OpenAI, the maker of ChatGPT, need to be seen in the context of a company that must justify the huge investment it receives by constantly appearing to make breakthroughs.

However, experts seem to agree that o1 is different from previous models and that there is substance behind the claim that it “thinks before it answers.” Delving into what that means would involve a discussion of the “stochastic parrot” model of AI (“stochastic” means governed by probabilities). The parrot comparison is a techy put-down of large language models (LLMs), whose ‘intelligence’ amounts to analyzing the statistics of word order in existing texts. A parrot can imitate human speech pretty effectively, but nobody believes it understands what it’s saying.

[Image: AI-generated parrot, captioned “Stochastic parrot”]
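To see what the parrot jibe is getting at, here is a deliberately crude sketch in Python: a bigram model that generates text purely from word-order statistics, with no notion of meaning. The toy corpus and the code are my own illustration, not anything from OpenAI; real LLMs are vastly more sophisticated, but the underlying statistical principle is the same.

```python
import random
from collections import defaultdict

# A minimal "stochastic parrot": predict the next word purely from
# how often words follow one another in the training text.
# The corpus here is a toy stand-in, not real training data.
corpus = "the parrot repeats the words the parrot has heard before".split()

# Count which words follow each word.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    candidates = follows.get(word)
    return random.choice(candidates) if candidates else None

# Generate a short sequence: plausible word order, zero understanding.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The output is grammatical-sounding precisely because it is stitched together from sequences the model has already seen, which is the parrot-sceptics’ whole point.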

The question is whether, in taking the next step, models like o1 can be said to make “internal representations” of objects or ideas in order to “reason.” Yes, says OpenAI. The model isn’t just processing words. It’s teaching itself, and it can decide to spend time “thinking” instead of simply spitting out a sequence of words:

“Our large-scale reinforcement learning algorithm teaches the model how to think productively using its chain of thought in a highly data-efficient training process.”
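What that might mean is easiest to see in caricature. The toy sketch below is entirely my own invention, not OpenAI’s algorithm: it samples chains of intermediate steps, rewards the ones that reach the right answer, and keeps those, which is the broad shape of reinforcement learning on chains of thought.

```python
import random

# A toy caricature of reinforcement learning on chains of thought.
# Nothing here is OpenAI's actual method; the question, the "slip",
# and the reward are all invented for illustration.

CORRECT = 42  # the true answer to "what is 17 + 25?"

def sample_chain():
    """Sample a chain of intermediate steps; some chains contain a slip."""
    slip = random.choice([0, 0, 0, 10])  # most chains are sound, a few err
    steps = [
        "break 25 into 20 + 5",
        f"17 + 20 = {37 + slip}",
        f"{37 + slip} + 5 = {42 + slip}",
    ]
    return steps, 42 + slip

def reward(answer):
    """Reward a chain purely on whether its final answer is correct."""
    return 1.0 if answer == CORRECT else 0.0

# "Training" in caricature: sample many chains, keep the rewarded ones.
# A real system would instead nudge model weights toward such chains.
kept = [steps for steps, answer in (sample_chain() for _ in range(20))
        if reward(answer) > 0]
print(f"kept {len(kept)} of 20 sampled chains")
if kept:
    print("example chain:", " -> ".join(kept[0]))
```

Whether rewarding productive chains of thought deserves to be called “reasoning” is, of course, exactly the question the parrot-sceptics are asking.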
