Google Bard: Competitor to ChatGPT and Microsoft

Google just announced a competitor to ChatGPT and Microsoft—with mixed results.

Called “Bard,” it’s a conversational AI tool that answers queries in natural language. If that sounds like ChatGPT, you’re right:

The tool is quite similar but relies on Google’s LaMDA language model to provide answers.

Think of it like ChatGPT, but backed by accurate info from Google’s search engine.

At least, in theory.

In reality, the release of Bard drew serious controversy.

Bard got a fact wrong in a promo video during a rushed launch event. The market responded by knocking $100 billion off Google’s market cap.

In Episode 34 of the Marketing AI Show, Marketing AI Institute founder/CEO Paul Roetzer broke down for me what Bard means for marketers and business leaders.

1. Google looked bad, but don’t be fooled.

Thanks to the botched demo of Bard, Google looked uncharacteristically vulnerable and unprepared, says Roetzer.

But it helps to separate perception from the actual technology. Google still has some of the leading AI technology in the world—and many other AI projects outside of Bard.

“I don’t think they’re behind on the technology,” says Roetzer. “I think that would be a misguided assumption if you don’t think Google has more advanced tech than what we’re seeing.”

Don’t write them off because of one bad demo.

2. Many have a poor understanding of the technology—leading to misconceptions.

The overreaction from the market and commentators is often due to misperceptions of the large language models that power Bard, ChatGPT, and other products.

“It’s really, really important that people understand the fundamentals of large language models and that they have inherent flaws,” says Roetzer.

Large language models learn from an initial corpus of knowledge—in the case of Bard, the internet—and then predict the next words in a sentence. And because they work by prediction, they sometimes get those predictions wrong.
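To make "predicting the next words" concrete, here's a toy bigram model in Python. This is a deliberately simplified sketch, not LaMDA's actual architecture (which is a large neural network): it just counts which word follows which in a small made-up corpus and then returns the statistically likeliest continuation, true or not.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus standing in for training data.
corpus = (
    "the telescope took the first pictures of exoplanets "
    "the telescope took the first pictures of distant galaxies"
).split()

# Count which word follows each word (a bigram "language model").
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return next_words[word].most_common(1)[0][0]

# The model emits the likeliest next word—whether or not the
# resulting sentence is factually true.
print(predict_next("first"))  # prints "pictures"
```

The point of the sketch: nothing in the model checks facts. It only reproduces the most probable word pattern, which is exactly why a fluent-sounding answer can still be wrong.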

Large language models aren’t drawing from specific sources or citations, he says. They consume knowledge, synthesize it, and write a response. The power of the large language model is that the response sounds perfect—not that it’s accurate. Accurate citations often require a whole separate layer of AI architecture.

As a result, many underrate the power of the technology because it gets facts wrong. In fact, today, it’s not designed to get facts right at all.

3. In reality, large language models could get very powerful very soon.

Some of the confusion here comes from Google and its competitors themselves.

Google was left scrambling because of Microsoft’s moves to incorporate ChatGPT-like features into Bing.

“They’re trying to kind of play the game, get something out into the market, but realistically they weren’t ready to release a product at this point,” says Roetzer.

The technology of large language models is still so early, emphasizes Roetzer. In the near future, models could get vastly more powerful, generate their own training data, and provide citations—all while being much more efficient than today.

So overreacting to a single issue in a demo misses where the technology is actually going.
