Facebook parent Meta this week rolled out plans for a variety of chatbots for US consumers, each embedded in a specific application, including WhatsApp, Messenger, and Instagram.

The new Meta AI bot, currently in beta, is powered by various iterations of Meta’s Llama 2 large language model (LLM). The difference between Meta’s LLMs and OpenAI’s GPT-4 — the LLM behind ChatGPT — appears to be customization for application-specific purposes.

“I think the significant thing about this announcement is they’re not using one general purpose model, so to speak,” said Gartner Distinguished Vice President Analyst Arun Chandrasekaran. “They’re creating multiple different models for different applications. For example, the Instagram user and use case is very different from a WhatsApp user and WhatsApp use case.

“In a nutshell, they’re not really building a general purpose chatbot like what ChatGPT is,” Chandrasekaran said. “That’s one thing Meta is doing very differently.”

Meta AI's chatbot will answer questions and engage in conversations on various topics. 

LLMs used for generative AI tools can consume vast amounts of processor cycles and be costly to use. In fact, the semiconductor industry is currently unable to meet the needs of the fast-growing AI industry. (Smaller, more industry- or business-focused models can often provide better results tailored to business needs.)

Meta does have a general-purpose Llama 2 LLM with 70 billion parameters. But a white paper published with the launch of the Meta AI chatbot notes there are smaller 7 billion- and 13 billion-parameter models, among others. The fewer the parameters, the more efficiently an LLM can run and the more easily it can be tailored to a specific use, without placing additional strain on compute resources.
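
Those smaller checkpoints are the ones most developers can realistically run themselves, since Meta distributes the Llama 2 family through Hugging Face under its community license. As a rough illustration (not something from Meta's announcement), here is a minimal sketch of loading the 7 billion-parameter chat variant with the open-source transformers library; the model ID, half-precision setting, and prompt are assumptions about a typical setup.

```python
# Minimal sketch: load a smaller Llama 2 checkpoint and generate a reply.
# The model ID, dtype, and prompt are illustrative assumptions, not details
# from Meta's announcement. Requires transformers, torch, and accelerate,
# plus gated-access approval for the Llama 2 weights on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # 7 billion-parameter chat variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the 7B weights near ~14 GB
    device_map="auto",          # place layers on whatever GPU/CPU is available
)

prompt = "Suggest three trailheads to try near Santa Cruz, CA."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```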

“There is negligible GPU memory overhead…,” Meta said in the white paper when referring to training its models.

Along with providing textual responses, the Meta AI chatbot can also tap the Bing search engine for real-time information, and it can generate “photorealistic” images from text prompts to share during chats. For example, if a group chat is discussing which trailhead to try in Santa Cruz, CA, Meta AI could automatically bring up options in the chat, allowing members of the group to explore locations, according to Meta.

Meta also created 28 AI-powered chatbots that are played by celebrities and cultural icons such as Snoop Dogg, Tom Brady, Kendall Jenner, Paris Hilton and Roy Choi. The celebrity chatbots can offer tips on physical training, culinary suggestions, DIY help or — in the case of former NFL quarterback Tom Brady (whose chatbot character is called “Bru”) — be a wisecracking sports debater “who pulls no punches.”

Meta plans on adding new chatbot characters in the coming weeks played by Bear Grylls, Chloe Kim, Josh Richards, and others.

“Our journey with AI is just beginning, and it isn’t purely about building AIs that only answer questions. We’ve been creating AIs that have more personality, opinions, and interests, and are a bit more fun to interact with,” Meta said in its announcement. “Along with Meta AI, there are 28 more AIs that you can message on WhatsApp, Messenger, and Instagram. You can think of these AIs as a new cast of characters — all with unique backstories.”

Meta said it’s rolling out its AI chatbot to Ray-Ban Meta smart glasses and Quest 3 virtual reality headsets.

Not to be outdone, OpenAI said Wednesday that its ChatGPT bot can now search the web using Bing and handle voice conversations and image-based prompts, allowing it to compete with Microsoft’s AI-assisted search engine and Google’s Bard chatbot. (Microsoft’s own AI technology is based on OpenAI’s chatbot tech.)

“We are beginning to roll out new voice and image capabilities in ChatGPT,” OpenAI said in its announcement. “They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about. We’re rolling out voice and images in ChatGPT to Plus and Enterprise users over the next two weeks. Voice is coming on iOS and Android (opt-in in your settings) and images will be available on all platforms.”
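
The voice feature in the consumer apps isn’t something developers call directly, but OpenAI has said spoken input is transcribed with its open-source Whisper speech-recognition model, which is already exposed as an API. Below is a minimal sketch of that transcription step followed by a chat call, using the openai Python package’s pre-1.0 interface; the file name, API key placeholder, and model choices are illustrative assumptions, not part of OpenAI’s announcement.

```python
# Minimal sketch: transcribe a spoken question with the hosted Whisper model,
# then send the resulting text to a chat model. File name and model choices
# are illustrative assumptions, not details from OpenAI's announcement.
import openai

openai.api_key = "sk-..."  # assumes an OpenAI API key is available

# Speech-to-text with the Whisper API
with open("question.m4a", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

# Feed the transcribed text into a chat completion
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": transcript["text"]}],
)
print(reply["choices"][0]["message"]["content"])
```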

A user would be able to have a back-and-forth conversation with ChatGPT, ask it for facts during a dinnertime debate, or have it handle things such as a bedtime story for children. A ChatGPT user can also snap a picture of a landmark while traveling and have a live conversation with the bot about what makes the location interesting.

“When you’re home, snap pictures of your fridge and pantry to figure out what’s for dinner (and ask follow up questions for a step-by-step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you,” OpenAI said.

Before this week, ChatGPT had allowed paid subscribers to connect third-party browser plug-ins so the chatbot could search the web. The latest update builds that capability in natively.

In February, Microsoft became the first to launch a web-connected, AI-powered search tool with the new Bing, which is based on OpenAI’s GPT LLM and competes with Google’s leading search engine.

Along with application-specific models, another aspect that separates Meta from other social media companies pushing out AI tools is Facebook’s AI Research Lab — one of the preeminent machine learning institutions. The lab is led by Meta Chief AI Scientist Yann LeCun, who is also considered a father of deep learning, according to Gartner’s Chandrasekaran.

“Are they going to get it right completely in the first version? We don’t know. It’s super hard,” Chandrasekaran said. “Having said that, I like the way they’re going in terms of rather than building a general purpose chatbot, they’re trying to integrate it deep into the workflow and the application landscape.”

Meta has also been active in the open-source AI community. For example, the company has been open-sourcing generative AI models that are comparable to OpenAI’s GPT-3.5 and GPT-4 models, according to Chandrasekaran. “So, they’re also trying to do things a little differently, more of an open-source way in this ecosystem, which is also noteworthy.

“Meta was never an enterprise player, but with some of these open-source efforts, some of these large enterprise clients we’re talking to — like large banks — are all starting to look at Meta AI’s model in this space,” Chandrasekaran said.
