Meta unveils generative AI apps and new image model
Meta’s Connect 2023 tech show was heavy on AI this year. Here are the major announcements.

Meta AI is Meta’s ChatGPT competitor

Meta AI is Meta’s new ChatGPT competitor: a chat assistant designed to enhance Meta products such as WhatsApp, Messenger, Instagram, and Meta’s XR devices. It is based on Meta’s proprietary Llama 2 language model and is initially available only in the US.

Meta AI also has a built-in text-to-image model called “Emu,” which can generate images via an “@MetaAI /imagine Text” prompt directly in the assistant interface. All images generated by the AI are tagged with an #ImaginedWithAI watermark by default. According to Meta, the image quality is “significantly” better than with SDXL v1.0.

Video: Meta
Through a partnership with Microsoft, Meta AI also provides access to real-time information via chat browsing with Bing, similar to ChatGPT. As a result, the information Meta AI can discuss is not limited to its training data.

Microsoft’s position is interesting: it continues to work with direct OpenAI competitors despite its heavy investment in OpenAI. Microsoft also offers Meta’s open-source Llama model in its Azure cloud.

Celebrity AI chats

In addition to Meta AI, users of WhatsApp, Messenger, and Instagram can access special chatbots that are based on Meta AI but represent specific personalities. These include chatbots modeled on celebrities such as Tom Brady, Paris Hilton, and Naomi Osaka, each talking in their own style about topics typical of them, such as sports, humor, and relaxation.

Video: Meta

AI-generated stickers

For WhatsApp, Messenger, Instagram and Facebook Stories, Meta combines Llama 2 with the Emu image model for AI-generated stickers. For example, typing “pizza playing basketball” will generate a cartoon pizza with a basketball in its hand right in the chat window. The AI stickers will be rolled out to select English-speaking users next month.


Meta AI’s newly introduced Emu image model and the previously introduced Segment Anything image segmentation model.

Meta says it has successfully experimented with image fine-tuning for its Emu model: a diffusion model pre-trained on 1.1 billion image-text pairs was then refined with a few thousand “carefully selected, high-quality images.”

Image: Meta AI

According to Meta AI, high-quality image fine-tuning significantly improved the quality of image generation: compared to the standard model without image fine-tuning, human testers preferred images from the fine-tuned model 82 percent of the time. Compared to the open-source SDXL v1.0 image model, Emu was preferred up to 71.3 percent of the time. Meta AI says image fine-tuning is a generic approach that also works for other architectures.
