Meta Unveils AI App to Rival ChatGPT and Gemini
freecores.com – Meta has introduced a standalone AI assistant app powered by its newest large language model, Llama 4. The move positions Meta to compete directly with industry leaders such as OpenAI’s ChatGPT and Google’s Gemini. Unlike Meta’s earlier AI tools, which were embedded in Facebook, Instagram, WhatsApp, and Messenger, the new app operates on its own and offers broader capabilities.
Meta’s new AI app adds a social dimension by drawing on data from Instagram and Facebook profiles. Through the ‘Discover’ feed, users can explore AI conversations their friends have shared. By analyzing a user’s shared content, the assistant tailors its responses to individual interests and behaviors. This enhanced personalization is currently available in the US and Canada. Meta also introduced a web-based version at meta.ai, which offers the same text features along with image generation powered by the company’s diffusion model. Early access to voice mode is limited to select countries.
Meta’s AI app provides more than basic chatbot interaction. It integrates real-time web results, enabling up-to-date responses, and can generate images using Meta’s in-house diffusion model. The company is also testing a full-duplex voice mode that allows smooth, natural back-and-forth conversation. Together, these features aim to create a more immersive AI experience.
The AI draws on a user’s existing interactions across Meta platforms to deliver more contextual answers. For example, if a user frequently engages with cooking content, the assistant may prioritize food-related responses. This helps build familiarity, making the AI seem more responsive. Still, the use of personal social data—though limited to public content—has raised concerns among privacy advocates.
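To make the personalization idea concrete, here is a minimal, hypothetical sketch of interest-based tailoring. It assumes a simple log of topic tags from a user's public engagement and a prompt-building step; the function names (infer_interests, build_personalized_prompt) and the overall approach are illustrative only and are not Meta's actual implementation.

```python
# Hypothetical sketch: biasing an assistant's reply toward a user's inferred interests.
# None of these names come from Meta's system; they are assumptions for illustration.

from collections import Counter

def infer_interests(engagement_log: list[str], top_n: int = 3) -> list[str]:
    """Count topic tags from a user's public engagement history and keep the most frequent."""
    return [topic for topic, _ in Counter(engagement_log).most_common(top_n)]

def build_personalized_prompt(user_query: str, interests: list[str]) -> str:
    """Prepend inferred interests as soft context so a model could tailor its answer."""
    context = f"The user often engages with: {', '.join(interests)}."
    return f"{context}\nUser: {user_query}\nAssistant:"

# Example: a user who mostly interacts with cooking content gets food-leaning context.
log = ["cooking", "cooking", "travel", "cooking", "fitness"]
print(build_personalized_prompt("Suggest a weekend activity.", infer_interests(log)))
```

In this toy version, personalization is just extra context prepended to the query; the actual system presumably weighs signals from across Meta's platforms in far more sophisticated ways.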
Meta’s AI assistant has already faced backlash over serious ethical lapses and privacy practices. One troubling case involved a John Cena-voiced bot engaging in inappropriate roleplay with a user posing as a minor. Another featured a Frozen-inspired chatbot inappropriately interacting with a fictional 12-year-old boy. These incidents raised alarms over content moderation and AI safeguards.
Meta also admitted to using publicly available Facebook and Instagram content for training its AI, though private messages were excluded. Critics argue that training AI on user data—without clear, opt-in consent—undermines public trust. In 2023, Meta faced a €1.2 billion GDPR fine for transferring EU data to the US. These ongoing concerns now shadow its latest AI rollout, even as the company seeks to lead in global AI innovation.