As Meta Flounders, It Reportedly Plans to Open Source Its New AI Models
Meta reportedly plans to open source its next-gen AI models, betting on a new strategy after LLaMa 4 underperformed.
Meta is reportedly shifting its AI strategy by planning to open source its next-generation models, according to an Axios report. The move comes after the company's LLaMa 4 model "wildly underperformed expectations" last year and missed key benchmarks, despite the roughly $600 billion Meta has reportedly earmarked for AI investment. The new models will be the first released under the leadership of Alexandr Wang, founder of training data giant Scale AI, in which Meta took a multibillion-dollar stake to bolster its AI efforts. While the models will keep some components proprietary for safety, the core strategy is to offer a more accessible, open-source alternative to the closed "black box" models from competitors like OpenAI and Anthropic.
The report suggests this open-source approach could support a simpler business model than the subscription services favored by rivals, following the example of Cursor building its Composer 2 model on Moonshot AI's open-source Kimi 2.5. Significant hurdles remain, however, including ongoing performance concerns that recently delayed a model release and reported tensions between Mark Zuckerberg and Wang. If the new models underperform, Wang is positioned as the potential "fall guy" for Meta's continued struggles to compete with frontier AI leaders.
- Meta plans to open source its next AI models, a strategic shift after LLaMa 4's benchmark failures.
- The move follows a reported $600B AI investment and places Scale AI founder Alexandr Wang at the helm.
- Models will have some proprietary safety components, aiming to compete with closed models like GPT-4 and Claude.
Why It Matters
If successful, Meta could become the largest provider of open-source frontier AI, challenging the closed-model dominance of OpenAI and Anthropic.