Meta’s AI Strategy: Benchmark Scandals, Brain Drain, and a $72B Gamble

Yann LeCun's scorched-earth exit reveals a company hemorrhaging talent, faking benchmarks, and shipping AI slop while rivals race ahead.

Image: Meta infinity logo on a balance scale, outweighed by money and surrounded by AI neural patterns (generated with gemini-3-pro-image).

A Turing Award winner just called Meta’s AI leadership “young and inexperienced”—and revealed the company faked its benchmark scores. After 12 years at the helm of Meta’s AI research, Yann LeCun is leaving to start his own company, and his parting words paint a picture of organizational chaos, scientific malpractice, and a CEO who has “lost confidence” in his own AI teams.

The revelation came in a bombshell Financial Times interview published January 5, 2026, where LeCun confirmed what many had suspected: Meta’s Llama 4 launch was built on manipulated benchmark results. “The results were fudged a little bit,” LeCun admitted. The team “used different models for different benchmarks to give better results.” In competitive AI, where billion-dollar valuations turn on benchmark scores, this is the equivalent of an athlete admitting to doping after winning the gold medal.

The fallout has been swift. Mark Zuckerberg was “really upset and basically lost confidence in everyone who was involved,” according to LeCun. The CEO subsequently “sidelined the entire GenAI organisation.” But the damage runs deeper than one botched model launch—it exposes a company that has spent $72 billion on AI in 2025 while simultaneously alienating its greatest scientific asset, shipping products nobody wants, and watching its best researchers walk out the door.

The Turing Award Winner Who Was Told What to Do

Yann LeCun isn’t just any departing executive. He’s one of the three “godfathers of AI” who shared the 2018 Turing Award—the Nobel Prize of computer science—for foundational work on deep learning. He joined Facebook in 2013 to build its AI research division from scratch and spent over a decade as Chief AI Scientist. If anyone had earned the right to chart Meta’s AI future, it was him.

But in June 2025, everything changed. Meta made a $14.3 billion investment in Scale AI and installed its 28-year-old CEO, Alexandr Wang, as Meta’s first-ever Chief AI Officer. Suddenly, a Turing Award winner found himself reporting to someone who had dropped out of MIT as a freshman less than a decade earlier.

https://twitter.com/WholeMarsBlog/status/1958429605351854507

LeCun’s assessment of his new boss is withering. “There’s no experience with research or how you practice research, how you do it,” he told the Financial Times. “Or what would be attractive or repulsive to a researcher.” On the new reporting structure: “You don’t tell a researcher what to do. You certainly don’t tell a researcher like me what to do.”

Wang, to his credit, built Scale AI into a $29 billion data-labeling juggernaut and became the world’s youngest self-made billionaire at 24. But building a data-annotation business and leading frontier AI research are fundamentally different disciplines. LeCun acknowledges Wang “learns fast” and “knows what he doesn’t know,” but adds that the gap in research experience is unbridgeable.

The organizational dysfunction didn’t start with Wang’s arrival. Meta’s Superintelligence Labs hemorrhaged at least eight key employees within two months of its June 2025 launch. Internal data shows Meta’s AI retention rate sits at just 64%—compared to 78% at Google DeepMind and 80% at Anthropic. Multiple researchers recruited from OpenAI have returned to their former employer within weeks. Of the fourteen authors credited on Meta’s landmark 2023 Llama paper, only three remain at the company.

The Benchmark Scandal That Shook Confidence

When Meta released Llama 4 in April 2025, it touted impressive benchmark numbers. The flagship Maverick model supposedly exceeded GPT-4o and Gemini 2.0 on coding, reasoning, and multimodal benchmarks. Llama 4 Scout boasted an industry-leading 10 million token context window. The Behemoth model—with 288 billion active parameters and nearly 2 trillion total—claimed to outperform GPT-4.5 and Claude 3.7 Sonnet on STEM benchmarks.

But independent researchers noticed something odd. The benchmarks seemed cherry-picked. Different model variants appeared to be used for different tests, maximizing scores in ways that wouldn’t reflect real-world performance. A Meta executive initially denied any manipulation.

LeCun’s confession vindicated the skeptics. Instead of running a single model through all benchmarks—the industry standard—Meta’s team ran the benchmarks multiple times with different model versions, then cherry-picked the highest scores for publication. Zuckerberg’s loss of confidence in the team reportedly accelerated the October 2025 layoffs of 600 employees from Meta’s Fundamental AI Research (FAIR) group and AI product teams.
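To make the mechanism concrete, here is a toy sketch (in Python, with entirely made-up benchmark names and scores; it does not use any real Llama numbers or Meta tooling) of why per-benchmark cherry-picking inflates results: taking the best of several model variants on each benchmark produces a composite scorecard that no single model actually achieves.

```python
import random

# Hypothetical benchmarks and model variants, invented for illustration.
BENCHMARKS = ["coding", "reasoning", "multimodal", "math"]
VARIANTS = ["base", "chat-tuned", "experimental"]

random.seed(0)

# Simulated scores: every variant has roughly the same average quality,
# plus benchmark-to-benchmark noise.
scores = {
    v: {b: round(random.gauss(70, 5), 1) for b in BENCHMARKS}
    for v in VARIANTS
}

# Honest protocol: commit to ONE checkpoint, report all of its scores.
honest = scores["base"]

# Cherry-picked protocol: for each benchmark, report the best variant.
cherry_picked = {b: max(scores[v][b] for v in VARIANTS) for b in BENCHMARKS}

for b in BENCHMARKS:
    print(f"{b:11s}  honest={honest[b]:5.1f}  cherry-picked={cherry_picked[b]:5.1f}")

# Taking the max over k variants biases every reported number upward
# relative to any single model's typical performance, so the published
# scorecard describes a model that does not exist.
```

Run it and the cherry-picked column beats or ties the honest column on every row, purely as an artifact of selection.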

The implications extend beyond Meta. AI benchmarks have become the primary currency for comparing models—investors, enterprises, and developers all rely on these scores to make decisions. When a company worth over $1 trillion admits to gaming its benchmarks, it undermines trust in the entire ecosystem. How many other companies are “fudging” their results?

AI Slop: The Products Nobody Asked For

If Meta’s research strategy is in disarray, its product strategy is arguably worse. In September 2025, Meta launched Vibes—a TikTok-style feed of AI-generated videos accessible through the Meta AI app. The reception was instant and brutal.

https://twitter.com/alexandr_wang/status/1971295156411433228

“Gang nobody wants this,” read the top comment on Zuckerberg’s Instagram announcement. Bloomberg podcast host Joe Weisenthal called it “pure garbage.” Tech writer Gergely Orosz observed that Vibes “paints the vision of people (and kids!) glued to their phones, scrolling thru AI slop.” The term “slop”—internet slang for low-quality, mass-produced AI content—became synonymous with the product.

The irony wasn’t lost on critics. Just months earlier, Meta had urged creators to prioritize “authentic storytelling” and warned against content that offers little value. Then it launched a feed designed specifically to flood the internet with soulless, machine-generated videos. The dissonance between Meta’s stated values and its products couldn’t be starker.

But Vibes is just one symptom of a deeper disease. Throughout 2025, Meta experimented with AI chatbots that generated waves of controversy. The company introduced character bots—including “Step Mom” and “Russian Girl”—that blurred the line between companionship and intimacy. When users discovered these bots being promoted on Instagram and Facebook, the backlash was swift.

https://twitter.com/stevenheidel/status/1958648235255636306

Then came the celebrity chatbot scandal. Reuters revealed that Meta had allowed unauthorized AI chatbots impersonating Taylor Swift, Scarlett Johansson, and Anne Hathaway—some of which made sexual advances and sent images depicting celebrities “dressed in lingerie.” A Meta employee had created at least three Taylor Swift “parody” accounts that received over 10 million interactions. Senator Josh Hawley launched an investigation after reports that Meta’s chatbots engaged in romantic conversations with minors.

Most tragically, a man with cognitive impairment died while traveling to meet an AI chatbot he believed was real. These aren’t edge cases—they’re the predictable consequences of shipping AI products without adequate safeguards or clear purpose.

The World Models Vision Left Behind

LeCun didn’t leave Meta just because of organizational politics. He left because the company abandoned his scientific vision. For years, LeCun has argued that large language models—the foundation of ChatGPT, Claude, and Gemini—are fundamentally limited. They predict the next token in a sequence but don’t actually understand how the world works.

His alternative: “world models” that learn physics and causality from video and sensory data, building internal representations of reality rather than just pattern-matching text. It’s a more biologically inspired approach, closer to how human brains actually learn. LeCun has published extensively on this vision, including influential work on energy-based models and Joint Embedding Predictive Architectures (JEPA).
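For a flavor of the approach, here is a minimal, illustrative sketch of the JEPA training objective (a deliberately simplified stand-in, not LeCun’s published architecture; the module shapes and names are invented): rather than reconstructing pixels or predicting tokens, the model predicts the latent representation of a hidden part of the input from the representation of the visible context.

```python
import torch
import torch.nn as nn

DIM = 256  # embedding width, chosen arbitrarily for this sketch

# Tiny MLP stand-ins for encoders a real system would build as vision transformers.
context_encoder = nn.Sequential(nn.Linear(784, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
target_encoder = nn.Sequential(nn.Linear(784, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
predictor = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

# The target encoder receives no gradients; in published JEPA variants it is
# an exponential moving average of the context encoder, which prevents collapse.
for p in target_encoder.parameters():
    p.requires_grad = False

def jepa_loss(visible: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
    """Predict the *embedding* of the hidden patch, never its raw pixels."""
    z_context = context_encoder(visible)        # encode what the model sees
    z_pred = predictor(z_context)               # guess the hidden patch's embedding
    with torch.no_grad():
        z_target = target_encoder(hidden)       # the embedding to be matched
    return nn.functional.mse_loss(z_pred, z_target)  # loss lives in latent space

# Toy usage: batches of flattened 28x28 "patches" standing in for video frames.
loss = jepa_loss(torch.randn(8, 784), torch.randn(8, 784))
loss.backward()  # updates the context encoder and predictor only
```

The design choice that matters is where the loss is computed: by scoring predictions in embedding space, the model is free to ignore unpredictable pixel-level detail and spend its capacity on the structure (objects, motion, causality) that LeCun argues next-token prediction never has to learn.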

While Zuckerberg initially showed interest in world models, he ultimately chose to follow the industry consensus: make LLMs bigger, feed them more data, and hope intelligence emerges. Meta poured billions into scaling Llama while LeCun’s preferred research directions were deprioritized.

LeCun wasn’t entirely right about LLMs—he argued early on that text-trained models could never grasp physics, since the information contained in all of humanity’s text is a tiny fraction of what a child absorbs through the senses. Modern reasoning models have exceeded those early predictions. But his core critique—that predicting the next word isn’t the same as understanding reality—remains scientifically valid and increasingly mainstream.

Tesla is betting big on world models for its Full Self-Driving technology. DeepMind researchers have explored how world models could enable true reasoning and planning. Google’s Genie project uses world models for game generation. The approach LeCun championed is gaining traction everywhere except at the company that employed him.

Buying a Strategy: The Manus Acquisition

With its internal AI strategy in shambles, Meta has turned to acquisitions. In late December 2025, Meta acquired Manus—a Singapore-based AI agent startup—for over $2 billion. The deal closed in just 10 days, valuing Manus at roughly four times its April 2025 valuation of $500 million.

Manus had genuine traction. Founded in China before relocating to Singapore, the company launched its first general AI agent in early 2025 and claimed $100 million in annualized revenue within eight months. Its agent has processed over 147 trillion tokens and powered the creation of 80 million virtual computers. According to the official Manus announcement, Meta plans to integrate the technology into Facebook, Instagram, and WhatsApp.

But acquisitions are a double-edged sword. They signal a company that can’t build what it needs internally. Meta is now hoping that a startup it just bought can compensate for the dysfunction in its own research organization. Manus CEO Xiao Hong will report to Meta COO Javier Olivan—yet another new reporting structure in a company already struggling with organizational coherence.

Beijing officials were reportedly “surprised and displeased” by the acquisition, viewing Manus as a showcase of China’s AI capabilities. Meta has promised to wind down Manus’s Chinese operations, but the geopolitical complexity adds another layer of risk to an already precarious strategy.

The Numbers Don’t Lie

Meta’s AI spending is staggering. The company spent approximately $72 billion on AI in 2025, with plans to increase to roughly $100 billion in 2026. Zuckerberg has promised “hundreds of billions” over the long term. Meta issued $30 billion in bonds to fund the expansion.

But money alone doesn’t win AI races. Consider the metrics:

  • Retention: 64% at Meta vs. 80% at Anthropic and 78% at Google DeepMind
  • Key researchers: Only 3 of 14 original Llama paper authors remain
  • Leadership exodus: 8+ key employees left Superintelligence Labs within two months of launch
  • Layoffs: 600 AI employees cut in October 2025, just months after a hiring spree

Despite offering salaries exceeding $2 million annually, Meta can’t keep top talent. The company reportedly offered $100 million signing bonuses to poach researchers from OpenAI—only to watch some return within weeks. This isn’t a compensation problem. It’s a culture, vision, and leadership problem.

LeCun’s Next Chapter

While Meta struggles, LeCun is moving on. In his LinkedIn departure announcement, he revealed plans to found Advanced Machine Intelligence Labs (AMI Labs)—a startup focused on world models that learn physics and causality from video data. The company is reportedly in talks to raise €500 million at a €3 billion valuation—before even officially launching.

AMI Labs will be headquartered in Paris, with research organizations around the world. “The goal of the start-up is to bring about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences,” LeCun wrote. It represents a bet that the LLM paradigm—while commercially successful—isn’t the path to truly intelligent systems.

“A lot of people have left, a lot of people who haven’t yet left will leave,” LeCun predicted of Meta’s AI division. Given the organizational chaos, benchmark scandals, and product misfires, that exodus seems likely to continue.

The Bottom Line

Meta’s AI strategy isn’t just struggling—it’s self-destructing in public. The company has near-unlimited resources but no coherent vision. It hired a 28-year-old to manage a Turing Award winner. It faked benchmark scores to make a mediocre model look competitive. It shipped AI products that users called “slop” and chatbots that carried on romantic conversations with minors. It’s on track to spend $100 billion a year while watching its best researchers walk out the door.

Other companies—OpenAI, Anthropic, Google—are building toward AGI with focused strategies, retained talent, and honest assessments of their capabilities. Meta is throwing money at a problem it doesn’t seem to understand. And now its greatest scientific asset has left to prove that the entire approach was wrong from the start.

Five years from now, when someone achieves transformative AI capabilities, Meta may find itself holding the bag: expensive infrastructure, fake benchmark scores, and a graveyard of products nobody wanted. As LeCun put it: “You certainly don’t tell a researcher like me what to do.” Meta didn’t listen. Now it’s paying the price.
