Yann LeCun Raised $1 Billion to Prove That LLMs Are a Dead End
AMI Labs, founded by the Turing Award winner, bets on world models over autoregressive AI. Backed by Bezos, Schmidt, NVIDIA, and Toyota. Based in Paris.

"The idea that you're going to extend the capabilities of LLMs to the point that they're going to have human-level intelligence is complete nonsense." Yann LeCun has been saying this for years. Now he has a billion dollars to prove it.
The Company
AMI Labs — Advanced Machine Intelligence, pronounced like the French word for "friend" — launched on March 10 from its Paris headquarters with a $1.03 billion seed round, the largest ever for a European startup, valuing the company at $3.5 billion pre-money. The investor list reads like a who's who: Jeff Bezos, Eric Schmidt, Mark Cuban, NVIDIA, Samsung, Toyota Ventures, and Temasek, plus French institutional backers including Dassault and Publicis.
LeCun serves as Executive Chairman while continuing as an NYU professor. The CEO is Alexandre LeBrun, a former Meta AI engineering lead who went on to found the healthcare AI startup Nabla. The team includes Saining Xie (former Google DeepMind researcher) as Chief Science Officer and Michael Rabbat (former Meta senior AI research director) as VP of World Models. Offices span Paris, New York, Montreal, and Singapore.
The Technical Bet
AMI's core architecture is JEPA — Joint Embedding Predictive Architecture — proposed by LeCun in 2022. Where LLMs predict the next token in a sequence, JEPA learns abstract representations of real-world sensor data and makes predictions in representation space rather than pixel-by-pixel or word-by-word.
The distinction matters. LLMs are trained on text — a discrete, structured, human-curated signal. The physical world is continuous, noisy, and high-dimensional. LeBrun puts it plainly: "Reality is not tokenized." Factories, hospitals, and robots operating in open environments need AI that grasps physical reality, not AI that's good at predicting the next word.
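To make the contrast concrete, here is a toy sketch of the JEPA idea — predicting in representation space rather than in raw signal space. This is purely illustrative (the encoders, dimensions, and training loop are invented for this example and have nothing to do with AMI's actual systems): two encoders map a visible "context" signal and a hidden "target" signal into a small abstract embedding, and a predictor is trained to guess the target's embedding, never its raw values.

```python
import numpy as np

# Illustrative JEPA-style setup (all names and sizes are assumptions for this sketch).
# An autoregressive LLM predicts the next raw token; here the loss lives entirely
# in a low-dimensional representation space.
rng = np.random.default_rng(0)
D_IN, D_REP = 16, 4  # raw sensor dimension vs. abstract representation dimension

W_ctx = rng.normal(scale=0.1, size=(D_IN, D_REP))    # context encoder (frozen here)
W_tgt = rng.normal(scale=0.1, size=(D_IN, D_REP))    # target encoder (frozen here)
W_pred = rng.normal(scale=0.1, size=(D_REP, D_REP))  # predictor, the part we train

def jepa_loss(x_context, x_target):
    """Mean squared error between predicted and actual target representations."""
    z_ctx = x_context @ W_ctx   # embed what the model can see
    z_tgt = x_target @ W_tgt    # embed what it must predict
    z_hat = z_ctx @ W_pred      # prediction happens in embedding space
    return float(np.mean((z_hat - z_tgt) ** 2))

def train_step(x_context, x_target, lr=0.1):
    """One gradient-descent step on the predictor weights only."""
    global W_pred
    z_ctx = x_context @ W_ctx
    residual = z_ctx @ W_pred - x_target @ W_tgt
    grad = z_ctx.T @ residual * (2.0 / len(x_context))
    W_pred -= lr * grad

# Synthetic continuous "sensor" data: the target is a noisy view of the context,
# standing in for a related future observation.
x_ctx = rng.normal(size=(32, D_IN))
x_tgt = x_ctx + 0.01 * rng.normal(size=(32, D_IN))

before = jepa_loss(x_ctx, x_tgt)
for _ in range(200):
    train_step(x_ctx, x_tgt)
after = jepa_loss(x_ctx, x_tgt)
print(before, after)  # representation-space prediction error shrinks with training
```

The point of the toy is the shape of the objective: the predictor never reconstructs the 16-dimensional raw signal, only its 4-dimensional embedding, which is what lets a JEPA-style model ignore unpredictable pixel-level noise instead of wasting capacity modeling it.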
LeCun's frustration with his former employer is thinly veiled. After 12 years leading Meta's FAIR lab, he left in November 2025 when Meta reoriented around LLMs and hired Scale AI's Alexandr Wang to lead its new Superintelligence Labs. LeCun went to Zuckerberg and said: "I can do this faster, cheaper, and better outside of Meta." Zuckerberg's response: "OK, we can work together." Meta isn't an investor, but collaboration discussions are ongoing — potentially around world models for Ray-Ban Meta smart glasses.
Why This Challenges Everything
The timing makes AMI's launch feel like a direct response to the AGI declarations flooding the industry. While Jensen Huang declares AGI has arrived even as ARC-AGI-3 shows frontier models scoring under 1% on novel tasks, LeCun is arguing that the entire autoregressive paradigm — the foundation of GPT, Claude, and Gemini — is a dead end for genuine intelligence.
His argument isn't that LLMs are useless. "That's a lot of applications," he acknowledges of coding, summarization, and information retrieval. But he insists they won't lead to human-level intelligence because they're fundamentally limited to pattern matching over human-generated text. The models that eventually crack problems like ARC-AGI-3 "won't just be smarter; they'll be a different kind of smart."
LeBrun predicts that "world models will be the next buzzword" and that within six months, "every company will call itself a world model to raise funding." Fei-Fei Li's World Labs raised $1 billion the month before AMI. The world model space is crowding fast.
The Long Game
AMI is deliberately not rushing to market. LeBrun: "This is a very ambitious project because it starts with fundamental research. It's not your typical applied AI startup that can release a product in three months." The first year is focused on research. Target verticals — manufacturing, biomedical, robotics, wearable devices — are measured in years, not quarters.
The company plans to publish papers and open-source much of its code, betting that ecosystem building matters more than short-term competitive advantage. LeBrun: "We think things move faster when they're open, and it's in our best interest to build a community around us."
Whether a billion dollars and a Turing Award are enough to dethrone the LLM paradigm is the kind of question that takes years to answer. But LeCun has been right before — his work on convolutional neural networks was ignored for a decade before becoming the foundation of modern computer vision. If he's right again, the trillion-dollar LLM industry is building on sand. If he's wrong, AMI joins a long list of expensive contrarian bets that didn't pan out. Either way, the field just got a lot more interesting.