Yann LeCun, Turing Award winner and former Chief AI Scientist at Meta, joins Jacob Effron. The conversation centers on Yann's contrarian thesis that LLMs, however useful as products, are a dead end on the path to human-level intelligence: they can't predict the consequences of their actions, can't plan, and fundamentally can't model the messy, high-dimensional real world. He unpacks his alternative architecture, JEPA (Joint Embedding Predictive Architecture), which learns to predict in an abstract representation space rather than generating pixel-level predictions, and explains why that approach is essential for robotics, industrial applications, and any system that needs to operate beyond the substrate of language.

Yann also reveals the real story behind his departure from Meta (contrary to the public narrative, he had zero technical influence on Llama), the genesis of his Tapestry project for sovereign open-source AI, why he believes LLMs are intrinsically unsafe, where he diverges from his fellow Turing laureates Hinton and Bengio, and why he predicts the industry will recognize the paradigm shift by early 2027. Throughout, he offers candid reflections on the tension between research and product at major labs, and explains why he intentionally headquartered AMI Labs in Paris with zero Silicon Valley VC money.
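For listeners curious what "predicting in representation space" means in practice, here is a minimal, illustrative sketch of the joint-embedding predictive idea. It is not code from the episode or from any official JEPA release; the module names and dimensions are placeholders, and real variants add details such as an EMA target encoder and anti-collapse regularization. The key point it shows: the training loss compares embeddings, never raw pixels.

```python
# Hedged sketch of a joint-embedding predictive setup (illustrative only):
# encode a context view and a target view, predict the target's embedding
# from the context embedding, and train on the distance in embedding space.
import torch
import torch.nn as nn

class TinyJEPA(nn.Module):
    def __init__(self, dim_in: int = 784, dim_emb: int = 128):
        super().__init__()
        # Context encoder: sees the observable part of the input.
        self.context_encoder = nn.Sequential(
            nn.Linear(dim_in, dim_emb), nn.ReLU(), nn.Linear(dim_emb, dim_emb))
        # Target encoder: sees the part to be predicted
        # (in practice often a momentum/EMA copy of the context encoder).
        self.target_encoder = nn.Sequential(
            nn.Linear(dim_in, dim_emb), nn.ReLU(), nn.Linear(dim_emb, dim_emb))
        # Predictor: maps the context embedding to the expected target embedding.
        self.predictor = nn.Sequential(
            nn.Linear(dim_emb, dim_emb), nn.ReLU(), nn.Linear(dim_emb, dim_emb))

    def forward(self, context: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        z_ctx = self.context_encoder(context)
        with torch.no_grad():  # stop-gradient on the target branch
            z_tgt = self.target_encoder(target)
        z_pred = self.predictor(z_ctx)
        # Loss lives in abstract representation space, not pixel space.
        return nn.functional.mse_loss(z_pred, z_tgt)

# Usage: two views of the same scene stand in for "context" and "future/target".
model = TinyJEPA()
loss = model(torch.randn(32, 784), torch.randn(32, 784))
loss.backward()
```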
00:00 Intro
01:45 Why LLMs Aren't the Path to Intelligence
07:51 AMI and World Models
12:07 The JEPA Architecture Explained
15:55 Problems with Robotics Models Today
20:37 Silicon Valley Herd Behavior
28:18 Tapestry: Sovereign AI for the Rest of the World
35:49 OpenAI Is the Next Sun Microsystems
40:51 Why Yann's Views Diverged from Hinton & Bengio
44:32 LLMs Are Intrinsically Unsafe
58:00 Why Yann Left Meta
1:00:26 Reflections on FAIR
1:12:11 Advice for PhD Students
LeWorldModel Paper: https://arxiv.org/abs/2603.
With your host: @jacobeffron, Partner at Redpoint