The Second Brain AI Podcast ✨🧠

Deterministic by Design: Why "Temp=0" Still Drifts and How to Fix It

Season 1 Episode 7


Why do LLMs still give different answers even with temperature set to zero? In this episode of The Second Brain AI Podcast, we unpack new research from Thinking Machines Lab on defeating nondeterminism in LLM inference. We cover the surprising role of floating-point math, the real system-level culprit (a lack of batch invariance), and how redesigned, batch-invariant kernels can finally deliver bit-identical outputs. We also explore the trade-offs, the real-world implications for testing and reliability, and how reproducible inference enables reproducible research and true on-policy reinforcement learning.
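For listeners who want to see the floating-point side of the story for themselves, here is a minimal Python sketch (our own illustration, not code from the research): the same float32 values summed in two different orders can land on slightly different results. Because the reduction order inside inference kernels can change with how requests are batched together, this tiny drift is enough to nudge a greedy (temperature = 0) decode onto a different token.

import numpy as np

# Illustration only: float32 addition is not associative, so the same numbers
# summed in a different grouping can give a (slightly) different result.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

# Sum the values one by one, in order.
sum_forward = np.float32(0.0)
for v in x:
    sum_forward += v

# Sum the same values, but grouped into chunks first (a different reduction order).
sum_chunked = np.float32(0.0)
for chunk in x.reshape(100, 100):
    sum_chunked += chunk.sum(dtype=np.float32)

print(sum_forward == sum_chunked)  # typically False
print(sum_forward, sum_chunked)    # the results differ in the low-order bits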

