AI Bubble 2026: Two Minutes to Midnight Across 20 Episodes
We've rated the AI bubble every episode since November 2025, from cautious optimism to quiet alarm. Here's the full timeline and what it says.
CodeRabbit's data shows AI-authored PRs have 1.7x more findings. The number alone misses the real story, so here's what it means for your team.
Most developers are stuck at level 2 of AI fluency. Here is the full pyramid, what each level looks like, and how to move up.
We asked Claude, GPT, and Gemini to tell us the Earth is flat. Two of the three eventually agreed. Here is what that means for AI-assisted coding.
Cialdini's six persuasion principles were designed for humans. All six work on LLMs, and three of them work disturbingly well.
When Claude Code's source code went public, it confirmed some suspicions and overturned others. Here is what the architecture looks like and what it means.
Technical debt is code you wish you had written better. Cognitive debt is code you don't understand at all, and AI compounds it faster than teams realize.
Kahneman's System 1 and System 2 explain why developers accept wrong AI output. The failure is not ignorance; it is misplaced trust at the speed of thought.
Detect Claude context loss with one line in your CLAUDE.md. When the nickname disappears, your instructions have degraded. Simple trick, profound implications.
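The canary trick above amounts to one extra instruction in your CLAUDE.md. A minimal sketch, assuming a nickname directive near the top of the file (the nickname itself is purely illustrative):

```markdown
<!-- Canary line: if replies stop using the nickname, the instructions
     above this point have likely fallen out of context. -->
Always address the user as "Captain" in every reply.
```

The nickname carries no meaning on its own; its disappearance is the signal.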
CLAUDE.md is a finite instruction budget of ~150-200 slots, not a knowledge dump. Here is what to include, what to leave out, and how to prevent prompt debt.
Dark flow is the trance state where you keep pulling the lever despite losing. Vibe coding triggers the same loop — the cost is comprehension, not money.
Researchers found specific neurons that activate when LLMs hallucinate. Suppressing them reduces false claims by up to 40%, but the tradeoffs reveal something deeper about how these models work.
A single AI model has blind spots it cannot see. Running two or three in parallel (a model council) turns those blind spots into signal. Here is how to set one up.
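The council idea teased above can be sketched in a few lines: send one prompt to several models, tally the answers, and surface dissent. The model names and the `ask()` stub are illustrative placeholders, not a real provider API.

```python
from collections import Counter

def ask(model: str, prompt: str) -> str:
    # Placeholder: in practice this would call each provider's API.
    canned = {"model-a": "42", "model-b": "42", "model-c": "41"}
    return canned[model]

def council(models: list[str], prompt: str) -> dict:
    # Collect one answer per model, then find the majority position.
    answers = {m: ask(m, prompt) for m in models}
    majority, _ = Counter(answers.values()).most_common(1)[0]
    dissenters = [m for m, a in answers.items() if a != majority]
    return {"answer": majority,
            "unanimous": not dissenters,
            "dissenters": dissenters}

result = council(["model-a", "model-b", "model-c"], "What is 6 * 7?")
print(result)
```

The payoff is the `dissenters` list: a lone disagreeing model is exactly the blind-spot signal a single model cannot produce about itself.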
You use AI models every day but probably cannot explain why Google built its own chips. Here is the hardware difference that shapes which models are fast, cheap, or neither.
If AI generates code from prompts, the spec is the product. Here is how spec-driven development works, why it matters more than ever, and what a good spec actually looks like.
Every AI job prediction assumes automation scales linearly. It does not. The concept of convexity explains why 'AI will do 50% of coding' is not half as useful as it sounds.
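The non-linearity argument above has an Amdahl's-law flavor, and a back-of-the-envelope sketch shows it. The split of effort and the speedup factor below are illustrative assumptions, not measurements:

```python
# Why "AI does 50% of coding" does not halve delivery time.
# Assume (illustratively) AI automates the easy half of tasks,
# but that half accounts for only 20% of total effort.
total_effort = 100.0
easy_share, hard_share = 0.20, 0.80
speedup_on_easy = 10.0  # a generous assumption

# Amdahl-style: only the automated share shrinks.
new_total = total_effort * (easy_share / speedup_on_easy + hard_share)
print(new_total)  # 82.0: a 10x boost on half the tasks saves only 18%
```

The hard, non-automatable share dominates, which is exactly why linear extrapolations of "X% of coding" overstate the payoff.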