Alex Lupsasca, a theoretical physicist and recent Breakthrough Prize winner now at OpenAI, is demonstrating that large language models have crossed a capability threshold that changes how physics research gets done. While casual users see only incremental improvements on everyday tasks like email writing, Lupsasca has documented GPT-5 reproducing complex theoretical physics papers in minutes, work that previously required days or weeks of manual calculation. This disparity illustrates what researchers call the "Jagged Frontier": AI capabilities that appear modest on everyday tasks yet revolutionary at the research frontier.

Lupsasca's breakthrough moment came when GPT-5 solved problems from recently published papers, including one that relied on techniques developed after the model's training cutoff. By strategically "priming" the model, warming it up on foundational textbook problems before posing complex research questions, he unlocked performance that suggested a qualitative shift in the model's reasoning abilities. That experience prompted his move from an academic sabbatical to OpenAI, where he is now systematically probing the model's limits by working with physicist colleagues on their hardest unsolved problems.

The implications extend beyond individual productivity gains. Lupsasca describes this moment as potentially transformative for theoretical physics itself, comparing it to AlphaGo's "Move 37," which revealed entirely new strategic possibilities. As AI systems grow more capable at complex reasoning, the physics community faces a reckoning over how research methodology, peer review, and scientific credit will evolve in an era when computational assistance can compress months of work into hours.