The Silent Duel: David Blackwell and the Math Inside AI
{show notes}
Two people walk toward each other on a dirt road. One bullet each. In a normal duel, a missed shot makes a sound. But in a silent duel, a miss would be invisible. You wouldn't know if your opponent was holding their fire, or had already taken their one shot. How would you know when to stop walking and take your own?
In 2024, NVIDIA named the most powerful piece of AI hardware ever built after the man who spent his career thinking about this exact problem. His name was David Blackwell.
In this episode
- David Blackwell: brilliant professor and researcher at the RAND Corporation. Seventh African American to earn a PhD in mathematics.
- Kriegsspiel: the blind chess variant that Blackwell played daily.
- Blackwell's silent duel: a thought experiment from Cold War-era game theory, and why related math ended up in machine learning textbooks.
- The economist's question: the most important question in the world at that moment, asked in good faith, and why every mathematician Blackwell knew gave the same useless answer.
Episode Music
- James Opie / Nihilore, CC BY 4.0
Additional Notes
If you'd like to hear more about David Blackwell from the man himself:
David Blackwell, "An Oral History with David Blackwell," conducted by Nadine Wilmot in 2002 and 2003, Regional Oral History Office, The Bancroft Library, University of California, Berkeley, 2003.
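For listeners who want to see the idea from the episode in code: Blackwell's 1956 approachability result says that in a repeated game with vector payoffs, you can steer your long-run average toward a target set by repeatedly measuring the gap and correcting your next move. One well-known algorithm derived from that theorem is regret matching. The sketch below is illustrative only; the matching-pennies game and the uniformly random opponent are assumptions for the demo, not anything Blackwell wrote.

```python
import random

def regret_matching(rounds=5000, seed=0):
    """Play repeated matching pennies with regret matching.

    Regret matching (an algorithm derived from Blackwell's
    approachability theorem): play each action with probability
    proportional to its positive cumulative regret, i.e. how much
    better you would have done by always playing it instead.
    """
    rng = random.Random(seed)

    def payoff(a, b):
        # Row player wins +1 if the choices match, loses 1 otherwise.
        return 1.0 if a == b else -1.0

    regret = [0.0, 0.0]  # cumulative regret for actions 0 and 1

    for _ in range(rounds):
        # Mix over actions in proportion to positive regret.
        pos = [max(r, 0.0) for r in regret]
        total = sum(pos)
        p0 = pos[0] / total if total > 0 else 0.5
        a = 0 if rng.random() < p0 else 1

        b = rng.choice([0, 1])  # illustrative opponent: uniform random
        u = payoff(a, b)

        # Update regret: how much better each action would have done.
        for alt in (0, 1):
            regret[alt] += payoff(alt, b) - u

    # Average regret shrinks toward zero as rounds grow:
    # the "gap" Blackwell's theorem guarantees you can close.
    return max(r / rounds for r in regret)

avg = regret_matching()
print(f"max average regret after 5000 rounds: {avg:.3f}")
```

The point is not the game itself but the loop: no perfect strategy is ever computed, yet the running average converges because every step corrects against the accumulated gap.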
Chapters
00:00 Intro music
00:21 The Silent Duel
01:35 Kriegsspiel at RAND
03:21 David Blackwell background
04:50 An economist asks a question
06:06 Jimmy Savage on statistics
07:01 Approachability Theory
07:43 NVIDIA names a chip
08:03 AI and LLMs walk the duel
--
Lore in the Machine is a narrative technology podcast about the forgotten history of computing, software, and the internet. Hosted by Daina Bouquin, each episode uncovers the true story behind a piece of computer history. These are the forgotten people, decisions, and accidents that quietly shaped the digital world.
If you enjoyed this episode, please consider leaving a rating and review on Apple Podcasts or Spotify. It really helps others find the show.
You can follow the show on YouTube, Instagram, and Facebook. You can support it with a coffee.
Intro music
Speaker: Picture a dirt road. You are walking forward. Someone is walking toward you. You each have a gun. One bullet each. If you shoot too early, you miss. If you wait too long, though, they shoot first. In a normal duel, if they fire and miss, you hear it. You know. You walk forward. Take your time and you win. But what if the guns make no sound? If you fire and miss, they don't know you're empty. And you don't know about them. You keep walking, step by step, not knowing if the person coming toward you is holding their fire or has already spent their one shot.

In 2024, the most powerful piece of artificial intelligence hardware ever built was named after the man who spent his life thinking about this problem. I'm Daina Bouquin, and this is Lore in the Machine.

Santa Monica, California. 1950. Noon. Every day in one office at the RAND Corporation, a group of men sits down to play Kriegsspiel. It's chess, but you can't see your opponent's pieces. The referee sits between the two boards. He tells you only whether a move you want to make is legal or illegal. That's all. Otherwise, nothing. You are playing a game you can only half see against an opponent you cannot read, one decision at a time.

The man who hosts this game is David Blackwell. He is one of the most gifted mathematicians in the country. He is also one of very few black men walking these particular halls. And he is completely fascinated, not by the weapons being modeled in adjacent offices, but by the shape of not knowing, by what it means to act wisely without full information. This is not idle curiosity. It is his research, and it leads him eventually to the silent duel.

The math for a normal duel turns out to be straightforward. Blackwell and his colleagues solve it in a day. But the silent version, the one where a missed shot is invisible, where you cannot distinguish a patient opponent from an empty one, that problem is something else. It sits in the middle of everything he cares about.
How do you make decisions when the signals you need simply don't exist? Blackwell knew this problem from the inside. He had been the seventh African American to earn a PhD in mathematics. Brilliant enough that when a position opened at UC Berkeley, the math department had wanted him and the university president agreed. But there was a catch no one said out loud, until the offer just quietly disappeared. The department head's wife hosted faculty dinners, and she refused to have a black man in her home.

David Blackwell spent his career making high-stakes moves in a game where the rules were hidden, where the obstacles were invisible until the moment they stopped him. He understood, in a way that was not abstract at all, what it meant to walk forward without knowing where the shots were coming from.

He also had an unusual relationship with the work he'd been hired to do. He had grown up in Centralia, Illinois, with a picture of hand-to-hand combat on his uncle's wall, listening to his father and uncle describe what the First World War had actually cost them. He believed, quietly and without apology, that war was stupid. He once stopped an entire line of research, just abandoned it completely, when he realized the math could lead somewhere harmful.

One afternoon, an economist comes to his office. He needs a number: the probability of a major war in the next five years. And he explains why. Long-term resource allocation. The question is entirely reasonable. Blackwell gives a textbook answer. Probability only applies to repeatable events. A war is not repeatable. You can't run the next five years a thousand times and track the outcomes. The probability is either zero or one, and we won't know for five years.

The economist thanks him and leaves. He mentions on his way out that every statistician he'd spoken to gave him exactly the same answer. Blackwell is left alone with that. And it gets worse the longer he thinks about it.
A man had come to him with perhaps the most important question in the world at that moment, a reasonable question asked in good faith. Every mathematician Blackwell knew had handed him the same useless, technically correct, but completely unhelpful answer. The field had failed him. And so had Blackwell.

A few weeks later, a mathematician named Jimmy Savage comes through RAND. Blackwell tells him about the economist, and Savage shows him something. Probability does not have to be a frequency. It can be a degree of belief. A best current guess, held loosely, revised as new information arrives. This changes everything. Not gradually. Like opening a door. Because this is how people actually navigate the world. You don't wait for enough information to draw a frequency distribution. You walk into a situation with everything you currently know. You make your best call. And when something changes, you update. Keep walking. This is what the silent duel had been about all along.

In 1956, Blackwell published a paper that would eventually find its way into machine learning textbooks. The core idea is this: in a repeated game, you don't need a perfect strategy. You measure the distance between where you are and where you want to be. You correct your next move to close the gap. Over time you converge. Not by knowing the answer in advance, but by never stopping the process of adjustment. Sit with that for a moment. Not certainty, not omniscience, just a method for moving through a game you can only half see.

In 2024, Jensen Huang, CEO of NVIDIA, stands on a stage and announces their newest chip architecture. 208 billion transistors. The most powerful piece of AI hardware they had ever built. They named it Blackwell. These systems, the ones now woven into search engines and writing tools and medical diagnostics, are not programmed with answers. They are trained on human writing.
Enormous quantities of it, until they develop something that functions like intuition about what comes next. Not knowledge, just pattern, probability. Best current guess. Which means every time an LLM generates a response, it is doing what Blackwell described in that RAND office. Walking forward with everything it currently knows, measuring the gap between where it is and where it needs to be, adjusting and advancing. It doesn't know what it doesn't know. It can't see the whole board. And neither can we.

We have spent years arguing about whether these systems are on a path toward something that surpasses us, some kind of intelligence that transcends human limits. We talk about them as if they arrived from somewhere else, as if they are a different kind of thing entirely. But they come from us. From the full written record of everything we have ever said online, from all of our digitized documents, every argument and letter and explanation and story. They move through uncertainty exactly the way we do, just at a scale we can't hold in our heads.

Blackwell formalized this. He built his entire life's work on top of it. Not a puzzle he encountered in a book, but something he had lived, in a game where the rules were hidden and the obstacles were invisible until the moment they stopped him. He could not have known what he was describing. He could not have seen it. He spent his life walking the silent duel. And then, without knowing it, he taught us how to build things that walk it too.

We didn't build an oracle. We built a mirror. And then we were surprised when it couldn't see what we couldn't see. The guns are still silent, and we're all still walking. I'm Daina Bouquin, and this is Lore in the Machine.

Now, this is the point where the music usually starts playing, but I haven't done the thing yet that seems obvious, and that's to ask you to rate and subscribe to this podcast if you enjoy it.
It'll really help others find the show, and it'll give me something to smile about too. Thanks for listening.