As computers become more intelligent, the distinction between man and machine grows increasingly blurred — a development that British mathematician Alan Turing predicted in 1950. Believing that machines would one day exhibit intelligent behavior indistinguishable from that of humans, Turing proposed a test: evaluators chat by text with both a person and a computer for five minutes, on any topic of their choosing, and then decide which seems more human. In the annual competition built around his test, the most successful computer is awarded the “Most Human Computer” title; the person who most convincingly establishes his or her humanity is honored as the “Most Human Human.”
Fascinated by the Turing test and the questions it raises, Brian Christian (MFA, Creative Writing, 2008) volunteered to participate as one of four human “confederates,” or competitors, in 2009. He wrote about the experience in The Most Human Human (2011), a Wall Street Journal National Bestseller, a New York Times Editors' Choice, and a New Yorker Favorite Book of the Year. Christian also explores the relationship between humans and computers in his new book, Algorithms to Live By (2016), which uses computer science as a lens for understanding human decision-making.
Your undergraduate degrees are in computer science and philosophy. What led you to pursue a master’s degree in creative writing?
I had always pursued those disciplines in parallel, going back at least as far as high school. Over the course of my undergraduate years, I began to wonder: having viewed computer science as my career and writing as my passion, what if it was the other way around? Even though I was (and remain) fascinated by both computer science and philosophy, it began to feel that in literature the stakes were higher, and that it was possible to make a more individual, unique contribution. By the end of college I had resolved to go headlong into literature and see where it took me.
When did you first hear about the Turing test and why were you so keen on participating?
For someone who had double-majored as an undergrad in computer science (the artificial intelligence track) and philosophy (the philosophy of mind track), the Turing test was one of the few things that explicitly came up in both departments. For me it had become something of a nexus of all three of my disciplinary interests: a test in the medium of written language, against a piece of software, for the highest philosophical stakes. As my nonfiction writing began to revolve around the test, as a kind of lightning rod for the questions I was dealing with about what technology can teach us about the human condition, I noticed that in the most recent contest, the leading computer program had fallen just a single vote shy of “passing the Turing test.” There was a sense in the press that humanity had just dodged a bullet. The next contest was a year away, and the question was whether this would be the year the machines finally won. A voice inside me rose up: “Not on my watch.” So I reached out to the organizing committee, and crossed the line from observer to participant.
You did quite a bit of prepping for the test, to ensure that judges would rate you as more human than the computer. How does a human prep to be more human?
This is in many ways the central question of the book. How do you try to “act human” in a competitive setting, one where you’ll be judged — relative not only to machines but to other humans — on your success in doing so? It’s a fascinating question, and a bizarre one. I spent weeks poring over the transcripts of previous Turing tests — 800 pages’ worth, with a highlighter and a French press — looking at the cases where real humans failed, being judged to be machines, and the cases where the machines conversely succeeded in deceiving the judges. Meanwhile, I learned all I could about the history and the design of chatbots [computer programs designed to simulate intelligent conversation] — their capacities, their limitations — and spent a lot of time probing them for weaknesses. And I talked to as many experts as I could, across a broad swath of fields — psychologists, linguists, computer scientists, philosophers — and put the question to them: What are the hallmarks that make human intelligence uniquely complex? How can I make sure to showcase these things? What would you do as a confederate in a Turing test? I tried to synthesize all of these different perspectives, and my research behind enemy lines into chatbots themselves, into a game plan.
In studying transcripts from previous Turing tests, you observed that computers tend to just reply to the last comment or question asked rather than referencing earlier discussion points for context. One would think that would be an easy “tell” for the judges. Why isn’t it?
In computer science, this is known as “statelessness,” and in statistics it’s called the “Markov property” — a process whose next state depends only on the present state, not on the entire history. As to why this isn’t an obvious tell, I’m reminded of the Oxford philosopher John Lucas: if and when computers pass the Turing test, it will be “not because machines are so intelligent, but because humans, many of them at least, are so wooden.” The truth is that many Turing test failures stem from the low bar set by many human interactions. In fact, what I came to realize is that many human interactions are stateless. But by knowing to look out for it, we have a means of being better conversationalists and communicators in our own lives. This was one of the surprising results of the book for me: what began as an investigation into technology in many ways became an investigation into the nature of human conversation.
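To make the distinction concrete, here is a minimal sketch — my illustration, not code from the book or from any actual contest entry — of a stateless bot, which sees only the latest message, next to a stateful one, which conditions its reply on the whole history of the exchange:

```python
class StatelessBot:
    """Replies depend only on the most recent message (the Markov property)."""

    def reply(self, message: str) -> str:
        if "?" in message:
            return "Good question. What do you think?"
        return "Interesting. Tell me more."


class StatefulBot:
    """Replies depend on the entire history of the interaction."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def reply(self, message: str) -> str:
        self.history.append(message)
        # Refer back to an earlier turn, which a stateless bot cannot do.
        if len(self.history) > 1:
            return f"Earlier you said {self.history[0]!r}. How does that relate?"
        return "Interesting. Tell me more."


if __name__ == "__main__":
    bot = StatefulBot()
    print(bot.reply("I grew up in Seattle."))
    print(bot.reply("Do you like the rain?"))
```

The stateless bot will answer the second message exactly as it would the first; the stateful bot can tie it back to what came before — the “tell” Christian describes.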
Did your research for the Turing test change how you communicate?
Absolutely, and sometimes to a distracting extent. I strive for stateful conversations, for instance, where every remark depends upon the entire history of the interaction (or even of the relationship) to take on its full meaning. I think much more about the timing of turn-taking, interruption, and floor-yielding than I ever did, having realized in the course of my preparation that sometimes it’s the form of an interaction, every bit as much as the content, where the true complexity lies. I try whenever possible to ask people questions in which I am less interested in the answer than in their answer.
You admit to being surprisingly invested in the outcome of the test. You were quite relieved to be chosen as “the most human human” — an honor bestowed on one of the four human participants. Why did it matter so much to you?
At some level, the test provokes a kind of existential anxiety: to fail to convey one’s humanity is a more serious charge than to fail to sink a three-pointer, for instance. At another level, I was entering into the competition as a kind of a ringer: I had invested an enormous amount of time and effort, relative to my fellow confederates, into preparing for the role. So I suppose it would have been charmingly embarrassing to be bested, after all that, by folks “just being themselves.”
Does your new book, Algorithms to Live By, pick up threads from the first book?
In many ways, Algorithms to Live By continues in the vein of The Most Human Human, although in some sense it also presents a kind of counter-thesis. The Most Human Human tells the story of what we've learned about ourselves from the differences between us and the machines we've built in our image. Algorithms to Live By looks at what we've learned about ourselves from the surprising things we share with machines. Many of the fundamental problems of human life strongly parallel some of the canonical problems in computer science, and so there's a genuine opportunity to learn something — both about how to make better decisions every day, but more broadly about the nature of human decision-making itself — by exploring these parallels.
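One of the canonical parallels the book opens with is optimal stopping, the so-called 37% rule: when options arrive one at a time and you can’t go back, look at roughly the first 37% (n/e) without committing, then take the first option better than everything seen so far. A quick simulation — my sketch, not from the book — shows that this strategy lands on the single best option about 37% of the time:

```python
import random

def secretary_trial(n: int = 100) -> bool:
    """One trial of the 37% rule: observe the first n/e candidates,
    then take the first later candidate better than all of them.
    Returns True if the overall best candidate was chosen."""
    candidates = list(range(n))
    random.shuffle(candidates)
    cutoff = int(n / 2.718281828)  # n/e, roughly 37% of n
    best_seen = max(candidates[:cutoff], default=-1)
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value == n - 1   # committed: was it the best overall?
    return candidates[-1] == n - 1  # never beat the benchmark; took the last

trials = 10_000
wins = sum(secretary_trial() for _ in range(trials))
print(f"Picked the best candidate in {wins / trials:.0%} of trials")  # ~37%
```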
Some people believe humans won’t be able to catch up again once computers beat them at the Turing test, because technological evolution seems to occur so much faster than biological evolution. Your response is, “Frankly, I don’t buy it.” Can you elaborate?
So many of the man-machine challenges have the property that as soon as the machines win once, we retire the competition. Few people remember that Garry Kasparov won his first series against Deep Blue, in 1996. He then lost the 1997 rematch series, after which point IBM of course had zero interest in a best-of-three match for all the marbles in 1998. Instead they declared victory and hastily dismantled the machine. The same thing happened with their Watson system in 2011. In the book I make the analogy of a boxing match in which one of the boxers, but only one, has the power to ring the match-ending bell as soon as the scorecard favors them. My attitude, rather, is that this is the moment when things are just starting to get interesting.
One of the hallmarks — if not the hallmark — of human intelligence is its ability to adapt. In the Turing test, especially, we hold mechanical imitations up against genuine human conversation, which is of course wildly varying in quality and caliber. A legitimate loss would, I think, be an incredibly compelling and fascinating opportunity for us Homo sapiens — to raise the level of the game.