SPEAKER_14: Apple Card is the perfect credit card for every purchase. It has cash-back rewards unlike others. You earn unlimited daily cash back on every purchase, receive it daily, and can grow it at a 4.15% annual percentage yield when you open a high-yield savings account. Apply for Apple Card in the Wallet app on iPhone and start earning and growing your daily cash with savings today. Apple Card subject to credit approval. Savings is available to Apple Card owners subject to eligibility requirements. Savings accounts provided by Goldman Sachs Bank USA. Member FDIC. Terms apply. Every kid learns differently, so it's really important that your children have the educational support that they need to help them keep up and excel. If your child needs homework help, check out IXL, the online learning platform for kids. IXL covers math, language arts, science, and social studies through interactive practice problems from pre-K to 12th grade. As kids practice, they get positive feedback and even awards. With the school year ramping up, now is the best time to get IXL. Our listeners can get an exclusive 20% off IXL membership when they sign up today at ixl.com slash invisible. That's the letters I-X-L dot com slash invisible. Squarespace is the all-in-one platform for building your brand and growing your business online. Stand out with a beautiful website, engage with your audience, and sell anything: your products, the content you create, and even your time. You can easily display posts from your social profiles on your website or share new blogs or videos to social media. Automatically push website content to your favorite channels so your followers can share it too. Go to squarespace.com slash invisible for a free trial, and when you're ready to launch, use the offer code invisible to save 10% off your first purchase of a website or domain. This is 99% Invisible. I'm Roman Mars. When Joseph Weizenbaum was still alive, he liked to tell the story about a computer program he'd created back in the 1960s as a professor at MIT. It was a simple chatbot named Eliza that could interact with users in a typed conversation. Weizenbaum asked his secretary to try it out.
SPEAKER_06: I asked her to my office and sat her down at the keyboard and then she began to type and of course I looked over her shoulder to make sure that everything was operating properly.
SPEAKER_14: Weizenbaum watched as the secretary typed a message to Eliza and then as the program responded to her.
SPEAKER_06: After two or three interchanges with the machine, she turned to me and she said, would you mind leaving the room please?
SPEAKER_14: The secretary was so entranced by this chatbot that she didn't want to stop exchanging messages. Weizenbaum's secretary fell under the spell of the machine. Weizenbaum saw this happening over and over again. People would reveal very intimate details about their lives to the program. It was like they'd just been waiting for someone, for something to ask.
SPEAKER_17: Eliza was a simple computer program.
SPEAKER_14: That's producer Delaney Hall.
SPEAKER_17: The way it worked was it would recognize a keyword in the user's statement and then reflect it back in the form of a simple phrase or question. When Eliza got confused or couldn't parse a statement, it would fall back on set phrases like please go on or tell me more.
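The mechanism Delaney describes, scan for a keyword, reflect the rest of the sentence back with the pronouns swapped, and fall back on a canned phrase otherwise, is simple enough to sketch in a few lines of Python. This is a toy illustration, not Weizenbaum's original MAD-SLIP implementation; the keywords, templates, and pronoun table here are invented for the example.

```python
import random

# Swap first and second person so "my boyfriend made me come here"
# reflects back as "your boyfriend made you come here".
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Keyword -> response template; "{rest}" is filled with the reflected
# remainder of the user's sentence. Templates without "{rest}" just
# ignore it (str.format allows unused keyword arguments).
KEYWORD_RULES = {
    "my": "Your {rest}?",
    "always": "Can you think of a specific example?",
    "because": "Is that the real reason?",
}

# Canned phrases for when no keyword matches.
FALLBACKS = ["Please go on.", "Tell me more."]

def reflect(words):
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def respond(statement):
    words = [w.strip(",.!?").lower() for w in statement.split()]
    for i, word in enumerate(words):
        rule = KEYWORD_RULES.get(word)
        if rule:
            return rule.format(rest=reflect(words[i + 1:]))
    return random.choice(FALLBACKS)

print(respond("Well, my boyfriend made me come here."))
# -> Your boyfriend made you come here?
print(respond("He says I'm depressed much of the time."))
# -> Please go on.  (or "Tell me more.")
```

Even this crude version reproduces the shape of the dramatized exchange below: a keyword match turns a statement into a mirroring question, and anything unparseable gets a gentle prompt to continue.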
SPEAKER_14: Weizenbaum had programmed Eliza to interact in the style of a psychotherapist and it was pretty convincing. It gave the illusion of empathy even though it was just simple code. Here's an exchange dramatized in an old documentary segment. It begins with the human.
SPEAKER_02: Men are all alike.
SPEAKER_14: And then the computer program answers. In what way?
SPEAKER_02: They're always bugging us about something or other.
SPEAKER_14: Can you think of a specific example?
SPEAKER_02: Well, my boyfriend made me come here.
SPEAKER_14: Your boyfriend made you come here?
SPEAKER_02: He says I'm depressed much of the time.
SPEAKER_04: I'm sorry to hear that you're depressed.
SPEAKER_02: It's true. I am unhappy.
SPEAKER_14: Do you think coming here will help you not to be unhappy?
SPEAKER_17: Eliza was one of the very first computer programs that could convincingly simulate human conversation. And Weizenbaum got weirded out by the way that people reacted to it. He didn't expect them to be so captivated. He feared that they were being fooled, that they didn't really understand it was just a bunch of circuits on the other end. And he wondered: if it was so easy to mimic a sense of human understanding, then what did that say about human understanding?
SPEAKER_14: Weizenbaum started raising these big difficult questions at a time when the field of artificial intelligence was still relatively new and mostly filled with optimism. Many researchers dreamed of creating a world where humans and technology merged in new ways. They wanted to create computers that could talk with us and respond to our needs and desires.
SPEAKER_17: But Weizenbaum would take a different path. He would begin to speak out against what he saw as the eroding boundary between humans and machines. And he'd eventually break from the artificial intelligentsia, becoming one of the first and loudest critics of the technology he helped to build.
SPEAKER_14: People have long been fascinated with mechanical devices that imitate humans. Ancient Egyptians built statues of divinities from wood and stone and consulted them for advice. Early Buddhist scholars described precious metal people that would recite sacred texts and weep.
SPEAKER_04: Oh, the Greeks had a wonderful attitude toward robots and machines. They welcomed them.
SPEAKER_17: This is Pamela McCorduck, and she wrote Machines Who Think, one of the first modern histories of artificial intelligence. And she says that in Greek mythology, the god Hephaestus, who was the god of blacksmithing, loved to make robots.
SPEAKER_04: He created all kinds of robots that would help him out in the forge, and all kinds of party help. Things would roll into the parties of the gods with trays and whatnot. And there's a wonderful description in Homer: they are like lovely young women, and there is intelligence in their hearts.
SPEAKER_17: Flash forward nearly 2,500 years to the 1940s.
SPEAKER_07: In the mid-1940s of the 20th century, a different kind of tool was invented, a tool for extending certain of the powers of man's mind. This tool is the electronic computer.
SPEAKER_17: As modern computers became more powerful and widely available, people began to believe that this fantasy of intelligent machines that could think and communicate might actually be within grasp. The idea was most famously articulated by the British mathematician Alan Turing.
SPEAKER_13: And he wrote this very famous paper in 1950 called Computing Machinery and Intelligence, which begins with this famous line: can machines think?
SPEAKER_17: This is Brian Christian. He wrote The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive.
SPEAKER_13: Turing is already sort of seeing ahead into the 21st century and imagining what if someday we do manage to build a machine that can actually quote unquote think. How would we know that we had built such a machine?
SPEAKER_14: This is a hard task because it's difficult to even define what it means to think, let alone prove that a computer is doing it. But Turing approached the question like the engineer he was, proposing a simple test. He said that if a computer could regularly fool people into thinking it was, in fact, a human, it would pass. Turing predicted that this would eventually happen so consistently that as a result, we
SPEAKER_13: would just as a society, he says, come to this point where we would speak of machines as being intelligent without expecting to be contradicted.
SPEAKER_14: The Turing test brought up really profound questions, ones we've been thinking about since the days of Greek robots and precious metal people.
SPEAKER_13: What does it mean to have a mind? How do you kind of bridge that gap from one mind to another? And is language kind of the best or only means that we have for doing that? How does empathy work? What are the limits of empathy? They're not fundamentally technological questions. They're fundamentally human questions.
SPEAKER_17: And they're questions that would come to occupy Joseph Weizenbaum, the creator of Eliza. Before turning against artificial intelligence, Weizenbaum helped develop it. He was part of an early generation of AI researchers. Back in the 50s, they started exploring how to give computers new human-like powers.
SPEAKER_14: They programmed computers to reason, plan, and perceive. They created programs that could play checkers, solve word problems, and prove logical theorems. The press at the time described their work as astonishing. Herb Simon, one of the most prominent AI researchers at the time, predicted that within 20 years, by the 1980s, machines would be capable of doing any work a person could do.
SPEAKER_17: Just like today, these advances made people pretty anxious. For millennia, humans had defined themselves as rational creatures. It was our ability to reason and use logic and play chess that made us special.
SPEAKER_13: It is exactly those things that the computer came in and, you know, within a handful of years was better than any human could ever hope to be. And so it really pulled that rug out from under us.
SPEAKER_17: But despite all the big leaps forward, there was one realm in particular where computers remained quite dumb. They struggled to master human language, which is known in AI as natural language.
SPEAKER_16: Natural language processing or understanding, if you will, is probably the hardest problem for AI.
SPEAKER_17: Melanie Mitchell is a computer scientist and AI researcher.
SPEAKER_16: When we communicate in language, we're using all of our thought processes. We're using our vast knowledge of the way the world works, about other people and their goals and motivations. We're disambiguating words. It's really almost equivalent to cognition itself.
SPEAKER_17: So as an example of this, Mitchell says, think about this sentence. A steel ball fell on a glass table and it shattered.
SPEAKER_16: If you ask, what does it refer to? Humans will understand immediately. Of course, we know that if a steel ball falls on a glass table, the glass table shatters. But that's because we know about what glass is, what steel is. We know about things falling on other things and we know a lot about the way the world works.
SPEAKER_17: But computers don't know that stuff. Unless they're programmed to, they don't actually know about glass and steel and gravity. And actually programming all that information into a computer is really labor intensive and difficult because that knowledge is mostly subconscious for us. It's so intuitive as to be almost invisible.
SPEAKER_16: And this is something that comes up again and again in natural language processing systems is that they don't have the kind of knowledge about the world that we humans have. And so they make mistakes.
SPEAKER_14: This clumsiness with human language meant that early chatbots built in the 1950s and 60s were tightly constrained. They could converse about some very specific topic like, say, baseball. So you could ask about Joe DiMaggio's batting average in 1949 and the computer would tell you.
SPEAKER_17: And by limiting the world of possible questions and answers, researchers could build machines that passed as intelligent. But talking with them was like having a conversation with Wikipedia. Not a real person.
SPEAKER_15: I was very aware of the notion that he was doing something with computers that gave these computers a power to understand that was leaps and bounds above anything that had come before.
SPEAKER_17: Naomi Weizenbaum is Joseph Weizenbaum's youngest daughter. She didn't know a lot about the specifics of his work when she was growing up, but she remembers his mustache. She says he always had one and that he was serious and studious and he spent a lot of time in his office at home.
SPEAKER_15: My mother, she'd call me downstairs and say, OK, time to go walk the dad. And I had to go down and fetch dad out of his study and ask him to come for a walk with me.
SPEAKER_17: She also remembers her dad as someone who couldn't help but see the world from a critical perspective.
SPEAKER_15: I think my father was very deeply an outsider right from the very beginning and that when he went to MIT, it was no exception.
SPEAKER_14: Shortly after he arrived at MIT in the 1960s, Weizenbaum had an insight. He realized he could create a clever workaround for the natural language problem. He could build a chatbot that didn't really need to know anything about anything. It wouldn't spit out facts. It would just reflect back at you like a mirror.
SPEAKER_17: This is Joseph Weizenbaum in an archival interview.
SPEAKER_07: You know, the cocktail party conversation, for example, where someone says something to you that you really don't fully understand, but you are in fact able to give a response which appears appropriate. And in fact, the conversation can continue for quite a long time. And we do that all the time. You know, we do that all the time.
SPEAKER_14: But Weizenbaum still had to create a framework in which that conversation would happen, a style of interaction the program could follow. Weizenbaum had long been interested in psychology, and he recognized that the speech patterns of a therapist might be easy to automate.
SPEAKER_17: Because when you think of a therapist, certain phrases come to mind, like, and how does that make you feel? And so Weizenbaum decided to create a bot that, like a therapist, would listen, pick out keywords, and then mirror statements back to the user.
SPEAKER_15: How dad explained it was that it was kind of simple dimple psychology, more or less very similar to a parent. You pick out one element of what the person said, and you ask them to expand on that.
SPEAKER_17: Joseph Weizenbaum was very clear. He didn't intend for Eliza to be an actual therapeutic tool. It wasn't even programmed with information about human thought or emotion. But nevertheless, Eliza seemed to understand.
SPEAKER_14: And as Weizenbaum watched people like his secretary converse with the program, he quickly realized they weren't just having simple dimple exchanges. They were having meaningful conversations with the machine.
SPEAKER_13: People began having these really long kind of soul-baring conversations with this program and reporting, you know, a meaningful therapeutic experience. And this, for him, was horrific.
SPEAKER_17: Brian Christian again.
SPEAKER_13: He talks about, you know, I would have assumed that, you know, some sort of meaningful experience like this would require someone to be present on the other side of the conversation and to actually invest something. So the fact that the merest mechanical parody of this process can still apparently produce something meaningful for people, he found really unsettling.
SPEAKER_14: But while Weizenbaum was horrified by his own creation, others saw Eliza as an important breakthrough and a real opportunity.
SPEAKER_17: Before coming to MIT, Weizenbaum had spent time at Stanford, where he became friends with a psychiatrist named Dr. Kenneth Colby. Colby had worked at a large, underfunded state mental hospital, where patients maybe saw a therapist once a month.
SPEAKER_04: And that was why, when he encountered even this rather crude caricature, as Weizenbaum put it, of the therapeutic process, he thought, boy, I can put some therapeutic knowledge into this and make it happen.
SPEAKER_14: Colby started promoting the idea that Eliza might actually be therapeutically useful, and the medical community started to pay attention. They thought maybe this program or other programs like it could help expand access to mental health care. And maybe in some ways, it could even be better than a human therapist. It would be cheaper, and people might actually speak more freely with a robot.
SPEAKER_04: Yeah, I can tell this bunch of tubes anything I want. I can tell the truth. We feel free to open ourselves to what we think is a non-judgmental machine.
SPEAKER_10: Many therapists claim there has to be a live, warm human body present for a therapeutic change to take place. Again, that's an opinion. That's not a fact.
SPEAKER_17: Here's Colby in a news interview years later.
SPEAKER_10: If you say to the program, I'm feeling quite depressed, I would sympathize with you, and so will the machine.
SPEAKER_14: So the idea of the robot therapist started to take off.
SPEAKER_13: You know, in the late 60s, early 70s, you have these peer-reviewed psychology journal articles saying, you know, this represents a potential breakthrough for therapy.
SPEAKER_17: The famed scientist Carl Sagan wrote about the idea in 1975, imagining a network of psychotherapeutic computer terminals, kind of like phone booths.
SPEAKER_13: You walk to a city corner, put in a nickel, and you talk to this therapeutic robot for a few minutes to kind of take your mind off of things. So it's very warmly received by the medical community at large.
SPEAKER_14: And while the idea of therapy terminals on every corner never materialized, people who worked in mental health would continue to experiment with how to use computers in their work. Dr. Colby, for instance, went on to create another chatbot called Parry, which simulated the conversational style of a person with paranoid schizophrenia. He later developed a program called Overcoming Depression.
SPEAKER_13: Again, for Weizenbaum, he felt like he had sort of let this genie out of the bottle and there was kind of no going back.
SPEAKER_07: The real controversy finally arose over the claim that Eliza was of therapeutic significance. You know, my God, could people really believe that? And what are we doing?
SPEAKER_17: Weizenbaum objected to the idea that something as subtle, intimate, and human as therapy could be reduced to code. He worried that if machines snuck into the therapist's office, then where else might they end up? He began to argue that fields requiring human compassion and understanding just shouldn't be automated.
SPEAKER_14: Weizenbaum also worried about the same future that Alan Turing had described. One where chatbots regularly fooled people into thinking they were interacting with a human. Weizenbaum would write of Eliza, quote, What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful, delusional thinking in quite normal people.
SPEAKER_07: The whole thing is a con job. The whole thing is, it's very much like fortune telling. It's an illusion-creating machine and all that sort of thing.
SPEAKER_17: Weizenbaum quickly went from someone working in the heart of the AI community at MIT to someone preaching against it. And his friend Kenneth Colby became one of his first targets.
SPEAKER_04: He began attacking Dr. Colby in public, in professional meetings. And he would say, this is a wicked thing to do. It should not be done. And he should be punished for doing it.
SPEAKER_14: Over time, Weizenbaum escalated his arguments. His family had escaped Nazi Germany in 1936. And sometimes when critiquing AI, Weizenbaum would invoke the Holocaust. He worried powerful technologies could be abused by governments and corporations. Here's one of his other daughters, Miriam Weizenbaum.
SPEAKER_12: These happen to be incredibly powerful tools. And the really important question is, for whom are they being used? What's the purpose of them? Who benefits? Who's subject to them? Who controls them? Who doesn't?
SPEAKER_04: At that point, he began extending that and saying, all artificial intelligence is questionable and probably wicked. It shouldn't be done. And oh, dear. He was really swimming upstream on that one.
SPEAKER_14: Over the next few decades, AI allowed for automation not just in therapy, but in fields like medicine, law and finance.
SPEAKER_03: The computers are coming. Computers fly our planes. Increasingly, they do our banking. The computer is king in business, finance and information. And increasingly, say experts, computers will get smarter and smarter. Companions, advisors, oracles to all of us.
SPEAKER_17: Early researchers began trying to create computers that were flexible enough to learn human language on their own.
SPEAKER_03: To take those first faltering steps, as babies do, toward understanding language. To break down and diagram sentences. To remember new meanings, verb tenses, and to ask questions to make itself smarter. The computer wants to know: is the past tense of make, maked? No, says Martin, made.
SPEAKER_17: These programs could take a long time to figure things out.
SPEAKER_03: It has been a tedious three-year project. The computer now knows about 300 words. Still Paul Martin is convinced computers can learn.
SPEAKER_14: But starting in the 1980s and 90s, there were new breakthroughs in natural language processing. Scientists began relying on statistical methods.
SPEAKER_16: Taking documents or recordings of language and performing statistics on things like how often does one word come after another? Or how often does this sort of triplet of words occur?
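What Mitchell describes is essentially an n-gram model: counting, over a large body of text, how often words follow one another. A minimal sketch in Python, with a tiny invented corpus standing in for the "documents or recordings" (real systems of the era counted over enormous text collections):

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real systems used huge text collections.
corpus = "the ball fell on the table . the table shattered .".split()

# "How often does one word come after another?" -- bigram counts.
follows = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    follows[prev][cur] += 1

# "How often does this triplet of words occur?" -- trigram counts.
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))

print(follows["the"])                       # Counter({'table': 2, 'ball': 1})
print(follows["the"].most_common(1)[0][0])  # 'table': a crude next-word guess
print(trigrams[("on", "the", "table")])     # 1
```

The counts double as a rough next-word predictor: after "the", this toy corpus says "table" is the most likely continuation. Scaling that idea up is what statistical language models of this period did.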
SPEAKER_17: And then in the 2000s and 2010s, researchers began using what are called deep neural networks. They're trained using the huge amounts of data that only became possible with the rise of the internet.
SPEAKER_16: So now we can take every single web page online and process the language in it and have the neural network analyze the language. And they work so much better than anything that came before.
SPEAKER_17: And these techniques Mitchell is describing, they've been applied to chatbots specifically.
SPEAKER_16: They might be trained on actual conversations, recorded conversations. Sometimes people put their chatbot online and then other people interact with them and the systems learn by just interacting with people online.
SPEAKER_14: As a result, contemporary chatbots and their talking cousins like Siri and Alexa have gotten a lot better. There are still limitations, of course. They still can't really understand the world in the same way we do. But they have become a lot more reliable, personable, and convincing.
SPEAKER_17: And humans remain as open and sometimes gullible as ever when it comes to interacting with chatbots. There's even a term that's emerged in computer science, the Eliza effect, that refers to our tendency to anthropomorphize computers and to believe that programs understand, even when they really don't.
SPEAKER_14: There are recent examples that make it feel as though we are living firmly in the world that Alan Turing predicted and that Joseph Weizenbaum feared. Machines fool humans all the time now.
SPEAKER_17: In 2018, for example, Google revealed something called Duplex, a conversational AI that can make phone reservations. And it is uncanny.
SPEAKER_08: Hello, how can I help you? Hi, I'm calling to book a women's haircut for a client. I'm looking for something on May 3rd. Sure, what time are you looking for around? At 12 p.m.
SPEAKER_16: They programmed in human-like stuttering and saying um and uh and made it very convincing. Do you have anything between 10 a.m. and 12 p.m.?
SPEAKER_08: Depending on what service she would like, what service is she looking for? Just a woman's haircut for now.
SPEAKER_16: Okay. It sounded very human. People were really alarmed. They wanted to know when they were speaking to a computer.
SPEAKER_08: The first name is Lisa? Okay, perfect. So I will see Lisa at 10 o'clock on May 3rd.
SPEAKER_14: This issue of transparency has become central to the ethical design of these kinds of systems, especially in sensitive realms like therapy. Because even if Eliza was the first chatbot to be used in the field of mental health, it wasn't the last.
SPEAKER_02: Transparency is the basis of trust, and you must have trust with a service like this.
SPEAKER_17: This is Alison Darcy.
SPEAKER_02: My name is Alison Darcy. I'm the CEO and founder of Woebot Labs.
SPEAKER_17: Darcy is a psychologist and she's long been interested in how to use digital tools to expand access to mental health care.
SPEAKER_02: In the United States, a third of the population are living in areas that are designated no access areas. So they effectively don't have a mental health professional anywhere near them.
SPEAKER_17: And even if you live in a major city with lots of therapists, it's often not covered by insurance and can be super expensive.
SPEAKER_14: So a few years back, she and her team began thinking about how to build a digital tool that would make mental health care radically accessible. They experimented with video games before landing on the idea of Woebot, a chatbot guide who could take users through exercises based on cognitive behavioral therapy, which helps people interrupt and reframe negative thought patterns.
SPEAKER_17: Woebot is not even trying to pass the Turing test. It's very transparently a robot. It's represented by a little robot avatar. And part of its personality is that it's curious about human feelings, like a student learning a foreign language.
SPEAKER_02: I think he's somewhere between Kermit the Frog, like circa 1980, and Spock. Spock who's always trying to struggle with his human emotions.
SPEAKER_17: As Darcy's team built a prototype and started watching people interact with Woebot, right away they could tell something interesting was happening. They noticed that if Woebot didn't check in for a day or two, people would reach out.
SPEAKER_02: People were saying things like, Woebot, where were you? I need you to check in on me every day.
SPEAKER_17: Just like with Eliza, people were forming emotional connections with the program. They liked interacting with Woebot and relied on it.
SPEAKER_02: People when we spoke to them would say, look, I know this is just a chatbot. Like I know this isn't anything, but at the same time, it feels like this fun little dude checking in on me. And so people were willing to just go with the suspended reality. They were effectively stepping into this kind of fictional world and it made it playful and fun.
SPEAKER_17: I've spent some time with Woebot and I feel the same way. It's like texting back and forth with a relentlessly positive and sweet little cartoon character. It offers high fives and sends you gifts. It feels less like a human therapist leading me to deep insights and more like a tiny personal cheerleader I can carry around in my pocket.
SPEAKER_14: A tiny personal cheerleader doesn't sound very dystopian.
SPEAKER_17: No, it doesn't. But I mean, of course, Woebot raises tough questions about tech more broadly. If chatbots are built with nefarious intentions, they can do stuff like detect and manipulate human feelings. They can fool humans into thinking they're people.
SPEAKER_02: I have anxiety about this kind of tech misapplied. Like anything else, you know, like a surgeon's scalpel, this can be a tool that helps people survive and it can also be a weapon.
SPEAKER_17: And with Woebot in particular, there are other issues too, around privacy and safety and even thornier stuff. Like, is this tool a digital band-aid that somehow enables the status quo? Meaning the situation we're in now, where the vast majority of people don't have access to high-quality, comprehensive mental health care. Darcy is clear that she does not see Woebot as a replacement for talk therapy. And she says her team is always thinking and talking about those ethical questions.
SPEAKER_02: It's something we always keep an eye on. But I think the potential for good just so much outweighs the risk right now.
SPEAKER_17: Darcy thought about this a lot when Woebot first launched. They'd been working in relative obscurity, and then suddenly their numbers began to climb. Quickly, within the first five days, they had 50,000 users. Woebot was exchanging millions of messages with people each week.
SPEAKER_02: And I remember going home after our first day and sitting down at my kitchen table and having the realization that Woebot, on his first day of launch, had more conversations with people than a therapist could have in a lifetime. You know, it just really hit home.
SPEAKER_14: Joseph Weizenbaum eventually retired from MIT, but he continued speaking out against the dangers of AI until he died in 2008 at the age of 85. For a while, he became a go-to critic if a TV program needed someone to cast doubt on some new technology. For example, here he is in a segment about computers in schools criticizing the tendency to reduce every human problem to a technological one.
SPEAKER_05: It's a very dangerous attitude. It can't help but destroy our culture if it ever becomes truly victorious.
SPEAKER_17: And while he was an important humanist thinker, a guy way ahead of his time in many ways, some people felt like he went too far. Pamela McCorduck knew Weizenbaum over several decades. She's the one who conducted the interview you heard with him earlier in the story. And she says he burned a lot of bridges in the AI community and became almost a caricature of himself towards the end. She describes being at a conference in Austria, where Weizenbaum was also in attendance. At one point he got up and delivered what had become his usual tirade.
SPEAKER_04: We must not use computers. The German army uses computers and look what's going to happen. Well I happened to be sitting next to a young German artist and he looked at me and he said very quietly, the German army uses knives and forks. Let us not use knives and forks. Well I had to put my hand over my mouth to keep from laughing.
SPEAKER_17: And Weizenbaum also may have missed something that Darcy was just talking about with Woebot. The idea that humans engage in a kind of play when we interact with chatbots. We're not necessarily being fooled; we're just fascinated to see ourselves reflected back in these intelligent machines. That was my experience chatting with Woebot, and it was probably true for Eliza too.
SPEAKER_12: Also you know I was going to say about the Eliza and the story about his secretary and stuff.
SPEAKER_17: Here's Miriam Weizenbaum again. And I want to be clear, she had many lovely things to say about her dad in our conversation. That he was brilliant and insightful and funnier than people realized. But she also had this to say about the secretary story.
SPEAKER_12: I always heard that just really kind of soaked in arrogance. Like, how could she be so stupid to think, you know, that this is actually a meaningful communication. You know, here you have somebody who so needs a landing pad for her feelings that she's willing to embrace it. And he totally misses that. He totally misses the human need and instead speaks of it as, you know, just evidence of this person's stupidity rather than evidence of their humanity.
SPEAKER_17: I think we're all a little like Dr. Weizenbaum's secretary now. We're living in this highly automated world. We have these little computers in our pockets that we can talk to and we're under no illusion that they really understand us. But they still captivate us and we still rely on them to tell us the weather and how to get to where we want to go and maybe even on occasion to ask us if we're doing okay.
SPEAKER_14: Coming up after the break, a recent development in natural language processing that would make Joseph Weizenbaum very, very proud. The International Rescue Committee works in more than 40 countries to serve people whose lives have been upended by conflict and disaster. Over 110 million people are displaced around the world, and the IRC urgently needs your help to meet this unprecedented need. The IRC aims to respond within 72 hours after an emergency strikes, and they stay as long as they are needed. Some of the IRC's most important work is addressing the inequalities facing women and girls: ensuring safety from harm, improving health outcomes, increasing access to education, improving economic well-being, and ensuring women and girls have the power to influence decisions that affect their lives. Generous people around the world give to the IRC to help families affected by humanitarian crises with emergency supplies. Your generous donation will give the IRC steady, reliable support, allowing them to continue their ongoing humanitarian efforts even as they respond to emergencies. Donate today by visiting rescue.org slash rebuild. Donate now and help refugee families in need. Article believes in delightful design for every home, and thanks to their online-only model, they have some really delightful prices too. Their curated assortment of mid-century modern, coastal, industrial, and Scandinavian designs makes furniture shopping simple. Article's team of designers are all about finding the perfect balance between style, quality, and price. They're dedicated to thoughtful craftsmanship that stands the test of time and looks good doing it. Article's knowledgeable customer care team is there when you need them to make sure your experience is smooth and stress-free. I think my favorite piece of furniture in my house is the Geome sideboard. Maslow picked it out, remember Maslow? And I keep my vinyl records and CDs in it. It just is awesome. I love the way it looks. Article is offering 99% Invisible listeners $50 off your first purchase of $100 or more. To claim, visit article.com slash 99, and the discount will be automatically applied at checkout. That's article.com slash 99 for $50 off your first purchase of $100 or more.
SPEAKER_14: This show is sponsored by BetterHelp. Do you ever find that just as you're trying to fall asleep, your brain suddenly won't stop talking? Your thoughts are just racing around? I call this just going to bed. It basically happens every night. It turns out one great way to make those racing thoughts go away is to talk them through. Therapy gives you a place to do that, so you can get out of your negative thought cycles and find some mental and emotional peace. If you're thinking of starting therapy, give BetterHelp a try. It's entirely online, designed to be convenient, flexible, and suited to your schedule. Just fill out a brief questionnaire to get matched with a licensed therapist, and switch therapists at any time for no additional charge. Get a break from your thoughts with BetterHelp. Visit BetterHelp.com slash invisible today to get 10% off your first month. That's BetterHelp, H-E-L-P, dot com slash invisible. So I'm back in the studio with Delaney Hall, and you've got more to tell us about some interesting stuff that's happened with natural language processing in just the past year.
SPEAKER_17: Yeah, a lot of interesting stuff has happened just recently. So as I was interviewing people for the story, you know, I was talking with computer scientists and historians and we were talking about Weizenbaum and his legacy and these fears he had about chatbots fooling people. And totally unprompted, a couple people brought up this one recent development in natural language processing.
SPEAKER_13: It's something that's called GPT-2.
SPEAKER_16: So GPT-2 stands for Generalized Pretrained Transformer. I can never remember what it stands for.
SPEAKER_17: So Melanie Mitchell, one of the computer scientists I spoke with, is close. It stands for Generative Pre-trained Transformer 2. And it's this relatively new neural network that generates incredibly convincing text. Somebody described it as a mouth without a brain, which is a pretty good description.
SPEAKER_17: And the way that GPT-2 works is that you can input a word or a sentence or a paragraph and the program then generates text based on that input. It basically conjures up a whole coherent piece of writing based on just a few words that you give it.
SPEAKER_13: It is this giant neural network that is trained on a huge corpus of text to predict the next word in a sentence or a paragraph given the preceding text.
SPEAKER_17: Here's Brian Christian, who we also heard from in the story.
SPEAKER_13: You get what appear to be plausible, if very slightly weird, screenplays and, you know, news bulletins and chat message histories and all sorts of things.
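The loop Christian describes is simple to state: run the network on everything so far, get a probability for every possible next token, sample one, append it, and repeat. Here is a minimal sketch of that generation loop using the openly released GPT-2 weights via the Hugging Face transformers library; this is an assumption for illustration, not something used in the episode, it requires `pip install transformers torch`, and the exact API can vary by library version.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
ids = tokenizer.encode(prompt, return_tensors="pt")

for _ in range(40):  # generate 40 tokens of continuation
    with torch.no_grad():
        logits = model(ids).logits[:, -1, :]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    next_id = torch.multinomial(probs, 1)      # sample, rather than argmax
    ids = torch.cat([ids, next_id], dim=1)     # extend the context, repeat

print(tokenizer.decode(ids[0]))
```

Sampling rather than always taking the single most likely word is part of what gives the output its fluent variety; OpenAI's published samples, like the unicorn story discussed next, used refinements of this idea such as top-k sampling.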
SPEAKER_14: Okay, so I'm having a hard time picturing that. So like, what are some examples of the type of text it generates? Like, why and how is it so convincing?
SPEAKER_17: So the example that kind of broke the internet when all of this first came out was this fake news story about unicorns in the Andes.
SPEAKER_14: Must have missed that one. Well, that's notable.
SPEAKER_17: Yeah, yeah, I know. So this will sort of help you understand a little bit better kind of how it works, I think. So the prompt that GPT-2 was given, and so this was a couple of sentences that were written by a human and given to the machine. And I guess I should be clear, this was written in the style of a news story. That was intentional.
SPEAKER_17: So the prompt was, quote, in a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
SPEAKER_14: Okay, so a human, a human wrote those sentences and then fed them to the machine. And so what did the machine spit back out?
SPEAKER_17: So totally independently, based on those sentences, this is what GPT-2 generated. And it actually goes on for quite a while, but I'll read the first few lines just because they're amazing. It goes, the scientists named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. Dr. Jorge Perez, an evolutionary biologist from the University of La Paz, and several other companions were exploring the Andes Mountains when they found a small valley with no humans or animals. Perez noticed that the valley had what appeared to be a natural fountain surrounded by two peaks of rock and silver snow. And so it goes on for a few more paragraphs, but Roman, a machine, a machine wrote that, a machine.
SPEAKER_14: And it added all the detail about Dr. Perez and the University of La Paz and everything? That's just from the machine?
SPEAKER_17: I know. That's crazy. And it's like, there's a couple things that really stand out to me when I'm reading it. And the first, of course, is that, you know, the sentences are coherent. Like if you told me a human wrote them, I would not doubt you.
SPEAKER_14: Yeah, neither would I. That's incredible.
SPEAKER_17: And then it has those details, like the names and the location of the university, which line up, right, with the prompt.
SPEAKER_17: And then the other thing it does is that it does this pretty convincing job of mimicking the format of a news article. So it starts with lead sentences that give you the most crucial information, and only then does it begin to tell the story of how the discovery unfolded.
SPEAKER_14: Wow. It's so cool. I almost can't even make sense of how real that feels. I mean, this is pretty silly with unicorns in the Andes. But I imagine, you know, this is making people nervous, that a computer can generate text this convincing, as if it came from a human.
SPEAKER_17: Yeah. Yeah, absolutely. So the company that created the program, called OpenAI, released this paper. And it had various examples of text generated by this neural net. And it caused a huge stir online, partly because it did such a good job and partly because it made people immediately nervous. Like, the idea that this text is so convincing gets back to that idea that we talked about in the episode, which is: will people know this was written by a machine? And because it generated a very convincing and totally fake news article, people are worrying that the tool could supercharge the spread of misinformation on the internet.
SPEAKER_14: Right. I mean, so what do you do once you make a thing like that? How do you make sure it isn't used for nefarious ends?
SPEAKER_17: So the way that OpenAI approached it was when they first released news about it, they said, we've made this thing. Here are some examples of the text it generates. It is extremely convincing. And because of that, we're not going to actually release the code. It's too dangerous.
SPEAKER_14: Whoa. Yeah. The only problem I can see with that is the Streisand effect. Like if you say you can't use it and don't recreate it, it just means that more people are going to try to create and use that sort of thing. I mean, it just calls more attention to it.
SPEAKER_17: So yeah, it did call a lot of attention to it. Here is Melanie Mitchell again.
SPEAKER_16: A bunch of people just like jumped on them. They were saying, this is just a publicity stunt. And there were, of course, the popular press doomsday headlines about, you know, OpenAI creates an AI that's so powerful it's too dangerous to release.
SPEAKER_17: So basically OpenAI got a lot of pushback. But the whole situation also inspired this interesting conversation in the AI community about these, you know, really tough questions that they're facing now, now that they can generate what are known as deep fakes, you know, not just in text, but in audio and video too. These are really convincing computer-generated pieces of media. So here's Melanie.
SPEAKER_16: How should people think about publishing results in AI? Is it better to publish these things and get them out there so that people can do research on how to protect or defend against them? Or should they not be published at all? So there was this big conversation about that.
SPEAKER_14: I mean, I guess I wonder, like, you know, this obviously has a problem with fooling people, but couldn't a human just write an entire fake news article, or a bunch of them, and post them online? I mean, how does GPT-2 make things really all that different?
SPEAKER_17: That was something I asked Melanie. And she said that the issue is really scale. Like sure, she could sit down and write a story about unicorns herself and post it online and tweet it out and try to make it go viral. But it would take time. It would take effort. And instead, GPT-2 could generate literally millions of these stories very quickly.
SPEAKER_16: And here you can just do it automatically and generate as many of these as you want. You could put them online and somebody's going to believe them. It just makes it easy to create these things and publish them and it's hard to fight back, you know.
SPEAKER_17: And the tough thing is that, you know, we're already living in an online information environment where it can be hard to sort the real from the fake.
SPEAKER_13: You know, here's an account that says they're the Tennessee Republican Party. Are they? You know, here's this thing that says it's the Trump fan club of Missoula. Is it? You know, like, here's this person who claims they're, you know, a retired Navy SEAL. Are they? You know, we're increasingly moving into this world in which it's just really hard to make sense out of discourse on the web without being extremely scrupulous and vigilant.
SPEAKER_14: I mean, it seems like this generation, from now on, we're going to have to have major media literacy training. I mean, just to be able to spot the fakes from the real thing is going to take a whole new amount of effort and part of our brain.
SPEAKER_17: Yeah, it is. And to bring it back to Joseph Weizenbaum, I mean, when he first started thinking about this stuff, computers were still really rudimentary. They had about as much processing power as we have in, like, a parking meter today, which is to say they couldn't do much. But today it's this totally different story.
SPEAKER_13: How do we survive, like, this coming reality in which you can produce, for all intents and purposes, Turing test passing content that espouses whatever viewpoint you happen to want, like, infinitely at scale? Increasingly these ostensibly philosophical questions of the Turing test become not merely one of the hassles of everyday life, but one of, I think, the more pressing questions affecting us at this moment in history.
SPEAKER_14: Wow. So it seems like Weizenbaum may have been ahead of his time, but dead on.
SPEAKER_17: It's interesting because I think he sounded to many people in his time alarmist, but increasingly the questions that he was raising seem very, very relevant.
SPEAKER_13: I find myself wishing that we had both Turing and Weizenbaum by our side in the coming decade, because I think we need all the perspective we can get.
SPEAKER_14: 99% Invisible was produced this week by our senior editor Delaney Hall. Mix and tech production by Sharif Youssef, music by Sean Real. Katie Mingle is our senior producer. Kurt Kohlstedt is the digital director. The rest of the team is Emmett FitzGerald, Joe Rosenberg, Chris Berube, Vivian Le, Avery Trufelman, Sofia Klatzker, and me, Roman Mars. Special thanks this week to Katie Mingle, who first came up with the idea to do a story about Eliza. Thanks also to Dr. Alena Buyx, who has studied the ethics of AI in mental health settings. You can find a link to the paper she co-authored on our website. Thanks also to the Carnegie Mellon University Archives for letting us use the recording of Joseph Weizenbaum from the Pamela McCorduck collection. And finally, thanks to Naomi and Miriam Weizenbaum, who generously shared their memories of their father. We are a project of 91.7 KALW in San Francisco and produced on Radio Row in beautiful downtown Oakland, California. 99% Invisible is a member of Radiotopia from PRX, a fiercely independent collective of the most innovative shows in all of podcasting. Support them all at radiotopia.fm. You can find the show and join discussions about the show on Facebook. You can tweet at me @romanmars and the show @99piorg. We're on Instagram and Reddit too. But our true home on the web is 99pi.org.
SPEAKER_15: So I remember one time when I was probably about 10, asking my dad if I could work on the computer. I asked if I could be alone, not that I knew anything at that time about the famous story of his secretary wanting to be left alone to work with Eliza. I didn't know anything about that. But I somehow knew that this computer was going to have powerful insights into people and who I was, and something like that. And unbeknownst to me, he had turned the computer on so that it operated basically as a typewriter. But I didn't know that. And so I sat down at the typewriter and I typed in, what is my sister Miriam doing right now? And I waited for a really long time. And the computer didn't answer me. And then I said, what is Miriam wearing right now? Because I knew the answer to that, so I thought maybe the computer would too. And nothing happened. And I waited for a really long time. And finally, I got up in disgust and thought, this stupid computer doesn't know shit.
SPEAKER_00: Gatorade Zero has all the electrolytes and all the flavor of Gatorade with zero sugar to help you get more out of your workout routine. How much more? It helps you feel more hydrated through every mile, every set, and every song in your fitness routine. No matter how you choose to move, Gatorade Zero has got your back. From yoga to kickboxing and everything in between, Gatorade Zero is the perfect partner for whatever workout comes your way, helping you get more, do more, and be more with zero. Get more out of zero.
SPEAKER_01: Nissan has a car for everyone. Every driver who wants more. Whatever your more is, more fun, more freedom, more action. From sports cars, sedans, and EVs to pickups and crossovers, with Nissan's diverse lineup, anyone can find something to fit their more. Get more revs in their sports cars, more guts with all-wheel drive, and more than enough options to fit your driving style. Nissan can take you where you want to go. Learn more at NissanUSA.com.