SPEAKER_00: Wondery Plus subscribers can listen to How I Built This early and ad-free right now. Join Wondery Plus in the Wondery app or on Apple Podcasts. Get closer to the best you. Audible lets you enjoy all your entertainment in one app. New members can try Audible free for 30 days. Visit audible.com slash built, or text built to 500500. That's audible.com slash built, or text built to 500500, to try Audible free for 30 days.
SPEAKER_01: Apple Card is the credit card created by Apple. You earn 3% daily cash back when you use it to buy the new Apple Vision Pro or any products at Apple. And you can automatically grow your daily cash at 4.50% annual percentage yield when you open a high-yield savings account. Apply for Apple Card in the Wallet app on iPhone. Apple Card subject to credit approval. Savings available to Apple Card owners, subject to eligibility. Apple Card and savings by Goldman Sachs Bank USA, Salt Lake City Branch, member FDIC. Terms apply.
SPEAKER_00: This message comes from How I Built This sponsor Crowe. There's no shortage of volatility in business today, from regulatory shifts to digital disruption. But volatility isn't your enemy. Doing nothing is. You can uncover opportunity in uncertainty. Crowe offers top-flight services in audit, tax, advisory, and consulting to help you take on your biggest challenges. Visit embracevolatility.com to discover how Crowe can help you embrace volatility. Once again, that's embracevolatility.com. Hello, and welcome to How I Built This Lab.
I'm Guy Raz. Okay: artificial intelligence is going to change everything, not just about business or entrepreneurship, but everything. Or at least we think so. I do, and a lot of it excites me. Breakthroughs in curing diseases, or maybe even solving climate change. But a lot of it really freaks me out, too. Like how it might become impossible to know what's true and what's not, what's real and what's fake. And because trust is the invisible force that allows our societies to work, AI could completely undermine all of it. Now, we've had a lot of conversations about AI on the show in the past year, and we're going to have many, many more. And this week and next, we have a really important guest.
And I really hope you all spend some time listening to these episodes. Many years before anyone noticed, Tristan Harris warned about the perils of social media. And by and large, he was proved right. He even helped make a film about it; you might have seen it. It was called The Social Dilemma. Tristan is one of the founders of the Center for Humane Technology, and his roots in the tech world run deep. And today, he's sounding the alarm about AI. Now, to be clear, he's not anti-AI. But Tristan is really worried about the pace of its growth, because in a very short period of time, he argues, we're going to lose control over it. Today, we're going to run part one of my conversation with Tristan, and next week we'll have part two.
Anyway, Tristan grew up in the Bay Area, and he was a technology kid through and through. He even wrote fan mail to Steve Wozniak, one of the co-founders of Apple. And actually, one of Tristan's co-founders at the Center for Humane Technology is deeply connected to that early obsession with Apple.
SPEAKER_02: And I'm honored today to say that my co-founder is Aza Raskin. His father was Jef Raskin, who started the Macintosh project at Apple; Steve Jobs later took over the project. And I think there's an ethos that both he and I come from, from the original days of Silicon Valley, that is aspirational, in which computers can be a bicycle for the mind, a bicycle for human creativity, extending human expression. I think sometimes we can get classed as doomers for focusing on the risk side of AI, but we only do that because we care about the vision where technology actually is in service of humanity. And to do that, we just think we have to fix the incentives.
SPEAKER_00: Yeah. You eventually started your own company in the mid-2000s, called Apture. And you would hover over a hyperlink and get other information, right? More or less. Can you describe what that company did?
SPEAKER_02: Yeah. I started Apture in 2007, and it was going back to the original ethos of what the internet and hyperlinking were supposed to be about. You could say: when people click on this button, or this image, or this text, I want the computer to speak this out loud, or issue a dialogue prompt, or play this song, or play the piano. And it was this really creative idea of the interlinked web: what if everything was connected to everything else in the most visually expressive, multimedia, educational, inspiring, beautiful way? So then I learned a lesson, though. Because while I was interested in re-initiating this original, inspiring ethos of multimedia and teaching and education on the internet, Apture was a for-profit company; we raised venture capital, we had investors. And our progress, our success, was measured not in the number of children who were inspired by cool media that they got to learn from and watch and click on. Our progress was measured in the form of engagement. We did deals with the Washington Post, the New York Times, The Economist, and they would ask us one question: how much did you increase the amount of time people spent on my website? AKA, how many ad views, how many impressions, did you increase?
And that's when I really saw, I think, the fallacy that leads and guides both the social media work that we became known for with The Social Dilemma and our work with AI, which is this: there are all these positive stories we want to tell ourselves about what we're building, but the question that determines which future we'll end up in is the incentives. Charlie Munger, Warren Buffett's business partner, who just passed away, had a very famous quote we reference in our work: if you show me the incentive, I'll show you the outcome. When people ask themselves, which way is AI going to go? Is it going to be the promise or is it going to be the peril? How are we going to know? Well, the point is that the way we'll know which future we get is by looking at where the incentives, the profit motive, the status and the reputation, are all conferred.
And that's what we have to change.
SPEAKER_00: We're going to talk a lot about AI in this conversation, but I want to go back to around 2011, when Apture was acquired by Google and you started work at Google. And it was around this time, I think, that you really started to focus on this idea of attention, and to become more troubled by it. It's now just an article of faith: the attention economy is a familiar term, and we know that tech companies make money from capturing our attention. Media does. YouTube does. This show does, right? But you were thinking about this as a potential problem a long time before others were. How did those ideas begin to percolate in your mind?
SPEAKER_02: Well, again, starting with my experience at Apture: I realized that for our company to be successful, we had to increase the total amount of time people spent on the Washington Post and the New York Times, and that there obviously isn't an infinite amount of human attention out there. It's almost like planetary boundaries, right? Can you run infinite economic growth on a finite planet forever? Well, so long as economic growth has environmental externalities or depletes finite resources, you can't do it forever, unless you keep making scientific breakthroughs on every dimension. So, similar to that, you can't run infinite growth, with every company demanding more attention per quarter, per year, forever, on a finite supply of human attention. And 2007, when we started our company, was actually the year the iPhone came out. What mobile phones did is they opened up the attention economy. It used to be that people only spent a few hours a day on computers, and then they would get offline and go outside, go to a movie, do something else. Increasingly, every moment of our lives became part of this attention economy, sold as a commodity. Even the time you sit on the toilet. Boom: now there are 10 extra minutes in the attention economy, because the smartphone opened up that new piece of real estate.
So I was just seeing where this is all going: there's only so much attention, and it's going to get more competitive. People are going to find more creative ways of being stickier, more persuasive, more socially validating, more outrage-driven, more addictive, more distracting, more polarizing, more narcissistic, these kinds of things. And it was pretty daunting to see that in 2013, when I was at Google and made this presentation. Because you start to see: if this keeps going, I can tell what kind of society this is going to create. And that's not a society that I think we can afford to be in.
SPEAKER_00: We need to turn the tide. Yeah. This presentation that you're referencing was called "A Call to Minimize Distraction and Respect Users' Attention." You were an employee at Google, and you decided to put together a 144-slide Google Slides presentation about what was going on. And I just want to read a line in there. The line is: "Never before in history have the decisions of a handful of designers working at three companies, Google, Apple and Facebook, had so much impact on how millions of people around the world spend their attention. We should feel an enormous responsibility to get this right." That was written in 2013. You wrote it internally at Google, and it went internally viral.
How did the leadership of Google respond? I mean, did they sort of pay you lip service and say, yeah, you're saying all the right things? Were people annoyed? What did they say?
SPEAKER_02: I remember when I sent the presentation to just 10 friends at Google, just to say, hey, can I get your feedback on this? And I came to work the next day, and there were something like 40 simultaneous viewers. And I knew that was impossible, because I'd only sent it to 10 people. They were sending it around. And then later that day I looked, and there were 150 simultaneous viewers, and then thousands by the end of the day. And so I realized I had to clean this up and finish it, basically in less than a couple of hours. And it was this real moment in time. I remember hearing that, I think within the first 24 hours, Larry Page, the CEO of Google, had several meetings that day in which people brought up this presentation to him. And I got emails from around the company, mostly from other employees, who just said: I completely agree. This is happening to my children.
This is affecting my relationships. The thing you're naming is a thing that I felt, but I didn't know how to put into words. And, you know, there was even an executive at Google who decided to host me, to basically offer me a chance to work on this topic. And he sent me a quote by Neil Postman: we were all worried about the dystopian future of 1984, the George Orwell vision of dystopia, where technology limits our access to information; there's surveillance, there's control, we ban books. But alongside that vision, there's this other dystopian vision of technology, the Aldous Huxley vision of Brave New World, in which control is established not by banning books, but by creating a world of so much amusement, so much amusing ourselves to death, that no one even wants to read a book.
SPEAKER_00: There's just this tsunami of information. It's impossible to discern what is real from what is not.
SPEAKER_02: That's right. Because it doesn't have a single source. It's not Big Brother; it's the big incentives of the attention economy that have made this problem worse. Then combine that with technology and AI, and we're heading into a bigger version of that. So I'm not trying to paint a dark picture here. It's just important that we get a clear-eyed view of the world we're pulling towards, so that we can say: if that's not the future we want, how do we have to change the incentives to arrive at the future we all want for our children?
SPEAKER_00: We're going to take a quick break. But when we come back, more from Tristan on how social media, and now AI development, are happening too fast for humans to fully process. Stay with us. I'm Guy Raz, and you're listening to How I Built This Lab. Squarespace makes it really easy to get started, with best-in-class website templates for all types of businesses that can be customized to fit your specific needs. Squarespace also provides the tools you need to run your business smoothly, including inventory management, a simple checkout process, and secure payments. And with Squarespace email campaigns, you can build a community of email subscribers and customers. Start with an email template and customize it by applying your own brand ingredients, like colors and logo.
And once you send, built-in analytics measure your email's impact. Go to squarespace.com slash built for a free trial. And when you're ready to launch, use offer code BUILT to save 10% off your first purchase of a website or domain. On How I Built This, we love to highlight businesses that are doing things a better way. That's why, when I found Mint Mobile, I just had to share. Mint Mobile ditched retail stores and those overhead costs, and instead sells their phone plans online and passes those savings on to you. Right now, Mint Mobile has wireless plans starting at just $15 a month. That's unlimited talk and text, plus data, for $15 a month. Before Mint Mobile, I was paying hundreds of dollars a month for my family's cell phone plan, and I still dealt with dropped calls and moody customer service agents.
Not anymore with Mint Mobile. To get your new wireless plan for just $15 a month, and get the plan shipped to your door for free, go to mintmobile.com slash built. That's mintmobile.com slash built. Cut your wireless bill to $15 a month at mintmobile.com slash built. Additional taxes, fees, and restrictions apply. See Mint Mobile for details. Welcome back to How I Built This Lab. I'm Guy Raz, and my guest is Tristan Harris, co-founder of the Center for Humane Technology. Now, back in 2013, social media was really starting to explode, and Tristan was at Google, sounding the alarm about the dangers of the attention economy. And he wasn't the only one concerned about the power of this new technology.
And I remember reading at the time, and I don't know whether this was apocryphal, that Steve Jobs, who had died two years earlier, wouldn't allow iPads in his own home. He did not allow his children to have these products that he himself helped create. And it struck me not so much as hypocritical, but as a sign that people fundamentally understood something about the dangers of what they were unleashing.
SPEAKER_02: Well, I certainly told myself that story when I first got started. And I really want to make sure I'm honoring the fact that I thought it was generous for Google to support me doing that work. I want people to hear that, so you don't hear me as some kind of reactionary, you know, the evils of big tech, all the people at the top are evil. At a human level, when you talk to the human beings who worked on these products, a lot of people were very sympathetic to these concerns. But when it came down to actually changing things: I met with the Android team and said, what if we could change how the operating system works to better protect people's attention? I met with the Chrome design team: what if we could change Chrome to make it better at protecting people's attention? At the end of the day, Android and Chrome and Google benefit the more overall screen time there is. So I knew my success within the company was going to be limited, and I eventually had to take the concerns outside, to the public, to ask: how do we create public pressure that will start to change those incentives?
And I had no idea in 2015 or so, when I left Google, how I was going to do that. But, you know, here we are today, and some amazing things have shifted in the public consciousness around this. And a lot more needs to happen.
SPEAKER_00: The first nonprofit you founded was called Time Well Spent, and that was around 2014. And from that time until recently, understandably, your focus was increasingly on social media, as it began to capture more and more of our time and attention. And you started to see this happen at lightning speed around this time.
SPEAKER_02: Yeah, I mean, Time Well Spent came from the phrase "time spent." Most tech products and advertising are maximizing time spent. And we asked: but is it maximizing time well spent? Meaning, at an existentialist level, as a human being, which choices do you endorse as time well spent? Imagine a world where technology is competing not to maximize how much time it takes from you, but to maximize its net positive contributions to a life you would endorse on your deathbed, meaning enabling you and helping you make the lasting, meaningful, fulfilling choices, not the full-but-empty ones. But certainly, inside of the Time Well Spent world and the attention economy, the biggest forces we were dealing with at that time were the growing revenues of social media companies and the growing role of social media in that problem.
SPEAKER_00: And I know you've made this analogy, or versions of it: think about something like the Gutenberg press. That happens in the 15th century in Europe, and all of a sudden, ordinary people can get access to the Bible and start to learn how to read it, and it doesn't have to be filtered through the priests. And so that launches the Reformation. Of course, it was a huge innovation. It changed the world. But it was very disruptive. At the same time, all of this happened over a long period of time, right?
And you can basically trace that moment all the way to the end of World War II, which was kind of the end of the European era of wars: about 500 years. What happened with social media was more like five years. It was the Gutenberg press on steroids and amphetamines and everything in between. It's almost inconceivable that our brains were going to adapt to this massive, rapid proliferation of information.
SPEAKER_02: Yeah. In our work, we often reference the quote from E.O. Wilson, the father of the field of sociobiology, that the fundamental problem humanity faces is that we have paleolithic brains, medieval institutions, and godlike technology. Which is to say, in both power and speed, our brains and our emotions are not built to see exponential curves. There was nothing on the savanna, no giraffe you might see, no tiger you've got to run from, that required your brain to understand an exponential curve. It took Isaac Newton and calculus, it took teaching ourselves math, to start to understand what exponential curves are. We get more practice with that through finance and savings accounts and things like that, but our brains are not built for it. Then suddenly you have something like social media come along, where you have a true exponential curve in how many users and how much content are flowing through the system.
Then you get AI, and you get a double exponential: an exponential that accelerates its own exponential, meaning AI accelerates AI development. We now have something moving so fast that our brains are simply not comprehending the moment we're in. As you said, social media took five, six years to reach market penetration, and elections and journalism got sucked into the black hole of that superorganism. In the case of AI, the stat we use is that it took Instagram, I think, two years to get to 100 million users, and it took ChatGPT two months. So we're living with a technology that's changing and undermining the assumptions about how our world worked. Humanity can absorb new technologies, but it takes time. And the challenge is that the time pressure to absorb this new change is greater than ever before. This is the fastest we've ever had to face it. Just to offer one quick thought experiment: a friend of mine, an AI policy researcher at one of the major labs, said, imagine we just acknowledged that humanity has a finite capacity to absorb new technologies that undermine our assumptions about how everything works.
And imagine that instead of releasing every new technology as fast as possible and blowing past those finite absorption boundaries, you had to apply for a license to release a new technology into the commons. We would then be consciously choosing, as a society, which new technologies we want to absorb and prioritize, and which ones we want to give ourselves a little more time on. It's not yes or no to technology; it's: how do we absorb technologies at a pace at which we can get it right? I think it's an interesting thought experiment for people to sit with.
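A quick aside for readers who want to see why the "double exponential" Tristan describes defeats intuition: below is a toy Python sketch (our illustration, not Tristan's; every constant is an arbitrary assumption) comparing ordinary exponential growth with self-accelerating growth, a crude stand-in for AI research being sped up by AI.

```python
# Toy comparison: exponential vs. "double exponential" growth.
# All constants are arbitrary; only the shape of the curves matters.

def exponential(t: float, rate: float = 0.5) -> float:
    """Ordinary exponential growth: capability doubles on a fixed schedule."""
    return 2 ** (rate * t)

def double_exponential(t: float, rate: float = 0.5) -> float:
    """Self-accelerating growth: the exponent itself grows exponentially,
    a crude stand-in for AI speeding up AI development."""
    return 2 ** (2 ** (rate * t) - 1)

print(f"{'year':>4} {'exponential':>12} {'double exp.':>12}")
for year in range(0, 9, 2):
    print(f"{year:>4} {exponential(year):>12.0f} {double_exponential(year):>12.0f}")
```

Both curves start at 1, but by year 8 the ordinary exponential has reached 16 while the self-accelerating one has reached 32,768. That widening gap is the shape our paleolithic intuitions fail to register.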
SPEAKER_00: We're going to take a quick break, but when we come back, the story of how Tristan was alerted to the dangers of another nascent technology, one that he says could pose an existential threat to the entire world. Stay with us. I'm Guy Raz, and you're listening to How I Built This Lab. Your business gets to a certain size, and the cracks start to emerge. Things you used to do in a day are taking a week. You have too many manual processes. You don't have one source of truth. If this is you, you should know these three numbers: 37,000, 25, 1. 37,000.
That's the number of businesses that have upgraded to NetSuite by Oracle. 25: NetSuite turns 25 this year. That's 25 years of helping businesses do more with less, close their books in days, not weeks, and drive down costs. One: because your business is one of a kind, you get a customized solution for all of your KPIs, in one efficient system with one source of truth. Manage risk, get reliable forecasts, and improve margins. Everything you need to grow, all in one place. Right now, download NetSuite's popular KPI checklist, designed to give you consistently excellent performance, absolutely free,
at netsuite.com slash built. That's netsuite.com slash built, to get your own KPI checklist. netsuite.com slash built. Experiences are what people love most about travel. Recently, I took my kids to the UK to see some English Premier League football, and it was incredible. Viator is a website and app where you can book travel experiences, everything from simple tours to extreme adventures. Viator has real traveler reviews to help you find the best activity for your trip. There's something for everyone. Plus, when you book a travel experience with Viator, there's always flexibility, with free cancellation and 24/7 service. Download the Viator app now and use code VIATOR10
for 10% off your first booking in the app. One app, over 300,000 travel experiences you'll remember. Do more with Viator. Welcome back to How I Built This Lab. I'm Guy Raz, and my guest is Tristan Harris, co-founder of the Center for Humane Technology. In 2018, Tristan co-founded that nonprofit, and at the time, it was focused on the dangers of social media. But then, in late 2022 and early 2023, Tristan started getting messages from people working on a different technology. You and Aza Raskin, one of your co-founders, were contacted by people working in AI, and they were sounding the alarm. And the analogy that you've made about this encounter really alarmed me.
You compared it to the Manhattan Project. Can you describe or explain that a little bit?
SPEAKER_02: Yeah. Well, a lot of people analogize that the invention of AI is as significant as the invention of the atomic bomb. Now, that might sound alarmist or panic-inducing, and it's actually not, because of how the atomic bomb restructured the world order. It was a new kind of power: whoever had it was clearly the dog at the top of the food chain. And with AI, what people don't understand is this: if you genuinely build something that can do full artificial intelligence across all kinds of cognitive labor (scientific labor, research labor, market-analyst labor, financial labor), if you can out-compete all stock traders on the stock market, if you can out-compete anyone at military strategy games, if you can out-compete anyone at writing text to influence people on the internet, then an AI system that can do all that is a new kind of power,
in which your position in the world will be greatest relative to everybody else. There is a new atomic bomb project in the form of those racing to build toward this fullest expression of AI, which some people call artificial general intelligence. Remember that the stated mission of OpenAI, Anthropic, and DeepMind, those three companies, has from the very beginning been to create artificial general intelligence. So when we got calls from some people in those labs, it felt like getting a call from a scientist working on the Manhattan Project before you knew what the Manhattan Project was.
SPEAKER_00: Okay, so in 2022, you were contacted by AI researchers who were ringing the alarm that there's an AI arms race underway. And presumably they wanted you to help spread the word about what was happening, right?
SPEAKER_02: Yeah, this was late 2022, early 2023. And frankly, as you said, when we started the Center for Humane Technology, we had all the other issues of how social media is undermining democracies; we had our plate full. And they were asking for our help. They basically said: you all raised the alarm about social media with The Social Dilemma, and you have a huge public platform. Listen, this race to build AI has gotten out of hand, and OpenAI, Anthropic, and DeepMind are now in a race to release things as fast as possible. This race is going to go to a dangerous place if we don't have some outside pressure that can help bend the curve, maybe slow it down a little bit, create some international norms, create some safety guardrails. Would you use your public platform to help raise the alarm?
SPEAKER_00: And you did. You and your co-founder, Aza Raskin, would go on to build a presentation called The AI Dilemma. I think you first delivered it in March of 2023, here in the Bay Area, and you've presented it several times since then. I've seen it live. It's available online; anybody can watch it. What was the message you were trying to send with this presentation?
SPEAKER_02: You know, the main point we made in the presentation is that social media was really humanity's first contact with a mass-deployed AI, because it's an AI pointed at your 13-year-old's brain, trying to calculate the perfect next TikTok video to show you, or the perfect next politically outrageous tweet that will keep you scrolling. And we saw how that experiment went. There are a lot of really good things that social media has done. But the overall picture, if you really take into account the full scope of impacts and externalities, is that it also created the backdrop of a more addicted, outraged, polarized, narcissistic, lower-trust society, with a breakdown of truth. That is like undermining the quality of the soil beneath your feet while you have a few nice shiny toys above it. We knew that, no matter what positive stories we told about social media, the incentives, maximizing attention and engagement, producing that race to the bottom of the brainstem, would be the driving force that tells you which future we land in. And if we care about which AI future we get: we're going to hear a lot of positive stories about AI developing antibiotics and solutions to climate change and new drugs for cancer. And of course, I have a beloved with cancer right now; I want as many of those solutions as we can possibly get, as fast as possible.
But we also have to look at the incentives, which is that it's a race to roll out AI capabilities as fast as possible, like handing out new magic wands for things people couldn't do even four months ago. For example, there's a new magic wand, released last year, where with three seconds of your voice, Guy, I can talk to your bank representative or your grandmother, pretend to be you, and get sensitive information out of them. And society wasn't prepared for that magic wand to be rolled out.
SPEAKER_00: I want you to explain something. There was a big breakthrough in 2017. Essentially, Google announced this development called the transformer model, and it helped spark the breakthrough in what we now know as large language models, which in turn helped trigger what's now called generative AI. But help me understand how it works. Because a lot of people are hearing about breakthroughs in scientific development, in cancer research, in climate change technology, things that are going to benefit us, things that will make our lives easier. But at the same time, from what I understand, based on what you've talked about, a lot of the scientists and researchers working with large language models are kind of developing a Prometheus: something they already know is growing faster and becoming smarter faster than they ever anticipated.
SPEAKER_02: Yeah. So, just to say, I was pretty skeptical of a lot of the bigger AI risks. There was a community in San Francisco, the effective altruists, that had been worried about the bigger AI risks for a long time, and I was incredibly skeptical of a lot of this stuff. In fact, I told them: I actually think you all are misguided. You're not seeing the AI that's been released right underneath your feet that has already wrecked the world, and it's called social media. So I just want to say that I'm not walking into this conversation wanting to hype AI capabilities. What changed, though, is that in 2017, a paper was published at Google called "Attention Is All You Need," which invented the new AI paradigm of transformers. I won't go too much into the technical details, but we swapped the engine under the hood of AI in 2017. And everything people used to be skeptical of about AI, up until 2017, I would have agreed with them on.
There's the AI where Siri mispronounces your mom's name, and the AI where Google Maps gets the directions wrong or mispronounces a street address. But that AI, the kind that falls over and makes mistakes, is different from the AI of transformers, which is basically a big neural network, a digital brain, and you throw more data, more training, more compute at it. So for example, with GPT-4, they spent $100 million to get a bunch of NVIDIA chips to churn away for something like six months, and out came this weird digital spaghetti brain: a massive neural network. And the thing about it is that as you scale these digital brains, they end up with more emergent capabilities that no one who built them anticipated. So suddenly, when they scaled GPT-4 and tested it, it could explain jokes. You could give it a picture of an iPhone with an old VGA cable, one of those old monitor cables from the 1990s, plugged into the bottom port, and ask: what's funny about this image?
And it explained: well, it's funny because an iPhone can't have a 1990s VGA cable plugged into the bottom. It's not that it had ever seen that image before. It's not that it was trained on that. And this is not to conjure some mythical blank check where AI is going to have all sorts of magic capabilities. It's just that we know it gains capabilities that people may not notice for a long time. It took two years, for example, for researchers to figure out that GPT-3.5, the predecessor to GPT-4, had research-grade chemistry knowledge, meaning you could ask it how to synthesize dangerous chemicals, and it would tell you how to do that.
SPEAKER_00: And so... By the way, nobody knew that.
SPEAKER_02: That's right. Nobody knew that it had developed those capabilities on its own. And it's confusing, by the way, for those listening who are skeptical, who say: well, that's true, but look at how many times it hallucinates and gets things wrong. That's 100% correct. It's hallucinating and getting a bunch of things wrong. We've never been around this new kind of brain that is simultaneously, in certain ways, better than humans at a bunch of things, but also makes really dumb, surprising, almost embarrassing mistakes. It's a weird combination that we're not used to, but that's because it's an artificially intelligent mind. It's not a kind of mind we have any previous knowledge of.
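For the technically curious: the 2017 paper Tristan references, "Attention Is All You Need," is built around one core operation, scaled dot-product attention. Here is a minimal NumPy sketch of that single step (our illustration with toy random data, not Google's or OpenAI's code; a real transformer stacks many such layers with billions of learned parameters), just to show how simple the engine is that, scaled up, produced the emergent behavior described above.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from 'Attention Is All You Need' (Vaswani et al., 2017).

    Q, K, V: (sequence_length, d) arrays of queries, keys, and values.
    Each output row is a weighted mix of the value rows, where the weights
    reflect how much each token "attends" to every other token.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # token-to-token similarity
    scores -= scores.max(axis=-1, keepdims=True)     # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # -> (4, 8)
```

As the conversation above notes, the surprising capabilities showed up as these networks were scaled with more data and compute, not because anyone programmed them in by hand.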
SPEAKER_00: That's part one of my conversation with Tristan Harris, co-founder of the Center for Humane Technology. You can catch the rest of my conversation with Tristan, where we discuss how AI could undermine the foundations of our society and what we can do to prevent that. That's coming up in part two next week. And thanks for listening to the show this week. Please make sure to click the follow button on your podcast app so you never miss an episode of the show. And as always, it's free. This episode was researched and produced by Alex Chung, with editing by John Isabella. Our music was composed by Ramtin Arablouei. Our audio engineer was Neil Rauch. Our production team at How I Built This also includes Carla Estevez, Chris Messini, Casey Herman, JC Howard, Catherine Seifer, Carrie Thompson, Malia Agudelo, Neva Grant, and Sam Paulson.
I'm Guy Raz, and you've been listening to How I Built This Lab. If you like How I Built This, you can listen early and ad-free right now by joining Wondery Plus in the Wondery app or on Apple Podcasts. Prime members can listen ad-free on Amazon Music. Before you go, tell us about yourself by filling out a short survey at wondery.com slash survey. Hey everyone, it's Guy Raz here, and I have a new show that I think you're going to love. From Wondery and hosted by Laura Beil, the critically acclaimed podcast Dr. Death is back with a new season called Dr. Death: Bad Magic. It's a story of miraculous cures, magic, and murder. When a charismatic doctor announces revolutionary treatments for cancer and HIV, it seems like the world has been given a miracle cure. Medical experts rush to praise Dr. Serhat Gumrukcu as a genius.
But when a team of private researchers looks into Serhat's background, they begin to suspect the brilliant doctor is hiding a shocking secret. And when a man is found dead in the snow, his wrists shackled and bullet casings speckling the snowbank, Serhat is no longer known for world-changing treatments. He's known as a fraud, and a key suspect in a grisly murder. Follow Dr. Death: Bad Magic on the Wondery app or wherever you get your podcasts. You can binge all episodes of Dr. Death: Bad Magic ad-free right now by joining Wondery Plus in the Wondery app or on Apple Podcasts.