Transcript for radio replay: I, Robot


Shankar Vedantam: Quick note before we get started: this episode includes a racial epithet and discussions about pornography. This is Hidden Brain, I'm Shankar Vedantam. Maps are good representations, not just of the world we live in, but of how we think about the world we live in. Over the centuries, our maps have emphasized the places we find important. They show the limits of our knowledge and the scope of our ambitions. 700 years ago, Europeans were completely unaware of the existence of North America. Fast forward to the 1970s, and scientists can tell you in detail what the surface of the moon is like. Today we're charting out maps of a different sort: maps of our minds. There isn't one cartographer designing these modern maps, we all are, and the maps are constantly changing. We start today's show with a personal question. Have you ever Googled something that you would never dream of saying out loud to another human being? When we have a question about something embarrassing or deeply personal, many of us today don't turn to a parent or to a friend, but to our computers.

Voice 1: Because there's just some things you just can't ask a real person in real life, and you need to ask Google.

Voice 2: Because it's completely anonymous and there are no judgements attached.

Voice 3: Google knows everything.

Voice 4: I agree to that.

Shankar Vedantam: Every time we type into a search box, we reveal something about ourselves. As millions of us look for answers to questions or things to buy or places to meet friends, our searches produce a map of our collective hopes, fears and desires. My guest today is Seth Stevens-Davidowitz. He used to be a data scientist at Google, and he's the author of the book, Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. Seth, welcome to Hidden Brain.

Seth Stevens-Davidowitz: Oh, thanks so much for having me Shankar.

Shankar Vedantam: So Seth, we all know that Google handles billions of searches every day, but one of the insights you've had is that the reason Google knows a lot about us is not just because of the volume of search terms, but because people turn to Google as they might turn to a friend or a confidant.

Seth Stevens-Davidowitz: That's exactly right. I think there's something very comforting about that little white box, that people feel very comfortable telling it things that they may not tell anybody else: their sexual interests, their health problems, their insecurities. And using this anonymous, aggregate data, we can learn a lot more about people than we've really ever known.

Shankar Vedantam: And one of the ways we can learn a lot more about people is through these very strange correlations. You find, for example, there's a relationship between the unemployment rate and the kinds of searches people make online.

Seth Stevens-Davidowitz: Yeah. I was looking at what searches correlate most with the unemployment rate. And I was expecting something like "new jobs" or "unemployment benefits." But during the time period I looked at, the single search that was most highly correlated with the unemployment rate was Slutload, which is a pornography site. And you can imagine that if a lot of people are out of work, they have nothing to do during the day, they may be more likely to look at porn sites. Another search that was high on the list was solitaire. So again, when people are out of work, they're bored, they do leisure activities, and potentially this measure of how much leisure there is on the internet may help us know how many people are out of work on a given day.
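
To make the idea concrete, here is a minimal sketch, not Stevens-Davidowitz's actual analysis, of how one might rank search terms by how closely their monthly volume tracks the unemployment rate. The terms, the numbers, and the variable names are all invented for illustration.

```python
# Hypothetical sketch: rank search terms by how well their monthly volume
# tracks the unemployment rate. All values below are made up for illustration.
import numpy as np

unemployment_rate = np.array([7.8, 8.3, 8.7, 9.0, 9.4, 9.5])  # six months of data

# Relative search volume (e.g., a Google Trends-style index) for candidate terms.
search_volume = {
    "unemployment benefits": np.array([60, 64, 70, 72, 75, 78]),
    "solitaire":             np.array([40, 45, 52, 55, 58, 61]),
    "new jobs":              np.array([55, 54, 53, 55, 54, 56]),
}

# Pearson correlation of each term's volume with the unemployment rate.
correlations = {
    term: float(np.corrcoef(volume, unemployment_rate)[0, 1])
    for term, volume in search_volume.items()
}

for term, r in sorted(correlations.items(), key=lambda kv: -kv[1]):
    print(f"{term:25s} r = {r:+.2f}")
```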

Shankar Vedantam: And of course, this helps us reconsider what we think of as data. So when we think about the unemployment rate as you say, our normal approach is to say, "How many people are still in jobs, let's track down all the jobs." This is coming at the question entirely differently.

Seth Stevens-Davidowitz: Yeah. I think the traditional way to collect data was to send a survey out to people and have them answer questions, check boxes. There are lots of problems with this approach: many people don't answer surveys, and many people lie on surveys. So the new era of data is looking through all the clues that we leave, many of them not as part of questions or as part of surveys, but just clues we leave as we go about our lives.

Shankar Vedantam: One of the important differences between mining this kind of data and the responses we get on surveys has to do with how people report their sexual orientation. I understand that the kind of queries that you see on Google might reveal something quite different than if you ask people if they're gay.

Seth Stevens-Davidowitz: That's right. If you ask people in surveys today in the United States, about two and a half or 3% of men say that they're primarily attracted to men, and this number is far higher in certain states where tolerance of homosexuality is greater. So there are a lot more gay men, according to surveys, in California than in Mississippi. But if you look at search data for gay male pornography, it's a tiny bit higher in California, but not that much higher, and overall about 5% of male pornography searches are for gay porn. So almost twice as high as the numbers you get in surveys.

Shankar Vedantam: Your research has important implications for a topic that we've looked at a lot on Hidden Brain, the topic of implicit bias. People aren't always aware of the biases they hold, and so scientists have had to find clever ways to unearth these biases. You think that Google searches can reveal some forms of implicit bias?

Seth Stevens-Davidowitz: That's right. So, one I look at is the questions that parents have about their children. If you ask many parents today, they would say that they treat their sons and daughters equally, that they're equally excited about their intellectual potential, equally concerned about maybe their weight problems. But if you aggregate everybody's Google searches, you see large gender differences: when parents in the United States ask questions starting with "is my son," they're much more likely to use words such as "gifted" or "genius" than they would in a search starting with "is my daughter." When parents in the United States search "is my daughter," they're much more likely to complete it with "is my daughter overweight" or "is my daughter ugly." So parents are much more excited about the intellectual potential of their sons and much more concerned about the physical appearance of their daughters.

Shankar Vedantam: A warning to listeners, in this next section there's going to be a discussion regarding the N word. Seth, you report that in some states after Barack Obama was elected president, there were more Google searches for a certain racist term, than searches for “first Black president.”

Seth Stevens-Davidowitz: I think there is a disturbing element to some of this search data, where in the United States today, many people, and maybe this is a good thing, don't feel comfortable sharing that they have racist thoughts or racist feelings, but on Google, they do make these searches in strikingly high frequency. I need to use sordid language for this. The measure is the percent of Google searches that include the word N***er, and these searches are predominantly searches looking for jokes mocking African Americans. I should clarify, this is not searches for rap lyrics, which tend to use the word Nigga, ending in A. But if you look at the racist search volumes, I think if you had asked me, based on everything I had read about racism in the United States, I would've thought that racism in the United States was predominantly concentrated in the South. That really the big divide of the United States when it comes to racism is South versus North. But the Google data reveal that's not really the case, that racism is actually very, very high in many places in the North. Places like Western Pennsylvania, or Eastern Ohio, or industrial Michigan, or rural Illinois, or upstate New York. The real divide these days when it comes to racism is not North versus South, it's East versus West. There's much higher racism East of the Mississippi than West of the Mississippi.

Shankar Vedantam: So besides just saying, "We know that there are these patterns of racist searches in different parts of the country." You are actually saying, you can do more than that. You can actually predict how different parts of the country might vote in a presidential election based on the Google searches you see in different parts of the country.

Seth Stevens-Davidowitz: Yeah. Well, the first thing I found is that there was a large correlation between racist search volume and parts of the country where Obama did worse than other Democratic candidates had done. So Barack Obama was the first major party general election nominee who was African American. And you see a clear relationship that Obama lost large numbers of votes in parts of the country where there are high racist search volumes. And other researchers, such as Nate Silver at FiveThirtyEight and Nate Cohn at The New York Times, have found that there was a large correlation between racist search volumes and support for Donald Trump and the Republican Party. Those parts of the country that made racist searches in high numbers were much more likely to support Donald Trump. And this relationship was much stronger than really any other variable that they tested.

Shankar Vedantam: I'm wondering how you try and understand that kind of information. It's hard not to listen to what you're saying and draw what seems to be a superficial conclusion, which is that racist people vote for Donald Trump. I'm not sure, is that what you're saying?

Seth Stevens-Davidowitz: That's one of those things where it sounds so offensive to say it that I think everyone tiptoes around the line. I will say that the data does show a strong correlation between racist searches and support for Donald Trump that is hard to explain with any other explanation. Yeah, that kind of is what I'm saying. I'm not saying that everybody who supported Donald Trump is racist, by any stretch of the imagination; there are plenty of people who support Donald Trump without this racist tendency, but a significant fraction of his supporters, I think, were motivated by racial animus.

Shankar Vedantam: Seth Stevens-Davidowitz is a former Google data scientist and the author of Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. You spend a lot of time in the book talking about sex. It turns out to be an area where marketers and companies know that what we say about ourselves is nowhere close to the truth. Most people report being not interested in pornography, but the website Pornhub reports that in 2015 alone, viewers watched two and a half billion hours of porn, which is apparently longer than the entire amount of time that humans have been on earth. What does this say about us? That we either have very little insight into ourselves, or we're actually lying through our teeth?

Seth Stevens-Davidowitz: Yeah. I'd say we're probably lying through our teeth. Yeah, I'd say that. I do talk a lot about sex in this book. One thing I like to say is that big data is so powerful it turned me into a sex expert, because it wasn't a natural area of expertise for me. But I do talk a lot about sexuality, and I think you do learn a lot about people that's very, very different from what they say. And the weirdness at the heart of the human psyche that doesn't really reveal itself in everyday life or at lunch tables, but does reveal itself at 2:00 AM on Pornhub.

Shankar Vedantam: Pornography sites aren't the only ones gathering information about our sexual and romantic preferences. We now have apps like Tinder and sites like OkCupid that gather tons of data about us, as a result, these apps and sites know a lot about our romantic preferences. But for a long time, we've had a human version of big data for romance, grandma. Seth has some personal experience with this big data source. A couple of years ago, he was having Thanksgiving dinner with his family. He was 33, didn't have a date with him and his family was trying to figure out the qualities Seth needed in a romantic partner.

Seth Stevens-Davidowitz: My family was going back and forth. My sister was saying that I need a crazy girl because I'm crazy. My brother was saying that my sister was crazy, that I need a normal girl to balance me out. And my mom was screaming at my brother and sister that I'm not crazy. And my dad was then screaming at my mom that, of course, Seth is crazy. So it's a classic Stevens-Davidowitz family Thanksgiving, where everyone's just yelling at each other for being crazy and we're not really making any progress in learning about what I need in my love life. And then my soft-spoken 88-year-old grandma started to speak and everyone went quiet. And she explained to me that I need a nice girl, not too pretty, very smart, good with people, social so you will do things, sense of humor because you have a good sense of humor. And I describe why her advice was so much better than everybody else's. I think one of the reasons is that she's big data. So grandmas and grandpas throughout history have had access to more data points than anybody else. And they've been able to correlate larger patterns than anybody else has because they've been around longer. And that's why they've been such an important source of wisdom historically.

Shankar Vedantam: The problem, of course, as you also point out, is that it's very hard to disentangle your personal experiences from what actually happens in the world. And in your grandmother's case, she actually had a very specific piece of relationship advice about the kind of person you should want. And some of that might not actually be backed up by the empirical evidence.

Seth Stevens-Davidowitz: Yeah. Well, my grandma has told me on multiple occasions that it's important to have a common set of friends with a partner. So she lived in a small apartment in Queens, New York with my grandfather, and every evening they'd go outside and gossip with their neighbors. And she thought that was a big part of why their relationship worked. But actually, recently computer scientists have analyzed data from Facebook, and they can actually look at when people are in relationships and when they're out of relationships, and try to predict what factors in a relationship make it more likely to last. One of the things they tested was having a common group of friends. Some partners on Facebook share pretty much the same friend group, and some people have totally isolated friend groups. And they found, contrary to my grandmother's advice, that having a separate social circle is actually a positive predictor of a relationship lasting.

Shankar Vedantam: And so of course, the risk of trusting the individual is that the individual's intuition about what worked for his or her life might not work for everyone else.

Seth Stevens-Davidowitz: That's right. I think we tend to get biased by our own situation. Data scientists have a phrase called weighting data. Some data points get extra weight in our models, and our intuition gives too much weight to our own experience. And we tend to assume that what worked for us will work for others as well, and that's frequently not the case.

Shankar Vedantam: Many companies know that we don't really understand ourselves. When we come back, we look at how companies are using big data to predict what we are going to do before we know it ourselves. We'll also ask if sites like Google can use data to forecast whether you're going to get a serious illness. Should they give you that information? Stay with us. I'm Shankar Vedantam and you're listening to Hidden Brain. And this is NPR.

Shankar Vedantam: This is Hidden Brain, I'm Shankar Vedantam. We're speaking today with former Google data scientist Seth Stevens-Davidowitz about the research in his book, Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. Netflix used to ask users what kind of movies they wanted to watch. Seth says, eventually the company realized that asking this kind of question was a complete waste of time.

Seth Stevens-Davidowitz: Yeah. Initially, Netflix would ask people what they want to view in the future, so they could queue up the movies that they said they wanted. And if you ask people, what are you going to want to watch tomorrow or this weekend? People are very aspirational. They want to watch documentaries about World War II or avant-garde French films. But then when Saturday or Sunday comes around, they want to watch the same lowbrow comedies that they've always watched. So Netflix realized they had to just ignore what people told them and use their algorithms to figure out what they'd actually want to watch.

Shankar Vedantam: So one of the things that's intriguing about what you just said is that, I don't think it's actually the case that people were lying to Netflix when they said they wanted to watch the avant-garde film, they actually genuinely probably aspire to do that. It might actually be that big data understands people better than they understand themselves.

Seth Stevens-Davidowitz: Yeah. Probably even more common than lying to other people is lying to ourselves. Particularly when we're trying to predict what we're going to do in two or three days. We tend to assume that we're going to go to the gym more than we actually go to the gym, or eat better than we actually will eat, or watch more intellectual stuff than we actually will watch. So the algorithms can correct for this over-optimism that we all tend to share.

Shankar Vedantam: When you look at a company like Facebook, which has access to these huge amounts of data about us and what we like and whom we like in our relationships, you have to wonder how the company is using this data in all kinds of different ways. I remember Facebook got into some hot water a couple of years ago because they ran an experiment that seemed to be manipulating how people feel, and of course there was a huge outcry about the experiment at the time. And since then, there hasn't been very much reported about what Facebook is doing, but I suspect that it might just be because Facebook is no longer telling us what it's doing, but it's still doing it anyway.

Seth Stevens-Davidowitz: Every major tech company now runs lots and lots of what are called A/B tests. These are little experiments where you put people into two different groups, a treatment group and a control group. And you show one group one version of your site and the other group another version of the site, and you see which version gets the most clicks or the most views. This has really exploded in the tech industry.
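
For illustration, here is a toy sketch of the kind of A/B test described here: users are deterministically bucketed into one of two versions of a page, and the click-through rates of the two groups are compared. The version names, traffic numbers, and click rates are all invented.

```python
# Toy A/B test sketch: bucket users into two page versions and compare
# click-through rates. All numbers below are simulated for illustration.
import random

random.seed(42)

def assign_group(user_id):
    """Deterministically bucket a user into version A or B (stable across visits)."""
    return "B" if random.Random(user_id).random() < 0.5 else "A"

visits = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}
true_click_rate = {"A": 0.10, "B": 0.12}  # pretend version B is slightly better

for user_id in range(10_000):
    group = assign_group(user_id)
    visits[group] += 1
    # Simulate whether this user clicked; in reality this comes from server logs.
    if random.random() < true_click_rate[group]:
        clicks[group] += 1

for group in ("A", "B"):
    rate = clicks[group] / visits[group]
    print(f"version {group}: {visits[group]} visits, click-through rate {rate:.3f}")
```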

Shankar Vedantam: There are many, many instances where companies are now using big data against us. Banks and other financial institutions are using clues from big data to decide who shouldn't get a loan.

Seth Stevens-Davidowitz: I think it's an area of big concern. So I talk about a study in the book where they started a peer-to-peer lending site, and they studied the text that people used in their requests for loans. And you can figure out, just from what people say in their loan applications, how likely they are to pay back. And there are some strange correlations. For example, if you mention the word God, you're 2.2 times less likely to pay back, 2.2 times more likely to default. And this does get eerie: are you really supposed to be penalized if you mention God in a loan application? That would seem to be really wrong, even evil, to penalize somebody for a religious preference. Basically everything's correlated with everything. So just about anything anybody does is going to have some predictive power for other things they do. And the legal system is really not set up for a world in which companies potentially can mine correlations over just about everything anybody does in their life.
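
As a rough illustration of the kind of analysis described, and not the study's actual method or data, one could compare default rates between loan applications that do and do not mention a given word. The applications, outcomes, and the resulting "relative risk" figure below are all invented.

```python
# Toy sketch: does mentioning a given word in a loan application correlate with
# defaulting? Applications and outcomes below are invented for illustration.
applications = [
    {"text": "I promise, God willing, I will repay this loan", "defaulted": True},
    {"text": "Consolidating debt, stable job for 10 years", "defaulted": False},
    {"text": "God bless, I need help covering medical bills", "defaulted": True},
    {"text": "Expanding my small business, steady revenue", "defaulted": False},
    {"text": "I will pay back every cent, I swear to God", "defaulted": False},
    {"text": "Refinancing a car loan at a lower rate", "defaulted": True},
]

def default_rate(apps):
    """Fraction of applications in the list that ended in default."""
    return sum(a["defaulted"] for a in apps) / len(apps) if apps else 0.0

word = "god"
with_word = [a for a in applications if word in a["text"].lower()]
without_word = [a for a in applications if word not in a["text"].lower()]

rate_with = default_rate(with_word)
rate_without = default_rate(without_word)
print(f"default rate when mentioning '{word}':     {rate_with:.2f}")
print(f"default rate when not mentioning '{word}': {rate_without:.2f}")
if rate_without > 0:
    print(f"relative risk of default: {rate_with / rate_without:.1f}x")
```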

Shankar Vedantam: I was thinking about an ethical issue, I'm not sure if necessarily this is a legal issue. But you mentioned in the book that if someone is Googling, "I've been diagnosed with pancreatic cancer, what should I do?" It's reasonable to assume that this person has been diagnosed with pancreatic cancer. But if you collect all of the people who are Googling what to do about their diagnosis with pancreatic cancer, and then work backwards to see what they've been searching for in the weeks and months prior to their diagnosis, you can discover some pretty amazing things.

Seth Stevens-Davidowitz: Yeah. This is a study where researchers used Microsoft Bing data. They looked at people who searched for "just diagnosed with pancreatic cancer" and then similar people who never made such a search, and then they looked at all the health symptom searches they had made in the lead-up to either a diagnosis or no diagnosis. And they found that there were very, very clear patterns of symptoms that were far more likely to suggest a future diagnosis of pancreatic cancer. For example, they found that searching for indigestion and then abdominal pain was evidence of pancreatic cancer, while searching for just indigestion without abdominal pain meant a person was much less likely to have pancreatic cancer. And that's a really, really subtle pattern in symptoms, where a time series of one symptom followed by another symptom is evidence of a potential disease. It really shows, I think, the power of this data, where you can really tease out very subtle patterns in symptoms, and figure out which ones are potentially threatening and which ones are benign.
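
Here is a hypothetical sketch of how such an ordered symptom pattern might be flagged in search logs. It is not the Microsoft researchers' code; the users, dates, and queries are invented.

```python
# Hypothetical sketch: flag users whose search history contains "indigestion"
# followed later by "abdominal pain". All histories below are invented.
from datetime import date

search_histories = {
    "user_1": [(date(2016, 1, 3), "indigestion remedies"),
               (date(2016, 2, 10), "abdominal pain causes"),
               (date(2016, 3, 1), "just diagnosed with pancreatic cancer")],
    "user_2": [(date(2016, 1, 5), "indigestion after meals"),
               (date(2016, 1, 20), "best antacids")],
}

def has_ordered_pattern(history, first_symptom, second_symptom):
    """True if a search mentioning first_symptom precedes one mentioning second_symptom."""
    first_seen = None
    for when, query in sorted(history):  # iterate in date order
        q = query.lower()
        if first_seen is None and first_symptom in q:
            first_seen = when
        elif first_seen is not None and second_symptom in q and when > first_seen:
            return True
    return False

for user, history in search_histories.items():
    flagged = has_ordered_pattern(history, "indigestion", "abdominal pain")
    print(f"{user}: indigestion followed by abdominal pain = {flagged}")
```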

Shankar Vedantam: So here's the ethical question. Once you've established that there is this correlation, you say, "I have a universe of people who clearly have pancreatic cancer, and I worked backwards through their search history, and I detect these patterns that no one had thought to look at before, that say these particular kinds of search terms seem to be correlated with people who go on to have the diagnosis, versus these search terms that do not go on to predict a diagnosis." So does a company like Microsoft now have an obligation to tell people who are Googling for these combinations of search terms, "Look, you might actually need to get checked out. You might actually need to go see a doctor." Because of course, if you can be diagnosed with pancreatic cancer four weeks earlier, you have a much better chance of survival than if you have to wait for a month.

Seth Stevens-Davidowitz: I lean in the direction of yes; some people would not lean in that direction. It could be a little creepy if Google put, right below the "I'm Feeling Lucky" button, "You may have pancreatic cancer." It's not exactly the most friendly thing to see on a website. But personally, if I had some sort of symptom pattern that suggested I may have a disease, and there was a chance of curing it if I was told, I'd want to know that. It's just another example of how the ethical and legal framework that we've set up is not necessarily prepared for big data.

Shankar Vedantam: Seth Stevens-Davidowitz is a former data scientist at Google and the author of the book, Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. Seth, thank you for joining me today on Hidden Brain.

Seth Stevens-Davidowitz: Thanks so much for having me, Shankar.

Shankar Vedantam: Have you ever talked to your computer, cursed it for making a mistake?

Male Voice: PC load letter? What the (beep) does that mean?

Shankar Vedantam: Have you ever argued with the traffic directions you get from Google Maps or Waze?

Siri Voice: Starting route to Grover's Mill Road. Head north on....

Shankar Vedantam: Have you ever looked at a Roomba cleaning the floor on the other side of the room and told it, "Please come over to this side, turn left? Left."

Male voice 2: It just ran itself right over the edge.

Shankar Vedantam: Robots and artificial intelligence are playing an ever larger role in all of our lives. Of course, this is not the role that science fiction once imagined.

Character Kyle Reese from The Terminator: It doesn't feel pity or remorse or fear.

Shankar Vedantam: Robots bent on our destruction remain the stuff of movies like Terminator, and robot sentience is still an idea that's far off in the future. But there's a lot we're learning about smart machines and there's a lot that smart machines are teaching us about how we connect with the world around us and with each other.

Shankar Vedantam: My guest today has spent a lot of time thinking about how we interact with smart machines and how those interactions might change the way we relate to one another. Kate Darling is a research specialist at the MIT Media Lab. She joined us recently in front of a live audience at the Hotel Jerome in Aspen, Colorado, as part of the Aspen Ideas Festival. Also on stage was a robot, a green robot dinosaur about the size of a small dog known as a "Pleo". It's going to be part of this conversation, but before we get to that, here's Kate.

Shankar Vedantam: Kate, welcome to Hidden Brain.

Kate Darling: Thank you for having me.

Shankar Vedantam: You found that there is an interesting point in the relationship between humans and machines and that point comes when we give a machine a name. I understand that you have three of these Pleo dinosaurs at your home. Can you tell me some of the names that you have given to your robots?

Kate Darling: Yes. So the very first one I bought, I named Yochai after Yochai Benkler, who's a Harvard professor who's done some work in intellectual property and other areas that I've always admired. And the second one I adopted after I filmed a Canadian documentary, where the show host had to name the robot. And he gave the robot the same name he had, which was Peter. So the second one has a boring name. And then the third one is named Mr. Spaghetti. I don't know if people outside of Boston are familiar with this, but the Boston public transportation system, they wanted to crowdsource a name for their mascot dog. And the internet decided that the dog should be named Mr. Spaghetti, and of course they refused to do that and named the dog Hunter. So Mr. Spaghetti became a big thing in Boston for a while, people were very outraged about this. And so I named my Pleo, my third one, Mr. Spaghetti.

Shankar Vedantam: I understand that companies actually have found that if you sell a robot with the name of the robot on the box, it changes the way people will interact with that robot, compared with if you just said, "This is a dinosaur."

Kate Darling: Yeah. I don't have any data on this, but yes, I have talked to companies who feel that it helps with adoption and trust of the technology. Even very, very simple robots, like boxes on wheels that deliver medicine in hospitals, if you give them a little name plate that says Betsy, their understanding is that people are a little bit more forgiving of the robot. So instead of "this stupid machine doesn't work," they'll say, "Oh, Betsy made a mistake."

Shankar Vedantam: And I'm wondering if you spent time thinking about why this happens. At some level, if I came up to you at home and I said, "Kate, is Mr. Spaghetti alive?" You would almost certainly tell me, "No. Mr. Spaghetti is not alive." I assume you don't think Mr. Spaghetti is alive, right?

Kate Darling: No.

Shankar Vedantam: Okay. So given that you know that Mr. Spaghetti is not alive, why do you think giving him a name, changes your relationship to him?

Kate Darling: With robots in particular, it's combined with just our general tendency to anthropomorphize these things. And we're also primed by science fiction and pop culture to give robots names and view them as entities with personalities. And it's more than just the name, robots move around in a way that seems autonomous to us. We respond to that type of physical movement, our brains will project intent onto it. So I think robots are in the perfect mixture of something that we will very willingly treat with human qualities or lifelike qualities.

Shankar Vedantam: All right. So we have this wonderful little prop in front of us, it's a Pleo dinosaur. I want you to tell me a little bit about the Pleo dinosaur, how it works and how you've come to own three of them, Kate. What is the dinosaur, what does it do?

Kate Darling: It's basically an expensive toy. I bought the first one, I think in 2007. There we go, it's awake. They have a lot of motors and touch sensors, and they have an infrared camera and microphones. So they're pretty cool pieces of technology for a toy. And that's initially why I bought one because I was fascinated by everything that it can do. If it starts walking around, it can walk to the edge of the table, it can look down, measure the distance to the floor, it knows that there's a drop and it'll get scared and walk backwards. And then they go through different life phases, adolescent, and fully grown and it'll have moods. And...

Shankar Vedantam: So I think what we should do, we bought the robot at Hidden Brain a couple of weeks ago, we haven't had a chance to give it a name yet. And I thought we should actually reserve the honors for this evening while we are talking to Kate, and see if Kate wants to try and name this dinosaur, since she cares about dinosaurs so much. I was looking up Kate's Twitter feed this morning. I understand that you're going to have a baby soon. Congratulations.

Kate Darling: Yes. I don't have a name for that either.

Shankar Vedantam: Okay. Just FYI, she sometimes refers to the baby as “baby bot”. So just for whatever that's worth. And one retweet that you had on your Twitter feed cracked me up. It said, "You don't really know how many people you don't like until you start trying to pick baby names."

Kate Darling: Yeah, that's a quote from my husband.

Shankar Vedantam: So I don't want you to tell me, you apparently haven't yet picked your baby's name. So do you have any choices or top choices? Is there a name, a spare name that you might care to give the dinosaur?

Kate Darling: Well, the problem is we've had a girl's name picked out for years, and now we're having a boy and we don't even have any contenders.

Shankar Vedantam: No contenders. What would've been your favorite girl's name if you had had a girl?

Kate Darling: Well, so when I first started dating my now husband, he at some point said, "If I ever had a daughter, I already know what I would name her." And I was like, "Oh, really?" I'm like, "We're going to fight about this one." And he said, "Yeah, I would name her Samantha and Sam for short, because Sam is gender neutral." And I was like, "Oh, I really love that." So that one was picked out very easily.

Shankar Vedantam: All right. Since you're not having a girl, you're going to have a boy, would you mind if you'd consider naming the dinosaur Samantha, how would you feel about that?

Kate Darling: Oh, that would be awesome. We should name the dinosaur Samantha.

Shankar Vedantam: All right. So henceforth, this dinosaur will be called Samantha, or Sam for short.

Kate Darling: Yay.

Shankar Vedantam: Now, some time ago, Kate conducted a very interesting experiment with the Pleo dinosaurs, and to show how this works, I have a second prop here, which is under the table.

Kate Darling: Oh, oh.

Shankar Vedantam: It's a hammer, a large hammer, which we borrowed from the hotel. Now, as you all know, the dinosaur is obviously not alive, it's just cloth and plastic and a battery and wires. It has a name of course, Samantha, but it isn't alive in any sense of the term. And so Kate, I'm going to actually give you the hammer.

Kate Darling: Oh no.

Shankar Vedantam: Kate, would you consider destroying Samantha?

Kate Darling: No.

Shankar Vedantam: It's just a machine.

Kate Darling: I only make other people do that, I don't do it myself.

Shankar Vedantam: You wouldn't even consider harming the dinosaur?

Kate Darling: Well, so my problem is that I already know the results of our research and that would say something about me as a person. So I'm going to say, no, I'm not willing to do it.

Shankar Vedantam: Kate Darling is a research specialist at the MIT Media Lab. When we come back, I'll ask her about that research she references, in which she asked volunteers to smash a robot dinosaur. I'm Shankar Vedantam, and you're listening to Hidden Brain. This is NPR.

Shankar Vedantam: Welcome back to Hidden Brain, I'm Shankar Vedantam. We're discussing our relationships with technology, specifically robots with Kate Darling, a researcher from MIT. She joined us before a live audience at the Aspen Ideas Festival. A couple of years ago, Kate conducted an experiment that says a lot about how humans tend to respond to certain kinds of robots.

Shankar Vedantam: Tell me about the experiment. So you had volunteers come up and basically introduced them to these lovable dinosaurs, and then you gave them a hammer like this and you told them to do what?

Kate Darling: Okay. So this was the workshop part that we used the dinosaurs for. They're a little too expensive to do an experiment with a hundred participants. So the workshop that we did in a non-scientific setting, we had five of these robot dinosaurs, we gave them to groups of people and had them name them, interact with them, play with them, we had them personify them a little bit, by doing a little fashion show and a fashion contest. And then after about an hour, we asked them to torture and kill them. And we had a variety of instruments, we had a hammer, a hatchet, and I forget what else. But even though we tried to make it dramatic, it turned out to be a little bit more dramatic than we expected it to be, and they really refused to even hit the things.

Kate Darling: And so we had to start playing mind games with them and we said, "Okay, you can save your group's dinosaur if you-"

Shankar Vedantam: Oh my gosh.

Kate Darling: ... "Hit another group's dinosaur with a hammer." And they tried and they couldn't do that either. This one woman was standing over the thing trying, and she just couldn't, she ended up petting it instead. And then finally we said, "Okay, well, we're going to destroy all of the robots unless someone takes a hatchet to one of them." And finally someone did.

Shankar Vedantam: Wait. So you said, "Unless one of you kills one of them, we are going to kill all of them?"

Kate Darling: Yeah. I think this might have been my partner's idea. So I did this with a friend. We did this at a conference called Lift in Geneva, and we had to improvise because people really didn't want to do it. So we threatened them. And finally, someone did.

Shankar Vedantam: Samantha clearly doesn't want you to harm her.

Kate Darling: Yeah, clearly, clearly.

Shankar Vedantam: So what do you think is going on? At a rational level, the dinosaur obviously is not alive. Why do you think we have such reluctance to harm the dinosaur? In fact, I might have the battery removed, so the dinosaur stops making noise.

Kate Darling: Well, it behaves in a really lifelike way. We have over a century of animation expertise in creating compelling characters that are very lifelike, that people will automatically project life onto. Look at Pixar movies, for example, it's incredible. And I know that a lot of social roboticists actually work with animators to create these compelling characters. And so it's very hard to not see this as some sort of living entity, even though you know perfectly well that it's just a machine, because it's moving in this way that we automatically subconsciously associate with states of mind. And so I just think it's really uncomfortable for people, particularly for robots like this, that can display a simulation of pain or discomfort to have to watch that. It's just not comfortable.

Shankar Vedantam: What did you find in terms of who was willing to do it and who wasn't? When you looked at the people who were willing to destroy a dinosaur like the Pleo, you found that there were certain characteristics that were attached to people who are more or less likely to do the deed.

Kate Darling: So the follow-up study that we did, not with the dinosaurs, we did with Hexbugs, which are a very simple toy that moves around like an insect. And there, we were looking at people's hesitation to hit the Hexbug, and whether they would hesitate more if we gave it a name, and whether they would hesitate more if they had natural tendencies for empathy, for empathic concern. And we found that people with low empathic concern for other people, they didn't much care about the Hexbug and would hit it much more quickly. And people with high empathic concern would hesitate more, and some even refused to hit the Hexbugs.

Shankar Vedantam: So in many ways, what you're saying is that, potentially the way we relate to these inanimate objects might actually say something about us at a deeper level than just our relationship to the machine.

Kate Darling: Yes, possibly. We know now, or we have some indication that we can measure people's empathy using robots, which is pretty interesting.

Shankar Vedantam: My colleagues and I were discussing ahead of this interview, whether you would actually destroy the dinosaur, and we were torn because we said on the one hand, you of all people should know that these are just machines and that it's an irrational belief to project lifelike values on them. But on the other hand, I said, "It's really unlikely she's going to do it because she's going to look like a really bad person, if she smashes the dinosaur in front of 200 people."

Kate Darling: I don't know if you've been watching Westworld at all, but the people who don't hesitate to shoot the robots, they seem pretty callous to us, and I think maybe there is something to it. Of course, we can rationalize it, of course if I had to, I could take the hammer and smash the robot and I wouldn't have nightmares about it. But I think that perhaps turning off that basic instinct to hesitate to do that, might be more harmful than... I think overriding it might be more harmful than just going with it.

Shankar Vedantam: I want to talk about the most important line we draw between machines and humans, and it's not intelligence, but consciousness. I want to play a little clip from Star Trek.

Star Trek Clip: "What is he?" "A machine." "Are you sure?" "Yes. You see, he's met two of your three criteria for sentience. So what if he meets the third, consciousness, in even the smallest degree? What is he then? I don't know. Do you? Do you?"

Shankar Vedantam: So this has been a perennial concern in science fiction, which is the idea that at some point machines will become conscious and sentient. And very often it's in the context of, the machines will rise up and harm the humans and destroy us. But as I read your research, I actually found myself thinking, is our desire to believe that machines can become conscious actually just an extension of what we've been talking about for the last 20 minutes? Which is, we project sentience onto machines all the time, and so when we imagine what they're going to be like in the future, the first thing that pops into our head is they're going to become conscious.

Kate Darling: Yeah. I think there's a lot of projection happening there. I also think that before we get to the question of robot rights and consciousness, we have to ask ourselves, how do robots fit into our lives when we perceive them as conscious? Because I think that's when it starts to get morally messy and not when they actually inherently have some sort of consciousness.

Shankar Vedantam: If humans have a tendency to anthropomorphize machines, to see them as human, it isn't surprising that we're also willing to bring all the biases we have toward our fellow human beings into the machine world. Many of the intelligent assistants being built by major companies, like Siri or Alexa, are being given women's names. Many of the genius machines are often given men's names, HAL or Watson. Now you can say Siri and Alexa aren't people, why should we care? Why should we care if people sexually harass their virtual assistants, as has been shown to sometimes happen? MIT's Kate Darling says we should care, because the way we treat robots may have implications for the way we treat other human beings.

Kate Darling: It might, we don't know, but it might. And one example with the virtual assistants you just mentioned is children. So parents have started observing, and this is anecdotal, but they've started observing that their kids adopt behavioral patterns based on how they're interacting with these devices and how they're conversing with them. And there are some cool stories. There was a story in the New York Times a few years ago, where a mother was talking about how her autistic son had developed a relationship with Siri, the voice assistant. And she said, "This was awesome, because Siri is very patient. She will answer questions repeatedly and consistently." And apparently this is really important for autistic kids. But also, because her voice recognition is so bad, he learned to articulate his words really clearly, and it improved his communication with others. Now, that's great, but these things aren't designed with autistic kids in mind, that's more of a coincidence than anything. And so there are also perhaps some unintended effects that are more negative. One guy wrote a blog post a while back where he said, "Amazon's Echo is magical, but it's turning my child into an (beep), because Alexa doesn't require please or thank you or any of the standard politeness that you want your kids to learn when they're conversing and when they're demanding things of you." So it starts there, but I think that as this technology improves and gets better at mimicking real conversations or lifelike behavior, you have to wonder to what extent that gets muddled in our subconscious, and not just in children's subconscious, but maybe even in our own.

Shankar Vedantam: Do you think it's a coincidence that most of the virtual assistants are given female names and female identities?

Kate Darling: I think it's a combination of whatever market research, but also just people not thinking. So I visited IBM Watson in Austin, and there's a room that you can go into and you can talk to Watson and he has this deep, booming male voice, and you can ask questions. And at the time I went there, there was a second AI in the room that turned on the lights and greeted the visitors and that one had a female voice. And I pointed that out and it seemed like they hadn't really considered that. So it's a mixture of people thinking, "Oh, this is going to sell better." And people just not thinking at all, because the teams that are building this technology are predominantly young, white and male, and they have these blind spots where they don't even consider what biases they might perpetuate through the design of these systems.

Shankar Vedantam: So you're sometimes called a robot ethicist, and you've sometimes said, "We might need to establish a limited legal status for robots." What do you mean by that?

Kate Darling: So it's a little bit of a provocation. But my sense is that if we have evidence that behaving violently towards very lifelike objects not only tells us something about you as a person, but can also change people and desensitize them to that behavior in another context, so if you're used to kicking a robot dog, are you more likely to kick a real dog? Then that might actually be an argument, if that's the case, to give robots certain legal protections, the same way that we give animals protections, but for different reasons. We like to tell ourselves that we give animals protection from abuse because they actually experience pain and suffering; I actually don't think that's the only reason we do it. But for robots, the idea would be not that they experience anything, but rather that it's desensitizing to us, and it has a negative effect on our behavior to be abusive towards the robots.

Shankar Vedantam: So here's a thing that's worth pondering for a moment. If you hear, for example, that someone owns a bunch of chickens on their farm. So it's their farm, their chickens, they own the chickens. And they're really mistreating the chickens, torturing them, harming them. You could make a property rights argument and say, "They can do whatever they want with their property." But I think many of us would say, "Even though the chicken belongs to you, there are certain things you can and cannot do with the chicken." And I'm not sure it's just about our concern that, if you mistreat the chicken, that means you will turn into the kind of person who might mistreat other people. There's a certain moral level at which I think the idea of abusing animals is offensive to us. And I'm wondering if the same thing is true with machines as well, which is, it's not just the case that it might be that people who harm machines are also willing to harm humans, but just the act of harming things that look and feel and sound sentient is morally offensive in some way.

Kate Darling: Yeah. So I think that's absolutely how we've approached most animal protection because it's very clear that we care more about certain animals than others and not based on any biological criteria. So I think that we just find it morally offensive, for example, to torture cats. Or, in the United States, we don't like the idea of eating horses, but in Europe they're like, "What's the difference between a horse and a cow, they're both delicious." So that's definitely how we tend to operate and how we tend to pass these laws. And I don't see why that couldn't also apply to machines once they get to a more advanced level, where we really do perceive them as lifelike and it is really offensive to us to see them be abused.

Shankar Vedantam: The devil's advocate side of that argument, of course, is that people would then say, "Pressing a switch and turning off a machine, that's unethical, because you're essentially killing the robot."

Kate Darling: But we don't protect animals from being killed, we just protect them from being treated unnecessarily cruelly. So I actually think animal abuse laws are a pretty good parallel here.

Shankar Vedantam: You mentioned Westworld some moments ago, and I want to play a clip from Westworld. For those of you who haven't seen Westworld, humans interact with robots in a... Robots that are extremely lifelike, so lifelike that it's sometimes difficult to tell whether you're talking to a robot or you're talking to a human. In the scene that I'm about to play you, a man named William interacts with a woman who may or may not be a robot.

Woman: You want to ask, so ask.

Character William from Westworld: Are you real?

Woman: Well, if you can't tell, does it matter?

Shankar Vedantam: So as I watched this scene, and as I read your work, I actually had a thought and I want to run this thought experiment by you, which is that, on one end of the spectrum, we have these machines that are increasingly becoming lifelike, humanlike, they respond in very intelligent ways, they seem as if they're alive. And on the other hand, we're learning all kinds of things about human beings that show us that even the most complex aspects of our minds are governed by a set of rules and laws, and in some ways our minds function a little bit like machines. And I'm wondering, is there really a huge distinction? Is it possible... Is the real question, not so much, can machines become more humanlike? But is it actually possible that humans are actually just highly evolved machines?

Kate Darling: I have no doubt that we are highly evolved machines. I don't think we understand how we work yet and I don't think we're going to get to that understanding anytime soon. But yeah, I do think that we follow a set of rules and that we are essentially programmed. So I don't distinguish between souls and other entities without souls, and so it's much easier for me to say, "Yeah, it's probably all the same." But I can see that other people would find that distinction difficult.

Shankar Vedantam: Do you ever talk about this? Do you ever run this by other people and say... Do you tell your husband, for example, "I like you very much, but I think you're a really intelligent machine that I love dearly?"

Kate Darling: I haven't explicitly said that to him, but...

Shankar Vedantam: When you go home from this trip.

Kate Darling: Yeah. We'll see how that goes.

Shankar Vedantam: Kate Darling is a research specialist at the MIT Media Lab. Our conversation today was taped before a live audience at the Hotel Jerome in Aspen, Colorado, as part of the Aspen Ideas Festival. Kate, thank you for joining me today on Hidden Brain.

Kate Darling: Thank you so much.

Shankar Vedantam: This week's show was produced by Rhaina Cohen, Tara Boyle, Renee Klahr and Parth Shah. Our team includes Jenny Schmidt and Maggie Penman. NPR's vice president for programming is Anya Grundmann. You can find photos and a video of Samantha, our Pleo dinosaur, on our Instagram page. We're also on Facebook and Twitter. If you enjoyed this week's show, please share the episode with friends on social media. I'm Shankar Vedantam. See you next week. Support for Hidden Brain comes from the Rockefeller Foundation. This is NPR.