SPEAKERS
Clint Murphy, Christopher Chabris
Clint Murphy 00:00
Christopher, welcome to The Growth Guide. We’re going to talk today about your book Nobody’s Fool. Before we do that, can you give the listeners who may not know you a brief one to two minutes on your background and history?
Christopher Chabris 00:14
I’m glad to be on the show. I’m a cognitive scientist, which is a scientific researcher who studies how the mind works and how the mind enables human behavior. I’ve been doing that for, oh, how long has it been, maybe about 30 years or so, 30-plus years. In addition to that, I also enjoy playing chess and poker and other games; that’s one of my hobbies. And I enjoy writing things other than academic papers and journal articles. So Nobody’s Fool is actually the second book that Dan Simons and I have written. The first one, The Invisible Gorilla, came out in 2010, so it took us 13 years to do another book. You know it’s got to be good, since we spent 13 years working on it. Our previous book was about how people often fail to notice things that are right in front of them, and yet are also unaware of this limitation on our perception and attention. The book was about a lot of the ways we’re unaware of how our own minds work, and what the consequences of that are for everyday life. So I like to do scientific research in behavioral science, I like to write about it, and I’m pretty excited to be talking about our new book.
Clint Murphy 00:16
So Chris, on The Invisible Gorilla, the original Invisible Gorilla video, if you will, of YouTube fame: when I first saw it, it would have been back in, gosh, 2006, at a management training session, I think. Was that an experiment that you both did that later went into the book, or was the book based on that experiment done by someone else?
Christopher Chabris 01:56
No, we did the Invisible Gorilla experiment. Dan was a new professor at Harvard, he had just joined the Psychology Department, and I was finishing graduate school; that was my last year of grad school. By coincidence, our offices were down the hall from each other, and I was the teaching assistant for a class that Dan was teaching. The point of the class was to teach students how to do psychology experiments, especially in cognitive psychology. So think experiments about attention and memory and perception and all of that stuff. Dan had the clever idea that we should do a group project, in addition to letting the students do their own individual experiments. He suggested that as a group we try to create a new version of a, I would say, moderately famous experiment that had been done way back in the 1970s, about 20 years earlier, by a professor named Ulric Neisser, who was a legend in the field of psychology, and especially cognitive psychology. Dan had known him because Neisser was a professor at Cornell, where Dan got his graduate degree. In Neisser’s version, there was a woman who walks through a video of these people passing basketballs around, and the woman is carrying an umbrella, and a lot of people didn’t notice her. Honestly, I had never heard of this experiment. But what we decided to do was make some improvements on it, or at least try to extend it in some new ways. In Neisser’s original experiment, everybody was transparent. It was like a bunch of ghosts walking around playing basketball, and this ghostly woman who walks through. Neisser created this with some technique involving mirrors way back in the 70s; I don’t know how he did it. But of course, in the late 90s, we were able to do it with video editing. We went a step further and had everybody be solid, everybody in the same place at the same time, because it’s possible that people don’t notice these things, people walking through and so on, just because it’s hard to perceive a ghost walking through a bunch of other ghosts. And we also wanted to push it a little farther in terms of how surprising or how noticeable this unexpected thing walking through the action would be. So we found a gorilla suit in another professor’s lab. Believe it or not, there’s a lot of stuff you can find in psychology labs. It’s not all beakers and test tubes like chemistry; there’s a lot of funny stuff around. It was a developmental psychology lab that studied infants and very small children, and they used the gorilla suit to kind of spook them a little bit. We borrowed the gorilla suit, and we had a person put it on and walk through these people passing basketballs. And somewhat to our surprise, when we ran the experiment, we noticed that when people watched that video and tried to count the number of times someone wearing a white shirt passed the ball, about half the time they didn’t notice this person in a gorilla suit walking through, even when it would turn and thump its chest at the audience and ham it up a little bit before walking off. So that was our experiment. Though it was inspired by the one from 20 years earlier, I think we took it to a new level and made the phenomenon much more dramatic and much more surprising to people, and just because we had a gorilla, it became much more widely known.
Clint Murphy 02:08
And it’s almost like the white dress/gold dress, where the one side that sees the gorilla looks at the other side and says, are you serious? You didn’t see a gorilla banging on its chest, stopping in the middle of the screen, doing a little dance? And the other side says, well, I don’t even believe you saw it. Did you get a lot of that between both sides?
Christopher Chabris 05:24
Yes, though I wouldn’t say it divided into two warring factions quite like with the famous dress. Part of the reason for that is that the dress is a visual illusion that you can keep on seeing that way whenever you look at it. Although some people could see it both ways, depending on what time of day they looked at it, most people had a very stable perception of whether it was black and blue versus white and gold. The gorilla video you can really only show once, because once people know there’s supposed to be a gorilla there, then when you show it the second time, they’re going to notice it. And that’s, in a way, what’s interesting about it: once you know it’s there, it’s very obvious. So people who saw it the first time would say, how did those other people miss it? How can anyone miss that thing? Whereas people who didn’t see it would often say afterwards, when we told them, well, by the way, there was a gorilla, no, there wasn’t; rewind that tape and play it back for me. In fact, when some TV producers have recreated the experiment for their own TV shows, like Dateline NBC, they’ve shot the video with another camera showing the participant being surprised by it and asking to play back the tape. So it’s a very robust scientific experiment; if even TV producers can reproduce it, and it’s used in high school classrooms and so on, you know that it works. And I think one other difference with the dress is that whether you notice or don’t notice the gorilla, it doesn’t say anything about you that’s a stable trait of you. It could be you just weren’t looking in the right place, or you were paying more attention to the basketball and the people in white shirts at the time and didn’t notice it. This kind of failure of noticing can happen to anybody. It’s not that some people always notice and some people never notice; it’s something that can really happen to anyone.
Clint Murphy 06:59
And then one last question on that before we get to today’s book; I don’t want to take too much time on the prior one, although you did bring it up in this book with the new experiment, which I went and watched as part of the exercise while I was preparing. Did you notice any difference if the person watching the video, for example, had ADHD? I’m someone who does, as an example, and it’s sort of like the joke: hey, we’re having a conversation about a bicycle, oh, squirrel. So as soon as the gorilla comes on, that ADHD kicks in, and it’s just, oh, hey, something new, a gorilla on the screen. Did that come up at all?
Christopher Chabris 07:38
Well, we never did any kind of formal experimentation trying to compare people who were and were not diagnosed with ADHD. One reason for that is that the diagnosis is often cited by people as something they’ve diagnosed themselves with. Of course, many times it’s a pediatrician or a psychiatrist or someone like that, but it’s kind of hard, when you’re doing a scientific project where you want to compare people with and without a diagnosis, to accept their word for what they’ve been diagnosed with. They may have self-diagnosed, they may have been diagnosed by someone who didn’t do a good job, they may have misremembered; who knows. So we never did it. And surprisingly, it’s been pretty rarely done; I’ve only ever been able to find a couple of papers that actually tried to do it. One problem with that is that it’s likely that people who have ADHD have a little bit more trouble tracking the players and counting the basketballs and maintaining attention throughout the duration of the video, which is usually about 60 seconds, or as long as 90 seconds. And one thing that does predict noticing an unexpected thing is if you stop paying attention to something else. So if you’re not trying to count basketballs, you’re very likely to see the gorilla, because you’re just scanning the field and looking at what might be interesting, and it’s on there for nine seconds in some cases. You’re going to notice it. Really, the surprising thing is that if you’re paying attention to something else, even though that something else is in the same place as the gorilla, and the gorilla walks through it, you can still fail to notice it, even though your eyes scan right over it. That’s the surprising finding. So even if there was a difference between people with ADHD and people without, I don’t know that you would attribute it to any special ability to notice; it might be more a difficulty in continuous attention to the thing we had asked them to do. But I think more work needs to be done on that. It’s also interesting to think about autism spectrum disorder and some other cases where you might expect different performance on these kinds of tasks.
Clint Murphy 09:31
Yes, that would be fun. So let’s dive into Nobody’s Fool, and where I’ll start with you is a powerful statement you have about the first challenge we face, which is that our default stance is belief. We accept what we are told, we do not immediately disbelieve it, and we rarely, if ever, check it out. In our daily experiences, it is the certainty of our beliefs that we must work to suspend, not disbelief. Can you tell the listener how the fact that we start from a position of belief can be one of the things that gets us into so much trouble?
Christopher Chabris 10:08
So there’s a phrase or a concept that we often learn in English class in middle school or high school called the willing suspension of disbelief, which is the idea that fiction often works by requiring us to accept some kind of premise, like there’s an alien invasion going on, or some technological change has happened, and then imagine what the world would be like in that case. We suspend our disbelief that that has actually happened, and we go with the premise. And we often like to think of ourselves as critical thinkers, skeptics, who are very disbelieving. But in fact, most of the time that we encounter a piece of information, we at first believe it. Or maybe more precisely, if you think of the mind as an information processing machine that receives pieces of information coming at it, those pieces get initially tagged as true. Even if we don’t know whether they’re true or not, by default we tag them as true. And then it takes an extra step, maybe a little bit more effort, a little bit more time, a little bit more consideration, to disbelieve something. So belief is what needs to be suspended; belief is the default. Disbelief, maybe thinking something is false, or just tagging it as unknown, or uncertain, or TBD, that’s what requires extra work. And we’re sort of not aware of this, I think, which is one reason why, for example, you may hear a politician make some claim, and then you may hear that claim made again later, and then again another time, and after a while, and I’ve experienced this myself, I sort of have the sense that it’s true, even though I’ve never actually heard any evidence for it and never actually considered it. In fact, I may have been a little skeptical the first time, but if you don’t actually unbelieve it, you may wind up accepting it and carrying on with that belief in your mind. And that’s a great way for someone trying to spread misinformation or to scam you to start to work their way in through your defenses.
Clint Murphy 12:12
And I believe later on in the book, you talk about that as the recognition heuristic, which leads us to illusory truth. Am I mixing those up? Or is it that repetitive pattern that gets us to just recognize, oh, I keep hearing this, now I believe it to be true, even if it isn’t? Is that where we’re going with that?
Christopher Chabris 12:35
Yeah, so illusory truth is the idea that if you hear something enough times, or encounter it enough times, maybe even only once, then the next time you encounter it, you’ll be more likely to think it’s true, merely because you’ve encountered it before. So there are experiments where the participants read a series of statements, and even if these statements have no obvious truth value, like they’re about fictitious objects, or they’re trivia that nobody knows the correct answer to, if you compare the ones they’ve encountered once with the ones they’ve encountered two times, the ones they’ve encountered two times they’re more likely to tag as true the second time, just because they’ve heard them once before. So this is part of the key to propaganda, right? Repetition creates a belief in truth. The recognition heuristic is a little bit different, but they’re related, because they’re both based on the principle of familiarity: things we know are probably more likely to be important than things we don’t know. So for example, if I asked you who you think is the best team in the Spanish soccer league, even if you know nothing about soccer, and nothing about Spanish soccer in particular, you’d be making a really good guess if you said Madrid, because Madrid might be the only city you’ve heard of in Spain, or if you said Barcelona, because Barcelona might be the only other city you’ve heard of in Spain. So it’s an interesting coincidence: if you ask people to name two cities in Spain, they’ll probably say Madrid and Barcelona, and if you ask people to name the two best teams in Spanish soccer, if they said Madrid and Barcelona they’d be right. It so happens that we recognize things because they are important, in a sense, and therefore assuming that things are important because we recognize them is a reasonably good way to make judgments. Part of what all this has to do with is that we misinterpret, sometimes reasonably but other times unreasonably, information like familiarity and recognition as signs of credibility, truth, trustworthiness and so on. And that’s what a scammer will use to, again, hijack your information processing system. They’ll try to create a sense of familiarity, create repetition and all of that, to make your trust grow.
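To make the recognition heuristic concrete, here is a minimal Python sketch of the decision rule described above: when you recognize only one of two options, guess that the recognized one ranks higher. The set of recognized names and the example matchups are invented for illustration; this is an editor's sketch, not something from the book.

```python
# A minimal sketch of the recognition heuristic: when deciding which of two
# options ranks higher on some criterion (say, which soccer team is better)
# and you recognize only one of them, infer that the recognized one is higher.
# The "recognized" set and the matchups below are invented for illustration.

def recognition_heuristic(option_a, option_b, recognized):
    """Return the option inferred to rank higher, or None if recognition can't decide."""
    a_known = option_a in recognized
    b_known = option_b in recognized
    if a_known and not b_known:
        return option_a
    if b_known and not a_known:
        return option_b
    return None  # both or neither recognized: the heuristic stays silent

# Someone who has only ever heard of Madrid and Barcelona among Spanish names:
recognized_names = {"Real Madrid", "Barcelona"}

print(recognition_heuristic("Real Madrid", "Getafe", recognized_names))  # Real Madrid
print(recognition_heuristic("Barcelona", "Osasuna", recognized_names))   # Barcelona
print(recognition_heuristic("Getafe", "Osasuna", recognized_names))      # None
```

The rule only works because recognition usually tracks importance, which is also why manufactured familiarity, the repetition a scammer or propagandist supplies, can hijack it.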
Clint Murphy 14:53
And how important as well is the source the information is coming from? Exactly, and I’m going to set this up for our next question, about something you talk about that is sort of in the news a lot lately. If you have someone like Joe Rogan, who has a really popular platform, and you’re hearing something repeated on that show, does that make the level of familiarity go up even higher, because it’s coming from someone who you view as a trusted or respected source?
Christopher Chabris 15:26
Sure, it helps quite a bit, helps, I would say, from the point of view of the person trying to deceive us, and therefore hurts us, to present information as though it’s authoritative and as though it’s coming from familiar and trusted sources. I don’t know that much about Joe Rogan, except that he’s a famous podcaster. But I do know that a lot of misinformation spreads through social media and social networks precisely because if we get something from our friends, or our contacts, or our acquaintances, they’re already inside some kind of circle of trust, a little more than some random person out in the world. So if some random person out in the world wants me to believe something, one of the best ways to get me to believe it is to get my friends to start spreading it around. Once I hear it from one or two or three friends or trusted sources, I’m more likely to believe it than if I just heard it from three random people on the internet. That source could also be, let’s say, a TV news anchor. I’m really amazed at how much some of these TV news folks make, considering that most of their job is essentially reading the same news that you can just read anywhere online, or that anyone else could have sat in that chair and read. They’re attractive people, they speak well, and so on. But sometimes it’s a very specific person that people claim to trust delivering the news, even though I don’t think there’s anything especially trustworthy about one of these folks as opposed to many others. But people develop that sense of familiarity and trust, so that the news becomes a more valuable commodity when it comes through that person’s lips and from their face, on their program, and so on.
Clint Murphy 17:05
What jumps out at me on that one: there has been a fun debate happening, I’d say for the last four weeks, on the Twitter platform, where Joe Rogan had a guest on, Robert Kennedy Jr., and he was talking about vaccinations. And Dr., I believe it’s Peter Hotez, said it was misinformation, and they got into a little bit of a flame war together. And Joe said, well, why don’t you come on my show and debate RFK Jr. And I realized when I was reading your book, and not that I’m picking a side on either side of that equation, but you talked about this idea of Brandolini’s law, also referred to as the bullshit asymmetry principle. I started to see people bringing that up on Twitter, because they were saying, well, no, you can never win a debate with a conspiracy theorist, because they can just throw out whatever they want; it doesn’t have to be true. And if you’re on the other side of that, and you’re trying to refute it with truth, Brandolini’s law, I believe, argues that the amount of refuting you have to do is twice as much as the bullshit on the other side. So it’s almost impossible to win that debate. So as a listener, should we never get into a debate with a conspiracy theorist? Is that ringing a bell for you?
Christopher Chabris 18:26
It’s much worse: it’s 10 times. The bullshit asymmetry principle is that the amount of effort it takes to refute a certain amount of bullshit is 10 times as much as the amount it took to produce it in the first place. And I think Brandolini’s law and the bullshit asymmetry principle are the same thing. Of course, it’s not a law in the sense that there’s scientific evidence that proves it, but it has some of the character of a law; it has a mathematical relationship, right? Ten times as much. And I think it’s important to understand in this context what we mean by bullshit. Bullshit has a somewhat technical meaning in cognitive science and philosophy. In fact, it comes from a philosopher named Harry Frankfurt, who coincidentally died a few days ago, as we’re recording this. I think he was in his 90s; he had a great long career, including writing the definitive book on bullshit. The point about bullshit, according to Frankfurt, is that bullshit is something produced by someone whose goal is not to tell the truth or to spread lies, but merely to produce stuff without caring whether it’s true or false. Right, the student who bullshits his way through something is just trying to say enough stuff to get a reasonable grade, or to get out of having to take a test at a certain time, or to explain why he couldn’t show up for class. The bullshitter is basically just trying to achieve their own ends without regard to the truth or falsity of what they’re saying. They’re not necessarily like a politician who is deliberately trying to spread falsehoods because they advance that person’s campaign. It’s a subtle difference. So just to veer off in a slightly different direction for a second: artificial intelligence language models are, in a way, bullshitters, because the way they work, at a very simplified level, is they produce the word that is most likely to appear next, after all of the words that have appeared before, without regard to whether that sequence of words is producing a true statement or a false statement. The language model doesn’t care whether it’s producing truth or falsity; it just wants to produce what’s most likely to occur next. That’s a classic attribute of a bullshitter, right? They just want to keep on talking and get out of the situation, which is kind of what a language model wants to do. If you ask it to explain something, it’s going to just generate some stuff and go on to the next question. The remarkable thing is that often there is truth in there, but truth is not a factor. Truth is an accidental byproduct that comes from the language the model has digested in being trained. You could train it on a lot of falsehoods, and you’d have something that was also bullshitting, but producing more falsehoods than current models do. So getting back to the bullshit asymmetry issue: I think conspiracy theories are not the same thing as bullshit either. Conspiracy theories usually have a lot of internal logic to them, but I think they include a couple of premises, or a couple of what we would call commitments, basic assumptions, such that if you suspend those assumptions, the rest of it doesn’t make sense.
So I’m struggling right now to come up with a great example. But when it comes to vaccines, often the argument that a critic of vaccines will make is very specific. They will cite various specific studies and references and facts, and they do it in a rapid-fire fashion, and they lead you to believe that they’re very familiar with all of this stuff. So it would take a long time to refute, and a debate, I think, would not be very productive, because ironically, although we think of a debate as a very high-minded intellectual thing, it’s not about who is right or wrong, it’s about who can be most convincing at that point in time. And if it takes 10 times as much time and effort to refute a claim as it does to make the claim, well, I’ve yet to see a debate where one side gets 10 times as many minutes as the other side. So in a way, the person who needs to debunk and refute is at an inherent disadvantage in a format that gives equal time to two sides. I’m not saying nobody should ever debate RFK Jr. or anybody else on this, but saying that we’re going to have a debate to see who’s right doesn’t reflect a really good understanding of how we arrive at scientific facts. We don’t arrive at scientific facts by debates. We arrive at trial verdicts, and maybe political choices, by debates, but we don’t arrive at truth by debates.
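As an aside on the language-model point above, here is a toy Python sketch of the "most likely next word" idea. The word-count table and the generate function are invented purely for illustration; this is not how any production model is implemented, just the gist of next-token prediction without any notion of truth.

```python
import random

# A toy "model" that only knows, for each word, which words tend to follow it
# and how often. It keeps emitting whatever is statistically likely next, with
# no notion of whether the resulting sentence is true. The counts below are
# invented for illustration, not taken from any real corpus.

next_word_counts = {
    "the":    {"market": 5, "vaccine": 3, "gorilla": 2},
    "market": {"went": 6, "is": 4},
    "went":   {"up": 7, "down": 3},
    "is":     {"safe": 5, "rigged": 5},
}

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        options = next_word_counts.get(words[-1])
        if not options:
            break  # the toy table has nothing to say after this word
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the market went up": fluent, but truth never entered into it
```

The sketch makes the point in the conversation concrete: the output can be grammatical and confident-sounding while truth is, at best, an accidental byproduct of whatever the table was built from.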
Clint Murphy 22:49
One of the things you tucked in there that was fun was this idea of AI and the challenges we’re seeing with AI being a bit of a bullshitter, if you will. We’ve already seen one lawsuit where the lawyer had drawn up his argument using AI, and he asked the AI whether it was true, and it said yes. So he…
Christopher Chabris
Right, it’s not going to tell you, oh yeah, I’m conning you. That’s the last person you want to ask. I hadn’t seen that anecdote, but sorry for interrupting.
Clint Murphy 23:19
No, no worries. Do you see more of this coming down the pipe, where people are getting a little too reliant on the tool, not recognizing the number one rule of having an intern, a personal assistant, a research assistant? That’s how I see AI. Let me use an example, and I’ll take a step back. We use a term at work a lot, and I think it’s because a lot of us are ex-auditors from Big Four firms, so we’ve always said, trust, but verify. And you talk about Ronald Reagan having used that quote, and I did not know that that’s where it came into my lexicon, or into the general populace, from. So why is that statement so important for us not being a fool, and how does it tie in to how we should be thinking about our use of AI?
Christopher Chabris 24:12
So I don’t know where Reagan got it. He always claimed that it was a Russian proverb, and he would give the Russian version, because this always came up in arms control negotiations with the Soviet Union; that’s the context in which he mentioned it. I don’t know if it was really a Russian proverb. Reagan said a lot of things that weren’t true, or that he thought were true but actually weren’t. So I don’t know the whole history of that phrase, but I think it is pretty apt for a couple of reasons. One, we sort of have to trust. It would be maladaptive in the extreme, and a miserable life, if you never trusted anyone, if you were excessively skeptical. You almost couldn’t get out of bed. Certainly you’d have trouble getting out of the supermarket without thinking you were cheated every single day, because you couldn’t check all the prices, et cetera. The verification part, I think, is where you need to do your calibration to the stakes that are involved. So auditors and accountants, that’s their main job, in a way. I mean, you tell me, you’re the expert, but I think the idea is that we’re supposed to trust that we should invest in some company, or acquire a company, or something like that, because professional auditors have verified the accounts in some way, and that the sales are real and the revenue and the income are real. Of course, there are many spectacular cases of frauds where the auditors were deceived, or were maybe co-opted into the process in a way that they shouldn’t have been. But verification is really what you have to ration out to when the stakes really matter. And I’ve been surprised, in all the research we’ve done over the last years for this book, seeing how many times people made really important decisions without doing almost any verification, except the most cursory kind, the kind that doesn’t really bring you that much closer to the truth. For example, someone’s cousin referred them to Bernie Madoff, and they’re like, oh yeah, my cousin, he’s a great guy, I trust him, and they don’t look into anything beyond that. In the case of AI, I’ve heard a lot of people say things like what you said, which is, well, they’re like research assistants, they can give you some useful stuff, but you’ve got to check their work. I think that’s true. I’ve seen chatbots give very interesting and useful responses to things, but I think the universe of things they can usefully respond to is much more limited than we realize. In part, that’s because they are so good at language. It turns out that this deep learning, reinforcement learning, language model paradigm, with large enough computing power, is actually able to basically perfectly simulate human language in terms of grammar and writing style and fluency and word usage, so that it seems very much like something a human wrote. And it’s written more or less without error; an undergraduate student would be making more grammar errors and usage errors. So it sounds very authoritative, and that is a little bit of a smokescreen, well, that was going to be a terrible mixed metaphor, but it basically lowers our guard and makes us less likely to check.
But even if we do the checking, it’s not clear to me that starting with information from a source that is not concerned at all about truth is the best way to arrive at truth yourself. I’ve heard people say, well, have it do the first draft, and then you fix it later. And I don’t know if it’s an ethical objection I have to that, or a psychological one, or it just doesn’t feel right, but I don’t believe that starting with bullshit and then copy editing is the way to get to truth. It might be useful in many applications, and from what I understand, generating computer code is a really good application for it, because computer code is so structured and so reusable, even more so than human language, that digesting a lot of what’s come before and spitting out various sequences of it actually gets you closer to something usable than the same principle does for all of human language, as opposed to a more limited field. So one can imagine language models being trained in very specific domains and maybe being a little more useful there. But even so, if you have to check every single statement, that’s a lot of work. We can argue the economics of it, but it might be better in some ways to start with someone who knows the topic and have them write something, and then upgrade that, or try to do some automated fact checking on what the humans produced, or something like that.
Clint Murphy 28:37
It definitely depends, you’re right. It definitely depends on where you’re trying to get with your end output and how you’re prompting it on the input. I think the average user uses pretty wide, vague prompts going in, and so what you’re going to get out is pretty wide, pretty vague, and likely has a higher chance of being wrong than if you really funnel it down to, I want exactly this. But that’s the same approach as with that research assistant or intern, where you’re not just going to give them wide direction; you’re going to say, this is exactly what I want, take them through your checklist, and then send them away before they come back. So there’s definitely that. But let’s go a little bit backwards in the book, and let’s tackle survivorship bias. I’m going to give you a bit of a scam, a slight variant of what I saw in the book, because it was a fun one when I heard of it, and I think it ties to how a scammer would use survivorship bias in their favor to run a scam. You tell me if that’s accurate, and then what should our listeners be looking for so they don’t get caught by these? So for example, this one, when I told it to someone, they said, oh my God, that’s genius. It’s this idea that I get 10,000 email addresses, and I send 10,000 people an email telling them what direction the stock market’s going to go in the next week. I say, hey, I’m not selling anything, I’m just letting you know the stock market’s going this way or that way. 5,000 of the people get “it’s going up,” and 5,000 get “it’s going down.” The next week, whichever 5,000 I was right on, I send them an email and say, hey, just want to let you know, here’s what’s happening next week, same as last week, it’s going up, or it’s going down. Now I have 2,500 people who I’m right on, and I just keep repeating this till I get down to the final 50. And by then, you know, 10, 20 weeks in a row, I’ve been exactly right directionally on where the market is going to be. And then you sell your product, and that person on the other end says, wow, this guy’s a genius, he’s emailed me every week for 20 straight weeks and he’s been right every time, I’m going to buy his stock picking course. Is that an example of me using survivorship bias to run the scam on those people?
Christopher Chabris 28:46
Yes, and it’s an even better scam than you think, because when you’re done with it, you’re also going to have 50 testimonials that this guy was right 12 weeks in a row, or something like that, and you can use those to get so many more customers. The 50 are not representative of all the people you sent predictions to, right? In fact, if we do the math, you’ve predicted wrong for far more people than you’ve predicted right for. So that’s exactly an example of survivorship bias. And it operates at even higher levels. If you just look at the performance of mutual funds that are in existence right now, for example, you might think, well, mutual funds are a pretty good investment. I’m not saying they’re a bad investment, but what you’re not seeing is the performance of all the mutual funds that are no longer in business, because they were performing so badly that they don’t manage money anymore. So whenever you look at what’s around right now and use that to predict what’s going to happen in the future, you’re not only seeing a narrow slice of what’s ever existed, you’re seeing only the ones that have been good enough to still exist. Even if, let’s say, we take out the ones that were just started this year and keep the ones that have been in business for five or more years, well, that cuts out everybody who went out of business within the last however many years. And that’s essentially what you’re doing: you’re basically pretending you’re a mutual fund, or a stock market advisor, a stock picker, whatever, and you’re calling the market, but you are only communicating with people who have seen your track record get better and better and better. I believe there was even a version of this done on a British TV show, in real life; the guy actually sent out all these things and then brought in the people who had received all the correct predictions in a row. I don’t know exactly what he did with them, but it would have been interesting to see if any of them came up with the right explanation of what actually happened. Because one of the ways we get scammed quite often is that we don’t think about the process that led to what’s happening right now. Dan Simons, my co-author, gave an interesting example, or maybe it was someone else, but I heard an interesting example very recently. Sometimes a stage performer, let’s say a faith healer, or some kind of preacher who claims to be able to do things like help people in wheelchairs stand up and walk, you’ll see in the performance that people in wheelchairs stand up and walk in front of him. What you don’t see is that before the show, they would approach people who were elderly, maybe walking with a cane or a walker, and say, here’s a wheelchair for you, we’ll sit you right in front. And they’re like, oh, that’s very nice of you, thank you very much. Then once they’re sitting there, the preacher can say, get up and walk, and they get up and walk, because they were always able to walk. But we don’t see the part that happened before the show; we just see these people in wheelchairs get up and walk, and we think, wow. So often the way to think about scams is to ask: if this were a scam, what would this person not be telling me?
The stock picker might not be telling you about all their failed picks; they might only be telling you about their successful picks. And people who put testimonials out in front of you are only going to have testimonials from the people with the very best experiences, like your 12-in-a-row email recipients. So yeah, that’s exactly right. It illustrates a number of the important principles about not being fooled, I think.
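Here is a quick sketch of the arithmetic behind the email scam described above, assuming the round numbers from the conversation (10,000 addresses, half told "up" and half told "down" each week). The code is illustrative only.

```python
# The arithmetic of the stock-prediction email scam: each week, half the
# remaining list is told "up" and half "down", so whichever way the market
# actually moves, half the list has just seen a correct call. Keep emailing
# only that half and repeat. No skill or market data is needed at all.

recipients = 10_000
week = 0
while recipients > 50:
    week += 1
    recipients //= 2  # only the half that got the "right" email stays in the scam
    print(f"Week {week}: {recipients} people have seen nothing but correct predictions")

# After about 8 weeks, a few dozen people remain, each of whom has watched the
# sender be "right" every single week, and none of them ever saw the roughly
# 9,960 wrong predictions that went to everyone else.
```

The survivors, and their testimonials, are exactly the unrepresentative sample survivorship bias warns about.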
Clint Murphy 34:17
And one of the questions you say we ought to think about, which you highlighted a little there, is this idea of what’s missing, or what am I missing. What does that look like, and how does the possibility grid tie into that?
Christopher Chabris 34:30
So we coined this term, the possibility grid, to refer to something that’s been around for a while in statistics and psychology research. It’s a very simple idea. Often when we’re thinking about making a decision, we have been given some reasons to decide one way; let’s stick with the stock picking example for now. So we see that some guy has made two really great stock picks in the last 10 years, and he’s bragging about them, and he’s using them as his sales pitch to get us to sign up for his newsletter, or let him manage our money, or something like that. And indeed, he’s right; we sort of wish we had bought those stocks back when he picked them, because they’ve gone up 10,000% since then, or something like that. Often companies like Tesla are mentioned, or Amazon, which within our recent memory started at basically zero and are now among the biggest companies in the world. Those are examples of what I would call the top left box in this two-by-two grid of possibilities. Imagine a two-by-two grid where the columns are stocks the guy picked, on the left, and stocks the guy didn’t pick, on the right, and the rows are stocks we wish we had bought, the top row, and stocks we are happy we didn’t buy, the bottom row. He’s focusing on the top left box, the stocks he picked that we wish we had bought, and making us very excited about that. But he’s not telling us about all the stocks he picked that we don’t wish we had bought. Did he pick Pets.com at the same time as he was picking Amazon? Did he pick Webvan? Did he pick drkoop.com? These were all publicly traded companies that nobody even thinks about anymore, except for Pets.com, because they had the sock puppet ads with those great slogans. But if a company is only memorable because it wasted a lot of money on great advertisements, not because it’s worth billions of dollars now, we don’t mind that we didn’t buy it. And then there’s the top right corner, the stocks he didn’t pick which did go up, and which we do wish we had bought. Didn’t he pick Google? Okay, he picked Amazon, but he didn’t pick Google, and he didn’t pick MasterCard, which had a tremendous performance after it went public. If he had picked them, he would certainly be telling us about them. So his picks that he’s telling us about, the ones that went up, are a very unrepresentative sample, concentrated in the top left of the grid. You don’t need to know how many things are in the other parts of the grid; all you need to know is that there could be a lot of stuff there. And then you realize that what you’re being shown is not a set of information that’s good enough to make a decision on. You either need to get more information or just walk away and say, this is a pitch based purely on a couple of curated positive examples that don’t really have any meaning outside of that wider context of possibilities.
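Here is a small sketch of the possibility grid as described above, using the companies mentioned in the conversation as illustrative cell entries. The placements are hypothetical, standing in for the kind of pitch being described, not a real track record.

```python
# The two-by-two possibility grid from the discussion above. A stock-picking
# pitch only ever shows the top-left cell; the other three cells are exactly
# the "what's missing?" question. Cell contents are illustrative placeholders.

possibility_grid = {
    ("picked",     "wish we'd bought"): ["Amazon", "Tesla"],            # where the sales pitch lives
    ("picked",     "glad we didn't"):   ["Pets.com", "Webvan", "drkoop.com"],
    ("not picked", "wish we'd bought"): ["Google", "MasterCard"],
    ("not picked", "glad we didn't"):   ["thousands of forgotten stocks"],
}

for (column, row), examples in possibility_grid.items():
    print(f"{column:>10} | {row:<17}: {', '.join(examples)}")

# Seeing only the first cell tells you nothing about the picker's skill; you
# need at least a rough sense of how full the other three cells are.
```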
Clint Murphy 37:12
And on that “get more information or move on,” one of the problems is that sometimes we form that opinion too quickly. It reminds me that Charlie Munger is famous for saying, among many other things, I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do. For him, it’s always come down to doing the work required to have an opinion. You have the example in the book of the young analyst who was so excited about his analysis, if you will, of Farmer Mac, until he went and sat down with Bill Ackman, and Bill pulled out all of his research and Post-it notes and highlights and said, well, hey, you’re saying this about Farmer Mac, but here’s all my analysis, and it shows the exact opposite. And the young guy realized, holy crow, I haven’t done the work to support my opinion; now I realize what you need to do. So what do we mean when we say do the work to support your opinion, and what does that look like for the listener?
Christopher Chabris 38:18
Well, when I heard the phrase “the work required to have an opinion,” I thought it’s almost all you need to know, because most people never think of the idea that there is any work at all required to have an opinion. As the saying goes, opinions are like, you know, blah, blah, blah, everybody has them. But think about the idea that there is work required to even have an opinion, or at least an opinion worth doing anything with. If you’re going to keep the opinion all to yourself and never act on it, okay, who cares. But if you’re going to start buying and selling stocks based on it, telling your friends to do stuff based on it, voting for candidates based on it, if you’re going to start going out into the world and affecting the rest of the world based on this opinion, you want to do some work to verify that you know what you’re talking about. I’ve heard that Munger quote, which I think is great, but I think maybe it goes a little too far. It’s really hard to never have an opinion until you understand the other side’s argument even better than they do, and it’s a little unverifiable: do I really understand it better than they do? But certainly to understand it better than you think you need to is, I think, definitely worthwhile. So Guy Spier, the analyst you mentioned in the story, and we got this from his autobiography, so it’s not something he’s shy about talking about, had spent a few days researching Farmer Mac, and he thought it was a great idea. He knew of Bill Ackman before that. This was maybe 15, 18 years ago, so Ackman was much less of a legend than he is now; this was maybe 2005, 2006, right before the financial crisis, when there was a lot of question about these mortgage companies, Fannie Mae, Farmer Mac, and so on. And when he saw what Ackman had done to research the same idea, he was really, really stunned. I don’t think you always need to go to that extreme, but you do need to be aware that opinions come a lot more easily than justifications. And again, it’s about calibrating your verification. If you’re managing other people’s money, you might need a lot more conviction before you’re willing to take a position, especially a large one, or a short one, or anything that might keep you up at night. There’s no formula for how much work is required, but I think the guideline is, it’s probably more than you’ve already done, in most cases.
Clint Murphy 40:47
Yeah, I loved when I first heard, for example, Jordan Peterson talking about having a debate with someone, and saying that an idea he’s a big fan of is that before he gives his position, or before the other person gives their position, you ought to at least be able to restate what the other person’s position is, in a way that the other person is happy you’ve understood it. So you maybe don’t have to agree with it, you don’t have to fully be able to argue it, but if you and I were debating and you gave your point, I would have to be able to say, Chris, before I share my point of view, this is what I heard you say, am I right in that? And you would say, well, let me clarify, you heard this, but I just want to clarify this one little thing. Is that something you’ve heard before, and does that resonate?
Christopher Chabris 41:38
Oh, sure, it resonates a lot. And it’s a hard standard to live up to, but I think it’s been stated; I think Dan Dennett stated it in one of his books. And I think it’s a reasonable standard: you don’t have to understand it better than the other guy, but at least you have to be able to restate it to the other guy’s satisfaction. That seems like a more achievable standard. But Charlie Munger, of course, can hold himself to a higher standard than the rest of us; he’s earned it. I’ve never heard that principle attributed to Jordan Peterson, and I’m not sure that he always lives by it as much as he says it, just from reading his tweets and so on; it doesn’t necessarily always seem that way. By the way, by interesting coincidence in this whole discussion, Jordan Peterson was a professor at Harvard when I was in grad school there, so I actually had one two-hour class with him; we had a two-hour seminar, basically, with all the faculty members when we started grad school. I never dreamed that he would grow to be the kind of influencer, public intellectual, whatever he is now. And also, Bill Ackman was in the same class as me in college, although I didn’t know him at all. We’re from the same era; maybe he’s one year behind me, but we were in college together. And again, I don’t know him either, but it’s funny that these names are all connected.
Clint Murphy 42:47
We’ll see how many more we can get out before we finish the episode.
Christopher Chabris 42:53
Yeah. I think the principle of being able to restate your opponent’s position is in some ways a way of avoiding self-deception, when you think you’re having an argument but in fact you’re arguing past each other. If you think you’re debating, but nobody can really say what the other person actually believes, you’re just debating some straw version or uncharitable version of them, and it’s probably not going to lead you very far. But if you adopt this principle, you might get farther in really narrowing down where the points of disagreement are, what we actually disagree on. And if those are factual matters, maybe there’s a way to resolve them, if you’re dealing with someone of good faith. I think most people do want to be factually correct most of the time. They don’t necessarily know how to do it, or they don’t necessarily invest a lot of time in doing it, or they’re looking at biased sources for their facts, but they don’t want to just be saying false things.
Clint Murphy 43:48
And let’s say we want to dig into that; we want to figure out if they are just saying false things, and we want to do the work to support our opinion. Now we start to ask questions and we start to dig in. How can we avoid the trap of falling for non-answers, which that person on the other side who is purposely trying to bullshit us is going to throw at us? So what is a non-answer, for our listeners who don’t know, and how do we avoid them?
Christopher Chabris 44:17
Well, a non-answer, the way we describe it, is something that superficially seems like an answer; for example, it comes in response to a question, and it may actually be a familiar-sounding answer that you’ve heard before in similar situations. But if you think a little more deeply about it, it doesn’t actually tell you anything, and often it is really a way to get you to stop asking questions, as opposed to a way to answer the questions you have already asked. One of my favorites is when you ask someone in a business environment, well, why are you doing it that way? And they’ll say, we follow best practices. Well, to me, that just raises a whole set of other questions. Who said these are the best practices? What are the not-best practices? How do we know these are the best practices? What evidence do you have for that? And so on. I think you often find out that there’s no source at all that says these are the best practices; it’s just a phrase people use to describe what their group does. And who doesn’t think they’re following best practices? Would anyone say, we follow kind of average practices, or we follow practices we dreamed up one day and never really thought about too much? Nobody’s ever going to say that. So when they say best practices, they don’t mean anything. Relating back to investing, another common phrase is, we did our due diligence. There’s no regulatory standard that I’m aware of for what exactly due diligence is, except maybe in certain accounting professions, and even there I’m not sure it’s always well defined. Certainly when people use it in everyday conversation, or even in business dealings, it is often just a way of saying, yes, we believe this is true, or we have some reason why we think this is true. They should be able to explain what exactly they did. For example, if you asked Guy Spier what his due diligence on Farmer Mac was, and you asked Bill Ackman what his due diligence was, you would find out that they were wildly different in what that meant, but both would have said, yes, I did my due diligence. So you have to be alert, when you’re trying to uncover a scam, to the quality of the answers you’re getting; they may seem superficially like answers, but they’re actually non-answers. And in fact, this practice of simply asking questions can be surprisingly effective. To pick another example, the famous recent scam, or something like a scam: the Theranos company, which claimed to have developed a miniaturized blood testing device, was trying to attract investors, and they did attract many investors. But an interesting thing about the investors they got was that they were relatively unsophisticated in the world of biotech and healthcare. They were often family-managed funds, or wealthy individuals who were probably pretty confident in their ability to invest wisely, but weren’t really experts in this field. When Theranos tried to get experts who were more specialized in this industry, they got a lot of questions, and they were not able to provide really satisfactory answers.
So if you read some of the accounts of the Theranos case, which make excellent reading, by the way, I especially recommend John Carreyrou’s book, Bad Blood, I think it was called, and the podcasts he did about it, you’ll find that the sophisticated investors sent more questions, and either didn’t get answers, or got evasive answers, or got these bullshit non-answers, and just decided not to invest. They saved hundreds of millions of dollars by asking a few more questions. Of course, you can do this on a smaller scale in everyday life, but those big examples, I think, show that that kind of questioning has a lot of value.
Clint Murphy 47:40
And one of the things you talk about, so let’s stay with that Theranos example: we’re going into the meeting, we’re making a decision, and we’re digging into the evidence. One of the things you say is that we should ask ourselves, did I predict this? Why is that a question we should ask ourselves? And why is the answer, which will probably surprise people, that if we did predict it, that’s actually the issue, more than if we didn’t predict it?
Christopher Chabris 48:10
So if we didn’t predict something, and it still happens, that’s called a surprise, and we are already well attuned to surprises. We already know to pay attention to surprises; that’s probably part of the whole function of the emotion of surprise. It gets us to pay attention to things we didn’t predict, because things we didn’t predict might be dangerous, or they might be interesting, or they might have some special value, and paying attention to our failed predictions helps us improve our predictions. Conversely, if someone is trying to scam you, they will know to give you what you did predict, because then you won’t be surprised, and you won’t stop and do all the things you might do when you’re surprised. So when you aren’t surprised, when you ask yourself, did I predict this, is this what I expected to happen, and the answer is yes, you may want to stop and reconsider whether someone is trying to manipulate the situation, or the information you’re getting, to feed you exactly what you expected. Of course, it should be said that most of the time when you predicted something, you probably had good reason to, and it’s no surprise that it happened. But if you’re thinking of investing a lot of money, and you’re in a meeting, and everything’s unfolding exactly as it would if it were a perfect investment, and there are no flaws at all in the data you’re presented, and there’s a quick and fluid answer for every question, you might want to say, well, let me think about this for a while, let me do some more work, because everything is coming out exactly as expected. That’s also why, when someone admits being wrong, you should probably think, that doesn’t mean this person is a bad source or a flawed source; maybe they are someone I can trust more, because they actually are willing to admit when they’re wrong. They don’t just always give me what I want to hear; they will admit, I was wrong about that thing, and I’m going to tell you why, and so on. Those people are pretty rare, but they probably deserve a little more credibility than they normally get.
Clint Murphy 50:05
And so the other thing with that, because it’s always interesting: we write about this, we read about this, and so do the scammers, so they know how to build the scams, because they’re reading the book as well. But they’re not reading it to ask, oh, how do I avoid getting scammed? They’re reading it to ask, well, how do I scam them? And when we think about this predicting one, in the way we’re talking about it, it also ties to earlier, when you talked about Bernie Madoff. One of the things Bernie did was very interesting, quite different than your average Ponzi scheme, where they promise higher-than-achievable returns that a sophisticated investor would say, that’s not going to happen. Instead, he was able to raise billions and billions of dollars, I think I read 12 billion if I’m not mistaken, and he was able to do it by promising returns in that seven to eight percent range every year, year in, year out, for 20-plus years. And what you suggest there is, well, wait a second, you’ve got to ask yourself, where’s the noise? Because nothing is that consistent. So similar to the predicting, it’s that consistency that we actually shouldn’t be predicting, and when we see it, we should be paying attention.
Christopher Chabris 51:28
So consistency is what we would call in the book a hook. It's a feature of things we encounter in our environment that hooks us, draws our attention, attracts us, makes us feel good. And there are a lot of virtues to consistency in many situations. But in this case the consistency was impossible, because the underlying stock market and economy don't make it possible to generate eight to 12% per year for 15 consecutive years, and not only that, but without ever even having a down month. He had something like three down months in 15 years. The reason 15 years is often cited is that the records for one of his feeder funds go back that long, so you can see what his returns were. That consistency is extremely attractive. You don't have to worry about that money, right? I'm going to get eight to 12% a year. Okay, I'm not going to get 50%, but I'm never going to lose. It's like a Treasury bill, but with an 8% to 12% return instead. And I don't know if Madoff sat down one day and thought, how could I create the best possible Ponzi scheme, a type that's never been seen before but would work even better than all those other ones? I think he may have just stumbled into it. He started doing things, and he found out that when that's what he did, nobody complained. Apparently, occasionally he tried to go below 8% or below 7%, and a lot of his, quote, investors were not happy with those, quote, bad returns. And this trap of not thinking about the noise, as you mentioned, is why one of the things we recommend to readers is to actively look for the noise and the variability in a process, to make sure it's a real one produced by real human beings or real social systems like a market. Some of the unfortunate victims of Bernie Madoff hired an accountant to analyze the returns of Madoff's fake hedge fund and compare them to famous, well-known mutual funds, and he found that, yes, Bernie Madoff's returns were not better than these other funds; they were in fact a little bit worse. And he argued that there was no reason for these investors to distrust Madoff, because he wasn't giving them exaggerated returns. But the analysis didn't take note of the fact that there was so little variability that essentially there was no risk with Madoff. The risk-to-reward ratio was something like a six-times difference between Madoff's fund and all these other ones, the likes of Fidelity and Janus and other well-known mutual funds. And I don't want to blame victims or anything like that, because people were horribly victimized by Bernie Madoff, and he took advantage of a lot of people who weren't sophisticated investors but were, in fact, friends of friends and so on. But I think being attentive to the lack of noise and the excessive consistency in the returns they were getting from him might have helped some of them realize that something was going on that couldn't be right, and maybe get out with their money before it was too late.
Clint Murphy 54:30
And so when we’re considering that what we really want to be looking at, we want to be making sure there’s a certain level of variability that’s commensurate with the overall market. So if the markets up 30% One year down 20% The next year to see someone say, Hey, we’re 7% both years ought to
strategy and I’m not an I’m not a market professional but people were looking at this at the time famously Harry Markopolos, who’s the guy He wrote, he wrote a book about it later. And he sent these memos to the SEC basically saying, it’s mathematically impossible to achieve these kinds of returns, you know, year over year, but you don’t need to be sort of a quantitative finance guru to just think, a smooth line going straight up, you know, doesn’t really look like anything else that is going on in the market. And you don’t have to immediately withdraw your money. But maybe then you start asking questions, you start consulting experts, you start looking more deeply. We’re not expecting in this book, sort of for everybody who reads it to be an expert on all the different sort of content domains in which one could be scammed. It’s more about learning about the patterns that sort of signaled when a scam or a fraud or deception might be going on. And then you can then start to investigate and, you know, with an open mind and make your own decision about whether to stick with it, or you know, make the deal or, or get out or, you know, whatever the right decision might be.
Clint Murphy 55:54
The one that reminds me of, because we've got Bernie, a bit of a charlatan, and the king of charlatans is what we all refer to as the snake oil salesman. I had no idea until I read the book where that term came from. So you talk about the original snake oil salesman, or the king of snake oil salesmen, if you will. And interestingly, when he was ultimately charged, I found it quite funny what that was for. Do you want to share a little bit of that story and background for people? And then we can talk about how we spot the people who are selling modern-day snake oil.
Christopher Chabris 56:33
So the term snake oil salesman goes all the way back to the late 19th century and early 20th century, when the pharmaceutical industry was completely unregulated in the United States, and probably in most other parts of the world. And there were these things called patent medicines, which were just medicines invented by individual inventors, who maybe could get a patent on them, and they would be sold under all these brand names. And there was a whole genre of these that were claimed to cure all kinds of disorders. It's funny to read the list of them, these widely varying problems that one single drug or ointment or thing was supposed to cure. And some of them were said to be oils derived from snakes. In fact, there is a grain of truth in that: in many traditional cultures, oil from snakes was used for anesthetic purposes and could treat some localized pain and things like that, and different snakes had different properties. But the guy who was selling this snake oil patent medicine, which he claimed could treat a couple of dozen different unrelated illnesses and injuries, turned out to be selling something that didn't contain any snake oil at all. And so the reason he was eventually fined and sanctioned, after many years, by the way, he was in business for many years before the laws were changed, was because he wasn't actually selling snake oil. If what he had been selling had actually had some snake oil in it, he probably would have been okay. And that just shows that at that point in time, you could basically claim anything about a medicine. There were no randomized controlled trials for testing them, let alone government-mandated testing before you could market anything. You could get away with these claims. Exaggerated potency is the term we use to describe this. We're attracted to the idea of potent products, potent treatments, potent interventions, meaning things that at very small cost can produce huge effects. A medicine that can treat 30 different diseases is an example of that, or a medicine that you can take once and it cures cancer, and it never comes back after one dose, is an example of potency. There are many things beyond medicine which are like this. For example, some people claim that you can raise your IQ by playing video games, or by playing brain training games. Raising your IQ would be great; it would be useful in a wide variety of situations, it would make you a more effective thinker in a wide variety of ways. Wouldn't it be great if just by doing something inherently fun, like playing video games, or something that doesn't take very long, like playing these brain training games for a few minutes a day, we could do that? So what all these things have in common is a mismatch between the input and the output, right? You're getting too much bang for your buck, in a sense. And when that happens, it doesn't mean it's inherently impossible. For example, antibiotics are probably my favorite example of truly potent things, where you can take a pill for five days, seven days, two weeks, and permanently get rid of something that could otherwise kill you.
Right, like that is near magic in terms of medical interventions. Some vaccines, at least, are similar in that way: one shot and you are protected for life from a disease. But those kinds of game-changing things are pretty rare. And when someone comes along claiming something of equal potency, we should be wary that they may have created that claim precisely because it appeals to us and tends to suck us in, and we should really look for much more evidence, and higher-quality evidence, than they are normally presenting to us.
Clint Murphy 1:00:20
Yeah, and people seem to always want the shortcut, right? And that's generally what these snake oil salesmen go for, the shortcuts. Has anyone highlighted to you the one that's playing out a little bit today, this guy the Liver King? Has that come across your desk?
Christopher Chabris 1:00:37
The Liver King?
Clint Murphy 1:00:38
Yeah, so...
Christopher Chabris 1:00:40
I’m not sure I’ve heard of this one. Yeah.
Clint Murphy 1:00:42
So he’s a YouTube, I guess, star, maybe Instagram as well. And absolutely jack to the gills. And he says, you know, it’s because he eats raw liver and he eats liver supplements. And he’s got these videos of him eating raw liver and a cell supplements, and says, I’ve never taken any steroids. But you look at a video or a photo of this guy, and you realize, given his age, there’s zero chance that’s true. But the population buys all of his supplements, he makes millions of dollars. And there’s a YouTube personality out there who goes and looks at all of these people. And he makes an educated guess of these are the drugs that these guys are on and less. And so then they find an email from this liver King to a doctor. And he’s saying to the doctor, Hey, these are the drugs I’m on. And I need to amp it up, because I’ve got a production company coming to follow me around and film me. I’m gonna become a social media star in Selma. So you know, you’d buyer this snake oil salesmen for collagen is shot it building it, but just the population just see it this jack to the gills guy, full of all those every illegal steroids supplement we could imagine. And he’s still able to sell his liver pills,
Christopher Chabris 1:02:12
Has he admitted that this email is real? Like, do we know that this email was not fabricated to make this guy look bad or something like that? I mean, I don't mean to doubt you. But of course, it's easy to
Clint Murphy 1:02:23
say, "Oh, yeah. Oh, yeah, I did."
Christopher Chabris 1:02:26
So, yeah. I mean, I think that's a great example in many ways, because, of course, we have the truth bias, where we're listening to this guy, we're looking at him, and if he says, I got this way by exercising and taking supplements, and there were no drugs involved, legal or illegal, our first instinct is to believe him. And I think it's kind of amazing; I feel like I can sometimes feel the truth bias in action. I find myself reading tweets or stories or something like that, and for a moment I really do believe it, and it takes a little effort to turn on the questioning mindset and ask what we might be missing. This seems like a great example. In our book, we give a lot of general principles for how to think about scams, and questions to ask, and so on, but there are some more specific things one could say. This brings up one example, which is that buying supplements from YouTube or TV ads is probably never a good idea. I think if you went through life never buying a supplement from any ad you saw on TV or anything like that, you would not affect your lifespan at all. I'm not a medical doctor, don't take medical advice from me. But the unregulated supplement business is in some ways like the whole snake oil story from 100-plus years ago: supplements are regulated as foods, not as medicines. And I think that's one reason people are so attracted to them. To market something as a medicine, you have to have rigorous evidence of what the benefits and the side effects are; to market something as a supplement, you don't. And so there's always the possibility that, if I just take the right combination of these things, I can achieve what this guy has achieved with his body, or I can achieve a sharper mind, I can improve my memory, and so on. And with any of those things, when you look at the quality of evidence behind them, it's fair to say it's probably orders of magnitude worse than the randomized clinical trials you would need to register an actual pharmaceutical for sale. So I think you can just skip supplements unless a doctor is actually advising you to take them, and at what dose, and so on. And there are other ones like that. Another one of our favorites is that no government agency will ask you to pay your overdue fines by buying gift cards at Walmart and reading them the numbers over the phone. But thousands and thousands of people have gotten scammed out of a lot of money by getting a phone call saying you have an overdue tax bill or something like that, and the police are coming, but we can call them off if you just pay us right now. And weirdly enough, the way to pay is to drive to a store, buy prepaid gift cards, and read the numbers over the phone. That is always a scam, right? So there are a few specific things like that, but of course those will evolve. Once that scam doesn't work anymore, there'll be some new scam.
And so that’s why I think sort of understanding the patterns behind them and how they tap into our thinking patterns is the way to go to, you know, to both learn more about how your own mind works, and also to sort of improve your general defenses against the scams that are yet to be invented, or yet to become popular.
Clint Murphy 1:05:52
Yeah, and you know, you talk about the Nigerian prince one as an example, which is the famous one. And you were just talking about Twitter; one of the ones that always happens to me is on Twitter, because that's where I have my largest social media following. Every day there are probably at least five accounts impersonating me. They'll reach out to people and say, hey, how are things going? And people think it's me, so they start having a conversation, and then the account says, hey, how's your trading going, can we talk about Bitcoin, do you have a Bitcoin wallet? People reach out to me and ask, hey, is this you? And I'm like, no, I would never ask you about trading or Bitcoin. Let me guess what they asked, right? How's your trading? Bitcoin? And people always say to me, well, why do they do that? And I say, it only has to work once. If you've got 100 people in a warehouse on computers, each sending that out to 100 people a day, they only need to hit one person who sends them a bitcoin and now they've made $30,000. But I guess the question is, the scam keeps changing, but the general philosophy is the same: I'm going to take money out of your pocket and put it into mine. So why do they work? And for the listener, yes, the scam is going to keep changing, but what should they be asking themselves so they're not the next mark?
Christopher Chabris 1:07:25
Well, there are a lot of different ways to avoid being the next mark. But the reason that general pattern of scam works, and I think we don't think about this too much, is that although it seems like it's one person contacting just me, and we wonder how that could possibly work, I'm skeptical, I wouldn't do it, there are actually, like you say, probably 100 guys on a whole floor of an office building somewhere doing this 24 hours a day in shifts. There are call-center scams that have operated out of India and other places where it is literally a business: they have floors of office buildings making these calls, they have managers and supervisors, and it looks just like a legitimate business, except all they're trying to do is scam people in other countries out of their money. The Bitcoin ones you're describing probably operate in a similar way. Maybe chatbots are even doing it; maybe it's not humans all the time. Maybe they start with chatbots, see how far they can get with that, then switch to a human when they need to. They're always trying to increase the efficiency of the scam, just like everybody, right? We all want to automate the parts of our business that can be automated. So why does it work? Well, from their point of view it works because, like you say, they only need one person to send one bitcoin to make $30,000, or whatever it's trading at these days. And from the point of view of the Nigerian scammers, they don't need too many people to send them money before the scam has paid for itself. So interestingly, the way to foil these scams is to interact with these people without ever sending them any money. And of course you can do that if you know from the beginning that it's a scam, and you engage basically to troll them and waste their time, because then it becomes unprofitable for them. If someone talks to you for weeks and never gets that bitcoin, that's weeks of salary spent without the $30,000 payoff. So there's this whole concept of scam baiting, which is people deliberately engaging scammers without any intention of ever paying them, just to make their business unprofitable. And scam baiting at scale with bots, bots for good, is another way people are trying to fight back. Now, what can you do to figure out whether this might be happening to you? I think one of the main things is to ask whether the venue you're in and the interaction you're having are the right way to achieve the goal that's on the table. So when you get that Nigerian scam email, or that weird Bitcoin DM on Twitter, and the conversation starts, I don't mean you should just delete it right away, but really stop and think: is DMing with someone on Twitter the way people should invest money and save for retirement? Twitter DMs are not the venue for making good investment decisions. Likewise, responding to emails from unknown parties in foreign countries is probably not the way to make good financial decisions, and buying things that are alleged to be rare or unique from TV infomercials and mall stores is not the way to become a successful collector, right?
So there are art galleries and auction houses and specialty dealers and so on for a reason. Likewise, there are large financial institutions and money managers with serious track records for a reason. That's not to say no one should ever invest in Bitcoin, but I think any financial adviser would say, if you want that kind of asset class, buy a little of it and make it part of an overall portfolio. You can get financial advice from someone much wiser than me, but certainly no one would advise you to buy it from the next guy who DMs you on Twitter, or to exchange bank details with the next person who contacts you from some West African country. That would never be the advice. So being aware of whether the steps you're taking are really the logical ones in light of the goal might tell you whether you're pursuing your own goals or someone else's, because someone who's trying to scam you wants you to be buying art from a TV shopping channel, or making investment decisions with people who DM you out of the blue on Twitter.
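As a back-of-the-envelope illustration of why this business model works, and why scam baiting hurts it, here is a small sketch; all of the figures (operator counts, wages, the $30,000 payout) are assumptions for illustration, not numbers from the conversation or the book:

# Hypothetical scam economics: how rarely does a mass-message operation need
# to succeed to cover its costs, and what does wasting its time cost it?

OPERATORS = 100                   # people (or bots) sending messages
MESSAGES_PER_DAY = 100            # messages each operator sends per day
DAILY_COST_PER_OPERATOR = 50.0    # assumed daily wage/overhead in dollars
PAYOUT_PER_VICTIM = 30_000.0      # e.g. one bitcoin at an assumed price

daily_cost = OPERATORS * DAILY_COST_PER_OPERATOR
daily_messages = OPERATORS * MESSAGES_PER_DAY

# Break-even: how many victims per day, and what fraction of messages is that?
victims_needed = daily_cost / PAYOUT_PER_VICTIM
break_even_rate = victims_needed / daily_messages

print(f"Messages per day:  {daily_messages:,}")
print(f"Daily cost:        ${daily_cost:,.0f}")
print(f"Break-even rate:   {break_even_rate:.5%} of messages")

# Scam baiting flips the ledger: every hour an operator spends on a target
# who will never pay is pure cost with no chance of the payout.
HOURS_WASTED = 40                 # hypothetical time a baiter ties up one operator
HOURLY_COST = DAILY_COST_PER_OPERATOR / 8
print(f"Cost of one baited operator: ${HOURS_WASTED * HOURLY_COST:,.0f}")

Under these made-up numbers, the operation breaks even if far fewer than one message in a thousand succeeds, which is why a tiny response rate is enough, and why tying up an operator's time with no payout is the one thing that reliably makes the math worse for them.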
Clint Murphy 1:11:43
I love it. And so it's a good spot to wrap up on that one and go a little off book. Do you have time for some rapid-fire questions I can fire at you?
Christopher Chabris 1:11:51
Sure, let’s see.
Clint Murphy 1:11:53
Sure. All right, what’s one book that’s had a really meaningful impact on your life that you’d recommend to the listeners?
Christopher Chabris 1:11:59
There are so many, but the one that comes to mind most often is a book called Everything Is Obvious (Once You Know the Answer). It's by Duncan Watts, who's a social scientist, a professor at the University of Pennsylvania, and actually a friend and collaborator of mine, but I did not know him when I first read the book, which came out maybe about 12 years ago. It's a collection of really great stories and ideas about how we misunderstand what causes what, how we misunderstand causality, and how hindsight is a bad guide to things. It's a really well-written book, and I think it's quite an enjoyable one.
Clint Murphy 1:12:38
Gotta look that one up. What’s on your shelf right now? What are you digesting at the moment?
Christopher Chabris 1:12:43
What’s on my shelf? Well, you can see behind me behind you.
Clint Murphy 1:12:48
Are those the same?
Christopher Chabris 1:12:50
What am I reading? Let's see, what have I read recently? Well, I'm actually reading a book called Conspiracy, about conspiracy theories, by Michael Shermer. It's an interesting analysis of why people believe in conspiracy theories, what kinds of people believe in them, and what the common patterns among conspiracy theories are. That's a whole interesting field of study that I think is worth looking into.
Clint Murphy 1:13:13
And what’s something Chris that you’ve bought for, let’s say under $1,000, in the last year that you’ve thought to yourself, hey, wish I’d bought that sooner?
Christopher Chabris 1:13:23
Wow, that’s a good question. What if I bought for under $1,000 that I wish I had bought sooner? Well, I probably upgraded my phone later than I should have. But that’s a pretty obvious that’s a pretty obvious one. I you know, I used to be one of those people who lined up every year for the new iPhone, and then it became every two years and then at some point, I just stuck with one for an inordinate amount of time. And when I got the new one, I realized I should have you know, it probably should have done that. I probably should have done that earlier. What’s something I bought for under $1,000? That maybe that phone actually might have been more than 1000? Yeah, it probably was, it might have been right around there a little bit under something like that. I don’t know if that’s the best answer. I can come up with that one right, come up with
Clint Murphy 1:14:02
like, I’m in the same boat. I used to do it every year. And partially because I would say to people, I’m pretty sure Apple throttles our batteries when the new phones are coming out because it doesn’t make any sense that my battery dies in hours. And last week, it was it would last a full day. And they all thought I was a conspiracy theorist until Apple admitted it.
Christopher Chabris 1:14:25
The features improved much more year to year back then than they do now, right? I don't know exactly why that is, but probably speed and memory and so on were improving faster back then, and the camera resolution too. At a certain point the camera resolution can't really get any better; you can't tell the difference anymore if you're a casual user, right? I'm not making movies on my phone.
Clint Murphy 1:14:48
Yeah, like how many more lenses can we put on this thing? Is it just going to be one giant wall of lens? Exactly. The change becomes a little less meaningful. So I was the same: it went from every year to every two years, and now, by the time I change this one, I think it'll be closer to three. It was just the evolution.
Christopher Chabris 1:15:09
Here’s something I bought, it cost less than $1,000 each, but cumulatively, it’s more than $1,000 that I should have bought earlier, which was chess lessons. So I used to take a shot. I’m a chess player, I used to take chess lessons. When I was a kid, I stopped in college. And I got back into chess, you know, quite a while ago, now, maybe 10 years ago, but I didn’t start taking lessons for like five or six years. And I think partly because I thought I’m a good player already. I don’t need lessons. But then I eventually realized that there’s so much that a coach in kind of any field that you want to improve yourself in, you know, can do for you. And if you’re willing to do that, so I probably should have started taking chess lessons again, sooner if I wanted to really keep on getting better at chess, and cumulatively, I spent more than 1000 on them, but each one is much less than 1000.
Clint Murphy 1:15:52
I love it. Even the chess.com lessons themselves are pretty fun. And you reminded me, when you gave the example of the cheater you encountered in the book, of that Scrabble-style game you could play, Words with Friends. I realized pretty fast that the people on the other side were cheating, because I knew who they were, and I thought, there's no way you have this vocabulary. You are not going to get four seven-letter words in a game with me; I don't think outside of Words with Friends you've ever had one. And sure enough, how do I figure it out? I think, well, I think they're cheating. So I searched the App Store for cheating at Words with Friends, and it shows you the app they're all using. So I deleted the game and never played it again. The last one, because the show's about growth: what's one mindset shift, habit change, or behavior change that you've made in your life that had an outsized positive impact on you?
Christopher Chabris 1:16:48
Well, of course, it’s always hard to really attribute cause and effect to those kinds of things. Right. So I’m certainly a fan of Getting proper sleep. I think the one you know, if I had to say one thing, I think probably trying to be aware of how much evidence you really have for your beliefs that kind of gets back to the work required to have an opinion. But to try to put into practice the idea that you should be willing not to be certain, you should say you’re certain of something less often, you should act certain things less often. And just try to put that into practice and observe yourself like how often are you expressing absolute certainty, and try to reduce that because you must be expressing absolute certainty more often than it’s really warranted. It’s not necessarily a habit, but I would say it put in the category of what you would call a mindset shift, maybe so I’m gonna go with that one.
Clint Murphy 1:17:43
I love it. And we went pretty wide and definitely deep in a few areas. Is there anything we didn’t cover that you want to make sure you get across to the listener?
Christopher Chabris 1:17:53
I don’t think so. This was a great conversation aside from please buy our book Nobody’s Fool. And other than that, I think we covered a lot. I really enjoyed talking.
Clint Murphy 1:18:02
Yeah, perfect.