Amy Edmondson: The Science of Failing Well

Photo credit: Evgenia Eliseeva

Amy Edmondson is the Novartis Professor of Leadership and Management at the Harvard Business School. Early in her career, she worked as the Chief Engineer for architect and inventor Buckminster Fuller, which started her on the road to reimagining how we're all impacted by the world around us. She then became the Director of Research at Pecos River Learning Centers, where she designed change programs in large companies. Now she's an academic, focused on how teams function and evolve, along with the essential dynamics of collaboration required in environments informed by uncertainty and ambiguity. What sort of environments are those? Almost all work environments. A significant point of her research and focus is the necessity of psychological safety in teamwork and innovation. Effectively, how do you create an environment where people feel like they can fail in the right direction, where they're learning and taking risks toward evolution and growth even when they might not get it right the first few, or few hundred, times? This is the focus of her latest book, Right Kind of Wrong: The Science of Failing Well.

MORE FROM AMY EDMONDSON:

Right Kind of Wrong: The Science of Failing Well

TRANSCRIPT:

(Edited slightly for clarity.)

ELISE LOEHNEN: Are you just, like, back to back talking about failure?

AMY EDMONDSON: Yes. It's so funny because, you know, well, it's just a kind of depressing irony about talking about failure all day.

ELISE: And do people tell you all their failures?

AMY: Not enough. Not enough. I mean, I'm always looking for more stories, of course.

ELISE: Seriously, I mean, what a fascinating subject, in so many ways. I mean, ultimately, obviously, the book argues that failure is the bedrock of innovation and progress, and the sort of drive shaft of problem solving is a willingness to fail. But for whatever reason, I'm sure when you tell people what you do, you encounter nothing but, like, ugh, what a downer.

AMY: Right. Absolutely. And it's in part, if you think about it, I could just as easily walk through the doorway of, you know, experimentation or risk or progress or innovation and have the same conversation. And so, in a way, it's my fault. I walked through the doorway called failure.

ELISE: But it works, right? Because I mean, you write about this, but we all have a negativity bias, right? As much as we want to disown our own failures, we're obsessed with other people's.

AMY: That's absolutely right. So we do have this negativity bias, like bad is stronger than good, as I write, and that's well documented in the research literature, not mine, but other people's. And so that means we're both overly sort of interested in and okay about other people's failures, but also overly sensitive to our own. And especially compared to, you know, the things that go well: we discount those, and then, you know, are too worried about the ones that don't go well.

ELISE: Yeah. So interesting how, with just a little bit of objectivity, we see the opportunity to learn and evolve. And maybe we should define some terms for people, because you talk about the spectrum of mistakes, right? Can you give us, like, the framework in which you think about failure, and the terms that you think are so important to distinguish?

AMY: Absolutely. So I'll start by saying, yes, it's an overly broad term. Let's recognize that part of the challenge here is that failure is a very broad term, and it encompasses all sorts of things, you know, that didn't go as hoped, and maybe even expected. And that applies way too widely to so many things, but the term failure is still important because we need to come to terms with failure. So I identify three kinds of failure. The simplest and first kind, the kind that, frankly, it's easy to understand why we're allergic to, I call a basic failure. A basic failure is a single-cause failure: generally, a human error or mistake led to the failure. You know, you didn't follow the recipe, the cake fell. You didn't study for the exam, you failed the exam, right? So these are basic failures. It's very obvious why they happened. They were, in fact, in a very real way, preventable. And so for those, I promise you, I'm not celebrating basic failures. I am saying they're not shameful, they happen. We should learn from them, we should use them to grow and develop. But I'm not saying let's have more of them. No, quite the opposite. Let's have fewer of them.

The second kind are complex failures, which are multi-causal. They're the kind of failures that happen in, you know, familiar territory, when a whole bunch of things line up in just the wrong way: you maybe forgot to set your alarm clock, and there was more traffic than you expected, and, you know, you hadn't filled up the gas tank, whatever. You know, a bunch of different things that led to you being impossibly late for an important meeting, but not one thing, you know, not one simple thing. And those also are not good news. They're largely preventable. Let's do our very best, again, to prevent them.

And now we draw a line in the sand. And that's where we get to the third kind, which is the intelligent failures. And these are the ones that are genuinely good. They're genuinely productive and useful because we learn from them. They are thoughtful forays into new territory. An intelligent failure is essentially the undesired result of an experiment, you know, where the experiment was worth doing, you thought about it, and it seemed like it might get you the result you wanted. And yet you were wrong. Oh, well. And so that's okay. I'm describing it now as if it's a sort of individual thing, but an intelligent failure also describes a clinical trial that fails to demonstrate efficacy for a new cancer drug. That's new territory. There's literally no way to get the result that you need to move forward, you know, on this potential medication without doing that trial. You wanted it to work. You had good reason to think it would work, but alas, it didn't work.

ELISE: Yeah. And you tell a lot of stories in the book about all sorts of failures. It's fun to read. But scientists sort of inherently, based on the craft, right, are much more comfortable with the fact that they will make a hypothesis and get it wrong, what? Most of the time?

AMY: Most of the time. I mean, it depends on the field, but if you're in, say, a leading-edge biological chemistry laboratory, it's very likely that 70 to 80 percent of the experiments you're running end in failure. And that's not because you're not good at your craft, it's because you're so good at it that you're taking bold risks. You have surveyed the literature, you know what's known, you know what isn't known, you have a hypothesis that this cool thing might work, might be true, might describe nature, and you do the experiment. And sometimes you're right, which is pretty darn exciting, and you publish in, you know, the top journals. But more often you're wrong. And you train yourself not to be upset by it, right? Because if you're upset by it, you're just not going to last very long as a scientist. But you recognize it as a worthy effort: this was a good idea to test, and possibly I'm the first person in the world to test it, and it wasn't right. But now I can also be the first person in the world to do that next experiment with this new knowledge that nobody else yet has, you know, and maybe the next experiment, informed by this experiment, will end in success.

ELISE: Yeah, what do you think is, like, the major distinguishing psychological trait there? I know you write about growth mindset and fixed mindset, but it feels like science is the land of uncertainty, ironically, even though we think of science as so discerning and so discrete and definable and exact. Versus certainty: where there is certainty, there's a known way to do something, so failure there is a little less acceptable. What do you think it is that's distinguishing?

AMY: I think it's probably training and practice and just getting, you know, getting your comfort level up with the nature of the sport you're playing, right? It's not that scientists are born different, you know, that they just sort of are born and say, oh, I don't mind failure. Failure is great. I'm all for it. Right? No. I mean, they just get interested in some phenomenon, you know, they love biology, they love physics, whatever it is, and they get interested in it and they are good at it and they learn and they love the learning. And they look around and they realize, if I'm going to stay in this game, I've got to be okay with the fact that you can't be right all the time. You know, it's the baseball player with a .300 batting average who is in the hall of fame. That means they are missing 70% of the time, but they're the best. So I always say, and I'm married to a scientist, you know, how do you get out of bed in the morning and fail 70% of the time? It's not because you're a different kind of human. It's more because you get it, you know what that 30% feels like, right? You know it's worth it. And it's the price you pay to get to that 30%, to get to that big discovery and that fantastic publication.

ELISE: Right. And it's part of the culture, right? This is part of the lab, it's the sea they swim in. And that's what I think is so, I know you work at HBS, right? You're primarily working with businesses where this isn't necessarily the culture. Can you talk a little bit about psychological safety, and then we can talk a little bit about how lack of psychological safety is so dangerous?

AMY: Absolutely. So first of all, I want to say thank you for mentioning that it's the culture, because we were describing this almost as an individual property, but it isn't. It's an individual activity and mindset, but oh, it takes place in a culture, and it's reinforced by and fed by the culture. And that's really important. And the specific feature of the culture that I think is absolutely critical, which you just asked about, is psychological safety. Frankly, it's a learning environment, but I define it as a belief that you can take the interpersonal risks of asking questions, raising a concern, you know, sharing a mistake, admitting a failure, and all the rest. It's a recognition that these kinds of learning behaviors are hard for us, for people, and yet they are less hard in an environment that supports them, right? And that's an environment of psychological safety, one that basically says, you know what? Interpersonal risks are part of the work we do. They're part of learning. We've got to make it easy for each other to take them. You know, we do what we can to make that happen. And a healthy scientific lab is definitely a psychologically safe place. It's a place where you can quickly ask for help when you're not sure about something, or quickly point to an experiment that failed because you don't want your colleagues to make the same mistake, you know, wasting more resources and money when we already know this one doesn't work, right? So it's really important to share those things quickly.

But psychological safety is, you know, really valuable for learning, for innovation and problem solving, and it's not the norm in most work environments, as you described even by alluding to the business community. Broadly speaking, I mean, the general culture of the business community is not, hey, failure, that's great, right? Maybe there's some verbiage about that and some kind of Silicon Valley speak about that, but most of the time it's, you know, you make your targets or else, you succeed or else.

ELISE: Yes. And low tolerance for mistakes, which you sort of tease out as different. Or this cultural idea that somehow allowing mistakes, which are very human, means that you have low standards, or that you're running a sloppy operation, and that you can engineer perfection, right? But meanwhile, what you show is in some ways the inverse. Can you tell us a story about how you even came to this idea of psychological safety?

AMY: Sure. Let me first just pick up on what you just said, because I think it points to a classic error in organizations. It's understandable that an organization, whether they're automotive or aviation or patient care or tech, they don't want mistakes. None of us want mistakes. Here's the problem: if you just say, let's not have mistakes, and really you're going to be, you know, rewarded and punished and so forth based on whether or not you make mistakes, that doesn't make mistakes go away. I mean, we can remove some mistakes by just trying harder and so on, but there will be error. So what that really accomplishes is it makes mistakes go underground, right? It makes people not speak up about them. It makes them harder to catch and correct before really bad things happen. And that's like the worst of all possible worlds, right? You still have human beings making mistakes, but you're less able to do what you need to do to catch, correct, learn from, and, you know, prevent the really big ones. And indeed, the way I stumbled into, or started to think about, psychological safety as an environment that mattered in the modern workplace was through a study of mistakes.

I was part of a team of researchers that was looking at the phenomenon of adverse drug events and medical errors in hospitals. And I was not an expert in medicine in any way, shape, or form, but I was an expert in teams, and I had the opportunity to have people fill out a survey that would assess how well their teams were working as teams. And so, you know, my hypothesis was that the better teams would have fewer mistakes and adverse drug events because they're better at teamwork. But what happened was, I kind of stumbled instead into the insight that the better teams were more open and willing to talk about error. And it actually turned out that it's not so easy, I mean, this is obvious in retrospect, but it's not so easy to measure error rates in many workplaces. I mean, some are really objective: you can go to the end of the assembly line and count up the errors. But many errors that happen in workplaces are hidden, and many of them don't really cause real harm. I mean, they had the potential to cause harm, but they don't automatically cause harm. And so, you know, getting people to talk about errors, really encouraging people to talk about errors, is not easy, but it is important.

ELISE: Yeah, well, and in a workplace without psychological safety, where there's a lot of blame, shame, etc., it's a threat to someone's identity. It's a threat to their livelihood. It is understandable why someone would not confess to messing up, right?

AMY: Right. The last thing I would be saying is that they're, you know, bad and wrong for not speaking up. No, their environment was perfectly designed to have them not speak up about error, and they are products of that environment. And it would be both unscientific and maybe even unethical to expect them to be heroes, in the sense of expecting them to do things that they firmly believe are not in their best interest. Instead, you have to create an environment where they understand that it is in their best interest, because it will be appreciated. There will be gratitude for the fact that you spoke up.

ELISE: So can you talk a little bit about how, you know, you write about how blaming is such a natural instinct. You write about the three-year-old in the car where the father, like, hits something, you know, messes up, the mirror comes off, and the three-year-old's like, it's not my fault. It wasn't me.

AMY: I didn't do it, Papa. Yeah.

ELISE: But that I'm-in-trouble instinct is so strong. It's so human. So how do you see people successfully keeping that threat system low?

AMY: Yes, it's such an important question, because you are right. It is, you know, natural. It's, you know, all but hardwired to resist failure, to not want to be blamed. You know, it's an instinct that's very, very powerful, because we don't want to be rejected. We don't want to be thought less well of. Which is why, you know, the things that I write about, and let's face it, organizations that are truly world class, whether it's a scientific laboratory or, you know, an innovation department or a perfectly running assembly line, they are not natural places, right? It's not that, just left to their own devices, humans will create places like that. No, they take really hard work, good design, good leadership, a kind of daily willingness to stretch and grow, independently and together.

And so the short way to put that is, it takes effort to create a learning environment. It really does, but it can be done. I mean, we can see places where people realize, hey, I'm a fallible human being working among other fallible human beings. Chances are pretty good that a few things will go wrong that we really didn't expect or want, but we're going to stay alert to it. We're going to do our best, because we recognize that we've been sort of reprogrammed to say, yeah, I'm a fallible human being. So are you. Things will go wrong. Great. And then also, like we were talking about with the scientists, we can teach people and help people to realize that on the leading edge of any endeavor, whether it's scientific or culinary or athletic, if you're really on that leading edge, you're in new territory, you're discovering things, you're trying things that maybe haven't been done before, and any such effort will bring risk, you know, the risk that it might not work. It also brings the possibility that it might work. So in a way, you reorient your thinking to say, I'm willing to take that risk, right? Because of the potential upside, I'm willing to live with the downside of it not working out, or of me being wrong.

ELISE: Yeah, it's interesting. I think you were writing about X at Google, and whoever you were profiling was saying that if he had to do layoffs, he would lay off the people who had never failed. Is that accurate?

AMY: Basically, yeah. So it's Astro Teller, and he was running X. You know, it was a couple of years ago, before COVID, one of those periods where people were, for whatever reason, a little bit anxious about, you know, layoffs, and they expressed that as, but how am I supposed to take risks now, when there could be, you know, these layoffs? And he said, well, that's exactly when you should be doing it. I mean, we're an innovation factory, and this is different than if you're, maybe, you know, running a nuclear power plant, but if you're not failing, like if you're not risking and trying wild new things, some of which don't work, you're probably not adding much value around here. Right? I mean, so it's sort of, you know, flipping our natural, normal mindset on its head, which is, it's probably good to just lay low so that nobody notices me if I fail. Rather than, no, no, no, if I'm not noticing you failing, you're not the highest value employee that we have.

ELISE: Yeah. No, it's interesting. But then flipping, you know, you mentioned a nuclear power plant, but flipping it into not necessarily nuclear high stakes, but you write a lot about Toyota, or mechanisms within companies for allowing employees to report errors, anonymously or not. But just that idea of, like, what's it called?

AMY: The Andon cord, right?

ELISE: The Andon cord.

AMY: The Andon cord is both, you know, a practical tool and a symbolic message, because the Andon cord is a cord that any team member on the front lines of the assembly plant can pull if he or she notices any potential, not just a problem, but like a potential problem. And think about how important that difference is, right? Because it's one thing to say, you're smart enough to detect a problem, please let us know right away. But what they're saying is, if you hypothesize that something might not be quite right, we want to hear from you, right? We really do. So you pull that cord. Now, many people have heard of the Andon cord and they think, oh, that stops the line. It doesn't stop it right away. It basically calls a team leader over who will then help you diagnose whether the thing you noticed is really a problem or not. And 11 out of 12 times, at least at one point in time, it wasn't a problem, right? So 11 out of 12 times the line doesn't stop. It just keeps going. But think about the symbolism of that. It's visible. It's right there. And it's basically saying, all day long, we want to hear from you. We value your brain. We believe you are alert and seeing things, and we need you, right? This is a team sport. That's what it's saying. And so in, you know, well run organizations and learning organizations, there are mechanisms to just make it easier, right? Make it easier for people to do the hard things of speaking up about error or questions or concerns.

ELISE: And you talk about how you can also create systems, like they do this in airlines, right, where you can, not lodge anonymous complaints, but anonymously report errors of your own or of other people, without attribution.

AMY: Yes. I mean, anonymity is kind of a double-edged sword, because it definitely makes it easier for people to report. On the other hand, it subtly sends the message that it might not be safe. So I think it's still worth doing, but you have to realize you're doing it with, you know, a potential risk, which is to convey, yeah, it really is dangerous, you know, to do this with your name on it. But I think it's a step. It's like training wheels on a bicycle. Maybe it collects things that would otherwise be missed. I mean, it certainly does collect things that would otherwise be missed, but the gold standard would be that everybody just knows that's how we roll, right?

ELISE: Yeah. And that it's okay.

AMY: And it's okay. And no one ever got fired for it, right? And no one ever got, you know, reprimanded for it. In fact, the opposite. Most, you know, they'd get praised for it or thanked for it.

ELISE: Right. It is really hard to imagine doing, right? Owning our own fallibility. I was thinking too, this was fascinating to me, but you write about the fundamental attribution error, and this idea, you write, this is Stanford psychologist Lee Ross: “when we see others fail, we spontaneously view their personality or ability as the cause. It's almost amusing to realize that we do exactly the opposite in explaining our own failures, spontaneously seeing external factors as the cause.” And this is true, right?

AMY: It's true, and hilarious if you think about it. Now, you know, when social psychological findings are true, it doesn't mean it's like this 100% of the time. It means there's a meaningful difference in our sensemaking about other people's failure, you know, and our own. And part of it, I think, is, you know, deep down, it's so frightening for us to think it might be true, that it's our fault, right? Because that feels like a kind of death. So I have to say, yeah, yeah, yeah, but the situation, right? The context, there was too much traffic, you know. Versus, okay, you know, I contributed to it and the situation contributed to it. And the same is true for you. If I see the failure in another person, I'm realizing that, yep, some of it's them and some of it's the situation, and it's a complex combination.

ELISE: How do you break that tendency in people? Is it just awareness?

AMY: I think awareness, clearly awareness is an important step. How much awareness gets you there is debatable. I don't think it's enough. I think it's awareness plus practice. Once you become aware of the fundamental attribution error, it's hard not to see it.

ELISE: Right.

AMY: You know, you trip on the sidewalk and you think, oh, anyone would have done that, right? It's a bumpy sidewalk, rather than, oh, I'm clumsy. But it's both, right? There was a little thing there that you tripped on, but it's both, and it's always both, in a way. And so part of it is just having that awareness. And then I think the second part is just practicing the new thinking, the new, more productive, more learning oriented thinking.

ELISE: Yeah. I've been thinking a lot about, and you write about, sort of this idea of fear and how fear inhibits learning, which I think makes a lot of sense, right? Like, anyone who's listening, you think about these instances of failure, and talking about it excites my nervous system, right? And makes me feel a similar threat to my identity, particularly as a woman who aims for perfection in everything that I do, but of course, right? But this idea, I don't know if you've ever heard of the Conscious Leadership Group, or their framework of being above or below the line, but it is this idea that, as humans, our tendency is to be below the line most of the time, 90-something percent of the time. And when we're below the line, we see the world happening to us, and we are looking at where to place blame. And they talk a lot about Karpman's drama triangle and sort of figuring out where you are in that triangle, but that when you can get above the line, when you can sort of, I won't even say corral your fear, but sort of get above the line.

AMY: See it from a distance, right? From above. You know, that's similar to what Ron Heifetz at the Kennedy School at Harvard calls, you know, getting on the balcony. He uses that phrase. It seems like the same thing cognitively, you know, neurologically: you're trying to get not just distance, but elevation, like you're looking at it from above, because then you can see the bigger picture. You can look at it more dispassionately and compassionately.

ELISE: Dispassionately. And also, as they talk about, below the line, sort of, life, the world, is happening to you, but above the line it's happening by you and through you, and you're co-creating your reality. You're not blaming. You're seeing problems and obstacles as allies for your own learning. It's sort of another frame for what you're talking about, which, to do it, like you talk about at some point, psychological safety is essentially expressed through the culture, right? It's not a personal quality.

AMY: It's shared, an emergent property of the group.

ELISE: An emergent property, yes. And so how do you, in that way, create sort of above the line, emergent psychological safety environments where learning is the ally?

AMY: That's the holy grail, right? That's the big question. But I think the word you used just a few minutes ago, co-creating, is a really important part of it. In a way, you have to do it yourself, but you can't do it alone, right? It's like, you have to do your part, but when you're with other people and you're working this through aloud, right? Yeah, this went wrong, you know, here's some things I contributed, you know, we really work on getting ourselves back up above the line together. I earnestly believe that this is easier to do with supportive others than by yourself. Like, you can really get stuck, right? You can get stuck below the line, and you just don't see any doorways out, but a good friend or a good colleague will kind of open the door for you and say, well, come on, let's think about it this way, just at that moment when it was sort of hardest for you to get out of the fear zone and, you know, into that learning zone. They opened the door for you, and vice versa. So I think the way we do this is we start talking about it. We support each other and hold each other accountable for getting into more productive, learning oriented thinking, and less of the, it's happening to me and I'm powerless, right?

You're the victim that way. And that's not a good place to be or a good place to stay, even when, you know, even when it's like 100% true, as in the case of Viktor Frankl and his magnificent memoir, Man's Search for Meaning, where he is literally the victim in the Auschwitz death camp, and he manages to get himself above the line, right? He manages to say, I'm going to think about this differently. I'm going to think about the incredible bravery I see every day among my colleagues, and I'm going to remember their stories and I'm going to share them when I get out, right? So that's a kind of deliberate shift from something that literally is being done to you. It's a horrific below the line place, but you decide internally, I'm going above. Of course, he didn't use those terms, didn't have those terms, but it's empowering, to say the least.

ELISE: Yeah. And it's this difference between being a victim and victim consciousness, or sort of attaching to that as a consciousness. Like, you can be a victim and be like, I refuse to participate in this sort of consciousness or this energetic field, so both things can be true simultaneously. It's interesting too, the conversation in the book about checklists and Atul Gawande, and sort of this idea of fixing as much certainty as we can and how helpful that is, right? Like, there's no reason to be reinventing scripts on the fly when we have basic checklists that we can follow. But then also to recognize that these systems, you can mechanize it as much as possible, but it's still alive and there are still all of these other human forces at play. And so it has to be co-creative, right? Like, you can't just routinize it.

AMY: It won't do it by itself, right? The checklist won't ensure the safe flight or the safe surgery by itself. It takes conscious engagement from the human being. The human mind has to engage with it consciously to make it work. And, you know, what's so interesting is when checklists and protocols first came into medicine, at first, you know, the physicians were like, go away. You're trying to dumb down medicine. You're trying to take away my autonomy as a clinician, as a professional. And fortunately, that thinking was able to be changed to, no, in fact, this is just an incredibly helpful support tool so that your big brain doesn't have to be tied up trying to remember all the items on the list. Those are right there. It's free to make judgments. It's free to notice things that you might otherwise miss. It's free to do its highly educated, highly important job. So you had to get people to reframe what the checklist was, from the boss to the servant.

ELISE: Right. And then to imagine it as an iterative process that could potentially be continually improved or rethought or re-engineered, that it doesn't mean you can, like, unhook your mind. If anything, it's just giving you the train tracks. But it's interesting to think about that in the context of, I mean, I don't even know how to bring it to AI, but as a writer, I think about AI and I'm like, I wonder, does this reduce the need for so many copywriters, or some of the more, not perfunctory, because I love a copy editor and I love someone who really understands grammar. They're like my favorite brains and favorite people. But pointing out typos, it's like everyone's favorite activity. I worked at a magazine called Time Out New York, and most of the letters to the editor were pointing out typos. People love, love to do that sort of check. There's something very satisfying about it.

AMY: Just a little victory, right? And so it shows you how good we are at error detection, right?

ELISE: Yeah.

AMY: But yes, I'm wondering about that too, right? Because how is this possibly a good servant, right? Is AI possibly a good servant for a writer? Or is it a, okay, park your brain at the door and let it do your work for you and it'll be, you know, good enough?

ELISE: And is it just gonna, like, escalate factual errors? It's sort of only as good as the consciousness that creates it, right? Or the information that feeds it. Even though now, like, AI can code, AI can, like, program itself, I don't quite know, it breaks my brain. But you think about it as either escalating failure points, or being a process to find failure. I don't know.

AMY: I don't either. I don't either. I know it breaks mine too. And so we won't, you know, we won't even try to speculate, but there's no question that uncertainty has just gone wildly up.

ELISE: Yes, definitely. We talked a bit about the negativity bias, but just to go back there for a minute: do you think, I mean, you write about Daniel Kahneman and this idea that we're so attuned to that. It's like a threat reduction, right? For our survival. Is that enough? Do you think that's sort of the limit, that that explains the negativity bias and why we're so scared?

AMY: I think so. You know, I think that's probably a philosophical question, but it makes sense to me, right, to say that we would have a negativity bias because the risk of, you know, real harm outweighs the upside of potential gain. I mean, especially in prehistoric times, if you could lose your life by, you know, not noticing some threatening creature or situation, you'd want to be very tuned in to those kinds of situations so you could stay alive and, you know, reproduce, and then we're all here. Whereas, you know, the upside maybe would have been smaller and less life changing. You found a nice peach, you know, or you didn't, right?

ELISE: Yeah. No, I think that makes sense. But it's interesting to think about that as one of our primary programs that we're running.

AMY: Like we have a survival bias, you know. I mean, that biases us to be just kind of threat sensitive, and maybe less able to just be, like, joy sensitive.

ELISE: Have you met anyone, in all of your work, as you've gathered these stories from so many different fields, have you met, even if it's only in pages, someone who has sort of successfully overridden that tendency?

AMY: I think, yes. I don't think it's the case that anyone, certainly anyone I've met or studied, would be sort of 100%, I'm going to use the word enlightened, you know, learning oriented all the time. But I have met people who are just more consistently able to kind of catch and correct their thinking errors, if you will. I didn't know him super well, but I think it's fair to say Maxie Maultsby was one of them, an African American psychiatrist who studied under Albert Ellis, studied cognitive behavior therapy, then sort of rebranded and tweaked it into something he called rational behavior therapy. The main idea was, it wasn't for people who were really struggling and who were in a clinical situation with a psychiatrist, but for sort of all of us, in our more day-to-day lives, who were guilty of unhealthy, unhelpful thinking about the various things that happen in our lives. And he believed we could do better. Now, I have to say, I've met very few people, with the possible exception of Chris Argyris, who were as rational as Maxie, you know, who just had the capacity to be dispassionate about things that go wrong, go right. And they can't have been born that way, right? They had to have trained themselves and then gotten good at it, you know, the way an elite athlete is just so much better than a normal athlete at doing what they do, because they practice.

ELISE: Yeah. And he is, was it RBT?

AMY: Rational Behavior Therapy.

ELISE: And can you explain how it interrupts that pattern, or how, I think you gave the example, the story of the guy who was learning how to play bridge?

AMY: Yes. He was a high school student, a very good student and good athlete and so on, and it's the Minneapolis area, cold winters, and I guess his friends were into the game of bridge, and they asked Jeffrey to, you know, join them. And of course, there's that challenge of trying to learn something new, especially something hard like the game of bridge. He's not good at it right away. And he was really frustrated by what he called his mistakes, which technically were mistakes. So he was going to quit, because it just wasn't fun. And he was certainly not making it fun for his friends either. And then he just happened to take a course at his school on this rational behavior therapy, not because he was connecting it to the bridge game, but because he thought it sounded interesting. And it taught him that, you know, when you're frustrated, when you encounter a problem or a mistake or a challenge, you have to kind of pause and challenge your automatic thinking about the situation. Like, this is really bad. I made a mistake. You know, I'm stupid, or, it shouldn't have happened. And say, well, wait a minute, let's challenge that thinking, which is very spontaneous and, you know, almost just happening.

It's not like you're generating it deliberately, but anyway, pause and challenge that thinking and say, wait, this is a brand new game, an incredibly sophisticated one, and I've only just started learning it. There's no reason on earth to imagine I should be an expert at it yet. Maybe the mistakes that I make are just bits from which I have to learn to get better. And, you know, everything I'm saying right now is so obvious, right? But we don't spontaneously react to our shortcomings and mistakes that way. But he was just applying the lessons from the course to the bridge situation, and it changed how he thought about it, right? He reframed the thing, it's a sort of growth mindset, right, from evidence that I'm no good to evidence that this is, in fact, hard, and I'm a learner, a beginner. And that, of course, made it more fun and made the missteps useful data from which to learn. And gradually he got better at it, enjoyed it more, his friends enjoyed him more. And so, you know, it's a simple example, but I'm sure, you know, all our listeners have stories or situations in their life where, you know, you get frustrated, you want to give up, you make that conclusion that you're just not good at this, when in fact, you're a beginner.

ELISE: Yeah. No, you're a failure. Meanwhile, no, I'm learning. I mean, it is true. You said the word fun, and I think even just thinking about the different contexts in which you discuss failure in the book, from the automotive assembly line to Google X. It's like, when there's an element, and we talked about this idea of co-creation, when you can get enough distance to be like, this is fun, this is a game, this is a learning opportunity, this is iterative, like, what's gonna happen? That's fun. Rather than, I'm checking the boxes to ensure a certain outcome, I'm in threat, I am scared.

AMY: It's a completely different mindset.

ELISE: Yeah. No, it's true. Well, thank you for your book and all of your work. You can put almost anything into this context. So, it's obviously kept you busy for a career, right?

AMY: It has. And can I just say thank you for reading it so thoughtfully. I mean, what a joy it is to talk with you, and you're saying, oh, but you wrote this and you said that, and there's this story, and remind us of that one. I'm like, wow, that's what a writer longs for, as you know: a good reader.

ELISE: I always love a book of case studies, particularly when it spans industries, and she tells a lot of great stories in the book, including the story of the creation of Veuve Clicquot, the champagne, and the number of failures endured while it was coming to market. And she really can't overstate, nor can I, the importance of psychological safety, which I think is one of those ephemeral concepts that's essential to getting everyone out of a threat response, it really, really is. She writes about this as “psychological safety, which means believing it's safe to speak up, is enormously important for feeling a sense of belonging. But belonging is more personal while psychological safety is more collective. It is conceptualized in research studies as an emergent property of a group, and I think it is co-created by individuals in the groups in which they wish to belong.” So may we all think about the psychological safety we are creating for each other and therefore for ourselves. To me, it seems like one of those concepts that's contagious: a group has it, collectively, or they don't, but may it be the positive type of contagion. Alright, I'll see you all next week.
