Making Powerful Decisions with Dr. Alan Barnard, Part 2

Continuing the conversation from last week, host Dan Barret and guest Dr. Alan Barnard explore how setting decision rules in advance can revolutionize your approach to business strategy. 

Dr. Barnard discusses the concept of "digital twins" and how they can be used to safely test decision-making strategies. Discover how AI is reshaping the way we make decisions, allowing us to partner with technology in new and exciting ways. 

Join us to learn more about real-world crises and how to prepare for unforeseen, high-impact events. 

Show Highlights:

  • Do you know about design decisions? [00:59]
  • The importance of setting decision parameters ahead of time [01:10]
  • Discover the work of Nassim Taleb on decision-making [04:12]
  • What is the role of digital twins in decision-making [07:03]
  • Learn about the evolution of decision-making processes [08:52]
  • This is what you need to know about TOC and Cynefin [16:06]
  • Discover the three levels of complexity [17:03]
  • Do you know the importance of trusting people? [20:48]

For more updates and my weekly newsletter, hop over to https://betterquestions.co/

To learn more about Dr. Alan Barnard, check out the websites below: 

https://harmonyapps.com/

https://dralanbarnard.com/

https://www.youtube.com/user/DrAlanBarnard

Transcript:

0:09  Hey guys, welcome back. You're listening to the second part of last week's episode. Let's jump back in. I was going to ask, how important is it to set up those decision parameters ahead of time? Because it sounds like what you're talking about is, yes, we could wait until inventory is out, and then we have to scramble, which is where the emotion comes into it, right? So how important is it to have some rules for the kind of decision making you're going to do in the future? How important is it to set those rules ahead of time? So the first step is, and this is what we've done with our research: if it's a business, there are, generally speaking, four types of decisions that we make. The first one is design decisions. If it's a business selling consumer goods, a design decision will be: do I make or buy? Do I have my own factory, or do I buy from somewhere else? If I have my own factory, where should I have it? How much capacity should it have? Where should I have distribution centers? This is all part of my system design decisions. On each of those decisions I can make mistakes, and for each of them I need a rule to decide when to make that decision and how to make it. Then I go into planning decisions: what commitments am I going to make? Do I make to order? Do I make to stock? These are all planning decisions. I have execution decisions, like: what's the priority when I get two orders, which one do I do first, and what will determine the priority? And then I have ongoing improvement decisions, which is: when do I change a rule? So if I frequently go into the red or the black on my inventory, stocked out or almost stocked out, a sort of near miss, when do I actually change the target stock level? Those would be ongoing improvement rules. 
So in our digital twins, what we've said is: for all the decisions that you make, here are the options for making each of those decisions, and now I can compare them. And by the way, if AI can train on our data and come up with a different way of making that decision, we can compare it directly in the digital twin, which is a very safe environment to stress test this thing, almost like a driverless car, right? That's where I need the digital twin: to be able to test whether a specific decision rule is better or worse than what I'm currently doing, or better or worse than an alternative.
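The rule comparison Dr. Barnard describes can be sketched in a few lines of Python. This is a toy model, not the Harmony software: the demand range, the 3-day lead time, and the reorder-point/order-up-to rule are all invented for illustration. The point is only that two candidate decision rules can be replayed against an identical demand stream and scored.

```python
import random

def simulate(reorder_point, order_up_to, days=365, seed=0):
    """Toy single-SKU digital twin: random daily demand, fixed 3-day lead time.
    Returns the number of stockout days under a simple min/max replenishment rule."""
    rng = random.Random(seed)
    stock, pipeline, stockouts = order_up_to, [], 0
    for _ in range(days):
        # Age outstanding orders and receive anything that has arrived
        pipeline = [(eta - 1, qty) for eta, qty in pipeline]
        stock += sum(qty for eta, qty in pipeline if eta <= 0)
        pipeline = [(eta, qty) for eta, qty in pipeline if eta > 0]
        # Serve today's demand; count a stockout if we can't cover it
        demand = rng.randint(0, 10)
        if demand > stock:
            stockouts += 1
        stock = max(0, stock - demand)
        # Decision rule: reorder when stock plus on-order drops below the reorder point
        position = stock + sum(qty for _, qty in pipeline)
        if position < reorder_point:
            pipeline.append((3, order_up_to - position))
    return stockouts

# Same seed = identical demand stream, so the two rules are compared fairly
print(simulate(reorder_point=15, order_up_to=40),
      simulate(reorder_point=30, order_up_to=60))
```

Swapping in an AI-proposed rule would just mean replacing the reorder condition and rerunning against the same seeds, which is the safe, driverless-car-style stress test described above.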

2:43  What would you say to someone who says... I'll put it this way. I actually had this conversation recently. Here in the United States, we had a series of wildfires in Los Angeles. It was an extremely bad situation: tons of property damage, lives lost, all this stuff, and there was a lot of criticism of the response. I'm not an expert in fire response at all, but from my reading on it, it seemed like they did what they would normally do given the situation, given what they knew, and things just happened to go spectacularly wrong in multiple ways at the same time. So it's like they did what they could do. And I was having this conversation with someone, and they said, well, the problem is we have a 100-year fire basically one every seven years now, right? It's like, yeah, the model for what we expect, which is based on the past, doesn't take the future into account. And there's this tension: how do we know what to expect when maybe the unexpected will happen? So what would you say to that argument about running things in simulation? I guess my real question is, how do we account for that in our decision making? If the future could possibly hold something for which I have no precedent, how do I account for that safely? Does that make sense?

4:08  Yeah, absolutely. I think this is where the work of Nassim Taleb has really contributed to this field. If you remember, he wrote a book called The Black Swan. A black swan is basically a rare but consequential event. And he says what's very interesting is that very few black swan events are complete unknowns. You can ask people the right question: what are the types of things that could happen that could be massively consequential? Now, you mentioned Google suspending accounts, right? You don't need a lot of experience to say that's something that could happen. I remember the first time it happened to me. I was using PayPal as my only payment processor, and suddenly they just froze my account and I couldn't access my cash. I asked what happened, and they gave me the same answer: we don't know. They had some algorithm that was looking for events it would flag as a risk, and it turns out I had received a large payment, which was unusual, so they just froze my account. Now, if it has happened to me or somebody that I know, and I'm preparing, doing scenario analysis for the next year or two or five, I can ask: can this happen? 

And once you say it can happen, then what Taleb argues is: stop wasting time trying to identify the probability of it happening. Even if it's a once-in-100-years event, once it can happen, you're basically asking yourself two questions. First, are there any warning signs that it's about to happen that would allow me just a little bit more time? And there normally are, right? There are usually some flags; it's very rare that something is completely out of the blue. Even if it is completely out of the blue, then the question is: if it happens, how can I check the consequences so I don't overreact to it? And secondly, what's the best way of either mitigating against it, if it's a negative stress, or capitalizing on it, if it's a positive stress? That's the approach we advise our customers to take: stop fooling yourself that trying to predict the probability of an event is useful at all. It's not. It's a 0 or 1: can it happen, yes or no? It doesn't matter how rare it was in the past. Can it happen, yes or no? If it can happen, are there any early warning signs that will give you more time? And if it does happen, what's the best way to respond? And that's again where digital twins can be very useful, because we can see, if the event happens, what the consequences are, whether we should react or not, and if we do react, what the best way of reacting is, either mitigating against it or capitalizing on it.
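The triage Dr. Barnard describes here (skip the probability estimate; keep the yes/no judgment, the early warning signs, and the prepared response) can be sketched as a simple checklist structure. The scenario names and fields below are invented for illustration and are not part of any real risk tool:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    can_happen: bool               # a 0-or-1 judgment, not a probability
    warning_signs: list = field(default_factory=list)
    response: str = ""             # mitigation (negative) or capitalization (positive)

def triage(scenarios):
    """Keep every scenario that can happen; flag those still missing
    an early-warning sign or a prepared response."""
    actionable = [s for s in scenarios if s.can_happen]
    gaps = [s.name for s in actionable if not s.warning_signs or not s.response]
    return actionable, gaps

scenarios = [
    Scenario("Payment processor freezes account", True,
             ["unusually large incoming payment"], "keep cash across two processors"),
    Scenario("Platform suspends ad account", True),
]
kept, gaps = triage(scenarios)
print([s.name for s in kept], gaps)   # the second scenario still needs preparation
```

The design choice mirrors the argument: there is deliberately no probability field, only whether it can happen, what would warn you, and what you would do.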

7:21  That's so fascinating. Yeah, Taleb, it's funny, I went to your website before the podcast and there's a picture of you with Taleb on your homepage, and I was like, yeah, that's cool. Actually, I was going to ask you about this. You've mentioned a couple of times Dr. Eli Goldratt, the creator of the Theory of Constraints and an early mentor of yours. I also saw a picture of you on your website with Dave Snowden, who created the Cynefin framework. And it was interesting, because I found Theory of Constraints sort of backwards: I came in through Cynefin. Snowden was my first introduction to systems thinking. And he's a very particular kind of personality; he always feels like he's angry at you, and it's not clear why, but he's really brilliant, he's funny. And then I went backwards chronologically and found Goldratt. And I always felt like those two were in conversation with each other in an interesting way, even though they weren't quite contemporaries, or at least not for long. So I wanted to ask you specifically about decision making since Goldratt's time. Goldratt very famously has the logical thinking processes that are part of the TOC toolkit, and I've always seen in your work the influence of the logical thinking processes, in your use of diagrams and the way you approach things. But I'm curious how you view decision-making processes as having changed, specifically because I think both you and Goldratt were interested in actually getting organizations to use this stuff. You're not an academic in the ivory tower saying this is the best way. So how do you view that as being different or evolving? Or is it not that different? I'm curious what your take on the landscape

9:11  is? Yeah. So, the first connection: when I first met Dr. Eli Goldratt, we got into a vicious argument about a really complicated question of how you deal with promotional activity if you're a consumer goods supplier. At some point he had to calm it down and say, listen, by the way, what's your goal? And he shared that his goal was to teach the world how to think. He said it sounds arrogant, not just ambitious but almost arrogant, but he was passionate about teaching people how to think, not what to think. And because of my background in decision making, I said, this is exactly it, I think we share a goal. I want to learn how to make decisions, not what decisions to make. Is there a process we can go through that will dramatically improve both the quality and the speed of our decision making? So that's the first commonality. He spent and invested a lot of time and money in developing the thinking processes; there's a whole range of them that you can learn about on the web: the current reality tree, the cloud, the future reality tree, etc. And originally the idea was that we use these thinking processes to improve the quality of our thinking so we get better outcomes. 

And it turns out that no matter which thinking processes you use, they are all prone to confirmation bias, and that's practically a very hard thing to overcome. So if I give you two effects, say, projects are late, and resources are not always available when needed, which one is the effect and which one is the cause? You could say, well, the resources not being available could cause the projects to be late. But also, a project being late, consuming more resources than expected, can cause resources not to be available. So now I'm in a loop, right? So our conclusion was that it takes an incredible amount of discipline to think logically and clearly, because you have to check and challenge every one of these connections. So the tools are not necessarily useful for helping us check or improve our thinking, but at least they help us communicate it. And essentially, where my evolution has been in the thinking process development, with the ProConCloud, is to ask: can you develop a thinking process that not only graphically helps me communicate what my assumptions are about cause and effect, but has the structure in it to help me check and challenge them in a very simple, practical way? So that's been my evolution. Recently I published my first children's book, My Impossible Decision. And that was my attempt to say: if you have come up with a way where, if you just follow these five steps, you can dramatically improve the quality and speed of your decision making, can you make it simple enough that you can teach it to a kid, so that it takes 13 minutes to read the book and learn the method? That's where I'm trying to stand on his shoulders and move forward: look, thinking is hard. Thinking is very scary. Given a choice, we don't want to think. That's why we operate mostly in automatic mode. It's less energy and it's less emotional. 
Right? To get somebody to actually sit down and think is

13:00  really hard. It burns calories. You feel tired afterwards. Yeah, having built a couple of current reality trees, at the end you just feel like someone rolled a steamroller over you. It's very challenging. Yep. So, to answer your question of where this is going more directly, here's where I'm super excited. We were always passionate about how you remove the friction, how you reduce the effort without compromising the quality of the thinking. And I think what has happened with AI is absolutely profound. So in the new version of our Harmony apps, we are embedding AI into them, and you'll have two ways of using it. In the one, you are the human in the loop. In the second way, the AI is in the loop. So for example, I can ask you, Dan: what is the problem that you're currently facing, and what options do you have to deal with it? Now you are answering the question about the problem. You're explaining to me why it's important to you and others. You're telling me the conflict that you face in dealing with it. And at the end of every one of those steps, what's the problem and why it's important, what's your conflict, what are the pros and cons of each, what's the innovation, etc., you can pause and ask the AI to check your work, because it has incredibly broad knowledge, not just of humanity but, over time, of you specifically. And it can help you and say, I think you missed this part, or you might have exaggerated that part. But there's also another way, which says: you know enough about me and people in general; I want you to do the analysis and pause after every step and let me check you.

14:51  Interesting. So you're just reversing the roles. Yeah, yeah. And chess is a good example of that, right? Who do you want to make the move, the human or the AI? In some cases, depending on what your goal is: if it is to learn, then you should make the first move and let the AI check your move, and you get instant feedback. Whereas if you're under real pressure to win, you just want the AI to do it, because it can explore trillions of options where maybe you can explore tens of options, and then you check that it didn't do anything very silly that could cause you to lose. So I think that is where decision making is going. We're going to become more and more comfortable working directly with AI, either us supporting the AI, or vice versa. That's such an interesting framework for thinking about it. I'm definitely going to experiment with that. All right, I have my one nerdiest question for you, and then we will wrap it up, because this is the most inside-baseball question I have. We brought up Cynefin. I'm very curious how you see TOC and Cynefin fitting together, or not. Because they both talk about constraints all the time, but they mean somewhat different things by it, and there's some consternation about people trying to fit these two things together. They feel like they go together; sometimes they don't. I'm very curious, as someone who's been in this field for so long, how do you see those two frameworks going together, if at all?

16:28  Cynefin, to some degree, is a classification of the types of complexity that we face, and based on which of the four it is, what the most appropriate response is. Can I follow a typical analytical approach, or a more sensing, trying-things-out, experimental mode? I think Theory of Constraints provides a different classification of complexity that I think is also useful. It has three levels. This is some of the work that I did during my PhD, and why I was interested to meet Dave and share it with him. I think you can also look at system complexity not based on the number of parts, or the interdependencies of the parts, or the uncertainties, or any of that stuff. You can base it on how many constraints there are in a system and how interdependent they are. So take that little dice game as a simple example. I have five resources. In the most complex case, these resources are all identical and totally coupled, so you have interactive constraints. There are five constraints, and theoretically, whichever has the lowest throw that day becomes the bottleneck, or the constraint, for that day. Gotcha. And it moves every day. So it's the most complex type of system, a chaotic system, because it has multiple interactive constraints. In that scenario, if I walk into a business where I see there are many things they don't have enough of, not enough demand, not enough cash, not enough supply, not enough management attention, and they all interact, that's chaos, right? How I deal with that situation would be very different from dealing with a situation where there are multiple constraints but they are decoupled. For example, think about three production lines: the bottlenecks on the lines are different, and I produce different products, so they're not coupled in any way. It's still more complex, because now I have three points where I can intervene to improve the system, right? 
I have to strengthen the weakest link of these three independent chains, compared to having only one process with one bottleneck. That, to me, is a different, and for me a more practical, way of thinking about complexity. Do I have a single constraint? Simple. Do I have multiple constraints, but decoupled? Complex. Do I have multiple constraints that are coupled, interdependent? Chaotic. There's a very different way of responding to those three if my goal is to continuously improve them.
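The dice-game contrast can be sketched as a quick simulation. This is a toy illustration, not Goldratt's exact exercise; the roll ranges are invented, but they show the difference between five identical, fully coupled resources, where the constraint jumps around daily, and a system with one structurally weakest resource, where it never moves:

```python
import random

def coupled_moves(n_resources=5, days=200, seed=1):
    """Five identical, fully coupled resources: each day the lowest roll
    is the constraint. Count how often the constraint shifts (chaotic)."""
    rng = random.Random(seed)
    moves, prev = 0, None
    for _ in range(days):
        rolls = [rng.randint(1, 6) for _ in range(n_resources)]
        today = rolls.index(min(rolls))   # ties go to the lowest index
        if prev is not None and today != prev:
            moves += 1
        prev = today
    return moves

def single_constraint_moves(days=200, seed=1):
    """One resource is structurally weaker (rolls 1-2 vs 3-6), so the
    constraint stays put: a simple, single-constraint system."""
    rng = random.Random(seed)
    moves, prev = 0, None
    for _ in range(days):
        rolls = [rng.randint(3, 6) for _ in range(4)] + [rng.randint(1, 2)]
        today = rolls.index(min(rolls))
        if prev is not None and today != prev:
            moves += 1
        prev = today
    return moves

print(coupled_moves(), single_constraint_moves())  # the second is always 0
```

In the coupled case the constraint shifts most days, which is why a single point of intervention doesn't exist; in the single-constraint case it never shifts, so the five focusing steps can target one spot.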

19:23  that makes a lot of sense. It almost feels like halfway between the two. Goldratt talked about degrees of freedom, right, as his measurement of how complex a system is, and Snowden has his own thing. I love him, but I understand about a third of what he says, you know? That's my favorite part of it. So it's really interesting, because, like you said, it is a little more practical in the sense of: okay, what do I do now? Okay, well, I know we're coming up on time, and I don't want to keep you super long, so I will end with this one question. I'm very curious. You talked about Goldratt's smoking. I famously go to Taco Bell here in the United States and get the world's worst burritos. That's what I do. What is a decision that you make that you feel is irrational, but you keep making it anyway? And if you can't think of one, that's okay. No, no, no, I have a whole list of them. One of my friends loves teasing me about being addicted to hopium. So an optimistic outlook on life, is that what that is? Not

20:30  just that. It's a more practical thing, which is that I have a deep sense of how important it is to trust people and to give them the benefit of the doubt, to think that someone is a good person, but that good people can make bad decisions. I think it's a very practical way of living my life, and in general it is quite robust: assume the best, but prepare for the worst. But what that can do is make you blind, right, where you literally put on those optimistic glasses. And unfortunately, our minds are brilliant assumption-validation engines, so you have to be careful: your beginning assumptions or beliefs are what your mind will try to validate. If you believe that women can't drive, guess what? You'll only see women that are bad drivers. So that's something I'm aware of that's irrational: I assume, a priori, that I can trust this person, that the person is good. And generally that's true, but you do get psychopaths, sociopaths, narcissists. Getting back to the Taleb situation, they often give you pretty good early warnings. But if you look at them through rosy eyes, and that could be a romantic partner, a business partner, sometimes even a customer, then maybe you tolerate it too long, where if you were really objective, you would have picked it up earlier and you would have fired them, or put clear boundaries in place. So that's my thing: I'm addicted to hopium. I'm a big romantic, so I've made a few bad decisions when it comes to selecting both romantic and business partners.

22:29  All right. Well, look, now I feel much better, because apparently I make the same type of bad decisions as Dr. Alan Barnard, which is: I hire fast and I fire slow in every part of my life, you know? Yeah, that's why I'm so excited about AI, right? Just sit on my shoulder and be like, hey, by the way... Yeah, there's a great talk, if you haven't watched it, a recent podcast by Rich Schefren, one of my good friends and just a super genius mind. He basically shared how he uploaded a lot of his journals and so on to ChatGPT and asked it: can you tell me what my limiting assumptions and beliefs are, my exaggerated fears? What should I be careful of? And the outcome was remarkable. Wow, interesting. I will look that up. Schefren, I believe, is spelled S-C-H-E-F-R-E-N, for people who want to look him up, just one F. Yeah, he's the one that runs the Steal Your Winners online mastermind. Okay, awesome. Also a big, big Theory of Constraints fan. Oh, really? All right, cool. I will look it up and link to it in the show notes. And for people who don't want to go find it: Dr. Alan Barnard, we did mention it up at the top, and I sort of got lost in the conversation and was going to bring it up again, but harmonyapps.com is where people can go to find the different pieces of software that you work on. What can they expect to find when they go there? Get everybody pumped to go over to harmonyapps.com.

24:15  Sure. So if you go there, at the top you'll see one of our main focuses: how to make complex decisions simpler. We're trying to provide people access to methods and apps that help them make better, faster decisions when it really matters. All the research about why we make, and often repeat, bad decisions has gone into these. If they're more interested in just learning, all my social media handles are Dr Alan Barnard, A-L-A-N B-A-R-N-A-R-D. I have a YouTube channel, I have LinkedIn, etc., and I do a lot of posting. There are hundreds of hours of free resources available on my YouTube channel and elsewhere. So that's the other option if they're just curious to learn more about this very, very interesting field. Yes, all right, awesome. Well, I highly recommend that people do that if they haven't checked out your work already. I am a paying customer. I've bought your courses, I've paid for your software, and I've had amazing experiences with all of those, so I cannot recommend them any more highly. Dr. Alan Barnard, thank you so much for sharing your time and expertise. I really, truly appreciate it, and I can't thank you enough. Thank you so much, Dan.

25:27  Man, I had such a blast talking with Dr. Barnard. He is one of my heroes, absolutely, for sure, and his work is absolutely incredible. Please do go check it out. You can go over to harmonyapps.com to see the software that his research lab has created. You can also learn more about him at dralanbarnard.com. As always, this podcast is coming to you straight from betterquestions.co. That is my personal blog. Every week I send out one email about the best things that I am learning and researching. You can go get that at betterquestions.co and learn more about what I'm doing. I also send updates whenever we put the podcast up, but I only send the one email a week, because, look, we all have enough email. I want to make one email that's super valuable to you, so go check that out. As always, I cannot tell you how much I appreciate having you here as a part of this podcast. It really means the world to me, and I will be talking to you very soon. All right. Cheers.