Hiker of the Woods
Active Member
- Messages
- 623
As many of you know, another passion of mine besides hunting and scouting is wildlife and habitat management. That, along with hunting techniques and tactics, is the main reason I visit different hunting forums and social media pages. I'm currently reading Matthew Syed's "Black Box Thinking: Marginal Gains and the Secrets of High Performance" and thought there was a lot in his book that applies to some of the things I have seen over the years with hunters' ideas about wildlife and habitat management. Below are some things I have taken out of his book, with hunters and game & habitat management added in. Hopefully we all, as hunters, organizations, and agencies, will learn to be Black Box Thinkers with open loop systems.
From the book, with edits to apply it to hunters and game & habitat management:
Most hunters seem to be in a closed loop when it comes to game and habitat management. A closed loop is where failure doesn't lead to progress because information on errors and weaknesses is misinterpreted or ignored; an open loop does lead to progress because the feedback is rationally acted upon.
Hunters, as a whole, have a deeply contradictory attitude to failure when it comes to game and habitat management. Even as we find excuses for our own failings about how game and habitat should be managed, we are quick to blame others such as state wildlife agencies, hunting organizations and environmental groups.
But this has recursive effects, as we shall see. It is partly because we are so willing to blame others for their mistakes that we are so keen to conceal our own. We anticipate, with remarkable clarity, how people will react, how they will point the finger, how little time they will take to put themselves in the tough, high-pressure situation in which the error occurred. The net effect is simple: it obliterates openness and spawns cover-ups. It destroys the vital information we need in order to learn.
When we take a step back and think about failure more generally, the ironies escalate. Studies have shown that we are often so worried about failure that we create vague goals (such as removal of predators will mean deer and elk everywhere), so that nobody can point the finger when we don't achieve them. We come up with face-saving excuses, even before we have attempted anything.
We need to redefine our relationship with failure, as hunters, as organizations, and as agencies. This is the most important step on the road to a high-performance revolution: increasing the speed of the best game and habitat management possible. Only by redefining failure will we unleash progress, creativity and resilience.
Nobody wants to fail; we all want to succeed. But at a collective level, at the level of systemic complexity, success can only happen when we admit our mistakes (this includes hunters' ideas about game and habitat management), learn from them, and create a climate where it is, in a certain sense, "safe" to fail.
In aviation, pilots are generally open and honest about their own mistakes (crash-landings, near misses). The industry has powerful, independent bodies designed to investigate crashes. Failure is not regarded as an indictment of the specific pilot who messes up, but a precious learning opportunity for all pilots, all airlines and all regulators.
And yet how does this happen? How is learning institutionalized in the aviation system (given that pilots, regulators, engineers and ground staff are dispersed across the world), how is an open culture created, and, most importantly of all, how can we apply the lessons beyond aviation to the hunters, organizations, and agencies that see, talk, and act upon game and habitat management?
We have seen that among hunters, the culture is one of evasion when it comes to the science of game and habitat management. Peer-reviewed scientific papers about wildlife and/or habitat are dismissed as "one-offs" or "conspiracy." This is the most common response to failure in the hunting world today.
In aviation, things are radically different: learning from failure is hardwired into the system.
All airplanes must carry two black boxes. Instead of concealing failure, or skirting around it, aviation has a system in which failure is data rich. These data are used to lock the industry onto a safer path. And individuals are not intimidated about admitting to errors, because they recognize their value.
As Amy Edmondson, a professor at Harvard, puts it: "Most large failures have multiple causes." Eleanor Roosevelt said: "Learn from the mistakes of others. You can't live long enough to make them all yourself."
Hunters don't even recognize the underlying problem with their personal, non-scientific beliefs about game and habitat management, because from the first-person perspective it doesn't exist. That is one of the ways that closed loops perpetuate: when people don't interrogate errors, they sometimes don't even know they have made one (even if they suspect they may have). The problem is not a lack of motivation among hunters to see better game and habitat management, but the limitations of human psychology.
This, then, is what we might call ?black box thinking?. For agencies beyond aviation, it is not about creating a literal black box; rather, it is about the willingness and tenacity to investigate the lessons that often exist when we fail, but which we rarely exploit. It is about creating systems and cultures that enable agencies and hunters to learn from errors, rather than being threatened by them.
These failures are inevitable because the world is complex, and we will never fully understand its subtleties. Failure is the signpost. It reveals a feature of our world we hadn't grasped fully and offers vital clues about how to update our models, strategies and behaviors. From this perspective, the question often asked in the aftermath of an adverse event (mule deer decline), namely "can we afford the time to investigate failure?", seems the wrong way around. The real question is "can we afford not to?"
We will all endure failures or errors from time to time. And it is often in these circumstances, when failure is most threatening to our ego, that we need to learn most of all. Practice (time in the woods) is not a substitute for learning from failures and errors; it is complementary to it. They are, in many ways, two sides of the same coin.
"The history of science, like the history of all human ideas, is a history of... error," Popper wrote. "But science is one of the very few human activities - perhaps the only one - in which errors are systematically criticized and fairly often, in time, corrected. This is why we can say that, in science, we learn from our mistakes and why we can speak clearly and sensibly about making progress."
Most closed loops exist because hunters deny failures and errors, or try to spin them, preferring their own game and habitat management ideas over scientific studies. With pseudoscience the problem is more structural. Pseudoscientific ideas have been designed, wittingly or otherwise, to make failure impossible. That is why, to their adherents, they are so mesmerizing. They are compatible with everything that happens. But that also means they cannot learn from anything.
This hints, in turn, at a subtle difference between confirmation and falsification. Science has often been regarded as a quest for confirmation. Scientists and biologists observe nature, create theories, and then seek to prove them by amassing as much supporting evidence as possible. But we can now see that this is only part of the truth. Science is not just about confirmation; it is also about falsification. Knowledge does not progress merely by gathering confirmatory data, but by looking for contradictory data.
How can hunters tell whether what agencies are doing is right or wrong, or whether their own personal ideas about what should happen would be best for game and habitat management? Where is the feedback to hunters? Most hunters gauge how game and habitat are responding to current or past management not with objective data, but by observing them while driving or hiking in the woods. This data is highly unreliable. After all, game and habitat might be responding to something, or multiple things, hunters don't even see or know about.
The internet is a library full of free scientific studies on game and habitat management, whatever species and habitat hunters want to see improvement in. Hunters who would take the time to read through these studies would receive instant feedback about their judgements. Hunters' initial ideas about game and habitat management may fail more often before reading the studies, but this is precisely why they would learn more.
Do hunters study their possibly failed or erroneous ideas about game and habitat management? Usually not, and that tendency creates a blind spot. This blind spot is not limited to science; it is a basic property of our world, and it accounts, to a large extent, for our skewed attitude to failure. Success is always the tip of the iceberg. We learn celebrated theories, we fly in astonishingly safe aircraft, we marvel at the virtuosity of true experts.
But beneath the surface of success- outside our view, often outside our awareness- is a mountain of necessary failure.
Science has a structure that is self-correcting. By making testable predictions, scientists are able to see when their theories are going wrong, which, in turn, hands them the impetus to create new theories. But if scientists as a community ignored inconvenient data, or spun it, or covered it up, they would achieve nothing.
Science is not just about a method, then, it is also about a mindset. At its best, it is driven forward by a restless spirit, an intellectual courage, a willingness to face up to failures and to be honest about key data, even when it undermines cherished beliefs. It is about method and mindset.
The difference between aviation and hunters is sometimes couched in the language of incentives. When pilots make mistakes, it can result in their own death. When a hunter makes a mistake in their ideas about game and habitat management, formed only from hiking through the woods, it results in nothing. That, the argument goes, is why pilots are more motivated to reduce mistakes.
But this analysis misses the crucial point. Remember that pilots died in large numbers in the early days of aviation. This was not because they lacked the incentive to live, but because the system had so many flaws. Failure is inevitable in a complex world. This is precisely why learning from mistakes and errors is so imperative. Systems that do not engage with failure struggle to learn.
The phenomenon of cognitive dissonance is often held up as a testament to the quirkiness of human psychology. It is easy to laugh when we see just how far we are prepared to go to justify our judgements, sometimes to the point of filtering out evidence that contradicts them. It is all part of the elusive trickery of the human brain, it is said, a charming if occasionally troubling aspect of our eccentricity as a species.
Self-justification is more insidious. Lying to oneself destroys the very possibility of learning. How can one learn from failure if one has convinced oneself - through the endlessly subtle means of self-justification, narrative manipulation, and the wider psychological arsenal of dissonance-reduction - that a failure didn't actually occur?
Most failures can be given a makeover by hunters. Hunters latch on to any number of justifications: "it was a one-off," "it was a unique case," "it's a conspiracy." Hunters will selectively cite statistics that justify their case, while ignoring the statistics that don't. They can find new justifications that did not even occur to them at the time, and which they would probably have dismissed until they - thankfully, conveniently - came to their rescue.
Our exploration of cognitive dissonance finally provides the answer: hunters reframe their errors precisely in order to live with themselves and with the fact that they made an error about game and habitat management in the first place. This protects a hunter's sense of self-worth and morally justifies the practice of non-disclosure. After all, why disclose an error if, in the hunter's mind, there wasn't really an error at all?
To put it a slightly different way, the most effective cover-ups are perpetrated not by those who are covering their backs, but by those who don't even realize that they have anything to hide.
Hunting culture's stigmatizing attitude towards error undermines our capacity to see evidence in a clear-eyed way. This applies to big decisions and small judgements alike: indeed, to anything that threatens one's self-esteem.
Instead of learning from data, some hunters are spinning it. This hints at the suspicion that hunters' efforts are directed not at creating new, richer, more explanatory theories, but at coming up with ever-more tortuous rationalizations as to why they were right all along.
A common misperception of the theory of cognitive dissonance is that it is about external incentives. Hunters have a lot to lose if they get their judgements wrong; doesn't it therefore make sense that they would want to reframe them? The idea here is that the learning advantage of admitting a mistake is outweighed by the reputational disadvantage of admitting it.
A scientific mindset, with a healthy emphasis on falsification, is vital. It acts as a corrective to our tendency to spend our time confirming what we think we already know, rather than seeking to discover what we don't know.
As the philosopher Karl Popper wrote: "For if we are uncritical we shall always find what we want: we shall look for, and find, confirmations; and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain... overwhelming evidence in favor of a theory which, if approached critically, would have been refuted."