Hunter's Closed Loop Thinkers

Hiker of the Woods

Active Member · Messages: 623
As many of you know, another passion of mine, besides hunting and scouting, is wildlife and habitat management. That, along with hunting techniques and tactics, is the main reason I visit different hunting forums and social media pages. I'm currently reading Matthew Syed's "Black Box Thinking: Marginal Gains and the Secrets of High Performance" and thought there was a lot in his book that applies to things I have seen over the years with hunters' ideas about wildlife and habitat management. Below are some passages I have taken out of his book, with hunters and game & habitat management worked into them. Hopefully we all, as hunters, organizations, and agencies, will learn to be Black Box Thinkers with open loop systems.

From the book, edited to bring in hunters and game & habitat management:

Most hunters seem to be in a closed loop when it comes to game and habitat management. A closed loop is where failure doesn't lead to progress because information on errors and weaknesses is misinterpreted or ignored; an open loop does lead to progress because the feedback is rationally acted upon.

Hunters, as a whole, have a deeply contradictory attitude to failure when it comes to game and habitat management. Even as we find excuses for our own failings about how game and habitat should be managed, we are quick to blame others such as state wildlife agencies, hunting organizations and environmental groups.

But this has recursive effects, as we shall see. It is partly because we are so willing to blame others for their mistakes that we are so keen to conceal our own. We anticipate, with remarkable clarity, how people will react, how they will point the finger, how little time they will take to put themselves in the tough, high-pressure situation in which the error occurred. The net effect is simple: it obliterates openness and spawns cover-ups. It destroys the vital information we need in order to learn.

When we take a step back and think about failure more generally, the ironies escalate. Studies have shown that we are often so worried about failure that we create vague goals (such as "removing predators will mean deer and elk everywhere"), so that nobody can point the finger when we don't achieve them. We come up with face-saving excuses, even before we have attempted anything.

We need to redefine our relationship with failure, as hunters, as organizations, and as agencies. This is the most important step on the road to a high-performance revolution: accelerating progress toward the best game and habitat management possible. Only by redefining failure will we unleash progress, creativity and resilience.

Nobody wants to fail; we all want to succeed. But at a collective level, at the level of systemic complexity, success can only happen when we admit our mistakes (this includes hunters' ideas of game and habitat management), learn from them, and create a climate where it is, in a certain sense, "safe" to fail.

In aviation, pilots are generally open and honest about their own mistakes (crash-landings, near misses). The industry has powerful, independent bodies designed to investigate crashes. Failure is not regarded as an indictment of the specific pilot who messes up, but as a precious learning opportunity for all pilots, all airlines and all regulators.

And yet how does this happen? How is learning institutionalized in the aviation system (given that pilots, regulators, engineers and ground staff are dispersed across the world), how is an open culture created, and, most importantly of all, how can we apply the lessons beyond aviation to the hunters, organizations, and agencies who see, talk and act upon game and habitat management?

We have seen that among hunters, the culture is one of evasion when it comes to science, game and habitat management. Peer reviewed scientific papers about wildlife and/or habitat are dismissed as "one-offs" or "a conspiracy". This is the most common response to failure in the hunting world today.

In aviation, things are radically different: learning from failure is hardwired into the system.

All airplanes must carry two black boxes. Instead of concealing failure, or skirting around it, aviation has a system in which failure is data rich. That data is used to lock the industry onto a safer path. And individuals are not intimidated about admitting to errors because they recognize their value.

As Amy Edmondson, a professor at Harvard, puts it: "Most large failures have multiple causes." Eleanor Roosevelt said: "Learn from the mistakes of others. You can't live long enough to make them all yourself."

Hunters don't even recognize the underlying problem with personal beliefs about game and habitat management that are formed without science, because from the first-person perspective it didn't exist. That is one of the ways that closed loops perpetuate: when people don't interrogate errors, they sometimes don't even know they have made one (even if they suspect they may have). The problem is not a lack of hunters' motivation to see better game and habitat management, but the limitations of human psychology.

This, then, is what we might call "black box thinking". For agencies beyond aviation, it is not about creating a literal black box; rather, it is about the willingness and tenacity to investigate the lessons that often exist when we fail, but which we rarely exploit. It is about creating systems and cultures that enable agencies and hunters to learn from errors, rather than being threatened by them.

These failures are inevitable because the world is complex, and we will never fully understand its subtleties. Failure is the signpost. It reveals a feature of our world we hadn't grasped fully and offers vital clues about how to update our models, strategies and behaviors. From this perspective, the question often asked in the aftermath of an adverse event (a mule deer decline), namely "can we afford the time to investigate failure?", seems the wrong way around. The real question is "can we afford not to?"

We will all endure failures or errors from time to time. And it is often in these circumstances, when failure is most threatening to our ego, that we need to learn most of all. Practice (time in the woods) is not a substitute for learning from failures and errors; it is complementary to it. They are, in many ways, two sides of the same coin.

"The history of science, like the history of all human ideas, is a history of ... error," Popper wrote. "But science is one of the very few human activities, perhaps the only one, in which errors are systematically criticized and fairly often, in time, corrected. This is why we can say that, in science, we learn from our mistakes and why we can speak clearly and sensibly about making progress."

Most closed loops exist because hunters deny failures and errors, or try to spin them with their own game and habitat management ideas versus scientific studies. With pseudosciences the problem is more structural. They have been designed, wittingly or otherwise, to make failure impossible. That is why, to their adherents, they are so mesmerizing. They are compatible with everything that happens. But that also means they cannot learn from anything.

This hints, in turn, at a subtle difference between confirmation and falsification. Science has often been regarded as a quest for confirmation. Scientists and biologists observe nature, create theories, and then seek to prove them by amassing as much supporting evidence as possible. But we can now see that this is only part of the truth. Science is not just about confirmation, it is also about falsification. Knowledge does not progress merely by gathering confirmatory data, but by looking for contradictory data.
How can hunters tell if what agencies are doing is right or wrong, or if their own personal ideas about what should happen would be best for game and habitat management? Where is the feedback to hunters? Most hunters gauge how game and habitat are responding to current or past management not with objective data, but by observing them while driving or hiking in the woods. That data is highly unreliable. After all, game and habitat might be responding to something, or to multiple things, hunters don't even see or know about.

The internet is a library full of free scientific studies on game and habitat management, depending on what species and habitat hunters want to see improvement in. Hunters who take the time to read through these studies receive instant feedback on their judgements. Hunters' initial ideas about game and habitat management may fail more often once tested against the studies, but this is precisely why they would learn more.

Do hunters study their possibly failed or erroneous ideas on game and habitat management? The tendency not to creates a blind spot. This blind spot is not limited to science; it is a basic property of our world, and it accounts, to a large extent, for our skewed attitude to failure. Success is always the tip of the iceberg. We learn in-vogue theories, we fly in astonishingly safe aircraft, we marvel at the virtuosity of true experts.

But beneath the surface of success, outside our view, often outside our awareness, is a mountain of necessary failure.

Science has a structure that is self-correcting. By making testable predictions, scientists are able to see when their theories are going wrong, which, in turn, hands them the impetus to create new theories. But if scientists as a community ignored inconvenient data, or spun it, or covered it up, they would achieve nothing.

Science is not just about a method, then, it is also about a mindset. At its best, it is driven forward by a restless spirit, an intellectual courage, a willingness to face up to failures and to be honest about key data, even when it undermines cherished beliefs. It is about method and mindset.

The difference between aviation and hunting is sometimes couched in the language of incentives. When pilots make mistakes, it can result in their own deaths. When a hunter makes a mistake about game and habitat management based only on hiking through the woods, it results in nothing. That, the argument goes, is why pilots are more motivated to reduce mistakes.

But this analysis misses the crucial point. Remember that pilots died in large numbers in the early days of aviation. This was not because they lacked the incentive to live, but because the system had so many flaws. Failure is inevitable in a complex world. This is precisely why learning from mistakes and errors is so imperative. Systems that do not engage with failure struggle to learn.

The phenomenon of cognitive dissonance is often held up as a testament to the quirkiness of human psychology. It is easy to laugh when we see just how far we are prepared to go to justify our judgements, sometimes to the point of filtering out evidence that contradicts them. It is all part of the elusive trickery of the human brain, it is said, a charming if occasionally troubling aspect of our eccentricity as a species.

Self-justification is more insidious. Lying to oneself destroys the very possibility of learning. How can one learn from failure if one has convinced oneself, through the endlessly subtle means of self-justification, narrative manipulation, and the wider psychological arsenal of dissonance reduction, that a failure didn't actually occur?

Most failures can be given a makeover by hunters. Hunters latch on to any number of justifications: "it was a one-off", "it was a unique case", "it's a conspiracy". Hunters will selectively cite statistics that justify their case, while ignoring the statistics that don't. They can find new justifications that did not even occur to them at the time, and which they would probably have dismissed until they, thankfully, conveniently, came to their rescue.

Our exploration of cognitive dissonance finally provides us with the answer. It is precisely in order to live with themselves, and with the fact that they made an error about game and habitat management in the first place, that hunters reframe the error. This protects a hunter's sense of self-worth and morally justifies the practice of non-disclosure. After all, why disclose an error if, in the hunter's mind, there wasn't really an error?

To put it a slightly different way, the most effective cover-ups are perpetrated not by those who are covering their backs, but by those who don't even realize that they have anything to hide.

Hunting culture's stigmatizing attitude towards error undermines our capacity to see evidence in a clear-eyed way. It applies to big decisions and small judgements: indeed, to anything that threatens one's self-esteem.

Instead of learning from data, some hunters are spinning it. It hints at the suspicion that hunters' efforts are directed not at creating new, richer, more explanatory theories, but at coming up with ever-more tortuous rationalizations as to why they were right all along.

A common misperception of the theory of cognitive dissonance is that it is about external incentives. Hunters have a lot to lose if they get their judgements wrong; doesn't it therefore make sense that they would want to reframe them? The idea here is that the learning advantage of adapting to a mistake is outweighed by the reputational disadvantage of admitting it.

A scientific mindset, with a healthy emphasis on falsification, is vital. It acts as a corrective to our tendency to spend our time confirming what we think we already know, rather than seeking to discover what we don't know.

As the philosopher Karl Popper wrote: "For if we are uncritical we shall always find what we want: we shall look for, and find, confirmations, and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain ... overwhelming evidence in favor of a theory which, if approached critically, would have been refuted."
 
This is a great post. Thanks for the info. I think there are some problems that may be overlooked by the analogies in your writing. One problem is endemic to most people, and some of the other problems are unique to hunters/conservationists.

First, the human problem. Many people in no way, shape, or form want to be accountable for their bad ideas. Until we as people can accept that we learn from our bad ideas and failures, we aren't ever going to move to the next step of analyzing and producing better ideas or, even more importantly, making sure bad ideas aren't repeated.

I noticed there was a specific reference to people's resistance to peer reviewed science. Here we have a very significant scientific dilemma. Twenty years ago, when I was in the Wildlife Department at Texas A&M, we were having a lot of discussions about how much "science" there is in the wildlife sciences. First, we were discovering that graduate students, when experimenting, were simply picking data that supported their hypothesis and finding excuses to disqualify data that contradicted it and could have left them without a conclusion. The fact is we take undergrad students and make them feel that, in order to receive a graduate degree, we want very specific results that make them look as if they have an all-knowing special insight into fauna that operate completely under a system of free will in ecosystems of incalculable variables, or flora that operate on a system of randomness so perfect computers can't comprehend its stability or volatility. This situation makes these students perfect for government jobs. Government jobs which make them answer to both elected and appointed officials, who have no concept of wildlife, yet DEMAND perfect answers. And what do they get? Imperfect answers distributed as peer reviewed science but packaged as nice simple answers.

And the problem gets worse and worse. As true field biologists leave the profession and are replaced by cookie-cutter scientists who have never been told "Your research is garbage", we have fewer peers worth their diplomas.

What problem does that create? Now we have politicians who think they have perfect answers because they have a paper written by "Dr. Dolittle", who can talk to the animals and knows what they are ALL thinking, AND THEY START WRITING LAW BASED ON THAT GARBAGE. As time goes on, this problem has compounded other problems, until people and hunters are questioning the credibility of any of it.

The fact is if most wildlife scientists had to give an honest answer to a wildlife question the answer would be "I don't know".

The next problem is a human problem, but it seems to be expressed at a much higher level in the wildlife community. When it comes to wildlife, EVERYBODY IS AN EXPERT. I am a taxidermist now. About once a year someone will tell me how to do my job better. When I was a practicing biologist, EVERYBODY knew more about wildlife than me. Take an average man. He won't tell his doctor how to take out a gall bladder. He won't get up in the middle of a flight, walk to the cockpit, and tell the pilot, "I got this". But he will have a subscription to Outdoor Life, spend one week a year hunting, and go to the mat with any seasoned field biologist over what the population density of black panthers in east Texas is. :D

Again thanks for posting. I would love for this thread to continue on. There could be some great ideas from it.
 
Not only are hunters reluctant to discuss any negatives, but posts like this will earn you the label of anti-hunter.
 
No, you get labelled a troll, and people call you every name in the book and tell Founder you should be banned. :D
 
How in the world do you guys ever find the time to type that much? I only made it through a few paragraphs.
 
Scientific data or a scientific mindset sometimes comes from someone who was educated by professors with ulterior motives. Thus "peer reviewed" just means they all agree, which may or may not mean anything.

Too often the scientist draws a conclusion and then gathers information and data to support that conclusion. They've been brainwashed to the point they can't see it.

Very few hunters agree on the definition of success and failure, so there's that too.
 
In reply to Blacktail's original post.

Yes, after over two millennia of dialog between philosophers and scientists, "the demarcation problem" has yet to be resolved.
 
Most hunters are okay. I know plenty, and there are good and bad in that group. I hear lots of stupid stuff being said, and I hesitate to correct people (not facts necessarily, but attitude) because I don't want to be that guy who makes hunting camp awkward. Maybe it would be better if we all actually spoke up and changed the culture from "who shot the most and biggest" and "who has the best of this or that" and started saying "this guy supports the most rational stewardship plan", with actual data, and not speculation or, worse yet, decades-old speculation, as its primary evidence.
Maybe it's okay to have your personal misconceptions blown apart in pursuit of the best solution.
 
Those first two posts were dandies, and I have to admit I only read about two thirds of them. However, when it comes to wildlife management, it often gets pulled between groups with different interests.
One wants trophies while another wants opportunity, the game management agencies need $$ to operate, and companies want to sell us stuff. I've had a small company in the outdoor biz for eight years and talked with hunters all over. Until we start putting the game first and us second, we are going to struggle. I've said it many times: technology, the athlete hunter, and trophy hunting have put massive stress on game, at least in much of the west. We either need to live with more limited hunting seasons or we need to limit ourselves and our technologies. Starting ideas? How about no drones, no trail cameras, how about open sights only, how about traditional bows only. This may sound radical, and there would be mass money thrown against it, but if we made it harder on ourselves, and hence allowed more animal escapement, we'd have more game, longer seasons, and fewer restrictions.
I'd rather see a lot of bucks and make many stalks and not get a buck than only see one all week and shoot it from 500 yards. Common sense has a place in science also. Just a thought.
 
