
Biases, mindset and tools
A few recent conversations made me think about the approaches and tools we choose for delivering valuable work. Many people rely solely on the tools they have always used. They never change their ways of working. Or they change the labels, but not the actual approach.
Here is a snippet from the first conversation:
One of my ex-colleagues mentioned that his primary stakeholder, who happens to be the executive of the department, told him not to bother her with the processes and frameworks he was using. She hated any mention of Agile, Kanban or Jira. She needed detailed plans, roadmaps and estimates, and believed that other approaches didn’t work.
Why was that?
The other conversation happened during a recent event. A participating senior manager of a large organisation was concerned that stakeholders didn’t understand iterative approaches. He wanted to collect and use more data points to prove that work was happening. My personal observation was that he and his team lacked an iterative delivery mindset. Was his approach of relying on data right?
What seems to be the problem?
“Elementary, dear Watson!” (Actually, Sherlock Holmes never said those exact words in any of the stories. At least I never came across them, and I have read the Sherlock Holmes stories more than once; I enjoy them that much.)
These conversations remind me of a few cognitive biases, primarily the curse of knowledge and the Dunning-Kruger effect.
It might be true in both cases that one party assumed the other knew more than they actually did. My colleague was possibly talking in jargon-laden language that the business executive didn’t understand, so she decided it was all nonsense. All the while, the ex-colleague assumed that, being an executive, she knew more about what he was talking about. That’s the curse of knowledge.
It is also possible that the same executive believed she knew more about delivery approaches than she actually did. The ones she knew best were, to her, the better ones, simply because she knew more about them (sounds dumb, but that happens). That assumption automatically makes all the other approaches bad, at least in her eyes.
That’s the Dunning-Kruger Effect.
The third point I want to make is not about bias, but about data.
Data is ruining everything.
Well, not really. We rely on data for decision making. However, over-reliance on anything is bad, isn’t it? If you disagree with my assertion, I must remind you of the Challenger catastrophe. You can read about that here. If you read the report, pay special attention to Richard Feynman’s observations and findings. Every word there is insightful.
In a nutshell, what happened then was that NASA wanted to make the launch decision purely on data, while an engineer had a gut feeling that something was not right with the O-rings (rubber seals that stopped hot gases from leaking out of the rocket booster joints). That engineer didn’t have conclusive data to prove it, only a strong gut feeling.
We all know how the disaster ended: seven astronauts sadly lost their lives.
The senior manager in our scenario is focusing on data. It has been shown many times, through research and in real life, that statistics often fail to influence us as much as social proof does.
Not convinced?
Think about how many times you have made a decision based on a friend’s advice. We ask our family and friends, and read online reviews, when buying cars, houses, electronics and other things. We are looking for social proof.
So I’m not surprised that this person is hardly making a difference by offering data to executives.
What else is happening?
What do you think is happening in these scenarios? I’m looking for more insights before we jump into potential solutions.
Let me know either by commenting or emailing.