The Fear of Making a Decision Is Costing Your Product More Than a Bad Decision Would
Why over-researching is as dangerous as under-researching, and how to know the difference
Here is something that product management literature is reluctant to admit: the process of making a decision is frequently more damaging to a product than the decision itself.
Not because the decision does not matter. It does. But the elaborate machinery that surrounds decision-making in most product organisations (the discovery phases, the research sprints, the stakeholder alignment sessions, the framework applications, the prioritisation workshops, the external consultants brought in to validate what the team already suspects) consumes time and attention, and that cost compounds in ways that are rarely accounted for.

Every week a decision is delayed is a week the product does not improve. Every initiative that lives in a state of perpetual investigation is an initiative that is not in front of users, generating the feedback that would actually answer the questions the investigation is trying to answer. And every PM who has learned to mistake thoroughness for rigour is a PM who is, with the best of intentions, slowing their product down.
Today I want to make the case for a more calibrated approach to product decision-making: one that takes research seriously without treating it as a prerequisite for every decision, regardless of context, and that recognises the cost of indecision as clearly as it recognises the cost of getting something wrong.
Let us dive in.
The Myth of the Fully Informed Decision
There is a version of product management that exists in textbooks and certification programmes that looks something like this. A potential backlog item is identified. An extensive discovery process is initiated: user interviews, competitive analysis, market sizing, technical feasibility assessment, stakeholder alignment, and success metric definition. A PRD is produced, reviewed, and refined. Only then, weeks or months after the original idea surfaced, does the work begin.
This process is not wrong in principle. For the right decisions, at the right scale, with the right stakes, something like it is appropriate and necessary. A major platform re-architecture, a new market entry, a fundamental change to the core user experience: these are decisions where the cost of getting it wrong is high enough and irreversible enough to justify significant upfront investigation.
The problem is when this process gets applied uniformly, regardless of the size, reversibility, or strategic significance of the decision in question. When every backlog item, from a minor UX improvement to a critical infrastructure investment, goes through the same extended discovery cycle, the process stops being a tool for managing risk and starts being a substitute for judgment.
Key Principle: Discovery is a risk management tool, not a default requirement. The amount of investigation a decision warrants should be proportional to the cost and irreversibility of getting it wrong, not to the availability of the process.
The Real Cost of Indecision
The cost of making a wrong decision is visible and concrete. The feature did not perform as expected. The users did not respond as anticipated. Something needs to be changed or rolled back. These outcomes are uncomfortable, but they are also legible: you can see what happened, understand why, and adjust accordingly.
The cost of indecision is less visible, which makes it easier to underestimate. But it is real, and in many product organisations, it is larger than the cost of the wrong decisions that the indecision was designed to prevent.
When a decision that could have been made in a day takes a month, that month has a cost. The engineering capacity that could have been spent building is instead waiting for direction. The users who have a problem that could have been addressed continue to have it. The competitive window that existed when the idea was first raised may have narrowed or closed. And the PM, the team, and the stakeholders have all spent cognitive energy on a question that could have been resolved much earlier.
None of this appears on a dashboard. But it accumulates, and over time, it defines the pace and effectiveness of the product team as clearly as any metric that does.
A Practical Framework for Calibrating Your Research
The question is not whether to do research. It is how much research a given decision actually requires. And that question can be answered by working through a small number of specific considerations before defaulting to a full discovery process.
Can you release an MVP within one or two sprints?
If the answer is yes, the most efficient research methodology available is often to build the thing and observe what happens. Real user behaviour in response to a real product change is a richer and more reliable signal than any amount of anticipatory research, and it is available faster. Shipping a small, reversible version of an idea and measuring its impact is not recklessness; it is empiricism.
Are you in a competitive race?
Markets do not pause while product teams complete their discovery cycles. If a competitor has identified the same opportunity and is moving toward it, the value of being first is real and time-limited. In that context, a good decision made quickly is worth considerably more than a perfect decision made after the window has closed.
Has this been tried successfully in a comparable product?
When an adjacent product in a related market has already validated an approach, you have access to a form of research that did not cost you anything and took no time to conduct. The evidence exists. The question is whether your context is similar enough for it to be applicable, which is usually a much faster assessment than conducting primary research from scratch.
Is this a change that cannot be rolled back?
This is the most important question of the four, because it is the one that most directly determines the cost of being wrong. A decision that can be reversed quickly and cheaply if it does not work as expected carries a fundamentally different risk profile from one that commits the product to a direction that is difficult or impossible to undo. For the first category, moving quickly is almost always the right call. For the second, more caution is warranted.
If any of these questions point toward a low-risk, reversible, or already-validated decision, the appropriate response is to make the call, ship something, and refine based on what you learn. Released and imperfect is almost always more valuable than perfect and delayed.
The Poker Analogy, Which Is More Useful Than It First Appears
I find it helpful to think about product decision-making in terms of risk and stakes, in the way that a poker player thinks about when to play aggressively and when to be cautious.
In poker, the decision of how much to bet is not made based on how much time you have spent analysing the game in the abstract. It is made based on the specific cards you are holding, the pot size, the behaviour of the other players, and the cost of being wrong at this particular moment in this particular hand. A skilled player bets confidently when the conditions warrant confidence and treads carefully when they do not. They do not apply the same level of caution to every hand regardless of the cards.
Product decisions work the same way. When the stakes are low, the decision is reversible, and the evidence points in a clear direction, moving quickly is the right play. Spending three weeks on discovery before making a small UX change that can be A/B tested and rolled back in an afternoon is the equivalent of folding a strong hand because you are not completely certain about the river card.
When the stakes are genuinely high, when the decision commits significant resources, affects a large portion of the user base, or locks the product into a direction that will be expensive to exit, the calculus changes. In those situations, thorough investigation is justified because the cost of getting it wrong is large enough to repay the cost of the research.
The skill being described here is judgment: the ability to read the situation accurately and apply the right level of rigour to the right decisions, rather than applying maximum rigour to all decisions in the hope that this constitutes good practice.
Ask yourself: "Am I doing this research because it will genuinely change the decision I make, or because conducting research feels safer than making the call?" If the honest answer is the second, the research is not serving the product. It is serving your comfort.
How to Build a Simpler, More Effective Prioritisation Practice
The practical implication of all of the above is that most product teams would benefit from a lighter, more judgment-led approach to everyday prioritisation than the one they currently operate.
This does not mean abandoning structure. It means being honest about what structure is actually necessary for which decisions, rather than importing a one-size-fits-all process from a framework that was not designed with your specific context in mind.
For most backlog items, a clear-eyed assessment against three questions is sufficient: what is the expected value of this initiative relative to the current priorities? What is the effort required to test the hypothesis? And what is the cost of being wrong, in terms of reversibility and resource commitment? Those three questions, applied consistently and with genuine thought, will produce better prioritisation decisions than any scoring matrix that creates false precision around inputs that are inherently uncertain.
The frameworks, the fancy software, the external validation processes: these are tools. Like all tools, they are useful when applied to the problems they were designed to solve and wasteful when applied indiscriminately. A PM who has internalised the underlying logic of good prioritisation does not need the scaffolding for most decisions. They reach for it when the stakes are high enough to warrant it, and they move quickly when they are not.
The Fear Underneath the Over-Research
I want to name something that the process discussion above tends to obscure, because I think it is the real issue for many PMs who find themselves stuck in endless discovery cycles.
The elaborate research process is often not primarily about producing better decisions. It is about producing cover for the decisions that are made.
If a decision was preceded by three rounds of user interviews, a competitive analysis, a stakeholder workshop, and a formal prioritisation exercise, and it still does not work out, the PM has a defensible account of their decision-making process. If a decision was made quickly, based on experience and judgment, and it does not work out, the PM is exposed.
This is an entirely rational response to operating in organisations where the process of making a decision is evaluated as closely as the outcome. It is also, over time, corrosive to the quality of product work, because it incentivises the appearance of rigour over its substance, and it systematically delays the kind of fast, iterative learning that good product development depends on.
The PM who can make a well-reasoned decision quickly, communicate their reasoning clearly, and be honest about the level of certainty they had when they made it is a more effective product leader than the PM who conducts extensive research before every decision and arrives at conclusions that are marginally more defensible but significantly later.
Building that confidence requires, in part, building an organisational environment where a well-reasoned decision that does not pan out is treated as a learning rather than a failure. That is a cultural question as much as a personal one, and it is worth raising explicitly with the leadership above you if the current environment penalises speed more than it penalises stagnation.
The Bigger Picture: What Good Judgment Actually Looks Like
At its core, what I am describing is the development of product judgment: the ability to read a situation accurately, assess the real level of uncertainty, determine the appropriate level of investigation, make a call, and move forward with appropriate confidence.
This is not a skill that frameworks teach. It is a skill that develops through practice, through making decisions at various levels of certainty and observing their outcomes, through building a track record that allows you to calibrate your own instincts against reality over time.
The PM who is afraid to make decisions without complete information will never develop that calibration, because they will never have enough information to act. There is no state of complete information in product development. There is only a current state of understanding, a decision to be made, and a willingness to learn from what happens next.
That willingness, more than any research methodology or prioritisation framework, is what separates the PMs who move their products forward from the ones who keep their products in perpetual discovery.
Released is better than perfect. Not because perfection does not matter, but because release creates the feedback that makes the next version better, and perfect, in a product context, is almost always a destination that keeps moving.
"The goal of product research is not certainty. It is a level of understanding sufficient to make a decision worth making. Knowing the difference between those two things is most of the job." |
A question worth sitting with: what decision have you been deferring, nominally for more research, that you actually have enough information to make today?
The honest answer to that question, for most PMs, is more than one.