Resulting (Outcome vs. Decision Quality)

A patient finishes their fifth physiotherapy session, feels significantly better, and quietly decides not to book their next appointment. Six months later, they're back, in more pain than ever, frustrated, and convinced the treatment 'didn't really work.' This pattern plays out in allied health clinics every single day, and the reason it keeps happening has nothing to do with treatment quality. It has everything to do with how the human brain evaluates decisions.

The Science Behind Resulting (Outcome vs. Decision Quality)

Resulting is a cognitive bias named and popularised by professional poker player and decision strategist Annie Duke in her book *Thinking in Bets* (2018). It describes the deeply human tendency to judge the quality of a decision by its outcome, rather than by the quality of the reasoning behind it. In other words, if something works out, we assume we made a smart call. If it doesn't, we assume we made a poor one, regardless of whether either conclusion is actually warranted. This is a fundamental error in how we evaluate our own choices, and it has profound consequences for patient behaviour in allied health settings.

The psychology behind resulting runs deep. Our brains are pattern-recognition machines, wired to draw quick causal connections between actions and results. When a patient stops physiotherapy at session five and doesn't immediately deteriorate, their brain files this under 'good decision.' The absence of a bad outcome feels like confirmation that stopping was the right call. What the brain fails to account for is the role of chance, timing, and latent risk. A person might stop treatment prematurely and feel fine for weeks or months, not because stopping was wise, but because the underlying condition simply hasn't had time to reassert itself yet. Duke describes this as confusing 'I got lucky' with 'I made a good decision,' and the distinction is crucial.

This bias is compounded by what psychologists call 'motivated reasoning.' Patients who have already decided to stop treatment, often for financial or logistical reasons, are primed to interpret their continued wellbeing as vindication. They're not actively lying to themselves; they're doing what human brains do automatically. The result is that the very patients most at risk of recurrence are often the least likely to feel any urgency about returning for follow-up care. From the outside, a practitioner can see the risk clearly. From the inside, the patient feels perfectly justified in their decision.

For allied health practices, the implications are significant. Research consistently shows that premature discharge is one of the leading contributors to condition recurrence across musculoskeletal presentations, chronic pain, and postural dysfunction. Studies in physiotherapy literature suggest that a substantial proportion of patients who discontinue care before completing a full treatment plan experience a return of symptoms within six to twelve months. Yet these same patients rarely connect their recurrence to their earlier decision to stop, because resulting has already closed that mental loop. Understanding this bias is the first step toward building re-engagement strategies that actually work.

The Research

One of the most compelling demonstrations of resulting comes from research into decision-making in high-uncertainty environments, most famously illustrated through Annie Duke's own analysis of poker hand outcomes and the work of Kahneman and Tversky on outcome bias. In a foundational study by Jonathan Baron and John Hershey published in the *Journal of Personality and Social Psychology* (1988), participants were asked to evaluate the quality of medical and financial decisions made by others. Crucially, the decisions were identical in their reasoning and available information; only the outcomes differed. Participants consistently rated the decision-maker more favourably when the outcome was positive, even when they were explicitly told the outcome was influenced by chance. The same decision, with the same logic, received markedly lower quality ratings simply because it happened to go badly.

This finding has been replicated across multiple domains and is considered one of the most robust demonstrations of outcome bias in the literature. For allied health practitioners, the implication is stark: your patients are evaluating their decision to stop treatment based on how they feel right now, not on whether stopping was actually a sound clinical choice. And no amount of telling them 'you should have kept coming' will overcome this bias unless you reframe the conversation in probabilistic terms that make the hidden risk visible.

How to Apply This in Your Practice

The most effective way to counter resulting in lapsed patients is to make the invisible risk visible before they disengage, and then to reinforce that framing when you reach back out. During treatment, practise explicit probability-based conversations. Rather than saying 'we recommend six more sessions,' try: 'Research shows that patients with your presentation who stop at this stage have a significantly higher rate of recurrence within six months. You might be one of the people who does fine, but we can't know that yet, and here's what it would cost you if you're not.' This shifts the patient's mental model from 'I feel better, so I'm done' to 'I feel better, but the decision about stopping is separate from how I feel today.' That distinction is the entire game.

When re-engaging lapsed patients via SMS or email, the message needs to do the cognitive work of separating outcome from decision quality. A generic 'we miss you, book now' message will not move someone who believes they made a good decision by leaving. Instead, try something like: 'Hi [Name], we wanted to check in. Many patients with your condition who stop care around the stage you did feel great initially; about 30% stay that way long-term. The other 70% find symptoms return, often worse than before. We'd love to do a quick reassessment to make sure you're tracking well. Takes 20 minutes, no obligation.' This message does several things: it validates their current experience (you might be fine), it introduces probabilistic risk without fear-mongering, and it offers a low-commitment next step.

From a workflow perspective, your practice management system should flag patients who disengage before their recommended treatment plan is complete. Set an automated trigger at the 30-day mark post-lapse, followed by a 90-day follow-up if there's no response. The 30-day message should be warm and curious: 'How have you been feeling since your last visit?' This opens the door without immediately invoking risk. The 90-day message is where you introduce the probabilistic framing more directly, because by then the patient has had enough time to either confirm their good outcome (and become complacent) or start experiencing early warning signs they may be minimising.

It's also worth training your front-desk and clinical team to use resulting-aware language during the discharge conversation itself. When a patient says they want to 'take a break and see how they go,' the response shouldn't be a passive 'okay, call us if you need us.' It should be: 'That's completely your choice. Can I share what we typically see with patients who pause at this stage? It might help you know what to watch for.' This positions your practice as a trusted advisor rather than a sales engine, and it plants a probabilistic anchor that the patient will carry with them. If their symptoms do return, they're far more likely to come back to you, because you predicted it without pressuring them.


Seeing It in Action

Marcus is a 41-year-old project manager who presented to a chiropractic clinic in Brisbane with chronic lower back pain stemming from long hours at a desk. He completed five sessions, experienced a marked reduction in pain, and felt confident enough to stop, telling himself he'd 'see how it goes.' His practitioner had recommended twelve sessions in total, but Marcus felt that was excessive given how well he was feeling. He didn't book a follow-up. The clinic sent him a generic reminder two weeks later; he didn't respond. Six months passed.

Using a structured re-engagement workflow, the clinic sent Marcus a personalised message at the 90-day mark: 'Hi Marcus, it's been a while since we saw you and we hope you're tracking well. We wanted to reach out because patients with your presentation who stop around the five-session mark often feel great initially, but research suggests a significant proportion experience a recurrence, sometimes more severe than the original episode, within six months. You might be going great, which is wonderful. But we'd love to offer you a 20-minute check-in to see where things are at, no cost, no pressure.' Marcus read it twice. He'd actually started noticing some morning stiffness returning over the past few weeks but had been dismissing it. The message reframed his experience: maybe he wasn't as 'fixed' as he'd assumed.

He booked the check-in. The assessment confirmed early signs of recurrence, and Marcus re-engaged with a full treatment plan. More importantly, he later told his practitioner that the message had 'made him think differently about it'; he'd assumed feeling okay meant he'd made the right call, but the probabilistic framing helped him see that his temporary good outcome didn't validate his decision to stop. He completed the full plan and has remained symptom-free. The clinic retained a patient who had already mentally closed the door, simply by helping him separate how he felt from whether stopping was actually wise.

Your Action Plan

  1. Audit your current discharge process and identify all patients who left before completing their recommended treatment plan in the last 12 months; these are your highest-priority re-engagement targets, as they are most susceptible to resulting bias.
  2. Train your clinical team to use probabilistic language during sessions and at the point of disengagement; phrases like 'you might be in the lucky percentage, but here's what we typically see' are far more effective than generic recommendations to continue care.
  3. Build a two-touch automated re-engagement sequence in your practice management software: a warm check-in message at 30 days post-lapse, followed by a probabilistic risk-framing message at 90 days that distinguishes current symptom experience from decision quality.
  4. Craft condition-specific message templates that include realistic outcome statistics for your most common presentations (e.g., lower back pain, plantar fasciitis, rotator cuff issues), using qualified language where exact numbers aren't available; specificity builds credibility and counters the 'I'm probably fine' assumption.
  5. Offer a low-barrier re-entry point such as a short reassessment appointment, positioning it as a check-in rather than a sales conversation; this removes the commitment anxiety that prevents lapsed patients from responding and lets your clinical outcomes do the persuading.
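Condition-specific templates with a qualified fallback could be sketched as follows. The wording, condition keys, and structure here are illustrative only; any statistics you include should come from your own outcome data or the literature for your presentations.

```python
# Illustrative templates only; replace the hedged wording with real
# statistics for your patient population where you have them.
TEMPLATES = {
    "lower_back_pain": (
        "Hi {name}, patients with lower back pain who pause care early often "
        "feel great at first, but a significant proportion see symptoms "
        "return within six months. We'd love to offer a 20-minute "
        "reassessment, no obligation."
    ),
    "plantar_fasciitis": (
        "Hi {name}, plantar fasciitis commonly settles and then flares again "
        "when treatment stops early. A quick check-in can confirm you're "
        "still tracking well."
    ),
}

# Qualified fallback for presentations without specific figures.
GENERIC = (
    "Hi {name}, we wanted to check in since your last visit. Many patients "
    "who pause care early feel fine initially; a short reassessment can "
    "make sure that holds. No cost, no pressure."
)

def build_message(condition: str, name: str) -> str:
    """Pick a condition-specific template, falling back to qualified wording."""
    return TEMPLATES.get(condition, GENERIC).format(name=name)
```

The fallback matters because an unqualified claim for a condition you have no figures for undermines exactly the credibility that specificity is meant to build.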

Key Takeaway

Your lapsed patients aren't ignoring you because they don't value your care; they're ignoring you because their brain has already decided that feeling fine means they made the right call. Your job is to gently, respectfully dismantle that illusion using the language of probability, not pressure.

Related Principles