Even without attempting to create a click-bait title, as soon as I put anything as “greater than” evidence-based practice (EBP) – I know there are readers posturing to disagree with everything written here. And I don’t blame you; I was an SPT and early DPT who was born entirely into the EBP mentality, steadfastly clinging to the systematic appraisal of voluminous research to reinforce my practice patterns. I became a wordsmith of “evidence-enhanced,” “evidence-informed,” and “research-driven” communiqués, both with patients and in the professional arena – but then something changed. I became a business owner – a cash-pay clinic, at that. And despite my impressive ability to cite clinical trials for almost everything I did, it quickly became clear there is something more important to my clients, and ultimately to myself as a clinician.
It’s a debate some may contest and others already appreciate as obvious. And honestly, we all know there is a happy medium here. Certainly, research is one of the more important drivers of educational advancement, professional substantiation in a multi-disciplinary system, and the continued refinement of any clinician’s mindset and skillset. But if we are quite literally discussing what one should base their practice patterns and business model upon – results > research.
For the context of this discussion, I’m not even referring to results as outcome measure scores or pain scale improvements. Step out of your academic and clinical mind for a minute and analyze the business metrics of your clinical performance. In a cash-pay model, your results translate to the ability to get people better, sooner – and to creating a memorable experience.
So which results-based metrics matter? Here’s my top 6.
1) Word of mouth referrals – makes sense; a first-degree referral from a pleased patient will always be central to a successful business…of any kind.
2) Return clients – not in the context of non-improvement; rather, if you can help their shoulder injury, they’ll be back for their knee.
3) Online reviews – sounds silly, but this is the digital age. A Facebook shout-out or informative Yelp review will do more for your business than that print ad or flyer on your front desk.
4) Short episodes of care – this may be the furthest departure from the “traditional” PT model, but in a consumer culture of immediacy, if you have the reputation and the stats to support that you get people better, faster…your business will grow.
5) Retention – converting that unique, first-time patient into a lifelong customer. This doesn’t mean you can miraculously improve everyone in one visit; in fact, a thorough evaluation and the candor to admit you can’t help every single diagnosis will earn you more rapport in the long run.
6) Complaints – probably the most challenging to accept with our egos involved, but negative feedback can do more to adapt our practice than being showered with compliments all day. Thank your critics for affording you the ability to make things better moving forward. Otherwise, suffice it to say that if you are receiving numerous complaints – you need to change something.
Start tracking these personally or as a clinic for 3-6 months, and you’ll have one of the most accurate and important performance reviews of your career.
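For clinics that already keep even a simple visit log, here’s a minimal sketch of how a few of these numbers could be tallied. This is purely illustrative – the log format, field names, and sample values are all assumptions, not a prescribed system; a spreadsheet works just as well.

```python
# Illustrative only: a tiny episode-of-care log with assumed field names.
# Each record represents one completed episode of care.
episodes = [
    {"patient": "A", "referral": "word_of_mouth", "visits": 4, "returned": True,  "complaint": False},
    {"patient": "B", "referral": "online_review", "visits": 6, "returned": False, "complaint": False},
    {"patient": "C", "referral": "word_of_mouth", "visits": 3, "returned": True,  "complaint": True},
]

total = len(episodes)

# 1) Word-of-mouth referrals as a share of new clients
wom_rate = sum(e["referral"] == "word_of_mouth" for e in episodes) / total

# 2) Return clients: came back for a new issue after a completed episode
return_rate = sum(e["returned"] for e in episodes) / total

# 4) Episode length: average visits per episode of care
avg_visits = sum(e["visits"] for e in episodes) / total

# 6) Complaints per episode
complaint_rate = sum(e["complaint"] for e in episodes) / total

print(f"Word of mouth: {wom_rate:.0%} | Returns: {return_rate:.0%} | "
      f"Avg visits: {avg_visits:.1f} | Complaints: {complaint_rate:.0%}")
```

Reviewed quarterly, even rough numbers like these show trends – is your average episode shortening, is word of mouth growing – without any outcome-measure paperwork.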
Still not sold? Here are three reasons why I see a “results-based” practice as more practical and more important than clinging to an evidence-based adage.
1. It’s multi-modal.
Trying not to be too dogmatic about a single treatment technique, modality, or evaluative system, let’s all agree on one thing – clinical results are multi-factorial. If you’re asking me what a “perfect” clinician looks like, they’d have bits and pieces of manual therapy (I’m of the mind that putting hands on our patients is important), TNE, functional/biomechanical training, modalities, and lifestyle coaching. Essentially: the ability to make patients feel better or move better quickly, training on how to improve their condition long-term, and education with an emphasis on patient ownership of their outcomes.
One thing an RCT can never do is evaluate a complex, multi-modal treatment approach. By design, clinical trials compare a single technique against a control group or against another single technique. That will never replicate a true practical environment or clinical mindset, where we employ skill sets with incalculable variability. Nor should it.
2. It’s more immediate.
One detriment to high-level, peer-reviewed clinical research data is that it takes years to reflect clinical practice – especially with the healthcare innovations and technologies of today. By the time there are enough high-level RCTs to conduct a systematic review, it’s safe to say the technique has already been adopted as mainstream physical therapy. Research is necessary and important, but to consider it the basis for your entire practice is a slippery slope.
Comparatively – whether it’s a good thing or a bad thing – our clientele and the “market” will give us data on how effective a treatment or modality is years before research provides the same endorsement. Suffice it to say, in a cash-based scenario you learn very quickly (within months to a year) whether a novel treatment is producing substantially better results or not.
3. It removes our personal bias.
Inherently, we tend to search for clinical trials, con-ed systems, and groups of clinicians who agree with what we already do and believe. Or there’s the more challenging scenario: something works for patient A but not for patients B, C, and D. In either case, market feedback can provide an unbiased snapshot into whom and what you’ve been treating well – and the opposite. We’ll save patient expectation for another post, since we know it is likely the #1 prognostic variable for clinical success – but here’s what we do know: patients may like us, but they don’t want to keep coming to see us. They just want to feel better. If they do, they’ll tell you. And if they don’t, they’ll TELL you.
What this conversation comes down to is not that research integration is a negative – because it’s not, at all. It comes down to looking through the lens from the opposite direction. Instead of thinking, “it’s evidence-based, so it’ll help my patients get better and be happy” – start realizing, “my patients are happy and getting better, so this is working.” Or the sometimes more realistic, “my patients aren’t getting better, so something needs to change.”
Rather than expecting certain results from a treatment product, start to see whether your results support your clinical product.
This conversation comes at a unique point in physical therapy practice, as there seems to be a renaissance of younger clinicians drawn to more specialized practice, cash-based models, and the integration of PT into population-specific environments. There is a lot of unknown initially, and a departure from some of the rigid infrastructure of how PT was defined even 10 years ago – but perhaps out of necessity.
Whether you see this perspective as blatantly obvious or incredibly impractical – I challenge you to test it for yourself. Create a satisfaction survey, track direct referrals and the conversion of initial visits, and analyze your social media streams for the good and the bad – that last one being more important today than ever before. If your evidence-based practice is working, that’s awesome – you also have a results-based practice! Otherwise, a results-based practice will be a new lens for many clinicians – it can be humbling, or it can be encouraging. But ultimately it will help us do what we do best – help people live happier, healthier lives.
As always, thanks for reading and feel free to follow me personally @DPTwithNeedles (IG/Twitter) or our company @iDryNeedle on (IG/Facebook) + @USdryneedling (Twitter)!
Paul Killoren PT, DPT