The Ominous Clinical Cutoff and Data Accuracy

If there is anything about PCOMS and the Outcome Rating Scale that raises the hair on our collective necks, it is the ominous-sounding but completely innocuous “clinical cutoff.” I have a slide in my presentations showing the classic French Revolution guillotine, because that is how folks often respond: as if we are decapitating clients or cutting them off at the knees! Nothing, of course, could be further from the truth.

The “clinical cutoff” is a statistical term, part of an equation that identifies the score (on a given measure) that best differentiates a so-called clinical population (those seeking help from a therapist) from a so-called non-clinical population (those who are not). The clinical cutoff is simply the number representing the level of distress (what the ORS actually measures) that typifies people entering therapeutic services versus those who do not. It is not the truth with a capital “T,” says nothing about any ultimate way to view or describe clients, certainly nothing negative, and is merely a guideline for understanding what a score on a given measure means. With PCOMS, it is just a way to contextualize the client’s score, validate his or her experience, and promote a mutual understanding of what the ORS score means in relation to the client’s life and reasons for service. It is a first “validity check” to ensure that the ORS represents the client’s experience. It is not a “normative” way to evaluate clients or any such top-down, expert way of casting the “gaze” on those we serve. It is simply a shared, collaborative construction of meaning about how the client views his or her life so that a starting place for therapy can be established and then monitored for improvement and discussion.
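
For readers who want to see the mechanics, one standard way such a cutoff is computed is the formula from the reliable and clinically significant change literature (Jacobson and Truax), which weights each group’s mean by the other group’s standard deviation. Here is a minimal sketch in Python; the sample statistics are placeholders, not the actual ORS norms.

```python
# Illustrative sketch only: one standard way a clinical cutoff is computed
# (Jacobson & Truax's "criterion c"). Each group's mean is weighted by the
# other group's standard deviation. The numbers below are placeholders,
# not the actual ORS norms.

def clinical_cutoff(mean_clinical, sd_clinical, mean_nonclinical, sd_nonclinical):
    """Score that best separates the clinical and non-clinical distributions."""
    return (sd_nonclinical * mean_clinical + sd_clinical * mean_nonclinical) / (
        sd_clinical + sd_nonclinical
    )

# Example with made-up sample statistics:
print(round(clinical_cutoff(19.6, 8.7, 28.0, 6.9), 1))
```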

We have learned a lot about ORS/CORS scores since they were developed all those years ago, especially about high scores and data integrity. It has been quite a learning process and, I have to admit, I am neither a psychometrician nor a statistician. But I have operated on a “need to know” basis and have gotten to know a lot about PCOMS and ORS/SRS scores from both a clinical perspective (because I am a clinician first) and from a psychometric vantage point. We have learned that intake scores above 32 are not valid, period, even for kids. Speaking of kids, the clinical cutoff for 6-12 year olds is not 32, as originally reported in the 2006 initial validation study of the CORS. It is actually 28, so please use that score. This has only recently come to light, when I re-examined the data from that original study and looked at current data on kid scores. We didn’t eliminate the high, invalid scores back then because we didn’t know any better.

It doesn’t make sense, of course, that a person in therapy scores that high, not just over the cutoff but way over it. This requires that therapists work with clients to ensure the ORS represents what is actually happening in the client’s life, which means that therapists must integrate PCOMS and not just flick the form. If you look at the trajectory, or expected treatment response (ETR), for a first score of 32, the highest endpoint, or target score, is only a little over a point higher. If you are using an electronic system that plots such a trajectory, a change of 1 point on the ORS will show that you have met the target! How significant is a one-point change on the ORS? Right, not very. Such scores inflate reported outcomes and misrepresent effectiveness. That’s why Better Outcomes Now excludes any first-meeting score of 32 or above when analyzing the data. And that’s why we also identify all the validity indicators: scores of 32 or above, sawtooth patterns, and the number of clients entering over the clinical cutoff. Reporting accurate effectiveness is key to influencing funders.
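
To make the exclusion concrete, here is a minimal sketch of that kind of validity screen, assuming a simple data layout of my own invention; the only rule taken from the post is that a first-meeting ORS score of 32 or above is treated as invalid and excluded from analysis.

```python
# Minimal sketch of the validity screen described above. The data layout and
# names are illustrative; the rule itself (exclude first-meeting scores of
# 32 or above) is the one stated in the post.

INVALID_INTAKE = 32  # first ORS scores at or above this are not analyzed

def valid_cases(cases):
    """Keep only cases whose first ORS score is plausibly valid."""
    return [case for case in cases if case["ors_scores"][0] < INVALID_INTAKE]

cases = [
    {"client": "A", "ors_scores": [18.2, 22.4, 26.1]},
    {"client": "B", "ors_scores": [33.5, 34.0]},       # excluded: intake of 32+
    {"client": "C", "ors_scores": [24.9, 21.0, 27.3]},
]
print([c["client"] for c in valid_cases(cases)])  # ['A', 'C']
```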

Another way that outcomes are inflated is to use the old reliable change index (RCI) of 5 points. The RCI of the ORS is 6 points, as determined by an analysis of 400,000 administrations of the ORS and confirmed by the most recent randomized clinical trial (Slone et al., 2015). Yet another way to inflate outcomes is to “double dip.” A client who has changed 6 points has achieved reliable change. A client who started under the clinical cutoff, changed at least 6 points, and crossed the cutoff has achieved “clinically significant change.” Some inflate outcomes by counting clients in both categories, i.e., counting those who changed reliably and then counting some of those same clients again as having changed “clinically significantly.” Finally, effectiveness is also inflated by including any client who ends over the cutoff regardless of where they start. Please be aware of inflated outcomes and how they could hurt our cause. Please pay attention to data integrity and accurate reports of effectiveness.
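
As a sketch of how the categories can be kept mutually exclusive (no double dipping), the snippet below applies the rules as stated above: a 6-point gain is reliable change, and a client who starts under the clinical cutoff, gains at least 6 points, and ends over the cutoff shows clinically significant change. The cutoff value in the example is the child CORS cutoff of 28 mentioned earlier, used purely for illustration.

```python
# Sketch of a non-double-dipping classification using the rules stated above.
# Each client lands in exactly one category, so reliable and clinically
# significant change are never counted twice for the same person.

RCI = 6  # reliable change index for the ORS

def classify(first_score, last_score, cutoff):
    change = last_score - first_score
    if first_score < cutoff and change >= RCI and last_score >= cutoff:
        return "clinically significant change"
    if change >= RCI:
        return "reliable change"
    return "no reliable change"

# Example: started under the cutoff, gained 10 points, and crossed it.
# The cutoff of 28 (the child CORS value above) is used for illustration only.
print(classify(19.0, 29.0, cutoff=28.0))  # clinically significant change
```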
