I’ve always been a data nerd. You know, one of those people who can rattle off a stat so innocuous that the listener’s first thought is “why would you possibly know that?” Well, as it turns out, little bits of information can actually lead a community-based supervision program to better-than-average compliance, decreased recidivism, and best practices that benefit all parties by conserving already tight resources and increasing community safety. Here are five reasons why looking deeper into data can give community-based monitoring programs a leg up and help create evidence-based practices.
Reason #1: If you don’t know your own data, who does?
A quandary to be sure, but realistically, the point of running any program is to get results, and then, presumably, to react to those results. Imagine if you drug tested a participant population, sent the collected tests to a lab, and only ever received feedback of positive or negative. Does that really give us the whole picture? Is that telling us what we really need to know? The short answer is no, not if you want to effect change. Knowing more than just a general result from testing or monitoring can help uncover trends that may not be visible when participants are spread across a number of case workers or even different parts of the same program. If your program does not collect this data and really look at it, you may be missing small things that could easily be corrected, or that, if left unchecked, could create huge problems going forward.
Reason #2: Identifying key data points can uncover potential shortfalls in programs.
Most companies that manufacture monitoring equipment or produce drug testing devices offer canned reports that tell you how many participants are actively testing or being monitored, how many were compliant the day or week or month before, and maybe help maintain your inventory. All important reports, some more important than others. But what those reports do not break down are the reasons why participants were noncompliant. You can never fully predict human behavior, but you can see patterns in groups, and those patterns can be used to modify programs to produce better results and increase the chances of a participant being successful.
Reason #3: Analyzing behavior of a large group can help pinpoint problem areas, literally.
I will go right into an example for this one. We help administer a large GPS monitoring program for an agency that also performs its own drug testing, which is not administered through our company. One day while our staff was at the agency on a site visit, the agency happened to have two positive test results for heroin. It also happened that both of those participants were on our GPS monitoring. No one at the agency put two and two together until our Account Manager asked whether they would like to see if those two participants had ever been at the same location in the last week. When the agency staff didn’t understand what he meant, he pulled out his laptop, selected the two clients, and in 15 seconds cross-referenced all of their travel and stops for the previous week. Sure enough, the two had been to the same house, one neither had any reason to visit, though on different days. Our Account Manager then ran that report for the entire caseload, cross-referencing 130 participants, and found that five other participants had visited that same house in the last week. Once those participants were called in for drug tests that inevitably came back positive, law enforcement took action and eliminated a drug house. Now imagine if that agency had been using our services across all of their programs. How many other scenarios could we have uncovered for them over the last month or year? Analyzing data not just in one program but across many can be a powerful tool.
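The cross-referencing our Account Manager ran can be sketched in a few lines. This is a simplified illustration, not our actual software: the participant IDs, addresses, and record format below are invented, and a real system would match GPS coordinates within a radius rather than exact address strings.

```python
from collections import defaultdict

# Hypothetical stop records: (participant_id, location, date).
# In practice these would come from the GPS vendor's data export.
stops = [
    ("P01", "123 Elm St", "2024-03-02"),
    ("P02", "123 Elm St", "2024-03-04"),
    ("P03", "456 Oak Ave", "2024-03-03"),
    ("P04", "123 Elm St", "2024-03-05"),
]

def shared_locations(stops, min_participants=2):
    """Group stops by location and flag any location visited by
    multiple participants, even if they were there on different days."""
    visitors = defaultdict(set)
    for pid, loc, _date in stops:
        visitors[loc].add(pid)
    return {loc: sorted(pids) for loc, pids in visitors.items()
            if len(pids) >= min_participants}

print(shared_locations(stops))
# → {'123 Elm St': ['P01', 'P02', 'P04']}
```

Starting from two known participants and widening to the full caseload is just a matter of feeding in more stop records; the grouping logic stays the same.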
Reason #4: You may be inadvertently setting up participants to fail.
A lot of agencies think unsupervised breath alcohol testing is the way to go: it’s inexpensive compared to other options, portable for the most part, and more often than not the testing is delivered in real time. All positive attributes for a testing program, and mostly correct. In our programs, more than 90% of all remote breath alcohol tests are taken in compliance with each agency’s testing protocols; they are on time, below the detection threshold set by the agency, and taken by the right person (the devices we use have facial recognition). But of course there is that 10% still lingering out there, and 9.5 percentage points of it are missed tests, which probably doesn’t surprise anyone. Guess which time frame is missed most often: 8 a.m. to 9 a.m. Using this data on a larger scale helps agencies know how to orient new participants on the obligation of testing. Since we can identify problem days and times, agencies can use that data to really drive home the importance of waking up and testing, or to modify schedules so participants do not miss a test while heading to work. Having a testing template, or a set of times to use for every participant, is fast and efficient, but it does not always yield the most compliant results.
Reason #5: You can have psychic abilities and actually predict possible problems.
Sounds crazy, right? It’s not, at least if you are using the right monitoring tool for your higher-risk alcohol offenders. I started in this business when transdermal alcohol monitoring (now commonly referred to as continuous alcohol monitoring, or CAM) was just starting out. On my first day of work with the pioneers of that technology, Alcohol Monitoring Systems, about 2,800 people were being monitored each day. Now that number is closer to 50,000 people worldwide across a couple of different types of devices. Over more than a decade with AMS (now SCRAM Systems) I had the chance to really dig into the data and find that there are a number of things we can do to improve outcomes and head off potential noncompliance. For example, across all of the CAM clients we monitor today, the average number of days before someone violates the program is 55. If a participant has not violated by week 6 or 7, a conversation about progress, reinforcing the necessity of staying sober, and encouragement from a case worker can go a long way toward keeping that client from becoming a statistic. By the same token, I have found that the earlier a participant violates the program, the greater the likelihood of extreme noncompliance, meaning three or more violations. Younger participants tend to be more compliant, 5% of participants will account for 40% of the violations, and case workers who are well versed in the technology and the program requirements see compliance that averages 10 percentage points higher than case workers who do not follow up appropriately. Counseling along the way for potential but not confirmed violations also reduces overall noncompliance in the program.
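That week-6-or-7 check-in can be turned into a simple daily flag. A minimal sketch, assuming a caseload export with each participant’s start date and violation status; the 42-to-49-day window follows the 55-day average described above, and the names and dates are made up:

```python
from datetime import date

AVG_DAYS_TO_VIOLATION = 55  # program-wide average cited above

# Hypothetical caseload: participant -> (start date, has violated yet)
caseload = {
    "P01": (date(2024, 1, 2), False),
    "P02": (date(2024, 2, 20), False),
    "P03": (date(2024, 1, 10), True),
}

def checkin_candidates(caseload, today, window=(42, 49)):
    """Flag participants in weeks 6-7 (days 42-49) with no violation yet,
    the point where a reinforcing conversation pays off most."""
    lo, hi = window
    return sorted(pid for pid, (start, violated) in caseload.items()
                  if not violated and lo <= (today - start).days <= hi)

print(checkin_candidates(caseload, today=date(2024, 4, 3)))
# → ['P02']
```

Run against a live caseload each morning, a list like this tells case workers exactly who is approaching the statistical danger zone.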
I have always subscribed to the “Moneyball” theory of looking at compliance data: find what works best over the long term for the majority of participants, implement program modifications that reflect those evidence-based solutions, and then concentrate on the minority of participants who fall outside that data. By using the right metrics and working with a company that can provide solutions and integration across multiple platforms, an agency can find value in having one single provider report insights that can make a difference. That difference can be saving money, focusing attention on the participants who truly need it, being more effective in recovery efforts, decreasing recidivism, and increasing community safety. It is up to the individual agency to determine the importance of those changes and how getting the right data can improve its program outcomes.