Inside Angle
From 3M Health Information Systems
Computer-assisted coding: Keep your hands on the wheel
I’ve been fascinated lately watching several news stories about the Autopilot feature on Tesla’s electric automobiles. If you’re not familiar with this technology, it is essentially a computer-assisted driving mode that helps cars navigate the road safely, avoid accidents and ultimately (hopefully) save lives. Autopilot is not a fully autonomous driving system, however, and the human driver is expected to keep his or her hands on the wheel at all times and be ready to take control if needed. In its current form, the Autopilot feature clearly has limitations that require the judgment of a human “expert,” although it can do a lot to assist that expert and reduce the workload involved in safely operating a vehicle. This collaborative effort between a human and a machine is an interesting topic for me because it relates directly to what we do with computer-assisted coding (CAC).
In the case of the car, there have been a few recent incidents that have received a lot of attention. One involved a driver who assumed the Tesla Autopilot feature could operate completely without him and so was watching a movie while it drove his car. That one ended badly. In another case, neither the driver nor the Autopilot feature was seemingly able to discern what was happening with a white semi-tractor trailer turning into the Tesla’s lane, because the trailer was difficult to see against a bright sky. That also ended badly. In a third case, however, when a driver was momentarily distracted while driving at night in bad weather on a dimly lit street, the Autopilot feature correctly identified a darkly clothed pedestrian and applied the brakes in time to prevent the car from hitting and likely killing that pedestrian.
All of these examples highlight the need for the human driver to understand what the Autopilot system does well and where its capabilities end. I don’t mean human drivers need to understand the math and algorithms used to build the Autopilot system; they need a general sense of the types of things it can do well, and then they can use that knowledge to maximize the desired result of the collaboration: improved safety.
The relationship between a human coder and a CAC system is much the same. There are things that CAC technology can do really well, but if it is not used correctly by the human who is “at the wheel,” the benefits of increased productivity and coding completeness can be difficult to realize. Workflow is a key factor in the overall success of the system, and it needs to be adapted to account for the role CAC plays and the technology’s strengths. If current manual workflows are left unmodified, it’s unlikely the desired results will be achieved simply by adding CAC to the mix.
Our data team recently completed an extensive usage analysis of computer-assisted coding based on a set of over 25 million records. We can definitively say that codes derived from CAC suggestions can be accepted and added to a record in about half the time it would take to derive them manually. CAC doesn’t yet suggest all codes, so that time savings applies only to the suggested portion of the work and doesn’t translate directly into a 50 percent improvement in overall productivity (over time the technology will suggest an increasing percentage of codes, and do so with even greater accuracy). However, the more codes it does suggest, the bigger the potential benefit to end users. I say “potential” because if workflows aren’t analyzed and redesigned to capitalize on the strengths of the technology, the gains can easily be hidden behind processes that negate them.
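To make that relationship concrete, here is a minimal back-of-the-envelope sketch in Python of how the overall time savings scales with the share of codes that CAC suggests. It assumes every code takes roughly the same manual effort and that a suggested code takes about half that effort to review and accept, per the analysis above; the suggestion rates shown are hypothetical illustrations, not figures from our study.

# Back-of-the-envelope model of CAC productivity gains.
# Assumptions: each code takes the same manual effort, and an accepted
# CAC suggestion takes about half that effort. Suggestion rates below
# are hypothetical examples only.

def coding_time_factor(suggestion_rate, accepted_speedup=0.5):
    """Fraction of the original manual coding time still required when
    `suggestion_rate` of the codes come from accepted CAC suggestions."""
    return (1 - suggestion_rate) + suggestion_rate * accepted_speedup

for rate in (0.4, 0.6, 0.8):  # hypothetical suggestion rates
    saved = 1 - coding_time_factor(rate)
    print(f"{rate:.0%} of codes suggested -> about {saved:.0%} less coding time")

Under those assumptions, the overall time saved is simply half the suggestion rate, which is why workflow design matters so much: any process friction that eats into that margin shrinks an already partial gain.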
So, for those of you with a CAC system, have you adopted workflow process changes that leverage the technology’s strengths or have you tried to fit it into your old way of doing things? I’d love to hear your comments!
Jason Mark is a business intelligence architect, Emerging Business Technology with 3M Health Information Systems.