More questions than answers

Aug. 2, 2017 / By Jason Mark

One of the things I most enjoy about the team I work with is that we get to ask a lot of questions and then try to figure out the answers using data. I follow a number of blogs about health care, data science, programming and other topics, and the intersection of ideas across these diverse blogs raises interesting questions about how their content applies to medical coding. In a ComputerWorld article (and the subsequent discussion on the Simply Statistics blog), Peter Norvig, a research director at Google and a prominent artificial intelligence (AI) researcher, argues against the need for “explainable AI.” Explainable AI refers to the fact that while many of today’s machine learning and deep learning algorithms may be very accurate in their predictions, it is nearly impossible to understand how these algorithms reach their conclusions.
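To make the contrast concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not taken from Norvig’s interview or from any 3M system): a transparent rule can return its answer together with the evidence that triggered it, while a learned scoring function returns only the answer.

```python
# Hypothetical illustration of "explainable" vs. "opaque" predictions.
# The trigger phrases and weights below are made-up examples.

def rule_based_flag(note: str):
    """Explainable: the decision comes with the evidence that triggered it."""
    triggers = ["acute kidney injury", "creatinine rising"]
    for phrase in triggers:
        if phrase in note.lower():
            return True, f"matched trigger phrase: '{phrase}'"
    return False, "no trigger phrase found"

def learned_flag(note: str, weights: dict) -> bool:
    """Opaque: a weighted score over many features says yes or no,
    but the weights themselves don't read as a human rationale."""
    score = sum(w for token, w in weights.items() if token in note.lower())
    return score > 0.5

note = "Creatinine rising over 48 hours; nephrology consulted."
print(rule_based_flag(note))   # (True, "matched trigger phrase: 'creatinine rising'")
print(learned_flag(note, {"creatinine": 0.4, "nephrology": 0.3}))  # True, but why?
```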

So what does this mean for computer-assisted coding (CAC)? Coding today is based on fairly specific guidelines that require human-understandable evidence to support the codes. Evidence is particularly important when a record is audited or a claim is denied; today an auditor would not accept “I coded it that way because CAC told me to” as a response. This poses an interesting challenge for CAC systems: even if a higher level of accuracy could be achieved through pure machine learning approaches, the opacity of their “reasoning” prevents them from being used. Will this always be the case? What level of demonstrable improvement in accuracy and consistency will be needed to drive the adoption of machine-learned or artificially intelligent approaches? And when that adoption occurs, how will the responsibility for correct coding be shared between the provider organization (which produces the documentation inputs to CAC) and the CAC vendor (which provides the algorithms and processing of the documentation)?
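One way to picture the evidence requirement (purely as an illustration, not a description of any actual CAC product or schema) is that each suggested code would need to carry auditable supporting text alongside any model score, for example:

```python
# Hypothetical sketch of evidence-backed CAC output; field names, the sample
# ICD-10-CM code, and the snippet are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    document_id: str   # which clinical document the support came from
    snippet: str       # the text span a human auditor can read
    char_range: tuple  # (start, end) offsets into that document

@dataclass
class CodeSuggestion:
    code: str                                    # e.g., an ICD-10-CM code
    rationale: str                               # human-readable reason
    evidence: list = field(default_factory=list) # supporting Evidence items
    confidence: float = 0.0                      # model score, separate from evidence

suggestion = CodeSuggestion(
    code="N17.9",
    rationale="Documentation describes acute kidney failure, unspecified.",
    evidence=[Evidence("progress_note_001", "acute kidney injury, improving", (120, 150))],
    confidence=0.92,
)
```

In a sketch like this, the confidence score alone would not satisfy an auditor; the evidence items are what a human could point to when defending the code.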

Another interesting point Norvig makes in his interview is that we tend to believe a human explanation for a decision. We do this despite research showing that human decisions are often made instantaneously, with our mental “explanation” composed after the fact in a way that supports the decision. When it comes to coding, rules and guidelines drive some level of consistency for many codes, but some codes are still determined by a “judgment call” or because “in my experience” this is the way to do it. The fallibility of coding’s human element increases further when we factor in the myriad ways evidence may be presented to the coder. The phenomenon of framing, for example, shows that the way evidence is presented to us (which could include clinical documentation) affects how we interpret and reason about it. We are generally more motivated to avoid a loss than to secure a gain of the same magnitude. Does that mean that, depending on how the documentation is read, a coder might err on the side of “not being wrong” rather than take an equal chance on “being right” when applying a particular code? How might the CAC suggestions themselves play a role in the human’s thought process?

This post has far more questions than answers. I believe that a balance between the human and the machine will emerge, but where one ends and the other begins, or the degree to which they overlap, is still anybody’s guess. Our team is enjoying the adventure of trying to understand all angles of these questions so that both 3M and our customers can make the most informed decisions possible.

Please share your comments!

Jason Mark is manager, Research & Applied Data Science Lab with 3M Health Information Systems.