Every day when I come into the office I’m greeted with a poster from an old ECM conference that reads, “If 99.9% is good enough, then…” and it proceeds to rattle off staggeringly large consequences of inaccuracy, ranging from incorrect medication dosages to parents receiving the wrong newborn child. Even with that perspective, variations of the “99% accuracy” claim are all over the technology industry. Even when these statements are technically accurate, they often lead to misunderstandings and misaligned expectations.
So, what does 99% accuracy really mean? For example, if your organization wants to automate data entry for invoice processing, then you are collecting the date, invoice number, and total amount from individual documents. Your objective is to have as much of this data automated as possible, but to be useful, that data must be accurate.
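To see why the question matters, consider a quick back-of-the-envelope sketch. If “99% accurate” is measured per field, the share of invoices that come out fully correct is lower, because every field has to be right. (The assumption of independent field errors here is a simplification for illustration.)

```python
# Illustration: per-field accuracy vs. per-document accuracy.
# With three extracted fields (date, invoice number, total amount),
# each captured at 99% accuracy, the chance that any individual
# invoice has *all* of its fields correct is lower than 99%.
field_accuracy = 0.99
fields_per_invoice = 3  # date, invoice number, total amount

# Probability every field on a document is correct, assuming
# independent errors (a simplifying assumption for illustration).
document_accuracy = field_accuracy ** fields_per_invoice
print(f"Fully correct invoices: {document_accuracy:.2%}")  # → Fully correct invoices: 97.03%
```

With more fields per document, the gap widens further, which is exactly why a bare “99%” figure needs a stated denominator.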
Here’s the claim: a solution can deliver you 99% accurate data. While not untrue, this claim is only meaningful if the measurement covers the full amount of data you need. It should hold regardless of the volume of data, and regardless of the number of fields that need to be automated. If a solution claims to be 99% accurate, but only on a small, controlled data set, then overall it is not a useful solution.
This is where we can reframe the discussion to ensure your needs are being met. Measuring specifics like “more than 85% of my data entry fields are now automated at 99% accuracy” provides a clear and measurable goal that defines the true amount of automation required per process. It also addresses not just data accuracy, but the entire set of expectations.
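A goal phrased that way is straightforward to check against a labeled sample. The sketch below scores hypothetical extraction results against ground truth and reports both numbers: how many fields were automated (extracted without manual keying) and how accurate those automated fields were. All field names and values here are invented for illustration.

```python
# Hypothetical scoring sketch: compare extracted invoice fields to
# ground truth. A value of None means the field fell back to manual
# data entry, i.e. it was not automated.
extracted = [
    {"date": "2024-03-01", "number": "INV-101", "total": "250.00"},
    {"date": "2024-03-02", "number": None,      "total": "80.00"},
    {"date": "2024-03-03", "number": "INV-103", "total": "75.50"},
]
truth = [
    {"date": "2024-03-01", "number": "INV-101", "total": "250.00"},
    {"date": "2024-03-02", "number": "INV-102", "total": "80.00"},
    {"date": "2024-03-03", "number": "INV-103", "total": "75.00"},
]

total_fields = automated = correct = 0
for doc, gold in zip(extracted, truth):
    for field, value in gold.items():
        total_fields += 1
        if doc[field] is not None:      # the solution produced a value
            automated += 1
            if doc[field] == value:     # ...and it matches ground truth
                correct += 1

automation_rate = automated / total_fields  # share of fields automated
accuracy = correct / automated              # accuracy of automated fields only
print(f"{automation_rate:.0%} of fields automated at {accuracy:.1%} accuracy")
```

Reporting the two numbers separately keeps a vendor from quietly improving one by sacrificing the other, such as boosting accuracy by routing every hard field to manual entry.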
So, the next time you begin a dialogue with a solution provider that claims 99% accuracy, ask them: “How much data?”