Measuring Quality

Increasing efficiency or productivity without maintaining or improving quality is useless.  It really achieves nothing.

But how exactly do you measure quality when you automate a heretofore manual process?  For example, how exactly do you measure the quality of an XBRL-based digital financial statement? Without some sort of standardized set of quality metrics, comparing quality becomes an apples-to-oranges exercise.

An increased role for automation in accounting, reporting, auditing, and analysis is inevitable.  Creating quality metrics and automated verification guarantees will ensure that automation does not come at the cost of decreased quality.

Measuring quality is not only about developing your own metrics with which to evaluate yourself. Measuring quality also serves process control purposes, such as Six Sigma. And measuring quality enables apples-to-apples comparisons against competitors in the marketplace.

Without broad agreement on standard quality benchmarks, each market participant is free to highlight whatever measure of quality they feel benefits them, including no measure of quality at all.

Quality metrics, or the lack of them, incentivize market behavior. The capability to extract and reuse meaningful, useful information from an XBRL-based digital financial report submitted to the SEC or ESMA is, arguably, valuable. But are investors getting their money's worth relative to the work involved in creating those machine-readable reports?

Quality metrics tend to work best when they are applied industry-wide so that apples-to-apples comparisons can be made.  A comprehensive, standardized set of quality metrics would make it possible to maintain a consistent quality level throughout a process or workflow.

Return on investment (ROI) is one measure of value. Defects Per Million Opportunities (DPMO) is one measure of quality.
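
As a simple illustration of the arithmetic, here is a minimal sketch of a DPMO calculation in Python; the report and rule counts are hypothetical:

    # DPMO (Defects Per Million Opportunities), a standard Six Sigma measure:
    # DPMO = defects / (units inspected x opportunities per unit) x 1,000,000
    def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
        """Compute Defects Per Million Opportunities."""
        return defects / (units * opportunities_per_unit) * 1_000_000

    # Hypothetical example: 5,000 reports each checked against 20 verification
    # rules, with 1,200 rule violations found in total.
    print(dpmo(defects=1_200, units=5_000, opportunities_per_unit=20))  # 12000.0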

Since about 2013 I have been measuring the fundamental quality of XBRL-based digital financial reports submitted to the U.S. Securities and Exchange Commission (SEC). Measuring quality in those reports taught me how to measure quality in XBRL-based financial statements in general. That is how I arrived at the pillars of quality and trustworthiness incorporated into the Seattle Method.
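
To make that concrete, here is a minimal sketch, in Python, of the kind of fundamental check involved, assuming a report's facts have already been extracted into simple name/value pairs; the fact names and values are hypothetical, and a real check must also handle contexts, units, and dimensions:

    # Verify one fundamental accounting relation in extracted report facts.
    facts = {                      # hypothetical reported values
        "Assets": 5_000_000,
        "Liabilities": 3_000_000,
        "Equity": 2_000_000,
    }

    def balance_sheet_consistent(facts: dict) -> bool:
        """Fundamental relation: Assets = Liabilities + Equity."""
        return facts["Assets"] == facts["Liabilities"] + facts["Equity"]

    print("PASS" if balance_sheet_consistent(facts) else "FAIL")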

Never before was it possible to have a machine read and understand a general purpose financial statement. For the first time in roughly 7,000 years of accounting, machines can read and understand general purpose financial statements, evaluate the content of those financial statements, and, to a degree, form a view of the quality of a financial statement.  This is a new and useful capability.

Both humans and machines can be "certified" and/or "licensed" to perform certain specific tasks and processes. Let's face it: the average accountant is, well, average. There is no doubt that machines can perform certain specific tasks better than humans.  Working together, leveraging the strengths of each contributor, humans and machines can do better than either can do individually.  But to be sure of that, quality needs to be measured.
