Seattle Method Value Explained (Pillars of Quality and Trustworthiness)

For a machine-readable information set (an information product) to be useful, that information product needs to be trustworthy.  One such information product is an XBRL-based digital general purpose financial report.

The guidance provided by the Seattle Method enables accountants and others to use a proven, industrial-strength, standards-based, good-practices-based, pragmatic approach to creating provably high-quality XBRL-based general purpose financial reports when report models are permitted to be modified.

Fundamentally, the Seattle Method reduces the threat of inaccuracy in digital financial reports.

When a report model can be modified (a.k.a. customized), the “wild behavior” of accountants creating such customized reports must be eliminated by keeping the report models that accountants create within permitted boundaries.  The report model, in the form of an XBRL taxonomy, is a machine-readable (and also human-readable) representation that serves many purposes:

  • Description or specification: It is a formal description or specification of the report model.
  • Construction: Reports can be constructed using that report model; a human can be assisted by software applications utilizing that machine-readable description.
  • Verification: The actual report constructed can be verified against the description, assisted by software applications utilizing that machine-readable description.
  • Extraction: Information can be effectively extracted from machine-readable reports and report models, assisted by software utilizing that machine-readable description (a minimal sketch of software using such a description follows this list).
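
As a concrete illustration of these purposes, here is a minimal sketch that loads a report together with its report model and then extracts facts and concepts. It assumes the Python API of the open-source Arelle processor mentioned below; the file name is hypothetical and the exact interface may differ between Arelle versions.

    # Minimal sketch, assuming Arelle's Python API; the report file name is hypothetical.
    from arelle import Cntlr

    cntlr = Cntlr.Cntlr()                                      # Arelle controller
    model_xbrl = cntlr.modelManager.load("proof-report.xbrl")  # report plus its taxonomy (report model)

    # Extraction: every reported fact is available to software, not just to human readers.
    for fact in model_xbrl.facts:
        print(fact.qname, fact.value, fact.contextID)

    # Description/specification: the concepts declared by the report model (XBRL taxonomy).
    for qname, concept in model_xbrl.qnameConcepts.items():
        print(qname, "abstract" if concept.isAbstract else "reportable")
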
A trustworthy machine-readable general purpose financial report is quite useful. To be trustworthy, a report cannot have blind spots.  Further, such machine-readable reports can only be trustworthy to the extent that rules are provided that can then be used to establish that trustworthiness.

There is a symbiotic relationship between the representation of knowledge in machine-readable form and the capability to reason against that knowledge representation.

The pillars of trustworthiness or pillars of quality provided by the Seattle Method are shown below:


Here is a description and examples of those pillars of trustworthiness using my PROOF example.  Here is a human-readable version of the PROOF report. Here is the machine-readable version of that same report. Here are the verification results for that machine-readable version of the report. This list explains the pillars of trustworthiness, or pillars of quality, that eliminate blind spots:
  1. XBRL Syntax Verification Rules: Obviously the physical syntax of the machine-readable report must be consistent with the XBRL global standard technical syntax specification.  This is rather easy because (a) XBRL International publishes a set of conformance suites and (b) there is plenty of software, even open-source software like Arelle, that is certified to support those published conformance suites. Further, over 99% of reports submitted to the SEC and ESMA are consistent with these specified technical syntax rules. (A minimal syntax-validation sketch using Arelle appears after this list.)
  2. Report Mathematical Computations Verification Rules: Accounting information contains a lot of mathematical relations.  Reports should both describe those mathematical relations and be consistent with those internal descriptions of such relations.  XBRL calculation relations and XBRL formula rules can be used to describe this information. Note that SEC reports don't allow the use of XBRL formulas, so things like roll forwards and dimensional aggregations cannot even be checked.  ESMA, I believe, allows XBRL formulas.  But how do you know whether report math is correct if such information is not provided? (A consistency-rule sketch covering this pillar and #4 appears after this list.)
  3. Report Model Configuration Verification Rules: The report model configuration rules help make sure that the report model is constructed logically and per good modeling practices.  XBRL processors do not check this because the XBRL technical specification does not cover it.  The last time I looked at SEC reports, about 98% of reports followed these rules.  I have never measured ESMA reports.
  4. Fundamental Accounting Concepts Consistency Crosschecks Verification Rules: These rules check to make sure that (a) you are following high-level financial reporting rules such as the accounting equation and other basic relations and (b) you are not contradicting yourself and inducing an inconsistency into your financial information.  I have measured this information over about a five-year period for XBRL-based reports that have been submitted to the SEC. Per my last measurement of those reports, about 90% of reports were totally consistent with all of these high-level accounting relationships, but 10% of reports had one or more mistakes.
  5. Wider-narrower Associations Verification Rules: These rules check to make sure report elements are used consistently with respect to other report elements in a reporting entity's report model and consistently with accounting rules.  Both the SEC and ESMA specify the proper use of report elements, but they don't provide a mechanism to actually check that use.  Also, these rules tend not to be enforced by the SEC and ESMA (i.e., because they don't have machine-readable rules), so the quality of SEC and ESMA reports tends to be fairly low in this regard.
  6. Disclosure Mechanics Verification Rules: The disclosure mechanics rules specify the essence of how each and every disclosure needs to be constructed within a financial report. This information is not explicitly provided by the FASB, which publishes the US GAAP XBRL taxonomy, or by the IFRS Foundation, which publishes the IFRS XBRL taxonomy.  Rough measurements of XBRL-based reports submitted to the SEC show an error rate of about 20%.  I have not measured ESMA reports yet. (A simple disclosure mechanics and reporting checklist sketch appears after this list.) (Here is an example of discovery and verification of disclosures for one report.) (Here are some US GAAP examples of disclosure mechanics rules.) (Here is a test of 65 disclosures against provided rules.) (Here is a summary of the analysis of disclosures.) (Here is a tool for reviewing disclosure mechanics.)
  7. Reporting Checklist Verification Rules: These rules serve somewhat like what accountants call a disclosure checklist.  While they do not test 100% of what a disclosure checklist covers, these machine-readable rules do check quite a bit to make sure that all required disclosures have been detected (leveraging the disclosure mechanics rules in #6 to find each disclosure).  I have no information on how well SEC and ESMA XBRL reports do with this test.
  8. Any System Specific or Additional Verification Rules: This is a bit of a catch-all category that effectively covers "anything else important to the XBRL-based report information collection system".  For example, XBRL US Data Quality Committee rules would go into this category.  SEC EDGAR Filer Manual rules or European Single Electronic Format (ESEF) rules would also go into this category.  I have no data on how well SEC or ESMA reports do here.
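
Below is a minimal sketch of pillar #1, syntax verification, assuming the Python API of the open-source Arelle processor (its Cntlr and Validate modules); the report file name is hypothetical and the exact interface may vary between Arelle versions.

    # Pillar 1 sketch: XBRL technical syntax validation, assuming Arelle's Python API.
    from arelle import Cntlr, Validate

    cntlr = Cntlr.Cntlr()
    model_xbrl = cntlr.modelManager.load("proof-report.xbrl")  # hypothetical file name

    Validate.validate(model_xbrl)  # run the processor's XBRL specification validation

    # Inconsistencies with the XBRL technical specifications are collected as error codes.
    if model_xbrl.errors:
        for error_code in model_xbrl.errors:
            print("validation error:", error_code)
    else:
        print("Report is consistent with the XBRL technical syntax rules.")
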
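The next sketch illustrates pillars #2 and #4: mathematical and fundamental accounting concept consistency rules expressed in both human-readable and machine-executable form and checked against reported values. The fact values and rules are illustrative only, not taken from any actual report, and plain Python stands in for XBRL calculation relations and XBRL formula rules.

    # Pillars 2 and 4 sketch: consistency crosschecks against reported fact values.
    # The fact values and rules below are illustrative only.
    facts = {
        "Assets": 5_000_000,
        "Liabilities": 3_000_000,
        "Equity": 2_000_000,
        "CurrentAssets": 1_250_000,
        "NoncurrentAssets": 3_750_000,
    }

    # Each rule pairs a human-readable description with a machine-executable check.
    rules = [
        ("Assets = Liabilities + Equity (accounting equation)",
         lambda f: f["Assets"] == f["Liabilities"] + f["Equity"]),
        ("Assets = CurrentAssets + NoncurrentAssets (roll up)",
         lambda f: f["Assets"] == f["CurrentAssets"] + f["NoncurrentAssets"]),
    ]

    for description, check in rules:
        status = "CONSISTENT" if check(facts) else "INCONSISTENT"
        print(f"{status}: {description}")
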
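Finally, here is a sketch of pillars #6 and #7 working together: each disclosure mechanics rule is expressed as the set of concepts a disclosure must contain, and the reporting checklist verifies that every required disclosure was actually detected. The concept names and required disclosures are illustrative only.

    # Pillars 6 and 7 sketch: disclosure mechanics plus a reporting checklist.
    # Concept names and required disclosures are illustrative only.
    reported_concepts = {
        "Assets", "Liabilities", "Equity",
        "InventoryRawMaterials", "InventoryWorkInProcess",
        "InventoryFinishedGoods", "InventoryNet",
    }

    # Disclosure mechanics: a disclosure is detected by the concepts it must contain.
    disclosure_mechanics = {
        "Balance Sheet": {"Assets", "Liabilities", "Equity"},
        "Inventory Components": {"InventoryRawMaterials", "InventoryWorkInProcess",
                                 "InventoryFinishedGoods", "InventoryNet"},
        "Maturities of Long-term Debt": {"LongTermDebtMaturityYearOne"},
    }

    # Reporting checklist: these disclosures are required for this hypothetical entity.
    required_disclosures = ["Balance Sheet", "Inventory Components",
                            "Maturities of Long-term Debt"]

    for disclosure in required_disclosures:
        missing = disclosure_mechanics[disclosure] - reported_concepts
        if missing:
            print(f"NOT DETECTED or incomplete: {disclosure} (missing {sorted(missing)})")
        else:
            print(f"DETECTED: {disclosure}")
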
And there you have it.  That is what the Seattle Method guidance suggests. Every one of those pillars of trustworthiness or pillars of quality is necessary.  Remove a pillar and things can go wrong.  If you don't measure with rules, then you have a blind spot in your system. Another way of looking at this is that the Seattle Method guidance provides "guardrails" or "bumpers" to channel the wild behavior of accountants creating XBRL-based digital reports.  (This video walks you through an analysis of a financial report using the ideas of the Seattle Method.)

Both the SEC and ESMA have a bunch of blind spots in their XBRL-based report systems.

Deductive logic is precise because it provides certainty, guaranteed within specified limits.  The machine-readable deductive rules provide a "template" for what a perfect/precise XBRL-based financial report looks like.  To the extent that these rules are provided, reports can be considered trustworthy. And to the extent that reports are valid (consistent with all the specified rules) and also sound (a.k.a. precise; they precisely follow real-world financial reporting rules and other logic), intelligent software agents can make effective use of that information.  Full stop.  No magic; just good engineering.

And so in summary: If a process cannot be controlled, then the process simply cannot repeatedly and reliably output high-quality results.  If process output is not high-quality, automation cannot possibly be effective. So, control of a process is necessary in order for the process to be effective.  How do you control a process?  You control a process using rules.  Manual processes are controlled by rules that are read by humans.  Automated processes are controlled by rules that are readable by both machines (i.e., to execute the process) and humans (i.e., to make sure the rules are right).  If you cannot check a process using an automated process, then you must use a human. The role of the machine and the role of the human must be understood to provide effective control.

Finally, a leap.  XBRL-based reporting, I believe, provides information useful in understanding (a) how something similar, such as the OMG's Standard Business Report Model (SBRM), can use these same ideas for general business reporting and (b) how knowledge graphs in general can benefit from these ideas.

Regulation is information.  An XBRL-based digital general purpose financial report is a machine-readable signal you provide.  What is that signal saying about your organization?

