Analysis Which Led to Seattle Method
In 2009, the U.S. Securities and Exchange Commission (SEC) began accepting XBRL-based reports in their EDGAR system. Around that same time I started fiddling around trying to extract information from those XBRL-based reports.
It was a lot harder than I had anticipated.
To make a much longer story short, I realized that because different reporting economic entities used different reporting styles, I needed additional metadata in order to effectively extract information from those reports. To simplify the extraction, I chose to focus on one explicit and common reporting style; over time I figured out that about 1,600 public companies used that same specific style. I found a fair number of mistakes in the information I extracted from those reports, and at first I figured I was making some sort of mistake myself.
After checking, it turned out that while I did have some mistakes, there were also lots of mistakes in the XBRL-based reports submitted to the SEC by public companies.
And so, I built a system for checking all of those XBRL-based reports for mistakes, using an RSS feed provided by the SEC each month to grab the reports.
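To illustrate that pipeline, here is a minimal sketch of pulling filing entries out of a monthly feed, assuming a standard RSS 2.0 structure; the sample XML, company name, and URL below are hypothetical, and the element names in the real EDGAR feed may differ:

```python
import xml.etree.ElementTree as ET

# Illustrative sample shaped like an RSS 2.0 feed. The real EDGAR monthly
# XBRL feeds live under https://www.sec.gov/Archives/edgar/monthly/ and may
# use different element names; this is a sketch, not the actual feed.
SAMPLE_RSS = """<rss version="2.0">
  <channel>
    <title>EDGAR XBRL filings (sample)</title>
    <item>
      <title>EXAMPLE CO (10-K)</title>
      <link>https://www.sec.gov/Archives/edgar/data/0000000000/example.htm</link>
    </item>
  </channel>
</rss>"""

def list_filings(rss_text):
    """Return (title, link) pairs for each filing item in the feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title", default=""),
             item.findtext("link", default=""))
            for item in root.iter("item")]

for title, link in list_filings(SAMPLE_RSS):
    print(title, link)
```

From each entry, the link points at the filing, which can then be downloaded and run through the consistency checks.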
Here is my first set of results, shown in the screen shot below. What I found was that only about 25.6% of the XBRL-based reports submitted to the SEC (1,711 of the 6,674 10-K reports for fiscal year 2013) were consistent with all of my fundamental accounting concept relations; no errors were detected in those 1,711 reports. Across all 6,674 reports there were a total of 8,920 errors of some sort, an average of about 1.3 errors per report.
Saying this another way, there were 146,828 total "opportunities" to find an error, based on the number of concepts tested and the number of rules. Of those, 137,908 relations (93.92%) were consistent with expectation and 8,920 (6.08%) were inconsistent.
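Those percentages follow directly from the counts in the text; a quick check of the arithmetic:

```python
# Reproduce the 2014 summary arithmetic from the figures above.
opportunities = 146_828   # concepts tested x rules
inconsistent = 8_920      # relations flagged as errors
consistent = opportunities - inconsistent
reports = 6_674           # FY 2013 10-K filings tested

print(f"consistent:   {consistent} ({consistent / opportunities:.2%})")
print(f"inconsistent: {inconsistent} ({inconsistent / opportunities:.2%})")
print(f"errors per report: {inconsistent / reports:.1f}")
```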
Now, this is NOT TO SAY that I had all of my fundamental accounting concepts and relations correct. There are four possible causes of an error:
- The report contained an actual error.
- The rules I had contained an error.
- The US GAAP XBRL Taxonomy contained an error, or the reporting public companies and I interpreted the taxonomy differently.
- The software I was using was processing the reports and rules incorrectly.
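At their core, the rules are consistency checks between reported concepts. Here is a minimal sketch of the idea; the two rules and the fact values are hypothetical examples, and the real fundamental accounting concept relations are far more numerous:

```python
# Hypothetical reported facts for one filing (values in USD).
facts = {
    "Assets": 1_000_000,
    "Liabilities": 600_000,
    "Equity": 400_000,
    "CurrentAssets": 300_000,
    "NoncurrentAssets": 700_000,
}

# Each rule is a (name, predicate) pair. Illustrative only.
rules = [
    ("Assets = Liabilities + Equity",
     lambda f: f["Assets"] == f["Liabilities"] + f["Equity"]),
    ("Assets = CurrentAssets + NoncurrentAssets",
     lambda f: f["Assets"] == f["CurrentAssets"] + f["NoncurrentAssets"]),
]

def check(facts, rules):
    """Return the names of the rules the filing is inconsistent with."""
    return [name for name, test in rules if not test(facts)]

print(check(facts, rules) or "all relations consistent")
```

Each filing contributes one "opportunity" per applicable rule, which is where the opportunity counts above come from.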
The final testing that I did in 2019 for FY 2018 10-Ks showed massive improvements. There were now 5,093 reports, 89.1% of all reports, that had no errors related to the fundamental accounting concept relations. On a per "opportunity" basis, there were 125,752 possible mistakes, but only 962 of those opportunities, 0.76%, actually resulted in a mistake. That means 99.24% of those opportunities were consistent; not quite Six Sigma, which is 99.99966%, but a marked improvement over my original testing in 2014. In this final testing cycle, a total of 10 software vendors and filing agents did better than 90%.
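The Six Sigma comparison can be made concrete by converting each testing cycle into defects per million opportunities (DPMO); the Six Sigma benchmark of 99.99966% corresponds to 3.4 DPMO:

```python
def dpmo(defects, opportunities):
    """Defects per million opportunities."""
    return defects / opportunities * 1_000_000

# Figures from the two testing cycles described above.
print(f"2014 cycle: {dpmo(8_920, 146_828):,.0f} DPMO")
print(f"2019 cycle: {dpmo(962, 125_752):,.0f} DPMO")
print("Six Sigma:  3.4 DPMO")
```

So the defect rate dropped by roughly a factor of eight between the two cycles, though it remains well short of the Six Sigma level.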