Quality, Quality, Quality: The Status Quo is Doomed

This article, GPT-4 has a trillion parameters, helps one understand how good ChatGPT currently is and how it can be made better.  It also helps one recognize how customized or custom LLMs will be used in the future.

The US GAAP XBRL Taxonomy published by the FASB could be an enabler for a rules-based reporting system.  But it is not currently, because it does not represent the necessary rules correctly or completely.  What that XBRL taxonomy is, effectively and admittedly, is a human-readable "pick list".  So a rules-based system would have a hard time using that US GAAP XBRL Taxonomy or reports created using that taxonomy.  Similarly, all of the reports in the SEC's EDGAR system that use that US GAAP XBRL Taxonomy are not of the right quality for a rules-based system to make use of them effectively.
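
To make the distinction concrete, here is a minimal sketch (in Python) of the kind of machine-executable rule a rules-based system needs; the element names and amounts are entirely hypothetical, not taken from the US GAAP XBRL Taxonomy or any actual EDGAR filing.  A human-readable "pick list" tells a person which elements exist; it does not give software a rule like this to execute.

```
# Minimal sketch of a machine-executable reporting rule.
# Element names and amounts are hypothetical, not taken from the
# US GAAP XBRL Taxonomy or any actual EDGAR filing.

# Facts as they might be extracted from a single XBRL-based report
facts = {
    "Assets": 1_000_000,
    "Liabilities": 600_000,
    "Equity": 400_000,
}

def check_accounting_equation(facts: dict) -> bool:
    """Verify the fundamental rule: Assets = Liabilities + Equity."""
    return facts["Assets"] == facts["Liabilities"] + facts["Equity"]

if __name__ == "__main__":
    ok = check_accounting_equation(facts)
    print("Accounting equation holds" if ok else "Rule violation detected")
```

A rules-based system is only as useful as the correctness and completeness of rules like this one, which is exactly the gap described above.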

All of the above is similarly true of the IFRS XBRL Taxonomy and the financial reports submitted to the ESMA using that IFRS XBRL Taxonomy.

However, that information could be fed to a GPT-4 enabled system, creating a large language model (LLM) that is custom to US GAAP financial reporting and perhaps a separate model that is custom to IFRS financial reporting.  Both the SEC EDGAR system and the ESMA reporting system are large enough (i.e. a good training set size), an AI system can be trained using that information, and that AI system would be helpful.  But the resulting system can only be as good as the quality of the financial reports it is trained on.
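
As a rough illustration of how that information might be fed to such a system, here is a minimal sketch of retrieval-augmented prompting over a corpus of filing passages.  The excerpts and the call_llm() function are hypothetical placeholders, not a real EDGAR or ESMA data feed and not an actual GPT-4 API.

```
# Sketch of retrieval-augmented prompting over a corpus of filings.
# The filing excerpts and call_llm() are hypothetical placeholders,
# not a real EDGAR/ESMA data feed or a real GPT-4 API call.

from typing import List

corpus: List[str] = [
    "Filer A reports revenue of 10.0 million for fiscal 2022.",
    "Filer B classifies its leases as operating leases.",
    # ... thousands more passages extracted from XBRL-based reports
]

def retrieve(question: str, passages: List[str], k: int = 2) -> List[str]:
    """Naive keyword retrieval; a real system would use embeddings."""
    scored = sorted(
        passages,
        key=lambda p: sum(w.lower() in p.lower() for w in question.split()),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a GPT-4 style model call."""
    return "(model answer would appear here)"

question = "How does Filer B classify its leases?"
context = "\n".join(retrieve(question, corpus))
answer = call_llm(f"Answer using only this context:\n{context}\n\nQ: {question}")
print(answer)
```

Garbage in, garbage out still applies: if the underlying reports are low quality, the retrieved context, and therefore the answers, will be too.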

But an LLM based on natural language can never be as precise as a knowledge graph that has been created correctly and precisely by knowledgeable and skilled humans.  An autogenerated LLM that is better than a knowledge graph created by humans is, in my view, never going to happen; and if it does happen, it will be well into the future and it will be based on a rules-based knowledge graph constructed by knowledgeable, skilled human accountants who understand all those rules.  Effectively, the rules will be part of the training data used by the LLM/GPT-4 based system.  Knowledgeable, skilled humans using the right tools will always ultimately prevail where quality and precision matter.
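
For contrast, here is a minimal sketch of what a small, human-curated knowledge graph of reporting rules might look like as explicit subject-predicate-object triples; the terms are illustrative only, not the actual structure of the US GAAP XBRL Taxonomy.

```
# Sketch: rules expressed as explicit triples that machines can query.
# Terms are illustrative only, not actual taxonomy elements.

triples = [
    ("Assets", "equals", "Liabilities + Equity"),
    ("CurrentAssets", "isPartOf", "Assets"),
    ("CashAndCashEquivalents", "isPartOf", "CurrentAssets"),
]

def parts_of(whole: str) -> list:
    """Return the elements the graph says are part of a given total."""
    return [s for (s, p, o) in triples if p == "isPartOf" and o == whole]

print(parts_of("Assets"))          # ['CurrentAssets']
print(parts_of("CurrentAssets"))   # ['CashAndCashEquivalents']
```

Because every relation is explicit, a machine can traverse and verify the graph deterministically, which is the precision an autogenerated LLM lacks.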

HOWEVER, people will try the LLM approach, it will be better than nothing, and it will contribute to a proper rules-based model created by skilled, knowledgeable humans at some point in the future.  So the less precise LLM/GPT-4 approach can help pave the way to rules-based approaches.

All this stuff will “cycle” as people figure this out over the coming years.  The status quo is flat out doomed.

If you really want to understand the moving pieces of the puzzle, I would recommend really studying these three articles:

Your brain has different parts.  The part responsible for intuition is a different part than the part responsible for reasoning.  And so it will be for artificial intelligence. The artificial intelligence of your future will be a hybrid, like your brain.

As pointed out in that third article by Alan Morrison, who used to work at PwC, GPT-4 (LLMs, machine learning) and knowledge graphs (rules-based) will be combined into hybrid solutions.  Something like Logical English would make this even better and more precise.  I suspect that people will try to make GPT-4 work and that will be their focus in the short term because it is easier for the user.  But it is not precise enough for many things; its limitations will be understood, rules-based approaches will then be valued more, the rules will fill the gaps, and a hybrid system will serve humans well.

Rules-based systems are similar to the "reasoning" part of your brain. GPT-4 and other statistically based machine learning is like your "intuition".
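
Here is a minimal sketch of what that hybrid might look like, assuming a hypothetical draft_with_llm() stand-in for the statistical side and one hand-authored rule for the reasoning side.

```
# Sketch of a hybrid pipeline: a statistical model drafts ("intuition"),
# an explicit rule base checks the draft ("reasoning").
# draft_with_llm() and the drafted amounts are hypothetical placeholders.

def draft_with_llm(report_text: str) -> dict:
    """Stand-in for a GPT-4 style extraction of facts from a report."""
    return {"Assets": 1_000_000, "Liabilities": 600_000, "Equity": 390_000}

def validate(facts: dict) -> list:
    """Apply explicit, human-authored rules to the drafted facts."""
    errors = []
    if facts["Assets"] != facts["Liabilities"] + facts["Equity"]:
        errors.append("Assets must equal Liabilities + Equity")
    return errors

facts = draft_with_llm("...report text...")
problems = validate(facts)
print(problems or "Draft passes the rule checks")
```

The statistical side makes a fast guess; the rules side catches the guess that does not add up.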

I suspect that each of the Big 4 CPA firms will create their own competing custom hybrid models.  Maybe not.  Maybe there will be a bunch more custom hybrid models that will be built.  Maybe others are seeing this even more clearly than I am and have even better ideas.  Perhaps.  But regardless, the status quo is definitely doomed.

As Jason Staats, CPA, said in this video, Using AI To Prepare Tax Returns, "AI will not put accountants out of work.  Accountants using AI will put accountants not using AI out of work."

Personally, I think the risk calculus changed because of GPT-4.

Additional Resources:
