If Humans Fail, Machines Take Action: Assessment of Accounting Error Detection Using Machine Learning
62 Pages. Posted: 18 Nov 2021
Date Written: September 28, 2021
Abstract
Various accounting scandals have demonstrated that current enforcement systems may fail to detect erroneous reporting or to identify fraudulent actions. Enforcement systems should therefore continuously reassess and update their methods. Existing studies have shown that textual information and machine learning techniques can substantially support accounting error detection. We use proprietary data from enforcement investigations conducted by the German enforcement system to assess the feasibility of such applications. Our classification models span a wide range of classification and feature selection methods built on three indicator categories: financial, linguistic, and content. In contrast to many previous studies, we evaluate classification performance on a realistic, imbalanced holdout sample. We find that content features are particularly important error indicators and that combining indicator categories has a significant impact on error detection. Feature selection also plays an essential role in preventing indicator overload. Our findings have important implications for the development of predictive error-detection systems.
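The evaluation design described above, balancing the training data with SMOTE while keeping the holdout sample at its realistic class imbalance, can be illustrated with a minimal sketch. This is not the authors' pipeline: the data are synthetic stand-ins for the proprietary enforcement data, the classifier is an arbitrary logistic regression, and the oversampler is a simplified SMOTE-like interpolation rather than a library implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic imbalanced data: ~5% "error" firms (hypothetical stand-in
# for the paper's proprietary enforcement investigation data).
n_maj, n_min = 950, 50
X = np.vstack([rng.normal(0.0, 1.0, (n_maj, 4)),
               rng.normal(1.0, 1.0, (n_min, 4))])
y = np.array([0] * n_maj + [1] * n_min)

# Split first, so the holdout keeps its realistic class imbalance.
X_tr, X_ho, y_tr, y_ho = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

def smote_like(X_min, n_new, k=5):
    """Simplified SMOTE: interpolate between a random minority sample
    and one of its k nearest minority-class neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]  # skip the point itself
        j = rng.choice(nbrs)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

# Oversample the minority class in the training set only.
X_min = X_tr[y_tr == 1]
n_new = int((y_tr == 0).sum() - (y_tr == 1).sum())
X_bal = np.vstack([X_tr, smote_like(X_min, n_new)])
y_bal = np.concatenate([y_tr, np.ones(n_new, dtype=int)])

clf = LogisticRegression().fit(X_bal, y_bal)
auc = roc_auc_score(y_ho, clf.predict_proba(X_ho)[:, 1])
print(f"holdout AUC: {auc:.2f}")
```

The key design point mirrored from the abstract is that synthetic minority samples are generated only after the holdout split, so the reported performance reflects the imbalance a deployed error-detection system would actually face.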
Keywords: enforcement, error detection, feature selection, SMOTE, topic modeling
JEL Classification: C53, C88, M40, M41, M48