3AI
Project leader:
- Jan Svanberg, Associate Professor, SU
Researchers:
- Isak Samsten, Senior Lecturer, DSV, SU
Project period: 2020-01-01 to 2021-12-31
Funding: Vetenskapsrådet (Swedish Research Council) - Research grant
Budget: 3M SEK
Description
This project aims to develop interpretable and explainable machine learning systems and to investigate the design of automated continuous auditing (CA). Previous attempts using rule-based data structures and static software, as in early expert systems, generated an unmanageable number of exceptions, offsetting all benefits of CA. We examine whether ML that learns to mimic human auditors can solve this problem. More specifically, we investigate:
- the ability of ML to prioritize exceptions in a CA system,
- how transparency impacts ML performance and how opaque algorithms can be made transparent,
- how CA using ML can be accommodated within the International Standards on Auditing (ISA), considering that the ISA describe only manual auditing.
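To make the first question concrete, the sketch below shows one simple way an exception-prioritization step could look: each flagged transaction gets a transparent, additive risk score whose per-feature contributions can be shown to the auditor. All feature names, weights, and the scoring rule are illustrative assumptions for exposition, not the project's actual method or data.

```python
# Hypothetical sketch: ranking continuous-auditing (CA) exceptions with a
# transparent, additive risk score. Every feature, weight, and threshold
# here is an illustrative assumption, not part of the project itself.
from dataclasses import dataclass

@dataclass
class FlaggedTxn:
    txn_id: str
    amount_zscore: float  # how unusual the amount is for this account
    off_hours: bool       # posted outside business hours
    manual_entry: bool    # manually keyed rather than system-generated

# Interpretable weights: each term's contribution is reportable on its own.
WEIGHTS = {"amount_zscore": 0.5, "off_hours": 1.0, "manual_entry": 1.5}

def risk_score(e: FlaggedTxn):
    """Return (total score, per-feature contributions) for one exception."""
    contributions = {
        "amount_zscore": WEIGHTS["amount_zscore"] * abs(e.amount_zscore),
        "off_hours": WEIGHTS["off_hours"] * float(e.off_hours),
        "manual_entry": WEIGHTS["manual_entry"] * float(e.manual_entry),
    }
    return sum(contributions.values()), contributions

def prioritize(exceptions):
    """Order the auditor's review queue: highest risk first, ties by id."""
    return sorted(exceptions, key=lambda e: (-risk_score(e)[0], e.txn_id))

if __name__ == "__main__":
    queue = prioritize([
        FlaggedTxn("T1", amount_zscore=0.2, off_hours=False, manual_entry=False),
        FlaggedTxn("T2", amount_zscore=3.1, off_hours=True, manual_entry=True),
        FlaggedTxn("T3", amount_zscore=1.0, off_hours=True, manual_entry=False),
    ])
    for e in queue:
        total, parts = risk_score(e)
        print(e.txn_id, round(total, 2), parts)
```

The additive form is one design choice that keeps the ranking explainable: unlike an opaque classifier, every position in the review queue can be traced back to named feature contributions.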
Main objectives
- W1: Investigate machine learning-based CA
- W2: Examine and develop transparent and explainable machine learning systems for CA
- W3: Investigate how automated CA systems can be accommodated in the ISA
Keywords: explainability, temporal data mining, fintech