A Reference Process for Judging Reliability of Classification Results in Predictive Analytics

Authors
S. Staudinger, C. Schütz, M. Schrefl
Paper
Stau21a (2021)
Citation
Proceedings of the 10th International Conference on Data Science, Technology and Applications (DATA 2021), held online, July 6-8, 2021, SciTePress/Springer, ISBN 978-989-758-521-0, ISSN 2184-285X, pp. 124-134.

Abstract (English)

Organizations employ data mining to discover patterns in historical data. The models learned from the data allow analysts to make predictions about future events of interest. Different global measures, e.g., accuracy, sensitivity, and specificity, are employed to evaluate a predictive model. However, in order to properly assess the reliability of an individual prediction for a specific input case, global measures may not suffice. In this paper, we propose a reference process for the development of predictive analytics applications that allow analysts to better judge the reliability of individual classification results. The proposed reference process is aligned with the CRISP-DM stages and complements each stage with a number of tasks required for reliability checking. We further explain two generic approaches that assist analysts with assessing the reliability of individual predictions, namely perturbation and local quality measures.
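To make the perturbation idea concrete, the following Python sketch (not the procedure from the paper; the dataset, model, noise scale, and the perturbation_stability helper are illustrative assumptions) perturbs a single input case with small random noise and reports how often the classifier's prediction stays the same:

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a classifier on a standard dataset (illustrative choice only).
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def perturbation_stability(model, case, feature_std, n_samples=200,
                           noise_scale=0.05, seed=0):
    """Fraction of slightly perturbed copies of `case` that keep its label."""
    rng = np.random.default_rng(seed)
    base_label = model.predict(case.reshape(1, -1))[0]
    # Scale the noise per feature so perturbations are comparable
    # across features with different value ranges.
    noise = rng.normal(scale=noise_scale * feature_std,
                       size=(n_samples, case.size))
    labels = model.predict(case + noise)
    return float(np.mean(labels == base_label))

case = X[0]
score = perturbation_stability(model, case, feature_std=X.std(axis=0))
print(f"Predicted class: {model.predict(case.reshape(1, -1))[0]}")
print(f"Perturbation stability: {score:.2f}")

A stability score near 1.0 suggests the case lies well inside a class region, while a low score hints that it sits near a decision boundary, which is exactly the situation where global measures such as accuracy say little about the reliability of this particular prediction.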

Keywords: Business Intelligence, Business Analytics, Decision Support Systems, Data Mining, CRISP-DM