Decoding AI: Transparent Models for Understandable Decision-Making

Satyanarayan Kanungo

Abstract

This paper presents Explainable AI (XAI) methods for transparent decision-making in medical image analysis. Three explanation methods were applied to a medical image dataset with the aim of making the decisions of a CNN (Convolutional Neural Network) more comprehensible: two widely used machine learning explanation methods, LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), together with an alternative approach, the CIU (Contextual Importance and Utility) method. The resulting explanations were assessed through human evaluation: user studies built on the SHAP, CIU, and LIME explanations were conducted as a web-based survey in which participants reported their understanding of the explanations. Three groups of users (n = 20, 20, 20), each given a different explanation form, were analysed quantitatively, and notable differences in human decision-making were identified between the different explanation-support settings.
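To make the methodology concrete, the sketch below shows how LIME and SHAP explanations can be produced for an image classifier. This is a minimal illustration, not the paper's actual pipeline: the toy CNN, input size, and random placeholder data are assumptions, since the abstract does not specify the model architecture or dataset.

```python
# Minimal sketch: applying LIME and SHAP to an image CNN.
# The model, image size, and data below are placeholders; the paper's
# actual CNN and medical-image dataset are not specified in this abstract.
import numpy as np
import tensorflow as tf
from lime import lime_image
import shap

# Toy CNN standing in for the paper's classifier (hypothetical architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

images = np.random.rand(10, 64, 64, 3).astype(np.float32)  # placeholder data

# LIME: perturb superpixels of one image and fit a local surrogate model
# around the CNN's prediction for that image.
lime_explainer = lime_image.LimeImageExplainer()
lime_explanation = lime_explainer.explain_instance(
    images[0], model.predict, top_labels=1, num_samples=200
)

# SHAP: attribute the prediction to input pixels using expected gradients,
# with the image batch serving as the background distribution.
shap_explainer = shap.GradientExplainer(model, images)
shap_values = shap_explainer.shap_values(images[:1])
```

CIU, the third method evaluated in the paper, has no equally standard library API, so it is omitted from this sketch.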
