An Empirical Study on the Integration of Explainable Machine Learning Models in Real-World Decision-Making Systems
Keywords:
Explainable AI (XAI), interpretable machine learning, real-world decision-making, SHAP, model trust, human-in-the-loop AI
Abstract
Purpose:
This study examines how explainable machine learning (ML) models can be effectively embedded into real-world decision-making systems across domains such as healthcare, finance, and cybersecurity.
Design/methodology/approach:
The research adopts an empirical methodology involving case studies and simulations of ML model deployment, comparing traditional black-box models against explainable alternatives: post-hoc explanations of the black-box models generated with SHAP and LIME, and intrinsically interpretable decision trees (a minimal illustrative pipeline is sketched below).
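The comparison can be outlined with a minimal sketch, assuming scikit-learn, shap, and lime are available. The breast-cancer dataset, the gradient-boosting baseline, and all parameter choices below are illustrative placeholders, not the datasets or configurations evaluated in the study.

```python
# Minimal illustrative sketch of the black-box vs. explainable-model comparison
# described above. Assumes scikit-learn, shap, and lime are installed; the
# breast-cancer dataset and model settings are placeholders, not the study's data.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
import shap

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)

# "Black-box" baseline vs. an intrinsically interpretable shallow decision tree.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("black-box accuracy:", accuracy_score(y_test, black_box.predict(X_test)))
print("decision-tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))

# Post-hoc feature attributions for the black-box model with SHAP.
shap_values = shap.TreeExplainer(black_box).shap_values(X_test)
print("SHAP attributions, first test case (first 5 features):", shap_values[0][:5])

# Post-hoc local explanation of a single black-box prediction with LIME.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], black_box.predict_proba, num_features=5
)
print("LIME explanation, first test case:", lime_exp.as_list())
```

The accuracy printouts support the performance comparison, while the SHAP and LIME outputs illustrate the two post-hoc explanation styles; the shallow decision tree needs neither, since its structure can be inspected directly.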
Findings:
Explainable ML models increase user trust and adoption in high-stakes environments. They also improve error detection and compliance without significantly reducing model performance.
Practical implications:
Organizations can improve interpretability and accountability by adopting explainable models, facilitating smoother integration into decision-critical processes.
Originality/value:
This work is among the first empirical comparisons of multiple explainable ML techniques in real-world contexts and offers clear guidelines for practitioners balancing performance with transparency.