Challenges in Implementing Explainable AI (XAI) for Threat Detection in Banking Networks
DOI: https://doi.org/10.38124/ijsrmt.v4i8.716

Abstract
This research examines the challenges of implementing Explainable Artificial Intelligence (XAI) for threat detection in banking networks. XAI is intended to provide greater transparency and accountability in AI-driven decision-making, thereby addressing concerns about the reliability of, and trust in, such processes. The research also highlights the benefits and prospects of XAI in banking networks and its role in improving threat detection and reducing false alarms in financial transactions. It further discusses the key challenges that impede the success of XAI in banking networks, including model complexity, data quality, and regulatory compliance, among others. The author then provides recommendations and a way forward for addressing these challenges when implementing XAI in banking networks. The research concludes that XAI has the potential to enhance threat detection in banking networks, but that its implementation calls for careful consideration, as XAI will only grow in significance for banking networks going forward.
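To make the idea of explainable threat detection concrete, the following is a minimal, hypothetical sketch (not the paper's method): a fraud classifier trained on synthetic transaction features, explained with permutation feature importance, a standard model-agnostic XAI technique. The feature names and data-generating assumptions are illustrative only.

```python
# Illustrative sketch only: hypothetical transaction features and synthetic labels,
# not data or a method from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical transaction features: amount, hour of day, distance from usual
# location, and number of transactions in the last 24 hours.
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),   # transaction amount
    rng.integers(0, 24, n),       # hour of day
    rng.exponential(10.0, n),     # distance from usual location (km)
    rng.poisson(2.0, n),          # transactions in the last 24 hours
])
# Synthetic "fraud" label loosely tied to large, distant, late-night transactions.
risk = 0.002 * X[:, 0] + 0.05 * X[:, 2] + 0.3 * (X[:, 1] >= 22)
y = (risk + rng.normal(0, 0.5, n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each feature degrade accuracy?
# Larger drops indicate features the alerting decision relies on, which an
# analyst or auditor can then review for plausibility.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
feature_names = ["amount", "hour", "distance_km", "txns_24h"]
for name, imp in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.4f}")
```

In a banking setting, an explanation of this kind would accompany each alert or model release so that analysts and regulators can see which transaction attributes drive the decision, rather than relying on an opaque score.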
License
Copyright (c) 2025 International Journal of Scientific Research and Modern Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.