Dynamic Trust Score Explanation and Adjustment in Zero Trust Architecture Using Large Language Models
Keywords:
Dynamic trust, Zero Trust, Large Language Models, Signal aggregation, Adaptive enforcement, Explainable AI
Abstract
Dynamic trust scoring in Zero Trust Architecture (ZTA) enables continuous risk assessment using real-time inputs from multiple security sources. These data sources are combined to compute a continuously updated score that reflects the level of trust in a given access request or transaction. This study uses explainable Large Language Models (LLMs) to generate comprehensible explanations for why trust levels change. The model leverages a retrieval-augmented generation (RAG) pipeline to consolidate diverse security signals, enrich them with contextual data, and produce human-readable justifications for trust score updates. The system maps these explanations to corresponding ZTA policies, allowing it to trigger security measures such as two-factor authentication prompts, denial of access requests, and isolation of devices. Practical evaluations demonstrate that the approach successfully handles suspicious login attempts and identifies misuse of critical assets. Adding LLM-generated explanations to ZTA has been shown to improve the timeliness and accuracy of security decisions and leaves the system better prepared for emerging cyber threats.
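The following is a minimal, illustrative sketch, not the paper's implementation, of how diverse security signals might be aggregated into a dynamic trust score and mapped to ZTA enforcement actions such as a two-factor authentication prompt, request denial, or device isolation. The signal names, weights, and thresholds are assumptions chosen for illustration, and the RAG/LLM explanation step is stubbed out as a simple template.

```python
# Illustrative sketch only: weighted aggregation of security signals into a
# dynamic trust score, mapped to Zero Trust enforcement tiers. All signal
# names, weights, and thresholds are hypothetical; the LLM explanation step
# is a placeholder, not the system described in the paper.

from typing import Dict


def trust_score(signals: Dict[str, float], weights: Dict[str, float]) -> float:
    """Each signal is a normalized risk in [0, 1] (1.0 = maximum risk).
    Trust = 1 - (weighted mean risk), clamped to [0, 1]."""
    total_weight = sum(weights.get(name, 1.0) for name in signals)
    weighted_risk = sum(weights.get(name, 1.0) * risk for name, risk in signals.items())
    return max(0.0, 1.0 - weighted_risk / total_weight)


def enforcement_action(score: float) -> str:
    """Map the trust score onto illustrative ZTA policy tiers."""
    if score >= 0.8:
        return "allow"
    if score >= 0.5:
        return "step_up_mfa"      # prompt for two-factor authentication
    if score >= 0.3:
        return "deny_request"     # deny the access request
    return "isolate_device"       # quarantine the device


def explain(signals: Dict[str, float], score: float, action: str) -> str:
    """Placeholder for the RAG + LLM explanation step: a real system would
    retrieve policy/context documents and prompt an LLM; here we emit a
    template summary of the largest contributing signal."""
    top = max(signals, key=signals.get)
    return (f"Trust score {score:.2f} -> action '{action}'. "
            f"Largest risk contribution: '{top}' ({signals[top]:.2f}).")


if __name__ == "__main__":
    # Hypothetical signals for a suspicious login attempt.
    signals = {"impossible_travel": 0.9, "device_posture": 0.4, "mfa_recent": 0.1}
    weights = {"impossible_travel": 3.0, "device_posture": 2.0, "mfa_recent": 1.0}

    score = trust_score(signals, weights)
    action = enforcement_action(score)
    print(explain(signals, score, action))
```

In a deployed system, the explanation stub would be replaced by retrieval over policy and contextual security data followed by an LLM prompt, and the resulting justification would be attached to the enforcement decision, as the abstract describes.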
License
Copyright (c) 2025 Digvijay Parmar

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
License Terms
Ownership and Licensing:
Authors of research papers submitted to this journal, owned and operated by The Science Brigade Group, retain copyright of their work while granting the journal certain rights. Authors retain ownership of the copyright and grant the journal a right of first publication. Simultaneously, authors agree to license their research papers under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License.
License Permissions:
Under the CC BY-NC-SA 4.0 License, others are permitted to share and adapt the work, as long as proper attribution is given to the authors and acknowledgement is made of the initial publication in the Journal. This license allows for the broad dissemination and utilization of research papers.
Additional Distribution Arrangements:
Authors are free to enter into separate contractual arrangements for the non-exclusive distribution of the journal's published version of the work. This may include posting the work to institutional repositories, publishing it in journals or books, or other forms of dissemination. In such cases, authors are requested to acknowledge the initial publication of the work in this Journal.
Online Posting:
Authors are encouraged to share their work online, including in institutional repositories, disciplinary repositories, or on their personal websites. This permission applies both prior to and during the submission process to the Journal. Online sharing enhances the visibility and accessibility of the research papers.
Responsibility and Liability:
Authors are responsible for ensuring that their research papers do not infringe upon the copyright, privacy, or other rights of any third party. The Science Brigade Publishers disclaim any liability or responsibility for any copyright infringement or violation of third-party rights in the research papers.