Leveraging Retrieval-Augmented Generation (RAG) and LLMs to Develop a Multi-Team Confluence Insights Dashboard
DOI: https://doi.org/10.38124/ijsrmt.v4i8.977

Keywords: Confluence Insights Dashboard, Vector Databases, Retrieval-Augmented Generation (RAG), Metadata Filtering, Large Language Models (LLMs), Multi-Model Orchestration, Real-Time Analytics, Semantic Search, Knowledge Transparency

Abstract
As organizations scale, Confluence often becomes their primary platform for team-wide knowledge exchange, yet when multiple teams contribute, project information becomes fragmented and quickly outdated. This paper presents a Retrieval-Augmented Generation (RAG)–enabled Multi-Team Confluence Insights Dashboard that retrieves team documentation and generates real-time visual analytics. The architecture leverages vector databases for scalable semantic search, large language models (LLMs) for context-aware summarization, and dynamic charting for actionable insights. Key strategies include scheduled re-indexing and metadata filtering for data freshness, vector database selection for scalability and latency optimization, RAG-based constraints for transparency and control, and multi-model orchestration to ensure deterministic, reliable outputs. The solution converts unstructured Confluence content into an interactive system that delivers dependable, decision-ready knowledge.
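To make the retrieval pipeline described in the abstract concrete, the following is a minimal sketch of metadata-filtered semantic search over indexed Confluence pages, with the retrieved passages supplied to an LLM as the only permitted context (the RAG-based constraint). The specifics here are assumptions, not the paper's implementation: Chroma is used as the vector store, the "team"/"updated_at" metadata schema is illustrative, and call_llm() is a hypothetical stub standing in for the actual multi-model orchestration layer.

```python
# Sketch: metadata-filtered retrieval feeding an LLM summarizer (assumed stack, not the paper's code).
import chromadb

client = chromadb.Client()  # in-memory store; a persistent client would be used in production
pages = client.create_collection(name="confluence_pages")

# Index a few example page excerpts with team and freshness metadata.
pages.add(
    ids=["page-101", "page-102", "page-103"],
    documents=[
        "Platform team: migration to the new CI pipeline is 80% complete.",
        "Data team: nightly ETL jobs now publish freshness metrics to the dashboard.",
        "Platform team: legacy deploy scripts are deprecated as of Q3.",
    ],
    metadatas=[
        {"team": "platform", "updated_at": "2025-07-01"},
        {"team": "data", "updated_at": "2025-07-15"},
        {"team": "platform", "updated_at": "2025-06-20"},
    ],
)

def call_llm(prompt: str) -> str:
    # Placeholder: in the described architecture this call would be routed
    # through the multi-model orchestration layer to a real LLM.
    return f"[LLM summary of a {len(prompt)}-character prompt]"

def team_summary(question: str, team: str, k: int = 2) -> str:
    """Retrieve the top-k passages for one team and summarize them with an LLM."""
    hits = pages.query(query_texts=[question], n_results=k, where={"team": team})
    context = "\n".join(hits["documents"][0])
    prompt = (
        "Answer using ONLY the context below; reply 'not found' otherwise.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(team_summary("What is the status of the CI migration?", team="platform"))
```

Constraining the prompt to retrieved, team-scoped passages is what keeps the dashboard's summaries traceable back to specific Confluence pages; scheduled re-indexing would refresh the collection so the "updated_at" metadata stays current.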
License
Copyright (c) 2025 International Journal of Scientific Research and Modern Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.