New features of Mindbreeze InSpire 25.7 Release
Want to check out the highlights of the Mindbreeze InSpire 25.7 Release? Learn more in the following blog post.
New Mindbreeze InSpire AI Chat
With the Mindbreeze InSpire 25.7 Release, the Mindbreeze InSpire AI Chat receives a comprehensive visual and functional redesign. The new AI Chat offers a clear structure, significantly greater transparency thanks to direct access to quoted text passages in source documents, and a wide range of customization options. Conversations spanning multiple rounds of interaction also feel noticeably more natural, thanks to optimized context processing and improved incorporation of previous answers.
As part of the redesign, the user experience has become more intuitive and elegant. Users receive visually concise chat answers with an immediately accessible list of source documents, allowing them to understand and verify generated answers more quickly.
Footnotes and visual highlights in the generated answers help users evaluate the content: footnotes point to the information sources used, making their origin easy to identify, and direct quotations from the source documents are visually highlighted.
Via a footnote or the list of source documents, users can open the respective source, analyze it in more detail, and view further information and the essential context. Access to the source document and the relevant text passage is simple, fast, and direct: users can quickly switch between different text passages, and the quoted passage is highlighted in the source document as well.
Administrators can customize the entire AI Chat to meet company requirements: the AI Chat can be styled in the company's own corporate design, and functional aspects, such as the display of direct quotations and source documents, can be configured in detail.
Improved answer quality through user input transformation in RAG pipelines
To improve the quality of generated answers, administrators can transform the user input in RAG pipelines. The user input is processed by an LLM and optionally expanded with a configured prompt; the transformed input enables the RAG pipeline to provide even more precise information in the required context. The previous chat history can also be included as additional input, for example to make chatting with company data even more intuitive.
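Conceptually, such an input transformation step can be sketched as follows. This is a minimal illustration, not Mindbreeze's actual API: `llm_complete` is a stand-in for any LLM call, and all names and prompts are hypothetical.

```python
# Sketch of user-input transformation in a RAG pipeline.
# All identifiers here are illustrative assumptions, not Mindbreeze APIs.

def llm_complete(prompt: str) -> str:
    """Stand-in for a real LLM call; here it just echoes a rewritten query."""
    # A real implementation would call a hosted or local model instead.
    return prompt.splitlines()[-1].strip() + " (expanded with synonyms and context)"

def transform_input(user_input: str, history: list[str], instruction: str) -> str:
    """Rewrite the raw user input before retrieval, optionally using chat history."""
    context = "\n".join(history[-3:])  # include only the most recent turns
    prompt = (
        f"{instruction}\n"
        f"Chat history:\n{context}\n"
        f"User question:\n{user_input}"
    )
    return llm_complete(prompt)

# The transformed query is then used for retrieval instead of the raw input.
query = transform_input(
    "What about the warranty?",
    history=["User asked about the X200 printer."],
    instruction="Rewrite the question so it is self-contained and precise.",
)
print(query)
```

The key design point is that retrieval operates on the enriched, self-contained query rather than on the raw, possibly ambiguous user input.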
Metrics for comprehensive evaluation of RAG pipelines are available
Metrics enable administrators to perform more in-depth and comprehensive evaluations and to automate the testing and assessment of RAG pipelines. In addition to analyzing the retrieval and generation process, metrics are now also available for evaluating factual correctness, faithfulness, and context retrieval. Mindbreeze customers can therefore analyze their RAG pipelines in even greater detail and adapt them to their requirements.
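To make the metric names concrete, the following toy sketch approximates faithfulness and context recall with simple token overlap. Production evaluation frameworks, including Mindbreeze's metrics, typically use LLM-based judgments instead; these functions are illustrative assumptions only.

```python
# Toy token-overlap approximations of two RAG evaluation metrics.
# Real evaluators use LLM-based judgments; this only shows the idea.

def _tokens(text: str) -> set[str]:
    return set(text.lower().split())

def faithfulness(answer: str, contexts: list[str]) -> float:
    """Fraction of answer tokens that are grounded in the retrieved contexts."""
    answer_toks = _tokens(answer)
    context_toks = set().union(*(_tokens(c) for c in contexts))
    return len(answer_toks & context_toks) / len(answer_toks) if answer_toks else 0.0

def context_recall(reference: str, contexts: list[str]) -> float:
    """Fraction of reference-answer tokens covered by the retrieved contexts."""
    ref_toks = _tokens(reference)
    context_toks = set().union(*(_tokens(c) for c in contexts))
    return len(ref_toks & context_toks) / len(ref_toks) if ref_toks else 0.0

score = faithfulness(
    "the warranty lasts two years",
    ["the warranty lasts two years for all printers"],
)
print(score)
```

A low faithfulness score flags answers that contain material not supported by the retrieved documents, while low context recall indicates the retriever is missing information needed for the reference answer.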
Detailed information on all innovations can be found in our release notes.
Contact our experts for further information.
Frequently asked questions
What's new in the Mindbreeze InSpire AI Chat with the 25.7 release?
With the 25.7 release, the AI Chat in Mindbreeze InSpire has been completely redesigned, both visually and functionally. The chat now offers a clearer structure, better transparency (including direct access to quoted text passages from source documents), and a much improved conversation flow across multiple rounds thanks to optimized context processing.
Is there a new connector for Atlassian Confluence Cloud?
Yes, a new Atlassian Confluence Cloud REST connector is available with this release, enabling customers to index content from Confluence Cloud and integrate it into their search and Insight App workflows.
Does the release simplify deploying LLMs in Kubernetes?
Yes. The release simplifies the deployment of LLMs in Kubernetes by allowing LLMs to be included when creating a development snapshot. This makes it easier to transfer configured LLMs between appliances or deploy them in container environments.
What security improvements does the 25.7 release include?
The 25.7 release enhances overall security: it updates firmware and core components (Dell firmware, Tomcat, .NET 8, Chromium, Core OS, OpenJDK, Expat) and updates the web server stack, which now includes support for TLS 1.3 and HTTP/2.