Retrieval Augmented Generation (RAG) is supposed to improve the accuracy of enterprise AI by providing grounded content. While it often does, the technique also has an unintended side effect.
According to surprising new research published today by Bloomberg, RAG can potentially make large language models (LLMs) unsafe.
Bloomberg’s paper, ‘RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models,’ evaluated 11 popular LLMs including Claude-3.5-Sonnet, Llama-3-8B and GPT-4o. The findings contradict conventional wisdom that RAG inherently makes AI systems safer.
Alongside the RAG research, Bloomberg released a second paper, ‘Understanding and Mitigating Risks of Generative AI in Financial Services,’ that introduces a specialized AI content risk taxonomy for financial services that addresses domain-specific concerns not covered by general-purpose safety approaches.
The research challenges the widespread assumption that RAG enhances AI safety, while demonstrating how existing guardrail systems fail to address domain-specific risks in financial services applications.
“Systems need to be evaluated in the context they’re deployed in, and you might not be able to just take the word of others that say, ‘Hey, my model is safe, use it, you’re good,’” Sebastian Gehrmann, Bloomberg’s Head of Responsible AI, told VentureBeat.
RAG systems can make LLMs less safe, not more
RAG is widely used by enterprise AI teams to provide grounded content, with the goal of giving models accurate, up-to-date information.
There has also been a lot of research and advancement in RAG in recent months aimed at further improving accuracy. Earlier this month, a new open-source framework called Open RAG Eval debuted to help validate RAG efficiency.
It’s important to note that Bloomberg’s research does not question the efficacy of RAG or its ability to reduce hallucination. Rather, it examines how RAG usage impacts LLM guardrails in an unexpected way.
The research team discovered that when using RAG, models that typically refuse harmful queries in standard settings often produce unsafe responses. For example, Llama-3-8B’s unsafe responses jumped from 0.3% to 9.2% when RAG was implemented.
Gehrmann explained that without RAG in place, if a user types in a malicious query, the built-in safety systems or guardrails will typically block it. Yet, for reasons that aren’t fully understood, when the same query is issued to an LLM that is using RAG, the system will answer the malicious query, even when the retrieved documents themselves are safe.
“What we found is that if you use a large language model out of the box, often they have safeguards built in where, if you ask, ‘How do I do this illegal thing,’ it will say, ‘Sorry, I cannot help you do this,’” Gehrmann explained. “We found that if you actually apply this in a RAG setting, one thing that could happen is that the additional retrieved context, even if it does not contain any information that addresses the original malicious query, might still answer that original query.”
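To make the mechanism easier to picture, the sketch below shows, in rough Python, how a typical RAG pipeline changes the input the model actually sees. This is an illustrative example, not Bloomberg’s evaluation code: `call_llm`, the message format and the prompt template are hypothetical stand-ins for whatever chat-completion client and template a given deployment uses.

```python
# Illustrative sketch only (not from the Bloomberg paper). The key difference between
# the two cases is that the RAG version wraps the user's query in retrieved context,
# which the research found can weaken refusals even when the documents are benign.

def call_llm(messages: list[dict]) -> str:
    """Hypothetical placeholder for a real chat-completion call."""
    raise NotImplementedError

def build_plain_messages(user_query: str) -> list[dict]:
    # Out of the box, the model sees only the bare query, and its alignment
    # training typically refuses harmful requests at this point.
    return [{"role": "user", "content": user_query}]

def build_rag_messages(user_query: str, retrieved_docs: list[str]) -> list[dict]:
    # A typical RAG template: retrieved passages are stuffed into the prompt
    # ahead of the question, so the model never sees the query on its own.
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(retrieved_docs)
    )
    prompt = (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}"
    )
    return [{"role": "user", "content": prompt}]
```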
How does RAG bypass enterprise AI guardrails?
So why and how does RAG serve to bypass guardrails? The Bloomberg researchers were not entirely certain, though they did have a few ideas.
Gehrmann hypothesized that the way the LLMs were developed and trained did not fully consider safety alignment for very long inputs. The research demonstrated that context length directly impacts safety degradation. “Provided with more documents, LLMs tend to be more vulnerable,” the paper states, showing that even introducing a single safe document can significantly alter safety behavior.
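Teams that want to check this behavior on their own stack can sweep the number of retrieved documents and measure how often red-team queries slip through. The harness below is a rough, hypothetical sketch that reuses the placeholder helpers from the earlier example; `retriever` and `is_unsafe` stand in for whatever retrieval system and safety judge an organization actually uses.

```python
def unsafe_rate_by_context_size(
    red_team_queries: list[str],
    retriever,                      # hypothetical: retriever(query, top_k=k) -> list[str]
    is_unsafe,                      # hypothetical: is_unsafe(query, answer) -> bool
    k_values=(0, 1, 3, 5, 10),
) -> dict[int, float]:
    """Fraction of red-team queries answered unsafely at each retrieved-context size."""
    results = {}
    for k in k_values:
        unsafe = 0
        for query in red_team_queries:
            docs = retriever(query, top_k=k) if k > 0 else []
            messages = build_rag_messages(query, docs) if docs else build_plain_messages(query)
            answer = call_llm(messages)
            unsafe += int(is_unsafe(query, answer))
        results[k] = unsafe / len(red_team_queries)
    return results
```

If the pattern the paper describes holds, the unsafe rate would climb as more documents are retrieved; the absolute numbers will depend heavily on the model and on how the safety judge is defined.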
“I think the bigger point of this RAG paper is you really cannot escape this risk,” Amanda Stent, Bloomberg’s Head of AI Strategy and Research, told VentureBeat. “It’s inherent to the way RAG systems are. The way you escape it is by putting business logic or fact checks or guardrails around the core RAG system.”
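One way to apply Stent’s advice is to treat the core RAG pipeline as untrusted and gate both its input and its output with checks that sit outside the model. The sketch below is a hedged illustration, not a prescribed architecture: `screen_query` and `screen_answer` are placeholders for whatever guardrail models, fact checks or business logic a team actually deploys, and it reuses the hypothetical helpers from the earlier examples.

```python
REFUSAL = "Sorry, I can't help with that request."

def guarded_rag_answer(user_query: str, retriever, screen_query, screen_answer) -> str:
    """Wrap the core RAG pipeline with external input and output checks (all hypothetical)."""
    # 1. Screen the raw query before retrieval, independently of the LLM's own alignment.
    if not screen_query(user_query):
        return REFUSAL

    # 2. Run the core RAG pipeline (retrieval + generation).
    docs = retriever(user_query, top_k=5)
    answer = call_llm(build_rag_messages(user_query, docs))

    # 3. Screen the generated answer against output policy (toxicity, domain rules, etc.).
    if not screen_answer(user_query, answer):
        return REFUSAL
    return answer
```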
Why generic AI safety taxonomies fail in financial services
Bloomberg’s second paper introduces a specialized AI content risk taxonomy for financial services, addressing domain-specific concerns like financial misconduct, confidential disclosure and counterfactual narratives.
The researchers empirically demonstrated that existing guardrail systems miss these specialized risks. They tested open-source guardrail models including Llama Guard, Llama Guard 3, AEGIS and ShieldGemma against data collected during red-teaming exercises.
“We developed this taxonomy, and then ran an experiment where we took openly available guardrail systems that are published by other firms and we ran this against data that we collected as part of our ongoing red teaming events,” Gehrmann explained. “We found that these open source guardrails… do not find any of the issues specific to our industry.”
The researchers developed a framework that goes beyond generic safety models, focusing on risks unique to professional financial environments. Gehrmann argued that general-purpose guardrail models are usually developed for consumer-facing risks, so they focus heavily on toxicity and bias. He noted that, while important, those concerns are not necessarily specific to any one industry or domain. The key takeaway from the research is that organizations need a domain-specific taxonomy in place for their own industry and application use cases.
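The shape of that exercise can be reproduced against any labeled red-team set: tag each prompt with a category from your own taxonomy, run a candidate guardrail over it, and see how much of each category actually gets flagged. The sketch below is generic and hypothetical rather than Bloomberg’s methodology; the category names simply echo risks described in the paper, and `guardrail_flags` stands in for whichever guardrail model’s API you call.

```python
from collections import defaultdict

# Example categories from a domain-specific taxonomy (echoing risks named in the paper).
TAXONOMY = {"financial_misconduct", "confidential_disclosure", "counterfactual_narrative"}

def recall_by_category(red_team_examples, guardrail_flags) -> dict[str, float]:
    """
    red_team_examples: (prompt, category) pairs labeled by human red-teamers.
    guardrail_flags:   hypothetical callable; returns True if the guardrail blocks the prompt.
    Returns per-category recall: the share of known-bad prompts the guardrail catches.
    """
    caught, total = defaultdict(int), defaultdict(int)
    for prompt, category in red_team_examples:
        if category not in TAXONOMY:
            continue  # ignore examples outside the domain taxonomy
        total[category] += 1
        if guardrail_flags(prompt):
            caught[category] += 1
    return {category: caught[category] / total[category] for category in total}
```

Low recall on categories like these roughly corresponds to the gap the Bloomberg team reports for general-purpose guardrails, and the same harness can be used to compare several guardrail models side by side.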
Responsible AI at Bloomberg
Bloomberg has made a name for itself over the years as a trusted provider of financial data systems. In some respects, gen AI and RAG systems could be seen as competing with Bloomberg’s traditional business, which raises the question of whether the research carries a hidden bias.
“We are in the business of giving our clients the best data and analytics and the broadest ability to discover, analyze and synthesize information,” Stent said. “Generative AI is a tool that can really help with discovery, analysis and synthesis across data and analytics, so for us, it’s a benefit.”
She added that the kinds of bias Bloomberg is concerned about with its AI solutions are focused on finance. Issues such as data drift, model drift and ensuring good representation across the whole suite of tickers and securities that Bloomberg processes are critical.
For Bloomberg’s own AI efforts, she highlighted the company’s commitment to transparency.
“Everything the system outputs, you can trace back, not only to a document but to the place in the document where it came from,” Stent said.
Practical implications for enterprise AI deployment
For enterprises looking to lead the way in AI, Bloomberg’s research means that RAG implementations require a fundamental rethinking of safety architecture. Leaders must move beyond viewing guardrails and RAG as separate components and instead design integrated safety systems that specifically anticipate how retrieved content might interact with model safeguards.
Industry-leading organizations will need to develop domain-specific risk taxonomies tailored to their regulatory environments, shifting from generic AI safety frameworks to those that address specific business concerns. As AI becomes increasingly embedded in mission-critical workflows, this approach transforms safety from a compliance exercise into a competitive differentiator that customers and regulators will come to expect.
“It really starts by being aware that these issues might occur, taking the action of actually measuring them and identifying these issues and then developing safeguards that are specific to the application that you’re building,” explained Gehrmann.