Traditional Retrieval-Augmented Generation (RAG) retrieves relevant documents once and generates a response using a fixed context. While effective for simple queries, it often fails with complex, multi-hop, or ambiguous questions due to its single-step, static approach.
Multi-Step RAG addresses these limitations by introducing iterative retrieval and reasoning. After an initial retrieval, the system analyzes the retrieved context to identify sub-tasks or refine the query, performing multiple retrieval-reasoning cycles to build a deeper understanding. This process leads to more accurate, coherent, and context-aware answers.
Let’s explore how Multi-Step RAG works in detail.
Multi-Step RAG improves on traditional RAG by performing multiple rounds of retrieval and reasoning, using intermediate results to refine and formulate the next, more effective query.
This iterative process is tailored to complex, multi-hop, or ambiguous questions and allows the system to build deeper context for more accurate responses.
Compared with traditional RAG's single-step retrieval, Multi-Step RAG reasons more deeply, is less prone to inaccuracies, and handles ambiguity and multi-part questions better.
Recursive/Multi-Step RAG extends the standard RAG framework by incorporating iterative processes that handle complex queries through multiple retrieval and reasoning cycles:
Instead of fetching documents once and selecting the best match, the system performs several rounds of retrieval, using the intermediate results to improve query formulation, generate more accurate responses, and ultimately present more refined information.
Complex queries are broken down into a sequence of sub-questions or logical steps. The system reasons over each step individually and combines the results only at the end, ensuring that the final answer is as complete and coherent as possible.
After each retrieval cycle, the context is updated with new findings. This evolving context ensures that subsequent retrievals and reasoning steps are increasingly focused and informed.
At every step of the multi-step procedure, the system can assess the consistency and completeness of the output produced so far, and use that assessment both to self-correct subsequent steps and to make the overall process more robust to misinformation or irrelevant initial content.
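Conceptually, the loop can be sketched in a few lines of Python. The helpers retrieve, reason_and_refine, and synthesize below are placeholders for whatever retriever and LLM calls you plug in, not part of any specific library:

def multi_step_rag(query, retrieve, reason_and_refine, synthesize, max_steps=3):
    # Sketch of the iterative retrieve-reason-refine loop (placeholder helpers):
    #   retrieve(query) -> list of documents
    #   reason_and_refine(query, context) -> (partial_answer, followup_query or None)
    #   synthesize(steps) -> final answer string
    context, steps = [], []
    for _ in range(max_steps):
        context.extend(retrieve(query))                        # evolving context grows with each cycle
        answer, followup = reason_and_refine(query, context)   # extract facts, spot what is still missing
        steps.append((query, answer))                          # keep a trace of every reasoning step
        if not followup:                                       # self-check: stop when nothing is missing
            break
        query = followup                                       # the refined query drives the next retrieval
    return synthesize(steps)                                   # combine partial answers into the final response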
Traditional Single-Step RAG retrieves relevant documents using only the original user query. While sufficient for simple and direct questions, this approach often fails when dealing with complex, multi-hop, or ambiguous queries.
It retrieves once and generates an answer based on a fixed set of documents, which may miss critical context or supporting facts. There's no mechanism to improve the result after the initial retrieval.
Multi-Step RAG addresses these limitations by introducing iterative retrieval and reasoning. Instead of stopping after one retrieval, it continues the process in multiple steps:
The user submits a natural language query to begin the retrieval process.
At the first stage, the system retrieves a set of top-k relevant documents from the knowledge base using a retriever that may be implemented as a vector search or a keyword search.
This retrieval is based only on the original query and introduces the first layer of context.
The retrieved documents are passed to the language model, which reads through them to extract the necessary facts, identify missing information, or uncover sub-questions.
The language model reasons through this evidence and reformulates or expands the query to target the specific information that is still missing and is essential for a complete answer.
The retriever launches a second search with the refined query, returning documents that are more focused, detailed, or relevant than those retrieved in the previous step.
This second retrieval is expected to dive deeper into aspects that may have been overlooked during the first pass.
The LLM is now equipped with a far richer and more comprehensive set of context documents from both retrieval passes, and it synthesizes this information into a response that is well-informed, accurate, and fully aligned with the question being asked.
import os
import time
import gradio as gr
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import LLMChain, RetrievalQA
from langchain.prompts import PromptTemplate
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_groq import ChatGroq
from google.colab import userdata
Import essential libraries for LLMs, retrieval, embeddings, prompts, and Gradio UI.
def extract_answer_only(full_output):
    if "Helpful Answer:" in full_output:
        return full_output.split("Helpful Answer:")[-1].strip()
    return full_output.strip()
def load_documents_from_folder(folder_path):
    documents = []
    for filename in os.listdir(folder_path):
        if filename.endswith(".txt"):
            loader = TextLoader(os.path.join(folder_path, filename))
            docs = loader.load()
            documents.extend(docs)
    return documents
Cleans the raw LLM output. If the LLM includes a prefix like "Helpful Answer:", this strips it out to keep the response clean.
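For example, a hypothetical raw output would be cleaned like this:

print(extract_answer_only("Use the context below. Helpful Answer: Remdesivir shortened recovery time in hospitalized patients."))
# prints: Remdesivir shortened recovery time in hospitalized patients.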
Reads all .txt files from the input/ folder and loads them into memory.
Used as the knowledge base for retrieval.
def should_stop(followup_question, threshold=15):
    return followup_question is None or len(followup_question.strip()) < threshold
documents = load_documents_from_folder("input")
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
docs = text_splitter.split_documents(documents)
If the follow-up question is too short or empty, stop the multi-step loop. This prevents unnecessary or low-quality steps.
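For instance, with the default 15-character threshold, these hypothetical follow-ups behave as follows:

print(should_stop(None))                                    # True  - no follow-up question at all
print(should_stop("Why?"))                                  # True  - too short to be a useful next query
print(should_stop("What dosage was used in the trials?"))   # False - long enough, keep iterating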
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_documents(docs, embeddings)
retriever = vectorstore.as_retriever()
llm = ChatGroq(
    api_key=userdata.get("groq_api"),
    model_name="Llama3-8b-8192"
)
retrieval_chain = RetrievalQA.from_chain_type(llm=llm, retriever=retriever, chain_type="stuff")
Authenticate and load the Groq-hosted Llama3-8b model for all downstream reasoning and generation steps.
Combines the retriever and the LLM to create a RetrievalQA chain for answering user queries with context from retrieved documents.
followup_prompt = PromptTemplate.from_template(
"Based on this partial answer:\n\n{answer}\n\n"
"What follow-up question should we ask to gather missing details?"
)
followup_chain = LLMChain(llm=llm, prompt=followup_prompt)
After generating an initial answer, this chain prompts the LLM to create a follow-up question to dig deeper or fill in gaps.
synthesis_prompt = PromptTemplate.from_template(
"You are given a sequence of answers from an iterative retrieval process.\n\n"
"{history}\n\n"
"Based on the full conversation, write a complete, accurate, and detailed final answer."
)
synthesis_chain = LLMChain(llm=llm, prompt=synthesis_prompt)
After collecting answers from all steps, this chain synthesizes them into a single coherent final response.
def format_history(memory):
    output = ""
    for i, step in enumerate(memory):
        output += f"Step {i+1}:\nQuery: {step['query']}\nAnswer: {step['answer']}\n\n"
    return output.strip()
Converts the list of queries and answers (memory) into a formatted string for the synthesis prompt.
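As a quick illustration (the memory entries here are made up), the helper produces output like this:

sample_memory = [
    {"query": "How effective is Remdesivir in treating COVID-19?",
     "answer": "Clinical trials report shorter recovery times in hospitalized patients."},
]
print(format_history(sample_memory))
# Step 1:
# Query: How effective is Remdesivir in treating COVID-19?
# Answer: Clinical trials report shorter recovery times in hospitalized patients.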
def advanced_multi_step_rag(query, max_steps=3):
    time.sleep(1.0)  # small delay to avoid hitting API rate limits
    memory = []
    current_query = query
    for step in range(max_steps):
        raw_answer = retrieval_chain.run(current_query)            # retrieve documents and answer the current query
        answer = extract_answer_only(raw_answer)                   # strip the "Helpful Answer:" prefix if present
        memory.append({"query": current_query, "answer": answer})  # remember this step for the final synthesis
        followup_question = followup_chain.run(answer=answer)      # ask the LLM what is still missing
        if should_stop(followup_question):                         # stop if the follow-up is empty or too short
            break
        current_query = followup_question                          # the follow-up becomes the next query
    history_text = format_history(memory)
    final_answer = synthesis_chain.run(history=history_text)       # merge all steps into one coherent answer
    return final_answer
The function starts with the user query, then iteratively retrieves an answer, generates a follow-up question, and stores each step in memory. It stops when the stopping condition is met or the maximum number of steps is hit, and finally synthesizes all steps into a single final answer.
iface = gr.Interface(
    fn=advanced_multi_step_rag,
    inputs=gr.Textbox(lines=2, placeholder="Enter your question here", label="Your Question"),
    outputs=gr.Textbox(lines=14, label="Multi-Hop RAG Answer"),
    title="Advanced Multi-Step RAG (Groq-Powered)",
    description="Iteratively retrieves and refines answers using multiple reasoning steps."
)
if __name__ == "__main__":
    iface.launch()
Launches a Gradio interface with a textbox input and a large textbox output for the final multi-step RAG answer.
Starts the Gradio app when this script is run directly.
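If you want to test the pipeline without launching the UI, you can also call the function directly; the question below is just a sample:

answer = advanced_multi_step_rag("How effective is Remdesivir in treating COVID-19?", max_steps=3)
print(answer)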
Used in legal, financial, and customer support document analysis, IBM Watson Discovery benefits from Multi-Step RAG by performing multiple rounds of retrieval to iteratively refine complex user queries and accurately surface the most relevant clauses, precedents, or insights buried deep within large document repositories.
Example: A legal advisor tool that retrieves case law first, then follows up with rulings, judge opinions, and jurisdiction context.
Supports scientific fact-checking and biomedical question answering by first retrieving general biomedical literature or abstracts, then progressively refining the query to focus on specific experimental methods, results, or citations for accurate scientific validation.
Example: For the input “How effective is Remdesivir in treating COVID-19?”, the system first retrieves clinical studies and then refines the query toward specific patient groups or dosage outcomes.
Handles follow-up and compound voice queries by internally reformulating vague or incomplete inputs, identifying missing contextual elements from past interactions, and assembling a final, coherent response across multiple conversational turns.
Example: “What’s the weather like by that park I told you about before?” → the query is recontextualized using details from the previous conversation.
Enables internal enterprise search across platforms like Slack, Docs, Notion, and GitHub by decomposing complex employee queries into simpler sub-questions and retrieving relevant information from diverse systems in multiple retrieval steps.
Example: “How to deal with security in frontend apps?” → retrieve initial documentation → follow up with targeted queries about OAuth configuration or code policies.
Multi-Step RAG is not only a theoretical step forward; it is a practical solution to the evident downsides of traditional RAG, which often fails to handle difficult, ambiguous, or multi-faceted queries.
By iteratively reasoning and refining between retrieval steps, Multi-Step RAG can provide more accurate, context-aware, and human-like responses.
The technique is extremely valuable for precision-oriented systems of any stripe, including legal research, biomedical question answering, and enterprise knowledge search: any domain where humans naturally rephrase, break down, and explore a question further to obtain a better answer.