Creating a Multi-Agent AI Research Team Using LangGraph and Gemini for Automated Reporting

In today’s fast-paced world, leveraging advanced tools to streamline research processes is crucial. In this tutorial, we will guide you through constructing a multi-agent research team system utilizing LangGraph and Google’s Gemini API. The system comprises role-specific agents: Researcher, Analyst, Writer, and Supervisor. Each plays a distinct role in the research pipeline, collaboratively gathering data, analyzing insights, synthesizing reports, and coordinating workflows.

Setting Up the Environment

Installing Required Libraries

To start our project, we’ll need to install several libraries:

bash
!pip install langgraph langchain-google-genai langchain-community langchain-core python-dotenv

After installation, we import the necessary modules and set up our environment. We utilize the getpass module to securely enter the Google API key, ensuring that sensitive credentials are not exposed in the code.

Code Snippet for Environment Setup

python
import os
import getpass
from typing import Annotated, List, Dict, Tuple, Union, TypedDict

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage

# Prompt for the key at runtime so the credential is never hard-coded
GOOGLE_API_KEY = getpass.getpass("Enter your Google API Key: ")
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY

Understanding the Setup

In the snippet above, we prepare our environment to interface with Google’s Gemini API. This foundational setup will enable our multi-agent system to function seamlessly.
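Later snippets obtain the shared Gemini chat model through a create_llm() helper. A minimal sketch of that helper is shown below; the model name and temperature are assumptions rather than values taken from the original tutorial.

python
def create_llm() -> ChatGoogleGenerativeAI:
    # Assumed model name and temperature; adjust to suit your project.
    return ChatGoogleGenerativeAI(
        model="gemini-2.0-flash",
        temperature=0.3,
    )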

Creating Agent States and Responses

Defining Structs

To maintain structured communication among our agents, we define two TypedDict classes — AgentState and AgentResponse. The AgentState tracks the messages, workflow status, research topic, findings, and final report. The AgentResponse standardizes each agent’s output.

Code Snippet for Agent State Definitions

python
class AgentState(TypedDict):
    messages: list
    next: str
    current_agent: str
    research_topic: str
    findings: dict
    final_report: str

class AgentResponse(TypedDict):
    content: str
    next_agent: str
    findings: dict
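To make the shape of the shared state concrete, here is an illustrative initial AgentState as the workflow might construct it at the start of a run; the field values are examples only.

python
# Illustrative only: an initial state for a new research run.
initial_state: AgentState = {
    "messages": [HumanMessage(content="Research the topic: Quantum Computing")],
    "next": "researcher",
    "current_agent": "supervisor",
    "research_topic": "Quantum Computing",
    "findings": {},
    "final_report": "",
}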

Creating Agents

Research Specialist Agent

We start by creating a ResearchAgent responsible for initial data gathering. This agent analyzes the research topic, identifies key areas of investigation, and provides initial findings.

Code Snippet for Research Agent

python
def create_research_agent(llm: ChatGoogleGenerativeAI) -> callable:
    research_prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a Research Specialist AI. Your role is to: …"),
        MessagesPlaceholder(variable_name="messages"),
        ("human", "Research Topic: {research_topic}")
    ])
    research_chain = research_prompt | llm

    def research_agent(state: AgentState) -> AgentState:
        try:
            response = research_chain.invoke({
                "messages": state["messages"],
                "research_topic": state["research_topic"],
            })
            findings = {
                "research_overview": response.content,
                # ... additional findings elided here ...
            }
            # Return a new state dict rather than calling state.update(),
            # which mutates in place and returns None.
            return {**state, "messages": ..., "findings": findings}
        except Exception as e:
            return {**state, "messages": ..., "findings": {}}

    return research_agent
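Because each factory returns a plain function over AgentState, an agent node can be smoke-tested on its own before it is wired into the graph. A hypothetical example, reusing the illustrative initial_state from the earlier snippet:

python
# Hypothetical smoke test of the researcher node outside the graph.
researcher = create_research_agent(create_llm())
result = researcher(initial_state)
print(result["findings"].get("research_overview", "")[:200])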

Analyst Agent

Next, we introduce the AnalystAgent, which performs a deeper analysis of the findings produced by the Researcher. This agent identifies patterns and trends and correlates data to derive actionable insights.

Code Snippet for Analyst Agent

python
def create_analyst_agent(llm: ChatGoogleGenerativeAI) -> callable:
    analyst_prompt = …
    analyst_chain = …

    def analyst_agent(state: AgentState) -> AgentState:
        try:
            response = analyst_chain.invoke({
                "messages": state["messages"],
                "research_topic": state["research_topic"],
            })
            analysis_findings = {
                "analysis_summary": response.content,
                # ... additional findings elided here ...
            }
            # Return a new state dict; dict.update() mutates in place and returns None.
            return {**state, "messages": ..., "findings": analysis_findings}
        except Exception as e:
            return {**state, "messages": ..., "findings": {}}

    return analyst_agent

Writer Agent

The WriterAgent comes next, tasked with synthesizing all research and analysis into a comprehensive report. This agent ensures the document is professionally structured and readable.

Code Snippet for Writer Agent

python
def create_writer_agent(llm: ChatGoogleGenerativeAI) -> callable:
    writer_prompt = …
    writer_chain = …

    def writer_agent(state: AgentState) -> AgentState:
        try:
            response = writer_chain.invoke({
                "messages": state["messages"],
                "research_topic": state["research_topic"],
            })
            return {**state, "messages": ..., "final_report": response.content}
        except Exception as e:
            return {**state, "messages": ..., "final_report": "Error generating report."}

    return writer_agent

Supervisor Agent

Finally, the SupervisorAgent is designed to orchestrate the activities of the Researcher, Analyst, and Writer. This agent coordinates workflow and decides the next steps.

Code Snippet for Supervisor Agent

python
def create_supervisor_agent(llm: ChatGoogleGenerativeAI, members: List[str]) -> callable:
    supervisor_prompt = …
    supervisor_chain = …

    def supervisor_agent(state: AgentState) -> AgentState:
        try:
            response = supervisor_chain.invoke({
                "messages": state["messages"],
                "current_agent": ...,
            })
            next_agent = response.content.strip().lower()
            return {**state, "next": next_agent, "current_agent": "supervisor"}
        except Exception as e:
            return {**state, "messages": ..., "next": "FINISH"}

    return supervisor_agent
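The supervisor writes its routing decision into state["next"], but the snippets do not show how the graph consumes that value. One possible routing helper, suitable for LangGraph's add_conditional_edges, is sketched below; treating any FINISH signal or unknown value as the end of the run is an assumption, not the tutorial's exact logic.

python
from langgraph.graph import END

def route_next(state: AgentState) -> str:
    # Map the supervisor's decision to the next node; anything other than a
    # known worker name (e.g. "finish"/"FINISH") ends the run.
    next_agent = state.get("next", "").lower()
    if next_agent in ("researcher", "analyst", "writer"):
        return next_agent
    return END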

Implementing the Research Workflow

Assembling the Workflow Graph

Next, we create the entire research team workflow as a state graph to connect all agents. This graph will provide the structure needed for our agents to interact effectively.

Code Snippet for Workflow Graph Creation

python
from langgraph.graph import StateGraph

def create_research_team_graph() -> StateGraph:
    llm = create_llm()
    members = ["researcher", "analyst", "writer"]
    researcher = create_research_agent(llm)
    analyst = create_analyst_agent(llm)
    writer = create_writer_agent(llm)
    supervisor = create_supervisor_agent(llm, members)

    workflow = StateGraph(AgentState)
    # ... node and edge wiring elided here (see the sketch below) ...
    workflow.set_entry_point("supervisor")
    return workflow
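The elided section above registers the nodes and connects them. Under the assumption that every worker reports back to the supervisor and the supervisor routes via state["next"], the wiring inside create_research_team_graph might look like this sketch (it reuses the route_next helper from earlier and is not the tutorial's exact code):

python
# Continuing with the objects created inside create_research_team_graph:
workflow.add_node("supervisor", supervisor)
workflow.add_node("researcher", researcher)
workflow.add_node("analyst", analyst)
workflow.add_node("writer", writer)

# Each worker hands control back to the supervisor after its step.
for member in members:
    workflow.add_edge(member, "supervisor")

# The supervisor picks the next worker (or END) through route_next.
workflow.add_conditional_edges("supervisor", route_next)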

Compiling and Running the Workflow

To run the research team workflow, we compile it with a memory component that enables our agents to retain and reference past interactions.
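The compile_research_team() helper used below is not shown in full. A minimal sketch, assuming the "memory component" is LangGraph's in-memory checkpointer:

python
from langgraph.checkpoint.memory import MemorySaver

def compile_research_team():
    workflow = create_research_team_graph()
    # MemorySaver keeps per-thread checkpoints so agents can reference past interactions.
    return workflow.compile(checkpointer=MemorySaver())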

Code Snippet for Running the Workflow

python
def run_research_team(topic: str, thread_id: str = "research_session"):
    app = compile_research_team()
    initial_state = {
        "messages": [HumanMessage(content=f"Research the topic: {topic}")],
        # ... remaining state fields elided here ...
    }

    for step, state in enumerate(app.stream(initial_state)):
        # ... per-step handling elided here (see the sketch below) ...
        final_state = state

    return final_state
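Because the compiled app carries a checkpointer, each streaming call also needs a thread identifier passed through LangGraph's config. Inside run_research_team, the loop might therefore look like the following fragment; the exact per-step handling is an assumption, not the tutorial's code.

python
# Assumed: the thread_id parameter is forwarded to the checkpointer via config.
config = {"configurable": {"thread_id": thread_id}}
final_state = None
for step, state in enumerate(app.stream(initial_state, config=config)):
    print(f"Step {step}: updated keys -> {list(state.keys())}")
    final_state = state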

Interactive Research Sessions

To allow for dynamic querying, our setup supports an interactive research session, perfect for simulating real-time exploration.

Code Snippet for Interactive Session

python
def interactive_research_session():
    while True:
        topic = input("🔍 Enter research topic: ").strip()
        # ... input validation and thread_id handling elided here ...
        result = run_research_team(topic, thread_id)

Extending Functionality with Custom Agents

One of the strengths of this framework is the ability to add custom agents with unique roles and instructions, making our system highly adaptable for specialized tasks.

Code Snippet for Creating Custom Agents

python
def create_custom_agent(role: str, instructions: str) -> callable:
    custom_prompt = …
    custom_chain = …

    def custom_agent(state: AgentState) -> AgentState:
        ...

    return custom_agent
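As a usage example, a hypothetical fact-checking agent could be created and registered like any other node; the role name and instructions below are made up for illustration.

python
# Hypothetical example: role and instructions are illustrative only.
fact_checker = create_custom_agent(
    role="Fact Checker",
    instructions="Verify the claims in the current findings and flag anything unsupported.",
)
# The returned function has the same AgentState -> AgentState signature as the
# built-in agents, so it can be added with workflow.add_node("fact_checker", fact_checker).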

Performance Monitoring and Visualization

Monitoring performance metrics and visualizing workflow structures are essential for evaluating the system’s efficiency and ease of use.

Code Snippet for Monitoring Performance

python
import time

def monitor_research_performance(topic: str):
    start_time = time.time()
    # ... research run and metric computation elided here (see the sketch below) ...
    print("\n📊 PERFORMANCE METRICS")

Code Snippet for Visualizing the Graph

python
def visualize_graph():
    try:
        graph_repr = …
        print("🗺️ Research Team Graph Structure")
        # ... printing of the representation elided here ...
    except Exception as e:
        print(f"❌ Error visualizing graph: {str(e)}")

Final Demonstrations and Quick Start

To enhance usability, we implement a quick start demo that runs multiple sample research topics in sequence, showcasing the seamless collaboration of our agents.

Code Snippet for Quick Start Demo

python
def quick_start_demo():
    topics = [
        "Climate Change Impact on Agriculture",
        # ... additional sample topics elided here ...
    ]
    # ... sequential runs over the topics elided here ...
    print("🎉 Demo completed!")

With all components assembled, we are now well-equipped to tackle complex research tasks with minimal human intervention. The multi-agent system not only streamlines workflow but also enhances the quality of outputs through specialization and automation. The modularity of the design allows for seamless adaptation to various research requirements, making it a powerful tool for any data-driven endeavor.

Feel free to explore the full implementation on GitHub to enhance your research capabilities further.
