Tutorial: Integrating Human Feedback into the Loop

Setting Up a Human-in-the-Loop AI with LangGraph and WatsonX

The integration of AI in various domains is becoming increasingly sophisticated, with human oversight playing an essential role in ensuring ethical and accurate outcomes. In this tutorial, we will explore how to build a Human-in-the-Loop (HITL) AI system using LangGraph and IBM’s WatsonX.

Getting Started: Installing Required Libraries

Before we dive into the intricacies of the tutorial, ensure you have the necessary libraries and modules installed in your environment. You can quickly set this up using pip:

bash
pip install --quiet -U langgraph langchain-ibm langgraph_sdk langgraph-prebuilt google-search-results

This command will download and install all the libraries required for our HITL AI project.

Importing Required Packages

After installing the necessary packages, restart your kernel and proceed to import the following libraries:

python
import getpass
import uuid
from ibm_watsonx_ai import APIClient, Credentials
from ibm_watsonx_ai.foundation_models.moderations import Guardian
from IPython.display import Image, display
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, AIMessage
from langchain_ibm import ChatWatsonx
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, END, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import tools_condition, ToolNode
from langgraph.types import interrupt, Command
from serpapi.google_search import GoogleSearch
from typing_extensions import TypedDict
from typing import Annotated

This collection of imports forms the backbone of our application, allowing us to manage user input, handle API interactions, and maintain a stateful conversation.

Setting Up Credentials

Once our environment is ready, it’s time to set up the credentials needed to access WatsonX and SerpAPI’s Google Patents engine. You’ll need the following:

  • WATSONX_APIKEY
  • WATSONX_PROJECT_ID
  • WATSONX_URL (the API endpoint)
  • SERPAPI_API_KEY (to access Google Patents results through SerpAPI)

You can obtain your WatsonX credentials by following the provisioning steps on IBM Cloud, while a free SerpAPI key can be generated by signing up on the SerpAPI website.

To input your credentials, use the following code snippet that securely captures user input without displaying sensitive info:

python
WATSONX_APIKEY = getpass.getpass("Please enter your watsonx.ai Runtime API key (hit enter): ")
WATSONX_PROJECT_ID = getpass.getpass("Please enter your project ID (hit enter): ")
WATSONX_URL = getpass.getpass("Please enter your watsonx.ai API endpoint (hit enter): ")
SERPAPI_API_KEY = getpass.getpass("Please enter your SerpAPI API key (hit enter): ")

Initializing the API Client

With the credentials set up, we can now initialize our API client. This is essential for interacting with all resources available on WatsonX.

python
credentials = Credentials(url=WATSONX_URL, api_key=WATSONX_APIKEY)
client = APIClient(credentials=credentials, project_id=WATSONX_PROJECT_ID)

The APIClient will act as a bridge between our application and WatsonX AI’s service.

Instantiating the Chat Model

To interact with models hosted on WatsonX, we’ll use the ChatWatsonx wrapper, which simplifies tool calling and chaining through LangChain:

python
model_id = "ibm/granite-3-3-8b-instruct"
llm = ChatWatsonx(model_id=model_id, watsonx_client=client)

This model ID corresponds to a specific model version within WatsonX, so be sure to check for the latest versions.
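Before wiring the model into a graph, it can help to confirm that the credentials and model ID work. A minimal smoke test might look like the following; the prompt is just a placeholder:

python
# Quick smoke test (placeholder prompt) to confirm the model responds
reply = llm.invoke("In one sentence, what is a prior art search?")
print(reply.content)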

Defining the Patent Scraper Tool

An essential feature of our HITL AI will be the ability to scrape patents, which will enhance its ability to provide relevant information. We will create a function leveraging the Google Patents API:

python
def scrape_patents(search_term: str):
    """Search for patents about the topic.

    Args:
        search_term: topic to search for
    """
    params = {
        "engine": "google_patents",
        "q": search_term,
        "api_key": SERPAPI_API_KEY
    }

    search = GoogleSearch(params)
    results = search.get_dict()
    return results['organic_results']

This function queries Google Patents and returns the organic search results based on a provided search term.
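If you want to sanity-check the tool before handing it to the agent, you can call it directly. The search term below is only an example:

python
# Manual test of the patent scraper (example search term)
patents = scrape_patents("solid-state battery electrolyte")
for patent in patents[:3]:
    print(patent.get("title"))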

Binding Tools to the LLM

Next, we need to bind our scrape_patents tool to the LLM. This integration allows the AI to utilize the scraping function when required:

python
tools = [scrape_patents]
llm_with_tools = llm.bind_tools(tools)

This configuration enhances the AI’s ability to respond with relevant research data.
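As a quick check that the binding behaves as expected, you can invoke the tool-enabled model on a prompt that should trigger a tool call; the question below is only an example:

python
# The model should respond with a tool call rather than a final answer
response = llm_with_tools.invoke([HumanMessage(content="Find patents about vertical-axis wind turbines.")])
print(response.tool_calls)  # e.g. [{'name': 'scrape_patents', 'args': {'search_term': '...'}, ...}]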

Implementing Static Interrupts in the HITL Process

As part of ensuring ethical oversight, we will implement a mechanism to monitor and review the agent’s decision-making through static interrupts. To manage state between interactions, we will create an AgentState class:

python
class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
    moderation_verdict: str  # set by the guardian node, read by the conditional edge

This state holds all messages exchanged between the user, the AI, and any tools used during the session, along with the latest moderation verdict that the graph uses for routing.
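The add_messages reducer is what makes the state accumulate conversation history rather than overwrite it. A minimal illustration:

python
# add_messages appends updates to the existing message list instead of replacing it
existing = [HumanMessage(content="Hello")]
update = [AIMessage(content="Hi! How can I help with your patent search?")]
print(add_messages(existing, update))  # both messages, in order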

Calling the LLM

We can define an assistant node to invoke our LLM with the current context:

python
sys_msg = SystemMessage(content="You are a helpful assistant tasked with prior art search.")

def call_llm(state: AgentState):
    return {"messages": [llm_with_tools.invoke([sys_msg] + state["messages"])]}

This serves as a point where the LLM processes the collective context and generates responses.

Moderating Messages

Moderation is critical for ensuring that the interactions remain appropriate and relevant. We will design a guardian_moderation function that detects harmful content:

python
def guardian_moderation(state: AgentState):
    message = state['messages'][-1]
    detectors = {
        "granite_guardian": {"threshold": 0.4},
        "hap": {"threshold": 0.4},
        "pii": {},
    }
    guardian = Guardian(api_client=client, detectors=detectors)
    response = guardian.detect(text=message.content, detectors=detectors)

    if len(response['detections']) != 0 and response['detections'][0]['detection'] == "Yes":
        return {"moderation_verdict": "inappropriate"}
    else:
        return {"moderation_verdict": "safe"}

This function uses IBM’s moderation tools to evaluate messages before they reach the LLM.
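You can exercise the moderation node on its own before adding it to the graph. The test message below is a placeholder, and the verdict you get back will depend on the Guardian detectors:

python
# Run the moderation node directly on a toy state (placeholder message)
test_state = {"messages": [HumanMessage(content="Find patents about water filtration membranes.")]}
print(guardian_moderation(test_state))  # expected: {'moderation_verdict': 'safe'}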

Handling Blocked Messages

If a message is deemed inappropriate, we need to inform the user. Thus, we create the block_message function:

python
def block_message(state: AgentState):
    return {"messages": [AIMessage(content="This message has been blocked due to inappropriate content.")]}

This function provides clear communication to users about why their inputs were not processed.

Building the Agent Graph

Now that we’ve established the core functions, it’s time to build our agent graph. This graph outlines the flow of decision-making in the system:

python
builder = StateGraph(AgentState)
builder.add_node("guardian", guardian_moderation)
builder.add_node("block_message", block_message)
builder.add_node("assistant", call_llm)
builder.add_node("tools", ToolNode(tools))

builder.add_edge(START, "guardian")
builder.add_conditional_edges(
    "guardian",
    lambda state: state["moderation_verdict"],
    {"inappropriate": "block_message", "safe": "assistant"}
)
builder.add_edge("block_message", END)
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "assistant")

memory = MemorySaver()

This code wires up the nodes and routes the flow depending on whether messages are flagged as inappropriate or safe.

Compiling the Graph

We can compile the graph to prepare it for interactions:

python
graph = builder.compile(interrupt_before=["assistant"], checkpointer=memory)

This configuration pauses execution for human oversight right before the assistant node generates a response, which is the heart of our HITL system.
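Here is a minimal sketch of how you might drive the compiled graph: run it until it pauses at the interrupt, review the paused state, and resume by passing None as the input. The thread ID and the user question are placeholders:

python
# Each conversation needs a thread_id so the checkpointer can track its state
config = {"configurable": {"thread_id": str(uuid.uuid4())}}
initial_input = {"messages": [HumanMessage(content="Search for patents about hybrid rocket engines.")]}

# Run until the static interrupt fires before the "assistant" node
for event in graph.stream(initial_input, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

# Inspect where the graph paused, then decide whether to let it continue
print(graph.get_state(config).next)  # e.g. ('assistant',)

# Resume execution from the checkpoint by passing None as the input
for event in graph.stream(None, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

In a fuller HITL workflow, a reviewer could also inspect or edit the paused state (for example with graph.update_state) before resuming.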

Visualizing the Agent Graph

Finally, visual representation can help in understanding the flow of the graph. You can display it using:

python
display(Image(graph.get_graph(xray=True).draw_mermaid_png()))

This feature allows us to visualize how various nodes and edges interact, providing insights into the system’s decision-making process.


This framework establishes a robust basis for developing HITL AI applications, prioritizing both functionality and ethical considerations. As AI continues to evolve, together we can ensure it operates within boundaries set by human oversight and ethical values.
