Lab 029: LangChain & LangGraph Basics

Level: L200 | Path: 💻 Pro Code | Time: ~60 min | 💰 Cost: Free (GitHub Models free tier)

What You'll Learn

  • Build a conversational agent with LangChain and a tool-calling loop
  • Model multi-step agent logic as a LangGraph state graph
  • Understand the difference between LangChain chains and LangGraph graphs
  • Add conditional routing: when to call a tool vs. return an answer
  • Persist conversation state with LangGraph checkpointers

Introduction

LangChain is one of the most popular open-source frameworks for building LLM-powered applications. LangGraph extends it with explicit state machines: graphs whose nodes are functions and whose edges are transitions between them.

When to use each:

            LangChain                                    LangGraph
Best for    Linear pipelines, RAG chains, simple agents  Complex multi-step agents, branching logic, cycles
State       Implicit (passed through chain)              Explicit (typed state dict)
Loops       Not native                                   First-class support
Visibility  Chain logs                                   Graph execution traces

In this lab we build the same OutdoorGear shopping assistant two ways: first with LangChain (simpler), then with LangGraph (more explicit control).


Prerequisites

pip install langchain langchain-openai langgraph

No Azure subscription is needed; we use GitHub Models' OpenAI-compatible endpoint:

export GITHUB_TOKEN=<your PAT with models:read scope>
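Since every script in this lab reads that variable, a small guard at the top of each file catches a missing token early. This helper is a convenience sketch, not part of the lab's required code:

```python
import os

def require_env(name: str) -> str:
    """Fail fast with a clear message if a required env var is missing."""
    value = os.environ.get(name, "")
    if not value:
        raise SystemExit(f"Missing {name}; set it before running this lab")
    return value

# Usage at the top of langchain_agent.py / langgraph_agent.py:
# token = require_env("GITHUB_TOKEN")
```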

Part 1: LangChain Agent

Step 1: Tools

# tools.py
from langchain_core.tools import tool

PRODUCTS = [
    {"id": "P001", "name": "TrailBlazer Tent 2P",    "category": "Tents",   "price": 249.99},
    {"id": "P002", "name": "Summit Dome 4P",          "category": "Tents",   "price": 549.99},
    {"id": "P003", "name": "TrailBlazer Solo",        "category": "Tents",   "price": 299.99},
    {"id": "P004", "name": "ArcticDown -20Β°C Bag",    "category": "Bags",    "price": 389.99},
    {"id": "P005", "name": "SummerLight +5Β°C Bag",    "category": "Bags",    "price": 149.99},
    {"id": "P006", "name": "Osprey Atmos 65L",        "category": "Packs",   "price": 289.99},
    {"id": "P007", "name": "DayHiker 22L",            "category": "Packs",   "price":  89.99},
]

@tool
def search_products(keyword: str, max_price: float = 9999) -> str:
    """Search OutdoorGear products by keyword. Optionally filter by max_price in USD."""
    matches = [
        p for p in PRODUCTS
        if keyword.lower() in p["name"].lower() and p["price"] <= max_price
    ]
    if not matches:
        return f"No products found for '{keyword}'"
    return "\n".join(f"[{p['id']}] {p['name']} β€” ${p['price']:.2f}" for p in matches)


@tool
def get_product_details(product_id: str) -> str:
    """Get full details for a specific product by ID (e.g. 'P001')."""
    product = next((p for p in PRODUCTS if p["id"].upper() == product_id.upper()), None)
    if not product:
        return f"Product '{product_id}' not found"
    return str(product)


@tool
def calculate_total(product_ids: list[str], quantities: list[int]) -> str:
    """
    Calculate the total price for a list of products and quantities.

    Args:
        product_ids: List of product IDs (e.g. ['P001', 'P006'])
        quantities:  List of quantities, same order as product_ids (e.g. [1, 2])
    """
    total = 0.0
    lines = []
    for pid, qty in zip(product_ids, quantities):
        product = next((p for p in PRODUCTS if p["id"].upper() == pid.upper()), None)
        if product:
            subtotal = product["price"] * qty
            total += subtotal
            lines.append(f"{product['name']} Γ— {qty} = ${subtotal:.2f}")
        else:
            lines.append(f"Unknown product: {pid}")
    lines.append("─" * 17)
    lines.append(f"Total: ${total:.2f}")
    return "\n".join(lines)

Step 2: LangChain Agent with Tool Calling

# langchain_agent.py
import os
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from tools import search_products, get_product_details, calculate_total

# GitHub Models endpoint
llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com",
)

tools = [search_products, get_product_details, calculate_total]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful OutdoorGear product advisor. "
               "Use the available tools to answer customer questions. "
               "Always check product details before making recommendations."),
    ("placeholder", "{chat_history}"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Try it
result = executor.invoke({
    "input": "I need a lightweight tent for solo hiking under $350. What do you recommend?",
    "chat_history": [],
})
print("\n" + result["output"])

Run it:

python langchain_agent.py

You should see the agent call search_products, inspect a result, then provide a recommendation.


Part 2: LangGraph Agent

LangGraph models the agent as a state machine. This makes the logic explicit and testable.

Step 3: Define the graph state

# langgraph_agent.py
import os
from typing import Annotated, TypedDict
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, ToolMessage, AIMessage, BaseMessage
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from tools import search_products, get_product_details, calculate_total

# State: messages list that auto-appends (add_messages reducer)
class AgentState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]

tools_list = [search_products, get_product_details, calculate_total]
tools_by_name = {t.name: t for t in tools_list}

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com",
).bind_tools(tools_list)
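The add_messages annotation is easiest to understand as an append-style reducer. This plain-Python sketch mimics its core behavior with lists (LangGraph's real reducer also handles message IDs and deduplication):

```python
# Conceptual sketch of an append-style reducer like add_messages
# (plain Python; LangGraph's real reducer also de-duplicates by message ID).
def append_reducer(current: list, update: list) -> list:
    return current + update

state = {"messages": ["user: hi"]}
node_return = {"messages": ["ai: hello!"]}

# LangGraph applies the reducer instead of overwriting the field:
state["messages"] = append_reducer(state["messages"], node_return["messages"])
print(state["messages"])  # ['user: hi', 'ai: hello!']
```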

Step 4: Define graph nodes

# Node 1: Call the LLM
def call_llm(state: AgentState) -> AgentState:
    """Send the current messages to the LLM and append its response."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


# Node 2: Execute tool calls
def execute_tools(state: AgentState) -> AgentState:
    """Execute any tool calls in the last LLM message."""
    last_message = state["messages"][-1]
    tool_results = []

    for tool_call in last_message.tool_calls:
        tool = tools_by_name[tool_call["name"]]
        result = tool.invoke(tool_call["args"])
        tool_results.append(
            ToolMessage(content=str(result), tool_call_id=tool_call["id"])
        )

    return {"messages": tool_results}


# Routing: should we call tools or are we done?
def should_call_tools(state: AgentState) -> str:
    """Return 'tools' if the LLM requested tool calls, 'end' otherwise."""
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return "end"

Step 5: Build and run the graph

# Build the graph
graph = StateGraph(AgentState)
graph.add_node("llm", call_llm)
graph.add_node("tools", execute_tools)

graph.set_entry_point("llm")
graph.add_conditional_edges("llm", should_call_tools, {"tools": "tools", "end": END})
graph.add_edge("tools", "llm")   # After tools, go back to LLM

agent = graph.compile()

# Run it
initial_state = {
    "messages": [
        HumanMessage(content="Compare the TrailBlazer Tent 2P and TrailBlazer Solo. "
                              "Which should I buy for a 2-week solo thru-hike?")
    ]
}

for step in agent.stream(initial_state, stream_mode="values"):
    last_msg = step["messages"][-1]
    if isinstance(last_msg, AIMessage) and last_msg.content:
        print(f"\nπŸ€– Agent: {last_msg.content}")
    elif isinstance(last_msg, ToolMessage):
        print(f"\nπŸ”§ Tool result: {last_msg.content[:100]}...")

Part 3: Add Persistent Memory (Checkpointer)

LangGraph can persist state between runs using a checkpointer. This is how you build multi-turn agents that remember conversations:

from langgraph.checkpoint.memory import MemorySaver

# Add memory to the graph
memory = MemorySaver()
agent_with_memory = graph.compile(checkpointer=memory)

# Thread ID ties messages to a specific "conversation"
config = {"configurable": {"thread_id": "customer-session-42"}}

# Turn 1
result = agent_with_memory.invoke(
    {"messages": [HumanMessage(content="What tents do you have?")]},
    config=config,
)
print(result["messages"][-1].content)

# Turn 2: the agent remembers Turn 1!
result = agent_with_memory.invoke(
    {"messages": [HumanMessage(content="Which is the lightest?")]},
    config=config,
)
print(result["messages"][-1].content)

🧠 Knowledge Check

1. What is the main advantage of LangGraph over a simple LangChain agent?

LangGraph uses an explicit state machine (graph with nodes and edges) to model agent logic. This makes branching, looping, and conditional routing first-class citizens: visible, testable, and debuggable. A LangChain agent hides the control flow inside the framework.

2. What does the add_messages reducer do in LangGraph state?

add_messages is a reducer function that tells LangGraph how to update the messages field: it appends new messages instead of replacing the whole list. Without it, each node return would overwrite the message history rather than adding to it.

3. How does LangGraph checkpointing enable multi-turn conversations?

A checkpointer persists the graph state (all messages) to storage (memory, Redis, PostgreSQL) keyed by a thread_id. When you invoke the agent with the same thread_id, LangGraph loads the previous state and continues from where it left off; the agent "remembers" prior turns without you managing history manually.


Summary

Concept    LangChain     LangGraph
Structure  Linear chain  Directed graph (nodes + edges)
Loops      Not native    graph.add_edge("tools", "llm")
Branching  Limited       add_conditional_edges()
State      Implicit      Explicit TypedDict
Memory     Manual        MemorySaver / PostgresSaver
Debugging  Chain logs    Full graph execution trace

Next Steps