Lab 014: Semantic Kernel – Hello Agent
Semantic Kernel → Microsoft Agent Framework
Semantic Kernel is now part of Microsoft Agent Framework (MAF), which unifies SK and AutoGen into a single framework. The concepts in this lab (Kernel, Plugins, function calling) still apply; MAF builds on top of them. See Lab 076: Microsoft Agent Framework for the migration guide.
What You'll Learn
- What Semantic Kernel (SK) is and its key building blocks
- How to create an SK Kernel connected to GitHub Models (free)
- How to add your first Plugin (native function)
- How to enable auto function calling so the LLM decides when to use your function
Introduction
Semantic Kernel is Microsoft's open-source SDK for building AI agents and applications. It sits between your code and the LLM, providing:
- A unified abstraction over any LLM (OpenAI, Azure OpenAI, GitHub Models, Ollama...)
- A Plugin system for defining functions the LLM can call
- Auto function calling – the LLM automatically invokes your functions when needed
- Vector memory for long-term context (covered in Lab 023)
In this lab, we build a simple agent that can answer questions and call a custom function.
Starter File
A skeleton starter file is provided with TODO comments for each step:
Complete the TODOs in order (1–16) to build a full SK agent with semantic functions, native plugins, and a chat loop.
Prerequisites Setup
Python
pip install semantic-kernel
C#
dotnet add package Microsoft.SemanticKernel
Make sure GITHUB_TOKEN is set (see Lab 013).
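Before going further, it can help to confirm the token is actually visible to your Python process. A tiny sanity-check sketch (purely illustrative; any model call in this lab will fail without the variable):

```python
# Check that the GITHUB_TOKEN environment variable is visible to Python.
import os

token = os.environ.get("GITHUB_TOKEN")
if token:
    print("GITHUB_TOKEN is set - ready for GitHub Models")
else:
    print("GITHUB_TOKEN is missing - see Lab 013")
```

If the second message prints, export the token in the same shell you run the lab from.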
📦 Supporting Files
Download these files before starting the lab
Save all files to a lab-014/ folder in your working directory.
| File | Description | Download |
|---|---|---|
| hello_agent_starter.py | Starter script with TODOs | 📥 Download |
| requirements.txt | Python dependencies | 📥 Download |
Lab Exercise
Step 1: Create a basic Kernel
The Kernel is the central object in Semantic Kernel – it holds your LLM connection and all plugins.
Create hello_agent.py:
import asyncio
import os
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.contents import ChatHistory
async def main():
    # Create the kernel
    kernel = Kernel()

    # Add GitHub Models as the LLM backend
    kernel.add_service(
        OpenAIChatCompletion(
            service_id="default",  # matches the settings lookup below
            ai_model_id="gpt-4o-mini",
            api_key=os.environ["GITHUB_TOKEN"],
            base_url="https://models.inference.ai.azure.com",
        )
    )

    # Simple chat – no tools yet
    history = ChatHistory()
    history.add_system_message("You are a helpful assistant.")
    history.add_user_message("What is Semantic Kernel?")

    chat = kernel.get_service(type=OpenAIChatCompletion)
    result = await chat.get_chat_message_content(
        chat_history=history,
        settings=kernel.get_prompt_execution_settings_from_service_id("default"),
    )
    print(result)

asyncio.run(main())
Edit Program.cs:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
modelId: "gpt-4o-mini",
apiKey: Environment.GetEnvironmentVariable("GITHUB_TOKEN")!,
endpoint: new Uri("https://models.inference.ai.azure.com")
);
var kernel = builder.Build();
var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory("You are a helpful assistant.");
history.AddUserMessage("What is Semantic Kernel?");
var response = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(response.Content);
Run it with python hello_agent.py (or dotnet run for the C# project):
You should see the LLM respond. Now let's add a custom function.
Step 2: Add a Plugin (native function)
A Plugin is a class with methods the LLM can call. Decorate them with @kernel_function (Python) or [KernelFunction] (C#).
Add this class before main():
from semantic_kernel.functions import kernel_function

class WeatherPlugin:
    """Provides current weather information."""

    @kernel_function(
        name="get_current_weather",
        description="Get the current weather for a city",
    )
    def get_current_weather(self, city: str) -> str:
        # In a real lab this would call a weather API.
        # For now, return mock data.
        weather_data = {
            "Seattle": "🌧️ Rainy, 12°C",
            "New York": "☀️ Sunny, 22°C",
            "London": "☁️ Cloudy, 15°C",
        }
        return weather_data.get(city, f"Weather data not available for {city}")
Then register the plugin in main(), before the chat call:
kernel.add_plugin(WeatherPlugin(), plugin_name="weather")
Add this class to your project (note the using System.ComponentModel; needed for the [Description] attribute):
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class WeatherPlugin
{
    [KernelFunction("get_current_weather")]
    [Description("Get the current weather for a city")]
    public string GetCurrentWeather(string city)
    {
        var weatherData = new Dictionary<string, string>
        {
            ["Seattle"] = "🌧️ Rainy, 12°C",
            ["New York"] = "☀️ Sunny, 22°C",
            ["London"] = "☁️ Cloudy, 15°C",
        };
        return weatherData.TryGetValue(city, out var weather)
            ? weather
            : $"Weather data not available for {city}";
    }
}
Register it in Program.cs, before builder.Build():
builder.Plugins.AddFromType<WeatherPlugin>();
Step 3: Enable auto function calling
With auto function calling, the LLM decides when to call your function based on the conversation. You don't need to trigger it manually.
Update your settings to enable auto function calling:
from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior

settings = OpenAIChatPromptExecutionSettings(
    function_choice_behavior=FunctionChoiceBehavior.Auto(),
)

history = ChatHistory()
history.add_system_message("You are a helpful assistant with access to weather data.")
history.add_user_message("What's the weather like in Seattle today?")

result = await chat.get_chat_message_content(
    chat_history=history,
    settings=settings,
    kernel=kernel,  # pass the kernel so SK can invoke plugins
)
print(result)
var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
var history = new ChatHistory("You are a helpful assistant with access to weather data.");
history.AddUserMessage("What's the weather like in Seattle today?");
var response = await chat.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(response.Content);
Run it and ask: "What's the weather like in Seattle today?"
The LLM will:
1. See that get_current_weather is available
2. Call it with city = "Seattle"
3. Incorporate the result into its answer
Expected output
"The current weather in Seattle is π§οΈ Rainy, 12Β°C. Bring an umbrella!"
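Under the hood, auto function calling is a loop: the model returns a structured tool call, SK dispatches it to the matching plugin method, appends the result to the history, and asks the model again until it produces plain text. A minimal plain-Python sketch of that loop (fake_model stands in for the LLM; none of these names are SK API):

```python
# Plain-Python sketch of the auto function calling loop (illustrative only).

def get_current_weather(city: str) -> str:
    data = {"Seattle": "Rainy, 12C"}
    return data.get(city, f"Weather data not available for {city}")

TOOLS = {"get_current_weather": get_current_weather}

def fake_model(messages):
    # First pass: the "model" decides to call the tool.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_current_weather", "args": {"city": "Seattle"}}}
    # Second pass: it has the tool result and answers in natural language.
    result = next(m["content"] for m in messages if m["role"] == "tool")
    return {"content": f"The current weather in Seattle is {result}."}

def chat_with_auto_tools(messages):
    while True:
        reply = fake_model(messages)
        if "tool_call" not in reply:
            return reply["content"]
        call = reply["tool_call"]
        result = TOOLS[call["name"]](**call["args"])  # SK does this dispatch for you
        messages.append({"role": "tool", "content": result})

print(chat_with_auto_tools([{"role": "user", "content": "Weather in Seattle?"}]))
# The current weather in Seattle is Rainy, 12C.
```

FunctionChoiceBehavior.Auto() makes SK run this whole loop for you inside a single get_chat_message_content call.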
Step 4: Build a simple conversation loop
Let's make it interactive:
history = ChatHistory()
history.add_system_message(
    "You are a helpful assistant with access to weather data. "
    "Use the weather plugin when the user asks about weather."
)

print("Weather Agent ready. Type 'exit' to quit.\n")

while True:
    user_input = input("You: ").strip()
    if user_input.lower() == "exit":
        break

    history.add_user_message(user_input)
    result = await chat.get_chat_message_content(
        chat_history=history,
        settings=settings,
        kernel=kernel,
    )
    history.add_assistant_message(str(result))
    print(f"Agent: {result}\n")
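The loop appends both the user message and the assistant reply every turn, which is what lets follow-ups like "And in London?" resolve correctly. A hypothetical plain-Python model of the growing transcript (illustration only; not the SK ChatHistory API):

```python
# Hypothetical sketch of what the conversation history accumulates per turn.
history = []

def add(role: str, content: str) -> None:
    history.append({"role": role, "content": content})

add("system", "You are a helpful assistant with access to weather data.")
add("user", "What's the weather like in Seattle today?")
add("assistant", "🌧️ Rainy, 12°C in Seattle.")  # stored after each model reply
add("user", "And in London?")                     # elliptical follow-up
# The next call sends the whole transcript, so the model still knows
# "And in London?" is a weather question.
print([m["role"] for m in history])               # ['system', 'user', 'assistant', 'user']
```

If you forget add_assistant_message, the model never sees its own earlier answers and follow-up questions lose their context.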
Summary
You've built your first Semantic Kernel agent that:
- ✅ Connects to an LLM (GitHub Models – free)
- ✅ Has a custom Plugin with a native function
- ✅ Uses auto function calling – the LLM decides when to invoke the function
- ✅ Maintains conversation history across turns
Next Steps
- Add memory and more plugins: → Lab 023 – SK Plugins, Memory & Planners
- Build an MCP Server and connect it to SK: → Lab 020 – MCP Server in Python