Lab 014: Semantic Kernel – Hello Agent

Level: L100 · Path: 🧠 Semantic Kernel · Time: ~30 min · 💰 Cost: GitHub Free (free GitHub account, no credit card)

Semantic Kernel → Microsoft Agent Framework

Semantic Kernel is now part of Microsoft Agent Framework (MAF), which unifies SK and AutoGen into a single framework. The concepts in this lab (Kernel, Plugins, function calling) still apply; MAF builds on top of them. See Lab 076: Microsoft Agent Framework for the migration guide.

What You'll Learn

  • What Semantic Kernel (SK) is and its key building blocks
  • How to create an SK Kernel connected to GitHub Models (free)
  • How to add your first Plugin (native function)
  • How to enable auto function calling so the LLM decides when to use your function

Introduction

Semantic Kernel is Microsoft's open-source SDK for building AI agents and applications. It sits between your code and the LLM, providing:

  • A unified abstraction over any LLM (OpenAI, Azure OpenAI, GitHub Models, Ollama...)
  • A Plugin system for defining functions the LLM can call
  • Auto function calling: the LLM automatically invokes your functions when needed
  • Vector memory for long-term context (covered in Lab 023)

In this lab, we build a simple agent that can answer questions and call a custom function.


πŸ“ Starter FileΒΆ

A skeleton starter file is provided with TODO comments for each step:

pip install -r requirements.txt
python hello_agent_starter.py

Complete the TODOs in order (1–16) to build a full SK agent with semantic functions, native plugins, and a chat loop.


Prerequisites Setup

Python

pip install semantic-kernel openai

C#

dotnet new console -n HelloSkAgent
cd HelloSkAgent
dotnet add package Microsoft.SemanticKernel

Make sure GITHUB_TOKEN is set (see Lab 013).
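If you want to verify the token from Python before running the lab, here is a minimal preflight sketch (the `check_token` helper is illustrative, not part of SK or the starter file):

```python
import os

def check_token(env: dict) -> str:
    """Return a human-readable status for the GITHUB_TOKEN preflight check."""
    token = env.get("GITHUB_TOKEN", "")
    if not token:
        return "GITHUB_TOKEN is not set - see Lab 013"
    return f"GITHUB_TOKEN found ({len(token)} chars)"

# Check the real environment the SDK will read
print(check_token(dict(os.environ)))
```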


Quick Start with GitHub Codespaces

Open in GitHub Codespaces

All dependencies are pre-installed in the devcontainer.

📦 Supporting Files

Download these files before starting the lab

Save all files to a lab-014/ folder in your working directory.

| File | Description |
| --- | --- |
| hello_agent_starter.py | Starter script with TODOs |
| requirements.txt | Python dependencies |

Lab Exercise

Step 1: Create a basic Kernel

The Kernel is the central object in Semantic Kernel: it holds your LLM connection and all plugins.

Create hello_agent.py:

import asyncio
import os
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.contents import ChatHistory

async def main():
    # Create the kernel
    kernel = Kernel()

    # Add GitHub Models as the LLM backend
    # Add GitHub Models as the LLM backend.
    # service_id="default" matches the lookup by service id below.
    kernel.add_service(
        OpenAIChatCompletion(
            service_id="default",
            ai_model_id="gpt-4o-mini",
            api_key=os.environ["GITHUB_TOKEN"],
            base_url="https://models.inference.ai.azure.com",
        )
    )

    # Simple chat (no tools yet)
    history = ChatHistory()
    history.add_system_message("You are a helpful assistant.")
    history.add_user_message("What is Semantic Kernel?")

    chat = kernel.get_service(type=OpenAIChatCompletion)
    result = await chat.get_chat_message_content(
        chat_history=history,
        settings=kernel.get_prompt_execution_settings_from_service_id("default"),
    )
    print(result)

asyncio.run(main())

Edit Program.cs:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("GITHUB_TOKEN")!,
    endpoint: new Uri("https://models.inference.ai.azure.com")
);
var kernel = builder.Build();

var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory("You are a helpful assistant.");
history.AddUserMessage("What is Semantic Kernel?");

var response = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(response.Content);

Run it:

# Python
python hello_agent.py

# C#
dotnet run

You should see the LLM respond. Now let's add a custom function.


Step 2: Add a Plugin (native function)

A Plugin is a class with methods the LLM can call. Decorate them with @kernel_function (Python) or [KernelFunction] (C#).

Add this class before main():

from semantic_kernel.functions import kernel_function

class WeatherPlugin:
    """Provides current weather information."""

    @kernel_function(
        name="get_current_weather",
        description="Get the current weather for a city",
    )
    def get_current_weather(self, city: str) -> str:
        # In a real lab this would call a weather API
        # For now, return mock data
        weather_data = {
            "Seattle": "🌧️ Rainy, 12°C",
            "New York": "β˜€οΈ Sunny, 22Β°C",
            "London": "β›… Cloudy, 15Β°C",
        }
        return weather_data.get(city, f"Weather data not available for {city}")

Then register the plugin in main():

kernel.add_plugin(WeatherPlugin(), plugin_name="weather")
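Because a plugin is plain Python, you can sanity-check its lookup logic before wiring in the LLM. A standalone mirror of the function (no SK imports needed; this duplicates the plugin body purely for quick testing):

```python
# Standalone mirror of WeatherPlugin.get_current_weather, without the decorator
def get_current_weather(city: str) -> str:
    weather_data = {
        "Seattle": "🌧️ Rainy, 12°C",
        "New York": "☀️ Sunny, 22°C",
        "London": "⛅ Cloudy, 15°C",
    }
    return weather_data.get(city, f"Weather data not available for {city}")

print(get_current_weather("Seattle"))  # 🌧️ Rainy, 12°C
print(get_current_weather("Paris"))    # Weather data not available for Paris
```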

Add this class to your project:

using System.ComponentModel;  // for [Description]
using Microsoft.SemanticKernel;

public class WeatherPlugin
{
    [KernelFunction("get_current_weather")]
    [Description("Get the current weather for a city")]
    public string GetCurrentWeather(string city)
    {
        var weatherData = new Dictionary<string, string>
        {
            ["Seattle"] = "🌧️ Rainy, 12°C",
            ["New York"] = "β˜€οΈ Sunny, 22Β°C",
            ["London"] = "β›… Cloudy, 15Β°C",
        };
        return weatherData.TryGetValue(city, out var weather)
            ? weather
            : $"Weather data not available for {city}";
    }
}

Register in Program.cs:

kernel.Plugins.AddFromType<WeatherPlugin>("weather");


Step 3: Enable auto function calling

With auto function calling, the LLM decides when to call your function based on the conversation. You don't need to trigger it manually.

Update your settings to enable auto function calling:

from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior

settings = OpenAIChatPromptExecutionSettings(
    function_choice_behavior=FunctionChoiceBehavior.Auto(),
)

history = ChatHistory()
history.add_system_message("You are a helpful assistant with access to weather data.")
history.add_user_message("What's the weather like in Seattle today?")

result = await chat.get_chat_message_content(
    chat_history=history,
    settings=settings,
    kernel=kernel,  # pass kernel so SK can call plugins
)
print(result)

The C# equivalent:

var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var history = new ChatHistory("You are a helpful assistant with access to weather data.");
history.AddUserMessage("What's the weather like in Seattle today?");

var response = await chat.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(response.Content);

Run it and ask: "What's the weather like in Seattle today?"

The LLM will:

1. See that get_current_weather is available
2. Call it with city = "Seattle"
3. Incorporate the result into its answer

Expected output

"The current weather in Seattle is 🌧️ Rainy, 12°C. Bring an umbrella!"


Step 4: Build a simple conversation loop

Let's make it interactive:

history = ChatHistory()
history.add_system_message(
    "You are a helpful assistant with access to weather data. "
    "Use the weather plugin when the user asks about weather."
)

print("Weather Agent ready. Type 'exit' to quit.\n")
while True:
    user_input = input("You: ").strip()
    if user_input.lower() == "exit":
        break

    history.add_user_message(user_input)
    result = await chat.get_chat_message_content(
        chat_history=history,
        settings=settings,
        kernel=kernel,
    )
    history.add_assistant_message(str(result))
    print(f"Agent: {result}\n")

Summary

You've built your first Semantic Kernel agent that:

  • ✅ Connects to an LLM (GitHub Models, free)
  • ✅ Has a custom Plugin with a native function
  • ✅ Uses auto function calling: the LLM decides when to invoke the function
  • ✅ Maintains conversation history across turns

Next Steps