Understanding Semantic Kernel in C#: Building an AI Assistant
If you’re a C# developer curious about artificial intelligence, you’ve probably heard of Semantic Kernel. But what exactly is it, and why should you care? Let’s break it down in simple terms using a practical code example. You can clone the sample code repository by following this link.
What is Semantic Kernel?
Think of Semantic Kernel as a bridge between your C# applications and AI models. It’s a lightweight SDK created by Microsoft that makes it incredibly easy to integrate AI capabilities—like ChatGPT—into your .NET applications without getting bogged down in complex API calls.
Instead of writing tons of code to handle API requests, authentication, and response parsing, Semantic Kernel handles all the heavy lifting for you. You tell it what you want, and it takes care of the rest.
Getting started with the sample chat app
Create a console app in Visual Studio 2022 and add the following packages.
dotnet add package Microsoft.SemanticKernel --version 1.65.0
dotnet add package Microsoft.SemanticKernel.Connectors.OpenAI --version 1.65.0
dotnet add package Microsoft.SemanticKernel.PromptTemplates.Handlebars --version 1.65.0
Let’s walk through the code line by line to understand what’s happening.
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Supply your model name and API key; here the key is read from an environment variable as an example.
var model = "gpt-4";
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(model, apiKey);
var kernel = builder.Build();
Here’s what’s happening: First, we import the Semantic Kernel libraries. Then, we create a Kernel object using the builder pattern. The kernel is the heart of everything—it’s the engine that will power our AI assistant.
We add OpenAI’s chat completion service to our kernel, which connects it to ChatGPT. We pass in the model name (like GPT-4) and the API key to authenticate with OpenAI. Once configured, we build and store the kernel.
Think of the kernel as a Swiss Army knife: you configure it with tools (in this case, OpenAI’s chat completion), and then you can use it to perform AI tasks.
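To stretch that metaphor a little: the same builder can register additional tools (plugins, in Semantic Kernel terms) before you call Build(). The plugin class and function below are hypothetical, just to sketch what that might look like:
using System.ComponentModel;
using Microsoft.SemanticKernel;

// A hypothetical plugin exposing a native C# function to the kernel.
public class OfficeInfoPlugin
{
    [KernelFunction, Description("Returns the team's office location.")]
    public string GetOfficeLocation() => "London";
}

// Back in the setup code, before builder.Build():
builder.Plugins.AddFromType<OfficeInfoPlugin>();
Nothing later in this walkthrough depends on plugins; the point is simply that the kernel is designed to have extra capabilities bolted on.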
Configuring AI Behavior
var promptSetting = new OpenAIPromptExecutionSettings
{
    MaxTokens = 2000,
    Temperature = 0.2,
    TopP = 0.5
};
Now we’re telling the AI how to behave. Let’s decode these settings:
- MaxTokens: This is the maximum length of the AI’s response. A token is roughly three-quarters of a word (about four characters of English text). By setting this to 2000, we’re saying “don’t generate more than about 2000 tokens, which is roughly 1500 words.”
- Temperature: This controls how creative or predictable the AI is. Think of it like a creativity slider. A low temperature (0.2) makes the AI more focused and consistent—it will give you reliable, similar answers each time. A high temperature (closer to 1.0) makes it more creative and random. For a professional assistant, a low temperature makes sense.
- TopP: This is another way to control randomness, working alongside temperature. A value of 0.5 means the AI samples only from the smallest set of likely next tokens whose combined probability adds up to 50%, ignoring the long tail of unlikely options. It’s another way to keep responses focused and coherent. (For contrast, a more “creative” configuration is sketched just below.)
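For comparison, here’s what a looser configuration might look like if you wanted a brainstorming-style assistant rather than a precise one. The values are purely illustrative:
var creativeSettings = new OpenAIPromptExecutionSettings
{
    MaxTokens = 2000,
    Temperature = 0.8, // more varied, less predictable wording
    TopP = 0.95        // sample from a much wider pool of candidate tokens
};
OpenAI’s general guidance is to adjust either Temperature or TopP rather than both at once, so change one at a time and compare the results.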
Creating the Assistant’s Personality
var context = @"
You are a helpful assistant designed to support Microsoft developers. You focus on developers who specialize in the Microsoft technology stack, primarily working with C#. Therefore, please include relevant C# code examples in your responses. Your name is AgentX, and you are based on PlanetX. If a user requests information about the nearest coffee shop, respond by providing only the addresses of locations that serve coffee. Our team's office is located in London.";
This is called the system prompt or context. It’s like giving the AI a character description and instruction manual. We’re telling the AI exactly who it should be, what it should do, and how it should behave. This is incredibly powerful—it shapes every response the assistant gives.
In this example, we’re creating “AgentX,” an assistant specifically tuned for C# developers, with knowledge that the team is based in London.
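As an aside, this is where the Handlebars package we installed earlier can come in. Instead of string-interpolating the context into every prompt (as the loop below does), you can define a reusable prompt template and pass the context and the user’s question in as named variables. Here is a minimal sketch assuming Semantic Kernel’s Handlebars template support; the function name and variable names are just illustrative:
using Microsoft.SemanticKernel.PromptTemplates.Handlebars;

var chatFunction = kernel.CreateFunctionFromPrompt(
    new PromptTemplateConfig
    {
        Name = "AskAgentX",
        TemplateFormat = "handlebars",
        // Handlebars variables are filled in from KernelArguments at invoke time.
        Template = @"{{context}}
Question: {{question}}"
    },
    new HandlebarsPromptTemplateFactory());

var answer = await kernel.InvokeAsync(chatFunction, new KernelArguments
{
    ["context"] = context,
    ["question"] = "How do I read a file asynchronously in C#?"
});
Console.WriteLine(answer);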
while (true)
{
    Console.WriteLine("Q:");
    string? userPrompt = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(userPrompt)) break; // empty input ends the chat

    // Combine the system prompt with the user's question into a single prompt.
    var skPrompt = $"{context}\n{userPrompt}";

    // Pass the execution settings so MaxTokens, Temperature and TopP are applied.
    var result = await kernel.InvokePromptAsync(skPrompt, new KernelArguments(promptSetting));
    Console.WriteLine(result);
}
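This loop reads a question, prepends the system prompt, sends the combined text to the model, and prints the answer. Each question is sent in isolation, though, so AgentX has no memory of earlier turns. For a real back-and-forth conversation you would typically use the chat completion service together with a ChatHistory. Here’s a minimal sketch, assuming the same kernel, context, and promptSetting from above:
using Microsoft.SemanticKernel.ChatCompletion;

var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddSystemMessage(context); // the system prompt is sent once, not on every turn

while (true)
{
    Console.WriteLine("Q:");
    string? question = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(question)) break;

    history.AddUserMessage(question);
    var reply = await chatService.GetChatMessageContentAsync(history, promptSetting, kernel);
    Console.WriteLine(reply.Content);
    history.AddAssistantMessage(reply.Content ?? string.Empty);
}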
Why Semantic Kernel Matters
Semantic Kernel is powerful because it abstracts away complexity. Without it, you’d have to:
- Manually craft HTTP requests to OpenAI’s API
- Handle authentication and error handling
- Parse JSON responses
- Manage token limits and settings manually
With Semantic Kernel, all of that plumbing collapses into the few lines of code you saw above.
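For a sense of scale, here is a rough sketch of the manual route: calling OpenAI’s chat completions REST endpoint directly with HttpClient. It is deliberately simplified (no retries, no error handling, no streaming), and the request values are only illustrative of what the SDK normally hides:
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// Build the request body by hand.
var requestBody = JsonSerializer.Serialize(new
{
    model = "gpt-4",
    max_tokens = 2000,
    temperature = 0.2,
    messages = new[]
    {
        new { role = "system", content = "You are a helpful assistant for C# developers." },
        new { role = "user", content = "What is a record type in C#?" }
    }
});

var response = await http.PostAsync(
    "https://api.openai.com/v1/chat/completions",
    new StringContent(requestBody, Encoding.UTF8, "application/json"));

response.EnsureSuccessStatusCode();

// Dig the assistant's reply out of the raw JSON by hand.
using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
var answer = json.RootElement
    .GetProperty("choices")[0]
    .GetProperty("message")
    .GetProperty("content")
    .GetString();

Console.WriteLine(answer);
Every one of those steps, building the request, authenticating, serializing, and parsing the response, is what Semantic Kernel folds into a single InvokePromptAsync call.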
Real-World Applications
This pattern can be used to build:
- Customer support chatbots that understand your business context
- Code review assistants that help developers write better C# code
- Internal knowledge bases that answer questions about your organisation
- Documentation generators that create guides from your codebase
- Data analysis assistants that interpret data and provide insights
Conclusion
Semantic Kernel democratizes AI for C# developers. You don’t need deep machine-learning expertise to build intelligent applications; you just need to understand a few basic concepts and know how to wire them together.
The beauty of this code is its simplicity combined with its power. In a few lines, we’ve created an AI assistant with a specific personality, configured for consistent and focused responses, ready to help Microsoft developers. That’s the promise of Semantic Kernel: making AI accessible and practical for everyday development.
