DeepSeek-R1 with Continue and Ollama
In today’s fast-paced development environment, finding tools that genuinely enhance productivity without disrupting your workflow is crucial. As someone who helps businesses streamline their systems, I am always on the lookout for tools that can save time and reduce friction in development processes. Continue for VS Code is one such tool that has caught my attention, and it might be exactly what you need to optimize your coding experience.
What is Continue?
Continue is an open-source AI code assistant that integrates seamlessly with Visual Studio Code via an extension. What sets it apart from other coding assistants is its modular, customizable approach to AI-assisted development. Continue enables developers to create, share, and use custom AI code assistants with a hub of models, rules, prompts, documentation, and other building blocks.
Key Features
Continue offers four primary modes of interaction, each designed to enhance different aspects of your development workflow:
Chat
The Chat feature allows you to interact with an LLM (Large Language Model) without leaving your IDE. You can inquire about your code, request explanations, or seek assistance with specific programming challenges. This is especially helpful when dealing with unfamiliar codebases or complex logic. Instead of switching to a browser to search for solutions, you can easily highlight code and receive explanations directly in your editor. The Continue chat can be conveniently relocated to the right sidebar, allowing your file explorer to remain open and preserving your workflow.
Autocomplete
Continue’s Autocomplete feature provides inline code suggestions as you type, similar to GitHub Copilot. It can complete single lines or entire sections of code in any programming language, saving you time on repetitive coding tasks and reducing the likelihood of syntax errors. This feature is particularly valuable because it works with various models, including local ones. This gives you more control over your development environment and potentially better privacy for your code.
Edit
The Edit feature allows you to modify code without leaving your current file. Select the code you want to change, press the appropriate keyboard shortcut, and describe the modifications you want to make. Continue will suggest edits based on your instructions, which you can accept or reject. This is incredibly useful for quick refactoring, formatting changes, or implementing new functionality within existing code structures, and it eliminates the need to apply changes by hand, reducing the time spent on routine code modifications.
Agent
The Agent mode equips the Chat model with tools to manage a broader spectrum of coding tasks. It can significantly alter your codebase, working across multiple files as needed. This feature is especially powerful for extensive refactoring tasks, implementing new features, or migrating code between frameworks. Agent mode functions by analyzing your codebase, planning the necessary modifications, and executing them step by step. It can even run terminal commands if needed, making it a comprehensive assistant for complex development tasks.
Ollama
Ollama is a powerful open-source tool that lets you run sophisticated large language models directly on your own hardware. Running models locally removes the dependency on external cloud services and API calls, keeps your data private, and reduces ongoing costs. With support for models such as Llama, Mistral, and others, it offers flexibility while keeping sensitive information contained within your local environment.
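Before wiring Ollama into an editor, it helps to confirm the local server is running. A minimal check, assuming a default install (Ollama listens on http://localhost:11434):

```bash
# List the models that have been pulled locally
ollama list

# The local HTTP API should respond; /api/tags returns the
# locally available models as JSON
curl http://localhost:11434/api/tags
```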
Setting up and getting started with DeepSeek-R1 using Continue and Ollama on a local machine
- For this demonstration, install Ollama on your machine by following the instructions at https://ollama.com/download
- Install VS Code from https://code.visualstudio.com/download
- Install the Continue extension from the Visual Studio Marketplace (once installed, you’ll see the Continue logo in your sidebar)
- Pull and run these two LLMs in Ollama (open a terminal window and run each command; the model is downloaded automatically on first run):
- ollama run llama3.1:8b
- ollama run deepseek-coder-v2:16b
- To configure Continue in VS Code to connect to the local LLMs, click the Continue icon and select the Local Assistant drop-down. Next, click Configure to open the config.yaml file. Paste the following configuration and save it:
```yaml
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: deepseek-coder-v2:16b
    provider: ollama
    model: deepseek-coder-v2:16b
    roles:
      - autocomplete
      - chat
      - edit
      - apply
  - name: Llama 3.1 8B
    provider: ollama
    model: llama3.1:8b
    roles:
      - chat
      - edit
      - apply
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase
```
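Note how the roles section maps features to models: deepseek-coder-v2:16b is the only model assigned the autocomplete role, so it serves inline suggestions, while both models are available for Chat, Edit, and apply. Once the file is saved, both models should appear in Continue’s model selector. If one is missing, a quick sanity check (assuming the ollama run steps above completed):

```bash
# Both models should appear in this list; if one is missing,
# re-run the corresponding ollama run command from the steps above
ollama list
```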
Everything is set up! You can now explore Continue’s features in your own code repository, or clone the eShop sample repository to try them out:
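Assuming git and the VS Code code command-line launcher are on your PATH:

```bash
# Clone the sample repository and open it in VS Code
git clone https://github.com/dotnet/eShop.git
code eShop
```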
I tried the Chat and Edit features and have attached the responses generated by Continue.
Typing the @ symbol lets you attach context that is fed to the LLM, such as the codebase, documentation, the IDE, or specific files. You can also configure additional context providers in the config.yaml file.
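For example, adding a provider is a one-line change to the context list. The url provider below is a sketch; the provider name is an assumption based on Continue’s documentation, so verify it against the current docs:

```yaml
context:
  - provider: code
  - provider: docs
  # Append additional providers here; "url" (assumed name - verify
  # against Continue's docs) would let you pull a web page into
  # context from the @ menu in chat
  - provider: url
```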