A Developer's Guide to Ollama: From Setup to Customization

Discover the simplest way to run local LLMs on your machine. This guide covers Ollama installation on Windows, Mac, and Linux, Modelfile customization, and API integration. Take your first step into local AI development with practical, hands-on examples.
Seunghwan Kim
Oct 01, 2025

Tired of cloud API costs racking up? Worried about sending proprietary code to third-party AI services? Frustrated by development grinding to a halt with an unstable internet connection? If you're a developer eager to leverage generative AI, you've likely faced these hurdles. But what if you could run and test Large Language Models (LLMs) right on your PC, free from all these constraints?

Enter Ollama, a tool built to solve exactly these problems. Ollama lets you install and run leading open-source LLMs like Llama 3 and Mistral with just a few simple commands. Your data never leaves your machine, there are no recurring costs, and you don't need an internet connection to work.

This guide provides a complete developer-focused walkthrough: from installing Ollama and mastering its core CLI commands to customizing your own models and finally integrating them into your applications.

Ollama official website

1. Get Up and Running with Ollama in 3 Minutes

Ollama's standout feature is its dead-simple installation. Forget complex environment configurations and follow the straightforward steps for your OS.

1-1. Installing on Windows

  1. Head to the Ollama official website and click "Download for Windows" to get the installer.

  2. Run the downloaded OllamaSetup.exe and follow the setup wizard.

  3. Once finished, Ollama will automatically start as a background service. Open your terminal (CMD or PowerShell) and type:

ollama --version

If the version number appears, you're good to go.

1-2. Installing on macOS

  1. From the Ollama official website, click "Download for macOS."

  2. Unzip the Ollama-darwin.zip file and drag the Ollama app into your "Applications" folder.

  3. Launch the app. You'll see the Ollama icon in your menu bar, signifying it's running in the background. Open your terminal and run the same version check.

ollama --version
Running Ollama in the macOS terminal

1-3. Installing on Linux

For Linux, a single command in your terminal is all it takes.

curl -fsSL https://ollama.com/install.sh | sh

Now that the setup is complete, it's time to dive in.

2. Master LLMs from Your Terminal: Core Ollama Commands

Ollama provides an intuitive Command-Line Interface (CLI) for model management. You only need to remember these four essential commands to get started.

2-1. Run a Model: run

This is your primary command. It downloads the specified model if it's not present locally, then immediately starts an interactive session. Let's fire up Meta's popular Llama 3 model.

ollama run llama3

The model will begin downloading, and upon completion, you'll see a >>> Send a message (/? for help) prompt. Feel free to ask it anything.

Installing llama3

2-2. Pre-download a Model: pull

While run is convenient, you might want to download models ahead of time. The pull command is perfect for this.

# Pre-download codellama, a model specialized for coding
ollama pull codellama

2-3. List Your Models: list

To see all the models installed on your machine, use the list command. It displays their names, IDs, sizes, and modification dates at a glance.

ollama list

2-4. Remove a Model: rm

When you no longer need a model, use the rm command to free up disk space.

ollama rm codellama
Installing and chatting with Llama 3 via Ollama

3. Craft Your Own AI: Customizing with a ‘Modelfile’

Ollama's true power is unlocked through customization. If you've ever written a Dockerfile, the Modelfile will feel instantly familiar: it's a blueprint for creating bespoke models.

A Modelfile lets you define a base model, set a persistent system prompt, and tweak parameters to create a new, specialized model. Let's build a "Reviewer Bot" that exclusively handles front-end code reviews.

Step 1: The Modelfile Concept

Think of a Modelfile as a "recipe" or "character sheet" for your AI.

  • Without a Modelfile: Each time you run ollama run llama3, you have to start by telling it: "You are a helpful front-end developer assistant. Always answer in Korean." You repeat this setup in every session.

  • With a Modelfile: You codify these instructions once inside the file: "Your base model is llama3, your persona is a helpful front-end developer, and you must always reply in Korean." You then use this recipe to build a new, custom model.

The Benefit: Once built, you simply run your custom model. It will always remember its persona and instructions, no repetitive prompting needed.

Step 2: Create the Modelfile

Let's create the file to hold our recipe.

  1. Navigate to your project folder in your terminal.

  2. Create an empty file named Modelfile. (Important: no file extension. Not Modelfile.txt, just Modelfile.)

    touch Modelfile

    (You can also create the file directly in a code editor like VS Code — on Windows, where touch isn't available in CMD, this is the easiest option.)

Modelfile execution screen

Step 3: Define Your Recipe

Open the Modelfile and add the following instructions.

A. FROM (The Base Model)

  • What it does: Specifies which existing model to build upon. This is the foundational ingredient.

  • Syntax:

    FROM llama3:latest

B. SYSTEM (The Persona)

  • What it does: Sets a permanent system prompt that defines the AI's role, personality, or core instructions.

  • Syntax: (Triple quotes """ are great for multi-line prompts.)

    SYSTEM """
    You are a professional front-end developer assistant.
    Your primary role is to help users with React, TypeScript, and Git.
    Always provide answers in Korean.
    Provide clear, concise code examples when necessary.
    """

C. PARAMETER (The Fine-Tuning)

  • What it does: Adjusts the model's behavior. The most common parameter is temperature, which controls creativity.

  • Syntax:

    PARAMETER temperature 0.7
    • temperature: Controls the randomness of the output. It's typically set between 0 and 1, though higher values are allowed.

      • Closer to 0: More deterministic and factual. Good for tasks like summarization.

      • Closer to 1: More creative and diverse. Good for brainstorming or creative writing.

      • 0.7 strikes a good balance between creativity and coherence.
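
Putting the three instructions together, the complete Modelfile for our reviewer bot looks like this (the persona text is the example from this step):

```
FROM llama3:latest

SYSTEM """
You are a professional front-end developer assistant.
Your primary role is to help users with React, TypeScript, and Git.
Always provide answers in Korean.
Provide clear, concise code examples when necessary.
"""

PARAMETER temperature 0.7
```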

Content inside a Modelfile

Step 4: Build Your Custom Model

Save the Modelfile and run the create command in your terminal.

  • Command:

    ollama create my-assistant -f ./Modelfile
  • Breakdown:

    • ollama create: The command to build a new model.

    • my-assistant: The name for your new custom model.

    • -f ./Modelfile: Specifies the file to use as the recipe.

You'll see a progress indicator as Ollama builds your new model.

Running a Modelfile creation command in the terminal

Step 5: Run Your Creation

That's it! Now, run your custom model.

ollama run my-assistant

The chat session will start, and the AI will automatically adopt the persona you defined—no extra prompting required. It's now a dedicated front-end assistant, ready to help.

Chatting with a model after applying a Modelfile
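
Your custom model is also reachable from application code: Ollama serves a local REST API, by default on port 11434. The sketch below is a minimal, non-streaming example using only Python's standard library; it assumes the Ollama service is running and that the my-assistant model built above exists.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a single, non-streaming generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full answer in "response"
        return json.loads(resp.read())["response"]

# Usage (requires the Ollama service to be running):
#   answer = ask("my-assistant", "Explain the useEffect hook in one sentence.")
```

With stream set to True instead, the server returns the answer token by token as newline-delimited JSON, which is what you'd use for a chat-style UI.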

It's Time to Build Your Ideas

You've successfully installed a powerful LLM on your local machine, learned to manage it with the CLI, and taken your first steps into crafting a custom AI with a Modelfile.

Ollama is a game-changer, liberating developers from the constraints of data privacy, cost, and internet dependency. Whether you're building a personal code assistant or prototyping a complex NLP feature, you now have a powerful engine at your disposal.

What we've covered today is just the beginning. This engine is capable of powering production-grade systems, though that journey will bring its own set of exciting challenges.

Now it’s your turn. Go bring your ideas to life.

Happy Hacking


Peekaboo Labs Blog