Ollama (desktop application)
Ollama for desktop is a user-friendly application on Linux, macOS, and Windows that allows users to run large language models (LLMs) locally, providing a private, free, and secure alternative to cloud-based AI.
What is the Ollama desktop application?
Ollama for desktop is a free application for Linux, macOS, and Windows that runs large language models (LLMs) locally, keeping your data on your own machine instead of sending it to cloud-based AI services. It functions as a graphical interface, making it easy to download, manage, and interact with open-source models (e.g., LLaMA, Mistral) without requiring command-line skills.
Key features of the Ollama app
- Local execution: Models run directly on your computer's hardware, so no internet connection is required once a model has been downloaded.
- Data security: Because models run locally, your data stays on your computer and is never used for training.
- User interface (UI): Provides an intuitive interface for interacting with models, eliminating the need for a terminal.
- Multimodal support: Supports multimodal models, allowing users to upload or drag and drop images for analysis.
- Model management: Makes it easy to download and manage open-source models such as Gemma 3 and DeepSeek.
- Adjustable settings: Includes options such as context length to manage memory usage.
The application runs in the background, often activating GPU acceleration on NVIDIA and AMD Radeon cards to boost AI performance.
How to install Ollama on Windows
Ollama runs as a native Windows application, including support for NVIDIA and AMD Radeon GPUs. After installation, Ollama runs in the background, and the ollama command line is available in cmd, PowerShell, or your favorite terminal application. As always, the Ollama API is served at http://localhost:11434.
System requirements
- Windows 10 22H2 or later, Home or Pro edition
- NVIDIA driver 452.39 or higher if you have an NVIDIA graphics card.
- AMD Radeon driver: https://www.amd.com/en/support if you have a Radeon card.
Ollama uses Unicode characters to display progress, which may appear as undefined squares in some older terminal fonts on Windows 10. If you see this, try changing your terminal font settings.
File system requirements
Installing Ollama doesn't require administrator privileges and installs in your home directory by default. You'll need at least 4GB of free space for the binary installation. After installing Ollama, you'll need additional space to store large language models, which can be tens or even hundreds of GB. If your home directory doesn't have enough space, you can change the installation location of the binary files and the storage location of the models.
Change the installation location
To install the Ollama application in a location other than your home directory, launch the installer with the following flag:
OllamaSetup.exe /DIR="d:\somelocation"

Change the model storage location
To change where Ollama stores downloaded models instead of using your home directory, set the OLLAMA_MODELS environment variable in your user account.
- Launch the Settings app (Windows 11) or Control Panel (Windows 10) and search for environment variables.
- Click Edit environment variables for your account .
- Edit or create a user variable named OLLAMA_MODELS and set it to the directory where you want to store models.
- Click OK/Apply to save.
If Ollama is running, quit the system tray application and relaunch it from the Start menu or from a new command-line window after you have saved the environment variables.
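The same variable can also be set from a Command Prompt with setx instead of the Settings UI; the path below is only an example:

```shell
setx OLLAMA_MODELS "D:\ollama\models"
```

setx writes the value to the user environment, so it takes effect in new terminal windows once Ollama is relaunched.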
Access the API
Here's a quick example of how to access the API from PowerShell:
(Invoke-WebRequest -Method POST -Body '{"model":"llama3.2", "prompt":"Why is the sky blue?", "stream": false}' -Uri http://localhost:11434/api/generate).Content | ConvertFrom-Json
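The same request can be sketched with curl, which also ships with recent Windows builds as curl.exe. The model name below is only an example; substitute a model you have pulled:

```shell
# Build the request body for /api/generate ("llama3.2" is an example model)
cat > /tmp/ollama_request.json <<'EOF'
{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}
EOF

# Send it to a running Ollama server (uncomment to execute):
# curl -s http://localhost:11434/api/generate -d @/tmp/ollama_request.json

# Sanity-check the body we just wrote
grep -q '"stream": false' /tmp/ollama_request.json && echo "request body ok"
```

Setting "stream": false returns one complete JSON object instead of a stream of partial responses, which is easier to parse in scripts.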
How to install Ollama on macOS
System requirements
- macOS Sonoma (version 14) and later
- Apple M series (supports both CPU and GPU) or x86 (CPU only)
File system requirements
The recommended installation method is to mount the ollama.dmg file and drag and drop the Ollama application into the system-wide Applications folder. Upon startup, the Ollama application will verify if the ollama CLI is in your PATH, and if not detected, will request permission to create a link in /usr/local/bin.
After installing Ollama, you will need additional space to store large language models, which can range from tens to hundreds of GB in size. If your home directory does not have enough space, you can change the installation location of the binary files and the storage location of the models.
Change the installation location
To install the Ollama application in a location other than the Applications folder, place Ollama.app in the desired location and decline the 'Move to Applications?' prompt on first launch. Then ensure that Ollama.app/Contents/Resources/ollama, or a symbolic link to the CLI, can be found on your PATH.
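If the app lives outside /Applications, the CLI link can also be created manually; the application path below is only an example:

```shell
# Assumes Ollama.app was placed in ~/Tools (example location)
sudo ln -s ~/Tools/Ollama.app/Contents/Resources/ollama /usr/local/bin/ollama
```

This mirrors the link the app itself offers to create in /usr/local/bin on first launch.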
How to install Ollama on Linux
Installation
To install Ollama, run the following command:
curl -fsSL https://ollama.com/install.sh | sh

Manual installation
Note: If you are upgrading from a previous version, first remove the old libraries with sudo rm -rf /usr/lib/ollama.
Download and extract the package:
curl -fsSL https://ollama.com/download/ollama-linux-amd64.tar.zst | sudo tar x -C /usr

Launch Ollama:
ollama serve

In a separate terminal window, check if Ollama is running:
ollama -v

Install AMD GPU support
If you have an AMD GPU, download and extract the ROCm package as well:
curl -fsSL https://ollama.com/download/ollama-linux-amd64-rocm.tar.zst | sudo tar x -C /usr

Install ARM64
Download and extract the ARM64-specific package:
curl -fsSL https://ollama.com/download/ollama-linux-arm64.tar.zst | sudo tar x -C /usr

Add Ollama as a startup service (recommended)
Create a user and group for Ollama:

sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
sudo usermod -a -G ollama $(whoami)

Create a service file in /etc/systemd/system/ollama.service:
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"

[Install]
WantedBy=multi-user.target

Then start the service:
sudo systemctl daemon-reload
sudo systemctl enable ollama

Install CUDA drivers (optional)
Download and install CUDA. Check that the driver is installed by running the following command, which will display details about your GPU:

nvidia-smi

Install the AMD ROCm driver (optional)

Download and install ROCm v7.
Start Ollama
Launch Ollama and check if it's running:
sudo systemctl start ollama
sudo systemctl status ollama

Although AMD has contributed the amdgpu driver to the official Linux kernel source code, this version is older and may not support all ROCm features. You should install the latest driver from https://www.amd.com/en/support/linux-drivers for best support of your Radeon GPU.
Customize
To customize Ollama settings, you can edit the systemd service file or environment variables by running:
sudo systemctl edit ollama

Alternatively, you can manually create an override file in /etc/systemd/system/ollama.service.d/override.conf:
[Service]
Environment="OLLAMA_DEBUG=1"

Update
Update Ollama by running the installation script again:
curl -fsSL https://ollama.com/install.sh | sh

Or by downloading Ollama again:
curl -fsSL https://ollama.com/download/ollama-linux-amd64.tar.zst | sudo tar x -C /usr

Install specific versions
Use the OLLAMA_VERSION environment variable with the install script to install a specific version of Ollama, including previous releases. You can find version numbers on the releases page.
For example:
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.7 sh

View the log
To view the logs of Ollama running as a startup service, run:
journalctl -e -u ollama
By Lesley Montoya