Date Created: 2024-09-19
Last Updated: 2025-04-29
By: 16BitMiker
Ollama continues to lead the way in local AI development by making it easy to run large language models directly on your machine. With the addition of a graphical user interface (GUI), interacting with these models becomes even more accessible: no terminal required.
This guide shows you how to set up and run the Ollama GUI on both Debian-based Linux distributions (like Ubuntu) and macOS in 2025.
Make sure the following are ready before proceeding:
✅ Ollama is installed and running. (If not, visit ollama.com for installation instructions.)
✅ You have an internet connection to clone the GUI repository.
✅ You have basic command-line familiarity.
The Ollama GUI acts as a frontend; you must have the Ollama backend running for it to function properly.
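To confirm the backend is reachable, you can query Ollama's local HTTP API, which listens on port 11434 by default:

# Quick check that the Ollama backend is up (default port shown; adjust if you changed it)
curl http://localhost:11434/api/tags

If this returns a JSON list of your installed models, the backend is ready.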
Ensure make is Installed

The GUI build process depends on make, a common tool for compiling and automating project builds.

On Debian-based Linux, check if make is installed:
make --version
If it's missing, install via:
sudo apt update
sudo apt install build-essential
This installs make and other essential development tools.
On macOS, check for make:
make --version
If not found, install Xcode Command Line Tools:
xcode-select --install
📦 This prompt installs make, gcc, and other necessary development utilities.
The Ollama GUI source is hosted on GitHub. Let's clone the repository and build it.
mkdir -p $HOME/temp
cd $HOME/temp
git clone https://github.com/ollama-ui/ollama-ui
cd ollama-ui
make
📋 What These Commands Do:
🗂️ mkdir -p $HOME/temp: Creates a temporary working directory.
📥 git clone: Downloads the Ollama GUI source code.
🛠️ make: Builds the GUI using the preconfigured Makefile.
📝 Note: The build process may take a minute or two. If you run into errors, double-check that you have all required dependencies installed.
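A quick sanity check before building is to confirm the core tools are available on your PATH:

# Verify the basic build dependencies
git --version
make --version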
Once built, you can launch the interface.
Ensure Ollama is running:
ollama run llama3
In another terminal, run:
cd $HOME/temp/ollama-ui
make run
Open your browser and go to:
http://localhost:3000
✅ You should now see the GUI front-end for your Ollama models.
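If the page doesn't load, you can check from another terminal whether anything is responding on port 3000 (a simple curl probe, shown here as an example):

# Print the HTTP status code returned by whatever is serving port 3000
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000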
To streamline launching the GUI, you can create a helper script.
nano ~/start_ollama_gui.sh
Paste the following:
#!/usr/bin/env bash
# Simple launcher for Ollama GUI
cd "$HOME/temp/ollama-ui" || exit 1
make run
Save and exit (Ctrl + O to write the file, Enter to confirm, then Ctrl + X to quit).
Make the script executable:
chmod +x ~/start_ollama_gui.sh
Now, launching the GUI is as simple as running:
~/start_ollama_gui.sh
🔧 Pro Tip: Add this script to your application launcher or system startup if you use Ollama often.
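On Linux, one way to do that is a systemd user service. The sketch below is only an example (the unit name ollama-gui.service and the paths are illustrative, not part of the project); on macOS you could instead add the script to your Login Items or a launchd agent:

# Create a user-level systemd unit that runs the launcher at login
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/ollama-gui.service <<'EOF'
[Unit]
Description=Ollama GUI launcher

[Service]
ExecStart=%h/start_ollama_gui.sh

[Install]
WantedBy=default.target
EOF

# Enable and start it for your user
systemctl --user daemon-reload
systemctl --user enable --now ollama-gui.service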
If the GUI isn't working, first make sure Ollama is running:
ollama list
If you get an error, the backend isn't running.
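How you start it depends on how Ollama was installed. On Linux installs that register a systemd service, or anywhere you just want the server running in the foreground, something like this should work:

# Start the Ollama backend via systemd (Linux installs that set up a service)
sudo systemctl start ollama

# Or run the server directly in a terminal (any platform)
ollama serve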
Confirm the GUI is built correctly:
make clean && make
Check for port conflicts. The GUI listens on port 3000, so make sure nothing else is using it.
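To see what, if anything, already holds that port:

# Find the process listening on port 3000
lsof -i :3000

# Or, on modern Linux systems
ss -ltnp | grep :3000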
If you encounter Python or Node.js-related errors, check the project's README for specific setup instructions or install the latest LTS version of Node.js with:
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install -y nodejs
On macOS, use:
brew install node
You've now got a fully functional Ollama GUI running on your local system. This interface makes it easier to:
Interact with models like LLaMA 3, Mistral, and more
Manage prompts and responses visually
Enhance your productivity without relying solely on CLI tools
As local AI continues to evolve, having tools like Ollama and its GUI makes it easier to test, build, and deploy models without depending on cloud APIs.
Happy tinkering! 🧠💻
If you're interested in integrating Ollama with other local AI tools or containerizing your setup, stay tuned for upcoming guides.