A terminal window showing a Python script calling a local LLM with no API key

Your Local AI Stack: uv and Ollama in 10 Minutes

How do you run a local LLM from a Python script? Install Ollama, pull a model, install uv, write one file with inline dependencies, and run it. No API key. No virtual environment to activate. No Docker. The whole setup takes under ten minutes. Why run local? Three reasons: cost, privacy, and offline access. Frontier APIs charge per token. For experimentation, prototyping, and batch tasks, those costs add up before you have anything to show. A local model costs nothing per call. ...
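The recipe in this post fits in one stdlib-only file. A minimal sketch, assuming Ollama is serving at its default address (http://localhost:11434) and that a model named "llama3.2" (an assumption; any pulled model works) is available:

```python
# /// script
# dependencies = []
# ///
# Minimal sketch: call a local Ollama server with no API key.
# Assumes Ollama is running at its default address and that the
# model named below has already been pulled with `ollama pull`.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> urllib.request.Request:
    # Ollama's /api/generate endpoint takes a JSON body; stream=False
    # asks for one complete JSON response instead of a token stream.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )


def ask(model: str, prompt: str) -> str:
    # The non-streaming response is a single JSON object whose
    # "response" field holds the generated text.
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask("llama3.2", "Explain uv in one sentence."))
```

Because the script declares an empty inline dependency list, `uv run ask.py` executes it with no environment to activate; the stdlib handles the HTTP call.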

April 10, 2026 · 4 min · Jamal Hansen

Add External Dependencies to Python Scripts with uv

Ever wanted to share a Python script that uses external packages without making the recipient set up a virtual environment? With uv, you can embed dependencies directly in the script.

The Command

```shell
uv add --script example.py 'requests<3' 'rich'
```

This adds inline metadata to your script:

```python
# /// script
# dependencies = [
#     "requests<3",
#     "rich",
# ]
# ///

import requests
from rich.pretty import pprint

resp = requests.get("https://peps.python.org/api/peps.json")
data = resp.json()
pprint([(k, v["title"]) for k, v in data.items()][:10])
```

Running It

Anyone with uv installed can now run the script directly: ...
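The `# /// script` block that uv writes is PEP 723 inline metadata, which any tool can read back out of the file. As a rough sketch (a simplified take on the PEP's approach, not uv's actual code), extracting the TOML body looks like:

```python
# Sketch: read a PEP 723 "script" metadata block out of a Python file.
# Simplified for illustration; real tools follow the PEP's reference regex.
import re
from typing import Optional

SCRIPT = '''\
# /// script
# dependencies = [
#     "requests<3",
#     "rich",
# ]
# ///
import requests
'''


def read_inline_metadata(source: str) -> Optional[str]:
    """Return the TOML content of the script block, or None if absent."""
    match = re.search(
        r"^# /// script\s*\n(?P<body>(?:^#(?:\s.*)?\n)*?)^# ///\s*$",
        source,
        re.MULTILINE,
    )
    if match is None:
        return None
    # Drop the leading "# " comment prefix from each metadata line.
    return "".join(line[2:] + "\n" for line in match.group("body").splitlines())


print(read_inline_metadata(SCRIPT))
```

When `uv run` sees this block, it resolves the listed dependencies into an ephemeral environment before executing the script, which is why the recipient needs nothing installed but uv itself.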

April 19, 2025 · 1 min · Jamal Hansen