A terminal window showing a Python script calling a local LLM with no API key

Your Local AI Stack: uv and Ollama in 10 Minutes

How do you run a local LLM from a Python script? Install Ollama, pull a model, install uv, write one file with inline dependencies, and run it. No API key. No virtual environment to activate. No Docker. The whole setup takes under ten minutes. Why run local? Three reasons: cost, privacy, and offline access. Frontier APIs charge per token. For experimentation, prototyping, and batch tasks, those costs add up before you have anything to show. A local model costs nothing per call. ...
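As a taste of the workflow the excerpt describes, here is a minimal single-file sketch: PEP 723 inline metadata so `uv run chat.py` needs no activated environment, and a plain HTTP call to Ollama's default local endpoint (`http://localhost:11434`). The filename, the model name `llama3.2`, and the helper names are illustrative, not from the article.

```python
# chat.py — run with `uv run chat.py` after `ollama pull llama3.2`
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
import json
import urllib.request

# Ollama serves a local HTTP API on this port by default; no API key required.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> bytes:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local model and return its full response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With Ollama running, `ask("llama3.2", "Say hello in five words.")` returns the model's reply; the empty `dependencies` list shows the stdlib is enough here, though the same inline block is where you would declare packages for uv to resolve.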

April 10, 2026 · 4 min · Jamal Hansen

You Can Choose Tools That Make You Happy

Why This Caught My Eye Tool choice is a value statement, not just a technical decision. For years, I’ve wanted to learn lesser-known languages like Erlang and Clojure. I find them genuinely interesting, but I kept waiting for the right project to justify it. This article argues that’s the wrong frame: the joy of the tool is the justification.

March 12, 2026 · 1 min · Jamal Hansen