A terminal window showing a Python script calling a local LLM with no API key

Your Local AI Stack: uv and Ollama in 10 Minutes

How do you run a local LLM from a Python script? Install Ollama, pull a model, install uv, write one file with inline dependencies, and run it. No API key. No virtual environment to activate. No Docker. The whole setup takes under ten minutes. Why run local? Three reasons: cost, privacy, and offline access. Frontier APIs charge per token. For experimentation, prototyping, and batch tasks, those costs add up before you have anything to show. A local model costs nothing per call. ...
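The "one file with inline dependencies" step can be sketched like this: a single Python file carrying PEP 723 inline metadata that uv reads when you run it. The model name `llama3.2` and the filename are assumptions, not prescriptions from the post.

```python
# /// script
# requires-python = ">=3.9"
# dependencies = ["ollama"]
# ///
# Run with `uv run ask.py` -- uv reads the inline metadata above and
# installs `ollama` into an ephemeral environment; no venv to activate.
# Assumes the Ollama server is running locally with a model pulled,
# e.g. `ollama pull llama3.2` (model choice is an assumption).

def ask(prompt: str, model: str = "llama3.2", chat=None) -> str:
    """Send one prompt to a local model and return the reply text."""
    if chat is None:
        import ollama  # lazy import: only needed for a real call
        chat = ollama.chat
    reply = chat(model=model, messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]

# Example call (requires a running Ollama server):
# print(ask("Why run LLMs locally? Answer in one sentence."))
```

The `chat` parameter is only there so the network call can be swapped out; in normal use you call `ask()` and let it reach the local Ollama server.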

April 10, 2026 · 4 min · Jamal Hansen

Karpathy's LLM Knowledge Base Method - A Practical Starting Point

Karpathy’s LLM knowledge base method works by having an LLM maintain a wiki of markdown files rather than retrieving from raw documents at query time. When you add a source, the LLM integrates it into the existing network, updating pages, revising summaries, and noting contradictions. By the time you need an answer, the synthesis is already done. Your job is to curate sources and ask good questions. The LLM does everything else. ...
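The integrate-on-add loop described above can be sketched as a small function; this is a hypothetical illustration, not Karpathy's actual tooling. Every wiki page is a markdown file, and adding a source asks a local model (here via Ollama, with `llama3.2` as an assumed model name) to rewrite each page in place.

```python
from pathlib import Path

def integrate(source_text: str, wiki_dir: Path, model: str = "llama3.2", chat=None) -> None:
    """Fold a new source into every wiki page, letting the LLM update
    summaries and note contradictions as it rewrites."""
    if chat is None:
        import ollama  # lazy import: only needed for a real call
        chat = ollama.chat
    for page in sorted(wiki_dir.glob("*.md")):
        prompt = (
            "Rewrite this wiki page to integrate the new source. "
            "Update the summary and note any contradictions.\n\n"
            f"PAGE:\n{page.read_text()}\n\nNEW SOURCE:\n{source_text}"
        )
        reply = chat(model=model, messages=[{"role": "user", "content": prompt}])
        page.write_text(reply["message"]["content"])
```

The point of the design is the write-time synthesis: by the time you query, the pages already reflect every source, so answering is just reading the wiki.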

April 5, 2026 · 6 min · Jamal Hansen