Building Agents for Small Language Models: A Deep Dive into Lightweight AI

Why This Caught My Eye

Most agent writing assumes you are running a frontier model. This post takes small models seriously and asks what agent architecture actually looks like when your model can barely do tool use reliably. I care about this because I'm exploring smaller local models, and posts from others doing the same keep sparking my interest.

March 21, 2026 · 1 min · Jamal Hansen

Why Your AI Agent Needs a SQLite Task System

Why This Caught My Eye

Someone else independently arrived at the same pattern I've been building toward: use SQLite as the coordination layer for local AI tools. That independent convergence is the signal worth paying attention to. I've already used SQLite for caching in the content discovery agent. This article pushed me to commit to it as the shared state layer across tools, which is now documented as a decided architectural choice. ...
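To make the pattern concrete, here is a minimal sketch of what a SQLite task/coordination layer might look like. This is my own illustration, not the article's schema: the `tasks` table, its columns, and the `claim_next` helper are all assumed names.

```python
# Sketch: SQLite as the shared state layer for local AI tools.
# Schema and function names are hypothetical, for illustration only.
import sqlite3


def init_tasks(conn: sqlite3.Connection) -> None:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS tasks (
            id INTEGER PRIMARY KEY,
            tool TEXT NOT NULL,                      -- which local tool owns the task
            payload TEXT NOT NULL,                   -- task input, e.g. JSON
            status TEXT NOT NULL DEFAULT 'queued',   -- queued | running | done
            created_at TEXT DEFAULT CURRENT_TIMESTAMP
        )
    """)
    conn.commit()


def claim_next(conn: sqlite3.Connection, tool: str):
    """Claim the oldest queued task for a tool, inside one transaction."""
    with conn:  # the select-then-update either fully happens or not at all
        row = conn.execute(
            "SELECT id, payload FROM tasks "
            "WHERE tool = ? AND status = 'queued' ORDER BY id LIMIT 1",
            (tool,),
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE tasks SET status = 'running' WHERE id = ?", (row[0],))
        return row


conn = sqlite3.connect(":memory:")
init_tasks(conn)
conn.execute(
    "INSERT INTO tasks (tool, payload) VALUES (?, ?)",
    ("summarizer", '{"url": "https://example.com"}'),
)
conn.commit()
task = claim_next(conn, "summarizer")  # (1, '{"url": "https://example.com"}')
```

The appeal for local tools is that every tool in the stack can read and write the same file with no server process, and SQLite's transactions give you just enough coordination to hand tasks between them safely.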

March 12, 2026 · 1 min · Jamal Hansen

I use offline LLMs a lot - how do folks choose?

Why This Caught My Eye

I found this LinkedIn post right as I was digging into Ollama and local models. Dr. Robert Long shares his own criteria for selecting a local model and asks what others use. He mentions building a Python benchmark harness to test them. All of this confirms that I'm following a meaningful path and pulling the right threads.

March 3, 2026 · 1 min · Jamal Hansen