Tom Compagno
  • Special Series
  • Blog
  • About

The Local AI Handbook: Taking Your Models Offline

Why Go Local? The Case for Private AI

An introduction to the benefits of running models on your own machine, from total data privacy to avoiding monthly subscription fees.

Read post

The Hardware Check – Can Your PC Handle It?

A high-level guide to the "Big Three" requirements—VRAM, System RAM, and Storage—and how to audit your current specs for running local LLMs.

Read post
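As a quick preview of that audit, total system RAM and free disk space can be read with Python's standard library. The POSIX `sysconf` names below assume Linux or macOS; VRAM is not exposed by the stdlib and needs a vendor tool such as `nvidia-smi`.

```python
import os
import shutil

# Total system RAM via POSIX sysconf (Linux/macOS only).
ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9

# Free disk space on the current drive; quantized models
# commonly need anywhere from 4 to 40+ GB each.
disk_free_gb = shutil.disk_usage(".").free / 1e9

print(f"System RAM: {ram_gb:.1f} GB")
print(f"Free disk:  {disk_free_gb:.1f} GB")

# VRAM is vendor-specific; on NVIDIA cards, run:
#   nvidia-smi --query-gpu=memory.total --format=csv
```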

The VRAM Bottleneck – Why the GPU is King

A deeper dive into Video RAM (VRAM), explaining why your graphics card’s memory is the single most important factor governing speed and model size when running local LLMs.

Read post

Quantization – Fitting a Giant in a Small Box

A technical look at the "shrinking" process (converting 16-bit files to 4-bit or 8-bit) that allows massive models to run on consumer-grade hardware.

Read post
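The arithmetic behind that "shrinking" is easy to sketch: a weight file's size is roughly parameter count times bits per weight. The helper below is a rule-of-thumb illustration, not an exact loader calculation (runtimes add overhead for context and activations):

```python
def weight_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough on-disk size of a model's weights, in gigabytes."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 7B-parameter model at 16-bit precision vs. a 4-bit quantization:
print(weight_size_gb(7, 16))  # 14.0 GB -- too big for most consumer GPUs
print(weight_size_gb(7, 4))   # 3.5 GB  -- fits comfortably in an 8 GB card
```

Halving the bits halves the file, which is why a 4-bit quantization of the same model needs roughly a quarter of the memory of the original 16-bit weights.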

Choosing Your Runner – LM Studio vs. Ollama vs. Kobold

A granular comparison of the software tools used to actually load and "chat" with your quantized model files.

Read post

The First Boot – Downloading and Running Your First GGUF

The final "how-to" step: finding a model on Hugging Face, loading it into your software, and sending your first offline prompt.

Read post
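In practice, that first boot can be a couple of commands. The sketch below uses Hugging Face's `huggingface-cli` and llama.cpp's `llama-cli`; the repository and file names are illustrative placeholders, not recommendations from this series:

```shell
# Download a quantized GGUF from Hugging Face (repo/file names here
# are hypothetical -- substitute any GGUF model you choose).
huggingface-cli download example-org/example-model-GGUF \
    example-model.Q4_K_M.gguf --local-dir ./models

# Load it and send your first offline prompt with llama.cpp's CLI.
llama-cli -m ./models/example-model.Q4_K_M.gguf -p "Hello, offline world."
```

LM Studio and Ollama wrap these same steps in a GUI or a single `run` command, but the underlying flow is identical: fetch a GGUF, point a runner at it, prompt.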

Tom built this site in Webflow, then migrated to Astro

  • Privacy
  • Contact
  • LinkedIn
All rights reserved © 2026
Made partially by AI