Matt Williams
I was a founding maintainer of Ollama, the first evangelist at Datadog, and a former organizer of DevOps Days Seattle, DevOps Days Boston, and Serverless Days Boston. In my day job, I tour the country speaking at conferences and writing about the company I work for. But I am also passionate about gadgets. Some folks don't think that word is appropriate, but it totally fits here. If you meet me in person, you will see me light up when I can share my latest gadget, tool, or utility with you. And this is where I get to share that with all of you. You can find more about me at my website (http://technovangelist.com) or on Twitter (http://twitter.com/technovangelist).

Remote Server with Local Power

n8n with Tailscale for local GPU access on Remote Servers

n8n with Tailscale for local GPU access on Remote Servers - Enhanced Members Version

Zero to MCP with n8n and Hostinger

The NEW Feature that Makes n8n better TODAY!

Unlock Gemma 3's Multi-Image Magic

Why did they release this model with this license?

How I Stopped Letting Negative Comments Derail My Creativity

Let's Look at Gemma 3 Together

Getting started with Local AI

Perplexity Fixes Deepseek

MSTY Makes Ollama Better

DeepScaleR Claims Greatness

Based on DeepSeek R1. Is it Better?

Solved with Windsurf

Axolotl is an AI Fine Tuning Magician

An important PSA to my regulars and followers on Discord

Fast Fine Tuning with Unsloth

Fast Fine Tuning for Linux and Windows with Unsloth - Ad Free for members

Is MLX the best Fine Tuning Framework?

19 Tips to Better AI Fine Tuning

Optimize Your AI - Quantization Explained

Exaone3.5 Performance in #ollama

Autocomplete for your Mac that works EVERYWHERE!

The Truth About Ollama's Structured Outputs

The Path To Better Custom Models

An Honest Look at MKBHD's Look At Apple Intelligence

Simplify Ollama Cleanup Like a Pro

Find Your Perfect Ollama Build

Cracking the Enigma of Ollama Templates