2026
May 2026
Why I Moved This Site From WordPress to Markdown — May 10, 2026
Why this site now treats Markdown in Git as the canonical source for readers, search engines, and AI systems.
Drift is the default condition. Coherence is the achievement. — May 7, 2026
Because Drift Happens. Most systems do not fail suddenly. They drift gradually, then fail visibly. Over the years, I’ve noticed the same recurring pattern: what people often call “stability” is usually a temporarily maintained state of managed motion, and without ongoing correction it decays. Because drift happens. This document distils some…
By Inches — May 3, 2026
I didn’t have a name for it then. I just knew something was off. Not in the obvious way – there was no singular moment, no clean break, nothing you could isolate and point to as the problem. Things didn’t fail loudly. They shifted. Quietly, incrementally, almost politely. A rule bent just this once…
March 2026
What It Actually Costs to Run AI on Your Own Hardware — March 30, 2026
Everyone keeps saying, “just run AI locally.” Let’s put some numbers against that. If you want to run a decent local model – not toy models, but something in the 14B to 70B range – you are stepping into real infrastructure territory. Here’s what that actually looks like today. Option 1 – NVIDIA GPU (performance-first)…
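The excerpt’s 14B-to-70B framing can be made concrete with a back-of-envelope VRAM estimate. This is a sketch only; the bytes-per-parameter and overhead figures are common rules of thumb, not numbers from the post:

```python
def est_vram_gb(params_billion: float, bytes_per_param: float,
                overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold a model's weights plus runtime state.

    bytes_per_param: 2.0 for FP16 weights, ~0.5 for 4-bit quantization.
    overhead: ~20% headroom for KV cache and activations (rule of thumb).
    """
    return params_billion * bytes_per_param * overhead

for size in (14, 70):
    print(f"{size}B: ~{est_vram_gb(size, 2.0):.0f} GB at FP16, "
          f"~{est_vram_gb(size, 0.5):.0f} GB at 4-bit")
```

By this rough estimate, a 14B model fits a single 24 GB consumer GPU only when quantized, while a 70B model at FP16 needs well over 100 GB of VRAM, which is why the post frames this as real infrastructure territory.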
Safer Experimentation With Coding Agents: Why I Built SafeAgent.ca — March 23, 2026
Coding agents are getting more capable, and more people want to try them on real repositories. The problem is simple: curiosity moves faster than caution. A lot of early experimentation happens in the worst possible way. Someone points an agent at a working copy, gives it network access, and hopes for the best. That may…
When Institutions Drift, Systems Emerge — March 4, 2026
Viewed as a system, the story is familiar. An institution encounters a change at the edges. People closest to the activity notice it first. They suggest adjustments. Leadership listens politely but remains anchored to the existing model. Nothing changes. Eventually, the people pushing for change stop pushing. They build something else instead. That pattern played…
February 2026
The Hidden Tax of Agentic Systems: The Token Economics of MCP — February 11, 2026
There is a quiet cost building inside many modern AI architectures. It does not show up in demo environments. It does not appear in proof-of-concepts. It rarely gets mentioned in architecture diagrams. But in production, it becomes unavoidable. I am talking about token overhead – specifically, the growing operational cost of providing Large Language Models…
Adultic AI: Parenting Autonomous Agents for the Real World — February 10, 2026
There is a quiet shift underway in software. For the past decade, we built systems that waited. They waited for input. They waited for permission. They waited for instruction. Now we are building systems that act. Autonomous agents can interpret goals, choose tools, chain actions, and produce outcomes with minimal human intervention. For many builders…
Building for the Break: Why many of today’s public MCP servers are an accident waiting to happen — February 7, 2026
Every technology wave produces its share of impressive demos. We are now seeing that pattern repeat with Model Context Protocol (MCP) servers. Public MCP endpoints are appearing everywhere. Companies are rushing to expose tools, wire up APIs, and signal that they are ready for an agent-driven future. On the surface, this looks like progress. Underneath…
The Era When Building Software Stops Being the Hard Part: The Likely Shape of the Next 5 to 7 Years — February 6, 2026
For most of software history, the defining question was simple: Can we build it? That question shaped everything – team structures, hiring models, funding strategies, even professional identity. Software creation was expensive, slow, and coordination-heavy. Organizations assembled pyramids of talent because they had to. Junior developers produced volume. Senior engineers imposed structure. Architects guided the…
The Morris Worm Moment for Autonomous Agents — February 5, 2026
Why mature systems design always arrives just after the first preventable incident. On the evening of November 2, 1988, a graduate student released a small program onto the early internet. It was not intended to be destructive. It was not designed as a weapon. It was, by most credible accounts, an experiment. Within hours, machines…
January 2026
Agent Readiness Is a Design Discipline — January 16, 2026
Agent readiness is not an AI feature. It is a structural property of a system. A system is agent ready when non-human actors can act through it safely, predictably, and without interpretation. If a human must infer intent, guess behaviour, or recover from ambiguity, the system is not agent ready. The core rule: agents require…