
It's (Still) All About Boundaries

Why boundaries — module structure, clean interfaces, and strict types — are the single most important factor in whether AI-assisted development actually works.

There is one thing that separates a codebase you can move quickly in from one that slowly suffocates your team. It is not the language, the framework, the cloud provider, or the number of tests. It is boundaries.

This has always been true. In the age of AI, it is more true than ever.


Spaghetti

Picture a bowl of spaghetti. Now picture each strand as an artery — sorry, a little gruesome, I know — each one carrying blood, performing some vital function, connected at both ends to something important.

You are a surgeon who needs to operate on one section of this mass. You draw a circle around the area you are working on. Immediately you see a dozen arteries crossing the boundary — disappearing into the tangle in every direction. To understand what your section does, you have to trace each one. You follow an artery out, arrive somewhere else in the system, and now you have to understand what happens there too. Then you follow the next. And the next.

By the time you have traced enough to feel confident, you have accumulated an enormous amount of context. And here is the brutal reality: you cannot hold it all at once. The parts you understood first have already started to fade. You are perpetually chasing a complete picture you can never quite assemble.

And when the system needs to grow? Nobody wants to reach into the tangle. So the path of least resistance is to graft something onto the edge — a new piece bolted on the outside, with arteries spliced into various points in the existing mass. It works. It ships. And the tangle gets a little bigger, a little harder to trace, every time.

This is a codebase without boundaries. This is why engineering teams slow down — not because the developers get worse, but because the system gets bigger and more entangled, and the cognitive cost of making any single change becomes prohibitive.


The Spotlight

Think of your working memory as a spotlight. Wherever you shine it, things brighten — you understand them, you can reason about them. Move it to illuminate a new area and the places you just lit start to dim. You can widen the beam, but only at the cost of less brightness everywhere. This is not a flaw in any particular engineer. It is how human cognition works.

Good software architecture is, at its core, a set of techniques for keeping the required size of the spotlight small.

If the part of the system you are changing has a clean, narrow interface — a small surface area — and the modules around it are equally well-defined, then your spotlight only needs to cover a small area. You understand your module. You understand the interface it exposes. You understand the interfaces it depends on. You do not need to understand the internals of anything else, because the boundaries guarantee those internals cannot surprise you.
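In TypeScript terms, this might look like the following sketch (the invoicing domain and all names are illustrative, not from any real codebase): a module that exposes a deliberately narrow interface while keeping its state and helpers private.

```typescript
// Hypothetical invoicing module with a deliberately narrow surface.
// Consumers see only these two interfaces; everything else is private.
export interface Invoice {
  id: string;
  customerId: string;
  amountCents: number;
  paid: boolean;
}

export interface InvoiceService {
  createInvoice(customerId: string, amountCents: number): Invoice;
  totalOutstanding(customerId: string): number;
}

// Internal state (the array, the ID counter) is never exported,
// so it cannot surprise any caller.
export function makeInvoiceService(): InvoiceService {
  const invoices: Invoice[] = [];
  let nextId = 1;

  return {
    createInvoice(customerId, amountCents) {
      const invoice: Invoice = {
        id: `inv-${nextId++}`,
        customerId,
        amountCents,
        paid: false,
      };
      invoices.push(invoice);
      return invoice;
    },
    totalOutstanding(customerId) {
      return invoices
        .filter((i) => i.customerId === customerId && !i.paid)
        .reduce((sum, i) => sum + i.amountCents, 0);
    },
  };
}
```

A caller, human or AI, only needs the two interface declarations in its spotlight; the storage array and the ID counter stay out of view entirely.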

Return to our arterial tangle: boundaries are the equivalent of a surgeon operating on an organ with clearly defined blood vessels in and out, rather than reaching into a mass of undifferentiated tissue. Same arteries, same vital functions — but now there are only a handful crossing your boundary, and each one has a known purpose.

The smaller and shorter-lived your spotlight, the more essential these boundaries become.


This Is Not New

Bounded modules with strictly defended interfaces are not a new idea. Eric Evans wrote Domain-Driven Design in 2003. Robert Martin has been articulating clean architecture for decades. Clean architecture, layered architecture, and hexagonal architecture all converge on the same insight: keep modules focused, keep interfaces narrow, keep internals hidden.

The industry broadly agrees this is the right approach. And yet many production codebases are the arterial tangle.

Why? Because designing good boundaries is remarkably hard. It requires deep understanding of the domain — knowing where the natural seams are, which concepts belong together, and which should be kept apart. Get the boundaries wrong and you end up with modules that are constantly reaching into each other, or abstractions that fight the grain of the problem. Bad boundaries can be worse than no boundaries: they add indirection without reducing complexity.

And even when boundaries start well, they erode. Every shortcut — reaching into another module’s internals, leaking implementation details through a public interface, adding a dependency because it is convenient — widens the surface area a little. Over time, those erosions compound. The spotlight has to grow. Eventually you are back in the tangle.

This is not a reason to avoid boundaries. It is a reason to treat them as a discipline. Keep modules small and focused. Expose only what needs to be exposed. Treat every public interface as a contract. Enforce it. When you need to make a change that crosses a boundary, reason about interfaces — not internals.
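Enforcement can be mechanical rather than a matter of review vigilance. As one illustration (assuming ESLint and a convention, both hypothetical here, where each module keeps private code under an internal/ directory and exposes its public API from an index file), a lint rule can reject any import that reaches past a module's public entry point:

```typescript
// eslint.config.ts — a sketch, not a complete config. Assumes the
// internal/ directory convention described above.
export default [
  {
    rules: {
      "no-restricted-imports": [
        "error",
        {
          patterns: [
            {
              // Forbid reaching into another module's internals.
              group: ["*/internal/*"],
              message:
                "Import the module's public interface, not its internals.",
            },
          ],
        },
      ],
    },
  },
];
```

With a rule like this, the shortcut that erodes the boundary fails the build instead of quietly landing in the codebase.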

And layer it up. Individual adapters have boundaries. Modules have boundaries. Subsystems have boundaries. At each zoom level, you should be able to understand what you are looking at without descending into the level below. This is how systems remain comprehensible as they grow.


AI Makes This Harder

Now introduce an AI coding agent.

The spotlight problem does not get better. It gets dramatically worse.

An AI agent’s spotlight is smaller than a human’s, and it fades faster. Models have a context window — a hard limit on how much they can hold at once. Beyond a certain size, adding more context does not help; it actively hurts. Quality degrades. The model loses track of constraints from earlier in the prompt and starts making decisions that are locally coherent but globally broken.

Where a human developer’s understanding at least persists across a working day, an AI agent starts every task completely fresh. A developer on a team accumulates context over months and years — the discussions, the failed experiments, the decisions that were reversed. They carry a mental model built from thousands of hours of exposure. An AI agent has none of that. Every time it picks up a task it is the first day on the job, with a smaller spotlight than the engineer it is assisting and one that dims the moment the task ends.

For all the progress in AI models, reasoning well across a large context window is still something they struggle with. Without a fundamental breakthrough, that is unlikely to change soon.

The implication is stark: hand an AI agent a poorly bounded system and ask it to make a change, and you will get code that looks plausible but breaks things in non-obvious ways. The model cannot trace all the arteries. It does not know what it does not know. It will write confident, syntactically correct, tests-pass code that quietly does the wrong thing.


The Solution Is Still Boundaries

The solution to the AI context problem is the same solution that existed before AI. It is not a silver bullet — as we have seen, getting boundaries right is genuinely hard work. But it is the most effective tool we have.

Give an agent a bounded module with a clean interface. The module fits in context. Its dependencies are expressed as interfaces, not implementation details that need to be traced. The agent can understand what it is changing, reason about what will break, and produce correct output.
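Concretely, "dependencies expressed as interfaces" means the module declares the capability it needs rather than importing an implementation. A minimal sketch (all names hypothetical):

```typescript
// The module declares what it needs from the outside world...
interface PaymentGateway {
  charge(customerId: string, amountCents: number): boolean;
}

// ...and its logic depends only on that declaration. An agent editing
// checkout() never needs the gateway's implementation in its context —
// the interface is the whole dependency.
function checkout(
  gateway: PaymentGateway,
  customerId: string,
  amountCents: number
): string {
  return gateway.charge(customerId, amountCents) ? "paid" : "declined";
}

// Any conforming implementation can be supplied — here, a test stub
// that approves charges up to a fixed limit.
const stubGateway: PaymentGateway = {
  charge: (_customerId, amountCents) => amountCents <= 10_000,
};
```

The same substitution works in the other direction: the real gateway can evolve freely behind the interface without `checkout` (or the agent working on it) ever noticing.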

At the system level, the agent can see how the main pieces connect without descending into each one. It can zoom into a subsystem and see what modules it comprises. It can zoom into a specific module and see its external interfaces without needing to understand the internals behind them.

This is the same zoom-level reasoning that makes a well-bounded system comprehensible to a human engineer. It works for the same reasons — and when the spotlight is as small and short-lived as an AI agent’s, boundaries are the difference between a useful result and a confident mistake.


Hard Boundaries: The Type System as Guardrail

There is a second kind of boundary that matters in the AI era, beyond module structure.

Hard, machine-enforced boundaries — your type system, your linter, your compiler — become extraordinarily valuable when AI is writing code. Not because the AI is careless, but because it should not need to hold certain things in its head at all.

A strongly-typed codebase in strict mode tells the agent exactly what shape every value takes, what every function expects and returns, what is allowed and what is not. The agent does not have to guess or infer. It can lean on the type system the way a developer leans on IDE autocomplete — as an authoritative source of truth that does not consume context to remember.

This shrinks the effective spotlight dramatically. The agent can focus on intent and logic because the mechanics are already enforced by the system.
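A small illustrative sketch (the payment domain is hypothetical): strict TypeScript turns "remember the constraint" into "the compiler rejects the code".

```typescript
// A discriminated union: each state carries exactly the data valid for it.
type PaymentState =
  | { kind: "pending" }
  | { kind: "settled"; settledAt: string }
  | { kind: "failed"; reason: string };

// Under strict checks, accessing `reason` on a non-"failed" state is a
// compile-time error, and the switch is checked for exhaustiveness —
// the constraint lives in the tooling, not in anyone's (or any agent's) head.
function describe(state: PaymentState): string {
  switch (state.kind) {
    case "pending":
      return "awaiting settlement";
    case "settled":
      return `settled at ${state.settledAt}`;
    case "failed":
      return `failed: ${state.reason}`;
  }
}
```

If a fourth state is added later, every `switch` like this one stops compiling until it is handled, which is exactly the kind of surprise-prevention an agent with a small spotlight needs.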

Strict types. Enforced linting. Meaningful compile-time errors. In a codebase where AI agents are contributing, these are not hygiene concerns. They are load-bearing guardrails.
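In a TypeScript codebase, much of this is a handful of compiler flags. A representative sketch, not a complete configuration:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,
    "exactOptionalPropertyTypes": true
  }
}
```

Each flag removes a category of mistake that would otherwise have to be held in working memory, which is precisely the memory an agent does not have.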


bounded.dev

This is the founding principle behind everything here.

The promise of AI-assisted development — genuine, sustainable speed — is only reachable in codebases with good structure. Boundaries throughout the system. Strictly defended module interfaces. Hard type and linting enforcement. Structured context that agents can read and reason about.

Without those things, AI makes development faster in the short term and catastrophically messier in the long term. The tangle grows faster. The surface area expands. The system becomes less and less understandable, and the initial acceleration reverses.

With them, you get something genuinely new: a codebase that a human and an AI can navigate together, at multiple levels of abstraction, without either needing to hold the whole system in focus at once. The speed of AI with the quality that good engineering has always produced.

Boundaries are not a constraint on what AI can do. They are what makes AI-assisted development actually work.


This is the first post on bounded.dev. Future posts will go deeper on specific techniques: how to structure context for AI agents, where the monolith/microservices line actually sits, and what a production-ready, AI-optimised project template looks like.