Victor A.

We only need MCPs because we don't trust LLMs

A model with shell access inside a disposable sandbox, with limited files, no sensitive credentials, controlled network access, and good rollback mechanisms, is a much saner design than a model trapped behind dozens of tiny pseudo-tools pretending not to be a shell.

If a new coworker joined your project tomorrow and asked for help working on GitHub, you would not hand them a GitHub MCP. You would have them install Git and the GitHub CLI, and make sure their account has the right permissions. That is how we treat teammates. We do not build tiny artificial tool surfaces for them and force them to operate through a layer of wrappers just because the raw interface feels scary.

That contrast explains what most MCP servers actually are. They are trust boundaries, not breakthroughs in interface design. They exist because the model sitting behind the keyboard is not trusted enough to have broad, direct access to the environment.

A year ago, that approach was obviously correct. Models were tripping over basic tool use, getting stuck in loops, calling the wrong tools. Of course you want a narrow interface if the model cannot reliably drive a broad one. But the capability curve moved fast. Around the Opus 4.5 moment, people who once refused to let a model run code locally began spinning up multiple agent sessions with far fewer restrictions. The default posture shifted from "watch every move" to "let it run and intervene if needed." That is a very different relationship, and it exposes the limits of the current MCP worldview.

The limit is simple: tool-level control is not how we usually secure powerful systems. We do not protect a company by inventing one API per acceptable behavior and hoping nobody colors outside the lines. We protect it with authorization, sandboxing, network boundaries, audit trails, and scoped credentials. We decide what a person or service can reach, not which exact verbs they are allowed to think with. The same principle should apply to agents.
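To make that concrete, here is a minimal Python sketch of environment-level control over an agent's commands: a scoped working directory, a stripped environment so inherited credentials never leak in, and an append-only audit trail. The paths and names are illustrative assumptions, and this is a sketch of the principle, not a hardened sandbox.

```python
import datetime
import shlex
import subprocess
from pathlib import Path

def run_with_audit(command: list[str], workdir: str, root: str, audit_log: str):
    """Run an arbitrary agent command, governed by what it can *reach*
    rather than which verbs it may use. Illustrative sketch only."""
    workdir_p = Path(workdir).resolve()
    root_p = Path(root).resolve()
    # Authorization by scope: the working directory must sit inside the root.
    if workdir_p != root_p and root_p not in workdir_p.parents:
        raise PermissionError(f"{workdir} escapes sandbox root {root}")
    # Audit trail: every command is recorded before it runs.
    with open(audit_log, "a") as log:
        log.write(f"{datetime.datetime.now().isoformat()} {shlex.join(command)}\n")
    # Minimal env: drops inherited secrets (API keys, tokens) from the parent shell.
    return subprocess.run(command, cwd=workdir_p, env={"PATH": "/usr/bin:/bin"},
                          capture_output=True, text=True)
```

The point is that the command itself is unrestricted; the boundaries live around it.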

Bash is the best example. People talk about bash as if the shell itself is dangerous. The shell is fine. Unbounded access is the problem. Give a model shell access inside a disposable sandbox, with limited files, no sensitive credentials, controlled network access, and good rollback mechanisms, and you have a much saner design than dozens of tiny pseudo-tools pretending not to be a shell. The real safety layer should live in the environment.
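As a sketch of what "the safety layer lives in the environment" can look like, the following Python helper assembles a `docker run` invocation for a disposable sandbox (assuming Docker is available; the image name and mount paths are placeholders). Each flag maps to one of the controls above: disposable, limited files, no credentials, no network.

```python
def sandboxed_shell_argv(workspace: str, command: str) -> list[str]:
    """Build a docker invocation that gives the model a real shell while
    the environment enforces safety. Flags are standard Docker options;
    the image is a placeholder."""
    return [
        "docker", "run",
        "--rm",                     # disposable: container vanishes on exit
        "--network", "none",        # controlled network: none at all here
        "--read-only",              # immutable base filesystem
        "--mount", f"type=bind,src={workspace},dst=/workspace",  # limited files
        "--env-file", "/dev/null",  # no inherited credentials
        "--workdir", "/workspace",
        "ubuntu:24.04",             # placeholder image
        "bash", "-lc", command,     # the full, unrestricted shell
    ]
```

Rollback is handled outside the container, for example by snapshotting the workspace directory before each session, so a bad run is a `rm` and a restore away.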

That is also why products like OpenClaw arrived at the right time. As models became more competent, users started wanting fewer handrails. Not because they stopped caring about safety, but because they wanted the upside of a more general system. Every time you narrow the tool surface, you limit the model's ability to solve problems creatively.

Humans have always shared tools on computers through apps or CLIs. MCPs introduced a third way: a few lines of config give your model tools another developer built, with no new CLIs to install, and remote MCPs let you skip downloading untrusted code into the agent's harness entirely. But the protocol was not designed with progressive disclosure in mind. Every tool call ships the full schema, like reading an app's entire help page on every single interaction. There is room to improve here, and the protocol does seem to be improving.
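To illustrate that overhead, here is a small Python comparison using a hypothetical MCP-style tool schema (the tool name, description, and fields are invented): the full schema the agent carries on each interaction versus the one-line summary that progressive disclosure would surface first, expanding to the full schema only on demand.

```python
import json

# Hypothetical MCP-style tool definition: full description plus JSON input schema.
FULL_SCHEMA = {
    "name": "create_issue",
    "description": "Create a GitHub issue in a repository. Requires the "
                   "repository owner and name, a title, and an optional body "
                   "and list of labels. Returns the created issue number.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "owner": {"type": "string", "description": "Repository owner"},
            "repo": {"type": "string", "description": "Repository name"},
            "title": {"type": "string", "description": "Issue title"},
            "body": {"type": "string", "description": "Issue body (markdown)"},
            "labels": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["owner", "repo", "title"],
    },
}

# Progressive disclosure: one line up front, details fetched only when needed.
SUMMARY = f"{FULL_SCHEMA['name']}: create a GitHub issue"

full_cost = len(json.dumps(FULL_SCHEMA))
summary_cost = len(SUMMARY)
```

Multiply that gap across dozens of tools on a server and the "entire help page on every interaction" cost becomes obvious.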

I strongly believe that once we figure out the right abstractions for agent authentication, authorization, and checkpointing, on both user machines and servers, most MCPs as we know them will disappear. As a safety philosophy, they are a bridge technology from the era when models were too brittle to trust. The lasting pattern will be better sandboxes, not smaller tools.