Model companies can package orchestration, memory, tool use, and managed execution however they want, but the harness that actually matters in practice is still the applied layer around the model: the workflow logic, domain judgment, failure handling, evaluation, permissions, integrations, operator taste, the way work gets made reliable under real scrutiny, and the company that actually owns the outcome. That does not magically disappear because a model company wrapped tools around the model and gave it cleaner language. That layer is still where most of the hard and durable work lives, and where the moat is. It is still what applied AI companies build. So when Anthropic frames the model as the brain and the rest as some attachable set of hands, it feels reductionist to the point of silliness. Maybe that framing works for a launch narrative. I do not think it accurately describes where the moat is.
That's a very fair comment and I agree with you! I see very few companies doing a great job of building the "actual" harness components you refer to here, though, and feel like they need to wake up and realize what's happening to them. Thoughts?
Yes, in dev tools this is very real. Vercel, Supabase, Resend, and similar products benefit directly from becoming the default layer agents reach for in a workflow, and that is clearly a race right now. In engineering, where operators are granting agents more and more autonomy, building for the agent experience makes a lot of sense.
Outside of dev tools, and especially in marketing, the weight still sits much more with humans. So there the need is more for observability, checkpoints, audits, and evals around the work: much more control and nuance in shaping how these agents operate.
Agree. I'm not theorizing that GUIs or HITL workflows go away, more that if you build those to have more "capability," then move on to the next thing without prioritizing agent parity, you might soon have a structural disadvantage.
Absolutely agree, that feels like the crux. A company can look busy shipping, yet still fall behind if it misses the new ergonomics taking shape.