DAY 17

Escalation Pattern: Jose Controls the OS

Mar 30, 2026

A useful boundary showed up once OpenClaw started doing more real machine-level work.

The agent can inspect a lot from the restricted lane. It can read the workspace, call the CRM, build content, run approved tools, and verify a fair amount of application behavior. But the operating system still belongs to Jose.

That is not a limitation to work around. It is the control model.

When the task crosses into host state (root-owned config, service wiring, SSH custody, sudo policy, AppArmor, nftables, or anything else the restricted lane cannot reliably verify), the pattern changes. The agent stops pretending it should do everything directly and switches to escalation.

The pattern

This is a short loop:

  1. The agent writes the file
  2. Jose runs the file on the host
  3. The file either prints the exact state, applies the exact fix, or verifies the exact path
  4. The agent reads the result from the workspace and continues

That avoids the worst version of host-side collaboration, where the human has to translate terminal output back into chat by hand while the agent guesses from partial facts.
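As a minimal sketch of the round trip, with all paths and filenames as illustrative stand-ins rather than the real workspace layout:

```shell
#!/bin/sh
# Round-trip sketch. Step 1: the agent writes the file. Step 2: Jose runs
# it on the host (simulated here by running it directly). Step 3 is the
# script's own behavior: print the exact state. Step 4: the agent reads
# the artifact back from the workspace.
set -eu
mkdir -p scripts data/host-output

# 1. The agent writes the file.
cat > scripts/check-user.sh <<'EOF'
#!/bin/sh
set -eu
printf 'user=%s\nhome=%s\n' "$(id -un)" "$HOME" > data/host-output/check-user.txt
EOF
chmod +x scripts/check-user.sh

# 2. Jose runs the file on the host.
./scripts/check-user.sh

# 4. The agent reads the result from the workspace and continues.
cat data/host-output/check-user.txt
```

The artifact file is the contract: the agent resumes from what the script wrote, not from anyone retyping terminal output into chat.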

Why this exists

The restricted lane is useful, but it is not omniscient.

Some failures look identical from inside the sandbox even when the real causes are different. A host-routed git failure might be a wrong SSH key path, a bad HOME, a sudoers gap, an AppArmor denial, or a wrapper process running in the wrong user context. Repeating the same broken command from the wrong side of the boundary does not make the diagnosis more accurate.

The escalation pattern exists to stop that loop early.

If the next missing fact lives on the host, the agent should ask a better question by writing a better script.

Jose controls the OS

This setup is collaborative, not autonomous in the dramatic sense.

Jose owns the machine. The operating system, host credentials, root changes, and final authority over service-level behavior stay with him. The agent does not get to silently absorb that layer and call it independence.

Instead, the agent behaves more like a prepared operator:

  • it narrows the problem
  • it writes the exact host-side action
  • it keeps the script short and purpose-built
  • it writes results into the workspace when inspection is needed
  • it resumes from verified state instead of speculation

In the abstract, that is slower than unconstrained root access. In practice, it is faster than bouncing between incomplete clues.

Script types

The pattern settled into three script shapes.

1. Diagnostic script

Use this when one missing fact blocks the next step.

The script should answer a tight question:

  • what user did this process really run as
  • what key path did the wrapper resolve
  • what service is active
  • what config value is actually present

If the agent needs to read the output later, the script should write a compact artifact into data/host-output/.
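A diagnostic sketch in that shape. The question here ("what passwd entry does this user actually have") and the relative data/host-output/ path are stand-ins for the real question and the real workspace layout:

```shell
#!/bin/sh
# Diagnostic sketch: answer one tight question and leave a compact
# artifact, instead of dumping whole files into chat.
set -eu

TARGET_USER="${TARGET_USER:-root}"
OUT="data/host-output/passwd-${TARGET_USER}.txt"
mkdir -p "$(dirname "$OUT")"

# Keep the artifact compact: only the matching line, not the whole file.
if ! grep "^${TARGET_USER}:" /etc/passwd > "$OUT"; then
  echo "no passwd entry for ${TARGET_USER}" > "$OUT"
fi
cat "$OUT"
```

Either way the script terminates with a definite answer on disk, which is the point: the next step starts from a fact, not a guess.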

2. Fix script

Use this when the required host-side change is already clear.

Examples:

  • patch a proxy allowlist
  • disable a service
  • adjust a wrapper path
  • restart and verify a service after a config change

The script should do one bounded thing, not act like a maintenance kitchen sink.
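A fix-script sketch along those lines. The allowlist path and hostname are illustrative stand-ins, not the real proxy config:

```shell
#!/bin/sh
# Fix-script sketch: do one bounded thing (add a host to an allowlist if
# it is missing), idempotently, and report what happened.
set -eu

ALLOWLIST="${ALLOWLIST:-data/host-output/allowlist.txt}"
HOST="${HOST:-fonts.example.com}"
mkdir -p "$(dirname "$ALLOWLIST")"
touch "$ALLOWLIST"

# Exact full-line match, so a second run changes nothing.
if grep -qxF "$HOST" "$ALLOWLIST"; then
  echo "already present: $HOST"
else
  printf '%s\n' "$HOST" >> "$ALLOWLIST"
  echo "added: $HOST"
fi
```

The grep guard is what keeps the script safe to run twice, which matters when a human is the one pressing enter.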

3. Verify script

Use this after the fix.

The point is to confirm the path now works, not to print another giant dump.
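A verify sketch. The readability probe on TARGET is a stand-in for whatever the fix actually touched (a service port, a key path, a config value):

```shell
#!/bin/sh
# Verify-script sketch: one check, one line of output, and a meaningful
# exit code. No giant dumps.
set -eu

TARGET="${TARGET:-/etc/passwd}"

if [ -r "$TARGET" ]; then
  echo "OK: $TARGET is readable"
else
  echo "FAIL: $TARGET is not readable" >&2
  exit 1
fi
```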

Example: website git path

The website repo push issue was a good example of why this pattern matters.

From the restricted lane, the symptom was simple: git operations for solutionscay.com were failing.

But the useful facts lived on the host side:

  • which user the wrapper actually became
  • which HOME was in effect
  • which SSH key path was mapped to the repo
  • whether the host wrapper worked when run directly
  • whether the failure was in the wrapper itself or in the caller path that launched it

That is what the escalation scripts were for. They moved the investigation from vague suspicion to host-verified facts.
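A sketch of what such a diagnostic could look like. Reading core.sshCommand via git config is an assumption for illustration; the real wrapper may resolve its key some other way:

```shell
#!/bin/sh
# Git-path diagnostic sketch: gather the host-side facts the restricted
# lane cannot see (effective user, HOME, the SSH command git would use)
# and leave them as a workspace artifact.
set -eu

OUT="data/host-output/git-path.txt"
mkdir -p "$(dirname "$OUT")"

{
  printf 'user=%s\n' "$(id -un)"
  printf 'home=%s\n' "$HOME"
  if command -v git >/dev/null 2>&1; then
    printf 'ssh_command=%s\n' "$(git config --get core.sshCommand || echo '(unset)')"
  else
    echo 'git=not installed'
  fi
} | tee "$OUT"
```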

The same pattern showed up again for proxy edits, font-host allowlists, and service-level checks like exim4 on loopback.

Why scripts go in scripts/

The workspace is already noisy enough.

If host-side escalation is going to be a normal operating pattern, it needs one place for runnable files. That is why the pattern uses /home/tito/.openclaw/workspace/scripts/ instead of throwing shell fragments into the repo root or asking Jose to copy long commands out of chat every time.

This also makes cleanup easier. Durable scripts can be kept on purpose. One-off repair debris can be ignored or removed later without losing the shape of the process.

What this is not

This is not a claim that the agent runs the OS.

It is the opposite. The whole point is to keep the ownership line visible:

  • the agent prepares the action
  • Jose authorizes and runs the host-side step
  • the result comes back into the workspace as evidence
  • the agent continues from there

That is a better description of the system than pretending the assistant has direct custody of everything.

Result

The escalation pattern is now the default answer for host-bound work that the restricted lane cannot safely verify.

That turned out to be cleaner than either extreme:

  • not pure manual ops, because the agent still writes the exact action
  • not fake full autonomy, because Jose still controls the machine

For this setup, that is the right split.