Claude Can Now Control Your Computer. Anthropic Says Trust It — Mostly.
Anthropic shipped computer use for Claude Code and Cowork — mouse, keyboard, browser, files. Plus a new Auto Mode that skips permission prompts. macOS only.

"Giving an admittedly imperfect and 'error-prone' AI tool the ability to explore your computer desktop 'as needed' could ring some justified security alarm bells." That's Ars Technica's Kyle Orland, and he's not wrong. But Anthropic shipped it anyway.
What It Does
Starting March 23, Claude Code and Claude Cowork can point, click, and navigate your screen — mouse, keyboard, browser, and files. No setup required. You can ask Claude to export a PDF and attach it to a calendar invite, start a dev server and screenshot the result, or batch-edit photos with specific dimensions and watermarks. On the developer side, it can make changes in your IDE, run tests, and open a pull request.
The feature also integrates with Dispatch, Anthropic's mobile companion app released a week earlier. Send a task from your phone, and Claude executes it on your computer — as long as the desktop app stays running. Anthropic's pitch: "You can assign Claude a task on your phone, turn your attention to something else, then open up the finished work on your computer."
This is an evolution of Anthropic's computer use API, first previewed in late 2024, now integrated into consumer products. It's available on Claude Pro ($20/month) and Max ($100/month) on macOS, as a research preview.
The official demo — worth watching on YouTube — shows Claude navigating between apps with the kind of deliberate, slightly clumsy precision you'd expect from someone using a computer through a keyhole.
Auto Mode: Less Babysitting, More Risk
The same day, Anthropic announced Auto Mode for Claude Code — a middle ground between asking permission for every action and the existing --dangerously-skip-permissions flag that removes all guardrails.
With Auto Mode enabled, a classifier reviews each tool call before execution. Safe actions proceed automatically. Anything the classifier flags as potentially destructive — mass file deletion, data exfiltration, malicious code execution — gets blocked. If Claude keeps hitting blocks, it eventually escalates to a human prompt.
Auto Mode works with Claude Sonnet 4.6 and Opus 4.6 on Team plans, with Enterprise and API access coming soon. Anthropic recommends using it in "isolated environments" — sandboxed setups separate from production.
The Trust Problem
Anthropic is unusually candid about the limitations. Their blog post acknowledges that training safeguards "aren't perfect" and "aren't absolute," and that "Claude may occasionally act outside these boundaries." When computer use is active, Claude can see anything on your screen — personal data, sensitive documents, private information. Certain apps are off-limits by default, including investment platforms and cryptocurrency tools, but the list isn't comprehensive.
The Auto Mode classifier has its own caveats. TechCrunch's Rebecca Bellan noted that Anthropic "has not detailed the specific criteria the safety classifier uses," and that the classifier "may still allow some risky actions" when intent is ambiguous. It may also block benign actions, creating friction without clear explanation.
For developers who've been following OpenAI's own findings about AI agents deceiving and circumventing restrictions, the timing is pointed. The same week OpenAI published data showing coding agents routinely bypass security controls, Anthropic shipped a tool that gives its agent direct access to your operating system.
The Competitive Context
Anthropic isn't first to market. Perplexity launched Personal Computer weeks earlier. Manus shipped My Computer. NVIDIA has NemoClaw. And Gemini's app control rolled out on Galaxy S26 on March 12. But Claude Code's existing developer traction — the reason OpenAI is scrambling to catch up — gives Anthropic distribution that competitors lack.
The question isn't whether AI should control your computer. That ship has sailed. The question is whether you trust the specific AI doing it — and whether "research preview" is honest labeling or legal cover for shipping something that isn't ready. Anthropic's answer, characteristically, is both transparent and unsatisfying: we know it's imperfect, here it is anyway, proceed with caution.

