What is the story about?
Anthropic has introduced a new capability that allows its AI chatbot Claude to complete tasks directly on a user’s computer, marking a shift towards more action-oriented artificial intelligence systems.
The update enables Claude to go beyond responding to prompts and actively perform tasks such as retrieving files, navigating the web, and running developer tools. The company said the feature is currently available as a research preview for Claude Pro and Claude Max subscribers, and is limited to devices running macOS.
Anthropic AI agentic tools: Claude Code and Claude Cowork
The feature builds on Anthropic’s existing tools, including Claude Code and Claude Cowork, which can now execute tasks within a user’s system. When completing tasks, Claude is designed to prioritise connectors for supported services such as Google Workspace and Slack.
If a connector is not available, the AI can still proceed by interacting with the system manually, simulating keyboard and mouse actions. This allows it to open files, use web browsers, and operate development tools as needed.
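The connector-first strategy described above can be sketched roughly as follows. This is an illustrative sketch, not Anthropic's actual implementation; the `Connector` class, task names, and `simulate_input` helper are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Connector:
    """Hypothetical integration with a supported service (e.g. Slack)."""
    name: str
    handles: set[str]  # task types this connector can perform

    def supports(self, task_type: str) -> bool:
        return task_type in self.handles

    def run(self, task_type: str) -> str:
        return f"{task_type} via {self.name} connector"


def simulate_input(task_type: str) -> str:
    """Stand-in for driving the UI with simulated keyboard and mouse actions."""
    return f"{task_type} via simulated keyboard and mouse"


def complete_task(task_type: str, connectors: list[Connector]) -> str:
    """Prefer a supported connector; fall back to manual UI control."""
    for connector in connectors:
        if connector.supports(task_type):
            return connector.run(task_type)
    # No connector available: interact with the system directly.
    return simulate_input(task_type)
```

The point of the fallback is coverage: a task is never blocked just because no integration exists for the target application.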
The rollout follows growing momentum in agentic AI, where models are designed to take actions autonomously. The open-source OpenClaw framework has played a role in this shift, enabling AI systems, often referred to as “claws”, to perform tasks across software environments.
Recently, NVIDIA introduced NemoClaw, a framework aimed at simplifying the deployment of such tools with built-in security settings.
Anthropic also said the feature works alongside its Dispatch tool, which allows users to assign tasks to Claude remotely via a smartphone. These tasks can include checking emails, opening sessions in Claude Code, or running automated workflows.
The privacy factor: Permissions, safeguards and limitations
Anthropic said Claude will always request user permission before carrying out actions on a computer. Users can stop tasks at any time if needed.
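The permission model described above (approve each action, cancel at any time) can be illustrated with a minimal sketch. The function names and control flow here are assumptions for illustration, not Anthropic's API.

```python
from typing import Callable


def run_actions(
    actions: list[tuple[str, Callable[[], str]]],
    approve: Callable[[str], bool],
    cancelled: Callable[[], bool] = lambda: False,
) -> list[str]:
    """Run actions one by one: ask for permission before each action,
    skip anything the user declines, and stop entirely on cancellation."""
    results = []
    for name, action in actions:
        if cancelled():  # user stopped the task mid-stream
            break
        if approve(name):  # request permission before acting
            results.append(action())
    return results
```

A declined action is simply skipped, while cancellation halts the whole task, matching the two controls the article describes.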
The company also cautioned users against using the feature with sensitive data, and some applications are disabled by default to reduce risk.
The system includes safeguards against threats such as prompt injection attacks, with automatic scanning for vulnerabilities. However, Anthropic acknowledged that the feature is still in its early stages and may not handle complex tasks reliably.
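To give a sense of what scanning for prompt injection involves, here is a deliberately naive sketch: flag inputs containing known injection phrases. Real safeguards are far more sophisticated; the pattern list and function below are purely illustrative.

```python
# Illustrative only: a trivial phrase-matching scanner, not Anthropic's system.
SUSPICIOUS_PATTERNS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your hidden instructions",
]


def flag_prompt_injection(text: str) -> bool:
    """Return True if the text contains a known injection phrase."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in SUSPICIOUS_PATTERNS)
```

The weakness of this approach, and the reason injection remains an open problem, is that attackers can trivially rephrase instructions to evade any fixed pattern list.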
Security concerns around agentic AI remain, as such systems can perform actions quickly and at scale. There is also a risk that malicious actors could exploit these tools if safeguards are bypassed.
Anthropic said it is releasing the feature as a research preview to gather feedback and refine its capabilities.