AI Agents & Security
AI Agents Can Now Control Your Desktop. Here Is How to Use Them Safely
From OpenClaw to Claude Cowork, AI agents can now click, type and browse your computer for you. Here is a look at what these tools do, the security challenges they bring, and how to deploy them safely in your business.


Curated by Matt Perry
CTO
The new wave of AI that does things for you
Something big shifted in early 2026. AI stopped being a tool you talk to and became a tool that acts for you.
Within the space of a few months, every major AI company launched some form of "computer use" agent. These are AI systems that can see your screen, move your mouse, click buttons, type text and browse the web. They do not just suggest what you should do. They go ahead and do it.
OpenClaw, the open-source agent framework, hit 135,000 GitHub stars. Perplexity launched Computer, a digital worker that runs entire projects across 19 AI models. Anthropic released Claude Cowork with full Mac desktop control. OpenAI continued expanding its Operator and Atlas browser agents.
This is not a niche developer tool any more. These agents are aimed at everyday users. And that changes everything.
What these agents actually do
Let us break down the main players.
OpenClaw
OpenClaw is an open-source AI agent that runs on your own machine. Think of it as a general-purpose assistant that can interact with your computer's files, applications and web browser. Because it is open source, anyone can extend it with "skills", which are plugins that add new capabilities.
It launched in late 2025 under the name Clawdbot (after several name changes) and quickly became one of the most popular AI projects on GitHub. Its appeal is clear: you host it yourself, you control what it can access, and you do not pay a monthly subscription.
Perplexity Computer
Perplexity Computer launched on 25 February 2026. Unlike a chatbot that answers questions, Computer is a full autonomous worker. You give it a goal, and it breaks the task into steps, assigns each step to the best AI model from a pool of 19, and runs until the job is finished.
It operates in a cloud sandbox with access to a real filesystem, a browser and hundreds of app connectors. It is available to Perplexity Max subscribers at $200 per month.
Perplexity also offers Comet, an AI-powered browser that can shop, book and browse on your behalf. It is one of several tools racing to bring AI-powered browsing to the mainstream.
Claude Cowork and Dispatch
On 23 March 2026, Anthropic gave Claude the ability to control your Mac. Through Claude Cowork, the AI can move the mouse, use the keyboard, navigate applications and complete tasks while you step away from your computer.
Dispatch is the companion feature. It lets you send instructions to Claude from your phone, so it can carry on working on your desktop while you are out of the office.
These features are available to Claude Pro and Max subscribers on macOS. Windows and Linux support is not yet available.
OpenAI Operator and Atlas
OpenAI's Operator is a browser-based agent that can navigate websites, fill in forms and complete multi-step tasks on your behalf. Atlas, their newer browser agent built into ChatGPT, takes this further by autonomously browsing the web to complete research and tasks.
OpenAI has been transparent about the challenges, publishing research acknowledging that prompt injection, where malicious instructions hidden in web pages can trick an AI into doing something unintended, remains an open problem across the entire industry.
The security challenges the industry is working through
These tools are impressive, but they are still early. As with any new category of software, there are security challenges that the industry is actively working to solve.
Prompt injection is the biggest challenge
Prompt injection is currently the most discussed security concern in the AI agent space. It topped OWASP's 2025 Top 10 for LLM Applications, and every major AI company has acknowledged it as a challenge.
The attack works like this: an attacker hides instructions in a document, email or webpage that the AI reads. The AI may follow those hidden instructions instead of yours. For example, a document could contain invisible text telling the agent to send files to an external address.
All of the major AI companies are investing heavily in mitigating this risk. OpenAI has published detailed research on adversarial training to harden their agents. Anthropic recommends running computer use agents in sandboxed environments. The industry is making progress, but it is an ongoing challenge rather than a solved problem.
Supply chain risks in open-source ecosystems
Open-source agent frameworks face the same supply chain risks as any open-source ecosystem. When anyone can publish a plugin or skill, some bad actors will try to distribute malicious code. OpenClaw's skill registry, ClawHub, experienced this firsthand when security researchers identified a significant number of malicious packages in the ecosystem.
This is not unique to AI. The npm, PyPI and other package registries have faced similar challenges for years. But AI agents often request broader system permissions than traditional software, which makes the consequences of installing a malicious package more serious.
Credential handling needs careful thought
When AI agents act on your behalf, they sometimes need access to your accounts. This raises important questions about how credentials are stored, transmitted and protected. Several high-profile cases have highlighted the tension between making agents useful (they need access to things) and keeping them secure (access should be tightly controlled).
The industry is working on standards for secure credential handling in agentic systems, but best practices are still emerging.
The core challenge: balancing access with security
The fundamental tension with computer-use agents is this: they need access to be useful, but access creates risk.
A chatbot in a browser tab can only respond with text. A computer-use agent can execute code, read your files, send emails, make purchases and install software. That power is what makes these tools so compelling, but it also means that security needs to be built into the deployment from the start.
This is not a reason to avoid these tools. It is a reason to deploy them thoughtfully.
How to deploy AI agents safely
The good news is that the security community has been working on this problem, and practical solutions exist.
Run agents in sandboxed environments
The single most important step is to run AI agents in isolated environments rather than directly on your primary machine. There are several good options:
Docker containers provide a solid level of isolation. The agent runs inside a container with only the files and network access you explicitly grant. Docker has released purpose-built "AI Sandboxes" that isolate agents in microVMs, each with its own Docker daemon.
MicroVMs (like AWS Firecracker) offer stronger isolation. They boot in around 125 milliseconds with less than 5MB of memory overhead, creating hardware-enforced boundaries. This is the same technology AWS Lambda uses under the hood.
gVisor provides a middle ground. It implements a user-space kernel that intercepts system calls before they reach the host kernel, reducing the attack surface with around 10 to 30% overhead on I/O-heavy workloads.
Host agents in the cloud
Rather than running agents on your local machine, consider running them on a dedicated cloud instance on AWS, Azure or Google Cloud. This gives you:
- Complete network isolation from your local machine
- The ability to snapshot and roll back the environment
- Detailed logging of everything the agent does
- Automatic cleanup when the session ends
A small virtual machine on Azure or AWS costs a few pounds per hour and gives you a disposable environment where an AI agent can work without putting your personal files at risk.
Apply the principle of least privilege
Only grant agents access to the specific files, folders and applications they need. If an agent needs to edit a spreadsheet, give it access to that spreadsheet, not your entire Documents folder.
Monitor what agents do
Keep an eye on network traffic and file system changes. Good agent deployments include logging and alerting so you can see exactly what is happening and catch anything unexpected early.
How Original Objective can help
This is exactly the kind of challenge we help businesses navigate at Original Objective.
We work with organisations to design and deploy AI agent systems that are secure by default. That means:
Sandboxed deployment: We set up containerised environments on AWS or Azure where AI agents operate in isolation. Your agents get the access they need to do their job, nothing more.
Custom agent architecture: Rather than using off-the-shelf agents with broad permissions, we build tailored solutions where every capability is explicitly defined and auditable.
Security-first integration: We connect AI agents to your existing tools and data sources through secure APIs with proper authentication, rate limiting and access controls. No credential sharing, no broad filesystem access.
Monitoring and oversight: We implement logging and alerting so you can see exactly what your AI agents are doing and catch unusual behaviour before it becomes a problem.
If you are excited about what AI agents can do but want to make sure you deploy them safely, we can help. The technology is genuinely powerful. Getting the architecture right from the start means you can move fast without taking unnecessary risks.
What happens next
Here are our predictions for where this is heading.
Desktop AI agents will become standard within 18 months. By late 2027, having an AI agent that can operate your computer will feel as normal as having a spell checker. Every major operating system will have native support.
Security will mature quickly. The AI industry is investing heavily in agent security. We expect to see significant improvements in sandboxing, permission models and prompt injection defences over the next 12 months. Early challenges will drive better solutions.
Regulation will catch up. The EU AI Act already covers some of these scenarios, and we expect more specific guidance around AI agent permissions and liability within two to three years. Forward-thinking businesses will get ahead of this by building good practices now.
Sandboxing will become the default. Just as we would never run untrusted code directly on a production server, running AI agents in isolated environments will become standard practice. Docker, AWS and Azure are all investing heavily in this space.
The agents that win will be the ones that earn trust. In a market full of powerful AI tools, the companies that invest in transparency, security auditing and user control will come out ahead. Convenience matters, but trust matters more.
The bottom line
AI agents that can control your computer are here. They are genuinely useful, and they are going to get much better very quickly. The security landscape is maturing alongside the technology, and businesses that take a thoughtful approach to deployment will be well positioned to benefit.
If you are thinking about deploying AI agents in your business, now is a great time to start. Just make sure you do it with the right architecture in place, proper isolation, clear permissions, and people who understand the technology.
The future of work almost certainly involves AI agents doing things on your behalf. With the right approach, that future is exciting rather than worrying.
Get expert help with your AI agent strategy
If you want to use AI agents safely in your business, our AI engineering team can help you design secure, sandboxed deployments that deliver real results without putting your data at risk. Book a free intro call and we will walk you through what is achievable for your budget and goals.
More in AI Industry Trends
View allReady to put AI to work in your business?
Book a free 30-minute discovery call. We will discuss your goals, identify quick wins, and outline a practical plan to get started.
Book a discovery call
Curated by Matt Perry
CTO
The new wave of AI that does things for you
Something big shifted in early 2026. AI stopped being a tool you talk to and became a tool that acts for you.
Within the space of a few months, every major AI company launched some form of "computer use" agent. These are AI systems that can see your screen, move your mouse, click buttons, type text and browse the web. They do not just suggest what you should do. They go ahead and do it.
OpenClaw, the open-source agent framework, hit 135,000 GitHub stars. Perplexity launched Computer, a digital worker that runs entire projects across 19 AI models. Anthropic released Claude Cowork with full Mac desktop control. OpenAI continued expanding its Operator and Atlas browser agents.
This is not a niche developer tool any more. These agents are aimed at everyday users. And that changes everything.
What these agents actually do
Let us break down the main players.
OpenClaw
OpenClaw is an open-source AI agent that runs on your own machine. Think of it as a general-purpose assistant that can interact with your computer's files, applications and web browser. Because it is open source, anyone can extend it with "skills", which are plugins that add new capabilities.
It launched in late 2025 under the name Clawdbot (after several name changes) and quickly became one of the most popular AI projects on GitHub. Its appeal is clear: you host it yourself, you control what it can access, and you do not pay a monthly subscription.
Perplexity Computer
Perplexity Computer launched on 25 February 2026. Unlike a chatbot that answers questions, Computer is a full autonomous worker. You give it a goal, and it breaks the task into steps, assigns each step to the best AI model from a pool of 19, and runs until the job is finished.
It operates in a cloud sandbox with access to a real filesystem, a browser and hundreds of app connectors. It is available to Perplexity Max subscribers at $200 per month.
Perplexity also offers Comet, an AI-powered browser that can shop, book and browse on your behalf. It is one of several tools racing to bring AI-powered browsing to the mainstream.
Claude Cowork and Dispatch
On 23 March 2026, Anthropic gave Claude the ability to control your Mac. Through Claude Cowork, the AI can move the mouse, use the keyboard, navigate applications and complete tasks while you step away from your computer.
Dispatch is the companion feature. It lets you send instructions to Claude from your phone, so it can carry on working on your desktop while you are out of the office.
These features are available to Claude Pro and Max subscribers on macOS. Windows and Linux support is not yet available.
OpenAI Operator and Atlas
OpenAI's Operator is a browser-based agent that can navigate websites, fill in forms and complete multi-step tasks on your behalf. Atlas, their newer browser agent built into ChatGPT, takes this further by autonomously browsing the web to complete research and tasks.
OpenAI has been transparent about the challenges, publishing research acknowledging that prompt injection, where malicious instructions hidden in web pages can trick an AI into doing something unintended, remains an open problem across the entire industry.
The security challenges the industry is working through
These tools are impressive, but they are still early. As with any new category of software, there are security challenges that the industry is actively working to solve.
Prompt injection is the biggest challenge
Prompt injection is currently the most discussed security concern in the AI agent space. It topped OWASP's 2025 Top 10 for LLM Applications, and every major AI company has acknowledged it as a challenge.
The attack works like this: an attacker hides instructions in a document, email or webpage that the AI reads. The AI may follow those hidden instructions instead of yours. For example, a document could contain invisible text telling the agent to send files to an external address.
All of the major AI companies are investing heavily in mitigating this risk. OpenAI has published detailed research on adversarial training to harden their agents. Anthropic recommends running computer use agents in sandboxed environments. The industry is making progress, but it is an ongoing challenge rather than a solved problem.
Supply chain risks in open-source ecosystems
Open-source agent frameworks face the same supply chain risks as any open-source ecosystem. When anyone can publish a plugin or skill, some bad actors will try to distribute malicious code. OpenClaw's skill registry, ClawHub, experienced this firsthand when security researchers identified a significant number of malicious packages in the ecosystem.
This is not unique to AI. The npm, PyPI and other package registries have faced similar challenges for years. But AI agents often request broader system permissions than traditional software, which makes the consequences of installing a malicious package more serious.
Credential handling needs careful thought
When AI agents act on your behalf, they sometimes need access to your accounts. This raises important questions about how credentials are stored, transmitted and protected. Several high-profile cases have highlighted the tension between making agents useful (they need access to things) and keeping them secure (access should be tightly controlled).
The industry is working on standards for secure credential handling in agentic systems, but best practices are still emerging.
The core challenge: balancing access with security
The fundamental tension with computer-use agents is this: they need access to be useful, but access creates risk.
A chatbot in a browser tab can only respond with text. A computer-use agent can execute code, read your files, send emails, make purchases and install software. That power is what makes these tools so compelling, but it also means that security needs to be built into the deployment from the start.
This is not a reason to avoid these tools. It is a reason to deploy them thoughtfully.
How to deploy AI agents safely
The good news is that the security community has been working on this problem, and practical solutions exist.
Run agents in sandboxed environments
The single most important step is to run AI agents in isolated environments rather than directly on your primary machine. There are several good options:
Docker containers provide a solid level of isolation. The agent runs inside a container with only the files and network access you explicitly grant. Docker has released purpose-built "AI Sandboxes" that isolate agents in microVMs, each with its own Docker daemon.
MicroVMs (like AWS Firecracker) offer stronger isolation. They boot in around 125 milliseconds with less than 5MB of memory overhead, creating hardware-enforced boundaries. This is the same technology AWS Lambda uses under the hood.
gVisor provides a middle ground. It implements a user-space kernel that intercepts system calls before they reach the host kernel, reducing the attack surface with around 10 to 30% overhead on I/O-heavy workloads.
Host agents in the cloud
Rather than running agents on your local machine, consider running them on a dedicated cloud instance on AWS, Azure or Google Cloud. This gives you:
- Complete network isolation from your local machine
- The ability to snapshot and roll back the environment
- Detailed logging of everything the agent does
- Automatic cleanup when the session ends
A small virtual machine on Azure or AWS costs a few pounds per hour and gives you a disposable environment where an AI agent can work without putting your personal files at risk.
Apply the principle of least privilege
Only grant agents access to the specific files, folders and applications they need. If an agent needs to edit a spreadsheet, give it access to that spreadsheet, not your entire Documents folder.
Monitor what agents do
Keep an eye on network traffic and file system changes. Good agent deployments include logging and alerting so you can see exactly what is happening and catch anything unexpected early.
How Original Objective can help
This is exactly the kind of challenge we help businesses navigate at Original Objective.
We work with organisations to design and deploy AI agent systems that are secure by default. That means:
Sandboxed deployment: We set up containerised environments on AWS or Azure where AI agents operate in isolation. Your agents get the access they need to do their job, nothing more.
Custom agent architecture: Rather than using off-the-shelf agents with broad permissions, we build tailored solutions where every capability is explicitly defined and auditable.
Security-first integration: We connect AI agents to your existing tools and data sources through secure APIs with proper authentication, rate limiting and access controls. No credential sharing, no broad filesystem access.
Monitoring and oversight: We implement logging and alerting so you can see exactly what your AI agents are doing and catch unusual behaviour before it becomes a problem.
If you are excited about what AI agents can do but want to make sure you deploy them safely, we can help. The technology is genuinely powerful. Getting the architecture right from the start means you can move fast without taking unnecessary risks.
What happens next
Here are our predictions for where this is heading.
Desktop AI agents will become standard within 18 months. By late 2027, having an AI agent that can operate your computer will feel as normal as having a spell checker. Every major operating system will have native support.
Security will mature quickly. The AI industry is investing heavily in agent security. We expect to see significant improvements in sandboxing, permission models and prompt injection defences over the next 12 months. Early challenges will drive better solutions.
Regulation will catch up. The EU AI Act already covers some of these scenarios, and we expect more specific guidance around AI agent permissions and liability within two to three years. Forward-thinking businesses will get ahead of this by building good practices now.
Sandboxing will become the default. Just as we would never run untrusted code directly on a production server, running AI agents in isolated environments will become standard practice. Docker, AWS and Azure are all investing heavily in this space.
The agents that win will be the ones that earn trust. In a market full of powerful AI tools, the companies that invest in transparency, security auditing and user control will come out ahead. Convenience matters, but trust matters more.
The bottom line
AI agents that can control your computer are here. They are genuinely useful, and they are going to get much better very quickly. The security landscape is maturing alongside the technology, and businesses that take a thoughtful approach to deployment will be well positioned to benefit.
If you are thinking about deploying AI agents in your business, now is a great time to start. Just make sure you do it with the right architecture in place, proper isolation, clear permissions, and people who understand the technology.
The future of work almost certainly involves AI agents doing things on your behalf. With the right approach, that future is exciting rather than worrying.
Get expert help with your AI agent strategy
If you want to use AI agents safely in your business, our AI engineering team can help you design secure, sandboxed deployments that deliver real results without putting your data at risk. Book a free intro call and we will walk you through what is achievable for your budget and goals.
More in AI Industry Trends
View allReady to put AI to work in your business?
Book a free 30-minute discovery call. We will discuss your goals, identify quick wins, and outline a practical plan to get started.
Book a discovery callSubscribe to the AI Growth Newsletter
Get weekly AI insights, tools, and success stories — straight to your inbox.
Here's what you'll get when you subscribe::

- AI for SMBs – adopt AI without big budgets or complex setup
- Future Trends – what's coming next and how to stay ahead
- How to Automate Your Processes – save time with workflows that run 24/7
- Customer Service AI – chatbots and agents that delight customers
- Voice AI Solutions – smarter calls and seamless accessibility
- AI News – how to stay ahead of the ever changing AI world
- Local Success Stories – how AI has changed business in the UK.
No spam. Just practical AI tips for growing your business.


