Abstract
Clawdbot, a self-hosted AI agent with diverse tool capabilities, exhibits a non-uniform safety profile across risk dimensions, struggling in particular with ambiguous or adversarial inputs despite consistent reliability on well-specified tasks.
Clawdbot is a self-hosted, tool-using personal AI agent with a broad action space spanning local execution and web-mediated workflows, which raises heightened safety and security concerns under ambiguity and adversarial steering. We present a trajectory-centric evaluation of Clawdbot across six risk dimensions. Our test suite samples and lightly adapts scenarios from prior agent-safety benchmarks (including ATBench and LPS-Bench) and supplements them with hand-designed cases tailored to Clawdbot's tool surface. We log complete interaction trajectories (messages, actions, tool-call arguments/outputs) and assess safety using both an automated trajectory judge (AgentDoG-Qwen3-4B) and human review. Across 34 canonical cases, we find a non-uniform safety profile: performance is generally consistent on reliability-focused tasks, while most failures arise under underspecified intent, open-ended goals, or benign-seeming jailbreak prompts, where minor misinterpretations can escalate into higher-impact tool actions. We supplement the overall results with representative case studies, summarize their common patterns, and analyze the security vulnerabilities and typical failure modes that Clawdbot is prone to trigger in practice.
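For concreteness, here is a minimal sketch of what a logged trajectory record and the automated judging pass could look like. The field names, class names, and judge prompt below are our illustrative assumptions, not the paper's released schema or the judge's actual prompt format.

```python
from dataclasses import dataclass, field

# Illustrative trajectory schema (field names are assumptions, not the released format).
@dataclass
class ToolCall:
    name: str        # e.g. "shell.exec", "browser.navigate" (hypothetical tool names)
    arguments: dict  # tool-call arguments as logged
    output: str      # raw tool output returned to the agent

@dataclass
class Trajectory:
    case_id: str          # one of the 34 canonical cases
    risk_dimension: str   # one of the six risk dimensions
    messages: list[dict] = field(default_factory=list)    # user/agent messages
    tool_calls: list[ToolCall] = field(default_factory=list)

def judge_trajectory(trajectory: Trajectory, judge) -> bool:
    """Ask an automated trajectory judge (e.g. AgentDoG-Qwen3-4B) whether the
    full trajectory is safe. `judge` is any callable mapping a serialized
    trajectory to a verdict string; the prompt wording here is hypothetical."""
    prompt = (
        "Review the agent trajectory below (messages, actions, tool-call "
        "arguments and outputs) and answer SAFE or UNSAFE.\n"
        f"{trajectory}"
    )
    return judge(prompt).strip().upper().startswith("SAFE")
```

The point of judging the whole trajectory rather than the final answer is that, as the abstract notes, failures often surface as escalating tool actions mid-run rather than in the closing message.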
Community
We present a trajectory-based safety audit of Clawdbot (OpenClaw), a self-hosted tool-using AI agent. We evaluate 34 test cases across 6 risk dimensions and find a non-uniform safety profile (58.9% overall pass rate): the agent handles well-scoped tasks reliably but struggles with ambiguity, open-ended goals, and adversarial prompts. Notably, it scores 0% on intent misunderstanding cases. We release our test suite and use AgentDoG-Qwen3-4B as an automated trajectory judge.
Dataset: https://huggingface.co/datasets/tianyyuu/clawdbot_safety_testing
Trajectory Judge: https://huggingface.co/AI45Research/AgentDoG-Qwen3-4B
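A minimal sketch of pulling both artifacts from the Hub with `datasets` and `transformers`. The split name, prompt handling, and the `judge_verdict` wrapper are assumptions about the release, not documented usage from the dataset or model cards.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Test suite (split name is an assumption about the release).
suite = load_dataset("tianyyuu/clawdbot_safety_testing", split="train")

# AgentDoG-Qwen3-4B as an automated trajectory judge.
tok = AutoTokenizer.from_pretrained("AI45Research/AgentDoG-Qwen3-4B")
judge = AutoModelForCausalLM.from_pretrained("AI45Research/AgentDoG-Qwen3-4B")

def judge_verdict(trajectory_text: str, max_new_tokens: int = 16) -> str:
    """Hypothetical wrapper: feed a serialized trajectory to the judge and
    decode only the newly generated verdict tokens. The prompt format the
    judge expects is an assumption, not taken from the model card."""
    inputs = tok(trajectory_text, return_tensors="pt")
    out = judge.generate(**inputs, max_new_tokens=max_new_tokens)
    return tok.decode(out[0][inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True)
```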
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- LPS-Bench: Benchmarking Safety Awareness of Computer-Use Agents in Long-Horizon Planning under Benign and Adversarial Scenarios (2026)
- Risky-Bench: Probing Agentic Safety Risks under Real-World Deployment (2026)
- From Assistant to Double Agent: Formalizing and Benchmarking Attacks on OpenClaw for Personalized Local AI Agent (2026)
- Agent-Fence: Mapping Security Vulnerabilities Across Deep Research Agents (2026)
- Too Helpful to Be Safe: User-Mediated Attacks on Planning and Web-Use Agents (2026)
- When Actions Go Off-Task: Detecting and Correcting Misaligned Actions in Computer-Use Agents (2026)
- Unsafer in Many Turns: Benchmarking and Defending Multi-Turn Safety Risks in Tool-Using Agents (2026)