Cybersecurity research for the AI‑agent era.
We turn that research into runtime controls.

SensiSec is an Australian cybersecurity company. Our research focuses on how AI agents — coding assistants, desktop assistants, and computer‑use systems — operate on user devices, and the runtime controls needed to make that operation safe.

These agents can read files, spawn processes, and call network services on the user's behalf. We are building policy enforcement technology to make those actions visible, governable, and safe.

01 What we are building

Runtime guardrails for AI agents — local first, team‑wide.

  • Runtime policy enforcement for AI agents. In-process gates on every file, process, and network action — not after-the-fact logs.
  • Local controls for user devices. Lightweight daemon that runs alongside agents, with sub-millisecond decisions.
  • Central policy and audit for teams. One source of truth for what agents may do — distributed to every machine, audited centrally.
  • File, process, and network action controls. Allow, deny, or log each action — by path, command, or host.
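
The action-level model above can be sketched in a few lines. This is an illustrative sketch only, not GuardPlane's actual policy format or API: the `Rule` class, `evaluate` function, and the example patterns are all hypothetical, assuming a first-match-wins policy with a default-deny fallback.

```python
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Rule:
    kind: str      # "file", "process", or "network"
    pattern: str   # glob over a path, command line, or host
    decision: str  # "allow", "deny", or "log"

def evaluate(rules, kind, target, default="deny"):
    """First matching rule wins; unmatched actions fall back to default."""
    for rule in rules:
        if rule.kind == kind and fnmatch(target, rule.pattern):
            return rule.decision
    return default

# Hypothetical team policy: block secret reads and destructive
# commands, allow workspace access, log outbound calls.
policy = [
    Rule("file", "/home/*/.ssh/*", "deny"),
    Rule("file", "/home/*/project/*", "allow"),
    Rule("process", "rm -rf *", "deny"),
    Rule("network", "api.example.com", "log"),
]

print(evaluate(policy, "file", "/home/alice/.ssh/id_rsa"))  # deny
print(evaluate(policy, "network", "api.example.com"))       # log
```

Under this sketch, anything no rule covers (say, an unexpected shell command) is denied by default, which is the safe posture for agent workloads.
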

02 Built for

Teams putting AI agents to work without rolling their own controls.

  • People running AI agents locally. You run Claude Desktop, Claude Code, VS Code, OpenClaw, or similar agents on your device and want guardrails without slowing them down.
  • Small teams adopting AI without a security team. You want safe defaults you can ship today, and a path to policy as you grow.
  • Companies needing central visibility. You need policy and audit over AI‑assisted workflows across every device.

Now in private beta

GuardPlane — runtime policy for AI agents.

GuardPlane brings SensiSec's research into the hands of teams running AI agents on their devices. Set policy once; enforce it on every machine, on every action — file, process, or network — in real time.

guardplane.ai