Cybersecurity research
for the AI‑agent era. We turn that research into runtime controls.

SensiSec is an Australian cybersecurity company. Our research focuses on how AI agents — coding assistants, desktop assistants, and computer‑use systems — operate on user devices, and the runtime controls needed to make that operation safe.

AI coding assistants and desktop agents can read files, spawn processes, and call network services. We are building policy enforcement technology to make those actions visible, governable, and safe.

01 What we are building

Runtime guardrails for AI agents — local first, team‑wide.

  • Runtime policy enforcement for AI agents. In-process gates on every file, process, and network action — not after-the-fact logs.
  • Local controls for developer machines. Lightweight daemon that runs alongside agents, with sub-millisecond decisions.
  • Central policy and audit for teams. One source of truth for what agents may do — distributed to every machine, audited centrally.
  • File, process, and network action controls. Allow, ask, or block at the action — by path, command, host, or risk class.
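To make the allow / ask / block model concrete, here is a minimal sketch of action-level policy decisions. All names, rules, and patterns below are illustrative assumptions for this page, not GuardPlane's actual API or policy format.

```python
# Hypothetical sketch of allow / ask / block decisions for agent actions.
# Rule kinds, patterns, and verdicts are illustrative assumptions only.
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Action:
    kind: str    # "file", "process", or "network"
    target: str  # path, command line, or host

# Ordered rules: first match wins; anything unmatched escalates to "ask".
RULES = [
    ("file",    "~/.ssh/*",       "block"),  # never expose private keys
    ("file",    "./project/*",    "allow"),  # agent may edit the workspace
    ("process", "rm -rf*",        "block"),  # destructive commands
    ("network", "api.github.com", "allow"),  # known-good host
]

def decide(action: Action) -> str:
    """Return 'allow', 'ask', or 'block' for a proposed agent action."""
    for kind, pattern, verdict in RULES:
        if action.kind == kind and fnmatch(action.target, pattern):
            return verdict
    return "ask"  # unknown actions are surfaced to the user
```

The "ask" default captures the idea above: actions are gated at decision time, by path, command, or host, rather than merely logged afterwards.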

02 Built for

Teams putting AI agents to work without rolling their own controls.

  • Developers using AI coding agents locally. You run Claude Code, Cursor, or Codex agents and want guardrails without slowing them down.
  • Small teams adopting AI without a security team. You want safe defaults you can ship today, and a path to policy as you grow.
  • Companies needing central visibility. You need policy and audit over AI‑assisted workflows across every developer machine.

Now in private beta

GuardPlane — runtime control plane for AI coding agents.

GuardPlane brings SensiSec's research into the hands of teams using AI coding agents. Set policy once; enforce it on every machine, on every action — file, process, or network — in real time.

guardplane.ai