AI & Agentic Security
This isn't cloud security or product security with an AI label. It's a genuinely new field. Model security, prompt injection defence, agent guardrails, supply chain integrity for AI systems. The role is still being defined, and most companies are figuring out who should own it. If you're building with AI, that question is going to find you whether you're ready or not.
Does this role even exist yet?
Barely. A handful of companies have dedicated AI security engineers. Most are still bolting AI security onto existing AppSec or cloud security roles. The job title is emerging and the skill set is being defined in real time. If you're waiting for the market to produce a clear 'AI Security Engineer' profile before you start thinking about it, you're already behind.
What would this person actually do?
They would secure the AI stack from the ground up: hardening model endpoints, building guardrails for autonomous agents, securing training data pipelines, implementing output filtering, and ensuring supply chain integrity for the models and frameworks you depend on. They sit at the intersection of ML engineering and security engineering.
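To make one of those duties concrete, here is a minimal sketch of what "output filtering" can mean in practice: scanning model output for patterns that suggest a prompt injection succeeded (the model echoing system-prompt contents, or parroting attacker instructions) before the text reaches a user or a downstream agent. The pattern list and function names are illustrative assumptions, not a real product's policy.

```python
import re

# Hypothetical deny-list for one guardrail layer. A real deployment would
# combine this with classifier-based checks and allow-listing of tool calls.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)ignore (all )?previous instructions"),
    re.compile(r"(?i)system prompt:"),
    re.compile(r"(?i)begin (api|secret) key"),
]

def filter_output(model_output: str) -> tuple[bool, str]:
    """Return (allowed, text); withhold output matching any risky pattern."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return False, "[output withheld by guardrail]"
    return True, model_output

# Benign output passes through; flagged output is withheld.
allowed, text = filter_output("Here is your summary of the document.")
blocked, _ = filter_output("System prompt: you are a helpful assistant...")
```

Regex deny-lists are the shallow end of this work; the point is that someone has to own the layered version of it, tune it against real attack traffic, and decide what the agent is allowed to do when a check fires.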
When should I start thinking about this hire?
If AI is core to your product and you're moving beyond prototypes into production, now. Not when you have a breach or a customer asks how you're securing your models. The people who can do this work are scarce, and they're not going to be easier to find in 12 months.
Hiring for AI & Agentic Security?
The practitioners who define this field are not on job boards. They are embedded in the communities we operate in. Let's talk about what you need.
START THE CONVERSATION