At most technology companies, security protects the company's systems, data, and users from external threats. At AI companies, it does all of that — and it also protects the outside world from the company's product.

That additional mandate is the structural difference that reshapes the entire function. When a traditional software company ships a bug, the worst case is data loss or downtime. When an AI company's model is misused, the consequences can range from mass-generated disinformation to detailed instructions for weapons synthesis. This isn't hypothetical — it's why Anthropic employs a "Policy Manager, Chemical Weapons and High Yield Explosives" and OpenAI employs an "Abuse Investigator (CBRN)."

This article maps the full security function across the AI companies in our dataset: 298 jobs across 10 roles at 90 companies.

Two caveats before we start. First, security hiring varies by company type. Model builders and consumer-facing AI platforms have the heaviest security needs, particularly in Trust & Safety. Infrastructure providers have more traditional security profiles. Application companies sit in between. Second, Trust & Safety is concentrated — 36 jobs across just 6 companies. This is a role that exists primarily at companies operating consumer-facing AI platforms. It may not generalise to all AI companies.


The Numbers

Ten roles make up the security function at AI companies. Together they account for 3.4% of all hiring in our dataset.

RoleJobsCompaniesAI-specific skills present?
Detection & Incident Response5220Yes — detecting risks from autonomous AI agents
Infrastructure & Cloud Security5224Yes — securing GPU clusters, AI accelerator infrastructure
Application Security Engineer4021Yes — securing LLM architectures, AI agent assessment
Security GRC & Compliance3823Partially — AI governance frameworks
Trust & Safety366Entirely AI-specific
Security Engineer2821Yes — securing agentic AI systems
Physical Security177Minimal — mostly data centre security
Offensive Security & Red Team1310Yes — red-teaming AI models, prompt injection testing
Security Leader1210Partially — AI-driven cybersecurity programs
Identity & Access Management105Yes — managing identity for agentic AI systems

The column "AI-specific skills present" is this article's analytical foundation. Every role in this table has been examined for skills that would not exist without AI. In most cases, those skills are classified as "emerging" — they appear in job requirements but at moderate confidence levels, sitting alongside traditional security competencies that remain unchanged. The AI layer is being added on top, not replacing what was there before.


Traditional Roles with AI-Specific Mutations

These roles exist at every tech company. At AI companies, their skill profiles contain new requirements that reflect AI-specific threats.

Application Security Engineer (40 jobs, 21 companies)

The traditional core — threat modelling, secure design reviews, vulnerability assessment — is unchanged. The AI-specific additions are what's new: securing AI and machine learning systems including LLM architectures and training data pipelines, designing secure controls for emerging AI technologies, performing security assessments on AI agents and agentic systems, and protecting enterprise knowledge graphs and multi-tenant AI platforms. All of these are categorised as emerging skills with moderate confidence scores, appearing in some but not all job requirements. This suggests the role is in transition: most application security work at AI companies is still traditional, but the AI-specific layer is growing.

Infrastructure & Cloud Security Engineer (52 jobs, 24 companies)

Traditional core: Kubernetes security, network architecture, zero-trust models. The AI-specific additions include securing AI and machine learning infrastructure including model protection and training data security, securing GPU clusters and specialised AI accelerator infrastructure, and building AI-powered security automation. The GPU cluster requirement is particularly notable. GPU infrastructure has a different security profile from traditional cloud compute — multi-tenancy on GPU nodes, model weight protection, training data isolation. These are problems that Kubernetes security training doesn't cover.

Identity & Access Management (10 jobs, 5 companies)

Traditional core: SSO, RBAC, authentication systems. One AI-specific addition stands out: managing identity and access for non-human identities including service accounts, workloads, and agentic AI systems. This is a small but significant finding. When AI agents act autonomously — calling APIs, accessing databases, triggering workflows — they need identity management. The IAM discipline is expanding from "which humans can access which systems" to "which autonomous agents can take which actions." Traditional IAM frameworks were not designed for this. OWASP's 2025 Top 10 for LLM Applications reflects the same concern, listing "Excessive Agency" — the risk of granting AI agents too much autonomy, too many tools, or insufficient oversight — as one of its most significantly expanded categories.


Roles That Exist Because of AI

Trust & Safety (36 jobs, 6 companies)

This is the most purely AI-native security role. It exists because AI models can generate harmful content, be weaponised by malicious actors, and create novel abuse vectors that have no precedent in traditional software.

The skill profile is specific: detecting, investigating, and disrupting malicious use of AI platforms; training and refining large language models for safety and policy enforcement; building multi-layered defences and real-time safety mechanisms; developing AI-specific detection capabilities and behavioural clustering techniques.

The job titles within this role reveal the scope of the problem:

These titles describe work that didn't exist before generative AI. A CBRN abuse investigator at an AI company investigates attempts to use models to generate information about chemical, biological, radiological, and nuclear threats. A biological safety research scientist studies whether the model could assist in creating biological hazards. The AI Incident Database, which tracks reported AI safety incidents, added 108 new incident records between November 2025 and January 2026 alone — evidence that the threat surface these roles address is not theoretical.

The company concentration matters. Anthropic accounts for 13 of the 36 Trust & Safety jobs, OpenAI for 16, and xAI for 4. These are all companies operating consumer-facing AI platforms where misuse is a direct risk. Infrastructure providers and enterprise application companies have minimal Trust & Safety hiring. This role may grow as more companies launch consumer AI products, or it may remain concentrated at the major model providers.

Offensive Security & Red Team (13 jobs, 10 companies)

Red teaming is a traditional security practice. Red teaming AI models is a new discipline being defined in real time.

The AI-specific skills are the sharpest in any security role: identifying and exploiting AI/ML-specific attack surfaces including prompt injection, model exfiltration, and agent abuse (confidence: 0.92); testing AI-integrated and LLM-powered applications for unique security vulnerabilities (0.88); identifying novel attack surfaces in distributed AI systems and agentic workflows (0.81); and researching LLM misuse scenarios and developing forward-looking defensive strategies (0.73).

The job titles show how this discipline is specialising:

DeepMind has a dedicated "Agentic Red Team" — a team specifically focused on testing autonomous AI agents. This is a sub-specialisation that is emerging alongside the broader expansion of agentic AI across the industry.


The Inward vs. Outward Security Split

Traditional security functions protect the company from external threats — attackers, data breaches, compliance failures. AI companies need that, but they also need security functions that protect the external world from the company's product. This creates two distinct mandates within the same function.

Inward-facing (traditional): Detection & Incident Response, Infrastructure Security, IAM, GRC, Physical Security. These exist at any tech company, with AI-specific additions to their skill profiles.

Outward-facing (AI-native): Trust & Safety, Offensive Security (red-teaming the model itself, not the infrastructure), and elements of Application Security (securing the model's outputs, not just its codebase).

The inward-facing roles account for roughly 187 of the 298 security jobs. The outward-facing roles account for roughly 49, with the remainder — Security Engineer and Security Leader — spanning both mandates. The outward-facing roles are where the hiring is most novel and where the skills are most AI-specific. They are also the roles most sensitive to regulatory pressure. The EU AI Act, which becomes broadly applicable on 2 August 2026, will require specific safety testing and monitoring capabilities for high-risk AI systems — obligations that map directly to these outward-facing roles.


What Security Looks Like by Company Type

What a company builds determines its security profile.

Company typeSecurity jobsTotal jobsSecurity %
Model + Infrastructure (e.g. OpenAI, Databricks)1273,0444.2%
Infrastructure (e.g. CoreWeave, Nebius)742,1243.5%
Application (e.g. Notion, Harvey)502,3682.1%
Model + Application156242.4%
Pure Model / Research41712.3%

Model + Infrastructure companies — the firms that build foundation models and operate the platforms they run on — have both the highest absolute number of security jobs (127) and the highest security-to-total ratio (4.2%). These are the companies most exposed to the dual mandate: they need traditional infrastructure protection for their GPU clusters and training infrastructure, and they need outward-facing safety for the models they ship to consumers. Infrastructure providers, despite having fewer novel security challenges, still invest at 3.5% — reflecting the high-value targets that GPU infrastructure and AI training environments represent.

Application companies have the lowest security ratio (2.1%). Their products use AI but typically don't expose models directly to consumers, reducing the Trust & Safety surface. Their security needs look more like a traditional SaaS company with an AI-shaped addition to the threat model.


Exemplar Jobs

Three postings that illustrate the AI-specific security challenge.

Offensive Security Engineer, Agent Security — OpenAI. This person tests OpenAI's AI agents for security vulnerabilities. Not traditional penetration testing — they're trying to make agents do things they shouldn't, access data they shouldn't, or take actions they weren't authorised for. When an AI agent can call APIs, browse the web, and execute code, the attack surface is fundamentally different from a static application. This is the kind of work OWASP's "Excessive Agency" category was written to address.

Policy Manager, Chemical Weapons and High Yield Explosives — Anthropic. This is not a security engineering role. It's a policy role at the intersection of domain expertise — CBRN knowledge — and AI safety. The person develops and enforces policy around whether and how the model responds to queries about weapons of mass destruction. The fact that this role exists tells you something about the threat model these companies operate under.

Security Lead, Agentic Red Team — DeepMind. This person leads a team specifically focused on testing autonomous AI agents for safety failures. Not testing the model's outputs in isolation — testing what happens when an agent is given tools, autonomy, and goals. What does it do when it encounters edge cases? What happens when multiple agents interact? This is a discipline being defined in real time, and the fact that DeepMind has structured it as a named team with a dedicated lead suggests they expect it to grow.


Choosing a Path

"I want to red-team AI systems" → Offensive Security & Red Team

"I want to investigate AI misuse and protect users" → Trust & Safety

"I want to secure AI infrastructure" → Infrastructure & Cloud Security or Application Security

"I want to work on detection and incident response" → Detection & Incident Response

"I want to work on AI governance and compliance" → Security GRC & Compliance


Observations

Security at AI companies has a dual mandate. Traditional inward-facing protection — infrastructure, data, access — coexists with outward-facing protection of the external world from the company's product. The outward-facing roles are the newer and more AI-specific part of the function, concentrated at consumer-facing model providers.

Most AI-specific security skills are still classified as "emerging." The traditional core of each security role remains unchanged. Threat modelling, incident response, penetration testing, compliance — all still foundational. The AI layer is an addition, not a replacement. This suggests the field is in transition rather than transformation: security professionals are being asked to learn AI-specific skills, but the foundational discipline still matters more than the new specialisation.

Trust & Safety is concentrated at consumer-facing model providers. Thirty-six jobs across 6 companies. Anthropic, OpenAI, and xAI account for the vast majority. If more companies launch consumer AI products, this role will likely spread. If AI remains primarily enterprise-focused, it may stay concentrated. Worth monitoring quarterly.

The "Agentic Red Team" specialisation is a leading indicator. DeepMind's creation of a dedicated team for testing autonomous AI agents suggests that agent security will become a distinct sub-discipline within offensive security, separate from model safety testing. At 13 total offensive security jobs across 10 companies today, the function is small — but the differentiation is already visible in how companies are naming and structuring these teams.

The regulatory surface is expanding. The EU AI Act's high-risk AI system obligations, applicable from August 2026, will require safety testing, monitoring, and governance capabilities that map directly to the outward-facing security roles documented here. Companies that have already invested in Trust & Safety, GRC, and Offensive Security teams are building the organisational capacity that regulation will soon require more broadly.


All data is from the Applied Methods dataset as of April 2026. Job counts reflect active postings at time of analysis. The dataset covers 90 AI companies — primarily venture-backed startups and public companies with significant AI operations. It does not cover AI adoption at traditional enterprises. All roles mentioned can be explored at appliedmethods.ai.