In 2023, the consensus was clear. Prompt Engineer would be the defining new career of the AI era — a six-figure role requiring no coding, no engineering degree, just the ability to talk to AI in the right way. Anthropic advertised Prompt Engineering positions at $375,000. LinkedIn courses proliferated. The role was declared the hottest job in tech.
Our data tracks 8,652 active jobs across 90 AI companies. Prompt Engineer has 3 postings.
The roles AI actually created — AI Agent Engineer, AI Tutor & Domain Expert, Forward Deployed Engineer — look nothing like what was predicted. This article examines what happened by walking through three categories: what died, what arrived instead, and what appeared that nobody predicted at all.
A note on scope: "AI company" is not a monolithic category. The 90 companies in our dataset range from pure research labs to GPU infrastructure providers to vertical SaaS applications. Some of the roles discussed here are concentrated at a small number of companies, which limits how broadly we can generalise. We note company concentration wherever it matters.
The Prediction vs. the Reality
Here are the ten most distinctly AI-native roles in our dataset — roles that either didn't exist before AI or represent a significant AI-driven mutation of an existing function.
| Role | Jobs | Companies | Function |
|---|---|---|---|
| Forward Deployed Engineer | 147 | 31 | Engineering |
| AI Tutor & Domain Expert | 104 | 5 | Research & Science |
| AI Agent Engineer | 90 | 30 | Engineering |
| Member of Technical Staff | 52 | 20 | Research & Science |
| Trust & Safety | 38 | 7 | Security |
| AI Product Manager | 36 | 13 | Product |
| ML Data & Annotation Operations | 19 | 8 | Data & Analytics |
| Forward Deployed Product Manager | 14 | 5 | Product |
| Applied AI Engineer | 11 | 5 | Engineering |
| Prompt Engineer | 3 | 2 | Engineering |
Traditional engineering roles like Backend Engineer (436 jobs) and ML Engineer (275 jobs) are excluded — they existed before the current AI wave, even if AI has changed what they do. The Forward Deployed Engineer, which we examined in detail in The Deployment Gap, sits at the top: 147 jobs across 31 companies, a role built around embedding engineers directly inside customer environments to make AI products work in production.
The point is not that predictions were wrong for the sake of being wrong. It's that the labour market responded to the practical challenges of building and deploying AI products — not to the theoretical possibilities of interacting with them.
What Died: Prompt Engineer
Three jobs. Two companies.
The two Anthropic postings — Prompt Engineer, Claude Code and Prompt Engineer, Agent Prompts & Evals — are not the role that was hyped. Their skills profile tells the story: designing system prompts to shape model behaviour, evaluating AI model outputs for quality and safety, diagnosing model behavioural regressions, building evaluation frameworks and benchmarks. The emerging skills include designing agent behaviour patterns — tool calling, output validation, error recovery. These are specialised model behaviour engineering roles. They require deep technical knowledge of how language models work, not the ability to write a clever ChatGPT query.
The third posting, at n8n, is titled "Staff LLM Interaction Engineer" — a title that barely resembles the original concept, reflecting how far the remaining roles have drifted from the Prompt Engineer that was hyped in 2023.
The external data confirms the pattern. A Microsoft survey of 31,000 workers across 31 countries ranked Prompt Engineer second-to-last among roles companies planned to add. On Indeed, searches for the role peaked in April 2023 and have since cratered, with job postings described by Indeed's VP of AI as "minimal." A recruiting director at Razoroo estimated that openings dropped 80-90% from the initial surge in 2022. Sam Altman saw this coming. In October 2022, before the hype cycle had even peaked, he said he didn't think prompt engineering would persist as a distinct discipline within five years.
But here is the finding that no one else has reported. The skill didn't disappear. It dispersed.
Prompt engineering now appears as a skill or technology requirement across at least 10 distinct roles in 7 organisational functions:
| Role | Function | How prompt engineering appears |
|---|---|---|
| Prompt Engineer | Engineering | Core discipline (the remaining 3 jobs) |
| AI Product Manager | Product | Architecting agent system behaviours, including prompt optimisation |
| Offensive Security & Red Team | Security | Identifying AI/ML attack surfaces including prompt injection |
| Creative Producer | Design & Creative | Prompting and optimising AI-generated content across modalities |
| AI Agent Engineer | Engineering | Building prompt engineering and LLM fine-tuning pipelines |
| ML Data & Annotation Operations | Data & Analytics | Designing prompt strategies to guide model behaviour |
| Applied ML Scientist | Research & Science | Applying foundation models and prompt engineering to downstream applications |
| Applied AI Engineer | Engineering | Listed as a required technology |
| Implementation Specialist | Customer Support | Listed as a required technology |
| Program & Project Manager | Operations | Listed as a required technology |
In Security, prompt engineering appears as something to defend against. In Product, it's a tool for shaping agent behaviour. In Design, it's a production technique for AI-generated content. In Data Operations, it's a methodology for guiding model training.
The dedicated Prompt Engineer role died because the skill became a baseline expectation across the organisation — not because it became less valuable. As one CTO put it, it became a capability within a job title, not a job title to itself.
What Arrived Instead: AI Agent Engineer
Ninety jobs across 30 companies.
Where Prompt Engineer was about communicating with a model, AI Agent Engineer is about building autonomous systems that use models to take actions in the world. The breadth is the key finding. Thirty companies — a third of our dataset — are hiring for this role. It is not one company's quirk.
The company distribution:
| Company | Jobs |
|---|---|
| Sierra | 22 |
| Moveworks | 12 |
| Decagon | 7 |
| OpenAI | 7 |
| Cohere | 5 |
| Writer | 4 |
| Harvey | 4 |
| PhysicsX | 3 |
| 22 other companies | 1–2 each |
The titles vary — Agent Infrastructure, Agentic Workflows, Agent Orchestration, Agent Harness — but the skill profile is consistent. The core competencies centre on designing AI agents for customer interactions and business workflows, implementing LLMs into production environments, building scalable architectures for enterprise-grade deployments, and developing conversational AI systems. The emerging skills are distinctly agentic: orchestrating multi-component agent systems, implementing agent memory and context management for long-running autonomous processes, and designing safety mechanisms and guardrails for agent behaviour in production.
The technology stack reinforces the picture. Python and large language models appear across nearly every posting. TypeScript, distributed systems, NLP, and the OpenAI API are common. Slack and Microsoft Teams show up regularly — because these agents operate inside enterprise communication tools, not in isolation. LangChain appears in some postings, though not yet ubiquitously, suggesting the tooling ecosystem is still consolidating.
Look at three postings to see what the work actually involves:
Software Engineer, Agent Harness at Cursor — building the agent infrastructure for an AI coding tool. Applied AI Engineer, Agentic Workflows at Cohere — agentic systems at a model provider. Senior Software Engineer, Agent Orchestration at Decagon — an agent-first company where this is the core engineering discipline.
The company type breakdown matters. Of 90 AI Agent Engineer jobs, 57 are at application companies — firms building products that use AI rather than building AI itself. Another 15 are at model-plus-infrastructure companies like Databricks and OpenAI. Only 4 are at pure infrastructure providers. The pattern is intuitive: companies selling AI-powered products to enterprises need people to build the agent systems those products run on.
The connection to Prompt Engineer's decline is direct. Building an agent system requires designing prompts, but also requires systems architecture, orchestration logic, error handling, evaluation frameworks, memory management, safety guardrails, and production deployment. Prompt engineering became one skill among many in a more complex engineering discipline. The AI Agent Engineer role subsumes it.
The enterprise market confirms the trajectory. A PwC survey of 300 senior executives found that 79% report AI agents already being adopted in their companies, with 88% planning to increase AI-related budgets due to agentic AI. A KPMG survey found agent deployment nearly quadrupled over the course of 2025, from 11% to 42% of organisations. The AI agent market crossed $7.6 billion in 2025 and is projected to exceed $10.9 billion in 2026. The companies in our dataset are building the systems those enterprises are buying.
What Nobody Predicted: AI Tutor & Domain Expert
One hundred and four jobs across 5 companies.
This section requires careful framing. Of those 104 jobs, 98 are at xAI. The remainder are at Harvey (2), Luminance (3), Anthropic (1), and Recursion (1). The concentration limits how broadly we can generalise. We state it directly.
That said, the pattern is genuinely new. AI companies are hiring investment bankers, lawyers, doctors, physicists, and linguists — not to do investment banking, law, medicine, physics, or linguistics, but to teach AI models how to think about those domains.
The title diversity at xAI alone tells the story.
In finance: Investment Banking Expert (M&A, DCM, ECM), Finance Expert (Quantitative Trading, FICC Research, Portfolio Management, Private Credit, Structured Finance, Real Estate Investment, Fixed Income, Credit Analyst, Corporate Finance, Equity Research, Macro Research, Risk, Quant), Economics Expert, Accounting Expert (Tax, Technical Accounting, Faculty/Professor).
In science and engineering: Biology Tutor, Chemistry Tutor, Physics Tutor, Medicine Tutor, Earth Science Tutor, Space Science Tutor, Materials Science Tutor, Civil/Mechanical/Electrical/Chemical Engineering Tutor, Data Science Tutor, Statistics Tutor, Pure Math Tutor, Applied Math Tutor, Competition Math Tutor.
In language: AI Tutors in 25+ languages — Arabic, Bengali, Chinese, Danish, Dutch, Finnish, French, German, Gujarati, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Marathi, Norwegian, Polish, Portuguese, Punjabi, Spanish, Swedish, Tagalog, Tamil, Telugu, Thai, Turkish, Urdu, Vietnamese.
And then there is a category that would have been difficult to conceive of a few years ago: Model Behavior Tutor — Epistemic Rigor & Truthfulness. Model Behavior Tutor — Style, Taste & Aesthetics. Model Behavior Tutor — Social Cognition & EQ. Model Behavior Tutor — Wit & Conversation.
xAI is hiring people specifically to teach its model how to be witty, aesthetically discerning, and emotionally intelligent. This is not annotation work. It is cultural and personality calibration of an AI system.
Beyond xAI, the pattern takes different forms. Harvey has Applied Legal Researchers — domain experts evaluating AI legal output. Luminance has Legal Subject Matter Experts and general Subject Matter Experts. Recursion has clinical scientists contributing domain knowledge to AI drug discovery.
The skills profile reflects the dual nature of the work. The foundational skills are operational: creating annotated datasets, using annotation software, evaluating AI responses with subject matter expertise, collaborating with technical teams. But the emerging skills are distinctly AI-native: creating training data for machine learning systems, evaluating AI-generated outputs for domain accuracy, and teaching models domain-specific reasoning and problem-solving approaches.
This is not traditional data labelling. Production RLHF annotation for frontier models can cost $100 per expert comparison for complex tasks, with total annotation budgets reaching into the millions. The quality of domain expert feedback is now recognised as a primary differentiator in model performance — a pattern that makes xAI's investment in finance experts, physics tutors, and model behaviour specialists easier to understand as competitive strategy.
The company type segmentation is stark. Of the 104 jobs, 98 are at Model + Infrastructure companies — firms that build foundation models. Only 5 are at application companies. This is a model-building function, not a product deployment function. It is almost the exact inverse of AI Agent Engineer's distribution.
Whether this pattern scales beyond 5 companies is an open question. If AI Tutor & Domain Expert roles spread to more companies in future quarters, that is evidence the function is becoming structural — a permanent part of how AI models are built. If it stays concentrated, it may reflect a specific competitive strategy at companies like xAI rather than an industry-wide shift.
What This Means
Four observations from the data. No predictions.
The market created deployment and integration roles, not the roles pundits predicted. The largest AI-native roles — Forward Deployed Engineer (147 jobs), AI Agent Engineer (90 jobs) — are about making AI work in production environments, not about interacting with models as an end user. As we documented in The Deployment Gap, AI companies employ nearly as many people in customer-facing technical roles as in core engineering. The labour market responded to practical deployment challenges, not theoretical possibilities.
Skills migrate faster than job titles. Prompt engineering went from a dedicated role to a dispersed skill across 7 organisational functions in under two years. It is now embedded in Security (as a threat vector), Product (as a design tool), and Creative (as a production technique). The AI Agent Engineer role may follow a similar trajectory if agent-building becomes commoditised by better tooling — the skill persisting even as the dedicated title evolves. The lesson for career planning: skills outlast titles.
Domain expertise is becoming an AI input, not just an AI output. The AI Tutor role inverts the traditional relationship between domain knowledge and technology. Instead of AI replacing domain experts, AI companies are hiring them to improve models. The RLHF pipeline has made domain expert feedback one of the most expensive and strategically important inputs in AI development. Whether this pattern scales beyond the current 5 companies is an open question worth watching.
What a company builds determines which AI-native roles it hires. Model builders hire domain experts. Application builders hire agent engineers. Platform companies hire forward deployed engineers. The company type — not the industry label "AI" — drives the hiring pattern. A monolithic view of "AI jobs" obscures more than it reveals.
All data is from the Applied Methods dataset as of April 2026. Job counts reflect active postings at time of analysis. The dataset covers 90 AI companies — primarily venture-backed startups and public companies with significant AI operations. It does not cover AI adoption at traditional enterprises. We have one snapshot of data and cannot claim these roles are "growing" or "declining" from our own dataset alone. Where we reference trends, we cite external sources. All roles mentioned can be explored at appliedmethods.ai.