Two things are true at once about AI and hiring right now, and they point in different directions.
The first is that across the 103 AI companies in our dataset, only 181 of 8,935 active job postings — about 2% — carry an entry-level marker in the title. Intern, junior, associate, new grad. The rest sit at mid-level or above. Senior, staff, and principal roles alone account for 2,443 postings, or 27%. In engineering specifically, the ratio of senior-and-above to entry-level is roughly 18 to 1.
The second is that those same 103 companies are actively restructuring job descriptions across every function to include new AI-native capabilities. Ninety-three canonical roles now require skills that did not exist three years ago — building AI agents, integrating LLMs into internal tools, designing for agentic capabilities — and those roles span 16 of the 17 functions we track.
A recent piece in the Financial Times by John Burn-Murdoch and Madhumita Murgia, "The AI job loss story is all about bundles," offers a framework for interpreting findings like these. The argument runs as follows: jobs are bundles of tasks, and what matters for AI displacement is not just which tasks can be automated but how tightly the remaining tasks are bound together. "Weak bundle" jobs — junior coders, contractors, single-skill roles where tasks can be neatly separated — are being hollowed out. "Strong bundle" jobs — senior engineers, cross-functional specialists, people whose work combines technical execution with judgment and accountability — are resilient. The piece draws on work by Garicano, Li, and Wu at the LSE, Brynjolfsson and colleagues at Stanford, and Crane and Soto at the Federal Reserve.
The framework is persuasive. It also makes claims that can be tested. We have 8,935 job postings at the companies building the tools that are supposedly causing the displacement. If the bundle theory holds, their hiring should reflect it. Here is what we found.
## The entry-level scarcity is real — and the cause is harder to pin down
Across the dataset, the shape of AI company hiring is not a pyramid. It is a column, narrow at the bottom, wide through the middle, narrow again at the very top.
| Level | Jobs | % of total |
|---|---|---|
| Intern | 61 | 0.7% |
| Junior / associate | 120 | 1.3% |
| Mid / unspecified | 4,182 | 46.8% |
| Senior | 1,542 | 17.3% |
| Staff | 787 | 8.8% |
| Principal | 114 | 1.3% |
| Lead | 430 | 4.8% |
| Manager and above | 1,699 | 19.0% |
In engineering, the shape is even starker. Of 2,458 active engineering postings, 63 (2.6%) carry an explicit entry-level marker. The combined senior, staff, and principal bucket contains 1,109 jobs, or 45.1% of engineering roles. The role-level picture is consistent:
| Role | Total | Entry-level | Senior+ |
|---|---|---|---|
| Infrastructure & Platform Engineer | 486 | 11 (2.3%) | 259 (53%) |
| Backend Engineer | 445 | 4 (0.9%) | 272 (61%) |
| ML Engineer | 292 | 10 (3.4%) | 142 (49%) |
| Fullstack Engineer | 186 | 3 (1.6%) | 86 (46%) |
| Forward Deployed Engineer | 143 | 4 (2.8%) | 12 (8%) |
| Software Engineer | 124 | 19 (15.3%) | 60 (48%) |
This is the pattern the FT piece predicts. The junior rung of the engineering ladder is narrow.
But there is a caveat worth taking seriously: 46.8% of all postings carry no seniority marker in the title at all. A posting titled simply "Backend Engineer" might be open to a candidate with three years of experience or ten. The absence of "Junior" in a title is not the same as a requirement for seniority. And most firms in the dataset are Series B through public, growing fast, and structured around experienced contributors rather than large training cohorts. Without a historical baseline of these companies' entry-level hiring rates before LLM coding tools existed, we cannot cleanly distinguish between "hollowed out by AI" and "never hired many juniors to begin with."
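To make that caveat concrete: seniority in this analysis is inferred from titles (see the methodology note at the end). Below is a minimal sketch of how such a classifier might work; the marker list and its ordering are illustrative assumptions, not the exact rules behind the tables above.

```python
import re

# Illustrative title-based seniority inference. Anything without an explicit
# marker falls into the large "mid / unspecified" bucket, which is why this
# method undercounts entry-level roles at companies that skip title prefixes.
LEVEL_PATTERNS = [
    ("intern",    r"\bintern(ship)?\b"),
    ("junior",    r"\b(junior|associate|new grad|graduate)\b"),
    ("staff",     r"\bstaff\b"),
    ("principal", r"\bprincipal\b"),
    ("senior",    r"\b(senior|sr)\b"),
    ("lead",      r"\blead\b"),
    ("manager+",  r"\b(manager|director|head of|vp|chief)\b"),
]

def infer_level(title: str) -> str:
    t = title.lower()
    for level, pattern in LEVEL_PATTERNS:
        if re.search(pattern, t):
            return level
    return "mid/unspecified"  # no marker: three years of experience, or ten

print(infer_level("Backend Engineer"))         # mid/unspecified
print(infer_level("Senior Backend Engineer"))  # senior
```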
The FT acknowledges this gap. Its evidence rests on the Crane-Soto paper, which uses the US labour force survey and covers coders across the entire economy — including contractors and non-tech industries. Our dataset is narrower and different: 103 AI-native companies, all hiring at the frontier of the industry the research is about. The surveys see a decline in junior coder employment. We see very few junior positions posted, but cannot yet tell you whether that is a shift or a constant.
## Where the entry-level jobs actually are
Of the 103 companies in the dataset, 40 post at least one entry-level role. The top 10 account for 55% of all entry-level postings:
| Company | Entry-level jobs | Total jobs | Type |
|---|---|---|---|
| Palantir | 20 | 235 | Infra + Apps |
| Graphcore | 19 | 129 | Infra |
| Crusoe | 11 | 332 | Infra |
| MongoDB | 10 | 416 | Infra |
| Databricks | 9 | 858 | Models + Infra |
| Legora | 8 | 129 | Apps |
| Perplexity | 6 | 75 | Apps |
| Typeface | 6 | 32 | Apps |
| Ramp | 6 | 127 | Apps |
| Nebius | 5 | 342 | Infra |
The distribution says something the aggregate obscures. Entry-level hiring is not evenly thin across the industry — it is concentrated. Palantir's 20 entry-level openings are structured intern and new-grad programmes: Software Engineer New Grad, Forward Deployed Software Engineer Internship, Product Designer New Grad. Graphcore, a UK chip company, runs formal engineering early-career pipelines. Databricks operates MBA and new-grad tracks. These are apprenticeship structures — exactly what Garicano and Rayo's work on "training bundles" suggests should collapse first when AI handles the routine work that entry-level employees were historically trained on. At these companies, the collapse has not happened.
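That concentration is easy to verify from the table; the figures below are taken directly from it and from the 181-posting total reported earlier, so this is arithmetic rather than new data.

```python
# Top-10 share of entry-level postings, using the counts in the table above.
top10 = [20, 19, 11, 10, 9, 8, 6, 6, 6, 5]  # Palantir through Nebius
total_entry_level = 181

print(f"{sum(top10) / total_entry_level:.0%}")  # 55%
```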
Pure model builders show the inverse pattern. The eight companies in our dataset that build models exclusively post 175 jobs combined, of which three are entry-level. That is 1.7%, against the dataset-wide 27% senior-and-above share. The companies closest to the research frontier hire the fewest juniors in absolute and proportional terms.
What a company sells shapes who it hires at the bottom. Infrastructure companies with apprenticeship histories and enterprise deployment pipelines — Palantir, MongoDB, Databricks — run structured entry programmes. Pure model builders do not. Application companies sit in between. The "AI company" label obscures these differences. The hiring data makes them visible.
## Bundle strength is visible in the technology data
The FT framework rests on a theoretical distinction between weak and strong bundles. One empirical proxy for bundle strength is the number and category breadth of technologies a role requires. Roles that span many technology categories — programming languages, frameworks, platforms, tools, and concepts — are harder to unbundle than roles with a narrow stack.
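A minimal sketch of how such a proxy can be computed, assuming each posting carries (technology, category) tags; the field names are illustrative, not the dataset's actual schema.

```python
from collections import defaultdict

def bundle_breadth(postings):
    """Per-role counts of distinct technologies and distinct categories."""
    techs, cats = defaultdict(set), defaultdict(set)
    for p in postings:
        for tech, category in p["technologies"]:
            techs[p["role"]].add(tech)
            cats[p["role"]].add(category)
    return {role: (len(techs[role]), len(cats[role])) for role in techs}

example = [
    {"role": "Backend Engineer",
     "technologies": [("Go", "language"), ("Kubernetes", "platform")]},
    {"role": "Backend Engineer",
     "technologies": [("PostgreSQL", "tool"), ("Go", "language")]},
]
print(bundle_breadth(example))  # {'Backend Engineer': (3, 3)}
```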
The widest technology profiles in the dataset:
| Role | Technologies | Categories |
|---|---|---|
| Infrastructure & Platform Engineer | 56 | 5 |
| Data Engineer | 56 | 5 |
| Research Scientist | 51 | 5 |
| Infrastructure & Cloud Security Engineer | 46 | 5 |
| Systems Engineer | 43 | 4 |
| Chip & Silicon Engineer | 40 | 5 |
| Forward Deployed Engineer | 39 | 5 |
| Security Engineer | 38 | 5 |
| Backend Engineer | 35 | 5 |
The narrowest profiles sit in specialised operational and legal roles — Employment Counsel, Technical Recruiter, UX Researcher — which typically require four to six technologies from one or two categories.
The pattern broadly aligns with the bundle framework. Engineering, research, security, and deployment roles are the strongest bundles — full category coverage, combining technical depth with cross-functional judgment. The weakest bundles cluster in operational functions with narrow tool requirements. Forward Deployed Engineer is an instructive archetype: 39 technologies across all five categories, and a skill profile requiring production AI system development, customer environment integration, business-to-technical translation, and stakeholder management across multi-month engagements. Separating any one of those tasks from the bundle destroys the role's value. The FT framework predicts it should be resilient. The hiring data reflects that — 143 active postings across 33 companies.
The framework does have a scope limit here, though: every role in the dataset exists inside a company building AI. Even the "weak bundle" roles here carry more technical complexity than their counterparts at traditional enterprises. What counts as a weak bundle inside the AI industry is probably a reasonably strong bundle everywhere else.
## The task ladder is wider, not just higher
The FT's second argument is about the task ladder. METR benchmark data — AI agent task-completion horizons doubling roughly every seven months, accelerating to every four — supports a picture of AI climbing vertically, picking off mundane tasks first and advancing up the cognitive chain. The implication is that the bottom of the ladder disappears, stranding junior workers.
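For scale, the sketch below works out what those doubling times imply over two years. Only the seven- and four-month figures come from the cited trend; the one-hour starting horizon is an assumed illustration.

```python
# Horizon after t months, given a starting horizon h0 (hours) and a doubling
# time d (months): h0 * 2**(t / d).
def horizon_hours(h0: float, t_months: float, d_months: float) -> float:
    return h0 * 2 ** (t_months / d_months)

for d in (7, 4):
    print(f"doubling every {d} months: "
          f"~{horizon_hours(1, 24, d):.0f}h horizon after two years")
# every 7 months -> ~11h; every 4 months -> ~64h
```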
The data shows a different dynamic running in parallel. AI is not only subtracting tasks from the bottom of roles; it is inserting new tasks across the entire structure, at every level and in every function.
Ninety-three canonical roles now include at least one AI-native skill in their job descriptions — a capability that did not exist before the current wave of AI. These skills span 16 of the 17 functions we track:
| Function | Roles with AI-native skills |
|---|---|
| Engineering | 10 |
| Security | 10 |
| Customer Support | 8 |
| Research & Science | 8 |
| Sales & GTM | 8 |
| Infrastructure & IT | 7 |
| Data & Analytics | 6 |
| Legal & Compliance | 6 |
| Product | 6 |
| People & HR | 5 |
| Operations | 5 |
| Marketing | 4 |
| Design & Creative | 4 |
| Finance | 3 |
| Knowledge & Communication | 2 |
| Physical Systems | 1 |
A Legal Operations role now lists "building AI agents for legal task automation." An IT Support Specialist is expected to design "IT automation solutions using AI agents." A Tax Manager is expected to leverage "AI tools to accelerate tax analysis." A Product Designer is expected to design for "AI agents and agentic capabilities in enterprise software." A Brand & Communications Manager is expected to work with generative AI tools.
These are not edge cases. They are the default shape of the job description in 2026 at AI companies.
The technology data reinforces the point. LLMs appear as a required technology in roles across all 16 functions with AI-native skills. Cursor, Claude, LangChain, and the OpenAI API appear in engineering postings, product management, customer enablement, instructional design, marketing, and security. A Solutions Architect's stack now includes LangChain alongside Databricks and Apache Spark. A Brand Manager's stack now includes ChatGPT. AI tools are not replacing these roles — they are being woven into them.
The METR doubling curve is real. But the assumption that it translates into a vertical ladder — bottom rungs removed, employees displaced upward — misses what is happening inside these companies. They are not waiting for automation to arrive from below. They are rewriting job descriptions in real time, adding AI-native capabilities at every level of seniority and in every function. Jobs are not getting thinner. They are getting wider.
## What the data supports, and what it complicates
The entry-level scarcity is real, but its cause is ambiguous. Only 2% of AI company job postings are explicitly entry-level, and in engineering the ratio of senior-plus to entry-level is roughly 18 to 1. This is consistent with the FT's claim that AI is hollowing out the bottom of the task ladder. But it is also consistent with a simpler explanation: fast-growing AI companies have never prioritised junior hiring. Without a pre-LLM baseline at the same companies, we cannot distinguish between "AI eroded this" and "this was always thin."
The bundle framework has empirical support at the function level, with a qualifier at the company level. Technology breadth and cross-functional skill density vary across roles in ways that align with the weak-bundle/strong-bundle distinction. But the dataset is AI companies specifically. The baseline technical complexity here is higher than in the wider labour market, so even "weak bundle" roles in our data carry substantial skill density. The framework probably generalises more cleanly when applied to the broader economy than to this slice of it.
What a company sells determines where the juniors are. Infrastructure and application companies with enterprise deployment pipelines run structured entry-level programmes. Pure model builders do not. This is a sharper signal than the aggregate: the 1.7% entry-level share at model-first companies is roughly a third of the 5.2% at infrastructure-plus-applications companies.
The task-ladder metaphor needs updating. The image of AI climbing from the bottom up captures something real about METR's benchmark progression. But inside the companies building and deploying AI, the dominant pattern is not subtraction from the bottom — it is addition across the whole structure. Ninety-three roles now carry AI-native skill requirements. These capabilities span 16 of 17 functions. The task ladder is being rebuilt rather than shortened, with new rungs inserted throughout.
What the Burn-Murdoch and Murgia piece gets right is that the simple "task automation" model is insufficient — bundles matter, and the interdependence of tasks within a role is a better predictor of resilience than the count of automatable tasks. What it understates is the parallel process: the same technology that is eroding weak bundles in the broader economy is simultaneously inserting new, AI-native tasks into roles that previously had none. The economy is not just losing tasks from the bottom. It is acquiring tasks — and task categories — it did not have before.
Whether the two processes cancel out, or compound, is the question the hiring data cannot yet answer.
This article is a commentary on "The AI job loss story is all about bundles" by John Burn-Murdoch and Madhumita Murgia, published in the Financial Times on 9 April 2026. All Applied Methods data is from the dataset as of April 2026. Job counts reflect active postings at time of analysis. Seniority is inferred from job titles and will undercount entry-level roles at companies that do not use explicit seniority prefixes. The dataset covers 103 AI companies — primarily venture-backed startups and public companies with significant AI operations. It does not cover AI adoption at traditional enterprises. All roles mentioned can be explored at appliedmethods.ai.
