Across 103 AI companies in our dataset there are 8,935 active job postings. Engineering, at 2,458 postings, is the largest single function — more than a quarter of the hiring. At the firms closest to AI, coders are still being hired at scale.
The second reading of the same data points in a different direction. Engineering is 27.5% of hiring, which means engineering is not most of the hiring. Sales & GTM, at 2,099 postings, is 23.5%. Customer-facing deployment roles — Solutions Architects, Solutions Engineers, Forward Deployed Engineers, Customer Success Managers, Implementation Specialists, Engagement Managers, Field Engineering Managers, Client Partners — total roughly 1,900 more. In April 2026, an AI company's hiring looks less like "mostly engineers" than the public conversation assumes.
A recent piece in The Economist, "The tech jobs bust is real. Don't blame AI (yet)," argues that the tech-employment decline since 2022 is real but cannot yet be attributed to AI. Technology's share of US employment has slipped from 2.5% in late 2022 to 2.3%. More than 500,000 tech jobs are "missing" against earlier trends. But the timing is wrong for an AI story: Claude Code, which the piece identifies as the first tool remotely plausible as an engineer substitute, only launched in February 2025. The piece attributes the decline to pandemic-era overhiring, higher interest rates, and offshoring — mostly to India. It cites Leland Crane and Paul Soto at the Federal Reserve, who find that US coder employment has continued to grow since ChatGPT, though much more slowly than before, and Ivan Yotzov at the Bank of England and co-authors, whose survey of almost 6,000 firms across four countries found that over 80% of firms report AI has had no impact on either employment or productivity over the past three years.
Our dataset can't test economy-wide claims. But it can test what's happening at the firms most exposed to AI — the ones building and selling it. Here is what we found.
AI companies are still hiring engineers
At the firms that would most obviously be substituting AI for their own engineers, engineering remains the largest function — 2,458 active postings across 103 companies. Backend Engineers account for 445 of those, Infrastructure & Platform Engineers 486, Machine Learning Engineers 292. AI-native engineering roles combined — ML Engineer, AI Agent Engineer, Applied AI Engineer, and Prompt Engineer — total 404 postings, or 16.4% of the engineering function. The Prompt Engineer role, which had a brief run as a distinct canonical title in 2023-24, has now collapsed to 3 active postings across the industry — a story we've covered elsewhere.
| Engineering category | Jobs | Share of Engineering |
|---|---|---|
| Traditional software (backend, frontend, fullstack, mobile, generalist) | 854 | 34.7% |
| Infrastructure / platform (infra, SRE, database) | 603 | 24.5% |
| AI-native (ML, AI agent, applied AI, prompt) | 404 | 16.4% |
| Engineering leadership (EM, TPM) | 352 | 14.3% |
| Forward Deployed | 143 | 5.8% |
| Other (quality, security, design) | 102 | 4.1% |
Traditional software engineering remains the single largest category inside the function — 34.7% of engineering hiring. The shape is inconsistent with a narrative in which the "automation frontier" has already moved through general-purpose coding. If companies building AI had concluded internally that their own engineers could be replaced by the tools they ship, their own hiring would be the first place the shift would show up. It hasn't.
This is where our data most clearly supports the Economist's argument. The piece is careful with the "yet." It does not claim AI has had no effect on software engineering, only that AI was not a plausible substitute for engineers during the 2022-2024 window when most of the tech-employment decline happened. Inside our dataset, in April 2026, engineers are still being hired in volume.
"Tech employment" is a narrower lens than it used to be
Here is where our data diverges from the Economist's framing. Both the Economist and the Fed papers it cites use "tech employment" or "coder employment" as the central category. That worked as a proxy for what tech companies do with their workforces when tech companies were mostly engineers — broadly true in the 2010s. It works less well now.
At AI companies in April 2026, engineering is the largest function but a minority share of hiring.
| Function | Active jobs | Share |
|---|---|---|
| Engineering | 2,458 | 27.5% |
| Sales & GTM | 2,099 | 23.5% |
| Business Operations | 623 | 7.0% |
| Customer Support | 507 | 5.7% |
| Research & Science | 442 | 4.9% |
| Marketing | 374 | 4.2% |
| People & HR | 345 | 3.9% |
| Physical Systems | 334 | 3.7% |
| Security | 312 | 3.5% |
| Finance | 280 | 3.1% |
| Product | 278 | 3.1% |
| Other 6 functions | 883 | 9.9% |
A company hiring 100 people is, on this pattern, hiring 27 or 28 engineers. The other 72 or 73 are salespeople, solutions architects, customer success managers, operations leads, recruiters, finance staff, security engineers, industry specialists, and the rest.
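The shares above reduce to a simple proportion over posting counts. A minimal sketch, with counts copied from the table (the `share` helper is illustrative, not part of our analysis pipeline):

```python
# Active posting counts by function, April 2026 (from the table above;
# remaining functions omitted for brevity).
postings = {
    "Engineering": 2458,
    "Sales & GTM": 2099,
    "Business Operations": 623,
}

TOTAL_POSTINGS = 8935  # all functions across the 103 companies

def share(function: str) -> float:
    """Share of total active postings, as a percentage (one decimal)."""
    return round(100 * postings[function] / TOTAL_POSTINGS, 1)

print(share("Engineering"))   # 27.5
print(share("Sales & GTM"))   # 23.5
```

Per 100 hires, that is 27 or 28 engineers — the same arithmetic as the paragraph above, just made explicit.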
Nothing about this contradicts the Economist's main argument. It does push back on the implicit equation between "tech employment" as measured by BLS occupation series and "what tech companies are doing with their workforces." The equation held when engineering was most of the hiring. It increasingly isn't.
What a company sells shapes the composition
The 103 companies in our dataset are not a homogeneous group. Aggregating across them obscures patterns at the company-type level.
| Company type | Companies | Jobs | Engineering | Sales & GTM |
|---|---|---|---|---|
| Applications only | 36 | 2,466 | 21.9% | 32.5% |
| Models + Infrastructure (e.g. Databricks, OpenAI) | 8 | 2,555 | 27.0% | 27.3% |
| Infrastructure only | 19 | 2,126 | 28.0% | 16.2% |
| Models + Applications | 23 | 722 | 24.2% | 21.1% |
| All three (e.g. Palantir) | 5 | 468 | 53.0% | 7.1% |
| Infrastructure + Applications | 4 | 423 | 36.4% | 13.7% |
| Pure model labs | 8 | 175 | 31.4% | 6.9% |
Three patterns stand out. First, application-focused AI companies hire sales more than they hire engineering. Across 36 pure application-builders, Sales & GTM is 32.5% of active hiring and Engineering is 21.9% — sales postings outnumber engineering postings by roughly 50%. Second, hybrid delivery models that combine infrastructure and applications, or span all three categories, are engineering-heavy — 36% and 53% respectively — reflecting structures where embedded engineers are effectively the product. Third, pure model labs have almost no sales function (6.9% of postings), because they sell APIs and models rather than customer-specific platform deployments.
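The "roughly 50%" gap is just the ratio of the two shares. A quick check, with figures copied from the company-type table (the dictionary layout and helper are ours, for illustration only):

```python
# (companies, jobs, engineering share %, sales & GTM share %) per company
# type, copied from the table above.
company_types = {
    "Applications only":             (36, 2466, 21.9, 32.5),
    "Models + Infrastructure":       (8,  2555, 27.0, 27.3),
    "Infrastructure only":           (19, 2126, 28.0, 16.2),
    "Models + Applications":         (23, 722,  24.2, 21.1),
    "All three":                     (5,  468,  53.0, 7.1),
    "Infrastructure + Applications": (4,  423,  36.4, 13.7),
    "Pure model labs":               (8,  175,  31.4, 6.9),
}

def sales_to_engineering(type_name: str) -> float:
    """Ratio of the Sales & GTM share to the Engineering share."""
    _, _, eng, sales = company_types[type_name]
    return round(sales / eng, 2)

print(sales_to_engineering("Applications only"))  # 1.48
print(sales_to_engineering("Pure model labs"))    # 0.22
```

A ratio above 1 means the company type posts more sales roles than engineering roles; application-only companies sit near 1.5, pure model labs near 0.2.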
Databricks, our largest single company by active postings, is the fullest example of the sales-heavy platform pattern. Of 858 active postings, 27% are engineering and 49% are Sales & GTM. Nearly half of what Databricks is currently hiring for is customer-facing work.
Anthropic, with 430 active postings, is 24% engineering and 28% sales, with another 41 postings in Research & Science. OpenAI's 642 postings are 32% engineering and 14% sales. Palantir is 45% engineering and 11% sales — consistent with its Forward-Deployed-Engineer-heavy delivery model. Perplexity, much smaller and earlier in its scaling, is 51% engineering and 3% sales — closer to an early-stage pure tech company than to a platform business.
The Economist names Block specifically as evidence for the tech-layoffs thesis, citing its cut of more than 4,000 roles. Block is in our dataset because it ships AI applications. It currently has 87 active postings: 3 are engineering and 73 are Sales & GTM, almost entirely Account Executives and Solutions Engineers in regional territories — Manchester, Glasgow, Madison, San Diego, and so on. If Block is the paradigm tech layoff, it is a company restructuring away from an engineering-weighted model toward a distribution-heavy one — not a company shrinking its engineering workforce because AI has taken over engineering tasks.
The composition isn't incidental. It is what an AI company selling to enterprises currently looks like.
Where it gets complicated
Our dataset describes AI companies specifically. It does not describe the broader US labour market, and it cannot directly test the Economist's claims about San Francisco employment trends, BLS occupation series, or the 12%-to-100% growth in tech workers at non-tech firms between 2022 and 2025. Those claims belong to different datasets.
Two methodological differences matter when comparing what we see to the Economist's cited sources. First, our counts are active job postings, not filled employment. A posting is a hiring intention at a point in time; it can sit open for months, close without being filled, or appear multiple times across locations. Labour-force surveys count people in jobs, which is a different quantity. Second, the Crane-Soto paper uses CPS occupation data linked to O*NET task exposure — it captures people who call themselves software developers regardless of industry. Our data captures who AI companies are hiring, regardless of what candidates call themselves. Those are different slices of the same reality.
The Yotzov survey is worth noting on its own terms. Its ~6,000 firms are a cross-sectoral sample of CFOs and senior executives across the US, UK, Germany, and Australia. It is not a sample of AI companies, or even predominantly of tech companies. Its finding that over 80% of firms report no AI impact on employment over three years is weaker evidence on the specific question the Economist is asking — "did AI cause the tech-jobs bust?" — than the piece's summary ("essentially zero") implies. It tells us that most firms, in most sectors, haven't seen AI change their headcounts yet. It tells us less about what's happening inside the AI industry itself.
There is also a structural caveat our data cannot resolve. The market in April 2026 may not be the market in April 2027. If coding assistants are only now becoming serious substitutes for experienced engineers — and if hiring decisions respond to that change with a one- or two-year lag — the patterns here will shift. Whether they do, and how sharply, is an empirical question for later.
What the data supports, and what it complicates
1. AI companies are still hiring engineers. This supports the Economist's "not yet." 2,458 active engineering postings across 103 AI companies, including the firms most directly exposed to Claude Code, Cursor, and similar tools. If those companies had concluded AI could replace engineers, their own hiring would be the first place it showed up. It hasn't.
2. "Tech employment" is an increasingly narrow lens on AI-company workforces. Engineering is 27.5% of hiring. Sales & GTM is 23.5%. Customer-facing deployment work adds another ~21%. A company's engineering hiring and its total hiring are no longer closely coupled. National labour statistics don't see this composition change.
3. Application-focused AI companies hire sales more than engineering. 36 companies in our dataset ship AI-powered applications without training models or running infrastructure. Their hiring is 32.5% Sales & GTM and 21.9% engineering. Databricks, the largest single hiring company in our data, runs 49% sales and 27% engineering. This is the pattern for a large and growing share of the AI industry.
4. "AI company" as a category hides major composition differences. Pure model labs hire sales at 6.9% of total. Palantir-type hybrids hire 53% engineering. Mid-stage application builders like Block hire almost entirely into sales. The aggregate "AI company" category flattens these differences in ways that matter for any claim about AI-industry hiring in aggregate.
5. Block is a poor example for any AI-displacement thesis. 87 active postings, 73 Sales & GTM, 3 Engineering. If Block is the paradigm tech layoff, the most accurate reading is of a company repositioning its go-to-market, not of AI having taken over engineering work.
The Economist's central argument — that the 2022-2025 tech-jobs decline predates AI's plausible substitutability for engineers — holds up against our data. What our data adds is that the category "tech jobs" is becoming less useful for reasoning about AI-company workforces. The more interesting story isn't whether coders are being replaced. It is that even at the firms most able to use AI to replace coders, coders are already a minority of the hiring.
This article is a commentary on "The tech jobs bust is real. Don't blame AI (yet)," published in The Economist on 13 April 2026. All Applied Methods data is from the dataset as of April 2026 and reflects active job postings at time of analysis. The dataset covers 103 AI companies with active postings — primarily venture-backed startups and public companies with significant AI operations. It does not cover AI adoption at traditional enterprises, which the Economist argues is where much of the current tech-skills migration is happening. All roles mentioned can be explored at appliedmethods.ai.
