Is AI Changing Jobs Yet? Anthropic's Early Evidence
Anthropic’s new paper, published on March 5, 2026, tries to answer a question that gets plenty of heat and not enough proof: is AI already changing the job market in a visible way, or are people mostly reacting to the idea of change before the numbers truly move? The report does not claim that mass displacement has already arrived. Instead, it builds a way to track where AI is most likely to matter first, which jobs look most exposed right now, and whether those jobs are starting to show signs of stress in hiring or unemployment data.
What this research is really about
The paper is titled Labor market impacts of AI: A new measure and early evidence. That title matters, because the study is less about making dramatic predictions and more about building a better yardstick. Anthropic argues that many earlier attempts to measure job risk leaned too heavily on theory alone. A task might be possible for an AI system in principle, yet still show little real use because of legal limits, workflow friction, software needs, or the simple fact that people have not changed their habits yet.
So the big move in this report is a metric called observed exposure. It combines three ingredients: the U.S. O*NET task database, Anthropic’s own real-world usage data from the Economic Index, and an earlier academic framework that estimated whether a language model could cut the time a task takes at least in half. The measure then weights work-related and automated use more heavily than collaborative use. In plain English, the paper asks: which job tasks are not only possible for AI, but are already being done with AI in ways that could cut into human labor?
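That weighting idea can be sketched in a few lines of Python. Everything below is illustrative: the task records, usage shares, and weights are invented for the example, since the paper's exact formula and coefficients are not reproduced here.

```python
# Hypothetical sketch of an "observed exposure" style score.
# Task records, usage shares, and weights are invented for
# illustration; this is not Anthropic's actual data or formula.

def observed_exposure(tasks, w_automated=1.0, w_collaborative=0.5):
    """Average task-level score for one occupation: each task's usage
    shares, weighted so automated use counts more than collaborative use."""
    scores = [
        w_automated * t["automated_share"]
        + w_collaborative * t["collaborative_share"]
        for t in tasks
    ]
    return sum(scores) / len(scores)

# Two made-up tasks for one occupation.
tasks = [
    {"automated_share": 0.6, "collaborative_share": 0.2},
    {"automated_share": 0.1, "collaborative_share": 0.3},
]

print(round(observed_exposure(tasks), 3))  # 0.475
```

The design choice the paper describes is visible in the default weights: directive, automated use moves the score more than collaborative use, which is how "AI is doing the task" gets separated from "AI is helping with the task."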
Why this is different from older AI job talk
A lot of AI job commentary jumps from capability demos to sweeping claims about entire professions. Anthropic’s paper pushes back on that shortcut. It notes that actual coverage is still much smaller than theoretical coverage. Across tasks seen in Anthropic’s earlier Economic Index reports, 97% fell into categories that were theoretically feasible for large language models, yet real-world use remained far below the full range of what those systems might do someday. In the computer and math category, for example, theoretical feasibility reached 94% of tasks, while current coverage measured through Anthropic’s usage data was only 33%.
That gap is one of the most useful ideas in the whole report. It tells readers that “AI can do this” and “workers are using AI for this at scale” are not the same statement. That may sound obvious, though it gets lost in public debate all the time. A task can be technically feasible and still not be folded into day-to-day work. This paper treats that gap as the main story rather than a footnote.
Which jobs look most exposed right now
The paper’s ranking of exposed occupations will grab most of the attention. Anthropic finds that computer programmers sit at the top with 75% coverage under its measure. Customer service representatives and data entry keyers are also near the top, with data entry keyers listed at 67% coverage. On the other end, around 30% of workers had zero coverage in the data because their tasks appeared too rarely to pass the threshold; examples included cooks, motorcycle mechanics, lifeguards, bartenders, dishwashers, and dressing room attendants.
This matches the broader pattern Anthropic reported in its first Economic Index release on February 10, 2025. That earlier report found AI use clustered in software development and technical writing, with 37.2% of relevant Claude conversations tied to computer and mathematical work. It also found that AI use leaned more toward collaboration than direct replacement, with 57% classed as augmentation and 43% as automation. Only about 4% of occupations showed AI use across at least three-quarters of their tasks, while roughly 36% showed use in at least a quarter of tasks.
What the paper says about unemployment
Here is the headline many readers may find surprising: Anthropic does not find a systematic rise in unemployment for workers in the most exposed occupations since late 2022. Using Current Population Survey data, the report compares workers in the top quartile of exposure with workers in jobs showing no AI exposure. The result is small and statistically insignificant. Anthropic’s message is clear: if AI is already producing broad unemployment effects in these occupations, the signal is not strong enough to stand out in this dataset yet.
That does not mean “nothing is happening.” The paper points to a weaker but more troubling clue around younger workers. For people ages 22 to 25, hiring into exposed occupations appears to have slowed. Anthropic estimates a 14% drop in the job-finding rate for young workers entering exposed roles compared with 2022, and says the result is only barely statistically significant. Older workers did not show the same pattern. So the study stops short of a loud alarm, though it does raise a yellow flag for entry-level hiring.
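Note that the 14% figure is a relative change in the job-finding rate, not a percentage-point one. A quick sketch makes the distinction concrete; the baseline rate below is a made-up number, and only the size of the relative drop reflects the paper's estimate.

```python
# Illustrating a 14% *relative* decline in the job-finding rate.
# The baseline rate is hypothetical; only the size of the relative
# drop reflects the figure reported in the paper.
baseline_rate = 0.25                       # made-up 2022 monthly rate
current_rate = baseline_rate * (1 - 0.14)  # after a 14% relative drop

relative_change = (current_rate - baseline_rate) / baseline_rate
point_change = current_rate - baseline_rate

print(f"relative change: {relative_change:.0%}")  # -14%
print(f"point change: {point_change:.3f}")        # -0.035
```

The same headline number would mean a larger or smaller percentage-point drop depending on the baseline, which is one reason the paper's "barely significant" caveat matters.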
Who may feel the pressure first
Another striking result is who sits inside the most exposed group. Anthropic reports that workers in highly exposed occupations are more likely to be female, more educated, higher paid, and older. In the pre-ChatGPT comparison period of August through October 2022, the exposed group was 16 percentage points more likely to be female, earned 47% more on average, and had far higher levels of advanced education. Graduate degree holders made up 17.4% of the exposed group versus 4.5% of the unexposed group.
That cuts against the old habit of treating AI risk as a problem mainly for low-wage routine work. This paper suggests the early pressure may be stronger in white-collar, screen-based, information-heavy jobs. Anthropic also finds that occupations with higher observed exposure are projected by the Bureau of Labor Statistics to grow less through 2034. The relationship is modest, though directionally meaningful: every 10 percentage point increase in coverage is linked with a 0.6 percentage point drop in projected job growth.
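Read literally, that relationship is a linear slope of about -0.06 percentage points of projected growth per percentage point of coverage. The sketch below treats it as linear over any range, which is an assumption for illustration; the paper reports only a modest association, not a full model.

```python
# Sketch of the reported association: each 10 percentage point rise
# in coverage corresponds to a 0.6 percentage point drop in projected
# job growth through 2034, i.e. a slope of -0.06. Extrapolating this
# linearly is an illustrative assumption, not the paper's claim.
SLOPE = -0.06  # pp of projected growth per pp of coverage

def projected_growth_shift(coverage_change_pp):
    """Implied change in projected job growth, in percentage points."""
    return round(SLOPE * coverage_change_pp, 2)

print(projected_growth_shift(10))  # -0.6
print(projected_growth_shift(42))  # -2.52 (extrapolation only)
```

Even at the high end, the implied effect on projected growth is a few percentage points over nearly a decade, which is why the paper calls the relationship modest but directionally meaningful.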
What readers should take away
The best way to read this paper is not as proof that AI has already broken the labor market, and not as proof that concerns were overblown. It is better read as a serious attempt to replace vague fear with a live measurement system. Anthropic is saying that the biggest question is not whether AI can do pieces of many jobs. The bigger question is when those capabilities turn into routine, work-related, automated use at enough scale to show up in hiring, wages, and unemployment.
Right now, the answer seems mixed. AI use is real, concentrated, and growing. Exposure is highest in certain white-collar roles. Hiring for younger workers in exposed jobs may already be softening. Yet broad unemployment effects are still hard to detect. That makes this research valuable precisely because it resists easy slogans. It gives us a cleaner way to watch the job market change while the story is still being written.