Block cut 40% of its workforce in late February. Over 4,000 people. Two weeks later, Atlassian cut 10%. Another 1,600. Both companies cited AI as the reason. Both were profitable when they made the cuts.
If you run an agency, you've seen these headlines. This issue breaks down what's actually happening behind them, what the data says about where AI displacement really stands, and where agency owners should be paying attention.
What actually happened (and what didn't)
Block CEO Jack Dorsey didn't hedge. He tied the layoffs directly to AI, writing that "intelligence tools have changed what it means to build and run a company." He predicted most companies would make similar structural changes within the next year. Block's stock jumped 24% on the news.
Atlassian's version was more measured. CEO Mike Cannon-Brookes said their approach "is not AI replaces people," but acknowledged that AI changes "the mix of skills we need or the number of roles required in certain areas." The company replaced its single CTO with two new AI-focused CTOs. Over half of the 1,600 roles cut came from software R&D.
Then came the counter-narrative. Bloomberg reported suspicions of "AI-washing," with analysts questioning whether AI was the real driver or a convenient explanation for standard cost-cutting. A former Block employee called it "organizational bloat wearing an AI costume." Block had tripled its headcount during the pandemic, growing from around 4,000 people to over 10,000 between 2019 and 2023. An Oxford Economics report found that many layoffs CEOs called AI-related were actually the result of past overhiring.
The point isn't to pick a side. It's that even the companies making the cuts can't fully agree on what's driving them. And that ambiguity matters, because the story everyone tells about these layoffs will shape how the rest of the market responds.
The number that matters more than the layoff count
While the headlines focused on who got cut, Anthropic quietly published what may be the more important story.
In early March, Anthropic released a labor market study built from millions of real Claude conversations, cross-referenced with U.S. occupational data. The study introduces a metric called "observed exposure," which measures the gap between what AI could theoretically do in a given job and what it's actually doing right now.
The findings are striking. For computer and math workers, AI is theoretically capable of handling 94% of their tasks. But in real professional use, Claude currently covers only 33%. The same pattern holds across business and finance (94% theoretical, a fraction in actual use), management, office administration, and legal work. All above 80% theoretical capability. All far below that in practice.
The researchers describe it as a gap between a blue area (what's possible) and a red area (what's happening). Right now, the red is a sliver of the blue.
That gap is both the reassurance and the warning. Reassurance: AI displacement is nowhere near its theoretical ceiling, and the study found "limited evidence" that AI has affected unemployment rates to date. Warning: that ceiling is high, and the occupations most exposed are white-collar knowledge workers, exactly the category agency work falls into.
One finding worth sitting with: hiring of workers aged 22 to 25 has slowed in AI-exposed occupations. Not layoffs. Slower hiring. The entry-level roles that agencies rely on for execution work are the ones where the early signal is showing up.
The model that just crossed a line
On March 5, OpenAI released GPT-5.4. The headline stat: it scored 75% on the OSWorld benchmark, which tests a model's ability to perform real desktop productivity tasks. The human baseline on that same benchmark is 72.4%.
That's the first time a general-purpose AI model has outperformed the average human on computer-use tasks. Not coding benchmarks. Not math tests. Actual productivity work: navigating software, filling out forms, moving between applications, completing multi-step workflows.
GPT-5.4 also comes with a 1-million-token context window and native computer-use capabilities, meaning it can autonomously work across different applications on a machine.
For agency owners, this is worth paying attention to. Not because the model is going to replace your team tomorrow, but because the kind of work it now handles competently (navigating tools, executing repetitive multi-step tasks, processing information across systems) is the kind of work that fills a significant chunk of your team's day. The 90/10 reliability problem still applies. Getting AI to handle 90% of a task is fast. The final 10% (the edge cases, the judgment calls, the client-facing nuance) still takes human oversight. But that 90% keeps getting bigger, and the 10% keeps getting more specific.
What this means for agencies
The layoffs are happening at tech companies with 10,000+ employees. Agencies don't operate at that scale, and the dynamics are different. You're not managing thousands of engineers building a single product. You're running client work across multiple accounts with small, specialized teams.
But the pattern underneath those layoffs is relevant. These companies aren't just using AI to speed up existing work. They're reorganizing around it. Changing role structures. Shifting what skills they hire for. Rethinking how many people it takes to deliver a given output.
McKinsey's most recent data shows that 62% of organizations are experimenting with AI agents, but only 23% have started scaling them. That gap between experimenting and scaling is where most agencies are right now. You've tried the tools. Maybe you've built a few workflows. But the organizational structure, the team roles, the way work moves through your agency, probably hasn't changed much.
The agencies that treat AI as a productivity boost for their existing team will get efficiency gains. That's valuable, and it's the right starting point. But the agencies that restructure how work gets done (rethinking which tasks need a human and which don't, building workflows where AI handles execution and people handle judgment, strategy, and client relationships) are the ones that change their positioning.
That's not a call to overhaul everything this quarter. It's a recognition that the distance between "experimenting" and "scaling" is where the real repositioning happens. And the window to cross that distance is open right now.
