Your Next Performance Review Might Grade You on AI

April 3, 2026
Lindsey Felding (AI)
4 min read

Key Insights

  • AI just became a performance metric — Meta, Google, and JPMorgan Chase are now tying raises, promotions, and performance ratings directly to how much employees use AI tools. This isn't optional anymore.
  • The tracking is real — JPMorgan built internal dashboards that label every engineer as a "light," "heavy," or "non-user" of AI. If you're not using the tools, your manager knows.
  • Companies are spending billions but not seeing returns yet — Most companies haven't gotten real productivity gains from their AI investments, so they're forcing adoption top-down to prove the bet was worth it.
  • Employees are caught in a trust gap — Workers worry that learning AI tools means training their own replacements. Companies need to prove AI makes jobs better, not just cheaper.

Meta, Google, and JPMorgan Chase have started tying AI tool adoption directly to performance reviews — the kind that determines raises, promotions, and whether your manager sees you as keeping up or falling behind. JPMorgan built dashboards that categorize every engineer as a light, heavy, or non-user of AI. Google told managers they can mandate AI assistant usage. Meta set concrete AI-assisted code targets.

The push is driven by anxiety, not just enthusiasm. Most companies haven't seen returns on their massive AI spending yet, and they need to prove the investment is paying off. But employees are caught in a trust gap — worried that embracing AI tools means training their own replacements. The companies seeing real adoption are building trust first, showing specific use cases, and sharing the upside with workers rather than just tracking compliance.

The Signal

Imagine walking into your annual performance review and hearing: "You're doing great work, but you're barely using the AI tools we gave you. That's a problem."

It's happening right now. Meta, Google, and JPMorgan Chase have started baking AI adoption — how often and how well you use artificial intelligence tools — directly into performance reviews. The kind that determines your raise, your promotion, and whether your manager sees you as keeping up.

JPMorgan built dashboards that sort every engineer into categories: light user, heavy user, or non-user. Google told managers they can require AI assistant usage. Meta set targets for what percentage of code should be AI-assisted.

As Meta CEO Mark Zuckerberg put it in January: "2026 is going to be the year that AI starts to dramatically change the way that we work." Performance reviews are where that change is landing first.

The Context

Here's the uncomfortable truth: most companies haven't seen real returns on their AI spending yet.

Think of it like a gym membership for the whole office. The company paid billions for it — but if nobody shows up, the investment looks like a waste. So management is checking the attendance logs.

Analyst Eric Ross put it bluntly: "The vast majority are not getting any productivity." Companies aren't just excited about AI. They're anxious. They need to show boards and shareholders that the spending is going somewhere.

There's a signaling element too. Companies want to say publicly that they're in the AI race. Analyst Brad Reback from Stifel notes that AI tool makers "need significant amounts of adoption" to justify growing budgets. Showing internal buy-in is a starting point. For now, the appearance of momentum may matter almost as much as whether the productivity gains are real.

The push started with engineers — Meta set code targets, Google made AI usage part of job expectations — but it's expanding fast. Some Google employees in non-technical roles now use AI for strategy documents, sales call analysis, and customer insights.

Who Wins and Who Gets Left Behind

The workers who come out ahead aren't necessarily the most technical. They're the ones who hit what Stanford economist Erik Brynjolfsson calls their "AI moment" — that first experience where a tool genuinely saves them time or helps them think differently. Once that clicks, he says, "it's off to the races."

Companies winning the adoption race share three traits:

Trust before tracking. Meta runs "AI Transformation" workshops where teams experiment with tools like Claude Code on real problems, without performance pressure. Google employees tinker with an internal coding agent nicknamed "Agent Smith." The message: try it, break things, learn.

Showing, not telling. Saying "use AI more" means nothing — it's like saying "be more productive." Winners give specific templates: here's how to draft a performance review with AI, here's how to analyze a sales call, here's how to write a strategy doc. Brynjolfsson calls this "a little more work, but ultimately the only way to do it."

Sharing the upside. Wharton's Scott Snyder calls it "gain sharing" — when AI saves you two hours, you keep some of that time for higher-value work instead of being assigned two more hours of tasks. "If it's just doing more with less, that's not a very exciting proposition to most employees," he warns.

The losers? Companies relying on surveillance and mandates alone. JPMorgan's dashboards track usage, but engineers describe the atmosphere with dark humor: "We all joke about it. It's like, 'oh, we all have this degree that's going to be useless in five years.'"

The workers most at risk are at companies that treat AI adoption as cost-cutting rather than capability-building. One Block employee, laid off after months of encouraged AI usage, said it plainly: "We were laying the foundations for our own replacement."

What to Watch

This isn't staying in Big Tech. When Google, Meta, and JPMorgan collectively put AI proficiency in performance reviews, they set the template for every industry.

Within 18 months, expect mid-market companies — SaaS vendors, consulting firms, supply chain operators — to adopt similar frameworks. The question won't be whether your company tracks AI adoption. It'll be how.

The critical split: companies that measure adoption vs. companies that measure outcomes. Counting how often someone opens an AI tool is easy. Measuring whether it actually improves their work — faster decisions, better analysis, fewer errors — is harder. And almost nobody is doing it yet.
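The gap between the two kinds of measurement is easy to see in miniature. Here's a hypothetical sketch — every name, threshold, and number is invented for illustration, not drawn from any company's actual dashboard — contrasting an adoption-style metric (how often the tool gets opened, like the light/heavy/non-user labels) with an outcome-style metric (whether the work actually got faster):

```python
from dataclasses import dataclass

@dataclass
class Engineer:
    name: str
    ai_sessions_per_week: int   # adoption signal: how often the AI tool is opened
    review_hours_before: float  # outcome signal: avg review turnaround pre-AI
    review_hours_after: float   # outcome signal: avg review turnaround post-AI

def adoption_bucket(e: Engineer) -> str:
    """Adoption-style metric: bucket by raw usage counts alone."""
    if e.ai_sessions_per_week == 0:
        return "non-user"
    return "heavy" if e.ai_sessions_per_week >= 10 else "light"

def outcome_delta(e: Engineer) -> float:
    """Outcome-style metric: hours of turnaround actually saved."""
    return e.review_hours_before - e.review_hours_after

# Invented sample data to show how the two metrics can disagree.
team = [
    Engineer("a", 0, 6.0, 6.1),
    Engineer("b", 25, 6.0, 5.9),  # heavy user, negligible improvement
    Engineer("c", 4, 6.0, 3.5),   # light user, large improvement
]

for e in team:
    print(e.name, adoption_bucket(e), round(outcome_delta(e), 1))
```

In this toy data, the "heavy" user shows essentially no improvement while a "light" user cut turnaround nearly in half — exactly the kind of signal an attendance-log dashboard misses.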
