June 4, 2025
Klarna’s AI Mistake: Why Replacing Humans Backfired
Klarna’s push to replace 700 support agents with AI looked like a fintech success story—until quality and compliance issues forced a major strategy shift.

Austin Carroll
CEO & Co-Founder
News
3 minutes
Just a year ago, Klarna was being praised for what seemed like a breakthrough in customer service automation. CEO Sebastian Siemiatkowski told shareholders that the company’s AI assistant had taken over the work of 700 customer service agents. Headcount dropped from 5,000 to 3,800. Analysts applauded the move as a glimpse into the future of scalable support. But that future came with a flaw: the customer experience got worse.
The Efficiency Promise That Fell Apart
Klarna’s AI assistant wasn’t just answering simple queries—it was meant to handle complex customer issues, replacing a large part of the support workforce. On paper, it worked. Internally, Klarna projected it could run the full buy-now-pay-later operation with just 2,000 employees.
But the quality didn't hold up. In a candid Bloomberg interview, Siemiatkowski admitted what many in fintech are starting to realize: optimizing too heavily for cost-cutting erodes service quality. The AI could respond to tickets, but it couldn't show empathy, interpret emotional context, or resolve nuanced problems—especially when money is on the line.
“As cost unfortunately seems to have been a too predominant evaluation factor... what you end up having is lower quality.”
Klarna’s New Direction: A Hybrid Model
To repair the damage, Klarna is shifting to what it calls an "Uber-type" setup for customer support: a distributed workforce model in which human agents work remotely while the company retains the scalability of AI. The goal is to keep automation where it works but ensure real people are available when it matters most.
This shift isn’t just about service quality—it’s becoming a strategic brand move. Klarna now positions human support as a differentiator in a market where faceless automation is the norm. According to Siemiatkowski, letting customers know they can speak to a human whenever they want is becoming a trust-building feature.
The Bigger Picture: Regulation Is Catching Up
Klarna’s reversal also comes at a time when lawmakers are taking a closer look at AI in financial services. Virginia recently passed the High-Risk Artificial Intelligence Developer and Deployer Act, joining Colorado and others in setting rules for how AI can be used in lending, payments, and customer interactions.
Key requirements in Virginia’s law include:
Mandatory risk assessments for AI used in financial decisions
Transparency to consumers when AI is involved in critical outcomes
Joint liability for developers and deployers of AI systems
The message from regulators is clear: fintechs can no longer treat AI as a plug-and-play solution. If it affects consumer outcomes, it needs oversight, human accountability, and clear disclosures.
What This Means for Fintech Leaders
Klarna’s experience is a warning to fintechs moving too fast toward full automation. It also points to a few evolving best practices:
Hybrid support systems are no longer just an option—they may be a regulatory requirement in some states.
Transparency about AI usage isn't just good practice; it's becoming a legal obligation.
Human oversight is a competitive advantage in high-stakes industries like finance.
As more states introduce legislation and enforcement mechanisms tighten, Klarna’s pivot might end up being a smart compliance move—not just a customer service one.
Bottom Line
Klarna’s retreat from AI-first support is more than a tech correction—it’s a signal of where the fintech industry is headed. Automation alone won’t cut it. The companies that win in this new landscape will be the ones that strike the right balance between cost-efficiency, regulatory compliance, and customer trust. And that balance, it turns out, still includes humans.