Ignore customers' jobs-to-be-done at your peril: Why most self-service still disappoints
Customer self-service represents one of AI's most compelling use cases.
The fundamentals align perfectly. Customers are increasingly frustrated with rigid, scripted self-service. At the same time, service operations are under pressure to automate in the face of ever-growing interaction volumes. The latest advances in AI have unlocked a new level of experience—more dynamic, conversational, and human-like than ever before. Applied to self-service, AI promises the hard-dollar ROI businesses have long been chasing. And teams don’t need to boil the ocean—they can start small, route a sliver of interactions to a new solution, and iterate from there.
Yet a chasm persists between what the technology can do and what’s actually being implemented.
It’s true that the latest technology often shines in demos but is harder to scale. It’s also true that, amid the GenAI hype, vendors too often tout it as a cure-all.
But the root issue runs deeper: organizations are deploying solutions without first classifying interactions by job-to-be-done (JTBD)—a critical step in identifying which are suitable for self-service, defining the experience to deliver, and selecting the right technology for the task.
In practice, I see four major types of customer interactions:
Inquiries – “Can I do a backdoor Roth IRA conversion?”
These information-seeking interactions need clear, accurate answers. They also extend into customer success, helping users get more value from products and services. GenAI is well suited to this category, with human escalation when needed. Its effectiveness depends on the quality of the underlying knowledge and how it is organized, using technologies such as vector databases and/or knowledge graphs.
Issues – “I can’t print from my mobile device.”
These are customer support scenarios. Goal-oriented AI agents and decision trees excel at guiding users through step-by-step diagnostics and resolution workflows.
Transactions – “Update my billing address.”
These high-volume, bounded-scope interactions are still best handled by traditional NLP. Agent-based AI will gradually take over, but for now, favor established technologies that scale, potentially using GenAI to enhance information gathering. This category also covers identity verification and authentication, which rely on specific technologies like biometrics.
Decisions – “Should I refinance my mortgage given current market conditions?”
When customers face complex, high-stakes, or emotionally charged decisions, human support is essential. These conversations require nuance, empathy, reassurance, or confirmation, and should be routed to a human. AI’s role here is assistive—detecting intent, gathering context, and supporting agents behind the scenes.
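To make the taxonomy concrete, the routing logic it implies can be sketched as a simple dispatch table. This is a minimal illustration, not a production design; the category names and handler descriptions below are hypothetical stand-ins for whatever systems a team actually deploys.

```python
from enum import Enum

class JTBD(Enum):
    INQUIRY = "inquiry"          # information-seeking: GenAI over curated knowledge
    ISSUE = "issue"              # support: guided diagnostics / decision trees
    TRANSACTION = "transaction"  # bounded scope: established NLP flows
    DECISION = "decision"        # high-stakes or emotional: route to a human

# Hypothetical handlers; each returns a short description of the experience
# the corresponding technology would deliver.
HANDLERS = {
    JTBD.INQUIRY: lambda text: f"GenAI answer with human escalation: {text!r}",
    JTBD.ISSUE: lambda text: f"Step-by-step diagnostic workflow: {text!r}",
    JTBD.TRANSACTION: lambda text: f"Scripted NLP transaction flow: {text!r}",
    JTBD.DECISION: lambda text: f"Warm hand-off to a human agent: {text!r}",
}

def route(category: JTBD, text: str) -> str:
    """Dispatch an interaction to the handler suited to its job-to-be-done."""
    return HANDLERS[category](text)

print(route(JTBD.DECISION, "Should I refinance my mortgage?"))
# → Warm hand-off to a human agent: 'Should I refinance my mortgage?'
```

The point of the sketch is the separation of concerns: classifying the job-to-be-done happens first, and only then is a technology chosen, rather than forcing every interaction through a single solution.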
One could add a fifth interaction type—those initiated by the brand. These proactive engagements are now possible at scale through personalized, automated outreach. Context-driven, they span use cases like re-engaging customers at key moments, helping them get more value from a product or service, or providing timely alerts about payments or potential issues. This category is also well-suited for AI agents, but must include a seamless fallback to a human when needed.
An interaction taxonomy must become the foundation of every self-service initiative. It should drive which interactions to prioritize for automation, shape how the experience is designed to foster repeat adoption, and guide the selection of the right technologies for each use case.