I continue to reflect on last year’s learnings, and today I want to discuss the evolving role of pilots.
For the past decade, the software and SaaS sales playbook has revolved around a product demo followed by a pilot:
Early-stage efforts would focus on securing a demo: a high-level view of the product that also serves as a stepping stone toward identifying a small set of priority use cases tied to the customer’s most pressing business issues.
Sellers would then guide the customer toward a tightly scoped pilot addressing one or two of those issues. The pilot would validate usability with a subset of users and confirm that the targeted workflows operate effectively within the customer’s operational context.
These pilots would typically be short and narrowly defined, handled by Customer Success Managers, with exit criteria framed as straightforward checklists, and failure rates kept low largely thanks to their limited scope.
Today, AI is reshaping the playing field.
AI demos are compelling and often impressive, but what comes after is a cliff. It is hard for prospective buyers to visualize what it actually takes to put AI to work in their own environment. It is equally challenging to define and measure success, set clear expectations for results, and fully grasp what ongoing operations and continuous improvement will require.
To address these questions, buyers are turning to pilots of a very different caliber: true trials. In its 2026 State of Business Buying report, Forrester found that more than 60% of B2B buyers include a trial in their purchase process.
One finding from the same study underscores the difficulty of these pilots: half of the trials for $1M+ projects end in either a halted project or a switch to another vendor.
Vendors are responding by supplementing or replacing Customer Success Managers with forward-deployed engineers (FDEs). In many cases, FDEs continue beyond the initial trial to drive ongoing improvement and expansion.
Success also requires a methodology overhaul that addresses:
Discovering the customer environment
Assessing data readiness
Managing potential employee resistance to change and upskilling
Identifying practical measures of success
Building and prioritizing the use case portfolio
Managing the ongoing cycle of improvements
Eventually, success also requires stepping back to reassess which problems to pursue. Until last year, most companies favored “safe” initial use cases with limited risk but modest upside. Today, there is greater readiness to apply AI to bigger problems that offer higher upside. The good news: more businesses are now comfortable with measures of success that don’t translate mechanically into cost savings or revenue gains, though they still expect tangible improvement.
This isn’t a temporary adjustment. The combination of deeper technical involvement, comprehensive methodology, and sharper use case selection is setting a new standard for putting AI to work.