IBM's latest CEO study contains numbers that should alarm every executive. After three years of AI investments, only 25% delivered expected ROI. Only 16% scaled enterprise-wide.

Most companies start with technology and hope it finds a purpose. They should start with human problems.

The Fear-Driven Investment Cycle

What drives this backward thinking? Fear. Sixty-four percent of CEOs invest in AI before they understand its value, terrified of falling behind competitors.

The cycle becomes predictable once it starts. Competitors announce AI initiatives. Headlines trumpet breakthroughs. Executives respond by authorizing pilots, hiring Chief AI Officers, and launching innovation labs.

But they skip the most basic question: What job should this AI do?

Without clear purpose, companies create solutions hunting for problems. They optimize metrics that don't matter and automate processes that shouldn't exist. The result is wasted money and confused organizations.

The Human Toll

This technology-first approach exacts a steep price on workers. Thirty-one percent of the workforce will need retraining within three years. Fifty-four percent of companies are hiring for AI roles that didn't exist last year.

Some organizations handle this transition thoughtfully. They retrain existing employees, hire specialists, deploy AI for routine tasks, and partner with consultants. But even these strategies fail without clear purpose driving them.

Meanwhile, companies measure what's easy to track: ROI, efficiency gains, technical performance. They don't ask whether people feel capable working with AI or how it changes team dynamics. This oversight explains why most initiatives stall.

Why Pilots Don't Scale

You can prove AI works in controlled tests. But scaling requires something entirely different: people must adopt the technology and weave it into their daily work. That depends on trust and understanding, not just technical capability.

Organizations stuck in pilot mode (60% of them, according to the study) focus almost entirely on proving the technology works. They ignore whether people want to work with it or know how to integrate it effectively.

This creates a paradox. Technical success in pilots becomes organizational failure at scale.

The Investment Paradox

Despite these disappointing results, CEOs are doubling down. They expect AI spending to grow 31% in 2025-2027, compared to 15% growth in 2024-2026.

This paradox reveals something important about executive thinking. Leaders believe their early struggles represent learning, not fundamental errors. They think bigger investments will yield better results.

CEOs remain surprisingly optimistic about future returns. Eighty-five percent expect positive ROI for AI efficiency projects by 2027, and 77% expect positive returns for growth initiatives. This optimism suggests executives see current failures as temporary setbacks rather than systemic problems.

But money won't solve problems rooted in approach. The real issue isn't insufficient investment; it's starting with technology instead of outcomes. One-third of companies deploying AI cannot even tell if it's working because they never defined success.

What Success Looks Like

The 25% of organizations getting results follow a different path entirely. They start with outcomes and work backward to determine if AI can help.

These companies ask human-centered questions first: What business problem are we solving? How will we measure improvement? What would success look like in concrete terms?

Only after answering these questions do they consider whether AI offers a solution. This approach naturally prevents the scaling problem because it focuses on measurable business value rather than technical capability.

The successful organizations built this discipline from day one. They established clear success metrics, defined human oversight requirements, and tested capabilities against specific business goals before expanding scope. Their Chief AI Officers now report an average ROI of 14% as programs move beyond pilots.

Breaking the Pattern

Organizations that escape this cycle share common characteristics. They treat AI implementation as organizational change, not a technology project. They maintain human control over strategic decisions. They design AI systems that enhance rather than replace human capabilities.

Most importantly, they view every AI decision through three lenses: what task will AI perform, how will people feel working with it, and how will it affect team relationships. This comprehensive view prevents the technical tunnel vision that dooms most implementations.

The Path Forward

The choice facing executives is stark but simple. Continue the technology-first approach that leaves 75% of organizations learning expensive lessons, or join the 25% getting measurable results by starting with human needs.

Stop copying competitors' AI announcements. Stop launching pilots without defining what success means for the people who will use these systems. Stop measuring only technical performance while ignoring human adoption.

Instead, start with problems your people face at work. Define clear outcomes. Ask what jobs need doing better, faster, or more reliably. Identify where people spend time on work that doesn't require human creativity or judgment.

Only then ask whether AI can help. Technology without purpose creates expensive complexity. Purpose without technology creates missed opportunities. But purpose-driven technology creates measurable value.

The organizations getting this right focus on clear outcomes, maintain human control over critical decisions, and build quality assurance from the start. They treat AI as a tool to serve human needs, not as an end in itself.

The difference lies not in the technology you choose, but in how clearly you define the business problems you want to solve and how thoughtfully you integrate AI into human work.

Source: https://www.ibm.com/thought-leadership/institute-business-value/c-suite-study/ceo