Workers adopt AI tools outside policy while leaders report productivity gains. Both patterns are real. A UK study found 71% of employees use unapproved AI tools at work, saving nearly 8 hours weekly. Yet only 32% worry about data privacy. Meanwhile, 66% of senior leaders across Europe report significant productivity improvements.

The gap between speed and safety narrows when organizations make approved tools convenient, define clear decision rights, and measure both time saved and time spent fixing errors. The bottleneck isn't model quality. It's the system around it.


Outcome-Driven AI Strategy: Leaders report gains but skip the measurement

Senior leaders say AI is working. The data suggests they can't yet show how.

An IBM survey of 3,500+ executives across Europe, the Middle East, and Africa found 66% report significant productivity gains from AI. Forty-one percent expect clear ROI within 12 months. Ninety-two percent believe agentic AI will deliver measurable returns within two years.

Larger firms report more success: 72% of enterprises with 1,001-5,000 employees cite productivity improvements, compared to 55% of SMEs. Public sector organizations also sit at 55%, suggesting constraints that go beyond resources.

Here's the problem: these are perceptions, not measurements. Before scaling pilots, teams need to answer three questions: What did we save? What did we fix? What did we ship? Most can't.

The executives aren't naive. Sixty-eight percent identify security, privacy, and ethics as the top barrier to scaling. They know the risks. What's missing is the discipline to track outcomes before and after deployment.

Some tracking is happening. Executives report employees redirect AI-saved time to developing new ideas (38%), strategic decision-making (36%), and creative work (33%). But systematic measurement—cycle time, error rates, customer satisfaction tied directly to AI—remains rare.

What works: Establish simple baselines before pilots. Measure time saved and hours spent fixing mistakes. Tie investments to business metrics, not activity counts. Simple baselines beat fancy dashboards.

What's next: As agentic AI scales, track task suitability, review load, and the share of outputs that ship without human rework. That turns perception into proof.

Source: IBM EMEA survey via PR Newswire


AI Governance & Decision Rights: Employees moved first—policy hasn't caught up

Workers aren't rebelling. They're problem-solving.

A Microsoft-commissioned study of 2,003 UK employees found 71% have used unapproved AI tools at work. Fifty-one percent do so weekly. They save an average of 7.75 hours per week on administrative tasks—extrapolated to 12.1 billion hours annually across the UK economy, valued at roughly £207 billion.
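The headline numbers hang together arithmetically. A quick back-of-envelope check, where the workforce size and weeks-per-year are assumptions inferred from the headline figures rather than numbers from the study itself:

```python
# Back-of-envelope check of the extrapolation in the Microsoft-commissioned
# study. WORKERS and WEEKS_PER_YEAR are assumed inputs, not study figures.
HOURS_SAVED_PER_WEEK = 7.75      # reported average per employee
WORKERS = 30_000_000             # assumed UK workers in scope
WEEKS_PER_YEAR = 52              # assumed working weeks

annual_hours = HOURS_SAVED_PER_WEEK * WORKERS * WEEKS_PER_YEAR
implied_hourly_value = 207e9 / annual_hours  # £207bn headline over hours

print(f"Annual hours saved: {annual_hours / 1e9:.1f} billion")
print(f"Implied value per hour: £{implied_hourly_value:.2f}")
```

Roughly 30 million workers at 7.75 hours a week reproduces the 12.1 billion hours, and the £207 billion valuation implies around £17 per hour saved.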

Why the shadow adoption? Forty-one percent use unapproved tools because they're familiar from personal life. Twenty-eight percent report their company provides no approved alternative. When sanctioned tools are slow or unavailable, people reach for the fastest option.

The risk is real but underestimated. Only 32% of employees worry about data privacy when using consumer AI tools. Just 29% worry about IT security. This isn't only a governance problem—it's a risk communication failure. A RiverSafe survey found one in five UK companies experienced data leakage from generative AI use.

This happens when decision rights are vague. Who decides what data can go where? Who approves exceptions? Who pays for remediation? If the answers live nowhere, the risks live everywhere.

What works: CFOs and CISOs partner to identify unapproved use, then offer approved alternatives that match consumer convenience. Track remediation time as a risk metric. Publish a simple allow/deny matrix for common data types. Provide clear exception paths with time-bound approvals. Adoption shifts to sanctioned tools when they're easier than the alternatives.
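An allow/deny matrix can be as simple as a default-deny lookup table. A minimal sketch, where the data classes, tool tiers, and decisions are illustrative placeholders rather than a recommended policy:

```python
# Minimal sketch of an allow/deny matrix for common data types.
# All classifications and decisions below are illustrative assumptions.
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    REDACT = "allow with automated redaction"
    DENY = "deny; use approved alternative or request an exception"

# (data type, tool tier) -> decision
MATRIX = {
    ("public", "consumer"): Decision.ALLOW,
    ("public", "approved"): Decision.ALLOW,
    ("internal", "consumer"): Decision.DENY,
    ("internal", "approved"): Decision.REDACT,
    ("customer_pii", "consumer"): Decision.DENY,
    ("customer_pii", "approved"): Decision.DENY,
}

def check(data_type: str, tool_tier: str) -> Decision:
    """Default-deny lookup: anything not explicitly listed is denied."""
    return MATRIX.get((data_type, tool_tier), Decision.DENY)

print(check("internal", "approved").value)
```

The point of publishing the table is that employees can answer "can this data go there?" in seconds, which is exactly the convenience that shadow tools currently win on.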

What's next: More firms will move from "block by default" to "permit with protections"—tighter data scopes, automated redaction, and audit logs tied to review workflows. That's the only scalable way to align speed with safety.

Sources: Microsoft UK research; AI Magazine coverage


Human–AI Collaboration Design: Higher-wage workers face higher exposure

The conventional story says AI threatens low-skill work first. The data says otherwise.

Research from the Washington Center for Equitable Growth found AI exposure is highest among workers with higher wages and higher education. Their tasks are cognitive, writing-heavy, and analytical—exactly what large language models handle well. Women face slightly higher exposure than men. Asian American workers show higher exposure than other racial groups.

The wage effect depends on how AI is used. When AI augments work—helping humans perform tasks better—wages increase by 2.5% for every 1% increase in exposure. When AI automates work—replacing human tasks—wages decrease by 2.3% for every 1% increase. The overall effect is marginally positive (+0.5%) because these forces offset each other.
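The offsetting forces can be made concrete. In this sketch the two elasticities come from the research above, while the exposure level and the augmentation/automation split for any given role are illustrative assumptions:

```python
# Sketch of the augmentation/automation wage arithmetic. The elasticities
# are from the Equitable Growth research; the exposure level and the
# augmentation share passed in are illustrative assumptions.
AUGMENT_ELASTICITY = 2.5    # wage change (%) per 1% exposure when augmenting
AUTOMATE_ELASTICITY = -2.3  # wage change (%) per 1% exposure when automating

def net_wage_effect(exposure_pct: float, augment_share: float) -> float:
    """Net wage change (%) for a role with the given AI exposure,
    split between augmenting and automating uses."""
    automate_share = 1.0 - augment_share
    return exposure_pct * (augment_share * AUGMENT_ELASTICITY
                           + automate_share * AUTOMATE_ELASTICITY)

# A role with 10% exposure where AI mostly augments rather than automates:
print(f"{net_wage_effect(10.0, 0.6):+.1f}%")  # → +5.8%
```

Because the elasticities nearly cancel, small shifts in how a role uses AI flip the sign of the wage effect, which is why the design choices below matter more than the exposure number itself.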

This matters for design. A marketing writer and a financial analyst both use AI to draft documents, but their review standards and error costs differ. Blanket policies and generic training fail when different roles need different rules.

What works: Create simple task guidelines by role. Distinguish tasks where AI generates drafts, summarizes information, checks work, or should never make decisions. Set accuracy thresholds based on error costs. Require source citations for analytical work. Track time saved, error rates, and quality scores by role to ensure gains don't concentrate inequitably.

What's next: Track exposure and outcomes by role, investing deliberately in areas with highest overlap and highest error costs. This creates tension—preventing inequality may require investing more in already-privileged roles. That demands transparent communication.

Source: Washington Center for Equitable Growth


Employer-Driven Adoption: Small businesses show what aligned adoption looks like

Small business owners are adopting AI rapidly—and employees are on board.

A survey of 530 U.S. small business employers found 88% use AI tools (averaging 4.8 tools per business). Seventy-three percent say these tools are important to competitiveness and growth. Employee response, as reported by employers, is notably positive: 58% report strong engagement, 36% neutral reception, and only 6% negative reactions.

This contrasts sharply with the shadow AI pattern in larger organizations. The difference appears to be alignment. In small businesses, the person deciding on tools often sets the policy. There's no shadow AI problem when adoption and governance move together.

The business impact is measurable. Forty-one percent of owners report AI frees time for strategic work. Thirty-five percent redirect resources to revenue-generating projects. Thirty-three percent report improved customer engagement.

This speed doesn't transfer directly to enterprise scale. Small businesses move faster with fewer stakeholders. Enterprises face integration complexity, security review requirements, and change management across distributed teams.

What works: Bring this experimental discipline inside larger organizations through structured sandboxes. Create safe environments with approved tools. Run two-week sprints where employees demonstrate real applications. Capture three metrics: time saved, hours spent fixing errors, and whether stakeholders accept the output as-is. Keep a simple keep/drop/scale log. Share validated use cases in a searchable library.
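The sprint log above needs only three fields and a decision rule. A minimal sketch, where the keep/drop/scale thresholds are illustrative assumptions rather than figures from the survey:

```python
# Sketch of the sandbox sprint log: three metrics per pilot plus a
# keep/drop/scale call. The thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SprintResult:
    use_case: str
    hours_saved: float      # per week, measured against the baseline
    hours_fixing: float     # per week spent correcting AI output
    accepted_as_is: float   # share of outputs stakeholders shipped unchanged

def verdict(r: SprintResult) -> str:
    net = r.hours_saved - r.hours_fixing
    if net <= 0:
        return "drop"    # costs more than it saves
    if r.accepted_as_is >= 0.8 and net >= 2:
        return "scale"   # clear win with low review load
    return "keep"        # useful; watch it for another sprint

log = [
    SprintResult("meeting summaries", 3.0, 0.5, 0.9),
    SprintResult("contract drafting", 1.0, 1.5, 0.4),
]
for r in log:
    print(r.use_case, "->", verdict(r))
```

Note that the rule subtracts fixing time before anything else: a use case that saves hours but generates more rework than it removes gets dropped, which is exactly the measurement gap the enterprise surveys above leave open.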

What's next: Make the approved path the easiest path. Publish role-specific examples so employees know which use cases will survive security review. Coordinate across teams to prevent duplicated effort.

Source: Small Business & Entrepreneurship Council survey


The middle path

The pattern holds: Employees move first. Leaders report gains but lack measurement systems. Higher-wage workers face the highest exposure, requiring role-specific design. Small businesses achieve alignment that eludes larger organizations.

These insights connect. Leaders perceive gains but can't measure them systematically because employees adopt tools outside IT visibility. This works temporarily until quality failures expose the measurement gap. Small businesses avoid this when adoption and governance move together.

The gap isn't the model. It's the system: clear outcomes, accessible baselines, approved tools that match consumer convenience, role-specific guidelines, and measurement that tracks both time saved and time spent fixing.

The trade-off is real. Clamp down hard, and you slow discovery. Look away, and you invite data leakage and rework. The middle path: Define outcomes first. Publish a simple allow/deny matrix. Give people approved tools that are genuinely easy to use. Measure time saved alongside time spent fixing. For knowledge work, add task guidelines and accuracy standards by role.

Do that, and perception begins to match reality. You keep the speed, reduce the risk, and make the gains compound.

Until next time, Matthias


Artificial intelligence is reshaping how we work. But the real challenge isn't technical. It's human. AI isn't just a tool — it's a new team member. The Collaboration Brief curates weekly insights on how people work with AI and with each other in this new reality.

P.S. This newsletter practices what it preaches. AI agents handle research, fact-checking, and drafting. I curate sources, validate claims, and make final calls on quality. Human judgment at every stage.

https://www.jtbd-to-ai.com/