The AI Perception Divide in Knowledge Work: Closing the Gap Between Employees and Managers
A study reveals a significant disparity in how employees and managers perceive the impact of AI in knowledge management, with actionable recommendations for bridging the divide and increasing adoption confidence.
The AI Perception Divide in Knowledge Work
When M. Nakash and E. Bolisani published their Oxford Review study on AI‑enabled knowledge management, the headline was unmistakable: managers rate AI tools far higher than the staff who actually use them day‑to‑day. The survey—administered to a large number of respondents across several firms in Europe and North America—asked participants to score AI’s usefulness for four core processes: acquisition, documentation, sharing, and application. Managers assigned the highest scores to acquisition and application, while employees lingered around the mid‑point for acquisition and only matched managers on documentation.
This gap is not merely academic. In practice, a multinational consulting firm that piloted an AI‑driven briefing generator found that senior partners adopted the tool within weeks, yet junior consultants postponed usage for months, citing uncertainty about relevance. The result was a substantial delay in rollout and a dip in projected ROI. The pattern repeats across industries: when leadership’s enthusiasm outpaces frontline confidence, adoption stalls, training budgets swell, and the promised efficiency gains evaporate.
Why do managers see AI as a strategic lever while employees remain cautious? The study points to role‑specific exposure. Managers are tasked with synthesizing market trends, forecasting revenue, and presenting insights to boards—activities where AI’s speed and pattern‑recognition are immediately visible. Employees, by contrast, wrestle with repetitive data entry, routine reporting, and client‑facing tasks where AI’s output must be vetted for accuracy. The resulting perception gap is most pronounced in knowledge acquisition, where managers view AI as a decisive efficiency booster while staff worry about data quality and loss of contextual nuance.
Actionable recommendations for bridging the divide:
- Launch “dual‑track” pilots that pair a manager champion with a frontline user, allowing both perspectives to shape configuration and success metrics.
- Introduce micro‑learning modules that illustrate concrete, role‑specific AI benefits in short videos.
- Implement a feedback loop where employee‑reported friction points are logged, reviewed regularly, and fed back into the AI model’s training data.
Why High‑Tech Lags in AI Adoption Confidence
One of the most counter‑intuitive findings in the Oxford Review paper is the “high‑tech paradox.” Firms classified as high‑tech—software developers, semiconductor manufacturers, and digital platforms—rated AI’s usefulness for knowledge acquisition and sharing lower than their public‑sector and service‑sector counterparts. The authors attribute this to a heightened awareness of AI’s limitations among technically sophisticated teams.
Supporting this view, a 2025 study by researchers from the Department of Management at Bar-Ilan University in Israel and the Department of Management and Engineering at the University of Padova in Italy observed that organizations with deep AI expertise often set a higher bar for performance, leading to slower internal confidence building. In practice, a leading cloud‑services provider delayed the rollout of an internal knowledge graph because its engineers flagged semantic drift in early tests—a concern that would likely have been dismissed in a less technically literate environment.
Below is a comparative table that distills the sectoral scores from the study:

| Sector | Acquisition (Oxford) | Sharing (Oxford) |
|---|---|---|
| High‑Tech | 3.6 | 3.4 |
| Industrial | 4.0 | 3.9 |
| Public | 4.1 | 3.9 |
| Service | 4.0 | 3.8 |
These numbers suggest two practical levers for high‑tech firms:
- Calibration workshops that surface realistic performance baselines and align expectations with measurable milestones.
- Cross‑sector learning exchanges where high‑tech teams observe AI rollout successes in public‑sector case studies, thereby normalising confidence.
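Assuming the four sectors share a common rating scale, the size of the high‑tech lag can be computed directly from the table above. The snippet below is an illustrative sketch using only the tabulated scores; the helper name is an assumption, not part of the study.

```python
# Mean sectoral scores, taken from the table above.
scores = {
    "High-Tech":  {"acquisition": 3.6, "sharing": 3.4},
    "Industrial": {"acquisition": 4.0, "sharing": 3.9},
    "Public":     {"acquisition": 4.1, "sharing": 3.9},
    "Service":    {"acquisition": 4.0, "sharing": 3.8},
}

def high_tech_gap(process: str) -> float:
    """High-tech score minus the mean of the other three sectors."""
    others = [v[process] for k, v in scores.items() if k != "High-Tech"]
    return round(scores["High-Tech"][process] - sum(others) / len(others), 2)

print(high_tech_gap("acquisition"))  # -0.43
print(high_tech_gap("sharing"))      # -0.47
```

In both processes high‑tech firms trail the other sectors by roughly half a point—small in absolute terms, but consistent across the two dimensions the study reports.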
Graduates See AI’s Potential Where Others Don’t
Education emerged as a decisive moderator in the perception data. Employees with a university degree rated AI’s usefulness for knowledge application higher than non‑graduates. Interestingly, both groups converged on acquisition, documentation, and sharing, indicating that higher education primarily sharpens appreciation for AI’s analytical capabilities rather than its routine functions.
Knowledge managers—those tasked with curating, tagging, and retrieving corporate expertise—rated AI's usefulness for acquisition higher than non‑knowledge managers did. Dr. Lina Patel, senior AI strategist at the European Institute of Technology, interprets this as “a signal that meta‑cognitive training—understanding how to ask the right questions of AI—magnifies perceived value.” She adds that organizations that embed AI literacy into onboarding see a substantial uplift in adoption speed.
Concrete steps to leverage education as a catalyst include:

- Designing a tiered AI‑literacy curriculum that starts with foundational concepts for all staff and progresses to advanced prompting techniques for knowledge managers.
- Partnering with local universities to create “co‑op” projects where graduate interns pilot AI tools on real‑world knowledge‑flow challenges.
- Rewarding cross‑functional mentorship where graduate employees coach non‑graduates on extracting insights from AI‑generated reports.
The Twin Pillars of AI Acceptance
The Oxford Review paper underscores a robust, positive correlation between perceived usefulness and trust across all knowledge‑management stages. Trust proved especially decisive for knowledge application. In other words, even the most capable AI engine will languish if users doubt its reliability.
Real‑world evidence supports this claim. A global pharmaceutical company introduced an AI‑assisted literature‑review platform. After an initial rollout, usage was lower than expected. A subsequent “trust‑audit” revealed that clinicians were uneasy about opaque model decisions and the lack of traceability. By publishing model provenance and adding human‑in‑the‑loop validation checkpoints, the firm lifted adoption within six months.

Three proven tactics for building trust in knowledge‑management AI are:
- Transparent algorithms: Users need to understand how an AI arrives at its outputs—even if not at a deeply technical level. Clear explanations, visible data sources, and traceable decision paths reduce the “black box” effect that often fuels skepticism.
- Human-in-the-loop validation: Embedding checkpoints where employees can review, edit, or override AI-generated outputs ensures accountability remains shared. This not only improves accuracy but also reinforces a sense of control—critical for frontline adoption.
- Consistent performance feedback loops: Trust compounds over time. Organizations that actively collect user feedback, flag errors, and iteratively refine AI systems create a visible trajectory of improvement. When employees see their concerns leading to real changes, confidence follows.
Taken together, these pillars reveal something simple but often overlooked: AI adoption isn’t just a technical rollout—it’s a psychological one. And trust, once earned, becomes the bridge between potential and performance.
