Your surveys run every 6-12 months ·
Your training runs every year ·
Your AI tells you it sees everything.

None of them know what your teams are actually experiencing.

Three investments. Three budgets. A structural gap between all of them that has been compounding on your P&L for years. Invisible, unmeasured, and almost always larger than expected.

$8.3M
Average annual cost of the Manager Gap · based on 300 people

Across six impact layers.

Calculate My MGI Score
Results stay in your browser · 4 minutes

What is the Manager Gap?

The Manager Gap is the structural space between what managers can currently see about their teams and what their teams are actually experiencing. It is created by three parallel systems that operate independently and produce no shared picture of team health: surveys that measure safety instead of reality, training programs disconnected from live team signal, and AI monitoring tools that capture managed performance on watched channels.

The Manager Gap Index (MGI) is the first diagnostic tool that calculates the cost of this gap. It produces an MGI Score from 0 to 100 (a higher score means a narrower gap) and a dollar figure across six impact layers, using your specific inputs and published research multipliers.

Based on MGI assessments to date, the average first-time score for a mid-market company (200 to 500 employees) is 33 out of 100, placing it in the Critical band with an average annual Manager Gap cost of $8.3M. Most of that cost is invisible on the day the score is taken.

Three systems. Three budgets.
No connection between them.

Most organisations believe they have a managed people strategy. They have surveys that run every quarter. They have training programs for their managers. Many now have AI tools that monitor communications to understand team health.

All three have budget lines. All three have ownership. All three have been in place for years.

Not one of them is telling the organisation what its teams are actually experiencing. And not one of them knows what the others found out.

System 01

Surveys

Collects stated signal. Produces a score. The data reflects safety, not reality.

System 02

Training

Selected from a catalogue. Delivered in isolation. Works on a problem it cannot see.

System 03

AI Monitoring

Captures everything on channels employees know are watched. The real conversation migrated the day the system was deployed.

Three systems with no connection and no shared truth

Why Employee Surveys Produce Unreliable Data

Your survey score is not a measure of what your teams are experiencing. It is a measure of how safe your employees felt when they answered the question. Those are not the same number. But they are being treated as if they are.

01
Small team triangulation. In a team of six, one outlier response is not anonymous. Every member of that team can identify who scored differently. So nobody scores differently. The data that surfaces is not a measure of what the team experiences. It is a measure of what the team is comfortable admitting.
02
Appraisal proximity. Most surveys run within 60 days of performance reviews. Employees have a direct financial incentive to manage their responses. You do not score your manager 3 out of 10 on communication if you believe that score will be in the room when your pay review happens. This is not paranoia. It is rational self-protection.
03
Positive skew compounding over time. Each survey cycle, the safe answer drifts slightly more positive. Leadership sees gradual improvement. The underlying team signal does not improve. It becomes less visible. The gap between the reported score and the actual experience widens every quarter.
A survey measures how safe your employees feel answering a question about their manager.

That is not the same as what your teams are actually experiencing.

But it is being treated as if it is.
The structural flaw in survey-based people strategy

Clover ERA collects team-level patterns, not individual responses. Nothing to triangulate. Genuine anonymity by architecture, not by procedural promise.

Your training program is working on a problem it cannot see.

Manager development programs are not designed from your survey data. They are selected from vendor catalogues, commissioned in response to a budget cycle, or built around what managers generally struggle with, not what your managers are specifically struggling with right now.

A manager attends a workshop on giving feedback. They return to a team where the problem is not feedback. It is that two people have stopped raising issues because the last three times they did, nothing changed. The training worked on the wrong problem. Not because the content was wrong. Because it had no access to the live team signal it needed to be useful.

Harvard Business School named this in 2016: most training spend fails not because the content is wrong but because the learning is entirely disconnected from the actual conditions managers return to.

Source: Beer, Finnstrom & Schrader. Harvard Business School Working Paper 16-121 (2016). The Great Training Robbery.

Your AI monitoring tool is capturing everything your employees are willing to say on a channel they know is being watched.

A new category of tool has entered the market. It monitors Teams messages, email threads, Slack activity, and call transcripts. It produces dashboards showing communication patterns, collaboration health, and sentiment scores. Vendors describe it as the future of employee understanding.

There is a structural problem with this approach that its vendors do not advertise.

When employees discover their communications are being monitored, and they always discover this, they do not adjust one answer on one questionnaire. They adjust how they communicate on every monitored channel, permanently.

The Teams chat that used to surface problems candidly becomes procedurally correct. The email that used to carry genuine disagreement becomes professionally managed. The Slack message that used to flag a concern becomes neutral.

The real conversation moves to WhatsApp. To a voice note. To the car park. To any channel the system cannot see.

The organisation now has a surveillance infrastructure that produces a detailed, confident, and structurally false picture of a healthy, communicating team, because every data point was generated by someone who knew they were being watched.

This is not a technology limitation. It is a behavioural certainty. The moment an organisation deploys communication surveillance, it destroys the candid informal communication that was its most reliable early warning system, and replaces it with managed performance on monitored channels.

AI surveillance tools do not capture what employees are experiencing.

They capture what employees are willing to say on a channel they know is being watched.

Those are not the same dataset.
The structural flaw in AI-based people intelligence

The migration to unmonitored channels is the signal. When your employees are using WhatsApp for conversations that used to happen on Teams, they are telling you something precise about how they feel about the monitored channel. That signal is invisible to your AI dashboard. It is not invisible to the MGI.

Clover ERA collects team-level behavioural patterns through anonymous daily check-ins. Not communication content. Not message sentiment. Not productivity tracking. There is nothing for employees to perform and no channel to migrate away from.

The survey tells you there is a problem. The training attempts a fix. The AI dashboard tells you everything is fine. None of them know what the others found out. That gap has been on your P&L the whole time.

The Manager Gap does not only cost you people who leave.

Five of the six impact layers are costs from employees who are still on your payroll. The disengagement, the suppressed ideas, the performance drag from managers who cannot see their teams. Those are happening today, in an organisation whose survey score suggests things are broadly fine and whose AI dashboard shows green.

01

Regrettable Attrition

0.5x to 2x annual salary per departure
The employees you wanted to keep. Gallup confirms 75% of voluntary turnover traces to management, not compensation, not the role. The people leaving are not leaving the company. They are leaving the gap.
Source: Gallup (2025 State of the Global Workplace)
02

Disengagement Tax

18 to 34% of salary per disengaged employee, annually
The employees who show up but are not fully present. 70% of team engagement variance is attributable to the manager. When the manager is operating on skewed survey data or false AI confidence, the disengagement builds unseen.
Source: Gallup (2025). $438B in global productivity lost to manager disengagement in 2024.
03

Manager Drag

10 to 52% of a team's productive time, lost
Managers without live team signal create drag across the organisation. Their teams manage upward. Their peers avoid collaboration. Their manager absorbs the overhead. Poor management costs the US economy over $500 billion annually. From people staying, not from people leaving.
Source: Perceptyx (2025); Training Industry (2020)
04

Promotion Risk

60% of new managers fail within 24 months
Your best engineer is now a struggling manager. They were given a survey score of 7.4, a training program that cannot see their team, and an AI dashboard showing green. None of these tell them, or their organisation, what is actually happening. 60% of new managers receive no meaningful support when they are promoted.
Source: Gartner; Center for Creative Leadership (2024); Wharton School (2024)
05

Innovation Suppression

31% less innovation in teams with low psychological safety
Teams where the Manager Gap is wide go quiet. They stop raising problems because nothing changed the last time they did, and the dashboard said everything was fine. AI surveillance accelerates this: when employees know their messages are monitored, the informal conversation that surfaces problems earliest migrates to channels nobody can see.
Source: Google Project Aristotle (2015, 2022). 85% of employees withhold important information from their manager (Edmondson & Detert).
06

Customer Impact

Engaged teams sell 20% more than disengaged teams
The Manager Gap does not stay internal. Sales teams hit lower quota. Customer success managers break account relationships mid-cycle. The AI dashboard shows normal communication volume and calls it healthy collaboration.
Source: Gallup; Bain & Company. 5% retention improvement increases profits 25 to 95%.
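The six layers above can be sketched as a simple cost model. This is an illustrative TypeScript sketch using midpoints of the published ranges quoted in the layers, not Clover ERA's actual MGI formula; the input shares (the regrettable fraction of leavers, the disengaged fraction of staff) and all names are assumptions for the example. The remaining four layers follow the same shape: an exposure count times a published multiplier.

```typescript
// Illustrative cost model for two of the six impact layers.
// Multipliers are midpoints of the ranges quoted above; everything
// else (names, input shares) is an assumption for this sketch.

interface CostInputs {
  headcount: number;          // total employees
  avgSalary: number;          // average annual salary
  voluntaryTurnover: number;  // e.g. 0.12 for 12% per year
  regrettableShare: number;   // assumed share of leavers you wanted to keep
  disengagedShare: number;    // assumed share of staff who are disengaged
}

const ATTRITION_MULT = 1.25;  // midpoint of 0.5x to 2x salary per departure
const DISENGAGE_MULT = 0.26;  // midpoint of 18% to 34% of salary, annually

// Layer 01: regrettable attrition — cost of the people you wanted to keep.
function attritionCost(i: CostInputs): number {
  const regrettableLeavers =
    i.headcount * i.voluntaryTurnover * i.regrettableShare;
  return regrettableLeavers * i.avgSalary * ATTRITION_MULT;
}

// Layer 02: disengagement tax — cost of the people still on the payroll.
function disengagementTax(i: CostInputs): number {
  return i.headcount * i.disengagedShare * i.avgSalary * DISENGAGE_MULT;
}
```

For a 300-person company on an average salary of $80K, with 12% voluntary turnover of which half is regrettable and 30% of staff disengaged, these two layers alone come to roughly $3.7M a year: $1.8M in attrition and about $1.87M in disengagement tax.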

Three systems convinced leadership everything was broadly fine.
The other five cost layers were accumulating the whole time.

$8.3M
Average annual cost across all six layers
Mid-market companies, 200 to 500 employees.

Four minutes. Five sections.
One number that tells you where the gap is.

The MGI does not measure how people feel about working at your company. It does not rely on survey data, training feedback, or AI-monitored communications. It measures the cost of the gap between your current infrastructure and what your teams are actually experiencing.

01
Visibility

How much can your managers currently see about their team's experience. Five questions about signal frequency, format, and anonymity architecture, plus one about AI communication monitoring.

02
Action

When managers see a problem, do they know what to do. Four questions about whether specific, contextual guidance exists in the flow of work.

03
Accountability

Is anyone tracking whether managers act. Four questions about whether actions are logged, followed up, and connected to the team signal that follows.

04
Strain

How much pressure is already on the system. Four questions about restructures, growth rate, promotion patterns, and current turnover.

05
Cost Exposure

Four numbers that produce your full cost across six impact layers. Headcount, average salary, voluntary turnover rate, and customer-facing percentage.

Your MGI Score

0 to 100. A higher score means a narrower gap. Five classifications from Exposed (0) to Closed (100). Positioned against the mid-market benchmark of 26 to 42. Includes flags where specific risk patterns are detected, including Surveillance False Confidence.

Your Total Cost

Six impact layers. Your inputs. Named research multipliers. Not an industry average. Your number.

Where the gap is widest

Dimension-by-dimension breakdown. Which layer is driving the most cost. Which intervention closes the gap fastest.

Where do you sit?

Exposed · 0-19
Critical · 20-39
Open · 40-59
Narrow · 60-79
Closed · 80-100

Most mid-market companies score between 26 and 42 on their first assessment.

The MGI Score runs from 0 to 100. A higher score means a narrower gap. A lower score means a wider gap and a higher cost.

Exposed

0 to 19

Maximum gap. All three systems are simultaneously creating false confidence while the cost compounds. Immediate intervention warranted.

Over $10M

Critical

20 to 39

Wide gap. Survey confidence is false. Training is working on the wrong problem. AI monitoring is capturing managed performance, not genuine signal. High probability of accelerating attrition in the next 90 days.

$5M to $10M

Open

40 to 59

Significant blind spots. Survey data and AI dashboards may be masking the real team signal. Regrettable attrition is occurring and under-measured.

$2M to $5M

Narrow

60 to 79

Addressable gaps. Targeted intervention closes them before they compound. At this score, the cost is close to baseline. Every organisation that runs on human management carries this cost. The question is whether you are managing it or ignoring it.

$500K to $2M

Closed

80 to 100

The gap is actively managed. Cost is at its structural minimum. All organisations carry some baseline cost. This band means it is not compounding.

Under $500K
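The five bands above reduce to a small classification function. This is a sketch of the thresholds as published on this page; the function and type names are illustrative, not part of the MGI itself.

```typescript
// Illustrative mapping from an MGI Score (0 to 100, higher = narrower gap)
// to its band, using the thresholds published above.
type Band = "Exposed" | "Critical" | "Open" | "Narrow" | "Closed";

function classifyScore(score: number): Band {
  if (score < 0 || score > 100) {
    throw new RangeError("MGI Score must be between 0 and 100");
  }
  if (score <= 19) return "Exposed";
  if (score <= 39) return "Critical"; // the mid-market average (33) lands here
  if (score <= 59) return "Open";
  if (score <= 79) return "Narrow";
  return "Closed";
}
```

`classifyScore(33)` returns `"Critical"`, matching the average first-time mid-market score quoted on this page.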

The average first-time score for a mid-market SaaS or Fintech company is 33. That is Critical. Annual cost: approximately $8.3M. Most of it invisible on the day the score is taken, hidden behind a 7.4 survey score, a completed training program, and an AI dashboard showing green.

Find out what three parallel systems are costing you, and what none of them can see.

The Manager Gap Index calculates your full cost across six impact layers, your MGI Score against the mid-market benchmark, and flags whether your current people intelligence infrastructure is producing genuine signal or managed performance.

The number you are about to see is almost certainly larger than your surveys, your training feedback, and your AI dashboard have ever suggested it could be.

That is the point.
Calculate My MGI Score
Four minutes · Results stay in your browser · Built on published research
Already have your score? Discuss your MGI analysis to explore what closing the gap looks like.

Frequently Asked Questions

What is the Manager Gap?
The Manager Gap is the structural space between what managers can currently see about their teams and what their teams are actually experiencing. It is created by three parallel systems that operate independently and produce no shared picture of team health: surveys that measure how safe employees feel answering a question (not what they actually experience), training programs selected from catalogues with no access to live team signal, and AI monitoring tools that capture managed performance on channels employees know are being watched. The Manager Gap exists in every organisation that runs on human management. The question is not whether it exists. It is whether you are measuring it or ignoring it.

What is the Manager Gap Index (MGI)?
The Manager Gap Index (MGI) is a diagnostic tool built by Clover ERA that calculates the true cost of the Manager Gap across six impact layers: regrettable attrition, disengagement tax, manager drag, promotion risk, innovation suppression, and customer impact. It takes four minutes, uses five sections of questions covering visibility, action, accountability, strain, and cost exposure, and produces a single MGI Score from 0 to 100 (higher is better, 100 = fully closed gap) with a dollar cost figure based on your specific inputs and published research multipliers from Gallup, Harvard Business School, Google Project Aristotle, and Gartner.

How is the MGI Score calculated?
The MGI Score is calculated across five dimensions. Visibility measures how much your managers can currently see about their team's experience, including signal frequency, format, anonymity architecture, and whether AI communication monitoring is in place. Action measures whether specific, contextual guidance exists when problems surface. Accountability measures whether actions are logged, followed up, and connected to the team signal that follows. Strain measures current pressure from restructures, growth rate, promotion patterns, and turnover. Cost Exposure applies your headcount, average salary, voluntary turnover rate, and customer-facing percentage to published research multipliers for each of the six cost layers. The score ranges from 0 (Exposed) to 100 (Closed), with most mid-market companies scoring between 26 and 42 on their first assessment.

Why do employee surveys produce unreliable data?
Employee surveys produce unreliable data because of three structural flaws that compound over time. First, small team triangulation: in a team of six, one outlier response is identifiable, so nobody scores differently. Second, appraisal proximity: most surveys run within 60 days of performance reviews, giving employees a direct financial incentive to manage their responses upward. Third, positive skew compounding: each survey cycle, the safe answer drifts slightly more positive while the underlying team experience does not change. The result is a score that measures psychological safety around the survey itself, not what teams are actually experiencing. This is not a design flaw that better questions can fix. It is a structural limitation of the survey format.

What does the Manager Gap cost?
Based on MGI assessments to date, the average first-time score for a mid-market company with 200 to 500 employees is 33 out of 100 (where 100 is best), placing it in the Critical band. The average annual cost across all six impact layers is approximately $8.3M. Five of the six cost layers come from employees who are still on the payroll: disengagement tax (18 to 34% of salary per disengaged employee), manager drag (10 to 52% of productive time lost), promotion risk (60% of new managers fail within 24 months), innovation suppression (31% less innovation in low psychological safety teams), and customer impact (engaged teams sell 20% more). Only one layer, regrettable attrition, measures the cost of people who have already left.