Your surveys run every quarter. Your training runs every year. Your AI tells you it sees everything.

None of them know what your teams are actually experiencing.

Three investments. Three budgets. A structural gap between all of them that has been compounding on your P&L for years. Invisible, unmeasured, and almost always larger than expected.

$6.2M
Average annual cost of the manager gap — mid-market companies

Across six impact layers.

Calculate My MGI Score
Results stay in your browser · 4 minutes

Three systems. Three budgets.
No connection between them.

Most organisations believe they have a managed people strategy. They have surveys that run every quarter. They have training programmes for their managers. Many now have AI tools that monitor communications to understand team health.

All three have budget lines. All three have ownership. All three have been in place for years.

Not one of them is telling the organisation what its teams are actually experiencing. And not one of them knows what the others found out.

System 01

Surveys

Collects stated signal. Produces a score. The data reflects safety, not reality.

System 02

Training

Selected from a catalogue. Delivered in isolation. Works on a problem it cannot see.

System 03

AI Monitoring

Captures everything on channels employees know are watched. The real conversation migrated the day the system was deployed.

Three systems with no connection and no shared truth

Your survey score is not what your teams are actually experiencing.

It is a measure of how safe your employees felt when they answered the question. Those are not the same number. But they are being treated as if they are.

01
Small team triangulation. In a team of six, one outlier response is not anonymous. Every member of that team can identify who scored differently. So nobody scores differently. The data that surfaces is not a measure of what the team experiences. It is a measure of what the team is comfortable admitting.
02
Appraisal proximity. Most surveys run within 60 days of performance reviews. Employees have a direct financial incentive to manage their responses. You do not score your manager 3 out of 10 on communication if you believe that score will be in the room when your pay review happens. This is not paranoia. It is rational self-protection.
03
Positive skew compounding over time. Each survey cycle, the safe answer drifts slightly more positive. Leadership sees gradual improvement. The underlying team signal does not improve. It becomes less visible. The gap between the reported score and the actual experience widens every quarter.
A survey measures how safe your employees feel answering a question about their manager.

That is not the same as what your teams are actually experiencing.

But it is being treated as if it is.
The structural flaw in survey-based people strategy

Clover ERA collects team-level patterns, not individual responses. Nothing to triangulate. Genuine anonymity by architecture, not by procedural promise.

Your training programme is working on a problem it cannot see.

Manager development programmes are not designed from your survey data. They are selected from vendor catalogues, commissioned in response to a budget cycle, or built around what managers generally struggle with, not what your managers are specifically struggling with right now.

A manager attends a workshop on giving feedback. They return to a team where the problem is not feedback. It is that two people have stopped raising issues because the last three times they did, nothing changed. The training worked on the wrong problem. Not because the content was wrong. Because it had no access to the live team signal it needed to be useful.

Harvard Business School named this in 2016: most training spend fails not because the content is wrong but because the learning is entirely disconnected from the actual conditions managers return to.

Source: Beer, Finnstrom & Schrader. Harvard Business School Working Paper 16-121 (2016). The Great Training Robbery.

Your AI monitoring tool is capturing everything your employees are willing to say on a channel they know is being watched.

A new category of tool has entered the market. It monitors Teams messages, email threads, Slack activity, and call transcripts. It produces dashboards showing communication patterns, collaboration health, and sentiment scores. Vendors describe it as the future of employee understanding.

There is a structural problem with this approach that its vendors do not advertise.

When employees discover their communications are being monitored, and they always discover this, they do not adjust one answer on one questionnaire. They adjust how they communicate on every monitored channel, permanently.

The Teams chat that used to surface problems candidly becomes procedurally correct. The email that used to carry genuine disagreement becomes professionally managed. The Slack message that used to flag a concern becomes neutral.

The real conversation moves to WhatsApp. To a voice note. To the car park. To any channel the system cannot see.

The organisation now has a surveillance infrastructure that produces a detailed, confident, and structurally false picture of a healthy, communicating team, because every data point was generated by someone who knew they were being watched.

This is not a technology limitation. It is a behavioural certainty. The moment an organisation deploys communication surveillance, it destroys the candid informal communication that was its most reliable early warning system, and replaces it with managed performance on monitored channels.

AI surveillance tools do not capture what employees are experiencing.

They capture what employees are willing to say on a channel they know is being watched.

Those are not the same dataset.
The structural flaw in AI-based people intelligence

The migration to unmonitored channels is the signal. When your employees are using WhatsApp for conversations that used to happen on Teams, they are telling you something precise about how they feel about the monitored channel. That signal is invisible to your AI dashboard. It is not invisible to the MGI.

Clover ERA collects team-level behavioural patterns through anonymous daily check-ins. Not communication content. Not message sentiment. Not productivity tracking. There is nothing for employees to perform and no channel to migrate away from.

The survey tells you there is a problem. The training attempts a fix. The AI dashboard tells you everything is fine. None of them know what the others found out. That gap has been on your P&L the whole time.

The Manager Gap does not only cost you people who leave.

Five of the six impact layers are costs from employees who are still on your payroll. The disengagement, the suppressed ideas, the performance drag from managers who cannot see their teams. Those are happening today, in an organisation whose survey score suggests things are broadly fine and whose AI dashboard shows green.

01

Regrettable Attrition

0.5x to 2x annual salary per departure
The employees you wanted to keep. Gallup confirms 75% of voluntary turnover traces to management, not compensation, not the role. The people leaving are not leaving the company. They are leaving the gap.
Source: Gallup (2025 State of the Global Workplace)
02

Disengagement Tax

18 to 34% of salary per disengaged employee, annually
The employees who show up but are not fully present. 70% of team engagement variance is attributable to the manager. When the manager is operating on skewed survey data or false AI confidence, the disengagement builds unseen.
Source: Gallup (2025). $438B in global productivity lost to manager disengagement in 2024.
03

Manager Drag

10 to 52% of a team's productive time, lost
Managers without live team signal create drag across the organisation. Their teams manage upward. Their peers avoid collaboration. Their manager absorbs the overhead. Poor management costs the US economy over $500 billion annually. From people staying, not from people leaving.
Source: Perceptyx (2025); Training Industry (2020)
04

Promotion Risk

60% of new managers fail within 24 months
Your best engineer is now a struggling manager. They were given a survey that shows 7.4, a training programme that cannot see their team, and an AI dashboard showing green. None of these tell them or their organisation what is actually happening. 60% of new managers receive no meaningful support when they are promoted.
Source: Gartner; Center for Creative Leadership (2024); Wharton School (2024)
05

Innovation Suppression

31% less innovation in teams with low psychological safety
Teams where the Manager Gap is wide go quiet. They stop raising problems because nothing changed the last time they did, and the dashboard said everything was fine. AI surveillance accelerates this: when employees know their messages are monitored, the informal conversation that surfaces problems earliest migrates to channels nobody can see.
Source: Google Project Aristotle (2015, 2022). 85% of employees withhold important information from their manager (Edmondson & Detert).
06

Customer Impact

Engaged teams sell 20% more than disengaged teams
The Manager Gap does not stay internal. Sales teams hit lower quota. Customer success managers break account relationships mid-cycle. The AI dashboard shows normal communication volume and calls it healthy collaboration.
Source: Gallup; Bain & Company. 5% retention improvement increases profits 25 to 95%.

Three systems convinced leadership everything was broadly fine.
The other five cost layers were accumulating the whole time.

$6.2M
Average total cost — all six layers
Mid-market companies, 200 to 500 employees.

Four minutes. Five sections.
One number that tells you where the gap is.

The MGI does not measure how people feel about working at your company. It does not rely on survey data, training feedback, or AI-monitored communications. It measures the cost of the gap between your current infrastructure and what your teams are actually experiencing.

01
Visibility

How much can your managers currently see about their team's experience. Five questions about signal frequency, format, and anonymity architecture, plus one about AI communication monitoring.

02
Action

When managers see a problem, do they know what to do. Four questions about whether specific, contextual guidance exists in the flow of work.

03
Accountability

Is anyone tracking whether managers act. Four questions about whether actions are logged, followed up, and connected to the team signal that follows.

04
Strain

How much pressure is already on the system. Four questions about restructures, growth rate, promotion patterns, and current turnover.

05
Cost Exposure

Four numbers that produce your full cost across six impact layers. Headcount, average salary, voluntary turnover rate, and customer-facing percentage.

Your MGI Score

0 to 100. Five classifications from Closed to Exposed. Positioned against the mid-market benchmark of 58 to 74. Includes flags where specific risk patterns are detected, including Surveillance False Confidence.

Your Total Cost

Six impact layers. Your inputs. Named research multipliers. Not an industry average. Your number.

Where the gap is widest

Dimension-by-dimension breakdown. Which layer is driving the most cost. Which intervention closes the gap fastest.
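As a rough illustration of how four inputs can produce a six-layer cost figure, here is a minimal sketch in Python. The multipliers are midpoints or low ends of the ranges cited in the impact layers above; the function name, the assumed disengaged share, and the way layers four to six are folded in are illustrative assumptions, not the MGI's actual formula.

```python
def sketch_gap_cost(headcount: int, avg_salary: float,
                    voluntary_turnover: float, customer_facing: float) -> float:
    """Rough, illustrative estimate of annual Manager Gap cost (USD).

    NOT the actual MGI formula -- a sketch built from the published
    figures cited above, plus clearly labelled assumptions.
    """
    # Layer 1: regrettable attrition. 75% of voluntary exits trace to
    # management (Gallup); replacement cost at the midpoint of the
    # 0.5x-2x salary range.
    attrition = headcount * voluntary_turnover * 0.75 * avg_salary * 1.25

    # Layer 2: disengagement tax, at the midpoint of 18-34% of salary.
    # The 20% disengaged share is an illustrative assumption, not a cited figure.
    disengagement = headcount * 0.20 * avg_salary * 0.26

    # Layer 3: manager drag, at the conservative low end (10%) of the
    # 10-52% range of productive time lost.
    drag = headcount * avg_salary * 0.10

    # Layers 4-6 (promotion risk, innovation suppression, customer impact)
    # are harder to monetise directly; fold them in as a customer-facing
    # uplift on the first three layers -- purely an illustrative assumption.
    base = attrition + disengagement + drag
    return base * (1 + 0.5 * customer_facing)

# Example: 300 employees, $90K average salary, 15% voluntary turnover,
# 40% customer-facing.
print(round(sketch_gap_cost(300, 90_000, 0.15, 0.40)))  # 9481050
```

Even with conservative inputs, the sketch lands in the same order of magnitude as the mid-market figures cited on this page, which is the point: the cost is driven by people who stay, not only by people who leave.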

Where do you sit?

0-20 Closed
21-40 Narrow
41-60 Open
61-80 Critical
81-100 Exposed
Most mid-market companies score here on their first assessment (58 to 74)

Closed

0 to 20

Strong visibility loop. The Manager Gap is effectively closed. Your infrastructure is functioning as intended.

Under $500K

Narrow

21 to 40

Addressable gaps. Targeted intervention closes them before they compound.

$500K to $2M

Open

41 to 60

Significant blind spots. Survey data and AI dashboards may be masking the real team signal. Regrettable attrition is occurring and under-measured.

$2M to $5M

Critical

61 to 80

Wide gap. Survey confidence is false. Training is working on the wrong problem. AI monitoring is capturing managed performance, not genuine signal. High probability of accelerating attrition in the next 90 days.

$5M to $10M

Exposed

81 to 100

Maximum gap. All three systems are simultaneously creating false confidence while the cost compounds. Immediate intervention warranted.

Over $10M
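The banding above reduces to a simple threshold lookup. As a sketch (the thresholds and band names follow the table above; the function itself is an assumption, not the MGI's implementation):

```python
def classify_mgi(score: int) -> str:
    """Map a 0-100 MGI score to its classification band, per the table above."""
    bands = [(20, "Closed"), (40, "Narrow"), (60, "Open"),
             (80, "Critical"), (100, "Exposed")]
    for upper, name in bands:
        if score <= upper:
            return name
    raise ValueError("MGI score must be between 0 and 100")

# The mid-market benchmark of 58-74 straddles two bands:
print(classify_mgi(58))  # Open
print(classify_mgi(67))  # Critical
```

Note that the benchmark range spans the Open/Critical boundary, so two companies inside the benchmark can land in different bands.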

The average first-time score for a mid-market SaaS or Fintech company is 67. That is Critical. Annual cost: approximately $8.3M. Most of it invisible on the day the score is taken, hidden behind a 7.4 survey score, a completed training programme, and an AI dashboard showing green.

Find out what three parallel systems are costing you, and what none of them can see.

The Manager Gap Index calculates your full cost across six impact layers, your MGI Score against the mid-market benchmark, and flags whether your current people intelligence infrastructure is producing genuine signal or managed performance.

The number you are about to see is almost certainly larger than your surveys, your training feedback, and your AI dashboard have ever suggested it could be.

That is the point.
Calculate My MGI Score
No email required · Results stay in your browser · Built on published research
Already have your score? Discuss your MGI analysis to explore what closing the gap looks like.