Your surveys run every 6-12 months · Your training runs every year · Your AI tells you it sees everything.
Three investments. Three budgets. A structural gap between all of them that has been compounding on your P&L for years. Invisible, unmeasured, and almost always larger than expected.
Across six impact layers.
Calculate My MGI Score

The Manager Gap is the structural space between what managers can currently see about their teams and what their teams are actually experiencing. It is created by three parallel systems that operate independently and produce no shared picture of team health: surveys that measure safety instead of reality, training programs disconnected from live team signal, and AI monitoring tools that capture managed performance on watched channels.
The Manager Gap Index (MGI) is the first diagnostic tool that calculates the cost of this gap. It produces an MGI Score from 0 to 100 (a higher score means a narrower gap) and a dollar figure across six impact layers, using your specific inputs and published research multipliers.
Based on MGI assessments to date, the average first-time score for a mid-market company (200 to 500 employees) is 33 out of 100, placing it in the Critical band with an average annual Manager Gap cost of $8.3M. Most of that cost is invisible on the day the score is taken.
Most organisations believe they have a managed people strategy. They have surveys that run every six to twelve months. They have training programs for their managers. Many now have AI tools that monitor communications to understand team health.
All three have budget lines. All three have ownership. All three have been in place for years.
Not one of them is telling the organisation what its teams are actually experiencing. And not one of them knows what the others found out.
Collects stated signal. Produces a score. The data reflects safety, not reality.
Selected from a catalogue. Delivered in isolation. Works on a problem it cannot see.
Captures everything on channels employees know are watched. The real conversation migrated the day the system was deployed.
Your survey score is not a measure of what your teams are experiencing. It is a measure of how safe your employees felt when they answered the question. Those are not the same number. But they are being treated as if they are.
Clover ERA collects team-level patterns, not individual responses. Nothing to triangulate. Genuine anonymity by architecture, not by procedural promise.
Manager development programs are not designed from your survey data. They are selected from vendor catalogues, commissioned in response to a budget cycle, or built around what managers generally struggle with, not what your managers are specifically struggling with right now.
A manager attends a workshop on giving feedback. They return to a team where the problem is not feedback. It is that two people have stopped raising issues because the last three times they did, nothing changed. The training worked on the wrong problem. Not because the content was wrong. Because it had no access to the live team signal it needed to be useful.
Harvard Business School named this in 2016: most training spend fails not because the content is wrong but because the learning is entirely disconnected from the actual conditions managers return to.
A new category of tool has entered the market. It monitors Teams messages, email threads, Slack activity, and call transcripts. It produces dashboards showing communication patterns, collaboration health, and sentiment scores. Vendors describe it as the future of employee understanding.
There is a structural problem with this approach that its vendors do not advertise.
When employees discover their communications are being monitored, and they always discover this, they do not adjust one answer on one questionnaire. They adjust how they communicate on every monitored channel, permanently.
The Teams chat that used to surface problems candidly becomes procedurally correct. The email that used to carry genuine disagreement becomes professionally managed. The Slack message that used to flag a concern becomes neutral.
The real conversation moves to WhatsApp. To a voice note. To the car park. To any channel the system cannot see.
The organisation now has a surveillance infrastructure that produces a detailed, confident, and structurally false picture of a healthy, communicating team, because every data point was generated by someone who knew they were being watched.
This is not a technology limitation. It is a behavioural certainty. The moment an organisation deploys communication surveillance, it destroys the candid informal communication that was its most reliable early warning system, and replaces it with managed performance on monitored channels.
The migration to unmonitored channels is the signal. When your employees are using WhatsApp for conversations that used to happen on Teams, they are telling you something precise about how they feel about the monitored channel. That signal is invisible to your AI dashboard. It is not invisible to the MGI.
Clover ERA collects team-level behavioural patterns through anonymous daily check-ins. Not communication content. Not message sentiment. Not productivity tracking. There is nothing for employees to perform and no channel to migrate away from.
The survey tells you there is a problem. The training attempts a fix. The AI dashboard tells you everything is fine. None of them know what the others found out. That gap has been on your P&L the whole time.
Five of the six impact layers are costs from employees who are still on your payroll. The disengagement, the suppressed ideas, the performance drag from managers who cannot see their teams. Those are happening today, in an organisation whose survey score suggests things are broadly fine and whose AI dashboard shows green.
Three systems convinced leadership everything was broadly fine.
The other five cost layers were accumulating the whole time.
The MGI does not measure how people feel about working at your company. It does not rely on survey data, training feedback, or AI-monitored communications. It measures the cost of the gap between your current infrastructure and what your teams are actually experiencing.
How much can your managers currently see about their team's experience? Five questions about signal frequency, format, and anonymity architecture, plus one about AI communication monitoring.
When managers see a problem, do they know what to do? Four questions about whether specific, contextual guidance exists in the flow of work.
Is anyone tracking whether managers act? Four questions about whether actions are logged, followed up, and connected to the team signal that follows.
How much pressure is already on the system? Four questions about restructures, growth rate, promotion patterns, and current turnover.
Four numbers that produce your full cost across six impact layers. Headcount, average salary, voluntary turnover rate, and customer-facing percentage.
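The cost roll-up described above can be sketched in code. This is a hypothetical illustration only: the MGI's published research multipliers and exact layer definitions are not reproduced here, so the `LAYERS` names and every multiplier below are placeholders chosen to show the shape of the calculation, not the real model.

```python
# Hypothetical sketch of a six-layer Manager Gap cost roll-up.
# All layer names and multipliers are illustrative placeholders,
# NOT the MGI's actual research multipliers.
LAYERS = {
    "regrettable_attrition": 0.015,  # scaled by voluntary turnover rate
    "disengagement_drag":    0.020,
    "suppressed_ideas":      0.010,
    "manager_blind_spots":   0.012,
    "slowed_execution":      0.008,
    "customer_impact":       0.006,  # scaled by customer-facing share
}

def manager_gap_cost(headcount, avg_salary, turnover_rate, customer_facing_pct):
    """Return a per-layer dollar breakdown plus a total, from the four inputs."""
    payroll = headcount * avg_salary
    costs = {}
    for layer, multiplier in LAYERS.items():
        cost = payroll * multiplier
        if layer == "regrettable_attrition":
            cost *= turnover_rate        # only departing employees count here
        if layer == "customer_impact":
            cost *= customer_facing_pct  # only customer-facing roles count here
        costs[layer] = cost
    costs["total"] = sum(costs.values())
    return costs
```

Note that five of the six layers are driven by payroll alone, mirroring the point above that most of the cost comes from employees who are still on the payroll.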
0 to 100. A higher score means a narrower gap. Five classifications from Exposed (0) to Closed (100). Positioned against the mid-market benchmark of 26 to 42. Flags specific risk patterns where detected, including Surveillance False Confidence.
Six impact layers. Your inputs. Named research multipliers. Not an industry average. Your number.
Dimension-by-dimension breakdown. Which layer is driving the most cost. Which intervention closes the gap fastest.
The MGI Score runs from 0 to 100. A higher score means a narrower gap. A lower score means a wider gap and a higher cost.
Maximum gap. All three systems are simultaneously creating false confidence while the cost compounds. Immediate intervention warranted.
Wide gap. Survey confidence is false. Training is working on the wrong problem. AI monitoring is capturing managed performance, not genuine signal. High probability of accelerating attrition in the next 90 days.
Significant blind spots. Survey data and AI dashboards may be masking the real team signal. Regrettable attrition is occurring and under-measured.
Addressable gaps. Targeted intervention closes them before they compound. At this score, the cost is close to baseline. Every organisation that runs on human management carries this cost. The question is whether you are managing it or ignoring it.
The gap is actively managed. Cost is at its structural minimum. All organisations carry some baseline cost. This band means it is not compounding.
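The five bands above amount to simple score cutoffs. A minimal sketch follows, under stated assumptions: only the names Exposed, Critical, and Closed appear in this document, so the two middle band names and every threshold below are assumptions, chosen so that the 33 mid-market average lands in the Critical band.

```python
# Illustrative band table. "Exposed", "Critical", and "Closed" are named in
# the source; the middle band names and all cutoffs are assumptions.
BANDS = [
    (20, "Exposed"),      # maximum gap
    (40, "Critical"),     # wide gap (the 33 mid-market average lands here)
    (60, "At Risk"),      # significant blind spots (hypothetical name)
    (80, "Addressable"),  # targeted, pre-compounding gaps (hypothetical name)
    (100, "Closed"),      # gap actively managed
]

def classify(score: int) -> str:
    """Map a 0-100 MGI Score to its band; a higher score means a narrower gap."""
    if not 0 <= score <= 100:
        raise ValueError("MGI Score must be between 0 and 100")
    for upper, name in BANDS:
        if score <= upper:
            return name
```

With these assumed cutoffs, `classify(33)` falls in the Critical band, consistent with the benchmark discussed above.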
The average first-time score for a mid-market SaaS or Fintech company is 33. That is Critical. Annual cost: approximately $8.3M. Most of it invisible on the day the score is taken, hidden behind a 7.4 survey score, a completed training program, and an AI dashboard showing green.
The Manager Gap Index calculates your full cost across six impact layers, your MGI Score against the mid-market benchmark, and flags whether your current people intelligence infrastructure is producing genuine signal or managed performance.