MIT’s 95%: Why Enterprise AI Is Stalling, and Where Mid-Market Teams Are Quietly Winning
- Jamie Bykov-Brett
- Sep 11
- 4 min read

I’m an AI consultant, and I have helped many organisations implement AI. It might seem strange, then, but I love the new MIT Media Lab study making the rounds. Not because it says most AI pilots are failing, but because it finally puts numbers to what so many of us see up close.
The technology is impressive. The way we try to use it is the problem.
MIT looked across hundreds of deployments and found something stark: about 95% of enterprise AI efforts show no measurable impact on profit and loss (the statement that shows income and expenses), and only a small minority make it past pilot into real value.
Success here is not “the model runs without errors.” Success means the project has moved the profit and loss account within six months. That bar, it turns out, is rarely cleared.
If you have played with tools like ChatGPT, you already know the potential. So why does that potential not translate at scale inside large companies? The pattern is familiar. This is less a model problem and more an organisational one. Projects stall where workflows are brittle, context is not learnt, and tools remain bolt-ons rather than part of the operating system. Enterprises run the most pilots yet convert the fewest.
Mid-market companies, by contrast, move faster from pilot to production. Speed matters, and so does proximity to the work.
It helps to retire a comforting myth. Buying “AI” is not the same as designing for intelligence.
The teams that break through do not tack a chatbot onto an old process and call it transformation. They reshape the workflow around the meeting point of human judgement and machine capability. They define who trusts what, when to escalate, and how the system learns from each decision. In short, they treat intelligence as infrastructure, not just an interface.
There is another quiet truth in the data.
Generic tools shine for individuals, yet often stall in the enterprise because they do not remember your clients, your policies, your edge cases, or how you actually get work done. That is why people happily use personal chatbots at work while official programmes lag.
It is a culture and design gap, not a graphics-processor gap.
If you lead in a small or mid-sized business, this is your moment. Lighter legacy, shorter decision loops, and owners who sit closer to customers all help. In Australia, many SMEs (small and medium-sized enterprises) using AI report that it is boosting revenue.
It is not a universal law, but it is a clear signal of how fast focused teams can embed AI where it actually changes work.
Two practical nudges from the field.
First, buy before you build, at least early on. Specialised vendors succeed more often than home-grown platforms. Build your own once you have learnt where value lives.
Second, aim where the money is, which is usually backstage. Leaders love shiny tools for sales, but the clearest returns often show up in finance, operations, and other unglamorous processes where learning systems can actually compound.
If you are a large enterprise, the path is not closed. It is simply different.
Start narrow. Put line managers, not only an AI lab, in charge of adoption. Specify collaboration contracts between humans and systems. Decide what gets automated, what gets reviewed, how the handovers work, and how the system improves next time. Invest in trust as a system. That means transparency, consistency, and recovery when things go wrong, not just a hope that users will figure it out. This is how clever demos turn into dependable workflows.
At Executive AI Institute we call this a human-centric approach, where behavioural psychology meets tech strategy. It is relationship-first leadership in the age of intelligent tools. Equip the people who do the work. Redesign the rituals they use to do it. Then let the model amplify that discipline. Do this and the five per cent success club stops looking exotic and starts looking procedural.
So no, the headline is not “AI does not work.” It is “AI does not pay unless your operating model learns with it.” The best news in the MIT data is that the fixes are mostly human. Clear ownership, tighter loops, and systems that remember. Start there. The technology will keep getting better. The question is whether your organisation gets better with it.
Practical recommendations for joining the successful five per cent
Anchor every AI project to one profit and loss outcome. For example, reduce cost to serve, shorten cycle time, or raise revenue per customer. Write the target, the method of measurement, and the review cadence before you build anything.
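If it helps to make that tangible, here is a minimal sketch of such an anchor written down as code, something a team could keep next to the project brief. The outcome, figures, and field names are illustrative assumptions, not a template from the MIT study.

```python
from dataclasses import dataclass

@dataclass
class PnLAnchor:
    """One project, one profit-and-loss outcome, written down before any build."""
    outcome: str              # e.g. "reduce cost to serve"
    baseline: float           # measured before the pilot starts
    target: float             # the number that counts as success
    measurement: str          # how and where the number is taken
    review_cadence_days: int  # how often the team inspects progress

    def target_met(self, current_value: float) -> bool:
        # For a cost-reduction outcome, lower is better.
        return current_value <= self.target

# Illustrative example only: the figures are placeholders.
anchor = PnLAnchor(
    outcome="reduce cost to serve",
    baseline=42.0,   # cost per ticket today
    target=35.0,     # cost per ticket that would justify scaling
    measurement="finance system, monthly cost per resolved ticket",
    review_cadence_days=30,
)
print(anchor.target_met(37.5))  # False: keep iterating or retire the pilot
```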
Pick one workflow and map it end to end. Identify the decision points, handovers, failure modes, and where human judgement is essential. Build for that single path first, then scale.
Create collaboration contracts. Define what the system does on its own, what humans verify, when to escalate, and how exceptions are handled. Make these contracts visible inside the tool.
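For teams who want to see what that could look like in practice, here is a rough sketch of a collaboration contract expressed as configuration a tool could surface. The task, threshold, and roles are assumptions for illustration only.

```python
# A minimal sketch of a collaboration contract expressed as configuration.
# The task, confidence threshold, and roles are illustrative assumptions.
collaboration_contract = {
    "task": "supplier invoice coding",
    "system_handles": "suggests a ledger code with a confidence score",
    "human_verifies": "any suggestion below the confidence threshold",
    "confidence_threshold": 0.85,
    "escalate_to": "finance controller",
    "escalate_when": ["new supplier", "amount over approval limit", "model uncertain"],
    "exceptions": "logged with the human's final decision so the system can learn",
}

def route(suggestion_confidence: float, flags: list[str]) -> str:
    """Decide who acts next, based on the contract above."""
    if any(flag in collaboration_contract["escalate_when"] for flag in flags):
        return collaboration_contract["escalate_to"]
    if suggestion_confidence < collaboration_contract["confidence_threshold"]:
        return "human review"
    return "auto-apply"

print(route(0.91, []))                # auto-apply
print(route(0.91, ["new supplier"]))  # finance controller
```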
Treat data as a product. Assign an owner, set quality targets, document lineage, and give teams self-serve access with clear guardrails. Bad inputs ruin good models.
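As a rough illustration, a simple data contract for one dataset might look something like the sketch below. The dataset, owner, and quality targets are invented for the example.

```python
# A minimal sketch of a data contract for one dataset, treated as a product.
# Owner, lineage, and quality targets are illustrative assumptions.
data_contract = {
    "dataset": "customer_orders",
    "owner": "operations data lead",
    "lineage": "order system -> nightly export -> warehouse table customer_orders",
    "quality_targets": {
        "completeness_pct": 99.0,  # share of rows with no missing key fields
        "freshness_hours": 24,     # maximum age of the latest record
    },
    "access": "self-serve read for analysts; writes only via the pipeline",
}

def meets_targets(completeness_pct: float, freshness_hours: float) -> bool:
    targets = data_contract["quality_targets"]
    return (completeness_pct >= targets["completeness_pct"]
            and freshness_hours <= targets["freshness_hours"])

print(meets_targets(99.4, 6))   # True: safe to feed the model
print(meets_targets(96.0, 40))  # False: fix the inputs before blaming the model
```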
Measure adoption like a first-class outcome. Track who uses the tool, on which tasks, and what changes in the work. Celebrate real examples where the workflow improved.
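If you want a concrete starting point, adoption can be tracked from nothing more than a log of usage events. The sketch below is illustrative; the users, tasks, and figures are assumptions.

```python
# A minimal sketch of adoption tracking: who used the tool, on what task,
# and whether the output was actually used in the work.
from collections import Counter

usage_events = [
    {"user": "a.khan", "task": "draft supplier email", "output_used": True},
    {"user": "a.khan", "task": "summarise contract",   "output_used": True},
    {"user": "j.lee",  "task": "draft supplier email", "output_used": False},
]

active_users = {event["user"] for event in usage_events}
tasks = Counter(event["task"] for event in usage_events)
used_rate = sum(event["output_used"] for event in usage_events) / len(usage_events)

print(f"{len(active_users)} active users, top task: {tasks.most_common(1)[0][0]}, "
      f"{used_rate:.0%} of outputs used in the work")
```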
Put line managers in charge of change. Give them budget, coaching, and weekly rituals to embed new habits. Central teams advise and unblock rather than command and control.
Build trust as infrastructure. Provide clear explanations, consistent behaviour, easy reversal of actions, and a reliable recovery path when things go wrong. Publish a simple responsible-use policy people can understand.
Aim at backstage processes first. Finance, operations, procurement, and service knowledge often yield faster, clearer returns than shiny front-of-house experiments.
Time-box learning. Run a 60 to 90 day implementation with a small cohort, set scale-up gates, and retire pilots that do not move the profit and loss account.
Right-size the technology. Use the smallest workable model, deploy privately when needed, and add retrieval that pulls from your own documents so answers reflect your context.
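To make “retrieval that pulls from your own documents” concrete, here is a deliberately tiny sketch. A production system would typically use embeddings and a vector store; this keyword-overlap version, with made-up documents, only illustrates the idea of grounding answers in your own context.

```python
# A minimal retrieval sketch over your own documents, with no external services.
# It scores documents by keyword overlap with the question; the best matches
# would then be passed to the model as context alongside the question.
def retrieve(question: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the names of the documents that best match the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(q_terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

# Illustrative internal documents (placeholders, not real policies).
documents = {
    "refund_policy.md": "refunds are approved by the service lead within 14 days",
    "supplier_terms.md": "supplier invoices are paid on 30 day terms",
    "onboarding.md": "new starters complete security training in week one",
}

context = retrieve("how quickly do we pay supplier invoices", documents)
print(context)  # ['supplier_terms.md', ...] -> supplied to the model as context
```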
Invest in skills and practice. Offer short weekly sessions, internal communities of practice, and prompt libraries tied to real tasks. Skill beats novelty over the long run.