AI Governance
MIT’s AI Study is Terrifying, but Not for the Reasons You Think


This summer, MIT released its State of AI in Business 2025 report — one of the most detailed studies of how organizations are adopting generative AI (GenAI). The paper has stirred debate about whether we are on the verge of a massive AI bubble. The most viral data point from the study is that 95% of organizations fail to generate meaningful ROI from GenAI adoption.
But for those of us in the security and privacy community, the report is alarming for reasons beyond economics. Beneath the ROI discussion lie insights about organizational behavior that reveal just how recklessly enterprises are approaching AI adoption. They include the following:
- Security and privacy are not seen as roadblocks, but not because the risks are absent.
- The ROI “Solution” is more complexity, and more risk.
- Shadow AI is rapidly outpacing enterprise adoption.
In this post, we’ll examine each insight and what it means for organizations.
Security and Privacy Are Not Seen as Roadblocks
MIT’s researchers found that “legal” and “risk” concerns were not cited as major reasons organizations hesitate to scale GenAI.
On the surface, that sounds like progress. In practice, it’s troubling. On the ground, security and compliance professionals see massive gaps in governance, architecture maturity, and privacy protection in GenAI deployments. The disconnect is clear: executives aren’t claiming the risks don’t exist; they’re signaling they won’t let risks get in the way of production.
Treating security and privacy as “optional” is what should terrify us. This mindset seeps into an organization’s culture. Quick gains in innovation or efficiency without clear security parameters aren’t progress; they’re a form of corporate negligence.
The ROI “Solution” Is More Complexity, and More Risk
The MIT study emphasizes that the way to cross the “GenAI Divide” (from pilots to value) isn’t to slow down adoption, but to accelerate it. It suggests moving from static chatbots to AI Agents and agentic workflows.
That’s the right prescription for technical capability. Agents introduce memory, context, and tool orchestration that can finally generate persistent business value. But here’s the catch: these same agentic architectures multiply security and privacy risks, in both volume and complexity (see the sketch after this list). They demand:
- Persistent memory systems (raising new data retention and misuse risks),
- Multi-agent coordination (introducing identity, trust, and provenance challenges),
- Expanded tool integration (widening the attack surface).
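To make those risk points concrete, here is a minimal, hypothetical sketch of an agent in Python. It is not drawn from the MIT study or any particular product; the memory store, tool registry, and unauthenticated caller field are illustrative stand-ins meant to show where each risk in the list above enters the architecture.

```python
import json
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AgentMemory:
    """Persistent memory: every record kept here is a potential retention/misuse liability."""
    records: List[dict] = field(default_factory=list)

    def remember(self, item: dict) -> None:
        # Risk 1: sensitive data written here can outlive the conversation
        # and any retention policy the enterprise believes it has.
        self.records.append(item)

    def recall(self) -> List[dict]:
        return self.records


@dataclass
class Agent:
    """A bare-bones agent that pairs persistent memory with a registry of callable tools."""
    name: str
    memory: AgentMemory = field(default_factory=AgentMemory)
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register_tool(self, tool_name: str, fn: Callable[[str], str]) -> None:
        # Risk 3: every tool registered here widens the attack surface --
        # an injected prompt can now reach whatever the tool can reach.
        self.tools[tool_name] = fn

    def handle(self, caller: str, tool_name: str, payload: str) -> str:
        # Risk 2: multi-agent coordination -- nothing verifies who "caller" is
        # or the provenance of the request before the agent acts on it.
        if tool_name not in self.tools:
            return f"{self.name}: unknown tool '{tool_name}'"
        result = self.tools[tool_name](payload)
        self.memory.remember({"caller": caller, "tool": tool_name,
                              "payload": payload, "result": result})
        return result


if __name__ == "__main__":
    # Hypothetical tool: in a real deployment this could be email, a CRM, or a database.
    def lookup_customer(query: str) -> str:
        return json.dumps({"customer": query, "status": "active"})

    agent = Agent(name="support-agent")
    agent.register_tool("lookup_customer", lookup_customer)

    # A second agent (or an attacker impersonating one) invokes the tool.
    print(agent.handle(caller="billing-agent", tool_name="lookup_customer",
                       payload="Jane Doe"))
    print(agent.memory.recall())  # Retained indefinitely unless someone governs it.
```

Even in this toy version, the growing memory list, the tool registry, and the unverifiable caller string map directly onto the three demands above; governing them after the fact is far harder than designing the controls in from the start.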
For organizations still struggling to secure a corporate chatbot and associated LLM threat models, “diving into agents” is like sending a first-day skier, barely balancing on the bunny hill, straight to the top of a black-diamond headwall.
Shadow AI Is Already Outpacing Enterprise AI
While enterprises stall, employees find their own solutions. MIT’s data shows that while only 40% of companies purchase official GenAI licenses, 90% of employees use personal tools like ChatGPT or Claude for work tasks.
This “shadow AI economy” bypasses enterprise controls, exporting confidential data into unmanaged environments. Employees aren’t acting maliciously; they’re simply frustrated. When sanctioned deployments don’t meet their needs, they revert to unsanctioned tools to get the job done.
As a result, enterprises inherit both ROI failure and heightened risk exposure.
Calls to Action: A Secure Path Across the Divide
Innovation and security don’t need to be at odds. The most forward-looking firms have proven that competitive advantage can be achieved without compromising trust. To get there, organizations must shift their mindset:
- Reframe risk as real. AI systems process sensitive personal, financial, and proprietary data. Security and privacy must be treated as first-order design constraints.
- Invest in maturity before complexity. Boards and executives must give architects and engineers the time to establish security- and privacy-by-design foundations before leaping into agentic workflows. Pushing into advanced AI without guardrails doesn’t just endanger the firm; it can create societal externalities such as systemic bias and mass privacy violations.
- Empower employees as allies, not adversaries. Instead of forcing workers into shadow AI, bring them into the design process. One of the key findings of the MIT study is that organizations were more successful when front-line managers closest to the work were included in decision-making. Power users and “AI prosumers” often know best where tools deliver value. Their input can be harnessed to improve adoption and reduce unsanctioned use.
Closing
While the MIT study may have been framed around ROI, its deeper findings reveal something more urgent: a systemic underestimation of risk in the pursuit of AI value.
If organizations want to truly cross the GenAI Divide, they must recognize that security and privacy are not speed bumps to progress, but rather the guardrails that keep the road open.
At Coalfire, and across the security and privacy community, our plea is simple: adopt AI boldly, but never at the expense of trust. The organizations that make this choice will win not only ROI, but also resilience and competitive advantage.