Data Gaps, Not Hype, Block Productive AI for MSPs: Insights from Dr. Fern Halper

E1957 · Business of Tech

The episode reveals a persistent and widening governance gap as organizations rush to implement AI without adequate data foundations or operational controls. According to observations from Dr. Fern Halper, current AI adoption is overwhelmingly characterized by top-down pressure, especially around generative and agentic AI, but is constrained by immaturity in governance, data integration, and organizational readiness. Microsoft’s bundling of Copilot in E7 licenses highlights this structural shift, as “consumerized” AI solutions proliferate without corresponding investments in foundational data and oversight.

Supporting this view, new research cited by Dr. Fern Halper indicates that nearly half of organizations are under executive mandates to pursue AI, yet most remain stalled in the experimental or pilot phase. The failure to move beyond pilots is not primarily a technology limitation; it stems from inadequate data quality, missing lineage controls, fragmented data governance, and persistent data silos. The report finds that only about 35–45% of organizations deploying generative or agentic AI have first worked through a cycle of machine learning and data-foundation development.

Secondary examples reinforce the governance and risk exposure. MSPs and end customers increasingly rely on off-the-shelf or prebuilt AI tools (such as Copilot or ChatGPT) for individual productivity rather than building production-ready, data-driven applications contextualized with proprietary information. This often leads to the uncontrolled spread of “shadow AI” — tools deployed outside formal oversight — which further compounds compliance and data-protection risks. As organizations begin experimenting with agentic AI, the risks escalate: these systems not only generate outputs but can take direct action, magnifying the impact of weak governance and access controls.

For MSPs, IT service providers, and technology leaders, the operational consequence is heightened responsibility around governance, auditability, and data management. The unchecked spread of shadow AI introduces contractual and regulatory exposure, particularly as clients seek to incorporate AI tools without formal policies or understanding of associated risks. Providers should prioritize baseline governance frameworks, client-facing AI literacy training, and infrastructure capable of accommodating unstructured data, lineage requirements, and auditing. Failing to address these priorities increases the risk of service breakdowns and complicates SLA enforcement as AI systems broaden operational scope.

Supported by: 
JumpCloud 
HaloPSA 
Acronis 

Upcoming event: The Pivotal Point of IT: Building Services for the AI-First Era 
Date: May 13 at 1 p.m. EDT 
Register: https://go.acronis.com/davesobelaiera

 

💼 All Our Sponsors

Support the vendors who support the show:

👉 https://businessof.tech/sponsors/

 

🚀 Join Business of Tech Plus

Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.

👉 https://businessof.tech/plus

 

🎧 Subscribe to the Business of Tech

Want the show on your favorite podcast app or prefer the written versions of each story?

📲 https://www.businessof.tech/subscribe

 

📰 Story Links & Sources

Looking for the links from today’s stories?
