
Agentic AI is being deployed as production infrastructure in enterprise settings, but prevailing frameworks remain unreliable for mission-critical operations. Dave Sobel and Ron Aroussi of Muxie underscored that while AI agents function well in non-deterministic contexts like customer support, they do not yet meet expectations of deterministic, workflow-based reliability. The move from demonstration agents to production-scale tools brings heightened attention to reliability, observability, and, in particular, the risk of vendor lock-in for Managed Service Providers (MSPs) and their clients.
Deployment of AI agents currently gravitates toward roles with minimal operational risk, such as customer-facing chatbots or internal chief-of-staff assistants. Aroussi explained that while such agents can automate initial support tiers and internal daily briefings, their unpredictability and potential for error limit their use in processes demanding strict oversight and accountability. He identified two core use cases, external (customer support) and internal (personalized information management), and explicitly noted that agents are best positioned to augment rather than fully automate complex workflows at this stage.
A critical risk for MSPs lies in attempting to retrofit existing software frameworks to support agents, which introduces integration complexity and increases the likelihood of operational failures. Purpose-built infrastructure for agentic AI offers better alignment between AI capabilities and production requirements, with Aroussi citing drastically reduced hallucination rates and improved oversight when using native tools. He identified open source as a foundational element of AI development, but one that carries its own risks, particularly around third-party code quality and the long-term sustainability of community-driven projects.
The practical implication for MSPs and IT service providers is clear: a cautious, incremental adoption approach focused on low-risk use cases, coupled with rigorous controls on agent permissions and robust audit trails, is essential. Decision-makers should not assume agents operate with the reliability or accountability of traditional software; they should prioritize operational transparency and ensure that responsibility for agent actions is clearly defined and enforced at the implementation level. Vendor lock-in and software provenance remain significant governance concerns as agentic AI moves from experiment to infrastructure.
Support the vendors who support the show:
👉 https://businessof.tech/sponsors/
Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.
👉 https://businessof.tech/plus
Want the show on your favorite podcast app or prefer the written versions of each story?
📲 https://www.businessof.tech/subscribe
Looking for the links from today’s stories?
Every episode script — with full source links — is posted at:
Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:
💬 https://www.podmatch.com/hostdetailpreview/businessoftech
LinkedIn: https://www.linkedin.com/company/2890807