
The core structural shift addressed is the transition from AI as an assistive tool to agentic AI operating autonomously within business systems, which moves risk and control to the forefront. Agentic AI—characterized by the ability to independently execute actions within user interfaces, browsers, and systems of record—is changing the dynamics of accountability and operational authority. Companies like Meta are experiencing incidents where AI systems enact changes or publish guidance inside live environments, making the question less about feature innovation and more about containment, permission, and the allocation of responsibility.
A key development cited is a security incident at Meta, where an AI system generated and published a security directive that produced a real operational consequence despite lacking direct execution rights. This illustrates the growing risk: agentic AIs can now operate through the same channels as human users while accessing sensitive data and functions. Vendors such as Anthropic are enabling agentic capabilities, including control over full user workflows and system access, while security vendors and platforms like Microsoft are shifting toward identity frameworks and policies specifically designed to constrain agent autonomy and protect operational environments.
Additional developments reinforce this shift, including the expansion of agentic AI into mainstream products—such as Perplexity’s browser embedding AI assistants directly into everyday workflows—and the increasing integration of AI agents into databases and enterprise platforms. As these agents mature, the risk profile shifts from theoretical to operational, with vendors updating contracts to push liability downstream to service operators. Traditional permissions and access controls no longer contain this risk; audit trails and proactive governance must become new priorities for service providers.
These dynamics demand that MSPs and IT leaders re-examine their operational and contractual practices. Deploying agents without properly scoped permissions, logging, and defined ownership of outcomes exposes operators to unpriced liabilities rather than incremental value. Practical requirements now include explicit service agreements covering agent actions, comprehensive permission reviews, and client-facing agent readiness assessments to establish due diligence. Operators who cannot provide evidence of agent governance risk being treated as uninsurable, pushing governance standards from optional best practice to commercial necessity.
00:00 AI Acts Now
02:57 Who Owns It?
05:13 Trust Breaks Here
08:01 Why Do We Care?
Supported by:
CometBackup
HaloPSA
Support the vendors who support the show:
👉 https://businessof.tech/sponsors/
Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.
👉 https://businessof.tech/plus
Want the show on your favorite podcast app or prefer the written versions of each story?
📲 https://www.businessof.tech/subscribe
Looking for the links from today’s