Agency owners, here’s the stat that should stop you mid-scroll: MIT’s Project NANDA found 95% of organizations are getting zero return on GenAI investment (MIT, Project NANDA “State of AI in Business 2025”).
My take: AI isn’t failing; Shadow AI plus weak foundations are.
Even if your agency hasn’t “implemented AI,” your team already has. Microsoft reports 75% of knowledge workers use AI at work, and 78% of AI users bring their own AI tools to work (Microsoft Work Trend Index, 2024). That’s Shadow AI: people using AI tools outside IT and security visibility.
Insurance agencies are uniquely exposed because our daily workflows touch sensitive data: loss runs, policy docs, claim notes, COIs, driver/VIN info, payroll/class codes, and more.
Now add the visibility problem: corporate data is flowing into AI tools in ways most leaders never see, often through personal logins. Cyberhaven found 73.8% of workplace ChatGPT usage is through non-corporate accounts, and that corporate data sent to AI tools surged dramatically year-over-year (Cyberhaven, Q2 2024 AI Adoption & Risk Report).
And it’s not slowing down: Netskope reports that genAI-related data policy violations continue to climb across organizations (Netskope Cloud & Threat Report, 2026).
Here’s what should worry you: non-corporate AI accounts and personal AI apps typically don’t include enterprise safeguards like enforced SSO, DLP, audit logging, access controls, and contractual privacy terms. And depending on the provider and settings, what people paste into consumer AI tools can be used to improve/train models (OpenAI consumer data-use policy; business offerings differ).
That’s one of the biggest reasons AI ROI stalls. If we can’t trust the foundation, we can’t safely connect AI to real agency workflows, so we get stuck in pilot purgatory.
For agencies, ROI breaks down when the foundation isn’t trustworthy. A solid base of data security, permissions, and security controls is fundamental, not just for safe AI, but for an AI implementation that actually delivers on the ROI promise.
So what can an agency do this month to make sure AI is secure and ROI-positive?
Done right, AI absolutely produces real ROI in an insurance agency, starting with simple wins like better emails, faster summaries, cleaner proposals, and quicker renewal prep with large language models your team can prompt confidently. Then it scales into AI agents and automated workflows that reduce touches, speed up service, and improve consistency. But you only get that ROI if the foundation is in place: secure, organized data + clean rights/permissions + strong security controls + a clear AI use policy + oversight + training.
That’s exactly why we built our Agency AI Foundation and AI Agency Roadmap—to take agencies from “everyone’s experimenting in the shadows” to a safe, governed rollout that actually sticks. We help you map the use cases, organize the data structure, lock down security rights and permissions, set the policy and guardrails, train the team, and run the change management so adoption is real and measurable.
If you want to stop guessing and start getting ROI from AI the right way, contact me directly.
Jerry Fetty is the Founder of SMART Services and has spent 35+ years helping independent insurance agencies modernize their technology, strengthen cybersecurity, and operate more efficiently. Today, his focus is helping agencies adopt AI the right way, with a secure foundation, clean data structure, clear policies, and real-world training that produces measurable ROI.