Why Most AI Governance Is Solving the Wrong Problem

A Guest Post by Sean Duca

March 31, 2026

This is the first post in Eight PR's Visionary Series, The Business of Tomorrow, featuring perspectives from visionaries across industries and markets. We are grateful to Sean Duca for agreeing to contribute to this series.

Something is quietly going wrong in how organisations are responding to AI risk, and most governance programmes are missing it entirely.

Anyone who has spent time inside large organisations responding to emerging threats will recognise the pattern. A risk is identified. A working group is formed. Consultants are engaged. A framework is produced. The framework gets presented, discussed, filed, and the organisation continues doing exactly what it was doing before.

With AI, we are watching this play out at scale, and faster than most governance machinery can handle.

The problem is not that organisations do not take AI seriously. Most do, at least rhetorically. The problem is that they have confused identifying risk with reducing it.

A risk register is not a governance posture.

A responsible AI policy document is not an architecture decision.

And a completed audit is not the same as a safer system.

This distinction matters enormously.

Risk identification produces paperwork. Risk reduction produces change in how systems are built, how decisions are made, who has authority to act, and under what conditions autonomy is extended to a machine versus reserved for a human.

Those are harder conversations. They require judgement, not just analysis. They require someone willing to say, “this particular use case should not go ahead,” rather than simply presenting a list of considerations.

Much of what gets sold today as AI governance is compliance preparation. In practice, it helps organisations demonstrate oversight rather than build it.

Compliance work has its place. Regulators will demand it, and organisations need to show their thinking. But compliance is a floor, not a ceiling. Treating it as the ceiling is how organisations end up surprised when something goes wrong.

So what does genuine AI governance require? A few things that tend to get skipped.

Architectural Accountability

Who owns the decision about where AI sits in a workflow? Not who signs off on the policy, but who made the call about the design and who is accountable when it fails. If that question cannot be answered cleanly, the governance is not real.

Escalation Clarity

When an AI system behaves unexpectedly, and it will, what happens in the first 90 minutes? Who gets called? Who can pause or roll back the system? Frameworks that do not answer these questions are not really governance frameworks.
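To make that concrete, here is a minimal, purely illustrative sketch of what it looks like to write escalation authority down explicitly rather than leave it implicit in a framework document. Every system name, role, and threshold in it is a hypothetical assumption, not a reference to any real deployment.

```python
# Illustrative sketch only: escalation clarity expressed as a
# machine-readable runbook instead of a policy PDF. All names,
# roles, and values here are hypothetical.

from dataclasses import dataclass


@dataclass
class EscalationPolicy:
    system: str                    # the AI system this policy covers
    pager_contact: str             # who gets called first
    pause_authority: list[str]     # roles allowed to pause the system
    rollback_authority: list[str]  # roles allowed to roll it back
    max_response_minutes: int = 90 # time budget for the first response

    def can_pause(self, role: str) -> bool:
        return role in self.pause_authority

    def can_rollback(self, role: str) -> bool:
        return role in self.rollback_authority


# Example: a hypothetical claims-triage model with named,
# pre-agreed authority, decided before any incident occurs.
policy = EscalationPolicy(
    system="claims-triage-model",
    pager_contact="on-call-ml-lead",
    pause_authority=["on-call-ml-lead", "head-of-operations"],
    rollback_authority=["on-call-ml-lead"],
)

assert policy.can_pause("head-of-operations")
assert not policy.can_rollback("head-of-operations")
```

The point is not the code. The point is that the answers exist, are named, and can be checked before the incident rather than argued about during it.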

Honest Capability Assessment

AI vendors are incentivised to present their systems in the best possible light. Most organisations do not have the internal expertise to challenge those claims properly. Independent judgement from someone with no stake in the sale is rarer and more valuable than many organisations realise.

None of this is technically complicated. What makes it difficult is organisational.

It means asking questions that make people uncomfortable. It means slowing things down when speed is what everyone is demanding. And it means being willing to tell clients what they need to hear, not what they want to hear.

The organisations that navigate the next phase of AI well will not necessarily be the ones with the most sophisticated technology. They will be the ones that did the harder work: real decisions about how AI fits into their operations, who is responsible when things go wrong, and where human judgement must remain in the loop.

That work rarely produces impressive slide decks.

But it produces something far more valuable: organisations that know exactly where AI should, and should not, be trusted.

Sean Duca is the Founder and Principal of The Duca Group, a boutique advisory firm working with boards and executive leaders on high-consequence decisions in AI governance, cybersecurity, and enterprise technology strategy.

Website: www.duca.co

Across more than 25 years in the security and technology industry, he has seen a consistent pattern: organisations invest heavily in frameworks, policies, and tools, but the hardest decisions remain unresolved.

Sean’s work focuses on those decisions.

He has held senior leadership roles at Palo Alto Networks, Cisco, and McAfee, including serving as Regional Chief Security Officer for Asia Pacific and Japan at Palo Alto Networks, and CTO of Customer Experience at Cisco across Asia Pacific, Japan, and Greater China. In these roles, he worked directly with enterprise leaders to connect technology investment to real business outcomes.

He began his career in frontline engineering and rose to CTO for Asia Pacific at McAfee and Intel Security, a progression that underpins his advisory style today: practical, direct, and grounded in how systems behave under real pressure, not just how they are designed.

Sean serves on the advisory boards of Apate.AI and Deploi, has contributed to the Australian Government’s Online Safety Consultative Working Group, and is a published author on cybersecurity governance and strategy.

His core belief is simple: most organisations don't have a framework problem; they have a judgement problem.

The person you engage is the person who shows up.

Disclaimer: The content provided in this article is the property of Sean Duca and is shared on Eight PR's website for informational purposes only. We do not claim ownership of any content, images, or intellectual property therein. All rights are reserved by the original creator. Eight PR makes no representation as to the accuracy, completeness or suitability of this information. Any reliance you place on this content is at your own risk.