Posted by Guest Contributor - Scott Alldridge (CEO, IP Services) on Aug 7th 2025
A Mission-Critical Need for AI Governance in the Era of GenAI
For the first time, China has come close to matching the US in AI despite starting years behind, and this new arms race is not just about innovation but about governance. Companies are pushing more AI systems into production all the time to drive business value, automate decision-making, and assist (or eventually replace) many workers. Yet in this haste, essential safeguards are being overlooked, and that power can become a liability in the form of data privacy violations or eroded stakeholder trust.
As I wrote in VisibleOps Cybersecurity, “a fool with a tool is still a fool.” As it stands, the road is littered with too many who have their hands on an AI toolkit and too few who understand how to use this technology ethically.
This is where the Secure Controls Framework (SCF) becomes imperative. It is more than just a cybersecurity framework; it is essentially "the playbook" for:
- Ensuring that your use of AI actually reflects your organization's values;
- Understanding what your risk posture looks like while doing so; and
- Knowing how your people, processes, and technology come together in one integrated ecosystem to create responsible AI agents.
AI Governance Can Never Be an Afterthought
We are now firmly in the “bring your own AI” (BYO-AI) age, in which employees at all levels of an organization are already experimenting with generative AI tools, LLMs, copilots, and third-party services, often without the knowledge of IT, legal, or compliance personnel. These unsanctioned tools, often tied to personal accounts, create unregulated data flows, potential leakage, and a governance gap.
Sure, your written policy may say “no confidential data in ChatGPT,” but what happens when a user uploads a confidential strategy document for help rewriting it? That's the gap, and that's where the SCF shines as your North Star.
SCF and AI: Mapping Controls to Real-World Problems
The SCF organizes security and privacy best practices into unified control domains, helping you tailor your cybersecurity program as new technologies like AI emerge. Approached properly, it offers the governance scaffolding to identify, manage, and mitigate the risks in an AI project. Below are the fundamental AI governance considerations to which SCF controls can be aligned.
These aren't just abstract controls. They are tangible, measurable outputs that help answer the big boardroom questions, or at least the questions boards should be asking:
- Who uses AI?
- What AI tools are in use?
- Is data being protected?
- Are our decisions defensible?
Five AI Governance Practices to Develop Today
AI governance complemented by the SCF is best achieved by investing in five fundamental capabilities:
- Comprehensive AI Discovery. You cannot govern what you cannot see. Inventory all AI in deployment, from public services to AI embedded in software and custom LLMs, and attach user identity and data-flow context so you know who is doing what, when, and with which data (a minimal inventory sketch follows this list).
- Context-Aware Risk Assessment. Automate risk scoring for AI services, factoring in model lineage, data-access permissions, custom code-execution capabilities, frequency of use, and compliance requirements. Use the scores to mature your risk decisions and direct oversight resources where they matter most (see the scoring sketch below).
- Onboarding and Approval Workflows. Establish procedures so new AI products and services can be assessed and approved swiftly. Apply SCF domain-specific checklists (Data Security, Data Retention, Compliance, Privacy, and Trustworthiness).
- Runtime Policy Enforcement. Move beyond static policy documents. Design systems that apply limits at the moment of use: warning users, masking sensitive inputs, restricting functionality, and auditing the interaction (see the enforcement sketch below).
- Ongoing Reporting and Audit Readiness. Track where AI is in use across the enterprise and report on adherence, with controls mapped back to the SCF. This gives leadership confidence and eases the burden of an audit.
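
To make the discovery capability concrete, here is a minimal sketch of what an AI inventory record might look like. The Python structure and field names are illustrative assumptions, not an SCF schema; the point is that a single record ties a tool to identities, data classifications, and approval status, which is enough to answer the boardroom questions above.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIToolRecord:
    """One discovered AI tool, tied to identity and data-flow context."""
    tool_name: str      # e.g. a public LLM, an embedded copilot, a custom model
    vendor: str
    users: list[str] = field(default_factory=list)  # identities seen using it
    data_classifications: set[str] = field(default_factory=set)  # e.g. {"confidential"}
    sanctioned: bool = False          # has it passed onboarding/approval?
    last_seen: datetime | None = None

def boardroom_summary(inventory: list[AIToolRecord]) -> dict:
    """Roll the inventory up into the four questions boards should be asking."""
    return {
        "who_uses_ai": sorted({u for r in inventory for u in r.users}),
        "tools_in_use": [r.tool_name for r in inventory],
        "unsanctioned": [r.tool_name for r in inventory if not r.sanctioned],
        "confidential_exposure": [
            r.tool_name for r in inventory
            if "confidential" in r.data_classifications
        ],
    }

# Usage: a single unsanctioned tool already surfaces in every summary field.
inv = [AIToolRecord("ChatGPT", "OpenAI", users=["alice"],
                    data_classifications={"confidential"})]
print(boardroom_summary(inv))
```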
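For the risk-assessment capability, a weighted scoring function is one simple way to turn observed risk factors into a tier that directs oversight. The factors, weights, and thresholds below are assumptions for this sketch, not values prescribed by the SCF.

```python
# Illustrative weights: the factor names and numbers are assumptions.
RISK_WEIGHTS = {
    "unknown_model_lineage": 3,    # cannot trace provenance or training data
    "broad_data_access": 4,        # service can read sensitive repositories
    "custom_code_execution": 4,    # plugins/functions can run arbitrary code
    "high_usage_frequency": 2,     # widely used means a larger blast radius
    "regulated_data_in_scope": 5,  # e.g. PII/PHI subject to compliance regimes
}

def risk_score(flags: set[str]) -> tuple[int, str]:
    """Return a numeric score and a coarse tier used to direct oversight."""
    score = sum(RISK_WEIGHTS[f] for f in flags if f in RISK_WEIGHTS)
    tier = "high" if score >= 8 else "medium" if score >= 4 else "low"
    return score, tier

# Example: a copilot with broad data access handling regulated data.
print(risk_score({"broad_data_access", "regulated_data_in_scope"}))  # (9, 'high')
```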
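And for runtime policy enforcement, the essential move is inspecting an interaction at the moment of use rather than after the fact. This sketch masks two obvious sensitive patterns in a prompt and logs the event for audit; a real deployment would use a DLP engine and identity-aware policy, so treat the patterns and actions here as stand-ins.

```python
import re

# Stand-in patterns for sensitive content; real policies would be far richer.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
API_KEY = re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b")

def enforce(prompt: str, audit_log: list[str]) -> str:
    """Mask sensitive inputs before they leave the boundary, and log it."""
    masked = SSN.sub("[REDACTED-SSN]", prompt)
    masked = API_KEY.sub("[REDACTED-KEY]", masked)
    if masked != prompt:
        audit_log.append("sensitive input masked before reaching the model")
    return masked

log: list[str] = []
print(enforce("My SSN is 123-45-6789, summarize this contract.", log))
print(log)
```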
Avoiding "Security Theater" - Why Administrative Policies With AI Will Not Work
The fact is that most organizations have AI policies now, but few have tied those policies to tangible enforcement. The risk? Policies become optics without operations. Your employees are all too eager to adopt new AI tools. Your developers build agents. Your vendors enable AI in their platforms. And you are left hoping that your security awareness training is sufficient.
As I write about in VisibleOps Cybersecurity, security for security's sake is security theater: armor that looks good on paper and makes a great prop in a war play, but is easily dismantled by the terrors of battle. Runtime enforcement is where the rubber hits the road and paper policies translate into operational reality.
Hope is not a strategy.
The SCF’s power lies in its prescriptive control alignment, which takes you from intention to implementation. But it only works if you build the systems and habits around it.
Why This Deserves a Boardroom Lens
Executives and boards are demanding AI-first strategies. Unsurprisingly, however, those same leaders are the ones ultimately held accountable for any breach, privacy violation, or regulatory failure.
Before leaders can fully embrace AI, they must trust that:
- AI exposure is well controlled;
- Sensitive data isn’t being mishandled;
- Policies are enforced in real time;
- AI decisions are auditable; and
- Third-party tools are compliant.
When you report to your board on cybersecurity maturity, tie the state of AI governance into that report.
Bringing It All Together: Integrity & Controls Must Lead AI Governance
AI can scale your business, but without appropriate guardrails it can also scale your risk. If you cannot define it, you cannot measure it; if you cannot measure it, you cannot control it; and if you cannot control it, you do not own it. If you do not own it, you cannot protect it!
AI governance must be built into the system from the start; it is not a bolt-on feature. It is a new operational discipline that demands visibility from applications to users and across your supply chain, aligned with frameworks like the SCF and leveraging the processes of your existing IT and cybersecurity teams.
Artificial intelligence is no longer just an application; it is a complex, evolving agent. The only way to manage it responsibly is through processes structured around the SCF and executed with operational rigor.
We should not wait for a compliance letter or breach headline to make this happen. Let’s lead with visibility. With governance. With integrity.
Let’s lead with control to reduce risk, while keeping our cybersecurity posture top of mind!