What Do You Mean, 'AI Governance'?
How one phrase came to mean opposite things in the same room.
On February 14, 2025, Peter Kyle stood at the Munich Security Conference and announced that the United Kingdom’s AI Safety Institute would be renamed the AI Security Institute. Same building, staff, and budget. The mandate narrowed: chemical and biological weapon misuse, cyberattacks, and child sexual abuse material. Out of scope: bias, freedom of speech.
Four months later, the United States did its version. The Commerce Department renamed its AI Safety Institute the Center for AI Standards and Innovation (CAISI). Howard Lutnick said the rebrand removed “censorship and regulations” used “under the guise of national security.” NIST guidance instructed CAISI partners to drop references to “AI safety,” “responsible AI,” and “AI fairness.”
The buildings did not change, but the contents did.
If you watched the Bletchley Declaration in November 2023, the Seoul Summit in May 2024, and the Paris AI Action Summit in February 2025 in sequence, you watched the phrase “AI safety” walk into a meeting room, sit down, and discover it had been holding two conversations the whole time. At Paris, fifty-eight countries signed a Statement on Inclusive and Sustainable AI. The United States and the United Kingdom refused. JD Vance, on the dais: “The AI future is not going to be won by hand-wringing about safety.”
That moment is diagnostic. The word everyone had been using at every summit for two years did not survive the trip from Bletchley to Paris intact: it drifted.
A Magnet Word
“AI safety” is one symptom. The bigger word is “AI governance.”
Some words pull people into different interpretations while sounding like agreement. They keep meetings moving, people signing contracts, and policies passing. Then, six months or six years later, someone opens the deliverable and discovers that the room had been using the same syllables to describe four different jobs.
Watch what happens when you pull on “AI governance.”
In Brussels, it means the European Commission’s coordinating function across twenty-seven member states under the AI Act. In Washington, until January 2025, it meant Executive Order 14110; after January 2025, it meant Executive Order 14179, signed three days after 14110 was rescinded. In Sacramento, it meant SB 1047 (until Governor Newsom vetoed it in September 2024); then it meant SB 53, which kept transparency reporting and dropped the kill switch. In Denver, under the Colorado AI Act, it means a developer’s duty of “reasonable care” to prevent algorithmic discrimination. In an Anthropic Responsible Scaling Policy, it means an internal Responsible Scaling Officer, a set of capability thresholds, and a Long-Term Benefit Trust. In a Deloitte board roadmap, it means a skills matrix and a refresh schedule. At AI Now, it means antitrust and worker organizing. At UNESCO, it means human rights and dignity grounded in 194-state agreement.
These are different objects. They share a vocabulary and almost nothing else.
What I Watched for Twenty-Three Years
I spent twenty-three years inside institutions where ambiguous language carried operational consequence. Marine Corps construction sites. Logistics commands across four countries. Corporate facilities portfolios in three states. A 23-office closure managed across HR, Legal, Finance, IT, and Business Development through five years and a pandemic. The pattern showed up every time. The words looked clear. The room nodded. The cost arrived later.
In 2005, I took over the facilities portfolio for 3D Marine Logistics Group in Okinawa: 300 facilities, three million square feet, 30 locations across Japan, Korea, Hawaii, and Guam, 4,600 personnel. I reported to a Chief of Staff who used the word “ready” daily. Every commander in the chain used “ready” daily. Nobody stopped to verify that “ready” meant the same operational state to the building maintenance officer, the procurement officer, and the host-nation liaison. We coordinated as if we agreed. When we didn’t, projects slipped.
I left active duty in 2016, finished a doctorate at USC Rossier in Organizational Change and Leadership in 2023, and spent the next two years on a scoping review of 131 sources across eight disciplines: healthcare communication, aviation crew resource management, cognitive science, military operations, organizational psychology, linguistics, high-reliability organizing, and team science. The literature confirmed what I had watched in every room I had ever worked in. The default state of human communication is drift. Shared understanding takes work. The work is to catch the drift before the budget or the policy pays for it.
Eight Rooms, Eight Words
Eight communities use “AI governance.” They mean different things.
Regulators in Brussels mean rights protection through ex ante conformity assessment and member-state market surveillance. Regulators in Washington, under the current administration, mean removing barriers to American AI dominance.
Frontier-AI labs (Anthropic, OpenAI, Google DeepMind) mean an internal Responsible Scaling Policy or Frontier Safety Framework: their own capability thresholds, their own auditors, their own stopping rules.
Big Four consultants mean a board-level oversight discipline producing model inventories, third-party risk management, and ISO 42001 mappings against the NIST AI Risk Management Framework.
Alignment researchers (Bengio, Hinton, Russell, the twenty-five-author Science paper from May 2024) mean binding licensing, mandatory pre-deployment testing, and one-third of AI R&D budgets devoted to safety. Closer in shape to nuclear-style controls than to NIST-style voluntary frameworks.
Civil-society scholars (Buolamwini, Gebru, Bender, Whittaker, Crawford, Benjamin) mean redistribution of power. Antitrust. Worker organizing. Bans on weaponized facial recognition. Refusal of the surveillance business model. Sarah Myers West, in AI Now’s 2024 paper Power and Governance in the Age of AI: “AI as we know it today is a creation of concentrated industry power.”
DoD and federal contractors mean a Chief AI Officer role under OMB M-24-10, an inventory of “rights-impacting” and “safety-impacting” AI systems, and compliance with the five Responsible AI principles (Responsible, Equitable, Traceable, Reliable, Governable).
UNESCO and the African Union mean development capability, digital sovereignty, and human rights anchored in 194-state agreement. From outside the Brussels-Washington axis, “AI governance” often reads as a Western export imposed on jurisdictions that did not draft it.
Same word. Eight rooms. Eight non-overlapping objects.
Where Drift Turns Into Cost
The fault lines are where decisions actually turn.
Catastrophic-risk control versus algorithmic-discrimination prevention. Bengio, Hinton, GovAI, METR, and the Future of Life Institute treat governance as preventing extinction-class harms from frontier systems. Buolamwini, Gebru, the Algorithmic Justice League, and the Colorado AI Act treat governance as preventing harm to people who are already being injured. The two camps publicly disagree about which deserves the word.
Voluntary risk management versus enforceable regulation. NIST AI RMF, the Hiroshima Code of Conduct, the OECD Principles, and most corporate Responsible Scaling Policies are voluntary. The EU AI Act, the Colorado AI Act, and NYC Local Law 144 carry teeth; SB 1047 would have, had it survived the veto. The EU version comes with fines up to 7 percent of worldwide turnover. The voluntary versions come with press releases.
Innovation enablement versus power redistribution. Executive Order 14179, signed January 23, 2025, defines governance as removing barriers to “America’s global AI dominance.” AI Now’s 2025 Artificial Power report defines governance as breaking up tech oligarchy. Both call themselves AI governance. Both cite the same underlying technology. They share no policy primitives.
The veto of SB 1047 turned on which definition was operative. The Paris no-signs turned on which definition was operative. The CAISI rebrand turned on which definition was operative. When real consequences attach to a magnet word, somebody’s definition wins, and somebody’s loses, and the people whose definition lost rarely get to vote on which one was on the ballot.
Four Questions to Run in Any Room
Here is the test I run in any room where the phrase “AI governance” is used.
Whose harm are we governing? The public’s, the company’s, the state’s, or humanity-at-large?
Who pays if we are wrong? Users, taxpayers, shareholders, or future people?
Who enforces? A regulator, a board, an auditor, or a market?
Who decides what “wrong” means? Courts, voters, engineers, or executives?
If two people in the room give different answers to any of these questions, you are looking at a magnet word event. Stop the meeting. Pin the term.
The one-minute script:
“When we say ‘AI governance’ in this conversation, are we talking about (a) preventing catastrophic harm from frontier models, (b) preventing discrimination in consequential decisions, (c) ensuring board-level oversight of enterprise AI use, or (d) protecting national competitiveness? I want to make sure we are working from the same definition before we agree on next steps.”
That sentence costs sixty seconds. It saves whatever the consequence would have cost on the back end.
Governance Worth the Name
Governance worth the name names four things.
It names the harm: who gets hurt, in what way, on what timeline, with what evidence.
It names the actor: which institution, which person, which budget line is responsible for the decision that creates the harm.
It names the enforcement: which regulator, court, auditor, or contracting officer can stop the harm or reverse it.
It names the redress: what the injured party gets when the harm happens, who pays for it, and how soon.
Anything that calls itself governance while omitting any of these four is, in Emily Bender’s phrase, marketing.
Under that test, NYC Local Law 144 is governance: narrow, contested, and badly enforced (the New York State Comptroller’s December 2025 audit found 75 percent of complaints misrouted), but the four parts are present. Anthropic’s Responsible Scaling Policy, version 3.0, fails the test. It names the harm and the actor. It does not name the enforcement (Anthropic audits Anthropic) or the redress (no injured party has standing). The same is true of OpenAI’s Preparedness Framework and Google DeepMind’s Frontier Safety Framework. These are commitments. Calling them governance softens the word until it cannot do the work.
What This Means
The most precise users of the term “AI governance” right now are working from outside the rooms where it is operationalized. Bender and Hanna’s The AI Con (May 2025). Whittaker’s NDSS keynote. Buolamwini’s Algorithmic Justice League. AI Now’s Artificial Power report. They name the harms. They name the actors. They describe what enforcement and redress would look like. Do they get invited to write the policies?
The companies that draft Responsible Scaling Policies fund the academic centers that evaluate them. Consulting firms that sell governance services write the frameworks regulators cite. The U.S. and U.K. governments rebrand their safety institutes without legislative input. The word “AI governance” sits in the middle of all of it, doing the work of agreement while the underlying definitions move in opposite directions.
If you are in a meeting next week where someone uses the phrase, ask which version they mean. If they cannot answer in one sentence, you are in a vocabulary problem pretending to be a governance conversation. Somebody is going to pay for that mistake. The only question is whether you saw it coming.
Jerry W. Washington, Ed.D., is a retired U.S. Marine Corps Master Sergeant with twenty-three years of service. He holds a doctorate from USC Rossier in Organizational Change and Leadership and is the author of Simulated Realities: Generative AI and the Remanufacture of Professionalism (2023) and a 131-source scoping review on shared meaning under pressure (working paper, 2026). He is also an independent advisor on AI readiness for education and workforce systems: K–12 districts, community colleges, workforce boards, and veteran-service organizations. He publishes weekly at What Time Binds. Engagements and writing at jerrywwashington.com.