Execution Authority Is the Missing Control Surface in AI Governance
Why Execution Authority—Not Model Alignment—Is the True Governance Boundary
By Chris Ciappa
Founder & Chief Coherence Architect
Samirac Partners
Risk begins when a system is permitted to act.
Most AI governance debates today fixate on model behavior — alignment, bias, explainability, hallucinations, outcome evaluation. These questions matter, but they are not where legal, operational, or national-security risk actually originates.
Over the past year, I submitted the accompanying article to multiple law and technology journals. Rather than wait on slow review cycles while real systems continue to deploy, I’m publishing it here — publicly — because the governance gap it describes is already operational across government, enterprise, and security-relevant environments.
This article makes a simple but underexamined claim:
Execution authority — not intelligence, autonomy, or alignment — is the true governance boundary in AI systems.
A model that generates representations does not, by itself, create legal or institutional risk. That risk is triggered the moment an AI-enabled system is authorized to read, modify, create, deny, escalate, or execute actions in external systems. At that moment, traditional notions of “good behavior” or post-hoc explanation are no longer sufficient. Authority has already been exercised.
Drawing on administrative law, safety-critical system design, and national-security doctrine, the article introduces the concept of architectural admissibility: a pre-execution constraint that determines whether a system is permitted to act at all — independent of confidence, intent, or outcome quality.
This distinction matters because many contemporary AI governance frameworks operate downstream of execution. They emphasize monitoring, auditing, and correction after authority has already been delegated. In low-stakes contexts, that may be tolerable. In rights-bearing, regulatory, or national-security domains, it is not.
Other safety-critical systems do not govern this way. Aviation, medicine, nuclear operations, and financial clearing all enforce permission before action through hard, non-negotiable constraints. AI systems increasingly operate at comparable scale and consequence — yet are often granted execution authority implicitly, through software design rather than explicit delegation.
This piece is not an argument about ethics, alignment techniques, or better prompts. It is an architectural argument about permission.
If you are building, deploying, regulating, or overseeing systems that can trigger real-world outcomes — deny benefits, flag threats, route enforcement, move money, or constrain liberty — this boundary matters more than any benchmark score.
In law, as in architecture, permission always comes first.
What follows is the full article as submitted to numerous law journals and reviews over the past year.
Suggested citation:
Chris Ciappa,
“Execution Authority Is the Missing Control Surface in AI Governance,”
Coherence Architect (Substack), Jan 2026.
Abstract
Current debates in AI governance focus heavily on model behavior, alignment, and outcome evaluation. While these concerns are important, they overlook a more fundamental architectural distinction: the difference between systems that generate representations and systems that are authorized to act in the world.
This Article argues that execution authority—not intelligence, autonomy, or model capability—is the missing control surface in contemporary AI governance frameworks. The moment an AI system is permitted to read, modify, create, or execute actions in external systems, it crosses a qualitative boundary with legal, operational, and accountability consequences. At that boundary, traditional notions of model accuracy, confidence, or post-hoc explanation are no longer sufficient to ensure safety, compliance, or responsibility.
Drawing on principles from administrative law, safety-critical system design, and distributed systems engineering, this Article introduces the concept of architectural admissibility: a pre-execution constraint that determines whether a system is permitted to act at all, independent of outcome quality. It further distinguishes admissibility from downstream monitoring or correction mechanisms, which address errors after authority has already been exercised.
By reframing AI risk around execution authority rather than model behavior alone, this Article provides a unifying framework for understanding liability, auditability, and governance across domains including enterprise automation, regulated decision systems, and national security–relevant applications. The analysis suggests that effective AI governance must operate at the architectural layer where authority is granted, not solely at the behavioral layer where outputs are evaluated.
Introduction
Contemporary AI governance debates are dominated by questions of model behavior: alignment, explainability, bias, robustness, and outcome evaluation. These concerns reflect a natural focus on intelligence—how systems reason, what they produce, and whether their outputs can be trusted.
But intelligence is not the legal or operational fault line.
Across law, national security, and safety-critical systems, risk does not arise when a system generates representations. It arises when a system is permitted to act. The moment an AI-enabled system is authorized to read, modify, create, or execute within external systems, it crosses a governance boundary with immediate legal and institutional consequences.
That boundary is architectural, not behavioral.
Existing AI safety and governance frameworks tend to operate downstream of this threshold. They emphasize monitoring, explanation, and correction after authority has already been exercised. While such mechanisms may mitigate harm, they do not address the more fundamental question of whether a system should have been allowed to act in the first place.
This Article argues that execution authority is the missing control surface in AI governance—and that failure to explicitly constrain it constitutes a structural risk, particularly in national security and rights-bearing domains. To address this gap, the Article introduces the concept of architectural admissibility: a pre-execution constraint that determines whether a system is permitted to act at all, independent of model confidence or outcome quality.
By reframing AI risk around authority rather than intelligence, architectural admissibility provides a clearer foundation for liability, accountability, and governance in environments where automated action carries legal force.
I. Execution Authority Is the Control Surface
American national security, administrative law, and critical-infrastructure governance all rest on a shared premise: authority precedes action. Institutions may deliberate, advise, recommend, or simulate freely, but legal consequence attaches at the moment an actor is permitted to execute.¹
That premise is now under strain.
Across federal agencies, defense and intelligence workflows, financial surveillance systems, border and benefits administration, and sanctions enforcement, automated and semi-autonomous systems are being granted operational latitude that exceeds existing doctrines of delegation. These systems do not merely recommend. They prioritize, trigger, deny, escalate, flag, and route—often without a deterministic, pre-execution constraint that defines whether they are allowed to act at all.
Current AI governance debates tend to frame risk as a model problem: alignment failures, biased datasets, opaque reasoning, hallucinations, or inadequate auditing. Those concerns are real, but they are secondary. The primary legal and policy risk lies elsewhere.
The relevant control surface, from a legal and policy perspective, is execution authority as embedded in system architecture.
Once a system is authorized to change state—deny a benefit, generate a sanctions alert, escalate a threat score, route intelligence for action—the legal consequences are immediate. Due process may be implicated. Statutory obligations may attach. Liability may arise. Constitutional protections may be triggered. None of that depends on whether the system’s internal reasoning was well aligned or well explained.
The system acted.
That is the threshold law has always cared about. And yet, in modern AI-enabled systems, that threshold is frequently left implicit—assumed rather than enforced.
[Figure 1. Execution authority as the system’s true control surface, distinct from model inference.]
II. Why Model-Centric Safety Misses the Legal Risk
The past several years have produced an impressive ecosystem of AI safety practices: explainability tooling, bias audits, red-teaming exercises, alignment benchmarks, governance playbooks, and human-in-the-loop review processes. These approaches share a common orientation—they operate downstream of execution.
They assume that the system is already permitted to act, and that risk is best managed by monitoring, reviewing, or correcting behavior after the fact.
In low-stakes consumer contexts, that may be sufficient. In national-security, regulatory, or rights-bearing domains, it is not.
A. Models Are Not Governance Actors
At a technical level, a large language model is a probabilistic inference engine. On its own, it has no:
persistent identity,
legal authority,
memory of institutional obligations,
ability to bind an organization, or
capacity to execute actions in the world.
A model predicts tokens. It does not govern.
The moment legal risk emerges is when a model is embedded inside a system that provides:
identity and role assignment,
scoped access to tools or APIs,
persistent state and memory,
execution pipelines, and
integration with institutional workflows.
That surrounding architecture—not the model—is what determines whether a system can act in ways that implicate law or policy.
A system can be perfectly aligned at the prompt level and still be dangerous if it is architecturally permitted to execute without a hard, pre-execution constraint.
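To make the boundary concrete, a minimal Python sketch follows. Everything in it is hypothetical: the model stands in for any inference engine, and the execution layer stands in for the surrounding architecture. The point is only that the same recommendation is harmless or consequential depending on what the wrapper is permitted to do, not on anything inside the model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    """Pure model output: a representation, not an action."""
    action: str      # e.g. "deny_benefit"
    target: str      # e.g. a case identifier
    rationale: str

class InferenceOnlyModel:
    """Stands in for any model. On its own it can only describe what it would do."""
    def recommend(self, case: str) -> Recommendation:
        return Recommendation(action="deny_benefit", target=case,
                              rationale="score below threshold")

class ExecutionLayer:
    """The architectural boundary: state changes happen here or not at all."""
    def __init__(self, granted_actions: set[str]):
        # Authority is an explicit, enumerable grant, not an emergent property.
        self.granted_actions = granted_actions

    def execute(self, rec: Recommendation) -> str:
        if rec.action not in self.granted_actions:
            # Refusal is the default state; the model's confidence is irrelevant.
            return f"REFUSED: no execution authority for '{rec.action}'"
        return f"EXECUTED: {rec.action} on {rec.target}"

model = InferenceOnlyModel()
advisory_system = ExecutionLayer(granted_actions=set())              # advice only
executing_system = ExecutionLayer(granted_actions={"deny_benefit"})  # authority granted

rec = model.recommend("case-001")
print(advisory_system.execute(rec))   # REFUSED: same model, no authority exercised
print(executing_system.execute(rec))  # EXECUTED: authority, and legal exposure, now exist
```

The design choice the sketch illustrates is structural: the grant of authority lives in the wrapper and can be audited, scoped, or revoked without reference to the model at all.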
B. Downstream Controls Are Too Late
Most contemporary safety mechanisms activate after execution, or at best during it:
logging and audit trails,
human review queues,
escalation policies,
post-hoc explainability,
policy compliance checks.
These are valuable for accountability and remediation, but they do not prevent harm in high-consequence domains. They are the equivalent of forensic tools rather than access controls.
In legal and security contexts, prevention—not explanation—is the relevant safety primitive.
The difference is familiar in other domains:
access control versus audit logs,
firewalls versus incident reports,
rules of engagement versus after-action reviews.
AI governance that relies primarily on downstream monitoring risks conflating observability with actual control.
III. Architectural Admissibility Is Not New—AI Is the Exception
In safety-critical domains, execution authority is never granted on the basis of intent, intelligence, or confidence alone. It is granted only after admissibility has been structurally enforced.
Commercial aviation does not rely on pilot judgment or good faith to determine whether an aircraft may take off. The system enforces admissibility through airworthiness certification, pre-flight checklists, flight envelopes, air traffic control clearance, and mechanical interlocks that make certain actions physically impossible.² A pilot cannot “decide” to exceed structural limits; the aircraft and airspace architecture prevents it.
Medicine operates under the same principle. A surgeon’s skill does not authorize action by default. Hospitals enforce admissibility through credentialing, scope-of-practice rules, medication access controls, double-verification requirements, and audit trails. A physician cannot prescribe outside their privileges, access restricted drugs, or perform procedures without institutional authorization—regardless of intent or expertise.³
Nuclear power, chemical manufacturing, financial clearing, and critical infrastructure follow similar patterns. These systems assume that intelligent operators will eventually err, drift, or act under pressure. Safety is therefore engineered upstream, through deterministic constraints that block inadmissible states before execution occurs.⁴
Artificial intelligence systems are now being deployed into comparable—and in some cases more fragile—domains: financial decisioning, benefits adjudication, surveillance, targeting, infrastructure control, content moderation at scale, and automated enforcement.⁵ Yet unlike other safety-critical fields, AI systems are often granted execution authority without a formally defined admissibility layer embedded in system architecture.
Instead, the industry relies on probabilistic safeguards: prompt instructions, policy text, reinforcement learning, and post-hoc review. These mechanisms attempt to influence behavior after authority has already been granted. They are not admissibility controls; they are behavioral nudges.
This distinction matters because, in legal and institutional contexts, liability and legality are triggered by permission rather than intent.
IV. When the Consequences Are Systemic, Drift Becomes a National Security ELE
In national security contexts, the consequences of inadmissible execution are not merely cumulative—they are nonlinear. Small authorization failures, when scaled through automated systems, can produce effects that overwhelm correction mechanisms before human institutions can respond.
Traditional safety doctrine distinguishes between localized failure and systemic failure. Aviation accidents are catastrophic but bounded. A single aircraft crashes; the system pauses, investigates, and adapts. National security systems increasingly mediated by AI do not fail this way. They fail at speed, across domains, and often invisibly.
Consider intelligence analysis and threat prioritization. AI-assisted systems are already used to triage signals intelligence, flag anomalous behavior, prioritize watchlists, and recommend escalation paths. When these systems drift—not by hallucinating facts, but by subtly expanding what constitutes “actionable threat”—the result is not a single bad decision. It is a systematic reclassification of risk that reshapes downstream enforcement, surveillance, and resource allocation. By the time drift is detected, thousands of decisions may already have been executed under an inadmissible authority frame.
Targeting and military decision-support systems pose an even sharper risk. In these environments, the distinction between recommendation and execution often collapses under operational pressure. A system that is formally “advisory” but structurally integrated into kill chains or engagement workflows can exert de facto execution authority. If admissibility constraints are not enforced architecturally—for example, through immutable human authorization gates, scope-limited action envelopes, or system-level refusal states—drift transforms from a performance issue into a strategic liability.
Financial sanctions and economic warfare provide another illustration. Automated systems increasingly assist in sanctions enforcement, asset freezing, export controls, and transaction monitoring. These systems operate at global scale and directly affect sovereign actors, corporations, and civilians. A drifted admissibility boundary—such as expanding the criteria for flagging or enforcement without corresponding legal authorization—can trigger cascading diplomatic incidents, retaliatory measures, or market instability. Unlike traditional enforcement actions, these effects unfold algorithmically, often before political leadership is aware that authority has been exceeded.⁶
Information operations and influence campaigns further complicate the picture. AI systems used for content moderation, narrative analysis, or counter-disinformation efforts frequently operate under moral or strategic imperatives rather than clearly defined legal mandates. When such systems drift, enforcement decisions may be framed as necessary for “security” or “stability” without clear attribution of authority. The result is not overt censorship, but structural narrowing of permissible speech enforced at scale—a failure mode that undermines democratic legitimacy while remaining difficult to contest procedurally.
In all of these cases, the risk is not that AI systems will make obviously wrong decisions. The risk is that they will make plausible, internally coherent decisions that exceed their authorization, repeatedly, before institutional safeguards can intervene.
This is the defining characteristic of an extinction-level event (ELE) in institutional terms: not immediate collapse, but irreversible degradation of trust, legitimacy, or strategic position.
National security doctrine already recognizes this pattern in other domains. Nuclear command-and-control systems are designed to make unauthorized launch structurally impossible, not merely unlikely. Intelligence oversight regimes exist to prevent mission creep from silently expanding authority. Financial clearing systems impose hard settlement constraints to prevent cascading default.
Current U.S. national-security AI governance frameworks emphasize principles, oversight, and post-deployment controls, but generally stop short of requiring deterministic, pre-execution admissibility constraints comparable to those used in other safety-critical domains.⁵ They rely on policy statements, audit logs, or retrospective review—mechanisms that assume harm unfolds slowly enough to be corrected.
That assumption is increasingly difficult to sustain in high-tempo, automated environments.
When execution authority is delegated to systems that operate continuously, adaptively, and at scale, admissibility must be enforced before execution, not after harm. Drift in these systems is not a bug; it is an expected outcome of optimization under pressure. Treating it as a model alignment problem rather than an architectural failure misdiagnoses the risk.
In national security environments, this misdiagnosis has strategic consequences.
V. Why Existing Governance Frameworks Fall Short
Current AI governance discussions often borrow language from ethics, risk management, or software quality assurance. Frameworks such as the NIST AI Risk Management Framework assume that better incentives, better prompts, or better monitoring can compensate for the absence of hard execution constraints.⁷
No comparable safety-critical field accepts this premise.
There is no “ethical aviation” framework that substitutes for airworthiness certification. There is no “responsible nuclear operator” policy that replaces physical containment. Governance in these domains is inseparable from architecture.
Absent treatment of AI systems as authority-bearing actors whose execution must be rendered admissible by design, policy efforts are likely to lag operational reality—and post-incident accountability will arrive too late to matter.
VI. Architectural Admissibility: A Missing Doctrine
A. Invariants as Non-Negotiable System Constraints
Any system entrusted with execution authority necessarily operates against a set of conditions that must hold before action is permitted—referred to here as invariants. Invariants are not aspirational goals, ethical preferences, or narrative commitments. They are hard constraints.
In mature safety-critical domains, invariants are familiar:
In aviation, an aircraft may not depart without meeting defined airworthiness and clearance conditions.²
In medicine, a procedure may not proceed without consent, credentialing, and contraindication checks.³
In nuclear operations, multiple invariant thresholds must be satisfied before escalation is possible.⁴
These constraints are not debated at runtime. They are enforced structurally.
In AI governance discussions, similar constraints are often described obliquely—as safeguards, guardrails, or policies—but rarely named or enforced as invariants.⁸ This ambiguity matters. Values statements and ethics frameworks articulate desired behavior; invariants define permitted action.
In this context, architectural invariants have several defining characteristics:
Non-ideological: they do not encode value preferences or moral narratives.
Non-interpretive: they do not rely on contextual judgment at the moment of action.
Pre-execution: they are evaluated before authority is exercised, not after.
Examples of admissibility-relevant invariants include:
whether a system is legally delegated authority in the relevant domain;
whether due-process prerequisites have been satisfied;
whether required human authorization has occurred;
whether execution would exceed statutory or regulatory scope.
Naming invariants explicitly clarifies that architectural admissibility is not a theory of ethics or alignment. It is a doctrine of permission.
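A minimal sketch of what such invariants can look like in code follows. The predicate names mirror the examples above; the context keys are hypothetical placeholders, not a real compliance or authorization API. The only feature that matters is that every invariant must hold before execution, with no weighing or runtime interpretation.

```python
from typing import Callable

Invariant = Callable[[dict], bool]

def authority_delegated(ctx: dict) -> bool:
    return ctx.get("delegation_instrument") is not None

def due_process_satisfied(ctx: dict) -> bool:
    return bool(ctx.get("notice_given")) and bool(ctx.get("hearing_offered"))

def human_authorization_present(ctx: dict) -> bool:
    return ctx.get("authorizing_official") is not None

def within_statutory_scope(ctx: dict) -> bool:
    return ctx.get("action") in ctx.get("authorized_actions", ())

INVARIANTS: list[Invariant] = [
    authority_delegated,
    due_process_satisfied,
    human_authorization_present,
    within_statutory_scope,
]

def admissible(ctx: dict) -> bool:
    """Every invariant must hold; there is no scoring, weighing, or judgment."""
    return all(inv(ctx) for inv in INVARIANTS)

def act(ctx: dict) -> str:
    # Admissibility is evaluated before execution, independent of model confidence.
    if not admissible(ctx):
        return "BLOCKED: inadmissible; the action never reaches execution"
    return f"EXECUTED: {ctx['action']}"

# A request with no delegation instrument and no human authorization is blocked
# outright, however well-reasoned the underlying recommendation may be.
print(act({"action": "deny_benefit", "authorized_actions": ("deny_benefit",)}))
```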
VII. Architectural Admissibility: Permission as a First-Class Constraint
What is missing from current AI governance frameworks is a concept long understood in law but rarely implemented in software systems: admissibility.
Architectural admissibility asks a simple but foundational question:
Is this system permitted to act in this domain, under these conditions, at this moment?
Admissibility is not about how well a system reasons. It is about whether it is allowed to act at all.
A. Admissibility vs. Alignment
Alignment concerns whether a system’s outputs are consistent with policy, values, or objectives. Admissibility concerns whether an output may cross the boundary from suggestion to execution.
A denial letter that is ethically phrased but improperly authorized still violates due process if sent.
A sanctions alert that is statistically reasonable but triggered outside delegated authority still creates legal exposure.
Alignment without admissibility risks producing the appearance of governance without its functional substance.
B. Existing Law Already Thinks This Way
Administrative law, agency doctrine, and national-security authorities already recognize that:
delegation must be explicit;
authority must be scoped;
actors must have legal competence to act; and
liability follows permission, not intent.¹
What has changed is not the law, but the architecture of systems that now embed inference engines inside execution pipelines. Authority is being delegated implicitly through software design rather than explicitly through statute or regulation.
Until systems enforce admissibility as a first-class architectural constraint, institutions will continue to grant de facto authority without de jure clarity.
VIII. Drift and Admissibility Are Not the Same Problem
AI risk discourse frequently conflates two distinct failure modes: drift and admissibility. They are related, but they operate at different layers and require different interventions.⁹
A. Drift
Drift describes the gradual misalignment of system behavior over time. It can result from:
changing data distributions,
evolving contexts,
feedback loops,
retraining effects, or
shifting institutional goals.
Drift is observable only after a system has already acted and accumulated state. It is addressed through monitoring, evaluation, and corrective feedback.
B. Admissibility
Admissibility is a pre-execution property. It defines the hard boundary of what a system may do regardless of how well it performs.
If drift is about behavior over time, admissibility is about permission at the moment of action.
[Figure 2. Drift correction after action versus admissibility preventing action.]
| Dimension | Drift | Admissibility |
| --- | --- | --- |
| Timing | Post-execution | Pre-execution |
| Concern | Behavioral deviation | Permission to act |
| Primary risk | Gradual misalignment | Immediate legal/security breach |
| Control | Monitoring & correction | Structural prevention |
The distinction matters because drift correction reacts; admissibility prevents. In national-security and rights-bearing systems, prevention is categorically stronger than correction.
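The timing difference can be shown in a short sketch, again with hypothetical names and thresholds: a drift monitor can only raise a flag after executed decisions have accumulated, while an admissibility gate blocks each unpermitted action before it runs.

```python
class DriftMonitor:
    """Post-execution: deviation is observable only after actions have run."""
    def __init__(self, baseline_flag_rate: float, tolerance: float, min_samples: int):
        self.baseline = baseline_flag_rate
        self.tolerance = tolerance
        self.min_samples = min_samples
        self.flags = 0
        self.total = 0

    def record(self, was_flagged: bool) -> bool:
        self.total += 1
        self.flags += int(was_flagged)
        if self.total < self.min_samples:
            return False  # drift cannot even be assessed until decisions accumulate
        observed = self.flags / self.total
        return observed - self.baseline > self.tolerance

class AdmissibilityGate:
    """Pre-execution: each action is blocked unless explicitly permitted."""
    def __init__(self, permitted_actions: set[str]):
        self.permitted_actions = permitted_actions

    def allows(self, action: str) -> bool:
        return action in self.permitted_actions

monitor = DriftMonitor(baseline_flag_rate=0.02, tolerance=0.05, min_samples=100)
gate = AdmissibilityGate(permitted_actions={"route_for_review"})

for action, flagged in [("route_for_review", True), ("freeze_assets", True)]:
    if not gate.allows(action):
        print(f"BLOCKED before execution: {action}")   # prevention
        continue
    print(f"executed: {action}")
    if monitor.record(flagged):
        print("drift detected, after decisions have already run")  # correction
```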
IX. Prompt Chaining Is Not System Governance
A related source of confusion is the tendency to equate prompt chaining or tool orchestration with system architecture.
Using models through user interfaces or APIs—no matter how sophisticated the prompts—does not constitute governance.
Prompt chaining:
passes requests,
manipulates data,
delegates to external tools,
relies on third-party execution.
It does not:
establish authority boundaries,
enforce admissibility,
manage institutional liability,
control execution state.
[Figure 3. Prompt chaining versus application architecture with execution and admissibility layers.]
In legal terms, prompt chaining produces advice. Architecture determines action.
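A short sketch of the contrast, assuming hypothetical tool and chain names: in the prompt chain, the model's suggested tool call is forwarded straight to an executor, so there is no authority boundary to enforce; in the governed application, the tool is reachable only through a layer that decides whether execution is permitted at all.

```python
def send_denial_letter(case_id: str) -> str:
    """A real-world state change: once called, authority has been exercised."""
    return f"denial letter sent for {case_id}"

def prompt_chain(model_output: dict) -> str:
    # Prompt chaining: the model's suggested tool call is forwarded directly.
    # Nothing here establishes authority, scope, or an admissibility check.
    return send_denial_letter(model_output["case_id"])

class GovernedApplication:
    """Architecture: the tool is reachable only through an admissibility layer."""
    def __init__(self, execution_authorized: bool):
        self.execution_authorized = execution_authorized

    def handle(self, model_output: dict) -> str:
        if not self.execution_authorized:
            # The output remains advice; no external state changes.
            return f"ADVISORY ONLY: would deny {model_output['case_id']}"
        return send_denial_letter(model_output["case_id"])

suggestion = {"case_id": "case-001"}
print(prompt_chain(suggestion))                        # executes by construction
print(GovernedApplication(False).handle(suggestion))   # advice only, by design
```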
X. Extreme-Loss Risk and National-Security Consequences
In some domains, failure does not merely degrade performance—it produces extreme loss events. Aviation accidents, medical errors, and nuclear incidents are governed with this reality in mind: once certain thresholds are crossed, recovery is no longer possible.
AI-enabled execution systems increasingly operate in analogous national-security contexts.
Examples include:
automated sanctions or export-control escalation that irreversibly triggers diplomatic or economic retaliation;⁶
intelligence tasking systems that mis-prioritize assets, exposing sources or methods;
critical-infrastructure controls that interrupt energy, communications, or transportation flows;
automated threat-scoring systems that constrain movement, access, or liberty without timely recourse.
In these environments, post-hoc review is insufficient. The cost of a single inadmissible action may exceed the cumulative risk of years of gradual drift.
This is why other safety-critical sectors insist on invariant-based admissibility rather than behavioral optimism. AI systems operating at national-security scale warrant the same discipline.
XI. Policy Implications: Governing Before Action
Re-centering AI governance on admissibility does not require new theory. It requires translating existing legal intuitions into system design requirements.
A. For Policymakers and Agencies
Distinguish advisory from executing systems.
Require explicit architectural constraints before delegation.
Treat execution permission as a regulated surface.
Align liability analysis with system design rather than post-hoc explanation.
B. For System Designers
Enforce pre-execution checks as code, not policy.
Separate inference from authority.
Make unsafe actions structurally impossible, not merely discouraged.
Conclusion: Permission Is the Primitive
In many institutional contexts, AI systems introduce risk less through faulty reasoning than through the delegation of execution authority without enforceable constraint.
Until governance frameworks move upstream—from auditing behavior to enforcing admissibility—institutions will continue to grant execution authority implicitly rather than through deliberate architectural design.
In law, as in architecture, permission always comes first.
Footnotes
1. Congressional Research Service, Agency Delegation and Subdelegation Authority, R44954, https://crsreports.congress.gov/product/pdf/R/R44954.
2. Federal Aviation Administration, Airworthiness Certification and Operational Control Requirements, https://www.faa.gov/aircraft/air_cert.
3. American Medical Association, Medical Credentialing and Scope-of-Practice Frameworks, https://www.ama-assn.org/practice-management/medical-credentials.
4. Congressional Research Service, Nuclear Command-and-Control Safety and Authorization Doctrine, R44891, https://crsreports.congress.gov/product/pdf/R/R44891.
5. U.S. Department of Defense, Responsible Artificial Intelligence Strategy (2023), https://www.defense.gov/News/Releases/Release/Article/3263249/department-of-defense-adopts-responsible-ai-strategy/.
6. U.S. Department of the Treasury, Office of Foreign Assets Control, Compliance Guidance, https://ofac.treasury.gov/compliance.
7. National Institute of Standards and Technology, AI Risk Management Framework, https://www.nist.gov/itl/ai-risk-management-framework.
8. Samirac Partners, Start Reading (architectural admissibility and drift overview), https://www.samirac.com/start-reading.
9. Coherence Architect (Substack), Start Here: The Drift Stack and Coherence (formalization of ordered coherence degradation across complex systems), https://coherencearchitect.substack.com/p/start-here-the-drift-stack-and-coherence.