Building the Governance Layer: A Practitioner’s Guide to Intra-Enterprise Agentic Governance


Your agent deleted 200 files from a shared drive last Tuesday. It had permission. The files matched the pattern. But some were active deal documents. Who authorized “matches the pattern” as sufficient criteria for a destructive action, and is that authorization written down anywhere?

Your eval pipeline says V2 outperforms V1. The benchmarks improved. But when compliance asks for the certification record that proves V2 is production-ready with documented evidence that survives a regulatory audit, what do you hand them?

Your coding agent committed a PR to a shared repo at 2am and kicked off the deployment to Production. It had the right permissions. But who authorized autonomous commits to shared infrastructure outside business hours, and where is that authorization recorded?

These are the kinds of incidents now playing out in enterprises running 50+ agents in production, and the numbers confirm the exposure: 80% of enterprises have agent use cases in development, but only 14% have governance frameworks in place. Just 2% report their agents are governed in an always-on, consistent manner.¹ Gartner projects over 40% of agentic AI projects will be canceled by end of 2027, primarily due to governance and risk control failures.²

In the first article in this research series, I outlined the governance gap in agentic AI and how historical patterns predict eventual standardization. In this article, I focus on implementing agentic governance within a single enterprise and the unresolved challenges that come bundled with it. I’d consider this more of a blueprint than a white paper.

Five Layers, Not One Problem

Intra-enterprise agent governance is not a single challenge. It decomposes into five concentric layers, each with different tools, different audiences, and different maturity levels. I look at it as a matryoshka: each layer nests inside the next and depends on the one inside it being functional. Or think of it as defense in depth: each layer catches what the inner one missed.

This decomposition matters because most governance discussions conflate all five into a single “AI governance” problem and then propose solutions that address Layer 3 while leaving Layers 1, 2, and 4 untouched. The result: observable agents that nobody can actually control. You can see what happened but you could not have prevented it.

In this article, I cover Layers 1 through 4. Layer 5 (cross-enterprise governance, the domain-specific A2A profile, the FINOS working group charter) is a separate problem that requires the first four layers to be functional. I’ll cover that in the next article.

Layer 1: What Practitioners Are Building

Layer 1 is where governance starts, and it is the layer that enterprise frameworks miss entirely. FINOS AI Governance Framework v2.0, the most credible published standard for agentic AI governance in financial services, has 6 agentic-specific controls.³ Only 2 of them touch Layer 1, and only partially. FINOS targets enterprise infrastructure, not individual developer workflows.⁴

This is why practitioners are building their own governance. Not because they want to, but because Layers 2 and 3 do not solve their problem. Across a fragmented landscape, five patterns converge:

Structural scaffolding versus contractual governance: This distinction is architecturally significant and not documented in existing literature. CLAUDE.md / AGENTS.md serve as lightweight structural documents describing project architecture, directory layout, and domain instructions. They are the map.

Governance rules belong in a separate artifact: a dedicated rules file (Rules.md) with explicit contractual framing, referenced from the scaffolding documents. The rules are the contract. A common failure mode is cramming both into one file, which dilutes the contractual force of the rules and makes the scaffolding harder to maintain.

Periodic re-injection: Rules injected once at session start degrade over long-running sessions. The context window fills with task-specific content, and the LLM loses track of governance constraints. This is the needle-in-the-haystack problem applied to governance. I’ve seen it firsthand across hundreds of sessions over the past year. I know the count because my session handoffs produce a numbered document for every single one, each with YAML frontmatter tracking files changed, technical decisions with rationale, accomplishments, learnings, and priorities for the next session. That is a first-person governance audit trail, built by a practitioner who needed it before any framework offered it. The lesson: agents ignore rules in extended contexts even with contractual framing at session start. The fix is to re-inject governance rules at context boundaries, specifically sub-agent spawns, major task transitions, and context window pressure points. Single-injection governance is insufficient for sustained sessions.
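The re-injection pattern can be sketched in a few lines. This is a minimal illustration, not any tool's actual API: the event names, the `maybe_reinject` helper, and the message shape are all assumptions standing in for whatever hooks your harness exposes.

```python
# Sketch: re-inject the governance contract at context boundaries.
# Event names and message shape are illustrative assumptions.
from pathlib import Path

RULES = Path("Rules.md")  # the separate contractual artifact
REINJECT_EVENTS = {"subagent_spawn", "task_transition", "context_pressure"}

def maybe_reinject(event: str, messages: list[dict]) -> list[dict]:
    """Prepend the binding rules again when a context boundary is crossed."""
    if event not in REINJECT_EVENTS:
        return messages
    contract = {
        "role": "system",
        "content": f"BINDING PROTOCOL (re-injected at {event}):\n"
                   + RULES.read_text(),
    }
    return [contract, *messages]
```

The point of the sketch is the trigger set: re-injection is event-driven, not time-driven, because the boundaries where rules degrade are structural (spawns, transitions, pressure), not chronological.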

Hook-based enforcement: Claude Code’s hook system (PreToolUse, PostToolUse, SessionStart, SessionEnd, SubagentStart/Stop) provides deterministic governance enforcement that does not rely on the model remembering a rule.⁶ Think of it as the GitHub Actions webhook pattern applied to agent behavior: every tool invocation passes through a policy gate before execution. This is how you enforce “never write PII to logs” or “escalate to human for any action above $50K” consistently at the harness level, without depending on the LLM to comply.

I’ve been building this pattern into an open-source framework called The Bulwark. It ships with a three-layer enforcement model: Rules (injected at SessionStart as binding protocol), Hooks (PostToolUse blocking hooks for typecheck/lint/build gates), and Pipelines (multi-agent workflows triggered by hooks). Edit a code file and a blocking hook fires a code review pipeline: parallel sub-agents for security analysis, type safety, linting, and coding standards. Edit a test file and a separate test-audit pipeline runs AST pre-processing and mock detection (no mocking the system under test, verify real outputs, integration tests hit real systems). SessionEnd hooks generate session handoff documents as audit artifacts: files touched, technical decisions with rationale, verification status, lessons learned. Every sub-agent writes three-tier output (YAML log, orchestrator summary, diagnostic YAML) creating a layered audit trail. It’s early-stage (six GitHub stars), but the architecture validates that Layer 1 governance can be deterministic, auditable, and embedded in the development workflow itself.

Knowledge persistence and decision lineage: The context window is finite, but governance requires long-term consistency. Andrej Karpathy’s compiled wiki idea went viral for good reason: it addresses the fundamental problem of maintaining state outside the context window.⁸ᵃ I’ve been working on this as a first-class personal project for the past five months (long before Karpathy put the idea to paper), devoting every free hour outside of work to it. My framework for context and knowledge management, CLEAR (currently being dogfooded to shake out bugs), takes the compiled wiki concept further: its architecture tests a reverse file-knowledge index that links code back to the knowledge entries and decisions that informed it, creating a decision lineage primitive. When someone asks “why was this code written this way?”, the answer traces from the file through the knowledge graph to the original design decision. This is the Layer 1 end of a bridge that no tool yet spans completely.

Version everything: Governance artifacts (policies, prompts, knowledge bases) must be tracked and versioned with the same discipline as code. When a policy changes, you need impact analysis. When a prompt is updated, you need rollback capability. This pattern is converging across tools but remains manually managed in most organizations.

The gap at Layer 1 is that all of this is fragmented. Individual practitioners cobble together hooks, rules, and knowledge management from available primitives. There is no Layer 1 governance product. I know because I went looking, didn’t find one, and ended up building two. That is not a brag; it is evidence of a gap. When a product leader has to build their own governance tooling because Layers 2 and 3 don’t solve the problem, the market is telling you something.

Layer 2: What Platforms Provide (and What They Don’t)

Layer 2 is where most practitioners assume governance lives. It does not. Layer 2 provides permission, but permission is not governance.

The distinction is load-bearing. Permission asks “can the agent access this tool?” Governance asks “should the agent use this tool right now, and can I prove the answer to that question later?” Every adjacent regulated domain, from military autonomous systems to clinical decision support to algorithmic trading, separates these two concepts.⁹˒¹⁰˒¹¹ RBAC and ABAC handle permission. Neither handles authority. No published framework unifies the two for agentic AI.

Layer 2 platforms fall into three tiers:

Structural enforcement platforms (Claude Code, Gemini CLI) use OS-level sandboxing (Seatbelt on macOS, Bubblewrap and seccomp on Linux) to isolate agent actions at the operating system level.¹² The agent cannot bypass the sandbox. Governance enforcement is deterministic, not probabilistic.

Configuration-driven platforms (Copilot, Amazon Q) enforce client-side policies and allow-lists. Policies can be set organizationally and propagated to users. Enforcement is real but varies by configuration.

Advisory platforms (Cursor, Windsurf) rely on the LLM interpreting rules files. There is no enforcement guarantee. The agent is a predictor, not a policy engine. A rules file in an advisory platform is a strong suggestion, not a contract.

The governance gap at Layer 2: Most platforms conflate “can it?” with “should it?” A platform can grant an agent permission to call a database write tool. No platform currently evaluates whether the agent should call that tool in this specific context, with this specific data, at this specific moment. That evaluation requires authority, not just permission, and authority is where the governance model breaks down.

The implication is uncomfortable but precise: standardization alone doesn’t resolve who actually holds authority when an action commits across systems. Without that, you get interoperability, but not enforceable control.

Layer 3: What Production Governance Looks Like

Layer 3 is where the tooling is most mature and where a critical misconception persists: observability is not enforcement.

The Missing Piece: An Action Risk Taxonomy

What nobody has standardized is the classification of agent actions by risk level. OWASP’s Agentic Top 10 for 2026 catalogs threat categories: goal hijacking, tool misuse, identity abuse, rogue agents, memory poisoning, cascading failures.¹⁷ These are threats. They are not operational classifications. OWASP tells you what can go wrong. It does not tell you that deleting a staging database is “destructive” while reading it is “safe,” or that modifying a production record requires human review while the same action in a development environment might not.

The adjacent domains solved this long ago. Financial trading classifies orders by execution risk: market orders (guaranteed execution, highest risk) versus limit orders (price-controlled, medium risk) versus stop-loss orders (protective, triggered conditionally).²⁰ The FDA classifies software as a medical device into Class I (informs, low risk), Class II (guides, moderate risk), and Class III (intervenes, high risk, requires pre-market approval).²¹ NIST’s Cybersecurity Framework uses implementation tiers to classify organizational security maturity.²² AWS IAM decomposes all service actions into five access levels: List, Read, Tagging, Write, and Permissions Management.²³

Agentic AI needs the same taxonomy. A starting framework:

| Risk Level | Category | Approval Gate | Examples |
|---|---|---|---|
| Level 4: Safe | Read-only, informational, within entitlement boundary | None | File reads, search queries, analysis on entitled data |
| Level 3: Standard | Routine state changes | Automated check | Code generation, file writes, config changes |
| Level 2: High-Risk | Significant state changes | Human review | Financial transactions, external comms, credential modifications |
| Level 1: Destructive | Irreversible actions | Multi-party approval | File deletion, database drops, production deployments |

Two design principles make this taxonomy operational rather than decorative:

First: It must be layer-aware. The same action carries different risk at different governance layers. File deletion at Layer 1 (individual workspace, no rollback capability) is Destructive. File deletion at Layer 3 (production environment with eval pipeline, automated rollback, and approval gate) can be Standard. Context changes the classification.

Second: It must distinguish action-level risk from data-level risk. In my own domain (CRM for capital markets), a simple read query can surface records the user isn’t entitled to see. The action is safe; the data exposure isn’t. A read against entitlement-gated data (PII, restricted client records, cross-domain information) escalates from Level 4 to Level 3 or Level 2 depending on sensitivity, even though the action itself is read-only. Any taxonomy that classifies purely on action type will miss this entire class of risk.

The decision tree should account for both dimensions: Is the action read-only? If yes, is the data within the agent’s entitlement boundary? If both yes, Level 4. If the data is entitlement-gated, Level 3 or Level 2 depending on sensitivity. Is the action a routine state change in an exploratory context? If yes, Level 3. Does it touch production or shared resources? Level 2. Is it irreversible or cross-boundary? Level 1. Does the governance layer provide containment? Lower the risk level by one category.

This is not a finished standard. It is a starting framework that enterprises can adopt and customize rather than building from scratch. The alternative, which is the current state, is every organization inventing its own risk matrix: rows are agent types, columns are action categories, cells contain risk levels and required approvals. Labor-intensive, inconsistent across organizations, and impossible to audit against a common baseline.

The Forgotten Phase: Agent Decommissioning

One lifecycle phase that almost nobody has addressed: what happens when an agent reaches end-of-life?

When an agent is retired, it retains API keys, cached tokens, memory stores, vector embeddings, model endpoints, and system integrations. If not properly decommissioned, these become dormant identities with live privileges: invisible, forgotten, and exploitable. Orphaned credentials consistently rank among the top three entry vectors for data breaches.²⁴

The precedent is instructive. Knight Capital’s $440 million loss in 45 minutes in August 2012 resulted partly from defective code left in a router from 2005 that should have been decommissioned years earlier.²⁵ The code was dead. The risk was live.

Proper agent decommissioning requires credential revocation (verified, not just disabled), data access removal across all systems the agent could reach, dependency mapping (what other agents or workflows rely on this one?), audit trail preservation for the compliance-mandated retention period, and inventory update to reflect retirement. This is not optional governance. It is risk architecture. And no enterprise has published a systematic agent decommissioning procedure.
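Since no published procedure exists, here is one way the checklist could be enforced rather than merely documented: retirement completes only when every step passes verification. The step names come from the paragraph above; the verifier callables are assumptions standing in for real system checks.

```python
# Sketch: decommissioning as an enforced procedure, not a document.
# Verifier callables are hypothetical stand-ins for real checks.
from typing import Callable

DECOMMISSION_STEPS: dict[str, str] = {
    "credentials_revoked":  "revocation verified, not just disabled",
    "data_access_removed":  "across every system the agent could reach",
    "dependencies_mapped":  "downstream agents and workflows identified",
    "audit_trail_archived": "retained for the mandated retention period",
    "inventory_updated":    "agent marked retired in the fleet inventory",
}

def decommission(agent_id: str,
                 verifiers: dict[str, Callable[[str], bool]]) -> list[str]:
    """Return the steps that FAILED verification; empty means retired."""
    # A missing verifier counts as a failure: unverified is unrevoked.
    return [step for step in DECOMMISSION_STEPS
            if not verifiers.get(step, lambda _: False)(agent_id)]
```

The one design decision worth copying even if nothing else is: absence of evidence fails the step. A credential nobody checked is a dormant identity, which is exactly the Knight Capital lesson.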

Layer 4: The Cross-Platform Problem

This is where most enterprises will hit the wall within 12 months. Layer 4 governs agents across multiple vendor platforms within a single organization. Agent 365 governs Microsoft agents. Kong’s MCP Registry governs MCP tools. Langfuse observes across platforms but does not enforce. An enterprise running agents across Claude, Copilot, and a custom LangGraph deployment needs three separate governance configurations, three separate audit trails, and three separate identity models.

The emerging Agent Management Platform (AMP) category, projected to reach $15 billion by 2029, represents the market’s recognition that this is a distinct product problem.²⁶ Microsoft’s Agent Governance Toolkit, open-sourced in April 2026, is the first production framework that addresses governance across 8+ agent frameworks.¹⁶ It is runtime policy enforcement only. It does not solve identity translation (mapping Entra Agent IDs to service principals to MCP tokens), policy translation (converting policies between vendor-native formats), or cross-platform audit aggregation. These problems remain unsolved.

FINOS’s AI Governance Framework v2.0 acknowledges the Layer 4 problem through its multi-agent trust boundary control (AIR-OP-028) and MCP supply chain governance control (AIR-SEC-026).³ But acknowledgment is not solution. No FINOS control provides the operational framework for governing a heterogeneous agent fleet. This gap is one of the strongest arguments for a dedicated FINOS working group on cross-platform governance.

The Gaps Between Layers

Three structural gaps emerge from this layered analysis. Each one is load-bearing for the governance argument.

Gap 1: Decision Lineage

Three layers contribute to decision lineage, but no tool spans all three.

Layer 1 captures why: the knowledge context, the design rationale, the decision that informed the code. This gap is something I identified early, and it became a primary driver behind CLEAR’s knowledge management architecture: a reverse file-knowledge index that links code back to the knowledge entries and design decisions that informed it. I’ve been validating this across hundreds of sessions and the concept holds, but no production tool spans all three layers yet.

Layer 2 captures what was allowed: the permission decisions, the hook execution logs, the sandbox enforcement records. Harness hooks and audit trails record this. This is something I attempted to do in The Bulwark.

Layer 3 captures what happened: the traces, the costs, the eval scores, the production behavior. Langfuse agent graphs, Datadog spans, and observability platforms record this.

The bridge connecting production behavior (Layer 3) to the governance rule that authorized it (Layer 2) to the knowledge that informed the rule (Layer 1) does not exist. Each layer captures its piece. Nobody synthesizes the full chain. When a regulator asks “why did your agent make this decision?”, the answer requires traversing all three layers. Today, that traversal is manual.
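To make the missing bridge concrete, here is roughly what a unified lineage record would have to carry. Every field name is an assumption; the point is that no production tool emits this triple today, which is the gap itself.

```python
# Sketch: one record spanning all three layers of decision lineage.
# Field names are illustrative; no current tool produces this.
from dataclasses import dataclass

@dataclass
class LineageRecord:
    trace_id: str     # Layer 3: what happened (observability span)
    rule_id: str      # Layer 2: what was allowed (governing rule/hook)
    decision_id: str  # Layer 1: why (design decision behind the rule)

def answer_regulator(rec: LineageRecord) -> str:
    """The traversal that is manual today, expressed as one lookup."""
    return (f"action {rec.trace_id} was authorized by rule {rec.rule_id}, "
            f"which was informed by decision {rec.decision_id}")
```

Three identifiers and two joins. The difficulty is not the data model; it is that the three identifiers currently live in three systems that do not reference each other.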

Gap 2: Authority Versus Permission

This gap maps directly to a framework that draws on four adjacent domains, and it is arguably the most structurally consequential of the three.

Authority and permission are orthogonal properties.⁹˒¹⁰˒¹¹ Every regulated domain that has faced this problem (military autonomous systems, clinical decision support, algorithmic trading, industrial automation) separates them. The U.S. DOD requires that autonomous weapon systems allow operators to exercise “appropriate levels of human judgment,” with “appropriate” defined contextually, not universally.⁹ The FDA holds that clinicians must maintain “professional skepticism and clinical judgment when using AI tools.”¹⁰ MiFID II requires firms to map “who owns deployment decisions, who can operate specific algorithms, and which Senior Management Function approvals are triggered.”¹¹

The convergence across domains: authority remains human, contextual, and declarable. Permission can be delegated to the system. Authority cannot.

For agentic AI, this operationalizes into four authority types: Advisory (system recommends, human decides), Delegated (human pre-authorizes an action class), Autonomous (system executes and notifies, low-risk only), and Emergency Override (human can interrupt autonomous action in real time). Every agent action should be classified into one type, and reclassification should require a human decision.

The commitment boundary is equally important. Agent actions need explicit state markers distinguishing exploration from commitment: Exploratory (no external state changes, reading and analyzing), Provisional (action prepared but not executed, requires approval), and Committed (action executed, audit trail created, potentially binding). FIX Protocol solved this by distinguishing Indications of Interest from firm orders.²⁸ FHIR’s Task resource uses an explicit state machine: Draft to Ready to In Progress to Completed.²⁹ Agentic systems have no equivalent standard.
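The four authority types and the commitment boundary can both be expressed as explicit, checkable types, loosely modeled on the FHIR Task pattern of a declared state machine. The transition table below is an illustrative assumption, not a proposed standard.

```python
# Sketch: authority types and commitment states as explicit types,
# with an enforced transition table (illustrative, not normative).
from enum import Enum

class Authority(Enum):
    ADVISORY = "system recommends, human decides"
    DELEGATED = "human pre-authorizes an action class"
    AUTONOMOUS = "system executes and notifies (low-risk only)"
    EMERGENCY_OVERRIDE = "human can interrupt in real time"

class Commitment(Enum):
    EXPLORATORY = 0   # no external state changes
    PROVISIONAL = 1   # prepared but not executed; approval required
    COMMITTED = 2     # executed; audit trail created; potentially binding

LEGAL = {
    (Commitment.EXPLORATORY, Commitment.PROVISIONAL),
    (Commitment.PROVISIONAL, Commitment.COMMITTED),
    (Commitment.PROVISIONAL, Commitment.EXPLORATORY),  # approval denied
}

def transition(cur: Commitment, nxt: Commitment) -> Commitment:
    """Refuse any jump the state machine does not declare."""
    if (cur, nxt) not in LEGAL:
        raise ValueError(f"illegal commitment transition: {cur} -> {nxt}")
    return nxt
```

The transition the table deliberately forbids is the one agents take today: Exploratory straight to Committed, with no Provisional stop where a human or a policy gate can intervene.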

Gap 3: FINOS Coverage

FINOS AI Governance Framework v2.0 covers Layer 2 and Layer 3 well: 4 of 6 controls address harness governance, 5 of 6 address production governance.⁴ But Layer 1 (practitioner governance) and Layer 4 (cross-platform enterprise governance) are weak spots. This is partly a deliberate scope decision: FINOS targets enterprise infrastructure, not individual developer workflows. But the consequence is that the industry’s most credible governance framework leaves the two fastest-growing governance problems without coverage.

The Common Controls for AI Services initiative (CC4AI), launched in June 2025 with participation from Citi, Morgan Stanley, BMO, Bank of America, RBC, Goldman Sachs, Microsoft, Google, and AWS, operationalizes the framework through an “attest once, inherit many” model: vendors demonstrate compliance once, and consuming financial institutions inherit the assurance.³⁰ This is governance at scale. But it only covers what FINOS covers, which means Layers 2 and 3 are increasingly well-governed while Layers 1 and 4 are increasingly exposed.

The Honest Timeline

The governance-first deployment advantage is real. The DevOps evidence is clear: embedding security controls in the SDLC costs roughly 1x; retrofitting the same controls post-deployment costs 30x.³¹ DORA metrics show that elite DevOps performers achieve both velocity and stability through embedded governance, not despite it.³² Deloitte reports that organizations with active senior leadership governance achieve demonstrably greater business value than those delegating governance to technical teams.³³

But here is what would be dishonest not to say:

Governance-first adoption will not be driven by evidence. It will be driven by failure.

The historical pattern is unambiguous. The 2008 financial crisis produced Dodd-Frank in two years. MiFID II went from adoption to implementation in four years. The 21st Century Cures Act took seven years from legislative intent to operational enforcement.³⁴ Acute crises compress the cycle to 2 to 4 years. Chronic pain stretches it to 7 to 10.

Agentic AI governance is currently in the chronic pain phase. Ten documented agent control failures between October 2024 and February 2026 have triggered vendor-specific patches (Replit added dev/prod separation, Amazon implemented peer review, Cursor added rollback improvements) but no industry-wide standard, no consortium mandate, and no regulatory response.¹⁹ The reason is structural: most firms haven’t had their first cross-boundary agent failure yet. The pain is theoretical, not daily the way voice trading was for FIX. Once a real regulatory incident hits from agent-to-agent interaction, consortium formation compresses from years to months.

The market signals say we are closer to that inflection than most realize. JPMorgan hired senior leaders for AI Policy and Governance in February 2026. Goldman Sachs is piloting autonomous coding agents across 12,000 developers while simultaneously hardening compliance infrastructure.³⁵ These are not future-planning hires. They are managing current deployments and anticipating EU AI Act enforcement beginning August 2026.

The consortium infrastructure is being built in parallel. The Agentic AI Foundation (AAIF) launched in December 2025 under the Linux Foundation with Anthropic, Block, and OpenAI.³⁶ FINOS has CC4AI with six tier-1 banks collaborating on shared compliance evidence.³⁰ This is simultaneous, multi-vector standardization running ahead of regulation.

The firms governing today will be the firms writing the standards tomorrow. The firms waiting for the mandate will be conforming to a standard someone else built.

What Comes Next

Everything in this article addresses what happens inside a single organization. The harder problem is what happens when your agents interact with agents from another firm. Whose governance rules apply? Whose compliance policies govern the interaction? Where is the audit trail that proves the interaction was within both firms’ entitlement bounds?

The design insight that points toward the answer: FIX succeeded because it embedded the contract into the protocol itself. The agentic equivalent would be making governance metadata a structural artifact of the execution graph, not a separate oversight layer bolted on after the fact. And there’s a constraint that any realistic standard must satisfy: the governance standard that wins will not be the most comprehensive one. It will be the one simple enough to adopt in weeks.

That is the RIXML parallel. A domain-specific schema on top of the existing base protocol. Not a new protocol. A Financial Services A2A Profile that standardizes what crosses the boundary between firms: what agents are authorized to do, what they did, and who is responsible when something goes wrong.

That is the next article.

Sources

  1. [Tier 2] Board.org & DataMatters, “2025 State of Enterprise Data Governance Report,” 2026
  2. [Tier 1] Gartner, “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027,” Press Release, June 25, 2025
  3. [Tier 1] FINOS, “FINOS AI Governance Framework v2.0,” Interactive Platform, https://air-governance-framework.finos.org/
  4. [Tier 1] FINOS, “FINOS AI Governance Framework v2.0 – Addressing Agentic AI Risks in a Rapidly Evolving Landscape,” Blog, October 2025
  5. [Reserved — removed]
  6. [Tier 1] Anthropic, Claude Code hooks documentation: PreToolUse, PostToolUse, SessionStart, SessionEnd, SubagentStart/Stop, 2026
  7. [Tier 3] Author’s open-source project: “The Bulwark,” GitHub, https://github.com/qball-inc/the-bulwark (early-stage governance enforcement framework)
  8a. [Tier 3] Andrej Karpathy, compiled wiki knowledge pattern (viral adoption across Claude Code practitioners, 2025-2026)
  9. [Tier 1] U.S. Department of Defense, “DODD 3000.09: Autonomy in Weapon Systems,” 2012 (updated 2023)
  10. [Tier 1] FDA, “Proposed Regulatory Framework for AI/ML-Based Software as a Medical Device,” 2021; American Medical Association clinical authority guidance
  11. [Tier 1] ESMA, “Article 17: Algorithmic trading,” MiFID II Regulatory Technical Standards; FCA, “Multi-firm review of algorithmic trading controls,” 2024
  12. [Tier 1] Anthropic, Claude Code permissions and security documentation: Seatbelt (macOS), Bubblewrap + seccomp (Linux/WSL)
  13. [Reserved — removed]
  14. [Tier 1] Langfuse, documentation: tracing, prompt management, evaluation, cost attribution, https://langfuse.com/docs
  15. [Tier 2] OpenTelemetry, GenAI semantic conventions (experimental), 2026
  16. [Tier 1] Microsoft, “Agent Governance Toolkit,” GitHub, open-sourced April 2, 2026
  17. [Tier 1] OWASP Gen AI Security Project, “OWASP Top 10 for Agentic Applications for 2026,” https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/
  18. [Tier 1] Microsoft, “Microsoft Agent 365,” GA May 1, 2026, $15/user/month
  19. [Tier 2] Individual incident reports: Meta OpenClaw email deletion (The Verge, February 2026); Replit 1,206 executive records deletion (Ars Technica, July 2025); Cursor git-tracked file deletion (community reports, December 2025); Amazon Kiro production environment deletion (developer reports, December 2025)
  20. [Tier 1] Vanguard, “Stock & ETF Orders: Limit, Market, Stop, & Stop-Limit”; ESMA, “Article 48: Systems resilience, circuit breakers and electronic trading,” MiFID II
  21. [Tier 1] FDA, “Global Approach to Software as a Medical Device (SaMD),” https://www.fda.gov/medical-devices/software-medical-device-samd/
  22. [Tier 1] NIST, “The NIST Cybersecurity Framework (CSF) 2.0,” 2024
  23. [Tier 1] AWS, “Access levels in policy summaries,” IAM documentation
  24. [Tier 1] Okta, “AI Agent Lifecycle Management: Identity-first Security,” 2026
  25. [Tier 1] SEC, “SEC Charges Knight Capital With Violations of Market Access Rule,” Press Release, October 16, 2013
  26. [Tier 2] Covasant, “The AI Governance Mandate: Scaling Agentic AI on Google Cloud in 2026,” AMP market projection ($15B by 2029)
  27. [Reserved — removed]
  28. [Tier 1] FIX Trading Community, “IOI Message,” FIX 4.4 Dictionary
  29. [Tier 1] HL7 FHIR, “Task Resource,” FHIR v4.0.1, https://hl7.org/fhir/task.html
  30. [Tier 1] FINOS, “Global Financial Institutions and Technology Leaders Collaborate Under FINOS to Launch Open Source Common Controls for AI Services,” June 2025
  31. [Tier 2] StartLeft Security, “The Hidden Costs of Ignoring Security by Design”
  32. [Tier 1] Google Cloud, “Use Four Keys metrics like change failure rate to measure your DevOps performance”
  33. [Tier 2] Deloitte, “The State of AI in the Enterprise,” 2026
  34. [Tier 2] Federal Reserve History, “Dodd-Frank Act”; Federal Register, “21st Century Cures Act,” May 2020
  35. [Tier 2] eFinancialCareers, “Morgan Stanley, JPMorgan, BofA & Citi ramped up US MD hiring,” 2026; QA Financial, “Goldman Sachs joins JPMorgan and Morgan Stanley in GenAI race”
  36. [Tier 1] Linux Foundation, “Linux Foundation Announces the Formation of the Agentic AI Foundation (AAIF),” December 9, 2025