PULSE.

Multilingual editorial — AI-curated intelligence on tech, business & the world.

© 2026 Pulse Latellu. All rights reserved.

AI-generated. Made by Latellu

All content is AI-generated and may contain inaccuracies. Please verify independently.

Agentic AI · April 3, 2026 · 20 min read

Agentic AI in Digital Gold: The Auditable Gaps Between Planning and Settlement

Autonomous agents can execute multi-step gold workflows, but the highest risk is not the model. It is the handoffs, prices, and custody verification.

Sources

  • nist.gov
  • csis.org
  • genai.owasp.org
  • owasp.org
  • weforum.org
  • thomsonreuters.com
  • internationalaisafetyreport.org
  • arxiv.org

In This Article

  • Agentic AI in Digital Gold: Auditable Gaps
  • From chat to autonomous execution
  • Digital gold workflow beneath the UI
  • Plan loops and state drift risk
  • Spread and conversion opacity controls
  • Custody gating where automation stalls
  • Agent security evidence from execution risks
  • Case 1: AI agent hijacking evaluation focus by NIST, 2025.
  • Case 2: OWASP agentic skills security risk framing.
  • Case 3: International AI Safety Report cycle, 2024–2026.
  • Orchestration frameworks: evidence or theater
  • Quantitative signals you can count
  • Delegation changes accountability
  • Audit targets for digital gold workflows
  • Policy requires runtime evidence

Agentic AI in Digital Gold: Auditable Gaps

A digital gold purchase can fail without ever looking like a failure on your screen. Money may leave your balance, the app may show a “conversion” as successful, and yet settlement can route to the wrong counterparty, apply an opaque buy-sell spread, or stall during custody and verification.

That’s the exact failure mode agentic AI--positioned as a step-by-step executor--can amplify. It doesn’t just recommend actions. It plans, runs multi-step workflows, and “self-corrects” along the way. The result: operational errors can become automated and far harder to unwind after the fact.

This shift matters because the black box moves. Instead of asking “why the model answered that,” investigators must ask “why the workflow executed that.” In agentic systems, governance and security guidance repeatedly frames risk around tool use, autonomy, and agent coordination--not raw language fluency. OWASP’s GenAI Security project lays out agentic security risks as concrete failure pathways, while NIST’s work on AI agent standards emphasizes the need to operationalize safety, evaluation, and accountability rather than rely on ad-hoc testing. (OWASP GenAI Top 10 Risks and Mitigations; NIST AI Agent Standards Initiative)

Below is a deep-dive for investigators and researchers into how agentic AI can create audit gaps in end-to-end digital gold transactions--where UI-driven actions must ultimately map to settlement, custody & verification, and physical handoff. The focus is on operational controls beneath the interface: the places where workflow automation risk can silently turn into customer harm.


From chat to autonomous execution

Agentic AI is not just “an assistant that talks.” In agentic systems, models connect to tools and run through an execution loop: plan a multi-step workflow, invoke tools, check outcomes, and continue until a goal is reached. That autonomy accelerates time-to-action for users, but it also compresses audit time for regulators and investigators. You stop evaluating a single recommendation and start evaluating a chain of actions.

The governance challenge is definitional, too. CSIS highlights confusion around “agentic AI,” noting that unclear definitions can blur responsibility and the controls organizations are expected to apply. That definitional drift matters because enterprise deployment teams may treat agent autonomy as a feature rather than a safety boundary. If regulators cannot pin down what the system was allowed to do, it becomes difficult to prove negligence or design failure. (CSIS on lost definition and risk of confusion)

When agent execution includes financial operations, the stakes rise. In a digital gold product, the “chat layer” may be the user’s entry point, but the workflow often includes: price quote retrieval, buy or sell conversion logic, order routing to a trading or settlement engine, and then custody and verification steps before the asset is recognized as owned. Agentic AI can orchestrate these steps, but it can also automate the wrong routing if internal state is corrupted or if tool outputs are inconsistent.

So what: when investigating agentic AI in digital gold, don’t only assess model behavior. Evaluate execution boundaries--what tools can be called, under what permissions, and what evidence is logged at each handoff from plan to action.
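Those execution boundaries can be made concrete. The following Python sketch is illustrative only (`PHASE_TOOLS`, `invoke_tool`, and `AUDIT_LOG` are hypothetical names, not any real framework's API): a per-phase tool allowlist where every invocation attempt, permitted or not, leaves an audit record.

```python
# Hypothetical sketch: enforcing execution boundaries with an explicit
# tool allowlist per workflow phase. All names here are illustrative.
PHASE_TOOLS = {
    "quote": {"get_quote"},
    "order": {"submit_order"},
    "custody": {"check_custody_status"},
}

AUDIT_LOG = []

def invoke_tool(phase: str, tool: str, args: dict):
    """Reject any tool call outside the phase's allowlist, and log every attempt."""
    allowed = tool in PHASE_TOOLS.get(phase, set())
    AUDIT_LOG.append({"phase": phase, "tool": tool, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"tool {tool!r} not permitted in phase {phase!r}")
    # ... dispatch to the real tool implementation here ...
    return {"tool": tool, "args": args}
```

The point of logging before the permission check resolves is that denied attempts are themselves evidence: a spike in disallowed calls is a signal the agent's plan diverged from its boundaries.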


Digital gold workflow beneath the UI

A typical user journey starts simply: a user selects a purchase amount, the app displays a conversion or value, and the app confirms success. It may also offer pathways to physical custody, printout, or delivery. A public product page for Laku Emas illustrates this mix of features: digital gold balances, conversion activities, and routes that connect to physical custody or documentation. (Laku Emas app listing)

UI confirmation can be misleading when the backend workflow does not match what the interface promises. Investigators should map the operational chain under the interface and, critically, identify where “success” is asserted versus where finality is actually recorded.

In digital gold, the gap between “conversion completed” and “custody recognized” is often where audit gaps form. Agentic AI increases that risk by multiplying the number of state transitions that must align. Instead of auditing a single workflow, audit it as a sequence of joins across systems.

A practical way to structure that trace:

  1. Quote and pricing logic (quote-to-order linkage): The spread must come from a pricing source with a defined timestamp and identifier. An audit-friendly workflow generates a quote reference (for example, quote_id) and carries it forward unchanged into the conversion calculation and then into the order request payload. If an agent recomputes the spread internally or requests a “fresh quote” after a UI confirmation, discrepancies can emerge between what was shown and what was ultimately settled.

  2. Order routing and settlement path (counterparty-to-venue linkage): Routing decisions must be tied to stable identifiers: order ID → settlement instruction → counterparty routing. If an agent selects a settlement path based on tool outputs (or heuristics) that are later revised, investigators may see the classic “plan agreed, settlement diverged” pattern. The chosen route should be explicitly logged and reproducible from authoritative routing tables, not just reflected in model text.

  3. Custody & verification (ownership gate linkage): Custody systems often update asynchronously. “Ownership finalization” should be gated on a custody registry event that is authoritative and timestamped. For audit purposes, map: order ID → custody issuance transaction → verification status → final balance credit. If the app marks the purchase complete before the custody event occurs, then the UI becomes a narrative layer sitting ahead of the ledger.

  4. Physical handoff and printout workflows (fulfillment state machine linkage): If the product supports physical custody or documentation, add another state machine: request creation → inventory/fulfillment eligibility → verification/labeling → courier/handback. Agentic execution can trigger a fulfillment stage early if it treats “user requested physical custody” as sufficient without validating readiness signals such as inventory availability, identity/eligibility checks, or custody “release” permissions.
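The identifier chain in steps 1–4 can be sketched as a single trace record. This is a hedged illustration with assumed field names (`quote_id`, `custody_tx_id`, and the rest are not from any real schema):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative trace of the quote -> order -> settlement -> custody chain.
@dataclass
class TransactionTrace:
    quote_id: str                      # carried forward unchanged from the quote
    quote_price: float                 # reference price at quote time
    order_id: Optional[str] = None
    settlement_route: Optional[str] = None
    custody_tx_id: Optional[str] = None
    verification_status: str = "pending"

def is_auditable(t: TransactionTrace) -> bool:
    """A trace is complete only when every handoff carries a reference key
    and custody verification is recorded as finished."""
    return all([t.quote_id, t.order_id, t.settlement_route,
                t.custody_tx_id, t.verification_status == "verified"])
```

Any transaction for which `is_auditable` would return false is exactly the "conversion completed but custody unrecognized" gap described above.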

Self-correction can also backfire. In this context, “self-correction” usually means retries after failure. Retries can multiply side effects: multiple quote fetches, duplicate order submissions, or repeated custody verification calls that behave like distinct transactions. The audit question isn’t whether retries happened. It’s whether the retries were idempotent.

So what: audit the workflow stack as a traceable chain of identifiers and authoritative events. Require a complete trace from “user intent” to “settlement confirmation” to “custody & verification completion,” and ensure each handoff carries the same reference keys (quote/order/custody transaction IDs) across subsystems.


Plan loops and state drift risk

To unmask the black box, investigators need to understand the agent execution loop: the system generates a plan, executes actions through tools, observes results, then updates internal state. A common failure risk shows up as state drift. An agent may believe it successfully executed step A because a tool returned an “OK,” while the downstream system (settlement ledger, custody registry, or reconciliation database) may not yet reflect that state.

NIST’s AI agent standards work stresses that evaluation and accountability must be built around agents’ behaviors and interactions with systems, not only model outputs. Practically, that means capturing evidence from each subsystem the agent touches. (NIST AI Agent Standards Initiative)

The black-box problem sharpens further when orchestration frameworks coordinate tool calls, retries, and multi-agent collaboration. These tools can improve reliability--but they can also obscure which system is the true source of record. Investigators should require clarity on which system is authoritative for each phase:

  • authoritative for pricing (exchange/trading engine),
  • authoritative for order status (settlement service),
  • authoritative for custody ownership (custody registry),
  • authoritative for physical/printout readiness (fulfillment and verification service).

If agentic orchestration stores intermediate data (like an estimated conversion) and later uses it to decide custody steps, errors can compound. If the orchestration framework logs only the “final decision,” investigators may never learn which step actually went wrong.

OWASP’s agentic security guidance is explicit: risk shifts with tool access and chaining, and mitigations must account for agent tool use and execution behaviors. That aligns with an investigation focus on tool-level traces, not just conversational transcripts. (OWASP GenAI Top 10 Risks and Mitigations; OWASP Agentic Skills Top 10)

So what: require “state-of-record” logs for every step. Store input parameters used by the agent, tool response payloads, the agent’s decision at that point, and the downstream authoritative status update time.
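Those four evidence elements fit naturally into one per-step record. This is a hedged sketch with illustrative field names, not a logging standard:

```python
import json
import time

def log_step(log: list, correlation_id: str, step: str, tool_input,
             tool_payload, agent_decision, authoritative_status) -> dict:
    """Append one state-of-record entry: the agent's input, the raw tool
    payload, the agent's decision, and the downstream authoritative status."""
    record = {
        "correlation_id": correlation_id,
        "step": step,
        "tool_input": tool_input,            # exact parameters the agent used
        "tool_payload": tool_payload,        # raw, un-normalized tool response
        "agent_decision": agent_decision,    # what the agent chose to do next
        "authoritative_status": authoritative_status,  # state-of-record, not belief
        "logged_at": time.time(),
    }
    log.append(json.dumps(record, sort_keys=True))
    return record
```

Serializing each record at write time is a deliberate choice in this sketch: it freezes the evidence, so a later revision of the agent's internal state cannot retroactively rewrite what was logged.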


Spread and conversion opacity controls

In digital gold, the “spread” is not a minor detail. It’s the economic reality of the product: it determines what customers pay relative to the asset’s reference value. When agentic systems automate conversion, opacity can show up in two ways: inconsistent spread application across steps, and delayed price finalization that turns quotes into effective hidden costs.

Audit how the agent handles these controls:

  • Spread source of truth: does the agent compute spread, request it, or receive it from a pricing service? “Compute it inside the agent” is an audit risk.
  • Quote timestamp binding: does the agent treat a quote as valid for a specific time window? Retries can reuse an expired quote without the UI showing the update.
  • Human-visible disclosure mapping: if the UI shows one price and settlement uses another, users can feel deceived even if accounting remains technically “correct.”

Agent security research also points to active work on agent hijacking evaluation, including NIST’s published technical work on strengthening AI agent hijacking evaluations. While that research isn’t specific to gold, it’s directly relevant: if agents can be influenced, they can be pushed toward tool calls that alter pricing or routing outcomes. A malicious input that changes what tools an agent calls can shift the spread and conversion path without a clear signal in the conversation. (NIST technical blog on strengthening AI agent hijacking evaluations)

There’s also a more mundane opacity risk. Orchestration frameworks may normalize tool outputs and hide differences across pricing feeds. That can make the black box harder to audit and increase the chance that two steps use different pricing models.

So what: treat buy-sell spread as an operational control with evidence. Logs should let you reproduce the exact spread applied and the exact quote timestamp used at settlement.
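A minimal illustration of quote timestamp binding, assuming a hypothetical 30-second validity window (`QUOTE_TTL_SECONDS`) and quote fields that are not from any real pricing API:

```python
import time

QUOTE_TTL_SECONDS = 30  # assumed validity window; real products define their own

def quote_is_valid(quote: dict, now=None) -> bool:
    """A quote is usable at settlement only inside its timestamped window.
    Expired quotes must be refreshed, never silently reused."""
    now = time.time() if now is None else now
    return (now - quote["issued_at"]) <= QUOTE_TTL_SECONDS

def settle(quote: dict) -> dict:
    if not quote_is_valid(quote):
        raise ValueError(f"quote {quote['quote_id']} expired; refresh required")
    # Record the exact quote used so the applied spread is reproducible later.
    return {"quote_id": quote["quote_id"], "price": quote["price"],
            "spread": quote["spread"], "settled_at": time.time()}
```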


Custody gating where automation stalls

Custody & verification is where agentic self-correction collides with real-world constraints. Custody systems may require asynchronous reconciliation, physical inventory availability, and identity verification checks. Even when an agent can plan and execute, it still must wait for verification signals that may arrive late or fail.

In many digital asset products, custody & verification are not cosmetic. They are what turns a “balance in a database” into an enforceable claim. If agentic workflows proceed before verification completes, mismatches can emerge between what the user sees as ownership and what the custody registry actually records.

NIST’s direction on agent standards emphasizes operationalized safety and evaluation. That includes requirements for how agents interact with systems and how outcomes are verified. (NIST AI Agent Standards Initiative) OWASP’s agentic security materials similarly stress that mitigations must handle multi-step execution and tool invocation, because each new step increases the chance the workflow proceeds on incomplete evidence. (OWASP GenAI Top 10 Risks and Mitigations)

For investigators, the question isn’t “did the model understand custody?” The question is whether the workflow gated custody progression on verification evidence. Audit edge cases such as:

  • verification timeout triggers,
  • partial verification states,
  • retries that re-enter earlier steps,
  • fallback paths that continue with “best effort” assumptions.

Agentic orchestration can create “workflow automation risk” by making retries automatic. Retries can generate duplicate custody verification attempts, leading to multiple reconciliation entries that later surface as user-facing confusion.

So what: implement strict gating. Agents should not advance ownership finalization until custody and verification systems confirm in authoritative logs. During investigations, look specifically for any path where final user-facing success precedes custody confirmation.
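The gating rule reduces to a small state check in which the custody registry, not the agent's belief, decides what the UI may claim. This sketch assumes a hypothetical registry interface:

```python
# Illustrative gate: user-facing success may only be asserted after the
# custody registry (the authoritative system) confirms verification.
def finalize_purchase(order_id: str, custody_registry: dict) -> str:
    event = custody_registry.get(order_id)
    if event is None:
        return "pending"                  # no registry event yet: do not claim success
    if event.get("status") != "verified":
        return "pending_verification"     # partial state: keep the UI honest
    return "owned"                        # only now may the UI claim ownership
```

Note that the function has no "best effort" branch: every path that is not fully verified resolves to a pending state, which is exactly the property to audit for.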


Agent security evidence from execution risks

Direct public case studies for digital gold settlement are still limited. But the broader evidence base about agent failures and security risks is concrete, and it can guide how investigators design audits for digital gold workflows--especially where tool use and autonomy can produce wrong or manipulated execution.

Case 1: AI agent hijacking evaluation focus by NIST, 2025.

In January 2025, NIST published a technical blog on strengthening AI agent hijacking evaluations. The operational relevance is clear: it treats agent hijacking as an evaluation target, meaning tool-using agents can be manipulated into undesired behavior. Investigators should treat this as a blueprint for threat modeling tool invocation sequences in digital gold workflows, particularly around pricing and routing tools. (NIST technical blog)

Case 2: OWASP agentic skills security risk framing.

OWASP’s Agentic Skills Top 10 documents security risks and mitigations tied to agent skill design and use. It’s not about gold settlement specifically, but its patterns map directly to operational chain failures: what happens when agents have tool permissions and when skills are abused or misconfigured. Investigators can translate these categories into audit checks for tool access control, input validation, and execution logging in digital gold workflows. (OWASP Agentic Skills Top 10)

Case 3: International AI Safety Report cycle, 2024–2026.

The International AI Safety Report’s ongoing reporting framework provides safety perspectives intended for policy and verification. It is not a gold trading case study; it’s a “process evidence” case that treats safety as something that must be measured and audited, not guessed. Investigators can use this framing to justify audit evidence requirements for agentic execution in financial workflows. (International AI Safety Report overview; International AI Safety Report main)

These cases don’t prove what a specific digital gold product did. Gold-specific implementation data for agentic orchestration is often not public. Still, the security and evaluation evidence base is directly applicable to the operational mechanisms investigators must audit: autonomy, tool access, and execution logging. That’s the gap between black-box demos and the settlement chain that actually matters.

So what: use agent security evidence to design gold workflow audits. Even without gold-specific incident reports, the documented hijacking and tool-chaining risks point to exactly what to test in pricing, routing, and custody verification gates.


Orchestration frameworks: evidence or theater

Enterprise deployment of agentic AI depends on orchestration frameworks to coordinate tools, memory, and multi-step plans. “Framework adoption” can become theater, though--especially when documentation is treated as sufficient and runtime logs are treated as optional. The investigator’s job is to separate policies that exist on paper from operational evidence that exists at runtime.

World Economic Forum content on AI agents in action emphasizes evaluation and governance foundations, reinforcing that assessment belongs inside deployment, not after it. That complements the operational audit stance: you must validate agent behavior against intended boundaries under real tool invocation and failure modes. (WEF, AI agents in action)

Thomson Reuters also addresses safeguarding agentic AI, but it should be treated as industry synthesis rather than primary operational evidence. Use it to generate audit questions, not to assume controls exist in any particular system. A cautious investigator will demand artifacts: access control lists, tool permission schemas, execution traces, and reconciliation reports. (Thomson Reuters on safeguarding agentic AI)

Orchestration frameworks create a structural reality: hidden coupling. One agent may use different tools for quote retrieval and settlement status, but the framework may keep an internal “belief state” cache. If that cache isn’t invalidated on tool discrepancies, the agent can proceed on incorrect assumptions.

To tell evidence from theater, ask for cross-system traceability--not just “agent logs.” Specifically:

  • Correlation IDs that survive every tool call: In practice, this should connect user request → agent execution run → tool invocations → downstream transaction IDs (order/settlement/custody).
  • Reproducible step sequences: Persisted inputs (parameters, identifiers, quote IDs), tool responses (payloads), and the decision rule that turns one step into the next.
  • Explicit non-authoritative labeling for caches: If intermediate values are used, they should be flagged as estimates and replaced only by authoritative “state-of-record” events.
  • Integrity-checked raw tool results: If the orchestration framework normalizes outputs (for example, “price up/down”) rather than storing raw response payloads, investigators can’t verify whether the agent saw the full truth.
  • Failure paths that record what didn’t happen: Audit gaps often appear in rollback, partial completion, and timeout branches. Frameworks that log only success outcomes obscure the boundary where the UI says “done” but the authoritative system says “pending.”

So what: demand runtime evidence from orchestration frameworks--per-step trace logs with correlation IDs, permissioned tool invocation records, and reconciliation outputs that show exactly how agent actions affected settlement and custody. If the framework cannot provide a defensible trace join between “agent run” and authoritative ledger/custody events, treat governance claims as unproven.
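As one illustration of the correlation-ID requirement, a hypothetical sketch that mints a single ID per user request and threads it through every tool call (`run_agent_request` and the `tools` mapping are assumptions, not a real orchestration API):

```python
import uuid

def run_agent_request(user_intent: str, tools: dict) -> dict:
    """Mint one correlation ID per request and pass it to every tool,
    so traces can later be joined across subsystems."""
    correlation_id = str(uuid.uuid4())
    invocations = []
    for name, fn in tools.items():
        result = fn(correlation_id=correlation_id)   # every call carries the ID
        invocations.append({"correlation_id": correlation_id,
                            "tool": name, "result": result})
    return {"correlation_id": correlation_id,
            "user_intent": user_intent, "invocations": invocations}
```

In a real deployment the downstream services would also persist the ID into their own transaction records; the sketch only shows the agent-side half of that join.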


Quantitative signals you can count

Agentic AI discussions often drown in qualitative claims. Investigators need measurable properties that can be validated from logs and operational reports. This section uses NIST’s and OWASP’s materials as the basis for measurable evaluation targets and security testing, avoiding gold-market statistics outside the provided evidence set.

Three quantitative anchors you can extract into testable metrics:

  1. Evaluation pressure on agent hijacking, 2025: NIST’s January 2025 technical blog provides the starting point for building measurable “agent hijacking evaluation strength” targets in tool-using systems. You can operationalize this by counting how many hijacking scenarios the agent fails safely across tool categories in staging tests, then comparing before and after changes. The countable anchor is scenario coverage: how many hijacking scenarios are tested, and what fraction the agent handles safely. (NIST technical blog)

  2. OWASP Top 10 structure: OWASP’s GenAI security project releases a “Top 10” for agentic AI security risks and mitigations. The quantitative anchor is the number 10. Map those 10 risk categories to gold workflow tool permissions, execution gates, and logging completeness checks, then score coverage. (OWASP GenAI Top 10 Risks and Mitigations)

  3. OWASP Agentic Skills Top 10 categories: OWASP’s Agentic Skills Top 10 again provides 10 risk themes for agent skill design and use. You can turn each theme into a digital gold audit control check for agent skills: input validation, authentication of tool results, and safe failure modes. The quantitative anchor is again the 10 risk themes. (OWASP Agentic Skills Top 10)

These are not gold-specific conversion rates. They’re quantitative evaluation scaffolds grounded in the validated sources provided. Use them to measure whether your agentic execution system is test-covered and whether security and safety criteria exist as numbers--not slogans.

To avoid coverage theater, require two additional counts from workflow logs:

  • Trace completeness rate: For N sampled transactions, what fraction have a continuous chain of correlation IDs and authoritative state transitions recorded end-to-end (quote → order → settlement status → custody/verification complete)? Report as complete_traces / N.
  • Idempotency failure rate: Across M retry/failure injection tests, what fraction created duplicate downstream side effects (duplicate order submissions, duplicate custody verification entries, multiple fulfillment triggers)? Report as side_effect_duplicates / M.

These measures are derived from what you can count--log rows and downstream transactions--not from assumptions.
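Both counts translate directly into code. This sketch assumes illustrative trace and test-run shapes (the stage names and keys are hypothetical):

```python
REQUIRED_STAGES = ("quote", "order", "settlement", "custody_verified")

def trace_completeness_rate(traces: list) -> float:
    """complete_traces / N: fraction of sampled transactions with every
    authoritative stage recorded end-to-end."""
    if not traces:
        return 0.0
    complete = sum(1 for t in traces
                   if all(t.get(stage) for stage in REQUIRED_STAGES))
    return complete / len(traces)

def idempotency_failure_rate(test_runs: list) -> float:
    """side_effect_duplicates / M: fraction of retry/failure-injection runs
    that produced duplicate downstream side effects."""
    if not test_runs:
        return 0.0
    failures = sum(1 for run in test_runs
                   if run.get("duplicate_side_effects", 0) > 0)
    return failures / len(test_runs)
```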

So what: replace vague assurances with measurable evaluation coverage. If you can’t count hijacking test failures (and fix them), can’t map tool permissions to Top 10 risk coverage, and can’t show trace completeness metrics per workflow step, you don’t have operational controls.


Delegation changes accountability

Delegating decisions to agentic AI reshapes the accountability graph. When an agent executes end-to-end, investigators must understand where “decision” ends and where “execution” begins. If a system auto-selects a settlement path, applies conversion rates, or triggers custody fulfillment without requiring human confirmation, the organization must treat those actions as automated governance decisions.

CSIS warns that confusion around agentic AI definitions can create governance gaps where responsibilities are unclear. In gold workflows, this shows up as unclear accountability for pricing discrepancies, settlement reversals, or custody verification failures. (CSIS on lost definition)

OWASP’s agent security materials emphasize tool-chaining risks and mitigations that assume agents can be manipulated or can fail in ways that compound across steps. This is where delegation becomes dangerous: the more autonomy you grant, the more you must harden operational controls. (OWASP GenAI Top 10 Risks and Mitigations)

There is also the practical user confusion risk. Even if no fraud occurs, automation errors can produce inconsistent user narratives: an order shows completed while custody verification later fails; a physical request is created and then canceled; the displayed spread differs from the settlement statement. These are evidence problems. When system logs are incomplete, regulators can’t reconstruct the truth.

So what: if decisions touch settlement or custody, require auditable gates. Delegation should stop at points where errors become irreversible--or where reconciliation demands authoritative evidence.


Audit targets for digital gold workflows

An investigative audit of agentic AI in digital gold should not stop at model cards. It should examine the operational controls chain behind UI features that connect digital balances to conversion and physical custody or printout pathways. A public product listing for a digital gold app can tell you what features exist (digital balance, conversion/swapping, physical custody/printout routes), but it can’t tell you whether settlement paths and custody verification gates actually work. (Laku Emas app listing)

Based on the validated risk and governance sources, here is an audit program tailored to workflow automation risk and operational controls:

  • Execution trace integrity: Verify that every step in the plan-to-execute loop has trace logs, including tool inputs/outputs and the moment the agent believes a step succeeded. Link those traces to authoritative settlement and custody records.
  • Tool permissioning and routing constraints: Confirm the agent cannot freely choose settlement paths. Routing should be constrained to validated identifiers with whitelisted transitions.
  • Quote finalization policy: Confirm a quote-to-settlement binding exists (timestamp and reference price). Ensure retries refresh quotes rather than silently reusing stale ones.
  • Custody gating: Confirm “custody & verification complete” is a strict prerequisite for any user-facing success that claims ownership, and ensure custody failures produce explicit rollback or reconciliation rather than hidden partial completion.
  • Self-correction safety: Test how the agent behaves when a step fails. Determine whether retries create duplicates and whether idempotency controls (duplicate-request detection) prevent double charging or double fulfillment.
  • Security evaluation coverage: Map agent security testing to OWASP Top 10 categories and to NIST’s agent hijacking evaluation direction. Score what is tested, what isn’t, and why.

So what: treat agentic execution as a regulated operational process. Demand evidence of traceability, permissioning, quote binding, custody gating, and tested hijacking resilience.
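For the self-correction item specifically, a failure-injection check can be sketched as follows; `submit_with_retry` and the in-memory ledger are hypothetical stand-ins for a real settlement service:

```python
def submit_with_retry(ledger: dict, idempotency_key: str, payload: dict,
                      fail_first: bool = False) -> dict:
    """Inject a transient failure on the first attempt, retry, and rely on
    duplicate-request detection so the retry creates no second side effect."""
    attempts = 0
    while True:
        attempts += 1
        if fail_first and attempts == 1:
            continue                      # simulated transient failure, then retry
        if idempotency_key not in ledger: # duplicate-request detection
            ledger[idempotency_key] = payload
        return {"attempts": attempts, "entries": len(ledger)}
```

An audit harness would run this style of test across every side-effecting tool and report the duplicate rate as a number, matching the idempotency failure rate metric discussed earlier.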


Policy requires runtime evidence

The strongest near-term policy intervention is not to ban agentic AI. It is to require evidence-based operational controls for agentic execution in financial workflows that affect custody and settlement. Specifically, deployments that execute digital gold transactions should implement standardized agent execution logging tied to authoritative settlement and custody systems, with audit-ready trace retention and per-step permissioning records. NIST’s agent standards initiative provides the framing for standards work, and OWASP provides concrete risk categories for what must be mitigated. (NIST AI Agent Standards Initiative; OWASP GenAI Top 10 Risks and Mitigations)

Over the next 18 months from April 2026, regulators and large enterprises are likely to shift from documentation-only governance to runtime evidence requirements for tool-using agents in high-stakes workflows. This forecast is grounded in the direction of standards and security guidance (NIST standards initiative and NIST’s continued focus on agent hijacking evaluation, plus OWASP’s structured Top 10 risk materials), which collectively point toward measurable evaluation and auditable execution rather than narrative assurances. (NIST AI Agent Standards Initiative; NIST hijacking evaluation blog; OWASP GenAI Top 10 Risks and Mitigations)

If you want a practical implication for practitioners: don’t ship agentic execution without a settlement-and-custody trace you can defend in an investigation. Build the chain of custody for data, not just custody for gold--and when multi-step settlement actions are in play, the audit trail is the product.


Keep Reading

Agentic AI

Agentic AI autonomy needs an auditable control plane: Copilot Cowork patterns, DLP runtime controls, and governance checkpoints

Agentic AI shifts work from chat to execution. This editorial lays out an enterprise “agentic control plane” checklist for permissions, logging, DLP runtime controls, and auditability.

April 9, 2026 · 15 min read
Cybersecurity

SDLC Release Gates for Agentic AI Workflows: Turning Zero Trust into Engineering Proof

Agentic AI changes the software supply chain: your CI gates must prove controls for code, data, agents, and endpoints. Zero Trust and NIST guidance make it auditable.

April 3, 2026 · 17 min read
Public Policy & Regulation

Agentic AI Governance Needs Audit Evidence Builds, Not Paper Promises: Singapore’s IMDA Model, EU AI Act, ISO 42001

Singapore’s agentic AI framework shows how regulators can require an “audit evidence build” sequence: permissions, traceability, delegated actions, and runtime monitoring with go-live gates.

March 23, 2026 · 15 min read