PULSE.

Multilingual editorial — AI-curated intelligence on tech, business & the world.


All content is AI-generated and may contain inaccuracies. Please verify independently.

Agentic AI · May 7, 2026 · 11 min read

Agentic AI Isolation for Critical Infrastructure: CI Fortify Controls You Can Implement Now

Agentic AI can change what systems “do” during an incident. This guide translates CI Fortify’s isolation and recovery premise into implementable controls for agent workflows.

Sources

  • nist.gov
  • csrc.nist.gov
  • nvlpubs.nist.gov
  • gov.uk
  • oecd.org

In This Article

  • Agentic AI Isolation for Critical Infrastructure: CI Fortify Controls You Can Implement Now
  • Start with isolation before agents act
  • Translating CI Fortify into agentic controls
  • Map dependencies to confine runtime reach
  • Allowlist tools and define entitlements
  • Least privilege that downgrades cleanly
  • Use entitlement tiers per workflow step
  • Keep agent identity separate from operators
  • Audit evidence that survives recovery
  • Record a verifiable evidence chain
  • Instrument orchestration without loosening trust
  • Turn lessons into quarterly isolation drills
  • What to publish before agents touch production

Agentic AI Isolation for Critical Infrastructure: CI Fortify Controls You Can Implement Now

Start with isolation before agents act

Give an agent broad access, and a wrong turn does not stay small. In critical infrastructure, the real question for agentic AI security teams isn’t just whether a model is “safe.” It’s whether the operating environment stays safe when the agent is wrong, compromised, or misconfigured. That is the CI Fortify premise: design for degraded operation by isolating components and enabling recovery, rather than banking on continuous correctness.

NIST is also treating agents as a security-relevant shift. NIST’s agent-focused work frames agentic AI as systems that can take actions, not just generate text, and it emphasizes evaluation and security testing for agent behaviors at scale. (Source) In practice, action-taking expands the blast radius of both bugs and attacks. Isolation-by-design is the control strategy that limits that blast radius.

Isolation is the only approach that still holds when vendor or cloud components fail mid-workflow. Agentic AI deployments depend on external tools (ticketing, code repositories, job runners, messaging systems, storage) and remote execution paths. When those dependencies degrade, a non-isolated agent can keep retrying, escalating permissions, or continuing steps no longer appropriate for the current safety state.

So what: treat agentic AI execution like production automation with failure modes, and bake isolation and recovery into the agent’s runtime design--not a last-minute hope during an incident.

Translating CI Fortify into agentic controls

CI Fortify’s isolation and recovery orientation translates into four operational requirements: (1) know what depends on what, (2) prevent the agent from using more access than it needs, (3) constrain and record tool actions, and (4) produce audit evidence that remains valid after recovery. This aligns with NIST’s emphasis on agent evaluation and tool-related risks: agent behavior can’t be assumed safe simply because the model is “good at reasoning.” You need controls around the system boundary, the tool interface, and runtime state. (Source)

Map dependencies to confine runtime reach

Dependency mapping means building a graph of which agent steps touch which systems, and which identities and credentials those steps use. For critical infrastructure, it’s more than an IT asset inventory. It is an “action dependency” map: what changes when an agent completes Step N. For example, “open change ticket” might trigger “push pipeline,” which then triggers “deploy configuration,” which might restart a production service.
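
A minimal sketch of such an action dependency map, expressed as data the orchestrator can query at runtime (all step names, systems, and credentials are hypothetical), might look like this:

    # Hypothetical action dependency map: which systems each agent step touches,
    # which credential it uses, and which downstream steps it can trigger.
    ACTION_DEPENDENCIES = {
        "open_change_ticket": {
            "systems": ["ticketing"],
            "credential": "agent-ticketing-writer",
            "triggers": ["push_pipeline"],
        },
        "push_pipeline": {
            "systems": ["ci_runner", "code_repo"],
            "credential": "agent-ci-trigger",
            "triggers": ["deploy_configuration"],
        },
        "deploy_configuration": {
            "systems": ["config_mgmt", "prod_service"],  # may restart a production service
            "credential": "agent-deployer",
            "triggers": [],
        },
    }

    def reachable_systems(step: str, deps: dict = ACTION_DEPENDENCIES) -> set:
        """Everything the agent can ultimately touch once it completes `step`."""
        systems, visited, frontier = set(), set(), [step]
        while frontier:
            current = frontier.pop()
            if current in visited:
                continue
            visited.add(current)
            node = deps.get(current, {})
            systems.update(node.get("systems", []))
            frontier.extend(node.get("triggers", []))
        return systems

During an isolation event, the same graph tells you which systems must stay reachable and which edges to cut.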

NIST’s agent security research and evaluation initiatives emphasize testing agents in realistic tool-use contexts, including large-scale red teaming for tool-enabled agents. The operational need is to understand where agent actions land and which tool calls represent meaningful risk. (Source)

In an isolation-by-design model, dependency mapping becomes the basis for runtime confinement. With the dependency graph, you can define what must remain reachable during an isolation event and what must be cut off without breaking core OT and critical services.

So what: before tightening permissions, build an action dependency map linking agent steps to concrete systems and credentials. Without that map, least privilege becomes guesswork--and isolation drills become unmeasurable.

Allowlist tools and define entitlements

Least privilege for agentic AI comes from tool allowlisting and entitlements, not only from IAM roles. Tool allowlisting means explicitly permitting a finite set of tools the agent can call--such as “read-only configuration lookup,” “create ticket,” or “run approved automation job.” Entitlements specify what the agent may do with each permitted tool (which repositories, which namespaces, which change windows).
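
Sketched under those assumptions (tool names, targets, and change windows are placeholders), the allowlist and entitlements can live as declarative, default-deny data that the orchestrator checks before any call:

    # Hypothetical tool allowlist with per-tool entitlements. Anything not listed
    # is denied by default; scopes narrow what each permitted tool may touch.
    TOOL_ENTITLEMENTS = {
        "config_lookup":      {"actions": ["read"],    "targets": ["prod/*"]},
        "create_ticket":      {"actions": ["create"],  "targets": ["project:infra-ops"]},
        "run_automation_job": {"actions": ["execute"], "targets": ["jobs/approved/*"],
                               "change_windows": ["tue 02:00-04:00 UTC"]},
    }

    def is_allowed(tool: str, action: str, target: str) -> bool:
        """Default-deny check: tool, action, and target must all match an entitlement."""
        spec = TOOL_ENTITLEMENTS.get(tool)
        if spec is None or action not in spec["actions"]:
            return False
        return any(
            target == pattern
            or (pattern.endswith("*") and target.startswith(pattern[:-1]))
            for pattern in spec["targets"]
        )

New tools or wider targets then require editing this data under change control, rather than granting the agent a broader IAM role.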

NIST’s guidance on AI risk management frameworks discusses structuring AI risk management through lifecycle activities. The operational translation is to treat tool permissions and entitlements as part of the agent’s risk controls across deployment, monitoring, and governance--not as one-time setup. (Source)

Isolation-by-design relies on this same boundary: if the agent is permitted to take actions only within an allowed sandbox of tools, you reduce the chance that a remote or compromised decision loop reaches into systems that must remain stable.

So what: implement an allowlist for agent tool calls and entitlements per tool, so “agent autonomy” never implies unlimited access.

Least privilege that downgrades cleanly

Least privilege is often treated as a static permissions model. Agentic AI security requires an operational approach: permissions must match the agent’s current safety state and change when recovery mode begins. NIST’s guidance on accelerating software and AI agent adoption highlights the need for engineering processes and evaluation to support safer adoption of agents--reinforcing that runtime control must be engineered, not improvised during incidents. (Source)

Use entitlement tiers per workflow step

Define entitlement tiers by workflow step criticality, then make tier changes automatic, observable, and reversible. For example:

  • Tier 0: Observe (read-only, no mutations)
  • Tier 1: Request (create tickets, propose changes)
  • Tier 2: Execute (run approved jobs, apply changes)
  • Tier 3: Emergency override (human approval required)

Make each tier an enforceable contract across three axes:

  1. Tool scope (which tool functions are callable at all)
  2. Target scope (which specific systems or repositories or projects or jobs are addressable)
  3. Action scope (which operations are permitted--e.g., “approve PR” vs “open PR,” “read config” vs “apply config”)

Each tier should map to separate tool entitlements and separate identity contexts (different workload identities/service principals). When isolation triggers, you automatically drop the agent to a lower tier. Crucially, freeze any in-flight execution context so the agent cannot “finish” privileged work after the downgrade signal fires.
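
A minimal sketch of tiers as enforceable contracts, each bound to its own workload identity, with a downgrade routine that also freezes in-flight work (all names are illustrative, not a specific product’s API):

    from dataclasses import dataclass

    # Illustrative tier contracts across the three axes (tool scope, target scope,
    # action scope), each bound to a separate workload identity.
    @dataclass(frozen=True)
    class TierContract:
        tools: frozenset
        targets: frozenset
        actions: frozenset
        identity: str
        needs_human_approval: bool = False

    TIERS = {
        0: TierContract(frozenset({"config_lookup"}), frozenset({"prod/*"}),
                        frozenset({"read"}), "agent-observer"),
        1: TierContract(frozenset({"config_lookup", "create_ticket"}),
                        frozenset({"project:infra-ops"}),
                        frozenset({"read", "create"}), "agent-requester"),
        2: TierContract(frozenset({"config_lookup", "create_ticket", "run_automation_job"}),
                        frozenset({"jobs/approved/*"}),
                        frozenset({"read", "create", "execute"}), "agent-executor"),
        3: TierContract(frozenset({"emergency_override"}), frozenset({"*"}),
                        frozenset({"execute"}), "agent-emergency", needs_human_approval=True),
    }

    audit_log: list = []

    def downgrade(current_tier: int, reason: str, in_flight_steps: list) -> int:
        """Drop to Tier 0 on a safety-state trigger and freeze in-flight privileged work."""
        for step in in_flight_steps:
            step.freeze()   # hypothetical step handle: halt before the next tool call fires
        audit_log.append({"event": "downgrade", "from": current_tier, "to": 0, "reason": reason})
        return 0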

Operationally, you need a deterministic downgrade path tied to identifiable safety-state conditions (a trigger-check sketch follows this list), including:

  • tool dependency degradation (e.g., job runner timeouts beyond threshold)
  • policy gate failures (repeated allow/deny outcomes suggesting tool abuse)
  • evidence integrity anomalies (missing audit artifacts, unexpected log gaps)
  • abnormal tool-call velocity (rate anomaly across steps)
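
A trigger-check sketch over those conditions might look like the following; the thresholds are placeholders to tune per environment, not recommendations:

    from dataclasses import dataclass

    # Illustrative safety-state snapshot; any single trigger firing forces the downgrade path.
    @dataclass
    class SafetyState:
        job_runner_timeouts: int
        policy_denials_last_hour: int
        missing_audit_records: int
        tool_calls_per_minute: float

    def should_downgrade(state: SafetyState) -> bool:
        return any([
            state.job_runner_timeouts > 3,          # tool dependency degradation
            state.policy_denials_last_hour > 10,    # repeated deny outcomes (possible tool abuse)
            state.missing_audit_records > 0,        # evidence integrity anomaly
            state.tool_calls_per_minute > 30.0,     # abnormal tool-call velocity
        ])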

So what: don’t assign least privilege once. Assign least privilege per step tier, enforce downgrade automatically on defined safety-state triggers, and freeze in-flight privileged work so recovery is real control--not just reduced intent.

Keep agent identity separate from operators

Treat the agent as an actor with its own identity, separate from operator accounts. Operators approve decisions, but the agent should not inherit broad human permissions. This is isolation-by-design because it prevents privilege escalation through workflows that reuse credentials.
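
One way to make the separation concrete (identity URIs and field names are hypothetical): the agent always acts under its own per-tier service principals, and operators appear only as approvers in the attribution record:

    # Hypothetical identity wiring: the agent acts only under its own per-tier
    # service principals; operators appear as approvers, never as the acting identity.
    AGENT_IDENTITIES = {
        0: "spn://agents/ci-fortify-observer",
        1: "spn://agents/ci-fortify-requester",
        2: "spn://agents/ci-fortify-executor",
        3: "spn://agents/ci-fortify-emergency",
    }

    def attribution_record(tier: int, tool_call_id: str, approver=None) -> dict:
        """Answers 'who or what authorized this tool call?' after the fact."""
        if tier >= 3 and approver is None:
            raise PermissionError("Tier 3 actions require an explicit human approver")
        return {
            "acting_identity": AGENT_IDENTITIES[tier],
            "approved_by": approver,          # operator ID for Tier 3, otherwise None
            "tool_call_id": tool_call_id,
        }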

NIST’s broader AI risk management framing supports identity separation as a lifecycle control in AI systems. In agentic deployments, separation must also include logging and attribution so you can answer: who or what authorized a tool call?

NIST’s published accelerator content also points to adopting engineering and evaluation practices that support safe behavior when agents operate at scale. Identity separation is one of the few controls that remains effective regardless of the model’s internal reasoning. (Source)

So what: implement agent identities with separate scopes and require human approval for Tier 3 actions so your recovery posture doesn’t depend on perfect agent judgment.

Audit evidence that survives recovery

Auditability is not optional for agentic AI security in critical environments. If an agent plans, executes, and self-corrects, you need evidence for each step: the prompt and state that led to a tool call, the tool call parameters, the response, and the policy decision that allowed or denied it. That evidence must remain valid after recovery, when systems may have rolled back or restarted.

NIST’s CAISI agent security research blog discusses large-scale red teaming for tool use. Red teaming is controlled adversarial testing, and its operational value for audit evidence is that it forces teams to prove that logs, containment boundaries, and tool constraints worked during hostile sequences. (Source)

Record a verifiable evidence chain

Self-correction is an agent revising its plan after it receives tool results or detects an error. Without an evidence chain, self-correction can become an audit blind spot, because the agent may change actions based on incomplete context.

An operational approach is to record the following (a minimal record-schema sketch follows the list):

  1. Planning context: what the agent believed, including relevant structured inputs
  2. Tool request: tool name, parameters, and target scope
  3. Policy gate outcome: allow or deny, and which policy rule applied
  4. Tool response: minimal necessary output stored securely
  5. Next step decision: why the agent chose to proceed or stop
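
A minimal sketch of such a record, hash-chained so that gaps or reordering remain detectable even after systems roll back or restart (field names are illustrative):

    import hashlib, json, time

    # Illustrative evidence-chain entry: each record commits to the previous one,
    # so a rollback or restart cannot silently remove or reorder steps.
    def evidence_record(prev_hash: str, planning_context: dict, tool_request: dict,
                        policy_outcome: str, tool_response_digest: str, next_step: str) -> dict:
        body = {
            "ts": time.time(),
            "prev_hash": prev_hash,
            "planning_context": planning_context,           # what the agent believed
            "tool_request": tool_request,                    # tool name, parameters, target scope
            "policy_outcome": policy_outcome,                # allow/deny plus the matched rule id
            "tool_response_digest": tool_response_digest,    # hash of the securely stored response
            "next_step": next_step,                          # proceed or stop, and why
        }
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return body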

NIST’s CAISI research and evaluation initiatives indicate that agent systems should be evaluated with tool-use probes and tests; those same probes should produce audit artifacts that can be reviewed after an incident. The public documentation may not specify exact logging fields, but it does establish that tool-use behaviors are central and must be tested. (Source)

So what: design your logging and evidence chain around tool calls and policy gates, because that’s what you can verify when the agent’s “self-correction” is exactly what operators need to challenge.

Instrument orchestration without loosening trust

Agent orchestration frameworks coordinate planning, tool use, and step execution across multiple services. In security terms, orchestration is where trust boundaries are easiest to break because it sits between the model and the tools. NIST’s agent adoption guidance stresses engineering practices for safe adoption, which translates into orchestration controls: policy gates at the orchestration layer, enforced entitlements per step, and complete evidence capture for tool actions. (Source)

You don’t need to adopt a specific vendor framework to apply the design pattern. The required controls are structural:

  • Orchestration as a policy enforcement point (PDP/PEP: policy decision point and policy enforcement point)
  • Deterministic routing to approved tools (no free-form tool execution)
  • State machine that supports isolation transitions (normal to isolation to recovery)
  • Audit hooks at every tool call

The orchestration layer should constrain “self-correction.” If a model proposes a new tool or expanded scope, orchestration should deny until a human-approved policy update is applied--or until the agent is explicitly downgraded to an observation tier.
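
As a sketch of that enforcement point (the states and helper names are assumptions, not any particular framework’s API), every proposed tool call can route through one default-deny gate that is also aware of the isolation state:

    from enum import Enum

    class Mode(Enum):
        NORMAL = "normal"
        ISOLATION = "isolation"
        RECOVERY = "recovery"

    # Tool calls permitted in each orchestration state; hypothetical tool names
    # reused from the allowlist sketch above. ISOLATION keeps only read paths reachable.
    MODE_ALLOWLIST = {
        Mode.NORMAL: {"config_lookup", "create_ticket", "run_automation_job"},
        Mode.ISOLATION: {"config_lookup"},
        Mode.RECOVERY: {"config_lookup", "create_ticket"},
    }

    def gate_tool_call(mode: Mode, tool: str, proposed_by_model: bool, audit: list) -> bool:
        """Policy enforcement point: deny anything outside the current mode's allowlist,
        including tools the model 'self-corrected' into proposing."""
        allowed = tool in MODE_ALLOWLIST[mode]
        audit.append({"mode": mode.value, "tool": tool,
                      "model_proposed": proposed_by_model, "decision": allowed})
        return allowed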

To keep incident response practical, instrumentation should make these answers available in under a minute:

  • What tier is the agent in right now?
  • Which tool/function call was attempted?
  • Which policy rule allowed or denied it?
  • Did the orchestration freeze any in-flight privileged work?

If the orchestrator can’t answer these, the evidence chain will be incomplete precisely when you need it most.

UK government guidance on agentic AI offers a useful backdrop for practitioners on how agentic capabilities are being addressed by governments. The operational takeaway is that agentic AI needs more than model-level safety: systems-level controls and practical constraints are expected. (Source)

So what: design orchestration so the agent cannot expand its own privileges. Make isolation a first-class state in your orchestrator state machine, and put audit gates at the tool boundary so tier, policy decision, and evidence are all observable during recovery.

Turn lessons into quarterly isolation drills

Agentic AI security is moving from “tool-use experiments” to enterprise deployment. That’s why isolation-by-design should be operationalized as a recurring control, not a one-time assessment. NIST’s work on evaluation probes and agent adoption acceleration points toward continuous evaluation and testing for safer adoption of agents. (Source, Source)

A realistic timeline from today (May 7, 2026):

  • Within 60 to 90 days: implement dependency mapping and step-tier entitlements for the top critical workflows that agents execute.
  • Within 120 to 180 days: run isolation trigger drills using agent evaluation probes, with evidence-chain validation during recovery.
  • Within 6 to 9 months: integrate tool allowlisting and entitlements into orchestration change control, so new tools require explicit policy updates.

This forecast is grounded in the direction of NIST’s ongoing evaluation probe work and adoption acceleration emphasis, though specific enterprise adoption schedules will vary by organization. The cited sources do not prescribe an exact quarterly cadence; that recommendation is a control-design inference from NIST’s evaluation emphasis. (Source, Source)

What to publish before agents touch production

If you manage agent deployments in or adjacent to critical services, require a concrete operational artifact: publish an “agent isolation runbook” that defines (a) which step tiers downgrade during isolation, (b) which tool allowlists remain available, and (c) what audit evidence must exist to close the incident. Tie approvals for entitlement expansions to a change-control workflow with human review.
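
A minimal sketch of that runbook as a reviewable, diffable artifact (all values are placeholders to adapt):

    # Illustrative "agent isolation runbook" expressed as data, so it can sit in
    # change control and be diffed whenever entitlements expand.
    ISOLATION_RUNBOOK = {
        "downgrade_on_isolation": {          # (a) which step tiers downgrade
            "tier_2_execute": "tier_0_observe",
            "tier_1_request": "tier_0_observe",
        },
        "tools_available_in_isolation": [    # (b) which tool allowlists remain available
            "config_lookup",
        ],
        "evidence_required_to_close": [      # (c) audit evidence needed to close the incident
            "complete hash-chained tool-call log",
            "policy gate decision for every attempted call",
            "confirmation that in-flight privileged work was frozen",
        ],
        "entitlement_expansion": {
            "requires": ["change ticket", "human reviewer sign-off"],
        },
    }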

This recommendation aligns with the core isolation-by-design approach described for critical infrastructure operations preparing to work in isolation, emphasizing isolation and recovery as operational posture. (Source)

If you want a simple rule to keep teams aligned: treat agentic AI autonomy like a privilege that contracts during isolation, and require an evidence chain that proves it.
