Gigawatts of AI compute are being marketed like campus projects. The investment reality is substations, interconnection queues, and 24/7 power reliability.
On paper, “AI campuses” look like a straightforward compute build. In Ohio—around SoftBank’s Stargate push—the ordering is different: the power system sets the calendar.
Recent reporting tied to the Portsmouth area describes a data center at up to 10 gigawatts alongside up to 10 gigawatts of new generation, including 9.2 gigawatts of natural gas, plus grid upgrades and transmission lines meant to avoid customer-rate impacts. (Source)
That sequence matters because AI infrastructure procurement doesn’t behave like a normal capex cycle. When electricity limits workloads, everything downstream gets re-priced—training throughput, server utilization, and even the financial structure of “capacity” sales. Hyperscalers and data-center developers are pushed to treat interconnection and generation availability as first-order assets, not externalities.
So the investigative question becomes: what actually sits inside the “power-to-compute” stack for gigawatt-scale data centers—and where do costs and timelines truly originate? The answer isn’t model performance. It’s power procurement, grid reliability obligations, interconnection delays, and power electronics that shape total delivered cost per unit of AI work.
Power-to-compute is the path from bulk electricity procurement to the last conversion stage feeding GPU and accelerator compute boards. In practice, “delivered compute” depends on four structural constraints—measured as four deltas: (1) when power becomes deliverable, (2) how much purchased energy becomes usable IT load, (3) how long a site can ride through an outage without shedding IT load, and (4) what penalties or payments follow when availability commitments aren’t met.
First is grid deliverability. You can sign long-term electricity agreements and still be unable to pull power when the grid needs to protect itself. Interconnection queues determine when capacity can flow from generation to load, and U.S. process reforms underline how long that transition can take. The regulatory timeline runs through FERC rulemaking and RTO implementation: FERC Order No. 2023 (issued July 28, 2023) reformed interconnection procedures, replacing largely first-come, first-served processing with cluster studies. (Source) The operational reality is blunt: “AI demand” doesn’t create physical electrons. It must find legal and electrical pathways through a constrained transmission system—and those pathways come with dates that can slip even after a data-center deal is signed.
Second is reliability engineering. AI workloads are mission-critical economically: downtime can cost more than slower inference. That drives design patterns like redundant power paths and backup generator systems sized for meaningful runtime. While exact hours vary by operator and site, one compiled data-center technical handbook notes typical reliance on diesel generator backup with on-site fuel storage sized for roughly 12–48 hours at full load, in configurations such as N+1 or 2N redundancy depending on generator count and risk posture. (Source) For investors and operators, this matters because “uptime guarantees” aren’t just marketing promises—they translate into engineering choices (redundant buses, transfer switching, fuel logistics, maintenance cadence) that raise both capex and opex, and they also shape what portion of energy is effectively “interruptible” versus “firm.”
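To make the redundancy arithmetic concrete, here is a minimal sizing sketch, assuming a hypothetical 100 MW of critical IT load served by 3 MW diesel units; the unit size and load are illustrative assumptions, not figures from the cited handbook.

```python
# Hypothetical back-of-envelope sizing for backup generation (illustrative only;
# unit sizes, load, and redundancy choices are assumptions, not a vendor spec).
import math

def generator_fleet(it_load_mw: float, unit_mw: float, scheme: str) -> int:
    """Number of generator units needed for a given redundancy scheme."""
    n = math.ceil(it_load_mw / unit_mw)   # units required to carry full load
    if scheme == "N":
        return n
    if scheme == "N+1":
        return n + 1                       # one spare unit
    if scheme == "2N":
        return 2 * n                       # fully duplicated fleet
    raise ValueError(f"unknown scheme: {scheme}")

# Example: 100 MW of critical IT load served by 3 MW diesel units.
for scheme in ("N", "N+1", "2N"):
    print(scheme, generator_fleet(100, 3.0, scheme), "units")
```

The point is not the exact unit count; it is that every step up in redundancy multiplies generator, fuel, and maintenance spend roughly in proportion.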
Third is conversion efficiency and architecture. Electricity bought at the grid doesn’t deliver one-to-one to compute boards. Every conversion stage introduces losses, and those losses compound at high power densities. Silicon carbide (SiC) power electronics has become part of the infrastructure narrative because it promises better efficiency and thermal performance at higher voltages—reducing conversion losses and cooling burden. For example, Infineon has promoted an 800-volt power architecture direction for AI data centers built on SiC, citing targets of “efficiency levels as high as 98 percent per conversion stage” (as stated in its product/press context). (Source) Microchip similarly described a shift toward architectures such as 400V DC rack power distribution paired with its mSiC technology, aimed at reducing energy losses across AI data center power conversion. (Source)
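A short sketch of why per-stage losses matter at scale, assuming four conversion stages and illustrative per-stage efficiencies; the stage count and the 96% baseline are assumptions, and only the roughly-98%-per-stage figure echoes the vendor claim above.

```python
# Simple illustration of how per-stage conversion losses compound (stage count
# and efficiencies are assumptions for illustration, not measured figures).

def end_to_end_efficiency(stage_efficiencies):
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

grid_mw = 100.0
legacy = [0.96, 0.96, 0.96, 0.96]      # e.g., four conversion stages at 96%
improved = [0.98, 0.98, 0.98, 0.98]    # the ~98%-per-stage target applied to each stage

for label, stages in (("legacy", legacy), ("improved", improved)):
    eff = end_to_end_efficiency(stages)
    print(f"{label}: {eff:.3f} end-to-end -> {grid_mw * eff:.1f} MW at the boards, "
          f"{grid_mw * (1 - eff):.1f} MW lost as heat to be cooled")
```

Every megawatt lost in conversion is also a megawatt of heat the cooling plant has to remove, which is why the efficiency and thermal arguments travel together.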
So what: If you’re investigating AI compute infrastructure seriously, treat “power-to-compute” as a four-delta problem—deliverability timing, conversion losses, ride-through duration, and availability economics. The grid queue and the last power conversion stage both affect effective compute delivered per paid megawatt-hour, and at gigawatt scale, one missed milestone can dominate the timeline more than any single percentage-point efficiency claim.
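One way to see how the four deltas interact is a toy model of delivered IT energy over a year; every input below (contracted load, slip, penalty rate) is an illustrative assumption, not data from any cited project.

```python
# Illustrative sketch of the four-delta framing (all figures are assumptions,
# not project data): usable IT energy and penalty exposure over one year.

HOURS_PER_YEAR = 8760

def delivered_compute_mwh(
    contracted_mw: float,         # nameplate IT load contracted (MW)
    months_delayed: float,        # delta 1: deliverability slip before energization
    conversion_efficiency: float, # delta 2: grid intake -> usable IT load (0-1)
    outage_hours: float,          # grid interruption hours during the year
    ride_through_hours: float,    # delta 3: on-site backup runtime per year of outages
    penalty_per_mwh: float,       # delta 4: payment owed per MWh of missed availability
) -> dict:
    """Return usable IT energy and availability-penalty exposure for one year."""
    energized_hours = max(0.0, HOURS_PER_YEAR - months_delayed * 730)  # ~730 h/month
    # Outage hours beyond what backup can ride through shed IT load.
    shed_hours = max(0.0, outage_hours - ride_through_hours)
    usable_hours = max(0.0, energized_hours - shed_hours)
    usable_mwh = contracted_mw * conversion_efficiency * usable_hours
    penalty = contracted_mw * shed_hours * penalty_per_mwh
    return {"usable_it_mwh": usable_mwh, "availability_penalty_usd": penalty}

# Example: 500 MW contracted, 3-month slip, 90% end-to-end conversion,
# 20 outage hours against 12 hours of ride-through, $200/MWh penalty.
print(delivered_compute_mwh(500, 3, 0.90, 20, 12, 200))
```

Even in this toy form, the slip term dwarfs the others: a few months of delayed energization removes far more usable energy than a percentage point of conversion efficiency adds back.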
Electricity demand growth for data centers is no longer an abstract forecast. The scale now shows up in shares of national generation and in state-level stress points.
In the U.S., the Electric Power Research Institute (EPRI) published scenario-based estimates that data centers could consume between 4.6% and 9.1% of U.S. electricity generation by 2030, across four scenarios. (prnewswire.com) The Department of Energy (DOE) later summarized an analysis finding that data centers consumed about 4.4% of total U.S. electricity in 2023 and are expected to consume approximately 6.7% to 12% by 2028. (Source)
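For rough scale, those shares can be translated into absolute energy, assuming total U.S. generation of roughly 4,200 TWh per year held flat for simplicity; that total is an outside assumption, not part of the EPRI or DOE figures.

```python
# Rough translation of the cited shares into absolute energy, assuming total U.S.
# generation of roughly 4,200 TWh/year (an assumption held flat for simplicity).

TOTAL_GENERATION_TWH = 4200

for label, share in [("2023 (DOE, ~4.4%)", 0.044),
                     ("2028 low (DOE, 6.7%)", 0.067),
                     ("2028 high (DOE, 12%)", 0.12),
                     ("2030 low (EPRI, 4.6%)", 0.046),
                     ("2030 high (EPRI, 9.1%)", 0.091)]:
    print(f"{label}: ~{TOTAL_GENERATION_TWH * share:.0f} TWh/year")
```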
Capacity economics follow those numbers. As the national share rises, regional grid constraints tighten. You feel it in interconnection processes and in reliability procurement. PJM, the RTO covering much of the U.S. mid-Atlantic and Midwest where Ohio sits, publicly described interconnection reform progress: its “transition queue” has been reduced to roughly 63,000 MW of projects from approximately 200,000 MW at the start of the reform transition. (insidelines.pjm.com) PJM also noted in 2025 planning material that it had processed more than 170,000 MW of new generation requests since 2023, while 30,000 MW of generation projects remained in its interconnection transition queue to be processed in 2026. (Source)
If you connect these facts, a structural conclusion emerges: gigawatt-scale AI facilities are competing not only for power price, but for physical and procedural grid time.
So what: When you hear “campus announced,” immediately ask two investigator questions. What is the deliverability plan through interconnection? And what is the reliability plan for backup and grid support? Without both, gigawatt claims risk remaining marketing headlines rather than deliverable capacity.
SoftBank’s Ohio storyline is often framed with headline numbers, but the procurement mechanics are harder—and more expensive.
Recent reporting tied the Portsmouth “PORTS Technology Campus” to expectations of a 10-gigawatt data center and up to 10 gigawatts of new power generation, including 9.2 gigawatts of natural gas, according to DOE. (Source) The same reporting described a $4.2 billion investment in grid upgrades and new transmission lines, paired with claims that the upgrades would not raise customer rates. (Source)
That raises the cost-recovery question: who pays for substations, transmission, and interconnection upgrades that enable AI loads, and how is that financing treated in rates and contracts? In practical terms, the “bundle” claim matters because grid upgrades are often recovered through utility rate structures and/or negotiated developer-payment mechanisms that can include (a) customer-funded or customer-allocated network upgrade costs, (b) utility-backed projects with future rate recovery schedules, or (c) arrangements where the data-center load partially “pays for itself” through contract terms and tariff riders. In a portfolio deal, the investor’s job is to separate the electrons purchase (power supply) from the wires purchase (upgrade and delivery). The SoftBank/Ohio example suggests an ecosystem approach where both are treated as deal-defining assets rather than behind-the-scenes enablement.
A second, practical layer is manufacturing and site readiness. SoftBank-linked Stargate server production has been tied to the Lordstown, Ohio industrial base, with reporting that SoftBank acquired Foxconn’s Ohio facility and planned retrofits to build AI servers and other gear. (tomshardware.com) This matters for the infrastructure stack because hardware procurement rarely waits for the grid to finish. If power deliverability slips by even a year, racks can become stranded—capacity sits idle while power is missing, or contracts force rescheduling, renegotiation of commissioning dates, and revised depreciation/interest timing. The manufacturing tie-in therefore isn’t just industrial policy; it’s schedule-coupling risk management across different procurement timelines (servers, switchgear, transformers, interconnection upgrades).
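A back-of-envelope sketch of what a deliverability slip costs when hardware is already installed; the capex figure, depreciation schedule, and financing rate below are assumed for illustration only.

```python
# Hypothetical carrying-cost sketch for stranded hardware when power slips
# (all dollar figures, rates, and durations are illustrative assumptions).

def idle_hardware_cost(capex_usd: float,
                       slip_months: float,
                       deprec_years: float = 5.0,
                       interest_rate: float = 0.06) -> float:
    """Depreciation plus financing carry on installed-but-unpowered hardware."""
    monthly_depreciation = capex_usd / (deprec_years * 12)
    monthly_interest = capex_usd * interest_rate / 12
    return slip_months * (monthly_depreciation + monthly_interest)

# Example: $2B of racks and switchgear sitting idle through a 12-month slip.
print(f"${idle_hardware_cost(2_000_000_000, 12):,.0f} of carry before any revenue")
```

The carry accrues whether or not anyone is at fault, which is why schedule coupling between server procurement and grid milestones is a financial question, not just a logistics one.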
A third layer is that campus scale may be staged. Local reporting around Lordstown described that a Stargate-linked data center planned for the hub would not necessarily be a massive single facility, and that SoftBank indicated it would use only a small percentage of the previously reported power figures. (wfmj.com) Even if this remains an evolving statement rather than a final technical spec, it reinforces the investigator point: gigawatt narratives often arrive before the power and engineering constraints are fully “locked.” Staging is often the only way to convert a provisional interconnection and construction plan into trancheable commitments—meaning practical “gigawatts” become a sequence of deliverability milestones rather than one overnight switch.
So what: Use SoftBank/Ohio not to chase hype, but to map the transaction logic. Identify which party is financing grid upgrades, which party bears interconnection schedule risk, and how redundancy is budgeted. Then ask what the contract does when deliverability slips—liquidated damages, phased capacity take-or-pay, termination rights, or step-up pricing. Those decisions determine whether “gigawatts” translate to usable compute or to an option-like claim.
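To see how those contract levers allocate risk, here is a hypothetical settlement sketch for a phased take-or-pay arrangement with liquidated damages; the structure and every number are invented for illustration and do not describe any actual SoftBank or Ohio contract terms.

```python
# Illustrative payoff sketch for one way a contract might allocate slip risk
# (phased take-or-pay plus liquidated damages); all terms and numbers are
# invented for illustration, not drawn from any reported agreement.

def slip_settlement(contracted_mw: float,
                    delivered_mw: float,
                    months_late: float,
                    take_or_pay_usd_per_mw_month: float,
                    ld_usd_per_mw_month: float) -> dict:
    shortfall_mw = max(0.0, contracted_mw - delivered_mw)
    return {
        # Buyer pays take-or-pay on the capacity actually made available,
        # whether or not it fully uses it during the delay period.
        "buyer_take_or_pay_usd": delivered_mw * months_late * take_or_pay_usd_per_mw_month,
        # Seller owes liquidated damages on the undelivered portion.
        "seller_liquidated_damages_usd": shortfall_mw * months_late * ld_usd_per_mw_month,
    }

# Example: 400 MW contracted, 250 MW energized on time, remainder 6 months late.
print(slip_settlement(400, 250, 6, 45_000, 30_000))
```

The asymmetry between the two lines is exactly what due diligence should surface: whoever holds the smaller exposure when deliverability slips has effectively sold the schedule risk to the other side.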
Interconnection delays can turn a grid plan into a financial risk. It’s not just waiting months or years for paperwork; it’s how investors hedge timing and how operators insure availability.
The U.S. interconnection process has been reformed to reduce serial delays. FERC’s interconnection final rule, issued July 28, 2023, is designed to streamline procedures and agreements and replaces earlier first-come, first-served processing with a cluster-study approach. (ferc.gov) PJM’s public materials frame its transition queue reduction as progress, but the numbers still underline ongoing backlog. (insidelines.pjm.com) PJM has also acknowledged near-term planning pressure driven by demand growth from data centers while maintaining adequate supply. (insidelines.pjm.com)
For gigawatt campuses, the hidden mechanism is deposit economics and schedule coupling. Interconnection study costs, upgrade triggers, and timing windows affect whether developers can ramp power intake when GPUs are ready. Even with reforms, major network upgrades may still be required, cascading into permitting timelines, equipment lead times, and construction concurrency constraints.
Another structural wrinkle comes from reliability resource procurement. PJM described actions such as its Reliability Resource Initiative (RRI), intended to get “shovel-ready” high-reliability projects studied and connected faster by adding them to an existing transition study cycle rather than making them wait for the next cycle of the fully implemented interconnection process. (Source) PJM also flagged a potential capacity shortage affecting the system as early as the 2026/2027 Delivery Year, based on its load forecast. (Source)
So what: Investigate interconnection as a risk instrument, not a utility procedure. Look for contract language that assigns upgrade and timing risk, and use RTO filings to understand whether the grid is ready to deliver capacity when compute assets are commissioned.
The cost of AI compute infrastructure isn’t only $/megawatt for the data hall. It’s $/delivered-megawatt-hour—with uptime guarantees, conversion losses, fuel strategies, and grid support obligations.
Start with standby power and backup sizing. If a site is designed for N+1 or 2N generator redundancy, capital and operating costs rise with generator count, fuel storage, and maintenance. A technical handbook compilation notes many large data centers rely on diesel engine generators as backup, with stored diesel typically sized for 12–48 hours at full load. (datacenterhive.com) Those choices create two investigator-visible constraints: land and space for tanks and pumps, and emissions compliance and fuel logistics.
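A rough fuel-volume estimate for the cited 12–48 hour runtime range follows; the specific fuel consumption (about 0.27 liters per kWh of generator output) and the ancillary-load overhead are assumed values typical of large diesel gensets, not numbers from the cited handbook.

```python
# Rough fuel-volume estimate for the cited 12-48 hour runtime range. The specific
# fuel consumption (~0.27 L per kWh of generator output) and the 20% overhead for
# cooling/ancillary load are assumptions, not figures from the cited handbook.

LITERS_PER_KWH = 0.27

def diesel_storage_liters(it_load_mw: float, runtime_hours: float,
                          overhead_factor: float = 1.2) -> float:
    """Fuel needed to carry IT load plus cooling/ancillary overhead for a given runtime."""
    site_load_kw = it_load_mw * 1000 * overhead_factor
    return site_load_kw * runtime_hours * LITERS_PER_KWH

for hours in (12, 48):
    liters = diesel_storage_liters(100, hours)   # 100 MW IT load example
    print(f"{hours} h at full load: ~{liters / 1_000_000:.2f} million liters on site")
```

Even at a modest 100 MW, the 48-hour case implies tank farms, refueling contracts, and permitting footprints that scale linearly with the gigawatt ambitions in the headlines.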
Next is power market exposure. Data-center operators may negotiate power procurement contracts that don’t perfectly align with their workload profiles. AI inference and training can be load-shaped by scheduling systems and by hardware utilization targets, but grid operators plan around deliverability and reliability. The result is that operators may pay for capacity they can’t access when they need it, or accept higher per-kWh pricing during constrained hours.
Finally, there’s the “grid-support” bargain. In a highly constrained system, developers and utilities are pushed toward solutions that keep the grid stable: grid upgrades, participation in reliability mechanisms, and operational arrangements that reduce risk for ratepayers. SoftBank/Ohio’s reported bundling of grid upgrades alongside generation and an assertion of non-rate impact illustrates the political and regulatory bargain involved. (apnews.com) Even if details vary by jurisdiction and contract, the economic pattern is persistent: the question is whether the site pays for firm delivery through its own arrangements (contracted deliverability, on-site resilience, grid services commitments) or whether costs are shifted outward into rate design and reliability funding structures.
So what: For capacity economics, ask which component of the $ figure is paid by the operator and which is socialized. Then check whether standby assets are treated as insured availability or as “sunk redundancy,” because that changes how you model delivered compute economics—and whether “power” should be valued as an energy commodity, a capacity product, or a hybrid availability guarantee.
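One minimal way to express that distinction is a delivered-cost metric: dollars per MWh actually reaching IT load, folding in conversion losses and whatever share of standby cost the operator, rather than ratepayers, carries. All inputs below are placeholder assumptions, not figures from the reporting cited above.

```python
# Minimal sketch of a delivered-cost metric: $ per MWh actually reaching IT load,
# folding in conversion losses and amortized standby assets. All inputs are
# placeholder assumptions, not figures from the reporting cited above.

def cost_per_delivered_mwh(energy_price_usd_mwh: float,
                           conversion_efficiency: float,
                           annual_it_mwh: float,
                           standby_annual_cost_usd: float,
                           socialized_share: float = 0.0) -> float:
    """Operator-borne cost per MWh of usable IT energy."""
    purchased_mwh = annual_it_mwh / conversion_efficiency   # buy more than you use
    energy_cost = purchased_mwh * energy_price_usd_mwh
    operator_standby = standby_annual_cost_usd * (1 - socialized_share)
    return (energy_cost + operator_standby) / annual_it_mwh

# Example: $55/MWh energy, 90% end-to-end conversion, 800,000 MWh/yr of IT load,
# $25M/yr of standby cost (generators, fuel, maintenance), none of it socialized.
print(f"${cost_per_delivered_mwh(55, 0.90, 800_000, 25_000_000):.2f} per delivered MWh")
```

Changing the socialized share or treating standby as a recoverable rate item moves the answer materially, which is why the "who pays" question belongs in the same model as the energy price.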
The investor and engineering question is shifting from “How do we lower hardware cost per accelerator?” to “How do we reduce total delivered cost per training or inference workload?”
One lever is higher-voltage power conversion and new rack architectures. Microchip described power distribution changes such as 400V DC rack power distribution paired with SiC-based approaches to optimize power conversion, reduce energy losses, and improve reliability in high-power-density environments. (Source) Infineon similarly positioned silicon carbide within an 800V-related direction, aiming at high efficiency per conversion stage in future AI data center architectures. (Source)
A second lever is advanced power semiconductors beyond SiC alone. The same Infineon press framing ties together SiC, gallium nitride (GaN), and silicon technologies for efficiency improvements. (Source) The investor constraint is that components matter only insofar as they improve system-level economics: higher conversion efficiency reduces losses, lowering both energy costs and cooling burdens for the same computational throughput.
A third lever is grid interfacing and generation placement choices. If you can reduce reliance on long-distance transmission upgrades—or accelerate delivery time by siting closer to generation—you may reduce interconnection and network upgrade risk. PJM’s use of reliability resource initiatives shows one institutional approach to compress time-to-connection for high-reliability resources when queues are stressed. (Source)
So what: Treat power electronics as part of a total-cost model, not an isolated tech bet. When comparing infrastructure plans, quantify conversion losses and their effect on delivered energy per unit workload. Then map whether those improvements offset interconnection and grid upgrade constraints.
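A quick sensitivity sketch of that quantification step: what a one-point per-stage efficiency gain is worth at gigawatt scale, assuming four conversion stages, a $60/MWh energy price, and a 0.3 cooling-overhead multiplier; all of these are assumptions chosen for illustration.

```python
# Illustrative sensitivity check: the value of a one-point per-stage efficiency
# gain at gigawatt scale. Stage count, price, and cooling multiplier are assumptions.

HOURS_PER_YEAR = 8760

def annual_savings_musd(it_load_mw, stages, eff_before, eff_after,
                        price_usd_mwh=60, cooling_overhead=0.3):
    """Energy-cost savings ($M/yr) from raising per-stage conversion efficiency."""
    def grid_draw_mw(eff_per_stage):
        return it_load_mw / (eff_per_stage ** stages)
    saved_mw = grid_draw_mw(eff_before) - grid_draw_mw(eff_after)
    # Each avoided MW of loss also avoids roughly cooling_overhead MW of cooling load.
    saved_mwh = saved_mw * (1 + cooling_overhead) * HOURS_PER_YEAR
    return saved_mwh * price_usd_mwh / 1e6

# Example: 1,000 MW of IT load, four conversion stages, 97% -> 98% per stage.
print(f"~${annual_savings_musd(1000, 4, 0.97, 0.98):.0f}M per year")
```

A saving in the tens of millions of dollars per year is real money, yet it is still smaller than the cost of a multi-month interconnection slip, which is the comparison the total-cost model should force.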
These documented examples help map how real AI compute decisions get made through power and grid realities.
PORTS Technology Campus power bundling in Ohio. AP reported that the Portsmouth area is expected to include a 10-gigawatt data center and up to 10 gigawatts of new power generation including 9.2 gigawatts of natural gas, alongside grid upgrades and transmission lines described as part of the plan. (Timeline: reporting published 2026-03-21, referencing DOE expectations.) (Source)
PJM interconnection reform progress and remaining backlog. PJM described that its transition queue has been reduced to ~63,000 MW from ~200,000 MW as part of interconnection reform progress. (Timeline: fact-sheet referenced as updated June 2025, with reform context.) (Source)
PJM forecast-driven reliability urgency. PJM’s RRI-related reporting describes capacity shortage concerns potentially affecting the system as early as the 2026/2027 Delivery Year, in the context of its forecast and FERC actions. (Timeline: PJM article describing actions accepted by FERC, published in 2025.) (Source)
SoftBank’s Ohio hardware pipeline through Lordstown manufacturing. Reuters-reported coverage (via aggregated reporting) described SoftBank’s acquisition of Foxconn’s Ohio facility in Lordstown, with retrofits planned to build AI servers and related infrastructure gear, helping link manufacturing readiness with compute delivery. (Timeline: Reuters-reported deal context dated 2025-08-18 in the cited coverage.) (tomshardware.com)
So what: To “unpack the black box,” track these signals across three dimensions: (1) grid deliverability packages, (2) interconnection reform status and remaining queue volumes, and (3) hardware readiness synchronization with site power commissioning.
The binding constraint on gigawatt-scale AI compute infrastructure is shifting from compute procurement to grid readiness. It’s measurable in electricity share projections, interconnection reform timelines, and RTO reliability planning urgency. (prnewswire.com) The practical consequence is clear: infrastructure developers need to de-risk power deliverability the same way they de-risk supply-chain components.
Policy recommendation (U.S. actors): the Federal Energy Regulatory Commission (FERC) and relevant RTO/ISO operators should require that large new data-center power commitments in constrained regions include transparent, auditable “deliverability milestones” tied to interconnection status and construction readiness. The intent is to reduce speculative capacity capture and shift investment toward projects that can actually inject and deliver power when compute arrives.
The share-ready truth: campuses will keep getting announced—but the winners will be the teams that treat grid deliverability and reliability accounting as the real product, and build the substation like it’s part of the server stack.
Planned US data centers face power delays tied to grid hardware lead times and interconnection limits, forcing hyperscalers and utilities into new PPA and reliability fights.
The DOE’s PORTS Technology Campus model turns hyperscaler–chipmaker deals into sovereign, power-contracting arrangements with grid risk and site governance embedded.
The next AI infrastructure deal is less about chips than about who finances substations, secures grid access, and absorbs the risk of 24/7 power demand.