Blog | HCI Energy

HCI Energy at Infraday PNW: Practical Power for Wildfire, Flood, and Seismic Readiness

Written by Scott Briley | 8/28/25 3:26 PM

Scott Briley and Rebecca MacLeod, members of the HCI team, attended Infraday PNW in Seattle to hear firsthand from the agencies in charge of the Pacific Northwest’s critical public infrastructure: what fails most often, what’s working, and where power continuity still falls short.

Among the day’s sessions, one stood out: “Integrating Risk and Resilience into Infrastructure Systems—Wildfire, Flood, and Seismic Readiness.” It reflected what HCI designs for every day: uninterrupted power and real-time visibility. Much of the day underscored a shift from reactive to proactive—clear return-to-service objectives, routine self-tests, and shared dashboards so teams see issues before they become outages.

Below is a recap from the panel—and the power realities many of these organizations are facing in the field.

Panel: Integrating Risk and Resilience into Infrastructure Systems—Wildfire, Flood, and Seismic Readiness

Participants included leaders from transit, utilities, emergency management, public health, and local public works: Andrea Trepadean (Sound Transit), Ann Grodnik-Nagle (Seattle Public Utilities), Bradley Kramer (Public Health—Seattle & King County), Clayton Putnam (City of Shoreline), Curry Mayer (City of Seattle Office of Emergency Management), and John Schelling (King County DNRP).

Panel Highlights and Power Implications

Below are the themes we heard, the power requirements they imply, and practical approaches agencies can consider—drawn from deployments in similar settings.

What the panel emphasized for power:

  • Return-to-service objectives (target time to restore operations)
  • Continuity of service through grid disturbances and shocks
  • Shared status/monitoring so leadership and ops see the same picture
  • Hazard Identification and Vulnerability Analysis (HVA) to prioritize sites and neighborhoods

Transit: Incident response stalls when operations and communications sites lose power

Key point: Leadership readiness in an incident—knowing when to clear the way for trained teams—only works if operations and communications sites (e.g., P25/LMR, FirstNet, PSAP) don’t drop. (Sound Transit)

Power requirement: No-lapse power at control and comms sites during utility interruptions and voltage dips.

An approach to consider: Prioritize battery continuity with automatic generator switchover, give operations a shared power-status view, and size autonomy to cover access delays after storms or earthquakes.
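As a rough illustration of the autonomy-sizing point, here is back-of-the-envelope planning arithmetic. The loads, hours, and derating factor below are our own illustrative assumptions, not figures from the panel or from any specific deployment:

```python
def required_battery_kwh(load_kw, outage_hours, access_delay_hours,
                         usable_fraction=0.8):
    """Size battery autonomy to bridge an outage plus post-event access delay.

    usable_fraction discounts nameplate capacity for depth-of-discharge
    limits and aging -- an assumed planning factor, not a vendor spec.
    """
    bridge_hours = outage_hours + access_delay_hours
    return load_kw * bridge_hours / usable_fraction

# e.g., a 3 kW comms shelter, 8 h expected outage, 24 h access delay
size = required_battery_kwh(3.0, 8.0, 24.0)
print(round(size, 1))  # -> 120.0
```

The key design driver is the access-delay term: after a storm or earthquake, crews may not reach a site for a day or more, so autonomy must cover the outage plus the time it takes to get there.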

Utilities: Pump stations fail during cloudbursts and tidal flooding

Key point: Drainage and water assets must be hardened for cloudbursts and tidal flooding—especially in low-lying basins like Duwamish/South Park. (Seattle Public Utilities; City of Shoreline)

Power requirement: Continuous power for pump/lift stations, controls, SCADA/telemetry, and comms—with battery autonomy to bridge access/fuel delays, auto-gen failover, and remote alarms for triage.

An approach to consider: Run batteries as the primary power source, with auto-generator failover and battery autonomy sized for tide windows and access delays. Add remote alarms/telemetry, lead-lag HVAC, and elevated/sealed electronics with corrosion-resistant hardware for brackish or tidal sites.

Design for heat and smoke: Account for heat-dome loads and smoke infiltration with remote HVAC/air-quality alarms and planned filter changes during wildfire weeks.

Communications sites and backhaul must ride through quakes and surges

Key point: Seismic impacts, flooding, smoke, and large-event loads can interrupt communications sites (e.g., P25/LMR, FirstNet, PSAP), misalign microwave paths, or stall dispatch support—fragmenting incident command if power blinks. (Multiple panelists—seismic readiness & coordination)

Power requirement: No-lapse power and basic link/power visibility at comms/backhaul sites, with redundant backhaul options where consequences are high.

An approach to consider: Run sites battery-first with automatic generator backup so they ride through outages and voltage dips; add simple remote alerts for power and link status, plus a secondary connectivity option in high-consequence areas.
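A toy sketch of the battery-first control idea described above. The state-of-charge thresholds and return values are illustrative assumptions, not field setpoints:

```python
def next_action(utility_ok, battery_soc, gen_start_soc=0.5, gen_stop_soc=0.9):
    """Decide the generator command for a battery-first site.

    Loads always run from the battery bus, so utility blips never cause a
    transfer gap; the generator exists only to recharge the battery during
    long outages. Thresholds here are assumed, not field-tested.
    """
    if utility_ok:
        return "gen_off"            # utility power recharges the battery
    if battery_soc <= gen_start_soc:
        return "gen_on"             # long outage: generator carries the charge
    if battery_soc >= gen_stop_soc:
        return "gen_off"            # battery replenished; conserve fuel
    return "hold"                   # inside hysteresis band: keep current state

print(next_action(False, 0.35))  # -> gen_on
print(next_action(True, 0.35))   # -> gen_off
```

The hysteresis band between the start and stop thresholds keeps the generator from short-cycling as the battery charges and discharges.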

Citywide operations break down without a shared power view

Key point: Scaling playbooks for large crowds (≈500,000 downtown on game days) and multi-agency incidents requires consistent, resilient power across EOCs, staging areas, resilience hubs (libraries, community centers), and pop-up field posts. (King County panelists)

Power requirement: Standardized configurations across all sites and a citywide common operating picture, enabling teams to triage and prioritize quickly.

An approach to consider: Standardize configurations and naming, set common alarm thresholds, and use fleet-wide monitoring with a citywide roll-up. Pre-stage quick-connect power kits for pop-up posts, and do tabletop and load-bank exercises before major events. Use your jurisdiction’s Hazard Identification & Vulnerability Analysis (HVA) to prioritize sites; HCI maps battery autonomy and monitoring to those priorities.
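To make the "common alarm thresholds plus citywide roll-up" idea concrete, here is a minimal sketch. The field names, threshold values, and site names are hypothetical, invented for illustration:

```python
# Hypothetical citywide roll-up: every site reports the same fields against
# common alarm thresholds, so ops and leadership triage from one list.
THRESHOLDS = {"min_soc": 0.4, "min_runtime_h": 8.0}  # assumed common setpoints

def rollup(sites):
    """Return sites breaching any common threshold, worst (shortest runtime) first."""
    flagged = [
        s for s in sites
        if s["soc"] < THRESHOLDS["min_soc"]
        or s["runtime_h"] < THRESHOLDS["min_runtime_h"]
    ]
    return sorted(flagged, key=lambda s: s["runtime_h"])

fleet = [
    {"site": "EOC-North", "soc": 0.9, "runtime_h": 30.0},
    {"site": "PumpStn-12", "soc": 0.3, "runtime_h": 5.5},
    {"site": "Hub-Library", "soc": 0.6, "runtime_h": 7.0},
]
print([s["site"] for s in rollup(fleet)])  # -> ['PumpStn-12', 'Hub-Library']
```

Standardized fields and shared thresholds are what make the roll-up possible: if every site reports the same schema, one sorted list becomes the common operating picture.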

County operations can’t outlast prolonged power loss

Key point: When grid outages last days, pumps, gates, SCADA, comms, and safety systems can stall—so wastewater, solid waste, and parks need continuity plans that assume delayed access and refuels. (King County DNRP)

Power requirement: Long-duration, low-maintenance continuity for controls, IT, and comms—integrated with SCADA/telemetry, with enough autonomy to bridge access/fuel gaps and simple remote status for triage.

Example: King County completed a 16.5 MW battery at the West Point treatment plant to keep the main pumps online during power disruptions.

An approach to consider: Go battery-first with auto-generator assistance for endurance; define load-shedding priorities, enable remote runtime and fuel forecasts, and align fuel types and spares across sites to simplify logistics over multiple days.
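The load-shedding and runtime-forecast ideas can be sketched together as simple planning arithmetic. Load names, power draws, and priority numbers below are hypothetical examples, not real site data:

```python
def forecast_and_shed(energy_kwh, loads, target_hours):
    """Project runtime and shed lowest-priority loads until the target holds.

    loads: list of (name, kw, priority) -- lower priority number = more
    critical. This is illustrative planning math, not a controller.
    """
    keep = sorted(loads, key=lambda x: x[2])  # most critical first
    shed = []
    while keep and energy_kwh / sum(kw for _, kw, _ in keep) < target_hours:
        shed.append(keep.pop())               # drop the least-critical load
    runtime = energy_kwh / sum(kw for _, kw, _ in keep) if keep else float("inf")
    return runtime, [name for name, _, _ in shed]

loads = [("scada", 1.0, 1), ("comms", 0.5, 1),
         ("hvac", 2.0, 3), ("lighting", 0.5, 2)]
runtime, shed = forecast_and_shed(48.0, loads, target_hours=24.0)
print(round(runtime, 1), shed)  # -> 24.0 ['hvac']
```

Pre-agreeing on the priority ordering is the point: when an outage stretches into days, operators already know which loads go first and what runtime the remaining energy buys.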

Four takeaways we drew from the discussion

  1. Communications first. Treat communications sites (e.g., P25/LMR, FirstNet, microwave/backhaul, and PSAP equipment) as the first power dependencies to harden. Aim for no-lapse transfer and enough battery autonomy to cover access/fuel delays.

  2. Design for continuity, not “backup.” “Backup” implies a gap. Engineer for seamless source transfer (no reboots or resets) so radios, links, controls, and SCADA stay live through utility blips and outages.

  3. Make power visible. Use fleet-wide monitoring and shared dashboards so ops and leadership act on the same facts—runtime, fuel/battery state, alarms, and failure points—reducing guesswork and truck rolls.

  4. Put power where the risk is. Harden the actual risk nodes—ridge-top radios, bridges, pump/lift stations, floodplains, EOCs, resilience hubs, and pop-up posts—with right-sized, modular setups and standardized configs to speed deployment and recovery.

At Infraday PNW, one message came through clearly: when storms, smoke, or seismic activity hit, critical sites have to stay on, and everyone needs the same live view. Agencies talked about designing for downpours (not drizzle), planning for slow access after events, and removing single points of failure. That’s where HCI fits: battery-first power that rides through utility blips and outages, plus straightforward remote status so operators and leadership see the same alerts, run time, and fuel or battery levels.

Planning for flood/seismic incidents or citywide events? Schedule a 30-minute scoping call—we’ll review your critical sites and outline power autonomy and monitoring options.