AI in Critical Infrastructure — Published: 21 March 2026 · 6 min read
Enforcing NIST Security Compliance with AI Inside a Major UK Electrical Utility
Some details of this engagement remain confidential. What we can share is how an OntosLab Frontline Deployed AI Engineer — embedded directly inside the client — built a production compliance system on grounded data across the Microsoft stack, changing how a large energy operator manages security governance and site attendance at some of the UK’s most sensitive infrastructure.
The organisation we refer to here as BigElec operates critical electrical infrastructure at a scale that places it squarely within the scope of the UK’s evolving regulatory attention on national infrastructure security. We are not naming them. Some operational specifics of this engagement remain outside what we are able to publish. What we can describe is the shape of the problem, the approach our embedded engineer took, and what a production AI compliance system looks like when it is built into the fabric of how a large, regulated operator actually works — not bolted onto it afterwards.
The trigger for this project was not a failure. It was the recognition that the existing approach to security compliance — manual tracking, periodic review, and governance processes that lived partly in spreadsheets and partly in institutional memory — was not going to hold as regulatory scrutiny of critical national infrastructure continued to intensify. The NIST Cybersecurity Framework had become the reference standard against which BigElec’s security posture was being assessed. The gap between what NIST requires and what the organisation could demonstrate, consistently and in real time, needed to close.
This is the kind of engagement our Frontline Deployed AI Engineer model exists for. Not a remote build handed over at the end of a sprint, but an engineer sitting inside the client’s team, working within their environment, building institutional trust alongside the system itself.
“The gap between what NIST requires and what an organisation can demonstrate, consistently and in real time, is where the risk actually lives.”
The problem
Compliance that only exists at audit time is not compliance.
The NIST CSF is not a checklist you complete once a year. It is a continuous framework — Identify, Protect, Detect, Respond, Recover — that requires an organisation to demonstrate ongoing adherence across its people, processes, and technology. For an electrical utility operating substations and other high-sensitivity physical sites, this creates a particular challenge: the compliance story has to hold at the process level, at the individual site level, and at the level of who is physically present at which location and under what governance conditions.
Substation attendance was one of the sharpest points of friction. The existing processes for logging, approving, and auditing site access were not unreasonable in design — but they were distributed across systems that did not talk to each other, relied on manual steps that were easy to skip under operational pressure, and produced records that were difficult to interrogate meaningfully after the fact. A compliance team asking “show me the access history for this site over the last six months” was, in practice, asking for a significant piece of manual work that nobody had time to do well.
Beyond attendance, the broader compliance picture suffered from the same structural problem: the information needed to assess NIST posture existed inside BigElec, but it was fragmented, inconsistently maintained, and accessible only to people with the institutional knowledge to find it. There was no surface through which a compliance lead could ask a direct question about process adherence and get a direct, grounded answer.
The approach
Grounded intelligence, not generated guesswork.
The system our embedded engineer built works entirely within BigElec’s existing Microsoft environment. It is grounded in data held inside Dataverse — the operational source of truth for site records, attendance logs, process documentation, and compliance status — and surfaces through Microsoft Teams, where the organisation’s operational and security teams already work. There is no parallel platform to learn, no new interface to embed into a reluctant workflow. The AI is available where people already are.
The grounding is the critical design decision. The system does not generate answers from general knowledge about NIST or security practice. It answers from BigElec’s own records — and when the records do not support a confident answer, it says so. This distinction matters enormously in a regulated environment. A compliance system that produces plausible-sounding responses not anchored in the actual operational data is not a compliance system. It is a liability.
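The refusal behaviour described above can be sketched as a simple guard: compose an answer only from records that clearly support the question, and decline explicitly when they do not. This is an illustrative sketch, not BigElec's implementation; the `Record` structure, the relevance scores, and the `MIN_RELEVANCE` threshold are all assumptions introduced for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """A single operational record retrieved from the source of truth."""
    record_id: str
    content: str
    relevance: float  # similarity score from the retrieval step, 0..1 (hypothetical)

@dataclass
class GroundedAnswer:
    text: str
    citations: list = field(default_factory=list)  # record IDs the answer rests on

# Hypothetical threshold: below this, the system declines rather than guesses.
MIN_RELEVANCE = 0.75

def answer_from_records(question: str, retrieved: list) -> GroundedAnswer:
    """Answer only from records that clearly support the question;
    otherwise say so explicitly instead of generating a plausible guess."""
    supporting = [r for r in retrieved if r.relevance >= MIN_RELEVANCE]
    if not supporting:
        return GroundedAnswer(
            text="The current records do not support a confident answer to this question.",
            citations=[],
        )
    # In a real system a language model would compose the answer from the
    # supporting records; here we simply join their content.
    summary = " ".join(r.content for r in supporting)
    return GroundedAnswer(text=summary, citations=[r.record_id for r in supporting])
```

The point of the guard is that the citation list is never empty when an answer is given, so every response can be traced back to the records it rests on and audited later.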
What the system does in practice covers several distinct functions:
- Compliance interrogation — team members can ask direct questions about process adherence, NIST control status, and site governance through a natural language interface in Teams. The system returns answers grounded in current Dataverse records, with references to the underlying data so the response can be verified and audited
- Concern flagging and marking — where the system identifies a gap between required and actual compliance state — an overdue review, an incomplete attendance record, a process step not completed within the required window — it marks the concern formally in Dataverse and surfaces it to the appropriate owner
- Automated email enforcement — identified concerns trigger structured emails to the relevant responsible parties, with context, the specific compliance requirement involved, and a clear call to action. These are not generic alerts. They are contextual, grounded communications tied to a specific record
- Task and prompt management — the system creates and tracks remediation tasks inside the Microsoft ecosystem, maintaining visibility of open compliance items and their status without requiring a separate project management tool
- Process guidance on demand — team members can ask the system how a specific process should be followed, what the NIST requirement behind a given control is, or what the correct procedure is for a particular site attendance scenario. The system draws on BigElec’s own documented procedures, not external interpretations
The attendance governance piece sits at the intersection of all these functions. The system monitors site access records continuously, flags anomalies against the expected governance pattern, initiates the email and task workflows when intervention is needed, and maintains an auditable record of every action taken — which becomes, in effect, a continuous compliance log rather than a retrospective reconstruction.
Framework
NIST CSF, end to end
The system maps operational data against all five NIST functions — Identify, Protect, Detect, Respond, Recover — surfacing gaps in real time rather than at audit intervals.
Grounding
Answers from the data, not the model
All responses are anchored in records held inside Dataverse. The system does not speculate beyond what the operational data supports.
Integration
Microsoft stack, no new surface
Delivered through Teams against Dataverse. No new platform, no parallel system, no change management overhead for adoption.
Audit
Every action logged
Flagged concerns, sent emails, created tasks, and process prompts all write back to Dataverse, creating a continuous compliance record rather than a periodic snapshot.
Safety and governance
Built for a regulated environment from the start.
Working inside critical national infrastructure imposes constraints that are not present in most commercial AI projects, and they are constraints we welcomed rather than worked around. The system operates entirely within BigElec’s existing Azure tenancy. No data leaves the organisation’s controlled environment. No external model is trained on operational records. The AI functions as a reasoning and orchestration layer over data BigElec already holds and already governs — it does not introduce new data flows that would require new governance.
Human oversight is structural, not incidental. The system flags concerns and initiates workflows. It does not take autonomous action on access control, site governance, or compliance status. Every email it sends, every task it creates, every concern it marks can be reviewed, overridden, and traced to the specific data that triggered it. In an environment where the consequences of a governance failure are significant, this is not a limitation of the system’s capability. It is the correct design.
The sensitivity of the operational context means we are not publishing certain specifics of the engagement — the precise sites involved, the detailed configuration of the attendance governance rules, or the specific NIST controls where the gap analysis was sharpest. What we can say is that the system is in production, it is processing real compliance data, and the organisations responsible for oversight of UK infrastructure security are aware of the approach.
“An AI that produces plausible answers not anchored in the actual operational data is not a compliance tool. It is a liability.”
What made it work
Embedded from day one.
The decision to place our engineer directly inside BigElec — rather than building at arm’s length and handing over a finished system — was what made the technical work possible. Security-sensitive environments do not open up to external teams working from the outside. They open to people who are present, who understand the operational context, and whose work can be observed and interrogated as it develops. The Frontline Deployed AI Engineer model exists precisely because some problems require that kind of proximity to solve well.
The decision to ground the system entirely in Dataverse — rather than allowing it to reason from general NIST guidance or broader external knowledge — was the most consequential single design choice. It meant the system could only ever say things that were true of this organisation, in this state, right now. In a compliance context, that constraint is not a restriction. It is the entire value proposition.
Delivering through Teams rather than a bespoke interface removed the adoption problem almost entirely. The compliance team did not need to be trained on a new tool. They needed to learn that they could ask a question in a Teams channel and get a grounded, auditable answer. The enforcement workflows — the emails, the task creation, the concern marking — mattered more than anticipated. Compliance gaps that previously required a coordinator to notice, investigate, document, and chase are now surfaced, logged, and actioned before they reach a state where they become audit findings.
The UK’s attention to critical infrastructure security is not going to diminish. The regulatory environment around operators like BigElec will continue to tighten. What this engagement demonstrated is that the technology exists to meet that environment not with more manual process, but with AI that works from grounded data, operates within existing governance structures, and makes the compliance story demonstrable in real time rather than reconstructed at review.
Work with us
Got a similar problem in your organisation?
If you are working on AI compliance, governance, or security enforcement in a regulated environment — energy, utilities, public sector — we have built in those conditions and understand what production-grade actually means in that context. Let’s talk.
