Some mornings in Normandy, when the fog sits low over the fields and the horizon disappears, I’m reminded of a simple truth: you don’t move faster by pretending there are no constraints. You move faster by knowing exactly where the path is, and by trusting it enough to walk with pace.
This is, to me, the healthiest way to look at the current European regulatory wave in technology. Many organisations still treat it as an incoming storm: something to endure, shown in red on a risk slide, then pushed into the comforting hands of Legal with a silent hope that it will not affect delivery too much. It’s an understandable reflex. It’s also a losing strategy.
Because the reality is not “more regulation”. The reality is “more demonstrable trust”. It’s an opportunity.
The EU AI Act is a good example of that shift. It follows a staged timeline with obligations that start earlier than the final applicability date, which forces organisations to stop thinking of compliance as a one-off event and start thinking of it as a capability they must build and run. NIS2 pushes in the same direction, with a strong emphasis on cybersecurity risk management and incident reporting across a much wider set of sectors than before, and a clear expectation that Member States transpose it into national law on time.
DORA, while focused on the financial sector, has become a reference point for operational resilience thinking. It’s not only about “security”; it’s about proving that your organisation can withstand disruption, recover, and manage third-party dependencies with discipline. It’s also a useful preview of where expectations are heading well beyond finance.
Put those three together and you can see the pattern: Europe is not asking for better intentions. It is asking for better operating models.

The usual consequence is predictable. Delivery slows down. Teams hesitate. They wait for guidance. They negotiate every new initiative in a series of meetings that gradually turn into a tax on momentum. Compliance becomes something you “apply” late, when the project is already built, timelines are already committed, and everyone is tense.
That’s the part we should stop accepting as normal.
The alternative is what I call “Regulation as Code”. Not because you can literally turn laws into software and be done with it, but because you can translate regulatory obligations into reusable controls, embed those controls into your delivery patterns, and generate evidence continuously instead of reconstructing it painfully when an audit, a crisis, or a board question arrives. Once you do that, something almost counterintuitive happens. Constraints become acceleration.
Why the “wait for Legal” approach is slower than it looks
In many organisations, Legal and Compliance act as the final gate. A project arrives at the end with a half-finished risk narrative, a collection of documents, and a request for approval under time pressure. The legal team does what it can, but the structure is wrong from the start: the real work becomes exception management, not system design.
This creates two behaviours that are equally harmful. Either teams freeze and delay decisions, or they build first and ask later, which increases the probability of rework, rejection, and silent workarounds. Nobody wins, and the organisation learns that doing things properly is slow.
What the organisation is missing is not goodwill. It is a paved road. When teams know what “acceptable” looks like from day one, and when the platform makes the acceptable path the easiest one to follow, you replace negotiation with execution. You also change the role of Legal and Compliance: they stop being pulled into every individual launch and instead invest their time in defining and maintaining the standards that make launches repeatable. That’s a far more efficient use of scarce expertise.
Build controls once, reuse them everywhere
If you want this to work, you need a shared control backbone that spans AI, security, resilience, and third-party risk. The temptation is to create separate programmes: one for the AI Act, one for NIS2, one for resilience, one for vendors. It looks organised on paper, but it often leads to duplicated work, inconsistent requirements, and “compliance fatigue” for delivery teams who receive different checklists depending on which committee is asking.
A better approach is to build a single catalogue of controls mapped to obligations, with clear ownership and clear implementation patterns. Not as an abstract document, but as a living system: the controls exist, teams know how to adopt them, and the organisation can prove they are applied.
In practical terms, this means that when a team wants to deploy a GenAI assistant, or an optimisation model, or a workflow agent, the same foundational elements are already defined: what kind of data can be used and under what conditions, what oversight is required, how logging and traceability are handled, how access is managed, how incidents are escalated, how third parties are assessed, and how the system is monitored over time. It does not remove judgment, but it removes improvisation.
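To make the idea concrete, here is a minimal sketch of what a control catalogue can look like as a living system rather than a document. The control IDs, obligation labels, owners, and evidence artefacts below are illustrative assumptions, not a real catalogue; the point is that controls are data with stable identifiers, mapped to obligations, and queryable by delivery teams.

```python
from dataclasses import dataclass

@dataclass
class Control:
    """One reusable control in the shared catalogue."""
    control_id: str          # stable identifier, reused across projects
    description: str
    obligations: list[str]   # regulatory expectations this control maps to
    owner: str               # accountable function, not an individual
    evidence: list[str]      # artefacts the control produces by default

# Illustrative entries only; a real catalogue would be maintained
# by Legal/Compliance and Platform teams together.
CATALOGUE = {
    "LOG-01": Control(
        control_id="LOG-01",
        description="All model inputs and outputs are logged with retention",
        obligations=["AI Act traceability", "NIS2 incident reporting"],
        owner="Platform Engineering",
        evidence=["query log export", "retention policy version"],
    ),
    "ACC-01": Control(
        control_id="ACC-01",
        description="Access to restricted data requires an approved role",
        obligations=["AI Act data governance", "DORA ICT risk"],
        owner="Identity & Access Management",
        evidence=["access approval record"],
    ),
}

def controls_for(obligation: str) -> list[str]:
    """Which catalogue controls cover a given obligation?"""
    return [c.control_id for c in CATALOGUE.values()
            if obligation in c.obligations]
```

Because the mapping is explicit, a delivery team (or a committee) can ask `controls_for("AI Act traceability")` and get the same answer every time, instead of a different checklist per forum.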
Automate evidence, because evidence is where time goes to die
Most organisations still produce compliance evidence in the most inefficient way possible: late, manually, with screenshots, spreadsheets, and long email chains trying to reconstruct decisions that were made months earlier. It is expensive, it is fragile, and it reinforces the belief that compliance is an administrative burden rather than a design choice.
A mature organisation flips the model. Evidence becomes a by-product of normal operations. Access approvals are traceable because identity and permissions are managed properly. Model versions are recorded because the deployment pipeline is structured. Logs exist and can be queried because observability is built in. Third-party assessments are versioned because procurement and risk share a common framework. Incident response is demonstrable because playbooks are exercised and recorded, not invented during a crisis.
This is where “Regulation as Code” becomes tangible. You’re not asking people to remember what to do. You’re building systems that do it by default.
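One way to picture “by default” is an evidence record emitted by the pipeline itself at the moment an action happens, rather than reconstructed later. The sketch below is a deliberately simplified assumption of how such a record could be shaped; the control ID, field names, and the idea of an append-only store are illustrative, not a prescribed schema.

```python
import datetime
import hashlib
import json

def emit_evidence(control_id: str, action: str, payload: dict) -> dict:
    """Build an evidence record as a by-product of a pipeline step.

    In a real platform this would be appended to a tamper-evident
    evidence store; here we just return the record.
    """
    record = {
        "control_id": control_id,
        "action": action,
        "payload": payload,
        "timestamp": datetime.datetime.now(
            datetime.timezone.utc
        ).isoformat(),
    }
    # A content digest makes later tampering with the payload detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return record

# The deployment step records the model version as it deploys,
# so the audit trail exists without anyone filling in a form.
evt = emit_evidence(
    "LOG-01",
    "model_deployed",
    {"model": "assistant-v3", "pipeline_run": "build-1842"},
)
```

The mechanism matters less than the principle: the person who deploys never writes the evidence; the system that deploys does.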
The effect on speed is very real. When evidence is automated, delivery teams stop losing weeks at the end of projects. When evidence is reliable, governance stops relying on debates and starts relying on facts. When evidence is continuous, audits become less traumatic and more routine. This is one of those rare situations where doing things properly is actually cheaper.
A simple example: the GenAI assistant everyone wants “next month”
It starts the same way almost every time. A business unit wants an internal assistant that can read documents, answer questions, draft responses, and reduce time wasted searching and rewriting. The pilot works. Enthusiasm grows. People ask for scale.
If you approach that with the traditional model, you’ll recognise the next phase: Security worries about data leakage, Legal worries about compliance, HR worries about employee data, Risk worries about accountability. The delivery team is stuck in back-and-forth, and the project becomes less about value and more about fear management.
With a “Regulation as Code” approach, you don’t start with fear. You start with patterns.
The organisation already has a standard way to deploy this category of capability. It already knows which data classes are allowed, how access is enforced at retrieval time, how outputs are logged, what oversight exists for sensitive cases, what monitoring is required, and what escalation path is used when the system behaves unexpectedly. The team ships on a paved road, and exceptions are handled explicitly, not casually. This is how you create an execution advantage. Not by ignoring constraints, but by mastering them.
The strategic point
There is a belief that regulation slows innovation. In my experience, what truly slows innovation is uncertainty.
When teams are unsure what is allowed, they hesitate, or they rush into shortcuts. Hesitation delays delivery. Shortcuts trigger incidents. Incidents create backlash. Backlash increases controls in the worst possible way, as reactive layers and blanket restrictions. The cycle is familiar, and it is exhausting.

A clear, reusable compliance backbone breaks that cycle. It gives teams confidence. It gives leadership visibility. It gives the organisation a way to scale across countries with consistency, even as local implementations and interpretations evolve.
This is why I don’t see the EU constraint landscape as an excuse to slow down. I see it as a forcing function to build better systems, with stronger trust and less noise. In low visibility, a clear path is not a limitation. It’s an advantage. And if you build it well, you don’t just stay compliant. You ship faster than competitors who are still waiting for permission at the end.
What does “Regulation as Code” mean in practice?
It means translating regulatory requirements into reusable controls and delivery patterns, then embedding those controls into the platform and the build pipeline. Instead of treating compliance as a last-minute review, you make it a default behaviour of how systems are designed, deployed, and operated.
Why can EU regulation become an execution advantage rather than a brake?
Because competitors who “wait for Legal” end up negotiating every project from scratch and discovering issues late, when rework is expensive. If you industrialise controls once and reuse them, teams ship faster with fewer surprises, and compliance becomes predictable rather than a source of delays.
How do the EU AI Act, NIS2, and DORA connect for a CIO?
They converge on the same expectations: governance, risk management, traceability, incident readiness, and third-party discipline. Even when the scope differs, the operating model is similar: you must be able to show that controls exist, are applied, and can be evidenced under scrutiny.
What is the biggest mistake enterprises make with the EU AI Act?
Treating it like a document exercise or a single compliance milestone. The AI Act pushes organisations toward continuous capability: classification of AI use cases, oversight, documentation, monitoring, and accountability that still works once the system is live and evolving.
What does NIS2 change compared to “traditional cybersecurity”?
It raises expectations that cybersecurity is not only a technical function but an organisational capability with defined risk measures and incident reporting discipline. The practical impact is that boards and executives are expected to treat cyber resilience as operational governance, not as a side conversation for specialists.
Why does “DORA thinking” matter outside the financial sector?
Because it crystallises modern resilience expectations: operational continuity, robust ICT risk management, serious third-party oversight, and evidence that those capabilities are real. Even outside finance, the direction of travel is clear: resilience is becoming demonstrable, not declarative.
What does “build controls once, reuse everywhere” look like?
A shared control catalogue mapped to obligations, with owners, implementation patterns, and standard evidence outputs. Delivery teams don’t reinvent “what good looks like” for each project; they use pre-approved patterns (identity, logging, access, monitoring, incident playbooks, vendor checks) that are consistent across teams and countries.
Why is “evidence” the real accelerator for compliance?
Because most organisations still assemble evidence late and manually, which slows launches and creates stress. If evidence is generated continuously by normal operations (logs, approvals, versioning, monitoring, access reviews), audits and approvals become routine, and delivery stops being held hostage by last-minute compliance archaeology.
How do you automate compliance evidence without adding bureaucracy?
By designing platforms that produce evidence by default: identity and access management that records approvals, CI/CD pipelines that version changes, observability that retains logs, model registries that store evaluations, and third-party workflows that track assessments. The “process” becomes a property of the system, not a burden on individuals.
How do you apply Regulation as Code to GenAI and agents specifically?
You standardise the safe path: data classification and permitted sources, identity-aware retrieval and access control, logging and traceability of prompts/context/actions, human oversight for sensitive outputs, monitoring for drift, and a clear incident escalation path. Teams can still innovate, but they do so inside guardrails that are already defensible.
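The identity-aware retrieval step in that safe path can be sketched in a few lines. The data classes, role names, and document shape below are assumptions for illustration; the pattern is that a caller’s roles filter documents before they ever enter the prompt, and denied retrievals leave a trace.

```python
# Illustrative mapping of data classes to the roles allowed to read them.
DATA_CLASS_ROLES = {
    "public": {"employee", "contractor"},
    "internal": {"employee"},
    "restricted": {"employee_cleared"},
}

def filter_documents(docs: list[dict], caller_roles: set[str]) -> list[dict]:
    """Drop documents the caller may not see before retrieval augments
    the prompt, and leave a trace for every denial."""
    allowed = []
    for doc in docs:
        permitted = DATA_CLASS_ROLES.get(doc["data_class"], set())
        if caller_roles & permitted:
            allowed.append(doc)
        else:
            # In a real system this would go to the audit log.
            print(f"denied: {doc['id']} ({doc['data_class']})")
    return allowed

docs = [
    {"id": "d1", "data_class": "public"},
    {"id": "d2", "data_class": "restricted"},
]
visible = filter_documents(docs, {"employee"})
```

An employee without clearance sees only `d1`; the restricted document never reaches the model, and the denial is recorded rather than silently swallowed.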
What governance model works best for scaling compliant AI across countries?
A stable core with local adaptations: central control patterns and shared platforms, combined with local compliance interpretation where required. You avoid rebuilding everything per country while still respecting jurisdictional rules and organisational realities.
What are the first concrete steps a CIO can take next quarter?
Start with a control backbone: define a reusable set of controls for data, AI, security, resilience, and third parties; map them to the regulations; implement them as platform defaults; and pilot two or three use cases through the “paved road” with automated evidence. If you can ship repeatably with less negotiation, you’ve already created an advantage.