2. Hyperscale advantages

In my previous posts I described sovereignty risks as exactly that: risks, and risks should be evaluated as such. Evaluation in turn leads to acceptance or mitigation. Why would we accept a risk, what benefits outweigh it, and what options do we have? That’s today’s topic.

In most discussions, the goal is framed as eliminating sovereignty risk. Move local, move sovereign, remove exposure. But that assumes something important: that elimination is actually possible, and that doing so comes without trade-offs.

In today’s technology landscape (whether we like it or not) global platforms and ecosystems are deeply interconnected, and often led by U.S.-based innovation.

At the hardware level, most modern compute platforms rely on architectures developed by companies such as Intel and AMD. Even when infrastructure is procured through European or “sovereign” hosters, the underlying stack typically involves globally sourced components, technologies, and dependencies.

This is simply how the ecosystem has evolved, driven by economics (more on that in the next post).

Because regardless of where infrastructure is hosted, or how “local” a solution is positioned, it often remains indirectly exposed to international supply chains, trade policies, export controls, and sanctions regimes. Sovereignty, in that sense, is not just about location or ownership; it’s also about dependency.

It’s entirely possible to build a fully localized environment: local hosting, open-source software, even custom-built applications. And in certain scenarios, that can align well with sovereignty objectives. However, that choice introduces its own set of trade-offs such as:

  • Slower access to security updates and innovation
  • Increased responsibility for maintaining and securing the stack
  • Greater exposure to the influence of supply chain constraints and component availability

More importantly: none of these are hypothetical risks; they are operational realities that need to be factored into the decision.

So the question shifts, again, from “How do we eliminate sovereignty risk?”
to “Which dependencies are we willing to accept and what do we gain, or lose, in return?”

Because in practice, choosing sovereignty in one dimension often means accepting risk in another.

Hyperscale versus local

When discussing sovereignty trade-offs, local or “sovereign” hosters are often positioned as a straightforward alternative. The narrative is familiar: comparable security, strong data protection, and sometimes lower cost. And in specific scenarios, that may hold. But this comparison assumes that hyperscale and local or partner hosting operate on the same principles, and they surely don’t.

So let’s look at why hyperscale requires a fundamentally different mindset.

Security within the platform

First of all, it is not just about running infrastructure at a larger scale. It is about how the entire ecosystem is designed, built, and operated. In many environments, security is something that gets added: layered on top of infrastructure through controls, tools, and processes. Think: we will add security services, we will add firewalls, we will monitor the software with a SIEM or SOAR.

In hyperscale environments, the model is different. Security is not an add-on; it is a property of the system itself, as platforms are designed with the assumption that threats are constant, failures will occur, and attack surfaces need to be minimized by design. There is a reason AWS’s Nitro System is so popular, and also why it is not replicated inside OpenStack (often used by European hosters). There is a reason Confidential Computing, integrated HSMs, and advanced data governance (certainly in AI) are more mature at hyperscalers. They have the resources, the knowledge, and the absolute need to get security right.

Securing the entire platform from within leads to deeply embedded security models, where identity, isolation, monitoring, and response are integrated from the ground up. That sounds ideal, and it is, but in reality it is extremely hard to operate unless you have hyperscale-level resources. Without that scale, complexity quickly outpaces what smaller environments can manage securely and efficiently.

Supply chains

For most environments, infrastructure is sourced: hardware is procured, software is integrated, and operations are layered on top. Some larger hosters build their own version of the software, tweak it, and possibly innovate on it. There are, however (as far as I know), no hosters that design, integrate, and adjust their software specifically to push that hardware to its maximum performance and security standards.

In hyperscale, much of this is designed, not just assembled. Hardware is built and optimized for its own profile (storage, compute, AI, etc.), which influences chip designs, defines datacenter architectures, and continuously evolves components based on operational feedback at global scale. This has a few consequences: it drives efficiency and cost optimization at levels that are difficult to replicate in smaller environments, but more importantly it enables tight integration between hardware, software, and operations, improving performance, resilience, and security continuously and at broad scale.

While there are still dependencies, there are fewer of them, they are more deliberately controlled, and they are better understood. This reduces variability, shortens response cycles, and allows for faster, coordinated improvements across the full stack. In contrast, more fragmented environments rely on multiple vendors and layers of integration, increasing complexity, operational overhead, and the potential for gaps in performance, resilience, and security.

Integration

The biggest “problem” of all: proprietary software. Vendor lock-in, closed ecosystems, at the full willpower of a large company, forcing you to accept what you do not want... (okay, I exaggerate).

It’s often presented as the core downside of hyperscale (and not without reason). Relying on a single platform creates dependency. It limits flexibility. Moving away can be complex, sometimes costly, as you are “stuck” in an ecosystem. But stopping the conversation there misses an important part of the equation. Because lock-in doesn’t exist in isolation; it exists in exchange for value.

A simple way to think about it: most of us are perfectly comfortable getting on a plane.

We don’t insist on building it ourselves. We don’t demand control over every subsystem. We trust that the aircraft has been designed with safety as a core principle, that it’s continuously maintained, that the people operating it are trained, monitored, and that it is supported by layered systems and processes to make it a smooth ride.

That decision comes with a loss of control, but also with confidence that the system is engineered at a level we couldn’t realistically achieve ourselves: we rely on experts who continuously improve the system, receive the right training, and operate under established oversight. And we get to focus on the outcome (getting where we need to go) rather than managing every component along the way.

Now compare that to choosing between hyperscale and local or “sovereign hosting”.

Both can get you where you need to go. But the way the system is designed (and where the value sits) is fundamentally different.

Hyperscale is the jetliner: massively engineered, continuously improved, operated at a scale where security, resilience, and operational excellence are enforced by design, not aspiration. Local hosting is the Cessna: you retain more direct control, you understand more of what’s going on, but you’re also carrying much more of the operational burden, and the ceiling of what you can achieve is materially lower.

What’s often forgotten is this: both a jetliner and a small plane operate in the exact same hostile environment. They fly through the same air, face the same weather, and are equally unforgiving if critical systems fail. When things go wrong, both can kill you. The difference is not the risk itself; it’s how systematically that risk is engineered, mitigated, and managed. And for the critics: yes, the jetliner operates on a global scale, is subject to foreign laws, and you are under the authority of the pilot.

But that level of safety does not come from simplicity, it comes from absorbing and orchestrating enormous amounts of complexity: integrated control systems, continuous telemetry, standardized procedures, trained operators, and tightly coupled processes that are designed to work as a single system. As a passenger, we do not see it, but it works in the background. A small plane, by contrast, operates with far looser integration. Systems are more fragmented, dependencies are more visible, and much of the coordination is pushed back onto the pilot.

And that is exactly the same trade-off you see in cloud platforms: not in the risk itself, but in where the complexity lives.

In highly integrated platforms, much of the complexity is already solved within the ecosystem itself. Identity is not something you have to stitch together across components, it is unified and consistently enforced. Security policies don’t need to be replicated and translated across multiple layers, they apply coherently across the environment. Services are designed to work together, not just coexist, allowing you to build and focus on outcomes rather than integrations.

That integration is not just cosmetic. It shapes how the system behaves and the value you (can) derive from it; such as clarity in how access is managed across multiple services, consistency in how security is enforced and monitored, predictability in how services interact and most importantly, it reduces the number of places where things can go wrong in integrations.

In more fragmented environments, these same capabilities don’t come built in; they have to be assembled and constantly held together. Identity has to be aligned across providers, policies need translation between systems, protocols must be negotiated, and more. Meanwhile, security shifts from being a property of the platform to being a function of how well you get the integrations right.

So while vendor lock-in is real, it is also the mechanism through which this level of integration provides real value (in both operational excellence and security).

Innovation

The sovereignty conversation usually frames innovation as: containers can be hosted everywhere, AI models should be open source and can run anywhere, and anyone can host basic services today. While that is true, this argument ignores the innovation curve, the investments it requires, and the value derived from it.

Local or sovereign hosters will point to their ability to build, customize, and adapt solutions quickly, often within the open-source communities, sometimes with more flexibility at the software layer.

Hyperscale platforms don’t just innovate in software. They operate across the entire technology stack: data center design, hardware components, security models, operations and supply chains. These innovations are also not independent of each other, as a matter of fact, they reinforce each other. A change in hardware design influences performance and security. Operational insights feed back into architectural decisions. Security controls are embedded into the runtime environment, not added later.

Furthermore, these companies are so large that they influence (and cooperate with) external hardware vendors such as Intel, AMD, and NVIDIA, pushing the boundaries of new technologies rather than being constrained to the layers made available to them.

Local hosters operate on top of existing hardware ecosystems, established supply chains, and externally defined platform capabilities. Their ability to innovate deeply across the stack is, by design, more limited.

Innovation is not just about speed or features. It’s about where in the system change can happen and how far that change propagates. Hyperscale platforms innovate across the full stack and at global scale, and that scale and speed are essential when defending against increasingly sophisticated cyber threats.

Local hosters innovate closer to the customer and within the layers they control. This allows them to tailor solutions to specific regulatory, operational, or industry requirements, and to provide a level of visibility and contextual alignment that hyperscale simply cannot match. They can also design architectures that explicitly limit exposure to extra-territorial jurisdiction, addressing (perceived) sovereignty risks through structure, control, and contractual boundaries.

But while this reduces certain legal and governance concerns, it does not change the nature of the external threat landscape itself. As cyber threats become more frequent and sophisticated, these environments are still exposed to the same adversaries, and must therefore invest disproportionately in detecting, responding to, and keeping pace with those threats within a more limited scope.

You can’t meaningfully claim sovereignty over a system that isn’t secure.

Conclusion

So, yes, sovereignty comes with trade-offs. I know, it’s a hard truth. They might be less visible depending on what you use, but they are very real and sit within the hosting platform itself: performance, operational excellence, security, supply chain resilience, and ultimately responsibility. Moving away from hyperscale shifts that entire burden, incrementally but consistently, back to you or your “sovereign” hoster.

And that’s really the point.

Every architectural choice you make determines where complexity lives and who is responsible for managing it.

With hyperscale, much of that complexity is absorbed by the platform. Security models are embedded. Integrations are pre-defined. Supply chains are engineered, not just consumed. Innovation is continuous and happens across the entire stack. You accept dependency, but in return, you get a system designed to operate at scale, with consistency, depth, and continuously evolving defenses.

With local or sovereign approaches, complexity becomes more distributed. More components, more integration points, more operational ownership. That can provide flexibility, control, and the ability to design architectures that limit exposure to extra-territorial jurisdiction and address perceived sovereignty risks. It also enables tailoring for mission-critical systems with very specific regulatory or operational requirements.

But those benefits only hold if the environment is secure in the first place.

As threats continue to evolve, these environments are exposed to the same adversaries, without the same level of scale to detect, analyze, and respond. The burden of keeping the system secure, integrated, and up to date sits much more directly with the operator (and their dependencies), which ultimately is another risk you have to accept, mitigate or reject.

Local hosting can mitigate sovereignty risks (to an extent), and it can be the right choice for specific, high-control scenarios; but only when the organization can sustain the required level of security.

It’s like choosing that smaller aircraft and a single pilot. If you value flexibility, direct control, and a more constrained operational environment (and you trust the systems, the setup, and the people operating it) then it can absolutely be the right choice, particularly when sovereignty risks outweigh the benefits.

But you have to be aware that you are also accepting a different operating model, where more responsibility sits closer to you, and where the margin for error is narrower in a very hostile environment.

But let’s see what it would take to build a European hyperscaler... as it’s just a matter of throwing money at it, right? Next up: Sovereign Europe.

Intro: Digital Sovereignty
Chapter 1. Sovereignty Risks
Chapter 2. Hyperscale advantages
Chapter 3. Sovereign Europe
Chapter 4. Mitigating risks
Chapter 5. Conversations
Chapter 6. Terms and Conditions
Chapter 7. Ai to aiaiai

