A cloud service provider, or CSP, is an IT company that provides on-demand, scalable computing resources like computing power, data storage, or applications over the internet.
Typically, cloud-based service models are defined as IaaS (infrastructure as a service), PaaS (platform as a service), or SaaS (software as a service).
– Google Cloud
Preamble
This article is an introduction to a project called Superphénix (SPX) where we present how we rebuilt the infrastructure of an existing cloud provider from scratch, using cloud native technology and Kubernetes.
It is part of a series of articles, detailing the main aspects of the project. If you want to know more about specific topics, check out the other articles:
- Superphénix - Part 1: Virtualization
- Superphénix - Part 2: Network
- Superphénix - Part 3: Storage
- Superphénix - Part 4: Tooling
- Superphénix - Part 5: Tests
- Superphénix - Part 6: The Console
The Challenge of Being a Small CSP
Back in 2023, a small cloud service provider in France called Agora Calycé was faced with a challenge. Their entire infrastructure relied on third-party proprietary software, and this was about to become a serious issue.
We’re talking about the usual candidates: VMware, NetApp, Veeam, Microsoft and more.
There’s nothing wrong with using software from any of these companies. They know their targets and goals, and have plenty of resources to create and maintain products of excellent quality. But these multi-billion-dollar companies are critical suppliers: if anything goes wrong with any of them, providing services to customers becomes next to impossible.
Nonetheless, small CSPs are usually extremely close to their customers, and that is why they remain competitive. They don’t have the resources of Amazon or Google, but they have the proximity.
The problem is, even though they’re smaller than any of the big CSPs you can think of, they have the exact same challenges. They need heavy automation, high reliability, fast deployments and affordable pricing.
In this particular case, the choice of relying heavily on suppliers to deliver services seemed pretty logical at the inception of the company. With few employees and the need to start offering services fast, the easiest path wasn’t to create the CSP’s infrastructure from scratch; it was to lease hardware and deploy stacks such as VMware to handle customer workloads.
Low Coupling, High Cohesion
You might be familiar with the software development adage low coupling, high cohesion. It turns out that after years of doing business with your entire infrastructure relying on your suppliers, and little to no path to an easy migration, you find yourself doing what we call high coupling, low cohesion.
And this was the problem this CSP was facing. With VMware’s price increases in 2024, the pressure was mounting. How can a CSP with 90% of its workload running on VMware keep existing when its margin is being eaten up by companies valued thousands of times more than itself? How can it survive changing licensing models and terms and conditions?
An analogy would be that Agora Calycé bought a car upfront, fully expecting to use it for as long as possible, but ended up also leasing it from the original seller. Cherry on top, the car in question is no longer supported, and they cannot get it fixed. How does that make any sense for them?
So we asked the question: how are more traditional CSPs doing it? What are the other options?
Do It Yourself
The answer may seem simple to the readers, but for a company so heavily invested in its business model, accepting such a drastic change is hard. And yet, at a turning point, they decided it was time to switch philosophies.
Until then, the philosophy was low risk, low reward: buy safe, create value on top of what you’ve bought, and end up with a low margin. A CSP like AWS or Azure doesn’t work like that. They build from scratch and create value with their own property. A CSP’s value isn’t defined by the hardware it owns, but by the software it has built on top of that hardware to extract its value.
It isn’t a race of who’s got the biggest mine, it’s a race of who’s got the best shovel. And just like during the gold rush, it isn’t the miners who are making the most money, it’s whoever is selling the equipment.
To start taking matters into their own hands, they decided to begin building their own shovels, both because it made more sense financially and because it felt like the natural thing to do. Is it thrilling to eternally resell someone else’s work? For Agora Calycé, the answer was a definite no.
And so they decided to start building their own infrastructure.
Building a Cloud Service Provider From Scratch in 2023
At the end of 2023, it was decided to start building a new infrastructure, from scratch. The project’s code name would be Superphénix (SPX for short), named after a nuclear reactor in France that ended up being shut down for political reasons, despite being ahead of its time. This is not foreshadowing, just a way to protect ourselves if anything goes wrong. A way to say “we told you, it was in the name”.
At the time, Kubernetes was a big deal for Agora Calycé. It was used to deploy apps internally, and it was also sold and managed as a service for their customers. A lot had been done to automate their provisioning, to maintain them and to deploy on top of them.
With all the work that was done on automating Kubernetes, and with the prospect of using things such as GitOps and native monitoring, it felt natural for Agora Calycé to want to base their future CSP on Kubernetes.
Think about it: the automation of Kubernetes, but ported to networks spanning datacenters. To storage systems hosting petabytes of data. To hypervisors running thousands of VMs. Sounds exciting? It sure did to us. All the benefits introduced by the cloud ecosystem and by Kubernetes, with easy monitoring, declarative provisioning and unified deployments. Right there, in a CSP. Of course, it might not seem natural at first. A big portion of a CSP’s activity relies on running VMs, and traditionally, VMs are put in direct opposition to containers. Kubernetes is a container orchestrator, not a VM orchestrator, so how is this even going to work? We’ll get to that later.
At the same time, Agora Calycé was investing more and more in Open Source software, going to conventions such as KubeCon, and thinking that maybe their future shouldn’t be entirely based on proprietary technologies.
The Cloud Native philosophy had taken hold. And that’s why we started drafting a manifesto for Agora Calycé, detailing what we thought a CSP should look like in 2024, with most of its infrastructure based on Kubernetes. An exciting project, starting from nothing, with the hope of transitioning the entire existing stack by 2026. Ambitious, but we agreed it was both necessary and doable.
The Manifesto - Detailing What a CSP Built on Kubernetes Looks Like
The manifesto is the initial document in which we dreamt up how we’d build an entire CSP from scratch using Kubernetes as its base platform. Needless to say, we’ve strayed a lot from it since its drafting. But that was to be expected.
The important point this manifesto made is that whatever we built would need to be based on open source technologies, and that we would contribute to those projects: by pushing code, giving feedback, talking to the maintainers and partnering with companies directly involved with those projects.
It also defined some important topics of discussion concerning the technical aspects:
- How do we handle VMs? Kubernetes is supposed to be a container orchestrator, not a VM orchestrator.
- How do we manage a complex software defined network (SDN) with the scalability and flexibility needed by a CSP? Kubernetes adopts a very simple network model, unfit for a CSP.
- How do we handle the storage? We need a software defined storage (SDS) system, and Kubernetes usually consumes storage as an end user.
- How do we handle billing? No automated system exists to do that for VMs running on Kubernetes.
- How do we handle disaster recovery? Can we transition all our workload from one AZ to another?
- And many more…
It doesn’t look like Kubernetes is such a good candidate after all, does it? And yet, it turns out the Cloud Native ecosystem surrounding Kubernetes offered a lot of what we needed.
And that’s why for each of these questions, several options were proposed. Let’s look into how we designed this new CSP by going over the main topics.
Using Kubernetes as a Hypervisor
A cloud provider needs to provide VMs to its clients. It is one of the most basic and essential bricks of the infrastructure, and a good portion of the products are built on this layer.
If the CSP offers Kubernetes as a Service, guess what the K8S nodes are running on? VMs.
There are quite a few requirements we need to fulfill to make K8S a good choice as a hypervisor:
- We need fast, easily provisioned VMs of all types (Windows, Linux and more)
- VMs must be live-migratable to allow for host maintenance and load balancing
- It must be possible to provide networking to these VMs using local networks and public IPs
- Block storage must be presented to the VMs with hotplug capabilities
- It must be easy to retrieve the usage of each VM for billing purposes
- Creating and importing images must be compatible with existing VMs on the legacy infrastructure (VMDKs)
- VMs must be agnostic of the CSI/CNI that provides their storage or network, to allow for easy migrations in the future
And probably a dozen more requirements. Out of the box, Kubernetes doesn’t really fulfill any of these, considering it’s made for containers.
And yet, there are a couple of reasons why it is actually reasonable to use a tool initially created to orchestrate containers to orchestrate VMs instead:
- the Kubernetes API makes automation extremely easy
- internal networking between VMs can be handled directly by the CNI of Kubernetes instead of relying on an implementation from the core network
- GitOps is native to the solution
- K8S is easy to deploy and well-understood within the CSP, which makes teaching people how to operate the new infrastructure straightforward
There’s also drawbacks:
- you’re bending a tool to make it do something that was never planned initially
- no one can help you with what you’ve built, there’s no L3 24/7 support to call when a problem arises
So what would it look like? Let’s make a naive drawing of an Availability Zone (AZ).
┌──────────────────────────────────────────────────────────────────────────────────────────────┐
│ AZ1                                                                                          │
│                                                                                              │
│ ┌───────────────────────────────────────────────────┐  ┌──────────────────────────────────┐  │
│ │ DC1                                               │  │ DC2                              │  │
│ │ K8S Network Overlay                               │  │                                  │  │
│ │        ┌────────────────┬────────────────┬────────┼──┼────────┬────────────────┐        │  │
│ │        │                │                │        │  │        │                │        │  │
│ │ ┌──────┴──────┐  ┌──────┴──────┐  ┌──────┴──────┐ │  │ ┌──────┴──────┐  ┌──────┴──────┐ │  │
│ │ │  H1 (K8S)   │  │  H2 (K8S)   │  │  H3 (K8S)   │ │  │ │  H4 (K8S)   │  │  H5 (K8S)   │ │  │
│ │ │ x x x x x x │  │ xxxxxxxxxx  │  │ xxxxxxxxxxx │ │  │ │ x x x x x   │  │ x x x x x x │ │  │
│ │ └─────────────┘  └─────────────┘  └─────────────┘ │  │ └─────────────┘  └─────────────┘ │  │
│ │                                                   │  │                                  │  │
│ └───────────────────────────────────────────────────┘  └──────────────────────────────────┘  │
│                                                                                              │
└──────────────────────────────────────────────────────────────────────────────────────────────┘
x Virtual Machine running on a hypervisor
So basically, a stretched cluster between datacenters, and instead of managing Pods, we’re managing VMs. Of course that idea didn’t come from nowhere. We didn’t invent that.
You’re probably already familiar with a famous project developed by Red Hat, called Kubevirt. It extends the Kubernetes API to handle VMs in a cluster where Pods are already running.
The initial idea of the Kubevirt maintainers was to allow traditional applications that only run on VMs to be ported next to containers.
We want to push that further, and use Kubevirt in place of a tool like VMware. This means creating Kubernetes clusters entirely dedicated to running VMs.
It turns out that Kubevirt answers a lot of our requirements. And where it doesn’t, we can write a bit of code and push it upstream.
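To make this more concrete, here is a minimal sketch of what a VM looks like when declared through the Kubevirt API. The name, namespace, sizing and PVC are purely illustrative, not taken from Superphénix:

```yaml
# Hypothetical Kubevirt VirtualMachine: 2 vCPUs, 4 GiB of RAM,
# booting from a PVC-backed disk, attached to the pod network.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: customer-vm-01          # illustrative name
  namespace: customer-a         # illustrative tenant namespace
spec:
  runStrategy: Always           # keep the VM running, restart it if it stops
  template:
    spec:
      domain:
        cpu:
          cores: 2
        memory:
          guest: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}    # NAT the VM behind the pod's IP
      networks:
        - name: default
          pod: {}               # the CNI-provided pod network
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: customer-vm-01-root   # illustrative PVC name
```

Once applied, Kubevirt’s controllers spin up a virt-launcher pod that runs the guest, which is what lets the rest of the Kubernetes machinery (scheduling, networking, GitOps) treat the VM like any other workload.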
If you want to know more about the virtualization layer in Superphénix, go read the dedicated article, which goes into greater detail.
Let’s acknowledge Kubevirt already solves a lot of our problems. We still need to tackle two major issues: network and storage. There’s a lot of integration to do between Kubevirt and the other layers of the CSP, and that’s where a lot of the complexity resides.
Using Kubernetes as a Storage Orchestrator
Until then, Agora Calycé had based their entire storage infrastructure on NetApp. NetApp is easy to deploy and maintain, but both the hardware and the software are heavily tied to the vendor.
We wanted to go a different route. Buy arbitrary storage hardware, basically Linux appliances filled with fast disks, and install Kubernetes on them. And then, use that Kubernetes to install and orchestrate a storage engine.
This approach came with a few requirements, based on how storage was being used by customers at the time.
The storage layer must fulfill the following:
- Deliver block storage
- Deliver NFS or an equivalent (capable of ReadWriteMany)
- Have the ability to be used as a CSI on Kubernetes (to provide storage to the virtualization layer or to Kubernetes clusters of the CSP)
- Be optionally usable outside of K8S
- Replicate across AZs for disaster recovery
- Stay resilient within the AZ across many disks/hosts
- Be capable of thin provisioning
- Have the notion of tenants
- Have the ability to assign quotas for tenants with QoS
- Allow for easy data recovery
- Send metrics to compute the usage and handle billing
- Be easily manageable in GitOps
Ceph & Rook
Ceph is a software defined storage (SDS) system, capable of onboarding disks across many hosts and storing data on them. Basically, if you need more capacity, just add disks and hosts. It’s dead simple when described like that, but it’s a complex beast to operate. Many CSPs run Ceph as a storage backend. It’s open source, battle tested and extremely resilient to failure. It’s also pretty performant, considering it delivers data over the network.
Now, Ceph by itself would have been a good candidate, but remember, we’re trying to orchestrate it using Kubernetes. And Ceph is your old-school set of C++ daemons, orchestrated by systemd at best. So, not a good candidate after all.
Unless? It turns out there exists a Kubernetes operator called Rook that installs, orchestrates and maintains Ceph. Now, Rook is usually used as a fully integrated solution: it uses disks on the Kubernetes hosts and gives pods access to them through PVCs using the Ceph CSI. The usual usage of Rook is “hyperconverged”.
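To illustrate what “Rook installs, orchestrates and maintains Ceph” means in practice, here is a trimmed-down sketch of a CephCluster resource. The values are illustrative and omit most production settings:

```yaml
# Hypothetical minimal CephCluster handed to the Rook operator.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # illustrative Ceph release
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                       # three monitors spread across hosts
    allowMultiplePerNode: false
  mgr:
    count: 2
  storage:
    useAllNodes: true              # onboard every node of the storage cluster
    useAllDevices: true            # and every empty disk found on them
```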
If we want to ditch the current storage solution, we need to provide storage not only to the VMs on the Kubernetes clusters, but also to other clients, such as VMware. That means giving Rook the ownership of the centralized Kubernetes storage clusters, and connecting “client clusters” to them.
This is entirely doable if your network allows it. Ceph permits it, and so does Rook.
With the Ceph CSI installed on our hypervisor clusters running Kubevirt, we can provide PVCs to our VMs as if they were disks, all of it backed by our storage clusters operated by Rook. Any feature that Ceph and Rook provide, we can translate into a feature of our virtualization stack.
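As a rough illustration of that plumbing, a block pool plus the StorageClass that exposes it through the Ceph RBD CSI might look like this. Names are made up, the secret names follow Rook’s defaults for a converged setup, and the external “client cluster” variant adds connection secrets on top:

```yaml
# Hypothetical RBD pool and the StorageClass the hypervisor clusters consume.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: vm-disks
  namespace: rook-ceph
spec:
  failureDomain: host       # spread replicas across hosts
  replicated:
    size: 3                 # three copies of every object
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-vm-block
provisioner: rook-ceph.rbd.csi.ceph.com   # Rook's RBD CSI driver name
parameters:
  clusterID: rook-ceph
  pool: vm-disks
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
allowVolumeExpansion: true
reclaimPolicy: Delete
```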
For example, Ceph provides volume replication across clusters. Rook makes it easy to set it up between two Rook clusters using CRDs. And that’s how we provide cross-AZ disaster recovery for the VMs of our customers! We migrate a bunch of custom resources from one cluster to another, and Kubevirt provisions the exact same VMs, backed by PVCs that have been synced by Rook and Ceph.
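As a sketch of the storage side of that setup (the real configuration is covered in the storage article), mirroring is enabled per pool and served by an rbd-mirror daemon; the pool name and counts below are illustrative:

```yaml
# Hypothetical sketch: enable per-image RBD mirroring on the pool
# and run the rbd-mirror daemon that ships changed blocks to the peer AZ.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: vm-disks
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  mirroring:
    enabled: true
    mode: image            # mirror individual RBD images (one per VM disk)
---
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: rbd-mirror
  namespace: rook-ceph
spec:
  count: 1                 # number of rbd-mirror daemon pods
```

Pairing the two AZs is then a matter of exchanging a bootstrap peer secret between the two Rook clusters.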
There’s also an article entirely dedicated to how we run storage in Superphénix; don’t hesitate to read it.
So we’ve got virtualization handled as well as storage. That leaves us with networking. Can we use the power of automation available with Kubernetes to operate the network of the CSP?

Some of the production storage arrays being tested before sending them to other datacenters across the country. Those are Supermicro full NVMe storage arrays, running Talos, Rook and Ceph. Note the use of shiny stickers to make our storage faster.
Using Kubernetes as an SDN Orchestrator
There are two ways to envision the network of our CSP:
- Use the capabilities offered by our physical network (Juniper, Cumulus, Cisco etc…) and extend that to our hypervisors
- Make the network entirely virtual, atop the Kubernetes cluster, using a CNI
Of course, a CSP uses expensive network appliances, capable of a lot of stuff, including automation. We could delegate the automation of the network to those appliances using YANG/Netconf/RestConf/gNMI (choose your poison). Agora Calycé needs to do that to an extent.
But with limited resources and the will to abstract ourselves as much as possible from suppliers, this isn’t the best solution. It’s an understatement to say that the networking manufacturers are far from reaching the same degree of maturity in terms of standardization and automation as Kubernetes.
So how do we provide network connectivity to our VMs? First let’s look at our requirements:
- VMs need connectivity to the Internet with elastic IPs and/or NAT
- Everything must be dual stack (IPv4/IPv6 compatible)
- VMs can be plugged in multiple networks
- We must provide L2 local networks to the VMs (we call them subnets)
- We must provide L3 segmentation using VPCs (Virtual Private Clouds)
- Everything must be light, ideally in the form of CNFs (cloud native network functions)
- That includes the NAT engines that translate IPs once traffic transits from a private network to a public one
- Firewalling must not be centralized by physical or virtual firewalls, but rather be orchestrated and decentralized by the SDN
- The network must support live migration of VMs
- We must be able to set quotas and bill usage easily
And as usual, a dozen more of these.
For once, it looks like Kubernetes actually offers a lot of what we need using existing CNIs, such as Cilium.
But we still have a problem. Existing CNIs are really only good for containers, or for the initial vision of Kubevirt which is to have containers sitting next to applications that couldn’t be containerized and were left in VMs.
We need a CSP’s SDN, and not a lot of projects offer that. For example, Cilium is probably one of the best CNIs on Kubernetes, but how do we handle live migrations with it? It doesn’t know what VMs are, and it is entirely based on the philosophy that everything is an L3 network. We can’t even give two VMs the same IP; everything must be unique. But the customers of a CSP want to assign the IPs they choose to their VMs. So a lot of what traditional CNIs are designed for doesn’t apply.
We tried handling networking by automating bridge creation and plugging the physical network of Agora Calycé directly into the VMs, but it proved impractical, insecure and just not the philosophy we were going for. Remember, we want to use the automation of Kubernetes as much as possible.
And as usual, the Cloud Native ecosystem is full of extremely interesting projects to solve our issues. One of them is Kube-OVN. It’s a CNI that claims to adopt the SDN approach, with user-defined subnets, VPCs, BGP, Internet gateways and more. Exactly what we need.

For each layer of Superphénix, we’ve made a mascot. Always helpful on PowerPoint presentations.
It is based on Open vSwitch and OVN, used in well-known platforms such as OpenStack, which makes it very reassuring. Kube-OVN also has a very distinctive feature: it is Kubevirt aware. It knows what Kubevirt’s VMs are.
That’s how Kube-OVN handles live migrations with network loss of less than half a second, and stable IPs for VMs that survive reboots.
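To give an idea of the building blocks involved (the names and addressing below are made up), a customer’s VPC and one of its L2 subnets are just two more resources in the cluster:

```yaml
# Hypothetical customer VPC and L2 subnet declared through Kube-OVN CRDs.
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: customer-a
spec:
  namespaces:
    - customer-a              # namespaces attached to this VPC
---
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: customer-a-lan
spec:
  vpc: customer-a
  cidrBlock: 192.168.10.0/24  # dual stack is a matter of listing an IPv6 CIDR as well
  gateway: 192.168.10.1
  namespaces:
    - customer-a
```

A VM is then attached to that subnet through annotations on its pod template (for example `ovn.kubernetes.io/logical_switch` and `ovn.kubernetes.io/ip_address`), which is how a customer-chosen IP can stick to a VM across reboots and live migrations.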
But a lot more had to be done to make our network work using Kubernetes as the orchestrator. For example, we decided to replace the BGP speakers (connecting Agora Calycé to the outside world) to make them run on top of, you guessed it, Kubernetes.
If you want to know more about our network adventures, head to the dedicated article on how we built a CSP’s network on top of Kubernetes.
Building PaaS and SaaS Atop the Infrastructure
Now that we’ve got VMs as a service, network as a service and storage as a service, we’ve built a true IaaS (infrastructure as a service) cloud provider.
But Agora Calycé doesn’t entirely rely on selling VMs to customers. They build on top of that to provide advanced services to their customers, such as webservers, databases, Kubernetes clusters and more.
It turns out that those are things you can easily provision on Kubernetes already, using operators built by the community. And we can run those operators alongside the VMs, thereby providing services in the same subnet/VPC where the virtual machines run.
This is extremely economical, and very easy to implement. If the customer wants a database in a private network that only their VMs can access, we can provision it with a Helm chart or an operator, just like you would on a normal Kubernetes cluster.
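For example, assuming a community operator such as CloudNativePG (named here purely as an illustration, not as the operator Agora Calycé uses), a managed PostgreSQL instance inside a customer’s tenant boils down to one more resource in Git:

```yaml
# Hypothetical managed PostgreSQL cluster provisioned by a community operator
# (CloudNativePG is used here only as an illustration).
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: customer-a-db
  namespace: customer-a         # same namespace/VPC as the customer's VMs
spec:
  instances: 3                  # one primary, two replicas
  storage:
    size: 20Gi
    storageClass: ceph-vm-block # backed by the same Ceph storage layer
```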
Deploying that way helps Agora Calycé enormously reduce time to market by leaning on prepackaged solutions from the cloud native ecosystem.
Building Kubernetes as a Service on top of Superphénix
It has become the norm for cloud service providers to have a Kubernetes as a Service offer. And in our case, it’s no exception. Customers are heavily interested in having clusters provisioned rapidly, with automated updates, and if possible pre-installed stacks of tools such as monitoring, log aggregation, CNIs and more.
Considering that the Kubernetes nodes will exist within VMs, it means the Kubernetes cluster will live inside of VMs that are themselves deployed by a Kubernetes cluster.
Luckily, Kubevirt has a few integrations that help us build Kubernetes clusters on top of clusters. For example, the Kubevirt CSI allows us to expose part of the Ceph CSI to the child cluster.
Using this technology, we can offer CephFS or block storage in the form of PVCs, within the cluster deployed on top of Kubevirt VMs.
We also need to deal with Services of type LoadBalancer that the customers might create in their clusters. Those need to be translated into services inside the parent cluster. With a little glue on Kube-OVN’s side to allow load balancing to exist within custom VPCs/subnets, we effectively created a Kubernetes as a Service product in record time.
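For reference, what the customer creates in their own (child) cluster is a plain Kubernetes Service; the glue described above is what surfaces it as a reachable load balancer inside their VPC on the parent side. A hypothetical example:

```yaml
# What a customer might declare in their child cluster.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  type: LoadBalancer   # must be materialized by the parent cluster's SDN
  selector:
    app: web
  ports:
    - name: http
      port: 80
      targetPort: 8080
```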
KaaS within Superphénix could be the subject of a dedicated article, so stay tuned for more on that subject.
Tying It Together
A lot of technologies are used to make Superphénix work. We’ve seen Kubevirt, Kube-OVN, Rook and Ceph. But there’s a lot more to make operating, updating, upgrading and installing this entire CSP possible.
For example, the entire thing is declared and deployed with GitOps. For that, we use ArgoCD, with one instance in each AZ and a central ArgoCD that deploys all the others in cascade. We carefully crafted a Git architecture with various Helm charts to make the entire solution deployable in less than 20 minutes. This is extremely handy for end-to-end tests on real hardware.
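As a hedged sketch of the pattern (the repository URL and paths are made up), the central ArgoCD instance mostly manages Applications that look like this, one per AZ:

```yaml
# Hypothetical root Application for one AZ.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: az1-root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/spx/infrastructure.git
    targetRevision: main
    path: environments/az1      # the Helm charts / manifests for this AZ
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true               # remove resources deleted from Git
      selfHeal: true            # revert manual drift
```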
We collect a lot of metrics to know the health of the system as well as how much to bill our customers. For that we use Prometheus, Grafana, Grafana Mimir and the Loki stack.
The impeccable integration of observability in Kubernetes means we inherit all the good stuff to monitor our VMs, our storage and our SDN.
As an OS base, we use Talos. It handles the installation and upgrade of both the operating system and the Kubernetes cluster declaratively, which lets us manage them with GitOps. This is what makes it possible for Superphénix to be entirely declarative, from the OS to the software stack.
We also use a lot of the Kubernetes primitives to enhance security, such as Validating Admission Policies or LimitRanges to implement validation rules and quotas.
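Here is a hedged sketch of what those two primitives can look like. The label name, namespace and limits are illustrative, and the ValidatingAdmissionPolicy still needs a ValidatingAdmissionPolicyBinding to take effect:

```yaml
# 1. A CEL-based policy refusing VMs that carry no billing label (illustrative rule).
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: vms-must-have-billing-label
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["kubevirt.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["virtualmachines"]
  validations:
    - expression: "has(object.metadata.labels) && 'billing-id' in object.metadata.labels"
      message: "every VirtualMachine must carry a billing-id label"
---
# 2. Default resource limits for a tenant namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-defaults
  namespace: customer-a
spec:
  limits:
    - type: Container
      default:
        cpu: "2"
        memory: 4Gi
      defaultRequest:
        cpu: 250m
        memory: 512Mi
```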
We call all of that glue “tooling”, and we have yet another article available here!
Testing It
A lot of testing needs to be done on your infrastructure when you’ve built it yourself. There’s no one to call when something is broken.
That means implementing systematic end-to-end testing of the entire CSP stack, with simulated load. That way, for any change that needs to be pushed to production, we first validate everything in a test environment. To run those tests, we have two types of test environments:
- physical environments, or so-called “hardware in the loop”
- virtualized environments
If we want to change something on the production stack, for example, upgrade a component such as Kubevirt, we first test everything in virtualized environments. This allows for fast-paced tests, where everything can be wiped and recreated from scratch in only a few minutes. Those environments run on VMware, as we consider we do not have the maturity yet to run Superphénix on top of itself.
A physical environment is the last environment on which a change is validated, after having been tested on a virtualized one. It confirms the change works on the hardware we run, with the performance we expect.
That environment is completely isolated, running on 2 different datacenters to simulate cross-AZ replication and disaster recovery. It has its own dedicated AS (autonomous system), its dedicated transit provider and IPv4/IPv6 ranges on the Internet. The full stack is present, with servers acting as hypervisors, and servers acting as storage arrays.
It runs on old hardware that is repurposed so as not to waste anything.

One of the two racks used in the lab; there is one in each of the two datacenters.
Tests can be orchestrated using ArgoCD, as all our deployments are defined in Git. This means changes can be designed in separate branches and merged into production using standard merge requests. The workflow is extremely efficient and reduces the risk of accidents, with thorough testing in two separate environments, tracking of who did what using “git blame”, and the possibility of rolling back rapidly using reverts.
Our next steps are introducing standard testing tools from the cloud native ecosystem, such as Chaos Monkey to simulate outages. Running a CSP on Kubernetes also means you inherit all the cool tools built to test clusters, and can point them at your storage, virtualization and network instead.
To learn more about our testing framework and our E2E workflows, we’ve got you covered with a special blog post.
Contributing to Open Source Software
Many times during the project we ran into blockers. The best way forward was to patch the projects or add new features ourselves, not to sit still and wait for someone else to hopefully do the job for us.
We’ve done it many times, from small commits adding documentation to bigger pull requests adding brand new features. Here’s a quick list of a few PRs we’ve created and got merged:
- Adding BGP support to NAT gateways in Kube-OVN
- Making a new Helm Chart for Kube-OVN
- Adding patches to Kubevirt’s clones
- Adding Volume Restore Policies and overrides for Kubevirt’s snapshot restores
And more than a few dozen other contributions.
It was very important for us to contribute in any way possible to open source software. We didn’t want to just use the great projects at our disposal; we wanted to make them even better.
We highly encourage anyone reading this article to help those projects, which truly deserve it. Contributing can also mean giving feedback; it doesn’t have to be code.
We also started working with new partners such as Clyso and 42on to validate our storage designs and to help with emergency support. This type of collaboration is indirectly beneficial to the opensource ecosystem. We do not pay for Ceph as a product, but we pay for services and value created by companies around the project. This money will help pay for developers that work on enhancing the software throughout the years.
Where Are We Now?
Our philosophy to get to production is simple: iterate fast, fail often, and practice dogfooding.
A lot of work has been done in the last year and a half. Lots of testing, lots of exploring different possibilities, and a lot of code was written and upstreamed.
But the legacy infrastructure of Agora Calycé hasn’t gone away just yet. Migrating from one to the other will be a lengthy process, and we still need to assess how stable the solution is. But we’re confident that the result is what we envisioned, if not better.
We already have parts of the infrastructure in production, such as the new storage arrays running Talos, Kubernetes, Rook and Ceph. They’re being used by the legacy infrastructure to provision NVMe block storage. And from receiving the hardware to production, we only spent 4 months designing, testing and implementing the solution. We even upstreamed bits of code to Ceph! We don’t think this would have been possible outside the cloud native ecosystem.
But there’s still a lot of work left to do. Our first milestone is slowly migrating some of the workload, such as the Kubernetes clusters from the KaaS offering.
Once this is validated and more fine tuning has been done, we’ll start speeding up migrations, with the goal of having most of the infrastructure running on Superphénix by the end of 2026.
It is ambitious, and quite frankly very hard. It cannot be done seamlessly, so there will be cutover windows while we transfer the data and convert the VM definitions from VMware to the Kubevirt CRDs. But hopefully, with automation, most of this process will require no human intervention.
We’re also working on building a web console so that Agora Calycé’s customers can access their services and operate their infrastructure by themselves. We have a dedicated article on this subject if you want to know more.
We’ve seen so many benefits from the switch Agora Calycé is making: reduced costs, reduced time to market, increased flexibility and full ownership of the solution.
For this CSP, this is an entirely new paradigm. A custom-made solution, tailored for their needs, opensource and sovereign. No licenses, no hidden costs.
Conclusion
Superphénix is far more than a technological experiment: it’s a bold reimagining of what a small CSP can be when it takes control of its own destiny. By replacing legacy, proprietary infrastructure with open source, cloud-native components like Kubernetes, KubeVirt, Rook/Ceph, and Kube-OVN, Agora Calycé is proving that a fully modern, automated, scalable cloud platform can be built without the backing of billion-dollar corporations.
There are many challenges ahead, such as reliability concerns and the steep learning curve of operating bleeding-edge, home-grown technology. But the reward is freedom. Freedom from lock-in. Freedom to innovate. Freedom to shape a cloud platform that reflects the values and needs of the people operating and using it.
Superphénix marks a turning point in the history of Agora Calycé. A change of philosophy. A philosophy that says being small is no excuse for thinking small. That with the right tools, vision, and determination, you can build infrastructure that rivals the giants, not by copying them, but by doing things differently.
This journey is just beginning, but it’s already a testament to what’s possible when you bet on yourself, and on open source.