Our solution
AI factories are purpose-built infrastructures that combine GPU clusters, high-speed interconnects and CPU orchestration layers to train and serve next-generation AI models.
The CPU plays a critical role: coordinating accelerator utilization, managing data pipelines, and handling the control plane logic that keeps the entire system running efficiently.
Typical workloads
- Large language model pre-training
- High-throughput AI inference
- Data ingestion and preprocessing pipelines
- Multimodal model training
- Reinforcement learning from human feedback


Why SiPearl
→ High-bandwidth CPU architecture built to keep accelerators fully fed and utilised.
→ Energy-efficient design that lowers the cost per AI training step at scale.
→ European technology enabling sovereign AI capabilities outside US and Chinese supply chains.
Technical features of Rhea1
Discover all the details of our CPU
Seine Reference Server, a modular solution dedicated to Rhea1
Flexible and multi-purpose, the Seine Reference Server can be used for validation and testing, as a reference design, for software porting, and for demonstrations and customer evaluation.
It is available in two configurations:
- a single Rhea1 processor connected to up to two GPUs in a single chassis,
- or a dual-socket Rhea1 system.
Each configuration supports up to two SATA (Serial Advanced Technology Attachment) disks and up to two PCIe NICs.
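The two chassis options and their peripheral limits can be sketched as a small configuration check (a minimal illustration only; class and field names are ours, not SiPearl's, and the limits are taken from the description above):

```python
from dataclasses import dataclass

@dataclass
class SeineConfig:
    """Illustrative model of a Seine Reference Server build (names hypothetical)."""
    sockets: int      # number of Rhea1 processors: 1 or 2
    gpus: int         # GPUs attached (single-socket chassis: up to 2)
    sata_disks: int   # up to 2 SATA disks per configuration
    pcie_nics: int    # up to 2 PCIe NICs per configuration

    def is_valid(self) -> bool:
        if self.sockets not in (1, 2):
            return False
        # Only the single-socket configuration is described with GPU attach.
        if self.sockets == 1 and self.gpus > 2:
            return False
        return self.sata_disks <= 2 and self.pcie_nics <= 2

single_socket = SeineConfig(sockets=1, gpus=2, sata_disks=2, pcie_nics=2)
dual_socket = SeineConfig(sockets=2, gpus=0, sata_disks=1, pcie_nics=1)
print(single_socket.is_valid(), dual_socket.is_valid())  # → True True
```

This simply encodes the stated limits; the real server's supported topologies may be constrained in ways the page does not spell out.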
