AI Factories

Orchestrating large-scale AI infrastructure

Our solution

AI factories are purpose-built infrastructures that combine GPU clusters, high-speed interconnects and CPU orchestration layers to train and serve next-generation AI models.

The CPU plays a critical role: coordinating accelerator utilization, managing data pipelines, and handling the control plane logic that keeps the entire system running efficiently.
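The idea of the CPU "keeping accelerators fed" can be illustrated with a minimal host-side pipeline sketch: producer threads preprocess batches into a bounded queue ahead of time, so the consumer step (standing in for a GPU kernel launch) never stalls on data. All names below are illustrative, not SiPearl or Rhea1 APIs.

```python
import queue
import threading

# Hypothetical sketch of a CPU-side data pipeline feeding an accelerator.
# A bounded queue decouples preprocessing from compute and applies
# back-pressure to the producer when the consumer falls behind.

def preprocess(sample):
    # Stand-in for CPU-side work: decode, normalise, batch.
    return sample * 2

def producer(samples, out_q):
    for s in samples:
        out_q.put(preprocess(s))
    out_q.put(None)  # sentinel: no more batches

def training_loop(in_q):
    results = []
    while (batch := in_q.get()) is not None:
        # In a real AI factory this would be a GPU kernel launch.
        results.append(batch + 1)
    return results

q = queue.Queue(maxsize=4)  # bounded: limits how far the CPU runs ahead
t = threading.Thread(target=producer, args=(range(5), q))
t.start()
out = training_loop(q)
t.join()
print(out)  # → [1, 3, 5, 7, 9]
```

Real orchestration layers add multiple producer workers, pinned memory and asynchronous transfers, but the control-plane pattern is the same: the CPU stages work so the accelerators never idle.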

Typical workloads

  • Large language model pre-training
  • High-throughput AI inference
  • Data ingestion and preprocessing pipelines
  • Multimodal model training
  • Reinforcement learning from human feedback

Why SiPearl

→ High-bandwidth CPU architecture built to keep accelerators fully fed and utilised.
→ Energy-efficient design that lowers the cost per AI training step at scale.
→ European technology enabling sovereign AI capabilities outside US and Chinese supply chains.

Technical features of Rhea1

  • 6 nm etching process
  • 80 Arm Neoverse V1 cores
  • 4 on-package HBM stacks
  • 4 DDR5 interfaces

Discover all the details of our CPU

Seine Reference Server, a modular solution dedicated to Rhea1

Flexible and versatile, the Seine Reference Server can be used for validation and testing, as a reference design, for software porting, and for demonstrations and customer trials.

It is available in two configurations:

  • a single Rhea1 processor connected to up to two GPUs in one chassis,
  • or a dual-socket Rhea1.

Each configuration supports up to two SATA (Serial Advanced Technology Attachment) disks and up to two PCIe NICs.
