Peak Privacy

Tailor-made Swiss server solution for LLM inference

We supported Peak Privacy in designing and building a specialized server infrastructure for LLM inference. The process was marked by close collaboration, intensive idea exchange, and joint problem-solving, combining the specific expertise of both partners to align the hardware precisely with the software requirements.

Client
Peak Privacy
Industry
Artificial Intelligence
Website
Date
April 2025

The Challenge

Peak Privacy faced the challenge of finding a server infrastructure suited to their demanding LLM (Large Language Model) inference workloads. The Swiss market offered no adequate solutions, and offerings from large international cloud providers such as Azure or Google were either disproportionately expensive for the performance delivered or simply not tailored to Peak Privacy’s specific needs. A local, high-performance, and cost-efficient option that could meet the high demands of LLM inference was missing, yet day-to-day operations required a reliable, high-performing infrastructure.

The Solution

In search of a suitable partner, Peak Privacy came across our company, and together we initiated a collaborative development process. Through an active exchange of ideas, Peak Privacy’s specific requirements were thoroughly analyzed. The solution combined Peak Privacy’s deep expertise in LLM inference with our proven server and infrastructure know-how, enabling the design and construction of a server tailored precisely to Peak Privacy’s workloads. The collaboration was partnership-driven and goal-oriented, with us acting as a flexible and capable partner throughout the entire process.

The Result

The result is a custom-built server solution that fully meets Peak Privacy’s requirements. Its performance clearly surpasses that of the previously evaluated cloud offerings, at significantly lower cost. Peak Privacy now benefits from an optimized local infrastructure designed specifically for LLM inference, enabling faster, more efficient, and more cost-effective daily operations. Satisfaction with both the solution and the partnership is very high.

«Working with Nine was a real stroke of luck. We finally found a partner in Switzerland who not only understands our specific needs for LLM inference but also has the technical expertise to build a perfectly tailored, high-performance, and cost-attractive server solution. It was a game changer for us.»

Fabio Duò
Founder