What Does an IT Service Desk Supporter Do at Nine?

No company can run efficiently without customer support. This is no different at Nine: every day, we receive numerous exciting inquiries that are answered by our Customer Service Desk, or prepared and handed on to our engineering team for clarification. Some of these inquiries are very time-critical and technically extremely demanding, but we always want to offer our customers the best possible service – and solution.

Read our interview with Sophie Kwass, one of our IT Service Desk Supporters, to find out how she masters her complex day-to-day work at Nine while expanding her technical skills on a daily basis.

Elena: Hello Sophie, thank you for taking a moment!
Sophie: I was forced to take part by HR. (smiles) 😆

Dear Sophie, how long have you been working for Nine, and why did you choose us as your employer?
I joined Nine in February 2023, so a bit more than a year ago. I had previously worked in a support role in a web hosting company, and therefore knew what motivated me: high-quality support for exciting technology products. It’s also important to me to be able to get involved and develop myself further – that’s practically in the team DNA in my current job. Nine was therefore a no-brainer and my employer of choice.

What do you prefer? Answering tickets in writing or helping customers over the phone?
The combination of the two is what makes it work for me, because I like variety. I usually handle any technological queries in the form of tickets. On the other hand, I often get an even more authentic impression of our customers on the phone – a call is much more personal and direct. ☎️

You are the «Tech Lead» in your team – can you briefly explain what that means?
Our team has completely reorganised itself in the past 12 months. We want to offer support that is competent, helps both our customers and our engineering teams, and is fun at the same time. To this end, I develop new processes with my team and the engineers, organise knowledge-sharing sessions, and lead the sprints in which we organise our project work alongside our day-to-day business.

What does a typical day in your team look like?
There’s no such thing as a typical day – but I’ll give it a try anyway: we start our work in a way that makes sure we are ready to answer the first calls by 9AM. We start with an initial triage of any tickets received and use all communication channels to gain first insights into the current situation. We then meet for a stand-up to plan the day, and also take part in the stand-ups of the engineering teams. We then join forces to tackle the tickets, and we occasionally receive calls.

And that was only the morning?
Yes, exactly. It’s usually quieter in the afternoon, so depending on the day, at least one person will look after the phone and the inbox. The other team members focus on projects to improve the quality of support, work on our documentation, or collaborate with the engineers to address more complex queries. We also meet several times a week for various training sessions on engineering topics. At 6PM, we go home or – thanks to flexible work-from-home options – turn off our laptops. 💻

What makes your team special?
We all like to learn from and with each other, prefer to help our customers as a team and puzzle over technical issues together, with a lot of curiosity. Then again, that’s what Nine as a company is all about for me – everyone shares their knowledge and their joy of learning. We want to progress together and achieve something. We don’t really have a ‘mission impossible’, because we always go the extra mile for our customers.

Thank you very much, Sophie, for the interesting interview.
You’re very welcome. 😉

Our Customer Service Desk currently consists of four staff members who are the first point of contact for any issues, questions or other queries. Our customers value this opportunity for personal contact, the fast response times and competent advice, as we are proud and grateful to see, hear and read in the consistently positive feedback.

Check back regularly to make sure you don’t miss the next behind-the-scenes insights about our team members!

vcluster: A virtual Kubernetes Cluster

As more developers get familiar with deploying to Kubernetes, the need for better isolation between tenants becomes more important. Is it enough to just have access to a namespace in a cluster? I would argue that for most use-cases of deploying your production apps, yes. But if you want dynamic test environments or to schedule your GitLab builds on a certain set of nodes, it can quickly get quite complex to set this up safely on the same cluster as your production apps. Using a fully separate cluster for that is possible, but it’s slow to set up and usually quite expensive if you use a fully managed cluster.

The Kubernetes multi-tenancy SIG (Special Interest Group) has been prototyping a virtual cluster implementation for quite some time, but it has always remained somewhat experimental and limited. The vcluster project attempts to address these shortcomings: it implements a similar architecture but with a rich set of features, even in its earliest releases, and has really good documentation.

The basic concept of a vcluster is that it starts up a kube-apiserver inside a pod on the host cluster, and a syncer component then ensures that certain resources are synced between the vcluster and the host cluster, so existing infrastructure and services on the host cluster can be reused. For example, if you request a storage volume (PVC) on the vcluster, it will be synced to the host cluster, where the existing storage implementation takes care of creating the actual volume, without the need to install any complicated CSI drivers in your vcluster. You just get dynamic volume provisioning “for free”. The same applies to things like load balancers, ingresses, etc. Your workloads (pods) created on the vcluster can also make use of existing nodes on the host cluster. But there is also the option to further isolate workloads by scheduling everything created inside the vcluster on dedicated nodes of the host cluster. This is how vclusters are implemented at Nine: you don’t need to worry about sharing a node with other tenants.

Source: vcluster.com
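To make the syncing behaviour concrete, here is a minimal sketch of the PVC example above: a completely standard PersistentVolumeClaim applied against the vcluster (the claim name and storage size are just placeholders).

```yaml
# A plain PVC created against the vcluster's API server.
# The syncer copies it to the host cluster, where the host's
# existing storage implementation provisions the actual volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Applied with `kubectl apply -f pvc.yaml` while your kubeconfig points at the vcluster, the claim becomes bound once the host cluster has provisioned the volume – no CSI driver inside the vcluster required.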

When to use a vcluster

That’s of course completely up to you but here are some use-cases we could think of:

  • CI/CD: Because of the fast creation and deletion times of a vcluster, as well as their low cost, they lend themselves to be used in CI/CD pipelines to test deployments of your apps end-to-end.
  • Testing new Kubernetes API versions: We try to always provide the latest Kubernetes releases within vcluster so you can test your apps against new API versions early.
  • Well isolated and cost effective environments: Staging and development environments can use their own vcluster to be better isolated from production instead of using multiple namespaces on a single NKE cluster.
  • Test new CRD versions and operators: New CRDs and/or operators can easily be tested on a temporary vcluster. Want to try an upgrade of cert-manager and see if your certificates are still getting signed? A vcluster can help with that.
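As a sketch of the CI/CD use-case, a hypothetical GitLab CI job could create a throwaway vcluster, run end-to-end tests against it, and tear it down again. The job name, the test script, and the vcluster name are placeholders; the `nctl` commands are the ones described in the "Getting started" section (by default `nctl create vcluster` picks a random name, so in practice you would name the vcluster via one of the flags listed by `nctl create vcluster -h`).

```yaml
# Hypothetical .gitlab-ci.yml job: spin up a vcluster,
# run the end-to-end tests, then clean up afterwards.
e2e-tests:
  script:
    - nctl create vcluster              # creates and logs into a fresh vcluster
    - ./run-e2e-tests.sh                # placeholder for your own test suite
  after_script:
    # assumes the vcluster was named "e2e-test" via a create flag;
    # "yes" answers the interactive deletion prompt
    - yes | nctl delete vcluster e2e-test
```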

How we are making use of vclusters

At Nine we are constantly looking at new software to solve certain problems, which means we often need to deploy something on a Kubernetes cluster and tear it down again after we are done with testing. In the past we used local clusters with kind or Minikube, but with a lot of software you have to resort to workarounds to get it running, e.g. finding the usually hidden “allow insecure TLS” flag, as it’s not really simple to get a trusted TLS certificate inside your temporary kind cluster. Or say you want to share your prototype with other team members: it gets quite tricky to expose your locally running applications to the internet. Here a vcluster offers the best of both worlds, as you can get an (almost) full cluster within seconds.

Another use-case of ours is running the staging environment for our API and controllers. We make heavy use of CRDs, which makes it hard to use a shared cluster, but as we are just running a few pods for the controllers, a full cluster would be wasteful.

Comparison to NKE

We see vclusters as a complementary tool to our fully managed NKE clusters. The API server of a vcluster does not offer the same reliability as a complete Kubernetes cluster such as NKE. However, a brief failure of the API server does not usually cause your application to fail. The comparison covers the following features:

  • Service Type Load Balancer
  • Persistent Storage (RWO)
  • Argo CD Integration
  • NKE machine types
  • Dedicated worker nodes
  • Dedicated HA control-plane nodes
  • Cluster add-ons
  • Automatic backups
  • Availability guarantee (SLA)
  • Fast creation time (<~2 min)
  • Cluster admin

Getting started

While vclusters can be created in Cockpit, we have also added it to our CLI utility to offer a kind-like experience and to support CI/CD workflows. You can create a new vcluster with just a single command.

$ nctl create vcluster
 ✓ created vcluster "grown-velocity"
 ✓ waiting for vcluster to be ready ⏳
 ✓ vcluster ready 🐧
 ✓ added grown-velocity/nine to kubeconfig 📋
 ✓ logged into cluster grown-velocity/nine 🚀
$ nctl get clusters
NAME                       NAMESPACE       PROVIDER    NUM_NODES
grown-velocity             nine            vcluster    1


By default this will choose a random name for your vcluster and spawn a single node that is dedicated to it. All of this can be configured using flags; run nctl create vcluster -h to get an overview of all available flags.

Now you can start to use the vcluster just like any Kubernetes cluster.

$ kubectl get ns
NAME              STATUS   AGE
kube-system       Active   47s
kube-public       Active   47s
kube-node-lease   Active   47s
default           Active   47s

$ nctl delete vcluster grown-velocity
do you really want to delete the vcluster "grown-velocity/nine"? [y|n]: y
 ✓ vcluster deletion started
 ✓ vcluster deleted 🗑

Do you have any questions?

Container Orchestration: What is container orchestration and what can it be used for?

Virtualization, containers and cloud computing have fundamentally changed the development and operation of modern applications. However, if you have to manage and provide a large number of containers, there is no way around container orchestration, because countless processes have to be managed simultaneously and in a resource-optimized manner. Tools such as the open source-based Kubernetes provide powerful solutions for orchestrating a container-based environment.

In the following article, we explain what container orchestration is, what it is used for, and how it works. We go into detail about container orchestration with Kubernetes and introduce the nine Managed Google Kubernetes Engine (GKE). We conclude with some use cases illustrating the usefulness of container orchestration.

Definition of container orchestration

Container orchestration is the automation of the processes for provisioning, organizing, managing, and scaling containers. It creates a dynamic system that groups many different containers, manages their interconnections, and ensures their availability. Container orchestration can be used in different environments: it can manage containers in a private or public cloud or on on-premises infrastructure.

Container orchestration details 

Container orchestration is closely linked to cloud computing and the delivery of applications in the form of many individual microservices. But what is container orchestration needed for, and how does it work?

What is container orchestration needed for?

Applications that were developed without containers in mind are often referred to as classical or monolithic applications. All functions, classes, and sometimes services were included in a single source repository with many internal dependencies:

Source: https://cloud.google.com/kubernetes-engine/kubernetes-comic/assets/panel-8.png, art and story from Scott McCloud

While these “application monsters” have an impressive range of functions, they are difficult to deploy, maintain, and scale. These applications cannot keep pace with the ever-faster processes of digital transformation.

Source: https://cloud.google.com/kubernetes-engine/kubernetes-comic/assets/panel-3.png, art and story from Scott McCloud

The current trend is moving towards microservices, which allows an IT team to “divide and conquer” large problems into small tasks. Applications consist of many small, independent services with individual tasks. The microservices communicate with each other via defined interfaces and, as a whole, form the actual application.

Source: https://cloud.google.com/kubernetes-engine/kubernetes-comic/assets/panel-9.png, art and story from Scott McCloud

Each microservice can be provided, operated, debugged, and updated individually without affecting the operation of the overall application. Microservices are a decisive step towards agile applications and the DevOps concept.

Source: https://cloud.google.com/kubernetes-engine/kubernetes-comic/assets/panel-10.png, art and story from Scott McCloud

Existing services are divided up into containers or groups of containers. The containers provide a kind of virtualized environment and contain the complete runtime environment, including libraries, configurations, and dependencies. Compared to full virtual machines, containers require fewer resources and can be started much faster. Many containers can run in parallel on a single physical or virtual server. Together with their containers, the microservices can be moved easily and quickly between different environments. Containerized microservices form the basis for cloud-native applications.

Complex applications consist of many microservices and containers that are operated on different systems and in different environments. The manual management of a large number of containers is a challenge for every administrator. A solution to this problem is container orchestration, as is possible with Kubernetes, for example. It automates the processes of deployment, organization, management, and scaling of containers.

Source: https://cloud.google.com/kubernetes-engine/kubernetes-comic/assets/panel-20.png, art and story from Scott McCloud

Which functions does container orchestration perform?

Container orchestration gives the user the ability to control, coordinate, and automate all processes around the many individual containers. It performs the following tasks, among others:

  • Provision of the containers
  • Configuring the containers
  • Allocating resources
  • Grouping the containers
  • Starting and stopping the containers
  • Monitoring of the container status
  • Updating the containers
  • Failover of individual containers
  • Scaling the containers
  • Ensuring the communication of the containers

The containers and their dependencies are described in configuration files. The container orchestrator uses these files to plan the deployment and operation of the containers.
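As an example of such a configuration file, a minimal Kubernetes Deployment describes the desired state – here, three replicas of a web server container – and the orchestrator then continuously works to make the cluster match it (the names and the image are just illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:          # resource allocation per container
            requests:
              cpu: 100m
              memory: 128Mi
```

If a node fails or a container crashes, the orchestrator notices the deviation from this declared state and restarts or reschedules the affected containers.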

Container Orchestration in Kubernetes

Kubernetes, often abbreviated K8s, is an open source-based solution for orchestrating containers. It was originally developed by Google and released in 2014. In 2015, Google donated the Kubernetes project to the Cloud Native Computing Foundation (CNCF). CNCF is responsible for many other projects in the Cloud Native Computing environment.

Source: https://cloud.google.com/kubernetes-engine/kubernetes-comic/assets/panel-21.png, art and story from Scott McCloud

Although it is still relatively young software, Kubernetes dominates the orchestrator ecosystem. Kubernetes can control, operate, and manage containers, but requires a container engine like Docker to provide the actual container runtime environment. Compared to container orchestration with Docker Swarm, which is now integrated into Docker, Kubernetes offers a much wider range of features.

How does Kubernetes work?

Kubernetes is built around the following basic elements:

  • Pods
  • Nodes (formerly Minions)
  • Clusters
  • Master Nodes

Within the Kubernetes architecture, a Pod is the smallest unit. A Pod can contain one or more containers.
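As a minimal sketch, here is a Pod manifest with two containers; both share the pod's network namespace and IP address (the names and images are just examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx:1.25            # main application container
    - name: log-shipper
      image: busybox:1.36          # sidecar container in the same pod
      command: ["sh", "-c", "tail -f /dev/null"]
```

Because they run in the same pod, the two containers can reach each other via localhost and are always scheduled onto the same node together.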

Source: https://cloud.google.com/kubernetes-engine/kubernetes-comic/assets/panel-26.png, art and story from Scott McCloud

Individual Pods or groups of Pods are operated on one node. A node is a physical or virtual machine. A container runtime environment like Docker is installed on the nodes.

Source: https://cloud.google.com/kubernetes-engine/kubernetes-comic/assets/panel-27.png, art and story from Scott McCloud

Several nodes can be combined into a cluster. Clusters consist of at least one master node and several worker nodes. The master nodes have the task of receiving commands from the administrator and controlling the worker nodes with their pods.

Source: https://cloud.google.com/kubernetes-engine/kubernetes-comic/assets/panel-28.png, art and story from Scott McCloud

The master decides which node is best suited for a particular task, determines the pods that run on the node and allocates resources. The master nodes receive regular status updates that allow the operation of the nodes to be monitored. If required, Pods with their containers can be automatically started on other nodes.

The nine Managed Google Kubernetes Engine (GKE)

Kubernetes is a powerful tool and offers a huge feature set, but it also has a steep learning curve. Appropriate know-how and resources are required for container orchestration with Kubernetes. With the nine Managed Google Kubernetes Engine (GKE) you get a fully managed environment where you can deploy and orchestrate your containers directly. The operation of the platform is managed by nine.

What is the nine Managed GKE and how does it work?

The nine Managed GKE is a managed service product. It is based on the Google Kubernetes Engine (GKE), an environment to deploy, manage, and scale containerized applications on Google infrastructure. The containers run on a secure, easily scalable cluster consisting of multiple machines. Nine provides additional features on top of the Google Kubernetes Engine and takes over the complete operation of the cluster: regular backups are performed, external services are integrated, and storage management, monitoring, and replication tasks are taken care of. You can fully concentrate on the development of your applications and the orchestration of the containers on the managed platform.

All data is securely stored in Switzerland, as this is a Swiss service. At the same time, the worldwide Google infrastructure offers the possibility of global scaling.

Concrete Use Cases

If you have applications that are divided into microservices, or you want to redesign existing applications cloud-natively, you can run the containerized microservices on a cluster with nine Managed GKE. Nine takes care of the operation of the cluster. You concentrate on the development and management of the containers. The reallocated resources allow you to eliminate budget bottlenecks, catch up on backlogs in application development, or eliminate technical deficits in existing applications. Below are some use cases that illustrate the best practices of container orchestration.

Agile, dynamically growing applications of a startup

Startups with new business ideas need agile applications. Functions have to be changed, extended, or adapted on a daily basis. Once the first successes are achieved, the requirements in terms of resources and scaling increase. With modern, containerized applications, startups cover all requirements for agile, dynamically growing applications. By dividing the application into many different microservices and making them available via containers, individual functions can be changed or scaled without affecting the entire application.

Applications with high availability

Many companies depend on the high availability of their business-critical applications. Even short failures can lead to immense sales losses or a loss of reputation. Industries that have high demands on the availability of certain applications include manufacturing and finance. In a modern application consisting of containerized microservices, container orchestration takes care of the uninterrupted operation. For example, if individual computers fail, the affected containers are automatically started on other computers in a redundantly designed cloud environment. Manual intervention is not necessary. Even when updating or scaling the services, the basic operational readiness of the entire application is not affected.

Concentration on application development – no resources for operation 

In most cases the financial, human, or technical resources are limited. If applications are provided on the basis of a fully managed, containerized, and cloud-based environment such as the nine Managed GKE, resources are freed up as typical operational tasks are eliminated. These resources can be used for application development or for eliminating technical shortcomings of existing applications. The company focuses more on its core business and the chances of success increase.

The nine cloud navigators are your partner for cloud-native applications and container orchestration 

If you want to benefit from the advantages of cloud computing and cloud-native applications and accelerate your time-to-market, then the nine cloud navigators are the right partner for you. With the nine Managed GKE, your data is securely stored in Switzerland. At the same time, you have the possibility of the global scalability of the Google Cloud. You do not have to deal with the complexity of managing and operating a cluster yourself. 

Our Kubernetes experts help you with container orchestration and provide you with a fully managed environment. We will be happy to answer your questions or introduce you to our managed cloud solution.

Talk to one of our experts

Do you have any questions about our products? Either contact Sales or go directly to your Cockpit to see all the possibilities at Nine.