Infrastructure

When you host your application or database with Kinsta, your projects run on Google Cloud Platform’s top-tier infrastructure. In this guide, we’ll dive a little into the details of our Application Hosting and Database Hosting infrastructure.

A diagram of Kinsta’s Application Hosting infrastructure.

Git Repository

Your application’s code is stored within a Git repository hosted with your Git service provider. You can connect any (or all) of the supported providers: GitHub, GitLab, and Bitbucket.

MyKinsta Add/Deploy Application

When you add an application in MyKinsta, it connects to your Git repository and retrieves your application’s code.

MyKinsta Bot

With Automatic deployment on commit enabled in your application’s settings, when you commit a change or merge to your repository, the MyKinsta bot detects it, pulls the code from your Git service provider, and deploys the updated version of the application.
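Conceptually, this works like a webhook-driven deployment: a push event arrives, and if it targets the tracked branch, a redeploy is triggered. The sketch below is only an illustration of that flow; the branch name, payload fields, and redeploy function are hypothetical, and Kinsta’s actual bot is internal to MyKinsta.

```python
# Conceptual sketch only: a webhook-style listener that triggers a redeploy
# when a push lands on the tracked branch. Names and payload fields are
# hypothetical; Kinsta's actual bot is internal to MyKinsta.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

TRACKED_BRANCH = "refs/heads/main"  # assumption: deployments follow "main"

def redeploy(repo: str, commit: str) -> None:
    # Placeholder for "pull the code and deploy the updated version."
    print(f"Redeploying {repo} at commit {commit}")

class PushHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # Only commits/merges to the tracked branch trigger a deployment.
        if event.get("ref") == TRACKED_BRANCH:
            redeploy(event.get("repository", "unknown"), event.get("after", ""))
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PushHandler).serve_forever()
```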

Google Cloud Build

MyKinsta sends the application’s code to Google Cloud Build, which builds a container image from it. The build determines which packages and dependencies to install from your Nixpacks or Buildpacks configuration, or from your Dockerfile. The output is an image that can be run as a container.
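As a rough illustration of that decision (not Cloud Build’s or Kinsta’s actual logic), a hypothetical builder might choose between those build methods by inspecting which files are present in the repository:

```python
# Simplified, hypothetical sketch of choosing a build method based on what
# is present in the repository. The real build runs inside Google Cloud Build.
from pathlib import Path

def pick_build_method(repo_root: str) -> str:
    root = Path(repo_root)
    if (root / "Dockerfile").exists():
        return "dockerfile"   # build exactly what the Dockerfile describes
    if (root / "nixpacks.toml").exists():
        return "nixpacks"     # Nixpacks config controls what gets installed
    return "buildpacks"       # otherwise fall back to buildpacks detection

print(pick_build_method("."))  # e.g. "dockerfile"
```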

Google Artifact Registry

Google Artifact Registry stores the container images that are ready to deploy. Each application has a single image that is used whenever the application needs to be deployed.
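For illustration, a container image in a registry such as Artifact Registry is addressed by a reference that combines the registry location, project, repository, image name, and tag. The project, repository, and tag values below are made up; only the general shape of the reference is the point.

```python
# Illustrative only: composing an Artifact Registry-style image reference.
# All values here are hypothetical examples.
def image_reference(region: str, project: str, repo: str, app: str, tag: str) -> str:
    return f"{region}-docker.pkg.dev/{project}/{repo}/{app}:{tag}"

print(image_reference("europe-west1", "example-project", "apps", "my-app", "v42"))
# europe-west1-docker.pkg.dev/example-project/apps/my-app:v42
```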

Kubernetes Cluster

The image from the artifact registry is deployed to the cluster: a group of virtual machines (VMs), each of which can run multiple containers. The cluster is tuned to ensure each request reaches the right container, the containers keep running, and they have the resources they need. If there is an issue with a container, the application is redeployed to another container. We use cri-o v1.23.x on our infrastructure; however, this version is not static and may be upgraded as we upgrade different components in the stack.

Our Kubernetes infrastructure supports a multi-tenant setup, where each application runs in its own containerized environment. Network isolation and multi-layer virtualization ensure security and prevent unauthorized access between applications. This design provides you with a reliable and secure hosting platform, enabling you to focus on your core business while we handle the underlying infrastructure. We deploy at least one cluster per region, with the potential for additional clusters based on the number of applications in each region. This system ensures optimal resource allocation and scalability to meet the growing needs of our clients.
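To make these moving parts concrete, here is a minimal sketch of a generic Kubernetes Deployment (expressed as a Python dictionary) describing “run this image as one or more containers with these resources.” The image reference, replica count, and resource values are hypothetical, and this is not Kinsta’s internal configuration.

```python
# Generic Kubernetes Deployment shape (as a Python dict) with hypothetical
# values; not Kinsta's internal configuration.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "my-app"},
    "spec": {
        "replicas": 2,  # multiple copies of the same container can run
        "selector": {"matchLabels": {"app": "my-app"}},
        "template": {
            "metadata": {"labels": {"app": "my-app"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    # image pulled from the artifact registry
                    "image": "europe-west1-docker.pkg.dev/example-project/apps/my-app:v42",
                    "resources": {
                        "requests": {"cpu": "250m", "memory": "256Mi"},
                        "limits": {"cpu": "500m", "memory": "512Mi"},
                    },
                }],
            },
        },
    },
}
```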

Cloudflare

When a visitor accesses an application’s website, the request first reaches Cloudflare, which knows which cluster hosts the site and forwards the request to the correct cluster.
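Conceptually, this first hop is a hostname-to-cluster lookup. The sketch below is only an illustration; the hostnames and cluster names are hypothetical, and Cloudflare’s actual routing is configured and managed by Kinsta.

```python
# Conceptual sketch of the edge routing hop: hostname -> cluster.
# Hostnames and cluster names are hypothetical.
HOST_TO_CLUSTER = {
    "example-app.kinsta.app": "cluster-europe-west1-a",
    "another-app.example.com": "cluster-us-central1-b",
}

def route_to_cluster(hostname: str) -> str:
    return HOST_TO_CLUSTER.get(hostname, "unknown-cluster")

print(route_to_cluster("example-app.kinsta.app"))  # cluster-europe-west1-a
```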

Currently, for Application Hosting and Database Hosting, Cloudflare provides its default firewall rules, default DDoS protection, and other default protections.

Cloud Load Balancing

Each cluster has a load balancer that receives requests from Cloudflare and forwards each one to a randomly selected VM worker node.
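A minimal sketch of that behavior, random selection among the cluster’s worker nodes, is shown below. The node names and request IDs are hypothetical.

```python
# Illustrative only: a load balancer forwarding each request to a randomly
# chosen worker node. Node names and request IDs are hypothetical.
import random

WORKER_NODES = ["worker-node-1", "worker-node-2", "worker-node-3"]

def forward(request_id: str) -> str:
    node = random.choice(WORKER_NODES)
    print(f"request {request_id} -> {node}")
    return node

forward("req-001")
```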

Ingress

The VM worker node receives the request through the Ingress system, which knows which container is responsible for the requested hostname. The Ingress system sends the request to the correct container; if the container has a database attached, the application communicates with the database and returns the response along the same route.

Virtual Machines (VM)

A virtual machine (VM) can hold multiple containers and multiple databases.

Containers

Each container (application) can have multiple copies running on the VM. In this case, the Ingress system is aware of the copies and randomly routes each request to one of them.
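Putting the last two pieces together, here is a simplified sketch of host-based routing across multiple copies of the same container: look up which containers serve the requested hostname, then pick one of the running copies. All hostnames and container names are hypothetical.

```python
# Simplified sketch: hostname -> one of several copies of the same container.
# All names are hypothetical.
import random

ROUTES = {
    # hostname -> copies (replicas) of the container serving it
    "example-app.kinsta.app": ["my-app-web-1", "my-app-web-2"],
    "another-app.example.com": ["other-app-web-1"],
}

def route(hostname: str) -> str:
    copies = ROUTES.get(hostname)
    if not copies:
        raise LookupError(f"no container configured for {hostname}")
    return random.choice(copies)  # any copy can serve the request

print(route("example-app.kinsta.app"))
```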

Persistent Storage

You can add persistent storage to a web or background process. This attaches a storage volume to the VM running your application; data on the volume is retained even if the application is restarted or redeployed.
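The sketch below illustrates the difference this makes: anything written outside the mounted volume is lost on restart or redeploy, while writes to the volume persist. The mount path used here is only an example, not a required location.

```python
# Illustration of persistent vs. ephemeral writes. The mount path
# "/var/lib/data" is a hypothetical example of an attached volume.
from pathlib import Path

EPHEMERAL_PATH = Path("/tmp/cache/visits.txt")      # lost on restart/redeploy
PERSISTENT_PATH = Path("/var/lib/data/visits.txt")  # on the attached volume

def record_visit(path: Path) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write("visit\n")

record_visit(PERSISTENT_PATH)  # survives restarts because it lives on the volume
```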
