Single Data Center On-Premises Deployment Example Using Kubernetes
Only available in Nexus Repository Pro. Interested in a free trial? Start here.
This example architecture illustrates how to use a Kubernetes cluster and PostgreSQL database to create a resilient Nexus Repository deployment. To learn more about resiliency and protecting your data in the event of an outage or disaster, see Resiliency and High Availability.
Before proceeding, you should know these important terms:
- Node - A virtual or physical machine
- Pod - A group of one or more containers with shared storage and network resources and a specification for how to run containers
- Container - A package with the program to execute (Nexus Repository) and everything required for it to run (e.g., code, runtime, system libraries)
- Instance - An instance of Nexus Repository
This reference architecture is designed to protect against the following scenarios:
- A node/server failure within a data center
- A Nexus Repository service failure
You would use this architecture if you fit the following profiles:
- You are a Nexus Repository Pro user looking for a resilient Nexus Repository deployment option on-premises in order to reduce downtime
- You would like to achieve automatic failover and fault tolerance as part of your deployment goals
- You already have a Kubernetes cluster set up as part of your deployment pipeline for your other in-house applications and would like to leverage the same for your Nexus Repository deployment
- You have migrated or set up Nexus Repository with an external PostgreSQL database and want to fully reap the benefits of an externalized database setup
- You do not need High Availability (HA) active-active mode
- A Nexus Repository Pro license
- Nexus Repository 3.33.0 or later
- A tool such as Kops for setting up Kubernetes clusters
- kubectl command-line tool
- Kustomize to customize Kubernetes objects
- Bare metal servers/virtual machines for configuring master and worker nodes
- A PostgreSQL database
- A load balancer
- File storage (Network File System)
In this reference architecture, a maximum of one Nexus Repository instance is running at a time. Having more than one Nexus Repository instance replica will not work.
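The single-instance constraint can be sketched in a Deployment manifest. The names, namespace, and image tag below are illustrative assumptions, not values from this document; the key points are `replicas: 1` and a `Recreate` strategy, which prevents Kubernetes from briefly running two instances during a rollout:

```yaml
# Illustrative sketch only — adjust names, namespace, and image for your deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nexus-repository
  namespace: nexus-repository      # hypothetical namespace
spec:
  replicas: 1                      # more than one replica will not work
  strategy:
    type: Recreate                 # stop the old pod before starting the new one
  selector:
    matchLabels:
      app: nexus-repository
  template:
    metadata:
      labels:
        app: nexus-repository
    spec:
      containers:
        - name: nexus-repository
          image: sonatype/nexus3:3.33.0   # hypothetical tag; use your version
          ports:
            - containerPort: 8081
```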
Setting Up the Architecture
Step 1 - Kubernetes Cluster
Kubernetes works by placing containers into pods to run on nodes. You must set up a Kubernetes cluster comprising one master node and two worker nodes. Nexus Repository will run on one of your worker nodes. In the event of a worker node failure, Kubernetes will spin up a new Nexus Repository instance on the other worker node.
See Kops documentation for an example of how to set up a Kubernetes cluster with one master and two worker nodes using Kops.
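As a minimal sketch, assuming a Kops-managed cluster, the two worker nodes could be described by an instance group such as the one below; the cluster name and sizes are hypothetical examples, not values from this document:

```yaml
# Illustrative Kops instance group for two worker nodes
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
  labels:
    kops.k8s.io/cluster: my-cluster.example.com   # hypothetical cluster name
spec:
  role: Node
  minSize: 2        # two worker nodes, so a failed node has a failover target
  maxSize: 2
```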
Step 2 - PostgreSQL Database
At any given time, only one instance will communicate with the database and blob store. Set up a PostgreSQL database and ensure the worker nodes can communicate with this database. See Configuring Nexus Repository Pro for External PostgreSQL for more information.
In order to avoid single points of failure, we recommend you set up a highly available PostgreSQL cluster.
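The environment variables shown below are those Sonatype documents for configuring Nexus Repository against an external PostgreSQL database; the host, database name, and Secret names are hypothetical examples. A sketch of the relevant container `env` section:

```yaml
# Illustrative container env section — host, database, and Secret names are assumptions
env:
  - name: NEXUS_DATASTORE_NEXUS_JDBCURL
    value: jdbc:postgresql://postgres.example.internal:5432/nexus
  - name: NEXUS_DATASTORE_NEXUS_USERNAME
    valueFrom:
      secretKeyRef:
        name: nexus-db-credentials   # hypothetical Secret
        key: username
  - name: NEXUS_DATASTORE_NEXUS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: nexus-db-credentials
        key: password
```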
Step 3 - Creating a Namespace
Kubernetes namespaces allow you to organize Kubernetes clusters into virtual sub-clusters. Before creating a namespace, you must have already configured a Kubernetes cluster and you must configure the kubectl command-line tool to communicate with your cluster.
See Kubernetes documentation on creating a namespace.
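A namespace can also be declared as a YAML manifest rather than created imperatively with kubectl; the name below is a hypothetical example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nexus-repository   # hypothetical namespace name
```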
Step 4 - Kustomize
Kustomize is bundled with kubectl, and you can apply it using the following command, where <kustomization_directory> is a directory containing all of your YAML files, including the kustomization.yaml file:
kubectl apply -k <kustomization_directory>
Your Nexus Repository license is passed into the Nexus Repository container by using Kustomize to create a ConfigMap that is mounted as a volume of the pod running Nexus Repository. See Kubernetes documentation on using Kustomize for more information.
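As an illustrative sketch (the file names, namespace, and license file path below are assumptions, not from this document), a kustomization.yaml can generate the license ConfigMap from a license file and bundle the other manifests:

```yaml
# kustomization.yaml — illustrative; adjust names and paths for your deployment
namespace: nexus-repository          # hypothetical namespace
resources:
  - deployment.yaml
configMapGenerator:
  - name: nexus-pro-license          # hypothetical ConfigMap name
    files:
      - nexus.lic                    # your Nexus Repository Pro license file
generatorOptions:
  disableNameSuffixHash: true        # keep a stable name for the volume mount
```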
Step 5 - File System
The Nexus Repository container stores logs and Elasticsearch indexes in the /nexus-data directory on the node.
Step 6 - Persistent Volume and Persistent Volume Claim
Using a persistent volume allows Elasticsearch indexes to survive pod restarts on a particular node. When used in conjunction with the NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP flag, this ensures that the Elasticsearch index is only rebuilt if it is empty when the Nexus Repository pod starts. In other words, the index is rebuilt only the first time a Nexus Repository pod starts on a node.
Storage space for a persistent volume is allocated from your root volume (i.e., the volume attached to the provisioned node). Ensure that the node's root volume is larger than the size specified in the persistent volume claim so that the node retains some spare storage capacity. For example, for a persistent volume claim of 100 GB, you could make the root volume 120 GB.
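The persistent volume and claim described above can be sketched as follows; the names, namespace, path, and sizes are illustrative assumptions, and the claim's 100Gi request matches the sizing example above:

```yaml
# Illustrative only — names, namespace, path, and sizes are assumptions for this sketch
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nexus-data-pv
spec:
  capacity:
    storage: 100Gi                   # keep this smaller than the node's root volume
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /nexus-data                # node directory holding logs and Elasticsearch indexes
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexus-data-pvc
  namespace: nexus-repository        # hypothetical namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```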
Sample Kubernetes YAML Files
The YAML files linked in this section are examples only and cannot be used as-is; you must fill them in with information appropriate to your deployment. You can use the sample on-premises resiliency YAML files from our sample files GitHub repository as a starting point for the YAML files you will need for a resilient deployment.
Before running the YAML files in this section, you must first create a namespace. To create a namespace, use a command like the one below with the kubectl command-line tool:
kubectl create namespace <namespace>
You must then run your YAML files in the order below:
- Persistent Volume YAML
- Persistent Volume Claim YAML
- License Configuration Mapping YAML
- Deployment YAML
- Services YAML
Note that the resources created by these YAML files are created in the namespace you specified, not the default namespace.