Single Data Center On-Premises Deployment Example Using Kubernetes
Only available in Nexus Repository Pro.
This example architecture illustrates how to use a Kubernetes cluster and PostgreSQL database to create a resilient Nexus Repository deployment. To learn more about resiliency and protecting your data in the event of an outage or disaster, see Resiliency and High Availability.
Important Terms
Before proceeding, you should know these important terms:
- Node - A virtual or physical machine
- Pod - A group of one or more containers with shared storage and network resources and a specification for how to run containers
- Container - A package with the program to execute (Nexus Repository) and everything required for it to run (e.g., code, runtime, system libraries)
- Instance - A single running copy of the Nexus Repository application
Use Cases
This reference architecture is designed to protect against the following scenarios:
- A node/server failure within a data center
- A Nexus Repository service failure
You would use this architecture if you fit the following profiles:
- You are a Nexus Repository Pro user looking for a resilient Nexus Repository deployment option on-premises in order to reduce downtime
- You would like to achieve automatic failover and fault tolerance as part of your deployment goals
- You already have a Kubernetes cluster set up as part of your deployment pipeline for your other in-house applications and would like to leverage the same for your Nexus Repository deployment
- You have migrated or set up Nexus Repository with an external PostgreSQL database and want to fully reap the benefits of an externalized database setup
- You do not need High Availability (HA) active-active mode
Requirements
- A Nexus Repository Pro license
- Nexus Repository 3.33.0 or later
- A tool such as Kops for setting up Kubernetes clusters
- kubectl command-line tool
- Kustomize to customize Kubernetes objects
- Bare metal servers/virtual machines for configuring master and worker nodes
- A PostgreSQL database
- A load balancer
- File storage (Network File System)
Limitations
In this reference architecture, at most one Nexus Repository instance runs at a time; running more than one Nexus Repository replica is not supported.
Setting Up the Architecture
Kubernetes Cluster
Kubernetes works by placing containers into pods to run on nodes. You must set up a Kubernetes cluster comprising one master node and two worker nodes. Nexus Repository will run on one of your worker nodes. In the event of a worker node failure, Kubernetes will spin up a new Nexus Repository instance on the other worker node.
See Kops documentation for an example of how to set up a Kubernetes cluster with one master and two worker nodes using Kops.
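As an illustration only, a cluster of this shape could be created with commands along the following lines; the cluster name, state store, and zone values are placeholders for your environment, and exact flags vary by Kops version:
kops create cluster \
  --name=<cluster-name> \
  --state=<state-store-url> \
  --zones=<zone> \
  --master-count=1 \
  --node-count=2
kops update cluster --name=<cluster-name> --state=<state-store-url> --yes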
PostgreSQL Database
At any given time, only one instance will communicate with the database and blob store. Set up a PostgreSQL database and ensure the worker nodes can communicate with this database. See Configuring Nexus Repository Pro for External PostgreSQL for more information.
In order to avoid single points of failure, we recommend you set up a highly available PostgreSQL cluster.
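As a minimal sketch, assuming a reachable PostgreSQL server and using placeholder names that must match the DB_NAME, DB_USER, and DB_PASSWORD values in the deployment YAML below, the database and role could be created as follows:
psql -h <postgres-host> -U postgres -c "CREATE USER <db-user> WITH PASSWORD '<db-password>';"
psql -h <postgres-host> -U postgres -c "CREATE DATABASE <db-name> OWNER <db-user>;"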
Creating a Namespace
Kubernetes namespaces allow you to organize Kubernetes clusters into virtual sub-clusters. Before creating a namespace, you must have already configured a Kubernetes cluster and you must configure the kubectl command-line tool to communicate with your cluster.
See Kubernetes documentation on creating a namespace.
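For example, you can create the nxrm namespace used throughout the sample YAML files in this section as follows:
kubectl create namespace nxrm
kubectl get namespaces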
Kustomize
Kustomize is bundled with kubectl, and you can apply it using the following command, where <kustomization_directory> is a directory containing all of the YAML files, including the kustomization.yaml file:
kubectl apply -k <kustomization_directory>
Your Nexus Repository license is passed into the Nexus Repository container by using Kustomize to create a configuration map that is mounted as a volume of the pod running Nexus Repository. See Kubernetes documentation for using Kustomize for more information.
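As an illustration, assuming the kustomization.yaml shown in the License Configuration Mapping section below is saved alongside your license file and deployment YAML, the directory would look like this and can be applied and checked as follows (Kustomize's configMapGenerator appends a content hash to the generated ConfigMap's name and rewrites references to it in the listed resources):
<kustomization_directory>/
    kustomization.yaml
    nxrm-license.lic
    nxrm_deployment.yaml
kubectl apply -k <kustomization_directory>
kubectl get configmaps -n nxrm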
File System
The Nexus Repository container will store logs and Elasticsearch indexes in the /nexus-data directory on the node.
Persistent Volume and Persistent Volume Claim
Using a persistent volume allows Elasticsearch indexes to survive pod restarts on a particular node. When used in conjunction with the NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP flag, this ensures that the Elasticsearch index is only rebuilt if it is empty when the Nexus Repository pod starts; that is, the index is only rebuilt the first time a Nexus Repository pod starts on a node.
Storage space for a persistent volume is allocated from your root volume (i.e., the volume attached to the provisioned node). You must therefore ensure that your node's root volume is large enough for your usage and bigger than the size specified in the persistent volume claim, so that there is spare storage capacity on the node. For example, for a persistent volume claim of 100 GB, you could make the root volume 120 GB.
Sample Kubernetes YAML Files
You can use the sample YAML files in this section to help set up the YAMLs you will need for a resilient deployment. Ensure you have filled out the YAML files with appropriate information for your deployment.
You must run your YAML files in the order below:
- Persistent Volume YAML
- Persistent Volume Claim YAML
- License Configuration Mapping YAML
- Deployment YAML
- Services YAML
The resources created by these YAMLs are not in the default namespace; the samples below place them in the nxrm namespace.
Persistent Volume YAML
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-blobstorage-pv
spec:
  capacity:
    storage: <size>
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-blobstorage
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: <path to mount>
    server: <server ip address>
Persistent Volume Claim YAML
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-blobstorage-claim
  namespace: nxrm
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-blobstorage
  resources:
    requests:
      storage: <size>
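After applying the persistent volume and persistent volume claim YAMLs (the file names below are placeholders for wherever you saved the samples), you can confirm that the claim has bound to the volume; both should report a STATUS of Bound:
kubectl apply -f nfs_pv.yaml
kubectl apply -f nfs_pvc.yaml
kubectl get pv,pvc -n nxrm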
License Configuration Mapping YAML
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: nxrm-license
    namespace: nxrm
    files:
      - nxrm-license.lic
resources:
  - nxrm_deployment.yaml
Deployment YAML
The nxrm_deployment.yaml below deploys Nexus Repository, while the services.yaml shown in the next section sets up the Ingress along with a load balancer.
Note the following important information:
- replicas is set to 1.
- Docker repositories will need additional ports exposed matching your repository connector configuration.
- The environment variable NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP will automatically rebuild your Elasticsearch index on node startup.
- The NFS blob storage volume is mounted into a /blobs directory in the Nexus Repository container. Blobs should be stored in this directory. For example, when creating a new blob store, specify /blobs as the root in the Path field of the Create blob store form (e.g., /blobs/BlobStore1).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nxrm-deployment
  namespace: nxrm
  labels:
    app: nxrm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nxrm
  template:
    metadata:
      labels:
        app: nxrm
    spec:
      initContainers:
        - name: chown-nexusdata-owner-to-nexus
          image: busybox:1.33.1
          command: ['chown', '-R', '200:200', '/blobs']
          volumeMounts:
            - name: nfs-blob-storage
              mountPath: /blobs
      containers:
        - name: nxrm-pod
          image: sonatype/nexus3:3.33.0
          securityContext:
            runAsUser: 200
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
          env:
            - name: DB_NAME
              value: <db-name>
            - name: DB_USER
              value: <db-user>
            - name: DB_PASSWORD
              value: <db-password>
            - name: LICENSE_FILE
              value: /etc/nxrm-license/nxrm-license.lic
            - name: NEXUS_SECURITY_RANDOMPASSWORD
              value: "false"
            - name: NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP
              value: "true"
            - name: INSTALL4J_ADD_VM_PARAMS
              value: "-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m \
                -Dnexus.licenseFile=${LICENSE_FILE} \
                -Dnexus.datastore.enabled=true \
                -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs \
                -Dnexus.datastore.nexus.name=nexus \
                -Dnexus.datastore.nexus.type=jdbc \
                -Dnexus.datastore.nexus.jdbcUrl=jdbc:postgresql://postgres_url:5432/${DB_NAME} \
                -Dnexus.datastore.nexus.username=${DB_USER} \
                -Dnexus.datastore.nexus.password=${DB_PASSWORD}"
          volumeMounts:
            - name: nfs-blob-storage
              mountPath: /blobs
            - name: license-volume
              mountPath: /etc/nxrm-license
      volumes:
        - name: nfs-blob-storage
          persistentVolumeClaim:
            claimName: nfs-blobstorage-claim
        - name: license-volume
          configMap:
            name: nxrm-license
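After the deployment has been applied via the kubectl apply -k command shown earlier, you can confirm that a single Nexus Repository pod is running and follow its startup logs:
kubectl rollout status deployment/nxrm-deployment -n nxrm
kubectl get pods -n nxrm
kubectl logs -f deployment/nxrm-deployment -n nxrm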
Services YAML
In the following YAML, <scheme> should typically be set to internal.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: nxrm
  name: ingress-nxrm
  annotations:
    kubernetes.io/ingress.class: <load balancer class>
    <load balancer class>.ingress.kubernetes.io/scheme: <scheme>
    <load balancer class>.ingress.kubernetes.io/subnets: subnet-abc, subnet-xyz
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nxrm-service
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nxrm-service
  namespace: nxrm
  labels:
    app: nxrm
spec:
  type: NodePort
  selector:
    app: nxrm
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8081
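After applying services.yaml, you can verify the Ingress and Service and, once your load balancer reports healthy targets, check Nexus Repository's availability through its status endpoint; the load balancer hostname below is a placeholder for your environment:
kubectl get ingress,svc -n nxrm
curl http://<load-balancer-hostname>/service/rest/v1/status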
You can also extend services.yaml with the following resources to expose ports matching your Docker repository connector configuration:
Ingress for Docker Connector (Optional)
In the following YAML, <scheme> should typically be set to internal.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: nxrm
  name: ingress-nxrm-docker
  annotations:
    kubernetes.io/ingress.class: <load balancer class>
    <load balancer class>.ingress.kubernetes.io/scheme: <scheme>
    <load balancer class>.ingress.kubernetes.io/subnets: subnet-abc, subnet-xyz
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nxrm-service-docker
                port:
                  number: 9090
Nodeport for Docker Connector
apiVersion: v1
kind: Service
metadata:
  name: nxrm-service-docker
  namespace: nxrm
  labels:
    app: nxrm
spec:
  type: NodePort
  selector:
    app: nxrm
  ports:
    - name: docker-connector
      protocol: TCP
      port: 9090
      targetPort: 9090
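With the connector exposed and a Docker repository in Nexus Repository configured with an HTTP connector on port 9090, a client could then authenticate and pull through the load balancer; the hostname and image below are placeholders. Note that a plain-HTTP registry must be added to the Docker daemon's insecure-registries list unless TLS is terminated in front of it:
docker login <load-balancer-hostname>:9090
docker pull <load-balancer-hostname>:9090/<image>:<tag>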