Single-Node Cloud Resilient Deployment Example Using Azure

Only available in Nexus Repository Pro. Interested in a free trial? Start here.

You never know when disaster may strike. With a resilient deployment on Azure like the one outlined below, you can ensure that you still have access to Nexus Repository in the event of a service or data center outage.

A similar architecture can be used for other cloud or on-premises deployments with Kubernetes and file-based or other supported blob storage. If you would like to manage your own deployment, see Single Data Center On-Premises Deployment Example Using Kubernetes. If you prefer to use AWS, see Single-Node Cloud Resilient Deployment Using AWS.

Use Cases

This reference architecture is designed to protect against the following scenarios:

  • An Azure Availability Zone (AZ) outage within a single region
  • A node/server failure
  • A Nexus Repository service failure

You would use this architecture if you fit the following profiles:

  • You are a Nexus Repository Pro user looking for a resilient Nexus Repository deployment option in Azure in order to reduce downtime
  • You would like to achieve automatic failover and fault tolerance as part of your deployment goals
  • You already have an Azure Kubernetes Service (AKS) cluster set up as part of your deployment pipeline for your other in-house applications and would like to leverage the same for your Nexus Repository deployment
  • You have migrated or set up Nexus Repository with an external PostgreSQL database and want to fully reap the benefits of an externalized database setup
  • You do not need High Availability (HA) active-active mode

Requirements

  • A Nexus Repository Pro license
  • Nexus Repository 3.33.0 or later
  • kubectl command-line tool
  • An Azure account with permissions for accessing the following Azure services:
    • Azure Kubernetes Service (AKS)
    • Azure Database for PostgreSQL
    • Azure Monitor
    • Azure Key Vault
    • Azure Command-Line Interface
  • Kustomize

Limitations

In this reference architecture, a maximum of one Nexus Repository instance is running at a time. Having more than one Nexus Repository failover instance will not work.

Setting Up the Architecture

Azure AKS Cluster

The first thing you must do is create a resource group. A resource group is a container that holds related resources for an Azure solution. In this case, everything we are about to set up will be contained within this new resource group. Follow Microsoft's documentation to create a resource group from the Azure portal. Ensure you put the resource group in the same region in which you intend to set up all of your resources.
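
If you prefer the Azure CLI, a command similar to the following creates the resource group; the group name and region are placeholders for your own values:

az group create --name <resource_group> --location <region>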

Then, you can follow Microsoft's documentation for creating an AKS cluster.

Ensure you enable Azure Monitor when creating your AKS cluster.
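
As an illustrative sketch, a command like the following creates an AKS cluster that spans availability zones with the Azure Monitor add-on enabled; the node count, zone list, and names are example values you should adjust:

az aks create --resource-group <resource_group> --name <cluster_name> --node-count 3 --zones 1 2 3 --enable-addons monitoring --generate-ssh-keys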

Azure PostgreSQL

We recommend Azure Database for PostgreSQL for storing Nexus Repository configurations and component metadata.

Follow Microsoft's documentation for creating an Azure Database for PostgreSQL server.

  • Use the single server option.
  • When setting up the database, be sure to select the resource group you created when setting up your AKS cluster.
  • Be sure to select the same region as the resource group you created.

Use the Azure CLI to create the nexus database using Microsoft's documentation as a reference. (See Microsoft's documentation for installing the Azure CLI.)
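
The command will look similar to the following; the resource group and server name are placeholders for the values you created above:

az postgres db create --resource-group <resource_group> --server-name <server_name> --name nexus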

Ingress Controller

An ingress controller provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. You can configure the ingress controller to associate a static IP address with the Nexus Repository pod. This way, if the Nexus Repository pod restarts, it can still be accessed through the same IP address. 

Follow Microsoft's documentation for setting up an ingress controller in AKS.
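
For example, you can create a static public IP address for the ingress controller with a command like the following; note that for AKS the IP is typically created in the node resource group, and the names below are placeholders:

az network public-ip create --resource-group <node_resource_group> --name <ingress_ip_name> --sku Standard --allocation-method Static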

Create Kubernetes Namespace

Follow the Kubernetes documentation for creating a Kubernetes namespace with the kubectl command-line tool. You will use the following command to create the namespace:

kubectl create namespace <namespace>

Licensing

Azure Key Vault

Azure Key Vault provides secrets, key, and certificate management. In the event of a failover, Key Vault can retrieve the license when the new Nexus Repository container starts. This way, your Nexus Repository always starts in Pro mode.

Follow Microsoft's documentation for creating a key vault.

  • Be sure to select the resource group you created when setting up your AKS cluster.
  • Be sure to select the same region as the resource group you created.
  • Restrict it to your virtual network.

You will then need to add your license file to the key vault using the Azure CLI. The command to add your license will look similar to the following:

az keyvault secret set --name <name_for_secret> --vault-name <name_of_keyvault> --file <path_of_license_file.lic> --encoding base64

You will also need to add the secrets for your username and password for your database; you can do this manually in the key vault you created via the portal. Follow Microsoft's documentation for adding secrets via the portal.
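
If you prefer the CLI, commands similar to the following add the database secrets; the secret names db-user and db-password match those referenced in the sample Secrets YAML below:

az keyvault secret set --vault-name <name_of_keyvault> --name db-user --value <database_username>
az keyvault secret set --vault-name <name_of_keyvault> --name db-password --value <database_password>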

The Kubernetes node on which Nexus Repository will run requires the CSI Secrets Store driver to be able to retrieve the license and database user/password information from the key vault. This is a Kubernetes-specific plugin; directions for obtaining and installing this driver are available in Microsoft's documentation.

Note that the CSI Secrets Store Driver for AKS is still in preview. This is why we suggest using Kustomize to import the license as a ConfigMap that is mounted as a volume into the Nexus Repository pod to make the license available to Nexus Repository.

Kustomize

Your Nexus Repository license is passed into the Nexus Repository container by using Kustomize to create a configuration map that is mounted as a volume of the pod running Nexus Repository. See Kubernetes documentation for using Kustomize for more information.
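
For example, assuming the kustomization.yaml shown in the sample YAML files below sits in your current directory alongside your nxrm-license.lic file and the referenced deployment YAML, a single command generates the ConfigMap and applies the deployment:

kubectl apply -k .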

Azure Monitor 

You should enable Azure Monitor when creating your AKS cluster.

When running Nexus Repository on Kubernetes, it may run on different nodes in different AZs over the course of the same day. To be able to access Nexus Repository's logs from nodes in all AZs, we recommend externalizing your logs to Azure Monitor.

When first installed, Nexus Repository sends task logs to separate files. Those files do not exist at startup and only exist as the tasks are being run. In order to facilitate sending logs to Azure Monitor, you need the log file to exist when Nexus Repository starts up. The nxrm-logback-tasklogfile-override.yaml shown in the sample YAMLs section below sets this up. 

Apply the nxrm-logback-tasklogfile-override.yaml to the AKS cluster before applying the deployment YAML.
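
Assuming you saved the ConfigMap from the sample YAMLs as nxrm-logback-tasklogfile-override.yaml (a hypothetical file name), the command would be:

kubectl apply -f nxrm-logback-tasklogfile-override.yaml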

Enabling Azure Monitor when creating the AKS cluster automatically pushes logs written to stdout by all containers in the Nexus Repository pod to Azure Monitor.

Therefore, in addition to the main log file (i.e., nexus.log), the deployment uses sidecar containers to stream the contents of the other log files (request, audit, and task logs) to stdout so that they can be automatically pushed to Azure Monitor.

Example of Obtaining Log Messages

Below is an example of using a query to find container IDs and then using them to view log messages.

Open the Azure portal and navigate to the Log Analytics workspace that you specified for Azure Monitor during AKS cluster creation. Select the Logs tab and open a query editor.

Run the following query for each of the containers in the Nexus Repository pod:

ContainerInventory
| where Name contains "<container_name>" and ContainerState == "Running"
| order by TimeGenerated desc


The <container_name> value could be one of the following:

  • nxrm-app_nxrm-deployment-nexus 
  • tasks-log_nxrm-deployment-nexus
  • request-log_nxrm-deployment-nexus
  • audit-log_nxrm-deployment-nexus


This query returns the running containers that match, including a ContainerID column.

You can then copy the ContainerID from this result and use it to access the logs in another query such as the following:

ContainerLog
| where TimeGenerated > ago (1h)
| where ContainerID in ('4f5165265cd79f4876b308fa95b89a0de08b93cf6abbdb47a9a2c25f7ef3736d')
| order by LogEntry desc


This returns the log messages for that container in the LogEntry column.

Local Persistent Volume and Local Persistent Volume Claim

Using a local persistent volume allows Elasticsearch indexes to survive pod restarts on a particular node. When used in conjunction with the NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP flag, this ensures that the Elasticsearch index is only rebuilt if it is empty when the Nexus Repository pod starts. This means that the only time the Elasticsearch index is rebuilt is the first time that a Nexus Repository pod starts on a node.

Storage space will be allocated to a local persistent volume from your root volume (i.e., the volume attached to the provisioned node). Therefore, you must ensure that the size you specify for your node's root volume is sufficient for your usage. Ensure that the size of the root volume is bigger than that specified in the local persistent volume claim so that there's some spare storage capacity on the node. For example, for a local persistent volume claim size of 100 Gigabytes, you could make the actual size of the root volume 120 Gigabytes.

See the sample storage class YAML, local persistent volume YAML, and local persistent volume claim YAML below.

Azure Blob

Azure Blob Storage provides your object (blob) storage.

Use Microsoft's documentation to first set up a storage account and then begin working with blobs.

  • When setting up your storage account, be sure to select the resource group you created when setting up your AKS cluster.
  • Select the same region as the resource group you created.
  • If available in your region, use the premium performance option. Otherwise, use the standard option.
    • If using the premium performance option, select the block blobs premium account type.
  • Select the zone-redundant storage option.
  • Restrict it to your virtual network.

Upgrading Nexus Repository when Deployed in Kubernetes

To upgrade Nexus Repository deployed in Kubernetes, you must complete the following steps:

  1. Update the Nexus Repository Docker image version in the deployment YAML (as shown in Sample Kubernetes YAML Files) to the version to which you want to upgrade.
  2. Apply the updated deployment YAML to your cluster, as shown below.
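
A minimal sketch, assuming the deployment YAML is saved as nxrm_deployment.yaml (a hypothetical file name) and uses the namespace from the samples below:

kubectl apply -f nxrm_deployment.yaml
kubectl -n nexus-repo-mgr rollout status deployment/nxrm-deployment-nexus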

Sample Kubernetes YAML Files

You can use the sample YAML files in this section to help set up the YAMLs you will need for a resilient deployment. Ensure you have filled out the YAML files with appropriate information for your deployment.

Before running the YAML files in this section, you must first create a namespace as detailed below.

Then, you must run your YAML files in the order below:

  1. Storage Class YAML
  2. Secrets YAML or License Configuration Mapping YAML
  3. nxrm-logback-tasklogfile-override YAML
  4. Local Persistent Volume YAML
  5. Local Persistent Volume Claim YAML
  6. Kustomize Deployment YAML or Secrets Store CSI Driver Deployment YAML
  7. Services YAML
    1. Ingress for Docker (Optional)
    2. Nodeport for Docker (Optional)


The resources created by these YAMLs are not in the default namespace.
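
As an illustrative sketch with hypothetical file names, the order above translates to commands like the following, run after creating the namespace as described next:

kubectl apply -f storage-class.yaml
kubectl apply -f secrets.yaml # only when using the Secrets Store CSI driver
kubectl apply -f nxrm-logback-tasklogfile-override.yaml
kubectl apply -f local-persistent-volume.yaml
kubectl apply -f local-persistent-volume-claim.yaml
kubectl apply -k . # when using Kustomize; generates the license ConfigMap and applies the deployment
kubectl apply -f services.yaml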

Create Namespace 

Before running the YAML files below, you must create a namespace. To create a namespace, use a command like the one below with the kubectl command-line tool:

kubectl create namespace <namespace>

Storage Class YAML 

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  namespace: nexus-repo-mgr 
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Secrets YAML 

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  namespace: nexus-repo-mgr
  name: nxrm-nexus-license-secret
spec:
  provider: azure
  secretObjects:
  - data:
    - key: db-password
      objectName: db-password
    - key: db-user
      objectName: db-user
    secretName: db-secret
    type: Opaque
  parameters:
    keyvaultName: nexus-kv
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "<client id>" # The clientId of the addon-created managed identity (see https://docs.microsoft.com/en-us/azure/aks/csi-secrets-store-driver)
    objects: |
      array:
        - |
          objectName: nxrm-license
          objectType: secret
          objectEncoding: base64
        - |
          objectName: db-password
          objectType: secret
        - |
          objectName: db-user
          objectType: secret
    tenantId: <tenant_id> # the tenant ID containing the Azure Key Vault instance (value of your key vault's 'Directory ID' in the Azure portal)

License Configuration Mapping YAML 

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: nxrm-license
  namespace: nexus-repo-mgr
  files:
  - nxrm-license.lic
resources:
- nxrm_deployment.yaml

nxrm-logback-tasklogfile-override YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: nxrm-logback-tasklogfile-override
  namespace: nexus-repo-mgr
data:
  logback-tasklogfile-appender-override.xml: |
    <included>
      <appender name="tasklogfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${karaf.data}/log/tasks/allTasks.log</File>
        <filter class="org.sonatype.nexus.pax.logging.TaskLogsFilter" />
        <Append>true</Append>
        <encoder class="org.sonatype.nexus.pax.logging.NexusLayoutEncoder">
          <pattern>%d{"yyyy-MM-dd HH:mm:ss,SSSZ"} %-5p [%thread] %node %mdc{userId:-*SYSTEM} %c - %m%n</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>${karaf.data}/log/tasks/allTasks-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
          <maxHistory>1</maxHistory>
        </rollingPolicy>
      </appender> 
    </included>

Local Persistent Volume YAML 

You should not use Dynamic Volume Provisioning as it will cause scheduling problems if AKS provisions the Nexus Repository pod and the volume in different AZs. The volume used must be the local volume attached as illustrated in the sample persistent volume YAML file. 

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: <size> # E.g. 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - eastus-1
          - eastus-2
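
The zone values above (eastus-1, eastus-2) are examples. You can list the zones your nodes actually occupy with:

kubectl get nodes -L topology.kubernetes.io/zone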

Local Persistent Volume Claim YAML 

As with the persistent volume above, do not use Dynamic Volume Provisioning; this claim must bind to the local persistent volume defined above.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pv-claim
  namespace: nexus-repo-mgr
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: <size> # E.g. 100Gi

Deployment YAML - Kustomize 

Use this deployment.yaml when using Kustomize.

Note the following important information:

  • replicas is set to 1.
  • volumeMounts are specified for the Nexus Repository license, the node's local volume, and the logback config map for consolidating task logs into one log file.
  • Docker repositories will need additional ports exposed matching your repository connector configuration.
  • The environment variable NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP (set in the env section) ensures that the Elasticsearch index is rebuilt on pod/node restart only when it is empty.
  • The deployment YAML includes three busybox sidecar containers that tail the Nexus Repository logs (request, audit, and task logs) to stdout so that they can be automatically pushed to Azure Monitor. See Kubernetes documentation for more information on the sidecar pattern for accessing logs.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nxrm-deployment-nexus
  namespace: nexus-repo-mgr
  labels:
    app: nxrm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nxrm
  template:
    metadata:
      labels:
        app: nxrm
    spec:
      initContainers:
        # chown nexus-data to 'nexus' user and init log directories/files for a new pod 
        # otherwise the side car containers will crash a couple of times and backoff whilst waiting 
        # for nxrm-app to start and this increases the total start up time.
        - name: chown-nexusdata-owner-to-nexus-and-init-log-dir
          image: busybox:1.33.1
          command: [/bin/sh]
          args:
            - -c
            - >-
              mkdir -p /nexus-data/etc/logback &&
              mkdir -p /nexus-data/log/tasks &&
              mkdir -p /nexus-data/log/audit &&
              touch -a /nexus-data/log/tasks/allTasks.log &&
              touch -a /nexus-data/log/audit/audit.log &&
              touch -a /nexus-data/log/request.log &&
              chown -R '200:200' /nexus-data
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data      
      containers:
      - name: nxrm-app
        image: sonatype/nexus3:3.37.0
        securityContext:
          runAsUser: 200
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8081
        env:
        - name: DB_NAME
          value: <db-name>
        - name: DB_HOST
          value: <db_host> # e.g. <db_server_name>.postgres.database.azure.com
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: db-password
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: db-user              
        - name: NEXUS_SECURITY_RANDOMPASSWORD
          value: "false"
        - name: NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP
          value: "true"
        - name: INSTALL4J_ADD_VM_PARAMS
          value: "-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m -Dnexus.licenseFile=/nxrm-license/nxrm-license.lic \
          -Dnexus.datastore.enabled=true -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs \
          -Dnexus.datastore.nexus.jdbcUrl=jdbc:postgresql://${DB_HOST}:5432/${DB_NAME}?sslmode=require \
          -Dnexus.datastore.nexus.username=${DB_USER}@<db_server_name> \
          -Dnexus.datastore.nexus.password=${DB_PASSWORD}"
        volumeMounts:
          - mountPath: /nxrm-license
            name: license-volume
          - name: nexusdata
            mountPath: /nexus-data
          - name: logback-tasklogfile-override
            mountPath: /nexus-data/etc/logback/logback-tasklogfile-appender-override.xml
            subPath: logback-tasklogfile-appender-override.xml
      - name: request-log
        image: busybox:1.33.1
        args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/request.log']
        volumeMounts:
          - name: nexusdata
            mountPath: /nexus-data
      - name: audit-log
        image: busybox:1.33.1
        args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/audit/audit.log']
        volumeMounts:
          - name: nexusdata
            mountPath: /nexus-data
      - name: tasks-log
        image: busybox:1.33.1
        args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/tasks/allTasks.log']
        volumeMounts:
          - name: nexusdata
            mountPath: /nexus-data             
      volumes:
      - name: nexusdata
        persistentVolumeClaim:
          claimName: local-pv-claim
      - name: license-volume
        configMap:
          name: nxrm-license
      - name: logback-tasklogfile-override
        configMap:
          name: nxrm-logback-tasklogfile-override  
          items:
            - key: logback-tasklogfile-appender-override.xml
              path: logback-tasklogfile-appender-override.xml


Deployment YAML - Secrets Store CSI Driver 

Use this deployment.yaml when using the Secrets Store CSI driver.

Note the following important information:

  • replicas is set to 1.
  • volumeMounts are specified for the Nexus Repository license, the node's local volume, and the logback config map for consolidating task logs into one log file.
  • Docker repositories will need additional ports exposed matching your repository connector configuration.
  • The environment variable NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP (set in the env section) ensures that the Elasticsearch index is rebuilt on pod/node restart only when it is empty.
  • The deployment YAML includes three busybox sidecar containers that tail the Nexus Repository logs (request, audit, and task logs) to stdout so that they can be automatically pushed to Azure Monitor. See Kubernetes documentation for more information on the sidecar pattern for accessing logs.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nxrm-deployment-nexus
  namespace: nexus-repo-mgr
  labels:
    app: nxrm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nxrm
  template:
    metadata:
      labels:
        app: nxrm
    spec:
      initContainers:
        # chown nexus-data to 'nexus' user and init log directories/files for a new pod
        # otherwise the side car containers will crash a couple of times and backoff whilst waiting
        # for nxrm-app to start and this increases the total start up time.
        - name: chown-nexusdata-owner-to-nexus-and-init-log-dir
          image: busybox:1.33.1
          command: [/bin/sh]
          args:
            - -c
            - >-
              mkdir -p /nexus-data/etc/logback &&
              mkdir -p /nexus-data/log/tasks &&
              mkdir -p /nexus-data/log/audit &&
              touch -a /nexus-data/log/tasks/allTasks.log &&
              touch -a /nexus-data/log/audit/audit.log &&
              touch -a /nexus-data/log/request.log &&
              chown -R '200:200' /nexus-data
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
      containers:
        - name: nxrm-app
          image: sonatype/nexus3:3.37.0
          securityContext:
            runAsUser: 200
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
          env:
            - name: DB_NAME
              value: <db-name>
            - name: DB_HOST
              value: <db_host> # e.g. <db_server_name>.postgres.database.azure.com
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: db-password
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: db-user
            - name: NEXUS_SECURITY_RANDOMPASSWORD
              value: "false"
            - name: NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP
              value: "true"
            - name: INSTALL4J_ADD_VM_PARAMS
              value: "-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m \
          -Dnexus.licenseFile=/nxrm-secrets/nxrm-license -Dnexus.datastore.enabled=true -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs \
          -Dnexus.datastore.nexus.jdbcUrl=jdbc:postgresql://${DB_HOST}:5432/${DB_NAME}?sslmode=require \
          -Dnexus.datastore.nexus.username=${DB_USER}@<db_server_name> \
          -Dnexus.datastore.nexus.password=${DB_PASSWORD}"
          volumeMounts:
            - mountPath: /nxrm-secrets
              name: nxrm-secrets
            - name: nexusdata
              mountPath: /nexus-data
            - name: logback-tasklogfile-override
              mountPath: /nexus-data/etc/logback/logback-tasklogfile-appender-override.xml
              subPath: logback-tasklogfile-appender-override.xml
        - name: request-log
          image: busybox:1.33.1
          args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/request.log']
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
        - name: audit-log
          image: busybox:1.33.1
          args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/audit/audit.log']
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
        - name: tasks-log
          image: busybox:1.33.1
          args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/tasks/allTasks.log']
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
      volumes:
        - name: nexusdata
          persistentVolumeClaim:
            claimName: local-pv-claim
        - name: nxrm-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "nxrm-nexus-license-secret"
        - name: logback-tasklogfile-override
          configMap:
            name: nxrm-logback-tasklogfile-override
            items:
              - key: logback-tasklogfile-appender-override.xml
                path: logback-tasklogfile-appender-override.xml

Services YAML

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: nexus-repo-mgr
  name: ingress-nxrm-nexus
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nxrm-service-nexus
            port:
              number: 80
---              
apiVersion: v1
kind: Service
metadata:
  name: nxrm-service-nexus
  namespace: nexus-repo-mgr
  labels:
    app: nxrm
spec:
  type: NodePort
  selector:
    app: nxrm
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8081
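
After applying the services YAML, you can confirm the service and retrieve the ingress address with commands such as:

kubectl get service nxrm-service-nexus -n nexus-repo-mgr
kubectl get ingress ingress-nxrm-nexus -n nexus-repo-mgr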

You can also extend the services YAML with the following resources for Docker port configuration:

Ingress for Docker Connector (Optional) 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: nexus-repo-mgr
  name: ingress-nxrm-docker
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nxrm-service-docker
                port:
                  number: 9090

Nodeport for Docker Connector (Optional) 

apiVersion: v1
kind: Service
metadata:
  name: nxrm-service-docker
  namespace: nexus-repo-mgr
  labels:
    app: nxrm
spec:
  type: NodePort
  selector:
    app: nxrm
  ports:
    - name: docker-connector
      protocol: TCP
      port: 9090
      targetPort: 9090
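
As a usage sketch, assuming you have configured a Docker repository in Nexus Repository with an HTTP connector on port 9090 to match the service above, Docker clients would address the repository through the ingress on that port (the address below is a placeholder):

docker login <ingress_address>:9090
docker pull <ingress_address>:9090/<image>:<tag>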