Initial Setup - High Availability Clustering (Legacy)

Note

High Availability Clustering is a legacy feature that only works on the Orient database.

All new deployments should use one of our new high availability or resilient deployment options described in Resiliency and High Availability.

This section will cover the steps for enabling High Availability Clustering in your Nexus Repository Manager deployment.

Note

This document assumes that you have completely read and prepared your choices for Configuring Storage, Configuring Hazelcast, and Designing your Cluster Backup/Restore Process.

Do not proceed until you have a complete plan ready for all three.

Once you begin, focus on stabilizing one single node at a time.

Node Deployment Steps for High Availability Clustering

First Node

  1. Follow the usual Installation Methods. Create the file sonatype-work/nexus3/etc/nexus.properties with the following contents:

    # Jetty section
    application-port=8081
    application-host=0.0.0.0
    nexus-args=${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-requestlog.xml
    nexus-context-path=/
    
    # Nexus section
    nexus-edition=nexus-pro-edition
    nexus-features=\
     nexus-pro-feature
    
    nexus.clustered=true
    # nexus.licenseFile is only necessary for the first run
    # replace /path/to/your/sonatype-license.lic with the path to your license, and ensure the user running Nexus Repository Manager can read it
    nexus.licenseFile=/path/to/your/sonatype-license.lic
  2. Next, deploy your chosen Hazelcast configuration to the path sonatype-work/nexus3/etc/fabric/hazelcast-network.xml (a minimal example sketch appears at the end of this section). For NXRM 3.6.1 or earlier, these changes must be made in NEXUS_HOME/etc/fabric/hazelcast.xml, where NEXUS_HOME is where Nexus Repository Manager is installed.

  3. Start Nexus Repository Manager. Log in as an administrator, then visit the System → Nodes screen. You should see your first node, like this example:

    (Screenshot: Nodes screen listing the first node)

You may want to assign a friendly Node Name to each node in your cluster to make it easier to identify. Without a node name, screens that refer to individual nodes display the unique ID labeled Node Identity in the screenshot above. Click a node's entry in the list to set its name.

When Nexus Repository Manager is started with nexus.clustered=true and a PRO license, it will not create the default blobstore or initial example repositories. You are free to set these up now, or later as you add nodes.
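
The exact contents of hazelcast-network.xml depend on the discovery mechanism you chose while Configuring Hazelcast. Purely as a point of reference, a minimal sketch using static TCP/IP discovery might look like the following; the member addresses are placeholders, and you should derive your real file from the Configuring Hazelcast documentation:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sketch only: static TCP/IP discovery with placeholder
         member addresses. Derive your real configuration from the
         Configuring Hazelcast documentation. -->
    <hazelcast xmlns="http://www.hazelcast.com/schema/config">
      <network>
        <join>
          <!-- Disable multicast and list every cluster member explicitly -->
          <multicast enabled="false"/>
          <tcp-ip enabled="true">
            <member>10.0.0.11</member>
            <member>10.0.0.12</member>
            <member>10.0.0.13</member>
          </tcp-ip>
        </join>
      </network>
    </hazelcast>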

Second Node and Beyond

Warning

Do not copy the sonatype-work directory to your additional nodes. Each node must be allowed to initialize its own private sonatype-work directory; data generated on first run is unique to each instance.

Starting a second node from a copy of the sonatype-work directory will prevent the nodes from correctly forming a cluster.

Again follow the usual Installation Methods. Before starting Nexus Repository Manager, repeat the steps you performed on the first node:

  1. Create the file sonatype-work/nexus3/etc/nexus.properties with the same contents as the first node (note that you only need to choose a different port number if you're running two or more nodes on a single host; see the sketch after these steps):

    # Jetty section
    application-port=8081
    application-host=0.0.0.0
    nexus-args=${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-requestlog.xml
    nexus-context-path=/
    
    # Nexus section
    nexus-edition=nexus-pro-edition
    nexus-features=\
     nexus-pro-feature
    
    nexus.clustered=true
    # nexus.licenseFile is only necessary for the first run
    # replace /path/to/your/sonatype-license.lic with the path to your license, and ensure the user running Nexus Repository Manager can read it
    nexus.licenseFile=/path/to/your/sonatype-license.lic
  2. Deploy your chosen Hazelcast configuration to the path sonatype-work/nexus3/etc/fabric/hazelcast-network.xml.

  3. Start Nexus Repository Manager
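
If, and only if, you run more than one node on the same host (for example, while testing), each instance needs its own HTTP port. A minimal override in the second instance's nexus.properties could look like this, where 8082 is an arbitrary example value:

    # Only needed when several nodes share one host:
    # give this instance its own HTTP port
    application-port=8082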

After each node joins the cluster, confirm it is visible in the System → Nodes screen. Set a node name for each node as desired. Repeat this section for each node you wish to join your High Availability Cluster until all are running.

Enabling High Availability on an existing Nexus Repository Manager Deployment

If you already have a single-node Nexus Repository Manager deployment, you will still need to read and prepare your choices for Configuring Storage, Configuring Hazelcast, and Designing your Cluster Backup/Restore Process. You will also need to upgrade your single-node deployment to a version of Nexus Repository Manager that supports High Availability Clustering (Legacy) before you attempt to establish a cluster. All nodes within a cluster must be running the exact same version of Nexus Repository Manager.

You may additionally have to design a strategy for migrating the content of your existing blob stores to a shared filesystem accessible to all nodes in the cluster. Moving blobs from one blob store to another can be achieved using Dynamic Storage.

Blob Store Content Migration

All blob stores used by HA-C nodes must be shared. If you need to move blobs to new blob stores in preparation for sharing between nodes, first perform that migration in a single-node configuration using the Dynamic Storage operations.
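
"Shared" here means every node sees the same blob store content at the same filesystem path. As an illustration only, a hypothetical /etc/fstab entry (assuming an NFS server named nfs.example.com exporting /exports/nexus-blobs; all names are placeholders) could mount that storage identically on each node:

    # Hypothetical NFS mount, identical on every node in the cluster
    nfs.example.com:/exports/nexus-blobs  /mnt/nexus-blobs  nfs  rw,hard  0 0

Each node would then reference the blob store via the common path /mnt/nexus-blobs.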

Configuration

Once you have all the pieces in place, enable high availability by performing the steps listed in the Second Node and Beyond section above.