Configuring Hazelcast (Legacy)

Note

High Availability Clustering is a legacy feature that only works with the OrientDB database.

All new deployments should use one of our new high availability or resilient deployment options described in Resiliency and High Availability.

Network Preparation

The nodes in a Nexus Repository Manager cluster need to be able to communicate with each other over TCP/IP. For cluster communications, Nexus Repository Manager uses a range of ports starting at 5701. Each node in the cluster requires one additional port from this range to be available; by default, these ports are bound sequentially.

For example, in a three-node cluster, each of the nodes will need to have ingress opened on ports 5701, 5702, and 5703. The nodes will use ephemeral ports, randomly selected by the operating system, for outbound (egress) communications. If inbound or outbound communications are blocked by a firewall or other network appliance, the ports used for cluster communications can be customized in the Hazelcast configuration.

Note

For more information on customizing Hazelcast and the ports it uses, please see the documentation for Hazelcast cluster configuration.

The nodes in a Nexus Repository Manager cluster will also replicate database transactions among themselves. The database requires that ports 2424 and 2434 be open for ingress and egress to and from every other node in the cluster.
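As a concrete illustration, on a Linux host running firewalld the ingress ports for a three-node cluster could be opened like this (a sketch only; the port numbers are the defaults described above, and your firewall tooling may differ):

```shell
# Open the default Hazelcast cluster ports for a three-node cluster
# (5701-5703) plus the database replication ports (2424 and 2434).
# Adjust the range if you customized the Hazelcast configuration.
firewall-cmd --permanent --add-port=5701-5703/tcp
firewall-cmd --permanent --add-port=2424/tcp
firewall-cmd --permanent --add-port=2434/tcp
firewall-cmd --reload
```

Run the same commands on every node so that each node can reach every other node on these ports.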

Node Discovery

Hazelcast has multiple methods for discovering other nodes. In the default configuration, Nexus Repository Manager uses multicast to discover other Nexus Repository Manager nodes. This is done to simplify cluster configuration. However, multicast may not work reliably in all network environments.

To customize discovery, copy NEXUS_HOME/etc/fabric/hazelcast-network-default.xml to $data-dir/etc/fabric/hazelcast-network.xml and adjust the settings enclosed in the <join> tag.

Warning

In NXRM 3.6.1 or earlier, these changes must be applied directly to NEXUS_HOME/etc/fabric/hazelcast.xml, where NEXUS_HOME is where Nexus Repository Manager is installed.
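The copy step described above can be sketched as follows. The example runs against scratch directories so it is safe to try anywhere; in a real installation, NEXUS_HOME is the directory where Nexus Repository Manager is installed and DATA_DIR stands in for $data-dir:

```shell
# Demonstration on scratch directories; substitute your real
# NEXUS_HOME and $data-dir paths in an actual installation.
NEXUS_HOME=$(mktemp -d)
DATA_DIR=$(mktemp -d)

# Stand-in for the default file shipped with the application.
mkdir -p "$NEXUS_HOME/etc/fabric"
printf '<hazelcast/>\n' > "$NEXUS_HOME/etc/fabric/hazelcast-network-default.xml"

# The actual step: copy the default network configuration into the data
# directory as hazelcast-network.xml, then adjust the <join> settings there.
mkdir -p "$DATA_DIR/etc/fabric"
cp "$NEXUS_HOME/etc/fabric/hazelcast-network-default.xml" \
   "$DATA_DIR/etc/fabric/hazelcast-network.xml"
```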

Multicast Discovery

Multicast is the default discovery method and is recommended unless it is not supported on your network. To test multicast connectivity between nodes, we recommend using iPerf. iPerf is freely available for Windows, Linux, and macOS under the BSD license.

To test multicast connectivity between two nodes, each node will need to have iPerf installed (note: iPerf3 does not support multicast client testing). During testing, one node acts as the "server" while the other acts as the "client." We suggest testing each node in both roles to verify proper two-way multicast communication.

Testing multicast from the server side

iperf -s -u -B 224.2.2.3 -p 54327 -i 1

Testing multicast from the client side

iperf -c 224.2.2.3 -p 54327 -u -i 1

The address and port 224.2.2.3:54327 were chosen because they are the default address and port used by Nexus Repository Manager during node discovery. When the client is able to successfully connect and communicate with the server node, the client outputs a set of brief diagnostic messages indicating how much data was sent and received. The following is sample output from a client in a successful session:

Sample Output

Client connecting to 224.2.2.3, UDP port 54327
Sending 1470 byte datagrams
Setting multicast TTL to 1
UDP buffer size:  208 KByte (default)

[  3] local 10.10.0.102 port 33743 connected with 224.2.2.3 port 54327
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  3.0- 4.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  4.0- 5.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  5.0- 6.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  6.0- 7.0 sec   129 KBytes  1.06 Mbits/sec
[  3]  7.0- 8.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  8.0- 9.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  9.0-10.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 893 datagrams

The advantage of using multicast for Nexus Repository Manager node discovery is that nodes can be added to and removed from the cluster without cluster administrators needing to make configuration changes. However, routers may not be able to route multicast requests properly between subnets, or multicast may be disabled altogether. In these situations the cluster configuration can be done manually. If multicast is not available, either AWS Discovery (if you are running NXRM inside AWS on EC2 instances) or TCP/IP Discovery can be used.

For a production instance, we advise against using the default address and port (224.2.2.3:54327). This prevents another cluster on the same network that uses the default configuration from unintentionally joining your existing cluster.
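For example, a non-default multicast group and port can be set inside the <join> section of hazelcast-network.xml (a sketch; 224.2.2.4 and 54328 are arbitrary example values, and any unused group/port pair on your network will do):

```xml
<join>
  <!-- Example only: a custom multicast group and port for this cluster. -->
  <multicast enabled="true">
    <multicast-group>224.2.2.4</multicast-group>
    <multicast-port>54328</multicast-port>
  </multicast>
  <tcp-ip enabled="false"/>
  <aws enabled="false"/>
</join>
```

Use the same group and port on every node in the cluster.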

AWS Discovery

Nexus Repository Manager can be deployed on cloud-computing services, such as Amazon Web Services (AWS), where multicast discovery is not supported. In these environments, Hazelcast AWS discovery is recommended.

AWS Environment

The NXRM servers need permissions to find other nodes, granted through AWS Identity and Access Management (IAM). The following configuration settings should be used:

  1. The EC2 instances running NXRM should be assigned an InstanceProfile.

  2. The InstanceProfile should have an InstanceRole with a policy granting the permission ec2:DescribeInstances on all resources.

In CloudFormation, the role would look similar to this:

"InstanceRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {
          "Service": "ec2.amazonaws.com"
        },
        "Action": [
          "sts:AssumeRole"
        ]
      }]
    },
    "Policies": [{
      "PolicyName": "hazelcastDiscovery",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeInstances"
          ],
          "Resource": ["*"]
        }]
      }
    }]
  }
}

NXRM Configuration for AWS Discovery

To configure Hazelcast for automatic node discovery, you need the IAM role name, AWS region, and security group of the EC2 instances. Find the <join> tag in $data-dir/etc/fabric/hazelcast-network.xml. Then, edit the file for each node:

  1. Change the value in <multicast enabled="true"> to "false".

  2. Update the <discovery-strategies> section as explained below.

  3. Save the file.

  4. Reboot each node in the cluster.

The $data-dir/etc/fabric/hazelcast-network.xml file with the modified properties will look similar to this:

    <join>
      <!-- deactivating other discoveries -->
      <multicast enabled="false"/>
      <tcp-ip enabled="false"/>
      <aws enabled="false"/>

      <discovery-strategies>
        <discovery-strategy enabled="true" class="com.hazelcast.aws.AwsDiscoveryStrategy">
          <properties>
            <!--
             | Required: use the command 'aws iam list-instance-profiles' on the EC2 instance
             | to locate the name of the role used by the IAM Instance Profile you created previously
             -->
            <property name="iam-role">EC2_IAM_ROLE_NAME</property>

            <!-- Required: set this to the region where your EC2 instances running Nexus Repository Manager are located -->
            <property name="region">us-west-1</property>

            <!--
             | The next few options let you choose how the nodes are enumerated. You can specify:
             | * just a security group name, if the security group only contains Nexus Repository Manager hosts,
             | * or a tag-key/tag-value pair, if you have tagged the EC2 instances,
             | * or a combination of the two, if the security group contains other hosts as well.
             -->
            <property name="security-group-name">EC2_SECURITY_GROUP_NAME</property>

            <!-- example tag only; you are free to use whatever tag naming convention you need -->
            <property name="tag-key">Purpose</property>
            <property name="tag-value">Nexus Repository Manager</property>
          </properties>
        </discovery-strategy>
      </discovery-strategies>
    </join>

Note

For more information on configuring Hazelcast in an AWS environment, please see the documentation for the Hazelcast AWS EC2 discovery plugin.

TCP/IP Discovery

When multicast discovery and AWS discovery are not available, TCP/IP discovery can be configured. TCP/IP discovery requires the IP address of each node to be included in the configuration. Please see the documentation for Hazelcast cluster configuration. Find the <join> tag in $data-dir/etc/fabric/hazelcast-network.xml. Then, edit the file for each node:

  1. Add or set the following in $data-dir/etc/nexus.properties: nexus.hazelcast.discovery.isEnabled=false

  2. Update the <join> section as explained below.

  3. Save the file.

  4. Reboot each node in the cluster.

In this case the $data-dir/etc/fabric/hazelcast-network.xml file with the modified properties will look similar to this:

<join>
    <multicast enabled="false"/>
    <tcp-ip enabled="true">
        <member-list>
          <member>10.0.1.10</member>
          <member>10.0.1.11</member>
          <member>10.0.1.12</member>
        </member-list>
    </tcp-ip>
    <aws enabled="false"/>
</join>