Single-Node Cloud Resilient Deployment Example Using AWS
Only available in Nexus Repository Pro.
We recognize that Nexus Repository is mission-critical to your business. With an Amazon Web Services (AWS)-based Nexus Repository deployment, your Nexus Repository instance remains available even if disaster strikes: whether a single service or an entire data center goes down, you still have access to Nexus Repository.
This section provides instructions and explanations for setting up a resilient AWS-based Nexus Repository deployment like the one illustrated below.
A similar architecture could be used for other cloud or on-premises deployments with Kubernetes and file-based or other supported blob storage. If you would like to manage your own deployment, see Single Data Center On-Premises Deployment Example Using Kubernetes.
Use Cases
This reference architecture is designed to protect against the following scenarios:
- An AWS Availability Zone (AZ) outage within a single AWS region
- A node/server (i.e., EC2) failure
- A Nexus Repository service failure
You would use this architecture if you fit the following profiles:
- You are a Nexus Repository Pro user looking for a resilient Nexus Repository deployment option in AWS in order to reduce downtime
- You would like to achieve automatic failover and fault tolerance as part of your deployment goals
- You already have an Elastic Kubernetes Service (EKS) cluster set up as part of your deployment pipeline for your other in-house applications and would like to leverage the same for your Nexus Repository deployment
- You have migrated or set up Nexus Repository with an external PostgreSQL database and want to fully reap the benefits of an externalized database setup
- You do not need High Availability (HA) active-active mode
Requirements
In order to set up an environment like the one illustrated above and described in this section, you will need the following:
- A Nexus Repository Pro license
- Nexus Repository 3.33.0 or later
- An AWS account with permissions for accessing the following AWS services:
- Elastic Kubernetes Service (EKS)
- Relational Database Service (RDS) for PostgreSQL
- Application Load Balancer (ALB)
- CloudWatch
- Simple Storage Service (S3)
- Secrets Manager
Limitations
In this reference architecture, a maximum of one Nexus Repository instance is running at a time. Having more than one Nexus Repository instance replica will not work.
Setting Up the Architecture
AWS EKS Cluster
Nexus Repository runs on a single-node AWS EKS cluster spread across two AZs within a single AWS region. After you configure EKS to run only one instance (by setting the min, max, and desired nodes to one), EKS ensures that only one instance of Nexus Repository runs at any one time in the entire cluster. If something causes the instance or the node to go down, another will be spun up. If an AZ becomes unavailable, AWS spins up a new node in the secondary AZ with a new pod running Nexus Repository.
Begin by setting up the EKS cluster in the AWS web console. AWS provides instructions for managed nodes (i.e., EC2) in their documentation.
Your EKS cluster should have a max node count of one spread across two AZs in an AWS region.
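If you prefer the command line to the AWS web console, a minimal eksctl sketch satisfying these constraints could look like the following; the cluster name, node group name, and instance type are illustrative assumptions, and the 120 GB root volume size anticipates the local persistent volume sizing discussed later in this section:

eksctl create cluster \
  --name nxrm-cluster \
  --region us-east-1 \
  --zones us-east-1a,us-east-1b \
  --nodegroup-name nxrm-nodes \
  --node-type m5.xlarge \
  --nodes 1 \
  --nodes-min 1 \
  --nodes-max 1 \
  --node-volume-size 120 \
  --managed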
AWS Aurora PostgreSQL Cluster
An Aurora PostgreSQL cluster containing three database instances (one writer and two replicas) spread across three AZs in the region where you've deployed your EKS cluster provides an external database for Nexus Repository configurations and component metadata.
AWS provides instructions on creating an Aurora database cluster in their documentation.
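As a rough AWS CLI sketch of the same setup (the identifiers, instance class, and networking parameters below are placeholders; see the AWS documentation for the full set of options):

# Create the Aurora PostgreSQL cluster
aws rds create-db-cluster \
  --db-cluster-identifier nxrm-db-cluster \
  --engine aurora-postgresql \
  --database-name <db-name> \
  --master-username <db-user> \
  --master-user-password <db-password> \
  --db-subnet-group-name <subnet-group-spanning-three-azs> \
  --vpc-security-group-ids <security-group-id>

# Create the writer and two replicas; repeat for nxrm-db-2 and nxrm-db-3
aws rds create-db-instance \
  --db-instance-identifier nxrm-db-1 \
  --db-cluster-identifier nxrm-db-cluster \
  --engine aurora-postgresql \
  --db-instance-class db.r5.large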
AWS Load Balancer Controller
The AWS Load Balancer Controller allows you to provision an AWS ALB via an Ingress type specified in your Kubernetes deployment YAML file. This load balancer, which is provisioned in one of the public subnets specified when you create the cluster, allows you to reach the Nexus Repository pod from outside the EKS cluster. This is necessary because the nodes on which EKS runs the Nexus Repository pod are in private subnets and are otherwise unreachable from outside the EKS cluster.
Follow the AWS documentation to deploy the AWS LBC to your EKS cluster.
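If you install the controller with Helm, the commands typically look like the following sketch; it assumes you have already created the required IAM policy and the aws-load-balancer-controller service account as described in the AWS documentation:

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=<your-cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller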
Kubernetes Namespace
A namespace allows you to isolate groups of resources in a single cluster. Resource names must be unique within a namespace, but not across namespaces. See the Kubernetes documentation about namespaces for more information.
To create a namespace, use a command like the one below with the kubectl command-line tool:
kubectl create namespace <namespace>
AWS Secrets Manager
AWS Secrets Manager stores your Nexus Repository Pro license as well as the database username, password, and host address. In the event of a failover, Secrets Manager can retrieve the license when the new Nexus Repository container starts. This way, your Nexus Repository always starts in Pro mode.
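For example, you might store the database credentials as a single JSON secret whose keys match the jmesPath entries used in the secrets YAML below; the secret name here matches the sample ARN in that YAML, and the values are placeholders:

aws secretsmanager create-secret \
  --name nxrm-rds-cred-nexus-27386 \
  --secret-string '{"username": "<db-user>", "password": "<db-password>", "host": "<aurora-writer-endpoint>"}' \
  --region <region>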
Use the Secrets Store CSI Driver with the AWS provider to mount the license secret, which is stored in AWS Secrets Manager, as a volume in the pod running Nexus Repository.
Include the --syncSecret.enabled=true flag when running the Helm command that installs the Secrets Store CSI Driver. This ensures that secrets are automatically synced from AWS Secrets Manager into the Kubernetes secrets specified in the secrets YAML.
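A sketch of the corresponding Helm installation, plus the AWS provider for the driver (the release name csi-secrets-store and the kube-system namespace are conventional choices, not requirements):

helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system \
  --set syncSecret.enabled=true

# Install the AWS provider for the Secrets Store CSI Driver
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml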
Note that only the AWS CLI supports storing a binary license file. AWS provides documentation for the --secret-binary argument in the CLI.
The command will look as follows:
aws secretsmanager create-secret --name supersecretlicense --secret-binary fileb://super-secret-license-file.lic --region <region>
This will return a response such as this:
{ "VersionId": "4cd22597-f0a9-481c-8ccd-683a5210eb2b", "Name": "supersecretlicense", "ARN": "arn:aws:secretsmanager:<region>:<account id>:secret:supersecretlicense-abcdef" }
You will put the ARN value in the secrets YAML.
If you update the license (e.g., when renewing your license and receiving a new license binary), you'll need to restart the Nexus Repository pod after uploading the new license to AWS Secrets Manager. The AWS CLI command for updating a secret is put-secret-value.
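For example, using the secret name from the earlier create-secret example (the renewed license file name is a placeholder), followed by a restart of the Nexus Repository deployment:

aws secretsmanager put-secret-value \
  --secret-id supersecretlicense \
  --secret-binary fileb://renewed-license-file.lic \
  --region <region>

kubectl rollout restart deployment/nxrm-deployment-nexus-27385 -n <namespace>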
AWS CloudWatch
When running Nexus Repository on Kubernetes, it is possible for it to run on different nodes in different AZs over the course of the same day. In order to be able to access Nexus Repository's logs from nodes in all AZs, we recommend that you externalize your logs to CloudWatch. Follow AWS documentation to set up Fluentbit for gathering logs from your EKS cluster.
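The Fluentbit DaemonSet shown later reads its region, cluster name, and HTTP settings from a fluent-bit-cluster-info ConfigMap in the amazon-cloudwatch namespace. A sketch of creating that ConfigMap, following the AWS Container Insights setup (the values shown are illustrative):

kubectl create namespace amazon-cloudwatch

kubectl create configmap fluent-bit-cluster-info \
  --from-literal=cluster.name=<your-cluster-name> \
  --from-literal=logs.region=<region> \
  --from-literal=http.server=Off \
  --from-literal=http.port=2020 \
  --from-literal=read.head=Off \
  --from-literal=read.tail=On \
  -n amazon-cloudwatch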
When first installed, Nexus Repository sends task logs to separate files. Those files do not exist at startup and only exist as the tasks are being run. In order to facilitate sending logs to CloudWatch, you need the log file to exist when Nexus Repository starts up. The nxrm-logback-tasklogfile-override.yaml shown below sets this up.
Once Fluentbit is set up and running on your EKS cluster, apply the fluent-bit.yaml shown below to configure it to stream Nexus Repository's logs to CloudWatch. The specified Fluentbit YAML sends the logs to CloudWatch log streams within a nexus-logs log group.
AWS also provides documentation for setting up and using CloudWatch.
Local Persistent Volume and Local Persistent Volume Claim
Using a local persistent volume allows Elasticsearch indexes to survive pod restarts on a particular node. When used in conjunction with the NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP flag, this ensures that the Elasticsearch index is only rebuilt if it is empty when the Nexus Repository pod starts. This means that the only time the Elasticsearch index is rebuilt is the first time that a Nexus Repository pod starts on a node.
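The sample deployment YAML later in this section does not set this flag. If you want to set it explicitly, the sketch below shows the kind of env entry you could add to the nxrm-app container; the value shown is an assumption, so check the Nexus Repository documentation for the exact semantics in your version:

- name: NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP
  value: "true"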
Storage space for the local persistent volume is allocated from your root EBS volume (i.e., the EBS volume attached to the provisioned node), so you must ensure that the size you specify for your EKS node's root EBS volume is sufficient for your usage. Make the root EBS volume larger than the size specified in the local persistent volume claim so that there is some spare storage capacity on the node; for example, for a local persistent volume claim of 100 gigabytes, you could make the root EBS volume 120 gigabytes.
See the sample storage class YAML, local persistent volume YAML, and local persistent volume claim YAML below.
AWS S3
Located in the same region as your EKS deployment, AWS S3 provides your object (blob) storage. AWS provides detailed documentation for S3 on their website.
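If you create the bucket from the CLI, a minimal sketch looks like the following; the bucket name is a placeholder, and you omit --create-bucket-configuration when creating the bucket in us-east-1. You then configure an S3 blob store in Nexus Repository that points at this bucket:

aws s3api create-bucket \
  --bucket <your-nxrm-blobstore-bucket> \
  --region <region> \
  --create-bucket-configuration LocationConstraint=<region>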
Sample Kubernetes YAML Files
You can use the sample YAML files in this section to help set up the YAMLs you will need for a resilient deployment. Ensure you have filled out the YAML files with appropriate information for your deployment.
Before running the YAML files in this section, you must first create a namespace as detailed below.
Then, you must run your YAML files in the order below; a sample kubectl sequence follows this list:
- Storage Class YAML
- Secrets YAML as mentioned in Secrets Manager setup
- Fluent-bit Setup as mentioned in the CloudWatch section
- nxrm-logback-tasklogfile-override YAML
- Local Persistent Volume YAML
- Local Persistent Volume Claim YAML
- Deployment YAML
- Services YAML
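Assuming you save each sample below to a correspondingly named file (the file names here are illustrative), the sequence would look like this:

kubectl apply -f storage-class.yaml
kubectl apply -f secrets.yaml
kubectl apply -f fluent-bit.yaml
kubectl apply -f nxrm-logback-tasklogfile-override.yaml
kubectl apply -f local-persistent-volume.yaml
kubectl apply -f local-persistent-volume-claim.yaml
kubectl apply -f deployment.yaml
kubectl apply -f services.yaml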
Note that the resources created by these YAMLs are not in the default namespace; the samples use a namespace named nxrm-nexus-27385, except for the Fluentbit resources, which live in the amazon-cloudwatch namespace.
Create Namespace
Before running the YAML files below, you must create a namespace. To create a namespace, use a command like the one below:
kubectl create namespace <namespace>
Storage Class YAML
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  namespace: nxrm-nexus-27385
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Secrets YAML
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  namespace: nxrm-nexus-27385
  name: nxrm-nexus-27385-nxrm-secret
spec:
  provider: aws
  secretObjects:
    - data:
        - key: db-user
          objectName: nxrm-db-user
        - key: db-password
          objectName: nxrm-db-password
        - key: db-host
          objectName: nxrm-db-host
      secretName: nxrm-db-secret
      type: Opaque
  parameters:
    objects: |
      - objectName: "arn:aws:secretsmanager:<region>:<account id>:secret:nxrm-license.lic-abcdef"
        objectAlias: nxrm-license.lic
      - objectName: "arn:aws:secretsmanager:<region>:<account id>:secret:nxrm-rds-cred-nexus-27386-abcdef"
        jmesPath:
          - path: "username"
            objectAlias: "nxrm-db-user"
          - path: "password"
            objectAlias: "nxrm-db-password"
          - path: "host"
            objectAlias: "nxrm-db-host"
fluent-bit.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: amazon-cloudwatch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-role
rules:
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
      - pods/logs
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-role
subjects:
  - kind: ServiceAccount
    name: fluent-bit
    namespace: amazon-cloudwatch
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: amazon-cloudwatch
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush                     5
        Log_Level                 info
        Daemon                    off
        Parsers_File              parsers.conf
        HTTP_Server               ${HTTP_SERVER}
        HTTP_Listen               0.0.0.0
        HTTP_Port                 ${HTTP_PORT}
        storage.path              /var/fluent-bit/state/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M

    @INCLUDE nexus-log.conf
    @INCLUDE nexus-request-log.conf
    @INCLUDE nexus-audit-log.conf
    @INCLUDE nexus-tasks-log.conf

  nexus-log.conf: |
    [INPUT]
        Name                tail
        Tag                 nexus.nexus-log
        Path                /var/log/containers/nxrm-deployment-nexus-27385*nxrm-nexus-27385_nxrm-app*.log
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     Off
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                kubernetes
        Match               nexus.nexus-log
        Kube_URL            https://kubernetes.default.svc:443
        Kube_Tag_Prefix     application.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off

    [OUTPUT]
        Name                cloudwatch_logs
        Match               nexus.nexus-log
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
        log_stream_prefix   ${HOST_NAME}-nexus.log-
        auto_create_group   true
        extra_user_agent    container-insights

  nexus-request-log.conf: |
    [INPUT]
        Name                tail
        Tag                 nexus.request-log
        Path                /var/log/containers/nxrm-deployment-nexus-27385*nxrm-nexus-27385_request-log*.log
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     Off
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                kubernetes
        Match               nexus.request-log
        Kube_URL            https://kubernetes.default.svc:443
        Kube_Tag_Prefix     application.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off

    [OUTPUT]
        Name                cloudwatch_logs
        Match               nexus.request-log
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
        log_stream_prefix   ${HOST_NAME}-request.log-
        auto_create_group   true
        extra_user_agent    container-insights

  nexus-audit-log.conf: |
    [INPUT]
        Name                tail
        Tag                 nexus.audit-log
        Path                /var/log/containers/nxrm-deployment-nexus-27385*nxrm-nexus-27385_audit-log*.log
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     Off
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                kubernetes
        Match               nexus.audit-log
        Kube_URL            https://kubernetes.default.svc:443
        Kube_Tag_Prefix     application.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off

    [OUTPUT]
        Name                cloudwatch_logs
        Match               nexus.audit-log
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
        log_stream_prefix   ${HOST_NAME}-audit.log-
        auto_create_group   true
        extra_user_agent    container-insights

  nexus-tasks-log.conf: |
    [INPUT]
        Name                tail
        Tag                 nexus.tasks-log
        Path                /var/log/containers/nxrm-deployment-nexus-27385*nxrm-nexus-27385_tasks-log*.log
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     Off
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                kubernetes
        Match               nexus.tasks-log
        Kube_URL            https://kubernetes.default.svc:443
        Kube_Tag_Prefix     application.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off

    [OUTPUT]
        Name                cloudwatch_logs
        Match               nexus.tasks-log
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
        log_stream_prefix   ${HOST_NAME}-tasks.log-
        auto_create_group   true
        extra_user_agent    container-insights

  parsers.conf: |
    [PARSER]
        Name                docker
        Format              json
        Time_Key            time
        Time_Format         %Y-%m-%dT%H:%M:%S.%LZ

    [PARSER]
        Name                syslog
        Format              regex
        Regex               ^(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key            time
        Time_Format         %b %d %H:%M:%S

    [PARSER]
        Name                container_firstline
        Format              regex
        Regex               (?<log>(?<="log":")\S(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
        Time_Key            time
        Time_Format         %Y-%m-%dT%H:%M:%S.%LZ

    [PARSER]
        Name                cwagent_firstline
        Format              regex
        Regex               (?<log>(?<="log":")\d{4}[\/-]\d{1,2}[\/-]\d{1,2}[ T]\d{2}:\d{2}:\d{2}(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
        Time_Key            time
        Time_Format         %Y-%m-%dT%H:%M:%S.%LZ
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: amazon-cloudwatch
  labels:
    k8s-app: fluent-bit
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluent-bit
  template:
    metadata:
      labels:
        k8s-app: fluent-bit
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: fluent-bit
          image: amazon/aws-for-fluent-bit:2.10.0
          imagePullPolicy: Always
          env:
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: logs.region
            - name: CLUSTER_NAME
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: cluster.name
            - name: HTTP_SERVER
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: http.server
            - name: HTTP_PORT
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: http.port
            - name: READ_FROM_HEAD
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: read.head
            - name: READ_FROM_TAIL
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: read.tail
            - name: HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CI_VERSION
              value: "k8s/1.3.7"
            # the below var is just to force DaemonSet restarts when changing configuration stored in ConfigMap above
            - name: FOO_VERSION
              value: "15"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 500m
              memory: 100Mi
          volumeMounts:
            # Please don't change below read-only permissions
            - name: fluentbitstate
              mountPath: /var/fluent-bit/state
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc/
            - name: runlogjournal
              mountPath: /run/log/journal
              readOnly: true
            - name: dmesg
              mountPath: /var/log/dmesg
              readOnly: true
      terminationGracePeriodSeconds: 120
      volumes:
        - name: fluentbitstate
          hostPath:
            path: /var/fluent-bit/state
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: fluent-bit-config
          configMap:
            name: fluent-bit-config
        - name: runlogjournal
          hostPath:
            path: /run/log/journal
        - name: dmesg
          hostPath:
            path: /var/log/dmesg
      serviceAccountName: fluent-bit
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        - operator: "Exists"
          effect: "NoExecute"
        - operator: "Exists"
          effect: "NoSchedule"
nxrm-logback-tasklogfile-override.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nxrm-logback-tasklogfile-override
  namespace: nxrm-nexus-27385
data:
  logback-tasklogfile-appender-override.xml: |
    <included>
      <appender name="tasklogfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${karaf.data}/log/tasks/allTasks.log</File>
        <filter class="org.sonatype.nexus.pax.logging.TaskLogsFilter" />
        <Append>true</Append>
        <encoder class="org.sonatype.nexus.pax.logging.NexusLayoutEncoder">
          <pattern>%d{"yyyy-MM-dd HH:mm:ss,SSSZ"} %-5p [%thread] %node %mdc{userId:-*SYSTEM} %c - %m%n</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>${karaf.data}/log/tasks/allTasks-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
          <maxHistory>1</maxHistory>
        </rollingPolicy>
      </appender>
    </included>
Local Persistent Volume YAML
You should not use Dynamic EBS Volume Provisioning as it will cause scheduling problems if EKS provisions the Nexus Repository pod and the EBS volume in different AZs. The EBS volume used must be the local volume attached as illustrated in the sample persistent volume YAML file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv
spec:
  capacity:
    storage: 120Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a
                - us-east-1b
Local Persistent Volume Claim YAML
The same warning against Dynamic EBS Volume Provisioning applies here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
  namespace: nxrm-nexus-27385
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 100Gi
Deployment YAML
The YAML below deploys Nexus Repository.
Note the following important information:
- replicas is set to 1.
- volumeMounts are specified for the Nexus Repository license, local persistent volume, and logback config map for consolidating task logs into one log file.
- Docker repositories will need additional ports exposed matching your repository connector configuration.
- The deployment YAML includes three busybox sidecar containers that tail the Nexus Repository logs from the local persistent volume and stream them to stdout on the worker node running the nxrm pod, where Fluentbit picks them up and sends them to CloudWatch. See the Kubernetes documentation for more information on the sidecar pattern for accessing logs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nxrm-deployment-nexus-27385
  namespace: nxrm-nexus-27385
  labels:
    app: nxrm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nxrm
  template:
    metadata:
      labels:
        app: nxrm
    spec:
      serviceAccountName: nxrm-nexus-27385-deployment-sa
      initContainers:
        # chown nexus-data to 'nexus' user and init log directories/files for a new pod
        # otherwise the sidecar containers will crash a couple of times and back off whilst waiting
        # for nxrm-app to start, and this increases the total startup time.
        - name: chown-nexusdata-owner-to-nexus-and-init-log-dir
          image: busybox:1.33.1
          command: [/bin/sh]
          args:
            - -c
            - >-
              mkdir -p /nexus-data/etc/logback &&
              mkdir -p /nexus-data/log/tasks &&
              mkdir -p /nexus-data/log/audit &&
              touch -a /nexus-data/log/tasks/allTasks.log &&
              touch -a /nexus-data/log/audit/audit.log &&
              touch -a /nexus-data/log/request.log &&
              chown -R '200:200' /nexus-data
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
      containers:
        - name: nxrm-app
          image: sonatype/nexus3:3.37.0
          securityContext:
            runAsUser: 200
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
          env:
            - name: DB_NAME
              value: <db-name>
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nxrm-db-secret
                  key: db-password
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: nxrm-db-secret
                  key: db-user
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: nxrm-db-secret
                  key: db-host
            - name: NEXUS_SECURITY_RANDOMPASSWORD
              value: "false"
            - name: INSTALL4J_ADD_VM_PARAMS
              # The JDBC URL below uses PostgreSQL's default port 5432; adjust it if your Aurora cluster listens on a non-default port
              value: "-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m -Dnexus.licenseFile=/nxrm-secrets/nxrm-license.lic \
                -Dnexus.datastore.enabled=true -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs \
                -Dnexus.datastore.nexus.jdbcUrl=jdbc:postgresql://${DB_HOST}:5432/${DB_NAME} \
                -Dnexus.datastore.nexus.username=${DB_USER} \
                -Dnexus.datastore.nexus.password=${DB_PASSWORD}"
          volumeMounts:
            - mountPath: /nxrm-secrets
              name: nxrm-secrets
            - name: nexusdata
              mountPath: /nexus-data
            - name: logback-tasklogfile-override
              mountPath: /nexus-data/etc/logback/logback-tasklogfile-appender-override.xml
              subPath: logback-tasklogfile-appender-override.xml
        - name: request-log
          image: busybox:1.33.1
          args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/request.log']
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
        - name: audit-log
          image: busybox:1.33.1
          args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/audit/audit.log']
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
        - name: tasks-log
          image: busybox:1.33.1
          args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/tasks/allTasks.log']
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
      volumes:
        - name: nexusdata
          persistentVolumeClaim:
            claimName: ebs-claim
        - name: nxrm-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "nxrm-nexus-27385-nxrm-secret"
            fsType: ext4
        - name: logback-tasklogfile-override
          configMap:
            name: nxrm-logback-tasklogfile-override
            items:
              - key: logback-tasklogfile-appender-override.xml
                path: logback-tasklogfile-appender-override.xml
Services YAML
The services.yaml shown below sets up the Ingress (which provisions the ALB) and the NodePort service that exposes the Nexus Repository pod.
Note that <scheme> should typically be set to internal.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: nxrm-nexus-27385
  name: ingress-nxrm-nexus-27385
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: <scheme>
    alb.ingress.kubernetes.io/subnets: <subnet id 1>, <subnet id 2>
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nxrm-service-nexus-27385
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nxrm-service-nexus-27385
  namespace: nxrm-nexus-27385
  labels:
    app: nxrm
spec:
  type: NodePort
  selector:
    app: nxrm
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8081
You can also extend services.yaml with the following resources to expose a Docker repository connector port:
Ingress for Docker Connector (Optional)
Note that <scheme> should typically be set to internal.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: nxrm
  name: ingress-nxrm-docker
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: <scheme>
    alb.ingress.kubernetes.io/subnets: subnet-abc, subnet-xyz
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nxrm-service-docker
                port:
                  number: 9090
Nodeport for Docker Connector (Optional)
apiVersion: v1
kind: Service
metadata:
  name: nxrm-service-docker
  namespace: nxrm
  labels:
    app: nxrm
spec:
  type: NodePort
  selector:
    app: nxrm
  ports:
    - name: docker-connector
      protocol: TCP
      port: 9090
      targetPort: 9090