Upgrade to 3.4.2 and 7.4.2

1. Upgrade to 3.4.2 / 3.4.2.1 / 3.4.2.2 and 7.4.2 / 7.4.2.1 / 7.4.2.2

Kubernetes Deployment:

RDAF Infra Upgrade: from 1.0.3 to 1.0.3.2 (haproxy)

RDAF Platform: From 3.4.1 / 3.4.1.x to 3.4.2 / 3.4.2.1 (Scheduler Service, API Server) / 3.4.2.2 (Resource Manager, API Server & Portal UI)

AIOps (OIA) Application: From 7.4.1 / 7.4.1.x to 7.4.2 / 7.4.2.1 (Webhook Server, Alert Processor, Event Consumer, SMTP Server) / 7.4.2.2 (Collaboration Service)

RDAF Deployment rdafk8s CLI: From 1.2.1 to 1.2.2

RDAF Client rdac CLI: From 3.4.1 to 3.4.2.1

Non-Kubernetes Deployment:

RDAF Infra Upgrade: from 1.0.3 to 1.0.3.2 (haproxy)

RDAF Platform: From 3.4 to 3.4.2 / 3.4.2.1 (Scheduler Service, API Server) / 3.4.2.2 (Resource Manager, API Server & Portal UI)

OIA (AIOps) Application: From 7.4 to 7.4.2 / 7.4.2.1 (Webhook Server, Alert Processor, Event Consumer, SMTP Server) / 7.4.2.2 (Collaboration Service)

RDAF Deployment rdaf CLI: From 1.2.0 to 1.2.2

RDAF Client rdac CLI: From 3.4 to 3.4.2.1

1.1. Prerequisites

Before proceeding with this upgrade, please verify that the below prerequisites are met.

Important

1) This is a multi-version upgrade. RDAF Platform services should be upgraded from 3.4 to 3.4.2 (All Services) first, followed by upgrading selective platform services (Scheduler, API Server, Portal frontend/backend & Worker) from 3.4.2 to 3.4.2.1, and then upgrading selective platform services (Resource Manager, API Server & Portal frontend/backend) from 3.4.2.1 to 3.4.2.2.

2) RDAF OIA application services should be upgraded from 7.4 or 7.4.1.x to 7.4.2 (All Services) first, followed by upgrading selective OIA application services (Webhook Server, Alert Processor, Event Consumer, SMTP Server) from 7.4.2 to 7.4.2.1, and then upgrading the selective OIA service (Collaboration Service) from 7.4.2 to 7.4.2.2.

  • RDAF Deployment CLI version: 1.2.1

  • Infra Services tag: 1.0.3 / 1.0.3.1 (haproxy)

  • Platform Services and RDA Worker tag: 3.4.1 / 3.4.1.x

  • OIA Application Services tag: 7.4.1 / 7.4.1.x

  • CloudFabrix recommends taking VMware VM snapshots where RDA Fabric infra/platform/applications are deployed

  • Check that all MariaDB nodes are in sync on an HA setup using the below commands before starting the upgrade

Tip

Please run the below commands on the VM host where the RDAF deployment CLI was installed and the rdafk8s setup command was run. The mariadb configuration is read from the /opt/rdaf/rdaf.cfg file.

MARIADB_HOST=`cat /opt/rdaf/rdaf.cfg | grep -A3 mariadb | grep datadir | awk '{print $3}' | cut -f1 -d'/'`
MARIADB_USER=`cat /opt/rdaf/rdaf.cfg | grep -A3 mariadb | grep user | awk '{print $3}' | base64 -d`
MARIADB_PASSWORD=`cat /opt/rdaf/rdaf.cfg | grep -A3 mariadb | grep password | awk '{print $3}' | base64 -d`

mysql -u$MARIADB_USER -p$MARIADB_PASSWORD -h $MARIADB_HOST -P3307 -e "show status like 'wsrep_local_state_comment';"

Please verify that the mariadb cluster state is in Synced state.

+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+

Please run the below command and verify that the mariadb cluster size is 3.

mysql -u$MARIADB_USER -p$MARIADB_PASSWORD -h $MARIADB_HOST -P3307 -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
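The two checks above can be folded into a single pass/fail gate that blocks the upgrade unless the node state is Synced and the cluster size is 3. A minimal sketch; `check_galera` is a hypothetical helper, not part of the rdaf tooling:

```shell
# Pre-upgrade Galera readiness gate: the node state must be "Synced" and
# the cluster size must be 3 before the upgrade starts.
check_galera() {
  state="$1"; size="$2"
  if [ "$state" = "Synced" ] && [ "$size" = "3" ]; then
    echo "mariadb cluster ready for upgrade"
  else
    echo "NOT ready for upgrade: state=$state size=$size" >&2
    return 1
  fi
}

# In a live environment the values come from the queries above, e.g.:
#   state=$(mysql -u$MARIADB_USER -p$MARIADB_PASSWORD -h $MARIADB_HOST -P3307 -N \
#             -e "show status like 'wsrep_local_state_comment';" | awk '{print $2}')
#   size=$(mysql -u$MARIADB_USER -p$MARIADB_PASSWORD -h $MARIADB_HOST -P3307 -N \
#             -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';" | awk '{print $2}')
# Illustrated here with the expected values:
check_galera "Synced" "3"
```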

Warning

Make sure all of the above pre-requisites are met before proceeding with the upgrade process.

Warning

Kubernetes: Though Kubernetes-based RDA Fabric deployments support zero-downtime upgrades, it is recommended to schedule a maintenance window for upgrading RDAF Platform and AIOps services to the newer version.

Important

Please make sure full backup of the RDAF platform system is completed before performing the upgrade.

Kubernetes: Please run the below backup command to take the backup of application data.

rdafk8s backup --dest-dir <backup-dir>

Run the below commands on the RDAF Management system and make sure the Kubernetes PODs are NOT in a restarting state (applicable only to Kubernetes environments)

kubectl get pods -n rda-fabric -l app_category=rdaf-infra
kubectl get pods -n rda-fabric -l app_category=rdaf-platform
kubectl get pods -n rda-fabric -l app_component=rda-worker 
kubectl get pods -n rda-fabric -l app_name=oia 
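The restart check above can also be scripted by scanning the `kubectl get pods` output for non-zero RESTARTS counts. A sketch assuming the default column layout (NAME, READY, STATUS, RESTARTS, AGE); `check_restarts` is a hypothetical helper and the pod names in the sample are illustrative only:

```shell
# Flag any POD whose RESTARTS count is non-zero.
check_restarts() {
  # Expects standard `kubectl get pods` columns: NAME READY STATUS RESTARTS AGE
  awk 'NR > 1 && $4 + 0 > 0 { print $1 " restarted " $4 " times"; bad = 1 }
       END { exit bad }'
}

# Live usage: kubectl get pods -n rda-fabric -l app_category=rdaf-platform | check_restarts
# Illustrated with a captured sample (hypothetical pod names):
cat <<'EOF' | check_restarts || echo "investigate restarting pods before upgrading"
NAME                        READY   STATUS    RESTARTS   AGE
rda-api-server-0            1/1     Running   0          2d
rda-scheduler-0             1/1     Running   3          2d
EOF
```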

  • Verify that the RDAF deployment rdaf CLI version is 1.2.0 / 1.2.1 on the VM where the CLI was installed for the docker on-prem registry managing Kubernetes or Non-Kubernetes deployments.
rdaf --version
  • On-premise docker registry service version is 1.0.2
docker ps | grep docker-registry
  • RDAF Infrastructure services version is 1.0.3 except for below services.

  • rda-minio: version is RELEASE.2023-09-30T07-02-29Z

  • haproxy: version is 1.0.3 or 1.0.3.1

rdafk8s infra status
  • RDAF Platform services version is 3.4.1 / 3.4.1.x

Run the below command to get RDAF Platform services details

rdafk8s platform status
  • RDAF OIA Application services version is 7.4.1 / 7.4.1.x

Run the below command to get RDAF App services details

rdafk8s app status
  • RDAF Deployment CLI version: 1.2.0 / 1.2.1

  • Infra Services tag: 1.0.3 / 1.0.3.1 (haproxy)

  • Platform Services and RDA Worker tag: 3.4 / 3.4.1.x

  • OIA Application Services tag: 7.4 / 7.4.1.x

  • CloudFabrix recommends taking VMware VM snapshots where RDA Fabric infra/platform/applications are deployed

Warning

Make sure all of the above pre-requisites are met before proceeding with the upgrade process.

Warning

Non-Kubernetes: Upgrading RDAF Platform and AIOps application services is a disruptive operation. Schedule a maintenance window before upgrading RDAF Platform and AIOps services to newer version.

Important

Please make sure full backup of the RDAF platform system is completed before performing the upgrade.

Non-Kubernetes: Please run the below backup command to take the backup of application data.

rdaf backup --dest-dir <backup-dir>
Note: Please make sure this backup-dir is mounted across all infra and CLI VMs.

  • Verify that the RDAF deployment rdaf CLI version is 1.2.0 / 1.2.1 on the VM where the CLI was installed for the docker on-prem registry managing Kubernetes or Non-Kubernetes deployments.
rdaf --version
  • On-premise docker registry service version is 1.0.2
docker ps | grep docker-registry
  • RDAF Infrastructure services version is 1.0.3 except for below services.

  • rda-minio: version is RELEASE.2023-09-30T07-02-29Z

  • haproxy: version is 1.0.3 or 1.0.3.1

rdaf infra status
  • RDAF Platform services version is 3.4 or 3.4.1.x

Run the below command to get RDAF Platform services details

rdaf platform status
  • RDAF OIA Application services version is 7.4 or 7.4.1.x

Run the below command to get RDAF App services details

rdaf app status

RDAF Deployment CLI Upgrade:

Please follow the below given steps.

Note

Upgrade RDAF Deployment CLI on both on-premise docker registry VM and RDAF Platform's management VM if provisioned separately.

Log in to the VM where the rdaf & rdafk8s deployment CLIs were installed for the docker on-prem registry and for managing Kubernetes or Non-Kubernetes deployments.

  • Download the RDAF Deployment CLI's newer version 1.2.2 bundle.
wget https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.2.2/rdafcli-1.2.2.tar.gz
  • Upgrade the rdaf & rdafk8s CLI to version 1.2.2
pip install --user rdafcli-1.2.2.tar.gz
  • Verify the installed rdaf & rdafk8s CLI version is upgraded to 1.2.2
rdaf --version
rdafk8s --version
  • Download the RDAF Deployment CLI's newer version 1.2.2 bundle and copy it to RDAF management VM on which rdaf & rdafk8s deployment CLI was installed.
wget  https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.2.2/offline-rhel-1.2.2.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-rhel-1.2.2.tar.gz
  • Change the directory to the extracted directory
cd offline-rhel-1.2.2
  • Upgrade the rdaf CLI to version 1.2.2
pip install --user rdafcli-1.2.2.tar.gz  -f ./ --no-index
  • Verify the installed rdaf CLI version
rdaf --version
rdafk8s --version
  • Download the RDAF Deployment CLI's newer version 1.2.2 bundle and copy it to RDAF management VM on which rdaf & rdafk8s deployment CLI was installed.
wget  https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.2.2/offline-ubuntu-1.2.2.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-ubuntu-1.2.2.tar.gz
  • Change the directory to the extracted directory
cd offline-ubuntu-1.2.2
  • Upgrade the rdaf CLI to version 1.2.2
pip install --user rdafcli-1.2.2.tar.gz  -f ./ --no-index
  • Verify the installed rdaf CLI version
rdaf --version
rdafk8s --version
  • Download the RDAF Deployment CLI's newer version 1.2.2 bundle
wget https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.2.2/rdafcli-1.2.2.tar.gz
  • Upgrade the rdaf CLI to version 1.2.2
pip install --user rdafcli-1.2.2.tar.gz
  • Verify the installed rdaf CLI version is upgraded to 1.2.2
rdaf --version
  • Download the RDAF Deployment CLI's newer version 1.2.2 bundle and copy it to RDAF management VM on which rdaf & rdafk8s deployment CLI was installed.
wget  https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.2.2/offline-rhel-1.2.2.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-rhel-1.2.2.tar.gz
  • Change the directory to the extracted directory
cd offline-rhel-1.2.2
  • Upgrade the rdaf CLI to version 1.2.2
pip install --user rdafcli-1.2.2.tar.gz -f ./ --no-index
  • Verify the installed rdaf CLI version
rdaf --version
rdafk8s --version
  • Download the RDAF Deployment CLI's newer version 1.2.2 bundle and copy it to RDAF management VM on which rdaf & rdafk8s deployment CLI was installed.
wget  https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.2.2/offline-ubuntu-1.2.2.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-ubuntu-1.2.2.tar.gz
  • Change the directory to the extracted directory
cd offline-ubuntu-1.2.2
  • Upgrade the rdaf CLI to version 1.2.2
pip install --user rdafcli-1.2.2.tar.gz -f ./ --no-index
  • Verify the installed rdaf CLI version
rdaf --version
rdafk8s --version
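After any of the upgrade paths above, the reported CLI version can be asserted rather than eyeballed. A small sketch; `expect_version` is a hypothetical helper, and the only assumption is that `rdaf --version` prints a string containing the version number:

```shell
# expect_version: assert that the expected version string appears in the
# CLI's version output (read on stdin).
expect_version() {
  expected="$1"
  if grep -qF "$expected"; then
    echo "CLI at expected version $expected"
  else
    echo "CLI version mismatch (expected $expected)" >&2
    return 1
  fi
}

# Live usage:
#   rdaf --version    | expect_version 1.2.2
#   rdafk8s --version | expect_version 1.2.2
# Illustrated with a captured string:
echo "rdaf, version 1.2.2" | expect_version 1.2.2   # → CLI at expected version 1.2.2
```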

1.2. Download the new Docker Images

Download the new docker image tags for RDAF Platform and OIA Application services and wait until all of the images are downloaded.

To fetch the new image tags into the on-premise docker registry, use the below command

rdaf registry fetch --tag 1.0.3.2,3.4.2,7.4.2,1.0.3,3.4.2.1,3.4.2.2,7.4.2.1,7.4.2.2

Run the below command to upgrade the registry

rdaf registry upgrade --tag 1.0.3

Important

If the above command fails while downloading the docker registry image with a certificate error, please follow the below workaround steps to bypass the certificate validation and then re-run the upgrade command.

Note

The username/password has not been provided in this documentation. If you need access credentials, please reach out to the Support Team at ([email protected])

  • If the above scenario is encountered, use the below command and then follow the steps after it

docker login -u XXXXXXXX -p XXXXXXXX cfxregistry.cloudfabrix.io:443
1) Open the daemon.json file and add insecure-registries as shown below

  • sudo vi /etc/docker/daemon.json

  • Add the insecure-registries entry beneath the experimental parameter, as shown below.

    "insecure-registries" : ["cfxregistry.cloudfabrix.io:443"],

{
  "tls": true,
  "tlscacert": "/etc/tlscerts/ca/ca.pem",
  "tlsverify": true,
  "storage-driver": "overlay2",
  "hosts": [
    "unix:///var/run/docker.sock",
    "tcp://0.0.0.0:2376"
  ],
  "tlskey": "/etc/tlscerts/server/server.key",
  "debug": false,
  "tlscert": "/etc/tlscerts/server/server.pem",
  "experimental": false,
  "insecure-registries" : ["cfxregistry.cloudfabrix.io:443"],
  "live-restore": true
}
2) Restart docker using below mentioned command

sudo service docker restart
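Before restarting docker it is worth confirming that the edit left daemon.json as valid JSON and that the insecure-registries entry is actually present. A sketch; `validate_daemon_json` is a hypothetical helper and assumes python3 is available on the VM:

```shell
# validate_daemon_json: check that a daemon.json file parses as JSON and
# contains the insecure-registries key.
validate_daemon_json() {
  f="$1"
  if python3 -m json.tool "$f" > /dev/null 2>&1 \
     && grep -q '"insecure-registries"' "$f"; then
    echo "daemon.json OK"
  else
    echo "daemon.json invalid or missing insecure-registries" >&2
    return 1
  fi
}

# Usage on the registry VM, after the edit and before restarting docker:
#   validate_daemon_json /etc/docker/daemon.json
```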

After applying the workaround, re-run the below command to fetch the registry images

rdaf registry fetch --tag 1.0.3.2,3.4.2,7.4.2,1.0.3,3.4.2.1,3.4.2.2,7.4.2.1,7.4.2.2

Run the below command to verify above mentioned tags are downloaded for all of the RDAF Platform and OIA Application services.

rdaf registry list-tags 
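The per-tag verification that follows can be automated by checking the `list-tags` output against the full list of required tags. A sketch; `check_tags` is a hypothetical helper, and it assumes the output contains one tag per line (the actual `rdaf registry list-tags` format may differ, so adjust the matching accordingly):

```shell
# All tags requested by the fetch command above.
REQUIRED_TAGS="1.0.3 1.0.3.2 3.4.2 3.4.2.1 3.4.2.2 7.4.2 7.4.2.1 7.4.2.2"

# check_tags: read list-tags output on stdin and report any missing tag.
check_tags() {
  missing=0
  listing=$(cat)
  for tag in $REQUIRED_TAGS; do
    if ! printf '%s\n' "$listing" | grep -qxF "$tag"; then
      echo "missing tag: $tag" >&2
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all required tags downloaded"
  fi
  return $missing
}

# Live usage: rdaf registry list-tags | check_tags
```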

Please make sure 1.0.3.2 image tag is downloaded for the below RDAF Infra service.

  • rda-platform-haproxy

Please make sure 1.0.3 image tag is downloaded for the below RDAF Infra service.

  • rda-platform-kafka
  • rda-platform-zookeeper
  • rda-platform-mariadb
  • rda-platform-opensearch
  • rda-platform-nats
  • rda-platform-busybox
  • rda-platform-nats-box
  • rda-platform-nats-boot-config
  • rda-platform-nats-server-config-reloader
  • rda-platform-prometheus-nats-exporter
  • rda-platform-redis
  • rda-platform-redis-sentinel
  • rda-platform-arangodb-starter
  • rda-platform-kube-arangodb
  • rda-platform-arangodb
  • rda-platform-kubectl
  • rda-platform-logstash
  • rda-platform-fluent-bit

Please make sure RELEASE.2023-09-30T07-02-29Z image tag is downloaded for the below RDAF Infra service.

  • minio

Please make sure 3.4.2 image tag is downloaded for the below RDAF Platform services.

  • rda-client-api-server
  • rda-registry
  • rda-scheduler
  • rda-collector
  • rda-identity
  • rda-fsm
  • rda-asm
  • rda-stack-mgr
  • rda-access-manager
  • rda-resource-manager
  • rda-user-preferences
  • onprem-portal
  • onprem-portal-nginx
  • rda-worker-all
  • onprem-portal-dbinit
  • cfxdx-nb-nginx-all
  • rda-event-gateway
  • rda-chat-helper
  • rdac
  • rdac-full
  • cfxcollector

Please make sure 3.4.2.1 image tag is downloaded for the below RDAF Platform services.

  • rda-scheduler
  • rda-client-api-server
  • rda-worker-all
  • cfxdx-nb-nginx-all
  • rdac
  • rdac-full
  • cfx-onprem-portal
  • cfx-onprem-portal-dbinit
  • onprem-portal-nginx

Please make sure 3.4.2.2 image tag is downloaded for the below RDAF Platform services.

  • rda-resource-manager
  • rda-portal
  • rda-api-server

Please make sure 7.4.2 image tag is downloaded for the below RDAF OIA Application services.

  • rda-app-controller
  • rda-alert-processor
  • rda-file-browser
  • rda-smtp-server
  • rda-ingestion-tracker
  • rda-reports-registry
  • rda-ml-config
  • rda-event-consumer
  • rda-webhook-server
  • rda-irm-service
  • rda-alert-ingester
  • rda-collaboration
  • rda-notification-service
  • rda-configuration-service
  • rda-alert-processor-companion

Please make sure 7.4.2.1 image tag is downloaded for the below RDAF OIA Application services.

  • rda-event-consumer
  • rda-webhook-server
  • rda-alert-processor
  • rda-smtp-server

Please make sure 7.4.2.2 image tag is downloaded for the below RDAF OIA Application services.

  • rda-collaboration

Downloaded Docker images are stored under the below path.

/opt/rdaf-registry/data/docker/registry/v2/

Run the below command to check the filesystem's disk usage on offline registry VM where docker images are pulled.

df -h /opt

If required, older image tags that are no longer used can be deleted to free up disk space using the below command.

rdaf registry delete-images --tag <tag1,tag2>

Note

Run the above command if /opt usage is more than 80% or the free capacity of /opt is less than 25 GB.
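The note's thresholds can be checked mechanically by parsing `df -k /opt` output (kilobyte blocks; columns: filesystem, size, used, available, use%, mount point). A sketch; `needs_cleanup` is a hypothetical helper, not part of the rdaf CLI:

```shell
# needs_cleanup: read `df -k /opt` output on stdin and report whether old
# image tags should be pruned (above 80% used, or under 25 GB free).
needs_cleanup() {
  awk 'NR == 2 {
    used_pct = $5 + 0                 # strip the trailing %
    avail_gb = $4 / (1024 * 1024)     # KB -> GB
    if (used_pct > 80 || avail_gb < 25) {
      printf "cleanup recommended: %d%% used, %.1f GB free\n", used_pct, avail_gb
      exit 0
    }
    print "disk space OK"
  }'
}

# Live usage: df -k /opt | needs_cleanup
# A 90%-used sample triggers the recommendation:
printf 'Filesystem 1K-blocks Used Available Use%% Mounted on\n/dev/sda2 524288000 471859200 52428800 90%% /opt\n' | needs_cleanup
```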

1.3. Upgrade Steps

1.3.1 Upgrade RDAF Infra Services

Please download the below python script (rdaf_upgrade_120_121_to_122.py)

wget https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.2.2/rdaf_upgrade_120_121_to_122.py

Please run the downloaded python upgrade script rdaf_upgrade_120_121_to_122.py as shown below.

The below step will generate *values.yaml.latest files for all RDAF Infrastructure services under /opt/rdaf/deployment-scripts directory.

python rdaf_upgrade_120_121_to_122.py upgrade

Note

Running the above upgrade script makes the below changes.

  • Adds new service rda_asm (Alert State Manager) configuration and settings to values.yaml and values.yaml.latest files

  • Log-shipping agent is migrated from fluent-bit to filebeat

  • HAproxy configuration is updated to optimize Webhook and MariaDB service traffic.

Updated configuration for rda_asm (Alert State Manager) service in values.yaml file.

...
...
rda_asm:
  strategy: RollingUpdate
  replicas: 1
  privileged: false
  resources:
    requests:
      memory: 100Mi
    limits:
      memory: 4Gi
  env:
    RDA_ENABLE_TRACES: 'no'
    DISABLE_REMOTE_LOGGING_CONTROL: 'no'
    RDA_SELF_HEALTH_RESTART_AFTER_FAILURES: '3'
    PURGE_COMPLETED_INSTANCES_DAYS: 1
    PURGE_STALE_INSTANCES_DAYS: 120
...
...
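To review exactly what will change before applying it, each generated file can be diffed against its current counterpart. A small sketch; `review_latest` is a hypothetical helper, and the /opt/rdaf/deployment-scripts layout is as stated above:

```shell
# review_latest: diff each generated *values.yaml.latest under the given
# directory against its current values.yaml.
review_latest() {
  dir="${1:-/opt/rdaf/deployment-scripts}"
  for latest in "$dir"/*values.yaml.latest; do
    [ -e "$latest" ] || { echo "no *values.yaml.latest files in $dir"; return; }
    current="${latest%.latest}"
    echo "== ${latest##*/} vs ${current##*/} =="
    diff -u "$current" "$latest" || true
  done
}

# Usage on the deployment VM: review_latest
```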

The following lines are updated in the HAProxy service configuration (path: /opt/rdaf/config/haproxy/haproxy.cfg on the VMs running haproxy).

...
...
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
external-check
insecure-fork-wanted
ssl-default-bind-options no-sslv3 no-tls-tickets force-tlsv12
...
...
backend mariadb
    mode tcp
    balance roundrobin
    option tcpka
    timeout server 28800s
    default-server inter 10s downinter 5s
    option external-check
    external-check command /maria_cluster_check
    server mariadb-192.168.125.45 192.168.125.45:31182 check
...
...
backend webhook
    mode http
    balance roundrobin
    option httpchk OPTIONS /webhooks/hookid
    http-check expect rstatus (2|3)[0-9][0-9]
    http-response set-header Cache-Control no-store
    http-response set-header Pragma no-cache
    http-check disable-on-404
    default-server inter 10s downinter 5s fall 3 rise 2
    cookie SERVERID insert indirect nocache maxidle 30m maxlife 24h httponly secure
    server rdaf-webhook-1 192.168.125.46:32650 check cookie rdaf-webhook-1
...
...
  • Upgrade the haproxy service using the below command
rdafk8s infra upgrade --tag 1.0.3.2 --service haproxy
  • Use the below command to verify that haproxy is up and in the Running state.
rdafk8s infra status
+--------------------------+----------------+-----------------+--------------+------------------------------+
| Name                     | Host           | Status          | Container Id | Tag                          |
+--------------------------+----------------+-----------------+--------------+------------------------------+
| haproxy                  | 192.168.108.13 | Up 7 hours      | acb0535d47f6 | 1.0.3.2                      |
| haproxy                  | 192.168.108.14 | Up 7 hours      | 292fa79d6066 | 1.0.3.2                      |
| keepalived               | 192.168.108.13 | active          | N/A          | N/A                          |
| keepalived               | 192.168.108.14 | active          | N/A          | N/A                          |
| rda-nats                 | 192.168.108.13 | Up 8 Hours ago  | e31ffb44d023 | 1.0.3                        |
| rda-nats                 | 192.168.108.14 | Up 8 Hours ago  | bd39199e9dff | 1.0.3                        |
| rda-minio                | 192.168.108.13 | Up 8 Hours ago  | fcdd4fc0f339 | RELEASE.2023-09-30T07-02-29Z |
| rda-minio                | 192.168.108.14 | Up 8 Hours ago  | def48fd1761c | RELEASE.2023-09-30T07-02-29Z |
| rda-minio                | 192.168.108.16 | Up 8 Hours ago  | e51f463de10e | RELEASE.2023-09-30T07-02-29Z |
| rda-minio                | 192.168.108.17 | Up 8 Hours ago  | 5d34e867a079 | RELEASE.2023-09-30T07-02-29Z |
| rda-mariadb              | 192.168.108.13 | Up 2 Hours ago  | dda4b0dc1fad | 1.0.3                        |
| rda-mariadb              | 192.168.108.14 | Up 2 Hours ago  | d80d5d844ec4 | 1.0.3                        |
+--------------------------+----------------+-----------------+--------------+------------------------------+

Note

In this release, the log shipping agent service is migrated from fluent-bit to filebeat.

If the log monitoring service is installed, please bring it down using the below command.

rdafk8s log_monitoring down
  • Download the python script (rdaf_upgrade_120_121_to_122.py)
wget https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.2.2/rdaf_upgrade_120_121_to_122.py

Please run the downloaded python upgrade script rdaf_upgrade_120_121_to_122.py as shown below.

The below step will generate *values.yaml.latest files for all RDAF Infrastructure services under /opt/rdaf/deployment-scripts directory.

python rdaf_upgrade_120_121_to_122.py upgrade

Note

Running the above upgrade script makes the below changes.

  • Adds new service rda_asm (Alert State Manager) configuration and settings to values.yaml and values.yaml.latest files

  • Log-shipping agent is migrated from fluent-bit to filebeat

  • HAproxy configuration is updated to optimize Webhook and MariaDB service traffic.

Updated configuration for rda_asm (Alert State Manager) service in values.yaml file.

...
...
rda_asm:
mem_limit: 4G
memswap_limit: 4G
privileged: false
environment:
  RDA_ENABLE_TRACES: 'no'
  DISABLE_REMOTE_LOGGING_CONTROL: 'no'
  RDA_SELF_HEALTH_RESTART_AFTER_FAILURES: 3
  PURGE_COMPLETED_INSTANCES_DAYS: 1
  PURGE_STALE_INSTANCES_DAYS: 120
deployment: true
...
...

The following lines are updated in the HAProxy service configuration (path: /opt/rdaf/config/haproxy/haproxy.cfg on the VMs running haproxy).

...
...
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
external-check
insecure-fork-wanted
ssl-default-bind-options no-sslv3 no-tls-tickets force-tlsv12
...
...
backend mariadb
    mode tcp
    balance roundrobin
    option tcpka
    timeout server 28800s
    default-server inter 10s downinter 5s
    option external-check
    external-check command /maria_cluster_check
    server mariadb-192.168.125.45 192.168.125.45:3306 check
...
...
backend webhook
    mode http
    balance roundrobin
    option httpchk OPTIONS /webhooks/hookid
    http-check expect rstatus (2|3)[0-9][0-9]
    http-response set-header Cache-Control no-store
    http-response set-header Pragma no-cache
    http-check disable-on-404
    default-server inter 10s downinter 5s fall 3 rise 2
    cookie SERVERID insert indirect nocache maxidle 30m maxlife 24h httponly secure
...
...

Note

If an external Opensearch cluster was deployed for Metrics and Logs, the below command adds it to the RDA Fabric platform. This step is optional; please skip it if no external Opensearch cluster was deployed.

Please run the below command to set up the external Opensearch cluster

python rdaf_upgrade_120_121_to_122.py external_os_setup

The above command will prompt for the below inputs:

External Opensearch Cluster IP Addresses (comma separated):

External Opensearch Cluster Username:

External Opensearch Cluster Password:

  • Upgrade the haproxy service using the below command
rdaf infra upgrade --tag 1.0.3.2 --service haproxy

Run the below RDAF command to check infra status

rdaf infra status
+----------------------+----------------+-----------------+--------------+------------------------------+
| Name                 | Host           | Status          | Container Id | Tag                          |
+----------------------+----------------+-----------------+--------------+------------------------------+
| haproxy              | 192.168.133.97 | Up 22 hours     | 5016d26a4c88 | 1.0.3.2                      |
| haproxy              | 192.168.133.98 | Up 22 hours     | 73b4f0a8235f | 1.0.3.2                      |
| keepalived           | 192.168.133.97 | active          | N/A          | N/A                          |
| keepalived           | 192.168.133.98 | active          | N/A          | N/A                          |
| nats                 | 192.168.133.97 | Up 43 hours     | 2342eb72fbd6 | 1.0.3                        |
| nats                 | 192.168.133.98 | Up 43 hours     | 745cedb9ade6 | 1.0.3                        |
| minio                | 192.168.133.93 | Up 43 hours     | 67f4017a19bf | RELEASE.2023-09-30T07-02-29Z |
| minio                | 192.168.133.97 | Up 43 hours     | 7519e544135c | RELEASE.2023-09-30T07-02-29Z |
| minio                | 192.168.133.98 | Up 43 hours     | 655ba3058fb0 | RELEASE.2023-09-30T07-02-29Z |
| minio                | 192.168.133.99 | Up 43 hours     | 44d987601c56 | RELEASE.2023-09-30T07-02-29Z |
| mariadb              | 192.168.133.97 | Up 43 hours     | 24bded0556bb | 1.0.3                        |
| mariadb              | 192.168.133.98 | Up 43 hours     | 59ac3e182890 | 1.0.3                        |
+----------------------+----------------+-----------------+--------------+------------------------------+

Run the below RDAF command to check infra healthcheck status

rdaf infra healthcheck
+----------------+-----------------+--------+------------------------------+----------------+--------------+
| Name           | Check           | Status | Reason                       | Host           | Container Id |
+----------------+-----------------+--------+------------------------------+----------------+--------------+
| haproxy        | Port Connection | OK     | N/A                          | 192.168.133.97 | 5016d26a4c88 |
| haproxy        | Service Status  | OK     | N/A                          | 192.168.133.97 | 5016d26a4c88 |
| haproxy        | Firewall Port   | OK     | N/A                          | 192.168.133.97 | 5016d26a4c88 |
| haproxy        | Port Connection | OK     | N/A                          | 192.168.133.98 | 73b4f0a8235f |
| haproxy        | Service Status  | OK     | N/A                          | 192.168.133.98 | 73b4f0a8235f |
| haproxy        | Firewall Port   | OK     | N/A                          | 192.168.133.98 | 73b4f0a8235f |
| keepalived     | Service Status  | OK     | N/A                          | 192.168.133.97 | N/A          |
| keepalived     | Service Status  | OK     | N/A                          | 192.168.133.98 | N/A          |
| nats           | Port Connection | OK     | N/A                          | 192.168.133.97 | 2342eb72fbd6 |
| nats           | Service Status  | OK     | N/A                          | 192.168.133.97 | 2342eb72fbd6 |
| nats           | Firewall Port   | OK     | N/A                          | 192.168.133.97 | 2342eb72fbd6 |
| nats           | Port Connection | OK     | N/A                          | 192.168.133.98 | 745cedb9ade6 |
| nats           | Service Status  | OK     | N/A                          | 192.168.133.98 | 745cedb9ade6 |
| nats           | Firewall Port   | OK     | N/A                          | 192.168.133.98 | 745cedb9ade6 |
| minio          | Port Connection | OK     | N/A                          | 192.168.133.93 | 67f4017a19bf |
| minio          | Service Status  | OK     | N/A                          | 192.168.133.93 | 67f4017a19bf |
| minio          | Firewall Port   | OK     | N/A                          | 192.168.133.93 | 67f4017a19bf |
| minio          | Port Connection | OK     | N/A                          | 192.168.133.97 | 7519e544135c |
+----------------+-----------------+--------+------------------------------+----------------+--------------+

Note

In this release, the log shipping agent service is migrated from fluent-bit to filebeat.

If the log monitoring service is installed, please bring it down using the below command.

rdaf log_monitoring down

1.3.2 Upgrade RDAF Platform Services

Step-1: Run the below command to initiate upgrading RDAF Platform services.

rdafk8s platform upgrade --tag 3.4.2

Since the upgrade procedure is non-disruptive, it puts the currently running PODs into the Terminating state and brings up the newer-version PODs in the Pending state.

Step-2: Run the below command to check the status of the existing and newer PODs and make sure at least one instance of each Platform service is in the Terminating state.

kubectl get pods -n rda-fabric -l app_category=rdaf-platform

Step-3: Run the below command to put all Terminating RDAF platform service PODs into maintenance mode. It lists the POD IDs of all platform services that need to be put into maintenance mode, along with the rdac maintenance command to run.

python maint_command.py

Note

If maint_command.py script doesn't exist on RDAF deployment CLI VM, it can be downloaded using the below command.

wget https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.1.6/maint_command.py

Step-4: Copy & Paste the rdac maintenance command as below.

rdac maintenance start --ids <comma-separated-list-of-platform-pod-ids>

Step-5: Run the below command to verify the maintenance mode status of the RDAF platform services.

rdac pods --show_maintenance | grep False
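Step-5's grep can be wrapped into an explicit pass/fail check. A sketch assuming `rdac pods --show_maintenance` reports the maintenance flag as True/False in the last column of each pod line; `all_in_maintenance` is a hypothetical helper, so adjust the awk to the actual output format:

```shell
# all_in_maintenance: read the rdac output on stdin; fail and name any pod
# whose maintenance flag (last column) is False.
all_in_maintenance() {
  if awk '$NF == "False" { print "not in maintenance: " $1; bad = 1 }
          END { exit bad }'; then
    echo "all listed pods are in maintenance mode"
  else
    return 1
  fi
}

# Live usage: rdac pods --show_maintenance | all_in_maintenance
```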

Step-6: Run the below command to delete the Terminating RDAF platform service PODs

for i in `kubectl get pods -n rda-fabric -l app_category=rdaf-platform | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done

Note

Wait for 120 seconds and repeat Step-2 through Step-6 for the rest of the RDAF Platform service PODs.

Please wait until all of the new platform service PODs are in the Running state, then run the below command to verify their status and make sure all of them are running with the 3.4.2 version.

rdafk8s platform status
+----------------------+----------------+-----------------+--------------+-------+
| Name                 | Host           | Status          | Container Id | Tag   |
+----------------------+----------------+-----------------+--------------+-------+
| rda-api-server       | 192.168.108.20 | Up 11 Hours ago | 29812b00a0e1 | 3.4.2 |
| rda-api-server       | 192.168.108.17 | Up 11 Hours ago | 45c7a1690829 | 3.4.2 |
| rda-registry         | 192.168.108.20 | Up 11 Hours ago | 560d2ee8762d | 3.4.2 |
| rda-registry         | 192.168.108.17 | Up 11 Hours ago | e0f19e1cdb9c | 3.4.2 |
| rda-identity         | 192.168.108.17 | Up 11 Hours ago | 759dd0612524 | 3.4.2 |
| rda-identity         | 192.168.108.20 | Up 11 Hours ago | 15ba99de3b74 | 3.4.2 |
| rda-fsm              | 192.168.108.20 | Up 11 Hours ago | b43674264547 | 3.4.2 |
| rda-fsm              | 192.168.108.17 | Up 11 Hours ago | c0120825fe4f | 3.4.2 |
| rda-asm              | 192.168.108.19 | Up 11 Hours ago | 815f355aa368 | 3.4.2 |
| rda-asm              | 192.168.108.18 | Up 11 Hours ago | 3298a3cffe93 | 3.4.2 |
| rda-chat-helper      | 192.168.108.17 | Up 11 Hours ago | 0eca96d90fe9 | 3.4.2 |
| rda-chat-helper      | 192.168.108.20 | Up 11 Hours ago | 57e81c890a96 | 3.4.2 |
+----------------------+----------------+-----------------+--------------+-------+

Run the below command to check that the rda-fsm service is up and running, and verify that one of the rda-scheduler instances is elected as leader (shown under the Site column).

rdac pods
+-------+----------------------------------------+-------------+--------------+----------+-------------+-----------------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host         | ID       | Site        | Age             |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+--------------+----------+-------------+-----------------+--------+--------------+---------------+--------------|
| Infra | api-server                             | True        | rda-api-server | 9c0484af |             | 11:41:50 |      8 |        31.33 |               |              |
| Infra | api-server                             | True        | rda-api-server | 196558ed |             | 11:40:23 |      8 |        31.33 |               |              |
| Infra | asm                                    | True        | rda-asm-5b8fb9 | bcbdaae5 |             | 11:42:26 |      8 |        31.33 |               |              |
| Infra | asm                                    | True        | rda-asm-5b8fb9 | 232a58af |             | 11:42:40 |      8 |        31.33 |               |              |
| Infra | collector                              | True        | rda-collector- | d06fb56c |             | 11:42:03 |      8 |        31.33 |               |              |
| Infra | collector                              | True        | rda-collector- | a4c79e4c |             | 11:41:59 |      8 |        31.33 |               |              |
| Infra | registry                               | True        | rda-registry-6 | 2fd69950 |             | 11:42:03 |      8 |        31.33 |               |              |
| Infra | registry                               | True        | rda-registry-6 | fac544d6 |             | 11:41:59 |      8 |        31.33 |               |              |
| Infra | scheduler                              | True        | rda-scheduler- | b98afe88 | *leader*    | 11:42:01 |      8 |        31.33 |               |              |
| Infra | scheduler                              | True        | rda-scheduler- | e25a0841 |             | 11:41:56 |      8 |        31.33 |               |              |
| Infra | worker                                 | True        | rda-worker-5b5 | 99bd054e | rda-site-01 | 11:33:40 |      8 |        31.33 | 0             | 0            |
| Infra | worker                                 | True        | rda-worker-5b5 | 0bfdcd98 | rda-site-01 | 11:33:34 |      8 |        31.33 | 0             | 0            |
+-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------+
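As a quick check, leader election can be verified from the `rdac pods` output by counting scheduler rows that carry the `*leader*` marker in the Site column; exactly one is expected. A minimal sketch, run against illustrative sample rows (pipe real `rdac pods` output instead):

```shell
#!/bin/sh
# Count scheduler pods elected as leader in `rdac pods` output.
# Exactly one scheduler row should carry the *leader* marker.
# The sample rows below are illustrative, not real output.
sample='| Infra | scheduler | True | rda-scheduler- | b98afe88 | *leader* | 11:42:01 |
| Infra | scheduler | True | rda-scheduler- | e25a0841 |          | 11:41:56 |'

leaders=$(printf '%s\n' "$sample" | grep -c '\*leader\*')
echo "scheduler leaders: $leaders"
```

If the count is 0 (no leader elected) or greater than 1, the scheduler services warrant investigation before proceeding.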

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck

Warning

For Non-Kubernetes deployment, upgrading the RDAF Platform and AIOps application services is a disruptive operation when the rolling-upgrade option is not used. Please schedule a maintenance window before upgrading the RDAF Platform and AIOps services to a newer version.

Run the below command to initiate upgrading RDAF Platform services with zero downtime

rdaf platform upgrade --tag 3.4.2 --rolling-upgrade --timeout 10

Note

The timeout value (10) specified in the above command is in seconds.

Note

The rolling-upgrade option upgrades the Platform services running in high-availability mode on one VM at a time in sequence. It completes the upgrade of Platform services running on VM-1 before upgrading them on VM-2, followed by VM-3, and so on.

During this upgrade sequence, RDAF platform continues to function without any impact to the application traffic.

After completing the Platform services upgrade on all VMs, it will ask for user confirmation to delete the older version Platform service PODs.

2024-05-31 04:16:51,478 [rdaf.component.platform] INFO     - Upgrading service: portal-frontend on host 192.168.133.92
[+] Running 1/1
⠿ portal-frontend Pulled                                                  0.0s
[+] Running 1/1
⠿ Container platform-portal-frontend-1  Started                          10.5s

2024-05-31 04:17:09,507 [rdaf.component.platform] INFO     - Waiting for upgraded containers to join pods
2024-05-31 04:17:09,508 [rdaf.component.platform] INFO     - Checking if the upgraded components '['rda_api_server', 'rda_registry', 'rda_scheduler', 'rda_collector', 'rda_asset_dependency', 'rda_identity', 'rda_asm', 'rda_fsm', 'rda_chat_helper', 'cfx-rda-access-manager', 'cfx-rda-resource-manager', 'cfx-rda-user-preferences', 'portal-backend', 'portal-frontend']' has joined the rdac pods...
+----------+--------------------------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type                 | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+--------------------------+---------+---------+--------------+-------------+------------+
| f8ba1938 | api-server               | 3.4     | 2:57:49 | f5f0fe35bc07 | None        | True       |
| e821e1b1 | registry                 | 3.4     | 2:57:50 | dfcc7f04a79b | None        | True       |
| 610519ee | scheduler                | 3.4     | 2:57:29 | 22b1fe983584 | None        | True       |
| 87bfe5aa | collector                | 3.4     | 2:57:11 | e16104b9d236 | None        | True       |
| b9f3d534 | asset-dependency         | 3.4     | 2:56:54 | 51dba9aad31c | None        | True       |
| ae81777f | authenticator            | 3.4     | 2:56:35 | 3c364fe3d7f7 | None        | True       |
| 8c1a1de7 | fsm                      | 3.4     | 2:56:15 | c0cf0e4ba2b6 | None        | True       |
| cc77e1cb | chat-helper              | 3.4     | 2:56:00 | 4a39f5fa20ac | None        | True       |
| e68abf01 | cfxdimensions-app-       | 3.4     | 2:54:40 | 72783eb5fac2 | None        | True       |
|          | access-manager           |         |         |              |             |            |
| b82b9b4b | cfxdimensions-app-       | 3.4     | 2:54:09 | 722f9f607669 | None        | True       |
|          | resource-manager         |         |         |              |             |            |
| cfe7f628 | user-preferences         | 3.4     | 2:53:39 | 6a6c8e36e8a3 | None        | True       |
+----------+--------------------------+---------+---------+--------------+-------------+------------+
Continue moving above pods to maintenance mode? [yes/no]: yes
2024-05-31 04:20:33,994 [rdaf.component.platform] INFO     - Initiating Maintenance Mode...
2024-05-31 04:20:39,156 [rdaf.component.platform] INFO     - Waiting for services to be moved to maintenance.
2024-05-31 04:21:01,569 [rdaf.component.platform] INFO     - Following container are in maintenance mode
+----------+--------------------------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type                 | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+--------------------------+---------+---------+--------------+-------------+------------+
| f8ba1938 | api-server               | 3.4     | 3:09:03 | f5f0fe35bc07 | maintenance | False      |
| b9f3d534 | asset-dependency         | 3.4     | 3:08:08 | 51dba9aad31c | maintenance | False      |
| ae81777f | authenticator            | 3.4     | 3:07:50 | 3c364fe3d7f7 | maintenance | False      |
| e68abf01 | cfxdimensions-app-       | 3.4     | 3:05:55 | 72783eb5fac2 | maintenance | False      |
|          | access-manager           |         |         |              |             |            |
| b82b9b4b | cfxdimensions-app-       | 3.4     | 3:05:24 | 722f9f607669 | maintenance | False      |
|          | resource-manager         |         |         |              |             |            |
| cc77e1cb | chat-helper              | 3.4     | 3:07:14 | 4a39f5fa20ac | maintenance | False      |
| 87bfe5aa | collector                | 3.4     | 3:08:26 | e16104b9d236 | maintenance | False      |
| 8c1a1de7 | fsm                      | 3.4     | 3:07:30 | c0cf0e4ba2b6 | maintenance | False      |
| e821e1b1 | registry                 | 3.4     | 3:09:04 | dfcc7f04a79b | maintenance | False      |
| 610519ee | scheduler                | 3.4     | 3:08:43 | 22b1fe983584 | maintenance | False      |
| cfe7f628 | user-preferences         | 3.4     | 3:04:53 | 6a6c8e36e8a3 | maintenance | False      |
+----------+--------------------------+---------+---------+--------------+-------------+------------+
2024-05-31 04:21:01,572 [rdaf.component.platform] INFO     - Waiting for timeout of 10 seconds...
2024-05-31 04:21:11,581 [rdaf.component.platform] INFO     - Upgrading service: rda_api_server on host 192.168.133.93

Run the below command to initiate upgrading RDAF Platform services without the zero-downtime (rolling-upgrade) option

rdaf platform upgrade --tag 3.4.2

Please wait until all of the new platform services are in the Up state, then run the below command to verify their status and make sure all of them are running with version 3.4.2.

rdaf platform status
+--------------------+----------------+------------+--------------+-------+
| Name               | Host           | Status     | Container Id | Tag   |
+--------------------+----------------+------------+--------------+-------+
| rda_api_server     | 192.168.133.92 | Up 5 hours | da6d9abf9bb6 | 3.4.2 |
| rda_api_server     | 192.168.133.93 | Up 5 hours | 6105a1d2c514 | 3.4.2 |
| rda_registry       | 192.168.133.92 | Up 5 hours | 4a4b530de8a6 | 3.4.2 |
| rda_registry       | 192.168.133.93 | Up 5 hours | 3b322de7dc6b | 3.4.2 |
| rda_scheduler      | 192.168.133.92 | Up 5 hours | 52258e0dd904 | 3.4.2 |
| rda_scheduler      | 192.168.133.93 | Up 5 hours | c161d8d1fb8d | 3.4.2 |
| rda_collector      | 192.168.133.92 | Up 5 hours | 5ca1e495cdba | 3.4.2 |
| rda_collector      | 192.168.133.93 | Up 5 hours | c09dc78bfa0c | 3.4.2 |
| rda_asset_dependen | 192.168.133.92 | Up 5 hours | 5dd435f9fbd8 | 3.4.2 |
| cy                 |                |            |              |       |
| rda_asset_dependen | 192.168.133.93 | Up 5 hours | 5e8415d5d7c8 | 3.4.2 |
| cy                 |                |            |              |       |
| rda_identity       | 192.168.133.92 | Up 5 hours | aa252c94a072 | 3.4.2 |
| rda_identity       | 192.168.133.93 | Up 5 hours | c0215c6c14cd | 3.4.2 |
| rda_fsm            | 192.168.133.92 | Up 5 hours | e8dfd2900b46 | 3.4.2 |
| rda_fsm            | 192.168.133.93 | Up 5 hours | fd0be9e91135 | 3.4.2 |
+--------------------+----------------+------------+--------------+-------+
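Rather than eyeballing the Tag column, the status output can be scanned for any service still on an older tag. A minimal sketch, assuming the table layout shown above (the sample rows are illustrative; pipe real `rdaf platform status` output instead):

```shell
#!/bin/sh
# Flag any service row whose Tag column differs from the expected version.
# Assumes the table layout printed by `rdaf platform status`; continuation
# rows (wrapped names, no tag) are skipped. Sample rows are illustrative.
expected="3.4.2"
sample='| rda_api_server     | 192.168.133.92 | Up 5 hours | da6d9abf9bb6 | 3.4.2 |
| rda_registry       | 192.168.133.92 | Up 5 hours | 4a4b530de8a6 | 3.4   |'

stale=$(printf '%s\n' "$sample" | awk -F'|' -v want="$expected" '
  NF >= 7 {
    gsub(/ /, "", $6); gsub(/ /, "", $2)   # Tag and Name columns, unpadded
    if ($6 != "" && $6 != want) print $2
  }')
echo "stale: $stale"
```

An empty result means every service row already shows the expected tag; any name printed is still on an older version. The same check applies after the later 3.4.2.1 and 3.4.2.2 upgrades by changing `expected`.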

Run the below command to check that the fsm service is up and running, and also verify that one of the scheduler services is elected as the leader under the Site column.

rdac pods

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=1, Brokers=[1, 2, 3] |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=3, Brokers=[1, 2, 3] |
| rda_app   | alert-processor                        | c6cc7b04ab33 | b4ebfb06 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor                        | c6cc7b04ab33 | b4ebfb06 |             | minio-connectivity                                  | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
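Scanning the full healthcheck table by eye is error-prone; non-ok rows can be filtered out instead, so that an empty result means all checks passed. A sketch assuming the column layout above (sample rows are illustrative; pipe real `rdac healthcheck` output instead):

```shell
#!/bin/sh
# Print healthcheck rows whose Status column is not "ok".
# Assumes the table layout printed by `rdac healthcheck`; the sample
# rows below are illustrative, including a hypothetical failure.
sample='| rda_app | alert-ingester | 7f75047e9e44 | daa8c414 | | service-status | ok     | |
| rda_app | alert-processor | c6cc7b04ab33 | b4ebfb06 | | minio-connectivity | failed | no route |'

failures=$(printf '%s\n' "$sample" | awk -F'|' '
  NF >= 9 {
    s = $8; gsub(/ /, "", s)     # Status column, unpadded
    if (s != "" && s != "ok") print
  }')
printf '%s\n' "$failures"
```

Any row printed identifies the pod and health parameter that needs attention before continuing with the upgrade.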

1.3.2.1 Upgrade RDAF Platform Services to 3.4.2.1

Step-1: Run the below command to initiate upgrading RDAF Platform Scheduler, API Server and Portal UI Services

rdafk8s platform upgrade --tag 3.4.2.1 --service rda-scheduler --service rda-api-server --service rda-portal

Note

After upgrading the above-mentioned services, use the following commands to verify that the services are up and running.

As the upgrade procedure is non-disruptive, it puts the currently running PODs into the Terminating state and starts the newer version PODs in the Pending state.

Step-2: Run the below command to check the status of the existing and newer PODs, and make sure at least one instance of each Platform service is in the Terminating state.

kubectl get pods -n rda-fabric -l app_category=rdaf-platform

Step-3: Run the below command to put all Terminating RDAF platform service PODs into maintenance mode. It lists the POD IDs of the platform services that need to be put into maintenance mode, along with the rdac maintenance command to run.

python maint_command.py

Step-4: Copy and paste the rdac maintenance command as shown below.

rdac maintenance start --ids <comma-separated-list-of-platform-pod-ids>
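If the helper script is unavailable, the comma-separated ID list can also be assembled by hand from the pod IDs shown in the status table. A small sketch with hypothetical IDs:

```shell
#!/bin/sh
# Join pod IDs (hypothetical example values) into the comma-separated
# form expected by `rdac maintenance start --ids`.
set -- f8ba1938 e821e1b1 610519ee
ids=$(IFS=,; printf '%s' "$*")
echo "rdac maintenance start --ids $ids"
```

The `$*` expansion joins the positional parameters using the first character of IFS, which is set to a comma inside the command substitution only.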

Step-5: Run the below command to verify the maintenance mode status of the RDAF platform services.

rdac pods --show_maintenance | grep False

Step-6: Run the below command to delete the Terminating RDAF platform service PODs

for i in `kubectl get pods -n rda-fabric -l app_category=rdaf-platform | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done
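The selection half of the loop above can be sanity-checked on its own: given `kubectl get pods` output, print only the pods whose STATUS is Terminating. A sketch over illustrative sample data (the real loop then force-deletes each printed name):

```shell
#!/bin/sh
# Print names of pods whose STATUS column reads Terminating.
# Matching on column 3 is slightly stricter than a bare grep, which
# would also match "Terminating" appearing elsewhere in a line.
# Sample pod names are hypothetical; pipe real `kubectl get pods` output.
sample='NAME                        READY   STATUS        RESTARTS   AGE
rda-api-server-7c9f-abcde   1/1     Terminating   0          21d
rda-api-server-8d1a-fghij   1/1     Running       0          2m
rda-scheduler-5b2c-klmno    1/1     Terminating   0          21d'

doomed=$(printf '%s\n' "$sample" | awk '$3 == "Terminating" {print $1}')
printf '%s\n' "$doomed"
```

Each printed name corresponds to a `kubectl delete pod <name> -n rda-fabric --force` invocation in the loop above.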

Note

Wait for 120 seconds and repeat the above steps (Step-2 to Step-6) for the rest of the RDAF Platform's Scheduler and API-Server service PODs.

Please wait until all of the new platform services are in the Up state, then run the below command to verify their status and make sure the below services are running with version 3.4.2.1.

  • rda-api-server

  • rda-scheduler

  • rda-portal

rdafk8s platform status

+--------------------+----------------+-----------------+--------------+---------+
| Name               | Host           | Status          | Container Id | Tag     |
+--------------------+----------------+-----------------+--------------+---------+
| rda-api-server     | 192.168.131.44 | Up 1 Days ago   | fcb83e98940b | 3.4.2.1 |
| rda-api-server     | 192.168.131.45 | Up 1 Days ago   | d9a445a35874 | 3.4.2.1 |
| rda-registry       | 192.168.131.44 | Up 3 Weeks ago  | 5596787a1fc4 | 3.4.2   |
| rda-registry       | 192.168.131.45 | Up 3 Weeks ago  | 320f65e42b38 | 3.4.2   |
| rda-identity       | 192.168.131.45 | Up 3 Weeks ago  | 50d255d651a7 | 3.4.2   |
| rda-identity       | 192.168.131.47 | Up 3 Weeks ago  | 949dc2dd1605 | 3.4.2   |
| rda-fsm            | 192.168.131.47 | Up 3 Weeks ago  | ecefe1e50471 | 3.4.2   |
| rda-fsm            | 192.168.131.46 | Up 3 Weeks ago  | c2d5841a14c2 | 3.4.2   |
| rda-asm            | 192.168.131.45 | Up 3 Weeks ago  | 9a5d1bec238d | 3.4.2   |
| rda-asm            | 192.168.131.44 | Up 3 Weeks ago  | 9de8da6fbd97 | 3.4.2   |
| rda-chat-helper    | 192.168.131.45 | Up 3 Weeks ago  | a111f3a11efb | 3.4.2   |
| rda-chat-helper    | 192.168.131.46 | Up 3 Weeks ago  | 3d05bd971678 | 3.4.2   |
| rda-access-manager | 192.168.131.46 | Up 3 Weeks ago  | 92580c66be5f | 3.4.2   |
| rda-access-manager | 192.168.131.47 | Up 3 Weeks ago  | 806b51bf4624 | 3.4.2   |
| rda-resource-      | 192.168.131.45 | Up 3 Weeks ago  | 278ad57f8d14 | 3.4.2   |
| manager            |                |                 |              |         |
| rda-resource-      | 192.168.131.44 | Up 3 Weeks ago  | c1969894eedc | 3.4.2   |
| manager            |                |                 |              |         |
| rda-scheduler      | 192.168.131.45 | Up 1 Days ago   | 627139281c63 | 3.4.2.1 |
| rda-scheduler      | 192.168.131.44 | Up 1 Days ago   | e43ae3785fae | 3.4.2.1 |
| rda-portal-backend | 192.168.131.46 | Up 16 Hours ago | 0ff2e3b7220f | 3.4.2.1 |
| rda-portal-        | 192.168.131.46 | Up 16 Hours ago | 122b014f0714 | 3.4.2.1 |
| frontend           |                |                 |              |         |
| rda-portal-backend | 192.168.131.45 | Up 16 Hours ago | 5917b9cfdd07 | 3.4.2.1 |
| rda-portal-        | 192.168.131.45 | Up 16 Hours ago | 347380e8bac1 | 3.4.2.1 |
| frontend           |                |                 |              |         |
+--------------------+----------------+-----------------+--------------+---------+
Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_infra | api-server                             | rda-api-serv | 3c36575d |             | service-status                                      | ok       |                                                             |
| rda_infra | api-server                             | rda-api-serv | 3c36575d |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | api-server                             | rda-api-serv | 1fd1778b |             | service-status                                      | ok       |                                                             |
| rda_infra | api-server                             | rda-api-serv | 1fd1778b |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | asm                                    | rda-asm-f6b8 | 39a53ac4 |             | service-status                                      | ok       |                                                             |
| rda_infra | asm                                    | rda-asm-f6b8 | 39a53ac4 |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | asm                                    | rda-asm-f6b8 | 199a31d2 |             | service-status                                      | ok       |                                                             |
| rda_infra | asm                                    | rda-asm-f6b8 | 199a31d2 |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | ee7565aa |             | service-status                                      | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | ee7565aa |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | ee7565aa |             | DB-connectivity                                     | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | ee7565aa |             | scheduler-webserver-connectivity                    | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | 779a624d |             | service-status                                      | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | 779a624d |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | 779a624d |             | DB-connectivity                                     | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+

Run the below command to initiate upgrading RDAF Platform Scheduler Services with zero downtime

rdaf platform upgrade --tag 3.4.2.1 --rolling-upgrade --service rda_scheduler --timeout 10

Run the below command to initiate upgrading RDAF Platform Scheduler services without the zero-downtime (rolling-upgrade) option

rdaf platform upgrade --tag 3.4.2.1  --service rda_scheduler

Run the below command to initiate upgrading RDAF Platform API Server Services with zero downtime

rdaf platform upgrade --tag 3.4.2.1 --rolling-upgrade --service rda_api_server --timeout 10

Run the below command to initiate upgrading RDAF Platform API Server services without the zero-downtime (rolling-upgrade) option

rdaf platform upgrade --tag 3.4.2.1  --service rda_api_server

Run the below command to initiate upgrading the RDAF Platform Portal backend UI service with zero downtime

rdaf platform upgrade --tag 3.4.2.1 --rolling-upgrade --service portal-backend --timeout 10

Run the below command to initiate upgrading the RDAF Platform Portal backend UI service without the zero-downtime (rolling-upgrade) option

rdaf platform upgrade --tag 3.4.2.1 --service portal-backend

Run the below command to initiate upgrading the RDAF Platform Portal frontend UI service with zero downtime

rdaf platform upgrade --tag 3.4.2.1 --rolling-upgrade --service portal-frontend --timeout 10

Run the below command to initiate upgrading the RDAF Platform Portal frontend UI service without the zero-downtime (rolling-upgrade) option

rdaf platform upgrade --tag 3.4.2.1 --service portal-frontend

Note

After upgrading the above-mentioned services, use the following commands to verify that the services are up and running.

Please wait until all of the new platform services are in the Up state, then run the below command to verify their status and make sure the below services are running with version 3.4.2.1.

  • rda_api_server

  • rda_scheduler

  • portal UI

rdaf platform status

+--------------------+-----------------+---------------+--------------+---------+
| Name               | Host            | Status        | Container Id | Tag     |
+--------------------+-----------------+---------------+--------------+---------+
| rda_api_server     | 192.168.109.50  | Up 21 hours   | 6d686f886d94 | 3.4.2.1 |
| rda_api_server     | 192.168.109.51  | Up 21 hours   | 4158deb9a167 | 3.4.2.1 |
| rda_registry       | 192.168.109.50  | Up 7 days     | cb664cf49d0d | 3.4.2   |
| rda_registry       | 192.168.109.51  | Up 7 days     | e8e6007645e9 | 3.4.2   |
| rda_scheduler      | 192.168.109.50  | Up 21 hours   | 35969c5b62d1 | 3.4.2.1 |
| rda_scheduler      | 192.168.109.51  | Up 21 hours   | 7ec72ad12f07 | 3.4.2.1 |
| portal-backend     | 192.168.125.217 | Up 42 minutes | ddf9709e178b | 3.4.2.1 |
| portal-frontend    | 192.168.125.217 | Up 39 minutes | dfe0077ea955 | 3.4.2.1 |
+--------------------+-----------------+---------------+--------------+---------+
Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | kafka-connectivity                                  | ok       | Cluster=NjY5Yzk1OTJmN2JhMTFlZQ, Broker=3, Brokers=[1, 2, 3] |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | kafka-connectivity                                  | ok       | Cluster=NjY5Yzk1OTJmN2JhMTFlZQ, Broker=1, Brokers=[1, 2, 3] |
| rda_app   | alert-processor                        | 0ef76607db2d | da0dc802 |             | service-status                                      | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+

1.3.2.2 Upgrade RDAF Platform Services to 3.4.2.2

Step-1: Run the below command to initiate upgrading the RDAF Platform Resource Manager, API Server, and Portal UI services

rdafk8s platform upgrade --tag 3.4.2.2 --service rda-resource-manager --service rda-portal --service rda-api-server

Note

After upgrading the above-mentioned services, use the following commands to verify that the services are up and running.

As the upgrade procedure is non-disruptive, it puts the currently running PODs into the Terminating state and starts the newer version PODs in the Pending state.

Step-2: Run the below command to check the status of the existing and newer PODs, and make sure at least one instance of each Platform service is in the Terminating state.

kubectl get pods -n rda-fabric -l app_category=rdaf-platform

Step-3: Run the below command to put all Terminating RDAF platform service PODs into maintenance mode. It lists the POD IDs of the platform services that need to be put into maintenance mode, along with the rdac maintenance command to run.

python maint_command.py

Step-4: Copy and paste the rdac maintenance command as shown below.

rdac maintenance start --ids <comma-separated-list-of-platform-pod-ids>

Step-5: Run the below command to verify the maintenance mode status of the RDAF platform services.

rdac pods --show_maintenance | grep False

Step-6: Run the below command to delete the Terminating RDAF platform service PODs

for i in `kubectl get pods -n rda-fabric -l app_category=rdaf-platform | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done

Note

Wait for 120 seconds and repeat the above steps (Step-2 to Step-6) for the rest of the RDAF Platform's Resource Manager, API Server, and Portal service PODs.

Please wait until all of the new platform services are in the Up state, then run the below command to verify their status and make sure the below services are running with version 3.4.2.2.

  • rda-resource-manager

  • rda-portal

  • rda-api-server

rdafk8s platform status

+--------------------+----------------+-----------------+--------------+---------+
| Name               | Host           | Status          | Container Id | Tag     |
+--------------------+----------------+-----------------+--------------+---------+
| rda-api-server     | 192.168.131.44 | Up 1 Days ago   | fcb83e98940b | 3.4.2.2 |
| rda-api-server     | 192.168.131.45 | Up 1 Days ago   | d9a445a35874 | 3.4.2.2 |
| rda-registry       | 192.168.131.44 | Up 3 Weeks ago  | 5596787a1fc4 | 3.4.2   |
| rda-registry       | 192.168.131.45 | Up 3 Weeks ago  | 320f65e42b38 | 3.4.2   |
| rda-identity       | 192.168.131.45 | Up 3 Weeks ago  | 50d255d651a7 | 3.4.2   |
| rda-identity       | 192.168.131.47 | Up 3 Weeks ago  | 949dc2dd1605 | 3.4.2   |
| rda-fsm            | 192.168.131.47 | Up 3 Weeks ago  | ecefe1e50471 | 3.4.2   |
| rda-fsm            | 192.168.131.46 | Up 3 Weeks ago  | c2d5841a14c2 | 3.4.2   |
| rda-asm            | 192.168.131.45 | Up 3 Weeks ago  | 9a5d1bec238d | 3.4.2   |
| rda-asm            | 192.168.131.44 | Up 3 Weeks ago  | 9de8da6fbd97 | 3.4.2   |
| rda-chat-helper    | 192.168.131.45 | Up 3 Weeks ago  | a111f3a11efb | 3.4.2   |
| rda-chat-helper    | 192.168.131.46 | Up 3 Weeks ago  | 3d05bd971678 | 3.4.2   |
| rda-access-manager | 192.168.131.46 | Up 3 Weeks ago  | 92580c66be5f | 3.4.2   |
| rda-access-manager | 192.168.131.47 | Up 3 Weeks ago  | 806b51bf4624 | 3.4.2   |
| rda-resource-      | 192.168.131.45 | Up 3 Weeks ago  | 278ad57f8d14 | 3.4.2.2 |
| manager            |                |                 |              |         |
| rda-resource-      | 192.168.131.44 | Up 3 Weeks ago  | c1969894eedc | 3.4.2.2 |
| manager            |                |                 |              |         |
| rda-scheduler      | 192.168.131.45 | Up 1 Days ago   | 627139281c63 | 3.4.2.1 |
| rda-scheduler      | 192.168.131.44 | Up 1 Days ago   | e43ae3785fae | 3.4.2.1 |
| rda-portal-backend | 192.168.131.46 | Up 16 Hours ago | 0ff2e3b7220f | 3.4.2.2 |
| rda-portal-        | 192.168.131.46 | Up 16 Hours ago | 122b014f0714 | 3.4.2.2 |
| frontend           |                |                 |              |         |
| rda-portal-backend | 192.168.131.45 | Up 16 Hours ago | 5917b9cfdd07 | 3.4.2.2 |
| rda-portal-        | 192.168.131.45 | Up 16 Hours ago | 347380e8bac1 | 3.4.2.2 |
| frontend           |                |                 |              |         |
+--------------------+----------------+-----------------+--------------+---------+
Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_infra | api-server                             | rda-api-serv | 3c36575d |             | service-status                                      | ok       |                                                             |
| rda_infra | api-server                             | rda-api-serv | 3c36575d |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | api-server                             | rda-api-serv | 1fd1778b |             | service-status                                      | ok       |                                                             |
| rda_infra | api-server                             | rda-api-serv | 1fd1778b |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | asm                                    | rda-asm-f6b8 | 39a53ac4 |             | service-status                                      | ok       |                                                             |
| rda_infra | asm                                    | rda-asm-f6b8 | 39a53ac4 |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | asm                                    | rda-asm-f6b8 | 199a31d2 |             | service-status                                      | ok       |                                                             |
| rda_infra | asm                                    | rda-asm-f6b8 | 199a31d2 |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | ee7565aa |             | service-status                                      | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | ee7565aa |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | ee7565aa |             | DB-connectivity                                     | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | ee7565aa |             | scheduler-webserver-connectivity                    | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | 779a624d |             | service-status                                      | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | 779a624d |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | scheduler                              | rda-schedule | 779a624d |             | DB-connectivity                                     | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
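The healthcheck table can also be scanned mechanically for non-ok rows. The following is a minimal sketch: the here-string is a captured sample standing in for live `rdac healthcheck` output (the "failed" row is fabricated for illustration), and the awk column position is assumed from the table layout above.

```shell
# Sketch: flag any healthcheck row whose Status column is not "ok".
# The here-string stands in for live `rdac healthcheck` output; the
# "failed" row is fabricated for illustration.
sample='| rda_infra | api-server | rda-api-serv | 3c36575d |  | service-status | ok |  |
| rda_infra | scheduler  | rda-schedule | ee7565aa |  | DB-connectivity | failed | connection refused |'

failures=$(printf '%s\n' "$sample" | awk -F'|' '
  # Status is the 7th table column, i.e. field 8 when split on "|"
  NF >= 9 { s=$8; gsub(/ /, "", s); if (s != "" && s != "ok") print $0 }')

if [ -n "$failures" ]; then
  echo "healthcheck reported failures:"
  printf '%s\n' "$failures"
else
  echo "all health parameters ok"
fi
```

To use it against a live cluster, pipe `rdac healthcheck` into the same awk instead of the sample here-string.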

Run the below command to initiate upgrading the RDAF Platform cfx-rda-resource-manager service with zero downtime

rdaf platform upgrade --tag 3.4.2.2 --rolling-upgrade --service cfx-rda-resource-manager --timeout 10

Run the below command to initiate upgrading the RDAF Platform cfx-rda-resource-manager service without zero downtime

rdaf platform upgrade --tag 3.4.2.2  --service cfx-rda-resource-manager

Run the below command to initiate upgrading the RDAF Platform Portal UI backend service with zero downtime

rdaf platform upgrade --tag 3.4.2.2 --rolling-upgrade --service portal-backend --timeout 10

Run the below command to initiate upgrading the RDAF Platform Portal UI backend service without zero downtime

rdaf platform upgrade --tag 3.4.2.2 --service portal-backend

Run the below command to initiate upgrading the RDAF Platform Portal UI frontend service with zero downtime

rdaf platform upgrade --tag 3.4.2.2 --rolling-upgrade --service portal-frontend --timeout 10

Run the below command to initiate upgrading the RDAF Platform Portal UI frontend service without zero downtime

rdaf platform upgrade --tag 3.4.2.2 --service portal-frontend

Run the below command to initiate upgrading the RDAF Platform API Server service with zero downtime

rdaf platform upgrade --tag 3.4.2.2 --rolling-upgrade --service rda_api_server --timeout 10

Run the below command to initiate upgrading the RDAF Platform API Server service without zero downtime

rdaf platform upgrade --tag 3.4.2.2 --service rda_api_server
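The selective upgrades above can also be driven from a single loop. This is a sketch printed as a dry run (each command is echoed rather than executed, using the same flags shown above); remove the `echo` to run them for real.

```shell
# Dry run: print the rolling-upgrade command for each selective platform
# service from the steps above. Remove `echo` to actually execute them.
TAG="3.4.2.2"
for svc in cfx-rda-resource-manager portal-backend portal-frontend rda_api_server; do
  echo rdaf platform upgrade --tag "$TAG" --rolling-upgrade --service "$svc" --timeout 10
done
```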

Note

After upgrading the above-mentioned services, use the following commands to verify that they are up and running.

Please wait until all of the new platform services are in the Up state, then run the below command to verify their status and make sure the services below are running with version 3.4.2.2.

  • cfx_rda_resource_manager

  • portal UI

  • rda_api_server

rdaf platform status

+--------------------+-------------------+---------------+--------------+---------+
| Name               | Host              | Status        | Container Id | Tag     |
+--------------------+-------------------+---------------+--------------+---------+
| cfx-rda-access-    | 192.168.108.50    | Up 10 days    | 0b19b7ac1d08 | 3.4.2   |
| manager            |                   |               |              |         |
| cfx-rda-access-    | 192.168.108.56    | Up 10 days    | dd2c94418849 | 3.4.2   |
| manager            |                   |               |              |         |
| cfx-rda-resource-  | 192.168.108.50    | Up 3 days     | 84c050726064 | 3.4.2.2 |
| manager            |                   |               |              |         |
| cfx-rda-resource-  | 192.168.108.56    | Up 3 days     | 7576e8c241de | 3.4.2.2 |
| manager            |                   |               |              |         |
| cfx-rda-user-      | 192.168.108.50    | Up 10 days    | 881b1bb15151 | 3.4.2   |
| preferences        |                   |               |              |         |
| cfx-rda-user-      | 192.168.108.56    | Up 10 days    | 6b4d374438cf | 3.4.2   |
| preferences        |                   |               |              |         |
| portal-backend     | 192.168.108.50    | Up 3 days     | feca2d831f1b | 3.4.2.2 |
| portal-backend     | 192.168.108.56    | Up 3 days     | cd6e9430f0f7 | 3.4.2.2 |
| portal-frontend    | 192.168.108.50    | Up 3 days     | ac86af402e6c | 3.4.2.2 |
| portal-frontend    | 192.168.108.56    | Up 3 days     | 56b011423bef | 3.4.2.2 |
| rda_api_server     | 192.168.108.50    | Up 2 days     | fdb012433bag | 3.4.2.2 |
| rda_api_server     | 192.168.108.56    | Up 2 days     | cgb051273feg | 3.4.2.2 |
+--------------------+-------------------+---------------+--------------+---------+
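Rather than eyeballing the Tag column, the status table can be checked mechanically. The sketch below parses a captured excerpt of the table above (pipe the real `rdaf platform status` output into the same awk); the Tag position as the 5th table column is an assumption taken from the layout shown.

```shell
# Sketch: verify the upgraded services report the expected tag.
# The here-string is an excerpt of the status table above; Tag is the
# 5th table column, i.e. field 6 when split on "|".
expected="3.4.2.2"
status='| cfx-rda-resource-  | 192.168.108.50 | Up 3 days | 84c050726064 | 3.4.2.2 |
| portal-backend     | 192.168.108.50 | Up 3 days | feca2d831f1b | 3.4.2.2 |
| rda_api_server     | 192.168.108.50 | Up 2 days | fdb012433bag | 3.4.2.2 |'

stale=$(printf '%s\n' "$status" | awk -F'|' -v want="$expected" '
  { t=$6; gsub(/ /, "", t); if (t != "" && t != want) print $0 }')

test -z "$stale" && echo "all listed services report $expected"
```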
Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 2a80917206c7 | cc59bb23 |             | kafka-connectivity                                  | ok       | Cluster=NjY5Yzk1OTJmN2JhMTFlZQ, Broker=3, Brokers=[1, 2, 3] |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | b9ffd2248f8c | ab21c027 |             | kafka-connectivity                                  | ok       | Cluster=NjY5Yzk1OTJmN2JhMTFlZQ, Broker=1, Brokers=[1, 2, 3] |
| rda_app   | alert-processor                        | 0ef76607db2d | da0dc802 |             | service-status                                      | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+

1.3.3 Upgrade rdac CLI

For setups deployed with the rdafk8s CLI, run the below command to upgrade the rdac CLI

rdafk8s rdac_cli upgrade --tag 3.4.2.1

For setups deployed with the rdaf CLI, run the below command to upgrade the rdac CLI

rdaf rdac_cli upgrade --tag 3.4.2.1

1.3.4 Upgrade RDA Worker Services

Step-1: Please run the below command to initiate upgrading the RDA Worker service PODs.

rdafk8s worker upgrade --tag 3.4.2

Step-2: Run the below command to check the status of the existing and newer PODs and make sure at least one instance of each RDA Worker service POD is in the Terminating state.

kubectl get pods -n rda-fabric -l app_component=rda-worker
NAME                          READY   STATUS    RESTARTS   AGE
rda-worker-5b5cfcf8f7-bhqsl   1/1     Running   0          11h
rda-worker-5b5cfcf8f7-tlns2   1/1     Running   0          11h
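The Terminating check can be automated against captured `kubectl get pods` output. A sketch, where the second pod row is a hypothetical example of a POD mid-termination (STATUS is the 3rd column of the standard `kubectl get pods` listing):

```shell
# Sketch: list rda-worker PODs whose STATUS column reads Terminating.
# The here-string stands in for live `kubectl get pods` output; the
# second row is a hypothetical pod shown mid-termination.
pods='rda-worker-5b5cfcf8f7-bhqsl   1/1     Running       0          11h
rda-worker-7c9d0a1b2c-x7k2p   1/1     Terminating   0          30s'

terminating=$(printf '%s\n' "$pods" | awk '$3 == "Terminating" { print $1 }')
echo "terminating pods: ${terminating:-none}"
```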

Step-3: Run the below command to prepare putting all Terminating RDAF worker service PODs into maintenance mode. It lists the POD IDs of the RDA worker services along with the rdac maintenance command needed to put them into maintenance mode.

python maint_command.py

Step-4: Copy and paste the rdac maintenance command as shown below.

rdac maintenance start --ids <comma-separated-list-of-platform-pod-ids>
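maint_command.py assembles this ID list for you. Purely as an illustration of what it produces, here is a sketch that extracts worker pod IDs (assumed to be the 5th table column of `rdac pods` output, per the samples elsewhere in this guide) and joins them with commas; the here-string stands in for `rdac pods | grep worker`.

```shell
# Sketch: build the comma-separated --ids argument from captured
# `rdac pods | grep worker` output (pod ID assumed to be the 5th
# table column, i.e. field 6 when split on "|").
pods='| Infra | worker | True | 6eff605e72c4 | a318f394 | rda-site-01 | 13:45:13 | 4 | 31.21 | 0 | 0 |
| Infra | worker | True | ae7244d0d10a | 554c2cd8 | rda-site-01 | 13:40:40 | 4 | 31.21 | 0 | 0 |'

ids=$(printf '%s\n' "$pods" | awk -F'|' '
  { id=$6; gsub(/ /, "", id); if (id != "") { printf "%s%s", sep, id; sep="," } }')
echo "rdac maintenance start --ids $ids"
```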

Step-5: Run the below command to verify the maintenance mode status of the RDAF worker services.

rdac pods --show_maintenance | grep False

Step-6: Run the below command to delete the Terminating RDAF worker service PODs

for i in $(kubectl get pods -n rda-fabric -l app_component=rda-worker | grep 'Terminating' | awk '{print $1}'); do kubectl delete pod "$i" -n rda-fabric --force; done

Note

Wait 120 seconds between each RDAF worker service upgrade, repeating Step-2 through Step-6 for the rest of the RDAF worker service PODs.

Step-7: Please wait for 120 seconds to let the newer version of RDA Worker service PODs join the RDA Fabric appropriately. Run the below commands to verify the status of the newer RDA Worker service PODs.

rdac pods | grep rda-worker
rdafk8s worker status
+------------+----------------+-----------------+--------------+-------+
| Name       | Host           | Status          | Container Id | Tag   |
+------------+----------------+-----------------+--------------+-------+
| rda-worker | 192.168.108.18 | Up 13 Hours ago | 14376666ac3b | 3.4.2 |
| rda-worker | 192.168.108.19 | Up 13 Hours ago | 3f55a7c00437 | 3.4.2 |
+------------+----------------+-----------------+--------------+-------+
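As an alternative to a fixed 120-second wait, `kubectl wait` can block until the worker PODs report Ready. Shown here as a dry run (the command is echoed, not executed); remove the `echo` to run it against the cluster. The 180s timeout is an arbitrary choice, not a value from this guide.

```shell
# Dry run: print a kubectl wait invocation that blocks until every
# rda-worker POD is Ready (arbitrary 180s timeout). Remove `echo` to execute.
cmd='kubectl wait --for=condition=Ready pod -l app_component=rda-worker -n rda-fabric --timeout=180s'
echo "$cmd"
```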

Step-8: Run the below command to check that all RDA Worker services have an ok status and do not report any failure messages.

rdac healthcheck
  • Upgrade RDA Worker Services

Please run the below command to initiate upgrading the RDA Worker Service with zero downtime

rdaf worker upgrade --tag 3.4.2 --rolling-upgrade --timeout 10

Note

The timeout value <10> in the above command is specified in seconds.

Note

The rolling-upgrade option upgrades the Worker services running in high-availability mode on one VM at a time in sequence. It completes the upgrade of Worker services running on VM-1 before upgrading them on VM-2, followed by VM-3, and so on.

After completing the Worker services upgrade on all VMs, it will ask for user confirmation to delete the older version Worker service PODs.
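The per-VM sequence described above can be pictured as the following dry-run sketch (hypothetical VM names; the actual orchestration happens inside the rdaf CLI, as the log excerpt below shows):

```shell
# Dry-run sketch of the per-VM rolling-upgrade sequence (hypothetical VMs;
# the real sequencing is internal to the rdaf CLI).
steps=$(for vm in vm-1 vm-2 vm-3; do
  echo "[$vm] move running worker pod into maintenance mode"
  echo "[$vm] upgrade the worker container to the new tag"
  echo "[$vm] wait for the upgraded worker to rejoin the RDA Fabric"
done)
printf '%s\n' "$steps"
echo "all VMs done: confirm deletion of the older version worker PODs"
```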

Digest: sha256:e33ca4a38eda639a4d252fa088d320f9f0e6a782adbea92f6c3682d4f4dbd0da
Status: Downloaded newer image for 192.168.133.95:5000/ubuntu-rda-worker-all:3.4.2
192.168.133.95:5000/ubuntu-rda-worker-all:3.4.2

2024-05-31 04:29:53,948 [rdaf.component.worker] INFO     - Collecting worker details for rolling upgrade
2024-05-31 04:29:56,463 [rdaf.component.worker] INFO     - Rolling upgrade worker on 192.168.133.96
+----------+----------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------+---------+---------+--------------+-------------+------------+
| 1f25b8b5 | worker   | 3.4     | 2:41:47 | 2e383f8684ca | None        | True       |
+----------+----------+---------+---------+--------------+-------------+------------+
Continue moving above pod to maintenance mode? [yes/no]: yes
2024-05-31 04:30:58,391 [rdaf.component.worker] INFO     - Initiating maintenance mode for pod 1f25b8b5
2024-05-31 04:31:03,260 [rdaf.component.worker] INFO     - Waiting for worker to be moved to maintenance.
2024-05-31 04:31:15,660 [rdaf.component.worker] INFO     - Following worker container is in maintenance mode
+----------+----------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------+---------+---------+--------------+-------------+------------+
| 1f25b8b5 | worker   | 3.4     | 2:43:08 | 2e383f8684ca | maintenance | False      |
+----------+----------+---------+---------+--------------+-------------+------------+
2024-05-31 04:31:15,661 [rdaf.component.worker] INFO     - Waiting for timeout of 10 seconds.
2024-05-31 04:31:29,475 [rdaf.component.worker] INFO     - Upgrading worker on host 192.168.133.96
[+] Running 1/1
⠿ Container deployment-scripts-rda_worker-1  Started                     11.3s

2024-05-31 04:31:44,459 [rdaf.component.worker] INFO     - Waiting for upgraded worker to join rdac pods.
2024-05-31 04:31:47,124 [rdaf.component.worker] INFO     - Waiting for worker to be up and running... retry 1
2024-05-31 04:31:59,583 [rdaf.component.worker] INFO     - Waiting for worker to be up and running... retry 2
2024-05-31 04:32:12,025 [rdaf.component.worker] INFO     - Worker is up and running

Please run the below command to initiate upgrading the RDA Worker Service without zero downtime

rdaf worker upgrade --tag 3.4.2

Please wait for 120 seconds to let the newer version of RDA Worker service containers join the RDA Fabric appropriately. Run the below commands to verify the status of the newer RDA Worker service containers.

rdac pods | grep worker
| Infra | worker      | True        | 6eff605e72c4 | a318f394 | rda-site-01 | 13:45:13 |      4 |        31.21 | 0             | 0            |
| Infra | worker      | True        | ae7244d0d10a | 554c2cd8 | rda-site-01 | 13:40:40 |      4 |        31.21 | 0             | 0            |

rdaf worker status

+------------+----------------+------------+--------------+-------+
| Name       | Host           | Status     | Container Id | Tag   |
+------------+----------------+------------+--------------+-------+
| rda_worker | 192.168.133.96 | Up 2 hours | 03061dd8dfcc | 3.4.2 |
| rda_worker | 192.168.133.92 | Up 2 hours | cbb31b875cf6 | 3.4.2 |
+------------+----------------+------------+--------------+-------+
Run the below command to check that all RDA Worker services have an ok status and do not report any failure messages.

rdac healthcheck

1.3.4.1 Upgrade RDA Worker Services to 3.4.2.1

Step-1: Please run the below command to initiate upgrading the RDA Worker services

rdafk8s worker upgrade --tag 3.4.2.1

Step-2: Run the below command to check the status of the existing and newer PODs and make sure at least one instance of each RDA Worker service POD is in the Terminating state.

kubectl get pods -n rda-fabric -l app_component=rda-worker
NAME                          READY   STATUS    RESTARTS   AGE
rda-worker-5b5cfcf8f7-bhqsl   1/1     Running   0          11h
rda-worker-5b5cfcf8f7-tlns2   1/1     Running   0          11h

Step-3: Run the below command to prepare putting all Terminating RDAF worker service PODs into maintenance mode. It lists the POD IDs of the RDA worker services along with the rdac maintenance command needed to put them into maintenance mode.

python maint_command.py

Step-4: Copy and paste the rdac maintenance command as shown below.

rdac maintenance start --ids <comma-separated-list-of-platform-pod-ids>

Step-5: Run the below command to verify the maintenance mode status of the RDAF worker services.

rdac pods --show_maintenance | grep False

Step-6: Run the below command to delete the Terminating RDAF worker service PODs

for i in $(kubectl get pods -n rda-fabric -l app_component=rda-worker | grep 'Terminating' | awk '{print $1}'); do kubectl delete pod "$i" -n rda-fabric --force; done

Note

Wait 120 seconds between each RDAF worker service upgrade, repeating Step-2 through Step-6 for the rest of the RDAF worker service PODs.

Step-7: Please wait for 120 seconds to let the newer version of RDA Worker service PODs join the RDA Fabric appropriately. Run the below commands to verify the status of the newer RDA Worker service PODs.

rdac pods | grep rda-worker
rdafk8s worker status
+------------+----------------+---------------+--------------+---------+
| Name       | Host           | Status        | Container Id | Tag     |
+------------+----------------+---------------+--------------+---------+
| rda-worker | 192.168.131.50 | Up 1 Days ago | 4643695ccf98 | 3.4.2.1 |
| rda-worker | 192.168.131.44 | Up 1 Days ago | 86a017e55cab | 3.4.2.1 |
+------------+----------------+---------------+--------------+---------+

Please run the below command to initiate upgrading the RDA Worker Service with zero downtime

rdaf worker upgrade --tag 3.4.2.1 --rolling-upgrade --timeout 10

Please run the below command to initiate upgrading the RDA Worker Service without zero downtime

rdaf worker upgrade --tag 3.4.2.1

Please run the below command to check the Worker status

rdaf worker status
+------------+----------------+-------------+--------------+---------+
| Name       | Host           | Status      | Container Id | Tag     |
+------------+----------------+-------------+--------------+---------+
| rda_worker | 192.168.109.53 | Up 32 hours | 937a308d5402 | 3.4.2.1 |
| rda_worker | 192.168.109.54 | Up 32 hours | f6b40eb421f2 | 3.4.2.1 |
+------------+----------------+-------------+--------------+---------+

1.3.5 Upgrade OIA Application Services

Step-1: Run the below commands to initiate upgrading RDAF OIA Application services

rdafk8s app upgrade OIA --tag 7.4.2

Step-2: Run the below command to check the status of the newly upgraded PODs.

kubectl get pods -n rda-fabric -l app_name=oia

Step-3: Run the below command to prepare putting all Terminating OIA application service PODs into maintenance mode. It lists the POD IDs of the OIA application services along with the rdac maintenance command needed to put them into maintenance mode.

python maint_command.py

Step-4: Copy and paste the rdac maintenance command as shown below.

rdac maintenance start --ids <comma-separated-list-of-oia-app-pod-ids>

Step-5: Run the below command to verify the maintenance mode status of the OIA application services.

rdac pods --show_maintenance | grep False

Step-6: Run the below command to delete the Terminating OIA application service PODs

for i in $(kubectl get pods -n rda-fabric -l app_name=oia | grep 'Terminating' | awk '{print $1}'); do kubectl delete pod "$i" -n rda-fabric --force; done
kubectl get pods -n rda-fabric -l app_name=oia

Note

Wait 120 seconds, then repeat Step-2 through Step-6 for the rest of the OIA application service PODs.

Please wait until all of the new OIA application service PODs are in the Running state, then run the below command to verify their status and make sure they are running with version 7.4.2.

rdafk8s app status

+--------------------+----------------+-----------------+--------------+-------+
| Name               | Host           | Status          | Container Id | Tag   |
+--------------------+----------------+-----------------+--------------+-------+
| rda-alert-ingester | 192.168.108.17 | Up 13 Hours ago | cfc71955cd04 | 7.4.2 |
| rda-alert-ingester | 192.168.108.20 | Up 13 Hours ago | 4eb3e9cf918d | 7.4.2 |
| rda-alert-         | 192.168.108.19 | Up 13 Hours ago | 36f523e5e9e7 | 7.4.2 |
| processor          |                |                 |              |       |
| rda-alert-         | 192.168.108.20 | Up 13 Hours ago | 7826db3cd05b | 7.4.2 |
| processor          |                |                 |              |       |
| rda-alert-         | 192.168.108.19 | Up 13 Hours ago | 0b5f5328789e | 7.4.2 |
| processor-         |                |                 |              |       |
| companion          |                |                 |              |       |
| rda-alert-         | 192.168.108.18 | Up 13 Hours ago | 8bca7e8008c1 | 7.4.2 |
| processor-         |                |                 |              |       |
| companion          |                |                 |              |       |
| rda-app-controller | 192.168.108.19 | Up 13 Hours ago | d3b21ae01f09 | 7.4.2 |
| rda-app-controller | 192.168.108.18 | Up 13 Hours ago | 52ac29c6f1d7 | 7.4.2 |
| rda-collaboration  | 192.168.108.20 | Up 13 Hours ago | 5af92cec0b38 | 7.4.2 |
| rda-irm-service    | 192.168.108.20 | Up 13 Hours ago | bf67882b1ce8 | 7.4.2 |
| rda-irm-service    | 192.168.108.17 | Up 13 Hours ago | eecbefa1ad01 | 7.4.2 |
+--------------------+----------------+-----------------+--------------+-------+
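The same mechanical tag check used for the platform services applies here. A sketch against a captured excerpt of the table above (pipe the real `rdafk8s app status` output into the same awk; the Tag position as the 5th table column is assumed from the layout shown):

```shell
# Sketch: flag any OIA service row whose Tag column is not 7.4.2.
# The here-string is an excerpt of the `rdafk8s app status` table above;
# Tag is the 5th table column, i.e. field 6 when split on "|".
app_status='| rda-alert-ingester | 192.168.108.17 | Up 13 Hours ago | cfc71955cd04 | 7.4.2 |
| rda-irm-service    | 192.168.108.20 | Up 13 Hours ago | bf67882b1ce8 | 7.4.2 |'

stale_oia=$(printf '%s\n' "$app_status" | awk -F'|' '
  { t=$6; gsub(/ /, "", t); if (t != "" && t != "7.4.2") print $0 }')

test -z "$stale_oia" && echo "all listed OIA services report 7.4.2"
```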
Run the below command to verify that all OIA application services are up and running.

rdac pods
+-------+----------------------------------------+-------------+----------------+----------+-------------+---------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host           | ID       | Site        | Age     |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+----------------+----------+-------------+---------+--------+--------------+---------------+--------------|
| App   | alert-ingester                         | True        | rda-alert-inge | 7861bd4f |             | 4:20:52 |      8 |        31.33 |               |              |
| App   | alert-ingester                         | True        | rda-alert-inge | 4abc521f |             | 4:20:52 |      8 |        31.33 |               |              |
| App   | alert-processor                        | True        | rda-alert-proc | 9bf94e67 |             | 4:20:50 |      8 |        31.33 |               |              |
| App   | alert-processor                        | True        | rda-alert-proc | 4e679139 |             | 4:20:48 |      8 |        31.33 |               |              |
| App   | alert-processor-companion              | True        | rda-alert-proc | 745dfbb9 |             | 4:20:39 |      8 |        31.33 |               |              |
| App   | alert-processor-companion              | True        | rda-alert-proc | 02f6bce0 |             | 4:20:41 |      8 |        31.33 |               |              |
| App   | asset-dependency                       | True        | rda-asset-depe | fc6c7a60 |             | 4:28:00 |      8 |        31.33 |               |              |
| App   | asset-dependency                       | True        | rda-asset-depe | d3ca4c11 |             | 4:27:07 |      8 |        31.33 |               |              |
| App   | authenticator                          | True        | rda-identity-6 | 4cd59d9c |             | 4:27:01 |      8 |        31.33 |               |              |
| App   | authenticator                          | True        | rda-identity-6 | 174298c3 |             | 4:25:53 |      8 |        31.33 |               |              |
| App   | cfx-app-controller                     | True        | rda-app-contro | 4d923832 |             | 4:20:42 |      8 |        31.33 |               |              |
| App   | cfx-app-controller                     | True        | rda-app-contro | b16deafa |             | 4:20:25 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | rda-access-man | 09d1fada |             | 4:27:56 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | rda-access-man | e0af2bcc |             | 4:27:54 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | rda-collaborat | 9e7f7bcb |             | 4:20:31 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | rda-collaborat | 38db5386 |             | 4:20:25 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-file-browser         | True        | rda-file-brows | 589e18f8 |             | 4:20:20 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-file-browser         | True        | rda-file-brows | 853545f8 |             | 4:19:59 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-irm_service          | True        | rda-irm-servic | d17f8dcd |             | 4:20:06 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-irm_service          | True        | rda-irm-servic | 44decaa7 | *leader*    | 4:19:41 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-notification-service | True        | rda-notificati | 74e58855 |             | 4:20:14 |      8 |        31.33 |               |              |
+-------+----------------------------------------+-------------+----------------+----------+-------------+---------+--------+--------------+---------------+--------------+

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | rda-alert-in | 4abc521f |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 4abc521f |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 4abc521f |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | rda-alert-in | 4abc521f |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 4abc521f |             | kafka-connectivity                                  | ok       | Cluster=IrA5ccri7mBeUvhzvrimEg, Broker=0, Brokers=[0, 1, 2] |
| rda_app   | alert-ingester                         | rda-alert-in | 7861bd4f |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 7861bd4f |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 7861bd4f |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | rda-alert-in | 7861bd4f |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 7861bd4f |             | kafka-connectivity                                  | ok       | Cluster=IrA5ccri7mBeUvhzvrimEg, Broker=2, Brokers=[0, 1, 2] |
| rda_app   | alert-processor                        | rda-alert-pr | 4e679139 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | 4e679139 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | 4e679139 |             | service-dependency:cfx-app-controller               | ok       | 2 pod(s) found for cfx-app-controller                       |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+

Run the below command to upgrade the RDA Fabric OIA application services with zero downtime (rolling upgrade):

rdaf app upgrade OIA --tag 7.4.2 --rolling-upgrade --timeout 10

Note

The timeout value <10> in the above command is specified in seconds.

Note

The rolling-upgrade option upgrades the OIA application services running in high-availability mode on one VM at a time in sequence. It completes the upgrade of OIA application services running on VM-1 before upgrading them on VM-2, followed by VM-3, and so on.

After the upgrade completes on all VMs, it asks for user confirmation before deleting the older-version OIA application service PODs.

Digest: sha256:b5b6bca023ab7c53642712bc46d66c812c4a07786a42cf0557d7a8ab335875fa
Status: Downloaded newer image for 192.168.133.95:5000/cfx-rda-alert-processor-companion:7.4.2
192.168.133.95:5000/cfx-rda-alert-processor-companion:7.4.2

2024-05-31 04:40:34,248 [rdaf.component.oia] INFO     - Gathering OIA app container details.
2024-05-31 04:40:35,521 [rdaf.component.oia] INFO     - Gathering rdac pod details.
+----------+--------------------------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type                 | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+--------------------------+---------+---------+--------------+-------------+------------+
| e7f1bcf0 | cfx-app-controller       | 7.4     | 2:48:20 | 6d28a3290f5a | None        | True       |
| 77a4a0f1 | reports-registry         | 7.4     | 2:47:06 | 1b8a23454d90 | None        | True       |
| 6fd122a9 | cfxdimensions-app-       | 7.4     | 2:46:12 | 0e5f72321a17 | None        | True       |
|          | notification-service     |         |         |              |             |            |
| 1995fc04 | cfxdimensions-app-file-  | 7.4     | 2:45:18 | 51e0b6bdca54 | None        | True       |
|          | browser                  |         |         |              |             |            |
| 6e6c894e | configuration-service    | 7.4     | 2:44:24 | e7d05cfea72e | None        | True       |
| 193f2f5c | alert-ingester           | 7.4     | 2:43:29 | aab24eb48b2f | None        | True       |
| 7ca3d7fd | webhook-server           | 7.4     | 2:42:34 | 8a96e952df1b | None        | True       |
| 3446c0ae | smtp-server              | 7.4     | 2:41:37 | 6e66dfcedfc3 | None        | True       |
| 88fe7038 | event-consumer           | 7.4     | 2:40:35 | 05b77a67b396 | None        | True       |
| aab256c9 | alert-processor          | 7.4     | 2:38:57 | 58b3e72ce804 | None        | True       |
| 5fec7a2a | cfxdimensions-app-       | 7.4     | 2:37:33 | 3faa1b12c5af | None        | True       |
|          | irm_service              |         |         |              |             |            |
| 0ee17fe2 | ml-config                | 7.4     | 2:36:37 | ec260ec76453 | None        | True       |
| ecfa7e51 | cfxdimensions-app-       | 7.4     | 1:06:45 | bdc5f619ae2d | None        | True       |
|          | collaboration            |         |         |              |             |            |
| 69462b7a | ingestion-tracker        | 7.4     | 2:34:28 | 1f3c4c83224b | None        | True       |
| 573290ee | alert-processor-         | 7.4     | 2:33:26 | ee17fa3ec008 | None        | True       |
|          | companion                |         |         |              |             |            |
+----------+--------------------------+---------+---------+--------------+-------------+------------+
Continue moving above pods to maintenance mode? [yes/no]: yes
2024-05-31 04:40:42,411 [rdaf.component.oia] INFO     - Initiating Maintenance Mode...
2024-05-31 04:40:47,510 [rdaf.component.oia] INFO     - Waiting for services to be moved to maintenance.
2024-05-31 04:41:10,014 [rdaf.component.oia] INFO     - Following container are in maintenance mode
+----------+--------------------------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type                 | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+--------------------------+---------+---------+--------------+-------------+------------+
| 193f2f5c | alert-ingester           | 7.4     | 2:43:58 | aab24eb48b2f | maintenance | False      |
| aab256c9 | alert-processor          | 7.4     | 2:39:26 | 58b3e72ce804 | maintenance | False      |
| 573290ee | alert-processor-         | 7.4     | 2:33:55 | ee17fa3ec008 | maintenance | False      |
|          | companion                |         |         |              |             |            |
| e7f1bcf0 | cfx-app-controller       | 7.4     | 2:48:49 | 6d28a3290f5a | maintenance | False      |
| ecfa7e51 | cfxdimensions-app-       | 7.4     | 1:07:14 | bdc5f619ae2d | maintenance | False      |
|          | collaboration            |         |         |              |             |            |
| 1995fc04 | cfxdimensions-app-file-  | 7.4     | 2:45:46 | 51e0b6bdca54 | maintenance | False      |
|          | browser                  |         |         |              |             |            |
| 5fec7a2a | cfxdimensions-app-       | 7.4     | 2:38:02 | 3faa1b12c5af | maintenance | False      |
|          | irm_service              |         |         |              |             |            |
| 6fd122a9 | cfxdimensions-app-       | 7.4     | 2:46:41 | 0e5f72321a17 | maintenance | False      |
|          | notification-service     |         |         |              |             |            |
| 6e6c894e | configuration-service    | 7.4     | 2:44:53 | e7d05cfea72e | maintenance | False      |
| 88fe7038 | event-consumer           | 7.4     | 2:41:04 | 05b77a67b396 | maintenance | False      |
| 69462b7a | ingestion-tracker        | 7.4     | 2:34:57 | 1f3c4c83224b | maintenance | False      |
| 0ee17fe2 | ml-config                | 7.4     | 2:37:05 | ec260ec76453 | maintenance | False      |
| 77a4a0f1 | reports-registry         | 7.4     | 2:47:35 | 1b8a23454d90 | maintenance | False      |
| 3446c0ae | smtp-server              | 7.4     | 2:42:06 | 6e66dfcedfc3 | maintenance | False      |
| 7ca3d7fd | webhook-server           | 7.4     | 2:43:03 | 8a96e952df1b | maintenance | False      |
+----------+--------------------------+---------+---------+--------------+-------------+------------+
2024-05-31 04:41:10,018 [rdaf.component.oia] INFO     - Waiting for timeout of 10 seconds...
2024-05-31 04:41:20,027 [rdaf.component.oia] INFO     - Upgrading cfx-rda-app-controller on host 192.168.133.96
[+] Running 1/1
⠿ Container oia-cfx-rda-app-controller-1  Started                        10.9s

2024-05-31 04:41:46,034 [rdaf.component.oia] INFO     - Upgrading cfx-rda-reports-registry on host 192.168.133.96
[+] Running 1/1
⠿ Container oia-cfx-rda-reports-registry-1  Started                      11.3s

Alternatively, run the below command to upgrade the RDA Fabric OIA application services without the zero-downtime (rolling) option:

rdaf app upgrade OIA --tag 7.4.2

Wait until all of the new OIA application service containers are in the Up state, then run the below command to verify their status and confirm they are running version 7.4.2.

rdaf app status
+--------------------+----------------+------------+--------------+-------+
| Name               | Host           | Status     | Container Id | Tag   |
+--------------------+----------------+------------+--------------+-------+
| cfx-rda-app-       | 192.168.133.96 | Up 4 hours | f139e2b3cca3 | 7.4.2 |
| controller         |                |            |              |       |
| cfx-rda-app-       | 192.168.133.92 | Up 3 hours | 6d68b737715a | 7.4.2 |
| controller         |                |            |              |       |
| cfx-rda-reports-   | 192.168.133.96 | Up 4 hours | 0a6bac884dff | 7.4.2 |
| registry           |                |            |              |       |
| cfx-rda-reports-   | 192.168.133.92 | Up 3 hours | 3477e7f751ec | 7.4.2 |
| registry           |                |            |              |       |
| cfx-rda-           | 192.168.133.96 | Up 4 hours | 96dd2337f779 | 7.4.2 |
| notification-      |                |            |              |       |
| service            |                |            |              |       |
| cfx-rda-           | 192.168.133.92 | Up 3 hours | 3a1743239a99 | 7.4.2 |
| notification-      |                |            |              |       |
| service            |                |            |              |       |
| cfx-rda-file-      | 192.168.133.96 | Up 3 hours | bd41100a456c | 7.4.2 |
| browser            |                |            |              |       |
| cfx-rda-file-      | 192.168.133.92 | Up 3 hours | 2cc517b8a640 | 7.4.2 |
| browser            |                |            |              |       |
| cfx-rda-           | 192.168.133.96 | Up 3 hours | 9f1e53602999 | 7.4.2 |
| configuration-     |                |            |              |       |
| service            |                |            |              |       |
| cfx-rda-           | 192.168.133.92 | Up 3 hours | 8e50e464bcd5 | 7.4.2 |
| configuration-     |                |            |              |       |
| service            |                |            |              |       |
| cfx-rda-alert-     | 192.168.133.96 | Up 3 hours | 7f75047e9e44 | 7.4.2 |
| ingester           |                |            |              |       |
| cfx-rda-alert-     | 192.168.133.92 | Up 3 hours | f9ec55862be0 | 7.4.2 |
| ingester           |                |            |              |       |
+--------------------+----------------+------------+--------------+-------+
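To double-check the table above mechanically, the Tag column can be filtered for anything other than the expected version. A minimal sketch, assuming the pipe-delimited layout shown above; the sample rows (including the fabricated stale `cfx-rda-old-svc` row) stand in for live `rdaf app status` output:

```shell
# Sketch: flag status rows whose Tag column (6th pipe-delimited field) is not
# the expected version. Sample rows stand in for live `rdaf app status` output.
expected="7.4.2"
status_sample='| cfx-rda-alert-     | 192.168.133.96 | Up 3 hours | 7f75047e9e44 | 7.4.2 |
| ingester           |                |            |              |       |
| cfx-rda-old-svc    | 192.168.133.92 | Up 5 days  | f9ec55862be0 | 7.4.1 |'
stale=$(echo "$status_sample" | awk -F'|' -v tag="$expected" '
  NF >= 7 {
    gsub(/ /, "", $2); gsub(/ /, "", $6)     # strip cell padding
    if ($6 != "" && $6 != tag) print $2, $6  # skip wrapped rows (empty Tag)
  }')
echo "$stale"
```

An empty result means every listed service row already carries the expected tag.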

Run the below command to verify all OIA application services are up and running.

rdac pods
+-------+----------------------------------------+-------------+--------------+----------+-------------+---------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host         | ID       | Site        | Age     |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+--------------+----------+-------------+---------+--------+--------------+---------------+--------------|
| App   | alert-ingester                         | True        | 7f75047e9e44 | daa8c414 |             | 3:35:38 |      4 |        31.21 |               |              |
| App   | alert-ingester                         | True        | f9ec55862be0 | f9b9231c |             | 3:10:01 |      4 |        31.21 |               |              |
| App   | alert-processor                        | True        | c6cc7b04ab33 | b4ebfb06 |             | 3:34:08 |      4 |        31.21 |               |              |
| App   | alert-processor                        | True        | 13f11096e604 | 07cafc20 |             | 3:08:35 |      4 |        31.21 |               |              |
| App   | alert-processor-companion              | True        | 3e92fceaf3dd | c1aff864 |             | 3:32:18 |      4 |        31.21 |               |              |
| App   | alert-processor-companion              | True        | 28e1d0571b85 | 8519c0b4 |             | 3:07:08 |      4 |        31.21 |               |              |
| App   | asset-dependency                       | True        | 5dd435f9fbd8 | 518fc308 |             | 5:14:30 |      4 |        31.21 |               |              |
| App   | asset-dependency                       | True        | 5e8415d5d7c8 | 74a5eadc |             | 4:51:11 |      4 |        31.21 |               |              |
| App   | authenticator                          | True        | aa252c94a072 | 5070d9ce |             | 5:14:09 |      4 |        31.21 |               |              |
| App   | authenticator                          | True        | c0215c6c14cd | c70c58ef |             | 4:50:59 |      4 |        31.21 |               |              |
| App   | cfx-app-controller                     | True        | f139e2b3cca3 | f8d1a9ba |             | 3:37:31 |      4 |        31.21 |               |              |
| App   | cfx-app-controller                     | True        | 6d68b737715a | 6345590d |             | 3:11:48 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | d828b34eb6da | d9f9268e |             | 5:13:13 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | d5ff2029081c | b4324c33 |             | 4:50:35 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | 6f940203f546 | 2927c056 |             | 3:35:07 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | 39932368764e | d348a2fb |             | 3:33:02 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-file-browser         | True        | bd41100a456c | 4a33436f |             | 3:36:24 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-file-browser         | True        | 2cc517b8a640 | c451c0fd |             | 3:10:44 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-irm_service          | True        | 46dabc5d7126 | 2add29f7 |             | 3:33:46 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-irm_service          | True        | 54ddda61833f | 3297630d |             | 3:08:13 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-notification-service | True        | 96dd2337f779 | dfdfa99c |             | 3:36:47 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-notification-service | True        | 3a1743239a99 | 31df3f37 |             | 3:11:05 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-resource-manager     | True        | 0bdc03d5143c | 5129f62b |             | 5:12:53 |      4 |        31.21 |               |              |
| App   | cfxdimensions-app-resource-manager     | True        | 720413d6e50e | 4ce530e1 |             | 4:50:24 |      4 |        31.21 |               |              |
| App   | chat-helper                            | True        | 5f4c17c88e35 | 70323115 |             | 5:13:36 |      4 |        31.21 |               |              |
| App   | chat-helper                            | True        | 2d4b0ad74c71 | a47fcfaf |             | 4:50:47 |      4 |        31.21 |               |              |
| App   | configuration-service                  | True        | 9f1e53602999 | e503bca1 |             | 3:36:01 |      4 |        31.21 |               |              |
| App   | configuration-service                  | True        | 8e50e464bcd5 | 8fa47921 |             | 3:10:22 |      4 |        31.21 |               |              |
| App   | event-consumer                         | True        | d8fe5b892ffa | 4ad047a9 |             | 3:34:30 |      4 |        31.21 |               |              |
| App   | event-consumer                         | True        | dee43e7f8c78 | 5e97c328 |             | 3:08:56 |      4 |        31.21 |               |              |
| App   | fsm                                    | True        | e8dfd2900b46 | a9564a02 |             | 5:13:56 |      4 |        31.21 |               |              |
| App   | fsm                                    | True        | fd0be9e91135 | 7d44c2e2 |             | 4:50:56 |      4 |        31.21 |               |              |
| App   | ingestion-tracker                      | True        | 4a6bfc77c564 | 79f6df8f |             | 3:32:41 |      4 |        31.21 |               |              |
| App   | ingestion-tracker                      | True        | 7427d37440aa | c4af9983 |             | 3:07:29 |      4 |        31.21 |               |              |
| App   | ml-config                              | True        | 5d76a3184dff | 8e953d02 |             | 3:33:25 |      4 |        31.21 |               |              |
| App   | ml-config                              | True        | 127133280d61 | 10a06f4d |             | 3:07:52 |      4 |        31.21 |               |              |
| App   | reports-registry                       | True        | 0a6bac884dff | f8f7e551 |             | 3:37:09 |      4 |        31.21 |               |              |
| App   | reports-registry                       | True        | 3477e7f751ec | 39632bb1 |             | 3:11:26 |      4 |        31.21 |               |              |
| App   | smtp-server                            | True        | 7eb5de2961fd | a6bc435d |             | 3:34:53 |      4 |        31.21 |               |              |
| App   | smtp-server                            | True        | 8fcb6d4fc639 | 3d4e452e |             | 3:09:18 |      4 |        31.21 |               |              |
| App   | user-preferences                       | True        | 1ecd2b05fb31 | 278bfbd6 |             | 5:12:32 |      4 |        31.21 |               |              |
| App   | user-preferences                       | True        | 306cc6953904 | f232670d |             | 4:50:13 |      4 |        31.21 |               |              |
| App   | webhook-server                         | True        | 202701f84b11 | 6cbd1afc |             | 3:35:16 |      4 |        31.21 |               |              |
| App   | webhook-server                         | True        | f4909f83fd7f | e824a293 |             | 3:09:39 |      4 |        31.21 |               |              |
| Infra | api-server                             | True        | da6d9abf9bb6 | 34459f42 |             | 5:15:17 |      4 |        31.21 |               |              |
| Infra | api-server                             | True        | 6105a1d2c514 | 936e8006 |             | 4:51:29 |      4 |        31.21 |               |              |
| Infra | asm                                    | True        | bbac08113b29 | f8f3536d |             | 5:12:15 |      4 |        31.21 |               |              |
| Infra | asm                                    | True        | a240ac98cacb | f70e9d71 |             | 4:50:12 |      4 |        31.21 |               |              |
| Infra | collector                              | True        | 5ca1e495cdba | 9fde9e54 |             | 5:14:51 |      4 |        31.21 |               |              |
| Infra | collector                              | True        | c09dc78bfa0c | 6fd2c8b5 |             | 4:51:21 |      4 |        31.21 |               |              |
| Infra | registry                               | True        | 4a4b530de8a6 | 26939861 |             | 5:15:25 |      4 |        31.21 |               |              |
| Infra | registry                               | True        | 3b322de7dc6b | ba841ed0 |             | 4:51:35 |      4 |        31.21 |               |              |
| Infra | scheduler                              | True        | 52258e0dd904 | b01d3a30 | *leader*    | 5:15:11 |      4 |        31.21 |               |              |
| Infra | scheduler                              | True        | c161d8d1fb8d | 6a9e26a4 |             | 4:51:31 |      4 |        31.21 |               |              |
| Infra | worker                                 | True        | 721eb79dbe61 | 89f8550b | rda-site-01 | 4:30:03 |      4 |        31.21 | 0             | 0            |
| Infra | worker                                 | True        | 7a75e991b74d | 41d5a79a | rda-site-01 | 4:27:56 |      4 |        31.21 | 0             | 0            |
+-------+----------------------------------------+-------------+--------------+----------+-------------+---------+--------+--------------+---------------+--------------+
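Instead of scanning the Pod-Ready column by eye, any pod that is not ready can be listed. A minimal sketch assuming the column layout shown above; the sample rows (including a fabricated `False` row) stand in for live `rdac pods` output:

```shell
# Sketch: list pod types whose Pod-Ready column (4th pipe-delimited field) is
# not "True". Sample rows stand in for live `rdac pods` output.
pods_sample='| App   | alert-ingester   | True  | 7f75047e9e44 | daa8c414 |
| App   | event-consumer   | False | d8fe5b892ffa | 4ad047a9 |'
not_ready=$(echo "$pods_sample" | awk -F'|' '
  NF >= 6 {
    gsub(/ /, "", $3); gsub(/ /, "", $4)   # strip cell padding
    if ($4 != "" && $4 != "True") print $3
  }')
echo "$not_ready"
```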

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=1, Brokers=[1, 2, 3] |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=2, Brokers=[1, 2, 3] |
| rda_app   | alert-processor                        | c6cc7b04ab33 | b4ebfb06 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor                        | c6cc7b04ab33 | b4ebfb06 |             | minio-connectivity                                  | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
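Rather than eyeballing the Status column, non-ok rows can be extracted. A minimal sketch assuming the layout above; the second sample row is a fabricated failure for illustration:

```shell
# Sketch: print pod-type, health parameter, and status for healthcheck rows
# whose Status column (8th pipe-delimited field) is not "ok".
# The "failed" row below is fabricated for illustration.
hc_sample='| rda_app | alert-ingester  | rda-alert-in | f9314916 |  | service-status     | ok     |                    |
| rda_app | alert-processor | rda-alert-pr | 4e679139 |  | kafka-connectivity | failed | broker unreachable |'
failures=$(echo "$hc_sample" | awk -F'|' '
  NF >= 9 {
    gsub(/ /, "", $3); gsub(/ /, "", $7); gsub(/ /, "", $8)
    if ($8 != "" && $8 != "ok") print $3, $7, $8
  }')
echo "$failures"
```

An empty result means every health parameter reported ok.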

1.3.5.1 Upgrade OIA Application Services to 7.4.2.1 / 7.4.2.2

Step-1: Please run the below command to upgrade the Webhook Server, Alert Processor, Event Consumer, and SMTP Server services:

rdafk8s app upgrade OIA --tag 7.4.2.1 --service rda-webhook-server --service rda-alert-processor --service rda-event-consumer --service rda-smtp-server

Step-2: Please run the below command to upgrade the Collaboration Service:

rdafk8s app upgrade OIA --tag 7.4.2.2 --service rda-collaboration

Step-3: Run the below command to check the status of the newly upgraded PODs.

kubectl get pods -n rda-fabric -l app_name=oia

Step-4: Run the below command to put the Terminating OIA application service PODs (rda-webhook-server, rda-alert-processor, rda-event-consumer, rda-smtp-server) into maintenance mode. It lists the POD Ids of these OIA application services along with the rdac maintenance command used to put them into maintenance mode.

python maint_command.py

Step-5: Copy and paste the rdac maintenance command as shown below.

rdac maintenance start --ids <comma-separated-list-of-oia-app-pod-ids>
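If maint_command.py prints the POD Ids one per line (an assumption about its output format), the comma-separated list can be assembled rather than typed by hand. The ids below are placeholders:

```shell
# Sketch: join pod ids (assumed one per line) into the comma-separated list
# that `rdac maintenance start --ids` expects. The ids below are placeholders.
pod_ids='6cbd1afc
e824a293
a6bc435d'
id_list=$(printf '%s\n' "$pod_ids" | paste -sd, -)
echo "rdac maintenance start --ids $id_list"
```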

Step-6: Run the below command to verify the maintenance mode status of the OIA application services (rda-webhook-server, rda-alert-processor, rda-event-consumer, rda-smtp-server, rda-collaboration).

rdac pods --show_maintenance | grep False

Step-7: Run the below command to delete the Terminating OIA application service PODs (rda-webhook-server, rda-alert-processor, rda-event-consumer, rda-smtp-server, rda-collaboration):

for i in `kubectl get pods -n rda-fabric -l app_name=oia | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done
kubectl get pods -n rda-fabric -l app_name=oia
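The grep in the loop above matches 'Terminating' anywhere in the line; matching on the STATUS column explicitly is slightly safer should a pod name ever contain that string. A minimal sketch over captured `kubectl get pods` output (the sample lines and pod names are placeholders standing in for the live command):

```shell
# Sketch: pick pod names whose STATUS column (3rd field of the default
# `kubectl get pods` output) is Terminating. Sample lines and pod names are
# placeholders standing in for the live command.
kubectl_sample='NAME                      READY   STATUS        RESTARTS   AGE
rda-webhook-server-7d9x   1/1     Running       0          2d
rda-smtp-server-5k2q      1/1     Terminating   0          2d'
to_delete=$(echo "$kubectl_sample" | awk 'NR > 1 && $3 == "Terminating" {print $1}')
echo "$to_delete"
# each name in $to_delete would then be passed to:
#   kubectl delete pod <name> -n rda-fabric --force
```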

Note

Wait for 120 seconds and repeat the above steps (Step-3 through Step-7) until all of the above-mentioned OIA application service PODs have been processed.

Wait until all of the above-mentioned OIA application service PODs are in the Running state, then run the below command to verify their status and confirm they are running version 7.4.2.1 / 7.4.2.2.

rdafk8s app status

+--------------------+----------------+----------------+--------------+---------+
| Name               | Host           | Status         | Container Id | Tag     |
+--------------------+----------------+----------------+--------------+---------+
| rda-alert-ingester | 192.168.131.47 | Up 3 Weeks ago | e9f6a37d0386 | 7.4.2   |
| rda-alert-ingester | 192.168.131.49 | Up 3 Weeks ago | 8bc4ce56511d | 7.4.2   |
| rda-alert-         | 192.168.131.49 | Up 7 Hours ago | a1a75d54aec1 | 7.4.2.1 |
| processor          |                |                |              |         |
| rda-alert-         | 192.168.131.50 | Up 1 Days ago  | 21d2d24ad7de | 7.4.2.1 |
| processor          |                |                |              |         |
| rda-alert-         | 192.168.131.47 | Up 3 Weeks ago | 767892946d90 | 7.4.2   |
| processor-         |                |                |              |         |
| companion          |                |                |              |         |
| rda-alert-         | 192.168.131.49 | Up 3 Weeks ago | dc11c26bd99e | 7.4.2   |
| processor-         |                |                |              |         |
| companion          |                |                |              |         |
| rda-event-consumer | 192.168.131.47 | Up 2 Days ago  | 1da9bf9edf20 | 7.4.2.1 |
| rda-event-consumer | 192.168.131.49 | Up 2 Days ago  | 7b6bb604138e | 7.4.2.1 |
| rda-file-browser   | 192.168.131.46 | Up 3 Weeks ago | 839c21ff8d97 | 7.4.2   |
| rda-file-browser   | 192.168.131.49 | Up 3 Weeks ago | a6231f3aece0 | 7.4.2   |
| rda-ingestion-     | 192.168.131.47 | Up 6 Days ago  | 684bf4d430f0 | 7.4.2   |
| tracker            |                |                |              |         |
| rda-ingestion-     | 192.168.131.50 | Up 7 Hours ago | ec6e2dd4d35b | 7.4.2   |
| tracker            |                |                |              |         |
| rda-irm-service    | 192.168.131.49 | Up 4 Days ago  | 56df3a1a7597 | 7.4.2   |
| rda-irm-service    | 192.168.131.47 | Up 2 Days ago  | 5d7daee53094 | 7.4.2   |
| rda-smtp-server    | 192.168.131.47 | Up 2 Days ago  | c9f97c8e79f8 | 7.4.2.1 |
| rda-smtp-server    | 192.168.131.49 | Up 2 Days ago  | f939b1ef25bf | 7.4.2.1 |
| rda-webhook-server | 192.168.131.47 | Up 2 Days ago  | 0d56723d13d0 | 7.4.2.1 |
| rda-webhook-server | 192.168.131.49 | Up 2 Days ago  | c44c108dc2e9 | 7.4.2.1 |
| rda-collaboration  | 192.168.131.47 | Up 1 Days ago  | c43c109dc2t7 | 7.4.2.2 |
| rda-collaboration  | 192.168.131.49 | Up 1 Days ago  | d44c108dc2f8 | 7.4.2.2 |
+--------------------+----------------+----------------+--------------+---------+

Run the below command to verify all OIA application services are up and running.

rdac pods
+-------+----------------------------------------+-------------+----------------+----------+-------------+-------------------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host           | ID       | Site        | Age               |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+----------------+----------+-------------+-------------------+--------+--------------+---------------+--------------|
| Infra | collector                              | True        | rda-collector- | 874761cd |             | 5 days, 9:04:23   |      8 |        31.33 |               |              |
| Infra | registry                               | True        | rda-registry-6 | 069f3456 |             | 23 days, 20:03:11 |      8 |        31.33 |               |              |
| Infra | registry                               | True        | rda-registry-6 | 0ae8af6f |             | 23 days, 20:03:06 |      8 |        31.33 |               |              |
| Infra | scheduler                              | True        | rda-scheduler- | ee7565aa | *leader*    | 3 days, 0:43:20   |      8 |        31.33 |               |              |
| Infra | scheduler                              | True        | rda-scheduler- | 779a624d |             | 3 days, 0:43:02   |      8 |        31.33 |               |              |
| Infra | worker                                 | True        | rda-worker-7cf | 7563615a | rda-site-01 | 3 days, 0:36:29   |      8 |        31.33 | 0             | 3281         |
| Infra | worker                                 | True        | rda-worker-7cf | 0cbdeb0d | rda-site-01 | 3 days, 0:35:31   |      8 |        31.33 | 2             | 3252         |
+-------+----------------------------------------+-------------+----------------+----------+-------------+-------------------+--------+--------------+---------------+--------------+

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | rda-alert-in | f9314916 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | f9314916 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | f9314916 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | rda-alert-in | f9314916 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | f9314916 |             | kafka-connectivity                                  | ok       | Cluster=IrA5ccri7mBeUvhzvrimEg, Broker=0, Brokers=[0, 1, 2] |
| rda_app   | alert-ingester                         | rda-alert-in | 8fc5bbcb |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 8fc5bbcb |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 8fc5bbcb |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | rda-alert-in | 8fc5bbcb |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 8fc5bbcb |             | kafka-connectivity                                  | ok       | Cluster=IrA5ccri7mBeUvhzvrimEg, Broker=1, Brokers=[0, 1, 2] |
| rda_app   | alert-processor                        | rda-alert-pr | e7e1e389 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | e7e1e389 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | e7e1e389 |             | service-dependency:cfx-app-controller               | ok       | 2 pod(s) found for cfx-app-controller                       |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
  • Please run the below command to upgrade the Webhook Server service with zero downtime
rdaf app upgrade OIA --tag 7.4.2.1 --rolling-upgrade --service cfx-rda-webhook-server --timeout 10
  • Please run the below command to upgrade the Webhook Server service without zero downtime
rdaf app upgrade OIA --tag 7.4.2.1 --service cfx-rda-webhook-server
  • Please run the below command to upgrade the Alert Processor service with zero downtime
rdaf app upgrade OIA --tag 7.4.2.1 --rolling-upgrade --service cfx-rda-alert-processor --timeout 10
  • Please run the below command to upgrade the Alert Processor service without zero downtime
rdaf app upgrade OIA --tag 7.4.2.1 --service cfx-rda-alert-processor
  • Please run the below command to upgrade the Event Consumer service with zero downtime
rdaf app upgrade OIA --tag 7.4.2.1 --rolling-upgrade --service cfx-rda-event-consumer --timeout 10
  • Please run the below command to upgrade the Event Consumer service without zero downtime
rdaf app upgrade OIA --tag 7.4.2.1 --service cfx-rda-event-consumer
  • Please run the below command to upgrade the SMTP Server service with zero downtime
rdaf app upgrade OIA --tag 7.4.2.1 --rolling-upgrade --service cfx-rda-smtp-server --timeout 10
  • Please run the below command to upgrade the SMTP Server service without zero downtime
rdaf app upgrade OIA --tag 7.4.2.1 --service cfx-rda-smtp-server
  • Please run the below command to upgrade the Collaboration service (cfx-rda-collaboration) with zero downtime
rdaf app upgrade OIA --tag 7.4.2.2 --rolling-upgrade --service cfx-rda-collaboration --timeout 10
  • Please run the below command to upgrade the Collaboration service (cfx-rda-collaboration) without zero downtime
rdaf app upgrade OIA --tag 7.4.2.2 --service cfx-rda-collaboration
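The per-service commands above can also be driven from a small script. The sketch below is an illustration only (not part of the product CLI): the DRY_RUN switch is hypothetical, and with it unset the script would actually execute each rolling upgrade in the documented order, which requires the rdaf CLI on PATH.

```shell
#!/bin/sh
# Sketch only: emit (or run) the rolling upgrades in the documented
# order. DRY_RUN=1 (the default here) only prints each command.
DRY_RUN=${DRY_RUN:-1}

upgrade_service() {
    tag=$1; service=$2
    cmd="rdaf app upgrade OIA --tag $tag --rolling-upgrade --service $service --timeout 10"
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"       # dry run: show the command
    else
        $cmd              # real run: requires the rdaf CLI on PATH
    fi
}

# 7.4.2.1 services first, then the 7.4.2.2 collaboration service.
for svc in cfx-rda-webhook-server cfx-rda-alert-processor \
           cfx-rda-event-consumer cfx-rda-smtp-server; do
    upgrade_service 7.4.2.1 "$svc"
done
upgrade_service 7.4.2.2 cfx-rda-collaboration
```

Upgrading one service at a time, as above, matches the sequencing described in this section; do not run the commands in parallel.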

Note

After upgrading the above-mentioned services, use the following commands to verify that they are up and running.

Wait until all of the new OIA application service containers are in the Up state, then run the below command to verify their status and confirm that the below OIA application services are running with version 7.4.2.1 / 7.4.2.2.

  • cfx-rda-webhook-server
  • cfx-rda-smtp-server
  • cfx-rda-event-consumer
  • cfx-rda-alert-processor
  • cfx-rda-collaboration
rdaf app status

+-----------------------+----------------+-------------+--------------+---------+
| Name                  | Host           | Status      | Container Id | Tag     |
+-----------------------+----------------+-------------+--------------+---------+
| cfx-rda-webhook-      | 192.168.133.96 | Up 26 hours | e2cf84cc1415 | 7.4.2.1 |
| server                |                |             |              |         |
| cfx-rda-webhook-      | 192.168.133.92 | Up 26 hours | c7dad2c7ef19 | 7.4.2.1 |
| server                |                |             |              |         |
| cfx-rda-smtp-         | 192.168.133.96 | Up 25 hours | 9a3fce1ed5d8 | 7.4.2.1 |
| server                |                |             |              |         |
| cfx-rda-smtp-         | 192.168.133.92 | Up 25 hours | e0b6c4026aa5 | 7.4.2.1 |
| server                |                |             |              |         |
| cfx-rda-event-        | 192.168.133.96 | Up 25 hours | 68cac61ee039 | 7.4.2.1 |
| consumer              |                |             |              |         |
| cfx-rda-event-        | 192.168.133.92 | Up 25 hours | 00595adf12c3 | 7.4.2.1 |
| consumer              |                |             |              |         |
| cfx-rda-alert-        | 192.168.133.96 | Up 26 hours | babf2759e4d6 | 7.4.2.1 |
| processor             |                |             |              |         |
| cfx-rda-alert-        | 192.168.133.92 | Up 25 hours | 20c9c7441a31 | 7.4.2.1 |
| processor             |                |             |              |         |
| cfx-rda-collaboration | 192.168.133.96 | Up 23 hours | 40c7c8741c56 | 7.4.2.2 |
| cfx-rda-collaboration | 192.168.133.92 | Up 23 hours | 56c9t7671a36 | 7.4.2.2 |
+-----------------------+----------------+-------------+--------------+---------+
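Rather than eyeballing the table, the Tag column can be checked mechanically. The helper below is a sketch that assumes the pipe-separated layout shown above (fifth data column = Tag); it reads `rdaf app status` output from stdin and prints how many rows carry a given tag.

```shell
#!/bin/sh
# Sketch: count rows in `rdaf app status` output whose Tag column
# matches the expected version. Assumes the table layout shown above.
count_tag() {
    awk -F'|' -v t="$1" '{ gsub(/ /, "", $6) } $6 == t { n++ } END { print n + 0 }'
}

# Example (hypothetical usage):
#   rdaf app status | count_tag 7.4.2.1
```

With the sample output above, `count_tag 7.4.2.1` would report eight rows and `count_tag 7.4.2.2` two; a count of zero for an expected tag means the upgrade did not take effect.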
Run the below command to verify all OIA application services are up and running.

rdac pods
+-------+----------------------------------------+-------------+--------------+----------+-------------+------------------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host         | ID       | Site        | Age              |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+--------------+----------+-------------+------------------+--------+--------------+---------------+--------------|
| Infra | api-server                             | True        | 6d686f886d94 | a2037b09 |             | 3 days, 0:55:40  |      8 |        47.03 |               |              |
| Infra | api-server                             | True        | 4158deb9a167 | 175fbdb7 |             | 3 days, 0:52:39  |      8 |        47.03 |               |              |
| Infra | asm                                    | True        | c8d3a854a6cc | 55dfd5e5 |             | 10 days, 2:35:35 |      8 |        47.03 |               |              |
| Infra | asm                                    | True        | 2258a0ad236f | 8c5ced68 |             | 10 days, 2:35:33 |      8 |        47.03 |               |              |
| Infra | collector                              | True        | c9ddf1083d37 | 05a36ea7 |             | 10 days, 2:35:35 |      8 |        47.03 |               |              |
| Infra | collector                              | True        | 93cc8fcfa4a7 | dcd7ba69 |             | 10 days, 2:35:32 |      8 |        47.03 |               |              |
| Infra | registry                               | True        | cb664cf49d0d | d7fb5ec8 |             | 10 days, 2:35:34 |      8 |        47.03 |               |              |
| Infra | registry                               | True        | e8e6007645e9 | c3a3721c |             | 10 days, 2:35:31 |      8 |        47.03 |               |              |
| Infra | scheduler                              | True        | 35969c5b62d1 | 906bef4e | *leader*    | 3 days, 1:02:44  |      8 |        47.03 |               |              |
| Infra | scheduler                              | True        | 7ec72ad12f07 | ece5ba54 |             | 3 days, 0:59:47  |      8 |        47.03 |               |              |
| Infra | worker                                 | True        | 937a308d5402 | db4df57a | rda-site-01 | 3 days, 0:48:30  |      8 |        47.03 | 0             | 112          |
| Infra | worker                                 | True        | f6b40eb421f2 | bc6172fe | rda-site-01 | 3 days, 0:47:36  |      8 |        47.03 | 0             | 116          |
+-------+----------------------------------------+-------------+--------------+----------+-------------+------------------+--------+--------------+---------------+--------------+

Run the below command to check that all services have an ok status and that no failure messages are reported.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | 62ed7f57e732 | cb9fff02 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 62ed7f57e732 | cb9fff02 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 62ed7f57e732 | cb9fff02 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 62ed7f57e732 | cb9fff02 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 62ed7f57e732 | cb9fff02 |             | kafka-connectivity                                  | ok       | Cluster=NzUzOWMwYjgyZWM3MTFlZg, Broker=3, Brokers=[1, 2, 3] |
| rda_app   | alert-ingester                         | cb74a8e42377 | 64d5635c |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | cb74a8e42377 | 64d5635c |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | cb74a8e42377 | 64d5635c |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | cb74a8e42377 | 64d5635c |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | cb74a8e42377 | 64d5635c |             | kafka-connectivity                                  | ok       | Cluster=NzUzOWMwYjgyZWM3MTFlZg, Broker=1, Brokers=[1, 2, 3] |
| rda_app   | alert-processor                        | babf2759e4d6 | 3e3144c9 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor                        | babf2759e4d6 | 3e3144c9 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-processor                        | babf2759e4d6 | 3e3144c9 |             | service-dependency:cfx-app-controller               | ok       | 2 pod(s) found for cfx-app-controller                       |
| rda_app   | alert-processor                        | babf2759e4d6 | 3e3144c9 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-processor                        | babf2759e4d6 | 3e3144c9 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-processor                        | babf2759e4d6 | 3e3144c9 |             | DB-connectivity                                     | ok       |                                                             |
| rda_app   | alert-processor                        | babf2759e4d6 | 3e3144c9 |             | kafka-connectivity                                  | ok       | Cluster=NzUzOWMwYjgyZWM3MTFlZg, Broker=2, Brokers=[1, 2, 3] |
| rda_app   | alert-processor                        | 20c9c7441a31 | fefbd547 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor                        | 20c9c7441a31 | fefbd547 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-processor                        | 20c9c7441a31 | fefbd547 |             | service-dependency:cfx-app-controller               | ok       | 2 pod(s) found for cfx-app-controller                       |
| rda_app   | alert-processor                        | 20c9c7441a31 | fefbd547 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-processor                        | 20c9c7441a31 | fefbd547 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-processor                        | 20c9c7441a31 | fefbd547 |             | DB-connectivity                                     | ok       |                                                             |
| rda_app   | alert-processor                        | 20c9c7441a31 | fefbd547 |             | kafka-connectivity                                  | ok       | Cluster=NzUzOWMwYjgyZWM3MTFlZg, Broker=1, Brokers=[1, 2, 3] |
| rda_app   | alert-processor-companion              | 2bbcbedc0019 | 98deb6bf |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor-companion              | 2bbcbedc0019 | 98deb6bf |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-processor-companion              | 2bbcbedc0019 | 98deb6bf |             | service-dependency:alert-processor                  | ok       | 2 pod(s) found for alert-processor                          |
| rda_app   | alert-processor-companion              | 2bbcbedc0019 | 98deb6bf |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-processor-companion              | 19be5a60720e | 7ff6f7e0 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor-companion              | 19be5a60720e | 7ff6f7e0 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-processor-companion              | 19be5a60720e | 7ff6f7e0 |             | service-dependency:alert-processor                  | ok       | 2 pod(s) found for alert-processor                          |
| rda_app   | alert-processor-companion              | 19be5a60720e | 7ff6f7e0 |             | service-initialization-status                       | ok       |                                                             |
| rda_infra | api-server                             | 13c1d70cd756 | 2cf2d678 |             | service-status                                      | ok       |                                                             |
| rda_infra | api-server                             | 13c1d70cd756 | 2cf2d678 |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | api-server                             | 365b6795e331 | 07614661 |             | service-status                                      | ok       |                                                             |
| rda_infra | api-server                             | 365b6795e331 | 07614661 |             | minio-connectivity                                  | ok       |                                                             |
| rda_infra | asm                                    | b6ec081301b1 | a8a84bd5 |             | service-status                                      | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
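The healthcheck can also be gated in scripts. The helper below is an assumption-laden sketch, not a documented rdac feature: it reads the table on stdin, strips borders and the header, and prints how many rows have a Status field other than ok, so `rdac healthcheck | count_not_ok` should print 0 on a healthy fabric.

```shell
#!/bin/sh
# Sketch: count healthcheck rows whose Status field is not "ok".
# Assumes the pipe-separated layout shown above.
count_not_ok() {
    awk -F'|' '
        /^\|/ && $2 !~ /Cat/ {          # skip borders and the header
            gsub(/ /, "", $8)           # strip padding from Status
            if ($8 != "" && $8 != "ok") bad++
        }
        END { print bad + 0 }'
}
```

If the count is nonzero, inspect the Message column of the failing rows before continuing with the upgrade.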

1.3.6 Upgrade RDA Fabric Log Monitoring

Please run the below command to initiate upgrading the Log Monitoring services

rdafk8s log_monitoring upgrade --tag 1.0.3

Note

If the log monitoring services are not installed on the RDAF platform, please run the below command instead.

For an HA environment:

rdafk8s log_monitoring install --log-monitoring-host <logstash-host-ip1,logstash-host-ip2> --tag 1.0.3

For a Non-HA environment:

rdafk8s log_monitoring install --log-monitoring-host <logstash-host-ip> --tag 1.0.3

Run the below command to check the log monitoring services status.

rdafk8s log_monitoring status

Please run the below command to initiate upgrading the RDA Fabric Log Monitoring services

rdaf log_monitoring upgrade --tag 1.0.3

Note

If the log monitoring services are not installed on the RDAF platform, please run the below command instead.

For an HA environment:

rdaf log_monitoring install --log-monitoring-host <logstash-host-ip1,logstash-host-ip2> --tag 1.0.3

For a Non-HA environment:

rdaf log_monitoring install --log-monitoring-host <logstash-host-ip> --tag 1.0.3
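The only difference between the HA and Non-HA forms is whether --log-monitoring-host receives one IP or a comma-separated pair. A small helper (hypothetical, shown with made-up host IPs) can build that value from a host list:

```shell
#!/bin/sh
# Sketch: join one or more logstash host IPs into the comma-separated
# value expected by --log-monitoring-host (one IP = Non-HA, two = HA).
join_hosts() {
    old_ifs=$IFS
    IFS=,
    printf '%s\n' "$*"   # "$*" joins the arguments with the first IFS char
    IFS=$old_ifs
}

# Usage (hypothetical hosts):
#   rdaf log_monitoring install \
#       --log-monitoring-host "$(join_hosts 10.0.0.11 10.0.0.12)" --tag 1.0.3
```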

1.4. Post Upgrade Steps

1. Deploy latest Alerts and Incidents Dashboard configuration

Go to Main Menu --> Configuration --> RDA Administration --> Bundles --> Select oia_l1_l2_bundle and Click on Deploy action to deploy the latest Dashboards configuration for Alerts and Incidents.

2. By default, a retention_purge_extra_filter is added to oia-alerts-stream and oia-incidents-stream to purge data. This change is applied automatically on fresh installations of 3.4.2/7.4.2.

For upgraded setups, manually add the purge filters to oia-alerts-stream and oia-incidents-stream as per your requirements:

oia-alerts-stream

"retention_days": 365,
"retention_purge_extra_filter": "a_status == 'CLEARED'",

oia-incidents-stream

"retention_days": 365,
"retention_purge_extra_filter": "i_state == 'Resolved' OR i_state == 'Cancelled' OR i_state == 'Closed'",

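To make the effect of these filters concrete, the sketch below mimics in shell which records each expression selects. This is purely an illustration; the actual purge filter is evaluated by the platform, not by these hypothetical helper functions.

```shell
#!/bin/sh
# Illustration only: which records the purge filters above select.
# matches_alert_filter mimics  "a_status == 'CLEARED'".
# matches_incident_filter mimics
#   "i_state == 'Resolved' OR i_state == 'Cancelled' OR i_state == 'Closed'".
matches_alert_filter() {
    [ "$1" = "CLEARED" ]
}
matches_incident_filter() {
    case "$1" in
        Resolved|Cancelled|Closed) return 0 ;;
        *) return 1 ;;
    esac
}
```

In other words, only cleared alerts and resolved/cancelled/closed incidents become eligible for purging once they exceed the 365-day retention window; active records are kept.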

Note

The below graphdb steps are optional; follow them only if you want to configure graphdb with topology.

3. Deploy graphdb credentials with the name graphdb by following the below steps

Home -> Configuration -> RDA Integrations -> Credentials -> Add -> Select graphdb -> provide name graphdb -> Save.

4. Add the rda_arango_templates.yml file to the Object store by following the below steps

Download the file from the below link:

https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.2.2/rda_arango_templates.yml

Home -> Configurations -> RDA Administration -> Object store -> Upload -> provide name, folder name and select file → Add

Note

The details must be provided exactly as below:

Name : rda_arango_templates

Folder Name : arango_templates

Choose file: rda_arango_templates.yml