

1. Upgrade to 3.8.0 and 7.8.0

RDAF Platform: From 3.7.2 to 3.8.0

AIOps (OIA) Application: From 7.7.2 to 7.8.0

RDAF Deployment CLI: From 1.3.2 to 1.3.3 and 1.3.2 to 1.3.4

RDAF Client rdac CLI: From 3.7.2 to 3.8.0


1.1 Prerequisites

Before proceeding with this upgrade, verify that the below prerequisites are met.

Currently deployed CLI and RDAF services are running the below versions.

  • RDAF Deployment CLI version: 1.3.2

  • Infra Services tag: 1.0.3 / 1.0.3.3 (haproxy)

  • GraphDB Tag: 1.0.3

  • Platform Services and RDA Worker tag: 3.7.2

  • OIA Application Services tag: 7.7.2

  • Each OpenSearch node requires an additional 100 GB of disk space to support both the ingestion of new alert payloads and the migration of alert history data to the pstream.

  • CloudFabrix recommends taking VMware VM snapshots where RDA Fabric infra/platform/applications are deployed

Note

  • Check the disk space on all Platform and Service VMs using the below command; the Use% value for each filesystem should be less than 80%
    df -kh
    
rdauser@oia-125-216:
Filesystem                         Size  Used Avail Use% Mounted on
udev                                32G     0   32G   0% /dev
tmpfs                              6.3G  357M  6.0G   6% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   48G   12G   34G  26% /
tmpfs                               32G     0   32G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                               32G     0   32G   0% /sys/fs/cgroup
/dev/loop0                          64M   64M     0 100% /snap/core20/2318
/dev/loop2                          92M   92M     0 100% /snap/lxd/24061
/dev/sda2                          1.5G  309M  1.1G  23% /boot
/dev/sdf                            50G  3.8G   47G   8% /var/mysql
/dev/loop3                          39M   39M     0 100% /snap/snapd/21759
/dev/sdg                            50G  541M   50G   2% /minio-data
/dev/loop4                          92M   92M     0 100% /snap/lxd/29619
/dev/loop5                          39M   39M     0 100% /snap/snapd/21465
/dev/sde                            15G  140M   15G   1% /zookeeper
/dev/sdd                            30G  884M   30G   3% /kafka-logs
/dev/sdc                            50G  3.3G   47G   7% /opt
/dev/sdb                            50G   29G   22G  57% /var/lib/docker
/dev/sdi                            25G  294M   25G   2% /graphdb
/dev/sdh                            50G   34G   17G  68% /opensearch
/dev/loop6                          64M   64M     0 100% /snap/core20/2379
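The 80% threshold can also be checked mechanically instead of scanning the table by eye. A minimal sketch (the function name and the exclusion of snap loop mounts, which always report 100% by design, are assumptions):

```shell
# filter_full_mounts: read `df -kh` output on stdin and print mount points
# whose Use% exceeds a threshold (default 80, per the note above).
# Snap loop mounts are skipped because they always report 100% by design.
filter_full_mounts() {
    threshold="${1:-80}"
    awk -v t="$threshold" 'NR > 1 && $6 !~ /^\/snap\// {
        gsub(/%/, "", $5)
        if ($5 + 0 > t) print $6
    }'
}

# Example: df -kh | filter_full_mounts
```

Any mount point it prints should be grown or cleaned up before the upgrade.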
  • On an HA setup, check that all MariaDB nodes are in sync using the below commands before starting the upgrade

Tip

Please run the below commands on the VM host where the RDAF deployment CLI was installed and the rdafk8s setup command was run. The MariaDB configuration is read from the /opt/rdaf/rdaf.cfg file.

MARIADB_HOST=$(grep -A3 mariadb /opt/rdaf/rdaf.cfg | grep datadir | awk '{print $3}' | cut -f1 -d'/')
MARIADB_USER=$(grep -A3 mariadb /opt/rdaf/rdaf.cfg | grep user | awk '{print $3}' | base64 -d)
MARIADB_PASSWORD=$(grep -A3 mariadb /opt/rdaf/rdaf.cfg | grep password | awk '{print $3}' | base64 -d)

mysql -u$MARIADB_USER -p$MARIADB_PASSWORD -h $MARIADB_HOST -P3307 -e "show status like 'wsrep_local_state_comment';"

Please verify that the MariaDB cluster state is Synced.

+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+

Please run the below command and verify that the mariadb cluster size is 3.

mysql -u$MARIADB_USER -p$MARIADB_PASSWORD -h $MARIADB_HOST -P3307 -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
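The two checks above can be combined into a single pass/fail verdict. A sketch, assuming the $MARIADB_* variables set above and the 3-node cluster size stated in this section:

```shell
# galera_ok: pass/fail verdict for the two Galera checks above.
# Feed it the values returned by the mysql queries for
# wsrep_local_state_comment and wsrep_cluster_size.
galera_ok() {
    state="$1"
    size="$2"
    [ "$state" = "Synced" ] && [ "$size" = "3" ]
}

# Example, reusing the $MARIADB_* variables set above:
# state=$(mysql -u"$MARIADB_USER" -p"$MARIADB_PASSWORD" -h "$MARIADB_HOST" -P3307 -N -B \
#         -e "SHOW STATUS LIKE 'wsrep_local_state_comment';" | awk '{print $2}')
# size=$(mysql -u"$MARIADB_USER" -p"$MARIADB_PASSWORD" -h "$MARIADB_HOST" -P3307 -N -B \
#        -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';" | awk '{print $2}')
# galera_ok "$state" "$size" && echo "MariaDB cluster healthy"
```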

Warning

Make sure all of the above pre-requisites are met before proceeding with the upgrade process.

Warning

Kubernetes: Although a Kubernetes-based RDA Fabric deployment supports zero-downtime upgrades, it is recommended to schedule a maintenance window for upgrading the RDAF Platform and AIOps services to the newer version.

Important

Please make sure a full backup of the RDAF platform system has been completed before performing the upgrade.

Kubernetes: Please run the below backup command to take the backup of application data.

rdafk8s backup --dest-dir <backup-dir>

Run the below commands on the RDAF Management system and make sure the Kubernetes PODs are NOT restarting (applicable only to Kubernetes environments).

kubectl get pods -n rda-fabric -l app_category=rdaf-infra
kubectl get pods -n rda-fabric -l app_category=rdaf-platform
kubectl get pods -n rda-fabric -l app_component=rda-worker 
kubectl get pods -n rda-fabric -l app_name=oia 
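The four pod listings above can be screened for problems in one pass. A sketch (treating "unhealthy" as not-Running or restarted is an assumption about what to flag):

```shell
# unhealthy_pods: read `kubectl get pods --no-headers` output on stdin and
# print the names of pods that are not in Running/Completed state or that
# have a non-zero restart count.
unhealthy_pods() {
    awk '$3 != "Running" && $3 != "Completed" || $4 + 0 > 0 { print $1 }'
}

# Example, using the label selectors from the commands above:
# for sel in app_category=rdaf-infra app_category=rdaf-platform \
#            app_component=rda-worker app_name=oia; do
#     kubectl get pods -n rda-fabric -l "$sel" --no-headers | unhealthy_pods
# done
```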

  • Verify that the RDAF deployment rdaf CLI version is 1.3.2 on the VM where the CLI was installed for the docker on-prem registry managing Kubernetes or non-Kubernetes deployments.
rdaf --version
RDAF CLI version: 1.3.2
  • On-premise docker registry service version is 1.0.3
docker ps | grep docker-registry
ff6b1de8515f   cfxregistry.CloudFabrix.io:443/docker-registry:1.0.3   "/entrypoint.sh /bin…"   7 days ago   Up 7 days             deployment-scripts-docker-registry-1
  • RDAF Infrastructure services version is 1.0.3, except for the below services.

  • rda-minio: version is RELEASE.2023-09-30T07-02-29Z

  • haproxy: version is 1.0.3.3

Run the below command to get rdafk8s Infra service details

rdafk8s infra status
  • RDAF Platform services version is 3.7.2

Run the below command to get RDAF Platform services details

rdafk8s platform status
  • RDAF OIA Application services version is 7.7.2

Run the below command to get RDAF App services details

rdafk8s app status

Create oia-alerts-payload Pstream

  • Check and Create oia-alerts-payload Pstream

    - First, check if the oia-alerts-payload pstream already exists within the system. Navigate to Main Menu --> Configuration --> RDA Administration --> Persistent Streams to verify its presence.

  • Create Pstream if Missing

    - If the oia-alerts-payload pstream is not found, create it using the definition provided below. To create a new pstream, go to Main Menu --> Configuration --> RDA Administration --> Persistent Streams --> Add using the appropriate definition.

Click to view the definition of oia-alerts-payload pstream


  {
    "unique_keys": [
        "a_id"
    ],
    "_mappings": {
        "properties": {
            "a_created_ts": {
                "type": "date"
            },
            "a_updated_ts": {
                "type": "date"
            },
            "a_source_payload_compressed": {
                "type": "boolean"
            },
            "a_source_payload": {
                "type": "text",
                "index": false
            },
            "a_sourceeventreceivedat_ts": {
                "type": "date"
            },
            "a_raisedreceivedat_ts": {
                "type": "date"
            },
            "a_clearreceivedat_ts": {
                "type": "date"
            },
            "a_cleared_ts": {
                "type": "date"
            }
        }
    },
    "default_values": {},
    "case_insensitive": true
}
  

Oia Alerts Payload Pstream
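Before pasting the definition into the UI, it can be saved locally and checked for well-formedness. A sketch (the file path is arbitrary; the JSON is the same definition shown above, compacted):

```shell
# Save the oia-alerts-payload definition locally and confirm it is
# well-formed JSON before adding it through the UI.
cat > /tmp/oia-alerts-payload.json <<'EOF'
{
  "unique_keys": ["a_id"],
  "_mappings": {
    "properties": {
      "a_created_ts": {"type": "date"},
      "a_updated_ts": {"type": "date"},
      "a_source_payload_compressed": {"type": "boolean"},
      "a_source_payload": {"type": "text", "index": false},
      "a_sourceeventreceivedat_ts": {"type": "date"},
      "a_raisedreceivedat_ts": {"type": "date"},
      "a_clearreceivedat_ts": {"type": "date"},
      "a_cleared_ts": {"type": "date"}
    }
  },
  "default_values": {},
  "case_insensitive": true
}
EOF
python3 -m json.tool /tmp/oia-alerts-payload.json > /dev/null && echo "valid JSON"
```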



Warning

Non-Kubernetes: Upgrading RDAF Platform and AIOps application services is a disruptive operation. Schedule a maintenance window before upgrading RDAF Platform and AIOps services to newer version.

Important

Please make sure a full backup of the RDAF platform system has been completed before performing the upgrade.

Non-Kubernetes: Please run the below backup command to take the backup of application data.

rdaf backup --dest-dir <backup-dir>
Note: Please make sure this backup directory is mounted across all infra and CLI VMs.

  • Verify that the RDAF deployment rdaf CLI version is 1.3.2 on the VM where the CLI was installed for the docker on-prem registry managing Kubernetes or non-Kubernetes deployments.
rdaf --version
RDAF CLI version: 1.3.2
  • On-premise docker registry service version is 1.0.3
docker ps | grep docker-registry
ff6b1de8515f   cfxregistry.CloudFabrix.io:443/docker-registry:1.0.3   "/entrypoint.sh /bin…"   7 days ago   Up 7 days             deployment-scripts-docker-registry-1
  • RDAF Infrastructure services version is 1.0.3, except for the below services.

  • rda-minio: version is RELEASE.2023-09-30T07-02-29Z

  • haproxy: version is 1.0.3.3

Run the below command to get RDAF Infra service details

rdaf infra status
  • RDAF Platform services version is 3.7.2

Run the below command to get RDAF Platform services details

rdaf platform status
  • RDAF OIA Application services version is 7.7.2

Run the below command to get RDAF App services details

rdaf app status


1.2 RDAF Deployment CLI Upgrade

Please follow the steps given below.

Note

Upgrade the RDAF Deployment CLI on both the on-premise docker registry VM and the RDAF Platform's management VM, if provisioned separately.

Important

If the CLI is already at version 1.3.2, please upgrade to 1.3.4 to complete the upgrade sequence.

Log in to the VM where the rdaf deployment CLI was installed for the docker on-premise registry and for managing the Kubernetes or non-Kubernetes deployment.

  • Download the RDAF Deployment CLI's newer version 1.3.4 bundle.
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.3.4/rdafcli-1.3.4.tar.gz
  • Upgrade the rdafk8s CLI to version 1.3.4
pip install --user rdafcli-1.3.4.tar.gz
  • Verify the installed rdafk8s CLI version is upgraded to 1.3.4
rdafk8s --version
  • Download the RDAF Deployment CLI's newer version 1.3.4 bundle and copy it to RDAF CLI management VM on which rdaf deployment CLI was installed.
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.3.4/offline-rhel-1.3.4.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-rhel-1.3.4.tar.gz
  • Change the directory to the extracted directory
cd offline-rhel-1.3.4
  • Upgrade the rdaf CLI to version 1.3.4
pip install --user rdafcli-1.3.4.tar.gz -f ./ --no-index
  • Verify the installed rdaf CLI version
rdaf --version
rdafk8s --version
RDAF CLI version: 1.3.4
  • Download the RDAF Deployment CLI's newer version 1.3.4 bundle (Ubuntu) and copy it to the RDAF CLI management VM on which the rdaf deployment CLI was installed.
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.3.4/offline-ubuntu-1.3.4.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-ubuntu-1.3.4.tar.gz
  • Change the directory to the extracted directory
cd offline-ubuntu-1.3.4
  • Upgrade the rdaf CLI to version 1.3.4
pip install --user rdafcli-1.3.4.tar.gz -f ./ --no-index
  • Verify the installed rdaf CLI version
rdaf --version
rdafk8s --version
RDAF CLI version: 1.3.4
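For scripted verification, the version string printed by the CLI can be parsed and compared. A sketch (the output format is taken from the examples above):

```shell
# cli_version: extract the version number from `rdaf --version` /
# `rdafk8s --version` output so it can be compared in scripts.
cli_version() {
    awk -F': ' '/RDAF CLI version/ { print $2 }'
}

# Example:
# [ "$(rdaf --version | cli_version)" = "1.3.4" ] || echo "rdaf CLI not upgraded"
```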

Note

Please download the upgrade script and execute it from the CLI VM.

wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.3.3/rdaf_upgrade_132_133.py
python rdaf_upgrade_132_133.py upgrade
[rdauser@pfvm2-133-93 ~]$ python rdaf_upgrade_132_133.py upgrade
backing up existing values.yaml.

After executing the above script, a backup of the existing values.yaml file will be created in /opt/rdaf/deployment-scripts. A new values.yaml file will be generated with the parameter privileged: true added for all services, including platform, worker, event_gateway, and app.

rda_api_server:
  mem_limit: 4G
  memswap_limit: 4G
  privileged: true
  environment:
    RDA_STUDIO_URL: '""'
    RDA_ENABLE_TRACES: 'no'
    DISABLE_REMOTE_LOGGING_CONTROL: 'no'
    RDA_SELF_HEALTH_RESTART_AFTER_FAILURES: 3
  deployment: true

Please download the upgrade script and execute it from the CLI VM.

wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.3.4/rdaf_upgrade_133_134.py
python rdaf_upgrade_133_134.py upgrade

rdauser@kubmaster10830:~$ python rdaf_upgrade_133_134.py upgrade

Creating backup of existing haproxy.cfg on host 192.168.108.31
Updating haproxy configs on host 192.168.108.31..
service/rdaf-webhook patched
After running the script, log in to the VM where HAProxy is installed and open the configuration file located at /opt/rdaf/config/haproxy/haproxy.cfg. Look for the lines shown below.

backend webhook
    mode http
    balance roundrobin
    stick-table type ip size 10k expire 10m
    stick on src
    option httpchk OPTIONS /webhooks/hookid
    http-check expect rstatus (2|3)[0-9][0-9]
    http-check disable-on-404
    http-response set-header Cache-Control no-store
    http-response set-header Pragma no-cache
    default-server inter 10s downinter 5s fall 3 rise 2
    cookie SERVERID insert indirect nocache maxidle 30m maxlife 24h httponly secure
    server rdaf-webhook-1 10.95.108.32:31389 check cookie rdaf-webhook-1

Once the upgrade script has been executed, restart the HAProxy service to apply the changes.
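The presence of the new health-check line can be verified before restarting HAProxy. A sketch (the matched line is taken from the stanza above); if the haproxy binary is on the PATH, `haproxy -c -f /opt/rdaf/config/haproxy/haproxy.cfg` will additionally validate the full configuration syntax:

```shell
# has_webhook_check: read a haproxy.cfg on stdin and confirm the webhook
# health-check line written by the upgrade script is present.
has_webhook_check() {
    grep -q 'option httpchk OPTIONS /webhooks/hookid'
}

# Example, using the config path from the text above:
# has_webhook_check < /opt/rdaf/config/haproxy/haproxy.cfg \
#     && echo "webhook backend updated" || echo "webhook backend missing"
```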

  • Download the RDAF Deployment CLI's newer version 1.3.3 bundle
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.3.3/rdafcli-1.3.3.tar.gz
  • Upgrade the rdaf CLI to version 1.3.3
pip install --user rdafcli-1.3.3.tar.gz
  • Verify the installed rdaf CLI version is upgraded to 1.3.3
rdaf --version
  • Download the RDAF Deployment CLI's newer version 1.3.3 bundle and copy it to RDAF management VM on which rdaf & rdafk8s deployment CLI was installed.
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.3.3/offline-rhel-1.3.3.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-rhel-1.3.3.tar.gz
  • Change the directory to the extracted directory
cd offline-rhel-1.3.3
  • Upgrade the rdaf CLI to version 1.3.3
pip install --user rdafcli-1.3.3.tar.gz -f ./ --no-index
  • Verify the installed rdaf CLI version
rdaf --version
rdafk8s --version
  • Download the RDAF Deployment CLI's newer version 1.3.3 bundle (Ubuntu) and copy it to the RDAF management VM on which the rdaf & rdafk8s deployment CLI was installed.
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.3.3/offline-ubuntu-1.3.3.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-ubuntu-1.3.3.tar.gz
  • Change the directory to the extracted directory
cd offline-ubuntu-1.3.3
  • Upgrade the rdaf CLI to version 1.3.3
pip install --user rdafcli-1.3.3.tar.gz -f ./ --no-index
  • Verify the installed rdaf CLI version
rdaf --version
rdafk8s --version

Note

Please download the upgrade script and execute it from the CLI VM.

wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.3.3/rdaf_upgrade_132_133.py
python rdaf_upgrade_132_133.py upgrade
[rdauser@pfvm2-133-93 ~]$ python rdaf_upgrade_132_133.py upgrade
backing up existing values.yaml.

After executing the above script, a backup of the existing values.yaml file will be created in /opt/rdaf/deployment-scripts. A new values.yaml file will be generated with the parameter privileged: true added for all services, including platform, worker, event_gateway, and app.

rda_api_server:
  mem_limit: 4G
  memswap_limit: 4G
  privileged: true
  environment:
    RDA_STUDIO_URL: '""'
    RDA_ENABLE_TRACES: 'no'
    DISABLE_REMOTE_LOGGING_CONTROL: 'no'
    RDA_SELF_HEALTH_RESTART_AFTER_FAILURES: 3
  deployment: true

1.3 Download the new Docker Images

Download the new docker image tags for RDAF Platform and OIA (AIOps) Application services and wait until all of the images are downloaded.

To fetch the images into the on-premise registry, please use the below command

rdaf registry fetch --tag 3.8.0,3.8.1,7.8.0,7.8.1,7.8.1.3

To fetch the images into the on-premise registry, please use the below command

rdaf registry fetch --tag 3.8.0,7.8.0

Note

If the download of the images fails, please re-execute the above command.
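The re-execution advice can be wrapped in a small retry helper. A sketch (the retry count of 3 is an assumption):

```shell
# retry: run a command up to N times, re-executing on failure, per the
# note above about failed downloads.
retry() {
    n="$1"; shift
    i=1
    while ! "$@"; do
        [ "$i" -ge "$n" ] && return 1
        i=$((i + 1))
        echo "retrying (attempt $i of $n)..."
    done
}

# Example:
# retry 3 rdaf registry fetch --tag 3.8.0,7.8.0
```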

Run the below command to verify above mentioned tags are downloaded for all of the RDAF Platform and OIA (AIOps) Application services.

rdaf registry list-tags 

Please make sure 3.8.0 image tag is downloaded for the below RDAF Platform services.

  • rda-client-api-server
  • rda-registry
  • rda-scheduler
  • rda-collector
  • rda-identity
  • rda-fsm
  • rda-asm
  • rda-stack-mgr
  • rda-access-manager
  • rda-resource-manager
  • rda-user-preferences
  • onprem-portal
  • onprem-portal-nginx
  • rda-worker-all
  • onprem-portal-dbinit
  • cfxdx-nb-nginx-all
  • rda-event-gateway
  • rda-chat-helper
  • rdac
  • rdac-full
  • cfxcollector
  • bulk_stats

Please make sure 3.8.1 image tag is downloaded for the below RDAF Platform services.

  • rda-client-api-server
  • rda-collector

Please make sure 7.8.0 image tag is downloaded for the below RDAF OIA (AIOps) Application services.

  • rda-app-controller
  • rda-file-browser
  • rda-smtp-server
  • rda-ingestion-tracker
  • rda-reports-registry
  • rda-ml-config
  • rda-webhook-server
  • rda-irm-service
  • rda-notification-service
  • rda-configuration-service

Please make sure 7.8.1 image tag is downloaded for the below RDAF OIA (AIOps) Application services.

  • rda-alert-processor-companion
  • rda-collaboration

Please make sure 7.8.1.3 image tag is downloaded for the below RDAF OIA (AIOps) Application services.

  • rda-alert-processor
  • rda-event-consumer
  • rda-alert-ingester
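Rather than scanning the tag lists by eye, the listing can be checked for each required tag. A sketch (it assumes each tag appears as a separate field in the `rdaf registry list-tags` output; adjust the tokenization to the actual format):

```shell
# missing_tags: read `rdaf registry list-tags` output on stdin and report
# which of the required tags (given as arguments) are absent.
missing_tags() {
    listing=$(cat)
    for tag in "$@"; do
        # Split the table into fields, one per line, and look for an
        # exact match of the tag string.
        printf '%s\n' "$listing" | tr -s ' |' '\n' | grep -qxF -- "$tag" \
            || echo "missing tag: $tag"
    done
}

# Example:
# rdaf registry list-tags | missing_tags 3.8.0 3.8.1 7.8.0 7.8.1 7.8.1.3
```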

Downloaded Docker images are stored under the below path.

/opt/rdaf-registry/data/docker/registry/v2/ or /opt/rdaf/data/docker/registry/v2/

Run the below command to check the filesystem's disk usage on offline registry VM where docker images are pulled.

df -h /opt

If necessary, older image tags that are no longer in use can be deleted to free up disk space using the command below.

Note

Run the command below if /opt occupies more than 80% of the disk space or if the free capacity of /opt is less than 25GB.

rdaf registry delete-images --tag <tag1,tag2>

1.4 Upgrade Steps

1.4.1 Upgrade RDAF Platform Services

Step-1: Run the below command to initiate upgrading RDAF Platform services.

rdafk8s platform upgrade --tag 3.8.0

As the upgrade procedure is non-disruptive, it puts the currently running PODs into Terminating state and starts the newer-version PODs in Pending state.

Step-2: Run the below command to check the status of the existing and newer PODs and make sure at least one instance of each Platform service is in Terminating state.

kubectl get pods -n rda-fabric -l app_category=rdaf-platform

Step-3: Run the below command to put all Terminating RDAF platform service PODs into maintenance mode. It lists the POD IDs of the platform services that need to be put into maintenance mode, along with the rdac maintenance command to run.

python maint_command.py

Note

If maint_command.py script doesn't exist on RDAF deployment CLI VM, it can be downloaded using the below command.

wget https://macaw-amer.s3.amazonaws.com/releases/rdaf-platform/1.1.6/maint_command.py

Step-4: Copy and paste the rdac maintenance command as shown below.

rdac maintenance start --ids <comma-separated-list-of-platform-pod-ids>

Step-5: Run the below command to verify the maintenance mode status of the RDAF platform services.

rdac pods --show_maintenance | grep False

Step-6: Run the below command to delete the Terminating RDAF platform service PODs

for i in `kubectl get pods -n rda-fabric -l app_category=rdaf-platform | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done
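Steps 3 and 6 both operate on the set of Terminating pods, which can be extracted once and reused. A sketch (the column layout is standard `kubectl get pods` output):

```shell
# terminating_pods: read `kubectl get pods --no-headers` output on stdin
# and print the names of pods in Terminating state.
terminating_pods() {
    awk '$3 == "Terminating" { print $1 }'
}

# Example:
# kubectl get pods -n rda-fabric -l app_category=rdaf-platform --no-headers \
#     | terminating_pods
```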

Note

Wait for 120 seconds and repeat Step-2 through Step-6 for the rest of the RDAF Platform service PODs.

Please wait until all of the new platform service PODs are in Running state, then run the below command to verify their status and make sure all of them are running version 3.8.0.

rdafk8s platform status
+---------------+----------------+---------------+--------------+-------+
| Name          | Host           | Status        | Container Id | Tag   |
+---------------+----------------+---------------+--------------+-------+
| rda-api-      | 192.168.108.32 | Up 8 Hours    | 8623d931221b | 3.8.0 |
| server        |                | ago           |              |       |
| rda-registry  | 192.168.108.31 | Up 9 Hours    | 28a2f21b00d5 | 3.8.0 |
|               |                | ago           |              |       |
| rda-identity  | 192.168.108.31 | Up 9 Hours    | 078db9aee291 | 3.8.0 |
|               |                | ago           |              |       |
| rda-fsm       | 192.168.108.31 | Up 9 Hours    | a9707f0ed462 | 3.8.0 |
|               |                | ago           |              |       |
| rda-asm       | 192.168.108.31 | Up 9 Hours    | 5a14d725feb1 | 3.8.0 |
|               |                | ago           |              |       |
| rda-chat-     | 192.168.108.31 | Up 9 Hours    | 86fedee7ddfa | 3.8.0 |
| helper        |                | ago           |              |       |
| rda-access-   | 192.168.108.31 | Up 9 Hours    | f038498ea8ca | 3.8.0 |
| manager       |                | ago           |              |       |
| rda-resource- | 192.168.108.31 | Up 9 Hours    | cf8ef77a530f | 3.8.0 |
| manager       |                | ago           |              |       |
| rda-scheduler | 192.168.108.31 | Up 9 Hours    | de9cabea95d8 | 3.8.0 |
|               |                | ago           |              |       |
| rda-asset-    | 192.168.108.31 | Up 9 Hours    | 61e3c57b6b91 | 3.8.0 |
| dependency    |                | ago           |              |       |
| rda-collector | 192.168.108.31 | Up 9 Hours    | c5c96ff9d77c | 3.8.0 |
|               |                | ago           |              |       |
| rda-user-     | 192.168.108.31 | Up 9 Hours    | 00b24aa1ce61 | 3.8.0 |
| preferences   |                | ago           |              |       |
| rda-portal-   | 192.168.108.31 | Up 9 Hours    | b50c376bbbec | 3.8.0 |
| backend       |                | ago           |              |       |
| rda-portal-   | 192.168.108.31 | Up 9 Hours    | 2d7c210d9943 | 3.8.0 |
| frontend      |                | ago           |              |       |
+---------------+----------------+---------------+--------------+-------+

Run the below command to check that the rda-scheduler service is elected as leader, shown under the Site column.

rdac pods
+-------+----------------------------------------+-------------+----------------+----------+-------------+---------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host           | ID       | Site        | Age     |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+----------------+----------+-------------+---------+--------+--------------+---------------+--------------|
| App   | webhook-server                         | True        | rda-webhook-se | 9f53d99f |             | 1:08:29 |      8 |        31.33 |               |              |
| App   | webhook-server                         | True        | rda-webhook-se | a6c3f6df |             | 1:06:56 |      8 |        31.33 |               |              |
| Infra | api-server                             | True        | rda-api-server | c810ccba |             | 1:46:47 |      8 |        31.33 |               |              |
| Infra | api-server                             | True        | rda-api-server | fd51cec6 |             | 1:44:33 |      8 |        31.33 |               |              |
| Infra | asm                                    | True        | rda-asm-657db5 | 1ab3a8ac |             | 1:45:52 |      8 |        31.33 |               |              |
| Infra | asm                                    | True        | rda-asm-657db5 | 901817a6 |             | 1:44:38 |      8 |        31.33 |               |              |
| Infra | collector                              | True        | rda-collector- | a829b2b5 |             | 1:48:21 |      8 |        31.33 |               |              |
| Infra | collector                              | True        | rda-collector- | 4ca5e9b8 |             | 1:48:08 |      8 |        31.33 |               |              |
| Infra | rapid-ingestion-service                | True        | rda-bulk-stats | acd8eb36 |             | 2:38:28 |      8 |        31.33 |               |              |
| Infra | rapid-ingestion-service                | True        | rda-bulk-stats | 4e0e7f02 |             | 2:38:19 |      8 |        31.33 |               |              |
| Infra | registry                               | True        | rda-registry-6 | 97b929ac |             | 1:48:19 |      8 |        31.33 |               |              |
| Infra | registry                               | True        | rda-registry-6 | 54bbc91a |             | 1:48:05 |      8 |        31.33 |               |              |
| Infra | scheduler                              | True        | rda-scheduler- | 4d9993d0 |             | 1:46:57 |      8 |        31.33 |               |              |
| Infra | scheduler                              | True        | rda-scheduler- | 7fdbf090 | *leader*    | 1:46:53 |      8 |        31.33 |               |              |
| Infra | worker                                 | True        | rda-worker-7cd | 589dc400 | rda-site-01 | 1:22:37 |      8 |        31.33 | 0             | 0            |
| Infra | worker                                 | True        | rda-worker-7cd | f46631ec | rda-site-01 | 1:20:24 |      8 |        31.33 | 0             | 0            |
+-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------+
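The leader check can be scripted against the `rdac pods` output. A sketch (it matches the literal *leader* marker shown above):

```shell
# has_leader: read `rdac pods` output on stdin and check that a scheduler
# leader has been elected (the literal *leader* marker under Site).
has_leader() {
    grep -q '\*leader\*'
}

# Example:
# rdac pods | has_leader && echo "scheduler leader elected"
```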

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck

Warning

For Non-Kubernetes deployments, upgrading RDAF Platform and AIOps application services is a disruptive operation when the rolling-upgrade option is not used. Please schedule a maintenance window before upgrading RDAF Platform and AIOps services to the newer version.

Run the below command to initiate upgrading RDAF Platform services with zero downtime

rdaf platform upgrade --tag 3.8.0 --rolling-upgrade --timeout 10

Note

The timeout value (10) in the above command is specified in seconds.

Note

The rolling-upgrade option upgrades the Platform services running in high-availability mode on one VM at a time in sequence. It completes the upgrade of Platform services running on VM-1 before upgrading them on VM-2, followed by VM-3, and so on.

During this upgrade sequence, RDAF platform continues to function without any impact to the application traffic.

After completing the Platform services upgrade on all VMs, it asks for user confirmation to delete the older-version Platform service PODs. Provide YES to delete the old docker containers (non-Kubernetes only).

192.168.133.95:5000/onprem-portal-nginx:3.8.0
2024-08-12 02:21:58,875 [rdaf.component.platform] INFO     - Gathering platform container details.
2024-08-12 02:22:01,326 [rdaf.component.platform] INFO     - Gathering rdac pod details.
+----------+----------------------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type             | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------------------+---------+---------+--------------+-------------+------------+
| 3a5ff878 | api-server           | 3.8.0   | 2:34:09 | 5119921f9c1c | None        | True       |
| 689c2574 | registry             | 3.8.0   | 3:23:10 | d21676c0465b | None        | True       |
| 0d03f649 | scheduler            | 3.8.0   | 2:34:46 | dd699a1d15af | None        | True       |
| 0496910a | collector            | 3.8.0   | 3:22:40 | 1c367e3bf00a | None        | True       |
| c4a88eb7 | asset-dependency     | 3.8.0   | 3:22:25 | cdb3f4c76deb | None        | True       |
| 9562960a | authenticator        | 3.8.0   | 3:22:09 | 8bda6c86a264 | None        | True       |
| ae8b58e5 | asm                  | 3.8.0   | 3:21:54 | 8f0f7f773907 | None        | True       |
| 1cea350e | fsm                  | 3.8.0   | 3:21:37 | 1ea1f5794abb | None        | True       |
| 32fa2f93 | chat-helper          | 3.8.0   | 3:21:23 | 811cbcfba7a2 | None        | True       |
| 0e6f375c | cfxdimensions-app-   | 3.8.0   | 3:21:07 | 307c140f99c2 | None        | True       |
|          | access-manager       |         |         |              |             |            |
| 4130b2d4 | cfxdimensions-app-   | 3.8.0   | 2:24:23 | 2d73c36426fe | None        | True       |
|          | resource-manager     |         |         |              |             |            |
| 29caf947 | user-preferences     | 3.8.0   | 3:20:36 | 3e2b5b7e6cb4 | None        | True       |
+----------+----------------------+---------+---------+--------------+-------------+------------+
Continue moving above pods to maintenance mode? [yes/no]: yes
2024-09-30 02:23:04,389 [rdaf.component.platform] INFO     - Initiating Maintenance Mode...
2024-09-30 02:23:10,048 [rdaf.component.platform] INFO     - Waiting for services to be moved to maintenance.
2024-09-30 02:23:32,511 [rdaf.component.platform] INFO     - Following container are in maintenance mode
+----------+----------------------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type             | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------------------+---------+---------+--------------+-------------+------------+
| 3a5ff878 | api-server           | 3.8.0   | 2:34:49 | 5119921f9c1c | maintenance | False      |
| ae8b58e5 | asm                  | 3.8.0   | 3:22:34 | 8f0f7f773907 | maintenance | False      |
| c4a88eb7 | asset-dependency     | 3.8.0   | 3:23:05 | cdb3f4c76deb | maintenance | False      |
| 9562960a | authenticator        | 3.8.0   | 3:22:49 | 8bda6c86a264 | maintenance | False      |
| 0e6f375c | cfxdimensions-app-   | 3.8.0   | 3:21:47 | 307c140f99c2 | maintenance | False      |
|          | access-manager       |         |         |              |             |            |
| 4130b2d4 | cfxdimensions-app-   | 3.8.0   | 2:25:03 | 2d73c36426fe | maintenance | False      |
|          | resource-manager     |         |         |              |             |            |
| 32fa2f93 | chat-helper          | 3.8.0   | 3:22:03 | 811cbcfba7a2 | maintenance | False      |
| 0496910a | collector            | 3.8.0   | 3:23:20 | 1c367e3bf00a | maintenance | False      |
| 1cea350e | fsm                  | 3.8.0   | 3:22:17 | 1ea1f5794abb | maintenance | False      |
| 689c2574 | registry             | 3.8.0   | 3:23:50 | d21676c0465b | maintenance | False      |
| 0d03f649 | scheduler            | 3.8.0   | 2:35:26 | dd699a1d15af | maintenance | False      |
| 29caf947 | user-preferences     | 3.8.0   | 3:21:16 | 3e2b5b7e6cb4 | maintenance | False      |
+----------+----------------------+---------+---------+--------------+-------------+------------+
2024-08-12 02:23:10,052 [rdaf.component.platform] INFO     - Waiting for timeout of 5 seconds...
2024-08-12 02:23:15,060 [rdaf.component.platform] INFO     - Upgrading service: rda_api_server on host 192.168.133.92

Run the below command to upgrade the RDAF Platform services without the rolling-upgrade option (i.e., not a zero-downtime upgrade):

rdaf platform upgrade --tag 3.8.0

Please wait until all of the new Platform services are in the Up state, then run the below command to verify their status and make sure all of them are running version 3.8.0.

rdaf platform status
+---------------+-----------------+-------------+--------------+-------+
| Name          | Host            | Status      | Container Id | Tag   |
+---------------+-----------------+-------------+--------------+-------+
| rda_api_serve | 192.168.107.126 | Up 25 hours | 8bc1f423409e | 3.8.0 |
| r             |                 |             |              |       |
| rda_registry  | 192.168.107.126 | Up 25 hours | f0f8cbc100e7 | 3.8.0 |
| rda_scheduler | 192.168.107.126 | Up 25 hours | 46a171bcb79b | 3.8.0 |
| rda_collector | 192.168.107.126 | Up 25 hours | cec7a78793f9 | 3.8.0 |
| rda_asset_dep | 192.168.107.126 | Up 25 hours | e76026011299 | 3.8.0 |
| endency       |                 |             |              |       |
| rda_identity  | 192.168.107.126 | Up 25 hours | 173fdd4b26bb | 3.8.0 |
| rda_asm       | 192.168.107.126 | Up 25 hours | aa66c25f6ed1 | 3.8.0 |
| rda_fsm       | 192.168.107.126 | Up 25 hours | c57a1102cdd0 | 3.8.0 |
| rda_chat_help | 192.168.107.126 | Up 25 hours | 88586517ba09 | 3.8.0 |
| er            |                 |             |              |       |
| cfx-rda-      | 192.168.107.126 | Up 25 hours | 3c754cd76de2 | 3.8.0 |
| access-       |                 |             |              |       |
| manager       |                 |             |              |       |
| cfx-rda-      | 192.168.107.126 | Up 25 hours | b033d5e38ee8 | 3.8.0 |
| resource-     |                 |             |              |       |
| manager       |                 |             |              |       |
| cfx-rda-user- | 192.168.107.126 | Up 25 hours | acbc9fad317e | 3.8.0 |
| preferences   |                 |             |              |       |
| portal-       | 192.168.107.126 | Up 25 hours | 18a6960c84a7 | 3.8.0 |
| backend       |                 |             |              |       |
| portal-       | 192.168.107.126 | Up 25 hours | a7e03b3fd2b3 | 3.8.0 |
| frontend      |                 |             |              |       |
+---------------+-----------------+-------------+--------------+-------+
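The version check above can be scripted. The sketch below parses the Tag column of the `rdaf platform status` table and flags any service not on the expected version; the two rows are a captured sample standing in for live output, and the table layout is assumed to match the one shown above.

```shell
# Sketch: flag services whose Tag column differs from the expected version.
# Assumes the 'rdaf platform status' table layout shown above; the two rows
# below are a captured sample, not live output.
expected="3.8.0"
status_output='
| rda_api_serve | 192.168.107.126 | Up 25 hours | 8bc1f423409e | 3.8.0 |
| rda_registry  | 192.168.107.126 | Up 25 hours | f0f8cbc100e7 | 3.8.0 |
'
# Live usage: status_output=$(rdaf platform status)
mismatches=$(printf '%s\n' "$status_output" \
  | awk -F'|' -v tag="$expected" \
      'NF > 5 { gsub(/ /, "", $6); if ($6 != "" && $6 != "Tag" && $6 != tag) print $2 }')
if [ -z "$mismatches" ]; then
  echo "all listed services are on tag $expected"
else
  echo "services not on tag $expected:"
  echo "$mismatches"
fi
```

Wrapped name continuation rows and the `+---+` separator rows have an empty Tag field, so the filter skips them automatically.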

Run the below command and verify that the rda-scheduler service is elected as a leader, shown under the Site column.

rdac pods

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | 3ed35eed7970 | 5d7afdc2 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 3ed35eed7970 | 5d7afdc2 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 3ed35eed7970 | 5d7afdc2 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 3ed35eed7970 | 5d7afdc2 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 3ed35eed7970 | 5d7afdc2 |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=1, Brokers=[1, 2, 3] |
| rda_app   | alert-ingester                         | c9bce264fe50 | r9b9231a |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | c9bce264fe50 | r9b9231a |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | c9bce264fe50 | r9b9231a |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | c9bce264fe50 | r9b9231a |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | c9bce264fe50 | r9b9231a |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=3, Brokers=[1, 2, 3] |
| rda_app   | alert-processor                        | 35d9272ab400 | a78d6202 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor                        | 35d9272ab400 | a78d6202 |             | minio-connectivity                                  | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
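Scanning a long healthcheck table by eye is error-prone. This hedged sketch filters the Status column for anything other than `ok`; the two sample rows (including the sample message text) are illustrative, and a live run would pipe the real `rdac healthcheck` output through the same filter.

```shell
# Sketch: surface any 'rdac healthcheck' rows whose Status column is not
# 'ok'. The two rows below are an illustrative sample, not live output.
sample='
| rda_app | alert-ingester | 3ed35eed7970 | 5d7afdc2 |  | service-status | ok     |  |
| rda_app | alert-ingester | 0f038de97f61 | 8cf86d37 |  | kafka-consumer | failed | sample message |
'
# Live usage: sample=$(rdac healthcheck)
failures=$(printf '%s\n' "$sample" \
  | awk -F'|' 'NF > 8 { gsub(/ /, "", $8); if ($8 != "" && $8 != "ok" && $8 != "Status") print }')
if [ -z "$failures" ]; then
  echo "all health checks passed"
else
  echo "failed health checks:"
  echo "$failures"
fi
```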

1.4.1.1 Upgrade RDAF API Server & Collector service to 3.8.1

Note

This upgrade is applicable only for Kubernetes(k8s)

  • To upgrade the RDAF API Server and rda-collector service to version 3.8.1, run the following command.
rdafk8s platform upgrade --tag 3.8.1 --service rda-api-server --service rda-collector

Note

Wait for 120 seconds, then repeat Step-2 through Step-6 above for the API Server and Collector service PODs.

  • Please wait until all of the new platform service PODs are in the Running state, then run the below command to verify their status and make sure they are running version 3.8.1.
rdafk8s platform status
+--------------------+----------------+-------------------+--------------+-------+
| Name               | Host           | Status            | Container Id | Tag   |
+--------------------+----------------+-------------------+--------------+-------+
| rda-api-server     | 192.168.131.44 | Up 8 Hours ago    | 8623d931221b | 3.8.1 |
| rda-api-server     | 192.168.131.45 | Up 8 Hours ago    | 875a3c39bfd6 | 3.8.1 | 
| rda-collector      | 192.168.131.44 | Up 8 Hours ago    | 888b4r38bbg8 | 3.8.1 |
| rda-collector      | 192.168.131.45 | Up 8 Hours ago    | 898h7e69bhf7 | 3.8.1 |
+--------------------+----------------+-------------------+--------------+-------+

1.4.2 Upgrade rdac CLI

Run the below command to upgrade the rdac CLI (Kubernetes deployments):

rdafk8s rdac_cli upgrade --tag 3.8.0

Run the below command to upgrade the rdac CLI (non-Kubernetes deployments):

rdaf rdac_cli upgrade --tag 3.8.0

1.4.3 Upgrade RDA Worker Services

Note

If the worker was deployed in an HTTP proxy environment, make sure the required HTTP proxy environment variables are added to the /opt/rdaf/deployment-scripts/values.yaml file under the rda_worker configuration section, as shown below, before upgrading the RDA Worker services.

rda_worker:
  terminationGracePeriodSeconds: 300
  replicas: 6
  sizeLimit: 1024Mi
  privileged: true
  resources:
    requests:
      memory: 100Mi
    limits:
      memory: 24Gi
  env:
    WORKER_GROUP: rda-prod-01
    CAPACITY_FILTER: cpu_load1 <= 7.0 and mem_percent < 95
    MAX_PROCESSES: '1000'
    RDA_ENABLE_TRACES: 'no'
    WORKER_PUBLIC_ACCESS: 'true'
    DISABLE_REMOTE_LOGGING_CONTROL: 'no'
    RDA_SELF_HEALTH_RESTART_AFTER_FAILURES: 3
    http_proxy:  "http://test:[email protected]:3128"
    https_proxy: "http://test:[email protected]:3128"
    HTTP_PROXY:  "http://test:[email protected]:3128"
    HTTPS_PROXY: "http://test:[email protected]:3128"
  ....
  ....

Step-1: Run the below command to upgrade the RDA Worker service PODs.

rdafk8s worker upgrade --tag 3.8.0

Step-2: Run the below command to check the status of the existing and newer PODs and make sure at least one instance of each RDA Worker service POD is in the Terminating state.

kubectl get pods -n rda-fabric -l app_component=rda-worker
NAME                          READY   STATUS    RESTARTS   AGE
rda-worker-77f459d5b9-9kdmg   1/1     Running   0          73m
rda-worker-77f459d5b9-htsmr   1/1     Running   0          74m

Step-3: Run the below command to put all Terminating RDA Worker service PODs into maintenance mode. It lists the POD IDs of all RDA Worker service PODs along with the rdac maintenance command required to put them into maintenance mode.

python maint_command.py

Step-4: Copy and paste the generated rdac maintenance command, as below.

rdac maintenance start --ids <comma-separated-list-of-platform-pod-ids>

Step-5: Run the below command to verify the maintenance mode status of the RDAF worker services.

rdac pods --show_maintenance | grep False

Step-6: Run the below command to delete the Terminating RDAF worker service PODs

for i in `kubectl get pods -n rda-fabric -l app_component=rda-worker | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done

Note

Wait for 120 seconds between each RDA Worker service upgrade, repeating Step-2 through Step-6 above for the rest of the RDA Worker service PODs.

Step-7: Please wait for 120 seconds to let the newer-version RDA Worker service PODs join the RDA Fabric. Run the below commands to verify the status of the newer RDA Worker service PODs.

rdac pods | grep rda-worker
rdafk8s worker status
+------------+----------------+---------------+--------------+---------+
| Name       | Host           | Status        | Container Id | Tag     |
+------------+----------------+---------------+--------------+---------+
| rda-worker | 192.168.108.32 | Up 9 Hours    | 67c5fee9720c | 3.8.0   |
|            |                | ago           |              |         |
+------------+----------------+---------------+--------------+---------+

Step-8: Run the below command to check that all RDA Worker services have an ok status and do not report any failure messages.

rdac healthcheck

Note

If the worker was deployed in an HTTP proxy environment, make sure the required HTTP proxy environment variables are added to the /opt/rdaf/deployment-scripts/values.yaml file under the rda_worker configuration section, as shown below, before upgrading the RDA Worker services.

rda_worker:
  mem_limit: 8G
  memswap_limit: 8G
  privileged: false
  environment:
    RDA_ENABLE_TRACES: 'no'
    RDA_SELF_HEALTH_RESTART_AFTER_FAILURES: 3
    http_proxy:  "http://test:[email protected]:3128"
    https_proxy: "http://test:[email protected]:3128"
    HTTP_PROXY:  "http://test:[email protected]:3128"
    HTTPS_PROXY: "http://test:[email protected]:3128"
  • Upgrade RDA Worker Services

Run the below command to upgrade the RDA Worker services with zero downtime (rolling upgrade):

rdaf worker upgrade --tag 3.8.0 --rolling-upgrade --timeout 10

Note

The --timeout value of 10 in the above command is specified in seconds.

Note

The rolling-upgrade option upgrades the Worker services running in high-availability mode on one VM at a time in sequence. It completes the upgrade of Worker services running on VM-1 before upgrading them on VM-2, followed by VM-3, and so on.

After the Worker services upgrade completes on all VMs, it asks for user confirmation; the user has to provide yes to delete the older-version Worker service PODs.

2024-08-12 02:56:11,573 [rdaf.component.worker] INFO     - Collecting worker details for rolling upgrade
2024-08-12 02:56:14,301 [rdaf.component.worker] INFO     - Rolling upgrade worker on 192.168.133.96
+----------+----------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------+---------+---------+--------------+-------------+------------+
| c8a37db9 | worker   | 3.8.0   | 3:32:31 | fffe44b43708 | None        | True       |
+----------+----------+---------+---------+--------------+-------------+------------+
Continue moving above pod to maintenance mode? [yes/no]: yes
2024-08-12 02:57:17,346 [rdaf.component.worker] INFO     - Initiating maintenance mode for pod c8a37db9
2024-08-12 02:57:22,401 [rdaf.component.worker] INFO     - Waiting for worker to be moved to maintenance.
2024-08-12 02:57:35,001 [rdaf.component.worker] INFO     - Following worker container is in maintenance mode
+----------+----------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------+---------+---------+--------------+-------------+------------+
| c8a37db9 | worker   | 3.8.0   | 3:33:52 | fffe44b43708 | maintenance | False       |
+----------+----------+---------+---------+--------------+-------------+------------+
2024-08-12 02:57:35,002 [rdaf.component.worker] INFO     - Waiting for timeout of 3 seconds.

Run the below command to upgrade the RDA Worker services without the rolling-upgrade option (i.e., not a zero-downtime upgrade):

rdaf worker upgrade --tag 3.8.0

Please wait for 120 seconds to let the newer-version RDA Worker service containers join the RDA Fabric. Run the below commands to verify the status of the newer RDA Worker service containers.

rdac pods | grep worker
| Infra | worker      | True        | 6eff605e72c4 | a318f394 | rda-site-01 | 13:45:13 |      4 |        31.21 | 0             | 0            |
| Infra | worker      | True        | ae7244d0d10a | 554c2cd8 | rda-site-01 | 13:40:40 |      4 |        31.21 | 0             | 0            |

rdaf worker status

+------------+----------------+---------------+--------------+---------+
| Name       | Host           | Status        | Container Id | Tag     |
+------------+----------------+---------------+--------------+---------+
| rda_worker | 192.168.133.92 | Up 10 hours   | 778cd6641abf | 3.8.0   |
| rda_worker | 192.168.133.96 | Up 10 hours   | 998ebea682fa | 3.8.0   |
+------------+----------------+---------------+--------------+---------+
Run the below command to check that all RDA Worker services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                                                                                                                                                                                                                         |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| rda_app   | alert-ingester                         | 0f038de97f61 | 8cf86d37 |             | service-status                                      | ok       |                                                                                                                                                                                                                                                                 |
| rda_app   | alert-ingester                         | 0f038de97f61 | 8cf86d37 |             | minio-connectivity                                  | ok       |                                                                                                                                                                                                                                                                 |
| rda_app   | alert-ingester                         | 0f038de97f61 | 8cf86d37 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                                                                                                                                                                                                                        |
| rda_app   | alert-ingester                         | 0f038de97f61 | 8cf86d37 |             | service-initialization-status                       | ok       |                                                                                                                                                                                                                                                                 |
| rda_app   | alert-ingester                         | 0f038de97f61 | 8cf86d37 |             | kafka-consumer                                      | failed   | Health: [{'98e005500460423c886d8e30d8a9acf6.inbound-events': 1780704, '98e005500460423c886d8e30d8a9acf6.mapped-events': 70708}, {'98e005500460423c886d8e30d8a9acf6.event-request': 0}]                                                                          |
| rda_app   | alert-ingester                         | 0f038de97f61 | 8cf86d37 |             | kafka-connectivity                                  | ok       | Cluster=NjY5Yzk1OTJmN2JhMTFlZQ, Broker=3, Brokers=[1, 2, 3]                                                                                                                                                                                                     |
| rda_app   | alert-ingester                         | 22d67240d3df | ea6c199f |             | service-status                                      | ok       |                                                                                                                                                                                                                                                                 |
| rda_app   | alert-ingester                         | 22d67240d3df | ea6c199f |             | minio-connectivity                                  | ok       |                                                                                                                                                                                                                                                                 |
| rda_app   | alert-ingester                         | 22d67240d3df | ea6c199f |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                                                                                                                                                                                                                        |
| rda_app   | alert-ingester                         | 22d67240d3df | ea6c199f |             | service-initialization-status                       | ok       |                                                                                                                                                                                                                                                                 |
| rda_app   | alert-ingester                         | 22d67240d3df | ea6c199f |             | kafka-consumer                                      | failed   | Health: [{'98e005500460423c886d8e30d8a9acf6.inbound-events': 1780204, '98e005500460423c886d8e30d8a9acf6.mapped-events': 71454}, {'98e005500460423c886d8e30d8a9acf6.event-request': 0}]                                                                          |
| rda_app   | alert-ingester                         | 22d67240d3df | ea6c199f |             | kafka-connectivity                                  | ok       | Cluster=NjY5Yzk1OTJmN2JhMTFlZQ, Broker=3, Brokers=[1, 2, 3]                                                                                                                                                                                                     |
| rda_app   | alert-processor                        | 9eeb8a733bc5 | 71c94d82 |             | service-status                                      | ok       |                                                                                                                                                                                                                                                                 |
| rda_app   | alert-processor                        | 9eeb8a733bc5 | 71c94d82 |             | minio-connectivity                                  | ok       |                                                                                                                                                                                                                                                                 |
| rda_app   | alert-processor                        | 9eeb8a733bc5 | 71c94d82 |             | service-dependency:cfx-app-controller               | ok       | 2 pod(s) found for cfx-app-controller                                                                                                                                                                                                                           |
| rda_app   | alert-processor                        | 9eeb8a733bc5 | 71c94d82 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                                                                                                                                                                                                                        |
| rda_app   | alert-processor                        | 9eeb8a733bc5 | 71c94d82 |             | service-initialization-status                       | ok       |                                                                                                                                                                                                                                                                 |
| rda_app   | alert-processor                        | 9eeb8a733bc5 | 71c94d82 |             | kafka-consumer                                      | failed   | Health: [{'98e005500460423c886d8e30d8a9acf6.ingested-alerts': 659698, '98e005500460423c886d8e30d8a9acf6.incidents-upserted': 0}]         
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------------------------------+

1.4.4 Upgrade OIA Application Services

Step-1: Run the below commands to upgrade the RDAF OIA Application services.

rdafk8s app upgrade OIA --tag 7.8.0 --service rda-app-controller --service rda-reports-registry --service rda-file-browser --service rda-configuration-service --service rda-webhook-server --service rda-irm-service --service rda-ml-config --service rda-notification-service --service rda-smtp-server --service rda-ingestion-tracker
rdafk8s app upgrade OIA --tag 7.8.1 --service rda-alert-processor-companion
rdafk8s app upgrade OIA --tag 7.8.1.4 --service rda-alert-ingester --service rda-event-consumer --service rda-alert-processor --service rda-collaboration

Step-2: Run the below command to check the status of the newly upgraded PODs.

kubectl get pods -n rda-fabric -l app_name=oia

Step-3: Run the below command to put all Terminating OIA application service PODs into maintenance mode. It lists the POD IDs of all OIA application service PODs along with the rdac maintenance command required to put them into maintenance mode.

python maint_command.py

Step-4: Copy and paste the generated rdac maintenance command, as below.

rdac maintenance start --ids <comma-separated-list-of-oia-app-pod-ids>

Step-5: Run the below command to verify the maintenance mode status of the OIA application services.

rdac pods --show_maintenance | grep False

Step-6: Run the below command to delete the Terminating OIA application service PODs

for i in `kubectl get pods -n rda-fabric -l app_name=oia | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done
kubectl get pods -n rda-fabric -l app_name=oia

Note

Wait for 120 seconds, then repeat Step-2 through Step-6 for the rest of the OIA application service PODs.

Please wait until all of the new OIA application service PODs are in the Running state, then run the below command to verify their status and make sure they are running version 7.8.0, 7.8.1, or 7.8.1.4 as applicable.

rdafk8s app status
+---------------+----------------+---------------+--------------+--------+
| Name          | Host           | Status        | Container Id | Tag    |
+---------------+----------------+---------------+--------------+--------+
| rda-alert-    | 192.168.108.31 | Up 9 Hours    | b229e14994cd | 7.8.1.4|
| ingester      |                | ago           |              |        |
| rda-alert-    | 192.168.108.32 | Up 8 Hours    | 5ffee59cac9e | 7.8.1.4|
| processor     |                | ago           |              |        |
| rda-alert-    | 192.168.108.32 | Up 8 Hours    | 4e0d8b8a9076 | 7.8.1  |
| processor-    |                | ago           |              |        |
| companion     |                |               |              |        |
| rda-app-      | 192.168.108.31 | Up 9 Hours    | 5b230e6677c6 | 7.8.0  |
| controller    |                | ago           |              |        |
| rda-          | 192.168.108.31 | Up 9 Hours    | 55e858d0cca0 | 7.8.1.4|
| collaboration |                | ago           |              |        |
| rda-configura | 192.168.108.31 | Up 9 Hours    | a78ff5743025 | 7.8.0  |
| tion-service  |                | ago           |              |        |
| rda-event-    | 192.168.108.31 | Up 9 Hours    | ae6e1aaac682 | 7.8.1.4|
| consumer      |                | ago           |              |        |
| rda-file-     | 192.168.108.31 | Up 9 Hours    | 987891ea364e | 7.8.0  |
| browser       |                | ago           |              |        |
| rda-          | 192.168.108.31 | Up 9 Hours    | d2151559a4bf | 7.8.0  |
| ingestion-    |                | ago           |              |        |
| tracker       |                |               |              |        |
| rda-irm-      | 192.168.108.31 | Up 9 Hours    | ba070de557ec | 7.8.0  |
| service       |                | ago           |              |        |
| rda-ml-config | 192.168.108.31 | Up 9 Hours    | 5c0685496bd5 | 7.8.0  |
|               |                | ago           |              |        |
| rda-          | 192.168.108.31 | Up 9 Hours    | 1ef37a720759 | 7.8.0  |
| notification- |                | ago           |              |        |
| service       |                |               |              |        |
| rda-reports-  | 192.168.108.31 | Up 9 Hours    | 5daa7555f309 | 7.8.0  |
| registry      |                | ago           |              |        |
| rda-smtp-     | 192.168.108.31 | Up 9 Hours    | b81ccaf5883a | 7.8.0  |
| server        |                | ago           |              |        |
| rda-webhook-  | 192.168.108.32 | Up 5 Hours    | c5e6a72674d7 | 7.8.0  |
| server        |                | ago           |              |        |
+---------------+----------------+---------------+--------------+--------+
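Because the OIA services are upgraded to three different tags, a single-version grep is not enough here. This hedged sketch compares a few services against the expected tags from Step-1; the name/tag pairs below are illustrative stand-ins for values parsed from real `rdafk8s app status` output.

```shell
# Sketch: compare observed OIA service tags against the expected tags from
# Step-1. The sample pairs stand in for parsed 'rdafk8s app status' output.
sample='rda-app-controller 7.8.0
rda-alert-processor-companion 7.8.1
rda-alert-ingester 7.8.1.4'
result=$(printf '%s\n' "$sample" | awk '
  BEGIN {
    want["rda-app-controller"]            = "7.8.0"
    want["rda-alert-processor-companion"] = "7.8.1"
    want["rda-alert-ingester"]            = "7.8.1.4"
  }
  NF == 2 {
    if ($2 == want[$1]) print $1 " ok"
    else                print $1 " MISMATCH (want " want[$1] ")"
  }')
echo "$result"
```

Extending the `want` map to the full service list from Step-1 makes the check exhaustive.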

Step-7: Run the below command to verify all OIA application services are up and running.

rdac pods
+-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host           | ID       | Site        | Age      |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------|
| App   | alert-ingester                         | True        | rda-alert-inge | 6a6e464d |             | 19:19:06 |      8 |        31.33 |               |              |
| App   | alert-ingester                         | True        | rda-alert-inge | 7f6b42a0 |             | 19:19:23 |      8 |        31.33 |               |              |
| App   | alert-processor                        | True        | rda-alert-proc | a880e491 |             | 19:19:51 |      8 |        31.33 |               |              |
| App   | alert-processor                        | True        | rda-alert-proc | b684609e |             | 19:19:48 |      8 |        31.33 |               |              |
| App   | alert-processor-companion              | True        | rda-alert-proc | 874f3b33 |             | 19:18:54 |      8 |        31.33 |               |              |
| App   | alert-processor-companion              | True        | rda-alert-proc | 70cadaa7 |             | 19:18:35 |      8 |        31.33 |               |              |
| App   | asset-dependency                       | True        | rda-asset-depe | bde06c15 |             | 19:44:20 |      8 |        31.33 |               |              |
| App   | asset-dependency                       | True        | rda-asset-depe | 47b9eb02 |             | 19:44:08 |      8 |        31.33 |               |              |
| App   | authenticator                          | True        | rda-identity-d | faa33e1b |             | 19:44:22 |      8 |        31.33 |               |              |
| App   | authenticator                          | True        | rda-identity-d | 36083c36 |             | 19:44:16 |      8 |        31.33 |               |              |
| App   | cfx-app-controller                     | True        | rda-app-contro | 5fd3c3f4 |             | 19:19:39 |      8 |        31.33 |               |              |
| App   | cfx-app-controller                     | True        | rda-app-contro | d66e5ce8 |             | 19:19:26 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | rda-access-man | ecbb535c |             | 19:44:16 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | rda-access-man | 9a05db5a |             | 19:44:06 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | rda-collaborat | 61b3c53b |             | 19:18:48 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | rda-collaborat | 09b9474e |             | 19:18:27 |      8 |        31.33 |               |              |
+-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------+

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                                                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------------------------------|
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | service-status                                      | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | minio-connectivity                                  | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                                                                                    |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | service-initialization-status                       | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | kafka-connectivity                                  | ok       | Cluster=dKnnkaYSPELK8DBUk0rPig, Broker=0, Brokers=[0, 1, 2]                                                                 |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | kafka-consumer                                      | ok       | Health: [{'387c0cb507b84878b9d0b15222cb4226.inbound-events': 0, '387c0cb507b84878b9d0b15222cb4226.mapped-events': 0}, {}]   |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | service-status                                      | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | minio-connectivity                                  | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                                                                                    |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | service-initialization-status                       | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | kafka-consumer                                      | ok       | Health: [{'387c0cb507b84878b9d0b15222cb4226.inbound-events': 0, '387c0cb507b84878b9d0b15222cb4226.mapped-events': 0}, {}]   |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | kafka-connectivity                                  | ok       | Cluster=dKnnkaYSPELK8DBUk0rPig, Broker=1, Brokers=[0, 1, 2]                                                                 |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | service-status                                      | ok       |                                                                                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | minio-connectivity                                  | ok       |                                                                                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | service-dependency:cfx-app-controller               | ok       | 2 pod(s) found for cfx-app-controller                                                                                       |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                                                                                    |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | service-initialization-status                       | ok       |                                                                                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | kafka-connectivity                                  | ok       | Cluster=dKnnkaYSPELK8DBUk0rPig, Broker=1, Brokers=[0, 1, 2]                                                                 |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | DB-connectivity                                     | ok       |                                                                                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------------------------------+

Run the below command to upgrade the RDA Fabric OIA application services with zero downtime (rolling upgrade).

rdaf app upgrade OIA --tag 7.8.0 --rolling-upgrade --timeout 10

Note

The timeout <10> specified in the above command is in seconds.

Note

The rolling-upgrade option upgrades the OIA application services running in high-availability mode on one VM at a time in sequence. It completes the upgrade of OIA application services running on VM-1 before upgrading them on VM-2, followed by VM-3, and so on.

After completing the OIA application services upgrade on all VMs, it will ask for user confirmation to delete the older version OIA application service PODs.

2024-08-12 03:18:08,705 [rdaf.component.oia] INFO     - Gathering OIA app container details.
2024-08-12 03:18:10,719 [rdaf.component.oia] INFO     - Gathering rdac pod details.
+----------+----------------------+---------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type             | Version | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------------------+---------+---------+--------------+-------------+------------+
| 2992fe69 | cfx-app-controller   | 7.7.2   | 3:44:53 | 0500f773a8ff | None        | True       |
| 336138c8 | reports-registry     | 7.7.2   | 3:44:12 | 92a5e0daa942 | None        | True       |
| ccc5f3ce | cfxdimensions-app-   | 7.7.2   | 3:43:34 | 99192de47ea4 | None        | True       |
|          | notification-service |         |         |              |             |            |
| 03614007 | cfxdimensions-app-   | 7.7.2   | 3:42:54 | fbdf4e5c16c3 | None        | True       |
|          | file-browser         |         |         |              |             |            |
| a4949804 | configuration-       | 7.7.2   | 3:42:15 | 4ea08c8cbf2e | None        | True       |
|          | service              |         |         |              |             |            |
| 8f37c520 | alert-ingester       | 7.7.2   | 3:41:35 | e9e3a3e69cac | None        | True       |
| 249b7104 | webhook-server       | 7.7.2   | 3:12:04 | 1df43cebc888 | None        | True       |
| 76c64336 | smtp-server          | 7.7.2   | 3:08:57 | 03725b0cb91f | None        | True       |
| ad85cb4c | event-consumer       | 7.7.2   | 3:09:58 | 8a7d349da513 | None        | True       |
| 1a788ef3 | alert-processor      | 7.7.2   | 3:11:01 | a7c5294cba3d | None        | True       |
| 970b90b1 | cfxdimensions-app-   | 7.7.2   | 3:38:14 | 01d4245bb90e | None        | True       |
|          | irm_service          |         |         |              |             |            |
| 153aa6ac | ml-config            | 7.7.2   | 3:37:33 | 10d5d6766354 | None        | True       |
| 5aa927a4 | cfxdimensions-app-   | 7.7.2   | 3:36:53 | dcfda7175cb5 | None        | True       |
|          | collaboration        |         |         |              |             |            |
| 6833aa86 | ingestion-tracker    | 7.7.2   | 3:36:13 | ef0e78252e48 | None        | True       |
| afe77cb9 | alert-processor-     | 7.7.2   | 3:35:33 | 6f03c7fdba51 | None        | True       |
|          | companion            |         |         |              |             |            |
+----------+----------------------+---------+---------+--------------+-------------+------------+
Continue moving above pods to maintenance mode? [yes/no]: yes
2024-08-12 03:18:27,159 [rdaf.component.oia] INFO     - Initiating Maintenance Mode...
2024-08-12 03:18:32,978 [rdaf.component.oia] INFO     - Waiting for services to be moved to maintenance.
2024-08-12 03:18:55,771 [rdaf.component.oia] INFO     - Following container are in maintenance mode
+----------+----------------------+---------+---------+--------------+-------------+------------+

Run the below command to upgrade the RDA Fabric OIA application services without the zero-downtime (rolling upgrade) option.

rdaf app upgrade OIA --tag 7.8.0

Wait until all of the new OIA application service containers are in the Up state, then run the below command to verify their status and confirm they are running version 7.8.0.

rdaf app status
+---------------+-----------------+-------------+--------------+-----------+
| Name          | Host            | Status      | Container Id | Tag       |
+---------------+-----------------+-------------+--------------+-----------+
| cfx-rda-app-  | 192.168.107.126 | Up 24 hours | 71776576f78e | 7.8.0     |
| controller    |                 |             |              |           |
| cfx-rda-      | 192.168.107.126 | Up 24 hours | ed51f722c690 | 7.8.0     |
| reports-      |                 |             |              |           |
| registry      |                 |             |              |           |
| cfx-rda-      | 192.168.107.126 | Up 24 hours | 7a6a6292b96c | 7.8.0     |
| notification- |                 |             |              |           |
| service       |                 |             |              |           |
| cfx-rda-file- | 192.168.107.126 | Up 24 hours | 924f7d52776c | 7.8.0     |
| browser       |                 |             |              |           |
| cfx-rda-confi | 192.168.107.126 | Up 24 hours | 9822b65b883b | 7.8.0     |
| guration-     |                 |             |              |           |
| service       |                 |             |              |           |
| cfx-rda-      | 192.168.107.126 | Up 24 hours | cc345c28b659 | 7.8.0     |
| alert-        |                 |             |              |           |
| ingester      |                 |             |              |           |
| cfx-rda-      | 192.168.107.126 | Up 24 hours | 3dd86e6bc7ba | 7.8.0     |
| webhook-      |                 |             |              |           |
| server        |                 |             |              |           |
| cfx-rda-smtp- | 192.168.107.126 | Up 24 hours | ed71e34626cc | 7.8.0     |
| server        |                 |             |              |           |
| cfx-rda-      | 192.168.107.126 | Up 24 hours | c08b799183d5 | 7.8.0     |
| event-        |                 |             |              |           |
| consumer      |                 |             |              |           |
| cfx-rda-      | 192.168.107.126 | Up 24 hours | 3bac40c99a9a | 7.8.0     |
| alert-        |                 |             |              |           |
| processor     |                 |             |              |           |
| cfx-rda-irm-  | 192.168.107.126 | Up 24 hours | 9a9662283468 | 7.8.0     |
| service       |                 |             |              |           |
| cfx-rda-ml-   | 192.168.107.126 | Up 24 hours | fd4973b514ab | 7.8.0     |
| config        |                 |             |              |           |
| cfx-rda-      | 192.168.107.126 | Up 24 hours | 35f9eca70897 | 7.8.0     |
| collaboration |                 |             |              |           |
| cfx-rda-      | 192.168.107.126 | Up 24 hours | 855a4ca70534 | 7.8.0     |
| ingestion-    |                 |             |              |           |
| tracker       |                 |             |              |           |
| cfx-rda-      | 192.168.107.126 | Up 24 hours | 5720f64dc0a8 | 7.8.0     |
| alert-        |                 |             |              |           |
| processor-    |                 |             |              |           |
| companion     |                 |             |              |           |
+---------------+-----------------+-------------+--------------+-----------+
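For larger deployments it can be tedious to eyeball the status table. The snippet below is an illustrative shell filter, not part of the product: it scans a saved copy of the `rdaf app status` output and lists any container whose Tag column is not 7.8.0. The two sample rows are made up; the field positions match the `|`-separated table layout above.

```shell
# Save "rdaf app status" output to a file first; sample rows stand in here.
cat > /tmp/app_status.txt <<'EOF'
| cfx-rda-app-controller | 192.168.107.126 | Up 24 hours | 71776576f78e | 7.8.0 |
| cfx-rda-smtp-server    | 192.168.107.126 | Up 24 hours | ed71e34626cc | 7.7.2 |
EOF
# Field 6 is the Tag column when splitting on "|"; print names still on an old tag
awk -F'|' '$6 ~ /[0-9]/ && $6 !~ /7\.8\.0/ {print $2}' /tmp/app_status.txt
```

Any name printed by the filter is a container that has not yet picked up the 7.8.0 image.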

Run the below command to verify all OIA application services are up and running.

rdac pods
+-------+----------------------------------------+-------------+--------------+----------+-------------+---------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host         | ID       | Site        | Age     |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+--------------+----------+-------------+---------+--------+--------------+---------------+--------------|
| App   | alert-ingester                         | True        | 0f038de97f61 | 8cf86d37 |             | 0:58:22 |      8 |        47.03 |               |              |
| App   | alert-ingester                         | True        | 22d67240d3df | ea6c199f |             | 0:58:33 |      8 |        47.03 |               |              |
| App   | alert-processor                        | True        | 9eeb8a733bc5 | 71c94d82 |             | 0:56:16 |      8 |        47.03 |               |              |
| App   | alert-processor                        | True        | 8c8eec3fbffd | 928a49c9 |             | 0:56:04 |      8 |        47.03 |               |              |
| App   | alert-processor-companion              | True        | 9a92c7627466 | 2cc526fc |             | 0:54:18 |      8 |        47.03 |               |              |
| App   | alert-processor-companion              | True        | 471ff8d8fcfa | 168183ee |             | 0:54:07 |      8 |        47.03 |               |              |
| App   | asset-dependency                       | True        | 5dbb1d5ef870 | 34992816 |             | 1:26:42 |      8 |        47.03 |               |              |
| App   | asset-dependency                       | True        | e15dcbe9bff1 | 06e71a22 |             | 1:19:57 |      8 |        47.03 |               |              |
| App   | authenticator                          | True        | d15b764936e1 | a6d0a7fe |             | 1:26:31 |      8 |        47.03 |               |              |
| App   | authenticator                          | True        | 4d48d870903f | 200ea73d |             | 1:19:45 |      8 |        47.03 |               |              |
| App   | cfx-app-controller                     | True        | b5c27f223af0 | 2fa517bf |             | 1:00:29 |      8 |        47.03 |               |              |
| App   | cfx-app-controller                     | True        | 25afa07f6022 | 7757102a |             | 1:00:19 |      8 |        47.03 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | f3fbfb7e903c | 7ceea5e4 |             | 1:25:53 |      8 |        47.03 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | 3eebeb8b00dd | 14b43d23 |             | 1:19:08 |      8 |        47.03 |               |              |
+-------+----------------------------------------+-------------+--------------+----------+-------------+---------+--------+--------------+---------------+--------------+

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=1, Brokers=[1, 2, 3] |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=2, Brokers=[1, 2, 3] |
| rda_app   | alert-processor                        | c6cc7b04ab33 | b4ebfb06 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor                        | c6cc7b04ab33 | b4ebfb06 |             | minio-connectivity                                  | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
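To avoid scanning the full healthcheck table by eye, a saved copy of the output can be filtered for non-ok rows. The snippet below is a sketch with fabricated sample rows; it assumes the Status value sits in the 8th `|`-separated field, as in the table above.

```shell
# Save "rdac healthcheck" output to a file first; sample rows stand in here.
cat > /tmp/healthcheck.txt <<'EOF'
| rda_app | alert-ingester  | 7f75047e9e44 | daa8c414 |  | kafka-connectivity | ok     |         |
| rda_app | alert-processor | c6cc7b04ab33 | b4ebfb06 |  | DB-connectivity    | failed | timeout |
EOF
# Print pod type and health parameter for any row whose Status is not "ok"
awk -F'|' '$8 ~ /[a-z]/ && $8 !~ /ok/ {print "NOT OK:", $3, "->", $7}' /tmp/healthcheck.txt
```

An empty result means every reported health parameter is ok.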

1.4.5 Upgrade Event Gateway Services

Important

This Upgrade is for Non-K8s only

Step 1. Prerequisites

  • The event gateway with the 3.7.2 tag should already be installed

Note

If the event gateway was deployed using the RDAF CLI, follow Step 2 and skip Step 3; otherwise, go directly to Step 3.

Step 2. Upgrade Event Gateway Using RDAF CLI

  • To upgrade the event gateway, log in to the RDAF CLI VM and execute the following command.

    rdaf event_gateway upgrade --tag 3.8.0
    

Step 3. Upgrade Event Gateway Using Docker Compose File

  • Log in to the VM where the EC agent is installed

  • Navigate to the location where EC agent was previously installed, using the following command

    cd /opt/rdaf/event_gateway
    
  • Edit the docker-compose file for the EC agent using a local editor (e.g. vi), update the image tag, and save the file

    vi event-gateway-docker-compose.yml
    
    version: '3.1'
    services:
      rda_event_gateway:
        image: cfxregistry.cloudfabrix.io/ubuntu-rda-event-gateway:3.8.0
        restart: always
        network_mode: host
        mem_limit: 6G
        memswap_limit: 6G
        volumes:
          - /opt/rdaf/network_config:/network_config
          - /opt/rdaf/event_gateway/config:/event_gw_config
          - /opt/rdaf/event_gateway/certs:/certs
          - /opt/rdaf/event_gateway/logs:/logs
          - /opt/rdaf/event_gateway/log_archive:/tmp/log_archive
        logging:
          driver: "json-file"
          options:
            max-size: "25m"
            max-file: "5"
        environment:
          RDA_NETWORK_CONFIG: /network_config/rda_network_config.json
          EVENT_GW_MAIN_CONFIG: /event_gw_config/main/main.yml
          EVENT_GW_SNMP_TRAP_CONFIG: /event_gw_config/snmptrap/trap_template.json
          EVENT_GW_SNMP_TRAP_ALERT_CONFIG: /event_gw_config/snmptrap/trap_to_alert_go.yaml
          AGENT_GROUP: event_gateway_site01
          EVENT_GATEWAY_CONFIG_DIR: /event_gw_config
          LOGGER_CONFIG_FILE: /event_gw_config/main/logging.yml
    
  • Please run the following commands

    docker-compose -f event-gateway-docker-compose.yml down
    docker-compose -f event-gateway-docker-compose.yml pull
    docker-compose -f event-gateway-docker-compose.yml up -d
    
  • Use the command as shown below to ensure that the RDA docker instances are up and running.

    docker ps -a | grep event
    
  • Use the below mentioned command to check docker logs for any errors

    docker logs -f --tail 200 <event gateway containerid>
    

    1.4.6 Upgrade RDA Edge Collector Service

    Step 1. Enter the login credentials for the VM where the EC agent is installed

    Step 2. Navigate to the location where the EC agent was originally installed; see the example below.

    cd /opt/rdaf/edgecollector/
    

    Step 3. Use a local editor (e.g. vi) to edit the EC agent's docker-compose file.

    vi rda-edgecollector-docker-compose.yml
    
    • Edit and update the EC agent image tag from the existing version to 3.8.0, as shown in the example below
    image: 'cfxregistry.cloudfabrix.io/cfxcollector:3.8.0'
    
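If a non-interactive edit is preferred over vi, the tag bump can also be scripted with sed. This is a hedged sketch, not the documented procedure: the one-line file below stands in for the real compose file, and GNU sed's in-place (`-i`) flag is assumed.

```shell
# Stand-in for /opt/rdaf/edgecollector/rda-edgecollector-docker-compose.yml
f=/tmp/rda-edgecollector-docker-compose.yml
echo "image: 'cfxregistry.cloudfabrix.io/cfxcollector:3.7.2'" > "$f"
# Replace whatever version follows "cfxcollector:" with 3.8.0 (GNU sed -i assumed)
sed -i "s|cfxcollector:[0-9][0-9.]*|cfxcollector:3.8.0|" "$f"
cat "$f"
```

After the edit, continue with Step 4 and Step 5 as written.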

    Step 4. Save & Exit

    Step 5. Execute the following commands from the /opt/rdaf/edgecollector directory to bring up the RDA Edge Collector.

    docker-compose -f rda-edgecollector-docker-compose.yml down
    
    docker-compose -f rda-edgecollector-docker-compose.yml pull
    
    docker-compose -f rda-edgecollector-docker-compose.yml up -d
    

    Step 6. To make sure the RDA Edge Collector Docker instance is up and running, use the following command.

    docker ps -a | grep agent 
    
    b6861c4f79b5 cfxregistry.cloudfabrix.io/cfxcollector:3.8.0 "/bin/bash -c 'cd /o…" 7 minutes ago  Up 7 minutes rda_edgecollector_agent
    
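The tag-and-state check can also be scripted. The sketch below is illustrative only: it uses a hard-coded sample line in place of real `docker ps` output and simply confirms that the line shows both the 3.8.0 tag and an Up state.

```shell
# Stand-in for: docker ps -a | grep agent
line='b6861c4f79b5 cfxregistry.cloudfabrix.io/cfxcollector:3.8.0 "/bin/bash -c ..." 7 minutes ago Up 7 minutes rda_edgecollector_agent'
# Require both the upgraded tag and a running ("Up") state
case "$line" in
  *cfxcollector:3.8.0*" Up "*) echo "edge collector upgraded and running" ;;
  *) echo "check the container tag/state" ;;
esac
```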

1.5. Post Upgrade Steps

Step 1: Update the Cleared Alerts Data Retention in Database property to 8760 hours (1 year) retention. To update this property, go to Main Menu --> Administration --> Configurations and click the row-level action for Cleared Alerts Data Retention in Database.

Step 2: Purge Resolved/Closed incidents data from the IRM/AP/Collab databases and pstreams. This change is needed so that incidents, alerts, and collaboration data are maintained consistently across the system.

Currently, the purging mechanism relies on the retention_days setting defined per pstream. As a result, related data (e.g., alerts or collab messages) may be retained for different durations, leading to inconsistencies in how incident-related information is managed throughout the system.

Note

This change is needed so that the collector does not purge data from the pstreams.

Go to Main Menu --> Configuration --> RDA Administration --> Persistent Streams. Update the below pstream definitions to remove the retention_days and retention_purge_extra_filter attributes if a pstream has these properties defined.

Old Config:

"default_values": {},
"retention_days": 30,
"case_insensitive": true

New Config:

"default_values": {},
"case_insensitive": true

a) oia-incidents-stream

b) oia-incident-inserts-stream

c) oia-incidents-delta-stream

d) oia-incidents-external-tickets-stream

e) oia-incidents-collaboration-stream

f) oia-collab-messagesharing-stream

g) oia-alerts-stream

h) oia-alerts-payload
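As a convenience, the retention attributes can be stripped from a saved copy of a pstream definition before pasting the updated JSON back into the UI. The snippet below is illustrative only: the file is a minimal stand-in for a real definition, and the simple `grep -v` approach assumes each retention attribute sits on its own line, as in the example above.

```shell
# Stand-in for a pstream definition exported from the UI
cat > /tmp/oia-alerts-stream.json <<'EOF'
{
  "default_values": {},
  "retention_days": 30,
  "retention_purge_extra_filter": "status = 'CLEARED'",
  "case_insensitive": true
}
EOF
# Drop the two retention attribute lines, keeping everything else
grep -vE '"retention_days"|"retention_purge_extra_filter"' \
  /tmp/oia-alerts-stream.json > /tmp/oia-alerts-stream.new.json
cat /tmp/oia-alerts-stream.new.json
```

Review the resulting JSON for validity (e.g. trailing commas) before updating the pstream definition.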

Step 3: Go to Main Menu --> Configuration --> RDA Administration --> Bundles, select oia_l1_l2_bundle, and click the Deploy action to deploy the latest Dashboards configuration for Alerts and Incidents.

Step 4: Deploy the topology_path_viz_bundle from Main Menu --> Configuration --> RDA Administration --> Bundles by clicking the row-level Deploy action for topology_path_viz_bundle.

Step 5: After the upgrade, ensure that the SYS_PTRACE capability is added to all relevant deployment files, including platform services, worker, OIA services, event-gateway.yaml, and bulkstats.yaml.

  • Platform Services
/opt/rdaf/deployment-scripts/helm/platform/<service-name>/templates/deployment.yaml
  • OIA Services
/opt/rdaf/deployment-scripts/helm/oia/<service-name>/templates/deployment.yaml
  • Worker Service
/opt/rdaf/deployment-scripts/helm/rda-worker/templates/deployment.yaml

Please verify that SYS_PTRACE is present within the capabilities section for each service, as illustrated in the following example.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rda-fabric-services
    app_category: rdaf-platform
    app_component: rda-identity
  name: rda-identity
  namespace: {{ .Values.namespace }}
spec:
  strategy:
    type: {{ .Values.rda_identity.strategy | default "RollingUpdate" }}
  replicas: {{ .Values.rda_identity.replicas }}
  selector:
    matchLabels:
      app: rda-fabric-services
      app_category: rdaf-platform
      app_component: rda-identity
  template:
    metadata:
      labels:
        app: rda-fabric-services
        app_category: rdaf-platform
        app_component: rda-identity
    spec:
      {{- with .Values.securityContext }}
      securityContext:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds | default "3600" }}
      containers:
        - name: rda-identity
          imagePullPolicy: {{ .Values.imagePullPolicy }}
          image: {{ .Values.registry }}/ubuntu-rda-identity:{{ .Values.tag }}
          securityContext:
            privileged: {{ .Values.rda_identity.privileged | default "true" }}
            capabilities:
              add:
                - SYS_PTRACE
          lifecycle:
            preStop:
              exec:
                command:
                  - "/bin/sh"
                  - "-c"
                  - "echo '******************...Initiating graceful termination...*************************' >/proc/1/fd/1 && sleep 10"
          env:
            - name: LISTEN_ADDRESS
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
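To audit all deployment files at once rather than opening each one, GNU grep's `-L` option can list files that are still missing the capability. This is a sketch: the /tmp tree below is a stand-in for /opt/rdaf/deployment-scripts/helm, and only the final grep invocation is the point.

```shell
# Build a tiny stand-in tree: one file with SYS_PTRACE, one without
mkdir -p /tmp/helm/platform/rda-identity/templates /tmp/helm/oia/alert-ingester/templates
printf 'capabilities:\n  add:\n    - SYS_PTRACE\n' > /tmp/helm/platform/rda-identity/templates/deployment.yaml
printf 'capabilities: {}\n' > /tmp/helm/oia/alert-ingester/templates/deployment.yaml
# -L prints files that do NOT contain the pattern (GNU grep assumed)
grep -rL "SYS_PTRACE" /tmp/helm --include=deployment.yaml
```

Any file this prints still needs the SYS_PTRACE capability added.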

Step 6: Purge Alert History Data from the Database

This task includes purging CLEARED alerts data from the alerthistory table and migrating CLEARED alert payloads from the alerthistory table to the oia-alert-payload PStream.

  • Execute the purge script; for details, please refer to the Manual Purge History Alerts document.

  • Upon successful execution of the purge script, update the Cleared Alerts Data Retention in Database setting to 1 hour. To update this property, navigate to Main Menu --> Administration --> Configurations and click the row-level action for Cleared Alerts Data Retention in Database.

Step 1: Update the Cleared Alerts Data Retention in Database property to 8760 hours (1 year) of retention. To update this property, navigate to Main Menu --> Administration --> Configurations and click the row-level action for Cleared Alerts Data Retention in Database.

Step 2: Purge Resolved/Closed incident data from the IRM/AP/Collab databases and pstreams. This change is needed so that incidents, alerts, and collaboration data are maintained consistently across the system.

Currently, the purging mechanism relies on the retention_days setting defined per pstream. As a result, related data (e.g., alerts or collab messages) may be retained for different durations, leading to inconsistencies in how incident-related information is managed throughout the system.

Note

This change is needed so that the collector will not purge data from the pstreams.

Go to Main Menu --> Configuration --> RDA Administration --> Persistent Streams. Update the pstream definitions below to remove the retention_days and retention_purge_extra_filter attributes if a pstream has these properties defined.

Old Config:

    "default_values": {},
    "retention_days": 30,
    "case_insensitive": true

New Config:

    "default_values": {},
    "case_insensitive": true

a) oia-incidents-stream

b) oia-incident-inserts-stream

c) oia-incidents-delta-stream

d) oia-incidents-external-tickets-stream

e) oia-incidents-collaboration-stream

f) oia-collab-messagesharing-stream

g) oia-alerts-stream

h) oia-alerts-payload
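If you prefer to prepare the edit offline before pasting the definition back into the UI, the attribute removal can be sketched as below. The helper name `strip_retention` and the example filename are hypothetical; the helper simply deletes the two attributes from an exported JSON definition using python3.

```shell
# Hypothetical helper: remove the retention attributes from an exported
# pstream definition (a JSON file) in place.
strip_retention() {
  python3 - "$1" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as fh:
    cfg = json.load(fh)

# Drop the attributes this upgrade step asks you to remove.
for key in ("retention_days", "retention_purge_extra_filter"):
    cfg.pop(key, None)

with open(path, "w") as fh:
    json.dump(cfg, fh, indent=2)
EOF
}

# Example usage (filename is illustrative):
# strip_retention oia-alerts-stream.json
```

The resulting file should match the New Config shape shown above, with all other attributes left untouched.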

Step 3: Go to Main Menu --> Configuration --> RDA Administration --> Bundles, select oia_l1_l2_bundle, and click the Deploy action to deploy the latest Dashboards configuration for Alerts and Incidents.

Step 4: Deploy the topology_path_viz_bundle from Main Menu --> Configuration --> RDA Administration --> Bundles by clicking the row-level Deploy action for topology_path_viz_bundle.

Step 5: After the upgrade, check the Platform, Worker, OIA Services, Event-Gateway, and Bulkstats YAML files on the CLI VM, located at /opt/rdaf/deployment-scripts/

Please verify that SYS_PTRACE is present in the cap_add section for each service, as illustrated in the following example.

rda_collector:
  image: cfxregistry.cloudfabrix.io:443/ubuntu-rda-collector:3.8.0
  restart: unless-stopped
  mem_limit: 12G
  mem_reservation: 2G
  memswap_limit: 12G
  oom_kill_disable: false
  cap_add:
    - SYS_PTRACE
  volumes:
    - /opt/rdaf/config/network_config:/network_config
    - /opt/rdaf/config/log/rda_collector/:/logging_config
    - /opt/rdaf/logs:/logs
  logging:
    driver: json-file
    options:
      max-size: 10m
      max-file: '5'
  environment:
    RDA_NETWORK_CONFIG: /network_config/config.json
    RDA_DATAPLANE_POLICY: /network_config/policy.json
    LOGGER_CONFIG_FILE: /logging_config/logging.yaml
    ES_CONFIG_PATH: /network_config/config.json
    LABELS: tenant_name=rdaf-01
    RDA_ENABLE_TRACES: 'no'
    DISABLE_REMOTE_LOGGING_CONTROL: 'no'
    RDA_SELF_HEALTH_RESTART_AFTER_FAILURES: 3
  privileged: false
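The docker-compose style files can also be checked in bulk rather than one by one. The sketch below is illustrative (the helper name `check_cap_add` is hypothetical, and the wildcard path is an assumption based on the directory named in this step); it prints OK or MISSING for each file depending on whether SYS_PTRACE appears anywhere in it.

```shell
# Hypothetical check: flag any service YAML file that does not mention
# SYS_PTRACE (i.e., is likely missing the cap_add entry).
check_cap_add() {
  for f in "$@"; do
    if grep -q 'SYS_PTRACE' "$f"; then
      echo "OK: $f"
    else
      echo "MISSING: $f"
    fi
  done
}

# Example usage against the directory from this guide:
# check_cap_add /opt/rdaf/deployment-scripts/*.yaml
```

Any file reported as MISSING should be edited to add SYS_PTRACE under cap_add, as shown in the rda_collector example above.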

Step 6: Purge of Alerthistory Data from the Database

This task includes the purging of CLEARED alerts data from the alerthistory table and migrating CLEARED alert payloads from the alerthistory table to the oia-alert-payload PStream.

  • Execute the purge script; for details, please refer to the Manual Purge History Alerts document.

  • Upon successful execution of the purge script, update the Cleared Alerts Data Retention in Database setting to 1 hour. To update this property, navigate to Main Menu --> Administration --> Configurations and click the row-level action for Cleared Alerts Data Retention in Database.