Upgrade to 3.2.1.3 and 7.2.1.1 and 7.2.1.5
1. Upgrade from 7.2.0.x to 7.2.1.1
- RDAF Platform: From 3.2.0.3 to 3.2.1.3
- OIA (AIOps) Application: From 7.2.0.3 to 7.2.1.1/7.2.1.5
- RDAF Deployment rdaf & rdafk8s CLI: From 1.1.7 to 1.1.8
- RDAF Client rdac CLI: From 3.2.0.3 to 3.2.1.3
1.1. Upgrade Prerequisites
Before proceeding with this upgrade, please verify that the below prerequisites are met.
Important
Please make sure a full backup of the RDAF platform system is completed before performing the upgrade.
Kubernetes: Please run the below backup command to take a backup of the application data.
Non-Kubernetes: Please run the below backup command to take a backup of the application data. Note: Please make sure the backup directory is mounted across all infra and cli VMs.
Run the below command on the RDAF Management system and make sure the Kubernetes PODs are NOT in restarting mode (applicable only to the Kubernetes environment).
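The backup command itself is release-specific, but the POD-restart check can be done with plain kubectl against the rda-fabric namespace used throughout this guide:

# Confirm no POD shows a climbing RESTARTS count or a CrashLoopBackOff status
kubectl get pods -n rda-fabric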
- Verify that the RDAF deployment rdaf & rdafk8s CLI version is 1.1.7 on the VM where the CLI was installed for the docker on-prem registry and for managing the Kubernetes or Non-Kubernetes deployment.
- On-premise docker registry service version is 1.0.2
- RDAF Infrastructure services version is 1.0.2 (rda-nats service version is 1.0.2.1 and rda-minio service version is RELEASE.2022-11-11T03-44-20Z)
- RDAF Platform services version is 3.2.0.3
- RDAF OIA Application services version is 7.2.0.3 (rda-event-consumer service version is 7.2.0.5)
Log in to the VM where the rdaf & rdafk8s deployment CLI was installed for the docker on-prem registry and for managing the Kubernetes or Non-Kubernetes deployment.
- Download the RDAF Deployment CLI's newer version 1.1.8 bundle.
- Upgrade the rdaf & rdafk8s CLI to version 1.1.8
- Verify the installed rdaf & rdafk8s CLI version is upgraded to 1.1.8
Download the below python script, which will be used to identify K8s POD names for each RDA Fabric service POD Id. Skip this step if the script was already downloaded.
Download the below upgrade python script.
Please run the below python upgrade script. It creates a Kafka topic called fsm-events, creates the /opt/rdaf/config/network_config/policy.json file, and adds the rda-fsm service to the values.yaml file.
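After the script completes, its effects can be spot-checked directly using the paths referenced in this guide:

# Verify the artifacts created by the upgrade script
ls -l /opt/rdaf/config/network_config/policy.json
grep -n 'rda-fsm' /opt/rdaf/deployment-scripts/values.yaml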
Important
Please make sure the above upgrade script is executed before moving to the next step.
- Download the RDAF Deployment CLI's newer version 1.1.8 bundle and copy it to the RDAF management VM on which the rdaf & rdafk8s deployment CLI was installed.
For RHEL OS Environment
For Ubuntu OS Environment
- Extract the rdaf CLI software bundle contents (see the shell sketch after this list)
- Change the directory to the extracted directory
- Upgrade the rdaf & rdafk8s CLI to version 1.1.8
- Verify the installed rdaf & rdafk8s CLI version
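A sketch of the extract/upgrade/verify sequence in shell form (the bundle and directory names below are placeholders for the actual 1.1.8 bundle, and the --version flag is an assumption; follow the upgrade instructions shipped with the bundle):

tar -xvzf rdaf-deployment-cli-1.1.8.tar.gz   # placeholder bundle file name
cd rdaf-deployment-cli-1.1.8                 # placeholder extracted directory
# ...run the CLI upgrade per the bundle's instructions...
rdaf --version                               # assumed flag; should report 1.1.8
rdafk8s --version                            # assumed flag; should report 1.1.8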
Download the below upgrade script and copy it to the RDAF management VM on which the rdaf & rdafk8s deployment CLI was installed.
Please run the downloaded python upgrade script. It creates a Kafka topic called fsm-events, creates the /opt/rdaf/config/network_config/policy.json file, and adds the rda-fsm service to the values.yaml file.
Important
Please make sure the above upgrade script is executed before moving to the next step.
- Download the RDAF Deployment CLI's newer version 1.1.8 bundle
- Upgrade the rdaf CLI to version 1.1.8
- Verify the installed rdaf CLI version is upgraded to 1.1.8
- To stop application services, run the below command. Wait until all of the services are stopped.
- To stop RDAF worker services, run the below command. Wait until all of the services are stopped.
- To stop RDAF platform services, run the below command. Wait until all of the services are stopped.
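A consolidated sketch of the three stop steps above (the app/worker/platform sub-commands are assumptions based on the rdaf CLI's command groups; on Kubernetes use rdafk8s in place of rdaf):

rdaf app down OIA    # stop OIA application services
rdaf worker down     # stop RDAF worker services
rdaf platform down   # stop RDAF platform services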
Run the below RDAF command to check the infra status.
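The status check below assumes the rdaf CLI's infra command group:

rdaf infra status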
+------------+----------------+------------+--------------+-------+
| Name       | Host           | Status     | Container Id | Tag   |
+------------+----------------+------------+--------------+-------+
| haproxy    | 192.168.131.41 | Up 2 weeks | ee9d25dc2276 | 1.0.2 |
| haproxy    | 192.168.131.42 | Up 2 weeks | e6ad57ac421d | 1.0.2 |
| keepalived | 192.168.131.41 | active     | N/A          | N/A   |
| keepalived | 192.168.131.42 | active     | N/A          | N/A   |
+------------+----------------+------------+--------------+-------+
Run the below RDAF command to check the infra healthcheck status.
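Assuming the same infra command group:

rdaf infra healthcheck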
+------------+-----------------+--------+--------+----------------+--------------+
| Name       | Check           | Status | Reason | Host           | Container Id |
+------------+-----------------+--------+--------+----------------+--------------+
| haproxy    | Port Connection | OK     | N/A    | 192.168.107.63 | ed0e8a4f95d6 |
| haproxy    | Service Status  | OK     | N/A    | 192.168.107.63 | ed0e8a4f95d6 |
| haproxy    | Firewall Port   | OK     | N/A    | 192.168.107.63 | ed0e8a4f95d6 |
| haproxy    | Port Connection | OK     | N/A    | 192.168.107.64 | 91c361ea0f58 |
| haproxy    | Service Status  | OK     | N/A    | 192.168.107.64 | 91c361ea0f58 |
| haproxy    | Firewall Port   | OK     | N/A    | 192.168.107.64 | 91c361ea0f58 |
| keepalived | Service Status  | OK     | N/A    | 192.168.107.63 | N/A          |
| keepalived | Service Status  | OK     | N/A    | 192.168.107.64 | N/A          |
| nats       | Port Connection | OK     | N/A    | 192.168.107.63 | f57ed825681b |
| nats       | Service Status  | OK     | N/A    | 192.168.107.63 | f57ed825681b |
+------------+-----------------+--------+--------+----------------+--------------+
Note
Please take a backup of /opt/rdaf/deployment-scripts/values.yaml
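For example, keep a timestamped copy next to the original:

cp /opt/rdaf/deployment-scripts/values.yaml /opt/rdaf/deployment-scripts/values.yaml.bak.$(date +%Y%m%d)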
Download the below upgrade python script.
Please run the below python upgrade script. It creates a few Kafka topics for applying config changes to existing topics in HA setups, creates the /opt/rdaf/config/network_config/policy.json file, and adds the rda-fsm service to the values.yaml file.
Important
Please make sure the above upgrade script is executed before moving to the next step.
- Download the RDAF Deployment CLI's newer version 1.1.8 bundle and copy it to the RDAF management VM on which the rdaf deployment CLI was installed.
For RHEL OS Environment
For Ubuntu OS Environment
- Extract the rdaf CLI software bundle contents
- Change the directory to the extracted directory
- Upgrade the rdaf CLI to version 1.1.8
- Verify the installed rdaf CLI version
Download the below upgrade script and copy it to the RDAF management VM on which the rdaf deployment CLI was installed.
Please run the below python upgrade script. It creates a few Kafka topics for applying config changes to existing topics in HA setups, creates the /opt/rdaf/config/network_config/policy.json file, and adds the rda-fsm service to the values.yaml file.
Important
Please make sure the above upgrade script is executed before moving to the next step.
1.2. Download the new Docker Images
Download the new docker image tags for RDAF Platform and OIA Application services and wait until all of the images are downloaded.
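A sketch of the download step (the registry fetch sub-command is an assumption; confirm the exact syntax for your release, and use rdaf instead of rdafk8s on Non-Kubernetes deployments):

rdafk8s registry fetch --tag 3.2.1.3   # RDAF Platform images
rdafk8s registry fetch --tag 7.2.1.1   # OIA Application images
rdafk8s registry fetch --tag 7.2.1.5   # OIA Application images (services listed below)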
Run the below command to verify the above-mentioned tags are downloaded for all of the RDAF Platform and OIA Application services.
Please make sure the 3.2.1.3 image tag is downloaded for the below RDAF Platform services.
- rda-client-api-server
- rda-registry
- rda-rda-scheduler
- rda-collector
- rda-stack-mgr
- rda-identity
- rda-fsm
- rda-access-manager
- rda-resource-manager
- rda-user-preferences
- onprem-portal
- onprem-portal-nginx
- rda-worker-all
- onprem-portal-dbinit
- cfxdx-nb-nginx-all
- rda-event-gateway
- rdac
- rdac-full
Please make sure the 7.2.1.1 image tag is downloaded for the below RDAF OIA Application services.
- rda-app-controller
- rda-alert-processor
- rda-file-browser
- rda-smtp-server
- rda-ingestion-tracker
- rda-reports-registry
- rda-ml-config
- rda-event-consumer
- rda-webhook-server
- rda-irm-service
- rda-alert-ingester
- rda-collaboration
- rda-notification-service
- rda-configuration-service
Please make sure the 7.2.1.5 image tag is downloaded for the below RDAF OIA Application services.
- rda-smtp-server
- rda-event-consumer
- rda-webhook-server
- rda-collaboration
- rda-configuration-service
- rda-alert-ingester
Downloaded Docker images are stored under the below path.
/opt/rdaf/data/docker/registry/v2
Run the below command to check the disk usage of the filesystem on which the docker images are stored.
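A plain df against the image store path shown above is sufficient:

df -h /opt/rdaf/data/docker/registry/v2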
Optionally, older image tags that are no longer used can be deleted to free up disk space using the below command.
1.3. Upgrade Steps
1.3.1 Upgrade RDAF Platform Services
Step-1: Run the below command to initiate upgrading RDAF Platform services.
As the upgrade procedure is non-disruptive, it puts the currently running PODs into Terminating state and the newer-version PODs into Pending state.
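The upgrade command follows the same shape as the app upgrade commands used later in this guide (treat the platform sub-command as an assumption and confirm it against your release notes):

rdafk8s platform upgrade --tag 3.2.1.3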
Step-2: Run the below command to check the status of the existing and newer PODs and make sure at least one instance of each Platform service is in Terminating state.
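The same label selector used in Step-6 below works for this check:

kubectl get pods -n rda-fabric -l app_category=rdaf-platform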
Step-3: Run the below command to put all Terminating RDAF platform service PODs into maintenance mode. It lists all of the POD Ids of the platform services along with the rdac maintenance command required to put them into maintenance mode.
Step-4: Copy & Paste the rdac maintenance command as below.
Step-5: Run the below command to verify the maintenance mode status of the RDAF platform services.
Step-6: Run the below command to delete the Terminating RDAF platform service PODs
for i in `kubectl get pods -n rda-fabric -l app_category=rdaf-platform | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done
Note
Wait for 120 seconds and repeat the above steps from Step-2 to Step-6 for the rest of the RDAF Platform service PODs.
Please wait till all of the new platform service PODs are in Running state, then run the below command to verify their status and make sure all of them are running version 3.2.1.3.
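Mirroring the infra status check (sub-command assumed):

rdafk8s platform status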
+----------------------+----------------+---------------+--------------+---------+
| Name                 | Host           | Status        | Container Id | Tag     |
+----------------------+----------------+---------------+--------------+---------+
| rda-api-server       | 192.168.131.45 | Up 2 Days ago | dde8ab1f9331 | 3.2.1.3 |
| rda-api-server       | 192.168.131.44 | Up 2 Days ago | e6ece7235e72 | 3.2.1.3 |
| rda-registry         | 192.168.131.45 | Up 2 Days ago | a577766fb8b2 | 3.2.1.3 |
| rda-registry         | 192.168.131.44 | Up 2 Days ago | 1aecc089b0c3 | 3.2.1.3 |
| rda-identity         | 192.168.131.45 | Up 2 Days ago | fea1c0ef7263 | 3.2.1.3 |
| rda-identity         | 192.168.131.44 | Up 2 Days ago | 2a48f402f678 | 3.2.1.3 |
| rda-fsm              | 192.168.131.45 | Up 2 Days ago | 5006c8a6e5f3 | 3.2.1.3 |
| rda-fsm              | 192.168.131.44 | Up 2 Days ago | 199cac791a90 | 3.2.1.3 |
| rda-access-manager   | 192.168.131.44 | Up 2 Days ago | e20495c61be2 | 3.2.1.3 |
| ....                 | ....           | ....          | ....         | ....    |
+----------------------+----------------+---------------+--------------+---------+
Run the below command to check that the rda-fsm service is up and running, and also verify that one of the rda-scheduler services is elected as leader under the Site column.
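The rdac CLI's pod listing shows the Site column referenced here (command name assumed; adjust to your rdac CLI reference):

rdac pods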
Run the below command to check that all services have an ok status and do not throw any failure messages.
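Assuming the rdac CLI's healthcheck command, which produces the table below:

rdac healthcheck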
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | cf4c4f37c47a | 0633b451 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | cf4c4f37c47a | 0633b451 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | cf4c4f37c47a | 0633b451 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | cf4c4f37c47a | 0633b451 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | cf4c4f37c47a | 0633b451 |             | kafka-connectivity                                  | ok       | Cluster=F8PAtrvtRk6RbMZgp7deHQ, Broker=3, Brokers=[2, 3, 1] |
| rda_app   | alert-ingester                         | 7b9f1370e018 | f348532b |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 7b9f1370e018 | f348532b |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 7b9f1370e018 | f348532b |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 7b9f1370e018 | f348532b |             | service-initialization-status                       | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
1.3.2 Upgrade rdac CLI
Run the below command to upgrade the rdac CLI
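The sub-command below follows the rdac CLI install pattern and should be treated as an assumption (use rdaf in place of rdafk8s on Non-Kubernetes deployments):

rdafk8s rdac_cli install --tag 3.2.1.3   # assumed sub-command; re-installs rdac at the new tag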
1.3.3 Upgrade RDA Worker Services
Step-1: Please run the below command to initiate upgrading the RDA Worker service PODs.
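Mirroring the platform upgrade command (sub-command assumed):

rdafk8s worker upgrade --tag 3.2.1.3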
Step-2: Run the below command to check the status of the existing and newer PODs and make sure at least one instance of each RDA Worker service POD is in Terminating state.
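The same label selector used in Step-6 below works for this check:

kubectl get pods -n rda-fabric -l app_component=rda-worker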
Step-3: Run the below command to put all Terminating RDAF worker service PODs into maintenance mode. It lists all of the POD Ids of the RDA worker services along with the rdac maintenance command required to put them into maintenance mode.
Step-4: Copy & Paste the rdac maintenance command as below.
Step-5: Run the below command to verify the maintenance mode status of the RDAF worker services.
Step-6: Run the below command to delete the Terminating RDAF worker service PODs
for i in `kubectl get pods -n rda-fabric -l app_component=rda-worker | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done
Note
Wait for 120 seconds between each RDAF worker service upgrade, repeating the above steps from Step-2 to Step-6 for the rest of the RDAF worker service PODs.
Step-7: Please wait for 120 seconds to let the newer version of the RDA Worker service PODs join the RDA Fabric appropriately. Run the below commands to verify the status of the newer RDA Worker service PODs.
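Mirroring the platform status check (sub-command assumed):

rdafk8s worker status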
+------------+----------------+---------------+--------------+---------+
| Name       | Host           | Status        | Container Id | Tag     |
+------------+----------------+---------------+--------------+---------+
| rda-worker | 192.168.131.50 | Up 2 Days ago | 497059c45d6e | 3.2.1.3 |
| rda-worker | 192.168.131.49 | Up 2 Days ago | 434b2ca40ed8 | 3.2.1.3 |
| ....       | ....           | ....          | ....         | ....    |
+------------+----------------+---------------+--------------+---------+
Step-8: Run the below command to check that all RDA Worker services have an ok status and do not throw any failure messages.
1.3.4 Upgrade OIA Application Services
Step-1: Run the below commands to initiate upgrading the RDAF OIA Application services. The first command upgrades the specified services from 7.2.0.3 to 7.2.1.1 and the second command upgrades the rest of the services from 7.2.0.3 to 7.2.1.5.
rdafk8s app upgrade OIA --tag 7.2.1.1 --service rda-app-controller --service rda-alert-processor --service rda-file-browser --service rda-ingestion-tracker --service rda-reports-registry --service rda-ml-config --service rda-irm-service --service rda-notification-service
rdafk8s app upgrade OIA --tag 7.2.1.5 --service rda-smtp-server --service rda-event-consumer --service rda-webhook-server --service rda-collaboration --service rda-configuration-service
Step-2: Run the below command to check the status of the existing and newer PODs and make sure at least one instance of each OIA application service is in Terminating state.
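The same label selector used in Step-6 below works for this check:

kubectl get pods -n rda-fabric -l app_name=oia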
Step-3: Run the below command to put all Terminating OIA application service PODs into maintenance mode. It lists all of the POD Ids of the OIA application services along with the rdac maintenance command required to put them into maintenance mode.
Step-4: Copy & Paste the rdac maintenance command as below.
Step-5: Run the below command to verify the maintenance mode status of the OIA application services.
Step-6: Run the below command to delete the Terminating OIA application service PODs
for i in `kubectl get pods -n rda-fabric -l app_name=oia | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done
Note
Wait for 120 seconds and repeat the above steps from Step-2 to Step-6 for the rest of the OIA application service PODs.
Please wait till all of the new OIA application service PODs are in Running state, then run the below command to verify their status and make sure they are running version 7.2.1.1 or 7.2.1.5.
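Mirroring the platform status check (sub-command assumed):

rdafk8s app status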
+---------------------------+----------------+---------------+--------------+---------+
| Name                      | Host           | Status        | Container Id | Tag     |
+---------------------------+----------------+---------------+--------------+---------+
| rda-alert-ingester        | 192.168.131.49 | Up 5 Days ago | b323998abd15 | 7.2.1.1 |
| rda-alert-ingester        | 192.168.131.50 | Up 5 Days ago | 710f262e27aa | 7.2.1.1 |
| rda-alert-processor       | 192.168.131.47 | Up 5 Days ago | ec1c53d94439 | 7.2.1.1 |
| rda-alert-processor       | 192.168.131.46 | Up 5 Days ago | deee4db62708 | 7.2.1.1 |
| rda-app-controller        | 192.168.131.49 | Up 5 Days ago | ef96deb9adda | 7.2.1.1 |
| rda-app-controller        | 192.168.131.50 | Up 5 Days ago | 6880b5632adb | 7.2.1.1 |
| rda-collaboration         | 192.168.131.49 | Up 2 Days ago | cc1b1c882250 | 7.2.1.5 |
| rda-collaboration         | 192.168.131.50 | Up 2 Days ago | 13be7e8bfa3f | 7.2.1.5 |
+---------------------------+----------------+---------------+--------------+---------+
Step-7: Run the below command to verify all OIA application services are up and running. Please wait till the cfxdimensions-app-irm_service shows leader status under the Site column.
Run the below command to check that all services have an ok status and do not throw any failure messages.
Warning
For a Non-Kubernetes deployment, upgrading the RDAF Platform and AIOps application services is a disruptive operation. Please schedule a maintenance window before upgrading the RDAF Platform and AIOps services to the newer version.
Run the below command to initiate upgrading RDAF Platform services.
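As with the Kubernetes path, the command shape below is an assumption modeled on the rdaf app upgrade commands later in this section:

rdaf platform upgrade --tag 3.2.1.3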
Please wait till all of the new platform services are in Running state, then run the below command to verify their status and make sure all of them are running version 3.2.1.3.
+--------------------------+----------------+------------+--------------+---------+
| Name                     | Host           | Status     | Container Id | Tag     |
+--------------------------+----------------+------------+--------------+---------+
| cfx-rda-access-manager   | 192.168.107.60 | Up 6 hours | 80dac9d727a3 | 3.2.1.3 |
| cfx-rda-resource-manager | 192.168.107.60 | Up 6 hours | 68534a5c1d4c | 3.2.1.3 |
| cfx-rda-user-preferences | 192.168.107.60 | Up 6 hours | 78405b639915 | 3.2.1.3 |
| portal-backend           | 192.168.107.60 | Up 6 hours | 636e6968f661 | 3.2.1.3 |
| portal-frontend          | 192.168.107.60 | Up 6 hours | 2fd426bd6aa2 | 3.2.1.3 |
| rda_api_server           | 192.168.107.60 | Up 6 hours | e0994b366f98 | 3.2.1.3 |
| rda_asset_dependency     | 192.168.107.60 | Up 6 hours | 07610621408c | 3.2.1.3 |
| rda_collector            | 192.168.107.60 | Up 6 hours | 467d6b3d13f8 | 3.2.1.3 |
| rda_fsm                  | 192.168.107.60 | Up 6 hours | e32de86fe341 | 3.2.1.3 |
| rda_identity             | 192.168.107.60 | Up 6 hours | 45136d89b2cf | 3.2.1.3 |
| rda_registry             | 192.168.107.60 | Up 6 hours | 334d7d4cfa41 | 3.2.1.3 |
| rda_scheduler            | 192.168.107.60 | Up 6 hours | acf5a9ab556a | 3.2.1.3 |
+--------------------------+----------------+------------+--------------+---------+
Run the below command to check that the rda-fsm service is up and running, and also verify that one of the rda-scheduler services is elected as leader under the Site column.
Run the below command to check that all services have an ok status and do not throw any failure messages.
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | 9a0775246a0f | 8f538695 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 9a0775246a0f | 8f538695 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 9a0775246a0f | 8f538695 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 9a0775246a0f | 8f538695 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 9a0775246a0f | 8f538695 |             | kafka-connectivity                                  | ok       | Cluster=F8PAtrvtRk6RbMZgp7deHQ, Broker=3, Brokers=[2, 3, 1] |
| rda_app   | alert-ingester                         | 79d6756db639 | 95921403 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 79d6756db639 | 95921403 |             | minio-connectivity                                  | ok       |                                                             |
| rda_app   | alert-ingester                         | 79d6756db639 | 95921403 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 79d6756db639 | 95921403 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 79d6756db639 | 95921403 |             | kafka-connectivity                                  | ok       | Cluster=F8PAtrvtRk6RbMZgp7deHQ, Broker=1, Brokers=[2, 3, 1] |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
- Upgrade rdac CLI
Run the below command to upgrade the rdac CLI
- Upgrade RDA Worker Services
Please run the below command to initiate upgrading the RDA Worker service PODs.
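Mirroring the platform upgrade above (sub-command assumed):

rdaf worker upgrade --tag 3.2.1.3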
Please wait for 120 seconds to let the newer version of the RDA Worker service PODs join the RDA Fabric appropriately. Run the below commands to verify the status of the newer RDA Worker service PODs.
+------------+----------------+-----------+--------------+---------+
| Name       | Host           | Status    | Container Id | Tag     |
+------------+----------------+-----------+--------------+---------+
| rda_worker | 192.168.107.61 | Up 2 days | d951118ee757 | 3.2.1.3 |
| rda_worker | 192.168.107.62 | Up 2 days | f7033a72f013 | 3.2.1.3 |
+------------+----------------+-----------+--------------+---------+
Run the below commands to initiate upgrading the RDAF OIA Application services. The first command upgrades the specified services from 7.2.0.3 to 7.2.1.1 and the second command upgrades the rest of the services from 7.2.0.3 to 7.2.1.5.
rdaf app upgrade OIA --tag 7.2.1.1 --service cfx-rda-app-controller --service cfx-rda-alert-processor --service cfx-rda-file-browser --service cfx-rda-ingestion-tracker --service cfx-rda-reports-registry --service cfx-rda-ml-config --service cfx-rda-irm-service --service cfx-rda-notification-service
rdaf app upgrade OIA --tag 7.2.1.5 --service cfx-rda-smtp-server --service cfx-rda-event-consumer --service cfx-rda-webhook-server --service cfx-rda-collaboration --service cfx-rda-configuration-service
Please wait till all of the new OIA application service PODs are in Running state, then run the below command to verify their status and make sure they are running version 7.2.1.1 or 7.2.1.5.
+-------------------------------+----------------+-----------+--------------+---------+
| Name                          | Host           | Status    | Container Id | Tag     |
+-------------------------------+----------------+-----------+--------------+---------+
| cfx-rda-alert-ingester        | 192.168.107.66 | Up 2 days | 79d6756db639 | 7.2.1.5 |
| cfx-rda-alert-ingester        | 192.168.107.67 | Up 2 days | 9a0775246a0f | 7.2.1.5 |
| cfx-rda-alert-processor       | 192.168.107.66 | Up 2 days | 057552584cfe | 7.2.1.1 |
| cfx-rda-alert-processor       | 192.168.107.67 | Up 2 days | 787f0cb42734 | 7.2.1.1 |
| cfx-rda-app-controller        | 192.168.107.66 | Up 2 days | 07f406e984ad | 7.2.1.1 |
| cfx-rda-app-controller        | 192.168.107.67 | Up 2 days | 0b27802473c1 | 7.2.1.1 |
| cfx-rda-collaboration         | 192.168.107.66 | Up 2 days | 7322550c3cee | 7.2.1.5 |
+-------------------------------+----------------+-----------+--------------+---------+
1.4. Post Upgrade Steps
- (Optional) Deploy the latest L1 & L2 bundles. Go to Configuration --> RDA Administration --> Bundles --> select oia_l1_l2_bundle and click on the deploy action.
- Enable ML experiments manually if any experiments are configured (Organization --> Configuration --> ML Experiments).
- (Optional) Add the following to All Incident Mappings.
## preferably after the projectId field's json block
{
"to": "notificationId",
"from": "notificationId"
},
- (Optional) A new option called skip_retry_on_keywords has been added within the Incident mapper, which allows the user to control when to skip a retry attempt while making an API call during create or update ticket operations on an external ITSM system (e.g., ServiceNow).
In the below example, if the API error response contains the message serviceIdentifier is not available or Ticket is already in inactive state no update is allowed, the retry of the API call is skipped, as these are expected errors and retrying will not make the API call successful.
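A minimal sketch of such a mapping entry (the list-of-strings shape and its placement within the mapping are assumptions; the keyword strings are the ones quoted above):

{
  "skip_retry_on_keywords": [
    "serviceIdentifier is not available",
    "Ticket is already in inactive state no update is allowed"
  ]
}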