Guide to Using the RDAF Deployment CLI for an AWS EKS Cluster Environment
1. RDAF Deployment CLI for AWS EKS Cluster
The RDA Fabric deployment CLI is a comprehensive command-line management tool used to set up, install/deploy, and manage the CloudFabrix on-premise Docker registry, the RDA Fabric platform, and its infrastructure and application services.
RDA Fabric platform, infrastructure, and application services can be deployed on an AWS EKS cluster, on a Kubernetes cluster, or as standalone container services using the docker-compose utility.
Please refer to the guide for RDAF Platform deployment using Standalone Container Services.
Please refer to the guide for RDAF Platform deployment on a Kubernetes Cluster.
The RDAF CLI uses docker-compose as the underlying container management utility for deploying and managing an RDA Fabric environment on non-Kubernetes infrastructure.
For deploying the RDAF platform on an AWS EKS cluster environment, the RDAF CLI provides the rdafk8s command utility, through which installation and configuration management operations can be performed.
Info
The RDAF deployment CLI rdafk8s uses helm and kubectl to automate RDA Fabric platform deployment and lifecycle operations on an EKS cluster environment. For more information about these native Kubernetes cluster management utilities, please refer to About Helm and About Kubectl.
1.1 Pre-requisites
Provision an AWS EKS cluster with the following four dedicated node groups, each configured with the minimum resource requirements and node labels needed to support targeted RDAF service deployments:
AWS EKS cluster node group configuration (HA deployment):

| Node Group | Node Count | CPU / Memory per Node | Purpose | Required Node Labels |
|---|---|---|---|---|
| Infrastructure Services | 4 | 8 vCPUs / 32 GB | Core infrastructure components (MinIO, NATS, etc.) | rdaf_infra_services=allow rdaf_infra_node=node-0 rdaf_infra_minio=allow rdaf_infra_minio_node=minio-0 rdaf_infra_nats=allow rdaf_infra_nats_node=nats-0 |
| Platform | 2 | 8 vCPUs / 32 GB | RDAF platform service workloads | rdaf_platform_services=allow |
| Worker Services | 2 | 8 vCPUs / 32 GB | RDAF worker service workloads | rdaf_worker_services=allow |
| Application Service | 2 | 8 vCPUs / 32 GB | Application-level services including Event Gateway | rdaf_application_services=allow rdaf_event_gateway=allow |
AWS EKS cluster node group configuration (Non-HA deployment):

| Node Group | Node Count | CPU / Memory per Node | Purpose | Required Node Labels |
|---|---|---|---|---|
| Infrastructure Services | 1 | 8 vCPUs / 32 GB | Core infrastructure components (MinIO, NATS, etc.) | rdaf_infra_services=allow rdaf_infra_node=node-0 rdaf_infra_minio=allow rdaf_infra_minio_node=minio-0 rdaf_infra_nats=allow rdaf_infra_nats_node=nats-0 |
| Platform | 1 | 8 vCPUs / 32 GB | RDAF platform service workloads | rdaf_platform_services=allow |
| Worker Services | 1 | 8 vCPUs / 32 GB | RDAF worker service workloads | rdaf_worker_services=allow |
| Application Service | 1 | 8 vCPUs / 32 GB | Application-level services including Event Gateway | rdaf_application_services=allow rdaf_event_gateway=allow |
A VM is required for managing the EKS cluster and setting up the RDAF CLI. You can provision this VM using any of the following guides.
For deployment and configuration options, please refer to the OVF Based Deployment Guide. For Non-OVF environments such as RHEL, Ubuntu, or Rocky Linux, refer to the Manual Deployment Guide
You will also need the SSL certificate ARN for the load balancer.
1.2 Steps to Configure the EKS Cluster
To configure an AWS EKS cluster, the following tools are required. Use the instructions below to install them.
The RDAF Deployment CLI for Kubernetes environments relies on kubectl to perform various cluster lifecycle management operations. Ensure the following tools are installed on the machine where the RDAF Deployment CLI will be used.
- Run these commands to install the AWS Command Line Interface
sudo apt update
sudo apt install -y unzip curl
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
- Use the commands given below for Kubectl installation.
sudo apt update
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client
- Use the following commands to install eksctl
sudo curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | sudo tar xz -C /usr/local/bin
eksctl version
- Configure the AWS CLI
Run the AWS CLI configuration command and enter your AWS Access Key, Secret Key, region, and output format.
AWS Access Key ID:xxxxxxxxxxxxxxxx
AWS Secret Access Key:xxxxxxxxxxxxxxxx
Default region name: ap-south-1
Default output format [None]:
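The prompts above come from the AWS CLI's interactive configuration command:

```shell
# Start the interactive AWS CLI configuration; it prompts for the
# access key, secret key, default region, and output format shown above.
aws configure
```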
- Update kubeconfig using the AWS CLI
To update your kubeconfig and verify the connection to your Amazon EKS cluster, follow these steps
Replace <region> with the AWS region where your EKS cluster is located (e.g., us-west-2).
Replace <cluster-name> with the specific name of your EKS cluster.
Verify that the connection to the cluster is successful.
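These two steps use the standard AWS CLI and kubectl commands; `<region>` and `<cluster-name>` are placeholders for your environment:

```shell
# Write/update the kubeconfig entry for the EKS cluster
aws eks update-kubeconfig --region <region> --name <cluster-name>

# Verify connectivity by listing the cluster nodes
kubectl get nodes
```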
1.3 CLI Installation
Please download the RDAF deployment bundles from the links provided below.
Tip
In a restricted environment where there is no direct internet access, please download RDAF Deployment CLI offline bundle.
RDAF Deployment CLI offline bundle for Ubuntu: offline-ubuntu-1.5.0.tar.gz
RDAF Deployment CLI bundle: rdafcli-1.5.0.tar.gz
Note
For the latest RDAF Deployment CLI versioned package, please contact [email protected]
Log in as the rdauser user into the on-premise docker registry or RDA Fabric Platform VM using any SSH client tool (ex: PuTTY).
Run the following command to install the RDA Fabric deployment CLI tool.
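The exact install command depends on the bundle; a typical invocation, assuming the downloaded rdafcli tarball is installed with pip, would be:

```shell
# Assumed example: install the RDAF deployment CLI bundle with pip
# (the exact command for your bundle may differ)
pip install --user rdafcli-1.5.0.tar.gz
```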
Note
Once the above commands run successfully, log out and log back in to a new session.
Run the below command to verify installed RDAF deployment CLI version
Run the below command to view the RDAF deployment CLI help
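Assuming standard rdafk8s command-line conventions, the version and help checks take this form:

```shell
# Show the installed RDAF deployment CLI version
rdafk8s --version

# List the documented commands
rdafk8s help
```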
Documented commands (type help <topic>):
========================================
app help platform rdac_cli reset setregistry status worker
backup infra prune_images registry restore setup validate
Documented commands (type help <topic>):
========================================
app infra rdac_cli reset setup worker
help platform registry setregistry status
1.4 RDAF Platform Setup
1.4.1 rdafk8s setup
- Run the rdafk8s setup --aws command to create the RDAF platform's deployment configuration. This is a pre-requisite before RDAF infrastructure, platform, and application services can be installed on the AWS EKS Cluster.
You will be guided through a series of prompts to enter all of the required configuration details.
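The setup command named above is invoked as follows:

```shell
# Create the RDAF platform deployment configuration for AWS EKS
rdafk8s setup --aws
```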
- Accept the EULA
- Enter additional IP address(es) or DNS names to be used as SANs (Subject Alternative Names) while generating self-signed certificates. This is an optional configuration, but it is important to include any public-facing IP address(es) that differ from the worker node IP addresses specified as part of the rdafk8s setup command.
Tip
SAN (Subject Alternative Name) certificates, also known as multi-domain certificates, allow a single unified SSL certificate to include more than one Common Name (CN). A Common Name can be an IP address, a DNS name, or a wildcard DNS name (ex: *.acme.com).
Provide any Subject alt name(s) to be used while generating SAN certs
Subject alt name(s) for certs[]:
- Provide the Amazon Resource Name (ARN) of the SSL/TLS certificate to be used with the load balancer. This certificate is required to enable secure HTTPS communication with the RDAF services. Ensure that the certificate covers the domain names or IP addresses that clients will use to access the load balancer.
Provide cert ARN to configure load balancer with
Cert ARN for LoadBalancer[]: arn:aws:acm:ap-south-1:xxxxxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxx
- Provide the Route 53 record name. It will be used to create a DNS entry in Amazon Route 53 that points to the external load balancer, enabling users to access the RDAF UI using a friendly, consistent domain name instead of the load balancer endpoint. Ensure that the hosted zone exists in Route 53 and that the record name matches the domain configured in the SSL/TLS certificate.
- Enter EKS Cluster node private IPs on which RDAF Platform services need to be installed. For HA configuration, please enter comma separated values. Minimum of 2 hosts are required for the HA configuration.
What are the host(s) on which you want the RDAF platform services to be installed?
Platform service host(s)[name-test-vm]: 192.168.0.251,192.168.0.31
- Answer whether the RDAF application services are going to be deployed in HA mode or standalone.
- Enter EKS Cluster node private IPs on which RDAF Application services (OIA) need to be installed. For HA configuration, please enter comma separated values. Minimum of 2 hosts are required for the HA configuration.
What are the host(s) on which you want the application services to be installed?
Application service host(s)[name-test-vm]: 192.168.2.7,192.168.3.247
- Enter the name of the organization, as in the example below.
What is the organization you want to use for the admin user created?
Admin organization[CloudFabrix]: cfx
- Enter the Docker registry credentials used to authenticate with the source registry.
What is the username for the docker registry?
Docker registry source username[]: xxxxx
What is the password for the docker registry?
Docker registry source password[]:
Re-enter Docker registry source password[]:
- Press Enter to accept the defaults.
- Enter EKS Cluster node private IPs on which RDAF Worker services need to be installed. For HA configuration, please enter comma separated values. Minimum of 2 nodes are required for the HA configuration.
What are the host(s) on which you want the Worker to be installed?
Worker host(s)[name-test-vm]: 192.168.0.251,192.168.0.31
- Enter EKS Cluster node private IPs on which RDAF NATs infrastructure service need to be installed. For HA configuration, please enter comma separated values. Minimum of 2 nodes are required for the Nats HA configuration.
What is the "host/path-on-host" on which you want the Nats to be deployed?
Nats host/path[192.168.0.252]: 192.168.1.41,192.168.1.57
- Enter EKS Cluster node private IPs on which RDAF Event Gateway service need to be installed. For HA configuration, please enter comma separated values. Minimum of 2 nodes are required for the RDAF Event Gateway HA configuration.
What are the host(s) on which you want the Event Gateway to be installed?
Event Gateway host(s)[192.168.0.252]: 192.168.0.251,192.168.0.31
- Enter EKS Cluster node private IPs on which RDAF Minio infrastructure service need to be installed. For HA configuration, please enter comma separated values. Minimum of 4 nodes are required for the Minio HA configuration.
What is the "host/path-on-host" where you want Minio to be provisioned?
Minio server host/path[192.168.0.252]: 192.168.1.41,192.168.1.57,192.168.0.31,192.168.0.34
- Change the default Minio user credentials if needed or press Enter to accept the defaults.
What is the user name you want to give for Minio root user that will be created and used by the RDAF platform?
Minio user[rdafadmin]:
What is the password you want to use for the newly created Minio root user?
Minio password[EvUyqafW]:
- Enter EKS Cluster node private IPs on which the RDAF MariaDB infrastructure service needs to be installed. For HA configuration, please enter comma separated values. Minimum of 3 nodes are required for the MariaDB database HA configuration.
What is the "host/path-on-host" on which you want the MariaDB server to be provisioned?
MariaDB server host/path[192.168.0.252]: 192.168.1.41,192.168.1.57,192.168.0.252,192.168.0.31
- Change the default MariaDB user credentials if needed or press Enter to accept the defaults.
What is the user name you want to give for MariaDB admin user that will be created and used by the RDAF platform?
MariaDB user[rdafadmin]:
What is the password you want to use for the newly created MariaDB root user?
MariaDB password[hxGbQLXT]:
- Enter EKS Cluster node private IPs on which RDAF Opensearch infrastructure service need to be installed. For HA configuration, please enter comma separated values. Minimum of 3 nodes are required for the Opensearch HA configuration.
What is the "host/path-on-host" on which you want the opensearch server to be provisioned?
opensearch server host/path[192.168.0.252]: 192.168.1.41,192.168.1.57,192.168.0.252
- Change the default Opensearch user credentials if needed or press Enter to accept the defaults.
What is the user name you want to give for Opensearch admin user that will be created and used by the RDAF platform?
Opensearch user[rdafadmin]:
What is the password you want to use for the newly created Opensearch admin user?
Opensearch password[Uy2SkFIOA2]:
- Enter EKS Cluster node private IPs on which RDAF Kafka infrastructure service need to be installed. For HA configuration, please enter comma separated values. Minimum of 3 nodes are required for the Kafka HA configuration.
- Enter EKS Cluster node private IPs on which RDAF GraphDB infrastructure service need to be installed. For HA configuration, please enter comma separated values. Minimum of 3 nodes are required for the GraphDB HA configuration.
What is the "host/path-on-host" on which you want the GraphDB to be deployed?
GraphDB host/path[192.168.0.252]: 192.168.1.41,192.168.1.57,192.168.0.252
- Enter the RDAF infrastructure service HAProxy (load balancer) host IP address(es) or DNS name(s). For HA configuration, please enter comma separated values. Minimum of 2 hosts are required for the HAProxy HA configuration.
- Accept the EULA
- Enter additional IP address(es) or DNS names to be used as SANs (Subject Alternative Names) while generating self-signed certificates. This is an optional configuration, but it is important to include any public-facing IP address(es) that differ from the worker node IP addresses specified as part of the rdafk8s setup command.
Tip
SAN (Subject Alternative Name) certificates, also known as multi-domain certificates, allow a single unified SSL certificate to include more than one Common Name (CN). A Common Name can be an IP address, a DNS name, or a wildcard DNS name (ex: *.acme.com).
Provide any Subject alt name(s) to be used while generating SAN certs
Subject alt name(s) for certs[]:
- Provide the Amazon Resource Name (ARN) of the SSL/TLS certificate to be used with the load balancer. This certificate is required to enable secure HTTPS communication with the RDAF services. Ensure that the certificate covers the domain names or IP addresses that clients will use to access the load balancer.
Provide cert ARN to configure load balancer with
Cert ARN for LoadBalancer[]: arn:aws:acm:ap-south-1:xxxxxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxx
- Provide the Route 53 record name. It will be used to create a DNS entry in Amazon Route 53 that points to the external load balancer, enabling users to access the RDAF UI using a friendly, consistent domain name instead of the load balancer endpoint. Ensure that the hosted zone exists in Route 53 and that the record name matches the domain configured in the SSL/TLS certificate.
- Enter EKS Cluster node private IPs on which RDAF Platform services need to be installed. For Non-HA configuration, only one RDAF platform node private ip address is required.
What are the host(s) on which you want the RDAF platform services to be installed?
Platform service host(s)[name-test-vm]: 192.168.0.251
- Answer whether the RDAF application services are going to be deployed in HA mode or standalone.
- Enter EKS Cluster node private IPs on which RDAF Application services (OIA) need to be installed. For Non-HA configuration, only one RDAF App node private ip address is required.
What are the host(s) on which you want the application services to be installed?
Application service host(s)[name-test-vm]: 192.168.3.247
- Enter the name of the organization, as in the example below.
What is the organization you want to use for the admin user created?
Admin organization[CloudFabrix]: cfx
- Enter the Docker registry credentials used to authenticate with the source registry.
What is the username for the docker registry?
Docker registry source username[]: xxxxx
What is the password for the docker registry?
Docker registry source password[]:
Re-enter Docker registry source password[]:
- Press Enter to accept the defaults.
- Enter EKS Cluster node private IPs on which RDAF Worker services need to be installed. For Non-HA configuration, only one RDAF Worker node private ip address is required.
What are the host(s) on which you want the Worker to be installed?
Worker host(s)[name-test-vm]: 192.168.0.251
- Enter EKS Cluster node private IPs on which RDAF NATs infrastructure service need to be installed. For Non-HA configuration, only one RDAF infrastructure node private ip address is required.
What is the "host/path-on-host" on which you want the Nats to be deployed?
Nats host/path[192.168.0.252]: 192.168.1.41
- Enter EKS Cluster node private IPs on which RDAF Event Gateway service need to be installed. For Non-HA configuration, only one RDAF Event Gateway node private ip address is required.
What are the host(s) on which you want the Event Gateway to be installed?
Event Gateway host(s)[192.168.0.252]: 192.168.0.251
- Enter EKS Cluster node private IPs on which RDAF Minio infrastructure service need to be installed. For Non-HA configuration, only one RDAF Infra service node private ip address is required.
What is the "host/path-on-host" where you want Minio to be provisioned?
Minio server host/path[192.168.0.252]: 192.168.1.41
- Change the default Minio user credentials if needed or press Enter to accept the defaults.
What is the user name you want to give for Minio root user that will be created and used by the RDAF platform?
Minio user[rdafadmin]:
What is the password you want to use for the newly created Minio root user?
Minio password[EvUyqafW]:
- Enter EKS Cluster node private IPs on which RDAF MariaDB infrastructure service need to be installed. For Non-HA configuration, only one RDAF Infra service node private ip address is required.
What is the "host/path-on-host" on which you want the MariaDB server to be provisioned?
MariaDB server host/path[192.168.0.252]: 192.168.1.41
- Change the default MariaDB user credentials if needed or press Enter to accept the defaults.
What is the user name you want to give for MariaDB admin user that will be created and used by the RDAF platform?
MariaDB user[rdafadmin]:
What is the password you want to use for the newly created MariaDB root user?
MariaDB password[hxGbQLXT]:
- Enter EKS Cluster node private IPs on which RDAF Opensearch infrastructure service need to be installed. For Non-HA configuration, only one RDAF Infra service node private ip address is required.
What is the "host/path-on-host" on which you want the opensearch server to be provisioned?
opensearch server host/path[192.168.0.252]: 192.168.1.41
- Change the default Opensearch user credentials if needed or press Enter to accept the defaults.
What is the user name you want to give for Opensearch admin user that will be created and used by the RDAF platform?
Opensearch user[rdafadmin]:
What is the password you want to use for the newly created Opensearch admin user?
Opensearch password[Uy2SkFIOA2]:
- Enter EKS Cluster node private IPs on which RDAF Kafka infrastructure service need to be installed. For Non-HA configuration, only one RDAF Infra service node private ip address is required.
What is the "host/path-on-host" on which you want the Kafka server to be provisioned?
Kafka server host/path[192.168.0.252]: 192.168.1.41
- Enter EKS Cluster node private IPs on which RDAF GraphDB infrastructure service need to be installed. For Non-HA configuration, only one RDAF Infra service node private ip address is required.
What is the "host/path-on-host" on which you want the GraphDB to be deployed?
GraphDB host/path[192.168.0.252]: 192.168.1.41
- Enter the RDAF infrastructure service HAProxy (load balancer) host IP address or DNS name. For Non-HA configuration, only one RDAF Infra service node private IP address is required.
After entering the required inputs above, rdafk8s setup generates self-signed SSL certificates, creates the required directory structure, configures SSH key-based authentication on all of the RDAF hosts, and generates the rdaf.cfg configuration file under the /opt/rdaf directory.
It creates the below directory structure on all of the RDAF hosts.
- /opt/rdaf/cert: It contains the generated self-signed SSL certificates for all of the RDAF hosts.
- /opt/rdaf/config: It contains the required configuration file for each deployed RDAF service where applicable.
- /opt/rdaf/data: It contains the persistent data for some of the RDAF services.
- /opt/rdaf/deployment-scripts: It contains the docker-compose .yml files of the services that are configured to be provisioned on the RDAF host.
- /opt/rdaf/logs: It contains the RDAF services' log files.
1.4.2 rdafk8s infra
rdafk8s infra command is used to deploy and manage RDAF infrastructure services on AWS EKS Cluster. Run the below command to view available CLI options.
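The CLI options below are printed by the infra subcommand's help, for example:

```shell
# Display help for the rdafk8s infra subcommand
rdafk8s infra -h
```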
usage: infra [-h] [--debug] {status,install,upgrade} ...
Manage infra services
positional arguments:
{status,install,upgrade}
status Status of the RDAF Infra
install Install the RDAF Infra containers
upgrade Upgrade the RDAF Infra containers
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
1.4.2.1 Install infra services
rdafk8s infra install command is used to deploy / install RDAF infrastructure services on AWS EKS Cluster. Run the below command to view the available CLI options.
usage: infra install [-h] --tag TAG [--service SERVICES]
optional arguments:
-h, --help show this help message and exit
--tag TAG Tag to use for the docker images of the infra components
--service SERVICES Restrict the scope of the command to a specific service
Run the below command to deploy all RDAF infrastructure services. (Note: Below shown tag name is a sample one for a reference only, for actual tag, please contact CloudFabrix support team at [email protected].)
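Based on the usage shown above, installing all infrastructure services takes this form (the 1.0.4 tag is a sample from this guide's status output, for reference only):

```shell
# Install all RDAF infrastructure services with the given image tag
rdafk8s infra install --tag 1.0.4
```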
Run the below command to install a specific RDAF infrastructure service. Below are the supported infrastructure services. (Note: Below shown tag name is a sample one for a reference only, for actual tag, please contact CloudFabrix support team at [email protected])
- nats
- mariadb
- opensearch
- kafka
- graphdb
- haproxy
- qdrant(optional)
Note
This step is optional. Customers who wish to install the qdrant service need to mount a 10GB disk and can run the below command for HA. It will prompt for the deployment IPs, so please make sure to assign 3 IPs for the infrastructure VMs. For Non-HA, please assign one Infra VM IP.
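Using the --service option from the usage above, a single service such as the optional qdrant can be installed as follows (the tag is a sample, for reference only):

```shell
# Install only the optional qdrant infrastructure service
rdafk8s infra install --tag 1.0.4 --service qdrant
```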
1.4.2.2 Status check
Run the below command to see the status of all of the deployed RDAF infrastructure services.
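The status table below is produced by the infra status subcommand:

```shell
# Show the status of all deployed RDAF infrastructure services
rdafk8s infra status
```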
+--------------------------+--------------+----------------+--------------+------------------------------+
| Name | Host | Status | Container Id | Tag |
+--------------------------+--------------+----------------+--------------+------------------------------+
| rda-nats | 192.168.1.41 | Up 3 Hours ago | 0138966cf012 | 1.0.4 |
| | | | | |
| rda-nats | 192.168.1.57 | Up 3 Hours ago | 7604b1437f0b | 1.0.4 |
| | | | | |
| rda-minio | 192.168.1.41 | Up 3 Hours ago | 0addd0ab8d6c | RELEASE.2023-09-30T07-02-29Z |
| | | | | |
| rda-minio | 192.168.1.57 | Up 3 Hours ago | 7243ff292c29 | RELEASE.2023-09-30T07-02-29Z |
| | | | | |
| rda-mariadb | 192.168.1.41 | Up 3 Hours ago | 625d88fd1d18 | 1.0.4 |
| | | | | |
| rda-mariadb | 192.168.1.57 | Up 3 Hours ago | f3e332cff326 | 1.0.4 |
| | | | | |
| rda-opensearch | 192.168.1.41 | Up 3 Hours ago | b33b1fcd37d3 | 1.0.4 |
| | | | | |
| rda-opensearch | 192.168.1.57 | Up 3 Hours ago | b845b24ac0fe | 1.0.4 |
| | | | | |
| rda-kafka-controller | 192.168.1.41 | Up 3 Hours ago | b0f3c8940098 | 1.0.4 |
| | | | | |
| rda-kafka-controller | 192.168.1.57 | Up 3 Hours ago | b4b1e7d55ab8 | 1.0.4 |
| | | | | |
| rda-graphdb[operator] | 192.168.1.41 | Up 3 Hours ago | 9a864316a8ff | 1.0.4 |
| | | | | |
| rda-graphdb[coordinator] | 192.168.1.41 | Up 1 Hours ago | 31b53276bd22 | 1.0.4 |
| | | | | |
| rda-graphdb[operator] | 192.168.1.57 | Up 3 Hours ago | d805ab7eca7d | 1.0.4 |
| | | | | |
| rda-graphdb[coordinator] | 192.168.1.57 | Up 1 Hours ago | b486c66ed97a | 1.0.4 |
| | | | | |
| qdrant | 192.168.1.41 | Up 3 Hours ago | ad145f1ac0be | 1.0.4 |
| | | | | |
| qdrant | 192.168.1.57 | Up 3 Hours ago | 3d11d57306f3 | 1.0.4 |
+--------------------------+--------------+----------------+--------------+------------------------------+
Below are the AWS EKS Cluster kubectl get pods commands to check the status of RDA Fabric infrastructure services.
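The pod listing below can be obtained with kubectl, using the rda-fabric namespace that the RDAF services deploy into:

```shell
# List RDAF infrastructure pods in the rda-fabric namespace
kubectl get pods -n rda-fabric
```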
NAME READY STATUS RESTARTS AGE
arango-rda-arangodb-operator-86bd5df48-ljctc 1/1 Running 0 3d23h
arango-rda-arangodb-operator-86bd5df48-s29jz 1/1 Running 0 3d23h
opensearch-cluster-master-0 1/1 Running 0 3d23h
opensearch-cluster-master-1 1/1 Running 0 3d23h
opensearch-cluster-master-2 1/1 Running 0 3d23h
rda-arangodb-agnt-9ysg2c6g-c3c523 1/1 Running 0 3d23h
rda-arangodb-agnt-apnjvxby-c3c523 1/1 Running 0 3d23h
rda-arangodb-agnt-mee3p56s-c3c523 1/1 Running 0 3d23h
rda-arangodb-crdn-9gbhyf8o-c3c523 1/1 Running 0 3d23h
rda-arangodb-crdn-dwo3dbaf-c3c523 1/1 Running 0 3d23h
rda-arangodb-crdn-zzttj2sr-c3c523 1/1 Running 0 3d23h
rda-arangodb-prmr-avvohvml-c3c523 1/1 Running 0 3d23h
rda-arangodb-prmr-tneidpy2-c3c523 1/1 Running 0 3d23h
rda-arangodb-prmr-ysnsl7ry-c3c523 1/1 Running 0 3d23h
rda-kafka-controller-0 1/1 Running 0 3d23h
rda-kafka-controller-1 1/1 Running 0 3d23h
rda-kafka-controller-2 1/1 Running 0 3d23h
rda-mariadb-mariadb-galera-0 1/1 Running 0 3d23h
rda-mariadb-mariadb-galera-1 1/1 Running 0 3d23h
rda-mariadb-mariadb-galera-2 1/1 Running 0 3d23h
rda-nats-0 2/2 Running 0 3d17h
rda-nats-1 2/2 Running 0 3d17h
rda-nats-box-6dbcdbc7ff-nn9m2 1/1 Running 0 3d17h
rda-nats-box-6dbcdbc7ff-rxkjr 1/1 Running 0 3d17h
The kubectl get pods command below provides additional details of the deployed RDAF Infrastructure service pods, along with the worker node(s) on which they were deployed.
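The usual way to include the scheduled node in the listing is kubectl's wide output format:

```shell
# List pods together with node placement details
kubectl get pods -n rda-fabric -o wide
```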
In order to get the detailed status of each RDAF Infrastructure service pod, run the kubectl describe pod command.
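For example, for the rda-nats-0 pod whose details are shown below:

```shell
# Show the detailed status and events of a specific pod
kubectl describe pod rda-nats-0 -n rda-fabric
```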
Name: rda-nats-0
Namespace: rda-fabric
Priority: 0
Node: k8rdapfm01/192.168.125.45
Start Time: Sun, 12 Feb 2023 00:36:39 +0000
Labels: app=rda-fabric-services
app_category=rdaf-infra
app_component=rda-nats
controller-revision-hash=rda-nats-64747cd755
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned rda-fabric/rda-nats-0 to k8rdapfm01
Normal Pulling 10m kubelet Pulling image "192.168.125.140:5000/rda-platform-nats:1.0.2"
Normal Pulled 10m kubelet Successfully pulled image "192.168.125.140:5000/rda-platform-nats:1.0.2" in 3.102792187s
Normal Created 10m kubelet Created container nats
1.4.2.3 Adding Load Balancer to Route 53 Record
After the infra installation, a LoadBalancer service is created for the HAProxy component. The DNS name of this LoadBalancer must be added to Route 53 so that the configured domain points to the HAProxy service.
Step 1: Identify the HAProxy LoadBalancer Service
Run the following command to list the HAProxy services:
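A sketch of the listing command, assuming the services live in the rda-fabric namespace used elsewhere in this guide:

```shell
# List services and filter for the HAProxy entries
kubectl get svc -n rda-fabric | grep haproxy
```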
rda-haproxy ClusterIP 172.20.92.234 <none> 8808/TCP,80/TCP,443/TCP,... 21m
rda-haproxy-lb LoadBalancer 172.20.246.228 ad9412d08bc864560ad0b18fbf271526-9c82cc75228681a9.elb.ap-south-1.amazonaws.com 80:30793/TCP,443:31755/TCP,... 29m
Note
Note the EXTERNAL-IP / DNS name of the rda-haproxy-lb service. This will be an AWS ELB DNS name, for example: ad9412d08bc864560ad0b18fbf271526-9c82cc75228681a9.elb.ap-south-1.amazonaws.com
Step 2: Create a Route 53 Record
Log in to the AWS Console → Navigate to Route 53 → Hosted Zones → Select the hosted zone used during the setup → Click Create record → Configure the record as follows:
Record name:
Use the domain name provided during the setup (for example, example.cloudfabrix.io)
Record type: A
Alias: Enabled
Route traffic to: Alias to Application / Network Load Balancer
Region: Select the region of the EKS cluster
Load balancer: Select the HAProxy LoadBalancer DNS name identified in Step 1
Save the record
Step 3: Verify DNS Resolution
Allow a few minutes for DNS propagation, then verify using nslookup.
Check the LoadBalancer DNS:
nslookup ad9412d08bc864560ad0b18fbf271526-9c82cc75228681a9.elb.ap-south-1.amazonaws.com
Expected output (IPs should match the LoadBalancer):
Name: ad9412d08bc864560ad0b18fbf271526-9c82cc75228681a9.elb.ap-south-1.amazonaws.com
Address: 3.108.140.236
Address: 3.6.5.36
If both DNS queries resolve to the same IP addresses, the Route 53 configuration is correct.
1.4.2.4 Restart infra services
Run the below command to restart one of the RDAF infrastructure services.
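A single infrastructure service pod can be restarted by deleting it; Kubernetes recreates it automatically:

```shell
# Delete one infrastructure pod; the controller redeploys it
kubectl delete pod <rdaf-infrastructure-pod-name> -n rda-fabric
```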
Run the below commands to restart more than one of RDAF infrastructure services.
kubectl delete pod <rdaf-infrastructure-pod-name1> <rdaf-infrastructure-pod-name2> ... -n rda-fabric
The above kubectl delete pod command stops and deletes the existing RDAF Infrastructure service pod(s), after which Kubernetes redeploys the service.
Danger
Restarting RDAF infrastructure services is a disruptive operation that impacts all dependent RDAF services and causes downtime. When the RDAF platform is deployed in a production environment, please perform these operations only during a scheduled downtime.
1.4.2.5 Upgrade infra services
Run the below command to upgrade all RDAF infrastructure services to a newer version.
Run the below command to upgrade a specific RDAF infrastructure service to a newer version.
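Assuming the same CLI shape as the install subcommand, the upgrade invocations look like this (the tag and the kafka service are sample values, for reference only):

```shell
# Upgrade all RDAF infrastructure services to a newer tag
rdafk8s infra upgrade --tag 1.0.4

# Upgrade only a specific service, e.g. kafka
rdafk8s infra upgrade --tag 1.0.4 --service kafka
```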
Tip
Above shown tag version is a sample one and for a reference only, for actual newer versioned tag, please contact CloudFabrix support team at [email protected]
Below are the supported RDAF Infrastructure services.
- nats
- mariadb
- opensearch
- kafka
- graphdb
- haproxy
Danger
Please take a full configuration and data backup of the RDAF platform before any upgrade. Upgrading RDAF infrastructure services is a disruptive operation that impacts all dependent RDAF services and causes downtime. When the RDAF platform is deployed in a production environment, please perform the upgrade only during a scheduled downtime.
1.4.3 rdafk8s platform
1.4.3.1 Install platform services
rdafk8s platform install command is used to deploy / install RDAF core platform services. Run the below command to view the available CLI options.
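The usage text below is printed by the install subcommand's help:

```shell
# Display help for the rdafk8s platform install subcommand
rdafk8s platform install -h
```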
usage: platform install [-h] --tag TAG [--service SERVICES]
optional arguments:
-h, --help show this help message and exit
--tag TAG Tag to use for the docker images of the platform
components
--service SERVICES Restrict the scope of the command to specific service
Run the below command to deploy all RDAF core platform services. (Note: Below shown tag name is a sample one for a reference only, for actual tag, please contact CloudFabrix support team at [email protected])
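Following the usage shown above, the full platform install takes this form (the 8.2 tag is a sample from this guide's status output; obtain the actual tag from CloudFabrix support):

```shell
# Install all RDAF core platform services
rdafk8s platform install --tag 8.2
```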
The installation of the RDAF core platform services includes the provisioning of the RDAF UI portal URL and the creation of a default tenant administrator account ([email protected]). Upon successful installation, the output will resemble the example shown below.
2025-01-08 05:33:21,803 [rdaf.component.platform] INFO - Created workspace:1d4ff2e8-20db-421e-9610-d9467c233d2d
Handling connection for 7780
2025-01-08 05:33:24,651 [rdaf.component.platform] INFO - UI can be accessed at - https://example.cloudfabrix.io with default user - [email protected]
The default password for [email protected] is admin1234.
Upon initial access to the RDAF UI portal, the system will prompt the user to reset the default password to a user-defined value.
1.4.3.2 Status check
Run the below command to see the status of all of the deployed RDAF core platform services.
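The table below is the output of the platform status subcommand:

```shell
# Show the status of all deployed RDAF core platform services
rdafk8s platform status
```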
+----------------------+---------------+----------------+--------------+-------+
| Name | Host | Status | Container Id | Tag |
+----------------------+---------------+----------------+--------------+-------+
| rda-api-server | 192.168.2.7 | Up 4 Hours ago | daa91c407e5b | 8.2 |
| rda-api-server | 192.168.3.247 | Up 4 Hours ago | 4db16709d642 | 8.2 |
| rda-registry | 192.168.3.247 | Up 4 Hours ago | aebaac4e6f7d | 8.2 |
| rda-registry | 192.168.2.7 | Up 4 Hours ago | 3774ecd23658 | 8.2 |
| rda-identity | 192.168.2.7 | Up 4 Hours ago | 5a7e3a83a54d | 8.2 |
| rda-identity | 192.168.3.247 | Up 4 Hours ago | 421481492c40 | 8.2 |
| rda-fsm | 192.168.2.7 | Up 4 Hours ago | 6059163309e6 | 8.2 |
| rda-fsm | 192.168.3.247 | Up 4 Hours ago | 3dd53e127f41 | 8.2 |
| rda-asm | 192.168.2.7 | Up 4 Hours ago | 408680b2c8ac | 8.2 |
| rda-asm | 192.168.3.247 | Up 4 Hours ago | be48c9801d81 | 8.2 |
| rda-chat-helper | 192.168.3.247 | Up 4 Hours ago | 81e038a3cfa5 | 8.2 |
| rda-chat-helper | 192.168.2.7 | Up 4 Hours ago | 6a8ffd53f05e | 8.2 |
| rda-access-manager | 192.168.2.7 | Up 4 Hours ago | 99ba1cc03bce | 8.2 |
| rda-access-manager | 192.168.3.247 | Up 4 Hours ago | e2b96f992b90 | 8.2 |
| rda-resource-manager | 192.168.3.247 | Up 4 Hours ago | 06c886b60a56 | 8.2 |
| rda-resource-manager | 192.168.2.7 | Up 4 Hours ago | 41803dd8c805 | 8.2 |
| rda-scheduler | 192.168.3.247 | Up 4 Hours ago | a8e9cd99f01c | 8.2 |
| rda-scheduler | 192.168.2.7 | Up 4 Hours ago | e667642d33c6 | 8.2 |
+----------------------+---------------+----------------+--------------+-------+
Below are the AWS EKS cluster kubectl commands to check the status of the RDA Fabric core platform services.
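A minimal check, using the rda-fabric namespace and the app_category=rdaf-platform label shown in the pod details later in this section:

```shell
# List only the core platform service pods.
kubectl get pods -n rda-fabric -l app_category=rdaf-platform
```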
NAME READY STATUS RESTARTS AGE
rda-access-manager-665d9c746f-cndgj 1/1 Running 0 3d23h
rda-access-manager-665d9c746f-w5dlm 1/1 Running 0 3d23h
rda-api-server-7b9447d776-jmdcm 1/1 Running 2 (3d16h ago) 3d23h
rda-api-server-7b9447d776-mxmlz 1/1 Running 2 (3d16h ago) 3d23h
rda-asm-8c6cdb84-qksbq 1/1 Running 0 3d23h
rda-asm-8c6cdb84-t7qpx 1/1 Running 0 3d23h
rda-chat-helper-69c7fbc786-4vs4d 1/1 Running 0 3d23h
rda-chat-helper-69c7fbc786-xh2kl 1/1 Running 0 3d23h
rda-fsm-58f57fddf-kgwdf 1/1 Running 0 3d23h
rda-fsm-58f57fddf-tbbzp 1/1 Running 0 3d23h
rda-identity-6b97b4bb5d-2zjps 1/1 Running 0 3d23h
rda-identity-6b97b4bb5d-mjpt8 1/1 Running 0 3d23h
rda-registry-6f69898676-nrpc5 1/1 Running 0 3d23h
rda-registry-6f69898676-wvfvg 1/1 Running 0 3d23h
rda-resource-manager-8b645546-fbkct 1/1 Running 0 3d23h
rda-resource-manager-8b645546-qkc6r 1/1 Running 0 3d23h
The below kubectl get pods command (with the -o wide option) also shows the worker node on which each deployed RDAF core platform service pod is running.
To get the detailed status of each RDAF core platform service pod, run the below kubectl describe pod command.
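For example (the pod name below is taken from the sample output that follows):

```shell
# -o wide adds the node name and pod IP for each platform service pod.
kubectl get pods -n rda-fabric -l app_category=rdaf-platform -o wide

# Detailed status, events and scheduling information for a single pod.
kubectl describe pod rda-collector-6bd6c79475-52hvg -n rda-fabric
```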
Name: rda-collector-6bd6c79475-52hvg
Namespace: rda-fabric
Priority: 0
Node: hari-k8-cluster-infra10819/192.168.108.19
Start Time: Tue, 31 Jan 2023 05:00:57 +0000
Labels: app=rda-fabric-services
app_category=rdaf-platform
app_component=rda-collector
pod-template-hash=6bd6c79475
...
...
QoS Class: Burstable
Node-Selectors: rdaf_platform_services=allow
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
pod-type=rda-tenant:NoSchedule
Events: <none>
1.4.3.3 Upgrade platform services
Run the below command to upgrade all RDAF core platform services to a newer version.
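A sketch of the upgrade invocation, assuming it mirrors the install usage; the tag is a hypothetical placeholder:

```shell
# Hypothetical newer tag -- obtain the actual tag from CloudFabrix support.
rdafk8s platform upgrade --tag 8.2.2
```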
Below are the RDAF core platform services
- rda-access-manager
- rda-resource-manager
- rda-user-preferences
- portal-backend
- portal-frontend
- rda-api-server
- rda-asm
- rda-fsm
- rda-collector
- rda-identity
- rda-registry
- rda-scheduler
- rda-chat-helper
Run the below command to upgrade a specific RDAF core platform service to a newer version.
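For example, restricting the upgrade to a single service with the --service option (the tag and service name are illustrative):

```shell
# Upgrade only the rda-api-server platform service.
rdafk8s platform upgrade --tag 8.2.2 --service rda-api-server
```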
Tip
The tag version shown above is a sample for reference only. For the actual newer tag, please contact the CloudFabrix support team at [email protected]
Danger
Please take a full configuration and data backup of the RDAF platform before any upgrade. Upgrading RDAF core platform services is a disruptive operation that impacts all dependent RDAF services and causes downtime. When the RDAF platform is deployed in a production environment, perform upgrades only during a scheduled downtime.
1.4.3.4 Restarting platform services
Run the below command to restart one of the RDAF core platform services.
Run the below commands to restart more than one RDAF core platform service.
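As a sketch (the pod name is taken from the sample status output above, and the app_component label from the pod details shown earlier):

```shell
# Restart a single platform service pod by name.
kubectl delete pod rda-api-server-7b9447d776-jmdcm -n rda-fabric

# Restart all pods of one service by label selector.
kubectl delete pod -n rda-fabric -l app_component=rda-api-server
```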
The above kubectl delete pod command stops and deletes the existing RDAF core platform service pod; Kubernetes then automatically redeploys the service.
Danger
Restarting RDAF core platform services is a disruptive operation that impacts all dependent RDAF services and causes downtime. When the RDAF platform is deployed in a production environment, perform these operations only during a scheduled downtime.
1.4.3.5 Start/Stop platform services
Run the below command to get all of the deployed RDAF core platform services.
Run the below command to stop one of the RDAF core platform services.
The above kubectl scale --replicas=0 command stops and terminates the selected RDAF core platform service pod.
Danger
Stopping RDAF core platform services is a disruptive operation that impacts all dependent RDAF services and causes downtime. When the RDAF platform is deployed in a production environment, perform these operations only during a scheduled downtime.
Run the below command to start one of the RDAF core platform services.
The above kubectl scale --replicas=1 command deploys and starts the selected RDAF core platform service pod.
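The stop/start sequence described above can be sketched as follows; the deployment name is an illustrative example and actual names may differ:

```shell
# List the platform service deployments.
kubectl get deployments -n rda-fabric -l app_category=rdaf-platform

# Stop a service by scaling its deployment to zero replicas.
kubectl scale deployment rda-scheduler -n rda-fabric --replicas=0

# Start it again (use --replicas=2 for HA deployments).
kubectl scale deployment rda-scheduler -n rda-fabric --replicas=1
```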
Note
When using the above command, set the --replicas value to 1 for non-HA deployments and 2 for HA deployments.
1.4.3.6 Reset password
Run the below command to reset the default user [email protected]'s password to the factory default (admin1234); the user will then be forced to change it to a password of their choice.
Warning
Use the above command only when tenant admin users cannot access the RDAF UI portal because an external IAM (identity and access management) tool such as Active Directory / LDAP / SSO is down or unreachable, and the default tenant admin user's password has been forgotten or lost.
1.4.4 rdafk8s app
The rdafk8s app command is used to deploy and manage the RDAF application services. Run the below command to view the available CLI options.
The supported application services are listed below.
- OIA: Operations Intelligence and Analytics (Also known as AIOps)
usage: ('app',) [-h] [--debug] {} ...
Manage the RDAF Apps
positional arguments:
{} commands
status Status of the App
install Install the App service containers
upgrade Upgrade the App service containers
update-config
Updated configurations of one or more components
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
1.4.4.1 Install OIA services
The rdafk8s app install command is used to deploy the RDAF OIA application services. Run the below command to view the available CLI options.
usage: ('app',) install [-h] --tag TAG [--service SERVICES] {OIA}
positional arguments:
{OIA} Select the APP to act on
optional arguments:
-h, --help show this help message and exit
--tag TAG Tag to use for the docker images of the app components
--service SERVICES Restrict the scope of the command to specific service
Run the below command to deploy the RDAF OIA application services. (Note: the tag name shown is a sample for reference only; for the actual tag, please contact the CloudFabrix support team at [email protected])
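A sketch of the invocation, per the usage shown above; the tag is a hypothetical placeholder:

```shell
# OIA is the positional app argument; obtain the actual tag from CloudFabrix support.
rdafk8s app install OIA --tag 8.2.1
```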
1.4.4.2 Status check
Run the below command to see the status of all of the deployed RDAF application services.
+-------------------------------+---------------+-------------------+--------------+-------+
| Name | Host | Status | Container Id | Tag |
+-------------------------------+---------------+-------------------+--------------+-------+
| rda-alert-correlator | 192.168.0.252 | Up 4 Hours ago | 3bf6419f0dd8 | 8.2 |
| rda-alert-correlator | 192.168.0.31 | Up 4 Hours ago | 540cee8d01af | 8.2 |
| rda-alert-ingester | 192.168.0.31 | Up 4 Hours ago | 25b2704b312e | 8.2 |
| rda-alert-ingester | 192.168.0.252 | Up 4 Hours ago | 1c14bd40640b | 8.2 |
| rda-alert-processor | 192.168.0.31 | Up 4 Hours ago | 65e3dc8be129 | 8.2 |
| rda-alert-processor | 192.168.0.252 | Up 4 Hours ago | 25d9aac591cf | 8.2 |
| rda-alert-processor-companion | 192.168.0.252 | Up 4 Hours ago | 817958d70048 | 8.2 |
| rda-alert-processor-companion | 192.168.0.31 | Up 4 Hours ago | dfec9857bc13 | 8.2 |
| rda-app-controller | 192.168.0.252 | Up 4 Hours ago | c23dc0a7b05c | 8.2 |
| rda-app-controller | 192.168.0.31 | Up 4 Hours ago | 2db7fda39a14 | 8.2 |
| rda-collaboration | 192.168.0.252 | Up 4 Hours ago | bad3e020ef6b | 8.2 |
| rda-collaboration | 192.168.0.31 | Up 4 Hours ago | c338cf945abe | 8.2 |
| rda-configuration-service | 192.168.0.31 | Up 4 Hours ago | aabfdee8ba1f | 8.2 |
| rda-configuration-service | 192.168.0.252 | Up 4 Hours ago | 57dd60efbdbc | 8.2 |
| rda-event-consumer | 192.168.0.252 | Up 4 Hours ago | 39d63d3380da | 8.2 |
| rda-event-consumer | 192.168.0.31 | Up 4 Hours ago | aeae4777ce35 | 8.2 |
| rda-file-browser | 192.168.0.31 | Up 4 Hours ago | dd6b6af66d61 | 8.2 |
| rda-file-browser | 192.168.0.252 | Up 4 Hours ago | b2b70fe2b310 | 8.2 |
+-------------------------------+---------------+-------------------+--------------+-------+
Below are the AWS EKS cluster kubectl get pods commands to check the status of the RDA Fabric application (OIA) services.
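A minimal check, using the rda-fabric namespace and the app_category=rdaf-application label shown in the pod details later in this section:

```shell
# List only the application (OIA) service pods.
kubectl get pods -n rda-fabric -l app_category=rdaf-application
```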
NAME READY STATUS RESTARTS AGE
rda-alert-ingester-697b867778-9b9bw 1/1 Running 0 11d
rda-alert-processor-f4c4f844c-h59qj 1/1 Running 0 11d
rda-app-controller-5f866cdc94-9d6gx 1/1 Running 0 11d
rda-collaboration-547598cfcf-k6vmt 1/1 Running 0 11d
rda-configuration-service-764d48db8c-94vz5 1/1 Running 0 11d
rda-dataset-caas-all-alerts-7fdf7df6d9-8nkh8 1/1 Running 0 11d
rda-dataset-caas-current-alerts-766bd95dd5-h5tlb 1/1 Running 0 11d
rda-event-consumer-64f7d96b46-h9jzw 1/1 Running 0 11d
rda-file-browser-7f4cd9764c-p99fg 1/1 Running 0 11d
rda-ingestion-tracker-688459ddc4-c9zs7 1/1 Running 0 11d
rda-irm-service-5b754f9687-bv5kw 1/1 Running 0 11d
rda-ml-config-585cd7fd6d-42pzn 1/1 Running 0 11d
rda-notification-service-7798bd7b9d-sdlcn 1/1 Running 0 11d
rda-reports-registry-64474997fd-7wp9c 1/1 Running 0 11d
rda-smtp-server-84995684dc-7c8qd 1/1 Running 0 11d
rda-webhook-server-59c5654b95-mdhhf 1/1 Running 0 11d
The below kubectl get pods command (with the -o wide option) also shows the worker node on which each deployed RDAF application (OIA) service pod is running.
To get the detailed status of each RDAF application (OIA) service pod, run the below kubectl describe pod command.
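For example (the pod name below is taken from the sample output that follows):

```shell
# -o wide adds the node name and pod IP for each application service pod.
kubectl get pods -n rda-fabric -l app_category=rdaf-application -o wide

# Detailed status, events and scheduling information for a single pod.
kubectl describe pod rda-webhook-server-59c5654b95-mdhhf -n rda-fabric
```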
Name: rda-webhook-server-59c5654b95-mdhhf
Namespace: rda-fabric
Priority: 0
Node: hari-k8-cluster-infra10822/192.168.108.22
Start Time: Tue, 31 Jan 2023 05:41:40 +0000
Labels: app=rda-fabric-services
app_category=rdaf-application
app_component=rda-webhook-server
pod-template-hash=59c5654b95
...
...
QoS Class: Burstable
Node-Selectors: rdaf_application_services=allow
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
pod-type=rda-tenant:NoSchedule
Events: <none>
1.4.4.3 Upgrade app OIA services
Run the below command to upgrade all RDAF OIA application services to a newer version.
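A sketch, assuming the upgrade subcommand mirrors the install usage shown earlier; the tag is a hypothetical placeholder:

```shell
# Hypothetical newer tag -- obtain the actual tag from CloudFabrix support.
rdafk8s app upgrade OIA --tag 8.2.2
```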
Below are the RDAF OIA application services
- all-alerts-cfx-rda-dataset-caas
- cfx-rda-alert-ingester
- cfx-rda-alert-processor
- cfx-rda-app-builder
- cfx-rda-app-controller
- cfx-rda-collaboration
- cfx-rda-configuration-service
- cfx-rda-event-consumer
- cfx-rda-file-browser
- cfx-rda-ingestion-tracker
- cfx-rda-irm-service
- cfx-rda-ml-config
- cfx-rda-notification-service
- cfx-rda-reports-registry
- cfx-rda-smtp-server
- cfx-rda-webhook-server
- current-alerts-cfx-rda-dataset-caas
Run the below command to upgrade a specific RDAF OIA application service to a newer version.
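For example, restricting the upgrade to one service from the list above (the tag is illustrative):

```shell
# Upgrade only the alert processor application service.
rdafk8s app upgrade OIA --tag 8.2.2 --service cfx-rda-alert-processor
```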
Tip
The tag version shown above is a sample for reference only. For the actual newer tag, please contact the CloudFabrix support team at [email protected]
Danger
Please take a full configuration and data backup of the RDAF platform before any upgrade. Upgrading RDAF OIA application services is a disruptive operation that impacts the availability of these services. When the RDAF platform is deployed in a production environment, perform upgrades only during a scheduled downtime.
1.4.4.4 Restarting app services
Run the below command to restart one of the RDAF application (OIA) services.
Run the below commands to restart more than one RDAF application (OIA) service.
The above kubectl delete pod command stops and deletes the existing RDAF application service pod; Kubernetes then automatically redeploys the service.
Below are the RDAF OIA application services
- rda-app-controller
- rda-alert-processor
- rda-file-browser
- rda-smtp-server
- rda-ingestion-tracker
- rda-reports-registry
- rda-ml-config
- rda-event-consumer
- rda-webhook-server
- rda-irm-service
- rda-alert-ingester
- rda-collaboration
- rda-notification-service
- rda-configuration-service
- rda-alert-processor-companion
- rda-alert-correlator
Danger
Stopping and starting RDAF OIA application services is a disruptive operation that impacts the availability of these application services. When the RDAF platform is deployed in a production environment, perform these operations only during a scheduled downtime.
1.4.4.5 Start/Stop application services
Run the below command to get all of the deployed RDAF OIA application services.
Run the below command to stop one of the RDAF OIA application services.
The above kubectl scale --replicas=0 command stops and terminates the selected RDAF application service pod.
Danger
Stopping RDAF application services is a disruptive operation that impacts all dependent RDAF services and causes downtime. When the RDAF platform is deployed in a production environment, perform these operations only during a scheduled downtime.
Run the below command to start one of the RDAF OIA application services.
The above kubectl scale --replicas=1 command deploys and starts the selected RDAF application service pod.
Note
When using the above command, set the --replicas value to 1 for non-HA deployments and 2 for HA deployments.
1.4.5 rdafk8s worker
The rdafk8s worker command is used to deploy and manage the RDAF worker services. Run the below command to view the available CLI options.
usage: worker [-h] [--debug] {} ...
Manage the RDAF Worker
positional arguments:
{} commands
status Status of the RDAF Worker
install Install the RDAF Worker containers
upgrade Upgrade the RDAF Worker containers
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
1.4.5.1 Install worker service(s)
The rdafk8s worker install command is used to deploy the RDAF worker services. Run the below command to view the available CLI options.
usage: worker install [-h] --tag TAG
optional arguments:
-h, --help show this help message and exit
--tag TAG Tag to use for the docker images of the worker components
Run the below command to deploy all RDAF worker services. (Note: the tag name shown is a sample for reference only; for the actual tag, please contact the CloudFabrix support team at [email protected])
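A sketch of the invocation, per the usage shown above; the tag is a hypothetical placeholder:

```shell
# Hypothetical tag value -- obtain the actual tag from CloudFabrix support.
rdafk8s worker install --tag 8.2.1
```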
1.4.5.2 Status check
Run the below command to see the status of all of the deployed RDAF worker services.
+------------+---------------+------------------+--------------+-------+
| Name | Host | Status | Container Id | Tag |
+------------+---------------+------------------+--------------+-------+
| rda-worker | 192.168.0.252 | Up 2 Minutes ago | 02a417646285 | 8.2 |
| rda-worker | 192.168.0.31 | Up 2 Minutes ago | 0bc67f9b5c78 | 8.2 |
+------------+---------------+------------------+--------------+-------+
Below are the AWS EKS cluster kubectl get pods commands to check the status of the RDA Fabric worker services.
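A minimal check, using the rda-fabric namespace and the app_category=rdaf-worker label shown in the pod details later in this section:

```shell
# List only the worker service pods.
kubectl get pods -n rda-fabric -l app_category=rdaf-worker
```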
NAME READY STATUS RESTARTS AGE
rda-worker-749b977b95-cf757 1/1 Running 0 11d
rda-worker-749b977b95-xkb5w 1/1 Running 0 11d
The below kubectl get pods command (with the -o wide option) also shows the worker node on which each deployed RDAF worker service pod is running.
To get the detailed status of each RDAF worker service pod, run the below kubectl describe pod command.
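For example (the pod name below is taken from the sample output that follows):

```shell
# -o wide adds the node name and pod IP for each worker service pod.
kubectl get pods -n rda-fabric -l app_category=rdaf-worker -o wide

# Detailed status, events and scheduling information for a single pod.
kubectl describe pod rda-worker-749b977b95-cf757 -n rda-fabric
```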
Name: rda-worker-749b977b95-cf757
Namespace: rda-fabric
Priority: 0
Node: hari-k8-cluster-infra10820/192.168.108.20
Start Time: Tue, 31 Jan 2023 05:18:11 +0000
Labels: app=rda-fabric-services
app_category=rdaf-worker
app_component=rda-worker
pod-template-hash=749b977b95
...
...
QoS Class: Burstable
Node-Selectors: rdaf_worker_services=allow
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
pod-type=rda-tenant:NoSchedule
Events: <none>
1.4.5.3 Upgrade worker services
Run the below command to upgrade all RDAF worker service(s) to a newer version.
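A sketch, assuming the upgrade subcommand mirrors the install usage shown earlier; the tag is a hypothetical placeholder:

```shell
# Hypothetical newer tag -- obtain the actual tag from CloudFabrix support.
rdafk8s worker upgrade --tag 8.2.2
```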
Tip
The tag version shown above is a sample for reference only. For the actual newer tag, please contact the CloudFabrix support team at [email protected]
Danger
Upgrading RDAF worker services is a disruptive operation that impacts all worker jobs. When the RDAF platform is deployed in a production environment, perform upgrades only during a scheduled downtime.
1.4.5.4 Restarting worker services
Run the below command to restart one of the RDAF worker services.
Run the below commands to restart more than one RDAF worker service.
The above kubectl delete pod command stops and deletes the existing RDAF worker service pod; Kubernetes then automatically redeploys the service.
Below is the RDAF worker service
- rda-worker
Danger
Stopping and starting RDAF worker services is a disruptive operation that impacts all worker jobs. When the RDAF platform is deployed in a production environment, perform these operations only during a scheduled downtime.
1.4.5.5 Start/Stop worker services
Run the below command to get all of the deployed RDAF worker services.
Run the below command to stop one of the RDAF worker services.
The above kubectl scale --replicas=0 command stops and terminates the selected RDAF worker service pod.
Danger
Stopping RDAF worker services is a disruptive operation that impacts all dependent RDAF services and causes downtime. When the RDAF platform is deployed in a production environment, perform these operations only during a scheduled downtime.
Run the below command to start one of the RDAF worker services.
The above kubectl scale --replicas=1 command deploys and starts the selected RDAF worker service pod.
Note
When using the above command, set the --replicas value to 1 or more for non-HA deployments and 2 or more for HA deployments.
1.4.6 rdafk8s rdac_cli
The rdafk8s rdac_cli command allows you to install and upgrade the RDA client CLI utility, which interacts with RDA Fabric services and operations.
usage: rdac-cli [-h] [--debug] {} ...
Install RDAC CLI
positional arguments:
{} commands
install Install the RDAC CLI
upgrade Upgrade the RDAC CLI
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
- To install the RDA client CLI, run the below command
- To upgrade the RDA client CLI version, run the below command
- Run the below command to see the RDA client CLI help and available subcommand options.
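Assuming the subcommands shown in the help output that follows, and that the installed client binary is named rdac (an assumption; verify the binary name in your installation):

```shell
# Install the RDA client CLI.
rdafk8s rdac_cli install

# Upgrade the RDA client CLI to a newer version.
rdafk8s rdac_cli upgrade

# Show help and the available subcommands (binary name assumed).
rdac --help
```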
Run with one of the following commands
agent-bots List all bots registered by agents for the current tenant
agents List all agents for the current tenant
alert-rules Alert Rule management commands
bot-catalog-generation-from-file Generate bot catalog for given sources
bot-package Bot Package management commands
bots-by-source List bots available for given sources
check-credentials Perform credential check for one or more sources on a worker pod
checksum Compute checksums for pipeline contents locally for a given JSON file
compare Commands to compare different RDA systems using different RDA Config files
content-to-object Convert data from a column into objects
copy-to-objstore Deploy files specified in a ZIP file to the Object Store
dashboard User defined dashboard management commands
dashgroup User defined dashboard-group management commands
dataset Dataset management commands
demo Demo related commands
deployment Service Blueprints (Deployments) management commands
event-gw-status List status of all ingestion endpoints at all the event gateways
evict Evict a job from a worker pod
file-ops Perform various operations on local files
file-to-object Convert files from a column into objects
fmt-template Formatting Templates management commands
healthcheck Perform healthcheck on each of the Pods
invoke-agent-bot Invoke a bot published by an agent
jobs List all jobs for the current tenant
logarchive Logarchive management commands
object RDA Object management commands
output Get the output of a Job using jobid.
pipeline Pipeline management commands
playground Start Webserver to access RDA Playground
pods List all pods for the current tenant
project Project management commands. Projects can be used to link different tenants / projects from this RDA Fabric or a remote RDA Fabric.
pstream Persistent Stream management commands
purge-outputs Purge outputs of completed jobs
read-stream Read messages from an RDA stream
reco-engine Recommendation Engine management commands
restore Commands to restore backed-up artifacts to an RDA Platform
run Run a pipeline on a worker pod
run-get-output Run a pipeline on a worker, and Optionally, wait for the completion, get the final output
schedule Pipeline execution schedule management commands
schema Dataset Model Schema management commands
secret Credentials (Secrets) management commands
set-pod-log-level Update the logging level for a given RDA Pod.
shell Start RDA Client interactive shell
site-profile Site Profile management commands
site-summary Show summary by Site and Overall
stack Application Dependency Mapping (Stack) management commands
staging-area Staging Area based data ingestion management commands
subscription Show current CloudFabrix RDA subscription details
synthetics Data synthesizing management commands
verify-pipeline Verify the pipeline on a worker pod
viz Visualize data from a file within the console (terminal)
watch Commands to watch various streams such as trace, logs and change notifications by microservices
web-server Start Webserver to access RDA Client data using REST APIs
worker-obj-info List all worker pods with their current Object Store configuration
write-stream Write data to the specified stream
positional arguments:
command RDA subcommand to run
optional arguments:
-h, --help show this help message and exit
Tip
Please refer RDA Client CLI Usage for detailed information.
1.5 rdafk8s reset
The rdafk8s reset --no-purge command allows the user to reset the RDAF platform configuration by performing the below operations:
- Stop RDAF application, worker, platform & infrastructure services
- Delete RDAF application, worker, platform & infrastructure services and their data
- Delete all Docker images and volumes of the RDAF application, worker, platform & infrastructure services
- Delete the RDAF platform configuration
Note
If the above command appears to be stuck while deleting the namespace, open a separate terminal tab and run the below command to forcefully terminate any pods that are not shutting down properly.
kubectl get pods -n rda-fabric | grep "Terminating" | awk '{print $1}' | xargs -I {} kubectl delete pod {} -n rda-fabric --force --grace-period=0
- Remove the files in the rdaf directory using the below command
Danger
The rdafk8s reset command is a disruptive operation, as it clears the entire RDAF platform footprint. Its primary purpose is for use in Demo or POC environments only ("NOT" in Production), where the entire RDAF platform needs to be re-installed from scratch.