Post upgrade steps to configure Kubernetes on AWS
Upgrade the cluster to 2024.3
Update the IAM Stack with 2024.3 automated_deployment_iam_role.json and select yes to give EKS-related permissions to the IAM role.
Create an EKS cluster using CreateEks.json.
Run the following commands one by one on every Kyvos node to install the AWS CLI and kubectl:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin/ --install-dir /usr/local/aws-cli --update
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.29.3/2024-04-19/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo chown kyvos:kyvos kubectl
sudo mv kubectl /bin/
sudo mkdir -p /home/kyvos/.kube
sudo chown -R kyvos:kyvos /home/kyvos/.kube
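Optionally, you can confirm the installation on each node by checking the installed versions. These verification commands are not part of the upgrade procedure itself; they are only a quick sanity check.
aws --version
kubectl version --client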
Run the commands below as a sudo user on the Kyvos Manager node to install eksctl:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo cp /tmp/eksctl /bin/
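Optionally, verify that eksctl is available on the Kyvos Manager node:
eksctl version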
Once the EKS cluster is created, go to the created node, and then open the Security section.
Click the security group named eks-cluster-sg-kyvosEks-{STACK-NAME}-<random number>.
Add an inbound rule to the above security group for TCP port 6903, with the security group attached to the BI Server as the source.
Add an inbound rule to the Web Server security group for TCP port 2181, with the security group mentioned above (eks-cluster-sg-kyvosEks-{STACK-NAME}-<random number>) as the source.
Add an inbound rule to the BI Server security group for TCP port 2181, with the security group mentioned above (eks-cluster-sg-kyvosEks-{STACK-NAME}-<random number>) as the source.
Add an inbound rule to the BI Server security group for TCP port 45460, with the security group mentioned above (eks-cluster-sg-kyvosEks-{STACK-NAME}-<random number>) as the source.
Add an inbound rule to the BI Server security group for TCP port 6803, with the security group mentioned above (eks-cluster-sg-kyvosEks-{STACK-NAME}-<random number>) as the source.
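If you prefer to add these inbound rules from the command line instead of the console, the rule for TCP 6903 could look like the following AWS CLI sketch. The security group IDs are placeholders: EKS_CLUSTER_SG_ID stands for the eks-cluster-sg-kyvosEks-{STACK-NAME}-<random number> group and BI_SERVER_SG_ID for the security group attached to the BI Server.
aws ec2 authorize-security-group-ingress --group-id EKS_CLUSTER_SG_ID --protocol tcp --port 6903 --source-group BI_SERVER_SG_ID
Repeat the same pattern for the 2181, 45460, and 6803 rules, swapping the target and source group IDs as described above.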
Open the deployment bucket permission section and add the ARNs of the OIDC role and the Node group role to the array.
Once the above changes are done, navigate to the Compute Cluster page on Kyvos Manager, click Native, and then choose Containerized Kubernetes.
Provide the name of the Kubernetes cluster, the Node Pool name, and then the K8S auth role name. The role name can be found in the Resources section of the EKS creation stack.
Copy the value of the EKSOidcRole key; that value is the K8S auth role name.
Validate the information and click Save.
Post upgrade steps to configure Kubernetes on Azure
Create an AKS cluster via script. To create the AKS cluster, refer to Azure documentation.
Upload the script and fill in the parameters given in the script.
Once the AKS cluster is created, configure it through Kyvos Manager on the Compute cluster page.
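Optionally, you can confirm the cluster and fetch its kubeconfig with the Azure CLI before configuring it in Kyvos Manager. This is only a verification sketch; RESOURCE_GROUP and AKS_CLUSTER_NAME are placeholders for the values used in your script.
az aks show --resource-group RESOURCE_GROUP --name AKS_CLUSTER_NAME --output table
az aks get-credentials --resource-group RESOURCE_GROUP --name AKS_CLUSTER_NAME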
Post upgrade steps to configure Kubernetes on GCP
Having a Dataproc cluster with a metastore is a prerequisite for configuring Kubernetes on GCP.
Replace the following placeholders in the commands below:
IAM_SA_NAME: a name for your new IAM service account.
IAM_SA_PROJECT_ID: the project ID for your IAM service account.
PROJECT_ID: your Google Cloud project ID.
If using an existing Service Account, execute the following commands using the gcloud CLI to link the Kubernetes service account to the IAM service account.
gcloud iam service-accounts add-iam-policy-binding IAM_SA_NAME@IAM_SA_PROJECT_ID.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:PROJECT_ID.svc.id.goog[kyvos-monitoring/default]"
gcloud iam service-accounts add-iam-policy-binding IAM_SA_NAME@IAM_SA_PROJECT_ID.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:PROJECT_ID.svc.id.goog[kyvos-compute/default]"
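Workload Identity setups typically also require the Kubernetes service accounts in the kyvos-monitoring and kyvos-compute namespaces to carry the matching annotation. The Kyvos deployment scripts may already handle this, so treat the following only as an illustrative sketch using the same placeholders as above.
kubectl annotate serviceaccount default --namespace kyvos-monitoring iam.gke.io/gcp-service-account=IAM_SA_NAME@IAM_SA_PROJECT_ID.iam.gserviceaccount.com
kubectl annotate serviceaccount default --namespace kyvos-compute iam.gke.io/gcp-service-account=IAM_SA_NAME@IAM_SA_PROJECT_ID.iam.gserviceaccount.com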
Additionally, if using a shared Virtual Network, the following roles and permissions are required for the default Kubernetes service account (service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com) on the project of the shared Virtual Network (a gcloud sketch for creating and binding the custom role follows the list):
Compute Network User
kubernetes_role (create a custom role with the following permissions)
compute.firewalls.create
compute.firewalls.delete
compute.firewalls.get
compute.firewalls.list
compute.firewalls.update
compute.networks.updatePolicy
compute.subnetworks.get
container.hostServiceAgent.use
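As a sketch, the custom role could be created and bound with gcloud as follows. SHARED_VPC_PROJECT_ID is a placeholder for the shared Virtual Network project, PROJECT_NUMBER for the Kubernetes project number, and kubernetes_role is only the suggested role ID from the list above.
gcloud iam roles create kubernetes_role --project=SHARED_VPC_PROJECT_ID --title="kubernetes_role" --permissions=compute.firewalls.create,compute.firewalls.delete,compute.firewalls.get,compute.firewalls.list,compute.firewalls.update,compute.networks.updatePolicy,compute.subnetworks.get,container.hostServiceAgent.use
gcloud projects add-iam-policy-binding SHARED_VPC_PROJECT_ID --member="serviceAccount:service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" --role="projects/SHARED_VPC_PROJECT_ID/roles/kubernetes_role"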
Add the following roles to the existing IAM service account (see the binding sketch after this list):
roles/iam.serviceAccountTokenCreator (Service Account Token Creator)
roles/container.developer (Kubernetes Engine Developer)
roles/container.clusterAdmin (Kubernetes Engine Cluster Admin)
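These roles can be granted with project-level bindings, for example as below; the placeholders are the same as defined earlier, and you may prefer to scope the Service Account Token Creator role to individual service accounts instead of the whole project.
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:IAM_SA_NAME@IAM_SA_PROJECT_ID.iam.gserviceaccount.com" --role="roles/iam.serviceAccountTokenCreator"
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:IAM_SA_NAME@IAM_SA_PROJECT_ID.iam.gserviceaccount.com" --role="roles/container.developer"
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:IAM_SA_NAME@IAM_SA_PROJECT_ID.iam.gserviceaccount.com" --role="roles/container.clusterAdmin"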
Add the following permissions to the Kyvos role (see the sketch after this list):
compute.instanceGroupManagers.update
compute.instanceGroupManagers.get
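If the Kyvos role is a custom IAM role, the two permissions can be appended with gcloud as sketched below. KYVOS_ROLE_ID is a placeholder for the actual role ID used in your project.
gcloud iam roles update KYVOS_ROLE_ID --project=PROJECT_ID --add-permissions=compute.instanceGroupManagers.update,compute.instanceGroupManagers.get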
SSH into all instances one by one and run the following commands:
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates gnupg curl -y
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get update && sudo apt-get install google-cloud-cli -y
sudo apt-get install google-cloud-cli-gke-gcloud-auth-plugin
sudo apt-get install kubectl
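Optionally, verify the installation on each instance with the following checks; they are not part of the procedure itself.
gcloud version
gke-gcloud-auth-plugin --version
kubectl version --client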
Create the Kubernetes cluster using the deployment scripts.
Open Kyvos Manager, go to the Compute Cluster page, and click Kyvos Native.
Fill in the required inputs. The value provided for Worker Nodes Maximum Count should be the same as configured in the templates. Click Save.
On the Kyvos Manager >> Kyvos Properties screen, set ENABLE_HELIX_TASK_MANAGER to Yes.
On the Kyvos Manager >> Kyvos Properties screen, ensure that KYVOS_PROCESS_COMPUTE_SUBTYPE is set to K8S_COMPUTE_CLUSTER.