Deploying Kyvos Resources for GCP using Terraform Script
Applies to: Kyvos Enterprise, Kyvos Cloud (SaaS on AWS), Kyvos AWS Marketplace, Kyvos Azure Marketplace, Kyvos GCP Marketplace, and Kyvos Single Node Installation (Kyvos SNI)
Before you begin
In addition to the prerequisites, ensure that the following settings are enabled on your GCP project.
Project Billing: Search for Billing in your Google Cloud project.
Click Link a Billing Account, and configure the billing information.
Once billing is enabled, you will see an estimate as shown in the following figure.
Compute Engine APIs: Search for Compute Engine APIs on your project, and click the Enable button.
Once the API is enabled, the corresponding status is displayed, as shown in the following figure.
Cloud Key Management Service (KMS) API: Enable this API, which extends customer control over encryption keys.
Cloud Resource Manager API: Search for Cloud Resource Manager API on your project, and click the Enable button.
Once the API is enabled, the API Enabled status is displayed, as shown in the following figure.
Enable the following APIs on your project. Refer to the GCP documentation for details.
Cloud Functions
Cloud Build
Cloud Scheduler
Create an App Engine project and select the region where you want to deploy your resources.
Add the storage.buckets.get role to the default Google APIs Service Agent service account. This is required to delete deployments through the Deployment Manager.
Kubernetes Engine API: Search for Kubernetes Engine API on your project and click the Enable button. Once the API is enabled, the API Enabled status is displayed.
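If you prefer the CLI, the same APIs can typically be enabled in one call with gcloud; a sketch, assuming gcloud is installed and [MY_PROJECT] is replaced with your project ID:

```shell
# Enable the APIs required by the deployment (Compute Engine, Cloud KMS,
# Cloud Resource Manager, Cloud Functions, Cloud Build, Cloud Scheduler,
# and Kubernetes Engine) for the target project.
gcloud services enable \
  compute.googleapis.com \
  cloudkms.googleapis.com \
  cloudresourcemanager.googleapis.com \
  cloudfunctions.googleapis.com \
  cloudbuild.googleapis.com \
  cloudscheduler.googleapis.com \
  container.googleapis.com \
  --project [MY_PROJECT]
```

You can confirm afterwards with `gcloud services list --enabled`.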
Creating resources using script
Download the terraform.tar file on your workstation.
On your workstation, install the gcloud command-line tool.
To execute Terraform on Google Cloud Platform's Cloud Shell, activate Cloud Shell.
Configure the gcloud command-line tool to use your project using the following command.
gcloud config set project [MY_PROJECT]
Here, replace [MY_PROJECT] with your project ID.
Note
After opening the terminal in Cloud Shell, ensure that Cloud Shell is configured to operate within the same project where you intend to deploy your resources.
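For example, a quick sanity check before proceeding:

```shell
# Print the project ID that gcloud is currently configured to use;
# it should match the project where you intend to deploy Kyvos.
gcloud config get-value project
```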
Copy the terraform.tar file and untar it. The following subdirectories and files are displayed.
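Untarring can be done with the tar command; a minimal sketch (the first three lines only fabricate a stand-in archive so the example is self-contained — in a real deployment, terraform.tar is the file you downloaded):

```shell
# Stand-in bundle for illustration; skip these three lines in a real deployment
mkdir -p terraform/conf terraform/bin
touch terraform/conf/kyvosparams.tfvars
tar -cf terraform.tar terraform && rm -rf terraform

# Untar the bundle; its subdirectories (bin, conf, ...) appear alongside it
tar -xvf terraform.tar
ls terraform
```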
Access the kyvosparams.tfvars file located in the conf directory, and configure the parameters as needed for your deployment.
Enter details for Kyvos resources with Kubernetes and Dataproc in the kyvosparams.tfvars file:
Parameter | Description |
---|---|
projectId | Enter the project ID. NOTE: This project ID will be used for the Kyvos deployment. |
region | Enter the region to deploy Kyvos resources. |
existingVpcName | Enter the name of the existing VPC. NOTE: If this field is left blank, a new VPC will be created. |
existingvpcProjectId | Enter the project ID of the existing VPC. |
existingSubnetworkName | Enter the name of the existing VPC subnet. |
createNetworkFirewall | To create firewall rules, set the value of this parameter to true. NOTE: If the value of createVPC is set to true, firewall rules will be created unconditionally. |
gkeSubnetName | Enter the name of an existing subnet in which you want to deploy the GKE cluster. If left blank, the subnetwork name will be used. |
secondaryRangeName1 | Enter the Secondary IPv4 ranges name for GKE Cluster creation. NOTE: This must be preconfigured if using an existing VPC. For more information, see Google documentation. |
secondaryRangeName2 | Enter the Secondary IPv4 ranges name for GKE Cluster creation. NOTE: This must be preconfigured if using an existing VPC. For more information, see Google documentation. |
secondaryRange1 | Enter the Secondary IPv4 ranges for GKE Cluster creation in the case of existing VPC. |
secondaryRange2 | Enter the Secondary IPv4 ranges for GKE Cluster creation in the case of existing VPC. |
customPrefixNameVPC | If the existing VPC name is left blank, enter the prefix to be used for the VPC to be created. |
customPrefixNameSubNetwork | If the existing subnetwork name is left blank, enter the prefix to be used for the subnetwork to be created. |
ipCidrRange | Enter the value of ipCidrRange if the existing VPC name is left blank. |
vpcConnectorName | Enter the name of the VPC Connector to be used with GCP functions. |
kmCount | The number of Kyvos Manager instances to be launched. |
kmInstanceType | Instance type of Kyvos Manager (n2-standard-4). |
kmVolumeSizeGB | Size of the disk to be attached to the Kyvos Manager. |
kmVolumeType | Type of the disk for KM (pd-ssd). |
hostNameBasedDeployment | Change the value to true to use the hostname for the cluster deployment. |
createLoadBalancer | Set the value to true to create a load balancer. By default, the value is false. |
enableWebServerHA | Set the value to true to enable Web Server High Availability. By default, the value is false. |
webServerCount | Configure the Web Server count. |
webServerInstanceType | Configure the Web Server instance type. |
webServerVolumeSizeGB | Size of the disk to be attached for the Web Server. |
webServerVolumeType | Type of the disk for the Web Server (pd-ssd). |
qeCount | The number of instances to be used as query engines. |
qeInstanceType | Instance type of query engine (n2-highmem-4). |
qeDataVolumeSizeGB | Size of the disk to be attached with query engines. |
qeCacheVolumeSizeGB | Size of the disk to be attached for the cache. |
qeCacheVolumeCount | The number of disks to be attached to the cache. |
qeCacheVolumeType | Type of the disk for the cache (pd-ssd). |
biCount | Enter the number of instances to be used as the BI server. |
biInstanceType | Instance type of BI Server (n2-standard-8). |
biVolumeCount | The number of disks to be attached to the BI Server. |
biVolumeSizeGB | Size of the disk to be attached to the BI Server. |
biVolumeType | Type of the disk for BI server (pd-ssd) |
createGcpFunctions | Set the value as true to configure GCP Functions in Kyvos. |
dataprocMetastoreURI | Enter the Metastore URI if you want to deploy Kyvos with no Spark configuration. |
createGKE | Enter the value as True or False. |
gkeWorkerInitialNodeCount | Enter the initial worker node count for the Kubernetes cluster. NOTE: The default value is 1. |
gkeWorkerInstancetype | Enter the worker node instance type for the Kubernetes cluster. NOTE: n2-standard-16 is the minimum configuration. Instance types smaller than this aren't supported. |
minPodCount | Enter the minimum pod count. |
maxWorkerNodeCount | Enter the maximum worker node count. |
existingGkeClusterName | Enter the name of existing GKE cluster. |
existingNodePoolName | Enter the node pool name of the existing GKE cluster. |
minPodCountExistingNodePool | Enter the minimum pod count of the existing GKE cluster. |
maxWorkerNodeCountExistingNodePool | Enter the maximum worker node count of the existing cluster. |
existingGKERange | Enter the secondary IP range used in the existing GKE cluster if the VPC used by the GKE cluster is different from the one used for the deployment. |
existingGKEserviceAccountName | Enter the name of the service account used in the existing GKE cluster if it differs from the one used for the deployment. |
existingGKEServiceAccountemail | Enter the service account email used in the existing GKE cluster if it differs from the one used for the deployment. |
createDataProc | Enter the value as True or False. |
enableComponentGateway | Set the value of enableComponentGateway to True to get a publicly accessible URL for Dataproc. |
sharedDataprocCluster | Select true to use a shared Dataproc cluster; in this case, Kyvos will not manage the Dataproc cluster. Select false to use an on-demand Dataproc cluster; in this case, the Dataproc cluster will start or stop automatically. |
dataProcNetworkTags | Provide a list of comma-separated network tags to be added to the Dataproc cluster. Example: dataProcNetworkTags : ["abc","xyz"] |
enableSshFlag | Set the value to true to enable SSH to the Dataproc cluster. |
enableLivy | Set the value to True if using Dataproc version 2.1.11-debian11. |
masterInstanceCount | The number of master nodes. For example, 1 |
masterInstanceType | Instance type of master node (n2-highmem-4) |
masterInstanceVolumeType | Type of the disk for master node (pd-ssd) |
workerInstanceCount | The number of worker instances. |
workerInstanceType | Instance type of worker node (n2-highmem-8) |
workerInstanceVolumeType | Type of the disk for worker node (pd-ssd) |
enableDataProcMetastore | Set the value as true to allow external Dataproc metastore. |
dataProcMetastoreProjectId | If enableDataProcMetastore is set to true, provide the metastore project ID. |
dataProcMetaStoreName | Provide the name of the metastore. |
dataProcVersion | Supported version is 2.1.11-debian11 |
enableAutoScaling | Set the value as true to enable the autoscaling of cluster nodes. |
existingAutoScalingPolicyName | Provide the name of the existing autoscaling property, if any. |
secondaryWorkerMinInstanceCount | Specify the number of minimum worker instances to be kept running while scaling. |
secondaryWorkerMaxInstanceCount | Specify the number of maximum worker instances to be kept running while scaling. |
existingDataprocClusterName | Enter the name of the existing Dataproc cluster. NOTE: Use this configuration if you want to use an existing Dataproc cluster; set the value of the createDataProc parameter to false. |
sshPrivateKeyDataproc | The private key of the existing Dataproc cluster. NOTE: The private key should be base64 encoded. |
dataprocUsername | Name of the user. |
createServiceAccount | Change the value to false if you want to use the existing Service Account. |
serviceAccountName | Enter the service account name to be attached to all Kyvos Virtual Machines. |
enableNodeScaling | Set the value to false to disable the addition of permission required for Node Scaling on Service Account. |
createSecretManager | Set the value of this parameter to true to create a new Secret Manager for Kyvos. |
secretManagerName | Provide the name of the existing Secret Manager. |
secretManagerProjectId | Provide the name of the Project ID in which the existing secret manager is created. |
customPrefixNameDataproc | Prefix to be added before the name of the Dataproc cluster. |
customPrefixNameGKE | Prefix to be added before the name of the GKE cluster. |
customPrefixNameBI | Prefix to be added before the name of BI Server virtual machines. |
customPrefixNameBIDisk | Prefix to be added before the name of BI Server disks. |
customPrefixNameQE | Prefix to be added before the name of Query Engine virtual machines. |
customPrefixNameQEDisk | Prefix to be added before the name of Query Engine disks. |
customPrefixNameKM | Prefix to be added before the name of the Kyvos Manager virtual machine. |
customPrefixNameKMDataDisk | Prefix to be added before the name of the Kyvos Manager disk. |
customPrefixNameProxySubNetwork | Prefix to be added before the name of the proxy subnetwork. |
customPrefixNameWebServer | Prefix to be added before the name of the Web Server. |
customPrefixNameWebDataDisk | Prefix to be added before the name of the Web Server data disk. |
enableEncryption | Set the value to true to enable encryption for the deployment. Encryption will be enabled for Secret Manager, google storage buckets, and disks. |
cmkKeyRingName | If this field is left blank, a new CMK key ring will be created. If you want to use an existing key ring, make sure you Bring Your Own Key. The key should be located in the same region as the deployment. NOTE: Bring Your Own Key: Enter the name of your key ring if encryption is set to true. |
cmkKeyName | If this field is left blank, a new CMK key will be created. If you want to use an existing key, make sure you Bring Your Own Key. The key should be located in the same region as the deployment. NOTE: Bring Your Own Key: Enter the name of your key if encryption is set to true. |
bucketName | Enter the name of your bucket. NOTE: If the bucket name does not exist, a new bucket with the same name will be created. |
kyvosWorkDir | Provide the path of the Kyvos work directory. |
kyvosClusterName | Name of the Kyvos cluster to be deployed. |
bundleAccessKey | Key to access Kyvos bundle. |
bundleSecretKey | The secret key for Kyvos bundle. |
sshPublicKey | Text of the SSH public key for authentication. |
sshPrivateKey | Enter the SSH private key text of the pem file. NOTE: The text must be base64 encoded. |
kyvosLicenseFileValue | Enter a valid Kyvos license key. NOTE: It should be base64 encoded. |
additionalLabels | Labels to be added to the resources. |
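Several parameters above (sshPrivateKey, sshPrivateKeyDataproc, kyvosLicenseFileValue) expect base64-encoded text. A minimal sketch of producing such a value on Linux, with an illustrative file name and contents:

```shell
# Illustrative key file; in practice this is your real PEM private key
printf 'fake-key-material\n' > my-key.pem
# Encode as a single base64 line (-w0 disables line wrapping; GNU coreutils)
base64 -w0 my-key.pem > my-key.pem.b64
# Sanity check: decoding should reproduce the original file byte-for-byte
base64 -d my-key.pem.b64 | diff - my-key.pem && echo "round-trip OK"
```

Paste the contents of the .b64 file as the parameter value.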
From the terminal, navigate to the directory where your files are stored; for example, use cd terraform. Then navigate to the bin folder and execute the ./deploy.sh command. This command initializes Terraform, generates a plan, and applies the configuration specified in the kyvosparams.tfvars file.
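The navigation described above amounts to the following shell sequence (directory names follow the layout of the extracted archive):

```shell
# Move into the extracted bundle, then into its bin folder
cd terraform/bin
# Initialize Terraform, generate a plan, and apply the configuration
# from conf/kyvosparams.tfvars
./deploy.sh
```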
Note
Review the output to verify the resources that Terraform will create, modify, or delete.
If you need to interrupt the script while it's running, press Ctrl+Z.
If you need to make changes, update the kyvosparams.tfvars file accordingly.
Upon successful execution of this command, Terraform will display the outputs as specified in the configuration.
Terraform will generate an output.json file containing all outputs. This output will be used for Kyvos Manager configurations.
To delete the previously created deployment resources, execute the ./deploy.sh destroy command from the bin folder.
Note
After successfully executing the configuration, Terraform will automatically generate a .tfstate file. To create a new deployment using the same configuration files, first delete the existing deployment created from them.
To change the sourceImage or kmSourceImage, navigate to the source folder, open the variable.tf file, and update the default value as needed.
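For reference, an image variable in variable.tf typically looks like the standard Terraform block below; the variable name matches the document, but the default image path is a placeholder, not an actual Kyvos image:

```hcl
variable "sourceImage" {
  description = "Source image for Kyvos virtual machines"
  type        = string
  # Placeholder path; replace with the image you want to use
  default     = "projects/IMAGE_PROJECT/global/images/IMAGE_NAME"
}
```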
Possible values in volume type fields:
For SSD type disk: pd-ssd
For standard disk: pd-standard
To use the wizard-based deployment instead, change the value of the kmCount parameter to 0 in the kyvosparams.tfvars file.
Once created, you can validate that the resources meet the requirements for installing Kyvos on the Google Cloud Platform.
Post-deployment, for a non-SSH based cluster, if you use an existing Dataproc cluster and a new bucket for automated deployment on GCP, you must execute the dataproc.sh script on the master node of Dataproc after modifying the values of DEPLOYMENT_BUCKET, WORK_DIR, COPY_LIB, and DATAPROC_VERSION as needed (with DEPLOYMENT_BUCKET set to the name of the existing bucket). Then, sync the library and configuration files from the Kyvos Manager on the Dataproc page.
Using existing Service Account
Once Kyvos resources are created using Kubernetes, execute the following commands using the gcloud CLI to link the Kubernetes Service account to the IAM Service account.
gcloud iam service-accounts add-iam-policy-binding IAM_SA_NAME@IAM_SA_PROJECT_ID.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:PROJECT_ID.svc.id.goog[kyvos-monitoring/default]"
gcloud iam service-accounts add-iam-policy-binding IAM_SA_NAME@IAM_SA_PROJECT_ID.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:PROJECT_ID.svc.id.goog[kyvos-compute/default]"
In the above-mentioned commands, replace the following:
IAM_SA_NAME: The name of your new IAM service account.
IAM_SA_PROJECT_ID: The project ID of your IAM service account.
PROJECT_ID: The project ID of your Google Cloud.
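For instance, with an IAM service account named kyvos-sa in a project named my-kyvos-project (both names are illustrative), the first command becomes:

```shell
# Grant the Kubernetes service account "default" in the "kyvos-monitoring"
# namespace permission to impersonate the IAM service account via
# Workload Identity (account and project names are illustrative)
gcloud iam service-accounts add-iam-policy-binding \
  kyvos-sa@my-kyvos-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-kyvos-project.svc.id.goog[kyvos-monitoring/default]"
```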
Copyright Kyvos, Inc. All rights reserved.