Deploying Kyvos Resources for GCP using Terraform Script


Applies to: Kyvos Enterprise, Kyvos Cloud (SaaS on AWS), Kyvos AWS Marketplace, Kyvos Azure Marketplace, Kyvos GCP Marketplace, and Kyvos Single Node Installation (Kyvos SNI)


Before you begin

In addition to the prerequisites, ensure that the following settings are enabled on your GCP project.

  1. Project Billing: For this, search for Billing in your Google Cloud project.

    1. Click Link a Billing Account, and configure the billing information.
      Once your billing is enabled, you will see an estimate as shown in the following figure.

  2. Compute Engine APIs: Search for Compute Engine APIs on your project, and click the Enable button.
    Once the API is enabled, the corresponding status is displayed, as shown in the following figure.

  3. Cloud Key Management Service (KMS) API: Enable this API, which extends customer control over encryption keys.

  4. Cloud Resource Manager API: Search for Cloud Resource Manager API on your project, and click the Enable button.
    Once the API is enabled, the API Enabled status is displayed, as shown in the following figure.

  5. Enable the following APIs on your project. Refer to the GCP documentation for details.

    1. Cloud Functions

    2. Cloud Build

    3. Cloud Scheduler

  6. Create an App Engine project and select the region where you want to deploy your resources.

  7. Add the storage.buckets.get role to the default Google APIs Service Agent service account. This is required to delete deployments through the Deployment Manager.

  8. Kubernetes Engine API: Search for Kubernetes Engine API on your project and click the Enable button. Once the API is enabled, the API Enabled status is displayed.
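The API prerequisites above can be scripted instead of clicked through one by one. The sketch below prints the gcloud commands that enable each required API; the service IDs are the standard Google API identifiers, but verify them against your project before running (pipe the output to sh, or run each line, once gcloud is authenticated against the right project).

```shell
# Print the gcloud commands that enable the APIs listed above.
# Service IDs are the standard Google API identifiers; verify before running.
APIS="compute.googleapis.com
cloudkms.googleapis.com
cloudresourcemanager.googleapis.com
cloudfunctions.googleapis.com
cloudbuild.googleapis.com
cloudscheduler.googleapis.com
container.googleapis.com"

for api in $APIS; do
  echo "gcloud services enable $api"
done
```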

Creating resources using script

  1. Download the terraform.tar file on your workstation.

  2. On your workstation, install the gcloud command-line tool.

  3. To execute Terraform on Google Cloud Platform's Cloud Shell, activate Cloud Shell.

  4. Configure the gcloud command-line tool to use your project using the following command.
    gcloud config set project [MY_PROJECT]
    Here, replace [MY_PROJECT] with your project ID.

Note

After opening the terminal in Cloud Shell, ensure that Cloud Shell is configured to operate within the same project where you intend to deploy your resources.
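As a quick safeguard, you can compare the active gcloud project against the one you intend to deploy into. The helper below is hypothetical (not part of the Kyvos bundle) and assumes gcloud is installed and authenticated.

```shell
# Hypothetical helper (not part of the Kyvos bundle): warn if the active
# gcloud project differs from the intended deployment project.
check_project() {
  intended="$1"
  active=$(gcloud config get-value project 2>/dev/null)
  if [ "$active" != "$intended" ]; then
    echo "WARNING: active project is '$active', expected '$intended'"
    return 1
  fi
  echo "OK: Cloud Shell is using '$intended'"
}
# Usage: check_project my-kyvos-project
```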

  1. Copy the terraform.tar file and untar it. The following subdirectories and files are displayed.

  2. Access the kyvosparams.tfvars file located in the conf directory, and configure the parameters as needed for your deployment.

  3. Enter details for Kyvos resources with Kubernetes and Dataproc in the kyvosparams.tfvars file:

Parameter

Description


projectId

Enter the project ID.

NOTE: This project ID will be used for the Kyvos deployment.

region

Enter the region to deploy Kyvos resources.

existingVpcName

Enter the name of the existing VPC.

NOTE: If this field is left blank, a new VPC will be created.

existingvpcProjectId

Enter the project ID of the existing VPC.

existingSubnetworkName

Enter the name of the existing VPC subnet.

createNetworkFirewall

To create firewall rules, set the value of this parameter to true.

NOTE: If the value of createVPC is set to true, firewall rules will be created unconditionally. 

gkeSubnetName

Enter the name of an existing subnet in which you want to deploy the GKE cluster. If left blank, the subnetwork name will be used.

secondaryRangeName1

Enter the Secondary IPv4 ranges name for GKE Cluster creation.

NOTE: This must be preconfigured if using an existing VPC.
The range should have a minimum masking of /22.

For more information, see Google documentation.

secondaryRangeName2

Enter the Secondary IPv4 ranges name for GKE Cluster creation.

NOTE: This must be preconfigured if using an existing VPC.
The range should have a minimum masking of /22.

For more information, see Google documentation.

secondaryRange1

Enter the Secondary IPv4 ranges for GKE Cluster creation in the case of existing VPC.

secondaryRange2

Enter the Secondary IPv4 ranges for GKE Cluster creation in the case of existing VPC.

customPrefixNameVPC

If the value of the existing VPC name is left blank, enter the prefix name to be used for VPC to be created.

customPrefixNameSubNetwork

If the value of the existing subnetwork name is left blank, enter the prefix name to be used for the subnetwork to be created.

ipCidrRange

Enter the value of ipCidrRange if the value of the existing VPC name is left blank.

vpcConnectorName

Enter the name of the VPC Connector to be used with GCP functions. 

kmCount

The number of Kyvos Manager instances to be launched.

kmInstanceType

Instance type of Kyvos Manager (n2-standard-4). 

kmVolumeSizeGB

Size of the disk to be attached to the Kyvos Manager.

kmVolumeType

Type of the disk for Kyvos Manager (pd-ssd).

hostNameBasedDeployment

Change the value to true to use the hostname for the cluster deployment.

createLoadBalancer

Set the value as true to create load balancer.

By default, the value is set as false.

enableWebServerHA

Set the value as true for enabling webserver High Availability.

By default, the value is set as false.

webServerInstanceType

Configure the Web Server instance type.

webServerVolumeSizeGB

Size of the disk to be attached for the Web Server.

webServerVolumeType

Type of the disk for the Web Server (pd-ssd).

qeCount

The number of instances to be used as query engines.

qeInstanceType

Instance type of query engine (n2-highmem-4).

qeDataVolumeSizeGB

Size of the disk to be attached with query engines.

qeCacheVolumeSizeGB

Size of the disk to be attached for the cache.

qeCacheVolumeCount

The number of disks to be attached to the cache.

qeCacheVolumeType

Type of the disk for the cache (pd-ssd).

biCount

Enter the number of instances to be used as the BI server.

biInstanceType

Instance type of BI Server (n2-standard-8).

biVolumeCount

The number of disks to be attached to the BI Server.

biVolumeSizeGB

Size of the disk to be attached to the BI Server.

biVolumeType

Type of the disk for BI Server (pd-ssd).

createGcpFunctions

Set the value as true to configure GCP Functions in Kyvos.

dataprocMetastoreURI

Enter the Metastore URI if you want to deploy Kyvos with no Spark configuration.

createGKE

Enter the value as True or False.

  • True: To create a Kubernetes cluster.

gkeWorkerInstancetype

Enter the worker node instance type for the Kubernetes cluster.

NOTE: n2-standard-16 is the minimum configuration. Instance types smaller than this are not supported.

existingGkeClusterName 

Enter the name of the existing GKE cluster.
NOTE: Set the value of the createGKE parameter to false.

existingNodePoolName 

Enter the node pool name of the existing GKE cluster.

sharedK8sNodePool

Select the value as true to use shared K8s node pool.

existingGKERange

Enter the secondary IP range used in the existing GKE cluster if the VPC used by the GKE cluster is different from the one used for the deployment.

existingGKEserviceAccountName

Enter the name of service account used in the existing GKE cluster if the service account used by the GKE cluster differs from the one used for the deployment.

minPodCount

Enter minimum pod count. 

NOTE:

  • If the values of minPodCount and maxWorkerNodeCount are the same, scaling will be disabled.

  • The value of the minPodCount parameter cannot be greater than the value of the maxWorkerNodeCount parameter.

  • If the value of minPodCount is less than maxWorkerNodeCount, scaling will be enabled.

maxWorkerNodeCount

Enter the maximum worker node count.

kyvosComputeWorkerNamespace

Enter the name of the Kyvos Compute Worker namespace.

minPodCountExistingNodePool

Enter the minimum pod count of the existing GKE cluster.

maxWorkerNodeCountExistingNodePool

Enter the maximum worker node count of the existing cluster.

createDataProc

Enter the value as True or False.

  • True: If you want to deploy Kyvos with Spark Configuration.

enableComponentGateway

Set the value of enableComponentGateway to true to get a publicly accessible URL for Dataproc.

sharedDataprocCluster

Select true to use the shared Dataproc cluster. In this case, Kyvos will not manage the Dataproc cluster. Select false to use the on-demand Dataproc cluster. In this case, the Dataproc cluster will automatically start or stop.

dataProcNetworkTags

Provide a list of comma-separated network tags to be added to the Dataproc cluster.

Example: dataProcNetworkTags : ["abc","xyz"]

enableSshFlag

Set the value to true to enable SSH to the Dataproc cluster.

enableLivy

Set the value to true if using Dataproc version 2.1.11-debian11.

masterInstanceCount

The number of master nodes. For example, 1.

masterInstanceType

Instance type of master node (n2-highmem-4)

masterInstanceVolumeType

Type of the disk for master node (pd-ssd)

workerInstanceCount

The number of worker instances.

workerInstanceType

Instance type of worker node (n2-highmem-8)

workerInstanceVolumeType

Type of the disk for worker node (pd-ssd)

enableDataProcMetastore

Set the value as true to allow external Dataproc metastore.
NOTE: An existing metastore is not supported if the value of createVPC is set to true.

dataProcMetastoreProjectId

If enableDataProcMetastore is set to true, provide the metastore project ID.

dataProcMetaStoreName

Provide the name of the metastore.

dataProcVersion

Supported version is 2.1.11-debian11

enableAutoScaling

Set the value as true to enable the autoscaling of cluster nodes.

existingAutoScalingPolicyName

Provide the name of the existing autoscaling policy, if any.
NOTE: Use this configuration only if enableAutoScaling is set as true.

secondaryWorkerMinInstanceCount

Specify the minimum number of worker instances to be kept running while scaling.
NOTE: Use this configuration only if enableAutoScaling is set as true.

secondaryWorkerMaxInstanceCount

Specify the maximum number of worker instances to be kept running while scaling.
NOTE: Use this configuration only if enableAutoScaling is set as true.

existingDataprocClusterName

Enter the name of the existing Dataproc cluster. 

NOTE: Use this configuration if you want to use an existing Dataproc cluster, and set the value of the createDataProc parameter to false.

sshPrivateKeyDataproc

The private key of the existing Dataproc cluster.

NOTE: The private key should be base64 encoded.

dataprocUsername

Name of the user. 

createServiceAccount

Change the value to false if you want to use an existing Service Account.

serviceAccountName

Enter the service account name to be attached to all Kyvos Virtual Machines. 

enableNodeScaling

Set the value to false to disable the addition of permission required for Node Scaling on Service Account.

createSecretManager

Set the value of this parameter to true to create a new Secret Manager for Kyvos.

secretManagerName

Provide the name of the existing Secret Manager.

secretManagerProjectId

Provide the name of the Project ID in which the existing secret manager is created.

customPrefixNameDataproc

A prefix is to be added before the name of Dataproc.

customPrefixNameGKE

A prefix is to be added before GKE.

customPrefixNameBI

A prefix is to be added before the name of BI virtual machines.

customPrefixNameBIDisk

A prefix is to be added before the name of BI Disks.

customPrefixNameQE

A prefix is to be added before the name of Query Engine virtual machines.

customPrefixNameQEDisk

A prefix is to be added before the name of Query Engine disks.

customPrefixNameKM

A prefix is to be added before the name of the Kyvos Manager virtual machine. 

customPrefixNameKMDataDisk

A prefix is to be added before the name of the Kyvos Manager disk.

customPrefixNameProxySubNetwork

A prefix is to be added before the name of proxy sub network.

customPrefixNameWebServer

A prefix is to be added before the name of web server.

customPrefixNameWebDataDisk

A prefix is to be added before the name of web data disk.

enableEncryption

Set the value to true to enable encryption for the deployment. Encryption will be enabled for Secret Manager, google storage buckets, and disks.

cmkKeyRingName

If this field is left blank, a new CMK key ring will be created. To use an existing key ring, make sure you Bring Your Own Key.

The key ring should be located in the same region as the deployment.

NOTE: Bring Your Own Key: Enter the name of your key ring if enableEncryption is set to true.

cmkKeyName

If this field is left blank, a new CMK key will be created. To use an existing key, make sure you Bring Your Own Key.

The key should be located in the same region as the deployment.

NOTE: Bring Your Own Key: Enter the name of your key if enableEncryption is set to true.

bucketName

Enter the name of your bucket.

NOTE: If the bucket name does not exist, a new bucket with the same name will be created.

kyvosWorkDir

Provide the path of the Kyvos work directory.

kyvosClusterName

Name of the Kyvos cluster to be deployed.

bundleAccessKey

Key to access Kyvos bundle.

bundleSecretKey

The secret key for Kyvos bundle.

sshPublicKey

Text of the SSH public key for authentication.

sshPrivateKey

Enter the SSH private Key text of the pem file.

NOTE: The text must be base64 encoded.

kyvosLicenseFileValue

Enter a valid Kyvos license key.

NOTE: It should be base64 encoded.

additionalLabels

Labels to be added to the resources. 
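Before running the script, it can help to see the parameters in context. The fragment below is an illustrative sketch of a kyvosparams.tfvars file: every value is a placeholder, only a subset of the parameters from the table is shown, and the file shipped in your bundle's conf directory is authoritative. Remember that sshPrivateKey and kyvosLicenseFileValue must be base64 encoded (for example, via base64 -w0 key.pem).

```hcl
# Illustrative values only -- copy the real parameter names and defaults
# from conf/kyvosparams.tfvars in your bundle.
projectId        = "my-kyvos-project"
region           = "us-central1"
existingVpcName  = ""            # blank => a new VPC is created
createGKE        = true
createDataProc   = false
kmCount          = 1
kmInstanceType   = "n2-standard-4"
biCount          = 1
biInstanceType   = "n2-standard-8"
qeCount          = 2
qeInstanceType   = "n2-highmem-4"
bucketName       = "my-kyvos-bucket"
kyvosClusterName = "kyvos-cluster"
sshPublicKey     = "ssh-rsa AAAA..."
sshPrivateKey    = "<base64-encoded PEM text>"
kyvosLicenseFileValue = "<base64-encoded license>"
```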

  1. From the terminal, navigate to the directory where your files are stored, for example, cd terraform. Then navigate to the bin folder and execute the ./deploy.sh command. This command will initialize Terraform, generate a plan, and apply the configuration as specified in the kyvosparams.tfvars file.

Note
Review the output to verify the resources Terraform will create, modify, or delete.

  1. If you need to interrupt the script while it's running, press Ctrl+Z.

  2. If you need to make modifications to the kyvosparams.tfvars file, do so accordingly.

  1. Upon successful execution of this command, Terraform will display the outputs as specified in the configuration.

  2. Terraform will generate an output.json file containing all outputs. This output will be used for Kyvos Manager configurations.

  3. To delete the previously created deployment resources, execute the ./deploy.sh destroy command from the bin folder.
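The deploy and destroy steps above reduce to a standard Terraform workflow. The sketch below is an assumption about what ./deploy.sh wraps, not its actual contents; inspect the shipped script for the authoritative behavior, and note that the -var-file path is a guess based on the conf directory mentioned earlier.

```shell
# Assumed shape of the ./deploy.sh workflow (verify against the real script).
run_deploy() {
  terraform init &&
  terraform plan -var-file=../conf/kyvosparams.tfvars -out=tfplan &&
  terraform apply tfplan
}

run_destroy() {
  terraform destroy -var-file=../conf/kyvosparams.tfvars
}
```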

Note

  • After successfully executing the configuration, Terraform will automatically generate a .tfstate file. To create a new file using the same configuration files, first delete the existing deployment configured by those files.

  • To change the sourceImage or kmSourceImage, navigate to the source folder, open the variable.tf file, and update the default value as needed.

  • Possible values in volume type fields:

    • For SSD type disk: pd-ssd

    • For standard disk: pd-standard

  • Change the value of the kmCount parameter to 0 in the kyvosparams.tfvars file to use the wizard-based deployment.

  • Once created, you can validate that the resources meet the requirements for installing Kyvos on the Google Cloud Platform.

  • Post-deployment, for a non-SSH based cluster, if you use an existing Dataproc cluster and a new bucket for automated deployment on GCP, you must execute the dataproc.sh script on the master node of Dataproc after modifying the values of DEPLOYMENT_BUCKET, WORK_DIR, COPY_LIB, and DATAPROC_VERSION to match the existing bucket. Then, sync the library and configuration files from the Kyvos Manager on the Dataproc page.
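Those NAME=value edits to dataproc.sh can be done with a small sed helper like the one below. The variable names come from the note above; the helper itself is illustrative and assumes each variable sits on its own line as NAME=value, so verify the layout against your copy of dataproc.sh.

```shell
# Illustrative helper: replace a NAME=value assignment in a shell script.
# Assumes each variable sits on its own line as NAME=value.
set_script_var() {
  file="$1"; name="$2"; value="$3"
  sed -i "s|^${name}=.*|${name}=${value}|" "$file"
}
# e.g., on the Dataproc master node:
#   set_script_var dataproc.sh DEPLOYMENT_BUCKET my-existing-bucket
```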

 


Copyright Kyvos, Inc. All rights reserved.