...
To create Kyvos resources, read the following:
Prerequisites to deploy Kyvos
Prerequisites to run Terraform from GCP cloud shell
Prerequisites to run Terraform from Local Machine
Automated resource creation using Terraform from GCP
Automated resource creation using Terraform from Local Machine
...
Prerequisites to deploy Kyvos
You need a valid Google Cloud Platform account. This account will be used to authenticate Terraform to interact with GCP resources.
The following permissions must be given to the logged-in user account:
To use an existing service account for deployments, add the cloudfunctions.admin role. Additionally, the following roles and permissions are required:
Editor Role
Secret Manager Admin
Storage Object Admin
storage.buckets.get
storage.buckets.update
storage.objects.update
Google Console users must have the privilege to launch Google resources such as instances, Dataproc clusters, Google Storage, and disks in the project.
Logged-in users must have the privilege to run gcloud in GCP.
Create a custom role and assign the following permissions to it. Ensure that the custom role is attached to the logged-in user account.
iam.roles.create
iam.serviceAccounts.setIamPolicy
resourcemanager.projects.setIamPolicy
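The custom role described above can be created and attached from the command line; a sketch follows, in which the project ID, user email, and role ID are all placeholders:

```shell
# Sketch only; PROJECT_ID, USER_EMAIL, and the role ID kyvosCustomRole are
# placeholders, not names from the Kyvos documentation.
PROJECT_ID="my-gcp-project"
USER_EMAIL="admin@example.com"

# Create the custom role with the three permissions listed above.
gcloud iam roles create kyvosCustomRole \
  --project="$PROJECT_ID" \
  --title="Kyvos Custom Role" \
  --permissions="iam.roles.create,iam.serviceAccounts.setIamPolicy,resourcemanager.projects.setIamPolicy"

# Attach the custom role to the logged-in user account.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="user:$USER_EMAIL" \
  --role="projects/$PROJECT_ID/roles/kyvosCustomRole"
```

The same role can equally be created through the console (Roles > Create Role); the CLI form is convenient when scripting the prerequisites.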
For additional permissions, refer to the Prerequisites for deploying Kyvos in a GCP environment using Deployment Manager section, Step 2 to Step 27.
To use an existing VPC for deployments, the permissions outlined in the Prerequisites for deploying Kyvos in a GCP environment section are required.
To use an existing bucket for deployments, the permissions outlined in the Prerequisites for deploying Kyvos in a GCP environment section are required.
...
Download and install Terraform on your local machine.
To install Terraform, refer to the Terraform documentation.
Execute the terraform init command to verify that Terraform is installed successfully.
jq must be installed on your local machine.
You need a GCP account to create and manage resources. Ensure that you have the necessary permissions.
Configure GCP on your local machine.
For gcloud initialization, refer to the Google documentation.
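The prerequisite checks above can be run from a terminal; a quick sketch (assuming Terraform, jq, and the gcloud CLI are on your PATH):

```shell
terraform -version   # verify the Terraform installation
terraform init       # run inside your Terraform directory to confirm init works
jq --version         # verify jq is installed
gcloud init          # interactive: configure gcloud with your GCP account and project
```

Note that gcloud init is interactive and prompts for the account and project to use.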
...
To create resources using Terraform from GCP, perform the following steps.
To execute Terraform in Google Cloud Platform's Cloud Shell, activate Cloud Shell, and then click Open Editor to create the necessary folders.
Create a directory named terraform and add subdirectories and files according to the following specifications:
Access the kyvosparams.tfvars file located in the conf directory, and configure the parameters as needed for your deployment.
In the Cloud Shell interface on Google Cloud Platform, open a new terminal by clicking on the terminal icon located on the left-hand side.
Note: After opening the terminal in Cloud Shell, ensure that Cloud Shell is configured to operate within the same project where you intend to deploy your resources.
From the terminal, navigate to the directory where your files are stored, for example, cd terraform. Then navigate to the bin folder and execute the ./deploy.sh command. This command initializes Terraform, generates a plan, and applies the configuration specified in the kyvosparams.tfvars file.
Review the output to ensure Terraform will create, modify, or delete the resources as expected.
If you need to interrupt the script while it's running, press Ctrl+Z.
If you need to make modifications to the kyvosparams.tfvars file, do so accordingly.
Upon successful execution of this command, Terraform will display the outputs as specified in the configuration.
Terraform will generate an output.json file containing all outputs, which Kyvos Manager will utilize for configurations.
To destroy your entire deployment, simply execute the ./deploy.sh destroy command.
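As a worked sketch of consuming the generated outputs, the snippet below builds a mock output.json (the key names bucket_name and vm_ip are hypothetical; your configuration's outputs will differ) and extracts a value with jq, the way a follow-on script might:

```shell
# Hypothetical output.json in the shape Terraform's -json output uses:
# each output is an object whose "value" field holds the actual value.
cat > output.json <<'EOF'
{
  "bucket_name": {"value": "kyvos-demo-bucket"},
  "vm_ip": {"value": "10.0.0.5"}
}
EOF

# Extract a single output value with jq:
jq -r '.bucket_name.value' output.json   # prints: kyvos-demo-bucket
```

Kyvos Manager reads the real output.json produced by the deployment; the jq pattern above is only illustrative.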
...
To create resources using Terraform from Local Machine, perform the following steps.
Open a terminal or command prompt on your local machine.
Navigate to your Terraform configuration directory (where your .tf files are located).
Create a directory named terraform and add subdirectories and files according to the following specifications:
Access the kyvosparams.tfvars file located in the conf directory, and configure the parameters as needed for your deployment
Navigate to the bin folder and execute the ./deploy.sh command. This command initializes Terraform, generates a plan, and applies the configuration specified in the kyvosparams.tfvars file.
Review the output to ensure Terraform will create, modify, or delete the resources as expected.
If you need to interrupt the script while it's running, press Ctrl+Z.
If you need to make modifications to the kyvosparams.tfvars file, do so accordingly.
Upon successful execution of this command, Terraform will display the outputs as specified in the configuration.
To destroy your entire deployment, simply execute the ./deploy.sh destroy command.
...
To run deployment with encryption, set the value of the enableEncryption parameter to true.
To run deployment with encryption using a new CMK, the subnet must have a minimum mask range of /22.
Subnets in which Kubernetes cluster is launched should have connectivity to the subnets in which Kyvos instances are launched.
When using an existing VPC, ensure that the subnet has two secondary IP ranges with valid mask ranges, as these will be used by the Kubernetes cluster.
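The subnet layout described above can be sketched with gcloud; every name and CIDR range below is a placeholder to be replaced with values from your own VPC plan:

```shell
# Sketch: a subnet with two secondary IP ranges for the Kubernetes cluster.
# Names (kyvos-subnet, my-vpc, pods, services) and CIDRs are placeholders.
gcloud compute networks subnets create kyvos-subnet \
  --network=my-vpc \
  --region=us-central1 \
  --range=10.0.0.0/22 \
  --secondary-range=pods=10.4.0.0/16,services=10.8.0.0/20
```

The two secondary ranges are what GKE consumes for Pod and Service IPs; size them according to your expected cluster scale.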
Click Roles > Create new role. Provide a name such as Kyvos-role for the storage service, and assign the following permissions. This role must be attached to the Kyvos service account.
Add the following predefined roles to the service account used by the Kyvos cluster.
BigQuery data viewer
BigQuery user
Dataproc Worker
Cloud Functions Admin
Cloud Scheduler Admin
Cloud Scheduler Service Agent
Service Account User
Logs Writer
Workload Identity User
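Granting the predefined roles above can be scripted; a sketch follows, in which PROJECT_ID and SA_EMAIL are placeholders for your project and the Kyvos cluster's service account:

```shell
# Sketch only; PROJECT_ID and SA_EMAIL are placeholders.
PROJECT_ID="my-gcp-project"
SA_EMAIL="kyvos-sa@my-gcp-project.iam.gserviceaccount.com"

# Grant each predefined role listed above to the service account.
for ROLE in roles/bigquery.dataViewer roles/bigquery.user roles/dataproc.worker \
            roles/cloudfunctions.admin roles/cloudscheduler.admin \
            roles/cloudscheduler.serviceAgent roles/iam.serviceAccountUser \
            roles/logging.logWriter roles/iam.workloadIdentityUser; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:$SA_EMAIL" --role="$ROLE"
done
```

Each display name in the list maps to the role ID used in the loop (for example, Logs Writer is roles/logging.logWriter).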
Permissions for Cross-Project Datasets Access with BigQuery:
Use the same service account that is being used by Kyvos VMs.
Grant the following roles to that service account on the BigQuery project:
BigQuery Data Viewer
BigQuery User
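The cross-project grant above amounts to two IAM bindings on the BigQuery project; a sketch, with BQ_PROJECT_ID and SA_EMAIL as placeholders:

```shell
# Sketch: cross-project BigQuery access. BQ_PROJECT_ID is the project that
# owns the datasets; SA_EMAIL is the Kyvos VMs' service account. Both are
# placeholders.
BQ_PROJECT_ID="bigquery-data-project"
SA_EMAIL="kyvos-sa@my-gcp-project.iam.gserviceaccount.com"

gcloud projects add-iam-policy-binding "$BQ_PROJECT_ID" \
  --member="serviceAccount:$SA_EMAIL" --role="roles/bigquery.dataViewer"
gcloud projects add-iam-policy-binding "$BQ_PROJECT_ID" \
  --member="serviceAccount:$SA_EMAIL" --role="roles/bigquery.user"
```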
Prerequisites for Cross-Project BigQuery setup and Kyvos VMs.
Use the same service account that is being used by Kyvos VMs.
To the service account used by Kyvos VMs, give the following roles on the BigQuery Project:
BigQuery Data Viewer
BigQuery User
For accessing BigQuery Views, add the following permissions to the Kyvos custom role (created above).
bigquery.tables.create
bigquery.tables.delete
bigquery.tables.update
bigquery.tables.updateData
Permissions to generate temporary views in a separate dataset when performing the validation/preview operation from Kyvos on Google BigQuery:
bigquery.tables.create: to create a new table
bigquery.tables.updateData: to write data to a new table, overwrite a table, or append data to a table
Prerequisites to run Terraform from local machine
Download and install Terraform on your local machine.
To install Terraform, refer to the Terraform documentation.
Execute the terraform init command to verify that Terraform is installed successfully.
jq must be installed on your local machine.
You need a GCP account to create and manage resources. Ensure that you have the necessary permissions.
Configure GCP on your local machine.
For gcloud initialization, refer to the Google documentation.
Prerequisites to use Customer Managed Key (CMK) or Bring Your Own Key (BYOK) deployment
To use an existing service account for deployments, the following predefined roles are needed on the Kyvos Service Account:
roles/cloudkms.cryptoKeyEncrypter (Cloud KMS CryptoKey Encrypter)
roles/cloudkms.cryptoKeyDecrypter (Cloud KMS CryptoKey Decrypter)
roles/cloudkms.cryptoKeyEncrypterDecrypter (Cloud KMS CryptoKey Encrypter/Decrypter)
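Binding these KMS roles on a key can be sketched with gcloud; the project, keyring, key, location, and service account names below are all placeholders:

```shell
# Sketch: grant a Cloud KMS role on an existing key to the Kyvos service
# account. All names are placeholders; repeat with the other roles as needed.
PROJECT_ID="my-gcp-project"
SA_EMAIL="kyvos-sa@my-gcp-project.iam.gserviceaccount.com"

gcloud kms keys add-iam-policy-binding my-key \
  --project="$PROJECT_ID" \
  --location=us-central1 \
  --keyring=my-keyring \
  --member="serviceAccount:$SA_EMAIL" \
  --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"
```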
To use the BYOK (Bring Your Own Key) feature:
The service agent must be present in the project where the user is going to deploy, so that Google Cloud Storage and Secret Manager can be created. For more details, refer to the Google documentation.
To use an existing key, specify cmkKeyRingName and cmkKeyName in the parameters file.
To use an existing service account for deployments, the following permissions are needed:
roles/cloudkms.cryptoKeyEncrypter
roles/cloudkms.cryptoKeyDecrypter
roles/cloudkms.cryptoKeyEncrypterDecrypter
...
Additional permissions required to run auto scaling for GCP Enterprise
Apart from the existing permissions mentioned in the Creating a service account from Google Cloud Console section, the following permissions are required for GCP Enterprise:
Permissions required in GCP
compute.instanceGroups.get
compute.instances.create
compute.disks.create
compute.disks.use
compute.subnetworks.use
compute.instances.setServiceAccount
compute.instances.delete
compute.instanceGroups.update
compute.instances.use
compute.instances.detachDisk
compute.disks.delete
compute.instances.attachDisk
Conditional permission needed if using Shared Network
compute.subnetworks.use (on the Kyvos service account in the project where your network resides)
Prerequisites to deploy Kyvos using Kubernetes
See the Prerequisites to deploy Kyvos using Dataproc section for the complete set of permissions required for deploying Kyvos.
Additionally, for creating a GKE cluster, you must complete the following prerequisites.
Create a GKE cluster
Ensure that the GKE service agent’s default service account (service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com) has the Kubernetes Engine Service Agent role attached to it.
Existing Virtual Network
Using an existing Virtual Network for creating a GKE cluster requires two secondary IPv4 ranges in the subnet. Additionally, if using a shared Virtual Network, the following roles and permissions are required for the default Kubernetes service account (service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com) on the project of the shared Virtual Network.
Compute Network User
kubernetes_role: You must create a custom role. To do this, click Roles > Create new role. Provide a name such as kubernetes_role, assign the following permissions, and then attach the role to the service account:
Ports 2181, 45460, and 6903 must be allowed in the firewall inbound rules for all internal communication between the Kubernetes cluster and Kyvos.
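An inbound rule covering those ports can be sketched with gcloud; the rule name, network, and source range below are placeholders to be adapted to your VPC:

```shell
# Sketch: allow the Kubernetes/Kyvos internal ports. Rule name, network, and
# source range are placeholders; scope the source range to your subnets.
gcloud compute firewall-rules create kyvos-k8s-internal \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:2181,tcp:45460,tcp:6903 \
  --source-ranges=10.0.0.0/8
```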
Existing (IAM) Service account
Add the following predefined roles to the existing IAM service account:
Service Account Token Creator
Kubernetes Engine Developer
Kubernetes Engine Cluster Admin
Add the following permissions to the kubernetes_role custom role that you created above.
compute.instanceGroupManagers.update
compute.instanceGroupManagers.get
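Adding the two permissions to the existing custom role can be done in place with gcloud; the project ID below is a placeholder, and the role ID assumes the custom role was created as kubernetes_role:

```shell
# Sketch: extend the kubernetes_role custom role with the two
# instanceGroupManagers permissions. PROJECT_ID is a placeholder.
gcloud iam roles update kubernetes_role \
  --project="my-gcp-project" \
  --add-permissions="compute.instanceGroupManagers.update,compute.instanceGroupManagers.get"
```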