...

To create Kyvos resources, read the following:

...

Prerequisites to deploy Kyvos

  • You need a valid Google Cloud Platform account. This account will be used to authenticate Terraform to interact with GCP resources.

  • The following permissions must be given to the logged-in user account. To use an existing service account for deployments, it must have these permissions:
    • Editor Role

    • Secret Manager Admin

    • Storage Object Admin

    • storage.buckets.get

    • storage.buckets.update

    • storage.objects.update

  • Google Console users must have the privilege to launch Google resources such as instances, Dataproc clusters, Google Storage, and disks in the project.

  • Logged-in users must have the privilege to launch gcloud in GCP.

    • Create a custom role and assign the permissions below to it. Ensure that the custom role is attached to the logged-in user account.

      • iam.roles.create  

      • iam.serviceAccounts.setIamPolicy

      • resourcemanager.projects.setIamPolicy
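For illustration, the custom role and its binding could be created with gcloud as follows; kyvosDeployerRole, PROJECT_ID, and USER_EMAIL are placeholder names, not values mandated by this guide:

```shell
# Create a custom role carrying the three permissions listed above.
gcloud iam roles create kyvosDeployerRole --project=PROJECT_ID \
  --title="Kyvos Deployer Role" \
  --permissions=iam.roles.create,iam.serviceAccounts.setIamPolicy,resourcemanager.projects.setIamPolicy

# Attach the custom role to the logged-in user account.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="projects/PROJECT_ID/roles/kyvosDeployerRole"
```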

  • For additional permissions, refer to Step 2 through Step 27 of the Prerequisites for deploying Kyvos in a GCP environment using Deployment Manager section.

  • When using an existing VPC for deployments, it must have the permissions outlined in the Prerequisites for deploying Kyvos in a GCP environment section, and the subnet must have a minimum mask range of /22.

  • When using an existing bucket for deployments, it must have the permissions outlined in the Prerequisites for deploying Kyvos in a GCP environment section.

  • Subnets in which Kubernetes cluster is launched should have connectivity to the subnets in which Kyvos instances are launched.

  • When using an existing VPC, ensure that the subnet has two secondary IP ranges with valid mask ranges, as these will be used by the Kubernetes cluster.

  • Click Roles > Create new role. Provide a name like Kyvos-role for the storage service, and assign the following permissions. This role should be attached to the Kyvos service account.

  • deploymentmanager.deployments.list

  • deploymentmanager.resources.list

  • deploymentmanager.manifests.list

  • cloudfunctions.functions.get

  • dataproc.clusters.list

  • dataproc.clusters.get

  • compute.disks.setLabels

  • compute.instances.start

  • compute.instances.stop

  • compute.instances.list

  • compute.instances.setLabels

  • storage.buckets.get

  • storage.buckets.list

  • storage.objects.create

  • storage.objects.delete

  • storage.buckets.update

  • compute.disks.get

  • compute.instances.get

  • dataproc.clusters.update

  • storage.objects.get

  • storage.objects.list

  • storage.objects.update

  • cloudfunctions.functions.update

  • compute.subnetworks.get

  • resourcemanager.projects.getIamPolicy

  • compute.firewalls.list

  • iam.roles.get  

  • compute.machineTypes.get  

  • compute.machineTypes.list  

  • compute.instances.setMachineType

  • compute.instances.setMetadata
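One way to create this role is from a YAML definition file; a sketch with placeholder names (kyvos-role.yaml, kyvosRole, PROJECT_ID), showing only a few of the permissions listed above:

```shell
# Role definition; extend includedPermissions with the full list above.
cat > kyvos-role.yaml <<'EOF'
title: Kyvos-role
stage: GA
includedPermissions:
- deploymentmanager.deployments.list
- dataproc.clusters.list
- compute.instances.start
- compute.instances.stop
- storage.buckets.get
EOF

gcloud iam roles create kyvosRole --project=PROJECT_ID --file=kyvos-role.yaml
```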

  • Add the following predefined roles to the service account used by the Kyvos cluster.

    • BigQuery data viewer

    • BigQuery user

    • Dataproc Worker

    • Cloud Functions Admin

    • Cloud Scheduler Admin

    • Cloud Scheduler Service Agent

    • Service Account User

    • Logs Writer

    • Workload Identity User
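Assuming the usual role IDs behind these display names (worth verifying against the IAM roles reference), the bindings could be added in a loop; PROJECT_ID and SA_EMAIL are placeholders for your project and the Kyvos service account:

```shell
for role in roles/bigquery.dataViewer roles/bigquery.user \
            roles/dataproc.worker roles/cloudfunctions.admin \
            roles/cloudscheduler.admin roles/iam.serviceAccountUser \
            roles/logging.logWriter roles/iam.workloadIdentityUser; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_EMAIL" --role="$role"
done
```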

  • Permissions for Cross-Project Datasets Access with BigQuery:

    1. Use the same service account that is being used by Kyvos VMs.

    2. Give the following roles to the above-created service account on the BigQuery Project.

      • BigQuery Data Viewer

      • BigQuery User


  • For accessing BigQuery Views, add the following permissions to the Kyvos custom role (created above).

    • bigquery.tables.create

    • bigquery.tables.delete

    • bigquery.tables.update

    • bigquery.tables.updateData

  • Permissions to generate temporary views in a separate dataset when performing the validation/preview operation from Kyvos on Google BigQuery:

    • bigquery.tables.create: to create a new table

    • bigquery.tables.updateData: to write data to a new table, overwrite a table, or append data to a table
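The BigQuery view permissions listed above can be appended to an existing custom role with gcloud; a sketch, where kyvosRole and PROJECT_ID are placeholder identifiers:

```shell
# Add the BigQuery view/temporary-view permissions to the custom role created earlier.
gcloud iam roles update kyvosRole --project=PROJECT_ID \
  --add-permissions=bigquery.tables.create,bigquery.tables.delete,bigquery.tables.update,bigquery.tables.updateData
```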

Prerequisites to run Terraform from local machine

  • Download and install Terraform on your local machine.

  • To install Terraform, refer to the Terraform documentation.

  • Execute the terraform init command to verify that Terraform was installed successfully.

  • jq must be installed on your local machine.

  • You need a GCP account to create and manage resources. Ensure that you have the necessary permissions.

  • Configure GCP on your local machine.

  • For gcloud initialization, refer to the Google documentation.
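A quick way to check the local toolchain before running the scripts:

```shell
terraform -version    # verifies Terraform is installed and on PATH
jq --version          # verifies jq is installed
gcloud init           # interactive gcloud setup (account, project, region)
```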

...

Prerequisites to use Customer Managed Key (CMK) or Bring Your Own Key (BYOK) deployment

...

To create resources using Terraform from GCP, perform the following steps.

...

To execute Terraform on Google Cloud Platform's Cloud Shell, activate Cloud Shell, then click Open Editor to create the necessary folders.

...

Create a directory named terraform and add subdirectories and files according to the following specifications:
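The layout can be created from the terminal; bin and conf follow the folder names used later in this guide:

```shell
# Create the deployment directory tree.
mkdir -p terraform/bin terraform/conf
# Place deploy.sh in terraform/bin and kyvosparams.tfvars in terraform/conf.
ls -R terraform
```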

...

Access the kyvosparams.tfvars file located in the conf directory, and configure the parameters as needed for your deployment.

...


  • To use an existing service account for deployments, the following predefined roles are needed on the Kyvos service account:

    • Cloud KMS CryptoKey Decrypter

    • Cloud KMS CryptoKey Encrypter

    • Cloud KMS CryptoKey Encrypter/Decrypter
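These KMS roles can be granted on a specific key with gcloud; a sketch, where KEY_NAME, KEYRING_NAME, REGION, PROJECT_ID, and KYVOS_SA_EMAIL are placeholders:

```shell
# Grant the combined Encrypter/Decrypter role on the CMK to the Kyvos service account.
gcloud kms keys add-iam-policy-binding KEY_NAME \
  --keyring=KEYRING_NAME --location=REGION --project=PROJECT_ID \
  --member="serviceAccount:KYVOS_SA_EMAIL" \
  --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"
```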


Note

After opening the terminal in Cloud Shell, ensure that Cloud Shell is configured to operate within the same project where you intend to deploy your resources.

  1. From the terminal, navigate to the directory where your files are stored; for example, use cd terraform. Then navigate to the bin folder and execute the ./deploy.sh command. This command will initialize Terraform, generate a plan, and apply the configuration as specified in the kyvosparams.tfvars file.

  2. Review the output to ensure Terraform will create, modify, or delete the resources as expected.

    • If you need to interrupt the script while it's running, press Ctrl+Z.

    • If you need to make modifications to the kyvosparams.tfvars file, do so accordingly.

  3. Upon successful execution of this command, Terraform will display the outputs as specified in the configuration.

  4. Terraform will generate an output.json file containing all outputs, which Kyvos Manager will utilize for configurations.

  5. To destroy your entire deployment, simply execute the ./deploy.sh destroy command.
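The output.json produced in step 4 can be flattened for scripting with jq. The output names below are purely illustrative, not the actual names Terraform will emit:

```shell
# Illustrative sample of a Terraform-generated output file (real keys will differ).
cat > output.json <<'EOF'
{
  "bucket_name": { "value": "kyvos-deployment-bucket" },
  "instance_ip": { "value": "10.0.0.12" }
}
EOF

# Flatten Terraform outputs into KEY=VALUE lines.
jq -r 'to_entries[] | "\(.key)=\(.value.value)"' output.json
```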


Note

  • After successfully executing the configuration, Terraform will automatically generate a .tfstate file. To create a new deployment using the same configuration files, first destroy the existing deployment configured in those files.

  • To change the sourceImage or kmSourceImage, navigate to the source folder, open the variable.tf file, and update the default value as needed.

...

To create resources using Terraform from a local machine, perform the following steps.

...

Open a terminal or command prompt on your local machine.

...

Navigate to your Terraform configuration directory (where your .tf files are located).

...

Create a directory named terraform and add subdirectories and files according to the following specifications:

...

Access the kyvosparams.tfvars file located in the conf directory, and configure the parameters as needed for your deployment.

...

Change to the bin folder and execute the ./deploy.sh command. This command will initialize Terraform, generate a plan, and apply the configuration as specified in the kyvosparams.tfvars file.

...

Review the output to ensure Terraform will create, modify, or delete the resources as expected.

  • If you need to interrupt the script while it's running, press Ctrl+Z.

  • If you need to make modifications to the kyvosparams.tfvars file, do so accordingly.

...

Upon successful execution of this command, Terraform will display the outputs as specified in the configuration.

...

  • Encryption will be enabled for the following components:

    • Disk

    • Cloud storage

    • Secret manager

  • The service agent must be present in the project where the user is going to create Google Cloud Storage and Secret Manager. For more details, refer to Google documentation.

  • Cloud Key Management Service (KMS) API must be enabled in the project before deployment.

  • The existing CMK must be in the same region as deployment.

  • The existing CMK location must be regional; global keys are not supported by GCS buckets. For more details, refer to Google documentation.

  • BYOK (Bring Your Own Key) has the same service agent requirement: the agent must be present in the project where Google Cloud Storage and Secret Manager will be created. For more details, refer to Google documentation.

Additional permissions required to run autoscaling for GCP Enterprise

Apart from the existing permissions mentioned in the Creating a service account from Google Cloud Console section, the following permissions are required for GCP Enterprise:

Permissions required in GCP

  • compute.instanceGroups.get

  • compute.instances.create

  • compute.disks.create

  • compute.disks.use

  • compute.subnetworks.use

  • compute.instances.setServiceAccount

  • compute.instances.delete

  • compute.instanceGroups.update

  • compute.instances.use

  • compute.instances.detachDisk

  • compute.disks.delete

  • compute.instances.attachDisk

Conditional permission needed if using Shared Network

  • compute.subnetworks.use (on the Kyvos service account in the project where your network resides)
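The compute.subnetworks.use requirement on a shared network is typically satisfied by granting the predefined Compute Network User role to the Kyvos service account on the host project; a sketch with placeholder values:

```shell
# HOST_PROJECT_ID is the project where the shared network resides;
# KYVOS_SA_EMAIL is the Kyvos service account.
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
  --member="serviceAccount:KYVOS_SA_EMAIL" \
  --role="roles/compute.networkUser"
```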

Prerequisites to deploy Kyvos using Kubernetes

Refer to the Prerequisites to deploy Kyvos using Dataproc section for the complete set of permissions required for deploying Kyvos.

Additionally, for creating a GKE cluster, you must complete the following prerequisites.

Create a GKE cluster

  • Ensure that the GKE service agent’s default service account (service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com) has the Kubernetes Engine Service Agent role attached to it.

  • Existing Virtual Network

    • Creating a GKE cluster in an existing Virtual Network requires two secondary IPv4 address ranges in the subnet. Additionally, if using a shared Virtual Network, the following roles and permissions are required for the default Kubernetes service account (service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com) on the project of the shared Virtual Network.

      • Compute Network User

      • kubernetes_role: You must create a custom role. To do this, click Roles > Create new role. Provide a name like kubernetes_role, assign the required permissions, and then attach the role to the service account. For details, refer to the Google documentation.

      • Ports 2181, 45460, and 6903 must be allowed in the firewall inbound rules for all internal communication between the Kubernetes cluster and Kyvos.

    • Existing (IAM) Service account

      1. Add the following predefined roles to the existing IAM service account:

        1. Service Account Token Creator

        2. Kubernetes Engine Developer

        3. Kubernetes Engine Cluster Admin

      2. Add the following permissions to the kubernetes_role custom role that you created above.

        1. compute.instanceGroupManagers.update

        2. compute.instanceGroupManagers.get
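As a concrete illustration of the firewall requirement above (ports 2181, 45460, and 6903 for Kubernetes-to-Kyvos communication), a rule could be created as follows; the network name and source range are placeholders to adapt to your environment:

```shell
# Allow internal Kubernetes-to-Kyvos traffic on the required ports.
gcloud compute firewall-rules create kyvos-k8s-internal \
  --network=VPC_NAME --direction=INGRESS \
  --allow=tcp:2181,tcp:45460,tcp:6903 \
  --source-ranges=10.0.0.0/8   # replace with your internal subnet ranges
```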