...

...

  • You need a valid Google Cloud Platform account. This account will be used to authenticate Terraform to interact with GCP resources.

  • The following permissions must be given to the logged-in user account:

    • Editor Role

    • Secret Manager Admin

    • Storage Object Admin

    • Cloud Functions Admin

    • Create a custom role and assign the below permission to the role.

      • storage.buckets.get

      • storage.buckets.update

      • storage.objects.update

  • Google Console users must have the privilege to launch Google resources such as instances, Dataproc clusters, Google Storage, and disks in the project.

  • Logged-in users must have the privilege to run gcloud in GCP.

  • To use an existing service account for deployments, add the cloudfunctions.admin role. For additional permissions, refer to the Prerequisites for deploying Kyvos in a GCP environment using Deployment Manager section.

  • To use an existing VPC for deployments, it must possess specific permissions as outlined in the Prerequisites for deploying Kyvos in a GCP environment section.

  • To use an existing bucket for deployments, it must possess specific permissions as outlined in the Prerequisites for deploying Kyvos in a GCP environment section.
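As a sketch, the custom role described above could be created with the gcloud CLI. The role ID kyvosCustomRole and the PROJECT_ID value are placeholder assumptions, not names from this guide:

```shell
# Create a custom role carrying the three storage permissions listed above.
# PROJECT_ID and the role ID "kyvosCustomRole" are placeholders.
gcloud iam roles create kyvosCustomRole \
  --project=PROJECT_ID \
  --title="Kyvos custom role" \
  --permissions=storage.buckets.get,storage.buckets.update,storage.objects.update
```

The role can then be granted to the user or service account with gcloud projects add-iam-policy-binding.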

...

Prerequisites to run Terraform from a local machine

...

Customer-Managed Encryption Key (CMK) support in GCP Terraform

...

...


...

  • To use an existing service account for deployments, the following predefined roles are needed on the Kyvos Service Account:

    • Cloud KMS CryptoKey Encrypter (roles/cloudkms.cryptoKeyEncrypter)

    • Cloud KMS CryptoKey Decrypter (roles/cloudkms.cryptoKeyDecrypter)

    • Cloud KMS CryptoKey Encrypter/Decrypter (roles/cloudkms.cryptoKeyEncrypterDecrypter)
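As a minimal sketch, the Cloud KMS roles above could be granted to the service account with the gcloud CLI. PROJECT_ID and SA_EMAIL are placeholder assumptions:

```shell
# Grant the three Cloud KMS roles listed above to the Kyvos service account.
# PROJECT_ID and SA_EMAIL are placeholders for your project and account.
for role in roles/cloudkms.cryptoKeyEncrypter \
            roles/cloudkms.cryptoKeyDecrypter \
            roles/cloudkms.cryptoKeyEncrypterDecrypter; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_EMAIL" \
    --role="$role"
done
```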


Note

  • Encryption will be enabled for the following components:

    • Disk

    • Cloud storage

    • Secret manager

  • The service agents for Google Cloud Storage and Secret Manager must be present in the project where the user is going to deploy. For more details, refer to Google documentation.

  • Cloud Key Management Service (KMS) API must be enabled in the project before deployment.

  • The existing CMK must be in the same region as the deployment.

  • The existing CMK location must be regional; global keys are not supported by GCS buckets. For more details, refer to Google documentation.

  • To use the BYOK (Bring Your Own Key) feature, the same service agent requirement applies for Google Cloud Storage and Secret Manager. For more details, refer to Google documentation.

  • To use an existing key, specify the cmkKeyRingName and cmkKeyName parameters.

  • To use an existing service account for deployments, the following permissions are needed:

    • roles/cloudkms.cryptoKeyEncrypter

    • roles/cloudkms.cryptoKeyDecrypter

    • roles/cloudkms.cryptoKeyEncrypterDecrypter
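To find values for the cmkKeyRingName and cmkKeyName parameters, the existing key rings and keys can be listed with the gcloud CLI. REGION and KEYRING_NAME are placeholders; the location must match the deployment region, since the CMK must be regional:

```shell
# List existing key rings and keys in the deployment region.
# REGION and KEYRING_NAME are placeholders.
gcloud kms keyrings list --location=REGION
gcloud kms keys list --location=REGION --keyring=KEYRING_NAME
```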

...

  • Click Roles > Create new role. Provide a name like Kyvos-role for the storage service, and assign the following permissions.

  • deploymentmanager.deployments.list

  • deploymentmanager.resources.list

  • deploymentmanager.manifests.list

  • cloudfunctions.functions.get

  • dataproc.clusters.list

  • dataproc.clusters.get

  • compute.disks.setLabels

  • compute.instances.start

  • compute.instances.stop

  • compute.instances.list

  • compute.instances.setLabels

  • storage.buckets.get

  • storage.buckets.list

  • storage.objects.create

  • storage.objects.delete

  • storage.buckets.update

  • compute.disks.get

  • compute.instances.get

  • dataproc.clusters.update

  • storage.objects.get

  • storage.objects.list

  • storage.objects.update

  • cloudfunctions.functions.update

  • compute.subnetworks.get

  • resourcemanager.projects.getIamPolicy

  • compute.firewalls.list

  • iam.roles.get  

  • compute.machineTypes.get  

  • compute.machineTypes.list  

  • compute.instances.setMachineType

  • compute.instances.setMetadata
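As a sketch, the Kyvos-role custom role could also be created from a YAML definition with the gcloud CLI instead of the console. The file name kyvos-role.yaml and role ID KyvosRole are assumptions:

```shell
# kyvos-role.yaml would contain the role definition, e.g.:
#   title: Kyvos-role
#   stage: GA
#   includedPermissions:
#     - deploymentmanager.deployments.list
#     (followed by the rest of the permission list above)
# PROJECT_ID and the role ID "KyvosRole" are placeholders.
gcloud iam roles create KyvosRole \
  --project=PROJECT_ID \
  --file=kyvos-role.yaml
```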

  1. Click Edit on the service account, and add the following roles.

    • Kyvos-role (created above)

    • BigQuery data viewer

    • BigQuery user

    • Dataproc Worker

    • Cloud Functions Invoker

    • Cloud Scheduler Admin

    • Cloud Scheduler Service Agent

    • Service Account User

    • Logs Writer

  2. Permissions for Cross-Project Datasets Access with BigQuery:

    1. Use the same service account that is being used by Kyvos VMs.

    2. Give the following roles to that service account on the BigQuery project:

      • BigQuery Data Viewer

      • BigQuery User

  3. For accessing BigQuery Views, add the following permissions to the Kyvos custom role (created above).

    • bigquery.tables.create

    • bigquery.tables.delete

    • bigquery.tables.update

    • bigquery.tables.updateData

  4. Permissions to generate temporary views in a separate dataset when performing the validation/preview operation from Kyvos on Google BigQuery:

    • bigquery.tables.create: permission to create a new table

    • bigquery.tables.updateData: permission to write data to a new table, overwrite a table, or append data to a table
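The cross-project BigQuery role grants described above could be applied with the gcloud CLI as follows. BQ_PROJECT_ID and SA_EMAIL are placeholder assumptions for the BigQuery project and the Kyvos VM service account:

```shell
# Grant the BigQuery roles on the BigQuery project for cross-project access.
# BQ_PROJECT_ID and SA_EMAIL are placeholders.
gcloud projects add-iam-policy-binding BQ_PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/bigquery.dataViewer"
gcloud projects add-iam-policy-binding BQ_PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/bigquery.user"
```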

Additional permission required to run Auto scaling for GCP Enterprise

Apart from the existing permissions mentioned in the Creating a service account from Google Cloud Console section, you need the following permissions for GCP Enterprise:

Permissions required in GCP

  • compute.instanceGroups.get

  • compute.instances.create

  • compute.disks.create

  • compute.disks.use

  • compute.subnetworks.use

  • compute.instances.setServiceAccount

  • compute.instances.delete

  • compute.instanceGroups.update

  • compute.instances.use

  • compute.instances.detachDisk

  • compute.disks.delete

Conditional permission if the network resides in a different project than the Kyvos resources

  • compute.subnetworks.use (granted to the Kyvos service account in the project where your network resides)
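The autoscaling permissions listed above could be appended to an existing custom role with the gcloud CLI. The role ID KyvosRole and PROJECT_ID are placeholder assumptions:

```shell
# Add the autoscaling permissions above to the existing custom role.
# PROJECT_ID and the role ID "KyvosRole" are placeholders.
gcloud iam roles update KyvosRole \
  --project=PROJECT_ID \
  --add-permissions=compute.instanceGroups.get,compute.instances.create,compute.disks.create,compute.disks.use,compute.subnetworks.use,compute.instances.setServiceAccount,compute.instances.delete,compute.instanceGroups.update,compute.instances.use,compute.instances.detachDisk,compute.disks.delete
```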

Prerequisites to deploy Kyvos using Kubernetes

See the Prerequisites to deploy Kyvos using Dataproc section for the complete set of permissions required for deploying Kyvos.

Additionally, for creating a GKE cluster, you must complete the following prerequisites.

Create a GKE cluster

  • Ensure that the GKE service agent’s default service account (service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com) has the Kubernetes Engine Service Agent role attached to it.

  • Existing Virtual Network

    • If using an existing Virtual Network for creating a GKE cluster, two secondary IPv4 address ranges are required in the subnet. Additionally, if using a shared Virtual Network, the following roles and permissions are required by the default Kubernetes service account (service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com) on the project of the shared Virtual Network.

      • Compute Network User

      • kubernetes_role: You must create a custom role. To do this, click Roles > Create new role. Provide a name like kubernetes_role, assign the required permissions, and then attach the role to the service account. For more details, refer to Google documentation.

      • Ports 2181, 45460, and 6903 must be allowed in the firewall inbound rules for all internal communication between the Kubernetes cluster and Kyvos.

    • Existing (IAM) Service account

      1. Add the following predefined roles to the existing IAM service account:

        1. Service Account Token Creator

        2. Kubernetes Engine Developer

        3. Kubernetes Engine Cluster Admin

      2. Add the following permissions to the kubernetes_role custom role that you created above.

        1. compute.instanceGroupManagers.update

        2. compute.instanceGroupManagers.get
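As a sketch, the firewall rule for the internal Kubernetes/Kyvos ports mentioned above could be created with the gcloud CLI. The rule name, NETWORK_NAME, and the source range are placeholder assumptions; adjust the source range to your cluster's internal CIDR:

```shell
# Allow inbound traffic on the internal Kubernetes/Kyvos ports listed above.
# The rule name, NETWORK_NAME, and source range are placeholders.
gcloud compute firewall-rules create kyvos-k8s-internal \
  --network=NETWORK_NAME \
  --direction=INGRESS \
  --allow=tcp:2181,tcp:45460,tcp:6903 \
  --source-ranges=10.0.0.0/8
```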