Prerequisites to deploy Kyvos using Dataproc

Note

Manual resource creation: Google Cloud console users must have privileges to launch Google Cloud resources such as instances, Dataproc clusters, Google Storage buckets, and disks in the project.

  • Dataproc Service Agent service account: Dataproc creates this service account with the Dataproc Service Agent role in a Dataproc user's Google Cloud project. It cannot be replaced by a user-specified service account when you create a cluster. The service agent account is used to perform Dataproc control plane operations, such as creating, updating, and deleting cluster VMs. Refer to Dataproc Service Agent (Control Plane identity) for details.
    By default, Dataproc uses service-[project-number]@dataproc-accounts.iam.gserviceaccount.com as the service agent account. If that service account doesn't exist, Dataproc uses the Google APIs service agent account, [project-number]@cloudservices.gserviceaccount.com, for control plane operations.

    Permissions required:

    1. Dataproc Service Agent: The above service account must have the Dataproc Service Agent predefined role attached.

    2. Compute Network User: If using a Shared VPC network, grant the above service account the Compute Network User predefined role in the project where the network resides (the host project).
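The two grants above can be sketched with gcloud. The project IDs, project number, and the Shared VPC host project are placeholders for your environment:

```shell
# Placeholders: replace with your Dataproc project, its project number,
# and (if applicable) the Shared VPC host project.
PROJECT_ID="my-project"
PROJECT_NUMBER="123456789012"
HOST_PROJECT_ID="my-host-project"
SERVICE_AGENT="service-${PROJECT_NUMBER}@dataproc-accounts.iam.gserviceaccount.com"

# 1. Dataproc Service Agent role on the Dataproc project
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:${SERVICE_AGENT}" \
  --role="roles/dataproc.serviceAgent"

# 2. Compute Network User on the Shared VPC host project
#    (only needed when the cluster uses a shared network)
gcloud projects add-iam-policy-binding "${HOST_PROJECT_ID}" \
  --member="serviceAccount:${SERVICE_AGENT}" \
  --role="roles/compute.networkUser"
```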

  • The logged-in user needs access to the VPC, subnet, network interface/security group, and service account that Kyvos will use to launch Compute Engine instances, the Dataproc cluster, and instance groups.

  • Ensure that the following ports are opened/allowed in the firewall inbound rules for all internal communication between the Dataproc cluster and Kyvos:
    22, 3306, 8020, 8030, 8031, 8032, 8033, 8042, 8050, 8051, 8088, 8188, 8998, 9083, 9866, 9867, 9870, 10000, 10002, 10020, 10033, 10200, 18080, 19888, and 45460
    NOTE: Port 8998 is required for Livy.
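As a sketch, the port list above can be allowed with a single ingress rule. The rule name, network, and source range below are assumptions for your environment; narrow the source range to the subnets actually used by Kyvos and Dataproc:

```shell
# Assumed names: adjust kyvos-dataproc-internal, my-vpc, and the
# source range to match your deployment.
gcloud compute firewall-rules create kyvos-dataproc-internal \
  --network="my-vpc" \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22,tcp:3306,tcp:8020,tcp:8030-8033,tcp:8042,tcp:8050,tcp:8051,tcp:8088,tcp:8188,tcp:8998,tcp:9083,tcp:9866,tcp:9867,tcp:9870,tcp:10000,tcp:10002,tcp:10020,tcp:10033,tcp:10200,tcp:18080,tcp:19888,tcp:45460 \
  --source-ranges="10.0.0.0/8"
```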

  • Create a firewall rule that allows all ports between the Dataproc master and worker nodes, using network tags as targets; the tags will be attached to the Dataproc cluster nodes.
    For more information about the required ports between the Dataproc master and worker nodes, refer to the GCP documentation: Dataproc Cluster Network Configuration
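A minimal sketch of such a tag-scoped rule, assuming the hypothetical network tag dataproc-node is attached to all Dataproc cluster nodes:

```shell
# dataproc-node and my-vpc are illustrative names; use the tag you
# attach to the Dataproc cluster and your own network.
gcloud compute firewall-rules create dataproc-internal-all \
  --network="my-vpc" \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=all \
  --source-tags="dataproc-node" \
  --target-tags="dataproc-node"
```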

  • If the Kyvos instances and the Dataproc cluster are launched in different VPCs/subnets, VPC Network Peering must be created between the two networks.
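Peering must be configured from both sides before it becomes active. A sketch, with project and network names as placeholders:

```shell
# Placeholders: kyvos-project/kyvos-vpc and dataproc-project/dataproc-vpc.
gcloud compute networks peerings create kyvos-to-dataproc \
  --project="kyvos-project" \
  --network="kyvos-vpc" \
  --peer-project="dataproc-project" \
  --peer-network="dataproc-vpc"

# The reverse direction must also be created for the peering to connect.
gcloud compute networks peerings create dataproc-to-kyvos \
  --project="dataproc-project" \
  --network="dataproc-vpc" \
  --peer-project="kyvos-project" \
  --peer-network="kyvos-vpc"
```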

  • A public/private key pair is required for creating the Kyvos instances and the Dataproc cluster.
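A key pair can be generated with ssh-keygen; the file name below is illustrative:

```shell
# Generates kyvos_deploy_key (private) and kyvos_deploy_key.pub (public).
# -N "" sets an empty passphrase; use a passphrase if your policy requires one.
ssh-keygen -t rsa -b 2048 -f ./kyvos_deploy_key -N "" -q
```

The private key stays with the deploying user; the public key is supplied to the instances and the Dataproc cluster.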

  • Create an autoscaling policy using the Kyvos-recommended configuration for Dataproc.
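A policy can be defined in YAML and imported with gcloud. The numbers below are illustrative only; substitute the configuration recommended by Kyvos, and adjust the policy name and region:

```shell
# Illustrative values only -- replace with the Kyvos-recommended configuration.
cat > autoscaling-policy.yaml <<'EOF'
workerConfig:
  minInstances: 2
  maxInstances: 2
secondaryWorkerConfig:
  minInstances: 0
  maxInstances: 10
basicAlgorithm:
  cooldownPeriod: 4m
  yarnConfig:
    scaleUpFactor: 1.0
    scaleDownFactor: 1.0
    gracefulDecommissionTimeout: 1h
EOF

gcloud dataproc autoscaling-policies import kyvos-autoscaling-policy \
  --source=autoscaling-policy.yaml \
  --region=us-central1
```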

  • Private Google Access must be enabled for the subnet that you will use for deploying Kyvos and Dataproc clusters.
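Private Google Access is enabled per subnet; a sketch with placeholder subnet and region:

```shell
# my-subnet and us-central1 are placeholders for your deployment subnet.
gcloud compute networks subnets update my-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access
```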

  • To enable external Hive metastore, the role attached to the Kyvos Manager node must have the following permissions:

    1. resourcemanager.projects.list

    2. dataproc.clusters.get

    3. compute.instances.get

    4. For a cross-project metastore and Dataproc setup, assign the following roles on the project that hosts the metastore. Refer to the GCP documentation for details.

      • Dataproc Service Agent

      • Dataproc Metastore Service Agent

  • Ensure that the Kyvos deployment and the Dataproc cluster for use with Kyvos run in the same Project and Region.

  • Kyvos-recommended instance configuration:

    1. Machine type for Kyvos Manager, Query Engine, and BI Server
      Kyvos Manager: n2-standard-4
      Query Engine: n2-highmem-4
      BI Server: n2-standard-8

    2. Master and worker nodes of Dataproc cluster
      Master Node:
      Series: N2
      Machine Type: n2-highmem-4 (4 vCPUs, 32 GB memory)

      Worker Node:
      Series: N2
      Machine Type: n2-highmem-8 (8 vCPUs, 64 GB memory)

  • If the Dataproc cluster is in a different region, set the Compute Engine metadata key VmDnsSetting to the value GlobalDefault.
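This metadata key can be set on an instance with gcloud; the instance name and zone below are placeholders:

```shell
# my-instance and us-central1-a are placeholders for your environment.
gcloud compute instances add-metadata my-instance \
  --zone=us-central1-a \
  --metadata=VmDnsSetting=GlobalDefault
```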

  • For a non-SSH-based cluster, if you use an existing Dataproc cluster and an existing bucket, you must execute the dataproc.sh script (available in the GCP Installation Files folder) on the Dataproc master node after updating the values of DEPLOYMENT_BUCKET, WORK_DIR, COPY_LIB, and DATAPROC_VERSION to match the existing bucket and environment.
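The variable edits can be scripted with sed before running dataproc.sh. The values below are examples for a hypothetical environment, and the stand-in file exists only to make this sketch runnable; on the real master node, edit the actual dataproc.sh from the GCP Installation Files folder:

```shell
# Stand-in for the vendor script, only so this sketch is self-contained.
printf 'DEPLOYMENT_BUCKET=placeholder\nWORK_DIR=placeholder\nCOPY_LIB=placeholder\nDATAPROC_VERSION=placeholder\n' > dataproc.sh

# Example values -- substitute your existing bucket and environment settings.
BUCKET="my-existing-bucket"
sed -i \
  -e "s|^DEPLOYMENT_BUCKET=.*|DEPLOYMENT_BUCKET=${BUCKET}|" \
  -e "s|^WORK_DIR=.*|WORK_DIR=/home/kyvos/work|" \
  -e "s|^COPY_LIB=.*|COPY_LIB=true|" \
  -e "s|^DATAPROC_VERSION=.*|DATAPROC_VERSION=2.1|" \
  dataproc.sh
```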

  • To store repository credentials and other confidential credentials in Secret Manager, you must create a secret.
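A secret can be created and populated in one step; the secret name and value below are placeholders:

```shell
# kyvos-repo-credentials is an illustrative name; the value is read
# from stdin so it never lands in shell history as a file.
printf 'my-repo-password' | gcloud secrets create kyvos-repo-credentials \
  --replication-policy="automatic" \
  --data-file=-
```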

  • To deploy the Kyvos cluster using password-based authentication for service nodes, ensure that the permissions listed here are available on all VM instances for the Linux user deploying the cluster.

  • To deploy the Kyvos cluster using custom hostnames for resources, ensure that the steps listed here are completed on the resources created for use in the Kyvos cluster.

Copyright Kyvos, Inc. All rights reserved.