Platform for Compute Type

...

  • Shared Query Engine: In this mode, the query engine both serves queries and processes semantic models. The mode is named SHARED because the same process performs both activities.

Note

From Kyvos 2024.9.1 onwards, if you use Query Engines as a compute server:

  • For Load-based scaling: Query Engines are started automatically when the semantic model is processed.

  • For Schedule-based scaling: Query Engines are not started automatically when the semantic model is processed. In this case, Kyvos recommends switching to Load-based scaling.

  • Dedicated Compute: In this mode, the semantic model is processed by a dedicated service. In cloud-based deployments, the semantic model is processed using Kubernetes (K8s) cluster-based nodes, while in on-premises environments, models are processed on dedicated nodes.

With the Kyvos 2024.3.2 release, you can change from an external compute cluster to Kyvos Native for processing semantic models through Kyvos Manager on the Compute Cluster page.

From Kyvos 2024.10, you can:

  • Process semantic models without Spark using the Shared Query Engine and a dedicated Kubernetes cluster on AWS Managed Services.

  • Resume a failed or canceled semantic model process.

  • Run a test data semantic model process job.

Supported Platforms

Supported Environments (✓ = supported, ✗ = not supported)

| Supported Native Types  | AWS: Kyvos Enterprise | AWS: Marketplace | AWS: Managed Services | Azure: Enterprise | Azure: Marketplace | GCP: Enterprise | GCP: Marketplace | ON PREM |
|-------------------------|-----------------------|------------------|-----------------------|-------------------|--------------------|-----------------|------------------|---------|
| SHARED QE               | ✓                     | ✓                | ✓                     | ✓                 | ✓                  | ✓               | ✗                | ✓       |
| Dedicated Compute (K8S) | ✓                     | ✗                | ✓                     | ✓                 | ✗                  | ✓               | ✗                | ✗       |

  • AWS: For Kubernetes, Kyvos processes the semantic model using Amazon Elastic Kubernetes Service (Amazon EKS). You can select Query Engine or Kubernetes as the compute engine for no-Spark model processing.
    From Kyvos 2024.10, you can process semantic model with no-Spark using the Shared Query Engine and dedicated Kubernetes cluster on AWS Managed Services.
    For further details about deployment, see the Automated deployment for AWS via CloudFormation with Kyvos Native section.

  • Azure: For Kubernetes, Kyvos processes the semantic model using Azure’s managed service AKS (Azure Kubernetes Service).

    • The Azure cluster is deployed via ARM templates. You can create a cluster without Spark or process the semantic model using Spark mode within ARM templates.
      For further details about deployment, see the Automated deployment on Azure with Kyvos Native section.

    • From Kyvos 2024.3 onwards, you can select the compute cluster as the Query engine or Kubernetes when deploying Kyvos through Azure Template Specs.

  • GCP: For Kubernetes, Kyvos processes the semantic model using Google Cloud's managed service GKE (Google Kubernetes Engine). The GKE cluster is deployed through GCP Installation Files. Using the scripts, you can select a No-Spark-based cluster or process the semantic model using Spark Mode.
    Optionally, for no-Spark deployments, you can use either a new or an existing Dataproc cluster.
    For further details about Kyvos deployment on GCP using the no-Spark model, see the following section:

  • On Premises: For on-premises deployment, you can deploy using the no-Spark types SHARED_QE and Compute Server. For further details about on-premises deployment with no-Spark, see the Deploying no Hadoop no Spark section.

...

  1. Shared cluster is not supported on AWS and GCP.

  2. For an existing K8s cluster on AWS and GCP, the namespaces must be fixed as kyvos-compute and kyvos-monitoring.

  3. The node pool of the Kubernetes cluster must be dedicated to Kyvos use.

  4. Although Kyvos requires only a dedicated node pool, a single Kubernetes cluster can currently be used with only a single Kyvos cluster. This means that one dedicated node pool of a K8s cluster cannot be used with one Kyvos cluster while another dedicated node pool of the same K8s cluster is used with a different Kyvos cluster.

  5. The node pool used for Kyvos must use a single instance type.

  6. Node pools with multiple instance types are not supported.

  7. Currently, using an existing Kubernetes cluster is not supported on Azure in either of the following cases:

    1. Fresh automated deployment

    2. Fresh wizard-based deployment

  8. The instance type of the node pool must have at least 4 cores and 16 GB of memory.

  9. Permissions to list Kubernetes clusters and their node pools are required; without these permissions, the input is shown as a text field rather than a dropdown.

  10. The node pool for a GCP Kubernetes cluster must be in a single zone. Multi-zone node pools are not supported.
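The node-pool restrictions above (single instance type, single zone on GCP, at least 4 cores and 16 GB of memory) can be expressed as a small validation sketch. The NodePool fields and check_node_pool helper are illustrative assumptions, not part of any Kyvos or cloud-provider API:

```python
from dataclasses import dataclass

# Illustrative model of a candidate node pool; the field names are
# assumptions for this sketch, not a real Kyvos or cloud API.
@dataclass
class NodePool:
    instance_types: list   # instance types backing the pool
    zones: list            # availability zones the pool spans
    cores: int             # vCPUs per node
    memory_gb: int         # memory per node, in GB

def check_node_pool(pool: NodePool) -> list:
    """Return a list of restriction violations (empty list means OK)."""
    problems = []
    if len(pool.instance_types) != 1:
        problems.append("node pool must use exactly one instance type")
    if len(pool.zones) != 1:
        problems.append("node pool must be in a single zone (GCP)")
    if pool.cores < 4 or pool.memory_gb < 16:
        problems.append("instance type needs at least 4 cores and 16 GB memory")
    return problems

good = NodePool(instance_types=["n2-standard-4"], zones=["us-central1-a"],
                cores=4, memory_gb=16)
bad = NodePool(instance_types=["n2-standard-2", "n2-standard-4"],
               zones=["us-central1-a", "us-central1-b"], cores=2, memory_gb=8)
```

A pool failing any of these checks would need to be recreated before it can back a Kyvos deployment.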

...

After deploying Kyvos using the no-Spark processing model, perform the following post-deployment steps.

  • Modify the values of the following properties in the advanced properties of the semantic model job:

    • kyvos.process.compute.type=KYVOS_COMPUTE

    • kyvos.build.aggregate.type=TABULAR
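As a sketch, the two overrides above amount to merging the following key-value pairs into the job's advanced-properties map. The apply_no_spark_overrides helper is hypothetical; the property names are taken from the steps above:

```python
# Property names come from the post-deployment steps above; the helper
# itself is illustrative, not a Kyvos API.
NO_SPARK_OVERRIDES = {
    "kyvos.process.compute.type": "KYVOS_COMPUTE",
    "kyvos.build.aggregate.type": "TABULAR",
}

def apply_no_spark_overrides(advanced_properties: dict) -> dict:
    """Return a copy of the job properties with the no-Spark overrides applied."""
    merged = dict(advanced_properties)
    merged.update(NO_SPARK_OVERRIDES)
    return merged

props = apply_no_spark_overrides({"kyvos.process.compute.type": "SPARK"})
```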

...

Restart Kyvos services.

Post deployment steps on Azure

...

  • To add the Storage Blob Data Contributor role,

    1. On the Home page of the Azure portal, search for Storage Accounts.

    2. On the Storage Accounts page, select the storage account that is used for deployment.

    3. Navigate to Access Control (IAM).

    4. Select the Storage Blob Data Contributor role from the list.

    5. In the Assign Access to section, select Managed Identity.

    6. Click Select Member.

    7. On the Select Managed Identity dialog, select the Access Connector for Azure Databricks from the list.

    8. Click Review + assign, and then click Save to assign the permission.

  • To add an external location,

    1. Go to Databricks workspace. In the left pane, click Catalog.

    2. Click Settings, and then click External Locations.

    3. On the External Location page, click Create location.

    4. On the Create Location dialog, enter the external location name, select the credential from the list, and enter the URL.
      The URL must be in the format abfss://<Container name>@<Storage name>.dfs.core.windows.net/<Cluster engine_work directory>
      For example, abfss://kyvoscontainer@kyvossa05751.dfs.core.windows.net/user/engine_work

    5. Click Create. Then click Grant, select the CREATE EXTERNAL TABLE and WRITE FILES privileges, and grant them to the user whose token is used while creating the SQL Warehouse connection.

    6. Click Grant. The Permission is assigned.
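The external-location URL from step 4 follows the standard Azure Data Lake Storage Gen2 abfss:// URI scheme and can be assembled programmatically. The build_abfss_url helper below is an illustrative sketch, not a Kyvos or Azure API:

```python
# Illustrative helper for the external-location URL used in step 4.
# The abfss:// scheme is the standard Azure Data Lake Storage Gen2 URI:
#   abfss://<container>@<storage-account>.dfs.core.windows.net/<path>
def build_abfss_url(container: str, storage_account: str, work_dir: str) -> str:
    path = work_dir.lstrip("/")  # the scheme expects no leading slash after the host
    return f"abfss://{container}@{storage_account}.dfs.core.windows.net/{path}"

url = build_abfss_url("kyvoscontainer", "kyvossa05751", "user/engine_work")
```

This reproduces the example URL from step 4 given the same container, storage account, and engine_work directory.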

...