Platform for Compute Type

...

  • Shared Query Engine: In this mode, the query engine handles both query execution and semantic model processing. The mode is called SHARED because the same process performs both activities.

Note

From Kyvos 2024.9.1 onwards, if you use Query Engines as a compute server:

  • For Load-based scaling: Query Engines are started automatically when the semantic model is processed.

  • For Schedule-based scaling: Query Engines are not started automatically when the semantic model is processed. In this case, Kyvos recommends switching to Load-based scaling.

  • Dedicated Compute: In this mode, the semantic model is processed by a dedicated service. In cloud-based deployments, the semantic model is processed on Kubernetes (K8S) cluster-based nodes, while in on-premises environments, models are processed on dedicated nodes.

...

From Kyvos 2024.10, you can:

  • Process semantic models with no-Spark processing using the Shared Query Engine and a dedicated Kubernetes cluster on AWS Managed Services.

  • Resume a failed or canceled semantic model process.

  • Run a test data semantic model process job.

...

After deploying Kyvos using the no-Spark processing model, perform the following post-deployment steps.

  • Modify the values of the following properties in the advanced properties of the semantic model job:

    • kyvos.process.compute.type=KYVOS_COMPUTE

    • kyvos.build.aggregate.type=TABULAR
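
As a quick check, the two values can be verified against a job's advanced properties before processing. A minimal sketch in Python; the helper and the sample input are illustrative only and not part of the Kyvos API:

  # Required advanced property values for no-Spark (Kyvos Compute) processing
  REQUIRED_NO_SPARK_PROPERTIES = {
      "kyvos.process.compute.type": "KYVOS_COMPUTE",
      "kyvos.build.aggregate.type": "TABULAR",
  }

  def missing_no_spark_properties(advanced_properties: dict) -> dict:
      """Return the required entries that are absent or set to a different value."""
      return {
          key: value
          for key, value in REQUIRED_NO_SPARK_PROPERTIES.items()
          if advanced_properties.get(key) != value
      }

  # Example: a job that still lacks the compute type override
  print(missing_no_spark_properties({"kyvos.build.aggregate.type": "TABULAR"}))
  # {'kyvos.process.compute.type': 'KYVOS_COMPUTE'}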

...

Post-deployment steps on Azure

...

  • To add the Storage Blob Data Contributor role,

    1. On the Home page of the Azure portal, search for Storage Accounts.

    2. On the Storage Accounts page, select the storage account that is used for deployment.

    3. Navigate to Access Control (IAM).

    4. Select the Storage Blob Data Contributor role from the list.

    5. In the Assign Access to section, select Managed Identity.

    6. Click Select Member.

    7. On the Select Managed Identity dialog, select the Access Connector for Azure Databricks from the list.

    8. Click Review + assign to save the permission.

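The same role assignment can also be scripted instead of using the portal. A minimal sketch, assuming the azure-identity and azure-mgmt-authorization Python packages (1.0 or later); the subscription ID, storage account resource ID, and managed identity principal ID are placeholders you must supply, and the GUID is the built-in role definition ID for Storage Blob Data Contributor:

  import uuid

  from azure.identity import DefaultAzureCredential
  from azure.mgmt.authorization import AuthorizationManagementClient
  from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

  subscription_id = "<subscription-id>"  # placeholder
  storage_account_id = (  # placeholder: resource ID of the storage account used for deployment
      f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
      "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
  )
  principal_id = "<principal-id-of-access-connector-managed-identity>"  # placeholder

  # Built-in role definition ID for Storage Blob Data Contributor
  role_definition_id = (
      f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
      "/roleDefinitions/ba92f5b4-2d11-453d-a403-e96b0029c9fe"
  )

  client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
  client.role_assignments.create(
      scope=storage_account_id,
      role_assignment_name=str(uuid.uuid4()),  # each assignment needs a unique GUID name
      parameters=RoleAssignmentCreateParameters(
          role_definition_id=role_definition_id,
          principal_id=principal_id,
          principal_type="ServicePrincipal",  # managed identities are assigned as service principals
      ),
  )
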
  • To add an external location,

    1. Go to Databricks workspace. In the left pane, click Catalog.

    2. Click Settings, and then click External Locations.

    3. On the External Location page, click Create location.

    4. On the Create Location dialog, enter the external location name, select the credential from the list, and enter the URL. (A small sketch for constructing this URL follows these steps.)
      The URL must be in the format abfss://<Container name>@<Storage account name>.dfs.core.windows.net/<Cluster engine_work directory>
      For example, abfss://kyvoscontainer@kyvossa05751.dfs.core.windows.net/user/engine_work

    5. Click Create. Click Grant, select the CREATE EXTERNAL TABLE and WRITE FILES privileges, and grant them to the user whose token is used while creating the SQL Warehouse connection.

    6. Click Grant. The permission is assigned.
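
To avoid typos in the external location URL, it can be assembled from its parts. A minimal sketch in Python; the container, storage account, and engine_work directory values match the example in step 4 and should be replaced with your own:

  def abfss_url(container: str, storage_account: str, engine_work_dir: str) -> str:
      """Build the external location URL in the abfss format described above."""
      return f"abfss://{container}@{storage_account}.dfs.core.windows.net/{engine_work_dir.lstrip('/')}"

  # Matches the example in step 4
  print(abfss_url("kyvoscontainer", "kyvossa05751", "user/engine_work"))
  # abfss://kyvoscontainer@kyvossa05751.dfs.core.windows.net/user/engine_work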

...