...
Shared Query Engine: In this mode, the query engine not only serves queries but also handles semantic model processing. The mode is named SHARED because the same process performs both activities.
Note: From Kyvos 2024.9.1 onwards, if you use Query Engines as a compute server:
Dedicated Compute: In this mode, the semantic model is processed by a dedicated service. In cloud-based deployments, the semantic model is processed using Kubernetes (K8S) cluster-based nodes, while in on-premises environments, models are processed on dedicated nodes.
With the Kyvos 2024.3.2 release, you can change from an external compute cluster to Kyvos Native for processing semantic models through Kyvos Manager on the Compute Cluster page.
From Kyvos 2024.10, you can:
Process semantic models with no-Spark using the Shared Query Engine and a dedicated Kubernetes cluster on AWS Managed Services.
Resume a failed or canceled semantic model process.
Run a test data semantic model process job.
Supported Platforms
Supported Environments

| Supported Native Types | AWS: Kyvos Enterprise | AWS: Marketplace | AWS: Managed Services | Azure: Enterprise | Azure: Marketplace | GCP: Enterprise | GCP: Marketplace | On Prem |
|---|---|---|---|---|---|---|---|---|
| SHARED QE | | | | | | | | |
| Dedicated Compute (K8S) | | | | | | | | |
AWS: For Kubernetes, Kyvos processes the semantic model using Amazon Elastic Kubernetes Service (Amazon EKS). You can select Query Engine or Kubernetes as the compute engine for no-Spark model processing.
From Kyvos 2024.10, you can process semantic models with no-Spark using the Shared Query Engine and a dedicated Kubernetes cluster on AWS Managed Services.
For further details about deployment, see the Automated deployment for AWS via CloudFormation with Kyvos Native section.
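For illustration, a CloudFormation-based deployment of this kind is launched with a stack create call along the following lines. This is a sketch only: the stack name, template URL, and parameter key are hypothetical placeholders, not the actual interface of the Kyvos template.

```bash
# Hypothetical sketch: launching a Kyvos Native (no-Spark) stack with the AWS CLI.
# The stack name, template URL, and parameter key below are placeholders; use the
# values documented for your Kyvos CloudFormation template.
aws cloudformation create-stack \
  --stack-name kyvos-native-demo \
  --template-url "https://<bucket>.s3.amazonaws.com/<kyvos-template>.yaml" \
  --parameters ParameterKey=ComputeType,ParameterValue=KUBERNETES \
  --capabilities CAPABILITY_NAMED_IAM
```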
Azure: For Kubernetes, Kyvos processes the semantic model using Azure's managed service, Azure Kubernetes Service (AKS). The Azure cluster is deployed via ARM templates. You can create a cluster without Spark or process the semantic model using Spark mode within the ARM templates.
For further details about deployment, see the Automated deployment on Azure with Kyvos Native section.

From Kyvos 2024.3 onwards, you can select the compute cluster as the Query Engine or Kubernetes when deploying Kyvos through Azure Template Specs.
GCP: For Kubernetes, Kyvos processes the semantic model using Google Cloud's managed service, Google Kubernetes Engine (GKE). The GKE cluster is deployed through the GCP Installation Files. Using the scripts, you can select a no-Spark-based cluster or process the semantic model using Spark mode.
Optionally, for no-Spark deployments, you can use either a new or an existing Dataproc cluster.
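If you opt for a new cluster, one way to create it up front is with the gcloud CLI, as in the sketch below; the cluster name, region, and sizing are illustrative placeholders, not values prescribed by Kyvos.

```bash
# Hypothetical sketch: creating a Dataproc cluster that a no-Spark Kyvos
# deployment on GCP could reference. Adjust the name, region, and machine
# sizing to match your environment.
gcloud dataproc clusters create kyvos-dataproc \
  --region=us-central1 \
  --num-workers=2 \
  --worker-machine-type=n2-standard-8
```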
For further details about Kyvos deployment on GCP using the no-Spark model, see the following section.

On Premises: For on-premises deployment, you can deploy using the no-Spark types SHARED_QE and Compute Server. For further details about on-premises deployment with no-Spark, see the Deploying no Hadoop no Spark section.
...
After deploying Kyvos using the no-Spark processing model, perform the following post-deployment steps.
Modify the values of the following properties in the advanced properties of the semantic model job:
kyvos.process.compute.type=KYVOS_COMPUTE
kyvos.build.aggregate.type=TABULAR
...
Post-deployment steps on Azure
...
To add the Storage Blob Data Contributor role,
On the Home page of the Azure portal, search for Storage Accounts.
On the Storage Accounts page, select the storage account that is used for deployment.
Navigate to Access Control (IAM).
Select the Storage Blob Data Contributor role from the list.
In the Assign Access to section, select Managed Identity.
Click Select Member.
On the Select Managed Identity dialog, select the Access Connector for Azure Databricks from the list.
Click Review + assign to save the permission.
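The same role assignment can also be scripted with the Azure CLI, as in the sketch below; all names are placeholders for your own deployment values, and the connector's managed-identity object ID must be looked up in advance.

```bash
# Hypothetical sketch: granting the Storage Blob Data Contributor role to the
# managed identity of the Access Connector for Azure Databricks, scoped to the
# storage account used for deployment. All names below are placeholders.
STORAGE_ACCOUNT="kyvossa05751"          # storage account used for deployment
RESOURCE_GROUP="kyvos-rg"               # resource group of the storage account
CONNECTOR_PRINCIPAL_ID="<object-id>"    # object ID of the connector's managed identity

SCOPE=$(az storage account show \
  --name "$STORAGE_ACCOUNT" \
  --resource-group "$RESOURCE_GROUP" \
  --query id --output tsv)

az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee "$CONNECTOR_PRINCIPAL_ID" \
  --scope "$SCOPE"
```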
To add an external location,
Go to the Databricks workspace. In the left pane, click Catalog.
Click Settings, and then click External Locations.
On the External Location page, click Create location.
On the Create Location dialog, enter the external location name, select the credential from the list, and enter the URL.
The URL must be in the format abfss://<Container name>@<Storage name>.dfs.core.windows.net/<Cluster engine_work directory>
For example, abfss://kyvoscontainer@kyvossa05751.dfs.core.windows.net/user/engine_work
Click Create.
Click Grant, select the required privileges, such as CREATE EXTERNAL TABLE and WRITE FILES, and grant them to the user whose token is used while creating the SQL Warehouse connection.
Click Grant. The permission is assigned.
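For reference, the external location can also be created through the Unity Catalog REST API rather than the UI. The sketch below assumes placeholder values for the workspace URL, token, location name, and storage credential name, and reuses the abfss URL format shown above; privileges such as CREATE EXTERNAL TABLE and WRITE FILES can then be granted as in the UI steps.

```bash
# Hypothetical sketch: creating the external location via the Unity Catalog
# REST API. Workspace host, token, location name, and credential name are
# placeholders.
DATABRICKS_HOST="https://<workspace>.azuredatabricks.net"
DATABRICKS_TOKEN="<personal-access-token>"

curl -s -X POST "$DATABRICKS_HOST/api/2.1/unity-catalog/external-locations" \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "kyvos_engine_work",
        "url": "abfss://kyvoscontainer@kyvossa05751.dfs.core.windows.net/user/engine_work",
        "credential_name": "<storage-credential-name>"
      }'
```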
...