Important points for No-Spark deployments
No-Spark deployment types: SHARED_QE and Compute Server
Shared Query Engine: In a SHARED_QE deployment, semantic models are processed on the Query Engines themselves.
Dedicated Query Engine: In a COMPUTE SERVER deployment, a semantic model is processed on a dedicated node. This ensures that query performance is not affected by issues such as a failure during semantic model processing.
A Compute Server can be deployed on a dedicated node, or it can share a node with any of the existing services. You can also configure multiple instances of the Compute Server service on the same node.
Java options for the Compute Server service can be configured from Kyvos Manager on the Java Options page.
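For reference, the options entered there are standard JVM flags (heap sizing, garbage collector, and so on). The snippet below is only an illustrative sketch of such an options string; the specific values are hypothetical and should be sized to the Compute Server node's available memory.

```python
# Illustrative only: compose a JVM options string of the kind that could be
# entered on the Java Options page for the Compute Server service.
# The flag values below are hypothetical; tune them for your node.
compute_server_java_options = " ".join([
    "-Xms4g",                            # initial heap size
    "-Xmx16g",                           # maximum heap size
    "-XX:+UseG1GC",                      # garbage collector
    "-XX:+HeapDumpOnOutOfMemoryError",   # capture a heap dump on OOM for diagnostics
])
print(compute_server_java_options)
```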
Kyvos Manager lets you manage the dedicated Query Engine or Compute Server service from the Kyvos Dashboard page, just as it does for the other services in the cluster.
Logs of the dedicated on-premises Compute Server service can be viewed and downloaded from the Logs page of the Monitor section in Kyvos Manager.
Kyvos Manager offers a way to add, delete, or migrate a Compute Server in a deployed on-premises cluster.
Migration of a SHARED_QE cluster to Compute Server is NOT supported; you must redeploy the cluster.
Note
Before deploying a No-Hadoop and No-Spark cluster, the same Network File System (NFS) share must be mounted on each compute node of the cluster.
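As a pre-deployment sanity check, you can confirm on each compute node that the shared export is mounted at the expected path. The sketch below reads /proc/mounts on Linux; the mount point shown is a hypothetical example, so substitute the path your cluster actually uses.

```python
# Minimal pre-deployment check (run on every compute node): confirm that the
# shared NFS export is mounted at the expected path.
from pathlib import Path

EXPECTED_MOUNT_POINT = "/data/kyvos_nfs"  # hypothetical shared path

def nfs_mounts() -> dict:
    """Return {mount_point: export_source} for NFS entries in /proc/mounts."""
    mounts = {}
    for line in Path("/proc/mounts").read_text().splitlines():
        source, mount_point, fs_type, *_ = line.split()
        if fs_type.startswith("nfs"):
            mounts[mount_point] = source
    return mounts

mounted = nfs_mounts()
if EXPECTED_MOUNT_POINT in mounted:
    print(f"OK: {mounted[EXPECTED_MOUNT_POINT]} is mounted at {EXPECTED_MOUNT_POINT}")
else:
    print(f"Missing: no NFS mount at {EXPECTED_MOUNT_POINT}; mount it before deploying")
```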
To deploy a No-Hadoop and No-Spark cluster, perform the following steps.
Download the setup bundle provided by your admin/Kyvos Support team. Untar the bundle and start Kyvos Manager.
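The extraction step can also be scripted; here is a minimal sketch assuming a gzip-compressed tarball. The bundle filename and target directory are hypothetical placeholders, and Kyvos Manager is then started with the start script shipped inside the bundle.

```python
# Minimal sketch: extract the setup bundle before starting Kyvos Manager.
# The bundle filename and target directory are hypothetical placeholders.
import tarfile
from pathlib import Path

bundle = Path("kyvosmanager_setup.tar.gz")  # file shared by your admin / Kyvos Support
target = Path("/opt/kyvos")                 # installation directory of your choice

target.mkdir(parents=True, exist_ok=True)
with tarfile.open(bundle, "r:gz") as tar:
    tar.extractall(path=target)
print(f"Extracted {bundle} to {target}; start Kyvos Manager from the extracted bundle")
```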
The Kyvos Installer page is displayed. Enter the details as follows:
From the Default Cuboid Replication Type dropdown list, select the Local option, and click the Next button.
The Summary tab displays all the settings completed so far. Collapse or expand the Cluster, Hadoop, Security, or Kyvos areas to hide or view the corresponding details.
Click the Previous button to go back and make any changes (if needed).
To proceed with the deployment, click the Deploy button.
Note
If you select Kyvos Native (Dedicated Node or Shared QE), the Hadoop Ecosystem Configuration will be disabled.
For No-Hadoop and No-Spark deployments, data reading is supported via MSSQL and Postgres. To create a connection for an MSSQL warehouse, see the Working with MSSQL Warehouse Connection section.
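Before creating the warehouse connection in Kyvos, you may want to verify that the MSSQL or Postgres warehouse is reachable from the cluster. The sketch below assumes the pyodbc and psycopg2 client libraries (and, for MSSQL, an installed ODBC driver such as ODBC Driver 17 for SQL Server) are available; host names, database, and credentials are placeholders.

```python
# Connectivity sanity check only; the Kyvos connection itself is created in the
# UI as described in the referenced section. All connection details below are
# placeholders.
import pyodbc
import psycopg2

# MSSQL (requires an installed ODBC driver, e.g. ODBC Driver 17 for SQL Server)
mssql = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mssql-host,1433;DATABASE=sales_dw;UID=kyvos_user;PWD=secret"
)
print("MSSQL:", mssql.cursor().execute("SELECT 1").fetchone())
mssql.close()

# PostgreSQL
pg = psycopg2.connect(
    host="postgres-host", port=5432, dbname="sales_dw",
    user="kyvos_user", password="secret"
)
with pg.cursor() as cur:
    cur.execute("SELECT 1")
    print("Postgres:", cur.fetchone())
pg.close()
```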