Important points for No-Spark deployments
No-Spark deployment types: SHARED_QE and Compute Server
Shared Query Engine: In a SHARED_QE deployment, semantic models are processed on the Query Engines themselves.
Dedicated Query Engine (Compute Server): In a COMPUTE SERVER deployment, semantic models are processed on a dedicated node. This ensures that query performance is not impacted by issues such as a failure during semantic model processing.
A Compute Server can be deployed on a dedicated node, or it can share a node with any of the existing services. You can also configure multiple instances of the Compute Server service on the same node.
Java options for the Compute Server service can be configured from Kyvos Manager on the Java Options page (see the example after these points).
Kyvos Manager lets you manage the dedicated Query Engine or Compute Server service from the Kyvos Dashboard page, just as it does for the other services of the cluster.
Logs of the dedicated on-premises Compute Server service can be viewed and downloaded from the Logs page of the Monitor section in Kyvos Manager.
Kyvos Manager offers a way to add, delete, or migrate a Compute Server in a deployed on-premises cluster.
Migration of a SHARED_QE cluster to Compute Server is not supported. You must redeploy the cluster.
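For example, the Java Options page for the Compute Server service could carry standard JVM flags such as the ones below; the flag names are standard JVM options, but the heap sizes are illustrative and should be tuned to the node's capacity.

    -Xms4g -Xmx16g -XX:+UseG1GC -XX:MaxGCPauseMillis=200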
Note
Before deploying a No-Hadoop and No-Spark cluster, the same Network File System (NFS) share must be mounted on each compute node of the cluster.
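A minimal sketch of mounting the same NFS share on every compute node is shown below; the server name, export path, and mount point are illustrative placeholders.

    # Create the mount point and mount the shared export (run on every compute node)
    sudo mkdir -p /data/kyvos
    sudo mount -t nfs nfs-server:/export/kyvos /data/kyvos

    # Optionally persist the mount across reboots with an /etc/fstab entry
    nfs-server:/export/kyvos  /data/kyvos  nfs  defaults  0 0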
To deploy a No-Hadoop and No-Spark cluster, perform the following steps.
1. Download the setup. After downloading, untar it, and then start Kyvos Manager (see the command sketch after these steps).
2. Select the Kyvos platform as OnPrem.
3. Select the Compute Cluster Type as Kyvos Native in the Ecosystem tab.
4. Select HTTP in the Security tab.
5. Clear the Use YARN checkbox in the Kyvos tab of the installer page.
6. Select Local as the value for the Default Cuboid Replication Type dropdown in the Kyvos tab of the installer page.
7. The Summary tab displays all the settings completed so far. Collapse or expand the Cluster, Hadoop, Security, or Kyvos areas to hide or view the corresponding details.
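The commands for step 1 could look like the following sketch, assuming the setup archive is named kyvosmanager.tar.gz and is extracted under /opt/kyvos; the archive name, path, and start script are illustrative placeholders, so use the names shipped with your Kyvos release.

    # Extract the downloaded setup (archive name and target path are illustrative)
    mkdir -p /opt/kyvos
    tar -xzvf kyvosmanager.tar.gz -C /opt/kyvos

    # Start Kyvos Manager using the start script bundled in the extracted setup
    # (the script name below is a placeholder; use the one documented for your release)
    cd /opt/kyvos/kyvosmanager
    ./start-kyvos-manager.sh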
Note
If you select Kyvos Native (Dedicated Node or Shared QE), the Hadoop Ecosystem Configuration will be disabled.
For No-Hadoop and No-Spark deployments, data reading is supported via MSSQL and PostgreSQL. To create a connection for an MSSQL warehouse, see the Working with MSSQL Warehouse Connection section.
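For reference, the standard JDBC URL formats for these two sources are shown below; the host, port, and database names are placeholders, and the exact fields requested on the connection page may differ.

    # Microsoft SQL Server (default port 1433)
    jdbc:sqlserver://<host>:1433;databaseName=<database>

    # PostgreSQL (default port 5432)
    jdbc:postgresql://<host>:5432/<database>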