
Execution Engine Configuration

Applies to: Kyvos Enterprise, Kyvos Cloud (SaaS on AWS), Kyvos AWS Marketplace, Kyvos Azure Marketplace, Kyvos GCP Marketplace, Kyvos Single Node Installation (Kyvos SNI)


MapReduce is the default execution engine for Hive. Kyvos also supports Spark for running queries on Hive. You can configure the execution engine in this area according to your requirements.

Note

The fields described below are displayed ONLY if you select the Spark option.

Note

In the case of an Azure (Databricks) deployment, only the Spark Version parameter is available for selection; the other fields are not displayed.

To configure execution engine properties for the cluster:

  1. Enter details as:

- Execution Engine Name: Select the execution engine from the list.
- Deployment Mode: Select the yarn-cluster option if your Spark deployment mode is YARN cluster; otherwise, select the yarn-client option. A sketch of the equivalent Spark-side settings appears after this item.
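
For reference, yarn-cluster mode runs the Spark driver inside the YARN ApplicationMaster on the cluster, while yarn-client mode runs the driver on the node that submits the job. The corresponding Spark-side settings look like this (a minimal illustrative sketch, not a Kyvos-specific configuration):

    # "cluster" corresponds to yarn-cluster mode; "client" to yarn-client mode
    spark.master=yarn
    spark.submit.deployMode=cluster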

Node and Authentication

- Spark Source Node: To use the Hive Source Node, select the Same As NameNode option; otherwise, select the Other Node option.
- Spark Node Host Name: If you selected the Other Node option above, enter your node IP here.
- Use different user account for accessing Spark Node: Select this checkbox to use a user account other than the Hadoop Node authentication user for accessing the Spark node.
  NOTE: If you select this option, you are prompted to provide the Username, Authentication Type, and Password/Shared Key for authentication.

Paths and Version

- Spark Version: Select the Spark version from the list.
- Spark Home Directory: Provide the Spark home directory.
- Spark Library Path: Enter the path to the Spark library files. Refer to the Appendix for details.
- Spark Configuration Path: Enter the path to the Spark configuration files. Refer to the Appendix for details. Example values for the three path fields are shown after this list.
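
For example, on a typical Linux-based Hadoop node, the path fields might be filled in as follows (illustrative paths only; your installation may differ):

    Spark Home Directory:      /usr/lib/spark
    Spark Library Path:        /usr/lib/spark/jars
    Spark Configuration Path:  /etc/spark/conf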

Spark Parameters

- Spark Parameters: Use this to add custom Spark parameters for your cluster. An illustrative example follows this list.
  NOTE: You must provide the spark.yarn.historyServer.address parameter.
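
A minimal set of custom parameters might look like the following (the host name and tuning values are illustrative; only spark.yarn.historyServer.address is required, as noted above):

    spark.yarn.historyServer.address=historyserver.example.com:18080
    spark.executor.memory=4g
    spark.executor.cores=2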

  2. Click Validate Spark file paths. The system validates user authentication and the connection for the specified paths.

Note

The Validate Spark file paths button is not displayed for the Azure (Databricks) environment.

  3. Click the Save button at the top-right of the page to save your changes.

 

Copyright Kyvos, Inc. All rights reserved.