Applies to: Kyvos Enterprise, Kyvos Cloud (Managed Services on AWS), Kyvos Azure Marketplace, Kyvos AWS Marketplace, Kyvos Single Node Installation (Kyvos SNI), and Kyvos Free (Limited offering for AWS)

...

Data connections are used to connect Kyvos to the cluster that performs computations as well as to the sources that hold your data.

...

  1. From the Toolbox, click Setup, then choose Connections.
  2. From the Actions menu, click Add Connection.
  3. Enter a Name for the connection.
  4. Choose the Category from the drop-down list.
  5. Select the Provider from the drop-down list.
    The other options change based on your selection.
  6. See the Provider parameters table for details.
  7. To test the connection, click the Test button at the top of the dialog box. If the status is Invalid, click Invalid to learn more. (An independent connectivity check is sketched after this list.)
  8. Click Save when you are finished.
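The Test button in step 7 verifies the connection from within Kyvos, but it can help to confirm that the provider endpoint is reachable outside Kyvos first. The following Java snippet is a minimal, hypothetical pre-flight check using plain JDBC; the URL, user, and password are placeholders rather than values from this document, and the matching JDBC driver for your provider must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical stand-alone connectivity check, independent of the Kyvos UI.
// Replace url, user, and password with the values you plan to enter in the
// Add Connection dialog.
public class ConnectionPreflight {
    public static void main(String[] args) {
        String url = "jdbc:hive2://example-host:10001/default;transportMode=http;httpPath=cliservice"; // placeholder
        String user = "kyvos_user";   // placeholder
        String password = "changeit"; // placeholder

        try (Connection conn = DriverManager.getConnection(url, user, password)) {
            // isValid(5) asks the driver to confirm the connection within 5 seconds.
            System.out.println("Reachable: " + conn.isValid(5));
        } catch (SQLException e) {
            System.err.println("Connection failed: " + e.getMessage());
        }
    }
}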

...

Separate read and build connections

On non-SSH and Livy-enabled Azure and GCP clusters, Kyvos allows you to create separate build and read connections for launching build jobs and reading data.

...

When you define a new connection on non-SSH and Livy-enabled Azure and GCP clusters, you will see the option to choose whether it is a Build connection, as shown in the following figure.

...

To use Spark or Hive for raw data querying, disable the Is Default SQL Engine checkbox on other raw data connections (for example, Presto and Snowflake) so that the Hadoop connection serves as the default SQL engine.

For a Spark connection, you need to provide the Spark Thrift server JDBC connection string URL.
For example: jdbc:hive2://10.260.431.111:10001/default;transportMode=http;httpPath=cliservice;principal=hive/intelli-i0056.kyvostest.com@KYVOSTEST.COM
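As a minimal sketch of how a connection string of this form is consumed, the Java snippet below opens a JDBC session against a Spark Thrift server over HTTP transport and lists tables. The host, port, and httpPath are placeholders, not values for any real cluster; the Hive JDBC driver (hive-jdbc) must be on the classpath, and a URL that includes a principal=... element additionally expects a valid Kerberos ticket (for example, one obtained with kinit).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal query against a Spark Thrift server using a hive2 JDBC URL of the
// form shown above. Host, port, and httpPath are placeholders.
public class SparkThriftQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://example-host:10001/default;"
                   + "transportMode=http;httpPath=cliservice";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                // Print the first column of each row returned by SHOW TABLES.
                System.out.println(rs.getString(1));
            }
        }
    }
}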