
Applies to: Kyvos Enterprise, Kyvos Cloud (SaaS on AWS), Kyvos AWS Marketplace, Kyvos Azure Marketplace, Kyvos GCP Marketplace, and Kyvos Single Node Installation (Kyvos SNI)

...

Data connections are used to connect to the Kyvos cluster, for example, to perform computations, and to connect to your data.

...

  1. From the Toolbox, click Setup, then choose Connections.

  2. From the Actions menu, click Add Connection.

  3. Enter a Name for the connection.

  4. Choose the Category from the drop-down list.

  5. Select the Provider from the drop-down list.
    The other options change based on your selection.

  6. See the Provider parameters table for details.

  7. To test the connection, click the Test button at the top of the dialog box. If the status is Invalid, click Invalid to learn more. For JDBC-based providers, an equivalent standalone check is sketched after these steps.

  8. Click Save when you are finished.
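
If the Test button reports Invalid for a JDBC-based provider, it can help to verify the same connection string outside Kyvos. The following is a minimal standalone sketch in Java, assuming a generic JDBC provider whose driver jar is on the classpath; the URL, user name, and password shown are hypothetical placeholders, not values or APIs defined by Kyvos.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ConnectionPreCheck {
    public static void main(String[] args) {
        // Hypothetical JDBC URL and credentials; substitute your provider's values.
        String url = "jdbc:hive2://host:10001/default;transportMode=http;httpPath=cliservice";
        String user = "kyvos_user";   // hypothetical
        String password = "secret";   // hypothetical
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            // A trivial query succeeding is roughly what a Valid status indicates.
            System.out.println(rs.next() ? "Valid" : "Invalid");
        } catch (SQLException e) {
            // The driver's error message is comparable to the detail shown
            // when you click Invalid in the dialog.
            System.out.println("Invalid: " + e.getMessage());
        }
    }
}

If this check fails with the same error as the Test button, the problem lies in the connection string or credentials rather than in Kyvos.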

...

  • Kyvos allows you to share and grant access to data connections. 

  • To allow users or groups to access a data connection and view connection information, refer to Sharing an object.

Separate read and build process connections

On non-SSH and Livy-enabled Azure and GCP clusters, Kyvos allows you to create separate build process and read connections for launching build process jobs and reading data.

Kyvos requires a build process cluster to launch build process jobs and create aggregates; a single build process connection can be configured in Kyvos during deployment. With this release, users can create a separate read connection for the data on which the semantic model is to be created, such as Snowflake, BigQuery, and so on.

Users need to create a base build process connection during deployment and can subsequently add a new build process connection through the Kyvos web portal. The new connection must have the same cluster version and the same Hadoop, Hive, and Spark versions.

When you define a new connection on non-SSH and Livy-enabled Azure and GCP clusters, you see an option to choose whether it is a Build Process connection.

By default, all Warehouse connections are Read connections, as they can only be used to read data for registering files.
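
For illustration only, the sketch below shows what such a read connection points at, using Snowflake's standard JDBC driver (net.snowflake.client.jdbc.SnowflakeDriver, auto-registered when the snowflake-jdbc jar is on the classpath). The account, user, database, and warehouse names are hypothetical; this is not how Kyvos connects internally, only a way to confirm that the endpoint and credentials you enter for a read connection actually work.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class SnowflakeReadCheck {
    public static void main(String[] args) throws Exception {
        // Standard Snowflake JDBC URL format; the account name is hypothetical.
        String url = "jdbc:snowflake://myaccount.snowflakecomputing.com/";
        Properties props = new Properties();
        props.put("user", "kyvos_reader");  // hypothetical
        props.put("password", "secret");    // hypothetical
        props.put("db", "SALES_DB");        // hypothetical database holding the source data
        props.put("warehouse", "READ_WH");  // hypothetical virtual warehouse
        try (Connection conn = DriverManager.getConnection(url, props);
             Statement st = conn.createStatement();
             // CURRENT_VERSION() is a built-in Snowflake function; a cheap sanity query.
             ResultSet rs = st.executeQuery("SELECT CURRENT_VERSION()")) {
            while (rs.next()) {
                System.out.println("Connected to Snowflake " + rs.getString(1));
            }
        }
    }
}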

...

Refer to the Multiple build processes and read connections section for more information.

...

To use Spark or Hive for raw data querying, the Is Default SQL Engine checkbox must be disabled for the other raw data connections (for example, Presto and Snowflake), so that the Hadoop connection serves as the default SQL engine.

For a Spark connection, you need to provide the Spark Thrift Server connection string URL.
For example: jdbc:hive2://10.260.431.111:10001/default;transportMode=http;httpPath=cliservice;principal=hive/intelli-i0056.kyvostest.com@KYVOSTEST.COM
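
As a hedged illustration, the Java sketch below opens the example URL above with the Apache Hive JDBC driver (org.apache.hive.jdbc.HiveDriver, from the hive-jdbc jar). Because the URL carries a Kerberos principal, the sketch assumes a valid Kerberos ticket (for example, obtained with kinit) already exists in the calling environment; it is a connectivity check under those assumptions, not Kyvos's internal mechanism.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThriftServerCheck {
    public static void main(String[] args) throws Exception {
        // The example URL from this page; the principal parameter means the
        // driver authenticates via Kerberos, so user/password are not passed.
        String url = "jdbc:hive2://10.260.431.111:10001/default;"
                + "transportMode=http;httpPath=cliservice;"
                + "principal=hive/intelli-i0056.kyvostest.com@KYVOSTEST.COM";
        Class.forName("org.apache.hive.jdbc.HiveDriver"); // hive-jdbc must be on the classpath
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW DATABASES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}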