Applies to: Kyvos Enterprise, Kyvos Cloud (SaaS on AWS), Kyvos AWS Marketplace, Kyvos Azure Marketplace, Kyvos GCP Marketplace, and Kyvos Single Node Installation (Kyvos SNI)

...

Data connections are used to connect Kyvos to cluster resources, for example, to run computations, as well as to your data.

...

  • Computing connections such as Hadoop, Local Process Connection (for AWS), or Dataproc (for GCP). These represent a physical computation cluster used to submit jobs that process semantic models.

  • Data warehouse connections such as Snowflake, Teradata, Amazon Redshift, or Google BigQuery.

  • Raw data querying connections such as Presto, Spark, and Hive. You can also enable Snowflake and Athena for raw data querying. 

  • Repository connections for metadata, such as PostgreSQL or Amazon RDS for PostgreSQL. Example JDBC URL formats for several of these providers are shown after this list.
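
The JDBC URL formats below are typical for several of the providers above. The angle-bracket placeholders, ports, and parameters are illustrative assumptions that depend on your driver version and environment, not values mandated by Kyvos:

  jdbc:snowflake://<account>.snowflakecomputing.com/?db=<database>&warehouse=<warehouse>
  jdbc:redshift://<cluster-endpoint>:5439/<database>
  jdbc:presto://<host>:8080/<catalog>/<schema>
  jdbc:postgresql://<host>:5432/<database>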

...

  1. From the Toolbox, click Setup, then choose Connections.

  2. From the Actions menu, click Add Connection.

  3. Enter a Name for the connection.

  4. Choose the Category from the drop-down list.

  5. Select the Provider from the drop-down list.
    The other options change based on your selection.

  6. See the Provider parameters table for details.

  7. To test the connection, click the Test button at the top of the dialog box. If the status is Invalid, click Invalid to learn more; a quick way to pre-check the endpoint outside Kyvos is sketched after these steps.

  8. Click Save when you are finished.
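
If Test reports Invalid, running a plain JDBC check from a machine on the same network can help separate connectivity problems from Kyvos configuration problems. The following is a minimal sketch, not a Kyvos API: it assumes a PostgreSQL repository connection, placeholder host, database, and credentials, and the PostgreSQL JDBC driver on the classpath.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.SQLException;

  public class ConnectionCheck {
      public static void main(String[] args) {
          // Placeholder URL and credentials; substitute the values you
          // plan to enter in the Add Connection dialog.
          String url = "jdbc:postgresql://<repository-host>:5432/<database>";
          try (Connection conn = DriverManager.getConnection(url, "<user>", "<password>")) {
              System.out.println("Reachable: " + conn.getMetaData().getDatabaseProductName());
          } catch (SQLException e) {
              System.err.println("Connection failed: " + e.getMessage());
          }
      }
  }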

...

  • Kyvos allows you to share and grant access to data connections. 

  • To allow users or groups to access a data connection and view connection information, refer to Sharing an object.

Separate read and build connections

On non-SSH and Livy-enabled Azure and GCP clusters, Kyvos allows you to create separate build and read connections for launching build jobs and reading data.

...

When you define a new connection on non-SSH and Livy-enabled Azure and GCP clusters, you will see the option to choose whether it is a Build connection, as shown in the following figure.

...

To use Spark or Hive for raw data querying, disable the Is Default SQL Engine checkbox on the other raw data connections (for example, Presto and Snowflake) so that the Hadoop connection is used.

For a Spark connection, you need to provide the Spark Thrift Server JDBC connection string.
For example: jdbc:hive2://10.260.431.111:10001/default;transportMode=http;httpPath=cliservice;principal=hive/intelli-i0056.kyvostest.com@KYVOSTEST.COM
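
In this string, transportMode=http and httpPath=cliservice select HiveServer2's HTTP transport, and principal identifies the Kerberos service principal of the Hive service; these are standard Hive JDBC parameters. The following is a minimal, hypothetical sketch for verifying the Thrift server is reachable; it assumes a placeholder host, an unsecured cluster (the principal parameter is omitted; on a Kerberized cluster you would keep it and authenticate via Kerberos first), and the Hive JDBC driver on the classpath.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class ThriftServerCheck {
      public static void main(String[] args) throws Exception {
          // Placeholder host and port; mirrors the connection string
          // above minus the Kerberos principal.
          String url = "jdbc:hive2://<thrift-server-host>:10001/default;"
                  + "transportMode=http;httpPath=cliservice";
          try (Connection conn = DriverManager.getConnection(url);
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
              while (rs.next()) {
                  System.out.println(rs.getString(1));
              }
          }
      }
  }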