
Prerequisites for creating an AWS SQL Warehouse connection

...

  1. Unity Catalog must be enabled on your Databricks cluster.

  2. To create a storage credential for connecting to AWS S3, refer to Databricks documentation.

  3. Test the newly created external location by clicking the 'Test connection' button. This validates the connection with the external location path. On the Permissions tab, grant your user the CREATE EXTERNAL TABLE privilege.

  4. Go to the semantic model's Advanced Properties, and add the following properties:

    1. kyvos.sqlwarehouse.catalog: Enter the temp catalog name to use for creating the parquet table.

    2. kyvos.sqlwarehouse.catalogtempdb: Enter the temp database name to use for creating the parquet table, for example, kyvos.sqlwarehouse.tempdb.

  5. You must have CREATE TABLE and USE SCHEMA permissions on the temp catalog (one way to grant them is shown in the sketch below).
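
How these privileges are granted depends on how your Unity Catalog is administered, but as a minimal sketch, the CREATE TABLE and USE SCHEMA permissions from step 5 could be granted by running standard Unity Catalog GRANT statements over an open JDBC connection to the SQL Warehouse (see the connection sketch later on this page). The catalog name kyvos_tmp_catalog, schema kyvos_tmp_schema, and the principal email are hypothetical placeholders, not values from this guide:

  import java.sql.Connection;
  import java.sql.SQLException;
  import java.sql.Statement;

  class TempCatalogGrants {
      // Grants the temp catalog/schema privileges required by prerequisite 5.
      // "conn" is an open JDBC Connection to the Databricks SQL Warehouse.
      static void grantTempCatalogAccess(Connection conn) throws SQLException {
          try (Statement stmt = conn.createStatement()) {
              // Placeholder catalog, schema, and user; substitute your own values.
              stmt.execute("GRANT USE CATALOG ON CATALOG kyvos_tmp_catalog TO `user@example.com`");
              stmt.execute("GRANT USE SCHEMA ON SCHEMA kyvos_tmp_catalog.kyvos_tmp_schema TO `user@example.com`");
              stmt.execute("GRANT CREATE TABLE ON SCHEMA kyvos_tmp_catalog.kyvos_tmp_schema TO `user@example.com`");
          }
      }
  }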

Creating AWS Databricks SQL Warehouse connection

You can create an AWS Databricks SQL Warehouse connection with no-Spark

...

.


Note

The steps for working with Databricks SQL Warehouse using a Personal Access Token are the same.

  • Supported for AWS and Azure.

  • Supported only with a premium workspace.

  • OAuth connectivity with Databricks SQL Warehouse is not supported for AWS no-Spark.

  • Only the serverless type of Databricks SQL Warehouse cluster is supported.

  • Unity Catalog must be enabled.

To create an AWS SQL Warehouse connection for processing semantic models with No-Spark, perform the following steps.

...

Parameter: Description

Name: Enter SanityConnection as the connection name.

Category: Select the Warehouse option.

Provider: Select the Generic option.

Driver: Enter the driver class as com.databricks.client.jdbc.Driver.

URL: Enter the Databricks JDBC URL.

Username: Enter token as the username.

Password: Enter the Databricks personal access token.

Authentication Type: Select Personal Access Token.

Use as Source: This checkbox is auto-selected. Enter JDBC as the Spark Read Method.

Is Default SQL Engine: To enable the connection for raw data, select the Is Default SQL Engine checkbox to set this connection as the default SQL engine.

Properties: Click Properties to view or set properties.

Catalog Enabled: Select this checkbox to list the catalogs created in the workspace.
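
As a rough illustration of how these parameters map onto a JDBC client, the sketch below opens a connection to a Databricks SQL Warehouse using the driver class, the literal token username, and a personal access token as described above. The workspace host, HTTP path, and the DATABRICKS_PAT environment variable are placeholders, not values from this guide:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;
  import java.util.Properties;

  public class SqlWarehouseConnectionTest {
      public static void main(String[] args) throws Exception {
          // Placeholder workspace host; copy the real value from the SQL Warehouse
          // "Connection details" tab in the Databricks workspace.
          String url = "jdbc:databricks://dbc-xxxxxxxx-xxxx.cloud.databricks.com:443";

          Properties props = new Properties();
          props.put("httpPath", "/sql/1.0/warehouses/xxxxxxxxxxxxxxxx"); // placeholder HTTP path
          props.put("AuthMech", "3");                                    // 3 = Personal Access Token authentication
          props.put("UID", "token");                                     // username is the literal word "token"
          props.put("PWD", System.getenv("DATABRICKS_PAT"));             // the personal access token itself

          Class.forName("com.databricks.client.jdbc.Driver");            // same driver class as the Driver field above

          try (Connection conn = DriverManager.getConnection(url, props);
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SELECT current_catalog()")) {
              if (rs.next()) {
                  System.out.println("Connected; current catalog: " + rs.getString(1));
              }
          }
      }
  }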

...

  1. From the Toolbox, click Setup, then choose Connections.

  2. From the Actions menu ( ⋮ ), click Add Connection.

  3. Enter a Name for the connection.

  4. From the Category drop-down list, select the Warehouse option.

  5. From the Provider list, select the DatabricksSQL option.

  6. The Driver Class field is prepopulated.

  7. Enter the Databricks SQL Warehouse JDBC URL. For more information, see Databricks documentation.

  8. Enter token as the Username.

  9. Enter the Databricks SQL Personal Access Token for the Databricks SQL workspace in the Password field.

  10. The Use as Source checkbox is disabled as this is a source connection.

  11. To use this connection as the default SQL engine, select the Is Default SQL Engine checkbox.

  12. Select the Catalog Enabled checkbox. Enabling it is mandatory for Databricks SQL (a verification sketch follows these steps).

  13. Click the Properties link to view or set properties.

  14. After configuring the settings, click the Save button. 

  15. To refresh connections, click the Actions menu ( ⋮ ) at the top of the Connections column and select Refresh.
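
Because Catalog Enabled is mandatory for Databricks SQL (step 12), a simple sanity check, assuming the JDBC connection sketched earlier, is to list the catalogs visible through the warehouse; SHOW CATALOGS is standard Databricks SQL:

  import java.sql.Connection;
  import java.sql.ResultSet;
  import java.sql.SQLException;
  import java.sql.Statement;

  class CatalogListing {
      // Prints every catalog visible to the connection; "conn" is an open JDBC
      // Connection to the SQL Warehouse, obtained as in the earlier sketch.
      static void printVisibleCatalogs(Connection conn) throws SQLException {
          try (Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SHOW CATALOGS")) {
              while (rs.next()) {
                  System.out.println("Visible catalog: " + rs.getString(1));
              }
          }
      }
  }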

...