...


Note: This is only applicable to Azure.

  • Supported only with a premium workspace.

  • Supported only with Personal Access Token authentication.

Configuring Databricks SQL warehouse

  • To create a Databricks SQL warehouse, refer to the Microsoft documentation.

    1. Type: Only the Serverless type is supported.

    2. Unity Catalog must be enabled. If Unity Catalog is not enabled for your workspace, you will not see this option.

  • To configure Databricks SQL warehouses with SQL parameters, perform the following steps.

    1. Open your Databricks workspace.

    2. Click your username in the top bar of the workspace and select Admin Settings from the list.

    3. Click the SQL Warehouse Settings tab.

    4. In the SQL Configuration Parameters textbox, specify the following key-value pair:
      LEGACY_TIME_PARSER_POLICY LEGACY

    5. Click Save.
      For more information, see SQL configuration parameters and LEGACY_TIME_PARSER_POLICY. A quick way to verify the setting from a client is sketched below.
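
To confirm that the parameter is active on the warehouse, you can query it from any SQL client. The following is a minimal sketch using the databricks-sql-connector Python package; the hostname, HTTP path, and token are placeholder assumptions, and it assumes the warehouse reports configuration parameters through a SET statement.

  from databricks import sql

  # Verification sketch, assuming the databricks-sql-connector package
  # (pip install databricks-sql-connector) and a running SQL warehouse.
  # All connection values below are placeholders, not real endpoints.
  with sql.connect(
      server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
      http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
      access_token="<personal-access-token>",                        # placeholder
  ) as connection:
      with connection.cursor() as cursor:
          # SET <key> returns the current value of a configuration
          # parameter; expect LEGACY if step 4 above was applied.
          cursor.execute("SET LEGACY_TIME_PARSER_POLICY")
          print(cursor.fetchall())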

Creating a build and read connection

  1. From the Toolbox, click Setup, then choose Connections.

  2. From the Actions menu, click Add Connection.

  3. Enter a Name for the connection.

  4. From the Category drop-down list, select the BUILD option.

  5. From the Provider list, select the Databricks option, and provide the following:

    1. Databricks Cluster Id: Enter the ID of your Databricks cluster.
      To obtain this ID, click the Cluster Name on the Clusters page in Databricks. The page URL has the format https://<databricks-instance>/#/settings/clusters/<cluster-id>.

    2. Databricks Service Address: Enter the URL of your Databricks workspace.

    3. Databricks Personal Access Token: Enter the personal access token used to access and connect to your Databricks workspace. Refer to the Databricks documentation to get your token.

  6. To also use this connection as a read connection, select the Is Read checkbox.

  7. Provide the Hive Server JDBC URL. You can find this URL in Kyvos Manager: navigate to the Hadoop Ecosystem Configuration page > Hadoop Parameters and copy the value of the kyvos.hiveserver2.jdbc.url parameter, which has the format:
    jdbc:spark://adb-<Databricks ID>.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/<organization ID>/<Databricks cluster ID>;AuthMech=3;
    Ensure that you update the Databricks ID, Organization ID, and Databricks Cluster ID according to the cluster that you are working on. A sketch for sanity-checking these values follows this list.

  8. To use this connection as the default SQL engine, select the Is Default SQL Engine checkbox, and from the SQL engine list, select Databricks SQL Warehouse.
    See the Provider parameters table for details.

  9. To use the Databricks SQL engine, provide the Server URL and Alternate Server URL.
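
Before saving the connection, you may want to sanity-check the Databricks Service Address, Personal Access Token, and Cluster ID from step 5, and preview the JDBC URL from step 7. The following is a minimal sketch against the Databricks REST API clusters/get endpoint using the Python requests package; all workspace values shown are placeholder assumptions.

  import requests

  # All workspace values below are placeholders; replace them with the
  # details of the cluster you are connecting to.
  SERVICE_ADDRESS = "https://adb-1234567890123456.7.azuredatabricks.net"
  CLUSTER_ID = "0123-456789-abcde123"
  ORGANIZATION_ID = "1234567890123456"
  TOKEN = "<personal-access-token>"

  # Verify the token and cluster ID in one call: a successful response
  # with a cluster state means both values are usable.
  response = requests.get(
      f"{SERVICE_ADDRESS}/api/2.0/clusters/get",
      headers={"Authorization": f"Bearer {TOKEN}"},
      params={"cluster_id": CLUSTER_ID},
  )
  response.raise_for_status()
  print("Cluster state:", response.json().get("state"))

  # Preview the Hive Server JDBC URL in the format shown in step 7.
  host = SERVICE_ADDRESS.removeprefix("https://")  # Python 3.9+
  jdbc_url = (
      f"jdbc:spark://{host}:443/default;transportMode=http;ssl=1;"
      f"httpPath=sql/protocolv1/o/{ORGANIZATION_ID}/{CLUSTER_ID};AuthMech=3;"
  )
  print(jdbc_url)

An authorization error typically points to the token; a "does not exist" error typically points to the cluster ID.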

...

Tip

If Databricks SQL Warehouse ROLAP queries fail, perform the following steps (a probe to confirm the fix is sketched after the list):

  1. Click your username in the top bar of the workspace and select Admin Settings from the list.

  2. Click the SQL Warehouse Settings tab. In the SQL Configuration Parameters textbox, specify one key-value pair per line.

  3. Separate the name of the parameter from its value using a space. For example, LEGACY_TIME_PARSER_POLICY LEGACY

  4. Click Save.
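
If queries still fail, you can check whether the parameter actually reached the warehouse with a probe query. The sketch below is a hypothetical example using the databricks-sql-connector package, not the exact query Kyvos generates; the connection values are placeholders.

  from databricks import sql

  # Probe sketch: the pattern letter 'Y' (week-based year) is rejected
  # by the default CORRECTED parser policy in Spark 3.0+ but tolerated
  # under LEGACY, so this query succeeds only after the parameter is
  # applied. The actual queries Kyvos pushes down may differ.
  with sql.connect(
      server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
      http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
      access_token="<personal-access-token>",                        # placeholder
  ) as connection:
      with connection.cursor() as cursor:
          cursor.execute("SELECT to_date('2020-01-01', 'YYYY-MM-dd')")
          print(cursor.fetchall())  # raises under CORRECTED, parses under LEGACY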

...
