
To use the Kyvos Copilot feature, you need to configure GenAI connection details through Kyvos Manager. The default configurations are auto-populated, and you can modify these settings as needed.

To configure the GenAI connection, perform the following steps: 

  1. On the navigation pane, click Kyvos and Ecosystem > GenAI Configuration. The page displays the GenAI connection details.


  2. Enter the details as described below. A sketch showing how these settings map onto a typical provider request follows the table.

Parameter/Field

Description

Enable

Select this checkbox to enable the GenAI connection. This enables the Kyvos Copilot feature.

Connection Name

A unique name that identifies your GenAI connection.

Provider

The name of the GenAI provider the system will use to generate output.

URL

The URL of the provider-specific endpoint for generating output.

Authentication Key

A unique key for authenticating and authorizing requests to the provider's endpoint.

Usage

Select one of the following:

  • MDX Calculations

  • MDX Queries

Specify whether you want to use the feature for MDX calculations or MDX queries. Select MDX Calculations to create expressions for calculated measures or members; select MDX Queries to query the semantic model. Note that:

  • If you select the MDX calculations option, the Ask Kyvos Copilot option will appear in the Calculated Measures dialog box for creating expressions for calculated measures and members.

  • If you select the MDX queries option, the Ask Kyvos Copilot option will appear in the Query Playground page.

Model

The name of the GenAI LLM used to generate the output.

Temperature

Controls the randomness of the output. Lowering the temperature results in less random completions; as it approaches zero, the model becomes deterministic and repetitive. It is recommended to alter either this or Top P, but not both.

  • Default Value: 1

  • Minimum Value: 0

  • Maximum Value: 2

Maximum length

Specifies the maximum number of tokens shared between the prompt and output, which varies by model. One token is roughly four characters for English text.

  • Default Value: 1

  • Minimum Value: 0

  • Maximum Value: 10000

Top P

Controls diversity via nucleus sampling. If set to 0.5, half of all likelihood-weighted options are considered. It is recommended to adjust either this or the temperature parameter, but not both.

  • Default Value: 1

  • Minimum Value: 0

  • Maximum Value: 1

Frequency penalty

Specifies a number between -2.0 and 2.0. Positive values penalize new tokens based on their frequency in the existing text. This reduces the likelihood of the model repeating the same line verbatim.

  • Default Value: 1

  • Minimum Value: -2

  • Maximum Value: 2

Presence penalty

Specifies a number between -2.0 and 2.0. Positive values penalize new tokens based on their appearance in the text so far, thereby increasing the model's likelihood of discussing new topics.

  • Default Value: 1

  • Minimum Value: -2

  • Maximum Value: 2
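The exact request Kyvos sends depends on the selected provider, but as a rough illustration, the fields above correspond to the parameters of a typical OpenAI-compatible chat-completions call. The endpoint path, payload field names, and sample values in the sketch below are assumptions for illustration only, not Kyvos internals.

```python
# Illustrative only: how the GenAI Configuration fields might map onto an
# OpenAI-compatible chat-completions request. All values are sample placeholders.
import requests

URL = "https://api.openai.com/v1/chat/completions"   # "URL" field (provider-specific endpoint)
AUTH_KEY = "<your-authentication-key>"                # "Authentication Key" field

payload = {
    "model": "gpt-4",            # "Model" field
    "temperature": 1,            # "Temperature" (0-2); lower values give less random completions
    "max_tokens": 1000,          # "Maximum length"; tokens shared between prompt and output
    "top_p": 1,                  # "Top P" (0-1); nucleus sampling; tune this or temperature, not both
    "frequency_penalty": 0,      # "Frequency penalty" (-2.0 to 2.0); discourages verbatim repetition
    "presence_penalty": 0,       # "Presence penalty" (-2.0 to 2.0); encourages new topics
    "messages": [
        {"role": "user", "content": "Write an MDX expression for year-over-year sales growth."}
    ],
}

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {AUTH_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
print(response.json())
```

The sample values here (for example, max_tokens of 1000 and zero penalties) are only placeholders; when configuring the actual connection, use the defaults and ranges listed in the table above.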

  3. Click Save.
