Parameter/Field | Description |
---|---|
Connection Name | A unique name that identifies your GenAI connection. |
Provider | The name of the GenAI provider the system will use to generate output. |
URL | The LLM Service URL of the provider-specific endpoint for generating output. For Azure |
API EndPoint | Specify the endpoint to be used to generate AI-powered conversational responses. |
Authentication Key | A unique key for authenticating and authorizing requests to the provider's endpoint. |
Model | The name of the GenAI LLM model used to generate the output. |
Is Model Fine Tuned | Select one of the following: Yes: Select this option if the model is fine-tuned. No: Select this option if the model is not fine-tuned. |
Embedding Connection | Specify the name of the GenAI embedding provider that the system will use to generate embeddings. |
Usage | Select one of the following: MDX Generations, Conversational Analytics. You can specify whether you want to use the feature for MDX calculations or MDX queries: to create expressions for calculated measures or members, select MDX calculations; to query the semantic model, select MDX queries. If you select the MDX calculations option, the Kyvos Dialogues option appears in the Calculated Measures dialog box for creating expressions for calculated measures and members. If you select the MDX queries option, the Kyvos Dialogues option appears on the Query Playground page. You can also set a default connection to be used for MDX calculations and MDX queries; to configure this, select the appropriate checkboxes as needed. |
Allow Sending Data for LLM | Select Yes or No to specify whether the generated questions should include values. |
Generate Content | Select Title, Summary, or Key Insight to determine the content to be generated. NOTE: For summary and key insights, the value for 'Allow Sending Data for NLG' should be set to 'Yes'. |
Max Rows Summary | Enter the maximum number of rows to be used for the summary. NOTE: The default value is 100. |
Input Prompt Token Limit | Specify the maximum number of tokens allowed for a prompt in a single request to the current provider. NOTE: The default value is 8000. The minimum value is 0. |
Output Prompt Token Limit | Specify the maximum number of tokens shared between the prompt and output, which varies by model. One token is approximately four characters for English text. NOTE: The default value is 8000. The minimum value is 0. |
Max Retry Count | Specify the maximum number of retries attempted to obtain a correct query (see the retry sketch after this table). NOTE: The default value is 0. |
Summary Records Threshold | Specify the similarity threshold for query autocorrection. NOTE: The default value is 0.2. The minimum value is 2. |
LLM Temperature | Specify the LLM temperature, which controls the level of randomness in the output. Lowering the temperature results in less random completions; as the temperature approaches zero, the model's responses become increasingly deterministic and repetitive. It is recommended to adjust either the temperature or Top P, but not both simultaneously. |
Top P | Controls diversity via nucleus sampling; if set to 0.5, half of all likelihood-weighted options are considered. It is recommended to adjust either this or the temperature parameter, but not both (see the example request after this table). Default Value: 1. Minimum Value: 0. Maximum Value: 1. |
Frequency penalty | Specifies a number between -2.0 and 2.0. Positive values penalize new tokens based on their frequency in the existing text. This reduces the likelihood of the model repeating the same line verbatim. Default Value: 1. Minimum Value: -2. Maximum Value: 2. |
Presence penalty | Specifies a number between -2.0 and 2.0. Positive values penalize new tokens based on their appearance in the text so far, thereby increasing the model's likelihood of discussing new topics. Default Value: 1. Minimum Value: -2. Maximum Value: 2. |
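The connection and generation fields above map closely onto a typical OpenAI-compatible chat-completions request. The following Python sketch is illustrative only: the endpoint URL, model name, payload keys, and parameter values are assumptions for a generic OpenAI-style provider, not the exact request Kyvos sends.

```python
# Illustrative sketch, assuming a generic OpenAI-compatible chat-completions endpoint.
# The URL, model name, and values below are examples, not product defaults.
import requests

LLM_SERVICE_URL = "https://api.openai.com/v1/chat/completions"  # "URL" field
AUTHENTICATION_KEY = "<your-authentication-key>"                 # "Authentication Key" field

payload = {
    "model": "gpt-4o",        # "Model" field
    "messages": [
        {"role": "user", "content": "Summarize sales by region for 2024."}
    ],
    "max_tokens": 8000,       # corresponds to "Output Prompt Token Limit"
    "temperature": 0.2,       # "LLM Temperature": lower values give more deterministic output
    "top_p": 1,               # "Top P": adjust this or temperature, not both
    "frequency_penalty": 1,   # "Frequency penalty" (-2.0 to 2.0)
    "presence_penalty": 1,    # "Presence penalty" (-2.0 to 2.0)
}

response = requests.post(
    LLM_SERVICE_URL,
    headers={"Authorization": f"Bearer {AUTHENTICATION_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```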
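The Max Retry Count field describes a retry-until-valid loop around query generation. The sketch below illustrates that behavior under hypothetical helper names (generate_query and validate_query are placeholders, not Kyvos APIs).

```python
import random

MAX_RETRY_COUNT = 3  # "Max Retry Count" field; the documented default is 0 (no retries)

def generate_query(question: str) -> str:
    """Hypothetical placeholder for the LLM call that returns a candidate MDX query."""
    # Simulate a generation that is occasionally invalid, purely for illustration.
    return "SELECT ... ON COLUMNS FROM [SemanticModel]" if random.random() > 0.3 else "INVALID"

def validate_query(query: str) -> bool:
    """Hypothetical placeholder for checking the candidate query against the semantic model."""
    return query != "INVALID"

def ask_with_retries(question: str) -> str:
    """Regenerate the query until it validates, attempting at most MAX_RETRY_COUNT retries."""
    query = generate_query(question)
    for _ in range(MAX_RETRY_COUNT):
        if validate_query(query):
            break
        query = generate_query(question)
    return query

print(ask_with_retries("Total sales by region for 2024"))
```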