...
Selective dimension materialization: Use this property to control which dimension combinations are pre-aggregated during a semantic model process. You can specify the dimension names to materialize in the property dialog. Within each dimension combination, dimensions are kept in a specific order (defined in the property kyvos.build.dimension.order).
When you choose dimension combinations to materialize using this property, only the combinations starting with the selected dimension(s) are materialized, which reduces semantic model size and process time.
Reducing aggregation can impact query performance, so carefully identify which dimensions can be selectively materialized.
Selective hierarchy materialization: Use this property to specify the highest level of a dimension to be materialized, allowing you to reduce semantic model process time and size. Levels higher than the specified level are aggregated at run time. Dimensions not specified in this property are materialized based on default settings. You can also materialize individual levels if needed. The property value takes effect after a full semantic model process.
For example:
Consider a TIME dimension with four hierarchy levels: YEAR, QTR, MONTH, and DAY. If you select QTR, Kyvos pre-aggregates only QTR and DAY (DAY being the lowest level). A query on YEAR is then served from QTR, and a query on MONTH is served from DAY, with aggregation happening at run time. To materialize YEAR as well, you can select it individually. A short sketch of both materialization rules follows the next item.
Recommendation Aggregates Configurations: Click Recommend me to view a recommended strategy, including base and subpartitions, recommendation details, and reasons. You can also get aggregation strategy recommendations; these are based on how the data is being used for querying. Kyvos automatically recommends aggregates based on its internal logic, which improves performance, and displays the number of recommendations. See Using aggregates for additional details.
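To make the two materialization rules above concrete, here is a minimal Python sketch. It is illustrative only, not Kyvos internals: the dimension names and ordering are hypothetical, and it simply models the stated behavior that only combinations starting with a selected dimension are materialized, and that queries on non-materialized levels are served from the nearest materialized level below them.

```python
from itertools import combinations

# Hypothetical dimension order, as might be defined by kyvos.build.dimension.order.
order = ["TIME", "PRODUCT", "GEOGRAPHY", "CUSTOMER"]
selected = {"TIME"}  # dimensions chosen for selective materialization

# Every dimension combination keeps the defined order internally.
all_combos = [c for n in range(1, len(order) + 1)
              for c in combinations(order, n)]

# Only combinations starting with a selected dimension get pre-aggregated.
materialized = [c for c in all_combos if c[0] in selected]
print(f"{len(materialized)} of {len(all_combos)} combinations materialized")
# -> 8 of 15 combinations materialized

# Selective hierarchy materialization: QTR is selected; DAY (the leaf level)
# is always kept. YEAR and MONTH are aggregated at run time from the nearest
# materialized level below them.
levels = ["YEAR", "QTR", "MONTH", "DAY"]  # highest -> lowest
materialized_levels = {"QTR", "DAY"}

def serving_level(query_level):
    """Nearest materialized level at or below the queried level (illustrative)."""
    for lvl in levels[levels.index(query_level):]:
        if lvl in materialized_levels:
            return lvl

print(serving_level("YEAR"))   # QTR -- rolled up to YEAR at query time
print(serving_level("MONTH"))  # DAY -- rolled up to MONTH at query time
```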
Optimization through Spark properties
...
kyvos.build.spark.levelJob.tasks: Use this property to configure the number of tasks for the reduce stage of the Level1 and Level_DistCount jobs when a full and/or incremental semantic model process is executed through the Spark engine. You can set this property to any positive integer; that number of tasks is then launched for the reduce stage to increase parallelism, provided the required resources are available on the cluster. The default value is determined automatically based on data loads.
spark.dynamicAllocation.minExecutors: Use this Spark property to set the lower bound for the number of executors when dynamic allocation is enabled. See details.
spark.yarn.executor.memoryOverhead: Use this Spark property to set the amount of off-heap memory (in megabytes) allocated per executor. This memory accounts for VM overheads, interned strings, other native overheads, etc., and tends to grow with executor size (typically 6-10%). See details.
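As a rough worked example (an illustration, not a Kyvos recommendation): with an 8 GB (8192 MB) executor, the 6-10% guideline above works out to roughly 490-820 MB of overhead; Spark's own YARN default is max(384 MB, 10% of executor memory), which comes to about 819 MB in this case.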
spark.driver.memory: Use this property to set the amount of memory used by the driver process, i.e., where the SparkContext is initialized (e.g., 1g, 2g).
kyvos.spark.executor.memory.level1: Use this property to set the Spark executor memory for the level1 job(s) launched during a full and/or incremental semantic model process.
kyvos.spark.executor.cores.level1: Use this property to set the Spark executor cores for the level1 job(s) launched during a full and/or incremental semantic model process.
spark.executor.memory: Use this property to set the amount of memory used per executor process (e.g., 2g, 8g). See details.
spark.executor.cores: Use this property to set the number of cores used on each executor. This property applies to YARN and standalone mode only in Spark. In standalone mode, setting this parameter allows an application to run multiple executors on the same worker, provided there are enough cores on that worker; otherwise, only one executor per application runs on each worker. See details.
spark.dynamicAllocation.maxExecutors: Use this property to set the upper bound for the number of executors when dynamic allocation is enabled. See details. A consolidated configuration sketch follows below.
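As a consolidated illustration, the PySpark snippet below shows how the Spark properties above might be set programmatically. All values are hypothetical and should be tuned to your cluster. Two caveats: Kyvos-specific properties (kyvos.*) are normally set through the Kyvos property dialogs rather than in Spark code, and spark.driver.memory generally must be supplied before the driver JVM starts (e.g., via spark-submit), so it appears here for completeness only.

```python
from pyspark.sql import SparkSession

# Illustrative values only -- tune to your cluster resources.
spark = (
    SparkSession.builder
    .appName("semantic-model-process")                     # hypothetical app name
    .config("spark.executor.memory", "8g")                 # memory per executor
    .config("spark.executor.cores", "4")                   # cores per executor
    .config("spark.yarn.executor.memoryOverhead", "820")   # MB, ~10% of 8g
    .config("spark.driver.memory", "2g")                   # effective only if set before JVM start
    .config("spark.dynamicAllocation.minExecutors", "2")   # lower bound with dynamic allocation
    .config("spark.dynamicAllocation.maxExecutors", "20")  # upper bound with dynamic allocation
    .getOrCreate()
)
```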
Note: For Azure Databricks-based environments, you need to explicitly define/modify the Spark properties in the Databricks Advanced Spark Configuration.
...