...

You can also define a schedule for an incremental build so that the build job is triggered automatically at the scheduled time.

...

Read more: Semantic Model Process Types (/wiki/spaces/AS/pages/22386079)

I want to load incremental data in a built cube, what should I know?

...

For a dynamic schema where the table is created over two Parquet files, one with the data type int and the other double, add the property spark.sql.hive.convertMetastoreParquet=false in the Cube Advanced Properties.

The property controls whether to use the built-in Parquet reader and writer for Hive tables stored in the Parquet format, instead of Hive SerDe. The default value is true.
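A minimal sketch of the setting. In this context it goes under the Cube Advanced Properties as described above; the spark-submit form below is an assumption shown only to illustrate the equivalent Spark-level configuration:

```
# Cube Advanced Properties (key=value, as described above):
spark.sql.hive.convertMetastoreParquet=false

# Hypothetical equivalent at the Spark level, e.g. via spark-submit:
# spark-submit --conf spark.sql.hive.convertMetastoreParquet=false ...
```

With the value set to false, Spark reads the table through Hive SerDe rather than its built-in Parquet reader, which avoids the schema conflict between the int and double files.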

...