| Common (shared) cluster | Dedicated cluster |
| --- | --- |
| All Spark jobs are submitted to a common cluster; the virtual machines are shared across jobs. | Each Spark job is submitted to a new, dedicated cluster; the virtual machines are not shared across jobs. |
| Better control over the total number of virtual machines, as it is independent of the build process jobs, semantic model design, etc. | Many Kyvos jobs do not fully utilize the capacity of their virtual machines, so the system may underutilize the hardware and incur higher costs. |
| Lower hardware costs due to optimal usage of virtual machines. | When multiple jobs run in parallel, more virtual machines may be in use simultaneously, making the system more vulnerable to the following errors: |
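The cost difference between the two modes comes down to bin-packing: a shared cluster lets several small jobs fill one VM, while dedicated clusters round every job up to at least one whole VM. The sketch below illustrates this with hypothetical job demands (the numbers and the model are illustrative assumptions, not figures from Kyvos):

```python
import math

# Hypothetical workload: each job needs some fraction of one VM's capacity.
job_demands = [0.3, 0.2, 0.4, 0.1, 0.5]

# Shared cluster: jobs pack onto common VMs, so the VM count is the
# ceiling of the summed demand.
shared_vms = math.ceil(sum(job_demands))

# Dedicated clusters: every job gets at least one whole VM,
# however little of it the job actually uses.
dedicated_vms = sum(math.ceil(d) for d in job_demands)

print(shared_vms)     # 2 VMs cover all five jobs
print(dedicated_vms)  # 5 VMs, largely idle
```

Under this toy model the dedicated mode provisions 5 VMs where 2 would suffice, which is the underutilization the table describes; real allocation depends on executor sizing and instance types.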