Applies to: Kyvos Enterprise, Kyvos Cloud (SaaS on AWS), Kyvos AWS Marketplace, Kyvos Azure Marketplace, Kyvos GCP Marketplace, Kyvos Single Node Installation (Kyvos SNI)
Prerequisites
Before you start the automated installation for Kyvos on AWS, ensure you have the following information.
Important
Use the installation files that match your Kyvos release: 2024.3.1 AWS Installation Files or 2024.3.2 AWS Installation Files.
AWS CloudFormation template. Contact Kyvos support to get your custom template. Alternatively, download the default template file, or create a template as per your requirements.
The CloudFormation template can be deployed by the logged-in user or by a role. The logged-in user must have the required policies given in the aws-console-user-iam-policy.json file.
EC2 key pair, consisting of a private key and a public key. You can create the key pair if needed, as in the sketch below.
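If you need to create the key pair, a minimal CloudFormation sketch such as the following can be used; the resource name and key name are hypothetical. When CloudFormation generates the key pair this way, AWS saves the private key in SSM Parameter Store under /ec2/keypair/<key-pair-id>.

```json
{
  "Resources": {
    "KyvosKeyPair": {
      "Type": "AWS::EC2::KeyPair",
      "Metadata": { "Note": "Hypothetical example; replace the key name as needed." },
      "Properties": {
        "KeyName": "kyvos-key-pair"
      }
    }
  }
}
```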
Networking requirements:
Use the Network CloudFormation template to create network resources (VPC, Subnet, and Security Group) automatically.
If you want to deploy your network with NAT Gateway, use the NATGateway Template (vpc_nat.json file).
OR
If you want to use existing network resources, perform the following steps in your VPC.
You must create VPC Endpoints within your VPC to connect with the AWS services (a CloudFormation sketch follows the table below). Otherwise, the subnet must have internet access through a NAT Gateway.
List of VPC Endpoints for AWS services required by Kyvos:
| AWS Service Name | Description/Purpose | VPC Endpoint Name |
|---|---|---|
| CloudWatch Logs | Used to send bootstrap logs of the EC2 machines to CloudWatch Logs. | com.amazonaws.{AWS-REGION}.logs |
| Glue | Used to connect to Glue from the Kyvos BI Server and fetch metadata of the stored tables. | com.amazonaws.{AWS-REGION}.glue |
| CloudFormation | Used by Kyvos Manager at deployment time to validate and get details from the AWS stack in CloudFormation. | com.amazonaws.{AWS-REGION}.cloudformation |
| CloudWatch Events | Used to schedule events in CloudWatch Events for scheduled starting of the Kyvos BI Server. | com.amazonaws.{AWS-REGION}.events |
| S3 | Used to connect to an S3 bucket for reading raw data and writing metadata. | com.amazonaws.{AWS-REGION}.s3 |
| RDS | Used for scheduled start/stop of the Kyvos cluster along with RDS. | com.amazonaws.{AWS-REGION}.rds |
| EC2 | Used by Kyvos Manager to describe EC2 instances, and by the Kyvos BI Server for scheduled start/stop of Query Engines. | com.amazonaws.{AWS-REGION}.ec2 |
| Secrets Manager | Used by the Kyvos BI Server to get the passwords stored in AWS Secrets Manager. | com.amazonaws.{AWS-REGION}.secretsmanager |
Info: In the table above, replace {AWS-REGION} with the region in which you are deploying Kyvos.
AWS does not provide a VPC endpoint for the Cost Explorer service, so the Kyvos Resource Usage feature will not work without internet access.
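If you are creating the endpoints yourself rather than through the Network template, an interface endpoint can be declared in a CloudFormation template as in the following sketch, shown here for Glue; the VPC, subnet, and security group IDs are placeholders. Note that the S3 endpoint is typically a Gateway endpoint, which takes RouteTableIds instead of subnets and security groups.

```json
{
  "Resources": {
    "GlueInterfaceEndpoint": {
      "Type": "AWS::EC2::VPCEndpoint",
      "Metadata": { "Note": "Illustrative sketch; replace the placeholder IDs and {AWS-REGION}." },
      "Properties": {
        "ServiceName": "com.amazonaws.{AWS-REGION}.glue",
        "VpcEndpointType": "Interface",
        "VpcId": "<your-vpc-id>",
        "SubnetIds": ["<your-subnet-id>"],
        "SecurityGroupIds": ["<your-security-group-id>"],
        "PrivateDnsEnabled": true
      }
    }
  }
}
```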
Permission requirements:
You can create IAM roles using the CloudFormation template (automated_deployment_iam_role.json file).
OR
Create IAM roles for:
- EC2, to be attached to all Kyvos instances. This role contains all the permissions required by Kyvos Services and Kyvos Manager. See Details for permissions required for EC2.
- Lambda, to be attached to the Kyvos-created Lambda functions. This role contains all the permissions required by the Lambda functions to run.
Refer to /wiki/spaces/KD20233/pages/18448740 to create new roles.
Port 443 of the Databricks cluster must be accessible to Kyvos, as sketched below.
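One way to open this access, assuming both clusters sit behind security groups, is an ingress rule like the following sketch; both group IDs are placeholders.

```json
{
  "Resources": {
    "DatabricksHttpsFromKyvos": {
      "Type": "AWS::EC2::SecurityGroupIngress",
      "Properties": {
        "Description": "Hypothetical example: allow the Kyvos security group to reach Databricks on port 443",
        "GroupId": "<databricks-security-group-id>",
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "SourceSecurityGroupId": "<kyvos-security-group-id>"
      }
    }
  }
}
```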
Create Databricks-instanceprofile-role with the following permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrantCatalogAccessToGlue",
      "Effect": "Allow",
      "Action": [
        "glue:BatchCreatePartition",
        "glue:BatchDeletePartition",
        "glue:BatchGetPartition",
        "glue:CreateDatabase",
        "glue:CreateTable",
        "glue:CreateUserDefinedFunction",
        "glue:DeleteDatabase",
        "glue:DeletePartition",
        "glue:DeleteTable",
        "glue:DeleteUserDefinedFunction",
        "glue:GetDatabase",
        "glue:GetDatabases",
        "glue:GetPartition",
        "glue:GetPartitions",
        "glue:GetTable",
        "glue:GetTables",
        "glue:GetUserDefinedFunction",
        "glue:GetUserDefinedFunctions",
        "glue:UpdateDatabase",
        "glue:UpdatePartition",
        "glue:UpdateTable",
        "glue:UpdateUserDefinedFunction"
      ],
      "Resource": ["*"]
    },
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "*"
    }
  ]
}
```
S3 Bucket permissions
If you want to use an existing S3 bucket and IAM role, or if you want to read data from an S3 bucket other than where Kyvos is deployed, then the IAM role must have the following permissions on the S3 bucket.
Here, replace:
<Bucket Name> with the name of your bucket.
<Lambda Role> with the name of your Lambda Role.
<EC2 Role> with the name of your EC2 Role.
```json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "Ec2LambdaRoleBucketPolicy",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<AWS Account ID>:role/<EC2 Role>",
          "arn:aws:iam::<AWS Account ID>:role/<Lambda Role>",
          "arn:aws:iam::<AWS Account ID>:role/Databricks-instanceprofile-role"
        ]
      },
      "Action": [
        "s3:PutAnalyticsConfiguration", "s3:GetObjectVersionTagging", "s3:ReplicateObject",
        "s3:GetObjectAcl", "s3:GetBucketObjectLockConfiguration", "s3:DeleteBucketWebsite",
        "s3:PutLifecycleConfiguration", "s3:GetObjectVersionAcl", "s3:DeleteObject",
        "s3:GetBucketPolicyStatus", "s3:GetObjectRetention", "s3:GetBucketWebsite",
        "s3:PutReplicationConfiguration", "s3:PutObjectLegalHold", "s3:GetObjectLegalHold",
        "s3:GetBucketNotification", "s3:PutBucketCORS", "s3:GetReplicationConfiguration",
        "s3:ListMultipartUploadParts", "s3:PutObject", "s3:GetObject",
        "s3:PutBucketNotification", "s3:PutBucketLogging", "s3:GetAnalyticsConfiguration",
        "s3:PutBucketObjectLockConfiguration", "s3:GetObjectVersionForReplication",
        "s3:GetLifecycleConfiguration", "s3:GetInventoryConfiguration", "s3:GetBucketTagging",
        "s3:PutAccelerateConfiguration", "s3:DeleteObjectVersion", "s3:GetBucketLogging",
        "s3:ListBucketVersions", "s3:RestoreObject", "s3:ListBucket",
        "s3:GetAccelerateConfiguration", "s3:GetBucketPolicy", "s3:PutEncryptionConfiguration",
        "s3:GetEncryptionConfiguration", "s3:GetObjectVersionTorrent", "s3:AbortMultipartUpload",
        "s3:GetBucketRequestPayment", "s3:GetObjectTagging", "s3:GetMetricsConfiguration",
        "s3:DeleteBucket", "s3:PutBucketVersioning", "s3:GetBucketPublicAccessBlock",
        "s3:ListBucketMultipartUploads", "s3:PutMetricsConfiguration", "s3:GetBucketVersioning",
        "s3:GetBucketAcl", "s3:PutInventoryConfiguration", "s3:GetObjectTorrent",
        "s3:PutBucketWebsite", "s3:PutBucketRequestPayment", "s3:PutObjectRetention",
        "s3:GetBucketCORS", "s3:GetBucketLocation", "s3:ReplicateDelete",
        "s3:GetObjectVersion", "s3:PutBucketTagging"
      ],
      "Resource": [
        "arn:aws:s3:::<Bucket Name>/*",
        "arn:aws:s3:::<Bucket Name>"
      ]
    }
  ]
}
```
You must have the Access Key and Secret Key to access the Kyvos bundle. Contact Kyvos Support for details.
Valid Kyvos license file.
Databricks cluster for semantic model and aggregate processing, with the following parameters (a consolidated sketch of these settings follows this list):
Databricks Runtime Version: Select Version 10.4 LTS (includes Apache Spark 3.2.1, Scala 2.12)
Autopilot Options: Select the following:
Enable autoscaling: Select this to enable autoscaling.
Terminate after ___ minutes of inactivity: Set the value as 30
Worker type: Recommended value r5.4xlarge
Min Workers: Recommended value 1
Max Workers: Recommended value 10
Driver Type: Recommended value r5.xlarge
Advanced options
By default, the Spot fall back to On-demand checkbox is selected. Kyvos recommends clearing this checkbox.
In the Spark configuration, define the following property for a Glue-based deployment:
spark.databricks.hive.metastore.glueCatalog.enabled true
To access a cross-account Glue catalog, also define the following property:
spark.hadoop.hive.metastore.glue.catalogid <GLUE_CATALOG_ID>
Then set the following Parquet-specific configuration properties:
spark.hadoop.spark.sql.parquet.int96AsTimestamp true
spark.sql.parquet.binaryAsString false
spark.sql.parquet.int96AsTimestamp true
spark.hadoop.spark.sql.parquet.binaryAsString false
spark.sql.caseSensitive false
spark.hadoop.spark.sql.caseSensitive false
spark.databricks.preemption.enabled false
You must change the Spark configuration to use the managed disk; do not change the configuration of the default root (/tmp) volume.
In the Spark configuration, add the spark.local.dir /local_disk0 property, where /local_disk0 is the managed disk.
Optionally, you can execute the df -h command from a notebook for verification.
Add the SPARK_WORKER_DIR=/local_disk0 value in the Environment variables.
Tags: A UsedBy tag with the value Kyvos is required to run the cluster.
Instance profile: Copy the Instance Profile ARN of the Databricks-instanceprofile-role created above.
In the Databricks console, go to Admin Console > Instance Profile and click Add Instance Profile. Paste the Instance Profile ARN in the text box.
Select the Skip Validation checkbox and then click Add.
In Cluster settings, go to Advanced Options, and select the instance profile created above in the Instance Profile field.
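For reference, the settings above can be expressed as a single request body in the style of the Databricks Clusters API. This is a hedged sketch: the cluster name and the account ID placeholder are illustrative, SPOT availability reflects clearing the spot-fallback checkbox, and each spark_conf entry mirrors a line from the Spark configuration list above.

```json
{
  "cluster_name": "kyvos-processing-cluster",
  "spark_version": "10.4.x-scala2.12",
  "node_type_id": "r5.4xlarge",
  "driver_node_type_id": "r5.xlarge",
  "autoscale": { "min_workers": 1, "max_workers": 10 },
  "autotermination_minutes": 30,
  "aws_attributes": {
    "availability": "SPOT",
    "instance_profile_arn": "arn:aws:iam::<AWS Account ID>:instance-profile/Databricks-instanceprofile-role"
  },
  "custom_tags": { "UsedBy": "Kyvos" },
  "spark_conf": {
    "spark.databricks.hive.metastore.glueCatalog.enabled": "true",
    "spark.hadoop.spark.sql.parquet.int96AsTimestamp": "true",
    "spark.sql.parquet.binaryAsString": "false",
    "spark.sql.parquet.int96AsTimestamp": "true",
    "spark.hadoop.spark.sql.parquet.binaryAsString": "false",
    "spark.sql.caseSensitive": "false",
    "spark.hadoop.spark.sql.caseSensitive": "false",
    "spark.databricks.preemption.enabled": "false",
    "spark.local.dir": "/local_disk0"
  },
  "spark_env_vars": { "SPARK_WORKER_DIR": "/local_disk0" }
}
```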
Databricks information:
Databricks Cluster ID: To obtain this ID, click the cluster name on the Clusters page in Databricks. The page URL looks like https://<databricks-instance>/#/settings/clusters/<cluster-id>; the cluster ID is the value after the /clusters/ component.
Databricks Cluster Organization ID: To obtain this ID, click the cluster name on the Clusters page in Databricks. The number after o= in the workspace URL is the organization ID. For example, if the workspace URL is https://westus.azuredatabricks.net/?o=7692xxxxxxxx, then the organization ID is 7692xxxxxxxx.
Databricks Role ARN: Use the ARN of the Databricks-instanceprofile-role created above. The ARN looks like this: arn:aws:iam::45653*******:role/AssumeRoleTest.
This Databricks role must have the iam:PassRole permission on the role you have created for the Databricks workspace; a sketch follows.
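A minimal sketch of that pass-role statement, assuming the account ID placeholder and the Databricks-instanceprofile-role name used earlier in this section:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPassingDatabricksInstanceProfileRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::<AWS Account ID>:role/Databricks-instanceprofile-role"
    }
  ]
}
```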
If using an existing Secrets Manager, ensure that the KYVOS-DATABRICKS-SERVICE-TOKEN-DefaultHadoopCluster01 key is added to it.
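Assuming the secret stores plain key/value pairs, the required entry would look like the following sketch; the value is a placeholder for your Databricks service token.

```json
{
  "KYVOS-DATABRICKS-SERVICE-TOKEN-DefaultHadoopCluster01": "<databricks-service-token>"
}
```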
Creating CloudFormation template