v2.7 Installing with Terraform Modules (Kubernetes)
The terraform-exostellar-modules are Exostellar's newest installation approach, designed to simplify the setup process.
Current Terraform Modules Version: v0.0.5 (`terraform-exostellar-modules-0.0.5.zip`)
Version Matrix

| Component | Version |
|---|---|
| Exostellar Terraform Modules | v0.0.5 |
| Exostellar Management Server (EMS) | (see release README) |
| Xspot Controller | (see release README) |
| Xspot Worker | (see release README) |
| Exostellar's Karpenter (xKarpenter) | (see release README) |
| Exostellar's CNI (Exo-CNI) | (see release README) |
| Exostellar's CSI (Exo-CSI) | (see release README) |
| Kubernetes | 1.29 to 1.32 |

For the exact version of each component, see the "Version Matrix" section in the root README.md of the v0.0.5 release archive.
Supported Kubernetes Versions
Exostellar supports Kubernetes versions 1.29 to 1.32.
For versions 1.29 to 1.31, Exostellar uses Amazon Linux 2 (AL2)–based images.
Starting from Kubernetes 1.32, Exostellar has migrated to Amazon Linux 2023 (AL2023)–based images, following AWS recommendations.
Because of this change in the underlying image generation, if you need to deploy on older Kubernetes versions (1.29–1.31), you can continue using Exostellar Terraform Modules v0.0.5. However, you’ll also need to use the compatible versions of xKarpenter and xspot, as listed in the version matrix. If the link is inaccessible, check the “Version Matrix” section in the root README.md of the v0.0.5 release archive.
It is recommended to use Kubernetes 1.32 for the latest compatibility and improvements.
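To confirm which Kubernetes version your cluster is running before picking a module version, you can query it with the AWS CLI (the cluster name and region below are placeholders):

```shell
# Print the EKS cluster's Kubernetes version (e.g. "1.32").
aws eks describe-cluster \
  --name "my-exostellar-cluster" \
  --region "us-east-1" \
  --query "cluster.version" \
  --output text
```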
Prerequisites
Before running the Terraform commands on terraform-exostellar-modules, ensure that your environment meets the following requirements:
1. Local tools
2. IAM Permissions
The following blocks contain the IAM (Identity and Access Management) policies that grant only the access required by the Terraform modules and the other operations in this document (such as the Helm login to public ECR).
Set the following IAM policy on the IAM user or role for deploying the Terraform modules:
Any modifications to this policy could cause internal application failures. Changes are not recommended, but if necessary, proceed with caution.
Ensure the local user has the following IAM permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:DeleteSecurityGroup",
        "ec2:DescribeImages",
        "ec2:DescribeInstanceAttribute",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeTags",
        "ec2:DescribeVolumes",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeVpcs",
        "ec2:ModifyInstanceAttribute",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:RunInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "eks:AssociateAccessPolicy",
        "eks:CreateAccessEntry",
        "eks:DeleteAccessEntry",
        "eks:DescribeAccessEntry",
        "eks:DescribeCluster",
        "eks:DisassociateAccessPolicy",
        "eks:ListAssociatedAccessPolicies"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:AddRoleToInstanceProfile",
        "iam:AttachRolePolicy",
        "iam:CreateInstanceProfile",
        "iam:CreateOpenIDConnectProvider",
        "iam:CreatePolicy",
        "iam:CreateRole",
        "iam:DeleteInstanceProfile",
        "iam:DeleteOpenIDConnectProvider",
        "iam:DeletePolicy",
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:DetachRolePolicy",
        "iam:GetInstanceProfile",
        "iam:GetOpenIDConnectProvider",
        "iam:GetPolicy",
        "iam:GetPolicyVersion",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListAttachedRolePolicies",
        "iam:ListInstanceProfilesForRole",
        "iam:ListPolicyVersions",
        "iam:ListRolePolicies",
        "iam:PassRole",
        "iam:PutRolePolicy",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:TagInstanceProfile",
        "iam:TagPolicy",
        "iam:TagRole"
      ],
      "Resource": "*"
    }
  ]
}
```
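As a sketch of how this policy might be attached with the AWS CLI — the policy file name, user name, and account ID below are illustrative placeholders, not values from this guide:

```shell
# Create a managed policy from the JSON document above
# (saved locally as exostellar-terraform-policy.json).
aws iam create-policy \
  --policy-name "exostellar-terraform-policy" \
  --policy-document file://exostellar-terraform-policy.json

# Attach it to the IAM user that will run the Terraform commands.
aws iam attach-user-policy \
  --user-name "terraform-deployer" \
  --policy-arn "arn:aws:iam::<aws-account-id>:policy/exostellar-terraform-policy"
```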
Set the following IAM policy on the user or role that the AWS CLI uses to fetch the auth token authorizing Helm to pull the Exostellar Helm charts from the public ECR:
Ensure the local user has the following IAM permissions for fetching the auth token:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr-public:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sts:GetServiceBearerToken"
      ],
      "Resource": "*"
    }
  ]
}
```
The following IAM policies are provisioned by the Terraform modules to securely grant the necessary access to different Exostellar components.
These policies are for informational purposes only; the Terraform modules create them as part of the deployment, and no action is required from the user.
terraform-exostellar-modules creates and uses the following policies for respective components:
Exostellar Management Server (EMS)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:StopInstances",
        "ec2:DescribeSpotPriceHistory",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeTags",
        "ec2:CreateTags",
        "ec2:CreateFleet",
        "ec2:CreateLaunchTemplate",
        "ec2:DeleteLaunchTemplate",
        "ec2:TerminateInstances",
        "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses",
        "ec2:AttachNetworkInterface",
        "ec2:DetachNetworkInterface",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:ModifyNetworkInterfaceAttribute",
        "ec2:DescribeRegions"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "iam:ListRoles",
        "iam:ListInstanceProfiles",
        "iam:PassRole",
        "iam:GetRole"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeImages",
        "ec2:DescribeImageAttribute",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeInstanceTypeOfferings",
        "iam:GetInstanceProfile",
        "iam:SimulatePrincipalPolicy",
        "sns:Publish",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVolume",
        "ec2:DescribeVolumes",
        "ec2:AttachVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:DetachVolume",
        "ec2:DeleteVolume"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateInstanceExportTask",
        "ec2:DescribeExportTasks",
        "ec2:RebootInstances",
        "ec2:CreateSnapshot",
        "ec2:DescribeSnapshots",
        "ec2:LockSnapshot",
        "ec2:CopySnapshot",
        "ec2:DeleteSnapshot",
        "kms:DescribeKey",
        "kms:Encrypt",
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:RevokeGrant",
        "kms:GenerateDataKey*"
      ],
      "Resource": "*"
    }
  ]
}
```

xspot Controller

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:StopInstances",
        "ec2:DescribeSpotPriceHistory",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeTags",
        "ec2:CreateTags",
        "ec2:CreateFleet",
        "ec2:CreateLaunchTemplate",
        "ec2:DeleteLaunchTemplate",
        "ec2:TerminateInstances",
        "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses",
        "ec2:AttachNetworkInterface",
        "ec2:DetachNetworkInterface",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:ModifyNetworkInterfaceAttribute",
        "ec2:DescribeRegions",
        "ec2:CreateVolume",
        "ec2:DescribeVolumes",
        "ec2:AttachVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:DetachVolume",
        "ec2:DeleteVolume",
        "ec2:CreateInstanceExportTask",
        "ec2:DescribeExportTasks",
        "ec2:RebootInstances",
        "ec2:CreateSnapshot",
        "ec2:DescribeSnapshots",
        "iam:CreateServiceLinkedRole",
        "iam:ListRoles",
        "iam:ListInstanceProfiles",
        "iam:PassRole",
        "iam:GetRole",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeImages",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeInstanceTypeOfferings",
        "iam:GetInstanceProfile",
        "iam:SimulatePrincipalPolicy",
        "sns:Publish",
        "ssm:GetParameters",
        "ssm:GetParametersByPath",
        "kms:DescribeKey",
        "kms:Encrypt",
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:RevokeGrant",
        "kms:GenerateDataKey*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster"
      ],
      "Resource": "*"
    }
  ]
}
```

xspot Worker

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "ec2:UnassignPrivateIpAddresses"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:ModifyInstanceMetadataOptions",
        "eks:DescribeCluster"
      ],
      "Resource": "*"
    }
  ]
}
```

Exostellar's Karpenter (xKarpenter)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets"
      ],
      "Resource": "*"
    }
  ]
}
```

Exo Node Controller (part of xKarpenter chart)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster"
      ],
      "Resource": "*"
    }
  ]
}
```
3. AWS Authentication
If the AWS CLI is already authenticated to the AWS account that hosts your EKS cluster, you may skip this step.
Log in to AWS using the CLI in any of the methods recommended by AWS here.
It is recommended to set the AWS region to match your cluster’s region.
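Two quick checks after logging in — `get-caller-identity` confirms which principal the CLI is using, and `configure set region` pins the default region (the region value here is only an example):

```shell
# Show the account and IAM principal the CLI is authenticated as.
aws sts get-caller-identity

# Match the default region to your EKS cluster's region.
aws configure set region us-east-2
```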
4. AWS SSH Key-Pair
The key pair is used by the Exostellar Management Server (EMS) EC2 instance if you want to enable SSH access to it.
In either of the following cases, you may skip this step:
If you already have an SSH key pair in AWS EC2, or you wish to reuse an existing key.
If you use, or plan to use, AWS Systems Manager (SSM).
You can either create a new SSH key pair or reuse an existing one.
Create SSH Key Pair
To create a new (RSA) key pair in AWS, use the following command. This saves the private key locally:

```shell
aws ec2 create-key-pair \
  --key-name "my-key-pair" \
  --region "us-east-2" \
  --key-type "rsa" \
  --query "KeyMaterial" \
  --output "text" > my-key-pair.pem
```
That creates an RSA key pair. If you want an ED25519 key for stronger security, change `--key-type` to `ed25519`.
For more info on this, or to create it from AWS Console (UI), refer to the Create a key pair using Amazon EC2 section.
Set Private Key File Permissions
Restrict the private key file permissions as recommended:

```shell
chmod 400 my-key-pair.pem
```
Upload SSH Key Pair
If you already have an SSH key pair created using ssh-keygen or by any other means, use the following command to upload the public key:

```shell
aws ec2 import-key-pair \
  --key-name "my-key-pair" \
  --region "us-east-2" \
  --public-key-material fileb://path/to/my-key-pair.pub
```
Only RSA and ED25519 key pairs are accepted by AWS. The command auto-detects the type based on the public key’s content.
For more info on this, or to import it using the AWS Console (UI), refer to the Create a key pair using a third-party tool and import the public key to Amazon EC2 section.
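Either way, you can confirm the key pair is registered in the target region (the key name and region are the examples used above); the command fails with an InvalidKeyPair.NotFound error if it is not:

```shell
# List the key pair to verify it exists in this region.
aws ec2 describe-key-pairs \
  --key-names "my-key-pair" \
  --region "us-east-2"
```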
Introduction
There are two flows in terraform-exostellar-modules:
1. Standalone flow: Deploys an EKS cluster and sets up the Exostellar environment on it.
It performs the following actions:
Deploys EKS cluster and related resources.
Updates kubeconfig to use the specified EKS cluster.
Deploys the Exostellar's IAM resources.
Deploys the Exostellar Management Server (EMS) and related resources.
Configures the EKS cluster according to Exostellar’s standard setup.
Adds license to Exostellar Management Server (EMS).
Deploys Exostellar's Karpenter (xKarpenter) and related resources.
Note:
This module is currently in limited use, and active development has been paused. As a result, several recent features available in the existing cluster flow are not yet supported in this flow.
We do not recommend using this module in production environments at this time.
If you are interested in using or contributing to this module, please reach out to Exostellar Customer Support (contact details are provided at the end of this page).
2. Existing cluster flow: Sets up the Exostellar environment on an existing EKS cluster.
It performs the following actions:
Reads the EKS cluster's state.
Reads the EKS cluster's auth info.
Reads the VPC details.
Reads the EKS cluster's TLS certificate details.
Reads the subnets (both public + private) in the VPC.
Reads each subnet's details.
Reads the route table's details for subnets (both public + private).
Updates kubeconfig to use the specified EKS cluster.
Performs prechecks.
Deploys the Exostellar's IAM resources.
Deploys the xspot security group.
Deploys the Exostellar Management Server (EMS) and related resources.
Configures the EKS cluster according to Exostellar’s standard setup.
Deploys IRSA for the EBS CSI driver.
Adds license to Exostellar Management Server (EMS).
Deploys Exostellar's Karpenter (xKarpenter) and related resources.
This is the recommended way to install the Exostellar setup. You are the owner of your EKS cluster, and terraform-exostellar-modules manages only the Exostellar-related resources on AWS and Kubernetes.
For more details on this, refer to this doc from the latest release version: README.md
Installation Steps
Steps to deploy terraform-exostellar-modules standalone flow:
Note:
This module is currently in limited use, and active development has been paused. As a result, several recent features available in the existing cluster flow are not yet supported in this flow.
We do not recommend using this module in production environments at this time.
If you are interested in using or contributing to this module, please reach out to Exostellar Customer Support (contact details are provided at the end of this page).
This is the recommended way to install the Exostellar setup. You are the owner of your EKS cluster, and terraform-exostellar-modules manages only the Exostellar-related resources on AWS and Kubernetes.
Prerequisites:
A functional EKS cluster: the cluster control plane and managed nodes are up and running, and cluster access is configured in the kubeconfig (`~/.kube/config`).
Note: The terraform-exostellar-modules currently support Amazon EKS versions 1.29 through 1.32.
Support for EKS 1.33 is undergoing QA testing and will be included in an upcoming release.
The following CNI and CSI are required, either installed as add-ons (managed by AWS) or Helm charts (managed by the user):
CNI: amazon-vpc-cni-k8s
CSI: aws-ebs-csi-driver
Note: terraform-exostellar-modules are only tested against the official AWS-maintained CNI (amazon-vpc-cni-k8s) and CSI (aws-ebs-csi-driver), hence these are the ones we recommend.
If you need terraform-exostellar-modules to support any other CNI or CSI out of the box, please contact Exostellar Customer Support (contact details are provided at the end of this page).
Helm login to public ECR:
Run the following command so the AWS CLI fetches the auth token that Helm needs to access Amazon ECR (Elastic Container Registry) Public and pull the Exostellar Helm charts.
AWS Public ECR still uses AWS’s authentication system, even for public repositories. They are accessible to all AWS users.
```shell
aws ecr-public get-login-password --region "us-east-1" \
  | helm registry login --username "AWS" --password-stdin "public.ecr.aws"
```
This is needed because the Terraform modules use the following, which are published in public ECR:
Exostellar’s Karpenter (xKarpenter) Helm chart
Exostellar’s Karpenter (xKarpenter) Resources (for default ExoNodeClass and ExoNodePool) Helm chart
Exostellar’s CNI (Exo-CNI) Helm chart
Exostellar’s CSI (Exo-CSI) Helm chart
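Before running Terraform, a quick sanity check of these prerequisites can save a failed apply (the label selector for the EBS CSI driver assumes the standard chart labels):

```shell
# Cluster access configured and nodes Ready?
kubectl get nodes

# AWS VPC CNI running?
kubectl get daemonset aws-node -n kube-system

# EBS CSI driver running?
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
```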
Steps to deploy terraform-exostellar-modules existing cluster flow:
1. Import the existing-cluster-full module.
Create a main.tf file in a directory and add the following module to it.
```hcl
module "existing_cluster_flow" {
  source           = "git::ssh://git@github.com/Exostellar/terraform-exostellar-modules//modules/existing-cluster-full?ref=v0.0.5"
  eks_cluster      = "my-exostellar-cluster"
  aws_region       = "us-east-1"
  ems_ami_id       = "ami-XXXXXXXXXXXXXXXXX"
  ssh_key_name     = "my-ssh-key-pair-name"
  license_filepath = "/path/to/exo-license-file"
}
```
This is a minimal example with only the mandatory inputs. For the full list of inputs and more details, check the example module: examples/existing-cluster-flow
Note 1: The `license_filepath` field is optional. If you provide an empty string (""), the Terraform modules will skip adding the license. However, a valid license is required for the Exostellar Management Server (EMS) to operate. You can add the license later by logging in to the EMS UI and uploading it from the Settings page.
Note 2: The `ssh_key_name` is optional. You may pass an empty string ("").
You can get the EMS AMI details from the Exostellar release v2.7.0. Or please contact Exostellar Customer Support (contact details are provided at the end of this page).
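Before applying, the module configuration can be validated locally; `terraform validate` checks syntax and internal consistency without touching AWS (though `init` needs SSH access to GitHub to download the module):

```shell
# Download the module and providers without configuring a backend.
terraform init -backend=false

# Check the configuration for syntax and consistency errors.
terraform validate
```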
2. Manually configure AWS CNI and CSI in your EKS cluster.
In the existing cluster flow, the AWS CNI and CSI are outside the scope of the Terraform modules: they are part of the EKS cluster creation, and terraform-exostellar-modules strictly adheres to the user access policy specified in the prerequisites above, so it does not invoke kubectl or the aws CLI for these operations.
Hence, the following steps are manual.
Prevent AWS CNI from running on `x-compute` nodes by adding the following condition to the DaemonSet.

Check the AWS CNI DaemonSet's node affinity:

```shell
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*]}' \
  | jq
```

Verify the output contains the following condition:

```json
{
  "key": "eks.amazonaws.com/nodegroup",
  "operator": "NotIn",
  "values": ["x-compute"]
}
```

If missing, patch the DaemonSet:

```shell
kubectl get daemonset aws-node -n kube-system -o json \
  | jq '.spec.template.spec.affinity.nodeAffinity
        .requiredDuringSchedulingIgnoredDuringExecution
        .nodeSelectorTerms[0].matchExpressions += [
          { "key": "eks.amazonaws.com/nodegroup", "operator": "NotIn", "values": ["x-compute"] }
        ]' \
  | kubectl apply -f -
```
Disable `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG` on the `aws-node` container in the AWS CNI DaemonSet.

Check the current value:

```shell
kubectl get daemonset aws-node -n kube-system -o json \
  | jq -r '.spec.template.spec.containers[]
           | select(.name=="aws-node")
           | .env[]
           | select(.name=="AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG")
           | "\(.name)=\(.value)"'
```

Expected output:

```
AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false
```

If missing or set to `true`, update it:

```shell
kubectl set env daemonset/aws-node -n kube-system -c aws-node AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false
```
Prevent AWS CSI from running on `x-compute` nodes by adding the following condition to the DaemonSet.

Check the AWS CSI DaemonSet's node affinity:

```shell
kubectl get daemonset ebs-csi-node -n kube-system \
  -o jsonpath='{.spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*]}' \
  | jq
```

Verify the output contains the following condition:

```json
{
  "key": "eks.amazonaws.com/nodegroup",
  "operator": "NotIn",
  "values": ["x-compute"]
}
```

If missing, patch the DaemonSet:

```shell
kubectl get daemonset ebs-csi-node -n kube-system -o json \
  | jq '.spec.template.spec.affinity.nodeAffinity
        .requiredDuringSchedulingIgnoredDuringExecution
        .nodeSelectorTerms[0].matchExpressions += [
          { "key": "eks.amazonaws.com/nodegroup", "operator": "NotIn", "values": ["x-compute"] }
        ]' \
  | kubectl apply -f -
```
Annotate the EBS CSI driver's service account with the IAM role ARN for IRSA, then restart the EBS CSI driver add-on or pods (if installed using the Helm chart).

Check if the annotation is already present:

```shell
kubectl get sa ebs-csi-controller-sa \
  -n kube-system \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```

Annotate the service account:

```shell
kubectl annotate serviceaccount/ebs-csi-controller-sa \
  -n kube-system \
  "eks.amazonaws.com/role-arn=arn:aws:iam::<aws-account-id>:role/<cluster-name>-ebs-csi-driver-role"
```

Note: Replace `<aws-account-id>` and `<cluster-name>` in the above command with your AWS account ID and cluster name, respectively.

Restart driver pods:

```shell
kubectl delete pod -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
kubectl wait pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver --for=condition=ready --timeout=300s
```
3. Deploy using Terraform.
Run the following Terraform commands to deploy the Exostellar Terraform modules on your existing cluster:
```shell
terraform init
terraform plan -input=false
terraform apply -auto-approve
```
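Once the apply completes, a couple of read-only checks can confirm what was deployed; exact resource and output names depend on the module version, so treat these as illustrative:

```shell
# List the resources now managed by Terraform.
terraform state list

# Show any outputs exposed by the module.
terraform output
```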
Cleaning Up
Steps to delete the terraform-exostellar-modules standalone flow:
Note:
This module is currently in limited use, and active development has been paused. As a result, several recent features available in the existing cluster flow are not yet supported in this flow.
We do not recommend using this module in production environments at this time.
If you are interested in using or contributing to this module, please reach out to Exostellar Customer Support (contact details are provided at the end of this page).
Steps to delete the terraform-exostellar-modules existing cluster flow:
To clean up everything deployed on top of a pre-existing EKS cluster, run the following:

```shell
terraform destroy -auto-approve -refresh=true
```
This will delete the Exostellar components (like EMS, xKarpenter, etc.), while preserving your original EKS cluster.
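A quick way to confirm the cleanup succeeded: after a successful destroy, the Terraform state should be empty while your cluster nodes remain untouched.

```shell
# Should print nothing once all module resources are destroyed.
terraform state list

# Your original EKS nodes should still be present and Ready.
kubectl get nodes
```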
Additional Help and Support
If you run into any issues:
- Capture as much terminal output as possible.
- Archive the `terraform-exostellar-modules` directory (including hidden files) as `.zip` or `.tar`.
- Submit them along with a description of your issue to Exostellar Customer Support.
Our team will assist you in troubleshooting and resolving the issue promptly.