Installing with Terraform Modules (Kubernetes)
The terraform-exostellar-modules are one of the newest Exostellar installation approaches, designed to simplify the setup process.
Prerequisites
Before running the Terraform commands on terraform-exostellar-modules, ensure that your environment meets the following requirements:
- Terraform: version 1.8+
- Helm: version 3.14.2+
- AWS authentication, credentials, and region: properly configure AWS authentication and a default region in your local environment, and ensure the account has the required IAM permissions (a quick sanity check is shown after this list).
- Valid AWS Marketplace subscriptions for the Exostellar Management Server, Exostellar Controller, and Exostellar Worker AMIs, all in the same AWS account.
- SSH key: an existing EC2 key pair in AWS.
- terraform-exostellar-modules: v0.0.1+ (terraform-exostellar-modules-0.0.1.tar.gz)
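As a sanity check before proceeding, the following standard Terraform, Helm, and AWS CLI commands verify the tool versions and confirm that AWS credentials and a region resolve correctly (us-east-1 below is only a placeholder):

terraform version   # expect v1.8 or newer
helm version        # expect v3.14.2 or newer
aws sts get-caller-identity           # confirms which AWS account/principal is in use
export AWS_DEFAULT_REGION=us-east-1   # example region; set your own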
Download, extract, and open a terminal inside the extracted directory to proceed with the next steps in this doc. You should see the following examples, modules, scripts, and the README:
├── examples/
│   ├── existing-cluster-flow/
│   └── standalone-flow/
├── modules/
│   ├── ems/
│   ├── existing-cluster-full/
│   ├── iam/
│   ├── infra/
│   ├── karpenter/
│   └── standalone-full/
├── scripts/
└── README.md
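For example, assuming the archive was downloaded to the current directory (the extracted directory name below is an assumption; it may differ):

tar -xzf terraform-exostellar-modules-0.0.1.tar.gz
cd terraform-exostellar-modules-0.0.1   # extracted directory name may differ
ls   # should list examples/, modules/, scripts/, and README.md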
Installation Steps
Creating a Sandbox EKS Cluster and Deploying the Management Server
If you'd like to deploy a full Exostellar setup from scratch, including:
- A new EKS cluster
- Controller and Worker IAM roles
- Exostellar Management Server (EMS)
- Exostellar’s Karpenter (exokarpenter)
use the example module located at examples/standalone-flow/.
Step 1: Configure the Terraform Module
Open examples/standalone-flow/main.tf and update the necessary values:
| Variable | Description |
|---|---|
| Infrastructure-related resources | |
| | Name of the EKS cluster to be created. |
| | AWS region where the cluster is deployed. Default: |
| | EKS (Kubernetes) version. Default: |
| | CIDR block for the VPC to be created for the EKS cluster. |
| Exostellar Management Server (EMS) configurations | |
| ssh_key_name | Name of an existing SSH key pair in AWS (only the key-pair resource name, not the local file name). |
| | AMI ID for the Exostellar Management Server. Must match the selected region. |
| | EC2 instance type for EMS. Default: |
| | EMS volume size in GB. Default: |
| | Whether to enable termination protection on the EMS instance. Default: |
| | Profile's availability zone. Must match the selected region. |
| Controller and Worker configurations | |
| | Enable hyperthreading. Default: |
| | Enable self-ballooning. Default: |
| Exostellar Karpenter configurations | |
| | xKarpenter's version. Default: v2.0.1 |
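The example's README is the authoritative reference for the exact variable names and defaults (it ships in the same directory, per the repository layout shown later in this doc). If the example also follows the standard Terraform convention of declaring inputs in a variables.tf file (an assumption, not confirmed here), you can list them directly:

cat examples/standalone-flow/README.md   # authoritative variable names and defaults
grep -A 3 'variable "' examples/standalone-flow/variables.tf   # hypothetical file, assuming standard Terraform layout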
Step 2: Deploy with Terraform
Run the Terraform commands to deploy the standalone-flow’s stack:
terraform -chdir=./examples/standalone-flow init
terraform -chdir=./examples/standalone-flow plan -input=false
terraform -chdir=./examples/standalone-flow apply -auto-approve
Note: Terraform commands are idempotent. You can re-run them safely to retry failed operations or refresh the state.
Exostellar Management Server Console Access
URL: https://<ems_public_ip>
Username: exostellar_management_server_console_admin_username (from the Terraform outputs)
Password: exostellar_management_server_console_admin_password (from the Terraform outputs)
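Because both credentials are Terraform outputs, they can be retrieved at any time with the standard terraform output command, for example:

terraform -chdir=./examples/standalone-flow output -raw exostellar_management_server_console_admin_username
terraform -chdir=./examples/standalone-flow output -raw exostellar_management_server_console_admin_password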
Exostellar Management Server SSH Access
ssh -i <ssh-private-key> rocky@<ems_public_ip>
Here, rocky is the username for the Rocky Linux VM on which EMS is built. The SSH private key is the local private key corresponding to the ssh_key_name input (which is specified without a file extension).
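For example, assuming the key pair was created as my-key and the private key is saved locally as my-key.pem (both placeholder names), the public IP can be pulled straight from the Terraform outputs:

chmod 400 my-key.pem   # ssh refuses private keys with permissive file modes
ssh -i my-key.pem rocky@"$(terraform -chdir=./examples/standalone-flow output -raw exostellar_management_server_public_ip)"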
Deploying the Management Server into an Existing EKS Cluster
If you already have an Amazon EKS cluster running and want to deploy the Exostellar setup on top of it, including the following components:
- Controller and Worker IAM roles
- Exostellar Management Server (EMS)
- Exostellar’s Karpenter (exokarpenter)
use the example module provided at examples/existing-cluster-flow/.
Step 1: Configure main.tf for Your Environment
Edit the following variables in examples/existing-cluster-flow/main.tf according to your setup:
| Variable | Description |
|---|---|
| Infrastructure-related resources | |
| | Name of your existing EKS cluster. |
| | AWS region where the cluster is deployed. Default: |
| | Kubernetes version of your EKS cluster. Default: |
| | VPC ID associated with the EKS cluster. |
| Exostellar Management Server (EMS) configurations | |
| ssh_key_name | Name of an existing SSH key pair in AWS (only the key-pair resource name, not the local file name). |
| | Public subnet ID from the same VPC (identified via the VPC ID above). |
| | Exostellar profile's availability zone. Default: us-east-1a |
| | AMI ID for the Exostellar Management Server. Must match the selected region. |
| | EC2 instance type for EMS. Default: |
| | EMS volume size in GB. Default: |
| | Whether to enable termination protection on the EMS instance. Default: |
| | A list of security group IDs. Must include the required Exostellar security groups (see the module README); include others as needed. |
| Controller and Worker configurations | |
| | Enable hyperthreading. Default: |
| | Enable self-ballooning. Default: |
| | The ID of the private subnet from the VPC (specified using the VPC ID above). |
| Exostellar Karpenter configurations | |
| | xKarpenter's version. Default: v2.0.1 |
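If you need to locate suitable subnet and security group IDs in the existing VPC, standard AWS CLI queries can help (replace <vpc-id> with your VPC ID):

aws ec2 describe-subnets --filters Name=vpc-id,Values=<vpc-id> \
  --query 'Subnets[].{id:SubnetId,az:AvailabilityZone,public:MapPublicIpOnLaunch}'
aws ec2 describe-security-groups --filters Name=vpc-id,Values=<vpc-id> \
  --query 'SecurityGroups[].{id:GroupId,name:GroupName}'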
Step 2: Deploy Using Terraform
Run the Terraform commands to deploy the existing-cluster-flow’s stack:
terraform -chdir=./examples/existing-cluster-flow init
terraform -chdir=./examples/existing-cluster-flow plan -input=false
terraform -chdir=./examples/existing-cluster-flow apply -auto-approve
Note: Terraform commands are idempotent. You can re-run them safely to retry failed operations or refresh the state.
Output on Successful Deployment
You will see output similar to the following:
eks_cluster = "xio-standalone"
environment = "k8s"
exostellar_management_server_console_admin_password = "xxxxxxxx"
exostellar_management_server_console_admin_username = "admin@xxxx"
exostellar_management_server_private_ip = "10.0.141.xx"
exostellar_management_server_public_ip = "52.53.171.x"
xkarpenter_namespace = "exokarpenter"
xkarpenter_version = "v2.0.1"
xspot_controller_instance_profile_arn = "arn:aws:iam::97709900xxxx:instance-profile/xio-standalone-xspot-controller"
xspot_controller_role_arn = "arn:aws:iam::97709900xxxx:role/xio-standalone-xspot-controller"
xspot_worker_instance_profile_arn = "arn:aws:iam::97709900xxxx:instance-profile/xio-standalone-xspot-worker"
xspot_worker_role_arn = "arn:aws:iam::97709900xxxx:role/xio-standalone-xspot-worker"
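These values stay recorded in the Terraform state, so any individual output can be re-read later, for example:

terraform -chdir=./examples/existing-cluster-flow output -raw exostellar_management_server_public_ip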
Exostellar Management Server Console Access
URL: https://<ems_public_ip>
Username: exostellar_management_server_console_admin_username (from the Terraform outputs)
Password: exostellar_management_server_console_admin_password (from the Terraform outputs)
Exostellar Management Server SSH Access
ssh -i <ssh-private-key> rocky@<ems_public_ip>
Here, rocky is the username for the Rocky Linux VM on which EMS is built. The SSH private key is the local private key corresponding to the ssh_key_name input (which is specified without a file extension).
Cleaning Up
Standalone Flow
To remove all resources created by the standalone-flow module, including the EKS cluster, Controller and Worker IAM roles, EMS, and exokarpenter, run:
terraform -chdir=./examples/standalone-flow destroy -auto-approve -refresh=true
This will completely tear down the deployed infrastructure.
Existing Cluster Flow
To clean up everything deployed except the EKS cluster (which was pre-existing), run the following:
terraform -chdir=./examples/existing-cluster-flow destroy -auto-approve -refresh=true
This will delete the Controller and Worker IAM roles, EMS instance, and exokarpenter components—while preserving your original EKS cluster.
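To confirm the teardown behaved as expected, you can check that the Terraform state is empty and that the pre-existing cluster is still healthy (replace <cluster-name> with your cluster's name):

terraform -chdir=./examples/existing-cluster-flow state list   # should print nothing after destroy
aws eks describe-cluster --name <cluster-name> --query 'cluster.status'   # should remain "ACTIVE"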
Additional Help and Support
For more configuration options, variables, and usage examples, refer to the relevant README.md files located throughout the repository:
├── examples/
│ ├── existing-cluster-flow/
│ │ └── README.md
│ └── standalone-flow/
│ └── README.md
├── modules/
│ ├── ems/
│ │ └── README.md
│ ├── existing-cluster-full/
│ │ └── README.md
│ ├── iam/
│ │ └── README.md
│ ├── infra/
│ │ └── README.md
│ ├── karpenter/
│ │ └── README.md
│ └── standalone-full/
│ └── README.md
└── README.md
If you run into any issues:
- Capture as much terminal output as possible.
- Compress the entire repository directory (including hidden files) using .zip or .tar (an example is shown after this list).
- Submit the archive along with a description of your issue to Exostellar Customer Support.
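For example, from the parent directory of the repository (the directory name below is an assumption; use your actual extracted directory):

tar -czf exostellar-terraform-support.tar.gz terraform-exostellar-modules-0.0.1   # tar includes hidden files by default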
Our team will assist you in troubleshooting and resolving the issue promptly.