
v2.7 Installing with Terraform Modules (Kubernetes)

The terraform-exostellar-modules are Exostellar's newest installation approach, designed to simplify the setup process.

Current Terraform Modules Version: v0.0.5 (terraform-exostellar-modules-0.0.5.zip)

Version Matrix

  • Exostellar Terraform Modules: v0.0.5

  • Exostellar Management Server (EMS): v2.4.0+ (Exostellar Release 2.7.0)

  • xspot Controller: v3.4.0+ (Exostellar Release 2.7.0)

  • xspot Worker: v3.4.0+ (Exostellar Release 2.7.0)

  • Exostellar's Karpenter (xKarpenter): v2.0.6+

  • Exostellar's CNI (Exo-CNI): v1.20.0+

  • Exostellar's CSI (Exo-CSI): v1.46.0+

  • Kubernetes: 1.32

Supported Kubernetes Versions
Exostellar supports Kubernetes versions 1.29 to 1.32.

  • For versions 1.29 to 1.31, Exostellar uses Amazon Linux 2 (AL2)–based images.

  • Starting from Kubernetes 1.32, Exostellar has migrated to Amazon Linux 2023 (AL2023)–based images, following AWS recommendations.

Because of this change in the underlying image generation, you can continue using Exostellar Terraform Modules v0.0.5 if you need to deploy on older Kubernetes versions (1.29–1.31). However, you must also use the compatible versions of xKarpenter and xspot listed in the version matrix. If the link is inaccessible, check the “Version Matrix” section in the root README.md of the v0.0.5 release archive.

It is recommended to use Kubernetes 1.32 for the latest compatibility and improvements.

Prerequisites

Before running the Terraform commands on terraform-exostellar-modules, ensure that your environment meets the following requirements:

1. Local tools

  1. Terraform: Version 1.8+

  2. AWS CLI: Version 2.0+

  3. Helm: Version 3.14.2+

  4. kubectl: Version 1.29+
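As a quick pre-flight check, you can confirm these tools are installed before proceeding (a convenience sketch; it only reports presence, so verify the versions manually with commands like terraform version or helm version):

```shell
# Pre-flight check: verify the required local tools are on PATH.
# Reports presence only; confirm versions manually afterwards.
missing=0
for tool in terraform aws helm kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing=1
  fi
done
echo "missing=$missing"
```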

2. IAM Permissions

The following blocks list the IAM (Identity and Access Management) permissions that grant only the access required by the Terraform modules and the other operations in this document (such as Helm login to public ECR).

Set the following IAM policy on the IAM user or role for deploying the Terraform modules:

User IAM Policy

Any modifications to this policy could cause internal application failures. Changes are not recommended, but if necessary, proceed with caution.

Ensure the local user has the following IAM permissions:

JSON
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CreateSecurityGroup",
                "ec2:CreateTags",
                "ec2:DeleteSecurityGroup",
                "ec2:DescribeImages",
                "ec2:DescribeInstanceAttribute",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
                "ec2:DescribeVolumes",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcs",
                "ec2:ModifyInstanceAttribute",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:RunInstances",
                "ec2:TerminateInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "eks:AssociateAccessPolicy",
                "eks:CreateAccessEntry",
                "eks:DeleteAccessEntry",
                "eks:DescribeAccessEntry",
                "eks:DescribeCluster",
                "eks:DisassociateAccessPolicy",
                "eks:ListAssociatedAccessPolicies"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:AddRoleToInstanceProfile",
                "iam:AttachRolePolicy",
                "iam:CreateInstanceProfile",
                "iam:CreateOpenIDConnectProvider",
                "iam:CreatePolicy",
                "iam:CreateRole",
                "iam:DeleteInstanceProfile",
                "iam:DeleteOpenIDConnectProvider",
                "iam:DeletePolicy",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DetachRolePolicy",
                "iam:GetInstanceProfile",
                "iam:GetOpenIDConnectProvider",
                "iam:GetPolicy",
                "iam:GetPolicyVersion",
                "iam:GetRole",
                "iam:GetRolePolicy",
                "iam:ListAttachedRolePolicies",
                "iam:ListInstanceProfilesForRole",
                "iam:ListPolicyVersions",
                "iam:ListRolePolicies",
                "iam:PassRole",
                "iam:PutRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:TagInstanceProfile",
                "iam:TagPolicy",
                "iam:TagRole"
            ],
            "Resource": "*"
        }
    ]
}

Set the following IAM policy on the user or role that the AWS CLI uses to fetch the auth token authorizing Helm to access the public ECR and pull the Exostellar Helm charts:

Helm Auth Policy

Ensure the local user has the following IAM permissions for fetching the auth token:

JSON
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr-public:GetAuthorizationToken"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sts:GetServiceBearerToken"
            ],
            "Resource": "*"
        }
    ]
}
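One way to attach this policy is as an inline user policy via the AWS CLI (a sketch: "my-user" and "ExostellarHelmAuth" are placeholder names, and the attach command is commented out because it requires valid AWS credentials):

```shell
# Save the Helm auth policy to a local file ("helm-auth-policy.json" is a
# placeholder name), then attach it to the IAM user.
cat > helm-auth-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ecr-public:GetAuthorizationToken"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["sts:GetServiceBearerToken"],
            "Resource": "*"
        }
    ]
}
EOF
# Requires valid AWS credentials; uncomment to run:
# aws iam put-user-policy --user-name "my-user" \
#   --policy-name "ExostellarHelmAuth" \
#   --policy-document file://helm-auth-policy.json
echo "statements: $(grep -c '"Effect": "Allow"' helm-auth-policy.json)"
```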

The following IAM policies are provisioned by the Terraform modules to securely grant the necessary access to different Exostellar components.

These policies are for informational purposes only. Terraform modules create these as a part of deployment. No action is required from the user.

Info: Exostellar IAM Policies

terraform-exostellar-modules creates and uses the following policies for respective components:

  1. Exostellar Management Server (EMS)

    JSON
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:RunInstances",
                    "ec2:StopInstances",
                    "ec2:DescribeSpotPriceHistory",
                    "ec2:DescribeInstances",
                    "ec2:DescribeInstanceTypes",
                    "ec2:DescribeInstanceStatus",
                    "ec2:DescribeTags",
                    "ec2:CreateTags",
                    "ec2:CreateFleet",
                    "ec2:CreateLaunchTemplate",
                    "ec2:DeleteLaunchTemplate",
                    "ec2:TerminateInstances",
                    "ec2:AssignPrivateIpAddresses",
                    "ec2:UnassignPrivateIpAddresses",
                    "ec2:AttachNetworkInterface",
                    "ec2:DetachNetworkInterface",
                    "ec2:CreateNetworkInterface",
                    "ec2:DeleteNetworkInterface",
                    "ec2:ModifyNetworkInterfaceAttribute",
                    "ec2:DescribeRegions"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "iam:CreateServiceLinkedRole",
                    "iam:ListRoles",
                    "iam:ListInstanceProfiles",
                    "iam:PassRole",
                    "iam:GetRole"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeSubnets",
                    "ec2:DescribeSecurityGroups",
                    "ec2:DescribeImages",
                    "ec2:DescribeImageAttribute",
                    "ec2:DescribeKeyPairs",
                    "ec2:DescribeInstanceTypeOfferings",
                    "iam:GetInstanceProfile",
                    "iam:SimulatePrincipalPolicy",
                    "sns:Publish",
                    "ssm:GetParameters",
                    "ssm:GetParametersByPath"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:CreateVolume",
                    "ec2:DescribeVolumes",
                    "ec2:AttachVolume",
                    "ec2:ModifyInstanceAttribute",
                    "ec2:DetachVolume",
                    "ec2:DeleteVolume"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:CreateInstanceExportTask",
                    "ec2:DescribeExportTasks",
                    "ec2:RebootInstances",
                    "ec2:CreateSnapshot",
                    "ec2:DescribeSnapshots",
                    "ec2:LockSnapshot",
                    "ec2:CopySnapshot",
                    "ec2:DeleteSnapshot",
                    "kms:DescribeKey",
                    "kms:Encrypt",
                    "kms:CreateGrant",
                    "kms:ListGrants",
                    "kms:Decrypt",
                    "kms:ReEncrypt*",
                    "kms:RevokeGrant",
                    "kms:GenerateDataKey*"
                ],
                "Resource": "*"
            }
        ]
    }
  2. xspot Controller

    JSON
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:RunInstances",
            "ec2:StopInstances",
            "ec2:DescribeSpotPriceHistory",
            "ec2:DescribeInstances",
            "ec2:DescribeInstanceTypes",
            "ec2:DescribeInstanceStatus",
            "ec2:DescribeTags",
            "ec2:CreateTags",
            "ec2:CreateFleet",
            "ec2:CreateLaunchTemplate",
            "ec2:DeleteLaunchTemplate",
            "ec2:TerminateInstances",
            "ec2:AssignPrivateIpAddresses",
            "ec2:UnassignPrivateIpAddresses",
            "ec2:AttachNetworkInterface",
            "ec2:DetachNetworkInterface",
            "ec2:CreateNetworkInterface",
            "ec2:DeleteNetworkInterface",
            "ec2:ModifyNetworkInterfaceAttribute",
            "ec2:DescribeRegions",
            "ec2:CreateVolume",
            "ec2:DescribeVolumes",
            "ec2:AttachVolume",
            "ec2:ModifyInstanceAttribute",
            "ec2:DetachVolume",
            "ec2:DeleteVolume",
            "ec2:CreateInstanceExportTask",
            "ec2:DescribeExportTasks",
            "ec2:RebootInstances",
            "ec2:CreateSnapshot",
            "ec2:DescribeSnapshots",
            "iam:CreateServiceLinkedRole",
            "iam:ListRoles",
            "iam:ListInstanceProfiles",
            "iam:PassRole",
            "iam:GetRole",
            "ec2:DescribeSubnets",
            "ec2:DescribeSecurityGroups",
            "ec2:DescribeImages",
            "ec2:DescribeKeyPairs",
            "ec2:DescribeInstanceTypeOfferings",
            "iam:GetInstanceProfile",
            "iam:SimulatePrincipalPolicy",
            "sns:Publish",
            "ssm:GetParameters",
            "ssm:GetParametersByPath",
            "kms:DescribeKey",
            "kms:Encrypt",
            "kms:CreateGrant",
            "kms:ListGrants",
            "kms:Decrypt",
            "kms:ReEncrypt*",
            "kms:RevokeGrant",
            "kms:GenerateDataKey*"
          ],
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "eks:DescribeCluster"
          ],
          "Resource": "*"
        }
      ]
    }
  3. xspot Worker

    JSON
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Deny",
          "Action": [
            "ec2:UnassignPrivateIpAddresses"
          ],
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "ec2:ModifyInstanceMetadataOptions",
            "eks:DescribeCluster"
          ],
          "Resource": "*"
        }
      ]
    }
  4. Exostellar’s Karpenter (xKarpenter)

    JSON
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeInstanceTypes",
                    "ec2:DescribeSecurityGroups",
                    "ec2:DescribeSubnets"
                ],
                "Resource": "*"
            }
        ]
    }
  5. Exo Node Controller (part of xKarpenter chart)

    JSON
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "eks:DescribeCluster"
                ],
                "Resource": "*"
            }
        ]
    }

3. AWS Authentication

If the AWS CLI is already authenticated to the AWS account that hosts your EKS cluster, you may skip this step.

Log in to AWS using the CLI via any of the methods recommended by AWS.

It is recommended to set the AWS region to match your cluster’s region.
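For example, you can pin the region for the current shell session via an environment variable, which both the AWS CLI and Terraform's AWS provider honor (us-east-2 is a placeholder; use your cluster's region):

```shell
# Pin the AWS region for this shell session ("us-east-2" is a placeholder);
# both the AWS CLI and the Terraform AWS provider read AWS_REGION.
export AWS_REGION="us-east-2"
echo "AWS region set to: $AWS_REGION"
```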

4. AWS SSH Key-Pair

The Exostellar Management Server (EMS) EC2 instance uses this key pair if you want to enable SSH access to it.

You may skip this step in either of the following cases:

  1. You already have an SSH key pair in AWS EC2, or you wish to reuse an existing key.

  2. You use, or plan to use, AWS Systems Manager (SSM) for access.

You can either create a new SSH key pair or reuse an existing one.

Create (New) SSH Key Pair

To create a new (RSA) key pair in AWS, use the following command. It saves the private key locally.

BASH
aws ec2 create-key-pair \
  --key-name "my-key-pair" \
  --region "us-east-2" \
  --key-type "rsa" \
  --query "KeyMaterial" \
  --output "text" > my-key-pair.pem

That creates an RSA key pair. If you want an ED25519 key for stronger security, change --key-type to ed25519.

For more info on this, or to create it from AWS Console (UI), refer to the Create a key pair using Amazon EC2 section.

Set Private Key File Permissions

Restrict the private key file's permissions as recommended:

BASH
chmod 400 my-key-pair.pem

Upload (Reuse) SSH Key Pair

If you already have an SSH key pair created using ssh-keygen or by other means, use the following command to upload the public key:

BASH
aws ec2 import-key-pair \
  --key-name "my-key-pair" \
  --region "us-east-2" \
  --public-key-material fileb://path/to/my-key-pair.pub

Only RSA and ED25519 key pairs are accepted by AWS. The command auto-detects the type based on the public key’s content.

For more info on this, or to import it using the AWS Console (UI), refer to the Create a key pair using a third-party tool and import the public key to Amazon EC2 section.
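If you don't yet have a local key pair to import, one way to generate an ED25519 pair with ssh-keygen is shown below (a sketch assuming OpenSSH is installed; "my-key-pair" is a placeholder filename, and the empty passphrase is for brevity only):

```shell
# Generate a local ED25519 key pair; "-N" sets an empty passphrase for
# brevity (use a real passphrase in practice).
ssh-keygen -t ed25519 -f my-key-pair -N "" -q
ls -l my-key-pair my-key-pair.pub
```

The resulting my-key-pair.pub file is what you upload with the import-key-pair command above.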

Introduction

There are two flows in terraform-exostellar-modules:

What is standalone flow?

Deploys an EKS cluster and sets up the Exostellar environment on it.

It performs the following actions:

  1. Deploys EKS cluster and related resources.

  2. Updates kubeconfig to use the specified EKS cluster.

  3. Deploys Exostellar's IAM resources.

  4. Deploys the Exostellar Management Server (EMS) and related resources.

  5. Configures the EKS cluster according to Exostellar’s standard setup.

  6. Adds license to Exostellar Management Server (EMS).

  7. Deploys Exostellar's Karpenter (xKarpenter) and related resources.

Note:
This module is currently in limited use, and active development has been paused. As a result, several recent features available in the existing cluster flow are not yet supported here.

We do not recommend using this module in production environments at this time.

If you are interested in using or contributing to this module, please reach out to Exostellar Customer Support (contact details are provided at the end of this page).

What is existing cluster flow?

Sets up the Exostellar environment on an existing EKS cluster.

It performs the following actions:

  1. Reads the EKS cluster's state.

  2. Reads the EKS cluster's auth info.

  3. Reads the VPC details.

  4. Reads the EKS cluster's TLS certificate details.

  5. Reads the subnets (both public + private) in the VPC.

  6. Reads each subnet's details.

  7. Reads the route table's details for subnets (both public + private).

  8. Updates kubeconfig to use the specified EKS cluster.

  9. Performs prechecks.

  10. Deploys Exostellar's IAM resources.

  11. Deploys the xspot security group.

  12. Deploys the Exostellar Management Server (EMS) and related resources.

  13. Configures the EKS cluster according to Exostellar’s standard setup.

  14. Deploys IRSA for the EBS CSI driver.

  15. Adds license to Exostellar Management Server (EMS).

  16. Deploys Exostellar's Karpenter (xKarpenter) and related resources.

This is the recommended way to install the Exostellar setup. You are the owner of your EKS cluster, and terraform-exostellar-modules manages only the Exostellar-related resources on AWS and Kubernetes.

For more details, refer to the README.md in the latest release version.

Installation Steps

Standalone Flow

Steps to deploy terraform-exostellar-modules standalone flow:

Note:
This module is currently in limited use, and active development has been paused. As a result, several recent features available in the existing cluster flow are not yet supported here.

We do not recommend using this module in production environments at this time.

If you are interested in using or contributing to this module, please reach out to Exostellar Customer Support (contact details are provided at the end of this page).

Existing Cluster Flow

This is the recommended way to install the Exostellar setup. You are the owner of your EKS cluster, and terraform-exostellar-modules manages only the Exostellar-related resources on AWS and Kubernetes.

Prerequisites:

  • A functional EKS cluster: the control plane and managed nodes are up and running, and cluster access is configured in your kubeconfig (~/.kube/config).

Note: The terraform-exostellar-modules currently support Amazon EKS versions 1.29 through 1.32.
Support for EKS 1.33 is undergoing QA testing and will be included in an upcoming release.

Note: terraform-exostellar-modules is only tested against the official AWS-maintained CNI (amazon-vpc-cni-k8s) and CSI (aws-ebs-csi-driver), so we recommend using those.

If you need terraform-exostellar-modules to support any other CNI or CSI out of the box, please contact Exostellar Customer Support (contact details are provided at the end of this page).

  • Helm login to public ECR:

Run the following command so the AWS CLI fetches the auth token that Helm needs to access Amazon ECR (Elastic Container Registry) Public and pull the Exostellar Helm charts.

Amazon ECR Public still uses AWS's authentication system, even for public repositories; they remain accessible to all AWS users.

BASH
aws ecr-public get-login-password --region "us-east-1" \
  | helm registry login --username "AWS" --password-stdin "public.ecr.aws"

This is needed because the Terraform modules use the following, which are published in public ECR:

  1. Exostellar’s Karpenter (xKarpenter) Helm chart

  2. Exostellar’s Karpenter (xKarpenter) Resources (for default ExoNodeClass and ExoNodePool) Helm chart

  3. Exostellar’s CNI (Exo-CNI) Helm chart

  4. Exostellar’s CSI (Exo-CSI) Helm chart

Steps to deploy terraform-exostellar-modules existing cluster flow:

1. Import the existing-cluster-full module.

Create a main.tf file in a directory and add the following module to it.

HCL
module "existing_cluster_flow" {
    source = "git::ssh://git@github.com/Exostellar/terraform-exostellar-modules//modules/existing-cluster-full?ref=v0.0.5"

    eks_cluster      = "my-exostellar-cluster"
    aws_region       = "us-east-1"
    ems_ami_id       = "ami-XXXXXXXXXXXXXXXXX"
    ssh_key_name     = "my-ssh-key-pair-name"
    license_filepath = "/path/to/exo-license-file"
}
  • This is a minimal example with only the mandatory inputs. For the full list of inputs and more details, check the example module: examples/existing-cluster-flow

    • Note 1: The license_filepath field is optional. If you provide an empty string (""), the Terraform modules will skip adding the license. However, a valid license is required for the Exostellar Management Server (EMS) to operate. You can add the license later by logging in to the EMS UI and uploading it from the Settings page.

    • Note 2: The ssh_key_name is optional. You may pass an empty string ("").

  • You can get the EMS AMI details from the Exostellar release v2.7.0, or contact Exostellar Customer Support (contact details are provided at the end of this page).

2. Manually configure AWS CNI and CSI in your EKS cluster.

In the existing cluster flow, the AWS CNI and CSI are outside the scope of the Terraform modules: they are installed as part of EKS cluster creation, and terraform-exostellar-modules strictly adheres to the user access policy specified in the Prerequisites above, so it does not invoke kubectl or the aws CLI for these operations.

Hence, the following steps are manual.

  1. Prevent AWS CNI from running on x-compute nodes by adding the following condition to the DaemonSet.

    • Check the AWS CNI DaemonSet's node affinity:

      BASH
      kubectl get daemonset aws-node -n kube-system \
        -o jsonpath='{.spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*]}' \
        | jq
    • Verify the output contains the following condition:

      JSON
      {
        "key": "eks.amazonaws.com/nodegroup",
        "operator": "NotIn",
        "values": ["x-compute"]
      }
    • If missing, patch the DaemonSet:

      BASH
      kubectl get daemonset aws-node -n kube-system -o json \
        | jq '.spec.template.spec.affinity.nodeAffinity
              .requiredDuringSchedulingIgnoredDuringExecution
              .nodeSelectorTerms[0].matchExpressions += [
                {
                  "key": "eks.amazonaws.com/nodegroup",
                  "operator": "NotIn",
                  "values": ["x-compute"]
                }
              ]' \
        | kubectl apply -f -
  2. Disable AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG on container aws-node in AWS CNI DaemonSet.

    • Check the current value:

      BASH
      kubectl get daemonset aws-node -n kube-system -o json \
        | jq -r '.spec.template.spec.containers[]
          | select(.name=="aws-node")
          | .env[]
          | select(.name=="AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG")
          | "\(.name)=\(.value)"'
    • Expected output:

      TOML
      AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false
    • If missing or set to true, update it:

      BASH
      kubectl set env daemonset/aws-node -n kube-system -c aws-node AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false
  3. Prevent AWS CSI from running on x-compute nodes by adding the following condition to the DaemonSet.

    • Check the AWS CSI DaemonSet's node affinity:

      BASH
      kubectl get daemonset ebs-csi-node -n kube-system \
        -o jsonpath='{.spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*]}' \
        | jq
    • Verify the output contains the following condition:

      JSON
      {
        "key": "eks.amazonaws.com/nodegroup",
        "operator": "NotIn",
        "values": ["x-compute"]
      }
    • If missing, patch the DaemonSet:

      BASH
      kubectl get daemonset ebs-csi-node -n kube-system -o json \
        | jq '.spec.template.spec.affinity.nodeAffinity
              .requiredDuringSchedulingIgnoredDuringExecution
              .nodeSelectorTerms[0].matchExpressions += [
                {
                  "key": "eks.amazonaws.com/nodegroup",
                  "operator": "NotIn",
                  "values": ["x-compute"]
                }
              ]' \
        | kubectl apply -f -
  4. Annotate the EBS CSI driver's service account with the IAM role ARN for IRSA, then restart the EBS CSI driver add-on or pods (if installed using the Helm chart).

    • Check if the annotation is already present:

      BASH
      kubectl get sa ebs-csi-controller-sa \
          -n kube-system \
          -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
    • Annotate the service account:

      BASH
      kubectl annotate serviceaccount/ebs-csi-controller-sa \
        -n kube-system \
        "eks.amazonaws.com/role-arn=arn:aws:iam::<aws-account-id>:role/<cluster-name>-ebs-csi-driver-role"
      • Note: Replace <aws-account-id> and <cluster-name> in the above command with your AWS account ID and cluster name, respectively.

    • Restart driver pods:

      BASH
      kubectl delete pod -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
      kubectl wait pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver --for=condition=ready --timeout=300s

3. Deploy using Terraform.

Run the following Terraform commands to deploy the Exostellar Terraform modules on your existing cluster:

BASH
terraform init
terraform plan -input=false
terraform apply -auto-approve

Cleaning Up

Standalone Flow

Steps to delete the terraform-exostellar-modules standalone flow:

Note:
This module is currently in limited use, and active development has been paused. As a result, several recent features available in the existing cluster flow are not yet supported here.

We do not recommend using this module in production environments at this time.

If you are interested in using or contributing to this module, please reach out to Exostellar Customer Support (contact details are provided at the end of this page).

Existing Cluster Flow

To clean up everything deployed on top of a pre-existing EKS cluster, run the following:

BASH
terraform destroy -auto-approve -refresh=true

This will delete the Exostellar components (like EMS, xKarpenter, etc.), while preserving your original EKS cluster.

Additional Help and Support

If you run into any issues:

  1. Capture as much terminal output as possible.

  2. Archive the terraform-exostellar-modules directory (including hidden files) as .zip or .tar.

  3. Submit them along with a description of your issue to Exostellar Customer Support.

Our team will assist you in troubleshooting and resolving the issue promptly.
