Automating Node Management with Exostellar Karpenter
Exostellar Karpenter is an enhanced version of the open-source Karpenter, tailored to integrate seamlessly with our scheduler and core technology platform.
Prerequisites
Helm: Version 3.14+
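Before proceeding, you can confirm that your Helm client meets this requirement:
helm version --short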
Installation Steps
Uninstalling Exostellar Karpenter (v1.x)
If version v1.x of Exostellar Karpenter is installed in your cluster, follow the steps below to uninstall it cleanly:
Delete the existing exokarpenter CRDs:
kubectl delete crds exonodeclaims.exokarpenter.sh \
exonodeclasses.karpenter.k8s.exo \
exonodepools.exokarpenter.sh
Verify that the exokarpenter CRDs have been deleted:
kubectl get crds
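To check specifically for leftover Exostellar CRDs (matching the CRD groups listed above), you can filter the output; no output means the deletion succeeded:
kubectl get crds | grep -iE 'exokarpenter|karpenter.k8s.exo'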
Force-remove finalizers (if CRDs are stuck in Terminating state):
kubectl patch crd exonodeclasses.karpenter.k8s.exo -p '{"metadata":{"finalizers":[]}}' --type=merge
Repeat this command for any other CRDs that are stuck due to finalizers.
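As a convenience, the sketch below (assuming the CRD group names shown above) removes the finalizers from every matching CRD in one pass:
for crd in $(kubectl get crds -o name | grep -E 'exokarpenter\.sh|karpenter\.k8s\.exo'); do
  # Clear finalizers so the CRD can finish terminating
  kubectl patch "$crd" -p '{"metadata":{"finalizers":[]}}' --type=merge
done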
Uninstall the Helm chart:
helm uninstall xkarpenter --namespace exokarpenter
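To confirm the release and its resources are gone, you can inspect the namespace used above; both commands should return nothing related to the old release:
helm list --namespace exokarpenter
kubectl get all --namespace exokarpenter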
Installing Exostellar Karpenter (v2.x)
Replace the HEADNODE variable with the private IP address of your EMS head node, then run the script below.
cat <<'EOF' | bash
#!/bin/bash
set -euo pipefail
# Define variables
HEADNODE="http://10.0.186.213:5000" # Replace with your EMS head node's private IP address
CLUSTER_ENDPOINT=$(kubectl config view --minify -o jsonpath='{.clusters[].cluster.server}')
# Retrieve AWS account ID if not already set in the environment
if [ -z "${AWS_ACCOUNT_ID:-}" ]; then
  AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
fi
# Derive the cluster name from the current kubeconfig context if not already set
if [ -z "${CLUSTER_NAME:-}" ]; then
  CLUSTER_NAME=$(kubectl config view --minify -o jsonpath='{.clusters[].name}' | rev | cut -d'/' -f1 | rev | cut -d'.' -f1)
fi
echo "Cluster name is: $CLUSTER_NAME"
# Install helm chart using variables
helm install xkarpenter oci://public.ecr.aws/u8h5n6o4/exostellar-karpenter/karpenter \
--version v2.0.1 \
--namespace exokarpenter \
--set settings.clusterName="$CLUSTER_NAME" \
--set settings.clusterEndpoint="$CLUSTER_ENDPOINT" \
--set settings.featureGates.drift=true \
--set controller.resources.requests.cpu=1 \
--set controller.resources.requests.memory=1Gi \
--set controller.resources.limits.cpu=1 \
--set controller.resources.limits.memory=1Gi \
--set "headnode=$HEADNODE" \
--set defaultControllerRole="arn:aws:iam::${AWS_ACCOUNT_ID}:instance-profile/${CLUSTER_NAME}-xio-controller" \
--set defaultWorkerRole="arn:aws:iam::${AWS_ACCOUNT_ID}:instance-profile/${CLUSTER_NAME}-xio-worker" \
--create-namespace
EOF
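Once the script completes, a quick sanity check is to confirm that the new CRDs were registered (exact CRD names depend on the chart version, but they follow the exo-prefixed groups shown earlier):
kubectl get crds | grep -iE 'exokarpenter|karpenter.k8s.exo'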
Alternatively, you can perform the same installation manually in two steps.
Step 1: Setting Environment Variables
Set the necessary environment variables to match your system configuration:
export HEADNODE="http://192.0.7.xx:5000" # Use the EMS head node's private IPv4 address
export CLUSTER_NAME="EksClusterName" # Replace with your EKS cluster name
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export CONTROLLER_ROLE="arn:aws:iam::${AWS_ACCOUNT_ID}:instance-profile/${CLUSTER_NAME}-xio-controller" # Controller IAM instance profile that you or x-install created
export WORKER_ROLE="arn:aws:iam::${AWS_ACCOUNT_ID}:instance-profile/${CLUSTER_NAME}-xio-worker" # Worker IAM instance profile that you or x-install created
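Before running the install, you can print the variables to make sure they expanded as expected:
echo "$HEADNODE" "$CLUSTER_NAME"
echo "$CONTROLLER_ROLE"
echo "$WORKER_ROLE"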
Step 2: Installing the Helm Chart
Execute the Helm chart installation with the command below:
helm upgrade --install xkarpenter oci://public.ecr.aws/u8h5n6o4/exostellar-karpenter/karpenter \
--version v2.0.1 \
--namespace exokarpenter \
--create-namespace \
--set "settings.clusterName=${CLUSTER_NAME}" \
--set controller.resources.requests.cpu=1 \
--set controller.resources.requests.memory=1Gi \
--set controller.resources.limits.cpu=1 \
--set controller.resources.limits.memory=1Gi \
--set "headnode=${HEADNODE}" \
--set "defaultControllerRole=${CONTROLLER_ROLE}" \
--set "defaultWorkerRole=${WORKER_ROLE}" \
--wait
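After the command returns, you can verify the deployment; pod names depend on the chart, but the controller should be running in the exokarpenter namespace:
helm status xkarpenter -n exokarpenter
kubectl get pods -n exokarpenter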