Validating Migratable VM and Slurm Communications
Validation of Migratable VM Joined to Your Slurm Cluster
These steps allow for a quick validation that the new compute resources can successfully connect and register with the scheduler.
xli node add -c 4 -m 8192 -n <HostName> -i <ImageName> -p <PoolName> -r <ProfileName> -u ./user_data.sh
The Image Name specified with -i <ImageName> should correspond to the Image Name added to the EMS’s Image Library earlier. The -u user_data.sh option is available for any customization that may be required: temporarily changing a password to facilitate logging in, for example; a minimal sketch of such a script follows the questions below. This step is meant to provide a migratable VM so that sanity checking may occur:
Have network mounts appeared as expected?
Is authentication working as intended?
What commands are required to finish bootstrapping?
Et cetera.
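As a rough sketch only, a validation-phase user_data.sh might look like the following; the account name, mount point, and temporary password are illustrative placeholders, not values required by the product:

#!/bin/bash
# Hypothetical validation-only bootstrap script; replace placeholder values for your site.

# Temporarily set a password so an interactive login is possible during sanity checking.
# (Remove this once validation is complete.)
echo 'someuser:TemporaryPassw0rd' | chpasswd

# Check that expected network mounts appeared; try mounting them if not.
mountpoint -q /shared || mount -a

# Record any additional commands needed to finish bootstrapping here as they are discovered.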
Lastly, slurmd should be started at the end of bootstrapping. Output from starting slurmd will likely show an error because the arbitrary host is unknown to the scheduler:
/opt/slurm/sbin/slurmd -N <HostName> -f /opt/slurm/etc/slurm.conf
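If bootstrapping is driven entirely from user_data.sh, one way to satisfy the "start slurmd last" requirement is to append that same command as the final step of the script; using the node's own short hostname here is an assumption and must match the name the scheduler expects:

# Final step of user_data.sh (sketch): start the Slurm daemon once all other bootstrapping is done.
/opt/slurm/sbin/slurmd -N "$(hostname -s)" -f /opt/slurm/etc/slurm.conf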
To remove this temporary VM:
xli node rm -n <HostName>
The above steps may need to be iterated through several times. When fully satisfied, stash the various commands required for successful bootstrapping and overwrite the user data scripts in the exostellar directory. There is a per-pool user_data script in the slurm.tgz whose assets were placed in ${SLURM_CONF_DIR}/exostellar. It can be overwritten any time a change is needed, and the next time a node is instantiated from that pool, the node will pick up the changes. A common scenario is that all the user_data scripts are identical, but it can be beneficial for different pools to have different user_data bootstrapping assets.
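For illustration only, installing a finalized bootstrap script for one pool could look like the following; the destination filename user_data_<PoolName>.sh is a placeholder, so check the actual names unpacked from slurm.tgz:

# Hypothetical example: overwrite one pool's bootstrap script with the finalized version.
# Adjust the destination filename to whatever slurm.tgz actually unpacked.
cp ./user_data.sh "${SLURM_CONF_DIR}/exostellar/user_data_<PoolName>.sh"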