Validating Migratable VM and Slurm Communications


Validation of Migratable VM Joined to Your Slurm Cluster

The script test_createVm.sh provides a quick validation that new compute resources can successfully connect and register with the scheduler.

CODE
./test_createVm.sh -h xvm0 -i <IMAGE_NAME> -u user_data.sh
  1. The hostname specified with -h xvm0 is completely arbitrary.

  2. The Image Name specified with -i <IMAGE_NAME> should correspond to the Image Name from the parse_helper.sh command and the environment setup earlier.

  3. The -u user_data.sh option is available for any customization that may be required: temporarily changing a password to facilitate logging in, for example.
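
      A minimal user_data.sh sketch, assuming a hypothetical test account named testuser; it sets a temporary password so you can log in during validation (remove these changes before production use):

      CODE
        #!/bin/bash
        # Hypothetical test account: set a temporary password for validation logins.
        echo 'testuser:TemporaryPassword123' | chpasswd
        # If the image disables password SSH logins, enable them temporarily.
        sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
        systemctl restart sshd   # service may be named ssh on Debian-family images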

  4. The test_createVm.sh script will continuously output updates until the VM is created. When the VM is ready, the script will exit and you’ll see that all the fields in the output are now filled with values:

    1. CODE
      Waiting for xvm0... (4)
      NodeName: xvm0
      Controller: az1-qeuiptjx-1
      Controller IP: 172.31.57.160
      Vm IP: 172.31.48.108
  5. This step is meant to provide a migratable VM so that sanity checking may occur (example commands follow this list):

    1. Have network mounts appeared as expected?

    2. Is authentication working as intended?

    3. What commands are required to finish bootstrapping?

    4. Et cetera.
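
      Example commands for these sanity checks, as a sketch (the user name is a placeholder for a real account in your environment):

      CODE
        # Have the expected network mounts appeared?
        df -h
        mount | grep -i nfs
        # Is authentication working? Resolve a known cluster account.
        id someuser
        getent passwd someuser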

  6. Lastly, slurmd should be started at the end of bootstrapping.

    1. Output from starting slurmd will likely show an error because the arbitrary host is unknown to the scheduler:

    2. CODE
      /opt/slurm/sbin/slurmd -N xvm0 -f /opt/slurm/etc/slurm.conf

      But that is not a problem since xvm0 has not been added to the cluster yet. That will happen in subsequent steps.
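
      Once bootstrapping is settled, the same slurmd start can be appended to the end of a user_data script. As a sketch, assuming the Slurm paths shown above and that the VM's hostname matches the node name the scheduler expects:

      CODE
        # End of user_data.sh: start slurmd after bootstrapping completes.
        # -N must match the node name known to the scheduler; deriving it from
        # the VM's hostname is an assumption about your naming scheme.
        /opt/slurm/sbin/slurmd -N "$(hostname -s)" -f /opt/slurm/etc/slurm.conf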

  7. To remove this temporary VM:

    1. Replace VM_NAME with the name of the VM: xvm0 in the -h xvm0 example above.

    2. CODE
      curl -X DELETE http://${MGMT_SERVER_IP}:5000/v1/xcompute/vm/VM_NAME
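
      For example, to remove the xvm0 VM created above (assuming MGMT_SERVER_IP is still set in your environment):

      CODE
        curl -X DELETE http://${MGMT_SERVER_IP}:5000/v1/xcompute/vm/xvm0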
  8. The above steps may need to be iterated several times. When fully satisfied, stash the various commands required for successful bootstrapping and overwrite the user data scripts in the exostellar directory (a sketch follows this list).

    1. There will be a per-pool user_data script in the slurm.tgz whose assets were placed in ${SLURM_CONF_DIR}/exostellar. It can be overwritten whenever a change is needed; the next time a node is instantiated from that pool, the node will pick up the changes.

    2. A common scenario is that all the user_data scripts are identical, but it could be beneficial for different pools to have different user_data bootstrapping assets.
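
      A sketch of the overwrite step, assuming a hypothetical per-pool script named pool0_user_data.sh and a finalized bootstrap script my_user_data.sh; back up the original before replacing it:

      CODE
        # Hypothetical file names: adjust to the per-pool scripts shipped in slurm.tgz.
        cd ${SLURM_CONF_DIR}/exostellar
        cp pool0_user_data.sh pool0_user_data.sh.bak
        cp /path/to/my_user_data.sh pool0_user_data.sh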
