1. Create Your First GPU Cluster

Deploy a single-node cluster with one H100 GPU:
tp cluster create -i ~/.ssh/id_ed25519.pub -t 1xH100 --name my-training-cluster
For multi-node training, create a four-node cluster with eight H100s per node (32 GPUs in total):
tp cluster create -i ~/.ssh/id_ed25519.pub -t 8xH100 -n 4 --name distributed-training
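
If you provision clusters regularly, the create command is easy to script. The sketch below reuses only the flags from the commands above; the script name, variable names, and defaults are illustrative, not part of the TensorPool CLI.

#!/usr/bin/env bash
# create_cluster.sh: provision a TensorPool cluster using only the flags shown above.
set -euo pipefail

NAME="${1:?cluster name required}"      # e.g. my-training-cluster
TYPE="${2:?instance type required}"     # e.g. 1xH100 or 8xH100
NODES="${3:-1}"                         # optional node count for multi-node clusters
KEY="$HOME/.ssh/id_ed25519.pub"         # public key passed with -i

if [ "$NODES" -gt 1 ]; then
  tp cluster create -i "$KEY" -t "$TYPE" -n "$NODES" --name "$NAME"
else
  tp cluster create -i "$KEY" -t "$TYPE" --name "$NAME"
fi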

2. List Your Clusters

tp cluster list
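
Listing is also a convenient way to wait for a newly created cluster. The loop below is a sketch: it assumes tp cluster list prints one line per cluster containing the cluster name and a status such as "ready", so adjust the grep patterns to match the actual output.

#!/usr/bin/env bash
# wait_for_cluster.sh: poll `tp cluster list` until the named cluster appears as ready.
set -euo pipefail

NAME="${1:?cluster name required}"

until tp cluster list | grep "$NAME" | grep -qi "ready"; do
  echo "Waiting for $NAME to become ready..."
  sleep 30
done
echo "$NAME is ready"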

3. SSH Into Your Cluster

Once your cluster is ready, you’ll receive the connection details. Connect to a node with the TensorPool CLI and start training:
tp ssh connect <instance_id>
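
What you run once connected is up to you. For the four-node, 8xH100 cluster created above, a PyTorch job could be launched with torchrun on every node. This is a sketch only: train.py, NODE_RANK, and NODE0_ADDR are placeholders you would set per node, and it assumes PyTorch is installed on the cluster.

# Run on each node, with NODE_RANK set to 0..3 and NODE0_ADDR pointing at the first node.
torchrun \
  --nnodes=4 \
  --nproc_per_node=8 \
  --node_rank="$NODE_RANK" \
  --master_addr="$NODE0_ADDR" \
  --master_port=29500 \
  train.py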

4. Clean Up

When you’re done, destroy your cluster:
tp cluster destroy <cluster_id>
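
To avoid paying for idle GPUs if a job script exits early, you can register the destroy command as an exit trap. A minimal sketch, assuming you pass the cluster ID as the first argument:

#!/usr/bin/env bash
# run_and_cleanup.sh: destroy the cluster when this script exits, even on failure.
set -euo pipefail

CLUSTER_ID="${1:?cluster id required}"
trap 'tp cluster destroy "$CLUSTER_ID"' EXIT

# ... launch and monitor your training job here ...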

Next Steps

I