Clusters are ideal for interactive development, debugging, and building new projects. If you have working code and want to run experiments, Jobs are recommended instead.
Make sure you’ve installed the TensorPool CLI and configured your API key.
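If you haven’t done that yet, a minimal sketch (the package name and environment variable below are assumptions; see the installation guide for the exact steps):

```bash
# Install the TensorPool CLI (package name is an assumption)
pip install tensorpool

# Provide your API key (environment variable name is an assumption)
export TENSORPOOL_KEY="your-api-key"
```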
Create Your First GPU Cluster
Create a 1xB200 cluster, or, for multi-node training, a 4-node 8xB200 cluster.
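A sketch of both commands, assuming tp cluster create takes an instance type and a node count (the flag names below are assumptions; check the CLI help for the actual options):

```bash
# Single-node 1xB200 cluster (flag names are assumptions)
tp cluster create -t 1xB200

# Multi-node: four nodes of 8xB200 each
tp cluster create -t 8xB200 -n 4
```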
See instance types for all available GPU configurations.
Check Your Cluster Status
The tp cluster create command will give you a cluster ID. Use it to check your cluster’s status. If you lose the cluster ID, you can always find it with tp cluster list.
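For example (tp cluster list is the documented way to see your clusters; the per-cluster status subcommand shown second is an assumption):

```bash
# List all of your clusters and their current statuses
tp cluster list

# Check one cluster by ID (subcommand name is an assumption)
tp cluster status <cluster-id>
```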
SSH Into Your Cluster
Once your cluster is ready, you’ll receive the connection details. SSH into your nodes and start training! Connect using the TensorPool CLI:
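A sketch of connecting, assuming the CLI exposes an ssh subcommand keyed by cluster ID (the subcommand name and SSH username are assumptions; plain ssh with the connection details you received also works):

```bash
# Connect via the CLI (subcommand name is an assumption)
tp cluster ssh <cluster-id>

# Or use the connection details directly (username and IP come from those details)
ssh ubuntu@<node-public-ip>
```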
Clean Up
When you’re done, destroy your cluster:
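For example, assuming a destroy subcommand keyed by cluster ID (the subcommand name is an assumption; check the CLI help for the exact command):

```bash
# Tear down the cluster so you stop being billed (subcommand name is an assumption)
tp cluster destroy <cluster-id>
```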
Next Steps
- Learn about cluster management
- Explore NFS storage for persistent data
- Check out multi-node training for distributed workloads