The TensorPool Agent is currently in beta. We’d love your feedback!
The TensorPool Agent is an autonomous monitoring and recovery system for long-running distributed training jobs on Kubernetes, Slurm, or TensorPool Jobs. It’s designed for large multi-node training jobs that run for days to weeks. When the TensorPool Agent detects a runtime error, it attempts to autonomously recover your training job from its last checkpoint. You explicitly whitelist the actions the TensorPool Agent can take on your behalf. Best case: the TensorPool Agent recovers your training job while you are AFK, giving you more iteration cycles and saving GPU hours that would otherwise be burned. Worst case: the TensorPool Agent delivers a preliminary root cause analysis and a list of the actions it would have taken.
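Recovering from the last checkpoint assumes your training loop periodically saves checkpoints and can resume from the most recent one. The sketch below shows that pattern in PyTorch; the checkpoint directory, file naming, and save interval are hypothetical and are not part of TensorPool.

```python
# Minimal checkpoint/resume pattern that autonomous recovery relies on.
# The directory, file naming, and interval below are illustrative only.
import glob
import os

import torch

CKPT_DIR = "/mnt/checkpoints"  # hypothetical shared storage path


def latest_checkpoint():
    # Zero-padded step numbers keep lexicographic and numeric order aligned.
    ckpts = sorted(glob.glob(os.path.join(CKPT_DIR, "step_*.pt")))
    return ckpts[-1] if ckpts else None


def save_checkpoint(model, optimizer, step):
    torch.save(
        {"model": model.state_dict(), "optim": optimizer.state_dict(), "step": step},
        os.path.join(CKPT_DIR, f"step_{step:08d}.pt"),
    )


def train(model, optimizer, total_steps, save_every=1000):
    start_step = 0
    ckpt = latest_checkpoint()
    if ckpt is not None:  # a restarted job resumes from the latest checkpoint
        state = torch.load(ckpt, map_location="cpu")
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optim"])
        start_step = state["step"] + 1
    for step in range(start_step, total_steps):
        ...  # forward/backward/optimizer.step() elided
        if step % save_every == 0 and step > start_step:
            save_checkpoint(model, optimizer, step)
```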

Target Failures

The TensorPool Agent is designed to address runtime errors that occur deep into training:
  • GPU hardware faults: Xid errors (79, 63, 48, etc.)
  • Distributed communication failures, NCCL errors
  • Infrastructure problems: hardware failures, kernel panics
  • Storage problems: I/O errors, checkpoint corruption, S3 timeouts
  • Network problems: mounted object storage bucket issues
  • GPU memory problems: CUDA out of memory, memory leaks, gradient explosion
The TensorPool Agent is not intended to fix errors that occur early in training, such as dependency issues or distributed communication initialization failures. It’s designed to solve issues that occur after the first checkpoint.
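To illustrate the kind of signal involved, the sketch below classifies log text against signatures of the failure classes listed above (GPU Xid lines, NCCL errors, CUDA out of memory, I/O errors). It is a simplified, hypothetical classifier, not the TensorPool Agent’s actual detection logic.

```python
# Hypothetical log classifier for the failure classes listed above.
# This is not the TensorPool Agent's internal detection logic.
import re

PATTERNS = {
    "gpu_xid": re.compile(r"NVRM: Xid \(.*?\): \d+"),  # kernel log Xid lines
    "nccl": re.compile(r"NCCL (WARN|ERROR)|ncclSystemError|ncclUnhandledCudaError"),
    "cuda_oom": re.compile(r"CUDA out of memory"),
    "storage_io": re.compile(r"Input/output error|Read-only file system"),
}


def classify(log_text: str) -> list[str]:
    """Return the failure classes whose signatures appear in the log text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(log_text)]


# Example: scan a node's kernel log for GPU Xid faults.
# classify(open("/var/log/kern.log").read())  ->  ["gpu_xid"]
```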

How It Works

  1. Registration: Provide credentials for your job scheduler of choice (Slurm, Kubernetes, or TensorPool Jobs) on the TensorPool Agent dashboard, and whitelist the actions you allow the agent to take on your behalf.
  2. Monitoring: The training job is continuously monitored for failure.
  3. Recovery (if the job fails): The TensorPool Agent analyzes the logs and attempts to diagnose and fix the issue. The job enters the recovering state.
  4. Resolution: If recovery succeeds, monitoring resumes. You’re alerted about the failure, the actions taken, and the recovery status. If the TensorPool Agent lacks the necessary permissions, it reports the actions it attempted and those it would have taken. A conceptual sketch of this loop follows the list.
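The steps above amount to a monitor, diagnose, recover, report loop gated by your action whitelist. The sketch below illustrates that control flow; the Job object, status strings, action names, and stub functions are placeholders, not TensorPool’s API.

```python
# Conceptual monitor -> diagnose -> recover loop gated by an action whitelist.
# The Job object, statuses, action names, and stub functions are placeholders.
import time
from dataclasses import dataclass

WHITELIST = {"restart_from_checkpoint"}  # actions the user has approved


@dataclass
class Job:
    status: str = "RUNNING"  # RUNNING | ERROR | COMPLETED
    log: str = ""


def diagnose(log: str) -> str:
    # Stub root-cause analysis; in practice this inspects logs as sketched earlier.
    return "gpu_xid" if "Xid" in log else "unknown"


def recovery_plan(cause: str) -> list[str]:
    # Stub mapping from a diagnosed cause to an ordered list of recovery actions.
    return ["cordon_bad_node", "restart_from_checkpoint"]


def monitor(job: Job, poll_seconds: int = 60) -> None:
    while job.status != "COMPLETED":
        if job.status == "ERROR":  # failure detected
            cause = diagnose(job.log)
            plan = recovery_plan(cause)
            allowed = [action for action in plan if action in WHITELIST]
            if allowed:
                print(f"recovering: cause={cause}, actions={allowed}")
                job.status = "RUNNING"  # recovery succeeded; keep monitoring
            else:
                print(f"report only: cause={cause}, would have run {plan}")
                return
        time.sleep(poll_seconds)
```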

TensorPool Agent Status Lifecycle

Status           | Description
pending          | TensorPool Agent created; credentials are being validated
enabled          | TensorPool Agent is monitoring the job
credential_error | Credential validation failed; the job is not accessible to the TensorPool Agent. Fix the credentials and resubmit.
recovering       | Job failure detected; the TensorPool Agent is attempting to recover it
completed        | Job finished (succeeded or unrecoverable)
You will be notified via text or email whenever the TensorPool Agent enters the recovering state.

Failure Detection

What the TensorPool Agent treats as a failure depends on the job scheduler (TensorPool Jobs, Kubernetes, or Slurm). For TensorPool Jobs, only jobs in the ERROR state trigger the TensorPool Agent.
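For Kubernetes and Slurm, job failure can be read from the scheduler itself. The sketch below shows one way such checks could be made with the standard kubectl and sacct command-line tools; the job names and IDs are placeholders, and this is illustrative rather than the agent’s actual mechanism.

```python
# Illustrative scheduler-side failure checks via standard CLIs (kubectl, sacct).
# Job names and IDs are placeholders; this is not the agent's actual mechanism.
import subprocess


def k8s_job_failed(job_name: str, namespace: str = "default") -> bool:
    # .status.failed counts the pods that have failed for the Kubernetes Job.
    out = subprocess.run(
        ["kubectl", "get", "job", job_name, "-n", namespace,
         "-o", "jsonpath={.status.failed}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out not in ("", "0")


def slurm_job_failed(job_id: str) -> bool:
    # sacct -X reports the allocation-level state for the job.
    out = subprocess.run(
        ["sacct", "-j", job_id, "-n", "-X", "-o", "State"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out.startswith(("FAILED", "NODE_FAIL", "OUT_OF_MEMORY", "TIMEOUT"))
```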

Setup Requirements

The information you must provide for the TensorPool Agent to monitor a job depends on the job scheduler (TensorPool Jobs, Kubernetes, or Slurm). TensorPool Jobs is the simplest option: just provide your TensorPool job ID.

Field  | Description
Job ID | Your TensorPool job ID

Next Steps