TensorPool provides high-performance persistent storage that can be attached to your clusters.
## Storage Types
TensorPool offers two storage volume types:
| Feature | Fast Volumes | Flex Volumes |
|---|---|---|
| Cluster Support | Multi-node only (2+ nodes) | All cluster types |
| POSIX Compliant | Yes | No |
### Fast Storage Volumes
Fast storage volumes are high-performance NFS-based volumes designed for distributed training on multi-node clusters:
- Multi-node clusters only: Requires clusters with 2 or more nodes
- High aggregate performance: Up to 300 GB/s aggregate read throughput, 150 GB/s aggregate write throughput, 1.5M read IOPS, 750k write IOPS
- Provisioned volume size: Volume size must be specified at creation and can be increased at any time. See pricing for details.
- Ideal for: Datasets for distributed training, storing model checkpoints
Expected single-client performance on a 100TB Fast Storage Volume:
| Metric | Performance |
|---|---|
| Read Throughput | 6,000 MB/s |
| Write Throughput | 2,000 MB/s |
| Read IOPS | 6,000 |
| Write IOPS | 2,000 |
| Avg Read Latency | 5 ms |
| Avg Write Latency | 15 ms |
| p99 Read Latency | 9 ms |
| p99 Write Latency | 30 ms |
Fast storage volume performance scales with volume size. Larger volumes provide higher throughput and IOPS.
### Flex Storage Volumes
Flex storage volumes are flexible object-storage-backed volumes designed for workbench clusters and collaboration:
- All cluster types: Supported by all cluster types
- Backed by object storage: Cost-effective for large datasets
- Ideal for: Data archival, researcher collaboration, general persistent storage
- Unlimited volume size: Volumes are billed on usage and have no size limit. See pricing for details.
- Available through a high-performance FUSE mount: TensorPool’s optimized FUSE mount achieves performance parity with local NVMe storage
Performance ceiling for Flex Storage Volumes:
| Metric | Performance |
|---|---|
| Read Throughput | 2,300 MB/s |
| Write Throughput | 3,700 MB/s |
| Read IOPS | 2,300 |
| Write IOPS | 3,600 |
| Avg Read Latency | 10 ms |
| Avg Write Latency | 8 ms |
| p99 Read Latency | 19 ms |
| p99 Write Latency | 16 ms |
Note that these performance numbers represent the maximum possible performance for a single client.
Flex storage volumes are not POSIX compliant. The following features are unsupported:
- Hard links
- Setting file permissions (`chmod`)
- Sticky, set-user-ID (`SUID`), and set-group-ID (`SGID`) bits
- Updating the modification timestamp (`mtime`)
- Creating and using FIFO (first-in, first-out) pipes
- Creating and using Unix sockets
- Obtaining exclusive file locks
- Unlinking an open file while it is still readable
While symlinks are supported, their use is discouraged: symlink targets may not exist across all clusters, which can cause unexpected behavior.

The use of small files (under 100KB) is discouraged due to the request-based nature of object storage.

Setting up Python virtual environments within a Flex volume is not recommended, since virtual environments rely on symlinks and a large number (~1,000) of small files.
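If you need either pattern anyway, a common workaround is to keep virtual environments on local disk and to bundle small files into a single archive before writing them to the volume. A minimal sketch, assuming a Flex volume mounted at `/mnt/flex-<storage_id>` (the storage ID and local paths are placeholders):

```bash
# Keep the virtual environment on local disk, not on the Flex volume
python3 -m venv /opt/venvs/train   # placeholder local path
source /opt/venvs/train/bin/activate

# Bundle many small files into one archive before writing to the volume,
# turning thousands of small object requests into a few large ones
tar -cf /mnt/flex-<storage_id>/dataset.tar dataset/

# Extract to local scratch space when the individual files are needed
mkdir -p /tmp/dataset
tar -xf /mnt/flex-<storage_id>/dataset.tar -C /tmp/dataset
```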
## Core Commands
- `tp storage create -t <type> [-s <size_gb>]` - Create a new storage volume
- `tp storage list` - View all your storage volumes
- `tp cluster attach <cluster_id> <storage_id>` - Attach storage to a cluster
- `tp cluster detach <cluster_id> <storage_id>` - Detach storage from a cluster
- `tp storage destroy <storage_id>` - Delete a storage volume
## Creating Storage Volumes
Create storage volumes by specifying type (fast or flex) and size:
```bash
# Create a 500GB fast volume
tp storage create -t fast -s 500 --name training-data

# Create a flex volume (size not required)
tp storage create -t flex --name models
```
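Provisioning takes time; a volume is usable once it reaches the READY status (see Storage Statuses below). One way to watch for that, assuming the `tp storage list` output includes each volume's current status:

```bash
# Re-run the listing every 10 seconds until the new volume shows READY
# (assumes `tp storage list` reports a status for each volume)
watch -n 10 tp storage list
```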
## Attaching and Detaching
Attach storage volumes to a cluster:
```bash
tp cluster attach <cluster_id> <storage_id>
```
Detach when you’re done:
```bash
tp cluster detach <cluster_id> <storage_id>
```
Fast storage volumes can only be attached to multi-node clusters (clusters with 2 or more nodes). Flex storage works with all cluster types.
## Storage Locations
### Volume Mount Points
When you attach a storage volume to your cluster, it will be mounted on each instance at:
```
/mnt/<storage-type>-<storage_id>
```
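For example, a fast volume appears at `/mnt/fast-<storage_id>`. A quick way to confirm the mount from any node (the storage ID below is a placeholder):

```bash
# List mounted volumes and confirm the expected mount point exists
df -h | grep '/mnt/'
ls /mnt/fast-<storage_id>
```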
## Example Workflow
```bash
# 1. Create a 1TB fast storage volume
tp storage create -t fast -s 1000 --name dataset

# 2. Attach the volume to a cluster
tp cluster attach <cluster_id> <storage_id>

# 3. SSH into your cluster and access the data
tp ssh <instance_id>
cd /mnt/fast-<storage_id>

# 4. When done, detach the volume
tp cluster detach <cluster_id> <storage_id>

# 5. Destroy the volume when no longer needed
tp storage destroy <storage_id>
```
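After step 3, you can roughly sanity-check single-client write throughput against the numbers above. This is an illustrative check using standard tooling, not a TensorPool command; `dd` with direct I/O bypasses the page cache:

```bash
# Rough single-client write throughput check on the mounted volume
cd /mnt/fast-<storage_id>
dd if=/dev/zero of=throughput-test bs=1M count=4096 oflag=direct status=progress
rm throughput-test
```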
## Storage Statuses
Storage volumes progress through various statuses throughout their lifecycle:
| Status | Description |
|---|---|
| PENDING | Storage creation request has been submitted and is queued for provisioning. |
| PROVISIONING | Storage has been allocated and is being provisioned. |
| READY | Storage is ready for use. |
| ATTACHING | Storage is being attached to a cluster. |
| DETACHING | Storage is being detached from a cluster. |
| DESTROYING | Storage deletion is in progress; resources are being deallocated. |
| DESTROYED | Storage has been successfully deleted. |
| FAILED | A system-level problem occurred (e.g., no capacity or a hardware failure). |
## Best Practices
- Data Persistence: Use storage volumes for important data that needs to persist across cluster lifecycles
- Shared Data: Attach the same storage volume to multiple clusters to share datasets (see the sketch after this list)
- Choose the Right Type: Use fast storage for multi-node distributed training workloads; use flex storage for cost-effective persistent storage
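A minimal sketch of the shared-data pattern, assuming two existing clusters (all IDs are placeholders):

```bash
# Create one flex volume, then attach it to two clusters
tp storage create -t flex --name shared-datasets
tp cluster attach <cluster_a_id> <storage_id>
tp cluster attach <cluster_b_id> <storage_id>
```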
## Next Steps