Retry Mechanism


All storage backends (AWS S3, Azure, GCS, Vercel Blob, Netlify Blob) include automatic retry logic for transient failures. This ensures your uploads and file operations are resilient to network issues, rate limits, and temporary service unavailability.

Default Behavior

By default, storage operations will retry up to 3 times with exponential backoff:

  • Initial delay: 1 second
  • Backoff multiplier: 2x (delays double each retry: 1s → 2s → 4s)
  • Maximum delay: 30 seconds
  • Retryable status codes: 408, 429, 500, 502, 503, 504
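To make the default schedule concrete, here is a sketch of how the delay for each attempt works out. The `delayFor` helper is illustrative only, not part of the `@visulima/storage` API:

```typescript
// Defaults documented above: 1s initial delay, 2x multiplier, 30s cap.
const INITIAL_DELAY = 1_000;
const BACKOFF_MULTIPLIER = 2;
const MAX_DELAY = 30_000;

function delayFor(attempt: number): number {
    // attempt 1 -> 1s, attempt 2 -> 2s, attempt 3 -> 4s, capped at 30s
    const delay = INITIAL_DELAY * BACKOFF_MULTIPLIER ** (attempt - 1);
    return Math.min(delay, MAX_DELAY);
}
```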

Basic Configuration

Configure retry behavior when creating your storage instance:

import { S3Storage } from "@visulima/storage/provider/aws";

const storage = new S3Storage({
    bucket: "my-bucket",
    region: "us-east-1",
    retryConfig: {
        maxRetries: 5,
        initialDelay: 2000,
        backoffMultiplier: 1.5,
        maxDelay: 60_000,
    },
});

Advanced Configuration

Customize retry logic with backend-specific error detection:

import { AzureStorage } from "@visulima/storage/provider/azure";

const storage = new AzureStorage({
    containerName: "uploads",
    accountName: "myaccount",
    accountKey: "mykey",
    retryConfig: {
        maxRetries: 3,
        initialDelay: 1000,
        backoffMultiplier: 2,
        maxDelay: 30_000,
        retryableStatusCodes: [408, 429, 500, 502, 503, 504],
        shouldRetry: (error: unknown) => {
            // Custom retry logic
            if (error instanceof Error) {
                const errorCode = (error as any).code;

                // Retry on network errors
                if (errorCode === "ECONNRESET" || errorCode === "ETIMEDOUT") {
                    return true;
                }
            }

            // Retry on specific HTTP status codes
            if ((error as any).statusCode && [429, 503].includes((error as any).statusCode)) {
                return true;
            }

            return false;
        },
    },
});

Retry Configuration Options

interface RetryConfig {
    /** Maximum number of retry attempts (default: 3) */
    maxRetries?: number;

    /** Initial delay in milliseconds before first retry (default: 1000) */
    initialDelay?: number;

    /** Multiplier for exponential backoff (default: 2) */
    backoffMultiplier?: number;

    /** Maximum delay in milliseconds between retries (default: 30000) */
    maxDelay?: number;

    /** HTTP status codes that should trigger a retry (default: [408, 429, 500, 502, 503, 504]) */
    retryableStatusCodes?: number[];

    /** Custom function to determine if an error should be retried */
    shouldRetry?: (error: unknown) => boolean;

    /** Custom function to calculate delay for a specific retry attempt */
    calculateDelay?: (attempt: number, error: unknown) => number | undefined;
}
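For example, `calculateDelay` could honor a server-provided retry hint or add jitter to the backoff. The `retryAfter` field below is an assumed error shape for illustration, not something the library guarantees:

```typescript
// Sketch only: a custom delay strategy combining a server hint with jitter.
const retryConfig = {
    calculateDelay: (attempt: number, error: unknown): number | undefined => {
        // Honor an explicit server hint if the error carries one
        // (the retryAfter field is an assumed shape, not a library guarantee).
        const hinted = (error as { retryAfter?: number } | null)?.retryAfter;
        if (typeof hinted === "number") {
            return hinted * 1_000;
        }

        // Otherwise: exponential backoff with full jitter, capped at 30s.
        // Returning undefined instead would fall back to the default schedule.
        const base = 1_000 * 2 ** (attempt - 1);
        return Math.min(Math.random() * base, 30_000);
    },
};
```

Jitter spreads retries out over time, which helps avoid many clients hammering a recovering service at the same instant.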

Using Retry Utilities Directly

You can also use the retry utilities for custom operations:

import { retry, createRetryWrapper, isRetryableError } from "@visulima/storage";

// One-off retry
const result = await retry(
    async () => {
        // Your operation here
        return await someOperation();
    },
    {
        maxRetries: 3,
        initialDelay: 1000,
    },
);

// Create a reusable retry wrapper
const retryWrapper = createRetryWrapper({
    maxRetries: 5,
    initialDelay: 2000,
});

const wrappedResult = await retryWrapper(async () => {
    return await someOperation();
});

// Check if an error is retryable
if (isRetryableError(error)) {
    // Handle retryable error
}
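Conceptually, a helper like `retry` attempts the operation, waits with growing delays between failures, and rethrows the last error once attempts are exhausted. A minimal sketch of that loop (not the library's actual implementation):

```typescript
// Illustrative sketch of a retry loop with exponential backoff.
async function retrySketch<T>(
    operation: () => Promise<T>,
    { maxRetries = 3, initialDelay = 1_000, backoffMultiplier = 2 } = {},
): Promise<T> {
    let lastError: unknown;

    for (let attempt = 0; attempt <= maxRetries; attempt++) {
        try {
            return await operation();
        } catch (error) {
            lastError = error;
            if (attempt === maxRetries) break;

            // Wait before the next attempt; the delay grows each time.
            const delay = initialDelay * backoffMultiplier ** attempt;
            await new Promise((resolve) => setTimeout(resolve, delay));
        }
    }

    throw lastError;
}
```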

Supported Error Types

The retry mechanism automatically handles:

  • Network errors: ECONNRESET, ETIMEDOUT, ENOTFOUND, ECONNREFUSED, EAI_AGAIN
  • AWS SDK errors: Server faults, retryable status codes, SDK v2/v3 error formats
  • Azure Storage errors: HTTP status codes, network connection issues
  • HTTP errors: 408 (Request Timeout), 429 (Too Many Requests), 5xx (Server Errors)
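As an illustration of how such classification might work (this is not the library's exact logic), an error can be matched against the network codes and HTTP status codes listed above:

```typescript
// Sketch of retryable-error classification based on the lists above.
const RETRYABLE_NETWORK_CODES = new Set([
    "ECONNRESET", "ETIMEDOUT", "ENOTFOUND", "ECONNREFUSED", "EAI_AGAIN",
]);
const RETRYABLE_STATUS_CODES = new Set([408, 429, 500, 502, 503, 504]);

function looksRetryable(error: unknown): boolean {
    const e = error as { code?: string; statusCode?: number } | null;

    // Transient network failures.
    if (e?.code && RETRYABLE_NETWORK_CODES.has(e.code)) {
        return true;
    }

    // Transient HTTP failures.
    if (typeof e?.statusCode === "number") {
        return RETRYABLE_STATUS_CODES.has(e.statusCode);
    }

    return false;
}
```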

Examples by Storage Backend

AWS S3

import { S3Storage } from "@visulima/storage/provider/aws";

const storage = new S3Storage({
    bucket: "my-bucket",
    region: "us-east-1",
    retryConfig: {
        maxRetries: 3,
        // AWS SDK errors are automatically detected
    },
});

Azure Blob Storage

import { AzureStorage } from "@visulima/storage/provider/azure";

const storage = new AzureStorage({
    containerName: "uploads",
    accountName: "myaccount",
    accountKey: "mykey",
    retryConfig: {
        maxRetries: 5,
        initialDelay: 2000,
    },
});

Google Cloud Storage

GCS already has built-in retry support via gaxios. The retry mechanism works alongside GCS's native retry logic:

import { GCStorage } from "@visulima/storage/provider/gcs";

const storage = new GCStorage({
    bucket: "my-bucket",
    projectId: "my-project",
    retryConfig: {
        maxRetries: 3,
        // Works with GCS's existing retryOptions
    },
});

Vercel Blob

import { VercelBlobStorage } from "@visulima/storage/provider/vercel";

const storage = new VercelBlobStorage({
    token: process.env.BLOB_READ_WRITE_TOKEN,
    retryConfig: {
        maxRetries: 3,
        retryableStatusCodes: [408, 429, 500, 502, 503, 504],
    },
});

Netlify Blob

import { NetlifyBlobStorage } from "@visulima/storage/provider/netlify";

const storage = new NetlifyBlobStorage({
    storeName: "uploads",
    retryConfig: {
        maxRetries: 3,
    },
});

Best Practices

  1. Configure appropriate retry limits - Too many retries can cause unnecessary delays
  2. Use exponential backoff - Prevents overwhelming the service during outages
  3. Monitor retry patterns - Log retry attempts to identify persistent issues
  4. Customize per operation - Some operations may need different retry strategies
  5. Handle non-retryable errors - Distinguish between transient and permanent failures
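One way to monitor retry patterns (practice 3) is to log each decision from a custom `shouldRetry` before returning it. The logging approach below is a sketch under that assumption, not a built-in library feature:

```typescript
// Sketch: a shouldRetry that surfaces retry decisions in logs.
const retryConfig = {
    shouldRetry: (error: unknown): boolean => {
        const statusCode = (error as { statusCode?: number } | null)?.statusCode;
        const retryable =
            statusCode !== undefined && [408, 429, 500, 502, 503, 504].includes(statusCode);

        // Persistent failures show up as repeated log lines for the same status.
        console.warn(`storage retry decision: status=${statusCode} retryable=${retryable}`);

        return retryable;
    },
};
```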