
S3 Storage Classes

Comprehensive guide to S3 storage classes and cost optimization strategies

Amazon S3 offers a range of storage classes designed for different access patterns and cost requirements. Choosing the right storage class is critical for optimizing costs while meeting performance needs.

Storage Class Overview

Key Principle

S3 storage classes trade off access speed and cost. Frequently accessed data should use Standard; rarely accessed data should use cheaper archive tiers.

Frequently Accessed Data

S3 Standard

The default storage class for frequently accessed data.

Attribute                 Value
Durability                99.999999999% (11 9's)
Availability              99.99%
Availability Zones        ≥3
Minimum Storage Duration  None
Minimum Object Size       None
Retrieval Fee             None
First Byte Latency        Milliseconds

Use Cases:

  • Active application data
  • Dynamic website content
  • Mobile and gaming applications
  • Big data analytics
  • Content distribution
Upload to Standard (default)
aws s3 cp file.txt s3://my-bucket/

S3 Intelligent-Tiering

Automatically moves objects between tiers based on access patterns.

Tier                    Access Pattern                         Storage Cost
Frequent Access         Accessed recently                      Same as Standard
Infrequent Access       Not accessed for 30 days               40% lower
Archive Instant Access  Not accessed for 90 days               68% lower
Archive Access          Not accessed for 90+ days (optional)   71% lower
Deep Archive Access     Not accessed for 180+ days (optional)  95% lower

Intelligent-Tiering adds a small monthly monitoring and automation charge per object. Objects smaller than 128KB are never moved out of the Frequent Access tier (and are not charged the monitoring fee), so this class is best suited to objects larger than 128KB.
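Whether the per-object monitoring charge pays for itself depends on object size: the saving from cheaper tiers scales with GB, while the fee is flat per object. A rough sketch of that break-even; all rates below are illustrative placeholders, not current AWS prices (check the S3 pricing page for your region):

```python
# Sketch: does moving one rarely accessed object to the Intelligent-Tiering
# infrequent tier save more than the per-object monitoring fee costs?
# All rates are illustrative placeholders, NOT current AWS prices.
MONITORING_FEE_PER_OBJECT = 0.0000025   # assumed $/object/month
STANDARD_PER_GB = 0.023                 # assumed $/GB/month
IA_TIER_PER_GB = 0.0125                 # assumed $/GB/month (infrequent tier)

def intelligent_tiering_saves(object_bytes: int) -> bool:
    """True if the infrequent-tier saving exceeds the monitoring fee."""
    gb = object_bytes / (1024 ** 3)
    saving = gb * (STANDARD_PER_GB - IA_TIER_PER_GB)
    return saving > MONITORING_FEE_PER_OBJECT

print(intelligent_tiering_saves(8 * 1024))        # tiny 8 KB object
print(intelligent_tiering_saves(10 * 1024 ** 2))  # 10 MB object
```

With these assumed rates, small objects lose money to the flat fee while larger ones come out ahead, which is consistent with the 128KB guidance above.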

Upload with Intelligent-Tiering
aws s3 cp file.txt s3://my-bucket/ \
  --storage-class INTELLIGENT_TIERING

Configure archive access tiers:

Enable archive access tiers
aws s3api put-bucket-intelligent-tiering-configuration \
  --bucket my-bucket \
  --id my-config \
  --intelligent-tiering-configuration '{
    "Id": "my-config",
    "Status": "Enabled",
    "Tierings": [
      {
        "Days": 90,
        "AccessTier": "ARCHIVE_ACCESS"
      },
      {
        "Days": 180,
        "AccessTier": "DEEP_ARCHIVE_ACCESS"
      }
    ]
  }'

Infrequently Accessed Data

S3 Standard-Infrequent Access (Standard-IA)

For data accessed less frequently but requiring rapid access when needed.

Attribute                 Value
Durability                99.999999999% (11 9's)
Availability              99.9%
Availability Zones        ≥3
Minimum Storage Duration  30 days
Minimum Object Size       128KB (charged as 128KB if smaller)
Retrieval Fee             Per GB retrieved
First Byte Latency        Milliseconds

Cost Comparison (vs Standard):

  • Storage: ~40% cheaper
  • Retrieval: Additional fee per GB

Use Cases:

  • Backups and disaster recovery
  • Long-term storage that needs quick access
  • Older data that's occasionally accessed
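The 128KB minimum has a concrete billing effect: a smaller object is still charged as 128KB of Standard-IA storage. A minimal sketch of that rule:

```python
# Sketch of Standard-IA's minimum billable object size: objects smaller
# than 128 KB are charged storage as if they were 128 KB.
KIB = 1024
MIN_BILLABLE = 128 * KIB  # 131072 bytes

def billable_size(object_bytes: int) -> int:
    """Bytes of storage actually billed for one Standard-IA object."""
    return max(object_bytes, MIN_BILLABLE)

print(billable_size(4 * KIB))     # 4 KB object is billed as 131072 bytes
print(billable_size(1024 * KIB))  # 1 MB object is billed as-is
```

This is why Standard-IA is a poor fit for buckets full of tiny objects.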
Upload to Standard-IA
aws s3 cp file.txt s3://my-bucket/ \
  --storage-class STANDARD_IA

S3 One Zone-Infrequent Access (One Zone-IA)

Same as Standard-IA but stored in a single Availability Zone.

Attribute                 Value
Durability                99.999999999% (11 9's) within AZ
Availability              99.5%
Availability Zones        1
Minimum Storage Duration  30 days
Minimum Object Size       128KB
Retrieval Fee             Per GB retrieved
First Byte Latency        Milliseconds

Data is lost if the Availability Zone is destroyed. Not recommended for sole copies of critical data.

Cost Comparison (vs Standard-IA):

  • Storage: ~20% cheaper than Standard-IA

Use Cases:

  • Secondary backups
  • Easily reproducible data
  • Thumbnails or transcoded media
Upload to One Zone-IA
aws s3 cp file.txt s3://my-bucket/ \
  --storage-class ONEZONE_IA

Archive Storage Classes

S3 Glacier Instant Retrieval

Archive storage with millisecond retrieval.

Attribute                 Value
Durability                99.999999999% (11 9's)
Availability              99.9%
Availability Zones        ≥3
Minimum Storage Duration  90 days
Minimum Object Size       128KB
Retrieval Fee             Per GB
First Byte Latency        Milliseconds

Cost Comparison (vs Standard):

  • Storage: ~68% cheaper
  • Retrieval: Higher fee than Standard-IA

Use Cases:

  • Medical images
  • News media archives
  • User-generated content archives
  • Accessed once per quarter or less
Upload to Glacier Instant Retrieval
aws s3 cp file.txt s3://my-bucket/ \
  --storage-class GLACIER_IR

S3 Glacier Flexible Retrieval (formerly Glacier)

Low-cost archive storage with flexible retrieval times.

Attribute                 Value
Durability                99.999999999% (11 9's)
Availability              99.99% (after restore)
Availability Zones        ≥3
Minimum Storage Duration  90 days
Minimum Object Size       40KB overhead per object
First Byte Latency        Minutes to hours

Retrieval Options:

Tier       Time         Use Case
Expedited  1-5 minutes  Urgent access (costs most)
Standard   3-5 hours    Regular retrieval
Bulk       5-12 hours   Large data sets (cheapest)
Upload to Glacier Flexible
aws s3 cp file.txt s3://my-bucket/ \
  --storage-class GLACIER

Restore an object:

Restore from Glacier
# Standard retrieval, available for 7 days
aws s3api restore-object \
  --bucket my-bucket \
  --key my-archive.zip \
  --restore-request '{
    "Days": 7,
    "GlacierJobParameters": {
      "Tier": "Standard"
    }
  }'
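While the restore runs, `aws s3api head-object` reports a Restore field such as 'ongoing-request="true"', which flips to 'ongoing-request="false", expiry-date="..."' once the temporary copy is readable. A small sketch for polling that field:

```python
# Sketch: interpret the Restore field that `aws s3api head-object` returns
# while a Glacier restore is pending or complete.
from typing import Optional

def restore_complete(restore_field: Optional[str]) -> bool:
    """True once the object has been restored and can be downloaded."""
    if restore_field is None:
        return False  # no restore has been requested for this object
    return 'ongoing-request="false"' in restore_field

print(restore_complete('ongoing-request="true"'))   # still restoring
print(restore_complete(
    'ongoing-request="false", expiry-date="Sat, 01 Jun 2024 00:00:00 GMT"'))
```

A script might call head-object on a schedule and download the object only when this returns True.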

S3 Glacier Deep Archive

Lowest cost storage for long-term retention.

Attribute                 Value
Durability                99.999999999% (11 9's)
Availability              99.99% (after restore)
Availability Zones        ≥3
Minimum Storage Duration  180 days
Minimum Object Size       40KB overhead per object
First Byte Latency        12-48 hours

Retrieval Options:

Tier      Time             Use Case
Standard  Within 12 hours  Regular retrieval
Bulk      Within 48 hours  Large data sets (cheapest)

Cost Comparison:

  • ~95% cheaper than S3 Standard
  • Cheapest AWS storage option

Use Cases:

  • Compliance archives (7+ years retention)
  • Digital preservation
  • Magnetic tape replacement
  • Healthcare and financial records
Upload to Glacier Deep Archive
aws s3 cp file.txt s3://my-bucket/ \
  --storage-class DEEP_ARCHIVE
Restore from Deep Archive
aws s3api restore-object \
  --bucket my-bucket \
  --key compliance-record.pdf \
  --restore-request '{
    "Days": 30,
    "GlacierJobParameters": {
      "Tier": "Bulk"
    }
  }'

S3 Express One Zone

Newest Storage Class

S3 Express One Zone is designed for performance-critical applications requiring single-digit millisecond latency.

Attribute           Value
Latency             Single-digit milliseconds
Requests/sec        Hundreds of thousands
Availability Zones  1 (choose your AZ)
Storage Type        Directory buckets

Use Cases:

  • Machine learning training
  • Financial modeling
  • Real-time analytics
  • High-performance computing
Create Express One Zone directory bucket
aws s3api create-bucket \
  --bucket my-bucket--usw2-az1--x-s3 \
  --region us-west-2 \
  --create-bucket-configuration '{
    "Location": {
      "Type": "AvailabilityZone",
      "Name": "usw2-az1"
    },
    "Bucket": {
      "DataRedundancy": "SingleAvailabilityZone",
      "Type": "Directory"
    }
  }'

Reduced Redundancy (Deprecated)

Not Recommended

S3 Reduced Redundancy Storage (RRS) is deprecated. Use S3 One Zone-IA or S3 Standard instead.

Storage Class Comparison Table

Class                 Durability  Availability  AZs  Min Duration  Min Size  Latency
Standard              11 9's      99.99%        ≥3   None          None      ms
Intelligent-Tiering   11 9's      99.9%         ≥3   None          None      ms
Standard-IA           11 9's      99.9%         ≥3   30 days       128KB     ms
One Zone-IA           11 9's      99.5%         1    30 days       128KB     ms
Glacier Instant       11 9's      99.9%         ≥3   90 days       128KB     ms
Glacier Flexible      11 9's      99.99%*       ≥3   90 days       40KB+     min-hrs
Glacier Deep Archive  11 9's      99.99%*       ≥3   180 days      40KB+     hrs
Express One Zone      11 9's      99.95%        1    1 hour        None      < 10ms

*After restore
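The minimum storage durations above have a billing consequence: deleting (or transitioning) an object before the minimum still incurs the remaining days of storage charge. A sketch using the durations from the table:

```python
# Sketch: minimum-duration billing. Deleting early still bills the full
# minimum for that storage class (durations from the comparison table).
MIN_DURATION_DAYS = {
    "STANDARD": 0,
    "INTELLIGENT_TIERING": 0,
    "STANDARD_IA": 30,
    "ONEZONE_IA": 30,
    "GLACIER_IR": 90,
    "GLACIER": 90,
    "DEEP_ARCHIVE": 180,
}

def chargeable_days(storage_class: str, days_stored: int) -> int:
    """Days of storage billed, even if the object is deleted earlier."""
    return max(days_stored, MIN_DURATION_DAYS[storage_class])

print(chargeable_days("STANDARD_IA", 10))    # deleted day 10 -> billed 30
print(chargeable_days("DEEP_ARCHIVE", 200))  # past minimum -> billed 200
```

This is why archive classes only pay off for data you will actually keep past the minimum.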

Changing Storage Classes

Copy object to new storage class
aws s3 cp s3://my-bucket/file.txt s3://my-bucket/file.txt \
  --storage-class GLACIER_IR
Using S3 API
aws s3api copy-object \
  --bucket my-bucket \
  --key file.txt \
  --copy-source my-bucket/file.txt \
  --storage-class STANDARD_IA \
  --metadata-directive COPY

Use S3 Batch Operations for large-scale storage class changes:

batch-job-manifest.json
{
  "Spec": {
    "Format": "S3BatchOperations_CSV_20180820",
    "Fields": ["Bucket", "Key"]
  },
  "Location": {
    "ObjectArn": "arn:aws:s3:::my-bucket/manifest.csv",
    "ETag": "abc123"
  }
}
Create batch job
aws s3control create-job \
  --account-id 123456789012 \
  --manifest file://manifest.json \
  --operation '{
    "S3PutObjectCopy": {
      "TargetResource": "arn:aws:s3:::my-bucket",
      "StorageClass": "GLACIER"
    }
  }' \
  --report '{
    "Bucket": "arn:aws:s3:::report-bucket",
    "Enabled": true,
    "Format": "Report_CSV_20180820",
    "ReportScope": "AllTasks"
  }' \
  --role-arn arn:aws:iam::123456789012:role/S3BatchRole \
  --priority 1

Automate storage class transitions:

lifecycle-config.json
{
  "Rules": [
    {
      "ID": "OptimizeStorage",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER_IR"
        },
        {
          "Days": 365,
          "StorageClass": "DEEP_ARCHIVE"
        }
      ]
    }
  ]
}
Apply lifecycle configuration
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle-config.json
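The transitions in lifecycle-config.json determine which class holds an object at any age; walking the rule in order makes the timeline explicit. A sketch mirroring the "OptimizeStorage" rule:

```python
# Sketch: which storage class an object occupies at a given age, under the
# transitions defined in lifecycle-config.json above.
TRANSITIONS = [            # (age in days, target storage class)
    (30, "STANDARD_IA"),
    (90, "GLACIER_IR"),
    (365, "DEEP_ARCHIVE"),
]

def storage_class_at(age_days: int) -> str:
    """Before the first transition the object sits in Standard; each
    transition whose threshold has passed moves it along."""
    current = "STANDARD"
    for days, storage_class in TRANSITIONS:
        if age_days >= days:
            current = storage_class
    return current

print(storage_class_at(10))   # STANDARD
print(storage_class_at(45))   # STANDARD_IA
print(storage_class_at(400))  # DEEP_ARCHIVE
```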

Cost Optimization Strategies

Analyze Access Patterns

Use S3 Storage Lens and access logs to understand how data is accessed:

Create Storage Lens configuration
aws s3control put-storage-lens-configuration \
  --account-id 123456789012 \
  --config-id my-dashboard \
  --storage-lens-configuration '{
    "Id": "my-dashboard",
    "IsEnabled": true,
    "AccountLevel": {
      "BucketLevel": {
        "ActivityMetrics": {
          "IsEnabled": true
        }
      }
    }
  }'
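Alongside Storage Lens, a quick way to see where data currently sits is to tally the StorageClass field in `aws s3api list-objects-v2` output. A sketch over a hypothetical sample response:

```python
# Sketch: count objects per storage class from list-objects-v2 JSON output.
# The sample response below is hypothetical.
import json
from collections import Counter

sample = json.loads("""
{"Contents": [
  {"Key": "logs/a.gz",   "StorageClass": "STANDARD_IA"},
  {"Key": "logs/b.gz",   "StorageClass": "STANDARD_IA"},
  {"Key": "app/data.db", "StorageClass": "STANDARD"}
]}
""")

counts = Counter(obj.get("StorageClass", "STANDARD")
                 for obj in sample["Contents"])
print(dict(counts))
```

In practice you would pipe real output, e.g. `aws s3api list-objects-v2 --bucket my-bucket`, into a script like this.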

Use Intelligent-Tiering for Unknown Patterns

When you can't predict access patterns:

Transition to Intelligent-Tiering on day 0 (lifecycle rule)
{
  "Rules": [{
    "ID": "DefaultToIntelligentTiering",
    "Filter": {"Prefix": ""},
    "Status": "Enabled",
    "Transitions": [{
      "Days": 0,
      "StorageClass": "INTELLIGENT_TIERING"
    }]
  }]
}

Set Lifecycle Rules Based on Data Age

Create rules matching your retention requirements:

Tiered lifecycle example
{
  "Rules": [
    {
      "ID": "LogsLifecycle",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"}
      ],
      "Expiration": {"Days": 365}
    }
  ]
}

Monitor and Adjust

Regularly review storage costs and adjust policies:

Get bucket storage metrics
aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 \
  --metric-name BucketSizeBytes \
  --dimensions Name=BucketName,Value=my-bucket Name=StorageType,Value=StandardStorage \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-31T23:59:59Z \
  --period 86400 \
  --statistics Average
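The Average datapoint from the call above (bytes) converts directly into a rough monthly bill. A sketch; the $/GB rate is an illustrative placeholder, not a current AWS price:

```python
# Sketch: turn a BucketSizeBytes datapoint into an approximate monthly
# storage cost. The rate is an assumed placeholder, NOT a real AWS price.
STANDARD_PER_GB_MONTH = 0.023  # assumed $/GB/month

def monthly_storage_cost(bucket_size_bytes: float,
                         per_gb: float = STANDARD_PER_GB_MONTH) -> float:
    """Approximate monthly cost in dollars for the given bucket size."""
    gb = bucket_size_bytes / (1024 ** 3)
    return round(gb * per_gb, 2)

print(monthly_storage_cost(500 * 1024 ** 3))  # 500 GB at the assumed rate
```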

Best Practices

Storage Class Selection Guidelines

  1. Standard: Default for active data with unpredictable access
  2. Intelligent-Tiering: Best for large objects with unknown access patterns
  3. Standard-IA: Data accessed monthly, needs quick access
  4. One Zone-IA: Reproducible data, secondary copies
  5. Glacier Instant: Quarterly access, millisecond retrieval needed
  6. Glacier Flexible: Annual access, can wait hours
  7. Deep Archive: Rarely accessed, compliance/legal requirements
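The guidelines above can be sketched as a small decision helper. The thresholds below are rough readings of those guidelines, not official AWS rules:

```python
# Sketch of the selection guidelines as a decision helper. The numeric
# thresholds are assumptions chosen to approximate the list above.
def suggest_storage_class(accesses_per_year: int,
                          needs_millisecond_access: bool,
                          reproducible: bool = False) -> str:
    if accesses_per_year >= 12:       # roughly monthly or more often
        if accesses_per_year >= 52:   # roughly weekly: keep it active
            return "STANDARD"
        return "ONEZONE_IA" if reproducible else "STANDARD_IA"
    if needs_millisecond_access:
        return "GLACIER_IR"           # quarterly access, instant retrieval
    if accesses_per_year >= 1:
        return "GLACIER"              # annual access, can wait hours
    return "DEEP_ARCHIVE"             # compliance / rarely accessed

print(suggest_storage_class(100, True))   # STANDARD
print(suggest_storage_class(12, True))    # STANDARD_IA
print(suggest_storage_class(4, True))     # GLACIER_IR
print(suggest_storage_class(0, False))    # DEEP_ARCHIVE
```

Treat the output as a starting point; minimum durations, object sizes, and retrieval fees still need checking against the tables above.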
