# S3 Buckets

Deep dive into S3 bucket creation, configuration, and management.
S3 buckets are containers for storing objects in Amazon S3. Understanding bucket configuration is fundamental to working with S3 effectively.
## Bucket Fundamentals

### Key Facts

- Bucket names must be globally unique across all AWS accounts
- Buckets are created in a specific AWS Region
- You can have up to 100 buckets per account (a soft limit that can be raised)
- Buckets cannot be nested within other buckets
## Bucket Naming Rules

### Length Requirements

- Minimum: 3 characters
- Maximum: 63 characters

### Character Rules

- Lowercase letters (a-z)
- Numbers (0-9)
- Hyphens (-)
- Periods (.) are allowed but discouraged: they break virtual-hosted-style HTTPS access and Transfer Acceleration
- Must start and end with a letter or number
### Format Restrictions

- Cannot be formatted as an IP address (e.g., 192.168.1.1)
- Cannot start with the prefix `xn--` (reserved)
- Cannot end with the suffix `-s3alias` (reserved for Access Point aliases)
- Cannot contain consecutive periods
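The rules above can be sketched as a local pre-check before calling the CLI. This is an illustrative helper (`is_valid_bucket_name` is a hypothetical name) covering only the rules listed here; AWS remains the final authority.

```shell
# Hypothetical pre-check of the naming rules above; a convenience,
# not a guarantee that AWS will accept the name.
is_valid_bucket_name() {
  name="$1"
  len=${#name}
  # 3-63 characters
  [ "$len" -ge 3 ] && [ "$len" -le 63 ] || return 1
  # lowercase letters, digits, hyphens, periods; alphanumeric at both ends
  printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$' || return 1
  # not formatted like an IP address
  if printf '%s' "$name" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'; then
    return 1
  fi
  # no consecutive periods, no reserved prefix/suffix
  case "$name" in
    *..*|xn--*|*-s3alias) return 1 ;;
  esac
  return 0
}

is_valid_bucket_name "my-unique-bucket-name" && echo "name looks valid"
```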
## Creating Buckets

```shell
# Create a bucket in your default region
aws s3 mb s3://my-unique-bucket-name

# Create a bucket in a specific region
aws s3api create-bucket \
  --bucket my-unique-bucket-name \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
```

For regions other than us-east-1, you must specify `LocationConstraint`.
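Because us-east-1 rejects an explicit `LocationConstraint` while every other region requires one, scripts often branch on the region. A small sketch (the `location_args` helper is hypothetical):

```shell
# Hypothetical helper: emit the extra create-bucket flag a region needs.
# us-east-1 must be created *without* a LocationConstraint; all other
# regions must pass one explicitly.
location_args() {
  region="$1"
  if [ "$region" != "us-east-1" ]; then
    printf '%s' "--create-bucket-configuration LocationConstraint=$region"
  fi
}

# Usage sketch (requires a configured aws CLI):
#   region=us-west-2
#   aws s3api create-bucket --bucket my-bucket --region "$region" $(location_args "$region")
echo "us-west-2 -> $(location_args us-west-2)"
echo "us-east-1 -> $(location_args us-east-1)"
```

Note the command substitution is intentionally unquoted so the flag and its value split into separate CLI arguments.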
```shell
aws s3api create-bucket \
  --bucket my-bucket \
  --region us-east-1 \
  --acl private \
  --object-ownership BucketOwnerEnforced
```

`BucketOwnerEnforced` disables ACLs and is the recommended setting.
```shell
# First create the bucket
aws s3api create-bucket \
  --bucket my-encrypted-bucket \
  --region us-east-1

# Then enable default encryption
aws s3api put-bucket-encryption \
  --bucket my-encrypted-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "alias/aws/s3"
      },
      "BucketKeyEnabled": true
    }]
  }'
```

## Block Public Access
**Security Critical:** Always enable Block Public Access settings unless you have a specific requirement for public access.

```shell
aws s3api put-public-access-block \
  --bucket my-bucket \
  --public-access-block-configuration '{
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": true,
    "RestrictPublicBuckets": true
  }'
```

### Understanding Block Settings
| Setting | Description |
|---|---|
| BlockPublicAcls | Rejects PUT requests that include a public ACL |
| IgnorePublicAcls | Causes S3 to ignore all public ACLs on the bucket and its objects |
| BlockPublicPolicy | Rejects bucket policies that grant public access |
| RestrictPublicBuckets | Limits access to buckets with public policies to the bucket owner's account and AWS service principals |
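These settings can be audited from the CLI output. A sketch: in practice you would populate `pab.json` with `aws s3api get-public-access-block --bucket my-bucket > pab.json`; the heredoc below is a stand-in for that output so the check itself is runnable.

```shell
# Stand-in for `aws s3api get-public-access-block` output (same JSON shape)
cat > pab.json <<'EOF'
{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    }
}
EOF

# All four settings must be true; any "false" in the output means a gap
if grep -q 'false' pab.json; then
  echo "WARNING: public access is not fully blocked"
else
  echo "all four Block Public Access settings enabled"
fi
```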
## Bucket Ownership

Control who owns objects uploaded to your bucket:

```shell
aws s3api put-bucket-ownership-controls \
  --bucket my-bucket \
  --ownership-controls '{
    "Rules": [{
      "ObjectOwnership": "BucketOwnerEnforced"
    }]
  }'
```

| Setting | Behavior |
|---|---|
| BucketOwnerEnforced | ACLs disabled; the bucket owner owns all objects (recommended) |
| BucketOwnerPreferred | The bucket owner owns objects uploaded with the bucket-owner-full-control ACL |
| ObjectWriter | The uploading account owns the object |
ACLs are legacy. Use bucket policies and IAM policies instead.

If you must use ACLs:

```shell
aws s3api put-bucket-acl \
  --bucket my-bucket \
  --acl private
```

Available canned ACLs:

- `private` - Owner gets full control
- `public-read` - Anyone can read (avoid unless necessary)
- `authenticated-read` - Any authenticated AWS user can read
## Bucket Tagging

Tags help organize and track costs:

```shell
aws s3api put-bucket-tagging \
  --bucket my-bucket \
  --tagging '{
    "TagSet": [
      {"Key": "Environment", "Value": "Production"},
      {"Key": "Project", "Value": "MyApp"},
      {"Key": "CostCenter", "Value": "Engineering"}
    ]
  }'

# Verify the tags
aws s3api get-bucket-tagging --bucket my-bucket
```

## Bucket Logging
Enable server access logging to track requests:

```shell
# First, grant log delivery permissions on the target bucket
aws s3api put-bucket-acl \
  --bucket my-logs-bucket \
  --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery \
  --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery

# Then enable logging
aws s3api put-bucket-logging \
  --bucket my-bucket \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "my-logs-bucket",
      "TargetPrefix": "logs/my-bucket/"
    }
  }'
```

Note: the LogDelivery group ACL only works if the target bucket has ACLs enabled. If the target bucket uses BucketOwnerEnforced, grant the `logging.s3.amazonaws.com` service principal `s3:PutObject` via a bucket policy instead.

Log files are delivered on a best-effort basis; there may be delays of several hours.
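Once logs accumulate, quick aggregations are possible with standard tools. The records below are synthetic stand-ins for real log files; the field positions assume the standard access-log format, in which the bracketed timestamp spans two whitespace-delimited fields and the operation is the eighth.

```shell
# sample.log holds synthetic access-log records for illustration only;
# in practice you would `aws s3 sync s3://my-logs-bucket/logs/ ./logs/` first.
cat > sample.log <<'EOF'
OWNER my-bucket [06/Feb/2024:00:00:38 +0000] 192.0.2.3 REQUESTER REQID1 REST.GET.OBJECT photos/cat.jpg
OWNER my-bucket [06/Feb/2024:00:01:02 +0000] 192.0.2.3 REQUESTER REQID2 REST.PUT.OBJECT photos/dog.jpg
OWNER my-bucket [06/Feb/2024:00:01:40 +0000] 192.0.2.4 REQUESTER REQID3 REST.GET.OBJECT photos/cat.jpg
EOF

# Count requests per S3 operation (operation = 8th field)
awk '{print $8}' sample.log | sort | uniq -c | sort -rn
```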
## Bucket Notifications

Configure event notifications. Publish object-created events to an SNS topic:

```json
{
  "TopicConfigurations": [
    {
      "Id": "ObjectCreated",
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:my-topic",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            {"Name": "prefix", "Value": "uploads/"},
            {"Name": "suffix", "Value": ".jpg"}
          ]
        }
      }
    }
  ]
}
```

Send events to an SQS queue:

```json
{
  "QueueConfigurations": [
    {
      "Id": "ProcessUploads",
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:my-queue",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```

Invoke a Lambda function:

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "ImageProcessor",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:ProcessImage",
      "Events": ["s3:ObjectCreated:Put"],
      "Filter": {
        "Key": {
          "FilterRules": [
            {"Name": "suffix", "Value": ".png"}
          ]
        }
      }
    }
  ]
}
```

Apply the configuration:

```shell
aws s3api put-bucket-notification-configuration \
  --bucket my-bucket \
  --notification-configuration file://notification-config.json
```

## Transfer Acceleration
Speed up uploads from distant locations:

```shell
aws s3api put-bucket-accelerate-configuration \
  --bucket my-bucket \
  --accelerate-configuration Status=Enabled
```

Transfer Acceleration uses CloudFront edge locations. The bucket name cannot contain periods.

Upload through the accelerated endpoint:

```shell
aws s3 cp large-file.zip s3://my-bucket/ \
  --endpoint-url https://s3-accelerate.amazonaws.com
```

## Requester Pays
Make the requester pay for request and data transfer costs:

```shell
aws s3api put-bucket-request-payment \
  --bucket my-bucket \
  --request-payment-configuration Payer=Requester
```

Anonymous requests are not allowed on Requester Pays buckets; requesters must acknowledge the charge, e.g. by passing `--request-payer requester` on CLI calls.
## Bucket Metrics

Enable CloudWatch request metrics for detailed monitoring:

```shell
# Metrics for the whole bucket
aws s3api put-bucket-metrics-configuration \
  --bucket my-bucket \
  --id EntireBucket \
  --metrics-configuration Id=EntireBucket

# Metrics scoped to a prefix
aws s3api put-bucket-metrics-configuration \
  --bucket my-bucket \
  --id UploadsMetrics \
  --metrics-configuration '{
    "Id": "UploadsMetrics",
    "Filter": {
      "Prefix": "uploads/"
    }
  }'
```

## Deleting Buckets
**Warning:** Buckets must be empty before deletion. This includes all object versions and delete markers if versioning is enabled.

```shell
# Delete all objects first
aws s3 rm s3://my-bucket --recursive

# Then delete the bucket
aws s3 rb s3://my-bucket
```

Or in one command:

```shell
aws s3 rb s3://my-bucket --force
```

Note that `--force` removes current objects only. For versioned buckets, delete every version and delete marker first:

```shell
# Delete all object versions
aws s3api list-object-versions \
  --bucket my-bucket \
  --query 'Versions[].{Key:Key,VersionId:VersionId}' \
  --output text | while read KEY VERSION; do
    aws s3api delete-object \
      --bucket my-bucket \
      --key "$KEY" \
      --version-id "$VERSION"
done

# Delete all delete markers
aws s3api list-object-versions \
  --bucket my-bucket \
  --query 'DeleteMarkers[].{Key:Key,VersionId:VersionId}' \
  --output text | while read KEY VERSION; do
    aws s3api delete-object \
      --bucket my-bucket \
      --key "$KEY" \
      --version-id "$VERSION"
done

# Now delete the bucket
aws s3 rb s3://my-bucket
```

## Bucket Inventory
Generate inventory reports for auditing:

```shell
aws s3api put-bucket-inventory-configuration \
  --bucket my-bucket \
  --id daily-inventory \
  --inventory-configuration '{
    "Id": "daily-inventory",
    "IsEnabled": true,
    "Destination": {
      "S3BucketDestination": {
        "Bucket": "arn:aws:s3:::my-inventory-bucket",
        "Format": "CSV",
        "Prefix": "inventory/"
      }
    },
    "Schedule": {
      "Frequency": "Daily"
    },
    "IncludedObjectVersions": "Current",
    "OptionalFields": [
      "Size",
      "LastModifiedDate",
      "StorageClass",
      "ETag",
      "EncryptionStatus"
    ]
  }'
```

## Best Practices
Bucket Design Recommendations
- Use meaningful names - Include environment, region, or purpose
- One bucket per application/purpose - Easier access control
- Enable versioning for important data
- Enable encryption by default - Use SSE-S3 or SSE-KMS
- Block public access unless required
- Enable access logging for audit trails
- Use lifecycle policies to manage storage costs
- Tag buckets for cost allocation
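The first recommendation can be turned into a tiny convention helper. This is a hypothetical naming scheme (`<org>-<app>-<environment>-<region>`), purely illustrative:

```shell
# Hypothetical convention: <org>-<app>-<environment>-<region>,
# lowercased so the result satisfies the bucket-naming rules.
make_bucket_name() {
  printf '%s-%s-%s-%s' "$1" "$2" "$3" "$4" | tr '[:upper:]' '[:lower:]'
}

make_bucket_name acme myapp prod us-west-2
echo
```

Combined with a uniqueness suffix (such as the account ID), this keeps names predictable across environments.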