Storage Configuration
Complete guide to configuring and managing file storage for MegaVault using Cloudflare R2, AWS S3, or other S3-compatible storage services.
Storage Overview
MegaVault supports multiple S3-compatible storage providers for scalable, reliable, and cost-effective file storage. Cloudflare R2 is recommended for its performance and pricing.
Supported Providers
Multiple storage options
- ✅ Cloudflare R2 (Recommended)
- ✅ AWS S3
- ✅ MinIO
- ✅ DigitalOcean Spaces
- ✅ Backblaze B2
- ✅ Other S3-compatible services
Features
Advanced storage capabilities
- ✅ Unlimited scalability
- ✅ Automatic thumbnails
- ✅ CORS configuration
- ✅ CDN integration
- ✅ Lifecycle policies
- ✅ Access control
Performance
Optimized file handling
- ✅ Multi-part uploads
- ✅ Resume interrupted uploads
- ✅ Parallel processing
- ✅ Intelligent caching
- ✅ Global distribution
- ✅ Edge optimization
Storage Provider Recommendation
Cloudflare R2 is recommended for most deployments because of its performance, pricing, and direct integration with Cloudflare's CDN; the other providers listed above work through the same S3-compatible configuration.
Cloudflare R2 Setup
Step-by-step guide to setting up Cloudflare R2 as your primary storage provider.
Create Cloudflare Account
Sign up for a Cloudflare account at cloudflare.com if you don't have one already.
Enable R2 Storage
Navigate to R2 Object Storage in your Cloudflare dashboard and enable the service.
Create Storage Bucket
Create a new R2 bucket with a unique name for your MegaVault installation.
Generate API Token
Create an API token with R2 read/write permissions for your bucket.
Configure CORS Policy
Set up CORS policy to allow web uploads from your domain.
Set Environment Variables
Configure MegaVault with your R2 credentials and bucket information.
R2 Bucket Creation
# Install Wrangler CLI
npm install -g wrangler
# Login to Cloudflare
wrangler login
# Create R2 bucket
wrangler r2 bucket create megavault-storage
# List buckets to verify
wrangler r2 bucket list
CORS Configuration
[
{
"AllowedOrigins": [
"https://your-domain.com",
"https://www.your-domain.com"
],
"AllowedMethods": [
"GET",
"PUT",
"POST",
"DELETE",
"HEAD"
],
"AllowedHeaders": [
"*"
],
"ExposeHeaders": [
"ETag",
"Content-Length"
],
"MaxAgeSeconds": 3600
}
]
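If you prefer to manage CORS from code rather than the dashboard, R2 and most S3-compatible providers accept the standard S3 PutBucketCors call. A minimal sketch, assuming the @aws-sdk/client-s3 package and the R2 credentials documented in the next subsection:
import { S3Client, PutBucketCorsCommand } from '@aws-sdk/client-s3';

// Applies the CORS rules shown above to the bucket. Verify that your
// provider supports the S3 CORS API before relying on this approach.
const client = new S3Client({
  region: 'auto',
  endpoint: process.env.R2_ENDPOINT,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

await client.send(new PutBucketCorsCommand({
  Bucket: process.env.R2_BUCKET_NAME,
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['https://your-domain.com', 'https://www.your-domain.com'],
      AllowedMethods: ['GET', 'PUT', 'POST', 'DELETE', 'HEAD'],
      AllowedHeaders: ['*'],
      ExposeHeaders: ['ETag', 'Content-Length'],
      MaxAgeSeconds: 3600,
    }],
  },
}));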
R2 Environment Variables
# Cloudflare R2 Configuration
R2_ACCOUNT_ID=your-cloudflare-account-id
R2_ACCESS_KEY_ID=your-r2-access-key-id
R2_SECRET_ACCESS_KEY=your-r2-secret-access-key
R2_BUCKET_NAME=megavault-storage
R2_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
# Optional: Custom Domain for Public URLs
R2_PUBLIC_URL=https://files.your-domain.com
# Optional: Enable R2 Analytics
R2_ANALYTICS_ENABLED=true
Custom Domain Setup (Optional)
- Create a CNAME record pointing to your R2 bucket
- Configure the custom domain in R2 settings
- Update the R2_PUBLIC_URL environment variable
- Enable SSL/TLS for the custom domain
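With the credentials above in place, the application talks to R2 through any S3-compatible client. A minimal connection sketch, assuming the @aws-sdk/client-s3 package (adjust to whatever S3 library your MegaVault build actually uses):
import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

// R2 exposes an S3-compatible endpoint, so the standard S3 client works.
// All values come from the environment variables documented above.
const r2 = new S3Client({
  region: 'auto', // R2 ignores the region, but the SDK requires one
  endpoint: process.env.R2_ENDPOINT,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Quick smoke test: list a few objects to confirm the credentials work.
const result = await r2.send(new ListObjectsV2Command({
  Bucket: process.env.R2_BUCKET_NAME,
  MaxKeys: 5,
}));
console.log(result.Contents?.map((o) => o.Key));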
AWS S3 Setup
Alternative setup guide for using AWS S3 as your storage provider.
Create AWS Account
Sign up for an AWS account and navigate to the S3 console.
Create S3 Bucket
Create a new S3 bucket with appropriate settings and region selection.
Configure Bucket Policy
Set up bucket policy and CORS configuration for web access.
Create IAM User
Create a dedicated IAM user with S3 permissions for MegaVault.
Generate Access Keys
Generate access key and secret key for the IAM user.
Configure Environment Variables
Set up MegaVault with your AWS S3 credentials and bucket information.
S3 Bucket Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::your-bucket-name/public/*"
},
{
"Sid": "MegaVaultAccess",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::YOUR-ACCOUNT-ID:user/megavault-user"
},
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::your-bucket-name",
"arn:aws:s3:::your-bucket-name/*"
]
}
]
}
S3 CORS Configuration
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
"AllowedOrigins": [
"https://your-domain.com",
"https://www.your-domain.com"
],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 3600
}
]
AWS S3 Environment Variables
# AWS S3 Configuration
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-aws-access-key-id
AWS_SECRET_ACCESS_KEY=your-aws-secret-access-key
AWS_S3_BUCKET=megavault-s3-bucket
# Optional: Custom S3 Endpoint (for S3-compatible services)
AWS_S3_ENDPOINT=https://s3.amazonaws.com
# Optional: S3 Transfer Acceleration
AWS_S3_ACCELERATED=true
Storage Configuration
Advanced storage configuration options and optimization settings.
File Handling Configuration
# File Size and Type Restrictions
STORAGE_MAX_FILE_SIZE=104857600 # 100MB in bytes
STORAGE_ALLOWED_TYPES=image/*,application/pdf,text/*,application/msword,application/vnd.openxmlformats-officedocument.wordprocessingml.document,video/mp4,audio/mpeg
# Upload Configuration
STORAGE_MULTIPART_THRESHOLD=10485760 # 10MB - use multipart for larger files
STORAGE_MULTIPART_CHUNK_SIZE=5242880 # 5MB chunk size
STORAGE_MAX_CONCURRENT_UPLOADS=3 # Maximum concurrent upload chunks
# Thumbnail Generation
STORAGE_ENABLE_THUMBNAILS=true # Enable automatic thumbnail creation
STORAGE_THUMBNAIL_SIZES=150,300,600 # Thumbnail sizes in pixels
STORAGE_THUMBNAIL_QUALITY=85 # JPEG quality for thumbnails (1-100)
STORAGE_THUMBNAIL_FORMAT=webp # Thumbnail format: jpeg, webp, png
# Caching and Performance
STORAGE_CACHE_CONTROL=public,max-age=31536000 # Cache headers for static files
STORAGE_ENABLE_COMPRESSION=true # Enable gzip compression
STORAGE_OPTIMIZED_DELIVERY=true # Enable optimized delivery
Storage Organization
// MegaVault file organization structure
/uploads/
├── users/
│ ├── {userId}/
│ │ ├── profile/
│ │ ├── documents/
│ │ └── media/
│ └── shared/
├── public/
│ ├── thumbnails/
│ └── temp/
└── system/
├── backups/
└── logs/
// Example file paths:
// /uploads/users/user_123/documents/project-spec.pdf
// /uploads/users/user_123/media/vacation-photo.jpg
// /uploads/public/thumbnails/thumb_150_vacation-photo.webp
Lifecycle Policies
{
"Rules": [
{
"ID": "TempFileCleanup",
"Status": "Enabled",
"Filter": {
"Prefix": "uploads/public/temp/"
},
"Expiration": {
"Days": 1
}
},
{
"ID": "OldBackupCleanup",
"Status": "Enabled",
"Filter": {
"Prefix": "uploads/system/backups/"
},
"Expiration": {
"Days": 30
}
},
{
"ID": "IntelligentTiering",
"Status": "Enabled",
"Filter": {
"Prefix": "uploads/users/"
},
"Transitions": [
{
"Days": 30,
"StorageClass": "STANDARD_IA"
},
{
"Days": 90,
"StorageClass": "GLACIER"
}
]
}
]
}
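On AWS S3, these rules can also be applied from code. A sketch using the @aws-sdk/client-s3 package (an assumption; on R2, lifecycle rules are managed through the Cloudflare dashboard and the available storage classes differ):
import { S3Client, PutBucketLifecycleConfigurationCommand } from '@aws-sdk/client-s3';

// Applies the lifecycle rules shown above to an S3 bucket.
const s3 = new S3Client({ region: process.env.AWS_REGION });

await s3.send(new PutBucketLifecycleConfigurationCommand({
  Bucket: process.env.AWS_S3_BUCKET,
  LifecycleConfiguration: {
    Rules: [
      {
        ID: 'TempFileCleanup',
        Status: 'Enabled',
        Filter: { Prefix: 'uploads/public/temp/' },
        Expiration: { Days: 1 },
      },
      {
        ID: 'IntelligentTiering',
        Status: 'Enabled',
        Filter: { Prefix: 'uploads/users/' },
        Transitions: [
          { Days: 30, StorageClass: 'STANDARD_IA' },
          { Days: 90, StorageClass: 'GLACIER' },
        ],
      },
    ],
  },
}));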
Storage Access Modes
MegaVault provides configurable storage access modes to support different deployment scenarios and security requirements. Administrators can choose between complete bucket access or folder-restricted access based on their organizational needs.
Available Access Modes
Bucket Mode
Complete access to the entire storage bucket
Use Case: Single-tenant deployments, personal cloud storage
Storage Path: Files stored directly at bucket root
Benefits:
- Maximum flexibility for file organization
- Simplified file path structure
- Easier migration from other systems
- No folder-based restrictions
Folder Mode
Restricted access to a specific folder within the bucket
Use Case: Multi-tenant systems, shared storage buckets
Storage Path: Files isolated within specified folder
Benefits:
- Enhanced security and isolation
- Better organization for shared buckets
- Prevents accidental data access
- Easier data management and backups
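To make the difference concrete, here is a hypothetical key-resolution helper (resolveKey is illustrative only, not a MegaVault API) showing how the two modes map a relative path to an object key:
// Illustrative only: resolves the final object key for an upload based on
// the configured access mode. The function name is hypothetical.
function resolveKey(relativePath: string): string {
  const mode = process.env.STORAGE_ACCESS_MODE ?? 'bucket';
  if (mode === 'folder') {
    const folder = process.env.USER_STORAGE_FOLDER;
    if (!folder) {
      throw new Error('USER_STORAGE_FOLDER must be set when STORAGE_ACCESS_MODE=folder');
    }
    return `${folder}/${relativePath}`; // e.g. single-user-folder/documents/report.pdf
  }
  return relativePath; // bucket mode: key at bucket root, e.g. documents/report.pdf
}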
Configuration Options
# ================================
# Storage Access Configuration
# ================================
# Choose between "bucket" (complete access) or "folder" (folder-specific access)
STORAGE_ACCESS_MODE=bucket # Options: bucket | folder
# Folder name for isolated storage (only used when STORAGE_ACCESS_MODE=folder)
USER_STORAGE_FOLDER=single-user-folder # Can be any valid folder name
# ================================
# Examples for Different Scenarios
# ================================
# Single User / Personal Deployment (Recommended)
STORAGE_ACCESS_MODE=bucket
# USER_STORAGE_FOLDER not needed in bucket mode
# Multi-User / Shared Bucket Deployment
STORAGE_ACCESS_MODE=folder
USER_STORAGE_FOLDER=production-vault
# Development / Testing Environment
STORAGE_ACCESS_MODE=folder
USER_STORAGE_FOLDER=dev-environment
File Organization Examples
🪣 Bucket Mode Structure
your-bucket/
├── documents/
│ ├── report.pdf
│ └── presentation.pptx
├── photos/
│ ├── vacation/
│ │ └── beach.jpg
│ └── family.jpg
├── projects/
│ └── website.zip
└── backup.tar.gz
📁 Folder Mode Structure
your-bucket/
├── other-app-data/
├── shared-resources/
└── single-user-folder/ ← MegaVault files
├── documents/
│ ├── report.pdf
│ └── presentation.pptx
├── photos/
│ ├── vacation/
│ │ └── beach.jpg
│ └── family.jpg
├── projects/
│ └── website.zip
└── backup.tar.gz
Deployment Recommendations
🏠 Personal/Single-User Deployments
Recommended: STORAGE_ACCESS_MODE=bucket
Reason: Maximum flexibility and simpler file management
Security: Entire bucket is dedicated to MegaVault
🏢 Enterprise/Multi-User Deployments
Recommended: STORAGE_ACCESS_MODE=folder
Reason: Better isolation and organization
Security: Prevents access to other data in shared bucket
🧪 Development/Testing
Recommended: STORAGE_ACCESS_MODE=folder
Reason: Isolate test data from production
Folder: Use descriptive names like dev-environment
Migration Between Modes
Important: Data Migration Required
Switching STORAGE_ACCESS_MODE does not move existing files. Migrate your data first (for example with the script below), verify it, and only then update the environment variables.
#!/bin/bash
# migrate-storage-mode.sh
# Usage: ./migrate-storage-mode.sh to-folder|to-bucket
set -euo pipefail

SOURCE_BUCKET="your-bucket-name"
FOLDER_NAME="single-user-folder"
DIRECTION="${1:-to-folder}"

if [ "$DIRECTION" = "to-folder" ]; then
    # Migrating FROM bucket mode TO folder mode
    echo "Migrating to folder mode..."
    aws s3 sync "s3://$SOURCE_BUCKET/" "s3://$SOURCE_BUCKET/$FOLDER_NAME/" --exclude "$FOLDER_NAME/*"
    # Verify migration
    echo "Verifying migration..."
    aws s3 ls "s3://$SOURCE_BUCKET/$FOLDER_NAME/" --recursive
    echo "Verify the listing above, then remove the original root-level objects."
else
    # Migrating FROM folder mode TO bucket mode
    echo "Migrating to bucket mode..."
    aws s3 sync "s3://$SOURCE_BUCKET/$FOLDER_NAME/" "s3://$SOURCE_BUCKET/"
    # Remove the now-duplicated folder
    aws s3 rm "s3://$SOURCE_BUCKET/$FOLDER_NAME/" --recursive
fi

echo "Migration completed. Update your environment variables:"
echo "STORAGE_ACCESS_MODE=bucket # or folder"
echo "# USER_STORAGE_FOLDER=single-user-folder # only for folder mode"
Security Considerations
⚠️ Bucket Mode Security
- Entire bucket is accessible to MegaVault
- Ensure the bucket is dedicated to MegaVault only
- Use proper IAM policies to restrict access
- Monitor bucket-level access logs
✅ Folder Mode Security
- Access limited to the specified folder only
- Safe for shared storage buckets
- Natural data isolation boundary
- Easier to implement data retention policies
File Management
Advanced file management features and optimization techniques.
Upload Optimization
- Multipart Uploads: Large files are automatically split into chunks
- Resume Capability: Interrupted uploads can be resumed
- Parallel Processing: Multiple chunks uploaded simultaneously
- Client-side Validation: File type and size validation before upload
- Progress Tracking: Real-time upload progress monitoring
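As a rough sketch of how these pieces fit together, the @aws-sdk/lib-storage Upload helper (an assumed dependency) provides multipart splitting, parallel part uploads, and progress events, driven by the STORAGE_* settings shown earlier:
import { createReadStream } from 'node:fs';
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

// Multipart upload with chunk size and concurrency taken from the
// STORAGE_* configuration. The Upload helper splits large bodies into
// parts and uploads them in parallel automatically.
const client = new S3Client({ region: process.env.AWS_REGION });

const upload = new Upload({
  client,
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    Key: 'uploads/users/user_123/documents/large-archive.zip', // example key
    Body: createReadStream('./large-archive.zip'),
  },
  partSize: Number(process.env.STORAGE_MULTIPART_CHUNK_SIZE ?? 5 * 1024 * 1024),
  queueSize: Number(process.env.STORAGE_MAX_CONCURRENT_UPLOADS ?? 3),
});

// Progress tracking: bytes uploaded so far versus total size.
upload.on('httpUploadProgress', (p) => {
  console.log(`${p.loaded ?? 0}/${p.total ?? '?'} bytes`);
});

await upload.done();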
Image Processing Pipeline
// Image processing settings
const imageProcessingConfig = {
thumbnails: {
sizes: [150, 300, 600, 1200],
format: 'webp',
quality: 85,
progressive: true,
strip: true // Remove EXIF data
},
optimization: {
jpeg: { quality: 90, progressive: true },
png: { compressionLevel: 9 },
webp: { quality: 85, effort: 6 }
},
limits: {
maxWidth: 4096,
maxHeight: 4096,
maxPixels: 16777216 // 16MP
}
};
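A minimal sketch of how thumbnails matching this configuration could be generated with the sharp library (an assumption; MegaVault's actual pipeline may use different tooling):
import sharp from 'sharp';

// Generate WebP thumbnails at the configured sizes. sharp drops EXIF and
// other metadata by default, which matches the "strip" option above.
async function makeThumbnails(input: Buffer, sizes = [150, 300, 600]) {
  return Promise.all(
    sizes.map(async (size) => ({
      size,
      data: await sharp(input)
        .resize({ width: size, withoutEnlargement: true })
        .webp({ quality: 85 })
        .toBuffer(),
    })),
  );
}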
File Metadata Management
{
"id": "file_123456789",
"name": "vacation-photo.jpg",
"originalName": "IMG_20240115_143022.jpg",
"mimeType": "image/jpeg",
"size": 2048576,
"checksums": {
"md5": "5d41402abc4b2a76b9719d911017c592",
"sha256": "aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
},
"metadata": {
"dimensions": { "width": 1920, "height": 1080 },
"exif": {
"camera": "iPhone 12 Pro",
"iso": 64,
"focalLength": "6mm",
"dateTimeOriginal": "2024-01-15T14:30:22Z"
},
"location": {
"latitude": 37.7749,
"longitude": -122.4194,
"country": "United States",
"city": "San Francisco"
}
},
"processing": {
"thumbnails": [
{ "size": 150, "url": "thumb_150_vacation-photo.webp" },
{ "size": 300, "url": "thumb_300_vacation-photo.webp" },
{ "size": 600, "url": "thumb_600_vacation-photo.webp" }
],
"processed": true,
"processedAt": "2024-01-15T14:31:15Z"
}
}
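The checksums in this record can be computed with Node's built-in crypto module; a short sketch whose field names mirror the example above:
import { createHash } from 'node:crypto';
import { readFile } from 'node:fs/promises';

// Compute the MD5 and SHA-256 checksums stored in the file metadata record.
async function computeChecksums(path: string) {
  const data = await readFile(path);
  return {
    md5: createHash('md5').update(data).digest('hex'),
    sha256: createHash('sha256').update(data).digest('hex'),
  };
}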
CDN Integration
Integrate a Content Delivery Network (CDN) for improved performance and global file delivery.
Cloudflare CDN (with R2)
# CDN Configuration
CDN_ENABLED=true
CDN_URL=https://cdn.your-domain.com
CDN_CACHE_TTL=31536000 # 1 year cache TTL
CDN_PURGE_ON_UPDATE=true # Auto-purge cache on file updates
# Cloudflare-specific settings
CLOUDFLARE_ZONE_ID=your-zone-id
CLOUDFLARE_API_TOKEN=your-api-token
CLOUDFLARE_CACHE_EVERYTHING=true # Cache all file types
AWS CloudFront (with S3)
# CloudFront Distribution
CLOUDFRONT_DISTRIBUTION_ID=your-distribution-id
CLOUDFRONT_DOMAIN=d123456789.cloudfront.net
CLOUDFRONT_INVALIDATE_ON_UPDATE=true
# CloudFront Cache Behaviors
CLOUDFRONT_DEFAULT_TTL=86400 # 24 hours
CLOUDFRONT_MAX_TTL=31536000 # 1 year
CLOUDFRONT_MIN_TTL=0 # No minimum TTL
Cache Optimization
- Static Assets: Long cache TTL for images, documents
- Thumbnails: Aggressive caching with versioning
- Dynamic Content: Short TTL for frequently updated files
- Cache Invalidation: Automatic purging on file updates
- Edge Locations: Global distribution for low latency
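When CDN_PURGE_ON_UPDATE is enabled, updated files must be evicted from the edge cache. A sketch of a Cloudflare purge request using the zone ID and API token configured above (CloudFront users would issue an invalidation instead; requires Node 18+ for the global fetch):
// Purge specific file URLs from Cloudflare's cache after an update.
async function purgeFromCdn(urls: string[]): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${process.env.CLOUDFLARE_ZONE_ID}/purge_cache`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ files: urls }),
    },
  );
  if (!res.ok) {
    throw new Error(`Cache purge failed: ${res.status}`);
  }
}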
Backup Strategy
Implement comprehensive backup and disaster recovery strategies for your storage.
Multi-Region Backup
# Primary Storage
R2_BUCKET_NAME=megavault-storage
R2_REGION=auto
# Backup Storage
BACKUP_ENABLED=true
BACKUP_PROVIDER=s3 # s3, r2, or another provider
BACKUP_BUCKET=megavault-backup
BACKUP_REGION=us-west-2
BACKUP_SCHEDULE=0 2 * * * # Daily at 2 AM UTC
BACKUP_RETENTION_DAYS=30 # Keep backups for 30 days
# Cross-region replication
REPLICATION_ENABLED=true
REPLICATION_DESTINATION=megavault-replica
REPLICATION_STORAGE_CLASS=STANDARD_IA
Backup Script
#!/bin/bash
# backup-storage.sh
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_PREFIX="backup_$TIMESTAMP"
LOG_FILE="/var/log/megavault/backup.log"
BACKUP_RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-30}"

echo "[$TIMESTAMP] Starting storage backup" >> "$LOG_FILE"

# Sync files from primary to backup storage
if aws s3 sync s3://megavault-storage "s3://megavault-backup/$BACKUP_PREFIX" --exclude "*/temp/*" --exclude "*/cache/*" --storage-class STANDARD_IA; then
    echo "[$TIMESTAMP] ✓ Backup completed successfully" >> "$LOG_FILE"
    # Update backup metadata
    echo "{\"timestamp\": \"$TIMESTAMP\", \"status\": \"success\"}" > /tmp/backup_status.json
    aws s3 cp /tmp/backup_status.json s3://megavault-backup/latest.json
else
    echo "[$TIMESTAMP] ✗ Backup failed" >> "$LOG_FILE"
    # Send alert (implement your notification system)
fi

# Cleanup old backups (backup prefixes are listed as "PRE backup_..." entries)
aws s3 ls s3://megavault-backup/ | grep "PRE backup_" | awk '{print $2}' | sort | head -n "-$BACKUP_RETENTION_DAYS" | xargs -I {} aws s3 rm "s3://megavault-backup/{}" --recursive

echo "[$TIMESTAMP] Backup process completed" >> "$LOG_FILE"
Disaster Recovery Plan
- Regular Backups: Automated daily backups to separate region
- Monitoring: Backup job monitoring and alerting
- Testing: Monthly backup restoration tests
- Documentation: Recovery procedures documentation
- RTO/RPO: Recovery Time/Point Objectives definition
Troubleshooting
Common storage configuration issues and their solutions.
Connection Issues
Invalid Credentials
Error: Access denied or authentication failed
Solutions:
- Verify access key and secret key
- Check IAM permissions
- Ensure bucket name is correct
- Validate endpoint URL
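A quick programmatic check that narrows down most credential problems, sketched with @aws-sdk/client-s3 (pass the endpoint only for R2 or other S3-compatible services):
import { S3Client, HeadBucketCommand } from '@aws-sdk/client-s3';

// HeadBucket fails fast with a 403 (bad credentials/permissions) or 404
// (wrong bucket name), which narrows down the causes listed above.
const client = new S3Client({
  region: process.env.AWS_REGION ?? 'auto',
  endpoint: process.env.R2_ENDPOINT, // omit for plain AWS S3
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID ?? process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

try {
  await client.send(new HeadBucketCommand({
    Bucket: process.env.R2_BUCKET_NAME ?? process.env.AWS_S3_BUCKET,
  }));
  console.log('✓ Bucket reachable with these credentials');
} catch (err) {
  console.error('✗ Bucket check failed:', err);
}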
CORS Errors
Error: Browser blocks upload requests
Solutions:
- Configure CORS policy on bucket
- Add your domain to allowed origins
- Include required headers
- Check preflight request handling
Performance Issues
Slow Uploads
Symptoms: Upload speeds slower than expected
Solutions:
- Enable multipart uploads
- Optimize chunk size
- Use transfer acceleration
- Check network connectivity
High Storage Costs
Issue: Unexpected storage bills
Solutions:
- Implement lifecycle policies
- Clean up temporary files
- Use intelligent tiering
- Monitor storage usage
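To monitor usage without waiting for the provider's billing dashboard, a sketch that totals object sizes under a prefix (assumes @aws-sdk/client-s3; for very large buckets the provider's storage metrics are cheaper):
import { S3Client, paginateListObjectsV2 } from '@aws-sdk/client-s3';

// Sum object sizes under a prefix to spot unexpected growth.
async function usageBytes(client: S3Client, bucket: string, prefix = ''): Promise<number> {
  let total = 0;
  for await (const page of paginateListObjectsV2({ client }, { Bucket: bucket, Prefix: prefix })) {
    for (const obj of page.Contents ?? []) {
      total += obj.Size ?? 0;
    }
  }
  return total;
}

const client = new S3Client({ region: process.env.AWS_REGION });
console.log('temp files:', await usageBytes(client, process.env.AWS_S3_BUCKET!, 'uploads/public/temp/'));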
Debugging Tools
#!/bin/bash
# storage-health-check.sh
echo "=== MegaVault Storage Health Check ==="

# Resolve bucket name and endpoint based on the configured provider
if [ "$STORAGE_PROVIDER" = "r2" ]; then
    BUCKET_NAME="$R2_BUCKET_NAME"
    ENDPOINT_ARGS="--endpoint-url=$R2_ENDPOINT"
else
    BUCKET_NAME="$AWS_S3_BUCKET"
    ENDPOINT_ARGS=""
fi

# Test storage connectivity
echo "Testing storage connectivity..."
aws s3 ls "s3://$BUCKET_NAME" $ENDPOINT_ARGS

# Test upload functionality
echo "Testing file upload..."
echo "test file" > /tmp/test_upload.txt
aws s3 cp /tmp/test_upload.txt "s3://$BUCKET_NAME/test/" $ENDPOINT_ARGS

# Test download functionality
echo "Testing file download..."
aws s3 cp "s3://$BUCKET_NAME/test/test_upload.txt" /tmp/test_download.txt $ENDPOINT_ARGS

# Verify file integrity
if cmp -s /tmp/test_upload.txt /tmp/test_download.txt; then
    echo "✓ Upload/download test passed"
else
    echo "✗ Upload/download test failed"
fi

# Cleanup test files
aws s3 rm "s3://$BUCKET_NAME/test/test_upload.txt" $ENDPOINT_ARGS
rm -f /tmp/test_upload.txt /tmp/test_download.txt

echo "Health check completed"