Cheap Cold Storage Backups with AWS Glacier Deep Archive and TrueNAS
How to set up a monthly off-site backup from TrueNAS to AWS S3 Glacier Deep Archive for about $1/TB/month using Cloud Sync Tasks.
If you need an off-site backup for a large dataset but don't want to pay traditional cloud storage prices, Glacier Deep Archive is hard to beat at ~$1/TB/month. The trade-off is retrieval time (12-48 hours), but for a true last-resort backup, that's fine.
TrueNAS has built-in Cloud Sync Tasks that use rclone under the hood, making it straightforward to push a dataset to S3 with the Glacier Deep Archive storage class — no lifecycle rules needed.
AWS Setup
Create the Bucket and IAM User
- Create an S3 bucket in your preferred region
- Create a dedicated IAM user for the backup (e.g., truenas-backup)
- Attach the AmazonS3FullAccess managed policy (or scope it down to your specific bucket if you prefer least-privilege)
- Generate an access key pair and store the secret key somewhere safe — you can't retrieve it after creation
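If you go the least-privilege route, a minimal inline policy might look like the sketch below. The bucket name `your-backup-bucket` is a placeholder, and note that `s3:DeleteObject` is required if you use SYNC mode, since SYNC removes remote objects:

```shell
# Write a least-privilege policy for the backup user.
# "your-backup-bucket" is a placeholder -- substitute your real bucket name.
cat > truenas-backup-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::your-backup-bucket"
    },
    {
      "Sid": "ReadWriteDeleteObjects",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-backup-bucket/*"
    }
  ]
}
EOF

# Attach as an inline policy (run with admin credentials, not the backup user's):
#   aws iam put-user-policy --user-name truenas-backup \
#     --policy-name truenas-backup-s3 --policy-document file://truenas-backup-policy.json
```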
How the Storage Class Works
Objects are uploaded directly to Glacier Deep Archive using rclone's --s3-storage-class DEEP_ARCHIVE flag. TrueNAS handles this automatically when you select the storage tier in the Cloud Sync Task UI. No S3 lifecycle rules are required.
This is actually a nice side benefit — if your AWS account has restrictions on lifecycle rule creation (e.g., Organization-level Service Control Policies), the direct storage class approach bypasses that entirely.
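For reference, the Cloud Sync Task boils down to an rclone remote along these lines (rclone.conf syntax; the remote name and region are placeholders, and the keys come from the IAM user created above):

```ini
[aws-deep-archive]
type = s3
provider = AWS
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
storage_class = DEEP_ARCHIVE
```

With a remote like that defined, the monthly push is roughly `rclone sync /mnt/your-pool aws-deep-archive:your-bucket` — the `storage_class` setting is what makes objects land directly in Deep Archive.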
TrueNAS Configuration
Step 1: Add Cloud Credentials
- Navigate to Credentials > Backup Credentials > Cloud Credentials
- Click Add
- Set Provider to Amazon S3
- Enter a name (e.g., AWS Glacier Backup)
- Enter your Access Key ID and Secret Access Key
- Click Verify Credential to confirm connectivity
- Save
Step 2: Create the Cloud Sync Task
Navigate to Data Protection > Cloud Sync Tasks and click Add.
| Setting | Value |
|---|---|
| Description | Monthly Off-site Backup |
| Direction | PUSH |
| Transfer Mode | SYNC |
| Credential | Your AWS credential from Step 1 |
| Bucket | Your S3 bucket name |
| Source Path | /mnt/your-pool |
| Storage Class | Glacier Deep Archive |
| Schedule | Monthly (1st of month, off-hours) |
SYNC vs COPY: Which Transfer Mode?
SYNC mirrors the exact state of your pool each month — it adds new files, updates changed files, and removes files from the remote that no longer exist locally.
COPY only adds new files and never deletes from the remote. Use this if you want the remote to accumulate everything ever added to the pool, even files you later removed locally.
For most backup scenarios, SYNC is what you want. If you're paranoid about accidental local deletions propagating to your backup, use COPY — but know that your bucket will grow indefinitely.
Excluding Directories
If you need to exclude certain directories from the backup, add exclusion patterns in the Cloud Sync Task's filter/exclude field:
```
large-temp-files/**
scratch/**
```
Adjust paths based on your dataset structure.
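TrueNAS passes these patterns to rclone, so standard rclone filter syntax applies (`**` matches across directory levels). If you run rclone by hand and want to keep a longer list outside the UI, rclone also accepts a filter file via `--exclude-from` — a hypothetical sketch, with placeholder paths:

```shell
# Hypothetical exclude list in rclone filter syntax. Paths are relative
# to the source path; "**" matches any depth below the named directory.
cat > backup-excludes.txt <<'EOF'
large-temp-files/**
scratch/**
EOF

# rclone would consume this with: --exclude-from backup-excludes.txt
```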
Performance Expectations
Based on real-world testing with a 1GB upload:
- Upload speed: ~97 Mbps (unthrottled, will vary by ISP and region)
- Test duration: ~1 minute 24 seconds for 1GB
Estimated Times for a 25TB Dataset
| Throttle Setting | Estimated Duration |
|---|---|
| 10 Mbps | ~231 days |
| 25 Mbps | ~93 days |
| 50 Mbps | ~46 days |
| 75 Mbps | ~31 days |
| Unthrottled (~97 Mbps) | ~24 days |
TrueNAS supports bandwidth throttling in the Cloud Sync Task settings. At this scale the initial upload takes weeks regardless of throttle, so the setting is less about finishing quickly and more about leaving bandwidth for other operations — 50 Mbps is a reasonable middle ground on a typical residential uplink, keeping the connection usable while the sync grinds along.
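Upload time is straight arithmetic — dataset size in bits divided by sustained throughput. A quick sketch to estimate for your own dataset size and throttle (decimal TB; integer math, so results are rough):

```shell
# Rough upload-time estimate: $1 = dataset size in TB, $2 = rate in Mbps.
# 1 TB = 8,000,000 megabits, so seconds = tb * 8e6 / mbps.
transfer_days() {
  seconds=$(( $1 * 8000000 / $2 ))
  echo "$(( seconds / 86400 )) days $(( seconds % 86400 / 3600 )) hours"
}

transfer_days 25 50   # 25 TB at a 50 Mbps throttle -> 46 days 7 hours
transfer_days 25 97   # 25 TB unthrottled at ~97 Mbps -> 23 days 20 hours
```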
The first sync transfers everything. Subsequent monthly syncs only transfer new or changed files, making them significantly faster.
Things to Know About Glacier Deep Archive
Retrieval Takes Time
This isn't a quick-access backup. Retrieving data takes 12-48 hours depending on the retrieval tier (Standard or Bulk). Plan accordingly for disaster recovery — this is your cold copy, not your first line of defense.
180-Day Minimum Storage Charge
Objects in Glacier Deep Archive have a 180-day minimum storage charge. Delete something before 180 days and you're billed for the remaining time. At ~$1/TB/month this is negligible for small deletions, but worth knowing if you're doing large-scale removals.
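The pro-rated charge is easy to estimate: deleting an object at day d costs roughly the remaining (180 − d) days of storage. A sketch, assuming the ~$1/TB/month rate and 30-day months:

```shell
# Rough early-deletion fee: $1 = size in TB, $2 = object age in days.
# Remaining days of the 180-day minimum, billed at ~$1/TB per 30-day month.
early_delete_fee() {
  awk -v tb="$1" -v age="$2" \
    'BEGIN { printf "$%.2f\n", tb * (180 - age) / 30 }'
}

early_delete_fee 1 60    # 1 TB deleted after 60 days -> $4.00
```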
Strategies to Avoid Early Deletion Fees
For write-once datasets, this is a non-issue. If deletions are a concern:
- Use COPY mode instead of SYNC — never deletes from remote, so no deletion fees. Bucket grows over time.
- Enable S3 versioning — deletes add a marker rather than removing the object. Old versions age past 180 days naturally. (Requires lifecycle rules to clean up old versions eventually.)
- Just be aware — if you delete files locally and run SYNC, those objects get removed from the bucket on the next run, potentially triggering fees.
Troubleshooting
Viewing Task Logs
In the TrueNAS UI, go to Data Protection > Cloud Sync Tasks and look for a task report dropdown after a run completes. It shows start time, finish time, and success/failure.
```shell
# Or check the middleware log directly
grep -i "cloud_sync" /var/log/middlewared.log | tail -50
```
Verifying Uploads
Log into the AWS Console, navigate to your S3 bucket, and confirm objects are listed with the Glacier Deep Archive storage class.
Permission Errors
If you see errors related to lifecycle rules or other S3 operations, it's likely an Organization-level Service Control Policy. These can't be modified from a member account — talk to whoever manages your AWS Organization. The direct storage class approach in this guide avoids most of these issues.
Cost Summary
For a 25TB dataset:
| Item | Cost |
|---|---|
| Glacier Deep Archive storage (25 TB at ~$1/TB/month) | ~$25/month |
| PUT requests (first sync, one-time) | ~$0.05 per 1,000 objects |
| Subsequent syncs (incremental) | Negligible |
| Total ongoing | ~$25/month |
Note that Deep Archive PUT requests are billed per object, so the one-time cost of the first sync depends on your file count — millions of small files add up, while fewer, larger files upload nearly for free.
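The storage line is just the per-GB rate times dataset size — a sketch assuming the published us-east-1 Deep Archive rate of ~$0.00099/GB-month (check current pricing for your region):

```shell
# Monthly storage cost at the ~$0.00099/GB-month Deep Archive rate (us-east-1).
monthly_storage_cost() {
  awk -v tb="$1" 'BEGIN { printf "$%.2f\n", tb * 1000 * 0.00099 }'
}

monthly_storage_cost 25   # 25 TB -> $24.75
```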
That's a full off-site backup of 25TB for the price of a decent lunch. Hard to argue with that.