Amazon S3 destination

This page describes how to use the AWS S3 storage destination

The storage destination is implemented with the general S3 destination, so all details from that page apply here as well, but AWS supports some additional features.

To use the AWS S3 destination, use a format such as:

s3://<bucket name>/<prefix>
  ?aws-access-key-id=<account id or username>
  &aws-secret-access-key=<account key or password>
  &s3-location-constraint=<region-id>

If you supply a region such as us-east-1 instead of a hostname, the hostname is selected automatically based on the region. If the region is not yet supported by the library, you can supply the hostname via --server-name=<hostname>.

Beware that S3 does not use an encrypted connection by default; you need to add --use-ssl=true to enable it.
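
As an illustration, a complete destination URL for a hypothetical bucket named my-backups with the prefix laptop in eu-west-1, using an encrypted connection, could look like the following (the bucket name, prefix, and credentials are placeholders):

s3://my-backups/laptop
  ?aws-access-key-id=AKIAEXAMPLEKEY
  &aws-secret-access-key=EXAMPLESECRET
  &s3-location-constraint=eu-west-1
  &use-ssl=true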

Creating a bucket

When a bucket is created, it is placed in the location supplied by --s3-location-constraint. If no constraint is supplied, the AWS library decides where to place it. If the bucket already exists, it cannot be created again, so in that case the --s3-location-constraint setting has no effect other than choosing the hostname.
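
As a sketch, a command line backup that lets Duplicati create the bucket in a specific region could look like the following; the duplicati-cli wrapper name, the bucket name, the source path, and the credentials are placeholders for this example, and backend options such as the access keys can also be placed in the URL instead:

duplicati-cli backup "s3://my-new-bucket/backup?use-ssl=true" /home/user/documents \
  --aws-access-key-id=AKIAEXAMPLEKEY \
  --aws-secret-access-key=EXAMPLESECRET \
  --s3-location-constraint=eu-central-1 \
  --passphrase=<encryption passphrase>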

Storage class

By default, objects are created with the "Standard" storage class, which has optimal access times and redundancy. More information about the different storage classes is available from AWS. You can choose the storage class with the --s3-storage-class option. Note that you can provide any string supported by your AWS region here, even though the UI only offers a few choices.
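
For example, AWS uses the name STANDARD_IA for the Infrequent Access tier; the class name comes from the AWS documentation, not from Duplicati's UI list, and it can be given either as an option or as part of the destination URL:

--s3-storage-class=STANDARD_IA

s3://my-backups/backup?s3-storage-class=STANDARD_IA&use-ssl=true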

Note on Glacier storage class

Glacier storage does not work well with Duplicati. Duplicati routinely verifies that the remote data is intact, and because Glacier objects cannot be read on demand, these checks need to be disabled. You also need to disable the cleanup (compacting) of data after versions are deleted. Restores are tricky as well, because you need to retrieve the data from Glacier manually before Duplicati can work with it.
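
The options below are a sketch of the settings this typically involves; they come from Duplicati's general option set, so verify that they behave as expected with your version before relying on them:

--no-backend-verification=true   (skip comparing the remote file list to the local database)
--backup-test-samples=0          (do not download sample volumes for verification after each backup)
--no-auto-compact=true           (do not compact remote volumes when versions are deleted)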
