Duplicati Documentation

Welcome to the Duplicati Documentation! This site contains documentation for using the open source Duplicati client, including best practices, pro tips, and troubleshooting.

If you cannot find an answer on this site, you can always ask a question on our helpful forum 🤗.

Jump right in

Contributing

If you spot an error or want to add something to the documentation, head over to the documentation repository and open an issue or pull request.

Running a backup

This page describes how to run a backup outside of an automatic schedule

With a configured backup, you can have a schedule that runs the backup automatically each day. Running the backup automatically is recommended because it ensures that the backups are recent when they are needed.

Even if the backup already has a schedule, there may be times when you want to run a backup manually. If you have just configured a backup, you may want to run it ahead of the next scheduled run. If you are in the UI, you can click the "Start" button for the backup.

A newly configured backup

Once the backup is running, the top area will act as a progress bar that shows how the backup progresses. Note that the first run of a backup is the slowest run because it needs to process every file and folder that is part of the source. On later runs it will recognize what parts have changed and only process the new and changed data.

If you need to automate starting a backup without using the UI, you can use ServerUtil to trigger backups from the commandline.
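As a sketch, a trigger from the commandline could look like this (the binary name and verbs are assumptions and may differ by platform and version; check the ServerUtil help output):

```shell
# Ask the running Duplicati server to start the backup named "Documents"
# (binary and argument names are assumptions; verify with `duplicati-server-util help`)
duplicati-server-util run "Documents"
```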

After running a backup, the view will change slightly and show some information about the backup.

After the backup has completed it shows backup details

Installation

This page describes how to install Duplicati on the various supported platforms

The Duplicati package types

For desktop and laptop users, the most common application type is called the "GUI" package, which is short for Graphical User Interface. The GUI package includes the core components, a webserver to show the user interface and a tray icon (also called a status bar icon).

For users installing in environments without a desktop or screen, there are also commandline only, remote management and Docker versions. Depending on your setup, you may also want to use one of those packages on a desktop or laptop.

This page covers only the GUI installation.

Jump to the section that is relevant to you:

Install Duplicati on Windows

The most common installation format on Windows is the MSI package. To install on Windows you need to know what kind of processor is in your system. If you are unsure, you most likely have a 64-bit processor, also known as x64. There is also a version supporting Arm64 processors, and a version for legacy 32-bit Windows called x86.

Simply head over to the Duplicati download page and download the relevant MSI package. Once downloaded, double-click the installer. The installation dialog lets you adjust settings to your liking and will install Duplicati. The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

Install Duplicati on MacOS

For MacOS the common installation method is a DMG file containing the application. Most modern MacOS machines use Apple Silicon, which is called Arm64 in Duplicati's packages. If you are on an older Mac with a 64-bit Intel processor, you can use the x64 package instead.

Once you know which kind of Mac you have, head over to the Duplicati download page and download the relevant DMG file. Open the file and drag Duplicati into the Applications folder, and then you can start Duplicati.

The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

Install Duplicati on Linux

Most Linux distributions work well with Duplicati, but there are only packages for Debian-based distributions (Ubuntu, Mint, etc.) and for RedHat-based distributions (Fedora, SUSE, etc.). For other distributions you may need to manually install some dependencies.

For Linux there are packages for the most common 64-bit systems with x64, as well as support for Arm64 and its predecessor Arm7 (aka ArmHF), which are commonly found in NAS boxes and the older Raspberry Pi series.

Install on Debian-based Linux (Ubuntu, Mint, etc)

To install Duplicati on a Debian based system, first download the .deb package matching the system architecture, then run:
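For example (the actual file name depends on the version and architecture you downloaded):

```shell
# Install the downloaded package; replace the file name with your download
sudo dpkg -i duplicati-2.x.y-x64.deb
# If dpkg reports missing dependencies, let apt resolve them:
sudo apt-get install -f
```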

This will install all dependencies and place Duplicati in the default location on the target system. The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

Install on RedHat-based Linux (Fedora, SUSE, etc)

To install Duplicati on a RedHat-based system, first download the .rpm package matching the system architecture, then run:
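For example (the actual file name depends on the version and architecture you downloaded; on newer systems `dnf` replaces `yum`):

```shell
# Install the downloaded package together with its dependencies
sudo yum install duplicati-2.x.y-x64.rpm
```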

This will install all dependencies and place Duplicati in the default location on the target system. The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

Install on another Linux distribution

For other Linux distributions you can use the .zip file that matches your system architecture. Inside the zip file are all the binaries that are needed, and you can simply place them in a folder that works for your system. Generally, all dependencies are included in the packages, so unless you are using a very slimmed-down setup, it should work without additional packages.

The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

Restoring files

This page describes how to restore files using the Duplicati user interface

The most important reason to make a backup is the ability to recover the data at a later stage, usually due to some unforeseen incident. Depending on the incident, the original configuration may not be available.

To start a restore process in Duplicati, start on the "Restore" page.

If the backup configuration already exists on the machine, you can choose it from the list. In this case you can click "Restore" and skip to the section on choosing files to restore.

The restore and browsing process are fastest when using a configured backup, because Duplicati can query a local database with information. If the local database is not present, Duplicati needs to fetch enough information from the remote storage to build a partial database when performing the restore.

If you have exported the backup configuration and have the configuration available, you can click the "Start" button on "Restore from configuration" and skip to the restore from configuration section.

Using Duplicati from the Command Line

This page is not yet completed. See the section on the CLI interface.

Disaster recovery

This page explains how to recover as much data as possible from a broken remote storage

This page is not yet completed. See the section on the recovery tool.

Provider specific destinations

Recovering from failure

This page describes how to get a backup working again after a failure on the remote storage

Standard based destinations

Decentralized providers

File synchronization providers


Sia Destination

This page describes the Sia storage destination

The Sia destination is currently deprecated as it is incompatible with the current version of the network.

Duplicati supports backups to the Sia network, which is a large-scale decentralized storage network. To use the Sia destination, use this URL format:

sia://<host>:<port>/<path>
  ?sia-password=<password>

If the host supports unauthenticated connections, you can omit the password. The default port is 9980 and the default path is /backup if none is supplied.

Advanced options

To adjust the amount of redundancy in the Sia network, use the option --sia-redundancy. Note that this value should be more than 1.


Direct restore from backup files

To restore files from the backup, Duplicati needs only to know how to access the files and the encryption passphrase (if any). If you do not have the passphrase, it is not possible to restore.

This option is useful when restoring from a machine other than the one that originally made the backup, but it is not recommended for regular operations due to the need to download and process more data.

To restore directly from the backup files, the first step is to provide the destination details. These details are the same as you supplied initially when creating the backup. If you are using a cloud provider, you can usually get the needed information via your account on the vendor's website. You can see a list of the destinations that are supported by Duplicati.

Supply storage destination details

Once the details are entered, it is recommended to use the "Test connection" button to ensure that the connection is working correctly. Then click the "Continue" button.

Supply the encryption passphrase

If the backup is not encrypted, leave the field empty. When ready, click "Continue" and Duplicati will examine the remote destination and figure out what backups are present. After working through the information, you can choose files to restore.

Restore from configuration

If you have a configuration file you can use the information in that file to avoid entering it manually. If you need to restore more than once, it may be faster to import the configuration and rebuild the local database. After the database is built, you can choose the configuration from the list and skip to choosing files to restore.

Restoring with a backup configuration

In the dialog, provide the exported configuration file. If the file is encrypted, you will be asked to enter the file passphrase.

The passphrase used to encrypt the configuration file is not necessarily the same as the passphrase used to encrypt the backup.

Once the configuration is correct, click the "Restore" button and you will be ready to choose files to restore.

Choosing files to restore

Once Duplicati has a connection to the remote destination it will find all the backups that were made. It will then choose the most recent version and list the files from within that version. Use the dropdown to select the version to restore from, and then choose the files that you want to restore.

Choosing files to restore

Click the files or folders that you want to restore and then click "Continue".

Choosing restore options

When restoring there are a few options that control how the files are restored.

Choosing restore options

If you want to restore a file to a previous state, you can leave the settings at their defaults. If you are unsure whether you want to revert, or need to examine the files before replacing the current versions, you can choose to restore to another destination. If the folder you are restoring to is not empty, you can choose to keep multiple versions of the files by appending the restore timestamp to the filename. This is especially useful if you are restoring multiple versions into a target folder for comparison.

Duplicati will not restore permissions by default because the users and groups that were present on the machine that made the backup may not be present on the machine being restored to. Restoring the permissions can cause you to be unable to access the restored files, if your user does not have the necessary permissions.

When satisfied with the settings, click the "Submit" button to begin restoring the files.

Once the restore has completed, you will see the restore summary page:


Set up a backup in the UI

Describes how to configure a backup in Duplicati

Once Duplicati is running, you can set up a backup through the UI. If the UI is not showing, you can use the TrayIcon in your system menu bar and choose "Open". If you are asked for a password before logging in to the UI, see how to access without a password.

In the UI, start by clicking "Add backup", and choose the option "Add a new backup":

Configuring a new backup

If you have an existing backup configuration you want to load in, see the section on import/export.

To set up a new backup there are some details that are required, and these are divided into 5 steps:

  1. Basic configuration (descriptive name, passphrase)

  2. Storage destination (where to store the backups)

  3. Source data (what data should be backed up)

  4. Schedule (automatically run backups)

  5. Retention (when to delete old backups and more)

1. Basic configuration

For the basic configuration, you need to provide a name and setup encryption:

The name and description fields can be any text you like, and are only used to display the backup configuration in lists so you can differentiate between multiple backups.

The encryption setup allows you to choose an encryption method and a passphrase. Encryption adds a minor overhead to the processing, but is generally a good idea to add. If you opt out of encryption, make sure you control the storage destination and have adequate protections in place.


Be sure to store the chosen or generated passphrase in a safe location as it is not possible to recover anything if this passphrase is lost!

To avoid weak passphrases, Duplicati has a built-in passphrase generator as well as a passphrase strength meter.

2. Storage destination

The storage destination is arguably the most technical step because it is where you specify how to connect to the storage provider you want to hold your information. Some destinations require only a single setting, where others require multiple.


Each backup created by Duplicati requires a separate folder. Do not create two backups that use the same destination folder as they will keep breaking each other.

Due to the number of supported backends, this page does not contain instructions for configuring each one. Instead, each of the supported destinations is described in detail on its own page.

When the details are entered, it is recommended that you use the "Test destination" button, which will perform some connection tests that help reveal any issues with the entered information.

When the destination is configured as desired, click the "Continue" button.

3. Source data

In the third step you need to define what data should be backed up. This part depends on your use. If you are a home user, you may want to back up images and documents. An IT professional may want to back up databases.

In the source picker view you can choose the files and folders you would like to back up. If you pick a folder, all subfolders and files in that folder will be included. You can use the UI to unselect some items that you want to exclude, and they will show up without a selection marker.

For more advanced uses, you can also use the filters to set up rules for what to include and exclude. See the section on filters if you have advanced needs.

Once you are satisfied with the source view, click the "Continue" button to continue to the schedule step.

4. Schedule

Having an outdated backup is rarely an ideal solution, but remembering to run backups is also tedious and easy to forget. To ensure you have up-to-date backups, there is a built-in scheduler in Duplicati that you can enable to have Duplicati run automatically.

If you prefer to run the backups manually, disable the scheduler, and you can use ServerUtil to trigger the backups as needed.

Once satisfied with the schedule, click "Continue".

5. Retention and miscellaneous

Even though Duplicati has deduplication and compression to reduce the stored data, it is inevitable that old data accumulates, taking up space without being needed for restores. In this final configuration step you can decide when old versions are removed and what size of files to store on the destination.

The default size of remote volumes is chosen as a balance usable with cloud storage and a limited network connection. If you have a fast connection or store files on a local network, consider increasing the size of the remote volumes.

For the retention setting, it is inevitable that the backups will grow as new and changed data is added. If nothing is ever deleted, the backup will keep growing in size. With the retention settings you can choose how to automatically remove older versions.

The setting "Smart backup retention" is meant to be useful for most users where it keeps one daily backup and then gradually fewer versions going back in time.

Once you are satisfied with the settings, click the "Save" button.

You have now configured your backup! 🎉

Choosing Duplicati Type

This page describes the different ways to run Duplicati

When using Duplicati, you need to decide on what type of instance you want to use. Duplicati is designed to be flexible and work with many different setups, but generally you can use this overview to decide what is best for you:

  • Home user, single desktop machine: TrayIcon or Agent

  • Server backup or headless: Server, CLI or Agent

  • Multiple machines: Agent, managed through the Duplicati Console

The TrayIcon

The TrayIcon is meant to be the simplest way to run Duplicati with the minimal amount of effort required. The TrayIcon starts as a single process, registers with the machine desktop environment and shows a small icon in the system status bar (usually to the right, either top or bottom of the screen).

When running, the TrayIcon gives a visual indication of the current status, and provides access to the visual user interface by opening a browser window.

The Server

The Server mode is intended for users who want to run the full Duplicati with a user interface, but without a desktop connection. The Server usually runs as a system service, so it has elevated privileges and is started automatically with the system.

When running, the Server will emit log messages to the system log and expose a web server that can be accessed via a browser. Beware that if you are running the Server as root/Administrator, you are also running a web server with those privileges, which you need to protect.

When the Server is running it will lock down access, only listening on the loopback adapter and refusing connections that do not use an IP address as the hostname. If you need to access the Server from another machine, make sure you protect it appropriately.

When running the Server you also need a way to sign in to the user interface.

The Agent

The Agent mode is intended for users who want to run Duplicati with remote access through the Duplicati Console. The benefit of this is that you do not need to provide any local access, as all access is protected with HTTPS and additional channel encryption from the Agent to the browser you are using.

The Agent mode is only available on the Enterprise plan.

If you have multiple machines to manage, using the console enables you to access all the backups, settings, logs, controls, etc. from one place.

The Command Line Interface (CLI)

The CLI mode is intended for advanced users who prefer to manage and configure each of the backups manually. The typical use is a server-like setup where the backups run as cron-scheduled tasks or are triggered by an external tool.

Mixing types

For some additional flexibility in configurations it is also possible to combine the different types in some ways.

Combining Server and TrayIcon

If the Server is used primarily to elevate privileges, it is possible to have the TrayIcon run in the local user desktop and connect to an already running Server. To do this, change the TrayIcon commandline and add additional arguments:
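A sketch of such a commandline (the argument names and the URL/password values are assumptions; verify against your version's TrayIcon help output):

```shell
# Connect the TrayIcon to an existing Server instead of starting its own.
# Argument names are assumptions; check `duplicati-trayicon help` for your version.
duplicati-trayicon --no-hosted-server \
  --hosturl=http://localhost:8200 \
  --webservice-password=<server-password>
```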

The --no-hosted-server argument disables launching another (competing) server, and the two other arguments will give information on how to reach the running server.

Triggering Server jobs externally

If you prefer to use the Server (or TrayIcon) but would like to trigger the backups with an external scheduler or event system, you can use ServerUtil to trigger a backup or pause/resume the server.

Using the CLI on Server backups

If you are using the Server (or TrayIcon) but you want to run a command that is not in the UI, it is possible to use the CLI to run commands on the backups defined in the Server. Note that the Server and CLI use different ways of keeping track of the local database, so you need to obtain the storage destination URL and the database path from the Server and then run the CLI.
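As a sketch (the destination URL, database path, and passphrase are placeholders you must copy from the Server's UI; the binary name varies by platform):

```shell
# Run a CLI command against a backup defined in the Server,
# pointing the CLI at the Server's local database for that backup
duplicati-cli list "ssh://example.com/backup" \
  --dbpath=/path/to/serverdata/ABCDEFGH.sqlite \
  --passphrase=<backup-passphrase>
```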

Rackspace CloudFiles Destination

This page describes the Rackspace CloudFiles storage destination

Rackspace CloudFiles is deprecated.

Duplicati supports storing files with Rackspace CloudFiles, which is a large-scale object storage service, similar to S3. With CloudFiles you store "objects" (similar to files) in "containers", which define various properties shared between the objects. If you use a / in the object prefix, the objects can be displayed as virtual folders when listing them.

To use CloudFiles, you can use the following URL format:

cloudfiles://<container>/<prefix>
  ?cloudfiles-username=<username>
  &cloudfiles-accesskey=<access key>

Using a different API endpoint

The default authentication will use the US endpoint, which will not work if you are a customer of the UK service. To choose the UK account, add --cloudfiles-uk-account=true to the request:
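For example, extending the URL format from above with the UK-account option:

```
cloudfiles://<container>/<prefix>
  ?cloudfiles-username=<username>
  &cloudfiles-accesskey=<access key>
  &cloudfiles-uk-account=true
```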

If you need to use a specific host, you can also provide the authentication URL directly with the --cloudfiles-authentication-url option. If you are providing the URL, the --cloudfiles-uk-account option will be ignored.

Mega.nz Destination

This page describes the Mega.nz storage destination

The destination currently uses the MegaApiClient library, which is no longer maintained. Since there is little documentation on how to integrate with Mega.nz, it is not recommended to use this storage destination anymore.

User interface

To configure the Mega.nz backend, enter a unique path where the backup will be stored, along with the username and password.

URL format for Commandline

To use the storage destination, you can use the following URL format:
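The URL itself is missing from this export; based on Duplicati's common username/password pattern it is likely similar to the following (treat as an assumption):

```
mega://<folder path>
  ?auth-username=<username>
  &auth-password=<password>
```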

Two-factor authorization

It is possible to provide a two-factor key with the option --auth-two-factor-key but since this value changes often, it is not suitable to use in most automated backup settings. This is a design choice from Mega.nz and cannot be fixed by Duplicati.

Monitoring with Duplicati Console

This page describes how to set up monitoring with the Duplicati Console

The Duplicati Console is a paid option for handling monitoring of Duplicati backups, but has a free usage tier. To get started with the console, head over to the Duplicati Console page and sign up or log in.

If you connect your local instance to the Duplicati Console, this will automatically enable reporting to be sent from the Duplicati client to the console. Within the console, visit the Alert Center to configure what notifications you want to receive.

Duplicati Console screenshot

Filen.io

This page describes the Filen.io integration

Duplicati supports using Filen.io as the storage destination since stable release v2.2. Note that Duplicati encrypts volumes before uploading them to Filen.io, and they will be encrypted again using the Filen encryption scheme so they can be downloaded from Filen.

User interface

To configure the destination for Filen.io, choose a unique path for the backup, and then provide the username and password to authenticate.

URL format for Commandline

To use Filen.io, use the following URL format:
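The URL format was lost in this export; it likely follows Duplicati's usual username/password pattern (an assumption):

```
filen://<folder path>
  ?auth-username=<username>
  &auth-password=<password>
```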

You can also supply a Two-Factor code if your account is protected by 2FA, but note that you need to type in a new 2FA code each time you access the storage as Filen does not have support for API keys.

Duplicati only supports the V2 Auth protocol and will only encrypt data using the version 002 encryption mode. There is experimental support for reading data encrypted with version 003 if you need to upload files outside of Duplicati.

Dropbox Destination

This page describes the Dropbox storage destination

Duplicati supports using Dropbox as a storage destination. Note that Duplicati stores compressed and encrypted volumes on Dropbox and does not store files so they are individually accessible from Dropbox.

User interface

To configure the Dropbox destination you need to pick a unique folder name for the backups, and then authorize Duplicati to work on your behalf. Simply click the "AuthID" link in the text field to start the authentication process; the "AuthID" field will be filled out when you are done.

URL format for Commandline

To use Dropbox, use the following URL format:
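The URL itself is missing from this export; since the Dropbox destination authenticates with an AuthID, the format is likely along these lines (an assumption):

```
dropbox://<folder path>
  ?authid=<authid>
```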

To use Dropbox you must first obtain an AuthID by using a Duplicati service to log in to Dropbox and approve the access. There are several ways to obtain an AuthID.

Sending reports

Describes how to send reports with Duplicati

Duplicati strives to make it as easy as possible to set up backups, and the built-in scheduler makes it easy to ensure that backups run regularly. Because it is easy to set up a backup and forget about it, it is possible to have a backup running with little interaction.

Despite all efforts to make Duplicati as robust as possible against failures, it is not possible to handle every problem that may arise after the initial setup. Common failure causes are revoked credentials, filled storage, missing provider updates, etc.

To avoid discovering too late that the backup had stopped working for some reason, it is highly recommended to set up automated monitoring of backups. Duplicati has a number of ways that you can use to send reports into a monitoring solution:

  • Duplicati Console

TahoeLAFS destination

This page describes the TahoeLAFS storage destination

Duplicati supports backups to the Tahoe Least-Authority File Store, Tahoe-LAFS.

User interface

To configure the TahoeLAFS destination, enter the server and the path.

URL format for Commandline

To use the TahoeLAFS destination, use this URL format:
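The URL itself is missing from this export; it likely points at a Tahoe-LAFS web API endpoint and a directory cap, along these lines (an assumption):

```
tahoe://<host>:<port>/uri/<directory cap>
```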

Migrating Duplicati to a new machine

This page describes how to best migrate a Duplicati instance to a new machine

If you have moved to a new machine and want to restore files to the new machine, you can follow the steps outlined in Restoring files. If instead, you have already moved files to the new machine and would like to set up the new machine to continue backups made on the previous machine, there are a few ways to do this.

Note: it is possible to restore files across operating systems, but due to path differences it is not possible to continue a backup made on Windows on a Linux/MacOS based operating system and vice versa.

Note: do not attempt to run backups from two different machines to the same destination. Before migrating, make sure the previous machine is no longer running backups automatically. If both machines run backups, one instance will detect that the remote destination has been modified and will refuse to continue until the local database has been rebuilt.

If you have access to the backup configurations, jump to the section on importing configurations. If you have no configurations, jump to the section on restoring directly from the backup files.

Import and export backup configurations

This page describes how to import and export configurations from Duplicati

While it is not required that you keep a copy of the backup configuration, it can sometimes be convenient to have all settings related to a backup stored in a single file.

Export

To export from within the user interface, expand the backup configuration and click "Export ..."

On this page you should select "To File", which is the default. The option to export "As commandline..." is not covered here, but allows you to get a string that can be used with the CLI.

Sending Jabber/XMPP notifications

Describes how to configure sending notifications via Jabber/XMPP

One of the supported notification methods in Duplicati is the open-source XMPP (Jabber) protocol, supported by a variety of projects, including commercial enterprise offerings.

To send a notification via XMPP you need to supply one or more recipients, an XMPP username and a password.
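A sketch of the relevant advanced options (the option names are assumptions based on Duplicati's send-xmpp module; verify against your version's advanced options list):

```
--send-xmpp-to=user@example.org
--send-xmpp-username=backup-bot@example.org
--send-xmpp-password=<password>
```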

In the UI you can configure these mandatory values as well as the optional values.

The basic options for sending XMPP notifications can be entered into the general settings, which will then apply to each backup. It is also possible to apply or change the settings for the individual backups by editing the advanced options. Here is how it looks when editing it in the user interface:

You can toggle between the two views using the "Edit as list" and "Edit as text" links.

Using the secret provider

This page describes how to use the secret provider.

The secret provider was introduced in Duplicati version 2.0.9.109 and aims to reduce the possibility of leaking passwords from Duplicati by not storing the passwords inside Duplicati.

To start using a secret provider you need to set only a single option:
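The option itself was lost in this export; it is the --secret-provider option. For example, to read secrets from environment variables (the env:// provider syntax is an assumption; check the secret provider reference for the supported providers):

```
--secret-provider=env://
```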

This will make the secret provider available for the remainder of the application's runtime.

You can then insert placeholder values where you want secrets to appear but without storing the actual secret in Duplicati. For commandline users, the secrets can appear in both the backend destination or in the options.

As an example:
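A hypothetical commandline where three secrets are replaced by $-prefixed placeholders (the placeholder names and the env:// provider here are made up for illustration):

```shell
# Three secrets appear as $-prefixed placeholders: one in the destination URL,
# one in the encryption passphrase, and one in a notification option
duplicati-cli backup \
  "ssh://example.com/backup?auth-username=backupuser&auth-password=$sshpassword" \
  /data \
  --passphrase=$backupkey \
  --send-mail-password=$mailpassword \
  --secret-provider=env://
```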

The secret provider will find the three keys prefixed with $ and replace them with the actual secret values.

Telemetry collection

This page describes the telemetry collected by Duplicati and how to opt out

When Duplicati is running, it collects some basic non-identifying telemetry, such as which version of Duplicati is running, what operating system it is running on, and similar values. When running a backup it also collects the type of connection, the duration, and the size of the remote and local filesets.

The purpose of this data collection is to give us an insight into how Duplicati is used and determine how we can best focus on making Duplicati better. We are sharing the aggregated data that we collect on a public page:

hashtag
Opting out

Encrypting and decrypting files

This page describes how to work with encrypted files outside of normal operations

In normal Duplicati operations, the files at the remote destination should never be handled by anything but Duplicati. Changing the remote files will always result in warnings or errors when Duplicati needs to access those files.

However, in certain exceptional scenarios, it may be required that the file contents are accessed manually.

hashtag
Processing files encrypted with AES encryption

The files encrypted with the default AES encryption follow the

Rclone Destination

This page describes the Rclone storage destination

Duplicati has a wide variety of storage destinations, but Rclone has even more! If you are familiar with Rclone, you can configure Duplicati to utilize Rclone to transfer files and extend to the full set of destinations supported by Rclone.

If you are using Rclone, some features, such as bandwidth limits and transfer progress, do not work.

Duplicati does not bundle Rclone, so you need to download and install the appropriate binaries before you can use this backend.

hashtag
User interface

Storj Destination

This page describes the Storj storage destination

Duplicati supports backups to the Storj network, which is a large-scale decentralized storage network. The destination supports two different ways of authenticating: Access Grant and Satellite API.

hashtag
User interface

To configure the Storj destination, choose the satellite that you will connect to, then provide the access grant. The bucket and path can be used to control where the data is stored within the network.

The local database

This page describes the local database associated with a backup

Duplicati uses two databases, one for the Server and one for each backup. This page describes the overall purpose of the local database and how to work with it. The database itself is stored in the same folder as the server database and has a randomly generated name.

If you have access to the backup files generated by Duplicati, you only need the passphrase to restore files. As described in the migration section, this is also everything that is needed to continue the backup. But to increase performance and reduce the number of remote calls required during regular operations, Duplicati relies on a database with some well-structured data.

The database is essentially a compact view of what data is stored at the remote destination, and as such it can always be created from the remote data. The only information that is lost if the database is recreated are log messages and the hashes of the remote volumes. The log messages are mostly important for error-tracing but the hashes of the remote volumes are important if the files are not encrypted, as this helps to ensure the backup integrity.

Prior to running a backup, Duplicati will do a quick scan of the remote destination to ensure it looks as expected. This check is important, as making a backup with the assumption that data exists could result in backups that can only be partially restored. If the check fails for some reason, Duplicati will exit with an error message explaining the problem.

WebDAV Destination

This page describes the WebDAV storage destination

The WebDAV protocol is a minor extension to the HTTP protocol used for web requests. Because it is compatible with HTTP it also supports SSL/TLS certificates and verification similar to what websites are using.

hashtag
User interface

To use the WebDAV destination you must enter: server, path on server, username and password. Depending on your setup, you may also need to add some advanced options as explained below.

Sending HTTP notifications

This page describes how to send reports via the HTTP protocol

The most versatile reporting option is the ability to send messages via the HTTP(s) protocol. By default messages are sent as a form url encoded body in a request with the POST verb.

To use the option, you only need to provide the url to send to:

Besides the URL it is also possible to configure:

  • The message body and type (JSON is supported)

Amazon S3 destination

This page describes how to use the AWS S3 storage destination

The storage destination is implemented with the general S3 destination, so all details from that page apply here as well, but some additional features are supported by AWS.

hashtag
User interface

To use an AWS S3 destination you need to fill out: bucket, folder path, server, AWS Access Key Id, AWS Secret Access Key. You can decide on the bucket name and path, and get the Key Id and Access Key from the IAM center.

Tencent COS Destination

This page describes the Tencent COS storage destination

Duplicati supports storing files on Tencent Cloud Object Storage (COS), which is a large-scale object storage, similar to S3. In Tencent COS you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

hashtag
User interface

To configure the Tencent COS destination you must supply: bucket, path, app id, region, secret id, and secret key.

OpenStack Destination

This page describes the OpenStack storage destination

Duplicati supports storing files with OpenStack, which is a large-scale object storage, similar to S3. With OpenStack you store "objects" (similar to files) in "containers" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

hashtag
User interface

To use the OpenStack destination you must fill out the fields: bucket, domain name, tenant name, auth uri, version and region.

Using remote management

This page describes how to configure Duplicati to connect to the Duplicati Console and manage the backups from within the console.

circle-info

hashtag
This page is for setting up remote management with a TrayIcon or Server installation. For Agent-based installations, see the subpage on

Jottacloud Destination

This page describes the Jottacloud storage destination

hashtag
User interface

To configure the Jottacloud destination you need to pick a unique folder name for the backups, and then authorize Duplicati to work on your behalf. Simply click the "AuthID" link in the text field and the authentication process will start and fill out the "AuthID" when you are done.

Sending reports with email

Describes how to configure sending emails with backup details

Sending emails is supported by having access to an SMTP server that will accept the inbound emails. From your SMTP/email server provider you need to get a URL, a username, and a password. If you are a GMail or Google Workspace user, use the Google SMTP guide; otherwise consult your provider for these details.

Besides the connection details, you also need to provide the recipient email address. Note that SMTP servers may sometimes restrict what recipients they allow, but generally using the provider SMTP server will allow sending to your own account.

In the UI you can configure these mandatory values as well as the optional values.

The basic options for sending email can be entered into the general settings, which will then apply to each backup. It is also possible to apply or change the settings for the individual backups by editing the advanced options. Here is how it looks when editing it in the user interface:

You can toggle between the two views using the "Edit as list" and "Edit as text" links.

pCloud Destination

This page describes the pCloud storage destination

Duplicati supports using pCloud as a storage destination. Note that Duplicati stores compressed and encrypted volumes on pCloud; the individual source files are not directly accessible from pCloud.

The pCloud provider was added in Duplicati v2.1.0.100, and is included in stable release 2.2.

hashtag
User interface

To configure the pCloud destination you need to first choose if you are working with the Global or EU servers. Then pick a unique folder name for the backups, and then authorize Duplicati to work on your behalf. Simply click the "AuthID" link in the text field and the authentication process will start and fill out the "AuthID" when you are done.

Aliyun OSS Destination

This page describes the Alibaba Cloud Object Storage Service, also known as Aliyun OSS.

Duplicati supports storing files on Alibaba Cloud Object Storage Service, aka Aliyun OSS, which is a large-scale object storage, similar to S3. In Aliyun OSS you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

Note that the bucket id is globally unique, so it is recommended to use a name that is not likely to conflict with other users, such as prefixing the bucket with the project id or a similar unique value. If you use a simple name, like data or backup, it is likely already associated with another project and you will get permission errors when attempting to use it.

Microsoft Group Destination

This page describes the Microsoft Group storage destination

Duplicati supports using Microsoft Groups as a storage destination.

hashtag
User interface

To configure the Microsoft Group destination you need to provide the group email. Then pick a unique folder name for the backups, and then authorize Duplicati to work on your behalf. Simply click the "AuthID" link in the text field and the authentication process will start and fill out the "AuthID" when you are done.

IDrive e2 Destination

This page describes the iDrive e2 Destination

Duplicati supports storing files on iDrive e2, which is a large-scale object storage, similar to S3. In iDrive e2 you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

Note that the bucket id is globally unique, so it is recommended to use a name that is not likely to conflict with other users, such as prefixing the bucket with the project id or a similar unique value. If you use a simple name, like data or backup, it is likely already associated with another project and you will get permission errors when attempting to use it.

Note that iDrive has a similar offering called , which is not currently supported by Duplicati.

FileJump

This page describes the Filejump integration

Duplicati supports using FileJump as a storage destination since stable release v2.2.

circle-exclamation

As of 2025-11-01, FileJump has announced changes to their solution, including the API, so Duplicati will likely stop working with FileJump on 2025-12-31. If the API documentation is updated before then, Duplicati may be updated to support FileJump again. Until this happens, we do not recommend using FileJump with Duplicati.

Backblaze B2 Destination

This page describes the Backblaze B2 storage destination

Duplicati supports storing files with Backblaze B2, which is a large-scale object storage, similar to S3. With B2 you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

hashtag
User interface

To configure the B2 destination you must supply: bucket, path in bucket, application id, and application key.

SMB (aka CIFS) Destination

This page describes the CIFS storage destination

The Server Message Block (SMB) / Common Internet File System (CIFS) backend provides native support for accessing shared network resources using the SMB/CIFS protocol. This backend enables direct interaction with Windows shares and other SMB-compatible network storage systems.

hashtag
User interface

To use the SMB connection you must supply the server, share name, path on server, username, password, and transport. See below for a description of the transport method.

Box.com Destination

This page describes the Box.com storage destination

Duplicati supports using box.com as a storage destination. Note that Duplicati stores compressed and encrypted volumes on box.com; the individual source files are not directly accessible from box.com.

hashtag
User interface

To configure the box.com destination you need to pick a unique folder name for the backups, and then authorize Duplicati to work on your behalf. Simply click the "AuthID" link in the text field and the authentication process will start and fill out the "AuthID" when you are done.

OneDrive For Business Destination

This page describes the OneDrive For Business storage destination

Duplicati supports using OneDrive for Business as a storage destination. Note that Duplicati stores compressed and encrypted volumes on OneDrive; the individual source files are not directly accessible from OneDrive.

hashtag
User interface

To use the OneDrive for Business dialog you must enter the server, path on server, account name and access key. You can use the "Add advanced option" button to configure some of the options described below.

OneDrive Destination

This page describes the OneDrive storage destination

Duplicati supports using OneDrive as a storage destination. Note that Duplicati stores compressed and encrypted volumes on OneDrive; the individual source files are not directly accessible from OneDrive.

hashtag
User interface

To configure the OneDrive destination you need to pick a unique folder name for the backups, and then authorize Duplicati to work on your behalf. Simply click the "AuthID" link in the text field and the authentication process will start and fill out the "AuthID" when you are done.

SharePoint Destination

This page describes the SharePoint storage destination

Duplicati supports using SharePoint as a storage destination. This page describes the SharePoint destination that uses the legacy API, for the .

hashtag
User interface

To configure the SharePoint destination, enter the values for: server, path on server, account name, and access key.

Service and WindowsService

This page describes the Service and WindowsService programs

The Service binary executable is a small helper program that simply runs the Server executable and restarts it if it exits. The purpose of this program is to assist in keeping the Server running, even in the face of errors. The Service binary is called Duplicati.Service.exe on Windows and duplicati-service on Linux and MacOS.

hashtag
WindowsService

The Duplicati.WindowsService.exe

Send emails
Send Jabber/XMPP
Send HTTP message
Send Telegram message
hashtag
Previous machine is still available

If the previous machine is still accessible, you can copy over the contents of the Duplicati folder containing the configuration database Duplicati-server.sqlite and the other support databases. This approach is by far the fastest as Duplicati has all the information and does not need to check with the remote storage.

Make sure to stop Duplicati before moving the folder into the same location on the new machine. After moving the folder, you can start Duplicati again and everything will be working as before. If it has been a while since the previous instance was running, this may trigger the scheduled backups on startup. Use the option --startup-delay=5min to start Duplicati in pause mode for 5 minutes if you want to check the setup before it starts running.

hashtag
Backup configurations are available

If you have the backup configurations, see the section on import/export configuration for a guide on how to create the backup jobs from the configuration files.

With the backup configuration files, it is possible to re-create the backup jobs. The flow allows you to modify the setup before saving the configuration, in case some details have changed. Once the backup is re-created, you must run the repair operation to make Duplicati recreate the local databasearrow-up-right for the backup.

Once the local database has been recreated, it is then possible to run the backup as before with no modifications required.

hashtag
Previous machine and configurations are unavailable

If you do not have access to the previous setup, you can still continue the backups, but this requires that you re-create the backups manually. You need at least the storage destination details, the passphrase, and the selection of sources.

Once the backup configuration has been created it works the same as if you had imported it from a file. Before running a backup, you need to run the repair operation to make Duplicati recreate the local databasearrow-up-right for the backup.

Once the local database has been recreated, it is then possible to run the backup as before with no modifications required.

Restoring files
section for moving with backup configurations
manual setup section
You then need to decide on how to handle secrets stored in the configuration. Since these secrets include both the credentials to connect to the remote destination as well as the encryption passphrase, it is important that the exported file is protected.

You can choose to not include any secrets by unchecking the "Export passwords" option. The resulting file will then not contain the secrets and you need to store them in a different place (credential vault, keychain, etc).

You can also choose to encrypt the file before exporting it. If you choose this option, make sure you choose a strong unique passphrase, and store that passphrase in a safe location.

After completing the export, you will get a file containing the backup configuration. The file is in JSON format and optionally encrypted with AESCrypt.

hashtag
Import configuration

With an exported configuration, you can delete an existing configuration and re-create it by importing the configuration. You can optionally edit the parameters so the re-created backup configuration differs from the original.

To import a configuration, go to the "Add backup" page and choose "Import from file":

Pick the file or drag-n-drop it on the file chooser. If the file is encrypted, provide the file encryption passphrase here as well.

The option to "Import metadata" will create the new backup configuration and restore the statistics, including backup size, number of versions, etc. from the data in the file. If not checked, these will not be filled, and will be updated when the first backup is executed.

If the option "Save immediately" is checked, the backup will be created when clicking import, skipping the option to edit the backup configuration.

When all is configured as desired, click the "Import" button. If you have not checked "Save immediately", the flow will look like it does when setting up the initial backup.

Duplicati CLI executable

For some errors it is possible to run the repair command and have the problem resolved. This works if all data required is still present on the system, but may fail if there is no real way to recover. If this is the case, there may be additional options in the section on recovering from failure.

In rare cases, the database itself may become corrupted or defective. If this seems to be the case, it is safe to delete the local database and run the repair command. Note that it may take a while to recreate the database, but no data is lost in the process, and restores are possible without the database.

two databases
Server
migration section
Besides the mandatory options, it is also possible to configure:
  • The notification message and format

  • Conditions on when to send emails

  • Conditions on what log elements to include

For details on how to customize the notification message, see the section on customizing message content.

XMPP protocolarrow-up-right
Set up XMPP notifications with the default options editor
Set up XMPP option with a text field

Besides the mandatory options, it is also possible to configure:

  • Email sender address

  • The subject line

  • The email body

  • Conditions on when to send emails

For details on how to customize the subject line and message body, see the section on customizing message content.

If you prefer email logs, but would also like to get reports, check out the community provided dupReportarrow-up-right tool that can summarize the emails into overviews.

GMail or Google Workspace user, use the Google SMTP guidearrow-up-right
Set up email with the default options editor
Set up email option with a text field
Basic configuration
Storage destination
Source data
Schedule
Retention and miscellaneous
destination overview page
how filters are evaluated in Duplicati
ServerUtil
this page on the tradeoffs between sizesarrow-up-right
Basic configuration page
Storage destination list
Selecting source folders
Choosing a schedule to run on
Choosing backup retention
Server
CLI
Agent
TrayIcon
Server
enable remote access
HTTPS protection
configure a password
signing token from the logs
changing the password
setting one explicitly
Agent
Duplicati Console
CLI
ServerUtil
local databasearrow-up-right
cloudfiles://<container>/<prefix>
  ?cloudfiles-username=<username>
  &cloudfiles-accesskey=<access key>
  &cloudfiles-uk-account=true
Mega.nzarrow-up-right
Configure the Mega.nz destination
Configuring the Filen.io destination
page on the OAuth Server
View of the configuration of the Dropbox destination
Configure the Tahoe LAFS destination
and look them up with the secret provider. The provider will then be invoked to obtain the real values and the values will be replaced before running the operation. If the secret provider has these values:

The example from above will then be updated internally, but without having the keys written on disk:

To ensure you never run with an empty string or a placeholder instead of the real value, all requested values need to be present in the secret provider, or the operation will fail with a message indicating which key was not found.

--secret-provider=<url>
duplicati backup \
  s3://example-bucket?auth-username=$s3-user&password=$s3-pass \
  --passphrase=$passphrase 
s3-user=user
s3-pass=pass
passphrase=my-password
file format, so
can be used to decrypt and encrypt these files.

For convenience, Duplicati also ships with a command line binary named SharpAESCrypt that uses the same library that is used by Duplicati. This tool can be used to decrypt the remote volume files with the encryption passphrase, as well as encrypt files.
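As a sketch of how the bundled tool is typically invoked (the exact executable name and argument order may vary between versions, and the file names here are placeholders):

```
SharpAESCrypt e <passphrase> plainfile.zip encryptedfile.zip.aes
SharpAESCrypt d <passphrase> encryptedfile.zip.aes plainfile.zip
```

Here e encrypts and d decrypts; run the tool without arguments to see the options supported by your version.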

hashtag
Processing files encrypted with GPG encryption

Files can be encrypted with GPGarrow-up-right in one of many ways; a general overview of how GPG works can be found in the GPG man-pagesarrow-up-right. When using the default options, Duplicati will use the symmetric mode for GPG. In this mode, you can use this command to decrypt a file:

And similarly, to encrypt a file, you can use:
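A minimal round trip with symmetric GPG might look like this; the file names and passphrase are examples, and --pinentry-mode loopback may be needed for unattended use with GnuPG 2.x:

```shell
# Create a sample file (hypothetical name)
echo "hello duplicati" > sample.txt

# Encrypt with a passphrase, matching the symmetric mode Duplicati uses by default
gpg --batch --yes --pinentry-mode loopback --passphrase "my-password" \
    --symmetric --output sample.txt.gpg sample.txt

# Decrypt the file again using the same passphrase
gpg --batch --yes --pinentry-mode loopback --passphrase "my-password" \
    --output restored.txt --decrypt sample.txt.gpg
```

For Duplicati volumes, the passphrase is the backup passphrase configured for the job.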

hashtag
Re-compress and re-encrypt

If you need to switch from GPG to AES, or vice-versa, you can use the Recovery Tool to automatically process all files on the storage destination. The recovery tool also supports recompressing or changing the compression method.

If you use this method, make sure to recreate the local database.

AESCryptarrow-up-right
any tool that supports the AESCrypt file formatarrow-up-right
hashtag
URL format for Commandline

hashtag
Access Grant

To use the access grant method, use the following URL format:

hashtag
Satellite API

To use a satellite API, use the following URL format:

If the --storj-satellite option is omitted, it will default to a US-based endpoint.

hashtag
Bucket and folder

To choose the bucket where data is stored, use the --storj-bucket option, which defaults to duplicati. If further differentiation is needed, use --storj-folder to specify a folder within the bucket where data is stored.

Storj networkarrow-up-right
Configure the Storj destination
hashtag
URL format for Commandline

To use the WebDAV destination, you can use a url such as:
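A minimal sketch, assuming the generic auth-username and auth-password credential options that Duplicati uses for most destinations:

```
webdav://hostname/path?auth-username=<username>&auth-password=<password>
```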

You can supply a port through the hostname, such as webdav://hostname:8080/path.

hashtag
Authentication method

There are three different authentication methods supported with WebDAV:

  • Integrated Authentication (mostly on Windows)

    • Use --integrated-authentication=true to enable. This works for some hosts on Windows and most likely has no effect on other systems, as it requires a Windows-only extension to the request and a server that supports it.

  • Digest Authentication

    • Use --force-digest-authentication=true to use Digest-based authentication

  • Basic Authentication

    • Sending the username and password in plain-text. This is the default, but is insecure if not using an SSL/TLS encrypted connection.

You need to examine your destination server's documentation to find the supported and recommended authentication method.

hashtag
Encryption and Certificates

To use an encrypted connection, add the option --use-ssl=true such as:
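For example, extending the sketch above (hostname, path, and credentials are placeholders):

```
webdav://hostname/path?auth-username=<username>&auth-password=<password>&use-ssl=true
```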

This will then use an HTTPS secured connection subject to the operating system certificate validation rules. If you need to use a self-signed certificate that is not trusted by the operating system, you can use the option --accept-specified-ssl-hash=<hash> to specifically trust a certain certificate. The hash value is reported if you attempt to connect and the certificate is not trusted.

This technique is similar to certificate pinning: it blocks man-in-the-middle attacks, but also prevents rotating the certificate without updating the option. If you are using the graphical user interface, the "Test connection" button will detect untrusted certificates and ask to pin the certificate.

For testing setups you can also use --accept-any-ssl-certificate, which disables certificate validation. As this enables various attacks, it is not recommended except for testing.

Showing the options needed for configuring the WebDAV connection

The HTTP verb used

  • Conditions on when to send emails

  • Conditions on what log elements to include

  • For details on how to customize the notification message, see the section on customizing message content.

    hashtag
    New in 2.0.9.106

    You can now specify multiple urls, using the options:

    These two options greatly simplify sending notifications to multiple destinations. Additionally, the options make it possible to send both the form-encoded result in text format as well as in JSON format.

    form url encodedarrow-up-right
    POSTarrow-up-right
    Configuring a HTTP notification
    The server must be set to the region-based server that matches the location where the bucket is created. If you type a non-existing bucket and use the "Test connection" button, Duplicati will ask to create a bucket for you. If you choose "Yes", the bucket will be created in the region you have selected with the advanced options.

    hashtag
    URL format for Commandline

    To use the AWS S3 destination, use a format such as:
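A sketch of the URL format, assuming the generic auth-username/auth-password options carry the AWS keys and using the region option described below:

```
s3://bucket/folder?s3-location-constraint=us-east-1&auth-username=<AWS Access Key Id>&auth-password=<AWS Secret Access Key>&use-ssl=true
```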

    If you do not supply a hostname, but instead a region, such as us-east-1, the hostname will be auto-selected, based on the region. If the region is not supported by the library yet, you can supply the hostname via --server-name=<hostname>.

    Beware that S3 by default will not use an encrypted connection, and you need to add --use-ssl=true to get it working.

    hashtag
    Creating a bucket

    When creating a bucket, it will be created in the location supplied by --s3-location-constraint. In the case no constraint is supplied, the AWS library will decide what to do. If the bucket already exists, it cannot be created again, so the --s3-location-constraint setting will not have any other effect than choosing the hostname.

    hashtag
    Storage class

    By default, the objects are created with the "Standard" storage setting, which has optimal access times and redundancy. More information about the different AWS S3 storage classesarrow-up-right are available from AWS. You can choose the storage class with the option --s3-storage-class. Note that you can provide any string here that is supported by your AWS region, despite the UI only offering a few different ones.

    hashtag
    Using Glacier storage class

    Since Duplicati stable version 2.2, Duplicati recognizes data in Glacier and will avoid downloading these files for testing. The recommended way to use this is to set up life-cycle rules that move files into cold storage after a period. Once the files are in cold storage, Duplicati will not attempt to read them.

    However, if you have retention enabled, you must set --no-auto-compact as Duplicati will otherwise attempt to download the files from cold storage, in order to compact them.

    Similarly, for a restore, you must manually move files from cold storage into the bucket before attempting the restore operation.

    AWS S3arrow-up-right
    general S3 destination
    View of the configuration of an S3 bucket

    hashtag
    URL format for Commandline

    To use COS, you can use the following URL format:
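The option names below are an assumption based on the fields listed earlier (bucket, path, app id, region, secret id, secret key) and the cos- prefix used by --cos-storage-class; check the commandline help for the exact names in your version:

```
cos://bucket/path?cos-app-id=<app id>&cos-region=<region>&cos-secret-id=<secret id>&cos-secret-key=<secret key>
```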

    The bucket name is user-chosen, and the region must match the bucket regionarrow-up-right. The remaining values can be obtained from the Cloud Console.

    Note that the bucket must be created from within the Cloud Console prior to use.

    hashtag
    Storage class

    The objects uploaded can be in different storage classesarrow-up-right, which can be set with --cos-storage-class.

    NOTE: The ARCHIVE and DEEP_ARCHIVE storage classes do not work well with Duplicati. Because Duplicati really likes to verify that things are working as expected, you need to disable these checks. You also need to disable cleanup of data after deleting versions. Restores are tricky, because you need to manually restore data to the standard storage class before Duplicati can access it.

    Tencent Cloud Object Storage (COS)arrow-up-right
    Tencent COS configuration view
    Note that the Bucket is sometimes called Container. The version of the protocol depends on your provider. See below for details. Depending on your connection, you may also need to add some advanced options to specify an API key or a password.

    hashtag
    URL format for Commandline

    hashtag
    OpenStack v2

    If you are using OpenStack with version 2 of the protocol, you can either use an API key or a username/password/tenant combination. To use the password based authentication, use a URL format like this:
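A sketch of the password-based v2 variant, using the --auth-password and --openstack-tenant-name options mentioned below; the openstack-authuri option name for the authentication endpoint is an assumption:

```
openstack://container/prefix?auth-username=<username>&auth-password=<password>&openstack-tenant-name=<tenant>&openstack-authuri=<auth uri>
```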

    If you are using an API key, leave out the --auth-password and --openstack-tenant-name parameters and add in --openstack-apikey=<apikey>.

    hashtag
    OpenStack v3

    If you are using OpenStack with version 3 of the protocol, you must supply: username, password, domain, and tenant name:
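A sketch of the v3 variant with the four required values; the openstack-version, openstack-domain-name, and openstack-authuri option names are assumptions:

```
openstack://container/prefix?openstack-version=v3&auth-username=<username>&auth-password=<password>&openstack-domain-name=<domain>&openstack-tenant-name=<tenant>&openstack-authuri=<auth uri>
```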

    hashtag
    Region selection

    The authentication response will contain a set of endpoints to be used for actual transfers. In some cases, this response can contain multiple possible endpoints, each with a different region. To prefer a specific region, supply this with --openstack-region. If any of the returned endpoints have the same region (case-insensitive compare), the first endpoint matching will be selected. If no region is specified, or no region matches, the first region in the response is used.

    View of the OpenStack destination configuration
    hashtag
    URL format for Commandline

    To use the Jottacloudarrow-up-right storage destination, you can use the following URL format:
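A sketch, assuming the authid option used by Duplicati's OAuth-based backends:

```
jottacloud://<folder>?authid=<AuthID>
```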

    To use Jottacloud you must first obtain an AuthID by using a Duplicati service to log in to Jottacloud and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.

    hashtag
    Device and mount point

Within Jottacloud, each machine registered is a device that can be used for storage, and within each device you can choose the mount point. By default, Duplicati will use the special device Jotta and the mount point Archive.

    If you need to store data on another device, you can use the options --jottacloud-device and --jottacloud-mountpoint to set the device and mount point. If you only set the device, the mount point will be set to Duplicati.
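A sketch of the URL format with the device and mount point options appended (placeholder values):

```
jottacloud://<folder>/<subfolder>
  ?authid=<authid>
  &jottacloud-device=<device>
  &jottacloud-mountpoint=<mount point>
```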

    hashtag
    Performance tuning

    If you need to tune the performance and resource usage to match your specific setup, you can adjust the two parameters:

    • --jottacloud-threads: The number of threads used to fetch chunks with

    • --jottacloud-chunksize: The size of chunks to download with each thread
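For example, to use more parallel downloads with a larger chunk size (the values here are purely illustrative, not recommendations):

```
jottacloud://<folder>/<subfolder>
  ?authid=<authid>
  &jottacloud-threads=8
  &jottacloud-chunksize=16mb
```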

    Jottacloud destination configurationJottacloud destination configuration

    hashtag
    Finding backups in pCloud

    Within pCloud's control panel you cannot immediately see the backups made by Duplicati. This is because pCloud restricts each application to its own sub-folder. When you enter a folder path in Duplicati, it will be automatically mapped to a subfolder in pCloud.

    If you enter the folder path in Duplicati:

    You can then find the Duplicati data in the pCloud console under:

The benefit of this mapping is that Duplicati can never touch any non-Duplicati files you may have stored in pCloud.

    hashtag
    URL format for Commandline

    To use pCloud, use the following URL format:

    The <host> value must be one of:

    • api.pcloud.com for US based access

    • eapi.pcloud.com for EU based access

    To use pCloud you must first obtain an AuthID by using a Duplicati service to log in to pCloud and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.

    Due to the way the pCloud authentication system is implemented, the generated AuthID is not stored by the OAuth server and cannot be revoked via the OAuth server. To revoke the token, you must revoke the Duplicati app from your pCloud account, which will revoke all issued tokens.

    This also means that after issuing the pCloud token, you do not need to contact the OAuth server again, unlike other OAuth solutions.

    pCloudarrow-up-right
    Configuring pCloud destinationConfiguring pCloud destination
    hashtag
    URL format for Commandline

    To use the destination, use the following URL format:

    To use MS Group you must first obtain an AuthID by using a Duplicati service to log in to Microsoft and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.

    You can either provide the group email via --group-email or the group id via --group-id. If you provide both, they must resolve to the same group id.
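As a sketch of the alternative form using the group email instead of the group id (placeholder values):

```
msgroup://<folder>/<subfolder>
  ?authid=<authid>
  &group-email=<group email>
```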

    hashtag
    Performance tuning options

    If you need to gain more performance you can fine-tune the performance of chunked transfers with the options:

    • --fragment-size

    • --fragment-retry-count

    • --fragment-retry-delay

    For most uses, it is recommended that these are kept at their default settings and only changed after confirming that there is a gain to be made by changing them.

    Microsoft Groupsarrow-up-right
    Microsoft Group destination configuration
    If you use the "Test connection" button and the bucket does not exist, Duplicati will offer to create the bucket for you.

    hashtag
    URL format for Commandline

    To use the B2 storage destination, use the following URL format:

    hashtag
    Create a bucket

    You can use the Backblaze UI to create your buckets, but if you need to create buckets with Duplicati, this is also possible. The default is to create private buckets, but you can create public buckets with --b2-create-bucket-type=allPublic.
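For example, to have Duplicati create a public bucket, the option can be appended to the B2 URL format (placeholder values):

```
b2://<bucket>/<prefix>
  ?b2-accountid=<account id>
  &b2-applicationkey=<application key>
  &b2-create-bucket-type=allPublic
```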

    hashtag
    Performance tuning

You can change the size of file listings to better match pricing and speed through --b2-page-size, which defaults to 500, meaning a list request is made for each 500 objects. Note that setting this higher may reduce the number of requests, but each request may be priced as a more expensive request.

If you prefer downloads from your custom domain name, you can supply it with --b2-download-url. This setting does not affect uploads.
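Combining both tuning options with the B2 URL format could look like this (illustrative values; the download URL is a placeholder):

```
b2://<bucket>/<prefix>
  ?b2-accountid=<account id>
  &b2-applicationkey=<application key>
  &b2-page-size=1000
  &b2-download-url=<https url of the custom domain>
```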

    B2 configuration viewB2 configuration view
    hashtag
    URL format for Commandline

    To use the CIFS destination, you can use a url such as:

    hashtag
    Transport

    SMB supports two distinct transport protocols, each with its own characteristics:

    DirectTCP (directtcp)

    • Port: 445

    • Characteristics:

      • Faster performance

      • Modern implementation

      • Preferred for newer systems

      • Direct TCP/IP connection

      • Lower overhead

    hashtag
    NetBIOS over TCP (netbios)

    • Port: 139

    • Characteristics:

      • Legacy support

      • Compatible with older systems

      • Additional protocol overhead

      • Slower performance

      • Uses NetBIOS naming service

    hashtag
    Advanced Options

    --read-buffer-size

Defines the read buffer size, in bytes, for SMB operations. The value is capped automatically by the SMB negotiated values, and values below 10000 bytes are ignored.

    --write-buffer-size

Defines the write buffer size, in bytes, for SMB operations. The value is capped automatically by the SMB negotiated values, and values below 10000 bytes are ignored.
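As a sketch, the buffer sizes can be appended to the SMB URL format (the byte values are illustrative, not recommendations):

```
smb://<hostname>/<share>/<path>
  ?auth-username=<username>
  &auth-password=<password>
  &transport=directtcp
  &read-buffer-size=1048576
  &write-buffer-size=1048576
```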

    The SMB backend is available on stable release from version 2.2.

    View of the SMB configuration pageView of the SMB configuration page
    hashtag
    URL format for Commandline

    To use box.com, use the following URL format:

    To use box.com you must first obtain an AuthID by using a Duplicati service to log in to box.com and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.

    hashtag
    Fully delete files

    When files are deleted from your box.com account, they will be placed in the trash folder. To avoid old files taking up storage in your account, you can add --box-delete-from-trash which will then also remove the file from the trash folder.
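For example, appended to the box.com URL format (placeholder values; assumes the option takes a boolean value):

```
box://<folder>/<subfolder>
  ?authid=<authid>
  &box-delete-from-trash=true
```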

    box.comarrow-up-right
    View for configuring box.com destinationView for configuring box.com destination
    hashtag
    URL format for Commandline

    To use OneDrive For Business, use the following URL format:

    hashtag
    Integrated Authentication (Windows only)

If you are on Windows, it may be possible to use the current user's credentials to authenticate. Support for this depends on many details and is not available in all cases. To use integrated authentication, use the following URL format:

    hashtag
    Advanced options

    Instead of deleting files directly, they can be moved to the recycle bin by setting the option --delete-to-recycler. This gives some additional safety if a version removal was unintended, but is not generally recommended, as it is a manual process to recover from a partial delete.

    The options --web-timeout and --chunk-size can be used to fine-tune performance that matches your setup, but generally it is recommended to keep them at their default values.

    If you are running Duplicati in a data center with a very stable connection, you can use the option --binary-direct-mode to enable direct transfers for optimal performance.

    Microsoft OneDrive for Businessarrow-up-right
    View of the OneDrive Business configuration pageView of the OneDrive Business configuration page
    hashtag
    URL format from Commandline

    To use OneDrive, use the following URL format:

    To use OneDrive you must first obtain an AuthID by using a Duplicati service to log in to Microsoft and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.

    hashtag
    Drive ID

    A default drive will be used to store the data. If you require another drive to be used to store data, such as a shared drive, use the --drive-id=<drive id> option.
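As a sketch, the drive id can be appended to the OneDrive URL format (placeholder values):

```
onedrivev2://<folder>/<subfolder>
  ?authid=<authid>
  &drive-id=<drive id>
```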

    Microsoft OneDrivearrow-up-right
    View of the configuration page for the OneDrive destinationView of the configuration page for the OneDrive destination
    hashtag
    URL format for Commandline

    To use SharePoint, use the following URL format:

    hashtag
    Integrated Authentication (Windows only)

If you are on Windows, it may be possible to use the current user's credentials to authenticate. Support for this depends on many details and is not available in all cases. To use integrated authentication, use the following URL format:

    hashtag
    Advanced options

    Instead of deleting files directly, they can be moved to the recycle bin by setting the option --delete-to-recycler. This gives some additional safety if a version removal was unintended, but is not generally recommended, as it is a manual process to recover from a partial delete.

    The options --web-timeout and --chunk-size can be used to fine-tune performance that matches your setup, but generally it is recommended to keep them at their default values.

    If you are running Duplicati in a data center with a very stable connection, you can use the option --binary-direct-mode to enable direct transfers for optimal performance.

    Microsoft SharePointarrow-up-right
    SharePoint provider that uses the Graph API, see SharePoint v2
    Configure Sharepoint destinationConfigure Sharepoint destination
The Duplicati.WindowsService.exe executable only exists for Windows and serves two purposes: managing the Windows Service registration and running the server as a Windows Service.

    The registration of the Windows Service is done by executing the WindowsService binary:

    The arguments can be any of the arguments supported by the Server and will be passed on to the Server on startup. The service will be registered to automatically restart and start at login. These details can be changed from the Windows service manager.

    From version 2.1.1.0 and forward, the service will automatically start after installation. The command can be changed to INSTALL-ONLY to avoid starting the service.

To remove the service, use the UNINSTALL command:

    Server
    Duplicati.WindowsService.exe INSTALL [arguments ...]
    Duplicati.WindowsService.exe UNINSTALL
    duplicati --no-hosted-server 
      --hosturl=http://localhost:8200 
      --webservice-password=<password>
    mega://<folder>/<subfolder>
      ?auth-username=<username>
      &auth-password=<password>
    filen://<folder>/<subfolder>?auth-username=email&auth-password=*****
    dropbox://<folder>/<subfolder>?authid=<authid>
    tahoe://<hostname>:<port>/uri/URI:DIR2:<folder>
      ?use-ssl=true
    duplicati backup \
s3://example-bucket?auth-username=user&auth-password=pass \
      --passphrase=my-password
    gpg -d volume.zip.gpg -o volume.zip
    gpg --symmetric volume.zip -o volume.zip.gpg
    storj://
      ?storj-auth-method=Access%20Grant
      &storj-shared-access=<access key>
storj://
      ?storj-satellite=<hostname:port>
      &storj-api-key=<api key>
      &storj-secret=<secret>
    webdav://<hostname>/<path>
      ?auth-username=<username>
      &auth-password=<password>
    webdav://<hostname>/<path>
      ?auth-username=<username>
      &auth-password=<password>
      &use-ssl=true
    --send-http-form-urls=
    --send-http-json-urls=
    s3://<bucket name>/<prefix>
      ?aws-access-key-id=<account id or username>
      &aws-secret-access-key=<account key or password>
      &s3-location-constraint=<region-id>
    cos://<prefix>
      ?cos-bucket=<Bucket name>
      &cos-region=<Bucket region>
      &cos-app-id=<Account AppId>
      &cos-secret-id=<API Secret Id>
      &cos-secret-key=<API Secret Key>
    openstack://<container>/<prefix>
      ?auth-username=<username>
      &auth-password=<password>
      &openstack-tenant-name=<tenant>
      &openstack-authuri=<url to auth endpoint>
    openstack://<container>/<prefix>
      ?auth-username=<username>
      &auth-password=<password>
      &openstack-tenant-name=<tenant>
      &openstack-domain-name=<domain>
      &openstack-authuri=<url to keystone server>
      &openstack-version=v3
    jottacloud://<folder>/<subfolder>
      ?authid=<authid>
    folder/subfolder
    folder/subfolder -> /Applications/DuplicatiBackup/folder/subfolder
    pcloud://<host>/<folder>/<subfolder>?authid=<authid>
    msgroup://<folder>/<subfolder>
      ?authid=<authid>
      &group-id=<group-id>
    b2://<bucket>/<prefix>
      ?b2-accountid=<account id>
      &b2-applicationkey=<application key>
    smb://<hostname>/<share>/<path>
      ?auth-username=<username>
      &auth-password=<password>
      &transport=directtcp
    box://<folder>/<subfolder>?authid=<authid>
    od4b://<folder>/<subfolder>
      ?auth-username=<username>
      &auth-password=<password>
    od4b://<folder>/<subfolder>?integrated-authentication=true
    onedrivev2://<folder>/<subfolder>?authid=<authid>
    mssp://<folder>/<subfolder>
      ?auth-username=<username>
      &auth-password=<password>
    mssp://<folder>/<subfolder>?integrated-authentication=true
    Even though we try very hard to only collect non-sensitive information, you are in control and can choose to opt out of data collection if you want to.

    The easy way to opt out when using the UI is to visit the settings page, and simply choose "None / disabled":

However, since Duplicati also records the event of starting Duplicati, you will still send at least one telemetry data point. If you prefer not to send anything at all, you need to set an environment variable before starting Duplicati:

    If this is set, no telemetry is sent, and it is not possible to enable telemetry from within the UI.

    https://usage-reporter.duplicati.comarrow-up-right
    USAGEREPORTER_Duplicati_LEVEL=none

    For rclone, you need to specify the remote and path. You can use the advanced options to specify more options, including the path to the Rclone binary, if it is not in the default search path.

    hashtag
    URL format for Commandline

    The URL format for the Rclone destination is:

    If the remote repo is not a valid hostname, you can instead use this format:

    hashtag
    Advanced options

If you need to change the Rclone local repo, you can use the option --rclone-local-repository, which otherwise defaults to local; this works for most setups.

If you need to supply options to Rclone, these can be passed via --rclone-option. Note that the values must be URL encoded, and multiple options can be passed by separating them with spaces before encoding.

As an example, adding "--opt1=a --opt2=b" needs to be URL encoded, which results in --rclone-option=--opt1%3Da%20--opt2%3Db.
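The encoding step can be sketched with Python's standard library (the option names are purely illustrative):

```python
import urllib.parse

# Options to forward to Rclone, separated by spaces before encoding
options = "--opt1=a --opt2=b"

# URL encode the combined string so it can be embedded in the destination URL
encoded = urllib.parse.quote(options)

print(f"--rclone-option={encoded}")  # --rclone-option=--opt1%3Da%20--opt2%3Db
```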

    Rclone projectarrow-up-right
    hashtag
    Register the local installation

    In a default installation, Duplicati will serve up a UI using an internal webserver. This setup works well for workstations and laptops but can be challenging when the machine is not always connected to a display. To securely connect the instance to the Duplicati Console, go to the settings page and find the "Remote access control" section.

    The remote control setup step

    Click the button "Register for remote control" to start the registration process. After a short wait, the machine will obtain a registration link:

    Machine ready to be enrolled

    hashtag
    Registering on the Console

    Click the registration link to open a browser and claim the machine in the Duplicati Console:

Registering the machine

    Click "Register machine" to add it to your account, then return to the Duplicati Settings page where the machine is now registered and ready to connect:

    Machine is registered and ready to connect

    Click the "Enable remote control" button and see the machine is now connected to the Duplicati Console:

    Machine is connected

    hashtag
    Connecting to the machine

    Now that the machine is connected to the Duplicati Console, return to the Duplicati Console and visit Settings -> Registered Machinesarrow-up-right:

    Machine is listed in Console

    You can now click "Connect" to access the machine directly from the portal!

Agent remote management
    hashtag
    User interface

    To use the Aliyun OSS destination you must specify: bucket, path in bucket, access key id, access key secret, and endpoint.

    hashtag
    URL format for Commandline

    To use Aliyun OSS, you can use the following URL format:

The endpoint is defined by Aliyunarrow-up-right and needs to match the region the bucket was created in. The access key can be obtained or created in the Cloud Console.

    Alibaba Cloud Object Storage Servicearrow-up-right
    aliyunoss://<prefix>
      ?oss-bucket=<Bucket name>
      &oss-endpoint=<Endpoint>
      &oss-access-key-id=<Access Key Id>
&oss-access-key-secret=<Access Key Secret>
    hashtag
    User interface

    To use iDrive e2, you must supply: bucket, path in bucket, access id, and access secret.

    If you use the "Test connection" and the bucket does not exist, Duplicati will offer to create it for you.

    As the iDrive e2 backend is S3 based, there are many advanced options that can be configured.

    hashtag
    URL format for Commandline

    To use iDrive e2, you can use the following URL format:

    iDrive e2arrow-up-right
    iDrive Cloud Backuparrow-up-right
    e2://<bucket>/<prefix>
      ?access_key_id=<Access key id>
      &access_secret_key=<Access secret key>
    hashtag
    User interface

    To configure the FileJump destination, enter a unique path for the backup and an API tokenarrow-up-right.

    hashtag
    URL format for Commandline

    To use Filejump, use the following URL format:

    You can get an API key by visiting your Filejump account settingsarrow-up-right.

It is also possible to use a username/password combination, but this is not recommended as it does not work with 2FA enabled on the account:

If you use the username/password method, Duplicati will create and use an API token, negotiating with Filejump to obtain it, which may slow things down a bit.
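A sketch of the username/password form, assuming the filejump scheme and the standard auth options (placeholder values):

```
filejump://<folder>/<subfolder>
  ?auth-username=<email>
  &auth-password=<password>
```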

    FileJumparrow-up-right

    Retention settings

    This page describes the different retention settings available in Duplicati

    Even though Duplicati tries hard to reduce storage use as much as possible, it is inevitable that the remotely stored data grows as new versions of files are added. To avoid running out of space or paying for excessive storage use, it is important that unnecessary backups are removed regularly.

    In Duplicati there are a few different settings that can be used to configure when a "snapshot" is removed. All of these options are invoked automatically at the end of a backup to ensure that removal follows a new version. If you use the Command Line Interface, it is possible to disable the removal and run the delete command as a separate step.

    After deleting one or more versions, Duplicati will mark any data that can no longer be referenced as waste, and may occasionally choose to run a compact process that deletes unused volumes and creates new volumes with no wasted space.

    Despite all deletion rules, Duplicati will never delete the last version, keeping at least one version available.

    hashtag
    Delete older than

    The most intuitive option is to choose a period that data is stored, and then to consider everything older than this period as stale data. The actual period depends on the actual use, but it could be 7 days, 1 year or 5 years for example.

This option is usually the preferred choice if the backups happen regularly, such as a backup each day where the last 3 months are kept.

    hashtag
    Keep versions

    If the backups are running irregularly, where the backups are triggered by some external event, there may be long periods where there are no backups. For this case you can choose a number of versions to keep and Duplicati will consider anything outside that count as outdated.

Another special case is if the source data has not changed at all; although uncommon, Duplicati will then not make a new version, as it would be identical to the previous one. In such a setup, it may be preferable to use a version count, despite regularly scheduled backups.

    hashtag
    Retention policy

The retention policy is a "bucket" based strategy, where you define how many backups to keep in each "bucket" and what a "bucket" covers. With this strategy, it is possible to get something similar to grandfather-father-son style backup rotations.

The syntax for the retention policy uses a timespan format to define the bucket and the contents in that bucket. The bucket size is first, then a colon separator, and then the duration between the backups kept in the bucket. Multiple buckets can be defined with commas. As an example: 7D:U,1Y:1W

The first bucket is defined as being 7 days, and the value U means an unlimited number of backups in this bucket. In other words: for the most recent 7 days, keep all backups.

The second bucket is defined as 1 year, keeping one backup for each week, resulting in roughly 52 backups after the first 7 days.

    Any backups outside the buckets are deleted, meaning anything older than a year would be removed.

In the UI, a helpful default called "Smart retention" sets the retention policy to 1W:1D,4W:1W,12M:1M.

    Translated, this policy means that:

    • For the most-recent week, store 1 backup each day

    • For the last 4 weeks, store 1 backup each week

    • For the last 12 months, store 1 backup each month
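On the commandline, the same policy can be supplied with the --retention-policy option; a sketch mirroring the buckets described above:

```
duplicati backup <storage-url> <source-path> \
  --retention-policy="1W:1D,4W:1W,12M:1M"
```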

    Advanced configurations

    hashtag
    Sharing the secret provider

If the secret provider is configured for the entry application (e.g., the TrayIcon, Server or Agent), it will naturally work for that application, but it will also be shared within that process.

For the Agent, this means that setting the secret provider for the agent will also let the server it hosts use the same secret provider. When a backup or other operation is then executed by the server, it will also have access to the same secret provider.

    This sharing simplifies the setup by only having a single secret provider configuration and then letting each of the other parts access secrets without further configuration. If needed, the secret providers can be specified for the individual backups, such that it is possible to opt-out of using the shared secret provider.

    hashtag
    How to avoid passing credentials on the commandline

To make credentials passed on the commandline harder to obtain, the value for --secret-provider is treated as an environment variable if:

    • It starts with $ optionally with curly brackets {}:

      • $secretprovider

    No expansion is done on environment variables, so the entire provider string is required to be set as an environment variable.
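A sketch of this pattern in a shell; the single quotes ensure the shell passes the literal $SECRETPROVIDER value on to Duplicati, which then resolves it from the environment:

```
# Put the full provider string in an environment variable
export SECRETPROVIDER="<full secret provider url>"

# Duplicati resolves the value because it starts with $
duplicati backup <storage-url> <source-path> --secret-provider='$SECRETPROVIDER'
```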

    hashtag
    How to protect against secret provider outages

If you run an operation and the secret provider is unavailable when the secrets are requested, the operation will fail. For most uses, the occurrence of an outage is so rare that this situation is acceptable.

However, for some uses it is important that the backups keep running, even in the face of outages. To handle this need, Duplicati supports an optional cache strategy:

Storing the secrets anywhere makes it more likely that they are eventually leaked. For that reason, the default is the cache setting None, which turns off caching fully and relies only on the provider.

The InMemory setting is the least intrusive version as it only stores the secrets in process memory. This option is most useful when using a shared secret provider, such that the secrets stay in memory between runs.

    Finally, the Persistent option will write secrets to disk, so it can handle situations where the provider is unavailable during startup, or where a shared provider does not work.

As the purpose of the secret provider is to prevent the secrets from being written to disk, the cached secrets are written to disk using a passphrase derived from the secret provider url. If the secret provider url does not already contain a strong secret, it is possible to add any parameter to the url to increase the strength of the key.

If the secret provider url changes, it is no longer possible to retrieve the cached values; the next run will fail if the provider is unavailable, but will otherwise write a new encrypted cache file to disk.

    Using remote control with agent

    This page describes how to use the remote agent to connect with remote control

    The Agentarrow-up-right is designed to be deployed in a way that is more secure and easier to manage at scale than the regular TrayIconarrow-up-right or Serverarrow-up-right instances. When the agent is running, it does not have any way to interact with it from the local machine.

    circle-info

    The Agent is only available on the Enterprise plan

On the very first run, the Agent will attempt to register itself with the Duplicati Console. If there is a desktop environment and a browser on the system, the Agent will attempt to open the browser with the registration link. If there is no such option, the Agent will print the link in the console, or in the Event Viewer on Windows. The Agent will repeatedly poll the Console to find out when it is claimed.

    As long as the Agent is not registered, restarting it will make it attempt to connect again.

Once the agent is registered, it immediately enables the connection and will be listed as a registered machine in the Console.

    hashtag
    Simplified registration

To skip the registration step and have the agent connect directly to the console without any user intervention, you must first create a link that is pre-authorized on the Console. To do this, head to the Registered Machines page and click the "Add registration url" button.

    Any machine can now use this pre-authorized url to add machines to your organization in the Console. You can click the "Copy" button to get the link to your clipboard and paste it in when registering a machine. Do not share this link with anyone as it could allow them to add machines to your account.

    To revoke a link, simply delete it from within the portal. This will prevent new machines from registering, but existing registered machines will remain there.

    With the registration link, start the Agent with a commandline such as:

This will cause the Agent to immediately show up in the Console. Future invocations of the agent will not require the registration url, but should the Agent somehow be de-registered, it will re-register if the url is set and the link is still valid.

    hashtag
    Registration with deployment

To simplify starting the agent in larger scale deployments, it is possible to configure a preload file with the registration url.

    circle-check

There is a button on the Console to download the file with the link inserted, so you do not have to create the file manually. Using the generated file is recommended as it reduces the chance of typos.

To create a preload file manually, create a new file named preload.json with the following content:

    circle-info

The first part in the example affects only the Agent; the second part sets the environment variable for the Server and TrayIcon configurations.

This file can then be distributed to the target machine before the package is installed. The documentation on preload settings describes the possible locations where Duplicati will look for such a file.

    SFTP (SSH) Destination

    This page describes the SFTP (SSH) storage destination

The SFTP destination uses the ubiquitous SSH system to implement a secure file transfer service. Using SSH allows secure logins with keys and is generally a secure way to connect to another system. The SSH connection is implemented with Renci SSH.Netarrow-up-right.

    hashtag
    User interface

To use the SFTP destination you must enter at least the shown information: server, port, folder path, and username. Most likely, you must also provide either a password or an SSH private key through the advanced options.

    hashtag
    URL format for Commandline

    To use the SFTP destination you can use a URL such as:

    You can supply a non-standard port through the hostname, such as ssh://hostname:2222/folder.

    hashtag
    Using key-based authentication

    It is very common, and more secure, to use key-based authentication, and Duplicati supports this as well. You can either provide the entire key as part of the URL or give a path to the key file. If the key is encrypted, you can supply the encryption key with --auth-password.

    Starting with Duplicati 2.2 it is now possible to provide an SSH private key file with the option --ssh-keyfile=/path/to/file or an inline key with the option --ssh-key=url-encoded-key . In the user interface you can drop the file with the private key, or paste in the contents.

    hashtag
    Using key-based authentication from the Commandline

    To use a private key inline, you need to url encode it first and then pass it to --ssh-key. An example with an inline private key:

    Note that you need both the prefix sshkey:// and you need to URL encode the contents.
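As a sketch, the URL encoding of an inline key can be done with Python's standard library (the key below is a truncated placeholder, not a real key):

```python
import urllib.parse

# Placeholder PEM-formatted private key (not a real key)
key = "-----BEGIN RSA PRIVATE KEY-----\nMIIEow...\n-----END RSA PRIVATE KEY-----"

# Encode everything, including newlines and slashes, so the key
# survives being embedded as a URL query value
encoded = urllib.parse.quote(key, safe="")

# The value to pass on the commandline, with the required sshkey:// prefix
option = f"--ssh-key=sshkey://{encoded}"
```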

    If you have the SSH keyfile installed in your home folder, you can use the file directly with --ssh-keyfile:

    Note that Duplicati does not currently support key agents so you must pass the password here.

    For best security it is recommended to use a separate identity and key files for the user, so a compromise of the keys does not grant more permissions than what is required.

    hashtag
    Validating the host key

Since SSH does not have a global key registry like HTTPS does, it is possible to launch a man-in-the-middle attack on an SSH connection. To prevent this, Duplicati and other SSH clients use certificate pinning, where the previously recorded host certificate hash is saved, and changes to the host certificate must be handled manually by the user.

    On the first connection to the SSH server, Duplicati will throw an exception that explains how to trust the server host key, including the host key fingerprint. Once you obtain the host key fingerprint, you can supply it with the --ssh-fingerprint option.
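A sketch of passing the pinned fingerprint on the commandline; use the exact fingerprint value reported by Duplicati on the first connection:

```
duplicati backup \
  "ssh://<hostname>/<folder>?auth-username=<username>&auth-password=<password>" \
  <source-path> \
  --ssh-fingerprint="<fingerprint reported by the server>"
```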

    If the host key changes, you will get a different message, but also reporting the new host key, so you can update it. The option --ssh-accept-any-fingerprints=true is only recommended for testing and not for production setups as it will disable the man-in-the-middle protection.

    If you are using the UI, you can click the "Test connection" button and it will guide you to set the host key parameters based on what the server reports.

    hashtag
    Timeout and keep-alive

    By default, Duplicati will assume that the connection works once it has been established. If the SSH server is malfunctioning it may cause operations to hang. To guard against this case, you can set the --ssh-operation-timeout option to enforce a maximum time the operation may take.

    A different kind of timeout is when firewalls and other network equipment monitors the connections and closes them if there is no activity. Because Duplicati may open a connection and then perform a long operation locally, it may cause the connection to be closed due to inactivity. The option --ssh-keepalive can be used to define a keep-alive interval where messages are sent if there is no other activity.

Both options are disabled by default and should only be enabled if there are special conditions in a setup where they are needed.

    Organization management

    Guidance for MSPs and Enterprises using organizations and sub-organizations in the Duplicati console

    circle-info

    Organizations in the Duplicati console are similar to tenants found in other systems.

    Managed service providers (MSPs) can use organizations and sub-organizations in the Duplicati console to isolate tenants, delegate access, and safely offboard customers. The console enforces depth and licensing limits so MSP operators maintain strong governance while working across multiple customer environments.

    The description below is targeted at MSPs but can also be applied to larger organizations that need to isolate different areas, for example geographically or by organizational structure.

    hashtag
    Hierarchy model

    • Root organization: Created without a parent. MSP staff typically sign in to the MSP root and operate downstream organizations from that context.

    • Sub-organization: Created under a parent organization to represent a customer or site. Each sub-organization keeps references to its root and parent organizations. The hierarchy is currently limited to three levels to keep trees manageable.

    • Detached organization: Created without a parent or shared root to start a completely new tree. MSPs can use this when a customer must live in an isolated tenant before being linked to a broader tree. Note that each root organization needs its own payment configuration.

    circle-info

    Creating and managing organizations is an Enterprise feature. Contact Duplicati sales or support if you need an Enterprise license or trial.

    hashtag
    Creating organizations from the console

    1. Ensure ownership and licensing: The signed-in operator must be an owner of the current organization. Only organizations with Enterprise-level subscriptions can create sub-organizations, so confirm the MSP root has the right plan before adding customers.

    2. Choose hierarchy placement:

      • To add a customer under the MSP tree, create a new organization while signed in to the MSP root (or an existing parent).

    hashtag
    Delegating access across the hierarchy

    • Security groups can be linked only downward from a parent organization to its descendants. To link a group, you must own the source organization, the target organization, and the security group itself.

    • This model lets MSPs define shared automation or operator roles in the MSP root and selectively expose them to customer organizations without risking cross-tenant access.

    An example setup for a multi-level enterprise organization could look like this:

    hashtag
    Deleting organizations safely

    1. Select the target: From the organizations list, pick the customer organization you need to remove. You cannot delete the organization you are currently operating from, and you must be an owner of the target organization.

    2. Clear blockers: Deletion is blocked while the organization still has active subscriptions, child organizations, or customer resources (backup reports, connected agents, registered machines, client encryption keys, connection strings, or client backup configurations). Remove or migrate these items first.

    3. Confirm removal: Once prerequisites are cleared, run the delete action. The console revokes accounts and other items that are not directly user managed.

    hashtag
    MSP best practices

    1. Add customers as sub-organizations: Create new orgs without the Detached flag so each customer anchors to the MSP root for consistent reporting and licensing checks.

    2. Delegate access with intent: Link MSP-owned security groups down to specific customer orgs that require shared operational roles or automation.

    3. Offboard cleanly: Before deleting a customer org, clear active subscriptions and resources, then delete to revoke linked access keys automatically.

    Custom message content

    This page describes the template system used to format text messages sent

    The template system used in Duplicati is quite simple: it essentially expands Windows-style environment placeholders, such as %EXAMPLE%, into values. The same replacement logic works for both the subject line (if applicable) and the message body.

    Note: The description here only covers the text-based output (such as emails). The template system for JSON is slightly different.

    Duplicati has defaults for the body and subject line, but you can specify a custom string here. For convenience, the string can also be a path to a file on the machine, which contains the template.

    An example custom template could look like:

    Duplicati %OPERATIONNAME% for %backup-name% on %machine-name%
    
    The %OPERATIONNAME% operation has completed with the result: %PARSEDRESULT%
    
    Source folders: 
    %LOCALPATH%
    
    Encryption module: %ENCRYPTION-MODULE%
    Compression module: %COMPRESSION-MODULE%
    
    %RESULT%

    The template engine supports reporting any setting by using the setting name as the template value. Besides the options, there are also a few variables that can be used to extract information more easily:

    hashtag
    JSON output

    If the output is JSON, it needs to be handled differently than regular text to ensure the result is valid JSON. The logic for this is to re-use the templating concept, but only as a lookup to figure out which keys to include in the results.

    An example template could be:
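A minimal sketch, reusing the variable names from the list above, might be:

```
%machine-name%
%backup-name%
%PARSEDRESULT%
```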

    This will ensure that each of those values will be included in the extra element in the JSON output. The default template for JSON output includes all fields listed above, but no options are included by default.

    Local providers

    This page describes the providers that operate locally on the machine they are running

    hashtag
    The Environment Variable provider

    The simplest provider is the env:// provider, which reads secrets directly from environment variables. There is no configuration needed for this provider, and the syntax for adding it is simply:
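As there is no configuration, enabling the provider is just:

```shell
--secret-provider=env://
```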

    hashtag
    The File Secret provider

    The file-secret:// provider supports reading secrets from a file containing a JSON encoded dictionary of key/value pairs. As an example, a file could look like:
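For example, a hypothetical secrets file (the key names here are illustrative) could be:

```json
{
  "passphrase": "my-backup-passphrase",
  "ftp-password": "my-ftp-password"
}
```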

    The file provider also supports encrypted files, and you supply the decryption key with the option passphrase. Suppose the file is encrypted with the key mypassword; you can then configure the provider:
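A sketch, assuming the passphrase is passed as a query parameter on the provider URL (the path is a placeholder):

```shell
--secret-provider="file-secret://path/to/secrets.json?passphrase=mypassword"
```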

    To avoid passing the encryption key via a commandline, see .

    To encrypt the file, you can use the provided with Duplicati:

    hashtag
    Credential Manager (Windows)

    On Windows XP and later, the Windows Credential Manager can be used to securely store secrets. As the credentials are protected by the account login, there is no configuration needed, so the setup is simply:
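A sketch; the wincred:// scheme name follows the naming style of the other providers on this page, so verify it against the option reference:

```shell
--secret-provider=wincred://
```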

    hashtag
    Using libsecret (Linux)

    The libsecret service stores credentials on Linux and integrates with various UI applications to let the user approve or reject attempts to read secrets. The libsecret provider supports a single optional setting, collection, which indicates what collection to read from. If not supplied, the default collection is used. To use the libsecret provider, use this argument:
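A sketch, with the optional collection setting supplied as a query parameter (the query syntax is assumed to match the other providers):

```shell
--secret-provider=libsecret://
--secret-provider="libsecret://?collection=login"
```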

    If you are using a system with a Gnome-based desktop, such as Ubuntu, you can use the application to manage your passwords.

    hashtag
    Using the pass secret provider (Linux)

    The pass utility is a project that implements a secure password storage solution on Linux systems, backed by GPG. Duplicati can use pass as the secret provider:
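A sketch, assuming the pass:// scheme name:

```shell
--secret-provider=pass://
```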

    If you want to use pass, make sure it is installed on the system. You also need a GPG key, and you can create one with:
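The key can be created interactively with the standard GnuPG command:

```shell
gpg --gen-key
```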

    As part of the key generation process, you are asked to enter an email address that will later be used to identify the key. Once you have the GPG key you can initialize pass with:
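For example, if the key was created for you@example.com (a placeholder address):

```shell
pass init "you@example.com"
```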

    hashtag
    Using the KeyChain (MacOS)

    For MacOS users the standard password storage is the KeyChain program. The secrets stored there as application passwords can be used by Duplicati. The KeyChain can be enabled as a secret provider with:
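A sketch, assuming the keychain:// scheme name:

```shell
--secret-provider=keychain://
```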

    For more advanced uses the options account and service can be used to narrow down what secrets can be extracted.

    S3-compatible Destination

    This page describes the S3 storage destination

    The Simple Storage Service, S3, was originally described, developed and offered by Amazon via AWS. Since then, numerous other providers have adopted the protocol and offer S3-compatible services. While these services are mostly compatible with the core S3 protocol, a number of additional AWS-specific settings are usually not supported and will be ignored.

    This page deals with S3 in general; for a specific setup on AWS S3, refer to the AWS-specific page.

    When storing data in S3, the storage is divided into a top-level "folder" called a "bucket", and each bucket has "objects", similar to files. For most providers, an object name with / characters will be interpreted as subfolders in some way.

    In the original S3 specification, the bucket name was used as part of the hostname, causing some issues with bucket names that are not valid hostnames, and some delays for new buckets caused by DNS update speeds. Newer solutions use a single shared hostname and provide the bucket name as a parameter.

    For AWS S3, and most other providers, the bucket name is a global name, shared across all users. This means that simple names, such as backup or data, will likely be taken, and attempts to use them will cause permission errors. For this reason, pick a name that is likely to be unique; the Duplicati UI will recommend prefixing the account id to the bucket name to make it unique.

    hashtag
    User interface

    To use the S3 backend you must fill in details for all the fields: bucket, folder path, server, AWS Access Key ID, and AWS Secret Access Key. Note that your provider may use different names for these values; in particular, the Access Key ID and Secret Access Key may be called something like username and password.

    With the advanced options you can choose many extra settings as described below.

    hashtag
    URL format for Commandline

    To use S3 as the storage destination, use a format such as:
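A sketch with placeholder values; the auth-username/auth-password parameter names follow Duplicati's common URL conventions, and your provider supplies the server name:

```
s3://mybucket/backup?s3-server-name=s3.example.com
  &auth-username=ACCESS_KEY_ID
  &auth-password=SECRET_ACCESS_KEY
  &use-ssl=true
```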

    Note that the default for S3 is to use unencrypted connections. The connections are secured with signatures, but all data transferred can be captured on the network. If the provider supports SSL/TLS, which most do, make sure to add --use-ssl=true to also encrypt the connection.

    Make sure you consult the provider documentation to get the server name you need for the bucket region. If you are using AWS, .

    hashtag
    Choosing the client

    The S3 storage destination can either use the or , and you can choose the library to use with --s3-client=minio.

    Generally, both libraries will work with most providers, but the AWS library has some defaults that may not be compatible with other providers. While you can configure the settings, it may be simpler to use Minio with the default settings.

    hashtag
    Using non-AWS storage

    Most providers other than AWS S3 use an older version of the protocol, so to connect to them you often need to set either the option --s3-disable-chunk-encoding or use the Minio client with --s3-client=minio (but not both):
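For example, with placeholder values:

```
s3://mybucket/backup?s3-server-name=s3.example.com&s3-disable-chunk-encoding=true
s3://mybucket/backup?s3-server-name=s3.example.com&s3-client=minio
```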

    hashtag
    Creating the bucket

    Since the bucket defines the place where data is stored, a bucket needs to be created before it can be used. All providers offer a way to do this through their UI, which also allows you to set various options, such as the geographical region the bucket is located in.

    If you use Duplicati to create the bucket, you can also set the option --s3-location-constraint to provide the desired location. Support for this, and the available regions, depends on the provider.

    hashtag
    Storage class

    With S3 it is also possible to set the storage class, which is sometimes used to fine-tune the cost/performance/durability of the files. The storage class is set with --s3-storage-class, but the possible settings depend on the provider.

    Using Duplicati to backup OpenClaw

    Don't let an accidental wipe erase your AI's memory and configurations. Secure your OpenClaw instance with Duplicati's encrypted, zero-trust backups to keep your sensitive data safe

    Configuring Duplicati to back up OpenClaw (formerly known as Moltbot or Clawdbot) is a smart move. Because OpenClaw stores highly sensitive data - including plain-text API keys, session tokens for WhatsApp/Telegram, and long-term memory - it is critical that your backup is encrypted and stored securely.

    This guide assumes you have Duplicati installedarrow-up-right and OpenClaw running locally.


    hashtag
    Step 1: Locate your OpenClaw Data

    Before opening Duplicati, you need to know exactly what you are backing up. By default, OpenClaw stores its configuration and "memory" in a hidden directory.

    • Linux/macOS: ~/.clawdbot (or ~/.openclaw in newer versions)

    • Windows: C:\Users\<YourUsername>\.clawdbot

    Key files in this folder:

    • clawdbot.json: Your main config and API keys.

    • .bak.X files: Automatic rotating backups (these also contain secrets).

    • memory/: Markdown documents containing the bot's learned context.


    hashtag
    Step 2: Create a New Backup Job

    1. Open the Duplicati Web UI.

    2. Click Add backup > Configure a new backup > Next.

    3. General Settings:

    circle-info

    Always remember to store the password in a safe place; without the password you cannot recover the backup!


    hashtag
    Step 3: Choose a Destination

    Duplicati supports dozens of backends. To keep your "AI brain" safe, choose a destination that isn't on the same physical machine:

    • Cloud Storage: Backblaze B2, Google Drive, or Dropbox.

    • S3 Compatible: If you use a VPS (like DigitalOcean or AWS).

    • Local/SSH: A NAS or a second computer on your network.

    Remember to test the destination to ensure it is working as expected.


    hashtag
    Step 4: Select Source Data

    Navigate to the path identified in Step 1.

    1. In the file tree, find your user directory.

    2. Check the box for .clawdbot (or .openclaw).

    3. Filter Rule (Optional): If you are running OpenClaw via Docker and have massive log files, you might want to exclude *.log


    hashtag
    Step 5: Schedule and Retention

    • Schedule: Since OpenClaw is an "always-on" assistant, a daily backup is usually sufficient. If you use it for heavy task automation, consider every 6 hours.

    • Retention: Use Smart backup retention. This keeps one backup for each of the last 7 days, one for each of the last 4 weeks, and one for each of the last 12 months.


    hashtag
    Step 6: Finalize and Run

    On the last screen (Options), keep the default Remote volume size (50MB). Click Save and then click Run now to start your first backup.

    hashtag
    ⚠️ A Note on Security

    Recent security audits (Jan 2026) have highlighted that OpenClaw stores credentials in cleartext. If your Duplicati destination AND passphrase are compromised, anyone who finds those files has full access to your connected Telegram, WhatsApp, and LLM accounts (OpenAI/Claude). Treat your Duplicati passphrase like the 🔑 to your house.

    File Destination

    This page describes how to use the file destination provider to store backup data on a local drive.

    The most basic destination in Duplicati is the file backend. This backend simply stores the backup data somewhere that is reachable from the file system. The destination can be network-based storage (as long as it is mounted when needed), a fixed disk, or removable media.

    circle-info

    Note that for Windows network shares, you may want to use the CIFS/SMB destination instead.

    hashtag
    User interface

    In the user interface you simply need to either pick or type the path to where the backup data will be stored.

    In the advanced options you can choose the options mentioned below.

    hashtag
    URL format for Commandline

    The file backend can be chosen with the file:// prefix where the rest of the destination url is the path.

    Windows example:
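For example, with a placeholder path:

```
file://C:\Backup\Duplicati
```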

    Linux/MacOS example:
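For example:

```
file:///home/user/backups/duplicati
```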

    For most cases it will also work without the file:// prefix, but adding the prefix makes the intention clear.

    hashtag
    Improving speed for local filesystems

    Since Duplicati is intended to be used with remote systems, it will make a temporary file, and then copy the temporary file to the new location. This enables various retry mechanisms, progress reporting and failure handling that may not be desired with local filesystems.

    To change this logic to instead use the operating system move command to move the file into place, avoiding a copy, set the option --use-move-for-put on the file backend and also set --disable-streaming-transfers. With these two options, all special handling is removed and the transfer speed should be the best possible on the current operating system. Note that setting --disable-streaming-transfers will not show any progress during transfers in the UI, because the underlying copy or move operation cannot be monitored.
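For example, both options can be supplied together (the explicit =true form for boolean flags is an assumption):

```shell
--use-move-for-put=true --disable-streaming-transfers=true
```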

    hashtag
    Disabling length verification

    Because a local storage destination is expected to have very low latency, the file backend will verify the length of the file after the copy. This additional call is usually very fast and does not impact transfer speeds, but it can be disabled for slightly faster uploads with --disable-length-verification.

    hashtag
    Removable drives (mostly Windows)

    For removable drives, the mount path can sometimes change when inserting the drive. This is most prominent on Windows, where drive letters are assigned based on the order in which drives are connected. To support different paths, you can supply multiple alternate paths with --alternate-target-paths, where each path is separated with the system path separator (; on Windows, : on Linux/MacOS):

    If you would like to support any drive letter, you can also use * as the drive letter (Windows only):

    Because using multiple paths could end up attempting to make a backup to the wrong drive, you can use the option --alternate-destination-marker to provide a unique marker filename that needs to exist on the destination:

    Using this option will scan all paths provided, either using the * drive letter or --alternate-target-paths, and check if the folder contains a file with the given marker filename.
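A sketch combining the wildcard drive letter with a marker file (path and filename are placeholders):

```
file://*:\Backup?alternate-destination-marker=backup.marker
```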

    hashtag
    Authentication (Windows Only)

    On Windows, the shares can be authenticated with a username and password (not with integrated authentication). This uses a to authenticate prior to accessing the share.

    To use authentication, provide the --auth-username and --auth-password arguments to the query. Since authentication in Windows is tied to the current user context, it is possible that the share is already mounted with different credentials, which may not have the correct permissions.

    To guard against this, it is possible to drop the current authentication and re-authenticate prior to accessing the share. This can be done by adding the --force-smb-authentication option.

    Azure Blob Storage Destination

    This page describes the Azure Blob Storage destination

    Duplicati supports backing up to Azure Blob Storagearrow-up-right, which is a large scale object storage, similar to S3.

    hashtag
    User interface

    To configure Azure Blob Storage you must fill in: container name, account name and access key. To use a SAS token instead of an access key, use the advanced options.

    The Container name is the name of the container in the Storage Account.

    The Account name is the name of the Storage Account.

    The Access key can be found on the Storage Account under "Security + Networking" -> "Access keys". You can use either key1 or key2.

    circle-info

    Access keys are shared for the Storage Account and the key gives access to all containers in the Storage Account. If you want privilege separation, set up a Service SAS for each backup.

    If you want to use key rotation, consider using a and store the keys in the secret provider, so they are always up-to-date.

    You do not need a separate container for each backup; you can use prefixes to distinguish them, but individual containers make it easier to manage rules for each backup.

    hashtag
    URL format for Commandline

    To use the Azure Blob Storage destination, you can use the following URL format:
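A sketch with placeholder values, where auth-username is the Storage Account name and auth-password is the access key:

```
azure://mycontainer/backup?auth-username=mystorageaccount&auth-password=ACCESS_KEY
```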

    hashtag
    Create container

    You can create the container via the Azure portal, but if you prefer, you can also let Duplicati create the container for you. The .

    If you use the UI, the "Test connection" button will prompt you if the container needs to be created.

    hashtag
    Using a Shared Access Signature (SAS) token

    Instead of using a traditional Access Key, you can also use a SAS token. To use this, supply it instead of the access key, for example:
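A sketch, assuming the SAS token is supplied in place of the access key as the text describes (all values are placeholders; verify the exact option in the advanced options):

```
azure://mycontainer?auth-username=mystorageaccount&auth-password=SAS_TOKEN
```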

    The account name is the name of the Storage Account, and the SAS token must have access to read, write, list, and delete files in the container.

    hashtag
    Cold storage

    Since Duplicati stable version 2.2, Duplicati recognizes data in cold storage and will avoid downloading these files for testing. The recommended way to use this is to set up life-cycle rules that move files into cold storage after a period. Once the files are in cold storage, Duplicati will not attempt to read them.

    However, if you have retention enabled, you must set --no-auto-compact as Duplicati will otherwise attempt to download the files from cold storage, in order to compact them.

    Similarly, for a restore, you must manually move files from cold storage into the bucket before attempting the restore operation.

    SharePoint v2 (Graph API)

    This page describes the SharePoint v2 storage destination

    Duplicati supports using Microsoft SharePointarrow-up-right as a storage destination. This page describes the SharePoint that uses the Graph API, for the SharePoint provider that uses the legacy API, see SharePoint.

    hashtag
    User interface

    To configure the SharePoint v2 destination you need to pick a unique folder name for the backups, provide the site id, and then authorize Duplicati to work on your behalf. Simply click the "AuthID" link in the text field and the authentication process will start and fill out the "AuthID" when you are done.

    hashtag
    URL format for Commandline

    To use SharePoint, use the following URL format:

    To use SharePoint v2 you must first obtain an AuthID by using a Duplicati service to log in to Microsoft and approve the access. See the for different ways to obtain an AuthID.

    hashtag
    Performance tuning options

    If you need to gain more performance you can fine-tune the performance of chunked transfers with the options:

    • --fragment-size

    • --fragment-retry-count

    • --fragment-retry-delay

    For most uses, it is recommended that these are kept at their default settings and only changed after confirming that there is a gain to be made by changing them.

    Google Drive Destination

    This page describes the Google Drive storage destination

    Duplicati supports using Google Drivearrow-up-right as a storage destination. Note that Duplicati stores compressed and encrypted volumes in Google Drive and does not store files so they are individually accessible from Google Drive.

    hashtag
    User interface

    To configure the Google Drive destination you need to pick a unique folder name for the backups, and then authorize Duplicati to work on your behalf. Simply click the "AuthID" link in the text field and the authentication process will start and fill out the "AuthID" when you are done.

    hashtag
    URL format for Commandline

    To use Google Drive, use the following URL format:
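A sketch with placeholder values; the authid query parameter carries the AuthID described below:

```
googledrive://backup-folder?authid=EXAMPLE_AUTHID
```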

    To use Google Drive you must first obtain an AuthID by using a Duplicati service to log in to Google and approve the access. See the for different ways to obtain an AuthID.

    hashtag
    Access levels

    Duplicati can work with limited access to Google Drive, where it only has access to its own files. This access is recommended, because it prevents accidents where files not relevant for Duplicati can be read or written. On the community server, this option is called "Google Drive (limited)".

    Unfortunately, the security model in Google Drive sometimes resets the access, cutting off Duplicati from accessing the files it has created. If this happens, it is not currently possible to re-assign access to Duplicati.

    To recover from this situation, you must download the files from Google Drive, then delete them in Google Drive, and finally re-upload the files using the .

    You can also choose to , and you can configure this to grant Duplicati full access to all files in Google Drive.

    hashtag
    Team folder

    If you need to use a Team Drive, set the option --googledrive-teamdrive-id to the ID for the Team Drive to use. If this is not set, it will use the personal Google Drive. For example:
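A sketch combining the AuthID with the Team Drive id (both values are placeholders):

```
googledrive://backup-folder?authid=EXAMPLE_AUTHID&googledrive-teamdrive-id=TEAMDRIVE_ID
```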

    Using Duplicati with Linux

    This page describes how to use Duplicati with Linux

    Before you can install Duplicati, you need to decide on three different parameters:

    • The type you want: , , , .

    • Your package manager: apt, yum or something else.

    Sending Telegram notifications

    Describes how to configure sending notifications via Telegram

    To send a notification via Telegram you need to supply a channel id, a bot token, and an API key.

    To obtain the bot token (aka bot id), message the @BotFather bot. After creating the bot, send a message to the bot, so it can reply. For more details on Telegram bots, see the .

    After obtaining the bot token you can obtain the channel id with a cURL script:
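For example, using the Telegram Bot API getUpdates method (replace <bot-token> with your token); after you have messaged the bot, the chat id appears in the JSON response:

```shell
curl "https://api.telegram.org/bot<bot-token>/getUpdates"
```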

    To obtain the API key, follow the .

    With all required values obtained, you can set up the Telegram notifications in the general settings:

    User management in the Duplicati Console

    How managed service providers and enterprises use user management to control access across organizations and sub-organizations.

    Managed service providers (MSPs) can manage multiple customer tenants by combining the with user and security group tooling. This article explains how to set up the hierarchy, invite users, and reuse security groups across sub-organizations while keeping access auditable.

    circle-info

    User management and security groups are an Enterprise feature. Contact sales or support if you need an Enterprise trial or license.

    TrayIcon

    This page describes the Duplicati TrayIcon executable

    The main application in the Duplicati installation is the TrayIcon program, called Duplicati.GUI.TrayIcon.exe on Windows and simply duplicati on Linux and MacOS.

    The TrayIcon executable is a fairly small program whose primary task is to register with the operating system desktop environment and place a status icon in the desktop tray, menu, or status bar.

    The TrayIcon is connected to the server and will change the displayed icon based on the server state. Opening the associated context menu provides options to quit, pause/resume, or open the UI.

    The second task the TrayIcon is usually responsible for is hosting the . The server handles stored backup configurations, provides a user interface, runs scheduled tasks, and more. When launching the TrayIcon, it will transparently launch and host the server, and it uses this hosted instance to subscribe to changes, so it can update the icon to signal the server state.

    Google Cloud Storage Destination

    This page describes the Google Cloud Storage destination

    Duplicati supports storing files on Google Cloud Storage, aka GCS, which is a large-scale object storage, similar to S3. In GCS you store "objects" (similar to files) in "buckets", which define various properties shared between the objects. If you use a / in the object prefix, the objects can be displayed as virtual folders when listing them.

    Note that the bucket id is globally unique, so it is recommended to use a name that is not likely to conflict with other users, such as prefixing the bucket name with the project id or a similar unique value. If you use a simple name, like data or backup, it is likely already associated with another project and you will get permission errors when attempting to use it.

    rclone://<remote repo>/<remote path>
      ?rclone-executable=<path to rclone executable>
    rclone://
      ?rclone-remote-repository=<remote repo>
      &rclone-remote-path=<remote path>
      &rclone-executable=<path to rclone executable>
    rclone://<remote repo>/<remote path>
      ?rclone-option=--opt1%3Da%20--opt2%3Db
    filejump://<hostname>/<folder>/<subfolder>?api-token=*****
    filejump://<hostname>/<folder>/<subfolder>?auth-username=email&auth-password=*****
    %PARSEDRESULT%
      The parsed result of the operation: Success, Warning, Error
    %RESULT%
      When used in the body, this is the result/log of the backup, 
      When used in the subject line, this is the same as %PARSEDRESULT%
    %OPERATIONNAME%
      The name of the operation, usually "backup", but could also be "restore" etc.
    %REMOTEURL%
      The backend url
    %LOCALPATH%
      The path to the local folders involved (i.e. the folders being backed up)
    %machine-id%
      The assigned unique random identifier for the current machine. 
      Can be overridden with --machine-id
    %backup-id%
      The assigned id for the backup. Can be overridden with --backup-id
    %backup-name%
      The name of the backup. Can be overridden with --backup-name
    %machine-name%
      The name of the machine. Can be overridden with --machine-name
    --secret-provider=env://
    Name: OpenClaw-Backup
  • Encryption: Leave as AES-256 (Built-in).

  • Passphrase: Crucial. Generate a strong password and save it in a password manager. Since OpenClaw stores keys in cleartext, your backup must be encrypted.

  • to save space.
    hashtag
    Model the organization tree and licensing
    • Hierarchy – Each root organization is a fully isolated entity and cannot share anything across the organization boundary. Sub-organizations are also isolated from their parent organizations by default, but the license is inherited from the root organization.

    • Security group linking – Cross-organization features (invites and linking) can be performed top-down, so one or more groups can be defined at a parent organization and assigned to a child organization.

    hashtag
    Claims and security groups

    • Organization-scoped claims – Users gain privileges through claims tied to a specific organization, including the Owner claim that represents tenant ownership.

    • Security groups as containers – Groups hold owners and members. Owners control membership changes, and members receive the permissions assigned to the group within each organization where it is active.

    In other words: owners of a group have permission to add/remove users from that group, but do not gain the permissions granted by the group's claims. Members of a group gain the permissions the group entitles them to, but cannot add or remove members. It is possible for a user to be both an owner and a member of a group.

    hashtag
    Inviting and onboarding users

    • Group-centric invites – Organization owners can invite new people directly into a security group with a predefined role (owner or member). The inviter must own the group and the parent organization and must be covered by Enterprise licensing.

    • Invitation tracking – The invite flow tracks pending invitations with their intended roles so onboarding is predictable.

    • Audit-ready roster – Organization owners can list both active users and pending invitations, along with the security groups that grant their access, making it easy for MSP operators to review each tenant.

    hashtag
    Reusing security groups across sub-organizations

    • Link instead of duplicate – MSP admins can link a security group from the parent organization into a child organization so the same owners and members manage resources across tenants without rebuilding group rosters.

    • Ownership and lineage checks – Linking requires the caller to own both organizations and the source group. Links only work from a parent to one of its sub-organizations; horizontal links or self-links are blocked to preserve tenant boundaries.

    hashtag
    MSP operational playbook

    1. Create the hierarchy – Establish the root MSP org and child customer orgs, ensuring the root carries Enterprise licensing when cross-tenant links are needed.

    2. Build security groups in the parent – Add MSP administrators as owners and assign customer operators as members to reflect who should manage and who should operate.

    3. Link groups to sub-organizations – Reuse the parent’s groups in each customer org that needs them, verifying each target is a true child organization.

    4. Invite customer users into the right groups – Send invitations that place new users directly into the correct roles while honoring duplicate and cooldown protections.

    5. Review access regularly – Use the organization roster view to confirm which users and pending invitations exist per tenant and which linked groups provide their access.

    By following these steps, MSPs can scale consistent access controls across many tenants while keeping ownership clear and audit trails intact.

    A new organization is created under the current organization, so make sure you have switched to the desired parent organization before creating it.

  • To start a fresh tenant tree, select the option to create a detached organization so it uses its own root.

  • Name and confirm: Provide the customer name and confirm creation. The console records the parent/root identifiers automatically and returns you to the organization list. The screen for creating an organization looks like this:

  • Image showing the create organization dialog
    Image showing an example 3-level organization structure
    View of the Rclone configuration page
    Aliyun OSS destination configuration
    Configuring iDrive e2
    Configure the destination for FileJump
    Advanced options shown for the SFTP backend
    View of the SSH connection configuration screen
    The S3 Backend Configuration view
    Advanced options picked for compatibility with non-AWS storage
    The file-based storage destination configuration view
    Configuring Azure Blob Storage destination
    Configure the SharePoint v2 destination
    View of configuration of the Google Drive destination
    You can toggle between the two views using the "Edit as list" and "Edit as text" links.

    Besides the mandatory options, it is also possible to configure:

    • The notification message and format

    • Conditions on when to send emails

    • Conditions on what log elements to include

    hashtag
    Telegram Notification Options

    Bot Configuration

    --send-telegram-bot-id (String) - The Telegram bot ID that will send messages

    --send-telegram-api-key (String) - The API key for authenticating your Telegram bot

    Message Destination

    --send-telegram-channel-id (String) - The channel ID where messages will be sent

    --send-telegram-topid-id (String) - Topic ID for posting in specific topics within Telegram groups

    Notification Content

    --send-telegram-message (String) - Template for message content with support for variables like %OPERATIONNAME%, %REMOTEURL%, %LOCALPATH%, and %PARSEDRESULT%

    --send-telegram-result-output-format (format) - Format for presenting operation results

    • Duplicati

    • Json

    Notification Filtering

    --send-telegram-level (level) - Controls which result types trigger notifications:

    • Success - Only successful operations

    • Warning - Operations that completed with warnings

    • Error - Operations that failed with recoverable errors

    • Fatal - Operations that failed with critical errors

    • All - All operation results regardless of status

    --send-telegram-any-operation (Boolean) - When enabled, sends notifications for all operations, not just backups

    --send-telegram-log-level (Enumeration) - Sets minimum severity level for included log entries:

    • ExplicitOnly - Show only explicitly requested messages

    • Profiling - Include performance measurement data

    • Verbose - Include detailed diagnostic information

    • Retry - Include information about retry attempts

    • Information - Include general status messages

    • DryRun - Include simulation mode outputs

    • Warning - Include potential issues that didn't prevent completion

    • Error - Include critical failures that require attention

    --send-telegram-log-filter (String) - Filters log entries based on specified patterns

    --send-telegram-max-log-lines (Integer) - Limits the number of log lines included in notifications

    For details on how to customize the notification message, see the section on customizing message content.
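    As an illustrative sketch, the options above might be assembled like this. All credential values are placeholders, and the final backup invocation is shown only as a comment since it depends on your storage URL and sources:

```shell
# Placeholder credentials -- substitute your own bot ID, API key, and channel ID.
TELEGRAM_OPTS="--send-telegram-bot-id=123456789 \
--send-telegram-api-key=0000:placeholder-key \
--send-telegram-channel-id=-1001234567890 \
--send-telegram-level=Warning \
--send-telegram-max-log-lines=20"
echo "$TELEGRAM_OPTS"

# These options would then be appended to a backup run, for example:
# duplicati-cli backup <storage-url> <source-path> $TELEGRAM_OPTS
```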

    Set up Telegram notifications with the default options editor
    Set up Telegram option with a text field
  • Your machine CPU type: x64, Arm64, or Arm7

  • hashtag
    Deciding on type

    To use Duplicati on Linux, you first need to decide which kind of instance you want: GUI (aka TrayIcon), Server, Agent, or CLI. The section on Choosing Duplicati Type has more details on each of the different types.

    hashtag
    Determine package manager

    Next step is checking what Linux distribution you are using. Duplicati supports running on most Linux distros, but does not yet support FreeBSD.

    If you are using a Debian-based operating system, such as Ubuntu or Mint, you can use the .deb package; for RedHat-based operating systems, such as Fedora or SUSE, you can use the .rpm packages.

    For other operating systems you can use the .zip package, or check if your package manager already carries Duplicati.
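    To check which distribution (and thus which package format) applies, you can query the standard os-release file, falling back to the kernel name if the file is absent:

```shell
# Print the distribution ID (e.g. ubuntu, debian, fedora, opensuse-leap);
# fall back to the kernel name if /etc/os-release is not present.
grep '^ID=' /etc/os-release 2>/dev/null || uname -s
```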

    hashtag
    Determine CPU architecture

    Finally you need to locate information on what CPU architecture you are using:

    • x64: 64bit Intel or AMD based CPU. This is the most common CPU at this time.

    • Arm64: 64bit ARM based CPU. Used in Raspberry Pi Model 4 and some Laptops and Servers.

    • Arm7: 32bit ARM based CPU. Used in Raspberry Pi Model 3 and older, and some NAS devices.
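    On most distributions you can query the architecture with `uname`; the output maps to the package names as noted in the comment:

```shell
# Mapping: x86_64 -> x64 package, aarch64 -> Arm64 package, armv7l -> Arm7 package
uname -m
```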

    hashtag
    Installing the package

    Once you have decided on the (type, distro, CPU) combination, you are ready to download the package. The full list of packages can be obtained via the main download page, then clicking "Other versions". Refer to the installation page for details on how to install the packages, or simply use the package manager in your system.

    hashtag
    Using the TrayIcon

    For users with a desktop environment and no special requirements, the TrayIcon instance is the recommended way to run Duplicati. If you are using either .deb or .rpm you should see Duplicati in the program menu, and you can launch it from there. If you do not see Duplicati in the program menu, you can start it with:

    When running the TrayIcon in a user context, it will create a folder in your home folder, typically ~/.config/Duplicati where it stores the local databases and the Server database with the backup configurations.

    hashtag
    Using the Server

    The Server is a regular executable and can simply be invoked with:

    When invoked as a regular user, it will use the same folder, ~/.config/Duplicati, as the TrayIcon and share the configuration.

    Besides the configuration listed below, it is also possible to run Duplicati in Docker.

    hashtag
    Using Server as a Service

    If you would like to run the Server as a service, the .rpm and .deb packages include a regular systemd service. If you are installing from the .zip package, you can grab the service file from the source code and install it manually on your system.

    If you need to pass options to the server, edit the settings file, usually at /etc/default/duplicati. Make sure you only edit the configuration file and not the service file, as the latter will be overwritten when a new version is installed. The settings file should look something like this:

    You can use DAEMON_OPTS to pass arguments to duplicati-server, such as --webservice-password=<password>.
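    For example, the edited settings file might contain the following line; both option values are placeholders, not defaults:

```shell
# Example /etc/default/duplicati entry -- the password and data folder
# below are placeholders to adapt to your system.
DAEMON_OPTS="--webservice-password=change-me --server-datafolder=/var/lib/duplicati"
```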

    To enable the service to auto-start, reload configurations, start the service and report the status, run the following commands:

    The server is now running and will automatically start when you restart the machine.

    Note: the service runs in the root user context, so files will be stored in /root/.config/Duplicati on most systems, but in /Duplicati on other systems. Use the DAEMON_OPTS to add --server-datafolder=<path to storage folder> if you want a specific location.

    To check the logs (and possibly obtain a signin link), the following command can usually be used:

    hashtag
    Linux systemd service and supplementary groups

    When Duplicati runs under a dedicated service account on Linux, systemd does not automatically include that user's supplementary groups. If you add the service account to additional groups (for example, to access NFS or Samba shares) you should explicitly configure the unit file so systemd grants those memberships when the service starts.

    To supply the groups, use the edit functionality:

    Then edit the file and add the supplementary groups:

    When you save and exit, an override file will be created, typically in /etc/systemd/system/duplicati.service.d/override.conf. This method ensures that a package upgrade does not erase your edits.

    Finally reload and restart the service so the new group membership takes effect:

    hashtag
    Using the Agent

    With the Agent there is a minimal setup required, which is to register the machine with the Duplicati Console. When installing either the .rpm or .deb packages, it will automatically register the duplicati-agent.service for startup. If you are using the .zip installation, you can find the agent service in the source code and manually register it:

    When the Agent starts, it will emit a registration link to the log, and you can usually see it with the following command:

    If you are using a pre-authenticated link, you can run the following command to activate the registration:

    After registration is complete, restart the service to pick up the new credentials:

    hashtag
    Using the CLI

    Using the CLI is simply a matter of invoking the binary:

    Since the CLI also needs a local database for each backup, it will use the same location as described for the Server above to place databases. In addition to this, it will keep a small file called dbconfig.json in the storage folder where it maps URLs to databases. The intention of this is to avoid manually specifying the --dbpath parameter on every invocation.

    If you specify the --dbpath parameter, it will not use the dbconfig.json file and it will not store anything in the local datafolder.

    hashtag
    Using the support programs

    Each package of Duplicati contains a number of support utilities, such as the RecoveryTool. Each of these can be invoked from the commandline with a duplicati-* name and all contain built-in help. For example, to invoke ServerUtil, run:

    hashtag
    Handling locked files

    By default, Duplicati will honor Linux advisory locking and refuse to open files that are locked. The logic for this is that reading a file while it is locked is not guaranteed to produce a useful file when restored. However, many Linux applications ignore the locks because the default file operations ignore them as well. If you prefer that Duplicati ignores locked files and just reads what it finds, you can set the --ignore-advisory-locking advanced option.
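    The cooperative nature of advisory locks can be seen with the `flock` utility from util-linux (assuming it is installed): a held lock does not stop an ordinary read.

```shell
# Take an advisory lock on a file in the background, then read it anyway.
# The read succeeds because advisory locks only bind programs that check
# for them -- plain reads and writes proceed regardless.
echo "data" > /tmp/lockdemo.txt
flock /tmp/lockdemo.txt -c 'sleep 2' &
cat /tmp/lockdemo.txt
wait
```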

    hashtag
    Server port

    By default, Duplicati uses port 8200 as the communication port with the hosted server. Should that port be taken, usually because another instance of Duplicati is running in another user context, Duplicati will automatically try other ports from the sequence: 8200, 8300, 8400, ..., 8900.

    Once an available port is found, this port is stored in the server database and attempted first on next run.
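    If you need to find out which port an instance ended up on, you can probe the sequence; this sketch assumes `curl` is available and prints nothing if no server is listening on any of the ports:

```shell
# Try each port in Duplicati's search sequence until one answers.
for port in 8200 8300 8400 8500 8600 8700 8800 8900; do
  if curl -fs "http://localhost:$port/" >/dev/null 2>&1; then
    echo "Duplicati answering on port $port"
    break
  fi
done
```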

    hashtag
    Default browser

    By default, the Duplicati TrayIcon will use the operating system's standard method for opening the system-default browser. If this is not desired, it is possible to choose the binary that will be used to launch the webpage with the option:

    hashtag
    Detached TrayIcon

    In some cases it may be useful to run the server in one process and the TrayIcon in another. For this setup, the TrayIcon can run without a hosted server. To disable the Server, start the TrayIcon application with the commandline option:

    This will cause the TrayIcon to connect to a Server that is already running. If the Server is not running on the same machine, or using a different port, this can be specified with the commandline option:

    It may also be required to provide the password for the server in the detached setup, as outlined in Duplicati Access Password. An alternative to providing the password is to use the option:

    The TrayIcon will then attempt to extract signing information from the local database, provided that the TrayIcon process also has read access to the database, and that signin tokens are not disabled.

    It may be convenient to use preload settings to provide arguments to both the Server and TrayIcon when running in detached mode.

    hashtag
    Self-signed certificate

    If the server is using a self-signed certificate (or a certificate not trusted by the OS), the connection will fail. To manually allow a certificate, obtain the certificate hash, and provide it with:

    When the TrayIcon is hosting the server, or has access to the database settings, it will automatically extract the certificate hash, so that particular certificate is accepted. This technique is secure and very similar to certificate pinning.

    For testing and debugging purposes, the certificate hash * means "any certificate". Beware that this setting is very insecure and should not be used in production settings.

    hashtag
    Server settings

    When hosting the server, the TrayIcon also accepts all the server settings and will forward any commandline options to the hosted server when starting it.

    It is possible to run Duplicati in "portable mode" where it can run from removable media, such as a USB stick; see the server data location section for more details.

    Server component
    TrayIcon on Windows
    Status icon on Ubuntu
    Statusbar icon on MacOS
    duplicati-agent --registration-url=<copied-url>
    {
      "args": {
        "agent": [ "--registration-url=<copied-url>" ]
      },
      "env": {
        "*": {
          "DUPLICATI__REGISTER_REMOTE_CONTROL": "<copied-url>"
        }
      }
    }
    ssh://<hostname>/<path>
      ?auth-username=<username>
      &auth-password=<password>
      &ssh-fingerprint=<fingerprint>
    ssh://server/home/backup
      ?ssh-key=sshkey%3A%2F%2F----%20BEGIN%20SSH2%20PRIVATE%20KEY%20----...
      &auth-username=user
      &ssh-fingerprint=<fingerprint>
    ssh://server/home/backup
      ?ssh-keyfile=/home/user/.ssh/keyfile
      &auth-username=user
      &auth-password=<keyfile password>
      &ssh-fingerprint=<fingerprint>
    {
      "key1": "value1",
      "passphrase": "my password"
    }
    --secret-provider=file-secret:///home/user/secrets.json.aes?passphrase=my-password
    Duplicati.CommandLine.SharpAESCrypt.exe e my-password source.json destination.json.aes
    --secret-provider=wincred://
    --secret-provider=libsecret://
    --secret-provider=pass://
    gpg --full-generate-key
    pass init <your-email-address>
    --secret-provider=keychain://
    s3://<bucket name>/<prefix>
      ?aws-access-key-id=<account id or username>
      &aws-secret-access-key=<account key or password>
      &s3-servername=<server ip or hostname>
      &use-ssl=true
    file://C:\Data
    file://\\server\share\folder
    file:///home/user
    // Note, the paths are URL encoded here: E:\backupdata;G:\backupdata
    file://F:\backupdata?alternate-target-paths=E%3A%5Cbackupdata%3BG%3A%5Cbackupdata
    file://*:\backupdata
    file://F:\backupdata?alternate-destination-marker=<filename>
    azure://<container>/<prefix>
      ?azure-account-name=<account id>
      &azure-access-key=<access key>
    azure://<container>/<prefix>
      ?azure-account-name=<account id>
      &azure-access-sas-token=<SAS token>
    sharepoint://<folder>/<subfolder>
      ?authid=<authid>
      &site-id=<site-id>
    googledrive://<folder>/<subfolder>?authid=<authid>
    googledrive://folder/subfolder?authid=<authid>&googledrive-teamdrive-id=<team id>
    BOT_TOKEN="YOURBOTTOKEN"
    curl -s "https://api.telegram.org/bot$BOT_TOKEN/getUpdates" \
      | grep -oE '"id":-?[0-9]+' | head -1 | cut -d':' -f2
    duplicati
    duplicati-server
    # Defaults for duplicati initscript
    # sourced by /etc/init.d/duplicati
    # installed at /etc/default/duplicati by the maintainer scripts
    
    #
    # This is a POSIX shell fragment
    #
    
    # Additional options that are passed to the Daemon.
    DAEMON_OPTS=""
    sudo systemctl enable duplicati.service
    sudo systemctl daemon-reload
    sudo systemctl start duplicati.service  
    sudo systemctl status duplicati.service
    sudo journalctl --unit=duplicati
    sudo systemctl edit duplicati.service
    [Service]
    SupplementaryGroups=group1 group2
    sudo systemctl daemon-reload
    sudo systemctl restart duplicati
    sudo systemctl enable duplicati-agent.service
    sudo systemctl start duplicati-agent.service 
    sudo journalctl --unit=duplicati
    duplicati-agent register "<pre-authorized url>"
    sudo systemctl restart duplicati-agent
    duplicati-cli help
    duplicati-server-util help
    --ignore-advisory-locking=true
    --browser-command=<path to binary>
     --no-hosted-server=true
    --hosturl=<host url>
    --read-config-from-db=true
    --host-cert-hash=<hash>
    hashtag
    User interface

    To configure the Google Cloud Storage destination you need to supply the bucket name, then pick a unique folder name for the backups, and then authorize Duplicati to work on your behalf. Simply click the "AuthID" link in the text field; the authentication process will start and the "AuthID" field will be filled out when you are done.

    If you use the "Test connection" button and the bucket does not exist, Duplicati will offer to create the bucket, using the parameters set in advanced options.

    hashtag
    URL format for Commandline

    To use GCS, you can use the following URL format:

    To use Google Cloud Storage you must first obtain an AuthID by using a Duplicati service to log in to Google and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.

    hashtag
    Using service token

    Because the OAuth token has full access to the Google Account, it is recommended that you use a Google Service Account instead. Once you have obtained the Service Account JSON, provide it with either --gcs-service-account-file=/path/to/json or --gcs-service-account-json=<urlencoded json>. If you are using the graphical user interface, you can paste the JSON into the advanced option area:

    hashtag
    Creating a bucket

    You can create a bucket from within the Google Cloud Console, where you can set all options as desired. If you prefer to let Duplicati create the bucket, you can also set the parameters from Duplicati.

    You set the project the bucket belongs to with --gcs-project=<project id> and the desired location with --gcs-location=<location>. You can get the project id from the Google Cloud Console and see the possible GCS bucket locations in the GCS documentation.

    When creating the bucket you can also choose the storage class with --gcs-storage-class. You can choose any of the storage class values shown in the GCS documentation, even if they are not reported as possible by Duplicati.

    These options have no effect if the bucket is already created.

    Google Cloud Storage

    Using Duplicati with MacOS

    This page describes common scenarios for configuring Duplicati with MacOS

    Before you can install Duplicati, you need to decide on two different parameters:

    • The type you want: GUI, Server, Agent, CLI.

  • Your machine CPU type: Arm64 or x64

    hashtag
    Deciding on type

    To use Duplicati on MacOS, you first need to decide which kind of instance you want: GUI (aka TrayIcon), Server, Agent, or CLI. The section on Choosing Duplicati Type has more details on each of the different types. For home users, the common choice is the GUI package in .dmg format. For enterprise rollouts, you can choose the .pkg packages.

    hashtag
    Determine CPU architecture

    Your Mac is most likely using Arm64 with one of the M1, M2, M3, or M4 chips. If you have an older Mac, it may use the Intel x64 chipset. To see what CPU you have, click the Apple icon and choose "About this Mac". In the field labelled "Chip" it will either show Intel (x64) or M1, M2, M3, M4 (Arm64).

    hashtag
    Installing the package

    The packages can be obtained via the main download page. The default package shown on the page is the MacOS Arm64 GUI package in .dmg format. If you need another version, click the "Other versions" link at the bottom of the page.

    If you are using the .dmg package, the installation works like other applications: simply open the .dmg file and drag Duplicati into Applications. Note that with the .dmg package, Duplicati is not set to start automatically with your Mac, but if you restart with the option to re-open running programs, Duplicati will start on login.

    If you are using the .pkg package, Duplicati will install a launchAgent that ensures Duplicati starts on reboots. The CLI package installs a stub file that is not active, so you can edit the launchAgent and have it start the Server if you prefer.

    hashtag
    Using the TrayIcon

    If you have installed the GUI package, you will have Duplicati installed in /Applications and it can be started like any other application. Once Duplicati is started, it will place itself in the menu bar near the clock and battery icons. Because Duplicati is meant to be a background program, there is no Duplicati icon in the dock.

    On the first start Duplicati will also open your browser and allow you to configure your backups. If you need access to the UI again later, locate the TrayIcon in the status bar, click it and click "Open". If you install the CLI or Agent packages, the Duplicati application is not available.

    hashtag
    Using the Server

    If you install the CLI package, Duplicati binaries are placed in /usr/local/duplicati and symlinked into /usr/local/bin and you can start the server simply by running:

    When invoked as a regular user, it will use the same folder, ~/Library/Application Support/Duplicati, as the TrayIcon and share the configuration.

    Note: If you install the GUI package or install from homebrew, Duplicati's binaries are not symlinked into the paths searched by MacOS. You can invoke the binaries by supplying the full path:

    hashtag
    Using the Agent

    With the Agent there is a minimal setup required, which is to register the machine with the Duplicati Console. When installing the Agent package, it will automatically register the Duplicati agent with a launchAgent that starts Duplicati in Agent mode.

    If the Agent is not registered with the Console, it will open the default browser and ask to be registered. Once registered, it will run in the background and be available on the Duplicati Console for management.

    If you have a pre-authenticated link for registering the machine, you can place a file in /usr/local/share/Duplicati/preload.json with content similar to:

    hashtag
    Using the CLI

    Using the CLI is simply a matter of invoking the binary:

    Since the CLI also needs a local database for each backup, it will use the same location as above to place databases. In addition to this, it will keep a small file called dbconfig.json in the storage folder where it maps URLs to databases. The intention of this is to avoid manually specifying the --dbpath parameter on every invocation.

    If you specify the --dbpath parameter, it will not use the dbconfig.json file and it will not store anything in the local datafolder.

    Note: If you install the GUI package or install from homebrew, Duplicati's binaries are not symlinked into the paths searched by MacOS. You can invoke the binaries by supplying the full path:

    hashtag
    Using the support programs

    Each package of Duplicati contains a number of support utilities, such as the RecoveryTool. Each of these can be invoked from the commandline with a duplicati-* name and all contain built-in help. For example, to invoke ServerUtil, run:

    Note: If you install the GUI package or install from homebrew, Duplicati's binaries are not symlinked into the paths searched by MacOS. You can invoke the binaries by supplying the full path:

    Duplicati Access Password

    This page describes how the authentication is working with Duplicati and how to regain access if the password is lost or unknown

    If you are starting Duplicati for the first time, it will ask you to pick a password. Picking a strong password is important to prevent unwanted access to Duplicati from other processes on the system. By default, Duplicati chooses a strong random password, and it is recommended for most users to keep it. It is not possible to extract the current password in any way, and it is not possible to disable the password.

    hashtag
    Access from the TrayIcon

    The TrayIcon process will usually host the Server that presents the UI. Since the two parts are within the same process they can communicate securely, and this setup enables the TrayIcon to negotiate a short-term signin token with the server, even though it does not know the password.

    This mechanism works for most default installations and is secure as long as the desktop is not compromised. This signin process is the reason that the default random password is preferred, because it is not possible to leak the password.

    The downside is that if you bookmark the Duplicati page, you may be asked for a password that you do not know when accessing the page directly. In this case, re-launching from the TrayIcon will log you in again.

    If you prefer, it is possible to choose the password so you can enter it when asked. Optionally, you can also choose to disable the feature that allows the TrayIcon to sign in without a password, through the settings page.

    Login with the TrayIcon is shown here for MacOS, but the same works on Linux and Windows:

    hashtag
    Temporary signin token

    When Duplicati starts up with the randomly generated password it will attempt to emit a temporary sign-in url. If you run either the Server or the TrayIcon in a terminal, most systems will show the link there.

    If you are running Duplicati as a service with no console attached, the link will end up in the system logs. On Windows you can use the Event Viewer utility to find the message with a sign-in url. For Linux you can view the system logs, usually:

    Note that the regular output from journalctl is capped in width, so you cannot see the whole token. Pipe to a file or another program as shown above to get the full output.

    For MacOS you can use the Console application.

    Once you have obtained the link, simply click it or paste it into a browser. Note that the sign-in token has a short lifetime to prevent it from being used to gain unauthorized access by someone who obtains the logs. If the link has expired, simply restart the service or application and a new link will be generated.

    After a password has been set, the link will no longer be generated.

    hashtag
    Change password with ServerUtil

    If you are not using the TrayIcon, or you have disabled the signin feature but lost the password somehow, you can change the password with ServerUtil in some cases.

    This works by reading the same database as the server is using, extracting the keys used to sign a sign-in token, and then creating a sign-in token. This sign-in token works the same way as the TrayIcon's signin feature. Note that the password itself cannot be extracted from the database; it can only be verified.

    After obtaining a sign-in token, ServerUtil can then change the password in the running instance.

    This only works if:

    • The database is readable from the process running ServerUtil

    • The database field encryption password is available to the process running ServerUtil

    If these constraints are satisfied, it is possible to reset the server password by running:

    If ServerUtil is launched in a similar environment (i.e., same user, same environment variables) this would allow access in most cases. There are a number of commandline options that can be used to guide ServerUtil in case the environments are not entirely the same.

    For Linux users, you can usually use su or sudo to enter the correct user context, but some additional environment variables may be needed. The default location for the database is described in the section on the server data location, and a different location can be provided with --server-datafolder.

    hashtag
    Example change with a different context

    If you need to change the password for a Windows Service instance running in the service context, you can use a command such as this:

    Similarly, if the service is running as root on Linux:

    hashtag
    Change password from the Server

    If the other options are not available, it is possible to restart the process and supply the commandline option:

    This will write a hashed version of the new password to the database and use this going forward. This process requires restarting the server, but the password is persisted in the database, so it is only required to start the server once with the --webservice-password option; future starts can be done without the password.

    Since commandline arguments and environment variables can be viewed through various system tools, it is recommended that the option is not set on every launch. A preferred way to set this is to stop all running instances, start once with the new password from a commandline terminal, shut down, and then start again normally.

    The option can also be supplied to the TrayIcon and Agent processes, which will pass it on to their internal instance of the Server.

    hashtag
    Disable sign-in tokens

    It is possible to disable the use of sign-in tokens completely, which can increase security further. This is done by passing the option:
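
    The option is passed when starting the Server, for example:

    ```
    duplicati-server --webservice-disable-signin-tokens=true
    ```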

    This will make the Server reject any sign-in tokens and prevent access from the TrayIcon and ServerUtil without explicitly passing the password. With this option, creating a new token requires write access to the database, but it also requires handling the password in a safe manner from all places where it is needed.

    This option can also be supplied to the TrayIcon process and is enabled by default by the Agent.

    Filters in Duplicati

    This page describes how filters are evaluated inside Duplicati and how to construct them

    Duplicati uses the same filter setup everywhere individual files are selected. It is most prominent when choosing the sources, but filters can be applied in other places where individual files can be selected.

    hashtag
    Path representations

    Internally, Duplicati represents folders with a trailing path separator and files without one, which makes it easy to distinguish between the two types. This distinction is important when constructing filters, as Duplicati requires a full match, including the trailing path separator, before a match is considered. An example for Windows and Linux/MacOS:

    • Windows

      • Folders

        • C:\Users\john\

        • X:\data\

      • Files

        • C:\Users\myfile

        • X:\data\file.bin

    • Linux/MacOS

      • Folders

        • /home/john/

        • /usr/share/

      • Files

        • /home/myfile

        • /usr/file.bin

    For brevity, the remainder of this page will only use the Linux/MacOS format in examples, but the same can be applied to Windows paths.

    hashtag
    Filter types

    Duplicati supports 4 different kinds of filters: paths, globbing, regex, and predefined groups. The simplest type of filter is the path. To use a path-type filter, simply provide the full path to the file or folder to target.
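
    For example, targeting a folder and a single file (paths are illustrative):

    ```
    /home/john/Music/
    /home/john/notes.txt
    ```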

    hashtag
    Globbing expressions

    While it would be possible to maintain an ever-growing list of paths in a filter, it can quickly become hard to manage. For cases where there is some similarity between multiple file or folder paths, it is possible to target multiple paths with a file-globbing expressionarrow-up-right. The wildcard character * matches any length of characters (including zero) and the character ? matches exactly one character. Unlike other glob implementations, the path separator is also matched in Duplicati filters.

    An example of glob expressions:
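
    The file and folder names here are illustrative:

    ```
    /home/john/IMG_????.JPG
    /home/*/Download/
    *.iso
    ```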

    The first expression matches files with the 4 ? characters replaced by any character, the second expression matches the Download folder for any user, and the third matches any file with the .iso extension.

    hashtag
    Regular expressions

    If the paths to match are more complicated than what can be expressed with globbing, it is also possible to use regular expressionsarrow-up-right, which are a common way of expressing a string pattern. Understanding regular expressions and applying them can be a challenging task, and will most often require some testing to ensure they work as expected. Also note that since Duplicati is written in C#, it uses the .NET variant of regular expressionsarrow-up-right.

    Regular expressions are provided by wrapping the expressions with hard braces [ ]:
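
    For example, a regular expression matching any .iso file inside a Download folder (the path is illustrative):

    ```
    [.*/Download/.*\.iso]
    ```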

    Note that for Windows, the path separators must be escaped with a backslash \, so each separator becomes a double backslash \\.

    hashtag
    Predefined filter groups

    Some files are commonly excluded on many systems, and to make it easier to exclude such files, Duplicati has a number of built in filter groups:

    • SystemFiles

      Files that are not real files, such as /proc or System Volume Information.

    • OperatingSystem

      Files that are provided by the operating system, such as /bin or C:\Windows\

    • CacheFiles

      Files that are part of application or operating system caches, such as the browser cache.

    • TemporaryFiles

      Files that are stored temporarily by applications as part of normal operations

    • Applications

      Binary applications, such as /lib/ or C:\Program files\

    • DefaultExcludes

      All the above filters in one group

    To use a filter group, supply one or more names inside curly braces { }, separated with commas. As an example:
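
    For example, combining two of the groups:

    ```
    {SystemFiles,OperatingSystem}
    ```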

    hashtag
    Apply filters

    By default, Duplicati will recurse the source folders and include every file and folder found. For this reason, most filters will be exclude filters that remove something from the backup. Include filters are prefixed with a + and exclude filters are prefixed with a -.

    When Duplicati is evaluating filters, it will consider only the first full match, and not evaluate further. It will also evaluate folders before files, meaning that it is not possible to include a file, if the parent folder is excluded. Importantly, the filters are processed in the order they are supplied, which makes it possible to supply advanced rules. As an example:
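
    One possible set of rules matching the description below (paths are illustrative):

    ```
    +/usr/share/*.txt
    -*.txt
    -*.bin
    +/usr/share/*.bin
    ```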

    In the example, the first rule is applied before the second rule, which means that all .txt files in /usr/share/ are included, but any other .txt files are excluded. The inverse goes for the .bin files: because the exclude rule comes before the last rule, the files will be excluded, even though there is an include rule.

    If we append a rule:
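
    For example:

    ```
    -/usr/share/
    ```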

    Even if this rule is last, it will exclude the entire folder. Since the folder is excluded, the match on the include rule is never evaluated. This cut-off at the folder level makes it possible to fully avoid processing subfolders, which could otherwise be time consuming.

    Destination overview

    This page describes what a "destination" is to Duplicati and lists some of the available providers

    Duplicati makes backups of files, called the source, and places the backup data at a destination chosen by the user. To make Duplicati as versatile as possible, a wide range of destinations (or "backends") is supported, each with different properties.

    Some storage providers support multiple protocols, each with their strengths. You can generally pick whichever storage destination provider you like, but if there is a specific implementation for a given storage provider, that is usually the best pick.

    Each storage destination has a number of options that can be provided via a URL-like format. The options should preferably be provided as part of the URL, but can also be provided via regular commandline options. For instance, the --use-ssl=true flag can also be added to the URL with &use-ssl=true. If both are provided, the URL value is used.

    circle-exclamation

    Each backup created by Duplicati requires a separate folder. Do not create two backups that use the same destination folder as they will keep breaking each other.

    hashtag
    Standard based destinations

    Destinations in this category are general purpose enough, or commonly used, so they can be used across a range of storage providers. Destinations in this category are:

    • File destination (any path in the filesystem)

    • S3-compatible

    • FTP

    • SFTP (SSH)

    • WebDAV

    • OpenStack

    • Rclone (binary required)

    hashtag
    Provider specific destinations

    Storage destinations in this category are specific to one particular provider and implemented using either their public API description, or by using libraries implemented for that provider. Destinations in this category are:

    • Backblaze B2

    • Amazon S3

    • Azure Blob Storage

    • Google Cloud Storage

    • Aliyun OSS

    • Tencent COS

    • FileJump

    • Filen.io

    hashtag
    File synchronization providers

    Storage destinations in this category are also specific to one particular provider, but these storage provider products are generally intended to be used as file synchronization storage. When they are used with Duplicati, the backup files will generally be visible as part of the synchronization files. Destinations in this category are:

    • Box.com

    • Dropbox

    • GoogleDrive

    • OneDrive

    • OneDrive for business

    • Microsoft Group Drive

    • SharePoint

    • Jottacloud

    • pCloud

    • Mega.nz

    hashtag
    Decentralized providers

    Storage destinations in this category utilize a decentralized storage strategy and require knowledge about each system to have it working. Some of these may require additional servers or intermediary providers and may have different speed characteristics compared to other storage providers. Destinations in this category are:

    • Storj (previously Tardigrade)

    • TahoeLAFS

    The server database

    This page describes the database kept by the Duplicati Server

    When the Server is running, either stand-alone or as part of the TrayIcon or Agent, it needs a place to store the configuration. All configuration data, logs and settings are stored inside the file Duplicati-server.sqlite. As the file extension reveals, this is an SQLitearrow-up-right database file and as such can be viewed and updated by any tool that works with SQLite databases.

    The database file is by default located in a folder that belongs to the user account running it. See the section on the database location for details on where this is and how to change it.

    hashtag
    Securing the database

    Due to the nature of Duplicati, this database can contain a few secrets that are vital to ensuring the integrity and security of the backups and also the Duplicati server itself. These secrets include both the user-provided secrets, such as the backup encryption passphrase and the connection credentials, but also server-provided secrets, such as the token signing keys, and optionally an SSL certificate password.

    Even though the database is located on the machine that makes the backup, it is important to prevent unauthorized access to the database, as it could be used for privilege escalation. And should the database ever be leaked, it is also important to ensure the contents are not accessible.

    To protect the database, Duplicati has support for a field-level encryption password. When activated, any setting that is deemed sensitive will be encrypted before being written to the database. This method ensures that the SQLite database itself is still readable, but the secrets are not readable without the encryption passphrase.

    To supply the field-level encryption password, start the Server, TrayIcon, or Agent with the commandline option --settings-encryption-key=<key>. As the commandline can usually be read by other processes, it is also possible to supply this key via the environment variable SETTINGS_ENCRYPTION_KEY=<key>.

    If you are aware of the risks, you can also set the commandline argument --disable-db-encryption=true instead of the key. This will remove existing encryption and not warn that the database is not encrypted.

    The simplest way to apply an encryption key, is to locate the server database, and create the file preload.json if it does not already exist. The file should contain the following:
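
    A sketch of such a file; the structure here assumes an env section whose "*" key applies the variable to all Duplicati executables (replace the value with your own secret):

    ```json
    {
      "env": {
        "*": {
          "SETTINGS_ENCRYPTION_KEY": "my-secret-key"
        }
      }
    }
    ```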

    Both the commandline arguments and environment variables can be set with the file, which makes it simpler to apply the same settings across executables, and removes the need for changing the service or launcher files.

    For additional protection, the operating system keychain, or an external secret provider, can be used to further secure the encryption key.

    hashtag
    Database location

    When running Duplicati for the first time, it will find a place where it can store the configuration database. Some versions of Duplicati change the location where they look for the databases, but this is always done in a backwards-compatible way, so new versions will also find the database in previous locations. Due to this logic, the locations can change a bit depending on which version of Duplicati was originally installed.

    It is possible to pick a different location for the database with the commandline option --server-datafolder=<path> or use the environment variable DUPLICATI_HOME.

    To change the folder of an existing instance of Duplicati, perform these steps:

    1. Stop Duplicati

    2. Move the Duplicati folder from the old location to the new location

    3. Change the startup parameters (environment variables, commandline arguments, or preload.json)

    4. Start Duplicati again

    hashtag
    Limited access to the database folder

    To limit unauthorized access to the server database and other settings, Duplicati will restrict access to the folder to only the account that is currently running Duplicati.

    On Windows the permissions are set to include the current user, the Administrator and the System account. On Linux/MacOS the permissions are set to the current user only, as root always has access.

    For most uses, this setup does not cause issues, but if you rely on access from a different user account, you need to place a file called insecure-permissions.txt inside the data folder (it can be an empty file). When Duplicati starts, it will look for such a file, and if the file does not exist, it will reset the permissions, locking out any other account than the current user.

    hashtag
    Database location on Windows

    The default location for users running Duplicati is %LOCALAPPDATA%\Duplicati which usually resolves to something like C:\Users\username\AppData\Local\Duplicati. This folder is the non-roaming folder. Older versions of Duplicati used %APPDATA%\Duplicati which is the roaming folder, causing files to be synchronized across machines. However, since Duplicati is not meant to be an app that is useful for roaming, it is now using the non-roaming folder.

    When running Duplicati as a Windows Service, the %LOCALAPPDATA%\Duplicati folder resolves to:

    Since this folder is under C:\Windows the contents may be deleted on major Windows upgrades (usually when the version number changes). For that reason, Duplicati will detect an attempt to store files in the C:\Windows folder and emit a warning. From version 2.1.0.108 and forward, Duplicati will avoid using a folder under C:\Windows and instead choose to use:

    hashtag
    Database location on Linux

    The default location when running Duplicati on Linux is ~/.config/Duplicati. For most distros, running Duplicati as a service means running it as the root user, resulting in /root/.config/Duplicati.

    However, due to some compatibility mapping, the home-folder prefix is sometimes missing, causing Duplicati data to be stored in /Duplicati. From version 2.1.0.108, this location is avoided and the location /var/lib/Duplicati is used instead, if possible.

    hashtag
    Database location on MacOS

    The default location when running Duplicati on MacOS is ~/Library/Application Support/Duplicati. Duplicati version 2.0.8.1 and older used the Linux-style ~/.config/Duplicati but this is avoided since version 2.1.0.2.

    FTP Destination

    This page describes the FTP storage destination

    The FTP protocol is widely supported, but FTP is generally considered a legacy protocol with security issuesarrow-up-right, even when correctly implemented. Due to its continued ubiquity, it is still supported by Duplicati using FluentFTParrow-up-right.

    hashtag
    User interface

    To use the FTP destination you must fill out at least the fields shown: server, port, path on server, username, and password. Based on your server, you may also need to add some advanced options as described below.

    hashtag
    URL format for Commandline

    To use the FTP backend, you can use a URL such as:
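
    For example (hostname, credentials, and folder are placeholders):

    ```
    ftp://username:password@hostname/folder
    ```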

    Despite FTP being a well-documented standard, there are many different implementations of the protocol, so the FTP backend supports a variety of settings for configuring the connection. You can use a non-standard port by including it in the hostname, such as ftp://hostname:2121.

    hashtag
    Connection mode

    Due to the way FTP works, it requires multiple connections to transfer data, and the method for selecting the connection mode has a number of quirks. The default setting is "AutoPassive", which works well for most setups, leaving the burden of configuring the firewall to the server.

    Use the option --ftp-data-connection-type to choose a specific connection mode if the default does not work for your setup.

    hashtag
    Encryption mode

    To enable encrypted connections, you can use the option --ftp-encryption-mode and set it to either Implicit or Explicit. The Implicit setting creates a TLS connection where everything is encrypted, whereas Explicit, which is more commonly used, creates an unencrypted connection and then upgrades to an encrypted session.

    The default setting is --ftp-encryption-mode=None which uses unencrypted FTP connections.

    The setting --ftp-encryption-mode=Auto is the most compatible setting, but also insecure, as it connects in unencrypted mode and then attempts to switch to encrypted, but will continue in unencrypted mode if this fails.

    To further lock down the encryption mode, the option --ftp-ssl-protocols can be used to limit the accepted protocols. Note: that due to unfortunate naming in .NET, the option --ftp-ssl-protocols=None means "use the system defaults".

    hashtag
    Self-signed certificates

    To support self-signed certificates, the FTP destination also supports the --accept-specified-ssl-hash option, which takes an SHA1 certificate digest and approves the certificate if it matches that hash. This is similar to manual certificate pinning and allows trusting a specific certificate outside the operating system's normal trust chain.

    For testing, it is also possible to use --accept-any-ssl-certificate which will bypass certificate checks completely and enable man-in-the-middle attacks on the connection.

    hashtag
    Path resolution

    The FTP protocol is tied to a Posix-style path where / is the root folder and subfolders are described using the forward-slash separator. On some systems the filesystem is virtual, so the user can only see the root path, but has no knowledge of the underlying real filesystem. On others, the paths are mapped directly to the user home, like /home/user.

    Use the option --ftp-absolute-path to treat the source path as an absolute path, meaning that folder maps to /folder and not to /home/user/folder.

    A related option is the --ftp-use-cwd-names option that makes Duplicati keep track of the working directory and use the FTP server's CWD command to set the working folder prior to making a request.

    hashtag
    Verification of uploads

    To verify that uploads actually work, the FTP connection will request the file after it has been uploaded to check that it exists and has the correct file size. This check is usually quite fast and does not impact backup speeds, but if needed it can be disabled with --disable-upload-verify.

    A related setting --ftp-upload-delay adjusts the delay that is inserted after the upload but before verifying the file exists, which is required on some servers to ensure the file is fully flushed before validating the existence.

    hashtag
    Debugging commands

    Because the FTP protocol can sometimes be difficult to diagnose, the option --ftp-log-to-console will enable logging various diagnostics output to the terminal. This option works best with the BackendTool or BackendTester application. The option --ftp-log-privateinfo-to-console will also enable logging of usernames and passwords being transmitted, to further track down issues. Neither option should be set outside of testing and evaluation scenarios.

    hashtag
    Notes on aFTP

    Prior to Duplicati 2.1.0.2 there were two different FTP backends, FTP and Alternative FTP (aFTP). This was because the primary FTP backend was based on FtpWebRequestarrow-up-right and was lacking some features. The aFTP backend was introduced to keep the FTP backend intact but offer more features using the FluentFTP library.

    With Duplicati 2.1.0.2 the codebase was upgraded to .NET8 which means that FtpWebRequest is now deprecated. For that reason, the FTP backend was converted to also be based on FluentFTP, so both FTP backends are currently using the same library.

    The aFTP backend is still available for backwards compatibility, but is the same as the FTP backend, with some different defaults. The aFTP backend will likely be marked deprecated in a future version, and eventually removed.

    Running a self-hosted OAuth Server

    This page describes how to set up and run a self-hosted OAuth Server

    If you are using one of the backends that require login via OAuth (Google, Dropbox, OneDrive, etc.) you will need to obtain a "clientId" and a "clientSecret". These are given by the service providers when you are logged in, and are usually free.

    If you prefer to avoid the hassle of setting this up, you can opt to use the Duplicati provided OAuth server, where Duplicati's team will handle the configuration. This OAuth server is the default way to authenticate. If you prefer to be more in control of the full infrastructure, you can use this guide to set up and use your own self-hosted OAuth Server.

    For example, this guide will show how to set up an OAuth server for internal use in an organization, granting Duplicati instances full access to the Google Drive files.

    If you need to set up another provider than Google, see the configuration defaults, which has links to the pages where the Client ID and Client secret can be found for other servicesarrow-up-right.

    Using Duplicati with Windows

    This page describes common scenarios for configuring Duplicati with Windows

    Before you can install Duplicati, you need to decide on three different parameters:

    • The type you want: TrayIcon, Server, Agent, or CLI.

    • Your machine's CPU type: x64, Arm64, or x86 (32 bit)

    Using remote file locking

    Duplicati supports locking remote files to prevent deletion, and this page describes how and when you should use remote locking.

    Duplicati supports Object Locking, also known as WORM (Write Once, Read Many), for compatible storage providers. When enabled, this feature ensures that your backup files cannot be deleted, modified, or overwritten for a specific period, even if your backup credentials are compromised.

    circle-info

    Remote file locking was introduced in Canary 2.2.0.103


    Cloud providers

    This page lists the cloud providers supported as secret providers

    For cloud-based providers there is generally a need to pass some kind of credentials to access the storage as well as the possibility of a provider being unavailable for a shorter period. To address these two issues, see and .

    Setting up and using either of the vaults described here is outside the scope of this document.

    hashtag
    HashiCorp Vault

    The implementation for HashiCorp Vault supports both the cloud-based offering as well as the self-hosted version as sources.

    Preload settings

    This page describes how Preload settings are applied

    The preload settings allow configuring machine-wide or enterprise-wide default settings with a single file. Because of this use case, all settings are applied only if they are not already present. This means a commandline argument could be set up to change the default blocksize, but if the user has applied another setting via the commandline or a parameters-file, the preload setting has no effect.

    For single-machine users, the preload settings are a convenient way to change the arguments passed to either the Server, TrayIcon, or Agent, without needing to edit shortcuts or service files.

    To support different ways of deploying the settings file, 3 locations are checked:

    • %CommonApplicationData%\Duplicati\preload.json

    hashtag
    Getting access to Google Cloud Services

    The first step is to sign up for Google Cloud Servicesarrow-up-right if you are not already a customer. Once you are signed up, you can create a new project as shown here:

    Once you have created a project where the OAuth settings can live, you need to enable the "Google Drive API". Go to the top-left menu, choose "API & Services" and then "Enabled APIs & Services". From here search for "Google Drive API", click it and enable it:

    Before you can get the credentials, you need to configure the consent screen that is shown when users log in with your OAuth service. You can choose "Internal" here, unless you need to provide access to people outside your organization. Choosing "External" also requires a Google review. On the consent screen, you only need to fill in the required fields: the app name and some contact information:

    The last step in the consent setup is choosing the scopes (meaning the permissions) it is possible to grant. In this example we choose the auth/drive scope, granting full access to all files in the user's Drive. For regular use, it is safest to use auth/drive.file, which will only grant Duplicati access to files created by Duplicati. However, in some cases Google Drive will drop your permissions and refuse to let Duplicati access the files. There is no way to change the permissions on the files, so if this happens, your only choice is to use auth/drive and obtain full access:

    You can now click update and save the consent screen and proceed to setting up the credentials needed. Click "Create Credentials" and choose "OAuth client ID". On the next page, choose the type "Web application". In the "Authorized redirect URIs" field you need to enter the url for the server that is being called after login. The Duplicati OAuth server uses a path of /logged-in so make sure it ends with that. In the screenshot, the server is hosted on a single machine, so the setup is for https://localhost:8080/logged-in:

    When you are done, click "Save" and a popup will show the credentials that are generated. Use the convenient copy buttons to get the "Client ID" and "Client secret", or download the JSON file containing them. If you lose them, you can retrieve them again via the "Credentials" page. The credentials shown here are redacted:

    hashtag
    Setting up the configuration

    With the credentials available, create a JSON text file similar to this:

    If you are setting up a secure server, you should use SharpAESCrypt to encrypt the file after you have created it. If you do, make a note of the passphrase used. Save the file either as secrets.json or secrets.json.aes if you have encrypted it.

    In the following, we will only set up Full Access Google Drive, which for legacy reasons is called "googledocs" in the OAuth server. If you are looking to set up one of the other services, see the configuration documentarrow-up-right, and pick the ids you need.

    In the following, the services are configured to just googledocs but it can be a comma separated list of services if you want to enable multiple. The storage is here simply a local folder that stores encrypted tokens, but you can also use an S3 compatible storage if needed. See the OAuth server readmearrow-up-right for more details.

    hashtag
    Docker based setup

    If you are using Docker, you can run the OAuth server imagearrow-up-right directly and simply add environment variables:

    The hostname here MUST match the one set as the redirect URI or the authorization will fail. The URLs parameter is what the internal Docker engine thinks it is running. For this setup there is no TLS/SSL certificate, so the URL here is http but note that we used https in the redirect URI and these two must match in the end. Here I am assuming some other service is providing the SSL layer.

    If you need to serve the certificate directly from the Docker container, generate a certificate .pfx file and use a configuration such as:

    hashtag
    Local machine setup

    To run without Docker, first you need to download the OAuth Server binaries for your operating systemarrow-up-right and extract them to a suitable place. The binaries are self-contained, so they will run without any additional framework installation.

    To run the server, invoke it with a setup like this:

    The hostname here MUST match the one set as the redirect URI or the authorization will fail. The URLs parameter is what the process thinks it is running locally. For this setup there is no TLS/SSL certificate, so the URL here is http but note that we used https in the redirect URI and these two must match in the end. Here I am assuming some proxy service is providing the SSL certificate.

    If you need to serve the certificate directly from the binary, generate a certificate .pfx file and use a configuration such as:

    hashtag
    Issuing an AuthID

    Once the service is running, you can navigate to the page and generate an AuthID:

    hashtag
    Using the self-hosted OAuth server in Duplicati

    The final step is to instruct Duplicati to use the self-hosted OAuth server instead of the regular instance. This is done by visiting the "Settings" page in the Duplicati UI and adding the advanced option --oauth-url=https://localhost:8080/refresh:

    Don't forget to click "OK" to save the settings. Once configured, the "AuthID" links in the UI will point to your self-hosted OAuth server, and all authorization is done purely through the self-hosted OAuth server.

    hashtag
    Deciding on type

    To use Duplicati on Windows, you first need to decide which kind of instance you want: GUI (aka TrayIcon), Server, Agent, CLI. The section on Choosing Duplicati Type has more details on each of the different types.

    hashtag
    Determine CPU architecture

    Finally you need to locate information on what CPU architecture you are using:

    • x64: 64bit Intel or AMD based CPU. This is the most common CPU at this time.

    • Arm64: 64bit ARM based CPU. Some laptops, tablets and servers use it.

    • x86: 32bit Intel or AMD based CPU. Note that Windows 10 was the last version to support 32 bit processors.

    If you are in doubt, you can try the x64 version, or use Microsoft's guide for determining the CPUarrow-up-right.

    hashtag
    Installing the package

    Once you have decided on the package you want, you are ready to download it. The default version shown on the main download pagearrow-up-right is the x64 GUI version in .msi format. The full list of packages can be obtained via the main download pagearrow-up-right, by clicking "Other versions".

    hashtag
    Using the TrayIcon

    For users with a desktop environment and no special requirements, the TrayIcon instance is the recommended way to run Duplicati. If you are using the .msi package to install Duplicati, you will see an option to automatically start Duplicati, as well as create a shortcut on your desktop and in the start menu. If you need to manually start Duplicati, you can find the executable in:
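
    With the default installation folder, the executable is typically located at:

    ```
    C:\Program Files\Duplicati 2\Duplicati.GUI.TrayIcon.exe
    ```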

    When running the TrayIcon in a user context, it will create a folder in your home folder, typically C:\Users\<username>\AppData\Local\Duplicati where it stores the local databases and the Server database with the backup configurations.

    hashtag
    Using the Server

    The Server is a regular executable and can simply be invoked with:

    When invoked as a regular user, it will use the same folder, C:\Users\<username>\AppData\Local\Duplicati, as the TrayIcon and share the configuration.

    hashtag
    Running the Server as a Windows Service

    If you want to run Duplicati as a Windows Service, you can use the bundled service tool to install/uninstall the service:

    When installing the Service it will automatically start, and likewise, uninstalling it will stop the service. If you need to pass options to the server, you can provide them to the INSTALL command:

    You can also use the preload.json file to pass settings to the Server when running as a service, which allows you to change the settings without the uninstall/install cycle (you still need to restart the service).
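Following the preload.json format documented elsewhere on this site, a minimal sketch that passes a port option to the Server running as a service (the port value is an example):

```json
{
  "args": {
    "server": [ "--webservice-port=8100" ]
  }
}
```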

Note: When running the Windows Service it will default to use port 8200 and fail if that port is not available. If you are running the TrayIcon, that will run a different instance, usually at port 8300. If you want to connect the TrayIcon to the Windows Service, edit the shortcut to Duplicati:

    hashtag
    Using the Agent

    With the Agent there is a minimal setup required, which is to register the machine with the Duplicati Console. The default installation is to install the Agent as a Windows Service, meaning it will run in the LocalService system account, instead of the local user. Due to this, it will not be able to open the browser and start the registration process for you. Instead, you must look into the Windows Event Viewer and extract the registration link from there.

    You can also register the Agent, using the Agent executable:

    After the Agent has been registered, restart the service and it will now be available on the Duplicati Console.

    If you have a pre-authenticated link for registering the machine, and would like to automate the process, you can place a file in C:\ProgramData\Duplicati\preload.json with content similar to:

    hashtag
    Using the CLI

    Using the CLI is simply a matter of invoking the binary:

    Since the CLI also needs a local database for each backup, it will use the same location as described for the Server above to place databases. In addition to this, it will keep a small file called dbconfig.json in the storage folder where it maps URLs to databases. The intention of this is to avoid manually specifying the --dbpath parameter on every invocation.

    If you specify the --dbpath parameter, it will not use the dbconfig.json file and it will not store anything in the local datafolder.

    hashtag
    Using the support programs

    Each package of Duplicati contains a number of support utilities, such as the RecoveryTool. Each of these can be invoked from the commandline with their executable name and all contain built-in help. For example, to invoke ServerUtil, run:

    hashtag
    Handling locked files

    On Windows, file locking is an integrated part of how programs expect the system to work. This is problematic for backup systems that want to read all the files, locked or not, to ensure they are backed up.

    Windows offers two primary ways to read locked files: VSS and BackupRead. Both ways require that the caller has the SeBackupPrivilege enabled and will then permit the caller to read the files. Unfortunately, this is implemented with a permission override as well, meaning that a caller with the SeBackupPrivilege has access to read any file on the system.

    In other words: it is not possible to read the user's own locked files without also granting full read access to the system. Since it is a security issue to have a regular account that has access to all files, the two options are: run as a user with elevated privileges (e.g., Administrator) or ignore locked files.

    For simplicity, most users prefer to run Duplicati as a service, which will run in the LocalSystem account that already has the SeBackupPrivilege set.

    hashtag
    Volume Snapshot Service (VSS)

The most robust and heavy-handed way of making a backup on Windows is to use VSS to create a snapshot of the current disk and then read from it. The implementation on Windows signals all programs that want to write to disk that a snapshot is being prepared, waits for them to flush to disk, makes the snapshot, and lets all programs continue. This approach ensures that the disk snapshot is in a consistent state, so no partial writes are backed up.

    To enable VSS, set the advanced option --snapshot-policy=required . If you are using Duplicati 2.1.1.0 or later, VSS will be enabled by default if the user context has the SeBackupPrivilege.

    hashtag
    BackupRead

The BackupRead method does not create a snapshot and instead relies on a Windows API call that allows a program to read files for backup purposes. The benefit of this is that you do not need to create disk snapshots, which require extra disk space and co-operation from other programs.

    To enable BackupRead, set --backupread-policy=required and --snapshot-policy=off to ensure you are only using BackupRead. Note that the --backupread-policy option is currently only available in the canary buildsarrow-up-right.

    GUI
    Server
    Agent
    CLI
    hashtag
    Why Use Remote File Locking?

    Remote file locking provides a "last line of defense" for your data. In a standard backup scenario, if an attacker gains access to your backup credentials, they can delete your offsite backups before encrypting your local data. Object locking prevents this.

    hashtag
    Use Cases

    • Ransomware Protection: Even if a ransomware strain finds your storage credentials, it cannot delete the locked backup volumes. Your data remains safe on the server until the lock expires.

    • Legal Hold & Compliance: For industries with strict data retention requirements (like HIPAA or GDPR), Compliance Mode ensures that data is preserved exactly as it was recorded for a mandatory duration.

    • Protection Against Accidental Deletion: Prevents accidental cleanup or bucket emptying by administrators.


    hashtag
    How it Works

When you enable locking, Duplicati instructs the storage provider to apply a retention period to every file it uploads (dblock, dindex, and dlist). Since the storage provider is responsible for the actual locking, Duplicati and the machine it runs on do not have any way to unlock the files.

    hashtag
    Retention Modes

    • Governance (Unlocked): Files are locked and cannot be deleted even if the storage credentials are leaked, but accounts with specific high-level permissions (Administrators) can still bypass the lock if necessary. Flexibility: High (good for testing).

    • Compliance (Locked): No one can delete the files, including the root account holder, and the lock cannot be shortened. Flexibility: None (strict security).

    triangle-exclamation

    WARNING: Use Compliance Mode with extreme caution. If you set a 10-year lock in Compliance Mode, those files cannot be deleted by you, the storage provider, or anyone else until the time expires. You will be billed for that storage for the entire duration, and there is no "undo" button.


    hashtag
    Configuration Guide

    hashtag
    1. Prerequisites

Ensure your storage provider and bucket support Object Locking. This must often be enabled at the time of bucket creation. For instance, AWS S3 and B2 require that Versioning is enabled for the bucket before locking can be enabled.

    Supported Backends:

    • Amazon S3 (and S3-compatible like IDrive e2)

    • Backblaze B2

    • Azure Blob Storage

    • Google Cloud Storage

    hashtag
    2. Enabling Locking in a Backup Job

    To enable locking for a backup, add the following option to your configuration (Step 5 "Options" in the WebUI):

    --remote-file-lock-duration=30D

    This example locks every new file for 30 days. You can use units like H (hours), D (days), W (weeks), or M (months).

    circle-info

Always test this with governance mode (the default) and use a short duration until you are sure it works as intended.

    hashtag
    3. Provider-Specific Settings

    You can specify the mode depending on your provider:

    • S3: --s3-object-lock-mode=governance (default) or compliance

    • Backblaze B2: --b2-retention-mode=governance or compliance

    • Azure: --azure-blob-immutability-policy-mode=unlocked or locked

    • Google Cloud Storage: --gcs-retention-policy-mode=unlocked or locked

    circle-info

    Google Cloud Storage also requires using --service-account-file or --service-account-json as the default OAuth flow does not grant permissions to lock objects


    hashtag
    Managing Locks via CLI

    If you have existing backups that you want to lock retrospectively, or if you need to audit your locks, use the Duplicati Command Line. You can also use the Commandline part of the UI to execute these commands.

    hashtag
    Apply locks to existing versions

    To lock files belonging to the most recent backup:

    hashtag
    Sync lock status to local database

    If you changed locks manually on the server, update Duplicati’s local records:


    hashtag
    Important Storage Considerations

    hashtag
    Soft Deletion (S3, B2 and others)

In S3 and B2, the buckets must have versioning enabled for locking to work. When Duplicati "deletes" a file (e.g., during a compaction process), the provider creates a Delete Marker. The file seems to disappear from your file list, but the locked data is still there taking up space. This happens regardless of whether the file is locked or not.

    To avoid paying for data that should have been deleted, you must configure Lifecycle Rules on your bucket to permanently remove non-current versions once the lock expires.
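As a sketch, an AWS S3 lifecycle configuration (applied with `aws s3api put-bucket-lifecycle-configuration`) could permanently remove non-current versions shortly after a 30-day lock expires; the rule ID and the 35-day window are assumptions you should adapt to your lock duration:

```json
{
  "Rules": [
    {
      "ID": "expire-unlocked-duplicati-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 35 },
      "Expiration": { "ExpiredObjectDeleteMarker": true }
    }
  ]
}
```

The `NoncurrentDays` value must exceed the lock duration, otherwise the expiration attempts will fail until the lock has passed.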

    hashtag
    Direct Deletion (Azure and others)

    In Azure, the file remains visible and cannot be deleted. If Duplicati attempts to delete a locked file, the operation will simply fail because the blob itself is protected. In this case, no lifecycle rules are needed, because the file will actually be deleted when requested.

    hashtag
    Mixed (Google Cloud Storage)

Google Cloud Storage supports both a WORM approach and a soft-delete and/or versioning approach; the bucket settings determine which behavior you get. If you disable versioning and soft-delete but enable object lock retention, you get WORM behavior that rejects deletes. Enabling soft-delete allows files to be marked as deleted, but they are not actually removed before the lock expires. Additionally, the soft-delete rules may keep the objects in the bucket even after the lock expires.

    • See this SO threadarrow-up-right for details, but usually:

    • Linux: /usr/share/Duplicati/preload.json

    • MacOS: /usr/local/share/Duplicati/preload.json

    • Windows: C:\ProgramData\Duplicati\preload.json

  • Inside the installation folder

  • The file pointed to by DUPLICATI_PRELOAD_SETTINGS

  • For security reasons, all these paths are expected to be writable only by Administrator/root so unprivileged users cannot modify the values. If the settings contain secrets, make sure that only the relevant users can read them.

The files are loaded silently by default, even if parsing fails, but setting the environment variable DUPLICATI_PRELOAD_SETTINGS_DEBUG=1 will enable loader debug information to help investigate issues.

    The implementation here follows the format:

    The file has 3 sections that are all similar and all optional: env, db, and args. Each section can apply to all executables (*) or a specific executable. The executable names can be seen in the source, but the most common ones are tray and server.

    In the case where the * section and specific executable has the same variable, the specific one is used. If multiple settings files are found, they are loaded in the order described above. Here the last file loaded will be able to overwrite the others. The * settings are collected from all three files, as are the executable specific options, and only after all parsing is done, are the specific executable options applied (see below for an example).

Note that some executables will load others, such that TrayIcon, Service, and WindowsService will load Server.

    hashtag
    Environment variables - env

    The env section contains environment variables that are applied inside the process, after starting. Each entry under an executable is a key-value pair, where the key is the name of the environment variable, and the value will be the contents of the environment variable.

The environment variables are only set if they are not already set; this allows supplying a custom base set while preferring variables already set on the local machine.

    In the case where one binary loads another, the starting application environment variables are applied first, and then any unset environment variables are applied for the loaded executable.

    hashtag
    Database settings - db

    For the db section it is possible to use * but the settings are currently only applied when running the server, so for future compatibility this section should use server only. The settings under an executable in the db section are automatically prefixed with -- to ensure they are valid options and are saved as the "application wide" settings, also visible in the UI under Settings -> Advanced Options.

    The settings here are applied to the database if they are changed, meaning a change to the settings will overwrite settings the user has already applied. This check is performed on startup.

The database settings are not passed on from a binary when it loads another, so the only database settings that are loaded are those done by Server, even if any are supplied by tray (this may change in the future).

    hashtag
    Commandline arguments

The commandline arguments support both the * and specific executable names. The arguments are expected to be switches in the format --name=value but can be any commandline argument. The general logic in Duplicati is that "last option wins", and the resolver applies this logic to arrive at the most sensible combination of arguments.

    hashtag
    Resolution with conflicts

    If the following fragment is supplied:

    The Server executable will get the settings from * and the TrayIcon will get the values: "E1=c E2=b E3=d".

    If the above fragment is found in the first file, but this fragment is found in a later file:

    First the * variables are collected, giving "E1=a E2=b E3=f", then the tray variables give "E1=g E3=d", and then they are combined to give "E1=g E2=b E3=d" for tray.

The same combination logic is applied to both the db and args sections, but since the args section is not a set of key-value pairs, and the order of arguments matters, the arguments are first collected and then reduced:

    In this case the arguments are collected, with * first, then the executable specifics, giving:

    Since this contains 3 options named --test, they are reduced and appended so it ends up with:

    The intention here is to stay as close as possible to the original line that was entered. If the commandline arguments already contains --test, the values are not applied.
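The resolution rules above can be illustrated with a small Python sketch. This is not Duplicati's actual implementation, just a model of the described behavior, using the example fragments from this section:

```python
def merge_env(env_sections, exe):
    """Merge the "env" sections of several preload files for one executable.

    The "*" values and the executable-specific values are collected
    separately across all files (later files overwrite earlier ones),
    and only after all files are parsed are the specific values applied.
    """
    wildcard, specific = {}, {}
    for section in env_sections:
        wildcard.update(section.get("*", {}))
        specific.update(section.get(exe, {}))
    return {**wildcard, **specific}


def reduce_args(args):
    """Apply "last option wins": keep only the final occurrence of each
    --name switch, preserving the position of that final occurrence."""
    result, seen = [], set()
    for arg in reversed(args):
        name = arg.split("=", 1)[0]
        if name.startswith("--"):
            if name in seen:
                continue
            seen.add(name)
        result.append(arg)
    return list(reversed(result))


# The two example fragments from this section:
file1 = {"*": {"E1": "a", "E2": "b"}, "tray": {"E1": "c", "E3": "d"}}
file2 = {"*": {"E3": "f"}, "tray": {"E1": "g"}}
print(merge_env([file1, file2], "tray"))
# → {'E1': 'g', 'E2': 'b', 'E3': 'd'}

print(reduce_args(["--test=1", "--abc=123", "--xyz=z", "--test=1", "--test=2"]))
# → ['--abc=123', '--xyz=z', '--test=2']
```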

    TrayIcon
    Server
    Agent
    duplicati-server
    /Applications/Duplicati.app/Contents/MacOS/duplicati-server
    {
      "args": {
        "agent": [ "--agent-registration-url=<registration-url>" ]
      }
    }
    duplicati-cli help
    /Applications/Duplicati.app/Contents/MacOS/duplicati-cli help
    duplicati-server-util help
    /Applications/Duplicati.app/Contents/MacOS/duplicati-server-util help
    sudo journalctl --unit=duplicati | less
    > duplicati-server-util change-password
    Duplicati.CommandLine.ServerUtil change-password \ 
      --server-datafolder "C:\Windows\System32\config\systemprofile\AppData\Local\Duplicati"
    duplicati-server-util change-password \
      --server-datafolder=/root/.config/Duplicati
    --webservice-password=<new password>
    --webservice-disable-signin-tokens=true
    /usr/share/IMG_????.jpeg
    /home/*/Download/
    *.iso
    [/usr/share/IMG_\d{4}\.jpeg]
    [/home/[^/]+/Download/]
    [.*\.iso]
    {CacheFiles,TemporaryFiles}
    +/usr/share/*.txt
    -*.txt
    -*.bin
    +/usr/share/*.bin
    -/usr/share/
    {
      "env": {
        "*": { 
          "SETTINGS_ENCRYPTION_KEY": "<key>"
        }
      }
    }
    C:\Windows\System32\config\systemprofile\AppData\Local\Duplicati
    C:\ProgramData\Duplicati
    ftp://<hostname>/<path>
      ?auth-username=<username>
      &auth-password=<password>
    {
      "GD_CLIENT_ID": "<Put Client ID here>",
      "GD_CLIENT_SECRET": "<Put Client secret here>"
    }
    - ASPNETCORE_URLS: "http://localhost:8080"
    - HOSTNAME: "localhost:8080"
    - SECRETS: "/path/to/secrets.json.aes"
    - SECRETS_PASSPHRASE: "<encryption passphrase>"
    - STORAGE: "file:///path/to/persisted/folder"
    - SERVICES: "googledocs"
    - ASPNETCORE_URLS: "https://localhost:8080"
    - HOSTNAME: "localhost:8080"
    - SECRETS: "/path/to/secrets.json.aes"
    - SECRETS_PASSPHRASE: "<encryption passphrase>"
    - STORAGE: "file:///path/to/persisted/folder"
    - SERVICES: "googledocs"
    - ASPNETCORE_Kestrel__Certificates__Default__Path: "/path/to/certificate.pfx"
    - ASPNETCORE_Kestrel__Certificates__Default__Password: "<certificate password>"
    OAuthServer run 
      --listen-urls=http://localhost:8080 
      --hostname=localhost:8080
      --storage=file:///path/to/persisted/folder
      --secrets=/path/to/secrets.json.aes
      --secrets-passphrase=<encryption passphrase>
      --services=googledocs
    OAuthServer run 
      --listen-urls=https://localhost:8080 
      --hostname=localhost:8080
      --storage=file:///path/to/persisted/folder
      --secrets=/path/to/secrets.json.aes
      --secrets-passphrase=<encryption passphrase>
      --services=googledocs
      --certificate-path=/path/to/certificate.pfx
      --certificate-password=<certificate password>
    C:\Program Files\Duplicati 2\Duplicati.GUI.TrayIcon.exe
    C:\Program Files\Duplicati 2\Duplicati.Server.exe
    C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe INSTALL
    C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe UNINSTALL
    C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe INSTALL --webservice-port=8100 --server-datafolder=<path>
    C:\Program Files\Duplicati 2\Duplicati.GUI.TrayIcon.exe --no-hosted-server --host-url=http://localhost:8200 --webservice-password=<password>
    C:\Program Files\Duplicati 2\Duplicati.Agent.exe register <registration url>
    {
      "args": {
        "agent": ["--agent-registration-url=<registration-url>"]
      }
    }
    C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe help
     C:\Program Files\Duplicati 2\Duplicati.CommandLine.ServerUtil.exe help
    duplicati-cli set-locks <storage-url> --remote-file-lock-duration=1M --version=0
    duplicati-cli read-lock-info <storage-url> --refresh-lock-info-complete=true
    {
      "env": {
        "*": {
          "TEMP": "/mnt/tmp",
          "LOGGING": "false"
       },
       "tray": {
          "LOG": "1"
        },
        "server": {
            "DUPLICATI__WEBSERVICE_ALLOWED_HOSTNAMES": "m1"
        }
      },
    
      "db": {
        "server": {
          "--compression-module": "zip",
          "--send-http-result-output-format": "Json"
         }
      },
    
      "args": {
        "tray": [ "--hosturl=http://m1:8299" ],
        "server": [ "--webservice-port=8299" ]
      }
    }
    "env": {
      "*": {
        "E1": "a",
        "E2": "b"
      },
      "tray": {
        "E1": "c",
        "E3": "d"
      }
    }
    "env": {
      "*": {
        "E3": "f"
      },
      "tray": {
        "E1": "g"
      }
    }
    "args": {
      "*": ["--test=1", "--abc=123"],
      "server": ["--xyz=z", "--test=1", "--test=2"]
    }
    ["--test=1", "--abc=123", "--xyz=z", "--test=1", "--test=2"]
    ["--abc=123", "--xyz=z", "--test=2"]

    To connect to the vault, provide the url as part of the configuration:
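An example url, following the parameters described in this section (hostname, port, and the option names are placeholders to adapt; verify against your Duplicati version):

```
hcv://localhost:8200?token=<token>&secrets=secret1,secret2
```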

The url is converted to the url used to connect to the vault (e.g., https://localhost:8200 in this example). The token is used to authenticate, and the secrets values name the vaults that secrets are read from.

In the cloud-based offering, the "secrets" values shown here are referred to as Apps, and in the CLI as "mount points". When more than one value is supplied, the vaults are tried in order, stopping once all secrets are resolved. This means that if the same secret key is found in two vaults, the value from the first vault examined is used.

    hashtag
    Other options for hcv://

    For development purposes, the url can use a http connection by setting &connection-type=http , but this should not be used in production.

    To connect using a credential pair instead of the token, the credentials can be provided with the values client-id and client-secret , but should be passed via the environment variables:

    By default, the key lookup is done case-insensitive but can be toggled case-sensitive with the option &case-sensitive=true.

    hashtag
    Amazon Secret Manager

    The provider for AWS Secret Managerarrow-up-right supports the AWS hosted vault. The credentials for the vault are the regular Access Key Id and Access Key Secret. While these can be provided via the secret provider url as access-id and secret-key, they should be passed via the environment variables:

The secrets values name the vaults to use (called "Secret Name" in the AWS Console). When more than one value is supplied, the vaults are tried in order, stopping once all secrets are resolved. This means that if the same secret key is found in two vaults, the value from the first vault examined is used.

Instead of supplying the region, the entire service point url can also be provided via &service-url=.

    By default, the key lookup is done case-insensitive but can be toggled case-sensitive with the option &case-sensitive=true.

    If you use IAM to create an account for this, you can use the policy SecretsManagerReadWrite.

    circle-exclamation

    You need to create the "Secret Name" in AWS console before you can use it.

    The secrets in AWS should be key/value and can contain multiple values. Setting a value will always append it to the first secret provided.

    hashtag
    Google Cloud Secret Manager

The secret provider for Google Cloud Secret Managerarrow-up-right relies on the Google Cloud SDKarrow-up-right to handle the authentication. Follow the steps to get the environment authenticatedarrow-up-right with Google. After the authentication is complete, the configuration is:

    If you need to integrate with a different flow you can also supply an access tokenarrow-up-right, but notice that the token may be short-lived and you cannot change the token after configuring the secret provider:

    hashtag
    Additional options for gcsm://

By default, the secrets are accessed with the version set to latest, but this can be changed with &version=. The communication protocol can be changed from gRPC to https by adding &api-type=Rest.

    It is also possible to use either service-account-file or service-account-json to authenticate with a service account.

    hashtag
    Configure a service account for Google Cloud Secret Manager

    To configure a service account for Google Cloud Secret Manager, visit IAM arrow-up-right→ Service Accountsarrow-up-right → Create.

    Use the role Secret Accessor for read-only access, and add Secret Version Adder if the account should be able to update the secrets.

    Then create the key with: Service account → Keys → Add key → Create new key → JSON.

    This will generate a key that will be downloaded. You can either use the key-file with service-account-file=<path-to-file>, or you can url-encode the contents and supply it to service-account-json=<url-encoded-json>.

    hashtag
    Azure Key Vault

    With Azure Key Vaultarrow-up-right as the provider there are several options for authenticating, where the most secure method is to use the Azure CLI loginarrow-up-right that handles all the details. Since this method is the default, the secret provider can be configured as:

    Instead of supplying the name of the keyvault, the full vault url can be supplied with &vault-uri=.

    hashtag
    Manually authenticating

Instead of relying on the automated login handling, it is possible to authenticate with either a client credential or a username/password pair.

    For authenticating with client credentials, use:

    And for username/password, use:

    hashtag
    How to configure manual access to an Azure Key Vault

There are several ways as shown above, and the recommended one is the Azure CLI method, but here is a description of a manual way: creating a client secret.

    First, visit Portal → Microsoft Entra ID → App registrations → New registration.

    Enter a name for the application and create it. On the application screen, record the Application Id and Directory Id:

    Then visit: App → Certificates & secrets → Client secrets → New client secret.

    Create a new client secret, and make sure you copy the secret as it will not be shown again:

Even with all the information, you still need to grant permissions for the app/client to access the key vault: Key Vault → Access control (IAM) → Add role assignment.

If you need read-only access, use the role Key Vault Secrets User; if you need read/write access, use the role Key Vault Secrets Officer. On the member page, add the app/client.

Now you have all the details to connect to the vault. Add them as shown above, and make sure to url-encode the values, particularly the secret, as it may contain characters that are not url-safe. You need to encode: keyvault-name, tenant-id, client-id, and client-secret.
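Any standard library can do the url-encoding; a small Python sketch with hypothetical placeholder values:

```python
from urllib.parse import quote

# Hypothetical values; substitute the ones recorded from the Azure portal.
values = {
    "keyvault-name": "my-vault",
    "tenant-id": "00000000-0000-0000-0000-000000000000",
    "client-id": "11111111-1111-1111-1111-111111111111",
    "client-secret": "s3cr3t+value/with=unsafe&chars",
}

# quote with safe="" percent-encodes every reserved character,
# so '+', '/', '=', and '&' cannot break the query string.
query = "&".join(f"{k}={quote(v, safe='')}" for k, v in values.items())
print(query)
```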

    the section on how to avoid passing credentials on the commandline
    how to protect against outages
    HashiCorp Vaultarrow-up-right

    Agent

    This page describes the Agent executable

The Duplicati Agent is one of the primary ways to run Duplicati, similar to the Server and TrayIcon. The Agent can be deployed in settings where there is no desktop or where user interaction is not desired. The Agent needs to connect to a remote control destination from where it can be controlled, and due to this, it employs a number of additional settings that prevent applications running on the same machine from interacting with the Agent.

    circle-info

    The Agent requires an Enterprise plan

    A benefit from using the Agent is that it will only communicate over TLS encrypted connections and does not require you to manually handle the configuration of certificates for the Server.

    The Agent binary is called Duplicati.Agent.exe on Windows and duplicati-agent on Linux and MacOS.

    hashtag
    Registering the machine

    When the Agent starts for the first time, it will attempt to register with the Duplicati Console. To do this, it will open a browser window where the user can accept the registration and add the machine to their account. If the Agent needs to be registered without user interaction, a pre-authorized link can be generated on the :

    To register the Agent, run the following command:

    This will cause the Agent to register using the token from the url and the --agent-register-only option will cause it to exit after registration has completed. If the Agent is already registered, it will simply exit.

    To remove the registration information, use the command:

    After the settings are cleared, the agent can be registered again.

    The Agent settings are stored in a file called agent.json in the same folder where the is stored. The file path can be supplied with --agent-settings-file and the file can be encrypted with the setting --agent-settings-file-passphrase.

    To protect the settings file passphrase, it is possible to use the .

    hashtag
    Configuring the hosted server

    The Agent is not intended to be accessible locally and for that reason, it is locked down with a number of settings. If you need to configure the Server, most of the options can be given to the Agent and passed on to the server. This includes --webservice-port and --settings-encryption-key.

    The hosted agent server will use the port 8210 by default, to not clash with the regular Duplicati instance on port 8200.

    hashtag
    Opening the hosted server for local access

    To make the hosted server fully accessible from the local machine that it is running on, add the following settings:

    The first option, --disable-pre-shared-key, will disable the random key that is required for all requests to the webserver. This key is a random value that is generated on each start, and only kept in memory, preventing any requests to the Duplicati API.

The second option, --webservice-api-only=false, will enable access to the static .html, .css, and .js files that provide the UI.

    The last option sets the UI password, which would otherwise be a randomly generated password.

    You may also want to re-enable the signin tokens with --webservice-disable-signin-tokens=false.
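Collected, the settings described in this section look like this (the password is a placeholder; boolean flags may also be written without an explicit =true):

```
--disable-pre-shared-key=true
--webservice-api-only=false
--webservice-password=<password>
--webservice-disable-signin-tokens=false
```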

    hashtag
    WindowsService support

    The Duplicati.WindowsService.exe installer can also install the Agent as a service:

    Note that since the Agent cannot open a browser from within the service context, it will instead emit the link that is used to claim the Agent in the Windows Event Log. You need to find the link from there and open it in a browser to claim the machine. Alternatively, use the , but beware that you need to run in the same context as the service, or the agent.json file will be placed in another folder.

    circle-info

    The Agent installer does not work in Duplicati 2.1.0.5 or older, due to incorrectly passed commandline argument order. Use a later version to get the Agent running as a service.

    Similarly, you can uninstall the Agent service with:

    hashtag
    Linux service

    On Linux-based installations, the Agent installer will create the service files, which can be used to automatically start and run the Agent:

    As is common for other services, additional start parameters can be added to /etc/default/duplicati.

    Note that when running the service, the Agent does not have access to the desktop environment (if one even exists) and it cannot open the registration url in the browser. Instead, it will emit a url in the system logs that you need to open to register the machine. Alternatively, use the , but beware that you need to run in the same context as the service, or the agent.json file will be placed in another folder.

    hashtag
    MacOS support

    When installing on MacOS, the packages will register a launchagent that will start the Agent on each login. The assumption here is that the desktop context contains a browser, so the Agent will open the registration url in the default browser.

    To use a pre-authenticated url, use the , and then restart the service to have it pick up the updated agent.json file.

    Command Line Interface CLI

    This page describes the command line interface (CLI)

The commandline interface gives access to run all Duplicati operations without needing a server instance running. This is useful if your setup does not need a UI and you want to use an external scheduler to perform the operations.

    The binary is called Duplicati.CommandLine.exe on Windows and duplicati-cli on MacOS/Linux. All commands from the commandline interface follow the same structure:

    Each command also uses the option --dbpath=<path to local database>. If it is not supplied, Duplicati will use a shared JSON file in the settings folder to keep track of which database belongs to each backup. Since no state is given, the remote url is used as a key, because it is expected to uniquely identify each backup. If no entry is found, a new entry is created and subsequent operations will use that database.
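    As a sketch, the database path can be pinned explicitly per backup; the destination url and paths below are placeholder examples:

    ```shell
    duplicati-cli backup ssh://backup.example.com/backups /home/user/documents \
      --dbpath=/home/user/.config/duplicati/documents.sqlite
    ```

    Supplying --dbpath bypasses the shared lookup file entirely, which can be useful when scripting several backups against similar destinations.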

    Most options are independent and can be applied in any order, but some options, most notably the filter options, are order sensitive and must be supplied in the order they are evaluated. The remote url is a url-like representation of the storage destination and its options. The page provides an overview of what is currently supported.

    The list of options that are supported is quite extensive, and only the most common options are described on this page. The sensitive options --passphrase, --auth-username, and --auth-password can also be supplied through the matching environment variables: PASSPHRASE, AUTH_USERNAME, and AUTH_PASSWORD. For further safeguarding of these values, see the section on .

    All commands support the --dry-run parameter that will simulate the operations and provide output, but not actually change any local or remote files.

    hashtag
    The help command

    The commandline interface has full documentation for all supported options and some small examples for each of the supported operations. Running the help command will output the possible topics:

    To list all options supported by the commandline interface, run the following command:

    Note that the number of options is quite large, so you will likely need to use some kind of search functionality to navigate the output.

    hashtag
    Backup

    The most common command is the backup command, together with the related restore command. To run a backup, use the following command:

    The source path argument can be repeated to include multiple top-level folders. By default, backups are encrypted on the remote destination, and if no passphrase is supplied with --passphrase, the commandline interface will prompt for one. If the backups should be done unencrypted, provide the option --no-encryption.

    The most common additional option(s) supplied are the filter options. The filters can selectively change what files and folders are excluded from the source paths. The describe the format of filters. Filters are supplied with the --include and --exclude options. For example:

    When supplying only exclude filters, any file not matching an exclude is included; likewise, if only include filters are present, anything not matching an include is excluded. The order of the arguments defines the order in which the filters are evaluated. Beware that some symbols, such as * and \, need to be escaped on the commandline, and the rules vary based on operating system and terminal application/shell.
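    For example, on a Linux shell, quoting the filters keeps the shell from expanding the wildcards itself. In this sketch, the specific include is listed before the broader exclude, so the tmp-important folder is kept even though it also matches the exclude pattern (paths and destination are placeholders):

    ```shell
    duplicati-cli backup ssh://backup.example.com/backups /home/user/data \
      --include="/home/user/data/tmp-important/" \
      --exclude="*/tmp-*"
    ```

    Reversing the two arguments would exclude the folder, because the first matching filter decides.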

    If any of the --keep-time, --keep-versions, or --retention-policy options are set, a successful backup will subsequently invoke the delete and compact operations as needed. This enables a single command to run all required maintenance, but these steps can also be invoked manually.
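    For illustration, a backup that keeps only the five most recent versions, so delete and compact run automatically after each successful backup (the destination is a placeholder):

    ```shell
    duplicati-cli backup ssh://backup.example.com/backups /home/user/data \
      --keep-versions=5
    ```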

    hashtag
    Restore

    The restore command is just as important as the backup command and can be executed with:

    The restore command in this form will restore the specified file(s) to their original location. If a file is already present in the original location, the files will be restored with a timestamp added to their name. If no files are specified, or the filename is *, all files will be restored.

    To restore to a different location than the original, such as to a staging folder, use the option --restore-path=<destination>. The restore will find the shortest common path for the files to restore, and make a minimal folder structure to restore into.

    If you are sure you want to restore the files, and potentially lose existing files, use the option --overwrite.

    The restore command will restore from the latest version of the backup, but other versions can be selected with the --version=<version> option. As with backups, the --include and --exclude options can be used to narrow down the files to restore.
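    A hypothetical restore that combines these options, using a placeholder destination and version number:

    ```shell
    duplicati-cli restore ssh://backup.example.com/backups "*.docx" \
      --version=2 \
      --restore-path=/tmp/restore-staging \
      --overwrite
    ```

    Restoring into a staging folder like this is a safe way to inspect older versions before touching the originals.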

    hashtag
    Find

    The find command is responsible for locating files within the backups:

    If no filename is specified, the command will instead list all the known backup versions (or "snapshots"). Multiple filenames can be specified, and they are all treated as . If a full file path is specified, the find command will instead list all versions of that file.

    To list files in a specific version, use the --version=<version> option. To search across all versions, use the --all-versions option.
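    For illustration, with a placeholder destination:

    ```shell
    # List all known backup versions
    duplicati-cli find ssh://backup.example.com/backups

    # Search for matching files across all versions
    duplicati-cli find ssh://backup.example.com/backups "*report*" --all-versions
    ```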

    As with backup and restore, the --include and --exclude filters can be added to assist in narrowing down the search output.

    A related operation is the "compare" command, which will show a summary of differences between two versions.

    hashtag
    Handling exceptional situations

    For normal use, the backup, restore, and find commands should be sufficient. However, in some exceptional cases, manual intervention may be needed to fix a problem. If such a situation occurs, Duplicati will abort the backup and give an error message that indicates the problem.

    hashtag
    Repair

    If the local database is missing or somehow out-of-sync with the remote storage, it can be rebuilt with the repair command. The repair command is invoked with:

    If the local database is missing, it is recreated from the remote storage. If the local database is present, the repair command will attempt to recreate any data that is missing on the remote storage. This is only possible if the missing data is still available on the local system; if the required data is gone, the repair command will fail with an error message explaining what is missing.

    hashtag
    List broken files

    The command list-broken-files will check which remote files are missing or damaged and report what files can no longer be restored due to this:

    The related command "affected" gives a similar output, where it reports what files would be lost if the given remote files were damaged. It is possible that files can be partially restored despite damaged remote files. For handling partial restores, see the section on .

    hashtag
    Purge broken files

    If the remote files cannot be recovered, but you would like the backup to continue, you can use the purge-broken-files command to rewrite the remote storage to simply exclude the files that are no longer restorable:

    After successfully purging the broken files, the local database and remote storage will be in sync and you can continue making backups.

    The related command "purge" can be used to selectively remove files from the backup.

    After purging files, you can run the compact command to release space that was held by the removed files.
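    A sketch of that sequence, with a hypothetical file path and a placeholder destination:

    ```shell
    # Remove one file from all backup versions
    duplicati-cli purge ssh://backup.example.com/backups /home/user/data/old-vm.img

    # Reclaim the remote space the purged data occupied
    duplicati-cli compact ssh://backup.example.com/backups
    ```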

    Using Duplicati from Docker

    This page describes common scenarios for configuring Duplicati with Docker

    The Duplicati Docker images are available from and are released as part of the regular releases. The Docker images provided by Duplicati are quite minimal and include only the binaries required to run Duplicati. There are also variations of the , including the popular variant.

    hashtag
    Configure the image

    The Duplicati Docker images use /data inside the container to store configurations and any files that should persist between container restarts. Note that other images may choose a different location to store data, so be sure to follow the instructions for the image you are using.

    Single Sign-On (SSO)

    This page describes how to set up SSO with the Duplicati Console, using Okta as an example

    In this guide we will set up an application and, where needed, configure an access policy for the authorization server in Okta. While the guide uses Okta as an example, other OIDC or SAML2 providers, including Azure, can be used as well.

    circle-info

    SSO is an additional Enterprise feature. Contact Duplicati sales or support if you need SSO enabled for your license or trial.

    --secret-provider=
      hcv://localhost:8200?token=<token>&secrets=app1,app2
    export HCP_CLIENT_ID=<client-id>
    export HCP_CLIENT_SECRET=<secret>
    
    --secret-provider=hcv://localhost:8200?secrets=app1
    export AWS_ACCESS_KEY_ID=<id>
    export AWS_SECRET_ACCESS_KEY=<key>
    
    --secret-provider=awssm://?region=us-east-1&secrets=vault1,vault2
    --secret-provider=gcsm://?project-id=<projectid>
    --secret-provider=gcsm://?project-id=<projectid>&token=<token>
    --secret-provider=azkv://?keyvault-name=keyvault
    --secret-provider=azkv://?keyvault-name=keyvault
      &auth-type=ClientSecret
      &tenant-id=<tenantid>
      &client-id=<clientid>
      &client-secret=<secret>
    --secret-provider=azkv://?keyvault-name=keyvault
      &auth-type=UsernamePassword
      &tenant-id=<tenantid>
      &client-id=<clientid>
      &username=<username>
      &password=<password>
    duplicati-cli <command> <remote url> [arguments and options]
    Duplicati Console registration pagearrow-up-right
    Server database
    secret provider
    method outlined above to register the machine
    method outlined above to register the machine
    method outlined above to register the machine
    The Duplicati Console with a pre-authorized link
    destination overview
    using the secret provider
    page on filters
    filter expressions
    disaster recovery

    You also need a way to sign in to the server after it has started. You can either watch the log output, which will emit a special signin url with a token that expires a few minutes after startup, or provide the password from within the configuration file.

    To ensure that any secrets configured within the application are not stored in plain text, it is also important to set up the database encryption key.

    See also the DockerHub page for details on how to configure the image: https://hub.docker.com/r/duplicati/duplicati/arrow-up-right

    hashtag
    Hostname access

    Duplicati's server allows access from IP-based requests, but disallows requests that use a hostname. This is done to prevent certain DNS-based attacks, but it also blocks legitimate requests that use the correct hostname. To allow those, set the environment variable:

    Setting this environment variable enables access via the listed hostnames instead of IP addresses only. The special hostname * disables the protection and allows any hostname, but this is not recommended for security reasons.

    hashtag
    Managing secrets in Docker

    At a minimum, you should provide the settings encryption key to the container, and perhaps also the webservice password. You can provide these via regular environment variables:

    But you can make it a bit more secure by using Docker secretsarrow-up-right which are abstracted as files that are mounted under /run/secrets/. Since Duplicati does not support reading files in place of the environment variables, you can either use a preload configuration file or use one of the secret providers.

    hashtag
    Using a preload file

    To use the preload approach, prepare a preload.json file with your encryption key:

    You can then configure this in the compose file:

    hashtag
    Using a secret manager

    Setting up the secret manager is a bit more work, but it has the benefit of being able to configure multiple secrets in a single place. To configure the file-based secret provider, you need to create a secrets.json file such as this:

    Then set it up in the compose file:

    It is also possible to use one of the other secret providers, such as one that fetches secrets from a secure key vault. In this case, you do not need the secrets.json file, but can just configure the provider.

    hashtag
    Read locked files

    Duplicati has support for LVM-based snapshots, which are the recommended way to get a consistent point-in-time copy of the disk. For some setups it is not possible to configure LVM snapshots, and this can cause problems due to some files being locked. By default, Duplicati respects advisory file locking and fails to open locked files, as a lock usually indicates that the file is in use, and reading it may not result in a meaningful copy.

    If you prefer to make a best-effort backup, which was the default in Duplicati v2.0.8.1 and older, you can disable advisory file locking for individual jobs with the advanced option: --ignore-advisory-locking=true. You can also disable file locking support entirely in Duplicati:

    hashtag
    Running behind a proxy

    If you want to run Duplicati behind an nginx proxy, you can use a docker-compose configuration like this example:

    And then use an nginx.conf file like this example:

    hashtag
    Pre-authenticated with reverse proxy

    If your proxy setup already authenticates the user and you prefer not having to use another password to access Duplicati, you can configure the proxy to forward a preconfigured authentication header.

    It is not possible to disable authentication for Duplicati, as that would make it possible to accidentally expose the server without access control. To avoid being asked for a password on each access, you need to generate a random token that the nginx server passes to Duplicati, which serves as authentication and grants access to Duplicati.
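    One way to create such a token, assuming OpenSSL is installed, is to emit 32 random bytes as 64 hex characters:

    ```shell
    openssl rand -hex 32
    ```

    Any other cryptographically secure random generator works as well; the token just needs to be long and unguessable.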

    triangle-exclamation

    This setup bypasses the Duplicati authentication so make sure your authentication system is sufficiently secure before deploying it.

    When you have a secure random token, make Duplicati trust it via the pre-authenticated header:

    Then make the nginx proxy forward the header on each request:

    DockerHubarrow-up-right
    Duplicati images provided by third partiesarrow-up-right
    linuxserver/duplicatiarrow-up-right
    duplicati-agent run \ 
      --agent-registration-url="<pre-authorized url>" \
      --agent-register-only
    duplicati-agent clear
    duplicati-agent \
      --disable-pre-shared-key \
      --webservice-api-only=false \
      --webservice-password=<password>
    Duplicati.WindowsService.exe INSTALL-AGENT <options>
    Duplicati.WindowsService.exe UNINSTALL-AGENT
    sudo systemctl enable duplicati-agent.service
    sudo systemctl daemon-reload
    sudo systemctl start duplicati-agent.service  
    sudo systemctl status duplicati-agent.service
    See duplicati-cli help <topic> for more information.
      General: example, changelog
      Commands: backup, find, restore, delete, compact, test, compare, purge, vacuum
      Repair: repair, affected, list-broken-files, purge-broken-files
      Debug: debug, logging, create-report, test-filters, system-info, send-mail
      Targets: aliyunoss, azure, b2, box, cloudfiles, dropbox, ftp, aftp, file, gcs, googledrive, e2,
      jottacloud, mega, msgroup, onedrivev2, openstack, rclone, s3, ssh, od4b, mssp, sharepoint, sia,
      storj, tahoe, cos, webdav
      Modules: aes, gpg, zip, console-password-input, http-options, hyperv-options, mssql-options,
      runscript, sendhttp, sendxmpp, sendtelegram, sendmail
      Formats: date, time, size, decimal, encryption, compression
      Advanced: mail, advanced, returncodes, filter, filter-groups, <option>
      Secrets: secret, <provider>
    duplicati-cli help advanced
    duplicati-cli backup <remote url> <source path> [options]
    --exclude=*.iso
    --exclude=Thumbs.db
    --exclude=*/tmp-*
    duplicati-cli restore <remote url> <filename> <options>
    duplicati-cli find <remote url> <filename> <options>
    duplicati-cli repair <remote url>
    duplicati-cli list-broken-files <remote url> <options>
    duplicati-cli purge-broken-files <remote url> <options>
    environment:
      DUPLICATI__WEBSERVICE_ALLOWED_HOSTNAMES: <hostname1>;<hostname2>
    services:
      myapp:
        image: duplicati/duplicati:latest
        volumes:
          - ./data:/data
        environment:
          SETTINGS_ENCRYPTION_KEY: "<real encryption key>"
          DUPLICATI__WEBSERVICE_PASSWORD: "<ui password>"
    {
      "env": {
        "server": {
            "SETTINGS_ENCRYPTION_KEY": "<real encryption key>",
            "DUPLICATI__WEBSERVICE_PASSWORD": "<ui password>"
        }
      }
    }
    services:
      myapp:
        image: duplicati/duplicati:latest
        volumes:
          - ./data:/data
        environment:
          DUPLICATI_PRELOAD_SETTINGS: /run/secrets/preloadsettings
        secrets:
          - preloadsettings
    
    secrets:
      preloadsettings:
        file: ./preload.json
    {
      "settings-key": "<real encryption key>",
      "ui-password": "<real UI password>"
    }
    services:
      myapp:
        image: duplicati/duplicati:latest
        volumes:
          - ./data:/data
        environment:
          SETTINGS_ENCRYPTION_KEY: "$$settings-key"
          DUPLICATI__SECRET_PROVIDER: file-secret:///run/secrets/secretprovider
          DUPLICATI__WEBSERVICE_PASSWORD: "$$ui-password"
        secrets:
          - secretprovider
    
    secrets:
      secretprovider:
        file: ./secrets.json
    services:
      myapp:
        image: duplicati/duplicati:latest
        volumes:
          - ./data:/data
        environment:
          SETTINGS_ENCRYPTION_KEY: "<real encryption key>"
          DUPLICATI__WEBSERVICE_PASSWORD: "<ui password>"
          DOTNET_SYSTEM_IO_DISABLEFILELOCKING: "true"
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "8200:8200"
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
        depends_on:
          - duplicati
        restart: unless-stopped
    
      duplicati:
        image: duplicati/duplicati:latest
        environment:
          DUPLICATI__WEBSERVICE_PASSWORD: "<ui password>"
        volumes:
          - ./data:/data
        restart: unless-stopped
    events {
      worker_connections 1024;
    }
    
    http {
      server {
        listen 8200;
        server_name localhost;
    
        location / {
          proxy_pass http://duplicati:8200;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
        }
      }
    }
    services:
      myapp:
        image: duplicati/duplicati:latest
        volumes:
          - ./data:/data
        environment:
          SETTINGS_ENCRYPTION_KEY: "<real encryption key>"
          DUPLICATI__WEBSERVICE_PRE_AUTH_TOKENS: "<secure random token>"
    http {
      server {
        listen 8200;
        server_name localhost;
    
        location / {
          proxy_pass http://duplicati:8200;
          proxy_set_header PreAuth <secure random token>;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
        }
      }
    }
    hashtag
    Create a Duplicati application in Okta
    1. Sign in to your Okta account.

    2. Navigate to the Admin page.

    3. In the left menu, select Applications.

    circle-info

    Ensure you have an Okta account available with super admin rights.

    hashtag
    Choose sign-in method and application type

    In the dialog for creating the application, choose these two options:

    • Sign-in method: OIDC - OpenID Connect

    • Application type: Web Application

    Then click Next.

    hashtag
    Configure the Duplicati application in Okta

    1. Choose a suitable application name, such as Duplicati.

    2. Note that Sign-in redirect URIs must be provided later — leave it at default for now.

    3. Set controlled access, preferably limiting access to selected groups for better control.

    hashtag
    Configure Access Policies for the Duplicati application in Okta

    1. Go to Security → API.

    2. Here you can:

      • Retrieve the Metadata URI needed for SSO configuration in Duplicati.

      • Verify existing access policies.

    If no access policies are present, or you want another one:

    1. Click Add New Access Policy.

    2. Configure it to match your security requirements.


    hashtag
    Add Okta SSO to Duplicati

    1. In the Duplicati Console, go to the Settingsarrow-up-right page.

    2. Click the SSO tab.

    3. The bold SSO name (example shown as “SSO Demo”) is case-sensitive and is required later at login.

    4. Click New SSO Configuration and choose Add OIDC.

    circle-info

    If the SSO tab is not visible, SSO may not be enabled for your organization; contact Duplicati sales or support.

    hashtag
    Configure the OIDC connection in Duplicati Console

    To configure OIDC, fill in values from the Okta application.

    • Name: Used to identify the login method for users. A suggested name is Okta.

    • Notes: Free text, only used in this dialog.

    • Default security group: New users must be assigned to a group to join the organization. Select the standard owner group created with the organization.

    circle-info

    The default group affects only users who have not yet logged in to Duplicati Console. It will not change the group(s) of existing users.

    hashtag
    Enter Client ID, Client Secret, and Metadata URI

    1. In Okta, open your application page.

    2. Copy:

      • Client Id

      • Client secret

    3. Paste both into the Duplicati Console OIDC dialog.

    hashtag
    Metadata URI

    1. In Okta, go to Security → API → Settings.

    2. Copy the Metadata URI and paste into the metadata address field in Duplicati.

    If Metadata URI is not shown (some Okta plans):

    Use your Okta domain (from the Okta URL or Issuer field) in:

    hashtag
    Initial configured OIDC dialog

    Your configuration should look similar to the example shown in the guide once the fields are filled.

    hashtag
    Updating Okta for the connection

    When creating the Okta app earlier, the redirect URI was left at default because it wasn’t available yet. Now we will update it.

    hashtag
    Obtain the redirect URI

    1. In Duplicati Console, open the SSO configuration list.

    2. For the relevant SSO configuration, open the action menu.

    3. Click the copy button to copy the redirect URI.

    hashtag
    Configure redirect URI in Okta

    1. In Okta, open your application front page.

    2. Scroll to General Settings.

    3. Click Edit.

    4. Paste the redirect URI into Sign-in redirect URIs.

    5. Click Save.


    hashtag
    Sign in with Okta SSO

    Once configured, you can log in with Okta.

    hashtag
    Add Okta login to your existing account

    1. In Duplicati Console, go to your Account pagearrow-up-right.

    2. Click Add login account.

    3. Choose the new Okta integration.

    This allows your current account to be accessed with either login method.


    hashtag
    New users logging in with Okta

    1. Log out of Duplicati Console.

    2. On the login screen, choose Sign in with SSO.

    3. Enter your organization’s SSO name (case-sensitive).

      • The name appears on the SSO configuration page.

      • If not, obtain it from Duplicati Inc.

    4. After entering a valid name, you’ll see available login options.

      • Typically there is one option, but multiple can be configured.

    5. Click the login button to be redirected to Okta and complete sign-in.

    Configuring HTTPS

    This page explains how to configure HTTPS for secure web UI access in Duplicati.

    circle-info

    Available since: Canary 2.2.0.106

    Duplicati can automatically generate HTTPS certificates for secure web UI access. This document explains the security model and provides platform-specific instructions for configuring HTTPS.

    hashtag
    Certificate Approach

    Browsers currently reject certificates with a validity period of more than 90 days, with the logic that rotation should be frequent and automated to prevent long-term exposure to potential vulnerabilities. Since Duplicati serves on localhost by default, there is no Certificate Authority (CA) to request certificates from.

    If Duplicati created a self-signed certificate, it would need to be re-authorized after 90 days, which would require admin permissions and possibly manual intervention each time. To avoid this, Duplicati generates its own Certificate Authority (CA) and uses it to sign server certificates.

    The CA private key is stored in the Duplicati database file, which is encrypted with the database encryption key. When a new certificate is needed, Duplicati generates a new certificate signed by the CA. Since the CA is local and not shared with any external service, this approach provides a secure way to manage certificates without requiring external dependencies. As the CA is trusted by the user or system, this enables Duplicati to automatically renew certificates without user intervention, in a way that is trusted by the user's browser.

    circle-exclamation

    Security Warning: While the CA is local, it is still a CA and can be used to sign certificates for other domains. If someone gains access to the Duplicati database, they can use the CA to sign certificates for other domains, effectively enabling an undetected man-in-the-middle attack.

    If you prefer providing your own certificate, you can do so by setting the server-ssl-certificate and server-ssl-certificatepassword settings. This will not activate auto-renewal or generate a CA.

    hashtag
    Certificate Authority (CA) Security Model

    When HTTPS is configured, Duplicati creates a local Certificate Authority (CA) with the following security properties:

    hashtag
    Local-Only CA

    • The CA is generated locally on your machine and is not shared with any external service

    • The CA certificate is installed only in your system's local trust store

    • Other machines do not trust this CA unless explicitly configured to do so

    hashtag
    Certificate Validity Periods

    • CA Certificate: Valid for approximately 10 years

    • Server Certificate: Valid for 90 days

    • Auto-renewal: Server certificates are automatically renewed 30 days before expiration

    hashtag
    CA Key Storage Security

    The CA private key is stored with multiple layers of protection:

    1. Encryption: The private key is encrypted using AES-256 with a password-derived key

    2. Password Separation: The encryption password is stored separately from the encrypted key

    3. Database Encryption: If database field encryption is enabled, an additional encryption layer is applied

    circle-exclamation

    Important: The security of your HTTPS certificates depends on the security of your Duplicati database file. Ensure that database encryption is enabled.

    hashtag
    Platform-Specific Configuration

    hashtag
    Windows

    On Windows, the CA certificate is installed in the certificate store. By default, it uses the LocalMachine store when running as Administrator, or CurrentUser store when running as a regular user. A dialog is shown to confirm the installation location when installing without Administrator privileges.

    hashtag
    Generating Certificates on Windows

    Open Command Prompt or PowerShell as Administrator (for system-wide trust) or as a regular user (for user-only trust):

    To specify a particular certificate store:

    Valid store values are:

    • local or machine - Install in the LocalMachine store (requires Administrator privileges)

    • user or currentuser - Install in the CurrentUser store

    hashtag
    Trust Store Location

    The CA certificate is installed in:

    • LocalMachine: Cert:\LocalMachine\Root (system-wide trust)

    • CurrentUser: Cert:\CurrentUser\Root (user-only trust)

    hashtag
    Linux

    On Linux, the CA certificate is installed in distribution-specific certificate directories. The ConfigureTool will automatically detect the correct location for your distribution.

    hashtag
    Generating Certificates on Linux

    Run the ConfigureTool with appropriate privileges (typically requires sudo for system-wide trust):

    To specify a custom certificate directory:

    hashtag
    Common Certificate Directories

    Different Linux distributions use different paths for CA certificates:

    • Debian/Ubuntu: /usr/local/share/ca-certificates/

    • RHEL/CentOS/Fedora: /etc/pki/ca-trust/source/anchors/

    • SUSE: /etc/pki/trust/anchors/


    hashtag
    Chrome and Firefox on Linux

    Chrome and Firefox on Linux maintain their own certificate stores and may not automatically trust the system CA. This is especially true for sandboxed installations (Snap, Flatpak).

    Exporting the CA Certificate

    First, export the CA certificate for manual import:

    Importing to Firefox

    1. Open Firefox and go to Settings → Privacy & Security

    2. Scroll to Certificates and click View Certificates

    3. Select the Authorities tab

    Importing to Chrome

    1. Open Chrome and go to Settings → Privacy and security → Security

    2. Click Manage certificates

    3. Select the Authorities tab

    circle-info

    For Flatpak or Snap installations, browsers run in a sandbox that may restrict file access. If you encounter "error reading file" during import, copy the certificate to a location accessible by the sandbox:
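    As a sketch, assuming the CA certificate was exported to ~/duplicati-ca.crt, copying it into the Downloads folder usually makes it visible to the sandboxed browser:

    ```shell
    # ~/Downloads is typically readable from inside Snap/Flatpak sandboxes
    cp ~/duplicati-ca.crt ~/Downloads/
    ```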

    Avoiding System Store Trust

    Since Flatpak or Snap browsers don't use the system certificate store anyway, you can avoid installing the CA in the system trust store by using the --no-trust flag:

    This generates a CA certificate without requiring elevated privileges. You can then export and manually import it into your browser.

    hashtag
    macOS

    On macOS, the CA certificate is installed in the System keychain by default, which requires administrator privileges.

    hashtag
    Generating Certificates on macOS

    Run the ConfigureTool with administrator privileges:

    To specify a custom keychain:

    hashtag
    Trust Store Location

    By default, the CA certificate is installed in:

    • System keychain: /Library/Keychains/System.keychain (requires administrator privileges)

    The ConfigureTool uses the security add-trusted-cert command to add the certificate to the trust store.

    hashtag
    Revocation and Compromise Response

    If you suspect your CA private key has been compromised:

    1. Immediate Action: Remove the certificates using:

    2. Regenerate: Create a new CA with:

    3. Review: Check recent server logs and backup history for unauthorized access

    hashtag
    Certificate Pinning Considerations

    For high-security environments, consider implementing certificate pinning:

    • Extract the CA certificate thumbprint after generation

    • Configure clients or monitoring systems to expect only this specific CA

    • This prevents acceptance of certificates signed by other CAs that might be installed on the system

    hashtag
    Viewing Certificate Status

    To check the current status of your HTTPS certificates:

    This displays:

    • CA certificate details (subject, issuer, validity dates)

    • Whether the CA is installed in the trust store

    • Server certificate details (subject, issuer, validity dates, DNS names, IP addresses)

    hashtag
    Renewing Certificates

    Server certificates are automatically renewed 30 days before expiration. To manually renew the server certificate:

    hashtag
    Removing HTTPS Configuration

    To remove HTTPS certificates and stop using HTTPS:

    duplicati-configure https remove

    This will:

    • Remove the CA certificate from the system trust store

    • Delete all certificate data from the Duplicati database

    hashtag
    Using Custom Certificates

    If you prefer to use your own certificates instead of the auto-generated CA:

    1. Obtain a certificate and private key from a trusted CA (or create your own)

    2. Configure Duplicati Server with the server-ssl-certificate and server-ssl-certificatepassword options

    3. Note that auto-renewal will not be available when using custom certificates

    See the Server documentation for details on configuring custom SSL certificates.

    hashtag
    See Also

    • ConfigureTool Reference - Complete command reference

    • Duplicati Access Password - Managing server access

    • The Server Database - Database configuration

    Using the Duplicati Console

    A comprehensive guide to using the Duplicati Console for centralized backup management and monitoring across multiple machines and organizations.

    The Duplicati Console is a centralized management interface designed to streamline the monitoring and configuration of backups across multiple machines and organizations. Whether you are managing a handful of devices or hundreds of machines across multiple organizations, the console provides a comprehensive set of tools for ensuring data safety and operational efficiency.

    circle-info

    The Duplicati Console is available at https://app.duplicati.com. Some features require an Enterprise or Pro license.

    hashtag
    Free Pro Trial

    When you connect your first machine to the Duplicati Console, you automatically receive a free 30-day Pro trial. This trial gives you access to advanced features including:

    • Alert Center – Set up proactive monitoring rules and notifications

    • Advanced Reporting – Detailed backup reports and analytics

    • AI analysis – Get an AI expert to analyse your backup results

    The trial begins as soon as a machine successfully connects to your console account. No credit card is required to start the trial. After the 30-day period, you can choose to upgrade to a Pro or Enterprise plan to continue using these features, or continue with the limited free tier which includes basic monitoring and management capabilities.

    hashtag
    Getting Started with the Console

    hashtag
    Accessing the Console

    Navigate to https://app.duplicati.com and sign in with your Duplicati account. Once logged in, you will be presented with the main dashboard showing an overview of your organizations and connected machines.

    hashtag
    Connecting Your First Machine

    Before you can manage backups through the console, you need to connect machines to your account. There are two primary methods:

    1. Using the TrayIcon or Server – For existing Duplicati installations, enable remote management from the local settings page. See Using remote management for detailed instructions.

    2. Using the Agent – For new installations, deploy the Duplicati Agent, which is designed for headless environments and easier mass deployment. The Agent can be pre-configured with a registration link for seamless onboarding.

    hashtag
    Monitoring at Scale

    The Duplicati Console excels at providing visibility across large installations. Instead of checking each machine individually, you can monitor all your backups from a single interface.

    hashtag
    Machine Dashboard

    The Machine Dashboard serves as your command center for all connected devices:

    • Overview – View a complete list of all machines associated with your organization, including their current connection status.

    • Status Indicators – Quickly identify which machines are online, offline, or experiencing issues.

    • Registration Links – Manage pre-authorized registration tokens for onboarding new machines at scale.

    • Machine Tags – Apply custom tags to machines for better organization and filtering.

    For MSPs and enterprises managing many devices, the ability to see all machines at a glance eliminates the need to log into each device individually.

    hashtag
    Backup Dashboard

    The Backup Dashboard provides a centralized view of all backup operations across your entire fleet:

    • Global View – Monitor backup job statuses across all machines in a single view.

    • Storage Usage – Track total storage consumption across all backups, helping you plan capacity needs.

    • Visual Status Indicators – Color-coded indicators show backup results at a glance:

      • Green (Success) – Backup completed successfully.

      • Yellow (Warning) – Backup completed with warnings that may need attention.

      • Red (Error/Fatal) – Backup failed or encountered critical errors requiring immediate action.

    hashtag
    Alert Center (Pro Feature)

    The Alert Center enables proactive monitoring through customizable rules and multi-channel notifications:

    hashtag
    Monitoring Rules

    Create custom profiles to detect abnormalities before they become problems:

    • Missed Backups – Alert when a backup hasn't run by an expected time, ensuring no machine falls through the cracks.

    • Modified Files Threshold – Alert when the number of modified files exceeds a defined threshold, potentially indicating unusual activity.

    • Duration Monitoring – Track backup duration deviations that might indicate performance issues or network problems.

    hashtag
    Notification Channels

    Configure multiple channels to receive alerts where your team will see them:

    • Email – Send alerts to validated email addresses.

    • Discord – Post alerts to a Discord channel.

    • Slack – Send notifications to a Slack workspace.

    • Webhooks – Integrate with custom endpoints or other services for advanced automation.

    hashtag
    Managing Backups Centrally (Enterprise feature)

    hashtag
    Creating Backup Configurations

    The console allows you to create backup configurations that can be applied to multiple machines, ensuring consistency across your organization:

    hashtag
    Source Configuration

    • Custom Paths – Specify exact file or folder paths to back up.

    • Filters – Add inclusion or exclusion filters based on file patterns.

    • Built-in Exclusions – Toggle options to exclude Hidden, System, and Temporary files, or files larger than a specific size.

    hashtag
    Destination Support

    The console supports the full range of Duplicati storage providers:

    • Local Storage – File System backups.

    • Cloud Object Storage – S3 Compatible (AWS, MinIO), Azure Blob Storage, Google Cloud Storage.

    • Cloud Drives – Google Drive, OneDrive (Personal & Business), pCloud, Microsoft SharePoint.

    • Network Storage – SSH/SFTP.

    hashtag
    Scheduling and Encryption

    • Automatic Scheduling – Set backups to run automatically at specific times with configurable frequency (daily, weekly, or custom intervals).

    • Manual Triggers – Option to trigger backups manually when needed.

    • Encryption Management – Manage encryption keys centrally to ensure data security across all backups.

    hashtag
    Applying Configurations to Machines

    Once a backup configuration is created, it can be deployed to any connected machine in your organization. This ensures that all machines follow the same backup policy without requiring manual configuration on each device.

    hashtag
    Organization Management (Enterprise feature)

    For MSPs and large enterprises, the console provides powerful multi-organization capabilities:

    hashtag
    Multi-Organization Support

    • Organization Hierarchy – Create and manage multiple organizations from a single account, with support for up to three levels of hierarchy.

    • Context Switching – Seamlessly toggle between different organizations to manage their respective resources.

    • Tenant Isolation – Each organization is isolated, ensuring data and access boundaries between customers or departments.

    See Organization management for detailed information on setting up hierarchies.

    hashtag
    User and Team Management

    • User Invitation – Invite new members to your organization via email with predefined roles.

    • Teams – Group users into teams for easier permission management.

    • Access Control – Granularly assign access rights to specific machines or backup sets for different teams.

    See User management in the Duplicati Console for more details.

    hashtag
    Machine Management at Scale

    hashtag
    Claiming Machines

    Securely add new machines to your organization using unique registration tokens. This process ensures that only authorized machines can connect to your console.

    hashtag
    Registration Links

    Generate pre-configured links to easily onboard clients or new devices:

    • Pre-authorized URLs – Create links that allow machines to register automatically without manual approval.

    • Mass Deployment – Use registration links with deployment tools or preload configurations for unattended installations.

    • Security – Revoke links at any time to prevent unauthorized registrations.

    hashtag
    Remote Access

    Once a machine is connected, you can access its local Duplicati interface directly from the console:

    1. Navigate to Settings → Registered Machines.

    2. Find the machine you want to access.

    3. Click Connect to open the machine's local interface.

    This eliminates the need for VPNs or remote desktop connections to manage individual machines.

    hashtag
    Best Practices for Large Installations

    hashtag
    1. Organize with Tags

    Apply meaningful tags to machines (e.g., by location, department, or customer) to make filtering and management easier.

    hashtag
    2. Use Configuration Templates

    Create standardized backup configurations for different use cases (e.g., "Workstation Backup", "Server Backup") and apply them consistently.

    hashtag
    3. Set Up Proactive Monitoring

    Configure Alert Center rules to catch issues early:

    • Set up missed backup alerts for critical machines.

    • Monitor for unusual file modification patterns.

    • Track backup duration trends to identify performance degradation.

    hashtag
    4. Leverage the Organization Hierarchy

    For MSPs, use the three-level hierarchy to model your customer structure:

    • Root organization for MSP staff

    • Sub-organizations for each customer

    • Further sub-organizations for customer departments or sites

    hashtag
    5. Automate Onboarding

    Use the Client or Agent with pre-authorized registration links and preload configurations for seamless mass deployment. The preload.json file can be downloaded directly from the Links page in the console and should be placed into the installation folder. When the Agent starts, it will automatically register with the console and any configured backups will be pushed to the agent.

    hashtag
    Summary

    The Duplicati Console transforms backup management from a machine-by-machine task into a centralized, scalable operation. By providing a unified view of all backups, proactive monitoring capabilities, and powerful organizational tools, the console makes it easy to manage and monitor large installations while maintaining security and compliance.

    Whether you are an MSP managing hundreds of customer devices or an enterprise IT team overseeing a distributed workforce, the Duplicati Console provides the visibility and control needed to ensure your data protection strategy succeeds.

    Server

    This page describes the Duplicati server component

    The Duplicati server is the primary instance, and is usually hosted by the TrayIcon in desktop environments. The server itself is intended to be a long-running process, usually running as a service-like process that starts automatically. The binary executable is called Duplicati.Server.exe on Windows and duplicati-server on Linux and MacOS.

    The server is responsible for saving backup configurations, starting scheduled backups, and providing the user interface. The user interface is provided by hosting a webserver inside the process. This webserver serves both the static files and the API that is needed to control the server.

    When the server runs any operation, such as a backup or restore, it will configure an environment from various settings, primarily the backup configuration. The actual implementation is the same code that is executed by the command line interface, but runs within the server process.

    Unlike the command line interface, the Server keeps track of the local database to ensure the database is present for all operations. This is possible because the server has additional state in the server database, and the path to the local database is kept there.

    During the operation, the server will report progress and log messages, which can be viewed if a client is attached during the run. After the run, the Server will record metadata and log data in the database, to assist in troubleshooting later.

    hashtag
    Configuring the server password

    As described in the access password section, it is possible to set or reset the server password by starting the server with the option:

    --webservice-password=<new password>

    This new password is stored in the server database and does not need to be supplied on future launches. Note that changing the password does not invalidate tokens that are already issued. To clear any issued tokens, which should be done if there is a suspicion that the signing keys are leaked, start with the following option:

    --webservice-reset-jwt-config=true

    This will generate new token signing keys and immediately invalidate any previously issued tokens. You can start the server with this parameter on each launch if you do not rely on a refresh token stored in the browser.

    It is also possible to disable the use of signin tokens, which are used in some cases in favor of requiring the password. This can be set with the option:

    --webservice-disable-signin-tokens=true

    hashtag
    Configuring the server encryption

    Since the server database is a critical resource to protect, it is possible to set a field-level encryption password:

    --settings-encryption-key=<encryption key>

    Ensure you use double quotes to escape special characters as required by your operating system's command line.

    If the server starts without a settings encryption key, it will emit a warning in the logs explaining the problem. If any fields are already encrypted, Duplicati will refuse to start without the encryption key. If no fields are encrypted, but an encryption key is supplied, the fields will be encrypted.

    If you need to remove the encryption key for some reason, provide the key as above, and additionally supply the option:

    --disable-db-encryption=true

    If this flag is supplied, Duplicati will not emit a warning that the database is not encrypted. If the database was encrypted, it will be decrypted. After the database is decrypted, it can be re-encrypted with a different password.

    To prevent ever starting the Server without an encryption key, provide the option:

    --require-db-encryption-key

    Note that this is exclusive with --disable-db-encryption and that the server will not start if the fields are encrypted and no encryption key is provided.

    hashtag
    External access to the server

    The server will by default only listen to requests on the local machine, which is done to ensure that requests from the local network cannot access the Duplicati instance. However, any applications that are running on the same machine will be able to send commands to Duplicati. To prevent local privilege escalation attacks, Duplicati requires a password and a valid token for all requests.

    To activate access from the local network, the server must be started with:

    --webservice-interface=any

    It is also possible to specify loopback (the default value) or the IP address to listen on.

    When accessing the server from an external machine, it will only respond to requests that use an IP address as the hostname. This security mechanism is meant to combat fast-flux DNS attacks that could expose the local API to a website. If you need to access Duplicati from an external machine, you need to explicitly allow the hostname(s) that you will be using, by starting the server with:

    --webservice-allowed-hostnames=<hostname>

    Multiple hostnames can be supplied with semicolons: host1;host2.example.com;host3.

    The server will attempt to use port 8200 and terminate if that port is not available. Use the commandline option to set a specific port:

    --webservice-port=<port number>

    hashtag
    SSL/TLS support

    To ensure all communication is secure, Duplicati supports adding a TLS certificate. The certificate can be a self-signed certificate, but in this case the browser will not accept it, and extra tweaks must be made.

    To create a trusted certificate, it is easiest to use one of the many tools to manage it, such as mkcert, which can generate the various components and configure your system to trust these certificates. Beware that this requires good operational security, as the generated certificate authority can issue certificates for ANY website, including ones you do not own, and eavesdrop on your traffic.

    Once you have the desired certificate, in .pfx aka .p12 format, you can provide it to the Server on startup:

    --webservice-sslcertificatefile=<path to certificate file>
    --webservice-sslcertificatepassword=<password to ssl file>

    After starting the server with an SSL certificate, the certificate is stored in the server database with a randomly generated password. Any subsequent launches of the server will then use the certificate and the server will only communicate over https.

    To change the certificate, exit all running instances, then run again once with the new certificate path, as shown above, and the internally stored certificate will be replaced.

    If you need to revert to unencrypted http communication, you can use the option:

    --webservice-remove-sslcertificate=true

    It is also possible to temporarily disable the use of the certificate, without removing it, with:

    --webservice-disable-https

    hashtag
    Serving a different UI

    If you are developing a new UI for Duplicati, or prefer to use a customized UI, it is possible to configure the server to serve another UI, or none at all. If you want to use the Server component and only manipulate it with another tool, such as the ServerUtil, start with this option:

    --webservice-api-only=true

    This option will fully disable the serving of static files and only leave the API available.

    If instead, you would like to serve a different folder, you can use the option to set the webroot:

    --webservice-webroot=<path-to-webroot>

    To better support SPA type applications, the Server can be started with:

    --webservice-spa-paths=<path to SPA>

    For the SPA enabled path, any attempt to access a non-existing page will serve the index.html file, which can then render the appropriate view. Multiple paths can be supplied with semicolons.

    hashtag
    Timezone

    Internally, all time operations are recorded in UTC to avoid issues with daylight savings and changes caused by changing the machine timezone. The only exception to this rule is the scheduler, which is timezone aware.

    The scheduler needs to be timezone aware so scheduled backups run at the same local time, even during daylight savings time. On the initial startup, the system timezone is detected and stored in the server database. It is possible to change the timezone from the UI, but it can also be set with the commandline option:

    --webservice-timezone=<timezone>

    hashtag
    Configuring logging

    Duplicati will log various messages to the server database, but it is possible to also log these messages to a log file for better integration with monitoring tools or manual inspection. To configure file-based logging, provide the two options:

    --log-file=<path to logfile>
    --log-level=<loglevel>

    By default, the --log-level parameter is set to only log warnings, but can be configured to any of the log levels: Error, Warning, Information, Verbose, and Profiling.

    The log data that is stored in the database is by default kept for 30 days, but this period can be defined with the option:

    --log-retention=<time to keep logs>

    On Windows, it is also possible to log data to the Windows Eventlog. To activate this, set the options:

    --windows-eventlog=true
    --windows-eventlog-level=<loglevel>

    hashtag
    Storing data in different places

    By default, Duplicati will use the location that is recommended by the operating system to store "Application Support Files" or "Application Data":

    • Windows: %LOCALAPPDATA%\Duplicati

    • Linux: ~/.config/Duplicati

    • MacOS: ~/Library/Application Support/Duplicati

    These paths are sensitive to the user context, meaning that the actual paths will change based on the user that is running the Server. This is especially important when running the server with elevated privileges, because this usually causes it to run in a different user context, resulting in different paths.

    For more details on the location in different versions, refer to the storage section for the server database. For details on security aspects of the database folder, see the section on server database permissions.

    To force a specific folder to be used, set the option:

    --server-datafolder=<path to storage folder>

    This can also be supplied with the environment variable:

    DUPLICATI_HOME=<path to storage folder>

    If both are supplied, the commandline options are used.

    hashtag
    Environment variables

    For the server options, it is also possible to supply them as environment variables. This makes it easier to toggle options from Docker-like setups where it is desirable to have the entire service config in a single file, and setting commandline arguments may be error-prone.

    Any of the commandline options for the server can be applied by transforming the option name to an environment variable name. The transformation is to upper-case the option, change hyphen, -, to underscore, _, and prepend DUPLICATI__.

    For example, to set the commandline option --webservice-api-only=true with an environment variable:

    DUPLICATI__WEBSERVICE_API_ONLY=true

    Any arguments supplied on the commandline will take precedence over an environment variable, as commandline arguments are considered more "local".
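The naming transformation described above is mechanical enough to sketch in a few lines of shell (the helper name to_env_name is invented for this example and is not part of Duplicati):

```shell
# Sketch: map a Duplicati commandline option to its environment variable name:
# strip the leading dashes, upper-case, replace '-' with '_', prefix DUPLICATI__.
to_env_name() {
    opt="${1#--}"
    printf 'DUPLICATI__%s\n' "$(printf '%s' "$opt" | tr 'a-z-' 'A-Z_')"
}

to_env_name --webservice-api-only   # prints DUPLICATI__WEBSERVICE_API_ONLY
```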


    Scripts

    These options allow you to integrate custom scripts with Duplicati operations, providing automation capabilities before and after backups, restores, or other tasks.

    Pre and Post Operation Scripts – Run custom scripts before an operation starts or after it completes. Use these to perform preparation tasks (like database locking), cleanup actions, or to trigger notifications based on operation results.

    Control Flow Management – Configure whether operations should continue or abort based on script execution status, with customizable timeout settings to prevent operation blocking.

    Script Output Processing – Post-operation scripts receive operation results via standard output, enabling conditional processing based on success or failure.

    hashtag
    Scripting options

    --run-script-before(Path) Run a script on startup. Executes a script before performing an operation. The operation will block until the script has completed or timed out.

    --run-script-after(Path) Run a script on exit. Executes a script after performing an operation. The script will receive the operation results written to stdout.

    --run-script-before-required(Path) Run a required script on startup. Executes a script before performing an operation. The operation will block until the script has completed or timed out. If the script returns a non-zero error code or times out, the operation will be aborted.

    --run-script-timeout(Timespan) Sets the script timeout. Sets the maximum time a script is allowed to execute. If the script has not completed within this time, it will continue to execute but the operation will continue too, and no script output will be processed. Default value: 60s

    hashtag
    Script Output Integration with Duplicati Logging

    You can add custom entries directly to Duplicati's log system from your scripts by using special prefixes in stdout messages. This allows script events to appear in both the Duplicati Log and Reports alongside native application events.

    Supported Log Level Prefixes:

    • LOG:INFO - For general information and success notifications

    • LOG:WARN - For potential issues that didn't prevent completion

    • LOG:ERROR - For critical failures that require attention

    Example Usage (Linux / MacOS):

    echo "LOG:INFO Preparation tasks completed successfully"
    echo "LOG:WARN Database backup older than 24 hours detected"
    echo "LOG:ERROR Unable to lock database, backup may contain inconsistent data"

    Example Usage (Windows):

    echo LOG:INFO Preparation tasks completed successfully
    echo LOG:WARN Database backup older than 24 hours detected
    echo LOG:ERROR Unable to lock database, backup may contain inconsistent data

    These messages will be captured with their appropriate severity levels and integrated into Duplicati's logging system, making script events traceable within the same monitoring interfaces you use for Duplicati itself.
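As a minimal, hypothetical example combining the exit-code contract with the LOG: prefixes, a script used with --run-script-before-required could look like this (the marker-file path is made up for the sketch; a non-zero exit from a required script aborts the operation):

```shell
# Hypothetical --run-script-before-required hook: abort the backup while a
# maintenance marker file is present. The marker path is invented for this sketch.
cat > /tmp/pre-backup.sh <<'EOF'
#!/bin/sh
if [ -e /tmp/duplicati-maintenance ]; then
    echo "LOG:ERROR Maintenance in progress, aborting backup"
    exit 1   # non-zero: a required script aborts the operation
fi
echo "LOG:INFO Pre-backup checks passed"
exit 0       # zero: let the operation proceed
EOF
chmod +x /tmp/pre-backup.sh
```

Pointed to with --run-script-before-required=/tmp/pre-backup.sh, the backup only starts when the marker file is absent, and the LOG: lines appear in Duplicati's logs.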

    hashtag
    Sample Scripts

    hashtag
    run-script-example.bat (Windows)

    hashtag
    run-script-example.sh (Linux)

    @echo off
    
    REM ###############################################################################
    REM How to run scripts before or after backups
    REM ###############################################################################
    
    REM Duplicati is able to run scripts before and after backups. This 
    REM functionality is available in the advanced options of any backup job (UI) or
    REM as option (CLI). The (advanced) options to run scripts are
    REM --run-script-before = <filename>
    REM --run-script-before-required = <filename>
    REM --run-script-timeout = <time>
    REM --run-script-after = <filename>
    REM --run-script-with-arguments = <boolean>
    REM
    REM --run-script-before-required = <filename>
    REM Duplicati will run the script before the backup job and wait for its 
    REM completion for 60 seconds (default timeout value). The backup will only be
    REM run if the script completes with an allowed exit code (0, 2, or 4). 
    REM A timeout or any other exit code will abort the backup.
    REM The following exit codes are supported:
    REM
    REM - 0: OK, run operation
    REM - 1: OK, don't run operation
    REM - 2: Warning, run operation
    REM - 3: Warning, don't run operation
    REM - 4: Error, run operation
    REM - 5: Error, don't run operation
    REM - other: Error, don't run operation
    REM
    REM --run-script-before = <filename>
    REM Duplicati will run the script before the backup job and waits for its 
    REM completion for 60 seconds (default timeout value). After a timeout a 
    REM warning is logged and the backup is started.
    REM Any other exit code than 0 will be logged as a warning.
    REM
    REM --run-script-timeout = <time>
    REM Specify a new value for the timeout. Default is 60s. Accepted values are
    REM e.g. 30s, 1m15s, 1h12m03s, and so on. To turn off the timeout set the value 
    REM to 0. Duplicati will then wait endlessly for the script to finish.
    REM
    REM --run-script-after = <filename>
    REM Duplicati will run the script after the backup job and wait for its 
    REM completion for 60 seconds (default timeout value). After a timeout a 
    REM warning is logged.
    REM The same exit codes as in --run-script-before are supported, but
    REM the operation will always continue (i.e. 1 => 0, 3 => 2, 5 => 4)
    REM as the operation has already completed, so aborting it is pointless.
    REM
    REM --run-script-with-arguments = <boolean>
    REM If set to true, the script path will be parsed as a command line, and the
    REM arguments will be passed to the script. If set to false (default), 
    REM the script path will be used as a single path.
    REM If you do not have spaces in your script path or arguments, simply enter 
    REM it as a string:
    REM Example: --run-script-before="C:\path\to\script.bat arg1 arg2 --option1=a"
    REM If you have spaces in the path or arguments, use double- or single-quotes
    REM around the elements that have spaces, similar to how you would do 
    REM on the command line:
    REM Example: --run-script-before="\"C:\path to\script.bat\" \"arg1 \" arg2"
    
    
    
    REM ###############################################################################
    REM Changing options from within the script 
    REM ###############################################################################
    
    REM Within a script, all Duplicati options are exposed as environment variables
    REM with the prefix "DUPLICATI__". Please notice that the dash (-) character is
    REM not allowed in environment variable keys, so it is replaced with underscore
    REM (_). For a list of available options, have a look at the output of
    REM "duplicati.commandline.exe help".
    REM
    REM For instance the current value of the option --encryption-module can be 
    REM accessed in the script by
    REM ENCRYPTIONMODULE=%DUPLICATI__encryption_module%
    
    REM All Duplicati options can be changed by the script by writing options to
    REM stdout (with echo or similar). Anything not starting with a double dash (--)
    REM will be ignored:
    REM echo "Hello! -- test, this line is ignored"
    REM echo --new-option=This will be a setting
    
    REM Filters are supplied in the DUPLICATI__FILTER variable.
    REM The variable contains all filters supplied with --include and --exclude,
    REM combined into a single string, separated with semicolon (;).
    REM Filters set with --include will be prefixed with a plus (+),
    REM and filters set with --exclude will be prefixed with a minus (-).
    REM
    REM Example:
    REM     --include=*.txt --exclude=[.*\.abc] --include=*
    REM 
    REM Will be encoded as:
    REM     DUPLICATI__FILTER=+*.txt;-[.*\.abc];+*
    REM
    REM You can set the filters by writing --filter=<new filter> to stdout.
    REM You may want to append to the existing filter like this:
    REM     echo "--filter=+*.123;%DUPLICATI__FILTER%;-*.xyz"
    
    
    REM ###############################################################################
    REM Special Environment Variables
    REM ###############################################################################
    
    REM DUPLICATI__EVENTNAME
    REM Eventname is BEFORE if invoked as --run-script-before, and AFTER if 
    REM invoked as --run-script-after. This value cannot be changed by writing
    REM it back!
    
    REM DUPLICATI__OPERATIONNAME
    REM Operation name can be any of the operations that Duplicati supports. For
    REM example it can be "Backup", "Cleanup", "Restore", or "DeleteAllButN".
    REM This value cannot be changed by writing it back!
    
    REM DUPLICATI__RESULTFILE
    REM If invoked as --run-script-after this will contain the name of the 
    REM file where result data is placed. This value cannot be changed by 
    REM writing it back!
    
    REM DUPLICATI__REMOTEURL
    REM This is the remote url for the target backend. This value can be changed by
    REM echoing --remoteurl="new value".
    
    REM DUPLICATI__LOCALPATH
    REM This is the path to the folders being backed up or restored. This variable
    REM is empty for operations other than backup or restore. The local path can
    REM contain : to separate multiple folders. This value can be changed by echoing
    REM --localpath="new value".
    
    REM DUPLICATI__PARSED_RESULT
    REM This is a value indicating how well the operation was performed.
    REM It can take the values: Unknown, Success, Warning, Error, Fatal.
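    REM
    REM For example (a sketch, not part of the original script), an after-script
    REM could warn on anything but success:
    REM IF NOT "%DUPLICATI__PARSED_RESULT%" == "Success" echo Operation finished with status %DUPLICATI__PARSED_RESULT% 1>&2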
    
    
    REM ###############################################################################
    REM Example script
    REM ###############################################################################
    
    REM We read a few variables first.
    SET "EVENTNAME=%DUPLICATI__EVENTNAME%"
    SET "OPERATIONNAME=%DUPLICATI__OPERATIONNAME%"
    SET "REMOTEURL=%DUPLICATI__REMOTEURL%"
    SET "LOCALPATH=%DUPLICATI__LOCALPATH%"
    
    REM Basic setup, we use the same file for both before and after,
    REM so we need to figure out which event has happened
    if "%EVENTNAME%" == "BEFORE" GOTO ON_BEFORE
    if "%EVENTNAME%" == "AFTER" GOTO ON_AFTER
    
    REM This should never happen, but there may be new operations
    REM in newer versions of Duplicati
    REM We write this to stderr, and it will show up as a warning in the logfile
    echo Got unknown event "%EVENTNAME%", ignoring 1>&2
    GOTO end
    
    :ON_BEFORE
    
    REM If the operation is a backup starting, 
    REM then we check if the --dblock-size option is unset
    REM or 50mb, and change it to 25mb, otherwise we 
    REM leave it alone
    
    IF "%OPERATIONNAME%" == "Backup" GOTO ON_BEFORE_BACKUP
    REM This will be ignored
    echo Got operation "%OPERATIONNAME%", ignoring
    GOTO end
    
    :ON_BEFORE_BACKUP
    REM Check if volsize is either not set, or set to 50mb
    IF "%DUPLICATI__dblock_size%" == "" GOTO SET_VOLSIZE
    IF "%DUPLICATI__dblock_size%" == "50mb" GOTO SET_VOLSIZE
    
    REM We write this to stderr, and it will show up as a warning in the logfile
    echo Not setting volumesize, it was already set to %DUPLICATI__dblock_size% 1>&2
    GOTO end
    
    :SET_VOLSIZE
    REM Write the option to stdout to change it
    echo --dblock-size=25mb
    GOTO end
    
    
    :ON_AFTER
    
    IF "%OPERATIONNAME%" == "Backup" GOTO ON_AFTER_BACKUP
    REM This will be ignored
    echo Got operation "%OPERATIONNAME%", ignoring
    GOTO end
    
    :ON_AFTER_BACKUP
    
    REM Basic email setup
    SET EMAIL="[email protected]"
    SET SUBJECT="Duplicati backup"
    
    REM We use a temp file to store the email body
    SET MESSAGE="%TEMP%\duplicati-mail.txt"
    echo Duplicati finished a backup. > %MESSAGE%
    echo This is the result: >> %MESSAGE%
    echo.  >> %MESSAGE%
    
    REM We append the results to the message
    type "%DUPLICATI__RESULTFILE%" >> %MESSAGE%
    
    REM If the log-file is enabled, we append that as well
    IF EXIST "%DUPLICATI__log_file%" type "%DUPLICATI__log_file%" >> %MESSAGE%
    
    REM If the backend-log-database file is enabled, we append that as well
    IF EXIST "%DUPLICATI__backend_log_database%" type "%DUPLICATI__backend_log_database%" >> %MESSAGE%
    
    REM Finally send the email using a fictive sendmail program
    sendmail %SUBJECT% %EMAIL% < %MESSAGE%
    
    GOTO end
    
    :end
    
    REM We want the exit code to always report success.
    REM For scripts that can abort execution, use the option
    REM --run-script-before-required = <filename> when running Duplicati
    exit /B 0
    
    #!/bin/bash
    
    ###############################################################################
    # How to run scripts before or after backups
    ###############################################################################
    
    # Duplicati is able to run scripts before and after backups. This 
    # functionality is available in the advanced options of any backup job (UI) or
    # as option (CLI). The (advanced) options to run scripts are
    # --run-script-before = <filename>
    # --run-script-before-required = <filename>
    # --run-script-timeout = <time>
    # --run-script-after = <filename>
    # --run-script-with-arguments = <boolean>
    #
    # --run-script-before-required = <filename>
    # Duplicati will run the script before the backup job and wait for its 
    # completion for 60 seconds (default timeout value). The backup will only be
    # run if the script completes with an allowed exit code (0, 2, or 4). 
    # A timeout or any other exit code will abort the backup.
    # The following exit codes are supported:
    #
    # - 0: OK, run operation
    # - 1: OK, don't run operation
    # - 2: Warning, run operation
    # - 3: Warning, don't run operation
    # - 4: Error, run operation
    # - 5: Error, don't run operation
    # - other: Error, don't run operation
    #
    # --run-script-before = <filename>
    # Duplicati will run the script before the backup job and wait for its
    # completion for 60 seconds (default timeout value). After a timeout a 
    # warning is logged and the backup is started.
    # Any other exit code than 0 will be logged as a warning.
    #
    # --run-script-timeout = <time>
    # Specify a new value for the timeout. Default is 60s. Accepted values are
    # e.g. 30s, 1m15s, 1h12m03s, and so on. To turn off the timeout set the value 
    # to 0. Duplicati will then wait endlessly for the script to finish.
    #
    # --run-script-after = <filename>
    # Duplicati will run the script after the backup job and wait for its 
    # completion for 60 seconds (default timeout value). After a timeout a 
    # warning is logged.
    # Any other exit code than 0 will be logged as a warning.
    #
    # --run-script-with-arguments = <boolean>
    # If set to true, the script path will be parsed as a command line, and the
    # arguments will be passed to the script. If set to false (default),
    # the script path will be used as a single path.
    # If you do not have spaces in your script path or arguments, simply enter 
    # it as a string:
    # Example: --run-script-before="/path/to/script.sh arg1 arg2 --option=a"
    # If you have spaces in the path or arguments, use double- or single-quotes
    # around the elements that have spaces, similar to how you would do in a shell:
    # Example: --run-script-before="\"/path to/script.sh\" \"arg1 \" arg2"
    
    
    ###############################################################################
    # Changing options from within the script 
    ###############################################################################
    
    # Within a script, all Duplicati options are exposed as environment variables
    # with the prefix "DUPLICATI__". Please note that the dash (-) character is
    # not allowed in environment variable keys, so it is replaced with underscore
    # (_). For a list of available options, have a look at the output of
    # "duplicati.commandline.exe help".
    #
    # For instance the current value of the option --encryption-module can be 
    # accessed in the script by
    # ENCRYPTIONMODULE=$DUPLICATI__encryption_module
    
    # All Duplicati options can be changed by the script by writing options to
    # stdout (with echo or similar). Anything not starting with a double dash (--)
    # will be ignored:
    # echo "Hello! -- test, this line is ignored"
    # echo "--new-option=\"This will be a setting\""
    
    # Filters are supplied in the DUPLICATI__FILTER variable.
    # The variable contains all filters supplied with --include and --exclude,
    # combined into a single string, separated with colon (:).
    # Filters set with --include will be prefixed with a plus (+),
    # and filters set with --exclude will be prefixed with a minus (-).
    #
    # Example:
    #     --include=*.txt --exclude=[.*\.abc] --include=*
    # 
    # Will be encoded as:
    #     DUPLICATI__FILTER=+*.txt:-[.*\.abc]:+*
    #
    # You can set the filters by writing --filter=<new filter> to stdout.
    # You may want to append to the existing filter like this:
    #     echo "--filter=+*.123:$DUPLICATI__FILTER:-*.xyz"
    
    
    ###############################################################################
    # Special Environment Variables
    ###############################################################################
    
    # DUPLICATI__EVENTNAME
    # Eventname is BEFORE if invoked as --run-script-before, and AFTER if 
    # invoked as --run-script-after. This value cannot be changed by writing
    # it back!
    
    # DUPLICATI__OPERATIONNAME
    # Operation name can be any of the operations that Duplicati supports. For
    # example it can be "Backup", "Cleanup", "Restore", or "DeleteAllButN".
    # This value cannot be changed by writing it back!
    
    # DUPLICATI__RESULTFILE
    # If invoked as --run-script-after this will contain the name of the 
    # file where result data is placed. This value cannot be changed by 
    # writing it back!
    
    # DUPLICATI__REMOTEURL
    # This is the remote url for the target backend. This value can be changed by
    # echoing --remoteurl="new value".
    
    # DUPLICATI__LOCALPATH
    # This is the path to the folders being backed up or restored. This variable
    # is empty for operations other than backup or restore. The local path can
    # contain : to separate multiple folders. This value can be changed by echoing
    # --localpath="new value".
    
    # DUPLICATI__PARSED_RESULT
    # This is a value indicating how well the operation was performed.
    # It can take the values: Unknown, Success, Warning, Error, Fatal.
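    #
    # For example (a sketch, not part of the original script), an after-script
    # could branch on the parsed result:
    #     if [ "$DUPLICATI__PARSED_RESULT" != "Success" ]
    #     then
    #         echo "Operation finished with status $DUPLICATI__PARSED_RESULT" >&2
    #     fi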
    
    
    
    ###############################################################################
    # Example script
    ###############################################################################
    
    # We read a few variables first.
    EVENTNAME=$DUPLICATI__EVENTNAME
    OPERATIONNAME=$DUPLICATI__OPERATIONNAME
    REMOTEURL=$DUPLICATI__REMOTEURL
    LOCALPATH=$DUPLICATI__LOCALPATH
    
    # Basic setup, we use the same file for both before and after,
    # so we need to figure out which event has happened
    if [ "$EVENTNAME" == "BEFORE" ]
    then
    	# If the operation is a backup starting, 
    	# then we check if the --dblock-size option is unset
    	# or 50mb, and change it to 25mb, otherwise we 
    	# leave it alone
    	
    	if [ "$OPERATIONNAME" == "Backup" ]
    	then
    		if [ "$DUPLICATI__dblock_size" == "" ] || [ "$DUPLICATI__dblock_size" == "50mb" ]
    		then
    			# Write the option to stdout to change it
    			echo "--dblock-size=25mb"
    		else
    			# We write this to stderr, and it will show up as a warning in the logfile
    			echo "Not setting volumesize, it was already set to $DUPLICATI__dblock_size" >&2
    		fi
    	else
    		# This will be ignored
    		echo "Got operation \"$OPERATIONNAME\", ignoring"
    	fi
    
    elif [ "$EVENTNAME" == "AFTER" ]
    then
    
    	# If this is a finished backup, we send an email
    	if [ "$OPERATIONNAME" == "Backup" ]
    	then
    
    		# Basic email setup
    		EMAIL="[email protected]"
    		SUBJECT="Duplicati backup"
    		
    		# We use a temp file to store the email body
    		MESSAGE="/tmp/duplicati-mail.txt"
    		echo "Duplicati finished a backup." > $MESSAGE
    		echo "This is the result:" >> $MESSAGE
    		echo "" >> $MESSAGE
    
    		# We append the result of the operation to the email
    		cat "$DUPLICATI__RESULTFILE" >> $MESSAGE
    
    		# If the log-file is enabled, we append it
    		if [ -f "$DUPLICATI__log_file" ]
    		then
    			echo "The log file looks like this: " >> $MESSAGE
    			cat "$DUPLICATI__log_file" >> $MESSAGE
    		fi
    		
    		# If the backend-log-database file is enabled, we append that as well
    		if [ -f "$DUPLICATI__backend_log_database" ]
    		then
    			echo "The backend-log file looks like this: " >> $MESSAGE
    			cat "$DUPLICATI__backend_log_database" >> $MESSAGE
    		fi
    
    		# Finally send the email using /bin/mail
    		/bin/mail -s "$SUBJECT" "$EMAIL" < $MESSAGE
    	else
    		# This will be ignored
    		echo "Got operation \"$OPERATIONNAME\", ignoring"
    	fi
    else
    	# This should never happen, but there may be new operations
    	# in newer versions of Duplicati
    	# We write this to stderr, and it will show up as a warning in the logfile
    	echo "Got unknown event \"$EVENTNAME\", ignoring" >&2
    fi
    
    # We want the exit code to always report success.
    # For scripts that can abort execution, use the option
    # --run-script-before-required = <filename> when running Duplicati
    exit 0
    

    Google Workspace backup and restore

    This page describes how Google Workspace backup and restore works with Duplicati

    Duplicati supports backing up and restoring Google Workspace data through the Google APIs. The backup workflow captures content in native formats (for example, RFC 2822 for email and JSON for structured objects), and restore operations re-create items through the appropriate Google API endpoints.

    circle-info

    Google Workspace backup and restore was added in Canary 2.2.0.105

    circle-exclamation

    The Google Workspace backup feature is limited to covering up to 5 users, groups, shared drives, or sites. The feature is source-available, but not open-source like the rest of Duplicati. There are no limitations on restore.

    A license is required to use Google Workspace backup in production. Contact Duplicati Inc. support or sales to obtain a license.

    hashtag
    Overview

    Key characteristics

    • Format preservation: Data is stored in native formats such as RFC 2822 (EML), JSON, and standard document formats. Drive documents are stored in their exported formats, such as PDF, DOCX, or XLSX.

    • Metadata retention: Original timestamps, identifiers, and properties are preserved where possible.

    • Restore to local disk: It is possible to restore all data to a local destination for forensics or manual investigation.

    hashtag
    Google Workspace configuration options

    To use the Google Workspace backup source you must supply credentials via either a service account with domain-wide delegation or OAuth 2.0 user credentials. When choosing what to back up, it is possible to filter on root types (users, groups, shared drives, sites, organizational units), and possible to apply filters to obtain fine-grained exclusion of data.

    When making backups of a Google Workspace domain, the advanced option --store-metadata-content-in-database must be activated.

    hashtag
    Authentication methods

    Method
    Description
    Use case

    Service Account Configuration

    hashtag
    Connection configuration settings

    Parameter
    Description
    Required for

    hashtag
    Additional settings

    Parameter
    Description

    hashtag
    Supported data types

    hashtag
    Root-level components

    Data type
    Backup
    Restore
    Notes

    hashtag
    Per-user data types

    Data type
    Backup
    Restore
    Notes

    hashtag
    Backup and restore details by type

    hashtag
    Gmail

    Backup

    • Emails exported as RFC 2822 (.eml) format with full headers and attachments.

    • Folder hierarchy captured and preserved as labels.

    • Label definitions, colors, and visibility settings backed up.

    Restore

    • Messages imported with a "Restored" label by default.

    • Original folder structure recreated within the target.

    • Duplicate detection via Message-ID header.

    hashtag
    Google Drive

    Backup

    • Files downloaded with original content.

    • Google Workspace native files exported to standard formats (Docs → DOCX, Sheets → XLSX, Slides → PPTX, Drawings → PNG).

    • Folder structure and metadata captured.

    Restore

    • Files restored to a "Restored" folder by default.

    • Folder hierarchy is recreated.

    • Binary files uploaded as new revisions when duplicates exist.

    Duplicate detection

    • Name + path matching

    • Size comparison

    • Hash comparison

    hashtag
    Google Calendar

    Backup

    • Events exported as JSON and ICS formats.

    • Recurrence patterns captured.

    • Event attachments backed up as binary files.

    circle-info

    Calendar ACLs require write permissions to read. If your backup account does not have write permissions, use the --google-ignore-calendar-acl option to skip ACL backup.

    Restore

    • Events restored to a "Restored" calendar by default.

    • Duplicate detection based on iCalUID.

    • Attachments uploaded to Drive and linked to events.

    hashtag
    Google Contacts

    Backup

    • Contacts exported as JSON and vCard formats.

    • Contact photos backed up as binary files.

    • Contact groups and membership captured.

    Restore

    • Contacts created with duplicate detection based on email address.

    • Contact photos uploaded separately after contact creation.

    • Contacts added to restored groups.

    hashtag
    Google Tasks

    Backup

    • Task lists enumerated with all properties.

    • Tasks captured with full details.

    • Task relationships and completion status preserved.

    Restore

    • Task lists recreated with original titles.

    • Tasks restored to their respective lists.

    • Orphaned tasks placed in a "Restored" task list.

    hashtag
    Google Keep

    Backup

    • Notes captured with content, title, and metadata.

    • Attachments (images and other files) backed up as binary data (including Drive files).

    Restore

    • Notes recreated with original titles and content.

    • Attachments uploaded to Drive and linked in the note.

    • Duplicate detection based on note title.

    circle-info

    The Keep API does not support direct attachment uploads. Attachments are uploaded to Drive and referenced in the restored note.

    hashtag
    Google Chat

    Backup

    • Chat spaces enumerated with settings.

    • Messages captured with content.

    • Message attachments backed up as binary files.

    Restore

    • Spaces can be recreated with original settings.

    • Messages can be posted to spaces (with limitations).

    circle-exclamation

    Chat restore has significant limitations:

    • Messages appear as being created by the impersonated user, not the original sender

    hashtag
    Google Groups (backup only)

    Backup

    • Group metadata and settings captured.

    • Membership lists backed up.

    • Group email aliases preserved.

    Restore

    Groups cannot be restored through the API. Group data is backed up for reference purposes only.

    hashtag
    Google Sites (backup only)

    Backup

    • Site metadata captured.

    • Content backed up where API permits.

    circle-exclamation

    Sites cannot be restored through the API. The Sites API has limited export capabilities, and Sites data is backed up for reference purposes only.

    hashtag
    Shared Drives

    Backup

    • Shared drive metadata captured.

    • Drive-level permissions recorded.

    • Files and folders backed up (same as user Drive).

    Restore

    • Shared drive permissions restored (with role limitations).

    • Content restored maintaining folder structure.

    • File organizer and organizer roles only valid for shared drives.

    hashtag
    Organizational Units

    Backup

    • Organizational unit hierarchy captured.

    • OU-specific settings backed up.

    Restore

    OU data is backed up for reference purposes only. Manual configuration is required for restores.

    hashtag
    Technical limitations

    hashtag
    API limitations

    Limitation
    Description
    Impact

    hashtag
    Permission limitations

    Feature
    Limitation
    Notes

    hashtag
    Data fidelity limitations

    circle-info

    For restores into Google Workspace, data is technically "created again" as the API does not support true restoration. Visually the data looks the same but internal values, such as timestamps or references, may be different in the restored version.

    Data type
    Limitation

    hashtag
    Rate limiting

    Google APIs have usage quotas. The implementation includes exponential backoff for:

    • HTTP 429 (Too Many Requests)

    • HTTP 503 (Service Unavailable)

    API
    Default Quota
    Recommendation

    hashtag
    API permissions reference

    Permissions are divided by operation (backup vs restore). All permissions require admin consent for domain-wide delegation when using service accounts.

    hashtag
    Backup permissions

    Scope
    Permission
    Description

    hashtag
    Restore permissions

    Scope
    Permission
    Description

    hashtag
    Permissions by data type

    Data type
    Backup (read)
    Restore (write)

    hashtag
    Data format and storage

    hashtag
    Backup data formats

    Data type
    Format
    Extension

    hashtag
    Export formats for Google Workspace files

    Google Format
    Export type

    hashtag
    Storage requirements

    • Email: RFC 2822 content + JSON metadata per message

    • Files: Original file size + metadata overhead

    • Calendar: JSON per event (typically 1–10 KB)

    hashtag
    Best practices

    hashtag
    Backup recommendations

    1. Enable metadata storage with --store-metadata-content-in-database.

    2. Use service account authentication with domain-wide delegation for organizational backups.

    3. Schedule regular backups during off-peak hours.

    hashtag
    Restore recommendations

    1. Test restore procedures regularly.

    2. Verify target user exists before restore.

    3. Use --google-ignore-existing carefully; default behavior skips duplicates.

    hashtag
    Security considerations

    1. Store service account JSON and OAuth tokens securely.

    2. Use least privilege principle - only request scopes that are actually needed.

    3. Enable audit logging in Google Workspace Admin Console.

    hashtag
    Troubleshooting

    hashtag
    Common issues

    Issue
    Solution

    hashtag
    References

    Cross-user support: Data from one user can be restored into another user account.

  • Domain-wide backup: Supports backing up all users in a Google Workspace domain via domain-wide delegation.

  • OAuth Refresh Token

    OAuth authentication

    --google-service-account-json

    Service Account JSON content

    Service account authentication

    --google-service-account-file

    Path to Service Account JSON file

    Service account authentication

    --google-admin-email

    Admin email for impersonation

    Service account with domain-wide delegation

    Google Groups and their settings (backup only)

    Shared Drives

    ✅

    ✅

    Team/Shared drives

    Sites

    ✅

    ❌

    Google Sites (backup only)

    Organizational Units

    ✅

    ⚠️

    OU structure and hierarchy (limited restore)

    Files, folders, permissions, comments

    Google Calendar

    ✅

    ✅

    Events, calendars, ACLs

    Google Contacts

    ✅

    ✅

    Contact information and groups

    Google Tasks

    ✅

    ✅

    Task lists and individual tasks

    Google Keep

    ✅

    ✅

    Notes and attachments (limited attachement restore)

    Google Chat

    ✅

    ⚠️

    Spaces and messages (limited restore)

    Email filters backed up as JSON.
  • Forwarding settings, vacation responder, signatures, and IMAP settings captured.

  • Labels recreated with original colors and visibility.
  • Settings restored to their original values.

  • Sharing permissions and access controls recorded.
  • File comments and replies backed up.

  • Previous file versions backed up as binary data.

  • Permissions restored (except owner permissions which cannot be set via API).
  • Comments recreated on restored files.

  • Restores between shared and personal drives supported both ways.

  • Calendar sharing permissions (ACLs) captured.
    ACLs recreated (except owner permissions).
    Folder hierarchy preserved.
    Duplicate detection based on task title.
    Note body includes links to restored attachments.
    Membership information preserved.
    Use of the Import API is not currently supported
    Group email conversations backed up.

    Form responses require separate API

    Structure backed up via Drive only

    Revision history

    Native file revisions cannot be directly restored

    Only current version restorable

    Calendar attachments

    Direct attachment upload not supported

    Attachments uploaded to Drive and linked

    Keep attachments

    Direct attachment upload not supported

    Attachments uploaded to Drive and linked

    Some settings require domain admin privileges

    May need manual configuration

    Chat messages

    Original sender/timestamp lost on restore

    500 requests/100 seconds

    Usually sufficient

    People API

    90 requests/minute

    May need increase

    Read-only

    Read calendar events and metadata

    https://www.googleapis.com/auth/contacts.readonly

    Read-only

    Read contacts and contact groups

    https://www.googleapis.com/auth/tasks.readonly

    Read-only

    Read task lists and tasks

    https://www.googleapis.com/auth/keep.readonly

    Read-only

    Read notes and attachments

    https://www.googleapis.com/auth/chat.spaces.readonly

    Read-only

    Read chat spaces

    https://www.googleapis.com/auth/chat.messages.readonly

    Read-only

    Read chat messages

    https://www.googleapis.com/auth/chat.memberships.readonly

    Read-only

    Read chat memberships

    https://www.googleapis.com/auth/admin.directory.group.readonly

    Read-only

    Read group information

    https://www.googleapis.com/auth/admin.directory.user.readonly

    Read-only

    Read user information

    https://www.googleapis.com/auth/apps.groups.settings

    Settings

    Read group settings

    https://www.googleapis.com/auth/admin.directory.orgunit.readonly

    Read-only

    Read OU structure

    https://www.googleapis.com/auth/calendar

    Full access

    Read calendar ACLs (optional)

    Full access

    Create and modify calendars and events

    https://www.googleapis.com/auth/contacts

    Full access

    Create and modify contacts

    https://www.googleapis.com/auth/tasks

    Full access

    Create and modify tasks

    https://www.googleapis.com/auth/keep

    Full access

    Create notes

    https://www.googleapis.com/auth/chat.messages

    Write

    Create messages (limited)

    https://www.googleapis.com/auth/chat.spaces

    Write

    Create spaces

    https://www.googleapis.com/auth/admin.directory.orgunit

    Write

    Modify OU structure

    https://www.googleapis.com/auth/admin.directory.user

    Write

    Modify user information

    https://www.googleapis.com/auth/admin.directory.group

    Write

    Modify group information

    https://www.googleapis.com/auth/apps.groups.settings

    Write

    Modify group settings

    calendar.readonly

    calendar

    Google Contacts

    contacts.readonly

    contacts

    Google Tasks

    tasks.readonly

    tasks

    Google Keep

    keep.readonly

    keep

    Google Chat

    chat.spaces.readonly, chat.messages.readonly, chat.memberships.readonly

    chat.messages, chat.spaces

    Groups

    admin.directory.group.readonly, apps.groups.settings

    Not restorable

    Users

    admin.directory.user.readonly

    admin.directory.user

    Shared Drives

    drive.readonly

    drive

    Organizational Units

    admin.directory.orgunit.readonly

    admin.directory.orgunit

    Sites

    drive.readonly

    Not restorable

    JSON/ICS

    .json/.ics

    Contacts

    JSON/vCard

    .json/.vcf

    Contact photos

    Binary

    .photo

    Files

    Original/Export

    (original or converted)

    Tasks

    JSON

    .json

    Keep notes

    JSON

    .json

    Chat messages

    JSON

    .json

    All metadata

    JSON

    .json

    Google Forms

    Metadata only (via Drive API)

    Google Sites

    Metadata only (no export available)

    Contacts: JSON per contact (typically 1–5 KB)
  • Tasks: JSON per task (typically 0.5–2 KB)

  • Verify required permissions before backup.
  • Monitor API quota usage and request increases if needed.

  • Review restored sharing permissions (owner permissions cannot be restored).
  • Be aware of limitations when restoring chat messages and sites.

  • Encrypt backup data at rest.
  • Limit who can configure and run backups.

  • Rotate service account keys regularly.

  • "Rate limit exceeded"

    Enable exponential backoff or request quota increases

    Google Calendar API Referencearrow-up-right
  • Google People API Referencearrow-up-right

  • Google Tasks API Referencearrow-up-right

  • Google Keep API Referencearrow-up-right

  • Google Chat API Referencearrow-up-right

  • Google Cloud Consolearrow-up-right

  • Google Admin Consolearrow-up-right

  • Service Account (Recommended)

    JSON key with domain-wide delegation

    Automated or unattended backup for entire domain

    OAuth 2.0 User Credentials

    Client ID, secret, and refresh token

    Single user or personal Google accounts (not currently supported)

    --google-client-id

    OAuth Client ID

    OAuth authentication

    --google-client-secret

    OAuth Client Secret

    OAuth authentication

    --google-included-root-types

    Root types to backup: Users, Groups, SharedDrives, Sites, OrganizationalUnits. Default: all types

    --google-included-user-types

    User data types to backup: Gmail, Drive, Calendar, Contacts, Tasks, Keep, Chat. Default: all types

    --google-ignore-calendar-acl

    Skip reading calendar ACLs if the backup account does not have write permission. Default: false

    --google-ignore-existing

    Users

    ✅

    ✅

    Individual user accounts and their data

    Groups

    ✅

    Gmail

    ✅

    ✅

    Emails, labels, settings, filters

    Google Drive

    ✅

    Chat message attribution

    Messages appear as sent by impersonated user

    Original sender context lost

    Sites export

    No comprehensive export API available

    Only metadata backed up

    Owner permissions

    Cannot be restored via API

    Must be manually reassigned

    External sharing

    Some external sharing permissions may not be restorable

    Depends on domain settings

    Email

    Original Message-ID preserved, but internal Google ID changes

    Calendar

    Online meeting links preserved but may not be recreated

    Contacts

    Contact ID changes on restore

    Files

    Gmail API

    250 quota units/user/second

    Increase for bulk backup

    Drive API

    12,000 requests/minute

    Increase for large drives

| Scope | Access | Description |
| --- | --- | --- |
| https://www.googleapis.com/auth/gmail.readonly | Read-only | Access to read all Gmail messages, labels, and settings |
| https://www.googleapis.com/auth/drive.readonly | Read-only | Read files, folders, and metadata |
| https://www.googleapis.com/auth/gmail.modify | Modify | Import messages, create labels, modify settings |
| https://www.googleapis.com/auth/drive | Full access | Create, modify, and delete files |

| Data type | Backup (read) | Restore (write) |
| --- | --- | --- |
| Gmail | gmail.readonly | gmail.modify |
| Google Drive | drive.readonly | drive |

    Email content

    RFC 2822

    .eml

    Email metadata

    JSON

    .json

    Google Docs

    DOCX

    Google Sheets

    XLSX

    Google Slides

    PPTX

    Google Drawings

    "Access to calendar ACLs was denied"

    Use --google-ignore-calendar-acl option or grant Calendar write permission

    "Licensed Google Workspace feature seats exceeded"

    Purchase additional licenses or reduce scope of backup

    "Missing credentials"

    Verify service account JSON or OAuth credentials are correct

    "Domain-wide delegation not enabled"

    Google Workspace Admin SDK Directory APIarrow-up-right
    Gmail API Referencearrow-up-right
    Google Drive API Referencearrow-up-right

    --google-refresh-token

    When restoring, skip items that already exist instead of updating them. Default: false

    ❌

    ✅

    Google Forms

    Domain-specific settings

    File ID changes; content hash and timestamps preserved

    Calendar API

    https://www.googleapis.com/auth/calendar.readonly

    https://www.googleapis.com/auth/calendar

    Google Calendar

    Calendar events

PNG

    Enable domain-wide delegation in Google Cloud Console and authorize scopes

    {
      "type": "service_account",
      "project_id": "your-project-id",
      "private_key_id": "...",
      "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
      "client_email": "[email protected]",
      "client_id": "...",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token"
    }
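
A quick way to sanity-check a key file before configuring a backup is to verify it contains the fields shown above. This is an illustrative Python sketch, not part of Duplicati:

```python
import json

# Fields a Google service-account key file is expected to contain,
# matching the example above. Illustrative check only.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email", "token_uri"}

def missing_key_fields(path: str) -> list:
    """Return a sorted list of missing fields; empty means the file looks usable."""
    with open(path) as fh:
        data = json.load(fh)
    missing = sorted(REQUIRED_FIELDS - set(data))
    if data.get("type") not in (None, "service_account"):
        missing.append("type (must be 'service_account')")
    return missing
```

If the list is non-empty, re-download the JSON key from the Google Cloud Console rather than editing the file by hand.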

    Office 365 backup and restore

    This page describes how Office 365 backup and restore works with Duplicati

    Duplicati supports backing up and restoring Microsoft Office 365 (Microsoft 365) data through the Microsoft Graph API. The backup workflow captures content in native formats (for example, MIME for email and JSON for structured objects), and restore operations re-create items through the appropriate Graph API endpoints.

    circle-info

    Office 365 backup and restore was added in Canary 2.2.0.104

    circle-exclamation

The Office 365 backup feature is limited to 5 mailboxes or sites. It is source-available, but not open source like the rest of Duplicati. There are no limitations on restore.

    A license is required to use Office 365 backup in production. Contact Duplicati Inc. support or sales to obtain a license.

    hashtag
    Overview

    Key characteristics

    • Format preservation: Data is stored in native formats such as MIME, JSON, and HTML.

    • Metadata retention: Original timestamps, identifiers, and properties are preserved where possible.

    • Restore to local disk: It is possible to restore all data to a local destination for forensics or manual investigation.

    hashtag
    Office 365 configuration options

To use the Office 365 backup source, you must supply a tenant ID and either a client secret or a certificate. When choosing what to back up, you can filter on root types (users, groups, etc.) and apply filters for fine-grained exclusion of data.

When backing up an Office 365 tenant, the advanced option --store-metadata-content-in-database must be enabled.
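
Putting these options together, a backup invocation could look like the following. This is a sketch, not a verbatim command: all IDs, secrets, and the storage destination are placeholders, and the `@office365://` source URL mirrors the form shown in the restore path syntax section — consult the UI-generated configuration for the exact source string.

```shell
# Hypothetical example: back up one user's data to local disk.
duplicati-cli backup file:///backups/o365 \
  "@office365://users/[email protected]" \
  --office365-tenant-id=00000000-0000-0000-0000-000000000000 \
  --office365-client-id=YOUR_APP_ID \
  --office365-client-secret=YOUR_SECRET \
  --store-metadata-content-in-database=true
```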

    hashtag
    Authentication methods

    Method
    Description
    Use case

    hashtag
    Connection configuration settings

    Parameter
    Description
    Required for

    hashtag
    Additional settings

    Parameter
    Description

    hashtag
    Permission types

    • Application permissions: Used for most backup/restore operations and require admin consent.

    • Delegated permissions: Required for certain features (Tasks, Notes, Group Calendar).

    hashtag
    Supported data types

    Data type
    Backup
    Restore (*)
    Notes

    hashtag
    Backup and restore details by type

    hashtag
    Email (Exchange Online)

    Backup

    • Emails exported as MIME (.eml) plus JSON metadata.

    • Folder hierarchy captured and preserved.

    • Attachments included in the MIME content.

    Restore

• Emails are restored to a Restored folder by default; a target folder can be specified.

    • Original folder structure recreated within the target.

    • Duplicate detection via InternetMessageId.

    hashtag
    OneDrive for Business

    Backup

    • Files downloaded with original content.

    • Folder structure and metadata captured.

    • Sharing permissions recorded.

    Restore

    • Files uploaded to the target drive.

    • Large files (>4 MB) use upload sessions.

    • Timestamps restored via fileSystemInfo.

    Duplicate detection

    • Name + path matching

    • Size comparison

    • Hash comparison (QuickXorHash or SHA1)
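
The three checks above can be sketched as follows. This illustrates the matching order (cheap checks before hashing), not Duplicati's actual implementation, and uses SHA-1 for the content hash:

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """Content hash used for the final comparison step."""
    return hashlib.sha1(data).hexdigest()

def is_duplicate(local: dict, remote: dict) -> bool:
    """Cheapest checks first: name + path, then size, then content hash."""
    if (local["name"], local["path"]) != (remote["name"], remote["path"]):
        return False
    if local["size"] != remote["size"]:
        return False
    return local["sha1"] == remote["sha1"]
```

Only files that agree on name, path, and size ever reach the hash comparison, which keeps the common case cheap.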

    hashtag
    Calendar (Outlook)

    Backup

    • Events exported as JSON.

    • Recurrence patterns captured.

    • Attachments backed up separately.

    Restore

• Events are created in a Restored calendar by default; a target calendar can be specified.

    • Series master events restored first.

    • Exception events linked to restored masters.

    hashtag
    Contacts (Outlook)

    Backup

    • Contacts exported as JSON.

    • Contact photos backed up separately.

    • Folder hierarchy captured.

    Restore

• Contacts are created in a Restored folder by default; a target folder can be specified.

    • Original folder structure recreated.

    • Photos restored after contact creation.

    hashtag
    Planner

    Backup

    • Plans, buckets, and tasks captured.

    • Task details (description, checklist) included.

    • Assignments and labels preserved.

    Restore

    • Plans must already exist (Graph API limitation).

    • Buckets created in the target plan.

    • Tasks restored with full details.

    hashtag
    OneNote

    triangle-exclamation

    All OneNote items require delegated permissions

    Backup

    • Notebooks, section groups, sections enumerated.

    • Pages exported as HTML content.

    • Requires delegated permissions.

    Restore

    • Notebooks created in the user’s OneDrive.

    • Sections created within notebooks.

    • Pages restored as HTML.

    hashtag
    To-Do tasks

    Backup

    • Task lists enumerated.

    • Tasks with all properties captured.

    • Checklist items and linked resources included.

    Restore

    • Task lists created.

    • Tasks restored with properties.

    • Checklist items restored.

    • Linked resources restored.

    hashtag
    Teams chats

    Backup

    • Chat conversations enumerated.

    • Messages captured with content.

    • Hosted content (images) backed up.

    Restore

    • New chats created with members.

    • Messages sent to new chats.

    • Limitation: Original sender context lost.

    hashtag
    Teams channels

    Backup

    • Standard and private channels enumerated.

    • Channel properties captured.

    • Messages and replies backed up.

    Restore

    • Channels created if not existing.

    • Existing channels reused by name match.

• Messages posted to channels if the app is whitelisted by Microsoft.

    hashtag
    SharePoint

    Document libraries

    • Files and folders captured and restored similarly to OneDrive.

    Lists

    • Lists enumerated with schema.

    • List items captured with attachments.

    • Lists and items restored with field values and attachments.

    hashtag
    Technical limitations

    hashtag
    API limitations

    Limitation
    Description
    Impact

    hashtag
    Permission limitations

    Feature
    Permission type
    Notes

    hashtag
    Data fidelity limitations

    circle-info

For restores into an Office 365 tenant, data is technically "created again", as the API does not support restoring content. Visually the data looks the same, but internal values such as timestamps or references may differ in the restored version.

    Data type
    Limitation

    hashtag
    API permissions reference

    Permissions are divided by operation (backup vs restore) and permission model (application vs delegated). Application permissions require admin consent.

    hashtag
    Backup permissions

    Application permissions

    Delegated permissions (Planner, OneNote, To-Do, Group Calendar)

    hashtag
    Restore permissions

    Application permissions

    Delegated permissions (Planner, OneNote, To-Do, Group Calendar)

    hashtag
    Permissions by data type

    Data type
    Backup (read)
    Restore (write)
    Permission model

    hashtag
    Data format and storage

    hashtag
    Backup data formats

    Data type
    Format
    Extension

    hashtag
    Metadata structure

    Each backed-up item includes metadata with common properties:
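
The common properties carry an o365: prefix (Type, Id, Name, CreatedDateTime, LastModifiedDateTime). When data is restored to local disk, these can be read back with a few lines of Python; this is a hedged sketch assuming the metadata is stored as a plain JSON file:

```python
import json
from datetime import datetime

def read_item_metadata(path: str) -> dict:
    """Map the o365:-prefixed properties to friendlier field names."""
    with open(path) as fh:
        raw = json.load(fh)
    return {
        "type": raw["o365:Type"],
        "id": raw["o365:Id"],
        "name": raw["o365:Name"],
        "created": datetime.fromisoformat(raw["o365:CreatedDateTime"]),
        "modified": datetime.fromisoformat(raw["o365:LastModifiedDateTime"]),
    }
```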

    hashtag
    Storage requirements

    • Email: MIME content + JSON metadata per message

    • Files: Original file size + metadata overhead

    • Calendar: JSON per event (typically 1–10 KB)

    hashtag
    Cross-tenant restore

    hashtag
    Supported scenarios

    Scenario
    Supported
    Notes

    hashtag
    Cross-tenant considerations

    1. User mapping: Users referenced in data (assignments, sharing) must exist in the target tenant.

    2. Group mapping: Groups must exist in the target tenant for group-related restores.

    3. App availability: Teams apps must be available in the target tenant’s app catalog.

    hashtag
    Restore path syntax

If not using the UI to pick the destination, the command-line syntax looks like this:

    hashtag
    Best practices

    hashtag
    Backup recommendations

    1. Enable metadata storage with --store-metadata-content-in-database.

    2. Schedule regular backups.

    3. Verify required permissions before backup.

    hashtag
    Restore recommendations

    1. Test restore procedures regularly.

    2. Verify target user/group exists before restore.

    3. Use --office365-ignore-existing carefully; default behavior skips duplicates.

    hashtag
    Security considerations

    1. Store client secrets securely.

    2. Prefer certificates over secrets in production.

    3. Request only the permissions you need.

    4. Monitor backup/restore operations with audit logging.

    hashtag
    References

    Cross-tenant support: Data can be restored into different tenants.

  • Cross-target support: Data from one user/group/site can be restored into another.

  • OAuth 2.0 credentials for an AD super admin

    Some APIs require this

    Client secret for authentication

    Client secret

    --office365-certificate-path

    Path to X.509 certificate (alternative to secret)

    Certificate

    --office365-certificate-password

    Certificate password if encrypted

    Certificate

    --office365-graph-base-url

    Graph API base URL (default: https://graph.microsoft.com)

    Sovereign clouds

    --office365-scope

Set the permission scope (default: https://graph.microsoft.com/.default)

    Custom permissions

    Hierarchy preserved

| Data type | Backup | Restore (*) | Notes |
| --- | --- | --- | --- |
| Mailbox rules | ✅ | ✅ | Inbox rules and filters |
| Mailbox settings | ✅ | ✅ | Auto-replies, signatures |
| OneDrive files | ✅ | ✅ | Large file upload sessions supported |
| OneDrive folders | ✅ | ✅ | Structure preserved |
| File permissions | ✅ | ✅ | Sharing restored via invite endpoint |
| Calendar events | ✅ | ✅ | Including recurrence patterns |
| Calendar attachments | ✅ | ✅ | File attachments on events |
| Contacts | ✅ | ✅ | Including contact photos |
| Contact folders | ✅ | ✅ | Folder hierarchy preserved |
| Planner plans | ✅ | ⚠️ | Plans cannot be created via API |
| Planner buckets | ✅ | ✅ | Restored to existing plans |
| Planner tasks | ✅ | ✅ | Full task details and assignments |
| OneNote notebooks | ⚠️ | ⚠️ | Requires delegated permissions |
| OneNote sections | ⚠️ | ⚠️ | Including section groups |
| OneNote pages | ⚠️ | ⚠️ | HTML content |
| To-Do task lists | ⚠️ | ⚠️ | Requires delegated permissions |
| To-Do tasks | ⚠️ | ⚠️ | Including checklist items |
| User profile | ✅ | ✅ | Photo and editable properties |
| User chats | ⚠️ | ⚠️ | Backup needs delegated permissions; restore has API limitations and needs MS whitelisting |
| Chat messages | ⚠️ | ⚠️ | Backup needs delegated permissions; restore has API limitations and needs MS whitelisting |
| Teams channels | ✅ | ✅ | Standard and private channels |
| Channel messages | ✅ | ⚠️ | Including replies; restore requires MS whitelisting |
| Channel tabs | ✅ | ✅ | Tab configuration preserved |
| Team apps | ✅ | ✅ | App installation restored |
| Group conversations | ✅ | ⚠️ | Threads and posts; restore requires MS whitelisting |
| Group calendar | ✅ | ✅ | Requires delegated permissions |
| Group members | ✅ | ✅ | Membership restored |
| Group owners | ✅ | ✅ | Ownership restored |
| Group settings | ✅ | ✅ | Configuration properties |
| SharePoint sites | ✅ | ✅ | Document libraries |
| SharePoint lists | ✅ | ✅ | Including list items |
| List item attachments | ✅ | ✅ | File attachments |

    Large emails handled via chunked upload.

    Permissions restored via the invite endpoint.
    Duplicate detection by email/name.
Limitation: Requires an app registration whitelisted by Microsoft.
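
Chunked uploads work by sending the content in sequential byte ranges, one Content-Range header per request. A minimal sketch of computing those ranges (illustrative only; the Graph API additionally requires chunk sizes that are multiples of 320 KiB):

```python
def chunk_ranges(total_size: int, chunk_size: int = 4 * 320 * 1024):
    """Yield (start, end_inclusive, content_range_header) for each chunk.

    The default chunk size is a multiple of 320 KiB, as Graph upload
    sessions require.
    """
    for start in range(0, total_size, chunk_size):
        end = min(start + chunk_size, total_size) - 1
        yield start, end, f"bytes {start}-{end}/{total_size}"
```

Each tuple maps directly to one PUT request against an upload session URL, with the header telling the server which slice of the file the body contains.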

    Cannot restore inline images

    Images lost on restore

    Planner plan creation

    Plans cannot be created via API

    Plans must pre-exist

    File versions

    Only current version backed up

    Historical versions not available

    Soft-deleted items

    Not captured in backup

    Recently deleted items excluded

    Rate limiting

    Graph API throttling (429 responses)

    Automatic retry with backoff
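
The retry behavior for throttled (HTTP 429) responses can be sketched as exponential backoff that honors a Retry-After hint when the server provides one. This is an illustration of the pattern, not Duplicati's implementation:

```python
import time

class Throttled(Exception):
    """Raised when the server answers 429; may carry a Retry-After value."""
    def __init__(self, retry_after=None):
        self.retry_after = retry_after

def with_backoff(request, max_attempts: int = 5, base_delay: float = 1.0):
    """Call request() until it stops raising Throttled, doubling the wait each time."""
    for attempt in range(max_attempts):
        try:
            return request()
        except Throttled as exc:
            if attempt == max_attempts - 1:
                raise
            # Prefer the server's Retry-After hint over the computed delay.
            delay = exc.retry_after or base_delay * (2 ** attempt)
            time.sleep(delay)
```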

    Delegated

    Requires user context

    All others

    Application

    Admin consent required

    Messages

    Original sender/timestamp lost on restore

    Application

| Data type | Backup (read) | Restore (write) | Permission model |
| --- | --- | --- | --- |
| Mailbox rules | MailboxSettings.Read | MailboxSettings.ReadWrite | Application |
| Mailbox settings | MailboxSettings.Read | MailboxSettings.ReadWrite | Application |
| User calendar | Calendars.Read | Calendars.ReadWrite | Application |
| Calendar attachments | Calendars.Read | Calendars.ReadWrite | Application |
| User contacts | Contacts.Read | Contacts.ReadWrite | Application |
| Contact folders | Contacts.Read | Contacts.ReadWrite | Application |
| Contact photos | Contacts.Read | Contacts.ReadWrite | Application |
| OneDrive files | Files.Read.All | Files.ReadWrite.All | Application |
| OneDrive folders | Files.Read.All | Files.ReadWrite.All | Application |
| File permissions | Files.Read.All | Files.ReadWrite.All | Application |
| User profile | User.Read.All | User.ReadWrite.All | Application |
| User photo | User.Read.All | User.ReadWrite.All | Application |
| SharePoint sites | Sites.Read.All | Sites.ReadWrite.All | Application |
| SharePoint lists | Sites.Read.All | Sites.ReadWrite.All | Application |
| List items | Sites.Read.All | Sites.ReadWrite.All | Application |
| Teams channels | Channel.ReadBasic.All | Channel.Create | Application |
| Channel messages | ChannelMessage.Read.All | ChannelMessage.Send | Application |
| Channel tabs | TeamsTab.Read.All | TeamsTab.ReadWrite.All | Application |
| Team apps | TeamsAppInstallation.ReadForTeam.All | TeamsAppInstallation.ReadWriteForTeam.All | Application |
| User chats | Chat.Read.All | Chat.ReadWrite.All | Application |
| Chat messages | Chat.Read.All | ChatMessage.Send | Application |
| Group membership | Group.Read.All | Group.ReadWrite.All | Application |
| Group settings | Group.Read.All | Group.ReadWrite.All | Application |
| Planner plans | Tasks.Read | Tasks.ReadWrite | Delegated |
| Planner buckets | Tasks.Read | Tasks.ReadWrite | Delegated |
| Planner tasks | Tasks.Read | Tasks.ReadWrite | Delegated |
| OneNote notebooks | Notes.Read | Notes.ReadWrite.All | Delegated |
| OneNote sections | Notes.Read | Notes.ReadWrite.All | Delegated |
| OneNote pages | Notes.Read | Notes.ReadWrite.All | Delegated |
| To-Do task lists | Tasks.Read | Tasks.ReadWrite | Delegated |
| To-Do tasks | Tasks.Read | Tasks.ReadWrite | Delegated |
| Group calendar | Group.Read.All | Group.ReadWrite.All | Delegated |

    JSON

    .json

| Data type | Format | Extension |
| --- | --- | --- |
| Contacts | JSON | .json |
| Contact photos | Binary | .photo |
| Files | Original | (original extension) |
| OneNote pages | HTML | .html |
| All metadata | JSON | .json |

    Contacts: JSON per contact (typically 1–5 KB)

    ✅

    Requires credentials for target

    Different user, different tenant

    ✅

    Full cross-tenant restore

    Permission scope: Application must have permissions in the target tenant.
    Review restored sharing permissions.
  • Be aware of limitations when restoring into a tenant.

  • Client secret

    App client credentials with secret

    Automated or unattended backup

    Certificate

    OAuth 2.0 client credentials with X.509 certificate

    Higher security environments

    --office365-tenant-id

    Azure AD tenant ID (GUID or domain)

    All

    --office365-client-id

    Azure AD application (client) ID

    Client secret

    --office365-included-root-types

    The different root types to include for backups. The default setting is to include all types: Users, Groups and Sites

    --office365-included-user-types

    The data types to include from users in backups. The default settings include: Profile, Mailbox, Calendar, Contacts, Planner and Chats. If delegated permissions are used, these types can also be included: Tasks and Notes

    --office365-included-group-types

    The data types to include from groups in backups. The default settings include: Mailbox, Files, Planner and Teams. If delegated permissions are used, these types can also be included: Calendar and Notes

    --office365-ignore-existing

    User email

    ✅

    ✅

    MIME format with metadata

    Email folders

    ✅

    Chat messages restore

Chat message restore needs an app whitelisted by Microsoft

An application must be filed with Microsoft before chat messages can be restored

    Chat message restore

    Cannot preserve original sender

    Messages appear from application

    Tasks (To-Do)

    Delegated

    Requires user context

    Notes (OneNote)

    Delegated

    Requires user context

    Email

    Original message ID preserved, but internal Graph ID changes

    Calendar

    Online meeting links preserved but not recreated

    Contacts

    Contact ID changes on restore

    Files

    User email

    Mail.Read

    Mail.ReadWrite

    Application

    Email folders

    Mail.Read

    Email content

    MIME

    .eml

    Email metadata

    JSON

    .json

    Same user, same tenant

    ✅

    Default restore behavior

    Different user, same tenant

    ✅

    Use --restore-path

    Microsoft Graph Permissions Referencearrow-up-right
    Azure AD App Registrationarrow-up-right

    Delegated user (not currently supported)

    --office365-client-secret

When restoring data into a tenant, the default is to check for existing data to avoid creating duplicates. Use this option to always recreate data in the destination.

    ✅

    Chat hosted content

    Group calendar

    File ID changes; hash and timestamps preserved

    Mail.ReadWrite

    Calendar events

    Same user, different tenant

    Mail.Read
    MailboxSettings.Read
    Calendars.Read
    Contacts.Read
    Files.Read.All
    Sites.Read.All
    User.Read.All
    Group.Read.All
    ChannelMessage.Read.All
    Channel.ReadBasic.All
    Team.ReadBasic.All
    TeamsTab.Read.All
    TeamsAppInstallation.ReadForTeam.All
    Chat.Read.All
    Tasks.Read
    Notes.Read
    Group.Read.All
    Mail.ReadWrite
    MailboxSettings.ReadWrite
    Calendars.ReadWrite
    Contacts.ReadWrite
    Files.ReadWrite.All
    Sites.ReadWrite.All
    User.ReadWrite.All
    Group.ReadWrite.All
    ChannelMessage.Send
    Channel.Create
    TeamsTab.ReadWrite.All
    TeamsAppInstallation.ReadWriteForTeam.All
    Chat.ReadWrite.All
    ChatMessage.Send
    Tasks.ReadWrite
    Notes.ReadWrite.All
    Group.ReadWrite.All
    {
        "o365:Type": "SourceItemType",
        "o365:Id": "Graph API ID",
        "o365:Name": "Display name",
        "o365:CreatedDateTime": "ISO 8601 timestamp",
        "o365:LastModifiedDateTime": "ISO 8601 timestamp"
    }
    --restore-path="@office365://users/{target-user-id}?office365-tenant-id=..."
    --restore-path="@office365://groups/{target-group-id}?office365-tenant-id=..."
    --restore-path="@office365://sites/{target-site-id}?office365-tenant-id=..."
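
Combining this with a restore run, a full command could look like the following. This is a sketch with placeholder IDs and a local storage URL; adapt it to your own backend and credentials:

```shell
# Hypothetical example: restore into a specific user in the target tenant.
duplicati-cli restore file:///backups/o365 \
  --restore-path="@office365://users/00000000-0000-0000-0000-000000000001?office365-tenant-id=00000000-0000-0000-0000-0000000000aa" \
  --office365-client-id=YOUR_APP_ID \
  --office365-client-secret=YOUR_SECRET
```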