
Duplicati Documentation

Welcome to the Duplicati Documentation! This site contains documentation for using the open source Duplicati client, including best practices, pro tips, and troubleshooting.

Jump right in

If you cannot find an answer on this site, you can always ask a question on our helpful forum 🤗.

Installation

Install the Duplicati client

Set up a backup

Configure your first backup

Configuring a destination

Show all destinations

Running a backup

This page describes how to run a backup outside of an automatic schedule

With a configured backup, you can have a schedule that runs the backup automatically each day. Automatic runs are recommended because they ensure your backups are recent when you need them.

Even if the backup already has a schedule there may be times where you want to manually run a backup. If you have just configured a backup, you may want to run it ahead of the scheduled next run. If you are within the UI you can click the "Run now" link for the backup.

Once the backup is running, the top area will act as a progress bar that shows how the backup progresses. Note that the first run of a backup is the slowest run because it needs to process every file and folder that is part of the source. On later runs it will recognize what parts have changed and only process the new and changed data.

After running a backup, the view will change slightly and show some information about the backup.

If you need to automate starting a backup without using the UI, you can use ServerUtil to trigger backups from the commandline.
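As a hedged sketch, triggering a backup from a script could look like the following; the binary name and the `run` verb are assumptions to verify against your installed version's help output:

```shell
# Hypothetical: trigger the backup with id 1 on the locally running server.
# Binary name and "run" verb assumed; verify with `duplicati-server-util help`.
duplicati-server-util run 1
```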

Using the secret provider

This page describes how to use the secret provider.

The secret provider was introduced in Duplicati version 2.0.9.109 and aims to reduce the possibility of leaking passwords from Duplicati by not storing the passwords inside Duplicati.

To start using a secret provider you need to set only a single option:

--secret-provider=<url>

This will make the secret provider available for the remainder of the application.

You can then insert placeholder values where you want secrets to appear but without storing the actual secret in Duplicati. For commandline users, the secrets can appear in both the backend destination or in the options.

As an example:

duplicati backup \
  's3://example-bucket?auth-username=$s3-user&password=$s3-pass' \
  --passphrase='$passphrase'

Duplicati will find the three keys prefixed with $ and look them up with the secret provider. The provider is invoked to obtain the real values, and the values are replaced before running the operation. Note the single quotes, which stop the shell from expanding the placeholders itself. If the secret provider has these values:

s3-user=user
s3-pass=pass
passphrase=my-password

The example from above will then be updated internally, but without having the keys written on disk:

duplicati backup \
  's3://example-bucket?auth-username=user&password=pass' \
  --passphrase=my-password

To ensure you never run with an empty string or a placeholder instead of the real value, all requested values need to be present in the secret provider, or the operation will fail with a message indicating which key was not found.

Choosing Duplicati Type

This page describes the different ways to run Duplicati

When using Duplicati, you need to decide on what type of instance you want to use. Duplicati is designed to be flexible and work with many different setups, but generally you can use this overview to decide what is best for you:

The TrayIcon

When running, the TrayIcon gives a visual indication of the current status, and provides access to the visual user interface by opening a browser window.

The Server

When running the server it will emit log messages to the system log and expose a web server that can be accessed via a browser. Beware that if you are running the Server as root/Administrator, you are also running a web server with those same privileges, which you need to protect.

The Agent

If you have multiple machines to manage, using the console enables you to access all the backups, settings, logs, controls, etc. from one place.

The Command Line Interface (CLI)

Mixing types

For some additional flexibility in configurations it is also possible to combine the different types in some ways.

Combining Server and TrayIcon

If the server is used primarily to elevate privileges, it is possible to have the TrayIcon run in the local user desktop and connect to an already running Server. To do this, change the TrayIcon commandline and add additional arguments:

duplicati --no-hosted-server \
  --hosturl=http://localhost:8200 \
  --webservice-password=<password>

The --no-hosted-server argument disables launching another (competing) server, and the two other arguments will give information on how to reach the running server.

Triggering Server jobs externally

Using the CLI for Server backups

Cloud providers

This page lists the cloud providers supported as secret providers

Setting up and using any of the vaults described here is outside the scope of this document.

HashiCorp Vault

To connect to the vault, provide the url as part of the configuration:

--secret-provider=
  hcv://localhost:8200?token=<token>&secrets=app1,app2

The url is converted to the url used to connect to the vault (e.g., https://localhost:8200 in this example). The token is used to authenticate, and the secrets are the vaults that secrets are read from.

In the cloud-based offering, the "secrets" values shown here are referred to as Apps, and in the CLI as "mount points". When more than one value is supplied, the vaults are tried in order, and the search stops once all secrets are resolved. This means that if the same secret key is found in two vaults, the value from the first vault examined will be used.

Other options for hcv://

For development purposes, the url can use an http connection by setting &connection-type=http, but this should not be used in production.

To connect using a credential pair instead of the token, the credentials can be provided with the values client-id and client-secret, but they should preferably be passed via the environment variables:

export HCP_CLIENT_ID=<client-id>
export HCP_CLIENT_SECRET=<secret>

--secret-provider=hcv://localhost:8200?secrets=app1

By default, the key lookup is done case-insensitively, but it can be made case-sensitive with the option &case-sensitive=true.
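Putting the options above together, a development-only configuration might look like the following (the token and app names are placeholders, and &connection-type=http is only for local testing):

```shell
--secret-provider=hcv://localhost:8200?token=<token>&secrets=app1,app2&connection-type=http&case-sensitive=true
```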

Amazon Secret Manager

export AWS_ACCESS_KEY_ID=<id>
export AWS_SECRET_ACCESS_KEY=<key>

--secret-provider=awssm://?region=us-east-1&secrets=vault1,vault2

The secrets values name the vaults to use (called "Secret Name" in the AWS Console). When more than one value is supplied, the vaults are tried in order, and the search stops once all secrets are resolved. This means that if the same secret key is found in two vaults, the value from the first vault examined will be used.

Instead of supplying the region, the entire service endpoint url can also be provided via &service-url=.

By default, the key lookup is done case-insensitively, but it can be made case-sensitive with the option &case-sensitive=true.

Google Cloud Secret Manager

--secret-provider=gcsm://?project-id=<projectid>
--secret-provider=gcsm://?project-id=<projectid>&token=<token>

Additional options for gcsm://

By default, the secrets are accessed with the version set to latest, but this can be changed with &version=. Finally, the communication protocol can be changed from gRPC to https by adding &api-type=Rest.

Azure Key Vault

--secret-provider=azkv://?keyvault-name=keyvault

Instead of supplying the name of the keyvault, the full vault url can be supplied with &vault-uri=.

Manually authenticating

Instead of relying on the automated login handling, it is possible to authenticate with either a client credential or a username/password pair.

For authenticating with client credentials, use:

--secret-provider=azkv://?keyvault-name=keyvault
  &auth-type=ClientSecret
  &tenant-id=<tenantid>
  &client-id=<clientid>
  &client-secret=<secret>

And for username/password, use:

--secret-provider=azkv://?keyvault-name=keyvault
  &auth-type=UsernamePassword
  &tenant-id=<tenantid>
  &client-id=<clientid>
  &username=<username>
  &password=<password>

Home user, single desktop machine: TrayIcon or Agent

Server backup or headless: Server, CLI or Agent

Multiple machines: Server, CLI or Agent

The TrayIcon is meant to be the simplest way to run Duplicati with the minimal amount of effort required. The TrayIcon starts as a single process, registers with the machine desktop environment, and shows a small icon in the system status bar (usually to the right, at either the top or bottom of the screen).

The Server mode is intended for users who want to run the full Duplicati with a user interface, but without a desktop connection. The Server is usually run as a system service so it has elevated privileges and is started automatically with the system.

When the Server is running it will lock down access to only listen on the loopback adapter and refuse connections not using an IP address as the hostname. If you need to access the Server from another machine, make sure you protect it.

When running the Server you also need to configure a password.

The Agent mode is intended for users who want to run Duplicati with remote access through the Duplicati Console. The benefit is that you do not need to provide any local access, as all access is protected with HTTPS and additional channel encryption from the Agent to the browser you are using.

The CLI mode is intended for advanced users who prefer to manage and configure each of the backups manually. The typical use is a server-like setup where the backups run as cron scheduled tasks or are triggered by some external tool.

If you prefer to use the Server (or TrayIcon) but would like to trigger the backups with an external scheduler or event system, you can use ServerUtil to trigger a backup or pause/resume the server.

If you are using the Server (or TrayIcon) but you want to run a command that is not in the UI, it is possible to use the CLI to run commands on the backups defined in the Server. Note that the Server and CLI use different ways of keeping track of the local database, so you need to obtain the storage destination url and the database path from the Server and then run the CLI.

For cloud-based providers there is generally a need to pass some kind of credentials to access the storage, as well as the possibility of a provider being unavailable for a short period. To address these two issues, see the section on how to avoid passing credentials on the commandline and how to protect against outages.

The implementation for HashiCorp Vault supports both the cloud-based offering as well as the self-hosted version as sources.

The provider for AWS Secret Manager supports the AWS hosted vault. The credentials for the vault are the regular Access Key Id and Access Key Secret. While these can be provided via the secret provider url as access-id and access-key, they should be passed via the environment variables:

The secret provider for Google Cloud Secret Manager relies on the Google Cloud SDK to handle the authentication. Follow the steps to get the environment authenticated with Google. After the authentication is complete, the configuration is:

If you need to integrate with a different flow you can also supply an access token, but notice that the token may be short-lived and you cannot change the token after configuring the secret provider:

With Azure Key Vault as the provider there are several options for authenticating, where the most secure method is to use the Azure CLI login that handles all the details. Since this method is the default, the secret provider can be configured as:


Installation

This page describes how to install Duplicati on the various supported platforms

The Duplicati package types

For desktop and laptop users, the most common application type is called the "GUI" package, which is short for Graphical User Interface. The GUI package includes the core components, a webserver to show the user interface and a tray icon (also called a status bar icon).

For users installing in environments without a desktop or screen, there are also commandline only, remote management and Docker versions. Depending on your setup, you may also want to use one of those packages on a desktop or laptop.

This page covers only the GUI installation.

Jump to the section that is relevant to you:

Install Duplicati on Windows

The most common installation format on Windows is the MSI package. To install on Windows you need to know what kind of processor is in your system. If you are unsure, you are most likely using a 64-bit processor, also known as x64. There is also a version supporting Arm64 processors, and a version for legacy 32-bit Windows called x86.

Install Duplicati on MacOS

For MacOS the common installation method is a DMG file containing the application. Most modern MacOS machines use Apple Silicon, which is called Arm64 in Duplicati's packages. If you are on an older Mac with a 64-bit Intel processor, you can use the x64 package instead.

Install Duplicati on Linux

Most Linux distributions work well with Duplicati, but there are only packages for Debian-based distributions (Ubuntu, Mint, etc) and RedHat-based distributions (Fedora, SUSE, etc). For other distributions you may need to manually install some dependencies.

For Linux distributions there are packages for the most common 64-bit systems with x64, along with support for Arm64 and its predecessor Armv7, aka ArmHF, which are commonly found in NAS boxes and the older Raspberry Pi series.

Install on Debian-based Linux (Ubuntu, Mint, etc)

To install Duplicati on a Debian based system, first download the .deb package matching the system architecture, then run:

sudo dpkg -i duplicati-version-arch.deb

Install on RedHat-based Linux (Fedora, SUSE, etc)

To install Duplicati on a RedHat-based system, first download the .rpm package matching the system architecture, then run:

sudo yum install duplicati-version-arch.rpm

Install on another Linux distribution

For other Linux distributions you can use the .zip file that matches your system architecture. Inside the zip file are all the binaries that are needed, and you can simply place them in a folder that works for your system. Generally, all dependencies are included in the packages, so unless you are using a very slimmed-down setup, it should work without additional packages.

Set up a backup in the UI

Describes how to configure a backup in Duplicati

In the UI, start by clicking "Add backup", and choose the option "Configure a new backup":

To set up a new backup there are some details that are required, and these are divided into 5 steps:

1. Basic configuration (descriptive name, passphrase)

2. Storage destination (where to store the backups)

3. Source data (what data should be backed up)

4. Schedule (automatically run backups)

5. Retention and miscellaneous (when to delete old backups and more)

1. Basic configuration

For the basic configuration, you need to provide a name and setup encryption:

The name and description fields can be any text you like; they are only used to display the backup configuration in lists so you can differentiate between multiple backups.

The encryption setup allows you to choose an encryption method and a passphrase. Encryption adds a minor overhead to the processing, but is generally a good idea to add. If you opt out of encryption, make sure you control the storage destination and have adequate protections in place.

Be sure to store the chosen or generated passphrase in a safe location as it is not possible to recover anything if this passphrase is lost!

To avoid weak passphrases, Duplicati has a built-in passphrase generator as well as a passphrase strength measurer.

2. Storage destination

The storage destination is arguably the most technical step because it is where you specify how to connect to the storage provider you want to hold your information. Some destinations require only a single setting, where others require multiple.

Each backup created by Duplicati requires a separate folder. Do not create two backups that use the same destination folder as they will keep breaking each other.

When the details are entered, it is recommended that you use the "Test" button which will perform some connection tests that helps reveal any issues with the entered information.

When the destination is configured as desired, click the "Next" button.

3. Source data

In the third step you need to define what data should be backed up. This part depends on your use. If you are a home user, you may want to back up images and documents. An IT professional may want to back up databases.

In the source picker view you can choose the files and folders you would like to back up. If you pick a folder, all subfolders and files in that folder will be included. You can use the UI to uncheck some items that you want to exclude, and they will show up with a red X.

Once you are satisfied with the source view, click the "Next" button to continue to the schedule step.

4. Schedule

Having an outdated backup is rarely an ideal solution, but remembering to run backups is also tedious and easy to forget. To ensure you have up-to-date backups, there is a built-in scheduler in Duplicati that you can enable to have Duplicati run automatically.

Once satisfied with the schedule, click "Next".

5. Retention and miscellaneous

Even though Duplicati has deduplication and compression to reduce the stored data, it is inevitable that old data is stored that will take up space, but is not needed for restore. In this final configuration step you can decide when old versions are removed and what size of files to store on the destination.

For the retention setting, it is inevitable that the backups will grow as new and changed data is added. If nothing is ever deleted, the backup size will keep growing. With the retention settings you can choose how to automatically remove older versions.

The setting "Smart backup retention" is meant to be useful for most users where it keeps one daily backup and then gradually fewer versions going back in time.

Once you are satisfied with the settings, click the "Save" button.

You have now configured your backup! 🎉

Advanced configurations

Sharing the secret provider

This sharing simplifies the setup by only having a single secret provider configuration and then letting each of the other parts access secrets without further configuration. If needed, the secret providers can be specified for the individual backups, such that it is possible to opt-out of using the shared secret provider.

How to avoid passing credentials on the commandline

To make secrets passed to the application a bit harder to obtain, the value for --secret-provider is treated as an environment variable if:

  • It starts with $, optionally with curly brackets {}:

    • $secretprovider

    • ${secretprovider}

  • It starts and ends with %:

    • %secretprovider%

No expansion is done on environment variables, so the entire provider string is required to be set as an environment variable.
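A minimal sketch of this indirection (the variable name and provider url are examples):

```shell
# Store the complete provider url in an environment variable.
export SECRETPROVIDER='file-secret:///home/user/secrets.json'

# Pass the literal placeholder to Duplicati; the single quotes stop the
# shell from expanding it, so Duplicati sees "$SECRETPROVIDER" and resolves
# it from the environment itself:
#   duplicati backup ... --secret-provider='$SECRETPROVIDER'
```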

How to protect against secret provider outages

If you run an operation and the secret provider is unavailable when the secrets are requested, the operation will fail. For most uses, outages are so rare that this is acceptable.

However, for some uses it is important that the backups keep running, even in the face of outages. To handle this need, Duplicati supports an optional cache strategy via --secret-provider-cache, with the values None, InMemory, and Persistent:

Storing the secrets anywhere makes it more likely that they are eventually leaked. For that reason, the default is the cache setting None, which turns off caching fully and relies only on the provider.

Finally, the Persistent option will write secrets to disk, so it can handle situations where the provider is unavailable during startup, or where a shared provider does not work.

As the purpose of the secret provider is to prevent the secrets from being stored in readable form, the cached secrets are written to disk encrypted with a passphrase derived from the secret provider url. If the secret provider url does not already contain a strong secret, it is possible to add any parameter to the url to increase the strength of the key.

If the secret provider url changes, it is no longer possible to retrieve the cached values, and the next run will fail if the provider is unavailable, but will otherwise write a new encrypted cache file to disk.
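As a sketch, one way to add extra key strength is to append an arbitrary high-entropy parameter to the provider url. The parameter name cache-salt is made up (per the text above, any unused parameter works), and note that the value must stay the same between runs, or the cache can no longer be decrypted:

```shell
# Generate 16 random bytes as 32 hex characters and embed them in the url.
SALT=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "--secret-provider=file-secret:///home/user/secrets.json?cache-salt=$SALT"
```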

Restoring files

This page describes how to restore files using the Duplicati user interface

The most important reason to make a backup is the ability to recover the data at a later stage, usually due to some unforeseen incident. Depending on the incident, the original configuration may not be available.

To start a restore process in Duplicati, start on the "Restore" page.

The restore and browsing process is fastest when using a configured backup, because Duplicati can query a local database with information. If the local database is not present, Duplicati needs to fetch enough information from the remote storage to build a partial database before performing the restore.

Direct restore from backup files

To restore files from the backup, Duplicati needs only to know how to access the files and the encryption passphrase (if any). If you do not have the passphrase, it is not possible to restore.

To restore directly from the backup files, the first step is to provide the destination details. These details are the same as you supplied initially when creating the backup. If you are using a cloud provider, you can usually get the needed information via your account on the vendor's website.

Once the details are entered, it is recommended to use the "Test connection" button to ensure that the connection is working correctly. Then click the "Next" button.

Restore from configuration

In the dialog, provide the exported configuration file and the configuration file's encryption passphrase. Note that the passphrase the configuration file is encrypted with is not necessarily the same as the passphrase used to encrypt the backup.

Once the configuration is correct, click the "Import" button and you are given the option to correct the settings before starting the restore process. If you do not need to change anything, click "Next" and then "Connect".

Choosing files to restore

Once Duplicati has a connection to the remote destination it will find all the backups that were made. It will then choose the most recent version and list the files from within that version. Use the "Restore from" dropdown to select the version to restore from, and use the search field to highlight files matching the expression. Click the "Search" button to list only files matching the criteria.

Check the files or folders that you want to restore and then click "Continue".

Choosing restore options

When restoring there are a few options that control how the files are restored.

If you want to restore a file to a previous state, you can leave the settings to their defaults. If you are unsure if you want to revert, or need to examine the files before replacing the current versions, you can choose to restore to another destination. If the folder you are restoring to is not empty, you can choose to store multiple versions of the files by appending the restore timestamp to the filename. This is especially useful if you are restoring multiple versions into a target folder for comparison.

Duplicati will not restore permissions by default because the users and groups that were present on the machine that made the backup may not be present on the machine being restored to. Restoring the permissions can cause you to be unable to access the restored files, if your user does not have the necessary permissions.

When satisfied with the settings, click the "Restore" button and the restore process will restore the files.

Local providers

This page describes the providers that operate locally on the machine they are running

The Environment Variable provider

The simplest provider is the env:// provider, which simply extracts environment variables and replaces the placeholders with their values. There is no configuration needed for this provider, and the syntax for adding it is simply:

--secret-provider=env://

The File Secret provider

The file-secret:// provider supports reading secrets from a file containing a JSON encoded dictionary of key/value pairs. As an example, a file could look like:

{
  "key1": "value1",
  "passphrase": "my password"
}
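A sketch of creating such a file safely; the path /tmp/duplicati-secrets.json is an example, and in practice the file should live in a protected location:

```shell
# Write the example key/value file and restrict read access to the owner.
cat > /tmp/duplicati-secrets.json <<'EOF'
{
  "key1": "value1",
  "passphrase": "my password"
}
EOF
chmod 600 /tmp/duplicati-secrets.json
# The provider is then configured as:
#   --secret-provider=file-secret:///tmp/duplicati-secrets.json
```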

Credential Manager (Windows)

Using libsecret (Linux)

Using the pass secret provider (Linux)

Using the KeyChain (MacOS)

For more advanced uses the options account and service can be used to narrow down what secrets can be extracted.
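As an assumed example, a matching application password can be created from the terminal with macOS's built-in security tool; whether Duplicati matches on exactly these account and service attributes should be verified against the provider's documentation:

```shell
# Store an application password; -a is the account, -s the service name.
security add-generic-password -a duplicati -s passphrase -w 'my-password'
```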


Simply head over to the Duplicati download page and download the relevant MSI package. Once downloaded, double-click the installer. The installation dialog lets you adjust settings to your liking and will install Duplicati. The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

Once you know which kind of Mac you have, head over to the Duplicati download page and download the relevant DMG file. Open the file and drag Duplicati into the Application folder, and then you can start Duplicati.

The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

This will install all dependencies and place Duplicati in the default location on the target system. The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

This will install all dependencies and place Duplicati in the default location on the target system. The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.

Once Duplicati is running, you can set up a backup through the UI. If the UI is not showing, you can use the TrayIcon in your system menu bar and choose "Open". If you are asked for a password before logging in to the UI, see how to access without a password.

If you have an existing backup configuration you want to load in, see the section on import/export.


Due to the number of supported backends, this page does not contain the instructions. Instead, each of the supported destinations is described in detail on the destination overview page.

For more advanced uses, you can also use the filters to set up rules for what to include and exclude. See the section on how filters are evaluated in Duplicati if you have advanced needs.

If you prefer to run the backups manually, disable the scheduler, and you can use ServerUtil to trigger the backups as needed.

The default size of remote volumes is a balance that works with cloud storage and a limited network connection. If you have a fast connection or store files on a local network, consider increasing the size of the remote volumes. For more information see this page on the tradeoffs between sizes.

If the secret provider is configured for the entry application (e.g., the TrayIcon, Server, or Agent) it will naturally work for that application, but it will also be shared within that process.

For the Agent, this means that setting the secret provider for the agent will also let the server that it hosts use the same secret provider. When a backup or other operation is then executed by the server, it will also have access to the same secret provider.

The InMemory setting is the least intrusive version as it only stores the secrets in the process memory. This option is most useful when using a shared provider such that it stays in memory between runs.

If the backup configuration already exists on the machine, you can choose it from the list below the two options for not having a configuration. In this case you can click "Next" and skip to the section on choosing the files to restore.

If you have exported the backup configuration and have the configuration file available, you can click "Next" and skip to the restore from configuration section. You can also read up on how to import and export configurations.

If the backup is not encrypted, leave the field empty. When ready, click "Connect" and Duplicati will examine the remote destination and figure out what backups are present. After working through the information, you can choose the files to restore.

If you have a configuration file you can use the information in that file to avoid entering it manually. If you need to restore more than once, it may be faster to import the configuration and rebuild the local database. After the database is built, you can choose the configuration from the list and skip to choosing files to restore.

The file provider also supports files encrypted with AESCrypt, and you supply the decryption key with the option passphrase. Suppose the file is encrypted with the key my-password; you can then configure the provider:

--secret-provider=file-secret:///home/user/secrets.json.aes?passphrase=my-password

To avoid passing the encryption key via a commandline, see the section on how to inject the secret provider configuration via an environment variable.

On Windows XP and later, the Credential Manager can be used to securely store secrets. As the credentials are protected by the account login, there is no configuration needed, so the setup is simply:

--secret-provider=wincred://

The libsecret implementation stores various credentials on Linux and integrates with various UI applications to let the user approve or reject attempts to read secrets. The libsecret provider supports a single optional setting, collection, which indicates what collection to read from. If not supplied, the default collection is used. To use the libsecret provider, use this argument:

--secret-provider=libsecret://?collection=default

The pass command is a project that implements a secure password storage solution on Linux systems, backed by GPG. Duplicati can use pass as the secret provider:

--secret-provider=pass://
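For illustration, a secret named passphrase could be stored with pass like this; how the provider maps secret keys to pass entry names is an assumption to verify:

```shell
# Prompts for the secret value and stores it under the name "passphrase".
pass insert passphrase
```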

For MacOS users the standard password storage is the KeyChain Access program. The secrets stored there as application passwords can be used by Duplicati. The KeyChain can be enabled as a secret provider with:

--secret-provider=keychain://

Duplicati download page
set up a backup
Duplicati download page
set up a backup
set up a backup
set up a backup
set up a backup
TrayIcon
how to access without a password
section on import/export
destination overview page
how filters are evaluated in Duplicati
ServerUtil
this page on the tradeoffs between sizes
Install on Windows
Install on Linux
Install on MacOS
Basic configuration
Storage destination
Source data
Schedule
Retention and miscelaneous
--secret-provider-cache=None
--secret-provider-cache=InMemory
--secret-provider-cache=Persistent
the section on how to avoid passing credentials on the commandline
how to protect against outages
--secret-provider=env://
{
  "key1": "value1",
  "passphrase": "my password"
}
--secret-provider=file-secret:///home/user/secrets.json.aes?passphrase=my-password
--secret-provider=wincred://
--secret-provider=libsecret://?collection=default
--secret-provider=pass://
--secret-provider=keychain://
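If the secret store can be temporarily unavailable (for example, a locked keychain), a provider can be combined with the caching options shown earlier. As a sketch, using the KeyChain provider together with the persistent cache:

```
--secret-provider=keychain://
--secret-provider-cache=Persistent
```

With Persistent, previously read secrets survive a restart even when the store is unreachable; with InMemory, they only survive within the running process.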

Sending reports

Describes how to send reports with Duplicati

Despite all efforts to make Duplicati as robust as possible against failures, it is not possible to handle every problem that may arise after the initial setup. Common failure causes are revoked credentials, full storage, missing provider updates, and similar issues.

To avoid discovering too late that a backup has stopped working, it is highly recommended to set up automated monitoring of the backups. Duplicati has a number of ways to send reports into a monitoring solution:

  • Send emails

  • Send Jabber/XMPP

  • Send HTTP message

  • Send Telegram message

Using remote management

This page describes how to configure Duplicati to connect to the Duplicati Console and manage the backups from within the console.

Duplicati strives to make it as easy as possible to set up backups, and the built-in scheduler makes it easy to ensure that backups run regularly. Because it is easy to set up a backup and forget about it, a backup can keep running with very little interaction.

Register the local installation

In a default installation, Duplicati will serve up a UI using an internal webserver. This setup works well for workstations and laptops but can be challenging when the machine is not always connected to a display. To securely connect the instance to the Duplicati Console, go to the settings page and find the "Remote access control" section.

Click the button "Register for remote control" to start the registration process. After a short wait, the machine will obtain a registration link:

Registering on the Console

Click the registration link to open a browser and claim the machine in the Duplicati Console:

Click "Register machine" to add it to your account, then return to the Duplicati Settings page where the machine is now registered and ready to connect:

Click the "Enable remote control" button and see that the machine is now connected to the Duplicati Console:

Connecting to the machine

Now that the machine is connected to the Duplicati Console, return to the Duplicati Console and visit Settings -> Registered Machines:

You can now click "Connect" to access the machine directly from the portal!

Using remote control with agent

This page describes how to use the remote agent to connect with remote control

The Agent is designed to be deployed in a way that is more secure and easier to manage at scale than the regular TrayIcon or Server instances. When the agent is running, there is no way to interact with it from the local machine.

On the very first run, the Agent will attempt to register itself with the Duplicati Console. If there is a desktop environment and a browser on the system, the Agent will attempt to open the registration link in the browser. If there is no such option, the Agent will print the link in the console, or in the Event Viewer on Windows. The Agent will repeatedly poll the Console to find out when it is claimed. As long as the Agent is not registered, restarting it will make it attempt to connect again.

Once the agent is registered, it immediately enables the connection and will be listed as a registered machine in the Duplicati Console.

Simplified registration

To skip the registration step and have the agent connect directly to the console without any user intervention, first create a pre-authorized link on the Console. To do this, head to the portal settings and click the "Add registration url" button.

Any machine can now use this pre-authorized url to add machines to your organization in the Console. Click the "Copy" button to get the link to your clipboard and paste it in when registering a machine. Do not share this link with anyone, as it could allow them to add machines to your account.

To revoke a link, simply delete it from within the portal. This prevents new machines from registering, but machines that are already registered will remain.

With the registration link, start the Agent with a commandline such as:

duplicati-agent --registration-url=<copied-url>

This will cause the Agent to immediately show up in the Console. Future invocations of the agent do not require the registration url, but should the Agent somehow be de-registered, it will re-register if the url is set and the link is still valid.

Registration with deployment

To simplify starting the agent in larger-scale deployments, it is possible to configure a preload.json file with the registration url. To do so, create a file named preload.json with the following content:

{
  "args": {
    "agent": [ "--registration-url=<copied-url>" ]
  }
}

This file can then be distributed to the target machine before the package is installed. The preload settings page describes the possible locations where Duplicati will look for such a file.


Monitoring with Duplicati Console

This page describes how to set up monitoring with the Duplicati Console

The Duplicati Console is a paid option for handling monitoring of Duplicati backups, but it has a free usage tier. To get started with the console, head over to the Duplicati Console page and sign up or log in.

On the "Getting started" page you can see the instructions; essentially, you copy and paste the reporting url into the settings page in your Duplicati client. Once set up, all backups will automatically send a report to the console, and you will have a dashboard with the ability to drill down into each machine, each backup configuration, and each report.


Scripts

These options allow you to integrate custom scripts with Duplicati operations, providing automation capabilities before and after backups, restores, or other tasks.

Pre and Post Operation Scripts Run custom scripts before an operation starts or after it completes. Use these to perform preparation tasks (like database locking), cleanup actions, or to trigger notifications based on operation results.

Control Flow Management Configure whether operations should continue or abort based on script execution status, with customizable timeout settings to prevent operation blocking.

Script Output Processing Post-operation scripts receive operation results via standard output, enabling conditional processing based on success or failure.

Scripting options

--run-script-before (Path) Run a script on startup. Executes a script before performing an operation. The operation will block until the script has completed or timed out.

--run-script-after (Path) Run a script on exit. Executes a script after performing an operation. The script will receive the operation results written to stdout.

--run-script-before-required (Path) Run a required script on startup. Executes a script before performing an operation. The operation will block until the script has completed or timed out. If the script returns a non-zero error code or times out, the operation will be aborted.

--run-script-timeout (Timespan) Sets the script timeout. Sets the maximum time a script is allowed to execute. If the script has not completed within this time, the script continues to run, but the operation proceeds without waiting, and no script output will be processed. Default value: 60s
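As a sketch, these options can be combined on a backup commandline like this (shown with the Linux duplicati-cli binary; the destination url and script paths are placeholders):

```
duplicati-cli backup file:///mnt/backup /home/user \
  --run-script-before-required=/usr/local/bin/pre-backup.sh \
  --run-script-after=/usr/local/bin/post-backup.sh \
  --run-script-timeout=5m
```

Here the backup is aborted unless pre-backup.sh exits with code 0 within 5 minutes, and post-backup.sh receives the results afterwards.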

Script Output Integration with Duplicati Logging

You can add custom entries directly to Duplicati's log system from your scripts by using special prefixes in stdout messages. This allows script events to appear in both the Duplicati Log and Reports alongside native application events.

Supported Log Level Prefixes:

  • LOG:INFO - For general information and success notifications

  • LOG:WARN - For potential issues that didn't prevent completion

  • LOG:ERROR - For critical failures that require attention

Example Usage:

echo "LOG:INFO Preparation tasks completed successfully"
echo "LOG:WARN Database backup older than 24 hours detected"
echo "LOG:ERROR Unable to lock database, backup may contain inconsistent data"

These messages will be captured with their appropriate severity levels and integrated into Duplicati's logging system, making script events traceable within the same monitoring interfaces you use for Duplicati itself.
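A minimal pre-backup script using these prefixes could look like the following sketch. The lock file path is a hypothetical example; a real script would point it at whatever lock your database writer creates. The logic is written as a function so the exit-code handling (0 = OK, 2 = warning but run, per the sample scripts below) is easy to see:

```shell
#!/bin/bash
# Sketch of a --run-script-before script body using LOG: prefixes.
# The lock file path is a hypothetical example.
emit_lock_status() {
    local lockfile="$1"
    if [ -e "$lockfile" ]; then
        # Surfaces as a warning in the Duplicati log and reports
        echo "LOG:WARN Lock file $lockfile exists, backup may contain inconsistent data"
        return 2   # Warning, but still run the operation
    fi
    echo "LOG:INFO No lock file found, safe to start backup"
    return 0       # OK, run the operation
}

emit_lock_status "${1:-/var/lock/mydb.lock}"
```

In a real script you would finish with `exit $?` so Duplicati sees the function's return value as the script's exit code.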

Sample Scripts

run-script-example.bat (Windows)

@echo off

REM ###############################################################################
REM How to run scripts before or after backups
REM ###############################################################################

REM Duplicati is able to run scripts before and after backups. This 
REM functionality is available in the advanced options of any backup job (UI) or
REM as option (CLI). The (advanced) options to run scripts are
REM --run-script-before = <filename>
REM --run-script-before-required = <filename>
REM --run-script-timeout = <time>
REM --run-script-after = <filename>
REM
REM --run-script-before = <filename>
REM Duplicati will run the script before the backup job and waits for its 
REM completion for 60 seconds (default timeout value). After a timeout a 
REM warning is logged and the backup is started.
REM The following exit codes are supported:
REM
REM - 0: OK, run operation
REM - 1: OK, don't run operation
REM - 2: Warning, run operation
REM - 3: Warning, don't run operation
REM - 4: Error, run operation
REM - 5: Error don't run operation
REM - other: Error don't run operation
REM
REM --run-script-before-required = <filename>
REM Duplicati will run the script before the backup job and wait for its 
REM completion for 60 seconds (default timeout value). The backup will only be
REM run if the script completes with the exit code 0. Other exit codes or a
REM timeout will cancel the backup job.
REM
REM --run-script-timeout = <time>
REM Specify a new value for the timeout. Default is 60s. Accepted values are
REM e.g. 30s, 1m15s, 1h12m03s, and so on. To turn off the timeout set the value 
REM to 0. Duplicati will then wait endlessly for the script to finish.
REM
REM --run-script-after = <filename>
REM Duplicati will run the script after the backup job and wait for its 
REM completion for 60 seconds (default timeout value). After a timeout a 
REM warning is logged.
REM The same exit codes as in --run-script-before are supported, but
REM the operation will always continue (i.e. 1 => 0, 3 => 2, 5 => 4)
REM as the operation has already completed, so aborting it is pointless.



REM ###############################################################################
REM Changing options from within the script 
REM ###############################################################################

REM Within a script, all Duplicati options are exposed as environment variables
REM with the prefix "DUPLICATI__". Please notice that the dash (-) character is
REM not allowed in environment variable keys, so it is replaced with underscore
REM (_). For a list of available options, have a look at the output of
REM "duplicati.commandline.exe help".
REM
REM For instance the current value of the option --encryption-module can be 
REM accessed in the script by
REM ENCRYPTIONMODULE=%DUPLICATI__encryption_module%

REM All Duplicati options can be changed by the script by writing options to
REM stdout (with echo or similar). Anything not starting with a double dash (--)
REM will be ignored:
REM echo "Hello! -- test, this line is ignored"
REM echo --new-option=This will be a setting

REM Filters are supplied in the DUPLICATI__FILTER variable.
REM The variable contains all filters supplied with --include and --exclude,
REM combined into a single string, separated with semicolon (;).
REM Filters set with --include will be prefixed with a plus (+),
REM and filters set with --exclude will be prefixed with a minus (-).
REM
REM Example:
REM     --include=*.txt --exclude=[.*\.abc] --include=*
REM 
REM Will be encoded as:
REM     DUPLICATI__FILTER=+*.txt;-[.*\.abc];+*
REM
REM You can set the filters by writing --filter=<new filter> to stdout.
REM You may want to append to the existing filter like this:
REM     echo "--filter=+*.123;%DUPLICATI__FILTER%;-*.xyz"


REM ###############################################################################
REM Special Environment Variables
REM ###############################################################################

REM DUPLICATI__EVENTNAME
REM Eventname is BEFORE if invoked as --run-script-before, and AFTER if 
REM invoked as --run-script-after. This value cannot be changed by writing
REM it back!

REM DUPLICATI__OPERATIONNAME
REM Operation name can be any of the operations that Duplicati supports. For
REM example it can be "Backup", "Cleanup", "Restore", or "DeleteAllButN".
REM This value cannot be changed by writing it back!

REM DUPLICATI__RESULTFILE
REM If invoked as --run-script-after this will contain the name of the 
REM file where result data is placed. This value cannot be changed by 
REM writing it back!

REM DUPLICATI__REMOTEURL
REM This is the remote url for the target backend. This value can be changed by
REM echoing --remoteurl = "new value".

REM DUPLICATI__LOCALPATH
REM This is the path to the folders being backed up or restored. This variable
REM is empty for operations other than backup or restore. The local path can
REM contain : to separate multiple folders. This value can be changed by echoing
REM --localpath = "new value".

REM DUPLICATI__PARSED_RESULT
REM This is a value indicating how well the operation was performed.
REM It can take the values: Unknown, Success, Warning, Error, Fatal.


REM ###############################################################################
REM Example script
REM ###############################################################################

REM We read a few variables first.
SET EVENTNAME=%DUPLICATI__EVENTNAME%
SET OPERATIONNAME=%DUPLICATI__OPERATIONNAME%
SET REMOTEURL=%DUPLICATI__REMOTEURL%
SET LOCALPATH=%DUPLICATI__LOCALPATH%

REM Basic setup, we use the same file for both before and after,
REM so we need to figure out which event has happened
if "%EVENTNAME%" == "BEFORE" GOTO ON_BEFORE
if "%EVENTNAME%" == "AFTER" GOTO ON_AFTER

REM This should never happen, but there may be new operations
REM in new version of Duplicati
REM We write this to stderr, and it will show up as a warning in the logfile
echo Got unknown event "%EVENTNAME%", ignoring 1>&2
GOTO end

:ON_BEFORE

REM If the operation is a backup starting, 
REM then we check if the --dblock-size option is unset
REM or 50mb, and change it to 25mb, otherwise we 
REM leave it alone

IF "%OPERATIONNAME%" == "Backup" GOTO ON_BEFORE_BACKUP
REM This will be ignored
echo Got operation "%OPERATIONNAME%", ignoring
GOTO end

:ON_BEFORE_BACKUP
REM Check if volsize is either not set, or set to 50mb
IF "%DUPLICATI__dblock_size%" == "" GOTO SET_VOLSIZE
IF "%DUPLICATI__dblock_size%" == "50mb" GOTO SET_VOLSIZE

REM We write this to stderr, and it will show up as a warning in the logfile
echo Not setting volumesize, it was already set to %DUPLICATI__dblock_size% 1>&2
GOTO end

:SET_VOLSIZE
REM Write the option to stdout to change it
echo --dblock-size=25mb
GOTO end


:ON_AFTER

IF "%OPERATIONNAME%" == "Backup" GOTO ON_AFTER_BACKUP
REM This will be ignored
echo Got operation "%OPERATIONNAME%", ignoring
GOTO end

:ON_AFTER_BACKUP

REM Basic email setup		
SET EMAIL="admin@example.com"		
SET SUBJECT="Duplicati backup"

REM We use a temp file to store the email body
SET MESSAGE="%TEMP%\duplicati-mail.txt"
echo Duplicati finished a backup. > %MESSAGE%
echo This is the result : >> %MESSAGE%
echo.  >> %MESSAGE%

REM We append the results to the message
type "%DUPLICATI__RESULTFILE%" >> %MESSAGE%

REM If the log-file is enabled, we append that as well
IF EXIST "%DUPLICATI__log_file%" type "%DUPLICATI__log_file%" >> %MESSAGE%

REM If the backend-log-database file is enabled, we append that as well
IF EXIST "%DUPLICATI__backend_log_database%" type "%DUPLICATI__backend_log_database%" >> %MESSAGE%

REM Finally send the email using a fictive sendmail program
sendmail %SUBJECT% %EMAIL% < %MESSAGE%

GOTO end

:end

REM We want the exit code to always report success.
REM For scripts that can abort execution, use the option
REM --run-script-before-required = <filename> when running Duplicati
exit /B 0

run-script-example.sh (Linux)

#!/bin/bash

###############################################################################
# How to run scripts before or after backups
###############################################################################

# Duplicati is able to run scripts before and after backups. This 
# functionality is available in the advanced options of any backup job (UI) or
# as option (CLI). The (advanced) options to run scripts are
# --run-script-before = <filename>
# --run-script-before-required = <filename>
# --run-script-timeout = <time>
# --run-script-after = <filename>
#
# --run-script-before = <filename>
# Duplicati will run the script before the backup job and waits for its 
# completion for 60 seconds (default timeout value). After a timeout a 
# warning is logged and the backup is started.
# The following exit codes are supported:
#
# - 0: OK, run operation
# - 1: OK, don't run operation
# - 2: Warning, run operation
# - 3: Warning, don't run operation
# - 4: Error, run operation
# - 5: Error don't run operation
# - other: Error don't run operation
#
# --run-script-before-required = <filename>
# Duplicati will run the script before the backup job and wait for its 
# completion for 60 seconds (default timeout value). The backup will only be
# run if the script completes with the exit code 0. Other exit codes or a
# timeout will cancel the backup job.
#
# --run-script-timeout = <time>
# Specify a new value for the timeout. Default is 60s. Accepted values are
# e.g. 30s, 1m15s, 1h12m03s, and so on. To turn off the timeout set the value 
# to 0. Duplicati will then wait endlessly for the script to finish.
#
# --run-script-after = <filename>
# Duplicati will run the script after the backup job and wait for its 
# completion for 60 seconds (default timeout value). After a timeout a 
# warning is logged.
# The same exit codes as in --run-script-before are supported, but
# the operation will always continue (i.e. 1 => 0, 3 => 2, 5 => 4)
# as the operation has already completed, so aborting it is pointless.


###############################################################################
# Changing options from within the script 
###############################################################################

# Within a script, all Duplicati options are exposed as environment variables
# with the prefix "DUPLICATI__". Please notice that the dash (-) character is
# not allowed in environment variable keys, so it is replaced with underscore
# (_). For a list of available options, have a look at the output of
# "duplicati.commandline.exe help".
#
# For instance the current value of the option --encryption-module can be 
# accessed in the script by
# ENCRYPTIONMODULE=$DUPLICATI__encryption_module

# All Duplicati options can be changed by the script by writing options to
# stdout (with echo or similar). Anything not starting with a double dash (--)
# will be ignored:
# echo "Hello! -- test, this line is ignored"
# echo "--new-option=\"This will be a setting\""

# Filters are supplied in the DUPLICATI__FILTER variable.
# The variable contains all filters supplied with --include and --exclude,
# combined into a single string, separated with colon (:).
# Filters set with --include will be prefixed with a plus (+),
# and filters set with --exclude will be prefixed with a minus (-).
#
# Example:
#     --include=*.txt --exclude=[.*\.abc] --include=*
# 
# Will be encoded as:
#     DUPLICATI__FILTER=+*.txt:-[.*\.abc]:+*
#
# You can set the filters by writing --filter=<new filter> to stdout.
# You may want to append to the existing filter like this:
#     echo "--filter=+*.123:$DUPLICATI__FILTER:-*.xyz"


###############################################################################
# Special Environment Variables
###############################################################################

# DUPLICATI__EVENTNAME
# Eventname is BEFORE if invoked as --run-script-before, and AFTER if 
# invoked as --run-script-after. This value cannot be changed by writing
# it back!

# DUPLICATI__OPERATIONNAME
# Operation name can be any of the operations that Duplicati supports. For
# example it can be "Backup", "Cleanup", "Restore", or "DeleteAllButN".
# This value cannot be changed by writing it back!

# DUPLICATI__RESULTFILE
# If invoked as --run-script-after this will contain the name of the 
# file where result data is placed. This value cannot be changed by 
# writing it back!

# DUPLICATI__REMOTEURL
# This is the remote url for the target backend. This value can be changed by
# echoing --remoteurl = "new value".

# DUPLICATI__LOCALPATH
# This is the path to the folders being backed up or restored. This variable
# is empty for operations other than backup or restore. The local path can
# contain : to separate multiple folders. This value can be changed by echoing
# --localpath = "new value".

# DUPLICATI__PARSED_RESULT
# This is a value indicating how well the operation was performed.
# It can take the values: Unknown, Success, Warning, Error, Fatal.



###############################################################################
# Example script
###############################################################################

# We read a few variables first.
EVENTNAME=$DUPLICATI__EVENTNAME
OPERATIONNAME=$DUPLICATI__OPERATIONNAME
REMOTEURL=$DUPLICATI__REMOTEURL
LOCALPATH=$DUPLICATI__LOCALPATH

# Basic setup, we use the same file for both before and after,
# so we need to figure out which event has happened
if [ "$EVENTNAME" == "BEFORE" ]
then
	# If the operation is a backup starting, 
	# then we check if the --dblock-size option is unset
	# or 50mb, and change it to 25mb, otherwise we 
	# leave it alone
	
	if [ "$OPERATIONNAME" == "Backup" ]
	then
		if [ "$DUPLICATI__dblock_size" == "" ] || [ "$DUPLICATI__dblock_size" == "50mb" ]
		then
			# Write the option to stdout to change it
			echo "--dblock-size=25mb"
		else
			# We write this to stderr, and it will show up as a warning in the logfile
			echo "Not setting volumesize, it was already set to $DUPLICATI__dblock_size" >&2
		fi
	else
		# This will be ignored
		echo "Got operation \"$OPERATIONNAME\", ignoring"
	fi

elif [ "$EVENTNAME" == "AFTER" ]
then

	# If this is a finished backup, we send an email
	if [ "$OPERATIONNAME" == "Backup" ]
	then

		# Basic email setup		
		EMAIL="admin@example.com"		
		SUBJECT="Duplicati backup"
		
		# We use a temp file to store the email body
		MESSAGE="/tmp/duplicati-mail.txt"
		echo "Duplicati finished a backup."> $MESSAGE
		echo "This is the result :" >> $MESSAGE
		echo "" >> $MESSAGE

		# We append the result of the operation to the email
		cat "$DUPLICATI__RESULTFILE" >> $MESSAGE

		# If the log-file is enabled, we append it
		if [ -f "$DUPLICATI__log_file" ]
		then
			echo "The log file looks like this: " >> $MESSAGE
			cat "$DUPLICATI__log_file" >> $MESSAGE
		fi
		
		# If the backend-log-database file is enabled, we append that as well
		if [ -f "$DUPLICATI__backend_log_database" ]
		then
			echo "The backend-log file looks like this: " >> $MESSAGE
			cat "$DUPLICATI__backend_log_database" >> $MESSAGE
		fi

		# Finally send the email using /bin/mail
		/bin/mail -s "$SUBJECT" "$EMAIL" < $MESSAGE
	else
		# This will be ignored
		echo "Got operation \"$OPERATIONNAME\", ignoring"
	fi
else
	# This should never happen, but there may be new operations
	# in new version of Duplicati
	# We write this to stderr, and it will show up as a warning in the logfile
	echo "Got unknown event \"$EVENTNAME\", ignoring" >&2
fi

# We want the exit code to always report success.
# For scripts that can abort execution, use the option
# --run-script-before-required = <filename> when running Duplicati
exit 0

Migrating Duplicati to a new machine

This page describes how to best migrate a Duplicati instance to a new machine

Note: it is possible to restore files across operating systems, but due to path differences it is not possible to continue a backup made on Windows on a Linux/MacOS based operating system and vice versa.

Note: do not attempt to run backups from two different machines to the same destination. Before migrating, make sure the previous machine is no longer running backups automatically. If both machines run backups, one instance will detect that the remote destination has been modified and will refuse to continue until the local database has been rebuilt.

Previous machine is still available

If the previous machine is still accessible, you can copy over the contents of the Duplicati folder containing the configuration database Duplicati-server.sqlite and the other support databases. This approach is by far the fastest, as Duplicati has all the information and does not need to check up with the remote storage.

Make sure to stop Duplicati before moving the folder into the same location on the new machine. After moving the folder, you can start Duplicati again and everything will work as before. If it has been a while since the previous instance was running, startup may trigger the scheduled backups. Use the option --startup-delay=5min to start Duplicati in pause mode for 5 minutes if you want to check the setup before backups start running.
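The copying step can be sketched as follows. This assumes a per-user Linux install, where the support folder defaults to ~/.config/Duplicati; Windows and service installs keep the folder elsewhere, so adjust the source path for your setup:

```shell
#!/bin/bash
# Sketch: pack the Duplicati support folder so it can be copied to the
# same location on a new machine. Stop Duplicati before running this,
# so the databases are not in use. Paths are assumptions, not fixed.
pack_duplicati_config() {
    local src="${1:-$HOME/.config/Duplicati}"
    local out="${2:-/tmp/duplicati-config.tar.gz}"
    if [ ! -d "$src" ]; then
        echo "No Duplicati folder found at $src" >&2
        return 1
    fi
    tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
    echo "$out"
}

# Example: pack_duplicati_config ~/.config/Duplicati /tmp/duplicati-config.tar.gz
```

Unpack the archive into the same location on the new machine before starting Duplicati there.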

If you have moved to a new machine and want to restore files there, follow the steps outlined in Restoring files. If instead you have already moved the files and would like the new machine to continue the backups made on the previous machine, there are a few ways to do this: if you have access to the backup configurations, use the next section; if you have no configurations, use the manual setup section below.

Backup configurations are available

If you have the backup configurations, see the section on import/export configuration for a guide on how to create the backup jobs from the configuration files.

With the backup configurations, it is possible to re-create the backup jobs. The import flow allows you to modify the setup before saving the configuration, in case some details have changed. Once the backup is re-created, run the repair operation to make Duplicati recreate the local database for the backup.

Once the local database has been recreated, it is possible to run the backup as before with no modifications required.

Previous machine and configurations are unavailable

If you do not have access to the previous setup, you can still continue the backups, but you must re-create the backup jobs manually. You need at least the storage destination details, the passphrase, and the selected source folders.

Once the backup configuration has been created, it works the same as if you had imported it from a file. Before running a backup, run the repair operation to make Duplicati recreate the local database for the backup. Once the local database has been recreated, it is possible to run the backup as before with no modifications required.

The sample scripts in the Scripts section above are extracted from the community docs at https://github.com/kees-z/DuplicatiDocs.

Sending HTTP notifications

This page describes how to send reports via the HTTP protocol

To use the option, you only need to provide the url to send to:
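As a sketch, the option could look like this, with a placeholder endpoint:

```
--send-http-url=https://monitor.example.com/duplicati-report
```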

Besides the URL it is also possible to configure:

  • The message body and type (JSON is supported)

  • The HTTP verb used

  • Conditions on when to send the message

  • Conditions on what log elements to include

New in 2.0.9.106

You can now specify multiple urls, using the options:

--send-http-form-urls=
--send-http-json-urls=

These two options greatly simplify sending notifications to multiple destinations. Additionally, they make it possible to send the result both form-encoded in text format and in JSON format.

Sending reports with email

Describes how to configure sending emails with backup details

Besides the connection details, you also need to provide the recipient email address. Note that SMTP servers may restrict which recipients they allow, but using your provider's SMTP server will generally allow sending to your own account.

In the UI you can configure these mandatory values as well as the optional values.

The basic options for sending email can be entered into the general settings, which will then apply to each backup. It is also possible to apply or change the settings for the individual backups by editing the advanced options. Here is how it looks when editing it in the user interface:

You can toggle between the two views using the "Edit as list" and "Edit as text" links.

Besides the mandatory options, it is also possible to configure:

  • Email sender address

  • The subject line

  • The email body

  • Conditions on when to send emails
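As a sketch, the mandatory settings could be entered as advanced options like this (server, account, and addresses are placeholders):

```
--send-mail-url=smtps://smtp.example.com:465
--send-mail-username=sender@example.com
--send-mail-password=app-password
--send-mail-to=admin@example.com
```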

Sending Jabber/XMPP notifications

Describes how to configure sending notifications via Jabber/XMPP

To send a notification via XMPP you need to supply one or more recipients, an XMPP username, and a password.

In the UI you can configure these mandatory values as well as the optional values.

The basic options for sending XMPP notifications can be entered into the general settings, which will then apply to each backup. It is also possible to apply or change the settings for the individual backups by editing the advanced options. Here is how it looks when editing it in the user interface:

You can toggle between the two views using the "Edit as list" and "Edit as text" links.

Besides the mandatory options, it is also possible to configure:

  • The notification message and format

  • Conditions on when to send notifications

  • Conditions on what log elements to include
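As a sketch, the mandatory settings could be entered as advanced options like this (accounts and password are placeholders):

```
--send-xmpp-to=monitor@example.com
--send-xmpp-username=duplicati@example.com
--send-xmpp-password=xmpp-password
```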


The most versatile reporting option is the ability to send messages via the HTTP(S) protocol. By default, messages are sent as a form url encoded body in a request with the POST verb.

Configuring a HTTP notification

For details on how to customize the notification message, see the section on customizing message content.

Sending emails requires access to an SMTP server that will accept the inbound emails. From your SMTP/email provider you need to get a url, a username, and a password. If you are a GMail or Google Workspace user, use the Google SMTP guide; otherwise consult your provider for these details.

Set up email with the default options editor
Set up email option with a text field

For details on how to customize the subject line and message body, see the section on customizing message content.

If you prefer email logs but would also like summary reports, check out the community-provided dupReport tool that can summarize the emails into overviews.

One of the supported notification methods in Duplicati is the open-source XMPP protocol, supported by a variety of projects, including commercial enterprise offerings.

Set up XMPP notifications with the default options editor
Set up XMPP option with a text field

For details on how to customize the notification message, see the section on customizing message content.


Custom message content

This page describes the template system used to format text messages sent

The template system used in Duplicati is quite simple: it essentially expands Windows-style environment placeholders, such as %EXAMPLE%, into values. The same replacement logic works for both the subject line (if applicable) and the message body.

Duplicati has defaults for the body and subject line, but you can specify a custom string here. For convenience, the string can also be a path to a file on the machine, which contains the template.

An example custom template could look like:

Duplicati %OPERATIONNAME% for %backup-name% on %machine-name%

The %OPERATIONNAME% operation has completed with the result: %PARSEDRESULT%

Source folders: 
%LOCALPATH%

Encryption module: %ENCRYPTION-MODULE%
Compression module: %COMPRESSION-MODULE%

%RESULT%

The template engine supports reporting any setting by using the setting name as the template value. Besides the options, there are also a few variables that can be used to extract information more easily:

%PARSEDRESULT%
  The parsed result of the operation: Success, Warning, Error
%RESULT%
  When used in the body, this is the result/log of the backup;
  when used in the subject line, this is the same as %PARSEDRESULT%
%OPERATIONNAME%
  The name of the operation, usually "backup", but could also be "restore" etc.
%REMOTEURL%
  The backend url
%LOCALPATH%
  The path to the local folders involved (i.e. the folders being backed up)
%machine-id%
  The assigned unique random identifier for the current machine. 
  Can be overridden with --machine-id
%backup-id%
  The assigned id for the backup. Can be overridden with --backup-id
%backup-name%
  The name of the backup. Can be overridden with --backup-name
%machine-name%
  The name of the machine. Can be overridden with --machine-name
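As an illustration, the expansion can be sketched in Python (a minimal sketch with hypothetical variable values; the real engine is part of Duplicati itself and may differ, e.g. in case handling):

```python
import re

def expand_template(template: str, values: dict) -> str:
    """Replace %NAME% placeholders with values; unknown names are left as-is."""
    def repl(match):
        key = match.group(1)
        return values.get(key, match.group(0))
    return re.sub(r"%([\w-]+)%", repl, template)

# Hypothetical values for a completed backup run
values = {
    "OPERATIONNAME": "backup",
    "backup-name": "Documents",
    "machine-name": "workstation-1",
}

print(expand_template(
    "Duplicati %OPERATIONNAME% for %backup-name% on %machine-name%", values))
# Duplicati backup for Documents on workstation-1
```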

JSON output

If the output is JSON, it needs to be handled differently from regular text to ensure the result is valid. The logic for this is to re-use the templating concept, but only as a lookup, to figure out which keys to include in the results.

An example template could be:

%OPERATIONNAME% 
%backup-name% 
%machine-name% 
%PARSEDRESULT% 
%LOCALPATH%
%ENCRYPTION-MODULE%
%COMPRESSION-MODULE%

This will ensure that each of those values is included in the extra element of the JSON output. The default template for JSON output includes all the fields listed above, but no options are included by default.
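The lookup-only use of the template can be sketched like this (a simplified illustration; the element name and exact output shape follow the description above, the helper name is an assumption):

```python
import json
import re

def json_extra(template: str, values: dict) -> str:
    """Use the template only to decide which keys go into the 'extra' element."""
    keys = re.findall(r"%([\w-]+)%", template)
    extra = {k: values[k] for k in keys if k in values}
    return json.dumps({"extra": extra})

values = {"OPERATIONNAME": "backup", "backup-name": "Documents"}
print(json_extra("%OPERATIONNAME% %backup-name%", values))
```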

Sending Telegram notifications

Describes how to configure sending notifications via Telegram

To send a notification via Telegram you need to supply a channel id, a bot token, and an API key.

After obtaining the bot token you can obtain the channel id with a cURL script:

BOT_TOKEN="YOURBOTTOKEN"
curl -s "https://api.telegram.org/bot$BOT_TOKEN/getUpdates" \
  | grep -o '"id":[0-9]*' | head -1 | cut -d':' -f2

With all required values obtained, you can set up the Telegram notifications in the general settings:

You can toggle between the two views using the "Edit as list" and "Edit as text" links.

Besides the mandatory options, it is also possible to configure:

  • The notification message and format

  • Conditions on when to send notifications

  • Conditions on what log elements to include

Telegram Notification Options

Bot Configuration

--send-telegram-bot-id (String) - The Telegram bot ID that will send messages

--send-telegram-api-key (String) - The API key for authenticating your Telegram bot

Message Destination

--send-telegram-channel-id (String) - The channel ID where messages will be sent

--send-telegram-topid-id (String) - Topic ID for posting in specific topics within Telegram groups

Notification Content

--send-telegram-message (String) - Template for message content with support for variables like %OPERATIONNAME%, %REMOTEURL%, %LOCALPATH%, and %PARSEDRESULT%

--send-telegram-result-output-format (format) - Format for presenting operation results

  • Duplicati

  • Json

Notification Filtering

--send-telegram-level (level) - Controls which result types trigger notifications:

  • Success - Only successful operations

  • Warning - Operations that completed with warnings

  • Error - Operations that failed with recoverable errors

  • Fatal - Operations that failed with critical errors

  • All - All operation results regardless of status

--send-telegram-any-operation (Boolean) - When enabled, sends notifications for all operations, not just backups

--send-telegram-log-level (Enumeration) - Sets minimum severity level for included log entries:

  • ExplicitOnly - Show only explicitly requested messages

  • Profiling - Include performance measurement data

  • Verbose - Include detailed diagnostic information

  • Retry - Include information about retry attempts

  • Information - Include general status messages

  • DryRun - Include simulation mode outputs

  • Warning - Include potential issues that didn't prevent completion

  • Error - Include critical failures that require attention

--send-telegram-log-filter (String) - Filters log entries based on specified patterns

--send-telegram-max-log-lines (Integer) - Limits the number of log lines included in notifications

The server database

This page describes the database kept by the Duplicati Server

Securing the database

Due to the nature of Duplicati, this database can contain a few secrets that are vital to ensuring the integrity and security of the backups and also the Duplicati server itself. These secrets include both the user-provided secrets, such as the backup encryption passphrase and the connection credentials, but also server-provided secrets, such as the token signing keys, and optionally an SSL certificate password.

Even though the database is located on the machine that makes the backup, it is important to prevent unauthorized access to the database, as it could be used for privilege escalation. And should the database ever be leaked, it is also important to ensure the contents are not accessible.

To protect the database, Duplicati has support for a field-level encryption password. When activated, any setting that is deemed sensitive will be encrypted before being written to the database. This method ensures that the SQLite database itself is still readable, but the secrets are not readable without the encryption passphrase.

To supply the field-level encryption password, start the Server, TrayIcon, or Agent with the commandline option --settings-encryption-key=<key>. As the commandline can usually be read by other processes, it is also possible to supply this key via the environment variable SETTINGS_ENCRYPTION_KEY=<key>.

If you are aware of the risks, you can instead set the commandline argument --disable-db-encryption=true. This will remove existing encryption and suppress the warning that the database is not encrypted.

The simplest way to apply an encryption key is to locate the server database and create the file preload.json if it does not already exist. The file should contain the following:

Database location

When running Duplicati for the first time, it will find a place where it can store the configuration database. Some versions of Duplicati changed the location where it looks for the databases, but this is always done in a backwards compatible way, so new versions will also find the database in previous locations. Due to this logic, the location can vary a bit depending on which version of Duplicati was originally installed.

It is possible to pick a different location for the database with the commandline option --server-datafolder=<path> or use the environment variable DUPLICATI_HOME.

To change the folder of an existing instance of Duplicati, perform these steps:

  1. Stop Duplicati

  2. Move the Duplicati folder from the old location to the new location

  3. Change the startup parameters (environment variables, commandline arguments, or preload.json)

  4. Start Duplicati again

Database location on Windows

The default location for users running Duplicati is %LOCALAPPDATA%\Duplicati, which usually resolves to something like C:\Users\username\AppData\Local\Duplicati. This is the non-roaming folder. Older versions of Duplicati used %APPDATA%\Duplicati, which is the roaming folder, causing files to be synchronized across machines. However, since Duplicati is not an app that benefits from roaming, it now uses the non-roaming folder.

When running Duplicati as a Windows Service, the %LOCALAPPDATA%\Duplicati folder resolves to:

Since this folder is under C:\Windows, the contents may be deleted on major Windows upgrades (usually when the version number changes). For that reason, Duplicati will detect an attempt to store files in the C:\Windows folder and emit a warning. From version 2.1.0.108 and forward, Duplicati will use C:\Users\LocalService\Duplicati as the storage folder if it would otherwise be under C:\Windows.

Database location on Linux

The default location when running Duplicati on Linux is ~/.config/Duplicati. For most distros, running Duplicati as a service means running it as the root user, resulting in /root/.config/Duplicati.

However, due to a compatibility mapping, the home-folder prefix is sometimes missing, causing Duplicati data to be stored in /Duplicati. From version 2.1.0.108, this location is avoided and /var/lib/Duplicati is used instead, if possible.

Database location on MacOS

The default location when running Duplicati on MacOS is ~/Library/Application Support/Duplicati. Duplicati version 2.0.8.1 and older used the Linux-style ~/.config/Duplicati but this is avoided since version 2.1.0.2.

Import and export backup configurations

This page describes how to import and export configurations from Duplicati

While it is not required that you keep a copy of the backup configuration, it can sometimes be convenient to have all settings related to a backup stored in a single file.

Export

To export from within the user interface, expand the backup configuration and click "Export ..."

You then need to decide on how to handle secrets stored in the configuration. Since these secrets include both the credentials to connect to the remote destination as well as the encryption passphrase, it is important that the exported file is protected.

You can choose to not include any secrets by unchecking the "Export passwords" option. The resulting file will then not contain the secrets and you need to store them in a different place (credential vault, keychain, etc).

You can also choose to encrypt the file before exporting it. If you choose this option, make sure you choose a strong unique passphrase, and store that passphrase in a safe location.

If you choose to export with passwords but without encryption, you will be warned that this is insecure:

Import configuration

With an exported configuration, you can delete an existing configuration and re-create it by importing the configuration. You can optionally edit the parameters so the re-created backup configuration differs from the original.

To import a configuration, go to the "Add backup" page and choose "Import from file":

Pick the file or drag-n-drop it on the file chooser. If the file is encrypted, provide the file encryption passphrase here as well.

The option to "Import metadata" will create the new backup configuration and restore the statistics, including backup size, number of versions, etc. from the data in the file. If not checked, these will not be filled, and will be updated when the first backup is executed.

If the option "Save immediately" is checked, the backup will be created when clicking import, skipping the option to edit the backup configuration.

Duplicati Access Password

This page describes how the authentication is working with Duplicati and how to regain access if the password is lost or unknown

If you are starting Duplicati for the first time, it will ask you to pick a password. Picking a strong password is important to prevent unwanted access to Duplicati from other processes on the system. By default, Duplicati chooses a strong random password, and it is recommended for most users not to change it. It is not possible to extract the current password in any way, and it is not possible to disable the password.

Access from the TrayIcon

This mechanism works for most default installations and is secure as long as the desktop is not compromised. This sign-in process is the reason the default random password is preferred, because the password cannot be leaked.

The downside is that if you bookmark the Duplicati page, you may be asked for a password that you do not know when accessing the page directly. In this case, re-launching from the TrayIcon will log you in again.

If you prefer, it is possible to choose the password so you can enter it when asked. Optionally, you can also choose to disable the feature that allows the TrayIcon to sign in without a password, through the settings page.

Login with the TrayIcon is shown here for MacOS, but the same works on Linux and Windows:

Temporary signin token

Note that the regular output from journalctl is capped in width, so you cannot see the whole token. Pipe to a file or another program as shown above to get the full output.

Once you have obtained the link, simply click it or paste it into a browser. Note that the sign-in token has a short lifetime to prevent it from being used to gain unauthorized access by someone who obtains the logs. If the link has expired, simply restart the service or application and a new link will be generated.

After a password has been set, the link will no longer be generated.

Change password with ServerUtil

This works by reading the same database the server is using, extracting the keys used to sign a sign-in token, and then creating a sign-in token. This sign-in token works the same way as the TrayIcon's sign-in feature. Note that the password itself cannot be extracted from the database; it can only be verified.

After obtaining a sign-in token, ServerUtil can then change the password in the running instance.

This only works if:

  • The database is readable from the process running ServerUtil

  • The database field encryption password is available to the process running ServerUtil

If these constraints are satisfied, it is possible to reset the server password by running:

If ServerUtil is launched in a similar environment (i.e., same user, same environment variables) this would allow access in most cases. There are a number of commandline options that can be used to guide ServerUtil in case the environments are not entirely the same.

Example change with a different context

If you need to change the password for a Windows Service instance running in the service context, you can use a command such as this:

Similarly, if the service is running as root on Linux:

Change password from the Server

Since commandline arguments and environment variables can be viewed through various system tools, it is recommended that the option is not set on every launch. A preferred way to set this would be to stop all running instances, start once with the new password from a commandline terminal, shut down, and then start again normally.

Disable sign-in tokens

It is possible to disable the use of sign-in tokens completely, which can increase security further. This is done by passing the option:

The local database

This page describes the local database associated with a backup

The database is essentially a compact view of what data is stored at the remote destination, and as such it can always be created from the remote data. The only information that is lost if the database is recreated are log messages and the hashes of the remote volumes. The log messages are mostly important for error-tracing but the hashes of the remote volumes are important if the files are not encrypted, as this helps to ensure the backup integrity.

Prior to running a backup, Duplicati will do a quick scan of the remote destination to ensure it looks as expected. This check is important, as making a backup on the assumption that data exists could result in backups that can only be partially restored. If the check fails for some reason, Duplicati will exit with an error message explaining the problem.

In rare cases, the database itself may become corrupted or defective. If this seems to be the case, it is safe to delete the local database and run the repair command. Note that it may take a while to recreate the database, but no data is lost in the process, and restores are possible without the database.

Filters in Duplicati

This page describes how filters are evaluated inside Duplicati and how to construct them

Duplicati uses the same filter setup everywhere individual files are selected. It is most prominent when choosing the sources, but filters can be applied in other places where individual files can be selected.

Path representations

Internally, Duplicati represents folders with a trailing path separator, which makes it easy to distinguish folders from files. This distinction is important when constructing filters, as Duplicati requires a full match, including the trailing path separator, before a match is considered. Examples for Windows and Linux/MacOS:

  • Windows

    • Folders

      • C:\Users\john\

      • X:\data\

    • Files

      • C:\Users\myfile

      • X:\data\file.bin

  • Linux/MacOS

    • Folders

      • /home/john/

      • /usr/share/

    • Files

      • /home/myfile

      • /usr/file.bin

For brevity, the remainder of this page will only use the Linux/MacOS format in examples, but the same applies to Windows paths.

Filter types

Duplicati supports 4 different kinds of filters: paths, globbing, regular expressions, and predefined groups. The simplest type of filter is the path: simply provide the full path to the file or folder to target.

Globbing expressions

An example of glob expressions:

The first expression matches files where each of the four ? characters is replaced by any single character, the second matches the Download folder for any user, and the third matches any file with the .iso extension.
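The globbing rules described here, where * also crosses path separators unlike many glob implementations, can be sketched as a translation to a regular expression (an illustrative sketch, not Duplicati's actual implementation):

```python
import re

def glob_to_regex(pattern: str) -> str:
    """Translate a Duplicati-style glob: * matches any length of characters
    (including path separators), ? matches exactly one character."""
    out = []
    for ch in pattern:
        if ch == "*":
            out.append(".*")
        elif ch == "?":
            out.append(".")
        else:
            out.append(re.escape(ch))
    return "^" + "".join(out) + "$"

assert re.match(glob_to_regex("/usr/share/IMG_????.jpeg"), "/usr/share/IMG_1234.jpeg")
assert re.match(glob_to_regex("/home/*/Download/"), "/home/john/Download/")
assert re.match(glob_to_regex("*.iso"), "/data/disc.iso")  # * crosses separators
```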

Regular expressions

Regular expressions are provided by wrapping the expressions with hard braces [ ]:

Note that for Windows, the path separators must be escaped with a backslash, \ so each separator becomes a double backslash \\ .

Predefined filter groups

Some files are commonly excluded on many systems, and to make it easier to exclude such files, Duplicati has a number of built in filter groups:

  • SystemFiles

    Files that are not real files, such as /proc or System Volume Information.

  • OperatingSystem

    Files that are provided by the operating system, such as /bin or C:\Windows\

  • CacheFiles

    Files that are part of application or operating system caches, such as the browser cache.

  • TemporaryFiles

    Files that are stored temporarily by applications as part of normal operations

  • Applications

    Binary applications, such as /lib/ or C:\Program files\

  • DefaultExcludes

    All the above filters in one group

To use a filter group, supply one or more names inside curly braces { }, separated with commas. As an example:

Apply filters

By default, Duplicati will recurse the source folders and include every file and folder found. For this reason, most filters will be exclude filters that remove something from the backup. Include filters are prefixed with a + and exclude filters are prefixed with a -.

When Duplicati evaluates filters, it considers only the first full match and does not evaluate further. It also evaluates folders before files, meaning that it is not possible to include a file if the parent folder is excluded. Importantly, the filters are processed in the order they are supplied, which makes it possible to build advanced rules. As an example:

In the example, the first rule is applied before the second, which means that all .txt files in /usr/share/ are included, but any other .txt files are excluded. The inverse applies to the .bin files: because the exclude rule comes before the include rule, the files are excluded, even though an include rule exists.

If we append a rule:

Even if this rule is last, it will exclude the entire folder. Since the folder is excluded, the match on the include rule is never evaluated. This cut-off at the folder level makes it possible to fully avoid processing subfolders, which could otherwise be time consuming.
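The first-match evaluation described above can be sketched as follows (a simplified model using hypothetical predicate functions; the real implementation also applies the folder cut-off during traversal):

```python
def evaluate(path, filters):
    """Return True if the path is included. Filters are (action, predicate)
    pairs in user-supplied order; the first full match wins, and paths that
    match no filter are included by default."""
    for action, matches in filters:
        if matches(path):
            return action == "+"
    return True

# The four rules from the example above, modeled as predicates
filters = [
    ("+", lambda p: p.startswith("/usr/share/") and p.endswith(".txt")),
    ("-", lambda p: p.endswith(".txt")),
    ("-", lambda p: p.endswith(".bin")),
    ("+", lambda p: p.startswith("/usr/share/") and p.endswith(".bin")),
]

assert evaluate("/usr/share/notes.txt", filters) is True   # first rule matches
assert evaluate("/home/john/notes.txt", filters) is False  # excluded by rule 2
assert evaluate("/usr/share/tool.bin", filters) is False   # rule 3 wins over rule 4
```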

Note: The description here only covers the text-based output (such as emails, etc). The template system for JSON is a bit different.

To obtain the bot token (aka bot id), message the @BotFather bot. After creating the bot, send a message to the bot, so it can reply. For more details on Telegram bots, see the Telegram bot documentation.

To obtain the API key, follow the Telegram guide to creating an application.

Set up Telegram notifications with the default options editor
Set up Telegram option with a text field

For details on how to customize the notification message, see the section on customizing message content.

When the Server is running, either stand-alone or as part of the TrayIcon or Agent, it needs a place to store the configuration. All configuration data, logs and settings are stored inside the file Duplicati-server.sqlite. As the file extension reveals, this is an SQLite database file, and as such it can be viewed and updated by any tool that works with SQLite databases.

The database file is by default located in a folder that belongs to the user account running it. See the section on the database location for details on where this is and how to change it.

Both the commandline arguments and environment variables can be set with the Preload settings file, which makes it simpler to apply the same settings across executables, and removes the need for changing the service or launcher files.

For additional protection, the operating system Keychain, or an external secret provider, can be used to further secure the encryption key.

On this page you should select "To File", which is the default. The option to export "As commandline..." is not covered here, but it allows you to get a string that can be used with the Duplicati CLI executable.

After completing the export, you will get a file containing the backup configuration. The file is in JSON format and optionally encrypted with AESCrypt.

When all is configured as desired, click the "Import" button. If you have not checked "Save immediately", the flow will look like it does when setting up the initial backup.

The TrayIcon process will usually host the Server that presents the UI. Since the two parts are within the same process, they can communicate securely, and this setup enables the TrayIcon to negotiate a short-term sign-in token with the server, even though it does not know the password.

When Duplicati starts up with the randomly generated password, it will attempt to emit a temporary sign-in url. If you run either the TrayIcon or Server in a terminal, most systems will show the link there.

If you are running Duplicati as a service with no console attached, the link will end up in the system logs. On Windows you can use the Event Viewer utility to find the message with a sign-in url. For Linux you can view the system logs, usually:

For MacOS you can use the Console app.

If you are not using the TrayIcon, or you have disabled the sign-in feature but lost the password somehow, you can change the password with ServerUtil in some cases.

For Linux users, you can usually use su or sudo to enter the correct user context, but some additional environment variables may be needed. The default location for the database is described in the Server database section, and a different location can be provided with --server-datafolder.

If the other options are not available, it is possible to restart the process and supply the commandline option:

This will write a hashed (PBKDF) version of the new password to the database and use this going forward. This process requires restarting the server, but the password is persisted in the database, so it is only required to start the server once with the --webservice-password option; future starts can be done without the password.

The option can also be supplied to the TrayIcon and Agent processes, which will pass it on to their internal instance of the Server.

This will make the Server reject any sign-in tokens and prevent access from the TrayIcon and ServerUtil without explicitly passing the password. With this option, write access to the database is required to create a new token, but it also requires handling the password in a safe manner in all places where it is needed.

This option can also be supplied to the TrayIcon process and is enabled by default by the Agent.

Duplicati uses two databases, one for the Server and one for each backup. This page describes the overall purpose of the local database and how to work with it. The database itself is stored in the same folder as the server database and has a randomly generated name.

If you have access to the backup files generated by Duplicati, you only need the passphrase to restore files. As described in the migration section, this is also everything that is needed to continue the backup. But to increase performance and reduce the number of remote calls required during regular operations, Duplicati relies on a database with some well-structured data.

For some errors it is possible to run the repair command and have the problem resolved. This works if all required data is still present on the system, but may fail if there is no real way to recover. If this is the case, there may be additional options in the section on recovering from failure.

While it would be possible to maintain an ever growing list of paths in a filter, it can quickly become hard to manage. For cases where there is some similarity between multiple file or folder paths, it is possible to target multiple paths with a file-globbing syntax. The wildcard character * matches any length of characters (including zero) and the character ? matches exactly one character. Unlike other glob implementations, the path separator is also matched in Duplicati filters.

If the paths to match are more complicated than what can be expressed with globbing, it is also possible to use regular expressions, which are a common way of expressing string patterns. Understanding regular expressions and applying them can be challenging, and will most often require some testing to ensure they work as expected. Also note that since Duplicati is written in C#, it uses the .NET variant of regular expressions.

{
  "env": {
    "*": { 
      "SETTINGS_ENCRYPTION_KEY": "<key>"
    }
  }
}
C:\Windows\System32\config\systemprofile\AppData\Local\Duplicati
sudo journalctl --unit=duplicati | less
> duplicati-server-util change-password
Duplicati.CommandLine.ServerUtil change-password \ 
  --server-datafolder "C:\Windows\System32\config\systemprofile\AppData\Local\Duplicati"
duplicati-server-util change-password \
  --server-datafolder=/root/.config/Duplicati
--webservice-password=<new password>
--webservice-disable-signin-tokens=true
/usr/share/IMG_????.jpeg
/home/*/Download/
*.iso
[/usr/share/IMG_\d{4}\.jpeg]
[/home/[^/]+/Download/]
[.*\.iso]
{CacheFiles,TemporaryFiles}
+/usr/share/*.txt
-*.txt
-*.bin
+/usr/share/*.bin
-/usr/share/

Preload settings

This page describes how Preload settings are applied

The preload settings allow configuring machine-wide or enterprise-wide default settings with a single file. Because of this use case, all settings are applied only if they are not already present. This means a commandline argument could be set up to change the default blocksize, but if the user has applied another setting via the commandline or a parameters file, the preload setting has no effect.

To support different ways of deploying the settings file, 3 locations are checked:

  • %CommonApplicationData%\Duplicati\preload.json

    • Linux: /usr/share/Duplicati/preload.json

    • MacOS: /usr/local/share/Duplicati/preload.json

    • Windows: C:\ProgramData\Duplicati\preload.json

  • Inside the installation folder

  • The file pointed to by DUPLICATI_PRELOAD_SETTINGS

For security reasons, all these paths are expected to be writable only by Administrator/root, so unprivileged users cannot modify the values. If the settings contain secrets, make sure that only the relevant users can read them.

The loading of the files is silent by default, even if parsing fails, but setting the environment variable DUPLICATI_PRELOAD_SETTINGS_DEBUG=1 will toggle loader debug information to help investigate issues.

The implementation here follows the format:

{
  "env": {
    "*": {
      "TEMP": "/mnt/tmp",
      "LOGGING": "false"
    },
    "tray": {
      "LOG": "1"
    },
    "server": {
      "DUPLICATI__WEBSERVICE_ALLOWED_HOSTNAMES": "m1"
    }
  },

  "db": {
    "server": {
      "--compression-module": "zip",
      "--send-http-result-output-format": "Json"
    }
  },

  "args": {
    "tray": [ "--hosturl=http://m1:8299" ],
    "server": [ "--webservice-port=8299" ]
  }
}

The file has 3 sections that are all similar and all optional: env, db, and args. Each section can apply to all executables (*) or a specific executable. The executable names can be seen in the source, but the most common ones are tray and server.

In cases where the * section and a specific executable have the same variable, the specific one is used. If multiple settings files are found, they are loaded in the order described above, and the last file loaded will overwrite the others. The * settings are collected from all three files, as are the executable-specific options, and only after all parsing is done are the specific executable options applied (see below for an example).

Note that some executables load others: TrayIcon, Service, and WindowsService will load Server.

Environment variables - env

The env section contains environment variables that are applied inside the process, after starting. Each entry under an executable is a key-value pair, where the key is the name of the environment variable, and the value will be the contents of the environment variable.

The environment variables are only set if they are not already set, allowing a custom base set while preferring local machine variables.

In the case where one binary loads another, the starting application's environment variables are applied first, and then any unset environment variables are applied for the loaded executable.

Database settings - db

For the db section it is possible to use * but the settings are currently only applied when running the server, so for future compatibility this section should use server only. The settings under an executable in the db section are automatically prefixed with -- to ensure they are valid options and are saved as the "application wide" settings, also visible in the UI under Settings -> Advanced Options.

The settings here are applied to the database if they are changed, meaning a change to the settings will overwrite settings the user has already applied. This check is performed on startup.

The database settings are not passed on from a binary when it loads another, so database settings are only applied by the Server, even if some are supplied under tray (this may change in the future).

Commandline arguments

The commandline arguments support both the * and specific executable names. The arguments are expected to be switches in the format --name=value but can be any commandline argument. The general logic in Duplicati is that "last option wins", and the resolver applies that rule to arrive at the most logical combination of arguments.

Resolution with conflicts

If the following fragment is supplied:

"env": {
  "*": {
    "E1": "a",
    "E2": "b"
  },
  "tray": {
    "E1": "c",
    "E3": "d"
  }
}

The Server executable will get the settings from * and the TrayIcon will get the values: "E1=c E2=b E3=d".

If the above fragment is found in the first file, but this fragment is found in a later file:

"env": {
  "*": {
    "E3": "f"
  },
  "tray": {
    "E1": "g"
  }
}

First the * variables are collected, giving "E1=a E2=b E3=f", then the tray variables give "E1=g E3=d", and then they are combined to give "E1=g E2=b E3=d" for tray.
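The merge described above can be sketched in a few lines of Python (illustrative only, not Duplicati's actual implementation):

```python
def resolve_env(files, exe):
    """Merge the 'env' sections of preload files loaded in order:
    '*' entries are collected across all files, then the
    executable-specific entries, with later files overriding
    earlier ones inside each group; the executable-specific
    group wins over '*' at the end."""
    merged_star, merged_exe = {}, {}
    for f in files:
        env = f.get("env", {})
        merged_star.update(env.get("*", {}))
        merged_exe.update(env.get(exe, {}))
    return {**merged_star, **merged_exe}

# The two fragments from the example above:
first = {"env": {"*": {"E1": "a", "E2": "b"}, "tray": {"E1": "c", "E3": "d"}}}
later = {"env": {"*": {"E3": "f"}, "tray": {"E1": "g"}}}
print(resolve_env([first, later], "tray"))  # {'E1': 'g', 'E2': 'b', 'E3': 'd'}
```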

The same combination logic is applied to both the db and args sections, but since the args section is not key-value pairs, and order matters, it is done by collecting the arguments first and then reducing them:

"args": {
  "*": ["--test=1", "--abc=123"],
  "server": ["--xyz=z", "--test=1", "--test=2"]
}

In this case the arguments are collected, with * first, then the executable specifics, giving:

["--test=1", "--abc=123", "--xyz=z", "--test=1", "--test=2"]

Since this contains 3 options named --test, they are reduced so that only the last occurrence survives, ending up with:

["--abc=123", "--xyz=z", "--test=2"]

The intention here is to stay as close as possible to the original commandline that was entered. If the actual commandline arguments already contain --test, the values from the settings files are not applied.
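The reduction step can be sketched as follows (a sketch of the "last option wins" rule, not Duplicati's implementation):

```python
def reduce_args(args):
    """Reduce a combined argument list so that only the last
    occurrence of each --name=value switch survives, at the
    position where it last appeared."""
    last = {}
    for i, a in enumerate(args):
        name = a.split("=", 1)[0]  # switch name without the value
        last[name] = i
    keep = set(last.values())
    return [a for i, a in enumerate(args) if i in keep]

print(reduce_args(["--test=1", "--abc=123", "--xyz=z", "--test=1", "--test=2"]))
# ['--abc=123', '--xyz=z', '--test=2']
```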

Using Duplicati from Docker

This page describes common scenarios for configuring Duplicati with Docker

Configure the image

The Duplicati Docker images use /data inside the container to store configurations and any files that should persist between container restarts. Note that other images may choose a different location to store data, so be sure to follow that image's instructions if you use one.

You also need a way to sign in to the server after it has started. You can either watch the log output, which will emit a special signin url with a token that expires a few minutes after the server has started, or provide the password via the configuration file.

To ensure that any secrets configured within the application are not stored in plain text, it is also important to set up the database encryption key.

Managing secrets in Docker

At a minimum, you should provide the settings encryption key to the container, and perhaps also the webservice password. You can easily provide these via regular environment variables:

Using a preload file

To use the preload approach, prepare a preload.json file with your encryption key:

You can then configure this in the compose file:

Using a secret manager

Setting up the secret manager is a bit more work, but it has the benefit of being able to configure multiple secrets in a single place. To configure the file-based secret provider, you need to create a secrets.json file such as this:

Then set it up in the compose file:

It is also possible to use one of the other secret providers, such as one that fetches secrets from a secure key vault. In this case, you do not need the secrets.json file, but can just configure the provider.

Read locked files

Duplicati has support for LVM-based snapshots, which is the recommended way to get a consistent point-in-time copy of the disk. For some uses it is not possible to configure LVM snapshots, and this can cause problems due to some files being locked. By default, Duplicati respects advisory file locking and will fail to open locked files, as a lock is usually an indication that the file is in use, and reading it may not produce a meaningful copy.

If you prefer to make a best-effort backup, which was the default in Duplicati v2.0.8.1 and older, you can disable advisory file locking for individual jobs with the advanced option: --ignore-advisory-locking=true. You can also disable file locking support entirely in Duplicati:
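The process-wide switch refers to the DOTNET_SYSTEM_IO_DISABLEFILELOCKING environment variable read by the .NET runtime; a minimal sketch of setting it before starting Duplicati:

```shell
# Disable advisory file locking for the whole .NET process;
# must be set in the environment Duplicati is started from.
export DOTNET_SYSTEM_IO_DISABLEFILELOCKING=true
```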

Using Duplicati with Linux

This page describes how to use Duplicati with Linux

Before you can install Duplicati, you need to decide on three different parameters:

  • Your package manager: apt, yum or something else.

  • Your machine CPU type: x64, Arm64, or Arm7

Deciding on type

Determine package manager

The next step is to check which Linux distribution you are using. Duplicati supports running on most Linux distros, but does not yet support FreeBSD.

If you are using a Debian-based operating system, such as Ubuntu or Mint, you can use the .deb package, and for RedHat-based operating systems, such as Fedora or SUSE, you can use the .rpm packages.

For other operating systems you can use the .zip package, or check if your package manager already carries Duplicati.

Determine CPU architecture

Finally, determine which CPU architecture you are using:

  • x64: 64bit Intel or AMD based CPU. This is the most common CPU at this time.

  • Arm64: 64bit ARM based CPU. Used in Raspberry Pi Model 4 and some Laptops and Servers.

  • Arm7: 32bit ARM based CPU. Used in Raspberry Pi Model 3 and older, and some NAS devices.

Installing the package

Using the TrayIcon

When running the TrayIcon in a user context, it will create a folder in your home folder, typically ~/.config/Duplicati where it stores the local databases and the Server database with the backup configurations.

Using the Server

Using Server as a Service

If you need to pass options to the server, edit the settings file, usually at /etc/default/duplicati. Make sure you only edit the configuration file and not the service file as it will be overwritten when a new version is installed. The settings file should look something like this:

You can use DAEMON_OPTS to pass arguments to duplicati-server, such as --webservice-password=<password>.

To enable the service to auto-start, reload configurations, start the service and report the status, run the following commands:

The server is now running and will automatically start when you restart the machine.

Note: the service runs in the root user context, so files will be stored in /root/.config/Duplicati on most systems, but in /Duplicati on other systems. Use the DAEMON_OPTS to add --server-datafolder=<path to storage folder> if you want a specific location.
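For example, a settings file combining a password and a custom data folder might look like this (the values are illustrative):

```shell
# /etc/default/duplicati
DAEMON_OPTS="--webservice-password=<password> --server-datafolder=/srv/duplicati"
```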

To check the logs (and possibly obtain a signin link), the following command can usually be used:

Using the Agent

When the Agent starts, it will emit a registration link to the log, and you can usually see it with the following command:

After registration is complete, restart the service to pick up the new credentials:

Using the CLI

Using the CLI is simply a matter of invoking the binary:

If you specify the --dbpath parameter, it will not use the dbconfig.json file and it will not store anything in the local datafolder.

Using the support programs


For single-machine users, the preload settings are a convenient way to change the arguments passed to either the TrayIcon, Server, or Agent, without needing to edit shortcuts or service files.

See this SO thread for details.

The Duplicati Docker images are available from DockerHub and are released as part of the regular releases. The Docker images provided by Duplicati are quite minimal and include only the binaries required to run Duplicati. There are also Duplicati images provided by third parties, including the popular linuxserver/duplicati variant.

But you can make it a bit more secure by using Docker secrets, which are abstracted as files mounted under /run/secrets/. Since Duplicati does not support reading files in place of the environment variables, you can either use a preload configuration file or use one of the secret providers.

The type you want: GUI, Server, Agent, or CLI.

To use Duplicati on Linux, you first need to decide which kind of instance you want: GUI (aka TrayIcon), Server, Agent, or CLI. The section on Choosing Duplicati Type has more details on each of the different types.

Once you have decided on the (type, distro, cpu) combination, you are ready to download the package. The full list of packages can be obtained via the main download page, and then clicking "Other versions". Refer to the installation page for details on how to install the packages, or simply use the package manager in your system.

For users with a desktop environment and no special requirements, the TrayIcon instance is the recommended way to run Duplicati. If you are using either .deb or .rpm you should see Duplicati in the program menu, and you can launch it from there. If you do not see Duplicati in the program menu, you can start it with:

The Server is a regular executable and can simply be invoked with:

When invoked as a regular user, it will use the same folder, ~/.config/Duplicati, as the TrayIcon and share the configuration.

Besides the configuration listed below, it is also possible to run Duplicati in Docker.

If you would like to run the Server as a service, the .rpm and .deb packages include a regular systemd service. If you are installing from the .zip package, you can grab the service file from the source code and install it manually on your system.

With the Agent there is a minimal setup required, which is to register the machine with the Duplicati Console. When installing either the .rpm or .deb packages, it will automatically register the duplicati-agent.service for startup. If you are using the .zip installation, you can find the agent service in the source code and manually register it:

If you are using a pre-authorized link, you can run the following command to activate the registration:

Since the CLI also needs a local database for each backup, it will use the same location as above to place databases. In addition to this, it will keep a small file called dbconfig.json in the storage folder where it maps URLs to databases. The intention of this is to avoid manually specifying the --dbpath parameter on every invocation.

Each package of Duplicati contains a number of support utilities, such as the RecoveryTool. Each of these can be invoked from the commandline with a duplicati-* name and all contain built-in help. For example, to invoke the ServerUtil, run:

services:
  myapp:
    image: duplicati/duplicati:latest
    volumes:
      - ./data:/data
    environment:
      SETTINGS_ENCRYPTION_KEY: "<real encryption key>"
      DUPLICATI__WEBSERVICE_PASSWORD: "<ui password>"
{
  "env": {
    "server": {
        "SETTINGS_ENCRYPTION_KEY": "<real encryption key>",
        "DUPLICATI__WEBSERVICE_PASSWORD": "<ui password>"
    }
  }
}
services:
  myapp:
    image: duplicati/duplicati:latest
    volumes:
      - ./data:/data
    environment:
      DUPLICATI_PRELOAD_SETTINGS: /run/secrets/preloadsettings
    secrets:
      - preloadsettings

secrets:
  preloadsettings:
    file: ./preload.json
{
  "settings-key": "<real encryption key>",
  "ui-password": "<real UI password>"
}
services:
  myapp:
    image: duplicati/duplicati:latest
    volumes:
      - ./data:/data
    environment:
      SETTINGS_ENCRYPTION_KEY: "$$settings-key"
      DUPLICATI__SECRET_PROVIDER: file-secret:///run/secrets/secretprovider
      DUPLICATI__WEBSERVICE_PASSWORD: "$$ui-password"
    secrets:
      - secretprovider

secrets:
  secretprovider:
    file: ./secrets.json
services:
  myapp:
    image: duplicati/duplicati:latest
    volumes:
      - ./data:/data
    environment:
      SETTINGS_ENCRYPTION_KEY: "<real encryption key>"
      DUPLICATI__WEBSERVICE_PASSWORD: "<ui password>"
      DOTNET_SYSTEM_IO_DISABLEFILELOCKING: true
duplicati
duplicati-server
# Defaults for duplicati initscript
# sourced by /etc/init.d/duplicati
# installed at /etc/default/duplicati by the maintainer scripts

#
# This is a POSIX shell fragment
#

# Additional options that are passed to the Daemon.
DAEMON_OPTS=""
sudo systemctl enable duplicati.service
sudo systemctl daemon-reload
sudo systemctl start duplicati.service  
sudo systemctl status duplicati.service
sudo journalctl --unit=duplicati
sudo systemctl enable duplicati-agent.service
sudo systemctl start duplicati-agent.service 
sudo journalctl --unit=duplicati
duplicati-agent register "<pre-authorized url>"
sudo systemctl restart duplicati-agent
duplicati-cli help
duplicati-server-util help

Using Duplicati with Windows

This page describes common scenarios for configuring Duplicati with Windows

Before you can install Duplicati, you need to decide on three different parameters:

  • Your machine CPU type: x64, Arm64, or x86 (32 bit)

Deciding on type

Determine CPU architecture

Finally, determine which CPU architecture you are using:

  • x64: 64bit Intel or AMD based CPU. This is the most common CPU at this time.

  • Arm64: 64bit ARM based CPU. Some laptops, tablets and servers use it.

  • x86: 32bit Intel or AMD based CPU. Note that Windows 10 was the last version to support 32 bit processors.

Installing the package

Using the TrayIcon

C:\Program Files\Duplicati 2\Duplicati.GUI.TrayIcon.exe

When running the TrayIcon in a user context, it will create a folder in your home folder, typically C:\Users\<username>\AppData\Local\Duplicati where it stores the local databases and the Server database with the backup configurations.

Using the Server

C:\Program Files\Duplicati 2\Duplicati.Server.exe

Running the Server as a Windows Service

If you want to run Duplicati as a Windows Service, you can use the bundled service tool to install/uninstall the service:

C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe INSTALL
C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe UNINSTALL

When installing the Service it will automatically start, and likewise, uninstalling it will stop the service. If you need to pass options to the server, you can provide them to the INSTALL command:

C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe INSTALL --webservice-port=8100 --server-datafolder=<path>

Note: When running the Windows Service, it will default to port 8200 and fail if that port is not available. If you are running the TrayIcon, that will run a different instance, usually at port 8300. If you want to connect the TrayIcon to the Windows Service, edit the shortcut to Duplicati:

C:\Program Files\Duplicati 2\Duplicati.GUI.TrayIcon.exe --no-hosted-server --host-url=http://localhost:8200 --webservice-password=<password>

Using the Agent

You can also register the Agent, using the Agent executable:

C:\Program Files\Duplicati 2\Duplicati.Agent.exe register <registration url>

After the Agent has been registered, restart the service and it will now be available on the Duplicati Console.

{
  "args": {
    "agent": [ "--agent-registration-url=<registration-url>" ]
  }
}

Using the CLI

Using the CLI is simply a matter of invoking the binary:

C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe help

If you specify the --dbpath parameter, it will not use the dbconfig.json file and it will not store anything in the local datafolder.

Using the support programs

 C:\Program Files\Duplicati 2\Duplicati.CommandLine.ServerUtil.exe help

Retention settings

This page describes the different retention settings available in Duplicati

Even though Duplicati tries hard to reduce storage use as much as possible, it is inevitable that the remotely stored data grows as new versions of files are added. To avoid running out of space or paying for excessive storage use, it is important that unnecessary backups are removed regularly.

After deleting one or more versions, Duplicati will mark any data that can no longer be referenced as waste, and may occasionally choose to run a compact process that deletes unused volumes and creates new volumes with no wasted space.

Despite all deletion rules, Duplicati will never delete the last version, keeping at least one version available.

Delete older than

The most intuitive option is to choose a period for which data is stored, and then to consider everything older than this period stale data. The actual period depends on the use case; it could be 7 days, 1 year, or 5 years, for example.

This option is usually the preferred choice if the backups happen regularly, such as a daily backup where you keep the last 3 months.

Keep versions

If the backups are running irregularly, where the backups are triggered by some external event, there may be long periods where there are no backups. For this case you can choose a number of versions to keep and Duplicati will consider anything outside that count as outdated.

Another special case: if the source data has not changed at all, which is uncommon, Duplicati will not make a new version, as it would be identical to the previous one. In such a setup it may be preferable to use a version count, despite regularly scheduled backups.
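On the commandline, the two strategies above map to the --keep-time and --keep-versions advanced options; an illustrative sketch (paths and values are placeholders, and normally only one retention option is used per job):

```shell
# Time-based: delete versions older than 3 months
duplicati-cli backup file:///mnt/backup /home/user --keep-time=3M

# Count-based: keep only the 10 most recent versions
duplicati-cli backup file:///mnt/backup /home/user --keep-versions=10
```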

Retention policy

7D:U,1Y:1W

The first bucket is defined as being 7 days, and the value U means an unlimited number of backups in this bucket. In other words: for the most recent 7 days, keep all backups.

The second bucket is defined as 1 year, keeping one backup for each week, resulting in roughly 52 backups after the first 7 days.

Any backups outside the buckets are deleted, meaning anything older than a year would be removed.

In the UI, a helpful default is called "Smart retention" which sets the following retention policy:

1W:1D,4W:1W,12M:1M

Translated, this policy means that:

  • For the most-recent week, store 1 backup each day

  • For the last 4 weeks, store 1 backup each week

  • For the last 12 months, store 1 backup each month
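The bucket string format can be illustrated with a small parsing sketch (illustrative only; Duplicati's own parser also validates the units and the special interval U):

```python
def parse_retention_policy(policy):
    """Split a retention policy string like '1W:1D,4W:1W,12M:1M'
    into (timeframe, interval) pairs."""
    return [tuple(bucket.split(":", 1)) for bucket in policy.split(",")]

print(parse_retention_policy("1W:1D,4W:1W,12M:1M"))
# [('1W', '1D'), ('4W', '1W'), ('12M', '1M')]
```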

Encrypting and decrypting files

This page describes how to work with encrypted files outside of normal operations

In normal Duplicati operations, the files at the remote destination should never be handled by anything but Duplicati. Changing the remote files will always result in warnings or errors when Duplicati needs to access those files.

However, in certain exceptional scenarios, it may be required that the file contents are accessed manually.

Processing files encrypted with AES encryption

Processing files encrypted with GPG encryption

gpg -d volume.zip.gpg -o volume.zip

And similarly, to encrypt a file, you can use:

gpg --symmetric volume.zip -o volume.zip.gpg

Re-compress and re-encrypt

Using Duplicati with MacOS

This page describes common scenarios for configuring Duplicati with MacOS

Before you can install Duplicati, you need to decide on two different parameters:

  • Your machine CPU type: Arm64 or x64

Deciding on type

Determine CPU architecture

Your Mac is most likely using Arm64 with one of the M1, M2, M3, or M4 chips. If you have an older Mac, it may use the Intel x64 chipset. To see what CPU you have, click the Apple icon and choose "About this Mac". In the field labelled "Chip" it will either show Intel (x64) or M1, M2, M3, M4 (Arm64).

Installing the package

If you are using the .dmg package, the installation works like other applications: simply open the .dmg file and drag Duplicati into Applications. Note that with the .dmg package, Duplicati is not set to start automatically with your Mac, but if you restart with the option to re-open running programs, Duplicati will start on login.

If you are using the .pkg package, Duplicati will install a launchAgent that ensures Duplicati starts on reboots. The CLI package installs a stub file that is not active, so you can edit the launchAgent and have it start the Server if you prefer.

Using the TrayIcon

If you have installed the GUI package, you will have Duplicati installed in /Applications and it can be started like any other application. Once Duplicati is started, it will place itself in the menu bar near the clock and battery icons. Because Duplicati is meant to be a background program, there is no Duplicati icon in the dock.

On the first start Duplicati will also open your browser and allow you to configure your backups. If you need access to the UI again later, locate the TrayIcon in the status bar, click it and click "Open". If you install the CLI or Agent packages, the Duplicati application is not available.

Using the Server

If you install the CLI package, Duplicati binaries are placed in /usr/local/duplicati and symlinked into /usr/local/bin and you can start the server simply by running:

duplicati-server

Note: If you install the GUI package or install from homebrew, Duplicati's binaries are not symlinked into the paths searched by MacOS. You can invoke the binaries by supplying the full path:

/Applications/Duplicati.app/Contents/MacOS/duplicati-server

Using the Agent

If the Agent is not registered with the Console, it will open the default browser and ask to be registered. Once registered, it will run in the background and be available on the Duplicati Console for management.

{
  "args": {
    "agent": [ "--agent-registration-url=<registration-url>" ]
  }
}

Using the CLI

Using the CLI is simply a matter of invoking the binary:

duplicati-cli help

If you specify the --dbpath parameter, it will not use the dbconfig.json file and it will not store anything in the local datafolder.

Note: If you install the GUI package or install from homebrew, Duplicati's binaries are not symlinked into the paths searched by MacOS. You can invoke the binaries by supplying the full path:

/Applications/Duplicati.app/Contents/MacOS/duplicati-cli help

Using the support programs

duplicati-server-util help

Note: If you install the GUI package or install from homebrew, Duplicati's binaries are not symlinked into the paths searched by MacOS. You can invoke the binaries by supplying the full path:

/Applications/Duplicati.app/Contents/MacOS/duplicati-server-util help

Running a self-hosted OAuth Server

This page describes how to set up and run a self-hosted OAuth Server

If you are using one of the backends that requires login via OAuth (Google, Dropbox, OneDrive, etc) you will need to obtain a "clientId" and a "clientSecret". These are given by the service providers when you are logged in, and are usually free.

If you prefer to avoid the hassle of setting this up, you can opt to use the Duplicati provided OAuth server, where Duplicati's team will handle the configuration. This OAuth server is the default way to authenticate. If you prefer to be more in control of the full infrastructure, you can use this guide to set up and use your own self-hosted OAuth Server.

For example, this guide will show how to set up an OAuth server for internal use in an organization, granting Duplicati instances full access to the Google Drive files.

Getting access to Google Cloud Services

Once you have created a project where the OAuth settings can live, you need to enable the "Google Drive API". Go to the top-left menu, choose "API & Services" and then "Enabled APIs & Services". From here search for "Google Drive API", click it and enable:

Before you can get the values you need to configure the consent screen that is shown when users log in with your OAuth Service. You can choose "Internal" here, unless you need to provide access to people outside your organization. Choosing "External" also requires a Google review. On the consent screen, you only need to fill in the required fields, the app name and some contact information:

The last step in the consent is choosing the scopes (meaning the permissions) it is possible to grant with this setup. In this example we choose the auth/drive scope, granting full access to all files in the user's Drive. For regular use, it is safest to use auth/drive.file, which will only grant Duplicati access to files created by Duplicati. However, in some cases Google Drive will drop your permissions and refuse to let Duplicati access the files. There is no way to change the permissions on the files, so if this happens, your only choice is to use auth/drive and obtain full access:

You can now click update and save the consent screen and proceed to setting up the credentials needed. Click "Create Credentials" and choose "OAuth client ID". On the next page, choose the type "Web application". In the "Authorized redirect URIs" field you need to enter the url for the server that is being called after login. The Duplicati OAuth server uses a path of /logged-in so make sure it ends with that. In the screenshot, the server is hosted on a single machine, so the setup is for https://localhost:8080/logged-in:

When you are done, click "Save" and a popup will show the generated credentials. Use the convenient copy buttons to get the "Client ID" and "Client secret", or download the JSON file containing them. If you lose them, you can get them again via the "Credentials" page. The credentials shown here are redacted:

Setting up the configuration

With the credentials available, create a JSON text file similar to this:

{
  "GD_CLIENT_ID": "<Put Client ID here>",
  "GD_CLIENT_SECRET": "<Put Client secret here>"
}

Docker based setup

- ASPNETCORE_URLS: "http://localhost:8080"
- HOSTNAME: "localhost:8080"
- SECRETS: "/path/to/secrets.json.aes"
- SECRETS_PASSPHRASE: "<encryption passphrase>"
- STORAGE: "file:///path/to/persisted/folder"
- SERVICES: "googledocs"

The hostname here MUST match the one set as the redirect URI or the authorization will fail. The URLs parameter is the address the process inside the Docker container listens on. For this setup there is no TLS/SSL certificate, so the URL here is http, but note that we used https in the redirect URI, and these two must match in the end. Here I am assuming some other service is providing the SSL layer.

If you need to serve the certificate directly from the Docker container, generate a certificate .pfx file and use a configuration such as:

- ASPNETCORE_URLS: "https://localhost:8080"
- HOSTNAME: "localhost:8080"
- SECRETS: "/path/to/secrets.json.aes"
- SECRETS_PASSPHRASE: "<encryption passphrase>"
- STORAGE: "file:///path/to/persisted/folder"
- SERVICES: "googledocs"
- ASPNETCORE_Kestrel__Certificates__Default__Path: "/path/to/certificate.pfx"
- ASPNETCORE_Kestrel__Certificates__Default__Password: "<certificate password>"

Local machine setup

To run the server, invoke it with a setup like this:

OAuthServer run 
  --listen-urls=http://localhost:8080 
  --hostname=localhost:8080
  --storage=file:///path/to/persisted/folder
  --secrets=/path/to/secrets.json.aes
  --secrets-passphrase=<encryption passphrase>
  --services=googledocs

The hostname here MUST match the one set as the redirect URI or the authorization will fail. The URLs parameter is the address the process listens on locally. For this setup there is no TLS/SSL certificate, so the URL here is http, but note that we used https in the redirect URI, and these two must match in the end. Here I am assuming some proxy service is providing the SSL certificate.

If you need to serve the certificate directly from the binary, generate a certificate .pfx file and use a configuration such as:

OAuthServer run 
  --listen-urls=https://localhost:8080 
  --hostname=localhost:8080
  --storage=file:///path/to/persisted/folder
  --secrets=/path/to/secrets.json.aes
  --secrets-passphrase=<encryption passphrase>
  --services=googledocs
  --certificate-path=/path/to/certificate.pfx
  --certificate-password=<certificate password>

Issuing an AuthID

Once the service is running, you can navigate to the page and generate an AuthID:

Using the self-hosted OAuth server in Duplicati

The final step is to instruct Duplicati to use the self-hosted OAuth server instead of the regular instance. This is done by visiting the "Settings" page in the Duplicati UI and adding the advanced option --oauth-url=https://localhost:8080/refresh:

Don't forget to click "OK" to save the settings. Once configured, the "AuthID" links in the UI will point to your self-hosted OAuth server, and all authorization is done purely through the self-hosted OAuth server.

The type you want: GUI, Server, Agent, or CLI.

To use Duplicati on Windows, you first need to decide which kind of instance you want: GUI (aka TrayIcon), Server, Agent, or CLI. The section on Choosing Duplicati Type has more details on each of the different types.

If you are in doubt, you can try the x64 version, or use .

Once you have decided on the package you want, you are ready to download it. The default version shown on the main download page is the x64 GUI version in .msi format. The full list of packages can be obtained via the main download page, and then clicking "Other versions".

For users with a desktop environment and no special requirements, the TrayIcon instance is the recommended way to run Duplicati. If you are using the .msi package to install Duplicati, you will see an option to automatically start Duplicati, as well as create a shortcut on your desktop and in the start menu. If you need to manually start Duplicati, you can find the executable in:

The Server is a regular executable and can simply be invoked with:

When invoked as a regular user, it will use the same folder, C:\Users\<username>\AppData\Local\Duplicati, as the TrayIcon and share the configuration.

You can also use the preload settings file to pass settings to the Server when running as a service, which allows you to change the settings without the uninstall/install cycle (you still need to restart the service).

With the Agent there is a minimal setup required, which is to register the machine with the Duplicati Console. The default installation installs the Agent as a Windows Service, meaning it will run in the LocalService system account instead of the local user. Due to this, it will not be able to open the browser and start the registration process for you. Instead, you must look in the Windows Event Viewer and extract the registration link from there.

If you have a pre-authorized link for registering the machine, and would like to automate the process, you can place a file in C:\ProgramData\Duplicati\preload.json with content similar to:

Since the CLI also needs a local database for each backup, it will use the same location as above to place databases. In addition to this, it will keep a small file called dbconfig.json in the storage folder where it maps URLs to databases. The intention of this is to avoid manually specifying the --dbpath parameter on every invocation.

Each package of Duplicati contains a number of support utilities, such as the RecoveryTool. Each of these can be invoked from the commandline with their executable name and all contain built-in help. For example, to invoke the ServerUtil, run:

In Duplicati there are a few different settings that control when a "snapshot" is removed. All of these options are invoked automatically at the end of a backup, so that removal follows a new version. If you use the CLI, it is possible to disable the automatic removal and run the delete command as a separate step.

The retention policy is a "bucket" based strategy, where you define how many backups to keep in each "bucket" and what a "bucket" covers. With this strategy, it is possible to get something similar to grandfather-father-son style backup rotations.

The syntax for the retention policy defines each bucket and the backups kept in it. The bucket timeframe comes first, then a colon separator, and then the interval between the backups kept in that bucket. Multiple buckets can be defined with commas. As an example:

Files encrypted with the default AES encryption follow the AES Crypt file format, so any AES Crypt-compatible tool can be used to decrypt and encrypt these files.

For convenience, Duplicati also ships with a command line binary named SharpAESCrypt that uses the same encryption library that is used by Duplicati. This tool can be used to decrypt the remote volume files with the encryption passphrase, as well as encrypt files.

Files encrypted with GPG can be encrypted in one of many ways, and a general overview of how GPG works can be found in the GnuPG documentation. When using the default options, Duplicati will use the symmetric mode for GPG. In this mode, you can use this command to decrypt a file:

If you need to switch from GPG to AES, or vice versa, you can use the Recovery Tool to automatically process all files on the storage destination. The Recovery Tool also supports recompressing or changing the compression method.

If you use this method, make sure to recreate the local database.


To use Duplicati on MacOS, you first need to decide which kind of instance you want: GUI (aka TrayIcon), Server, Agent, or CLI. The section on Choosing Duplicati Type has more details on each of the different types. For home users, the common choice is the GUI package in .dmg format. For enterprise rollouts, you can choose the .pkg packages.

The packages can be obtained via the main download page. The default package shown on the page is the MacOS Arm64 GUI package in .dmg format. If you need another version, click the "Other versions" link at the bottom of the page.

When invoked as a regular user, it will use the same folder, ~/Library/Application Support/Duplicati, as the TrayIcon and share the configuration.

With the Agent there is a minimal setup required, which is to register the machine with the Duplicati Console. When installing the Agent package, it will automatically register a LaunchAgent that starts Duplicati in agent mode.

If you have a preload.json for registering the machine, you can place a file in /usr/local/share/Duplicati/preload.json with content similar to:

Since the CLI also needs a local database for each backup, it will use the same location as above to place databases. In addition to this, it will keep a small file called dbconfig.json in the storage folder where it maps URLs to databases. The intention of this is to avoid manually specifying the --dbpath parameter on every invocation.

Each package of Duplicati contains a number of support utilities, such as the RecoveryTool and ServerUtil. Each of these can be invoked from the commandline with a duplicati-* name, and all contain built-in help. For example, to invoke ServerUtil, run:

If you need to set up another provider than Google, see .

The first step is to sign up for Google Cloud Services if you are not already a customer. Once you are signed up, you can create a new project as shown here:

Creating a new project
Choosing the menu "Enabled APIs & Services"
Enabling API and Services
Choosing Audience
Setting up the consent screen
Choosing the scopes
Choose OAuth client ID
Configure the OAuth client ID
Redacted view of the generated credentials

If you are setting up a secure server, you should use SharpAESCrypt to encrypt the file after you have created it. If you do, make a note of the passphrase used. Save the file either as secrets.json, or secrets.json.aes if you have encrypted it.

In the following, we will only set up Full Access Google Drive, which for legacy reasons is called "googledocs" in the OAuth server. If you are looking to set up one of the other services, see the configuration defaults, which has links to the pages where the Client ID and Client secret can be found for other services, and pick the ids you need.

In the following, the services are configured to just googledocs, but it can be a comma-separated list of services if you want to enable multiple. The storage here is simply a local folder that stores encrypted tokens, but you can also use an S3-compatible storage if needed. See the configuration document for more details.

If you are using Docker, you can run the OAuth server image directly and simply add environment variables:

To run without Docker, first you need to download the OAuth Server binaries for your operating system and extract them to a suitable place. The binaries are self-contained, so they will run without any additional framework installation.

Ready to generate an AuthID
Adding the OAuth URL to Duplicati

Disaster recovery

This page explains how to recover as much data as possible from a broken remote storage

File Destination

This page describes how to use the file destination provider to store backup data on a local drive.

The most basic destination in Duplicati is the file backend. This backend simply stores the backup data somewhere that is reachable from the file system. The destination can be network-based storage (as long as it is mounted when needed), a fixed disk, or removable media.

The file backend can be chosen with the file:// prefix, where the rest of the destination URL is the path.

Windows example:

file://C:\Data
file://\\server\share\folder

Linux/MacOS example:

file:///home/user

For most cases it will also work without the file:// prefix, but adding the prefix makes the intention clear.

Improving speed for local filesystems

Since Duplicati is intended to be used with remote systems, it will first write uploads to a temporary file and then copy the temporary file to the destination. This enables various retry mechanisms, progress reporting, and failure handling that may not be desired with local filesystems.

To change this logic to instead use the operating system move command to move the file into place, avoiding a copy, set the option --use-move-for-put on the file backend and also set --disable-streaming-transfers. With these two options, all special handling is removed and the transfer speed should be optimal for the current operating system. Note that with --disable-streaming-transfers no progress is shown during transfers in the UI, because the underlying copy or move operation cannot be monitored.
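The difference between the two strategies can be sketched as follows. The function names are illustrative, not Duplicati's internals; the point is that the move-based variant writes the data only once and then renames it into place:

```python
# Sketch of the two upload strategies for a local filesystem target:
# the default "stage to a temporary file, then copy into place", and the
# rename-based variant enabled by --use-move-for-put. Illustrative only.
import os
import tempfile

def put_via_copy(data, destination):
    """Default behaviour: stage to a temp file, then copy to destination."""
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(data)
        staging = tmp.name
    with open(staging, "rb") as src, open(destination, "wb") as dst:
        dst.write(src.read())  # a streamed copy allows progress/retry hooks
    os.unlink(staging)

def put_via_move(data, destination):
    """--use-move-for-put: write once, then rename into place."""
    tmp_path = destination + ".part"
    with open(tmp_path, "wb") as tmp:
        tmp.write(data)
    os.replace(tmp_path, destination)  # no second copy of the data
```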

Disabling length verification

Because a local storage destination is expected to have very low latency, the file backend will verify the length of the file after the copy. This additional call is usually very fast and does not impact transfer speeds, but it can be disabled for slightly faster uploads with --disable-length-verification.

Removable drives (mostly Windows)

For removable drives, the mount path can sometimes change when inserting the drive. This is most prominent on Windows, where drive letters are assigned based on the order in which drives are connected. To support different paths, you can supply multiple alternate paths with --alternate-target-paths, where each path is separated by the system path separator (; on Windows, : on Linux/MacOS):

// Note, the paths are URL encoded here: E:\backupdata;G:\backupdata
file://F:\backupdata?alternate-target-paths=E%3A%5Cbackupdata%3BG%3A%5Cbackupdata
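The URL encoding of the alternate paths can be reproduced with a standard percent-encoder; this produces exactly the encoded value shown above:

```python
# The alternate paths must be URL encoded when embedded in the destination
# URL. This reproduces the encoded value from the example above.
from urllib.parse import quote

paths = ";".join([r"E:\backupdata", r"G:\backupdata"])  # ';' separator on Windows
encoded = quote(paths, safe="")
print(encoded)  # → E%3A%5Cbackupdata%3BG%3A%5Cbackupdata
```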

If you would like to support any drive letter, you can also use * as the drive letter (Windows only):

file://*:\backupdata

Because using multiple paths could end up making a backup to the wrong drive, you can use the option --alternate-destination-marker to provide a unique marker filename that must exist on the destination:

file://F:\backupdata?alternate-destination-marker=<filename>

Using this option will scan all provided paths, either via the * drive letter or --alternate-target-paths, and check if the folder contains a file with the given name.
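The selection logic can be sketched as below: among the candidate target paths, pick the first one that contains the marker file. This is illustrative only, not Duplicati's implementation, and the marker filename used is a made-up example:

```python
# Sketch of the marker-file selection logic for --alternate-destination-marker.
# Illustrative only; not Duplicati's implementation.
import os
import tempfile

def pick_target(candidates, marker):
    for path in candidates:
        if os.path.isfile(os.path.join(path, marker)):
            return path
    return None  # no path with the marker is currently mounted

# Demo with temporary folders standing in for drive paths:
drive_a = tempfile.mkdtemp()
drive_b = tempfile.mkdtemp()
open(os.path.join(drive_b, "duplicati-marker.txt"), "w").close()
print(pick_target([drive_a, drive_b], "duplicati-marker.txt") == drive_b)  # → True
```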

Authentication (Windows Only)

To use authentication, provide the --auth-username and --auth-password arguments in the query. Since authentication on Windows is tied to the current user context, it is possible that the share is already mounted with different credentials, which may not have the correct permissions.

To guard against this, it is possible to drop the current authentication and re-authenticate prior to accessing the share by adding the --force-smb-authentication option.

Using Duplicati from the Command Line

S3-compatible Destination

This page describes the S3 storage destination

The Simple Storage Service, S3, was originally described, developed and offered by Amazon via AWS. Since then, numerous other providers have adopted the protocol and offer S3-compatible services. While these services are mostly compatible with the core S3 protocol, a number of additional AWS-specific settings are usually not supported and will be ignored.

When storing data in S3, the storage is divided into a top-level "folder" called a "bucket", and each bucket has "objects", similar to files. For most providers, an object name with / characters will be interpreted as subfolders in some way.
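The way / characters in object names map to virtual folders can be sketched as a simple grouping over the keys, similar to what most S3 listing UIs do:

```python
# Sketch: how object keys with '/' map to virtual folders when listing
# a bucket. Illustrative grouping, not any provider's actual API.
def virtual_folders(keys, prefix=""):
    folders = set()
    for key in keys:
        rest = key[len(prefix):]
        if "/" in rest:
            folders.add(prefix + rest.split("/", 1)[0] + "/")
    return folders

print(virtual_folders(["backup/dlist1.zip", "backup/dblock1.zip", "readme.txt"]))
# → {'backup/'}
```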

In the original S3 specification, the bucket name was used as part of the hostname, causing some issues with bucket names that are not valid hostnames, and some delays for new buckets caused by DNS update speeds. Newer solutions use a single shared hostname and provide the bucket name as a parameter.

To use S3 as the storage destination, use a format such as:

s3://<bucket name>/<prefix>
  ?aws-access-key-id=<account id or username>
  &aws-secret-access-key=<account key or password>
  &s3-servername=<server ip or hostname>
  &use-ssl=true

Note that the default for S3 is to use unencrypted connections. The connections are secured with signatures, but all data transferred can be captured on the network. If the provider supports SSL/TLS, which most do, make sure to add --use-ssl=true to also encrypt the connection.

Choosing the client

Generally, both libraries will work with most providers, but the AWS library has some defaults that may not be compatible with other providers. While you can configure the settings, it may be simpler to use Minio with the default settings.

Creating the bucket

Since the bucket defines the place where data is stored, a bucket needs to be created before it can be used. All providers offer a way to do this through their UI, which also allows you to set various options, such as the geographical region the bucket is located in.

If you use Duplicati to create the bucket, you can also set the option --s3-location-constraint to provide the desired location. Support for this, and the available regions, depends on the provider.

Storage class

With S3 it is also possible to set the storage class, which is sometimes used to fine-tune the cost/performance/durability of the files. The storage class is set with --s3-storage-class, but the possible settings depend on the provider.

Destination overview

This page describes what a "destination" is to Duplicati and lists some of the available providers

Duplicati makes backups of files, called the source, and places the backup data at a destination chosen by the user. To make Duplicati as versatile as possible, each supported storage type is implemented as a "destination" (or "backend"), each with different properties.

Some storage providers support multiple protocols, each with their own strengths, and you can generally pick whichever storage destination provider you like; but if there is a specific implementation for a given storage provider, that is usually the best pick.

Each storage destination has a number of options that can be provided via a URL-like format. The options should preferably be provided as part of the URL, but can also be provided via regular commandline options. For instance, the commandline flag --use-ssl=true can also be added to the URL as &use-ssl=true. If both are provided, the URL value is used.

Each backup created by Duplicati requires a separate folder. Do not create two backups that use the same destination folder as they will keep breaking each other.

Standard based destinations

Destinations in this category are general purpose enough, or commonly used enough, that they can be used across a range of storage providers. Destinations in this category are:

  • File destination (any path in the filesystem)
  • S3-compatible
  • FTP
  • SFTP (SSH)
  • WebDAV
  • OpenStack
  • Rclone (binary required)

Provider specific destinations

Storage destinations in this category are specific to one particular provider and implemented using either their public API description, or by using libraries implemented for that provider. Destinations in this category are:

  • Backblaze B2
  • Box.com
  • Rackspace CloudFiles
  • Mega.nz
  • Aliyun OSS
  • Tencent COS
  • Jottacloud
  • pCloud
  • Azure Blob Storage
  • Google Cloud Storage
  • Microsoft Group Drive
  • SharePoint
  • Amazon S3

File synchronization providers

Storage destinations in this category are also specific to one particular provider, but these storage provider products are generally intended to be used as file synchronization storage. When they are used with Duplicati, the backup files will generally be visible as part of the synchronization files. Destinations in this category are:

  • Dropbox
  • GoogleDrive
  • OneDrive
  • OneDrive for business

Decentralized providers

Storage destinations in this category utilize a decentralized storage strategy and require knowledge about each system to get it working. Some of these may require additional servers or intermediary providers, and may have different speed characteristics compared to other storage providers. Destinations in this category are:

  • Sia
  • Storj (aka Tardigrade)
  • TahoeLAFS

FTP Destination

This page describes the FTP storage destination

To use the FTP backend, you can use a URL such as:

ftp://<hostname>/<path>
  ?auth-username=<username>
  &auth-password=<password>

Despite FTP being a well-documented standard, there are many different implementations of the protocol, so the FTP backend supports a variety of settings for configuring the connection. You can use a non-standard port through the hostname, such as ftp://hostname:2121.

Connection mode

Due to the way FTP works, it requires multiple connections to transfer data, and the method for selecting the connection mode has a number of quirks. The default setting is "AutoPassive", which works well for most setups, leaving the burden of configuring the firewall to the server.

Use the option --ftp-data-connection-type to choose a specific connection mode if the default does not work for your setup.

Encryption mode

To enable encrypted connections, use the option --ftp-encryption-mode and set it to either Implicit or Explicit. The Implicit setting creates a TLS connection where everything is encrypted; Explicit, which is more commonly used, creates an unencrypted connection and then upgrades it to an encrypted session.

The default setting is --ftp-encryption-mode=None which uses unencrypted FTP connections.

The setting --ftp-encryption-mode=Auto is the most compatible setting, but also insecure: it connects in unencrypted mode and then attempts to switch to encrypted, but will continue in unencrypted mode if this fails.

To further lock down the encryption mode, the option --ftp-ssl-protocols can be used to limit the accepted protocols. Note: that due to unfortunate naming in .NET, the option --ftp-ssl-protocols=None means "use the system defaults".

Self-signed certificates

To support self-signed certificates, the FTP destination supports the --accept-specified-ssl-hash option, which takes a SHA1 certificate digest and approves the certificate if it matches that hash. This is similar to manual certificate pinning and allows trusting a specific certificate outside the operating system's normal trust chain.
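For reference, a SHA1 certificate digest of this kind is conventionally computed over the certificate's raw (DER-encoded) bytes. The sketch below shows the computation on placeholder bytes rather than a real certificate, and the uppercase-hex formatting is an assumption, not a statement about Duplicati's exact comparison:

```python
# Sketch: computing a SHA1 digest over a certificate's raw bytes, the kind
# of value --accept-specified-ssl-hash compares against. Placeholder input.
import hashlib

der_bytes = b"<raw DER-encoded certificate bytes>"  # placeholder, not a real cert
fingerprint = hashlib.sha1(der_bytes).hexdigest().upper()
print(len(fingerprint))  # → 40 (hex characters identifying this exact certificate)
```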

For testing, it is also possible to use --accept-any-ssl-certificate, which bypasses certificate checks completely and enables man-in-the-middle attacks on the connection.

Path resolution

The FTP protocol is tied to a Posix-style path, where / is the root folder and subfolders are described using the forward-slash separator. On some systems the filesystem is virtual, so the user can only see the root path and has no knowledge of the underlying real filesystem. On others, the paths are mapped directly to the user home, like /home/user.

Use the option --ftp-absolute-path to treat the source path as an absolute path, meaning that folder maps to /folder and not to /home/user/folder.

A related option is --ftp-use-cwd-names, which makes Duplicati keep track of the working directory and use the FTP server's CWD command to set the working folder prior to making a request.

Verification of uploads

To verify that uploads actually work, the FTP backend will request the file after it has been uploaded to check that it exists and has the correct file size. This check is usually quite fast and does not impact backup speeds, but if needed it can be disabled with --disable-upload-verify.

A related setting, --ftp-upload-delay, adjusts the delay inserted after the upload but before verifying that the file exists; this is required on some servers to ensure the file is fully flushed before validating its existence.

Debugging commands

Notes on aFTP

With Duplicati 2.1.0.2 the codebase was upgraded to .NET8 which means that FtpWebRequest is now deprecated. For that reason, the FTP backend was converted to also be based on FluentFTP, so both FTP backends are currently using the same library.

The aFTP backend is still available for backwards compatibility, but is the same as the FTP backend, with some different defaults. The aFTP backend will likely be marked deprecated in a future version, and eventually removed.

This page is not yet completed. See the section on the recovery tool.

Note that for Windows network shares, you may want to use the CIFS/SMB destination instead.

On Windows, the shares can be authenticated with a username and password (not with integrated authentication). This uses a Windows API to authenticate prior to accessing the share.

This page is not yet completed. See the section on the CLI interface.

This page deals with S3 in general; for a specific setup on AWS S3, refer to the AWS specific page.

For AWS S3, and most other providers, the bucket name is a global name, shared across all users. This means that simple names, such as backup or data, will likely be taken, and attempts to use these will cause permission errors. For AWS, the recommendation is to use a guid in the bucket name to make it unique; the Duplicati UI will suggest prefixing the account id to the bucket name for this purpose.

Make sure you consult the provider documentation to get the server name you need for the bucket region. If you are using AWS, see the AWS S3 description.

The S3 storage destination can use either the AWS S3 library or the Minio library, and you can choose the latter with --s3-client=minio.

The FTP protocol is widely supported but is generally considered a legacy protocol with security issues, despite correct implementations being available. Due to its continued ubiquity, it is still supported by Duplicati using FluentFTP.

Because the FTP protocol can sometimes be difficult to diagnose, the option --ftp-log-to-console will enable logging various diagnostic output to the terminal. This option works best with the BackendTool or BackendTester application. The option --ftp-log-privateinfo-to-console will also enable logging of the usernames and passwords being transmitted, to further track down issues. Neither option should be set outside of testing and evaluation scenarios.

Prior to Duplicati 2.1.0.2 there were two different FTP backends, FTP and Alternative FTP (aFTP). This was done because the primary FTP backend was based on FtpWebRequest and was lacking some features. The aFTP backend was introduced to keep the existing FTP backend intact while offering more features using the FluentFTP library.


WebDAV Destination

This page describes the WebDAV storage destination

The WebDAV protocol is a minor extension to the HTTP protocol used for web requests. Because it is compatible with HTTP it also supports SSL/TLS certificates and verification similar to what websites are using.

To use the WebDAV destination, you can use a URL such as:

webdav://<hostname>/<path>
  ?auth-username=<username>
  &auth-password=<password>

You can supply a port through the hostname, such as webdav://hostname:8080/path.

Authentication method

There are three different authentication methods supported with WebDAV:

  • Integrated Authentication (mostly on Windows)

    • Use --integrated-authentication=true to enable. This works for some hosts on Windows and most likely has no effect on other systems, as it requires a Windows-only extension to the request and a server that supports it.

  • Digest Authentication

    • Use --force-digest-authentication=true to use Digest-based authentication

  • Basic Authentication

    • Sending the username and password in plain text. This is the default, but it is insecure unless using an SSL/TLS encrypted connection.

You need to examine your destination server's documentation to find the supported and recommended authentication method.

Encryption and Certificates

To use an encrypted connection, add the option --use-ssl=true, such as:

webdav://<hostname>/<path>
  ?auth-username=<username>
  &auth-password=<password>
  &use-ssl=true

This will then use an HTTPS secured connection subject to the operating system certificate validation rules. If you need to use a self-signed certificate that is not trusted by the operating system, you can use the option --accept-specified-ssl-hash=<hash> to specifically trust a certain certificate. The hash value is reported if you attempt to connect and the certificate is not trusted.

This technique is similar to certificate pinning: it blocks man-in-the-middle attacks, but it also means the connection will fail if the server rotates its certificate.

For testing setups you can also use --accept-any-ssl-certificate, which disables certificate validation. As this enables various attacks, it is not recommended outside of testing.

File destination
S3-compatible
FTP
SFTP
WebDAV
OpenStack
Rclone
Backblaze B2
Box.com
Rackspace CloudFiles
Mega.nz
Aliyun OSS
Tencent COS
Jottacloud
pCloud
Azure Blob Storage
Google Cloud Storage
Microsoft Group Drive
SharePoint
Amazon S3
Dropbox
GoogleDrive
OneDrive
OneDrive for business
Sia
Storj
TahoeLAFS
FTP is considered a legacy protocol with security issues
FluentFTP
BackendTool
BackendTester
FtpWebRequest

Rclone Destination

This page describes the Rclone storage destination

If you are using Rclone, some features, such as bandwidth limits and transfer progress do not work.

Duplicati does not bundle Rclone, so you need to download and install the appropriate binaries before you can use this backend. The URL format for the Rclone destination is:

rclone://<remote repo>/<remote path>
  ?rclone-executable=<path to rclone executable>

If the remote repo is not a valid hostname, you can instead use this format:

rclone://
  ?rclone-remote-repository=<remote repo>
  &rclone-remote-path=<remote path>
  &rclone-executable=<path to rclone executable>

Advanced options

If you need to change the Rclone local repo, you can use the option --rclone-local-repository, which otherwise defaults to local; this works for most setups.

If you need to supply options to Rclone, these can be passed via --rclone-option. Note that the values must be URL encoded, and multiple options can be passed by separating them with spaces before encoding.

As an example, adding "--opt1=a --opt2=b" needs to be URL encoded and results in:

rclone://<remote repo>/<remote path>
  ?rclone-option=--opt1%3Da%20--opt2%3Db
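The encoded value above can be reproduced with a standard percent-encoder:

```python
# Reproducing the URL encoded --rclone-option value from the example above.
from urllib.parse import quote

options = "--opt1=a --opt2=b"
encoded = quote(options, safe="")
print(encoded)  # → --opt1%3Da%20--opt2%3Db
```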

OpenStack Destination

This page describes the OpenStack storage destination

Duplicati supports storing files with OpenStack, which is a large-scale object storage, similar to S3. With OpenStack you store "objects" (similar to files) in "containers" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

OpenStack v2

If you are using OpenStack with version 2 of the protocol, you can either use an API key or a username/password/tenant combination. To use the password based authentication, use a URL format like this:

openstack://<container>/<prefix>
  ?auth-username=<username>
  &auth-password=<password>
  &openstack-tenant-name=<tenant>
  &openstack-authuri=<url to auth endpoint>

If you are using an API key, leave out the --auth-password and --openstack-tenant-name parameters and add in --openstack-apikey=<apikey>.

OpenStack v3

If you are using OpenStack with version 3 of the protocol, you must supply: username, password, domain, and tenant name:

openstack://<container>/<prefix>
  ?auth-username=<username>
  &auth-password=<password>
  &openstack-tenant-name=<tenant>
  &openstack-domain-name=<domain>
  &openstack-authuri=<url to keystone server>
  &openstack-version=v3

Region selection

The authentication response will contain a set of endpoints to be used for actual transfers. In some cases, this response can contain multiple possible endpoints, each with a different region. To prefer a specific region, supply it with --openstack-region. If any of the returned endpoints has the same region (case-insensitive comparison), the first matching endpoint will be selected. If no region is specified, or no region matches, the first endpoint in the response is used.
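The selection rule described above can be sketched as follows. The endpoint dictionaries and field names here are illustrative, not the actual OpenStack response schema:

```python
# Sketch of the region-based endpoint selection: prefer the first endpoint
# whose region matches case-insensitively, otherwise fall back to the first
# endpoint in the response. Field names are illustrative.
def pick_endpoint(endpoints, preferred_region):
    if preferred_region:
        for ep in endpoints:
            if ep.get("region", "").lower() == preferred_region.lower():
                return ep
    return endpoints[0]

eps = [{"region": "US-East", "url": "https://e1"},
       {"region": "EU-West", "url": "https://e2"}]
print(pick_endpoint(eps, "eu-west")["url"])  # → https://e2
```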

SFTP (SSH) Destination

This page describes the SFTP (SSH) storage destination

To use the SFTP destination you can use a URL such as:

ssh://<hostname>/<path>
  ?auth-username=<username>
  &auth-password=<password>
  &ssh-fingerprint=<fingerprint>

You can supply a non-standard port through the hostname, such as ssh://hostname:2222/folder.

Using key-based authentication

It is very common, and more secure, to use key-based authentication, and Duplicati supports this as well. You can either provide the entire key as part of the URL or give a path to the key file. If the key is encrypted, you can supply the encryption key with --auth-password.

To use a private key inline, you need to URL encode it first and then pass it to --ssh-key. An example with an inline private key:

ssh://server/home/backup
  ?ssh-key=sshkey%3A%2F%2F----%20BEGIN%20SSH2%20PRIVATE%20KEY%20----...
  &auth-username=user
  &ssh-fingerprint=<fingerprint>

Note that you need both the prefix sshkey:// and you need to URL encode the contents.
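Preparing the value can be sketched with a standard percent-encoder; the key material below is a placeholder, not a real key:

```python
# Sketch: building an inline --ssh-key value by prefixing sshkey:// and
# URL encoding the whole string. The key material is a placeholder.
from urllib.parse import quote

key_material = "---- BEGIN SSH2 PRIVATE KEY ----\n<base64 key data>\n---- END SSH2 PRIVATE KEY ----"
value = quote("sshkey://" + key_material, safe="")
print(value[:15])  # → sshkey%3A%2F%2F (the encoded sshkey:// prefix)
```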

If you have the SSH keyfile installed in your home folder, you can use the file directly with --ssh-keyfile:

ssh://server/home/backup
  ?ssh-keyfile=/home/user/.ssh/keyfile
  &auth-username=user
  &auth-password=<keyfile password>
  &ssh-fingerprint=<fingerprint>

Note that Duplicati does not currently support key agents so you must pass the password here.

For best security it is recommended to use a separate identity and key files for the user, so a compromise of the keys does not grant more permissions than what is required.

Validating the host key

Since SSH does not have a global key registry, like the certificate authorities used for HTTPS, it is possible to launch a man-in-the-middle attack on an SSH connection. To prevent this, Duplicati and other SSH clients use certificate pinning, where the previously recorded host certificate hash is saved, and changes to the host certificate must be manually handled by the user.

On the first connection to the SSH server, Duplicati will throw an exception that explains how to trust the server host key, including the host key fingerprint. Once you obtain the host key fingerprint, you can supply it with the --ssh-fingerprint option.

If the host key changes, you will get a different message that also reports the new host key, so you can update it. The option --ssh-accept-any-fingerprints=true is only recommended for testing, not for production setups, as it disables the man-in-the-middle protection.

If you are using the UI, you can click the "Test connection" button and it will guide you to set the host key parameters based on what the server reports.

Timeout and keep-alive

By default, Duplicati will assume that the connection works once it has been established. If the SSH server is malfunctioning it may cause operations to hang. To guard against this case, you can set the --ssh-operation-timeout option to enforce a maximum time the operation may take.

A different kind of timeout is when firewalls and other network equipment monitors the connections and closes them if there is no activity. Because Duplicati may open a connection and then perform a long operation locally, it may cause the connection to be closed due to inactivity. The option --ssh-keepalive can be used to define a keep-alive interval where messages are sent if there is no other activity.

Both options are disabled by default and should only be enabled if there are special conditions in a setup where they are needed.

Duplicati has a wide variety of storage destinations, but the Rclone project has even more! If you are familiar with Rclone, you can configure Duplicati to utilize Rclone to transfer files and extend to the full set of destinations supported by Rclone.

The SFTP destination uses the ubiquitous SSH system to implement a secure file transfer service. Using SSH allows secure logins with keys and is generally a secure way to connect to another system. The SSH connection is implemented with Renci SSH.Net.


CIFS (aka SMB) Destination

This page describes the CIFS storage destination

The Common Internet File System (CIFS) backend provides native support for accessing shared network resources using the CIFS/SMB protocol. This backend enables direct interaction with Windows shares and other CIFS-compatible network storage systems.

To use the CIFS destination, you can use a URL such as:

cifs://<hostname>/<share>/<path>
  ?auth-username=<username>
  &auth-password=<password>
  &transport=directtcp

Transport

CIFS supports two distinct transport protocols, each with its own characteristics:

DirectTCP (directtcp)

  • Port: 445

  • Characteristics:

    • Faster performance

    • Modern implementation

    • Preferred for newer systems

    • Direct TCP/IP connection

    • Lower overhead

NetBIOS over TCP (netbios)

  • Port: 139

  • Characteristics:

    • Legacy support

    • Compatible with older systems

    • Additional protocol overhead

    • Slower performance

    • Uses NetBIOS naming service

Advanced Options

--

Defines the read buffer size, in bytes, for SMB operations (will be capped automatically by SMB negotiated values; values below 10000 bytes will be ignored)

--

Defines the write buffer size, in bytes, for SMB operations (will be capped automatically by SMB negotiated values; values below 10000 bytes will be ignored)

Box.com Destination

This page describes the Box.com storage destination

To use box.com, use the following URL format:

box://<folder>/<subfolder>?authid=<authid>

Fully delete files

When files are deleted from your box.com account, they will be placed in the trash folder. To avoid old files taking up storage in your account, you can add --box-delete-from-trash which will then also remove the file from the trash folder.

The CIFS backend is available in Canary releases from v2.1.0.106.

Duplicati supports using box.com as a storage destination. Note that Duplicati stores compressed and encrypted volumes on box.com; it does not store files in a way that makes them individually accessible from box.com.

To use box.com you must first obtain an AuthID by using a Duplicati service to log in to box.com and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.


Rackspace CloudFiles Destination

This page describes the Rackspace CloudFiles storage destination

Duplicati supports storing files with Rackspace CloudFiles, which is a large-scale object storage, similar to S3. With CloudFiles you store "objects" (similar to files) in "containers" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

To use CloudFiles, you can use the following URL format:

cloudfiles://<container>/<prefix>
  ?cloudfiles-username=<username>
  &cloudfiles-accesskey=<access key>

Using a different API endpoint

The default authentication will use the US endpoint, which will not work if you are a customer of the UK service. To choose the UK account, add --cloudfiles-uk-account=true to the request:

cloudfiles://<container>/<prefix>
  ?cloudfiles-username=<username>
  &cloudfiles-accesskey=<access key>
  &cloudfiles-uk-account=true

If you need to use a specific host, you can also provide the authentication URL directly with the --cloudfiles-authentication-url option. If you are providing the URL, the --cloudfiles-uk-account option will be ignored.

IDrive e2 Destination

This page describes the iDrive e2 Destination

Note that the bucket id is globally unique, so it is recommended to use a name that is not likely to conflict with other users, such as prefixing the bucket with the project id or a similar unique value. If you use a simple name, like data or backup, it is likely already associated with another project and you will get permission errors when attempting to use it.

To use iDrive e2, you can use the following URL format:

e2://<bucket>/<prefix>
  ?access_key_id=<Access key id>
  &access_secret_key=<Access secret key>

Duplicati supports storing files on iDrive e2, which is a large-scale object storage service, similar to S3. In iDrive e2 you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

Note that iDrive has a similar offering called iDrive Cloud Backup, which is not currently supported by Duplicati.


Aliyun OSS Destination

This page describes the Alibaba Cloud Object Storage Service, also known as Aliyun OSS.

Note that the bucket id is globally unique, so it is recommended to use a name that is not likely to conflict with other users, such as prefixing the bucket with the project id or a similar unique value. If you use a simple name, like data or backup, it is likely already associated with another project and you will get permission errors when attempting to use it.

To use Aliyun OSS, you can use the following URL format:

aliyunoss://<prefix>
  ?oss-bucket=<Bucket name>
  &oss-endpoint=<Endpoint>
  &oss-access-key-id=<Access Key Id>
  &oss-access-key-secret=<Access Key Secret>

Duplicati supports storing files on Alibaba Cloud Object Storage Service, aka Aliyun OSS, which is a large-scale object storage service, similar to S3. In Aliyun OSS you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

The endpoint is defined by Aliyun and needs to match the region the bucket is created in. The access key can be obtained or created in the Cloud Console.


Mega.nz Destination

This page describes the Mega.nz storage destination

To use the Mega.nz storage destination, you can use the following URL format:

mega://<folder>/<subfolder>
  ?auth-username=<username>
  &auth-password=<password>

NOTE: The destination currently uses the MegaApiClient library, which is no longer maintained. Since there is little documentation on how to integrate with Mega.nz, it is no longer recommended to use this storage destination.

Two-factor authorization

It is possible to provide a two-factor key with the option --auth-two-factor-key, but since this value changes often, it is not suitable for most automated backup settings. This is a design choice from Mega.nz and cannot be fixed by Duplicati.

Tencent COS Destination

This page describes the Tencent COS storage destination

To use Tencent COS, you can use the following URL format:

cos://<prefix>
  ?cos-bucket=<Bucket name>
  &cos-region=<Bucket region>
  &cos-app-id=<Account AppId>
  &cos-secret-id=<API Secret Id>
  &cos-secret-key=<API Secret Key>

Note that the bucket must be created from within the Cloud Console prior to use.

Storage class

NOTE: The ARCHIVE and DEEP_ARCHIVE storage does not work well with Duplicati. Because Duplicati really likes to verify that things are working as expected you need to disable these checks. You also need to disable cleanup of data after deleting versions. Restores are tricky, because you need to manually restore data to the standard storage class before Duplicati can access it.

Duplicati supports storing files on Tencent Cloud Object Storage (COS), which is a large-scale object storage service, similar to S3. In Tencent COS you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

The bucket name is user-chosen, and the region must match the bucket region. The remaining values can be obtained from the Cloud Console.

The objects uploaded can be in different storage classes, which can be set with --cos-storage-class.


Jottacloud Destination

This page describes the Jottacloud storage destination

Duplicati supports using Jottacloud as a storage destination. To use the storage destination, you can use the following URL format:

jottacloud://<folder>/<subfolder>
  ?authid=<authid>

To use Jottacloud you must first obtain an AuthID by using a Duplicati service to log in to Jottacloud and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.

Device and mount point

Within Jottacloud, each machine registered is a device that can be used for storage, and within each device you can choose the mount point. By default, Duplicati will use the special device Jotta and the mount point Archive.

If you need to store data on another device, you can use the options --jottacloud-device and --jottacloud-mountpoint to set the device and mount point. If you only set the device, the mount point will be set to Duplicati.

Performance tuning

If you need to tune the performance and resource usage to match your specific setup, you can adjust these two parameters:

  • --jottacloud-threads: The number of threads used to fetch chunks

  • --jottacloud-chunksize: The size of chunks to download with each thread

Backblaze B2 Destination

This page describes the Backblaze B2 storage destination

Duplicati supports storing files with Backblaze B2, which is a large-scale object storage service, similar to S3. With B2 you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

To use the B2 storage destination, use the following URL format:

b2://<bucket>/<prefix>
  ?b2-accountid=<account id>
  &b2-applicationkey=<application key>

Create a bucket

You can use the Backblaze UI to create your buckets, but if you need to create buckets with Duplicati, this is also possible. The default is to create private buckets, but you can create public buckets with --b2-create-bucket-type=allPublic.

Performance tuning

You can change the size of file listings to better match pricing and speed through --b2-page-size, which defaults to 500, meaning you will issue one list request for each 500 objects. Note that setting this higher may reduce the number of requests, but each request may be priced as a more expensive request.

If you prefer downloads from your custom domain name, you can supply it with --b2-download-url. This setting does not affect uploads.
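The trade-off behind --b2-page-size can be quantified: enumerating a bucket costs roughly one list call per page of objects. A minimal arithmetic sketch (not Duplicati code; 500 is the documented default):

```python
import math

def list_requests(total_objects, page_size=500):
    """Number of list calls needed to enumerate a bucket at a given --b2-page-size."""
    return math.ceil(total_objects / page_size)

# A bucket with 10,000 objects:
print(list_requests(10_000))        # → 20 calls at the default page size
print(list_requests(10_000, 1000))  # → 10 calls when the page size is doubled
```

Whether fewer, larger list calls are cheaper depends on B2's per-request pricing tiers, so measure before changing the default.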

pCloud Destination

This page describes the pCloud storage destination

The pCloud provider was added in Duplicati v2.1.0.100, and is not yet included in a stable release.

To use pCloud, use the following URL format:

pcloud://<host>/<folder>/<subfolder>?authid=<authid>

The <host> value must be one of:

  • api.pcloud.com for US based access

  • eapi.pcloud.com for EU based access
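The region-to-host choice above is a simple mapping. A hypothetical helper (illustrative values, not part of Duplicati) makes the rule explicit:

```python
# Map a region choice to the documented pCloud API hosts
PCLOUD_HOSTS = {"us": "api.pcloud.com", "eu": "eapi.pcloud.com"}

def pcloud_url(region, folder, authid):
    host = PCLOUD_HOSTS[region]  # raises KeyError for unsupported regions
    return f"pcloud://{host}/{folder}?authid={authid}"

print(pcloud_url("eu", "backups/laptop", "example-authid"))
# → pcloud://eapi.pcloud.com/backups/laptop?authid=example-authid
```

Picking the wrong host for your account region results in authentication errors, so the mapping must match where the pCloud account was created.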

Due to the way the pCloud authentication system is implemented, the generated AuthID is not stored by the OAuth server and cannot be revoked via the OAuth server. To revoke the token, you must revoke the Duplicati app from your pCloud account, which will revoke all issued tokens.

This also means that after issuing the pCloud token, you do not need to contact the OAuth server again, unlike other OAuth solutions.

Duplicati supports using pCloud as a storage destination. Note that Duplicati stores compressed and encrypted volumes on pCloud and does not store files in a way that makes them individually accessible from pCloud.

To use pCloud you must first obtain an AuthID by using a Duplicati service to log in to pCloud and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.


Azure Blob Storage Destination

This page describes the Azure Blob Storage destination

To use the Azure Blob Storage destination, you can use the following URL format:

azure://<container>/<prefix>
  ?azure-account-name=<account id>
  &azure-access-key=<access key>

Create container

If you use the UI, the "Test connection" button will prompt you if the container needs to be created.

Using a Shared Access Signature (SAS) token

Instead of using a traditional Access Key, you can also use a SAS token. To use this, supply it instead of the access key, for example:

azure://<container>/<prefix>
  ?azure-account-name=<account id>
  &azure-access-sas-token=<SAS token>

Duplicati supports backing up to Azure Blob Storage, which is a large-scale object storage service, similar to S3.

You can create the container via the Azure portal, but if you prefer, you can also let Duplicati create the container for you. The container names are unique within the storage account and have a number of restrictions.


Microsoft Group Destination

This page describes the Microsoft Group storage destination

msgroup://<folder>/<subfolder>
  ?authid=<authid>
  &group-id=<group-id>

You can either provide the group email via --group-email or the group id via --group-id. If you provide both, they must resolve to the same group id.

Performance tuning options

If you need to gain more performance you can fine-tune the performance of chunked transfers with the options:

  • --fragment-size

  • --fragment-retry-count

  • --fragment-retry-delay

For most uses, it is recommended that these are kept at their default settings and only changed after confirming that there is a gain to be made by changing them.

Duplicati supports using Microsoft Groups as a storage destination, using the URL format shown above.

To use MS Group you must first obtain an AuthID by using a Duplicati service to log in to Microsoft and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.


Google Cloud Storage Destination

This page describes the Google Cloud Storage destination

Note that the bucket id is globally unique, so it is recommended to use a name that is not likely to conflict with other users, such as prefixing the bucket with the project id or a similar unique value. If you use a simple name, like data or backup, it is likely already associated with another project and you will get permission errors when attempting to use it.

To use GCS, you can use the following URL format:

gcs://<bucket>/<prefix>?authid=<authid>

Creating a bucket

Note that the options described below have no effect if the bucket already exists.

Duplicati supports storing files on Google Cloud Storage, aka GCS, which is a large-scale object storage service, similar to S3. In GCS you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a / in the object prefix, they can be displayed as virtual folders when listing them.

To use Google Cloud Storage you must first obtain an AuthID by using a Duplicati service to log in to Google and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.

You can create a bucket from within the Google Cloud Console, where you can set all options as desired. If you prefer to let Duplicati create the bucket, you can also set the parameters from Duplicati.

You set the project the bucket belongs to with --gcs-project=<project id> and the desired location with --gcs-location=<location>. You can get the project id from the Google Cloud Console and see the possible GCS bucket locations in the GCS documentation.

When creating the bucket you can also choose the storage class with --gcs-storage-class. You can choose any of the storage class values shown in the GCS documentation, even if they are not reported as possible by Duplicati.


SharePoint Destination

This page describes the SharePoint storage destination

To use SharePoint, use the following URL format:

mssp://<folder>/<subfolder>
  ?auth-username=<username>
  &auth-password=<password>

Integrated Authentication (Windows only)

If you are on Windows, it may be possible to use the current user's credentials to authenticate. Support for this depends on many details and is not available in all cases. To use integrated authentication, use the following URL format:

mssp://<folder>/<subfolder>?integrated-authentication=true

Advanced options

Instead of deleting files directly, they can be moved to the recycle bin by setting the option --delete-to-recycler. This gives some additional safety if a version removal was unintended, but is not generally recommended, as it is a manual process to recover from a partial delete.

The options --web-timeout and --chunk-size can be used to fine-tune performance that matches your setup, but generally it is recommended to keep them at their default values.

If you are running Duplicati in a data center with a very stable connection, you can use the option --binary-direct-mode to enable direct transfers for optimal performance.

Duplicati supports using Microsoft SharePoint as a storage destination. This page describes the SharePoint destination that uses the legacy API; for the SharePoint provider that uses the Graph API, see SharePoint v2.


SharePoint v2 (Graph API)

This page describes the SharePoint v2 storage destination

To use SharePoint, use the following URL format:

sharepoint://<folder>/<subfolder>
  ?authid=<authid>
  &site-id=<site-id>

Performance tuning options

If you need to gain more performance you can fine-tune the performance of chunked transfers with the options:

  • --fragment-size

  • --fragment-retry-count

  • --fragment-retry-delay

For most uses, it is recommended that these are kept at their default settings and only changed after confirming that there is a gain to be made by changing them.

Duplicati supports using Microsoft SharePoint as a storage destination. This page describes the SharePoint destination that uses the Graph API; for the SharePoint provider that uses the legacy API, see SharePoint.

To use SharePoint v2 you must first obtain an AuthID by using a Duplicati service to log in to Microsoft and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.


Amazon S3 destination

This page describes how to use the AWS S3 storage destination

To use the AWS S3 destination, use a format such as:

s3://<bucket name>/<prefix>
  ?aws-access-key-id=<account id or username>
  &aws-secret-access-key=<account key or password>
  &s3-location-constraint=<region-id>

If you do not supply a hostname, but instead a region, such as us-east-1, the hostname will be auto-selected, based on the region. If the region is not supported by the library yet, you can supply the hostname via --server-name=<hostname>.

Beware that S3 by default will not use an encrypted connection; you need to add --use-ssl=true to enable it.

Creating a bucket

When creating a bucket, it will be created in the location supplied by --s3-location-constraint. In the case no constraint is supplied, the AWS library will decide what to do. If the bucket already exists, it cannot be created again, so the --s3-location-constraint setting will not have any other effect than choosing the hostname.

Storage class

Note on Glacier storage class

Glacier storage does not work well with Duplicati. Because Duplicati really likes to verify that things are working as expected you need to disable these checks. You also need to disable cleanup of data after deleting versions. Restores are tricky, because you need to retrieve data manually from Glacier before Duplicati can work with it.

The storage destination is implemented with the general S3 destination, so all details from that page apply here as well, but some additional features are supported by AWS.

By default, the objects are created with the "Standard" storage class, which has optimal access times and redundancy. More information about the different AWS S3 storage classes is available from AWS. You can choose the storage class with the option --s3-storage-class. Note that you can provide any string here that is supported by your AWS region, even though the UI only offers a few different ones.


Dropbox Destination

This page describes the Dropbox storage destination

To use Dropbox, use the following URL format:

dropbox://<folder>/<subfolder>?authid=<authid>

Duplicati supports using Dropbox as a storage destination. Note that Duplicati stores compressed and encrypted volumes on Dropbox and does not store files in a way that makes them individually accessible from Dropbox.

To use Dropbox you must first obtain an AuthID by using a Duplicati service to log in to Dropbox and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.


Google Drive Destination

This page describes the Google Drive storage destination

To use Google Drive, use the following URL format:

googledrive://<folder>/<subfolder>?authid=<authid>

Access levels

Duplicati can work with limited access to Google Drive, where it only has access to its own files. This access is recommended, because it prevents accidents where files not relevant for Duplicati can be read or written. On the community server, this option is called "Google Drive (limited)".

Unfortunately, the security model in Google Drive sometimes resets the access, cutting off Duplicati from accessing the files it has created. If this happens, it is not currently possible to re-assign access to Duplicati, and in this case you must grant full access to the Google Drive for Duplicati to work. On the community server, this option is called "Google Drive (full access)".

Team folder

If you need to use a Team Drive, set the option --googledrive-teamdrive-id to the ID for the Team Drive to use. If this is not set, it will use the personal Google Drive. For example:

googledrive://folder/subfolder?authid=<authid>&googledrive-teamdrive-id=<team id>

Duplicati supports using Google Drive as a storage destination. Note that Duplicati stores compressed and encrypted volumes in Google Drive and does not store files in a way that makes them individually accessible from Google Drive.

To use Google Drive you must first obtain an AuthID by using a Duplicati service to log in to Google and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.


OneDrive Destination

This page describes the OneDrive storage destination

To use OneDrive, use the following URL format:

onedrivev2://<folder>/<subfolder>?authid=<authid>

Drive ID

A default drive will be used to store the data. If you require another drive to be used to store data, such as a shared drive, use the --drive-id=<drive id> option.

Duplicati supports using Microsoft OneDrive as a storage destination. Note that Duplicati stores compressed and encrypted volumes on OneDrive and does not store files in a way that makes them individually accessible from OneDrive.

To use OneDrive you must first obtain an AuthID by using a Duplicati service to log in to Microsoft and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.


Sia Destination

This page describes the Sia storage destination

sia://<host>:<port>/<path>
  ?sia-password=<password>

If the host supports unauthenticated connections, you can omit the password. If not supplied, the port defaults to 9980 and the path defaults to /backup.

Advanced options

To adjust the amount of redundancy in the Sia network, use the option --sia-redundancy. Note that this value should be more than 1.

Duplicati supports backups to the Sia network, which is a large-scale decentralized storage network. The Sia destination uses the URL format shown above.


Storj Destination

This page describes the Storj storage destination

Access Grant

To use the access grant method, use the following URL format:

storj://
  ?storj-auth-method=Access%20Grant
  &storj-shared-access=<access key>

Satellite API

To use a satellite API, use the following URL format:

storj://
  ?storj-satellite=<hostname:port>
  &storj-api-key=<api key>
  &storj-secret=<secret>

If the --storj-satellite is omitted it will default to a US based endpoint.

Bucket and folder

To choose the bucket where data is stored, use the --storj-bucket option, which defaults to duplicati. If further differentiation is needed, use --storj-folder to specify a folder within the bucket where data is stored.

Duplicati supports backups to the Storj network, which is a large-scale decentralized storage network. The destination supports two different ways of authenticating: Access Grant and Satellite API.


Server

This page describes the Duplicati server component

The server is responsible for saving backup configurations, starting scheduled backups, and providing the user interface. The user interface is provided by hosting a webserver inside the process. This webserver serves both the static files and the API that is needed to control the server.

During the operation, the server will report progress and log messages, which can be viewed if a client is attached during the run. After the run, the Server will record metadata and log data in the database, to assist in troubleshooting later.

Configuring the server password

--webservice-password=<new password>
--webservice-reset-jwt-config=true

It is also possible to disable the use of signin tokens, which are used in some cases in favor of requiring the password. This can be set with the option:

--webservice-disable-signin-tokens=true

Configuring the server encryption

--settings-encryption-key=<encryption key>

Ensure you use double quotes to escape special characters as required by your operating system's command line.

If the server starts without a settings encryption key, it will emit a warning in the logs explaining the problem. If any fields are already encrypted, Duplicati will refuse to start without the encryption key. If no fields are encrypted, but an encryption key is supplied, the fields will be encrypted.

If you need to remove the encryption key for some reason, provide the key as above, and additionally supply the option:

--disable-db-encryption=true

If this flag is supplied, Duplicati will not emit a warning that the database is not encrypted. If the database was encrypted, it will be decrypted. After the database is decrypted, it can be re-encrypted with a different password.

To prevent ever starting the Server without an encryption key, provide the option:

--require-db-encryption-key

Note that this is exclusive with --disable-db-encryption and that the server will not start if the fields are encrypted and no encryption key is provided.

External access to the server

To activate access from the local network, the server must be started with:

--webservice-interface=any

It is also possible to specify loopback (the default value) or the IP address to listen on.

When accessing the server from an external machine, it will only respond to requests that use an IP address as the hostname. This security mechanism is meant to combat fast-flux DNS attacks that could expose the local API to a website. If you need to access Duplicati from an external machine, you need to explicitly allow the hostname(s) that you will be using, by starting the server with:

--webservice-allowed-hostnames=<hostname>

Multiple hostnames can be supplied with semicolons: host1;host2.example.com;host3.
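When templating a service file or container config, the semicolon-separated list can be assembled programmatically; a trivial sketch (hostnames are illustrative):

```python
# Join the allowed hostnames into the documented semicolon-separated form
hostnames = ["host1", "host2.example.com", "host3"]
arg = "--webservice-allowed-hostnames=" + ";".join(hostnames)
print(arg)
# → --webservice-allowed-hostnames=host1;host2.example.com;host3
```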

The server will attempt to use port 8200 and terminate if that port is not available. Use the commandline option to set a specific port:

--webservice-port=<port number>

SSL/TLS support

To ensure all communication is secure, Duplicati supports adding a TLS certificate. The certificate can be a self-signed certificate, but in this case the browser will not accept it, and extra tweaks must be made.

Once you have the desired certificate, in .pfx aka .p12 format, you can provide it to the Server on startup:

--webservice-sslcertificatefile=<path to certificate file>
--webservice-sslcertificatepassword=<password to ssl file>

To change the certificate, exit all running instances, then run again once with the new certificate path, as shown above, and the internally stored certificate will be replaced.

If you need to revert to unencrypted http communication, you can use the option:

--webservice-remove-sslcertificate=true

It is also possible to temporarily disable the use of the certificate, without removing it, with:

--webservice-disable-https

Serving a different UI

--webservice-api-only=true

This option will fully disable the serving of static files and only leave the API available.

If instead, you would like to serve a different folder, you can use the option to set the webroot:

--webservice-webroot=<path-to-webroot>

To better support SPA type applications, the Server can be started with:

--webservice-spa-paths=<path to SPA>

For the SPA enabled path, any attempt to access a non-existing page will serve the index.html file, which can then render the appropriate view. Multiple paths can be supplied with semicolons.

Timezone

Internally, all time operations are recorded in UTC to avoid issues with daylight savings and changes caused by changing the machine timezone. The only exception to this rule is the scheduler, which is timezone aware.

The scheduler needs to be timezone aware so scheduled backups run at the same local time, even during daylight savings time. On the initial startup, the system timezone is detected and stored in the server database. It is possible to change the timezone from the UI, but it can also be set with the commandline option:

--webservice-timezone=<timezone>

Configuring logging

Duplicati will log various messages to the server database, but it is possible to also log these messages to a log file for better integration with monitoring tools or manual inspection. To configure file-based logging, provide the two options:

--log-file=<path to logfile>
--log-level=<loglevel>

By default, the --log-level parameter is set to only log warnings, but can be configured to any of the log levels: Error, Warning, Information, Verbose, and Profiling.

The log data that is stored in the database is by default kept for 30 days, but this period can be defined with the option:

--log-retention=<time to keep logs>

On Windows, it is also possible to log data to the Windows Eventlog. To activate this, set the options:

--windows-eventlog=true
--windows-eventlog-level=<loglevel>

Storing data in different places

By default, Duplicati will use the location that is recommended by the operating system to store "Application Support Files" or "Application Data":

  • Windows: %LOCALAPPDATA%\Duplicati

  • Linux: ~/.config/Duplicati

  • MacOS: ~/Library/Application Support/Duplicati

These paths are sensitive to the user context, meaning that the actual paths will change based on the user that is running the Server. This is especially important when running the server with elevated privileges, because this usually causes it to run in a different user context, resulting in different paths.

To force a specific folder to be used, set the option:

--server-datafolder=<path to storage folder>

This can also be supplied with the environment variable:

DUPLICATI_HOME=<path to storage folder>

If both are supplied, the commandline options are used.

Environment variables

For the server options, it is also possible to supply them as environment variables. This makes it easier to toggle options from Docker-like setups where it is desirable to have the entire service config in a single file, and setting commandline arguments may be error prone.

Any of the commandline options for the server can be applied by transforming the option name to an environment variable name. The transformation is to upper-case the option name, change hyphens, -, to underscores, _, and prepend DUPLICATI__.

For example, to set the commandline option --webservice-api-only=true with an environment variable:

DUPLICATI__WEBSERVICE_API_ONLY=true
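The documented transformation is mechanical, so it can be expressed as a few lines of code; this is an illustrative helper, not part of Duplicati:

```python
def option_to_env(option):
    """Map a server commandline option to its environment variable name,
    following the documented rule: strip the leading dashes and any value,
    upper-case, replace '-' with '_', and prepend DUPLICATI__."""
    name = option.lstrip("-").split("=", 1)[0]
    return "DUPLICATI__" + name.upper().replace("-", "_")

print(option_to_env("--webservice-api-only=true"))
# → DUPLICATI__WEBSERVICE_API_ONLY
```

Running the helper over a list of options is a quick way to generate the environment block of a compose file.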

Any arguments supplied on the commandline will take precedence over an environment variable, as commandline arguments are considered more "local".

The Duplicati server is the primary instance, and is usually hosted by the TrayIcon in desktop environments. The server itself is intended to be a long-running process, usually running as a service-like process that starts automatically. The binary executable is called Duplicati.Server.exe on Windows and duplicati-server on Linux and MacOS.

When the server runs any operation, such as a backup or restore, it will configure an environment from various settings, primarily the backup configuration. The actual implementation is the same code that is executed by the command line interface, but runs within the server process.

Unlike the command line interface, the Server keeps track of the local database to ensure the database is present for all operations. This is possible because the server has additional state in the server database, and the path to the local database is kept there.

As described in the access password section, it is possible to set or reset the server password by starting the server with the --webservice-password option shown above.

This new password is stored in the server database and does not need to be supplied on future launches. Note that changing the password does not invalidate tokens that are already issued. To clear any issued tokens, which should be done if there is a suspicion that the signing keys are leaked, start with the --webservice-reset-jwt-config option shown above.

This will generate new token signing keys and immediately invalidate any previously issued tokens. You can start the server with this parameter on each launch if you do not rely on a refresh token stored in the browser.

Since the server database is a critical resource to protect, it is possible to set a field-level encryption password with the --settings-encryption-key option shown above.

The server will by default only listen to requests on the local machine, which is done to ensure that requests from the local network cannot access the Duplicati instance. However, any applications that are running on the same machine will be able to send commands to Duplicati. To prevent local privilege escalation attacks, Duplicati requires a password and a valid token for all requests.

To create a trusted certificate, it is easiest to use one of the many tools available, such as mkcert, which can generate the various components and configure your system to trust these certificates. Beware that this requires good operational security, as the generated certificate authority can issue certificates for ANY website, including ones you do not own, and eavesdrop on your traffic.

After starting the server with an SSL certificate, the certificate is stored in the server database with a randomly generated password. Any subsequent launches of the server will then use the certificate and the server will only communicate over https.

If you are developing a new UI for Duplicati, or prefer to use a customized UI, it is possible to configure the server to serve another UI, or none at all. If you want to use the Server component and only manipulate it with another tool, such as ServerUtil, start with the --webservice-api-only option shown above.


TahoeLAFS destination

This page describes the TahoeLAFS storage destination

To use TahoeLAFS, use the following URL format:

tahoe://<hostname>:<port>/uri/URI:DIR2:<folder>
  ?use-ssl=true

TrayIcon

This page describes the Duplicati TrayIcon executable

The main application in the Duplicati installation is the TrayIcon program, called Duplicati.GUI.TrayIcon.exe on Windows and simply duplicati on Linux and MacOS.

The TrayIcon executable is a fairly small program whose primary task is to register with the operating system desktop environment and place a status icon in the desktop tray, menu, or statusbar.

The TrayIcon is connected to the server and will change the displayed icon based on the server state. Opening the associated context menu provides the option to quit, pause/resume, or open the UI.

Server port

By default, Duplicati uses port 8200 as the communication port with the hosted server. Should that port be taken, usually because another instance of Duplicati is running in another user context, Duplicati will automatically try other ports from the sequence: 8200, 8300, 8400, ..., 8900.

Once an available port is found, this port is stored in the server database and attempted first on next run.
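The fallback sequence can be sketched in shell; the loop below only emits the candidate ports Duplicati would try in order (the actual availability check is performed by the server itself):

```shell
# Candidate ports Duplicati tries in order: 8200, 8300, ..., 8900.
for port in $(seq 8200 100 8900); do
  echo "candidate port: $port"
done
```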

Default browser

By default, the Duplicati TrayIcon will use the operating system's standard method for opening the default browser. If this is not desired, it is possible to choose the binary that will be used to launch the webpage with the option:

--browser-command=<path to binary>

Detached TrayIcon

In some cases it may be useful to run the server in one process and the TrayIcon in another. For this setup, the TrayIcon can run without a hosted server. To disable the hosted server, start the TrayIcon application with the commandline option:

 --no-hosted-server=true

This will cause the TrayIcon to connect to a Server that is already running. If the Server is not running on the same machine, or using a different port, this can be specified with the commandline option:

--hosturl=<host url>
--read-config-from-db=true

The TrayIcon will then attempt to extract signing information from the local database, provided that the TrayIcon process also has read access to the database, and that signing tokens are not disabled.
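As a sketch, a detached TrayIcon connecting to an already-running server on a non-default port could be started like this (the host url and port are illustrative):

```shell
# Hypothetical example: connect the TrayIcon to a server on port 8300,
# reading the token signing information from the local database.
duplicati --no-hosted-server=true \
  --hosturl=http://localhost:8300 \
  --read-config-from-db=true
```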

Self-signed certificate

If the server is using a self-signed certificate (or a certificate not trusted by the OS), the connection will fail. To manually allow a certificate, obtain the certificate hash, and provide it with:

--host-cert-hash=<hash>

For testing and debugging purposes, the certificate hash * means "any certificate". Beware that this setting is very insecure and should not be used in production settings.

Server settings

When hosting the server, the TrayIcon also accepts all the server settings and will forward any commandline options to the hosted server when starting it.

Command Line Interface (CLI)

This page describes the command line interface (CLI)

The commandline interface provides access to all Duplicati operations without needing a running server instance. This is beneficial if your setup does not benefit from a UI and you want to use an external scheduler to perform the operations.

The binary is called Duplicati.CommandLine.exe on Windows and duplicati-cli on MacOS/Linux. All commands from the commandline interface follow the same structure:

Each command also requires the option --dbpath=<path to local database>. If it is not supplied, Duplicati will use a shared JSON file in the settings folder to keep track of which database belongs to each backup. Since no other state is given, the remote url is used as the key, as it is expected to uniquely identify each backup. If no entry is found, a new entry is created and subsequent operations will use that database.

All commands support the --dry-run parameter that will simulate the operations and provide output, but not actually change any local or remote files.
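For example, a backup with an explicit database path and a dry run could look like this (the destination url and paths are illustrative):

```shell
# Hypothetical paths; --dry-run reports the planned actions
# without changing any local or remote files.
duplicati-cli backup ftp://user:pass@server/folder /home/user/documents \
  --dbpath=/home/user/.config/duplicati/example.sqlite \
  --dry-run
```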

The help command

The commandline interface has full documentation for all supported options and some small examples for each of the supported operations. Running the help command will output the possible topics:

To list all options supported by the commandline interface, run the following command:

Note that the number of options is quite large, so you will likely need to use some kind of search functionality to navigate the output.

Backup

The most common command is clearly the backup command, and the related restore command. To run a backup, use the following command:

The source path argument can be repeated to include multiple top-level folders. By default, backups are encrypted on the remote destination, and if no passphrase is supplied with --passphrase, the commandline interface will prompt for one. If the backups should be done unencrypted, provide the option --no-encryption.

When supplying only exclude filters, any file not matching a filter will be included; likewise, if only include filters are present, anything not matching will be excluded. The order of the arguments defines the order in which the filters are evaluated. Beware that some symbols, such as * and \, need to be escaped on the commandline, and the rules vary based on operating system and terminal application/shell.
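Quoting is the usual way to keep the shell from expanding wildcard characters before Duplicati sees them; this small demonstration uses only the shell itself:

```shell
# Quoted, the asterisk reaches the program verbatim instead of being
# expanded by the shell against files in the current directory.
echo '--exclude=*.iso'
```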

If any of the --keep-time, --keep-versions, or --retention-policy options are set, a successful backup will subsequently invoke the delete and compact operations as needed. This enables a single command to run all required maintenance, but these steps can also be invoked manually.

Restore

The restore command is equally as important as the backup command and can be executed with:

The restore command in this form will restore the specified file(s) to their original location. If a file is already present in the original location, the files will be restored with a timestamp added to their name. If no files are specified, or the filename is *, all files will be restored.

To restore to a different location than the original, such as to a staging folder, use the option --restore-path=<destination>. The restore will find the shortest common path for the files to restore, and make a minimal folder structure to restore into.

If you are sure you want to restore the files, and potentially lose existing files, use the option --overwrite.

The restore command will restore from the latest version of the backup, but other versions can be selected with the --version=<version> option. As with backups, the --include and --exclude options can be used to filter down the desired files to restore.
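Putting the options together, restoring a single file from an older version into a staging folder might look like this (the version number, destination url, and paths are illustrative):

```shell
# Restore version 3 of a file into /tmp/staging instead of the original location.
duplicati-cli restore ftp://user:pass@server/folder "documents/report.txt" \
  --version=3 \
  --restore-path=/tmp/staging
```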

Find

The find command is responsible for locating files within the backups:

To list files in a specific version, use the --version=<version> option. To search across all versions, use the --all-versions option.

As with backup and restore, the --include and --exclude filters can be added to assist in narrowing down the search output.

A related operation is the "compare" command, which will show a summary of differences between two versions.

Handling exceptional situations

For normal use, it should be sufficient to only use the backup, restore, and find commands. However, in some exceptional cases, manual intervention may be needed to fix a problem. If such a situation occurs, Duplicati will abort the backup and give an error message indicating the problem.

Repair

If the local database is missing or somehow out-of-sync with the remote storage, it can be rebuilt with the repair command. The repair command is invoked with:

If the local database is missing, it is recreated from the remote storage. If the local database is present, the repair command will attempt to recreate any data that is missing on the remote storage. This is only possible if the missing data is still available on the local system. If the required data is missing, the repair command will fail with an error message explaining what is missing.

List broken files

The command list-broken-files will check which remote files are missing or damaged and report what files can no longer be restored due to this:

Purge broken files

If the remote files cannot be recovered, but you would like the backup to continue, you can use the purge-broken-files command to rewrite the remote storage to simply exclude the files that are no longer restorable:

After successfully purging the broken files, the local database and remote storage will be in sync and you can continue backups.

The related command "purge" can be used to selectively remove files from the backup.

After purging files, you can run the compact command to release space that was held by the removed files.

OneDrive For Business Destination

This page describes the OneDrive For Business storage destination

To use OneDrive For Business, use the following URL format:

Integrated Authentication (Windows only)

If you are on Windows, it may be possible to use the current user's credentials to authenticate. Support for this depends on many details and is not available in all cases. To use integrated authentication, use the following URL format:

Advanced options

Instead of deleting files directly, they can be moved to the recycle bin by setting the option --delete-to-recycler. This gives some additional safety if a version removal was unintended, but it is not generally recommended, as recovering from a partial delete is a manual process.

The options --web-timeout and --chunk-size can be used to fine-tune performance that matches your setup, but generally it is recommended to keep them at their default values.

If you are running Duplicati in a data center with a very stable connection, you can use the option --binary-direct-mode to enable direct transfers for optimal performance.

Service and WindowsService

The page describes the Service and WindowsService programs

WindowsService

The Duplicati.WindowsService.exe executable only exists for Windows and serves two purposes: to manage the Windows Service registration and running the server as a Windows Service.

The registration of the Windows Service is done by executing the WindowsService binary:

The arguments can be any of the arguments supported by the Server and will be passed on to the Server on startup. The service will be registered to start automatically and to restart on failure. These details can be changed from the Windows service manager.
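As a sketch, registering the service with a server option passed through could look like this (the port value is illustrative):

```shell
# Hypothetical example: register the service and have the hosted
# server listen on port 8201 instead of the default.
Duplicati.WindowsService.exe INSTALL --webservice-port=8201
```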

From version 2.1.0.0 and forward, the service will automatically start after installation. The command can be changed to INSTALL-ONLY to avoid starting the service.

To remove the service, use the UNINSTALL command:

Command Line Tools

This page describes the supporting command line tools

BackendTester

This page describes the backend tester tool

Before trusting a storage location with your backups, it's essential to verify its reliability. The built-in Storage Testing Tool helps validate your backup destination through comprehensive integrity testing.

The BackendTester binary is called Duplicati.CommandLine.BackendTester.exe on Windows and duplicati-backend-tester on Linux and MacOS. The tool is mostly intended for system administrators who need to verify that a certain storage solution works as expected, or for developers writing a new storage destination provider.

How the Storage Test Works:

  1. The tool automatically creates test files:

    • Generates files of varying sizes

    • Uses randomized file names

    • Creates the number of files you specify

  2. Performs a complete backup simulation:

    • Uploads all test files to your chosen storage location

    • Downloads each file back to verify retrieval

    • Validates file integrity using hash verification

    • Repeats this cycle multiple times for confidence

  3. Provides detailed test results:

    • Success/failure status of each operation

    • Upload and download performance metrics

    • Data integrity confirmation

Customizable Test Parameters:

  • File count: Choose how many test files to generate

  • File sizes: Set minimum and maximum file sizes

  • Filename parameters: Configure allowed characters

  • Test iterations: Specify how many test cycles to run

Duplicati supports backups to the Tahoe least-authority file store, Tahoe-LAFS. To use the TahoeLAFS destination, use this URL format:

TrayIcon on Windows
Status icon on Ubuntu
Statusbar icon on MacOS

The second task the TrayIcon is usually responsible for is to host the Server component. The server is responsible for handling stored backup configurations, providing a user interface, running scheduled tasks, and more. When launching the TrayIcon, it will also transparently launch and host the server. It uses this hosted instance to subscribe to changes, so it can change the icon to signal the server state.

It may also be required to provide the password for the server in the detached setup, as outlined in Duplicati Access Password. An alternative to providing the password is to use the preload settings option.

It may be convenient to provide arguments to both the Server and TrayIcon when running in detached mode.

When the TrayIcon is hosting the server, or has access to the database settings, it will automatically extract the certificate hash, so that particular certificate is accepted. This technique is secure and very similar to certificate pinning.

It is possible to run Duplicati in "portable mode" where it can run from removable media, such as a USB stick; see the server data location section for more details.

Most options have no relationship and can be applied in any order, but some options, mostly the filter options, are order sensitive and must be supplied in the order they are evaluated. The remote url is a url-like representation of the storage destination and options. The destination overview page lists what is currently supported.

The list of supported options is quite extensive and only the most common options are described on this page. The sensitive options --passphrase, --auth-username, and --auth-password can also be supplied through the matching environment variables: PASSPHRASE, AUTH_USERNAME, and AUTH_PASSWORD. For further safeguarding of these values, see the section on using the secret provider.

The most common additional options supplied are the filter options. The filters can selectively change what files and folders are excluded from the source paths. The page on filters describes the format of filters. Filters are supplied with the --include and --exclude options. For example:

If no filename is specified, the command will instead list all the known backup versions (or "snapshots"). Multiple filenames can be specified, and they are all treated as filter expressions. If a full file path is specified, the find command will instead list all versions of that file.

The related command "affected" can give a similar output, where it reports what files would be lost if the given remote files were damaged. It is possible that files can be partially restored despite damaged remote files. For handling partial restores, see the section on disaster recovery.

Duplicati supports using Microsoft OneDrive for Business as a storage destination. Note that Duplicati stores compressed and encrypted volumes on OneDrive and does not store files so they are individually accessible from OneDrive.

The Service binary executable is a small helper program that simply runs the Server executable and restarts it if it exits. The purpose of this program is to assist in keeping the Server running, even in the face of errors. The Service binary is called Duplicati.Service.exe on Windows and duplicati-service on Linux and MacOS.

Besides the general commandline interface, Duplicati ships with a number of supporting commandline tools. Apart from ServerUtil, each of the tools is intended to be used in special circumstances, outside the expected normal operation of Duplicati.

BackendTester, Snapshots, AutoUpdater, and SecretTool are intended to be used for testing functionality on the actual setup, ahead of making changes or running backups.

The BackendTool and SharpAESCrypt tools are intended to work directly with the remote storage files.

The RecoveryTool can work directly with the remote storage without using the regular Duplicati code, and can both recover files from a damaged remote destination and re-upload existing files.

duplicati-cli <command> <remote url> [arguments and options]
See duplicati-cli help <topic> for more information.
  General: example, changelog
  Commands: backup, find, restore, delete, compact, test, compare, purge, vacuum
  Repair: repair, affected, list-broken-files, purge-broken-files
  Debug: debug, logging, create-report, test-filters, system-info, send-mail
  Targets: aliyunoss, azure, b2, box, cloudfiles, dropbox, ftp, aftp, file, gcs, googledrive, e2,
  jottacloud, mega, msgroup, onedrivev2, openstack, rclone, s3, ssh, od4b, mssp, sharepoint, sia,
  storj, tahoe, cos, webdav
  Modules: aes, gpg, zip, console-password-input, http-options, hyperv-options, mssql-options,
  runscript, sendhttp, sendxmpp, sendtelegram, sendmail
  Formats: date, time, size, decimal, encryption, compression
  Advanced: mail, advanced, returncodes, filter, filter-groups, <option>
  Secrets: secret, <provider>
duplicati-cli help advanced
duplicati-cli backup <remote url> <source path> [options]
--exclude=*.iso
--exclude=Thumbs.db
--exclude=*/tmp-*
duplicati-cli restore <remote url> <filename> <options>
duplicati-cli find <remote url> <filename> <options>
duplicati-cli repair <remote url>
duplicati-cli list-broken-files <remote url> <options>
duplicati-cli purge-broken-files <remote url> <options>
od4b://<folder>/<subfolder>
  ?auth-username=<username>
  &auth-password=<password>
od4b://<folder>/<subfolder>?integrated-authentication=true
Duplicati.WindowsService.exe INSTALL [arguments ...]
Duplicati.WindowsService.exe UNINSTALL
Usage: <protocol>://<username>:<password>@<path>
Example: ftp://user:pass@server/folder

Supported backends: aliyunoss,aftp,azure,b2,box,cloudfiles,dropbox,ftp,file,gcs,googledrive,e2,jottacloud,mega,msgroup,onedrivev2,openstack,rclone,s3,ssh,od4b,mssp,sharepoint,sia,storj,tahoe,cos,webdav

 --reruns (Integer): The number of test runs to perform
   A number that describes how many times the test is performed
   * default value: 5
 --tempdir (Path): The path used to store temporary files
   The backend tester will use the system default temp path. You can set this option to choose another path.
 --extended-chars (String): A list of allowed extended filename chars
   A list of characters besides {a-z, A-Z, 0-9} to use when generating filenames
   * default value: -_',=)(&%$#@! +
 --number-of-files (Integer): The number of files to test with
   An integer describing how many files to upload during a test run
   * default value: 10
 --min-file-size (Size): The minimum allowed file size
   File sizes are chosen at random, this value is the lower bound
   * default value: 1kb
 --max-file-size (Size): The maximum allowed file size
   File sizes are chosen at random, this value is the upper bound
   * default value: 50mb
 --min-filename-length (Integer): The minimum allowed filename length
   File name lengths are chosen at random, this value is the lower bound
   * default value: 5
 --max-filename-length (Integer): The maximum allowed filename length
   File name lengths are chosen at random, this value is the upper bound
   * default value: 80
 --trim-filename-spaces (Boolean): Trim whitespace from filenames
   A value that indicates if whitespace should be trimmed from the ends of randomly generated filenames
   * default value: false
 --auto-create-folder (Boolean): Allow automatic folder creation
   A value that indicates if missing folders are created automatically
   * default value: false
 --skip-overwrite-test (Boolean): Bypass the overwrite test
   A value that indicates if dummy files should be uploaded prior to uploading the real files
   * default value: false
 --auto-clean (Boolean): Remove any files found in target folder
   A value that indicates if all files in the target folder should be deleted before starting the first test
   * default value: false
 --force (Boolean): Activate file deletion
   A value that indicates if existing files should really be deleted when using auto-clean
   * default value: false
destination overview
using the secret provider
page on filters
filter expressions
disaster recovery
Microsoft OneDrive for Business
Server
commandline interface
BackendTester
Snapshots
AutoUpdater
SecretTool
BackendTool
SharpAESCrypt
RecoveryTool

AutoUpdater

This page describes the AutoUpdater tool in Duplicati

The AutoUpdater is intended to support automatic updating of Duplicati. In the current version, the name is a bit misleading, as it only supports checking for a new version; it does not yet support installing a new version automatically.

The binary is called Duplicati.CommandLine.AutoUpdater.exe on Windows and duplicati-autoupdater on Linux and MacOS.

To use the AutoUpdater, simply invoke it from the commandline:

duplicati-autoupdater check

This will check if there is a newer version available and report the running version number.

It is also possible to download an updated installer package:

duplicati-autoupdater download

Environment variables

By default, Duplicati uses the domains updates.duplicati.com and alt.updates.duplicati.com to find updates. If you are running Duplicati within a controlled environment, you can use the environment variables to change where Duplicati is looking for the updates:

AUTOUPDATER_Duplicati_URLS=https://example.com/stable/latest.manifest

Duplicati will detect the /stable/ part of the url and replace it with the channel the user has chosen.

AUTOUPDATER_Duplicati_CHANNEL=canary
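The channel substitution described above can be illustrated with plain shell string replacement (the URL and channel values are examples):

```shell
url="https://example.com/stable/latest.manifest"
channel="canary"
# Duplicati replaces the /stable/ segment with the chosen channel.
echo "${url/stable/$channel}"
```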

BackendTool

This page describes the backend tool in Duplicati

The BackendTool is intended to provide commandline access to the remote destination. This can be used to create remote folders, locate remote files, and fetch remote files.

The BackendTool is called Duplicati.CommandLine.BackendTool.exe on Windows and duplicati-backend-tool on Linux and MacOS.

The basic usage for the tool is:

There are 5 supported commands: GET, PUT, DELETE, LIST, CREATEFOLDER.

The LIST command will simply list all files found on the remote location and has no side-effects. The CREATEFOLDER command can be used to create folders in preparation for making a backup or moving files.

The download feature checks which package Duplicati is currently installed with, then obtains the most recent URL for that package and downloads it to the current directory. This feature only works if the installed package can be determined and an updated package is available. If not, the download page is reported to the terminal for manual download.

It is also possible to set the channel with an environment variable:

The GET, PUT, and DELETE commands will download, upload, and delete a file, respectively. The filename parameter refers to the remote filename and will be matched to a local filename. It is not possible to have different filenames on the remote and local system with this operation. Note that any change to the remote storage will likely require a recreate of the local database.
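A hypothetical session with the tool could look like this (the destination url and the remote filename are examples):

```shell
# List remote files, fetch one volume file, and re-upload it.
duplicati-backend-tool LIST ftp://user:pass@server/folder
duplicati-backend-tool GET ftp://user:pass@server/folder duplicati-b1.dblock.zip.aes
duplicati-backend-tool PUT ftp://user:pass@server/folder duplicati-b1.dblock.zip.aes
```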

duplicati-backend-tool <command> <remote url> [filename]

Snapshots

This page describes how to use the Duplicati Snapshots tool

The Snapshots tool is intended to test the system snapshot capability, and will invoke the same system calls as Duplicati to set up and tear down a system snapshot.

The Snapshots tool is called Duplicati.CommandLine.Snapshots.exe on Windows and duplicati-snapshots on Linux and MacOS.

To run the tool, invoke it with a folder to use for testing. To work correctly the folder should be on the filesystem/disk/volume/etc that will be part of the snapshot:

duplicati-snapshots <path to test folder>

The tool will do the following:

  1. Create the folder if it does not exist

  2. Place a file named testfile.bin inside the folder

  3. Lock the file

  4. Verify that the file is locked

  5. Create a snapshot containing the folder

  6. Check that the file can be read from the snapshot

  7. Tear down the snapshot

On Windows, this will use VSS to create snapshots, which requires elevated privileges, usually Administrator.

On Linux, this will use LVM and a set of shell scripts to obtain the vgroup and manipulate it. These scripts are located in the source folder lvmscripts and are named:

  • find-volume.sh: Locates the volume that contains the given folder path.

  • create-lvm-snapshot.sh: Creates the LVM snapshot and returns the path to it.

  • remove-lvm-snapshot.sh: Removes a created snapshot

Usually, the operations require elevated privileges, for example root permissions.

On MacOS, snapshots are currently not supported.

SharpAESCrypt

This page describes the SharpAESCrypt commandline encryption tool

The SharpAESCrypt commandline tool uses the provided AES encryption library but exposes it as a commandline tool.

To encrypt a file, use the syntax:

And similarly to decrypt a file:

For decryption, it is possible to use the "optimistic mode", which will leave the decrypted file on disk even if it does not pass validation. This is insecure, because the file contents may have been modified if the integrity checks fail, but in some cases it can help recover lost data:

To enable the compatibility check for regular Duplicati operations, add the environment variable:

The SharpAESCrypt tool is called Duplicati.CommandLine.SharpAESCrypt.exe on Windows and duplicati-aescrypt on Linux and MacOS. The library and commandline tool implement the AESCrypt file format, so the commandline tool is compatible with any other tool using the AESCrypt file format.

If you are encrypting files with a different tool, note that SharpAESCrypt adds an additional check, similar to PKCS#7 padding, which is not part of the AESCrypt specification. This does not change the file format, but makes it harder to inject trailing bytes. However, since other tools do not add this check, decryption will reject such (otherwise valid) files. To decrypt such files, enable the compatibility mode:

duplicati-aescrypt e <password> <plain-text file> <encrypted file>
duplicati-aescrypt d <password> <encrypted file> <plain-text file>
duplicati-aescrypt do <password> <encrypted file> <plain-text file>
duplicati-aescrypt dc <password> <encrypted file> <plain-text file>
AES_IGNORE_PADDING_BYTES=1

SecretTool

This page describes the Duplicati SecretTool

The SecretTool is called Duplicati.CommandLine.SecretTool.exe on Windows and duplicati-secret-tool on Linux and MacOS.

To use the tool, invoke it with a configuration and some secrets to locate:

duplicati-secret-tool test <provider url> <secret>
duplicati-secret-tool info <provider url>

Note that to protect the secrets, the tool will not report the actual values, but just report if it was able to obtain a value from the secret provider.

RecoveryTool

This page describes the Duplicati recovery tool

Duplicati Recovery Tool

This tool performs a recovery of as much data as possible in small steps that must be performed in order. We recommend using duplicati-cli to do the restore, and relying on this tool only if all else fails.

The recovery tool is called Duplicati.CommandLine.RecoveryTool.exe on Windows and duplicati-recovery-tool on Linux and MacOS.

The steps to perform a disaster recovery are:

1: Download: Download files from the remote store and keep them unencrypted on a location available in the local filesystem.

2: Index: Builds an index file to figure out what data is contained inside the downloaded files

3: Restore: Restores the files to a destination you choose

Optionally you can also run:

4: List: Shows what files are available and tests filters

5: Recompress: Ability to change compression type of files on remote backend e.g. from 7z to ZIP

Download

duplicati-recovery-tool download <backend url> <working folder> [options]

Downloads all files matching the Duplicati filenames from the remote storage to the current directory, and decrypts them in the process. The remote url must be one supported by Duplicati. Use duplicati-cli help backends to see backends and options.

Index

duplicati-recovery-tool index <working folder> [options]

Examines all files found in the current folder and produces an index.txt file, which is a list of all block hashes found in the files. The index file can be rather large. It defaults to being stored in the current working directory, but can be specified with --indexfile. Some files are created in the system temporary folder, use --tempdir to set an alternative temporary folder location.

Restore

duplicati-recovery-tool restore <working folder> [version] [options]

Restores all files to their respective destinations. Use --targetpath to choose another folder where the files are restored into. Use the filters, --exclude, to perform a partial restore. Version can be either a number, a filename or a date. If omitted the most recent backup is used.

The restore process requires a fast lookup, which is optimal if all the hashes can be kept in memory. Use the option to --reduce-memory-use=true to toggle a slower low-memory restore. If the process is interrupted for any reason, note the file counter and use --offset=<count> to start the restore after the last restored file.
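For instance, resuming an interrupted low-memory restore might look like this (the working folder and counter value are examples):

```shell
# Resume after the first 1500 restored files, keeping memory use low.
duplicati-recovery-tool restore /mnt/recovery \
  --reduce-memory-use=true \
  --offset=1500
```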

Advanced performance options are:

  • --reduce-memory-use: Disables keeping all hashes in memory; use if memory is limited on the restoring machine

  • --disable-file-verify: Disables the initial hashing of the restored file

  • --disable-wrapped-zip: Disable using the faster .NET native ZIP archive in favor of the more resilient one in Duplicati

  • --max-open-archives: Sets the number of archives to keep open for faster access (uses some memory per archive); default 200

List

duplicati-recovery-tool list <working folder> [version] [options]

Lists contents of backups. Version can be either a number, a filename, or a date. If [version] is omitted, a list of backup versions is shown; if [version] is supplied, files from that version are listed. Use the filters, --exclude, to show a subset of files.

Recompress

duplicati-recovery-tool recompress zip <backend url> <working folder> \
  --reupload --reencrypt [options]
  1. Downloads whole remote storage to the current working folder.

  2. Recompress from existing compression type to the chosen compression format.

  3. If --reencrypt is supplied, the files are reencrypted using the same passphrase (they need to be decrypted for the compression type change)

  4. If --reupload is supplied, files with the old compression are deleted and recompressed files are uploaded back to remote storage (it is recommended to take at least a temporary copy of the remote storage before enabling this switch)

Warning: If --reupload is supplied, it is advisable to also specify --reencrypt; otherwise the files will be uploaded unencrypted!

Supported Options

The backend modules support all their normal options. To see what options a specific backend supports, type:

duplicati-cli help

The environment variables AUTH_USERNAME and AUTH_PASSWORD are supported. The options --parameters-file and --tempdir are supported.

The SecretTool is a small utility that can be used to test the secret provider configuration.

Multiple secrets can be provided and the tool will attempt to resolve each of them. See the secret provider section for details on how to use and configure the secret providers. Commandline help is also available with:

Warning: Before recompressing, delete the local database, and after recompressing, recreate the local database before executing any operation on the backup. This allows Duplicati to read the new file names from remote storage.


ServerUtil

This page describes the Duplicati ServerUtil helper program

The ServerUtil binaries are called Duplicati.CommandLine.ServerUtil.exe on Windows and duplicati-server-util on Linux and MacOS.

Handling login

If the database is encrypted, write protected, or in some other way inaccessible, the caller needs to provide both the url and the password on the commandline.

duplicati-server-util login --password=<password> --hosturl=<hosturl>

To revoke the stored refresh token, run the logout command with the host url:

duplicati-server-util logout --hosturl=<hosturl>

Working with backups

To show the backups currently configured, run the list-backups command:

duplicati-server-util list-backups

Each backup configuration has a name and an ID associated with it. All operations that work on one or more backups will accept either the ID or the name the backup has in the server (case insensitive). Using the ID is preferred, as it is stable across backup renames, but the name may be more convenient.

Once you know the name or ID of a backup configuration, you can schedule the backup:

duplicati-server-util run <backup id or name>

This will put the backup into the running queue and start it as soon as the queue is empty.

With the backup ID or name, it is also possible to export the backup configuration for later use:

duplicati-server-util export <backup id or name> --encryption-passphrase=<passphrase>

You can later import a backup that was previously exported with the command:

duplicati-server-util import <filename> <passphrase>

Note that this will create a new backup with the same configuration, so make sure you have removed the previous backup configuration first.

Pausing and resuming the server

duplicati-server-util pause 5m

This will cause the scheduler to pause and not issue new backups until 5 minutes have passed. If no duration is given, the server will pause until resumed.

To resume the server, run the following command:

duplicati-server-util resume

Changing the Server password

duplicati-server-util change-password <new password>

Issuing a "forever token"

The "issue-forever-token" command was added to Duplicati beta 2.1.0.3 and canary 2.0.102.

All requests to the Duplicati server need to be authenticated with a valid token. Usually the token is obtained by providing the password to the server and receiving a token in the response. In some advanced setups, especially when running Duplicati behind an authenticating proxy server, the Duplicati password is an unwanted "double authentication".

In a setup where there is another layer of authentication, it is possible to issue a token that lasts 10 years, significantly longer than the 15 minutes that regular tokens last. To prevent unintended use of the feature, it requires three steps to configure:

  • Stop Duplicati and start duplicati-server with --webservice-enable-forever-token=true

  • Run the command: duplicati-server-util issue-forever-token

  • Stop Duplicati and start without --webservice-enable-forever-token

The commandline option --webservice-enable-forever-token toggles the ability to issue the token. The API is implemented such that it will only issue a single token per server start.

Once the API is enabled, the server-util can call the API and obtain the single token. If something goes wrong, you can restart the Server and try again.

Once the token is obtained, it is important to remove the --webservice-enable-forever-token again, so regular users cannot issue such a token.

With the token in hand, configure the proxy to attach the header to each request:

Authorization: Bearer <token>

With this header present, all requests to Duplicati will be authenticated. If you need to revoke a forever token, start the server once with --webservice-reset-jwt-config which will immediately invalidate any issued token.
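As an illustration, with nginx acting as the authenticating proxy, the header could be attached like this. This is only a sketch: the location block and the upstream address (port 8200) are assumptions about a typical setup, not part of the Duplicati documentation.

```nginx
location / {
    # Forward all requests to the local Duplicati server (assumed on port 8200)
    proxy_pass http://127.0.0.1:8200;
    # Attach the forever token so Duplicati accepts each proxied request
    proxy_set_header Authorization "Bearer <token>";
}
```

With a fragment like this, your proxy performs the real authentication and Duplicati sees every forwarded request as authenticated by the forever token.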

Agent

This page describes the Agent executable

The Agent binary is called Duplicati.Agent.exe on Windows and duplicati-agent on Linux and MacOS.

Registering the machine

To register the Agent, run the following command:
duplicati-agent run \
  --agent-registration-url="<pre-authorized url>" \
  --agent-register-only

This will cause the Agent to register using the token from the url and the --agent-register-only option will cause it to exit after registration has completed. If the Agent is already registered, it will simply exit.

To remove the registration information, use the command:
duplicati-agent clear

After the settings are cleared, the agent can be registered again.

Configuring the hosted server

The Agent is not intended to be accessible locally and for that reason, it is locked down with a number of settings. If you need to configure the Server, most of the options can be given to the Agent and passed on to the server. This includes --webservice-port and --settings-encryption-key.

The hosted agent server will use port 8210 by default, to avoid clashing with the regular Duplicati instance on port 8200.

Opening the hosted server for local access

To make the hosted server fully accessible from the local machine that it is running on, add the following settings:
duplicati-agent \
  --disable-pre-shared-key \
  --webservice-api-only=false \
  --webservice-password=<password>

The first option, --disable-pre-shared-key, will disable the random key that is required for all requests to the webserver. This key is a random value that is generated on each start and kept only in memory, preventing other applications from making requests to the Duplicati API.

The second option, --webservice-api-only=false, will enable access to the static .html, .css, and .js files that provide the UI.

The last option sets the UI password, which would otherwise be a randomly generated password.

You may also want to re-enable the signin tokens with --webservice-disable-signin-tokens=false.

WindowsService support

The Duplicati.WindowsService.exe installer can also install the Agent as a service:
Duplicati.WindowsService.exe INSTALL-AGENT <options>

Similarly, you can uninstall the Agent service with:
Duplicati.WindowsService.exe UNINSTALL-AGENT

Linux service

On Linux-based installations, the Agent installer will create the service files, which can be used to automatically start and run the Agent:
sudo systemctl enable duplicati-agent.service
sudo systemctl daemon-reload
sudo systemctl start duplicati-agent.service
sudo systemctl status duplicati-agent.service

As is common for other services, additional start parameters can be added to /etc/default/duplicati.
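For example, extra options for the Agent could be appended there. Note that DAEMON_OPTS is an assumption about the variable name read by the packaged unit file; verify against the unit file on your system before relying on it.

```
# /etc/default/duplicati
# Extra options passed to duplicati-agent on service start (variable name assumed)
DAEMON_OPTS="--webservice-port=8210 --webservice-password=<password>"
```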

MacOS support

When installing on MacOS, the packages will register a launchagent that will start the Agent on each login. The assumption here is that the desktop context contains a browser, so the Agent will open the registration url in the default browser.

The ServerUtil executable is a helper program that can interact with a running Duplicati instance. The main use-case for this program is to allow scripted or programmatic interactions with the server, without resorting to loading the web UI.

The ServerUtil is a replacement for a contributed script, duplicati_client, which is no longer maintained. Both approaches work by accessing the Duplicati server API and issuing the same requests as the user interface would otherwise do.

The ServerUtil needs to authenticate with the Server, which requires a connection url and a password. To avoid needing this, the ServerUtil will attempt to read the server database and obtain information from there. If this succeeds, the ServerUtil will automatically configure an authenticated session with the server, without needing additional input.

If the tool is intended to be invoked from a script, it is possible to secure a refresh token by calling the login command described above.

This will cause the ServerUtil to store a refresh token in the settings file, so that future operations do not need the password (but will still need the hosturl). To safeguard the token, it is possible to provide --settings-encryption-key=<key>, which will encrypt the settings file. The secret provider can be used to further secure this key, or to provide the password on the commandline.

This will export the configuration to a local file, encrypted with AESCrypt. If you do not supply a passphrase, the exported configuration will not include the passphrase or storage credentials. Use --export-passwords=true to force exporting the passwords to a plain-text file.

A common use for the ServerUtil is to pause and resume the server, which can be done to avoid running backups during peak hours. To pause the server, invoke the ServerUtil with a duration value as shown above.

As explained in the section on the Server, it is possible to use the ServerUtil to change the password. In the general case, this can be done with access to the server database, but in some cases it requires knowing the previous password. Change the password with the change-password command shown above.

Note that this will not revoke access that is already granted, as such access lives in refresh and access tokens. Restart the Server with --webservice-reset-jwt-config=true as explained in the section above.

The Duplicati Agent is one of the primary ways to run Duplicati, similar to the Server and TrayIcon. The Agent can be deployed in settings where there is no desktop or where user interaction is not desired. The Agent needs to connect to a remote control destination from where it can be controlled, and because of this, the Agent employs a number of additional settings that prevent applications running on the same machine from interacting with the Agent.

A benefit of using the Agent is that it only communicates over TLS encrypted connections and does not require you to manually handle the configuration of certificates for the Server.

When the Agent starts for the first time, it will attempt to register with the Duplicati Console. To do this, it will open a browser window where the user can accept the registration and add the machine to their account. If the Agent needs to be registered without user interaction, a pre-authorized link can be generated on the Duplicati Console registration page.

The Agent settings are stored in a file called agent.json in the same folder where the server database is stored. The file path can be supplied with --agent-settings-file and the file can be encrypted with the setting --agent-settings-file-passphrase.

To protect the settings file passphrase, it is possible to use the secret provider.

Note that since the Agent cannot open a browser from within the service context, it will instead emit the link that is used to claim the Agent in the Windows Event Log. You need to find the link from there and open it in a browser to claim the machine. Alternatively, use the method outlined above to register the machine, but beware that you need to run it in the same context as the service, or the agent.json file will be placed in another folder.

Note that when running the service, the Agent does not have access to the desktop environment (if one even exists) and cannot open the registration url in the browser. Instead, it will emit a url in the system logs that you need to open to register the machine. Alternatively, use the method outlined above to register the machine, but beware that you need to run it in the same context as the service, or the agent.json file will be placed in another folder.

To use a pre-authenticated url, use the pre-authenticated link, and then restart the service to have it pick up the updated agent.json file.


License Agreement

MIT License

Copyright (c) 2024 Duplicati

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Duplicati Inc & Open Source

About Duplicati Inc & its relation to the open source Duplicati

Duplicati Inc. is a US-based for-profit entity incorporated in Delaware in March 2024. Duplicati Inc. helps develop the Duplicati open source client and pays for various infrastructure costs. The Duplicati client is fully open source and free to use with no limitations.

Duplicati Inc is founded on an Open Core model, where the open source client continues to be developed as open source, with additional enterprise-focused tools and services offered as paid features. As an Open Core company, we believe that a strong open source client and a vibrant open source community are our strongest assets. At the same time, the for-profit model enables us to take on larger development and maintenance tasks that would otherwise not be sustainable in a purely volunteer-based project.

OAuth Server

This page describes the OAuth login used by some providers

Many large providers only allow access with OAuth, requiring the user to authorize Duplicati to access resources on their behalf. This generally works by initiating a login request, redirecting the web browser to the login page, and then delivering a secure access token to Duplicati.

For web-based applications, this is a very smooth process, but for a tool such as Duplicati that needs to run, even when there is no browser or UI available, it is not an ideal solution. The workaround developed for Duplicati is to pre-authenticate with a long-lived token from a place where there is a browser available. Once the token is created, it is returned to the user in the form of an AuthID string.

Cloud service

This service is the default as it is the most convenient for most users. To generate a token, simply visit: https://duplicati-oauth-handler.appspot.com

Click the button for your preferred provider, complete the login, and obtain the AuthID, which you can then use on another machine as needed.

If you are using the UI, you can click the AuthID label/link to start the process. Once you complete it, the UI will automatically fill in the ID, no interaction required.

Self-hosted server

After you have set up the server, use the option --oauth-url=<local server url> to configure Duplicati to use another server to authenticate with.

OAuth is an industry standard that allows applications to securely grant third-party access on behalf of legitimate users without exposing details such as real names or passwords.

This AuthID can then be used by Duplicati to access resources on the user's behalf, acting as a kind of API key. Further details on how the OAuth server works are described in the forum post on OAuth.

Duplicati has a hosted service that can be used to get access to a variety of different storage providers. It is hosted on Google App Engine and the source code is available.

If you want to remove access, you can revoke a specific AuthID at the same place where you created it, using the revoke AuthID feature. You can also go to the provider, say Dropbox or OneDrive, and remove the authorization for Duplicati, which will immediately revoke all tokens issued for your account.

If you prefer to manage the full cycle and not send tokens through a provider not under your control, you can use the self-hosted open-source OAuth Server. The server is Docker enabled and is also available as a pre-built Docker image.

Refer to the Github documentation for how to configure it. Before you can use the server, you need to obtain a Client ID and Client Secret for the provider you want to support. Refer to the default providers file to see the links to each service, or consult your service provider for details on how to obtain these values.


SUPPORT

Welcome to Duplicati's support community! As an open-source project, we believe in the power of community collaboration. Users can find help by raising issues on our GitHub repository or by joining the discussions at forum.duplicati.com, where years of shared knowledge from both users and developers create an invaluable resource for troubleshooting and best practices.

For our corporate customers, we offer dedicated support through our integrated support system on duplicati.com. If you have other inquiries, please don't hesitate to reach out to hello@duplicati.com - we're here to help you protect your valuable data. Your success with Duplicati matters to us, and we're committed to providing the support you need.


Release channels and versions

This page describes the different channels and how releases are used.

When using any software, it is important to use an updated version, but each update also carries a risk of containing a bug or change that requires intervention on the machine. To balance these two parameters, Duplicati uses channels to push updates at different speeds. Builds start out as canary builds and once stability is achieved they move up through the channels unless a breaking issue is discovered.

Installations work the same on any channel so you may choose to uninstall one version and install another. By default, the built-in update checker will use the channel of the package you installed to check for new versions.

Stable channel

The stable channel is the slowest moving channel. Builds in this channel are considered well tested and robust. This channel is recommended for most users.

Beta channel

The beta channel is generally used as a staging ground before moving to a stable release. Releases in this category are more frequent than in the stable channel, but it is usually still a slowly moving channel. This channel is recommended for users who want to stay on top of new developments. For larger installations, it may make sense to have a few machines on the beta channel to discover changes before they affect the entire setup.

Experimental channel

Releases in the experimental channel usually contain a new experimental setting or algorithm that is not yet battle proven across a large set of systems. These releases are generally considered safe for general use, but may contain features that will be removed again or that do not work in all environments.

Canary channel

The canary builds are regular builds extracted from the latest development. The releases in this channel can have bugs and are generally not recommended for production use. These builds are usually the first time the changes are tested on machines that are not managed by developers. They are mostly recommended for users who want to follow development closely, give feedback on the direction of development, and influence feature development.

Upgrading and downgrading

This page describes how to downgrade Duplicati from a newer version to an older version

Upgrading

Installing new versions of Duplicati is part of the test process so any upgrade is intended to keep things working the same as before. In some cases the updates will start to give a warning on backups that were previously running without a warning. These warnings will describe what has changed and explain what to do to remove the warning.

Such warnings generally relate to a feature that will be removed or renamed but has not yet been removed. The warnings give you a heads-up to avoid issues in the future and are generally simple to address by editing a backup.

In rare cases a feature can no longer be supported, such as when a storage provider stops offering a service. For these, the feature will be removed and this will be mentioned in the release notes.

Downgrading

Downgrades are usually not supported automatically: because the old version was created before the current version, the code inside the old version cannot know what was changed. To avoid data loss, this process is controlled by version numbers inside the database.

Each update to the data will increment the version number of the database such that when an older version is running it will detect a higher number than it knows and stop there.

When a version upgrades the database, it will create a backup of the current database before upgrading. You can look for the database and backups in:

  • ~/.config/Duplicati on Linux

  • ~/Library/Application Support/Duplicati on MacOS

  • %LOCALAPPDATA%\Duplicati on Windows.

If you have been using the new version, you may have changes in the current database that would be lost by restoring the pre-upgrade database. In that case, you can ask on the forum for advice on how to downgrade.

Downgrade from 2.1.0.2 to 2.0.8.1

This page describes how to downgrade from Duplicati 2.1.0.2 to 2.0.8.1

To downgrade from 2.1.0.2 to an earlier version, note that the two are built on different core technologies (.NET8 vs .NET4/Mono). If you have previously been able to run 2.0.8.1, you should be able to downgrade by installing the previous version as before.

Before you downgrade, you should make sure you have removed database encryption. You can do this by stopping all running instances, and then running the Server or TrayIcon with:

duplicati-server --disable-db-encryption=true

This will remove the field-level encryption in the server database. After starting with this parameter, stop the server, uninstall 2.1.0.2 and install 2.0.8.1.

Since both the server database and the local database were updated, you need to downgrade both. Note that there is one local database for each backup you have configured, and all of those may need to be downgraded.

To downgrade the server database, use an SQLite tool, such as SQLite Browser. Open the database and run the following query:

DROP TABLE "TokenFamily";
UPDATE "Version" SET "Version" = 6;

This will downgrade the server database to version 6, and allow it to properly upgrade later if needed.

For each of the local databases, run the following:

DROP INDEX "UniqueBlockVolumeDuplicateBlock";
UPDATE "Version" SET "Version" = 12;

This will downgrade the database to version 12, and allow it to upgrade later if needed.

Close the SQLite editor, and then start Duplicati 2.0.8.1.

Obtaining older releases

The installer packages for 2.0.8.1 are available on Github. You can browse the list of releases for other versions you may want.
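If you prefer the commandline over a GUI tool, the same queries can be run with the sqlite3 CLI. This is a sketch assuming the default Linux database location (~/.config/Duplicati) and the default server database name Duplicati-server.sqlite; adjust the paths for your system before running it.

```shell
# Assumed default location of the server database on Linux; adjust as needed.
SERVER_DB="$HOME/.config/Duplicati/Duplicati-server.sqlite"

# Downgrade the server database to version 6 (the schema 2.0.8.1 expects)
if [ -f "$SERVER_DB" ]; then
  sqlite3 "$SERVER_DB" 'DROP TABLE "TokenFamily"; UPDATE "Version" SET "Version" = 6;'
fi

# Downgrade each local backup database to version 12
for db in "$HOME"/.config/Duplicati/*.sqlite; do
  [ -f "$db" ] || continue
  [ "$db" = "$SERVER_DB" ] && continue
  sqlite3 "$db" 'DROP INDEX "UniqueBlockVolumeDuplicateBlock"; UPDATE "Version" SET "Version" = 12;'
done
```

As with the GUI approach, make a copy of the databases first; the DROP statements cannot be undone.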