This page describes how to get a backup working again after a failure on the remote storage.
Welcome to the Duplicati Documentation! This site contains documentation for using the open source Duplicati client, including best practices, pro tips, and troubleshooting.
If you cannot find an answer on this site, you can always ask a question on our helpful forum 🤗.
Installation
Install the Duplicati client
Set up a backup
Configure your first backup
Configuring a destination
Show all destinations
This page describes the different ways to run Duplicati
When using Duplicati, you need to decide what type of instance you want to use. Duplicati is designed to be flexible and work with many different setups, but generally you can use this overview to decide what is best for you:
The TrayIcon is meant to be the simplest way to run Duplicati with the minimal amount of effort required. The TrayIcon starts as a single process, registers with the machine desktop environment and shows a small icon in the system status bar (usually to the right, either top or bottom of the screen).
When running, the TrayIcon gives a visual indication of the current status, and provides access to the visual user interface by opening a browser window.
The Server mode is intended for users who want to run the full Duplicati with a user interface, but without a desktop connection. When running the Server it is usually running as a system service so it has elevated privileges and is started automatically with the system.
When running the Server, it will emit log messages to the system log and expose a web server that can be accessed via a browser. Beware that if you are running the Server as root/Administrator, you are also running a web server with those elevated privileges, which you need to protect.
When the Server is running it will lock down access to only listen on the loopback adapter and refuse connections not using an IP address as the hostname. If you need to access the Server from another machine, make sure you protect it and enable remote access and also add HTTPS protection.
When running the Server you also need to configure a password, either by getting a signing token from the logs, changing the password, or setting one explicitly.
The Agent mode is intended for users who want to run Duplicati with remote access through the Duplicati Console. The benefit of this is that you do not need to provide any local access, as all access is protected with HTTPS and additional channel encryption from the Agent to the browser you are using.
If you have multiple machines to manage, using the console enables you to access all the backups, settings, logs, controls, etc. from one place.
The CLI mode is intended for advanced users who prefer to manage and configure each of the backups manually. The typical use for this is a server-like setup where the backups are running as cron scheduled tasks or triggered with some external tool.
For some additional flexibility in configurations it is also possible to combine the different types in some ways.
If the Server is used primarily to elevate privileges, it is possible to have the TrayIcon run in the local user desktop and connect to an already running Server. To do this, change the TrayIcon commandline and add additional arguments:
The --no-hosted-server argument disables launching another (competing) server, and the two other arguments tell the TrayIcon how to reach the running server.
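A sketch of such a commandline, assuming the Server listens on the default port 8200; the names of the two extra arguments shown here are assumptions, so verify them against the help output of your version:

```shell
# Connect the TrayIcon to an already running Server instead of hosting its own
# (--hosturl and --webservice-password names are assumed; --no-hosted-server is from the text)
duplicati --no-hosted-server \
  --hosturl=http://localhost:8200 \
  --webservice-password=<server-password>
```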
If you prefer to use the Server (or TrayIcon) but would like to trigger the backups with an external scheduler or event system, you can use the ServerUtil to trigger a backup or pause/resume the server.
If you are using the Server (or TrayIcon) but you want to run a command that is not in the UI, it is possible to use the CLI to run commands on the backups defined in the Server. Note that the Server and CLI use different ways of keeping track of the local database, so you need to obtain the storage destination url and the database path from the Server and then run the CLI.
This page describes how to use the secret provider.
The secret provider was introduced in Duplicati version 2.0.9.109 and aims to reduce the possibility of leaking passwords from Duplicati by not storing the passwords inside Duplicati.
To start using a secret provider you need to set only a single option:
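For example, using the environment-variable based provider described later (any provider url can be used in its place):

```shell
# The single option that enables a secret provider; the url selects the provider type
duplicati-server --secret-provider=env://
```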
This will make the secret provider available for the remainder of the application.
You can then insert placeholder values where you want secrets to appear but without storing the actual secret in Duplicati. For commandline users, the secrets can appear in both the backend destination or in the options.
As an example:
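A hypothetical commandline illustrating the idea; the destination url, source path, and key names are made up for this example:

```shell
# Three placeholder keys, each prefixed with $:
#   $ftp-password, $backup-passphrase, $mail-password
# Single quotes keep the shell from expanding the placeholders itself.
duplicati-cli backup \
  'ftps://example.com/backup?auth-username=user&auth-password=$ftp-password' \
  /data \
  --passphrase='$backup-passphrase' \
  --send-mail-password='$mail-password' \
  --secret-provider=env://
```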
The secret provider will find the three keys prefixed with $ and look them up. The provider is invoked to obtain the real values, and the placeholders are replaced before running the operation. If the secret provider has these values:
The example from above will then be updated internally, but without having the keys written on disk:
To ensure you never run with an empty string or a placeholder instead of the real value, all requested values must be present in the secret provider, or the operation will fail with a message indicating which key was not found.
This page describes how to install Duplicati on the various supported platforms
For desktop and laptop users, the most common application type is called the "GUI" package, which is short for Graphical User Interface. The GUI package includes the core components, a webserver to show the user interface and a tray icon (also called a status bar icon).
For users installing in environments without a desktop or screen, there are also commandline only, remote management and Docker versions. Depending on your setup, you may also want to use one of those packages on a desktop or laptop.
This page covers only the GUI installation.
Jump to the section that is relevant to you:
The most common installation format on Windows is the MSI package. To install on Windows you need to know what kind of processor is in your system. If you are unsure, you are most likely using a 64-bit processor, also known as x64. There is also a version supporting Arm64 processors, and a version for legacy 32-bit Windows called x86.
Simply head over to the Duplicati download page and download the relevant MSI package. Once downloaded, double-click the installer. The installation dialog lets you adjust settings to your liking and will install Duplicati. The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.
For MacOS the common installation method is a DMG file containing the application. Most modern MacOS machines use Apple Silicon, which is called Arm64 in Duplicati's packages. If you are on an older Mac with a 64-bit Intel processor, you can use the x64 package instead.
Once you know which kind of Mac you have, head over to the Duplicati download page and download the relevant DMG file. Open the file and drag Duplicati into the Application folder, and then you can start Duplicati.
The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.
Most Linux distributions work well with Duplicati, but there are only packages for Debian-based distributions (Ubuntu, Mint, etc.) and for RedHat-based distributions (Fedora, SUSE, etc.). For other distributions you may need to manually install some dependencies.
For Linux distributions there are packages for the most common 64-bit systems with x64, support for Arm64, and the predecessor Arm7 (aka ArmHF), which is commonly found in NAS boxes and the older Raspberry Pi series.
To install Duplicati on a Debian based system, first download the .deb package matching the system architecture, then run:
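For example, assuming the downloaded file is in the current directory (the exact filename varies with version and architecture):

```shell
# apt resolves and installs the package's dependencies along with it
sudo apt install ./duplicati-*.deb
```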
This will install all dependencies and place Duplicati in the default location on the target system. The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.
To install Duplicati on a RedHat-based system, first download the .rpm package matching the system architecture, then run:
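For example, assuming the downloaded file is in the current directory (older systems may use yum instead of dnf):

```shell
# dnf resolves and installs the package's dependencies along with it
sudo dnf install ./duplicati-*.rpm
```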
This will install all dependencies and place Duplicati in the default location on the target system. The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.
For other Linux distributions you can use the .zip file that matches your system architecture. Inside the zip files are all the binaries that are needed, and you can simply place them in a folder that works for your system. Generally, all dependencies are included in the packages, so unless you are using a very slimmed down setup, it should work without additional packages.
The first time Duplicati starts up, it will open the user interface in your browser. At this point you are ready to set up a backup.
This page describes how to run a backup outside of an automatic schedule
With a configured backup, you can have a schedule that runs the backup automatically each day. Having the backup run automatically is recommended because it ensures the backups are recent when they are needed.
Even if the backup already has a schedule there may be times where you want to manually run a backup. If you have just configured a backup, you may want to run it ahead of the scheduled next run. If you are within the UI you can click the "Run now" link for the backup.
Once the backup is running, the top area will act as a progress bar that shows how the backup progresses. Note that the first run of a backup is the slowest run because it needs to process every file and folder that is part of the source. On later runs it will recognize what parts have changed and only process the new and changed data.
If you need to automate starting a backup without using the UI, you can use ServerUtil to trigger backups from the commandline.
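A sketch of triggering a backup this way; whether the command accepts the backup's name or its numeric id may depend on your version, so check the ServerUtil help output:

```shell
# Trigger the backup named "Documents" on the running server
duplicati-server-util run "Documents"
```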
After running a backup, the view will change slightly and show some information about the backup.
This page describes how to restore files using the Duplicati user interface
The most important reason to make a backup is the ability to recover the data at a later stage, usually due to some unforeseen incident. Depending on the incident, the original configuration may not be available.
To start a restore process in Duplicati, start on the "Restore" page.
If the backup configuration already exists on the machine, you can choose it from the list below the two options for not having a configuration. In this case you can click "Next" and skip to the section on choosing the files to restore.
The restore and browsing process are fastest when using a configured backup, because Duplicati can query a local database with information. If the local database is not present, Duplicati needs to fetch enough information from the remote storage to build a partial database when performing the restore.
If you have exported the backup configuration and have the file available, click "Next" and skip to the restore from configuration section. You can also read up on how to import and export configurations.
To restore files from the backup, Duplicati needs only to know how to access the files and the encryption passphrase (if any). If you do not have the passphrase, it is not possible to restore.
To restore directly from the backup files, the first step is to provide the destination details. These details are the same as you supplied initially when creating the backup. If you are using a cloud provider, you can usually get the needed information via your account on the vendor's website.
Once the details are entered, it is recommended to use the "Test connection" button to ensure that the connection is working correctly. Then click the "Next" button.
If the backup is not encrypted, leave the field empty. When ready, click "Connect" and Duplicati will examine the remote destination and figure out what backups are present. After working through the information, you can choose files to restore.
If you have a configuration file you can use the information in that file to avoid entering it manually. If you need to restore more than once, it may be faster to import the configuration and rebuild the local database. After the database is built, you can choose the configuration from the list and skip to choosing files to restore.
In the dialog, provide the exported configuration file and the configuration file's encryption passphrase. Note that the passphrase the configuration file is encrypted with is not necessarily the same as the passphrase used to encrypt the backup.
Once the configuration is correct, click the "Import" button and you are given the option to correct the settings before starting the restore process. If you do not need to change anything, click "Next" and then "Connect".
Once Duplicati has a connection to the remote destination it will find all the backups that were made. It will then choose the most recent version and list the files from within that version. Use the "Restore from" dropdown to select the version to restore from, and use the search field to highlight files matching the expression. Click the "Search" button to list only files matching the criteria.
Check the files or folders that you want to restore and then click "Continue".
When restoring there are a few options that control how the files are restored.
If you want to restore a file to a previous state, you can leave the settings to their defaults. If you are unsure if you want to revert, or need to examine the files before replacing the current versions, you can choose to restore to another destination. If the folder you are restoring to is not empty, you can choose to store multiple versions of the files by appending the restore timestamp to the filename. This is especially useful if you are restoring multiple versions into a target folder for comparison.
Duplicati will not restore permissions by default because the users and groups that were present on the machine that made the backup may not be present on the machine being restored to. Restoring the permissions can cause you to be unable to access the restored files, if your user does not have the necessary permissions.
When satisfied with the settings, click the "Restore" button and the restore process will restore the files.
This page lists the cloud providers supported as secret providers
Setting up and using either of the vaults described here is outside the scope of this document.
To connect to the vault, provide the url as part of the configuration:
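A sketch of the configuration, using the localhost address from the example below, with a made-up token and vault names:

```shell
# token authenticates against the vault; secrets lists the vaults to read from
--secret-provider="hcv://localhost:8200?token=hvs.EXAMPLETOKEN&secrets=vault1,vault2"
```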
The url is converted to the url used to connect to the vault (e.g., https://localhost:8200 in this example). The token is used to authenticate, and the secrets parameter names the vaults that secrets are read from.
In the cloud-based offering, the "secrets" values shown here are referred to as Apps, and in the CLI as "mount points". When more than one value is supplied, the vaults are tried in order, and the search stops once all secrets are resolved. This means that if the same secret key is found in two vaults, the value from the first vault examined is used.
hcv://
For development purposes, the url can use an http connection by setting &connection-type=http, but this should not be used in production.
To connect using a credential pair instead of the token, the credentials can be provided with the values client-id and client-secret, but they should be passed via the environment variables:
By default, the key lookup is case-insensitive, but it can be made case-sensitive with the option &case-sensitive=true.
The secrets values name the vaults to use (called "Secret Name" in the AWS Console). When more than one value is supplied, the vaults are tried in order, and the search stops once all secrets are resolved. This means that if the same secret key is found in two vaults, the value from the first vault examined is used.
Instead of supplying the region, the entire service endpoint url can be provided via &service-url=.
By default, the key lookup is case-insensitive, but it can be made case-sensitive with the option &case-sensitive=true.
gcsm://
By default, the secrets are accessed with the version set to latest, but this can be changed with &version=. Finally, the communication protocol can be changed from gRPC to https by adding &api-type=Rest.
Instead of supplying the name of the keyvault, the full vault url can be supplied with &vault-uri=.
Instead of relying on the automated login handling, it is possible to authenticate with either a client credential or a username/password pair.
For authenticating with client credentials, use:
And for username/password, use:
This page describes the providers that operate locally on the machine they are running on.
The simplest provider is the env:// provider, which simply extracts environment variables and replaces the placeholders with their values. There is no configuration needed for this provider, and the syntax for adding it is simply:
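Based on the description above, enabling it is a single option with no further settings:

```shell
--secret-provider=env://
```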
The file-secret:// provider supports reading secrets from a file containing a JSON encoded dictionary of key/value pairs. As an example, a file could look like:
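A minimal sketch of such a file, created here with a heredoc; the key names are illustrative:

```shell
# Write a JSON dictionary of key/value secret pairs
cat > ./secrets.json <<'EOF'
{
  "ftp-password": "example-ftp-secret",
  "backup-passphrase": "example-encryption-secret"
}
EOF
# Restrict access, since the secrets are stored in plain text
chmod 600 ./secrets.json
```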
libsecret (Linux)
pass secret provider (Linux)
For more advanced uses, the options account and service can be used to narrow down what secrets can be extracted.
This page describes how to configure Duplicati to connect to the Duplicati Console and manage the backups from within the console.
In a default installation, Duplicati will serve up a UI using an internal webserver. This setup works well for workstations and laptops but can be challenging when the machine is not always connected to a display. To securely connect the instance to the Duplicati Console, go to the settings page and find the "Remote access control" section.
Click the button "Register for remote control" to start the registration process. After a short wait, the machine will obtain a registration link:
Click the registration link to open a browser and claim the machine in the Duplicati Console:
Click "Register machine" to add it to your account, then return to the Duplicati Settings page where the machine is now registered and ready to connect:
Click the "Enable remote control" button and see the machine is now connected to the Duplicati Console:
You can now click "Connect" to access the machine directly from the portal!
Describes how to configure a backup in Duplicati
In the UI, start by clicking "Add backup", and choose the option "Configure a new backup":
To set up a new backup there are some details that are required, and these are divided into 5 steps:
For the basic configuration, you need to provide a name and setup encryption:
The name and description fields can be any text you like; they are only used to display the backup configuration in lists so you can tell your backups apart if you have several.
The encryption setup allows you to choose an encryption method and a passphrase. Encryption adds a minor overhead to the processing, but is generally a good idea. If you opt out of encryption, make sure you control the storage destination and have adequate protections in place.
To avoid weak passphrases, Duplicati has a built-in passphrase generator as well as a passphrase strength measurer. Be sure to store the chosen or generated passphrase in a safe location as it is not possible to recover anything if this passphrase is lost!
The storage destination is arguably the most technical step because it is where you specify how to connect to the storage provider you want to hold your information. Some destinations require only a single setting, where others require multiple.
When the details are entered, it is recommended that you use the "Test" button, which will perform some connection tests that help reveal any issues with the entered information.
When the destination is configured as desired, click the "Next" button.
In the third step you need to define what data should be backed up. This part depends on your use. If you are a home user, you may want to back up images and documents. An IT professional may want to back up databases.
In the source picker view you can choose the files and folders you would like to back up. If you pick a folder, all subfolders and files in that folder will be included. You can use the UI to uncheck some items that you want to exclude, and they will show up with a red X.
Once you are satisfied with the source view, click the "Next" button to continue to the schedule step.
Having an outdated backup is rarely an ideal solution, but remembering to run backups is also tedious and easy to forget. To ensure you have up-to-date backups, there is a built-in scheduler in Duplicati that you can enable to have Duplicati run automatically.
Once satisfied with the schedule, click "Next".
Even though Duplicati has deduplication and compression to reduce the stored data, some stored data will inevitably take up space without being needed for restore. In this final configuration step you can decide when old versions are removed and what size of files to store on the destination.
As new and changed data is added, the backups inevitably grow. If nothing is ever deleted, the backup will keep growing in size. With the retention settings you can choose how to automatically remove older versions.
The setting "Smart backup retention" suits most users: it keeps one daily backup, then gradually fewer versions going back in time.
Once you are satisfied with the settings, click the "Save" button.
You have now configured your backup! 🎉
This page describes how authentication works in Duplicati and how to regain access if the password is lost or unknown.
If you are starting Duplicati for the first time, it will ask you to pick a password. Picking a strong password is important to prevent unwanted access to Duplicati from other processes on the system. By default, Duplicati chooses a strong random password, and it is recommended for most users not to change it. It is not possible to extract the current password in any way, and it is not possible to disable the password.
This mechanism works for most default installations and is secure as long as the desktop is not compromised. This signin process is the reason that the default random password is preferred, because it is not possible to leak the password.
The downside is that if you bookmark the Duplicati page, you may be asked for a password you do not know when accessing the page later. In this case, re-launching from the TrayIcon will log you in again.
If you prefer, it is possible to choose the password so you can enter it when asked. Optionally, you can also choose to disable the feature that allows the TrayIcon to sign in without a password, through the settings page.
Login with the TrayIcon is shown here for MacOS, but the same works on Linux and Windows:
Note that the regular output from journalctl is capped in width, so you cannot see the whole token. Pipe the output to a file or another program as shown above to get the full output.
Once you have obtained the link, simply click it or paste it into a browser. Note that the sign-in token has a short lifetime to prevent it from being used to gain unauthorized access by someone who obtains the logs. If the link has expired, simply restart the service or application and a new link will be generated.
After a password has been set, the link will no longer be generated.
This works by reading the same database the server is using, extracting the keys used to sign sign-in tokens, and then creating a sign-in token. This token works the same way as the TrayIcon's signin feature. Note that the password itself cannot be extracted from the database; it can only be verified.
After obtaining a sign-in token, ServerUtil can then change the password in the running instance.
This only works if:
The database is readable from the process running ServerUtil
The database field encryption password is available to the process running ServerUtil
If these constraints are satisfied, it is possible to reset the server password by running:
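A sketch of the invocation; the exact verb and whether the new password is passed as an argument or prompted for should be verified against the ServerUtil help output:

```shell
# Reset the server password via the local database (verb name assumed)
duplicati-server-util change-password
```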
If ServerUtil is launched in a similar environment (i.e., same user, same environment variables) this would allow access in most cases. There are a number of commandline options that can be used to guide ServerUtil in case the environments are not entirely the same.
If you need to change the password for a Windows Service instance running in the service context, you can use a command such as this:
Similarly, if the service is running as root on Linux:
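A sketch for the root-service case; the verb name and data folder path are assumptions to adjust for your installation:

```shell
# Run ServerUtil as root so it can read the service's database;
# --server-datafolder points at the service user's data folder
sudo duplicati-server-util change-password \
  --server-datafolder=/root/.config/Duplicati
```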
Since commandline arguments and environment variables can be viewed through various system tools, it is recommended that the option is not set on every launch. A preferred way to set the password would be to stop all running instances, start once with the new password from a commandline terminal, shut down, and then start again normally.
It is possible to disable the use of sign-in tokens completely, which can increase security further. This is done by passing the option:
For cloud-based providers, there is generally a need to pass some kind of credentials to access the storage, as well as the possibility of a provider being unavailable for a short period. To address these two issues, see and .
The implementation for HashiCorp Vault supports both the cloud-based offering and the self-hosted version as sources.
The provider for AWS Secrets Manager supports the AWS hosted vault. The credentials for the vault are the regular Access Key Id and Access Key Secret. While these can be provided via the secret provider url as access-id and access-key, they should be passed via the environment variables:
The secret provider for Google Cloud Secret Manager relies on the to handle the authentication. Follow with Google. After the authentication is complete, the configuration is:
If you need to integrate with a different flow you can also , but notice that the token may be short-lived and you cannot change the token after configuring the secret provider:
With Azure Key Vault as the provider, there are several options for authenticating, where the most secure method is to use the that handles all the details. Since this method is the default, the secret provider can be configured as:
The file provider also supports files encrypted with , and you supply the decryption key with the option passphrase. Suppose the file is encrypted with the key mypassword; you can then configure the provider:
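A sketch using the passphrase option and the key from the text; the path is a placeholder:

```shell
# passphrase decrypts the secrets file before the key/value pairs are read
--secret-provider="file-secret://path/to/secrets.json?passphrase=mypassword"
```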
To avoid passing the encryption key via a commandline, see .
On Windows XP and later, the Windows Credential Manager can be used to securely store secrets. As the credentials are protected by the account login, there is no configuration needed, so the setup is simply:
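Assuming the provider scheme is named wincred://, the setup would be:

```shell
--secret-provider=wincred://
```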
The libsecret service stores various credentials on Linux and integrates with various UI applications to let the user approve or reject attempts to read secrets. The libsecret provider supports a single optional setting, collection, which indicates what collection to read from. If not supplied, the default collection is used. To use the libsecret provider, use this argument:
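A sketch with the optional collection setting ("login" is an example value); omit the parameter to use the default collection:

```shell
--secret-provider="libsecret://?collection=login"
```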
The pass utility is a project that implements a secure password storage solution on Linux systems, backed by GPG. Duplicati can use pass as the secret provider:
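Assuming the provider scheme is named pass://, the configuration would be:

```shell
--secret-provider=pass://
```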
For MacOS users, the standard password storage is the KeyChain program. The secrets stored here as application passwords can be used by Duplicati. The KeyChain can be enabled as a secret provider with:
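Assuming the provider scheme is named keychain://, the configuration would be:

```shell
--secret-provider=keychain://
```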
Now that the machine is connected to the Duplicati Console, return to the :
Once Duplicati is running, you can set up a backup through the UI. If the UI is not showing, you can use the TrayIcon in your system menu bar and choose "Open". If you are asked for a password before logging in to the UI, see .
If you have an existing backup configuration you want to load in, see the .
(descriptive name, passphrase)
(where to store the backups)
(what data should be backed up)
(automatically run backups)
(when to delete old backups and more)
Due to the number of supported backends, this page does not contain the instructions. Instead, each of the supported destinations is described in detail on the .
For more advanced uses, you can also use the filters to set up rules for what to include and exclude. See the section on if you have advanced needs.
If you prefer to run the backups manually, disable the scheduler, and you can use ServerUtil to trigger the backups as needed.
The default size of remote volumes is chosen as a balance suitable for cloud storage and a limited network connection. If you have a fast connection or store files on a local network, consider increasing the size of the remote volumes. For more information see .
The TrayIcon process will usually host the server that presents the UI. Since the two parts are within the same process, they can communicate securely, and this setup enables the TrayIcon to negotiate a short-term signin token with the server, even though it does not know the password.
When Duplicati starts up with the randomly generated password, it will attempt to emit a temporary sign-in url. If you run either the Server or the TrayIcon in a terminal, most systems will show the link there.
If you are running Duplicati as a service with no console attached, the link will end up in the system logs. On Windows you can use the utility to find the message with a sign-in url. For Linux you can view the system logs, usually:
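A sketch of inspecting the journal on a systemd-based distribution; the unit name is an assumption to adjust for your installation:

```shell
# Redirect to a file so journalctl's width cap does not truncate the sign-in url
sudo journalctl -u duplicati --no-pager | grep -i signin > /tmp/duplicati-signin.txt
```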
For MacOS you can use the .
If you are not using the TrayIcon, or you have disabled the signin feature but lost the password somehow, you can change the password with ServerUtil in some cases.
For Linux users, you can usually use su or sudo to enter the correct user context, but some additional environment variables may be needed. The default location for the database is described in the , and a different location can be provided with --server-datafolder.
If the other options are not available, it is possible to restart the process and supply the commandline option:
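Based on the option named in the surrounding text, a restart with a new password could look like this:

```shell
# Stop any running instance first, then start once with the new password;
# later starts can omit the option
duplicati-server --webservice-password="my-new-password"
```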
This will write a hashed () version of the new password to the database and use it going forward. This process requires restarting the server, but the password is persisted in the database, so it is only required to start the server once with the --webservice-password option; future starts can be done without the password.
The option can also be supplied to the TrayIcon and Agent processes, which will pass it on to their internal instance of the Server.
This will make the server reject any sign-in tokens and prevent access from the TrayIcon and ServerUtil without explicitly passing the password. With this option, creating a new token requires write access to the database, but it also requires handling the password in a safe manner in all instances where it is needed.
This option can also be supplied to the process and is default enabled by the .
If the secret provider is configured for the entry application (e.g., the TrayIcon, Server or Agent) it will naturally work for that application, but will also be shared within that process.
For the Agent, this means that setting the secret provider for the agent will also let the server it hosts use the same secret provider. When a backup or other operation is then executed by the server, it will also have access to the same secret provider.
This sharing simplifies the setup by only having a single secret provider configuration and then letting each of the other parts access secrets without further configuration. If needed, the secret providers can be specified for the individual backups, such that it is possible to opt-out of using the shared secret provider.
To make passing arguments to the application a bit harder to observe, the value for --secret-provider
is treated as an environment variable name if:
It starts with $, optionally with curly brackets {}: $secretprovider or ${secretprovider}
It starts and ends with %: %secretprovider%
No expansion is done on environment variables, so the entire provider string is required to be set as an environment variable.
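As a sketch of the rule described above (this mirrors the documented behavior in shell, it is not Duplicati's actual code):

```shell
# Values starting with $ (optionally ${...}) or wrapped in %...% name an
# environment variable; anything else is used literally as the provider string.
classify_provider_value() {
  case "$1" in
    \$*|%*%) echo "environment variable" ;;
    *)       echo "literal provider string" ;;
  esac
}

classify_provider_value '$secretprovider'      # environment variable
classify_provider_value '${secretprovider}'    # environment variable
classify_provider_value '%secretprovider%'     # environment variable
classify_provider_value 'file://secrets.json'  # literal provider string
```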
If you run an operation and the secret provider is unavailable when the secrets are requested, the operation will fail. For most uses, outages are so rare that this situation is acceptable.
However, for some uses it is important that backups keep running, even in the face of outages. To handle this need, Duplicati supports an optional cache strategy:
Storing the secrets anywhere makes it more likely that they are eventually leaked. For that reason, the default is the cache setting None
, which turns caching off entirely and relies only on the provider.
The InMemory
setting is the least intrusive option, as it only stores the secrets in process memory. This option is most useful with a shared provider, so the secrets stay in memory between runs.
Finally, the Persistent
option will write secrets to disk, so it can handle situations where the provider is unavailable during startup, or where a shared provider does not work.
As the purpose of the secret provider is to prevent the secrets from being written to disk, the cached secrets are encrypted with a passphrase derived from the secret provider url. If the secret provider url does not already contain a strong secret, it is possible to add any parameter to the url to increase the strength of the key.
If the secret provider url changes, it is no longer possible to retrieve the cached values; the next run will fail if the provider is unavailable, but will otherwise write a new encrypted cache file to disk.
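A sketch of enabling the cache (the option name --secret-provider-cache is an assumption here; the values are the ones described above):

```
duplicati-server --secret-provider='<your-provider-url>' --secret-provider-cache=Persistent
```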
This page describes how to best migrate a Duplicati instance to a new machine
If you have moved to a new machine and want to restore files to the new machine, you can follow the steps outlined in Restoring files. If instead, you have already moved files to the new machine and would like to set up the new machine to continue backups made on the previous machine, there are a few ways to do this.
Note: it is possible to restore files across operating systems, but due to path differences it is not possible to continue a backup made on Windows on a Linux/MacOS based operating system and vice versa.
Note: do not attempt to run backups from two different machines to the same destination. Before migrating, make sure the previous machine is no longer running backups automatically. If both machines run backups, one instance will detect that the remote destination has been modified and will refuse to continue until the local database has been rebuilt.
If you have access to backup configurations, jump to the section for moving with backup configurations. And if you have no configurations, jump to the manual setup section.
If the previous machine is still accessible, you can copy over the contents of the Duplicati
folder containing the configuration database Duplicati-server.sqlite
and the other support databases. This approach is by far the fastest, as Duplicati retains all the information locally and does not need to reconcile with the remote storage.
Make sure to stop Duplicati before moving the folder into the same location on the new machine. After moving the folder, you can start Duplicati again and everything will work as before. If it has been a while since the previous instance was running, startup may trigger the scheduled backups. Use the option --startup-delay=5min
to start Duplicati paused for 5 minutes if you want to check things before backups start running.
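The whole move could look like this (the per-user data folder is the default Linux location mentioned later in this documentation; the rsync transfer is just one way to copy):

```
# On the old machine: stop Duplicati first, then copy the configuration folder
rsync -a ~/.config/Duplicati/ newmachine:~/.config/Duplicati/
# On the new machine: start paused for 5 minutes to verify settings first
duplicati-server --startup-delay=5min
```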
If you have the backup configurations, see the section on import/export configuration for a guide on how to create the backup jobs from the configuration files.
With the backup configurations, it is possible to re-create the backup jobs. The flow allows you to modify the setup before saving the configuration, in case some details have changed. Once the backup is re-created, you must run the repair operation to make Duplicati recreate the local database for the backup.
Once the local database has been recreated, it is then possible to run the backup as before with no modifications required.
If you do not have access to the previous setup, you can still continue the backups, but this requires that you re-create the backups manually. You need at least the storage destination details, the passphrase and to select the sources.
Once the backup configuration has been created it works the same as if you had imported it from a file. Before running a backup, you need to run the repair operation to make Duplicati recreate the local database for the backup.
Once the local database has been recreated, it is then possible to run the backup as before with no modifications required.
This page describes common scenarios for configuring Duplicati with Docker
The Duplicati Docker images are available from DockerHub and are released as part of the regular releases. The images provided by Duplicati are quite minimal and include only the binaries required to run Duplicati. There are also variations of the Duplicati images provided by third parties, including the popular linuxserver/duplicati variant.
The Duplicati Docker images are using /data
inside the container to store configurations and any files that should persist between container restarts. Note that other images may choose a different location to store data, so be sure to follow the instructions if using a different image.
You also need a way to sign in to the server after it has started. You can either watch the log output, which will emit a special signin url with a token that expires a few minutes after the server has started, or provide the password from within the configuration file.
To ensure that any secrets configured within the application are not stored in plain text, it is also important to set up the database encryption key.
Ideally, you need at least the settings encryption key provided to the container, but perhaps also the webservice password. You can easily provide this via a regular environment variable:
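For example (the environment variable names SETTINGS_ENCRYPTION_KEY and DUPLICATI__WEBSERVICE_PASSWORD are assumptions; check the image documentation for the exact names):

```
docker run -d \
  -v duplicati-data:/data \
  -e SETTINGS_ENCRYPTION_KEY='<your-settings-key>' \
  -e DUPLICATI__WEBSERVICE_PASSWORD='<your-password>' \
  duplicati/duplicati
```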
But you can make it a bit more secure by using Docker secrets which are abstracted as files that are mounted under /run/secrets/
. Since Duplicati does not support reading files in place of the environment variables, you can either use a preload configuration file or use one of the secret providers.
To use the preload approach, prepare a preload.json
file with your encryption key:
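The exact preload schema is described in the preload documentation; as an illustrative sketch only, a file along these lines sets the key in the server's environment:

```json
{
  "env": {
    "server": {
      "SETTINGS_ENCRYPTION_KEY": "<your-settings-key>"
    }
  }
}
```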
You can then configure this in the compose file:
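A compose sketch, assuming the preload file can be pointed to with an environment variable (the variable name DUPLICATI_PRELOAD_SETTINGS is an assumption):

```yaml
services:
  duplicati:
    image: duplicati/duplicati
    volumes:
      - duplicati-data:/data
    secrets:
      - preload
    environment:
      - DUPLICATI_PRELOAD_SETTINGS=/run/secrets/preload
secrets:
  preload:
    file: ./preload.json
volumes:
  duplicati-data:
```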
Setting up the secret manager is a bit more work, but it has the benefit of being able to configure multiple secrets in a single place. To configure the file-based secret provider, you need to create a secrets.json
file such as this:
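The shape below is illustrative only; consult the secret provider documentation for the actual schema of the file-based provider:

```json
{
  "secrets": {
    "settings-encryption-key": "<your-settings-key>",
    "webservice-password": "<your-password>"
  }
}
```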
Then set it up in the compose file:
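A compose sketch for the file-based provider (the environment variable mapping and the provider url scheme are assumptions):

```yaml
services:
  duplicati:
    image: duplicati/duplicati
    volumes:
      - duplicati-data:/data
    secrets:
      - duplicati-secrets
    environment:
      # Maps to the --secret-provider option
      - DUPLICATI__SECRET_PROVIDER=file-secret-provider:///run/secrets/duplicati-secrets
secrets:
  duplicati-secrets:
    file: ./secrets.json
```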
It is also possible to use one of the other secret providers, such as one that fetches secrets from a secure key vault. In this case, you do not need the secrets.json
file, but can just configure the provider.
Duplicati has support for LVM-based snapshots, which is the recommended way to get a consistent point-in-time copy of the disk. For some uses it is not possible to configure LVM snapshots, and this can cause problems due to some files being locked. By default, Duplicati respects advisory file locking and fails to open locked files, as a lock usually indicates that the file is in use, and reading it may not produce a meaningful copy.
If you prefer to make a best-effort backup, which was the default in Duplicati v2.0.8.1 and older, you can disable advisory file locking for individual jobs with the advanced option --ignore-advisory-locking=true
. It is also possible to disable file locking support entirely in Duplicati.
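A sketch of the per-job variant named above, applied on the command line (target url and source path are placeholders):

```
duplicati-cli backup <target-url> ~/data --ignore-advisory-locking=true
```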
Describes how to send reports with Duplicati
Duplicati strives to make it as easy as possible to set up backups, and the built-in scheduler makes it easy to ensure that backups run regularly. Because it is easy to set up a backup and forget about it, a backup can keep running with very little interaction.
Despite all efforts to make Duplicati as robust as possible against failures, it is not possible to handle every problem that may arise after the initial setup. Common failure causes include revoked credentials, filled storage, missing provider updates, etc.
To avoid discovering too late that the backup had stopped working for some reason, it is highly recommended to set up automated monitoring of backups. Duplicati has a number of ways that you can use to send reports into a monitoring solution:
Describes how to configure sending emails with backup details
Sending emails requires access to an SMTP server that will accept the submitted emails. From your SMTP/email provider you need to obtain a url, a username, and a password. If you are a GMail or Google Workspace user, use the Google SMTP guide; otherwise consult your provider for these details.
Besides the connection details, you also need to provide the recipient email address. Note that SMTP servers may sometimes restrict what recipients they allow, but generally using the provider SMTP server will allow sending to your own account.
In the UI you can configure these mandatory values as well as the optional values.
The basic options for sending email can be entered into the general settings, which will then apply to each backup. It is also possible to apply or change the settings for the individual backups by editing the advanced options. Here is how it looks when editing it in the user interface:
You can toggle between the two views using the "Edit as list" and "Edit as text" links.
Besides the mandatory options, it is also possible to configure:
Email sender address
The subject line
The email body
Conditions on when to send emails
For details on how to customize the subject line and message body, see the section on customizing message content.
If you prefer email logs, but would also like to get reports, check out the community provided dupReport tool that can summarize the emails into overviews.
This page describes how to set up monitoring with the Duplicati Console
The Duplicati console is a paid option for handling monitoring of Duplicati backups, but has a free usage tier. To get started with the console, head over to the Duplicati Console page and sign up or log in.
On the "Getting started" page you can see the instructions; essentially, you paste the reporting url into the settings page of your Duplicati client. Once set up, all backups will automatically send a report to the console, and you get a dashboard with the ability to drill down into each machine, each backup configuration, and each report.
Describes how to configure sending notifications via Jabber/XMPP
One of the supported notification methods in Duplicati is the open-source XMPP protocol, supported by a variety of projects, including commercial enterprise offerings.
To send a notification via XMPP, you need to supply one or more recipients, an XMPP username, and a password.
In the UI you can configure these mandatory values as well as the optional values.
The basic options for sending XMPP notifications can be entered into the general settings, which will then apply to each backup. It is also possible to apply or change the settings for the individual backups by editing the advanced options. Here is how it looks when editing it in the user interface:
You can toggle between the two views using the "Edit as list" and "Edit as text" links.
Besides the mandatory options, it is also possible to configure:
The notification message and format
Conditions on when to send notifications
Conditions on what log elements to include
For details on how to customize the notification message, see the section on customizing message content.
These options allow you to integrate custom scripts with Duplicati operations, providing automation capabilities before and after backups, restores, or other tasks.
Pre and Post Operation Scripts Run custom scripts before an operation starts or after it completes. Use these to perform preparation tasks (like database locking), cleanup actions, or to trigger notifications based on operation results.
Control Flow Management Configure whether operations should continue or abort based on script execution status, with customizable timeout settings to prevent operation blocking.
Script Output Processing Post-operation scripts receive operation results via standard output, enabling conditional processing based on success or failure.
--run-script-before
(Path)
Run a script on startup. Executes a script before performing an operation. The operation will block until the script has completed or timed out.
--run-script-after
(Path)
Run a script on exit. Executes a script after performing an operation. The script will receive the operation results written to stdout.
--run-script-before-required
(Path)
Run a required script on startup. Executes a script before performing an operation. The operation will block until the script has completed or timed out. If the script returns a non-zero error code or times out, the operation will be aborted.
--run-script-timeout
(Timespan)
Sets the script timeout. Sets the maximum time a script is allowed to execute. If the script has not completed within this time, it will continue to execute but the operation will continue too, and no script output will be processed. Default value: 60s
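Putting the options together, a job could wire in scripts like this (the script paths are examples, not shipped with Duplicati):

```
duplicati-cli backup <target-url> ~/data \
  --run-script-before-required=/usr/local/bin/lock-database.sh \
  --run-script-after=/usr/local/bin/send-notification.sh \
  --run-script-timeout=120s
```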
You can add custom entries directly to Duplicati's log system from your scripts by using special prefixes in stdout messages. This allows script events to appear in both the Duplicati Log and Reports alongside native application events.
Supported Log Level Prefixes:
LOG:INFO
- For general information and success notifications
LOG:WARN
- For potential issues that didn't prevent completion
LOG:ERROR
- For critical failures that require attention
Example Usage:
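A minimal post-operation script using the prefixes above (the messages themselves are hypothetical):

```shell
#!/bin/sh
# Each stdout line starting with a LOG: prefix is captured by Duplicati
# at the corresponding severity level.
status_line="LOG:INFO Pre-backup database dump completed"
echo "$status_line"
echo "LOG:WARN Cache folder was skipped because it was locked"
echo "LOG:ERROR Replication peer was unreachable"
```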
These messages will be captured with their appropriate severity levels and integrated into Duplicati's logging system, making script events traceable within the same monitoring interfaces you use for Duplicati itself.
This page describes how to use the remote agent to connect with remote control
As long as the Agent is not registered, restarting it will make it attempt to connect again.
Any machine can now use this pre-authorized url to add machines to your organization in the Console. You can click the "Copy" button to get the link to your clipboard and paste it in when registering a machine. Do not share this link with anyone as it could allow them to add machines to your account.
To revoke a link, simply delete it from within the portal. This will prevent new machines from registering, but existing registered machines will remain there.
With the registration link, start the Agent with a commandline such as:
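A sketch of such an invocation (the option name --agent-registration-url is an assumption; use the url copied from the Console):

```
duplicati-agent run --agent-registration-url='<registration-url>'
```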
This will cause the Agent to immediately show up in the Console. Future invocations of the agent will not require the registration url, but should the Agent somehow be de-registered, it will re-register if the url is set and the link is still valid.
This page describes how to send reports via the HTTP protocol
To use the option, you only need to provide the url to send to:
Besides the URL it is also possible to configure:
The message body and type (JSON is supported)
The HTTP verb used
Conditions on when to send reports
Conditions on what log elements to include
You can now specify multiple urls, using the options:
These two options greatly simplify sending notifications to multiple destinations. Additionally, the options make it possible to send both the form-encoded result in text format as well as in JSON format.
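A sketch with assumed option names and separator (verify against the HTTP report module's option list):

```
duplicati-cli backup <target-url> ~/data \
  --send-http-url='https://monitor.example.com/report;https://ops.example.com/report' \
  --send-http-json-urls='https://monitor.example.com/api/report'
```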
This page describes how to import and export configurations from Duplicati
While it is not required that you keep a copy of the backup configuration, it can sometimes be convenient to have all settings related to a backup stored in a single file.
To export from within the user interface, expand the backup configuration and click "Export ..."
You then need to decide on how to handle secrets stored in the configuration. Since these secrets include both the credentials to connect to the remote destination as well as the encryption passphrase, it is important that the exported file is protected.
You can choose to not include any secrets by unchecking the "Export passwords" option. The resulting file will then not contain the secrets and you need to store them in a different place (credential vault, keychain, etc).
You can also choose to encrypt the file before exporting it. If you choose this option, make sure you choose a strong unique passphrase, and store that passphrase in a safe location.
If you choose to export with passwords but without encryption, you will be warned that this is insecure:
With an exported configuration, you can delete an existing configuration and re-create it by importing the configuration. You can optionally edit the parameters so the re-created backup configuration differs from the original.
To import a configuration, go to the "Add backup" page and choose "Import from file":
Pick the file or drag-n-drop it on the file chooser. If the file is encrypted, provide the file encryption passphrase here as well.
The option to "Import metadata" will create the new backup configuration and restore the statistics, including backup size, number of versions, etc. from the data in the file. If not checked, these will not be filled, and will be updated when the first backup is executed.
If the option "Save immediately" is checked, the backup will be created when clicking import, skipping the option to edit the backup configuration.
This page describes the template system used to format text messages sent
The template system used in Duplicati is quite simple, as it will essentially expand Windows-style environment placeholders, %EXAMPLE%
, into values. The same replace logic works for both the subject line (if applicable) and the message body.
Duplicati has defaults for the body and subject line, but you can specify a custom string here. For convenience, the string can also be a path to a file on the machine, which contains the template.
An example custom template could look like:
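For instance, using placeholder variables named elsewhere in this documentation (%OPERATIONNAME%, %LOCALPATH%, %PARSEDRESULT%, %REMOTEURL%):

```
Backup report for %OPERATIONNAME% on %LOCALPATH%
Result: %PARSEDRESULT%
Destination: %REMOTEURL%
```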
The template engine supports reporting any setting by using the setting name as the template value. Besides the options, there are also a few variables that can be used to extract information more easily:
If the output is JSON, it needs to be handled differently than regular text to ensure the result is valid. The logic re-uses the templating concept, but only as a lookup, to figure out which keys to include in the results.
An example template could be:
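For instance, a lookup-style template naming the keys to include might be:

```
%OPERATIONNAME% %PARSEDRESULT% %REMOTEURL%
```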
This will ensure that each of those values will be included in the extra
element in the JSON output. The default template for JSON output includes all fields listed above, but no options are included by default.
Describes the how to configure sending notifications via Telegram
To send a notification via Telegram you need to supply a channel id, a bot token, and an API key.
After obtaining the bot token you can obtain the channel id with a cURL script:
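For example, using the Telegram Bot API's getUpdates endpoint (send a message to your bot first, then look for "chat":{"id":...} in the response):

```
curl "https://api.telegram.org/bot<bot-token>/getUpdates"
```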
With all required values obtained, you can set up the Telegram notifications in the general settings:
You can toggle between the two views using the "Edit as list" and "Edit as text" links.
Besides the mandatory options, it is also possible to configure:
The notification message and format
Conditions on when to send notifications
Conditions on what log elements to include
Bot Configuration
--send-telegram-bot-id
(String)
- The Telegram bot ID that will send messages
--send-telegram-api-key
(String)
- The API key for authenticating your Telegram bot
Message Destination
--send-telegram-channel-id
(String)
- The channel ID where messages will be sent
--send-telegram-topid-id
(String)
- Topic ID for posting in specific topics within Telegram groups
Notification Content
--send-telegram-message
(String)
- Template for message content with support for variables like %OPERATIONNAME%, %REMOTEURL%, %LOCALPATH%, and %PARSEDRESULT%
--send-telegram-result-output-format
(format)
- Format for presenting operation results
Duplicati
Json
Notification Filtering
--send-telegram-level
(level)
- Controls which result types trigger notifications:
Success - Only successful operations
Warning - Operations that completed with warnings
Error - Operations that failed with recoverable errors
Fatal - Operations that failed with critical errors
All - All operation results regardless of status
--send-telegram-any-operation
(Boolean)
- When enabled, sends notifications for all operations, not just backups
--send-telegram-log-level
(Enumeration)
- Sets minimum severity level for included log entries:
ExplicitOnly - Show only explicitly requested messages
Profiling - Include performance measurement data
Verbose - Include detailed diagnostic information
Retry - Include information about retry attempts
Information - Include general status messages
DryRun - Include simulation mode outputs
Warning - Include potential issues that didn't prevent completion
Error - Include critical failures that require attention
--send-telegram-log-filter
(String)
- Filters log entries based on specified patterns
--send-telegram-max-log-lines
(Integer)
- Limits the number of log lines included in notifications
Sample scripts extracted from Community Docs:
The Agent is designed to be deployed in a way that is more secure and easier to manage at scale than the regular Server or TrayIcon instances. When the agent is running, there is no way to interact with it from the local machine.
On the very first run, the Agent will attempt to register itself with the Duplicati Console. If there is a desktop environment and a browser on the system, the Agent will attempt to open the registration link in the browser. If there is no such option, the Agent will print out the link in the console, or in the Event Viewer on Windows. The Agent will repeatedly poll the Console to find out when it is claimed.
Once the agent is registered, it immediately enables the connection and will be listed as a registered machine in the Console.
To skip the registration step and have the agent connect directly to the console without any user intervention, first create a pre-authorized link on the Console. To do this, head to the Console and click the "Add registration url" button.
The most versatile reporting option is the ability to send messages via the HTTP(s) protocol. By default, messages are sent as a form-encoded body in a POST request.
For details on how to customize the notification message, see the section on customizing message content.
On this page you should select "To File", which is the default. The option to export "As commandline..." is not covered here, but it produces a string that can be used with the command line interface.
After completing the export, you will get a file containing the backup configuration. The file is in JSON format and optionally encrypted with AES.
When everything is configured as desired, click the "Import" button. If you have not checked "Save immediately", the flow will continue as when adding a new backup.
Note: The description here only covers the text-based output (such as emails). The JSON-based output is a bit different.
To obtain the bot token (aka bot id), message the @BotFather
bot. After creating the bot, send a message to it, so it can reply. For more details on Telegram bots, see the Telegram bot documentation.
To obtain the API key, follow the official Telegram instructions for obtaining API credentials.
For details on how to customize the notification message, see the section on customizing message content.
This page describes how filters are evaluated inside Duplicati and how to construct them
Duplicati uses the same filter setup everywhere individual files are selected. It is most prominent when choosing the sources, but filters can be applied in other places where individual files can be selected.
Internally, Duplicati represents folders with a trailing path separator, which makes it easy to distinguish the two types. This distinction is important when constructing filters, as Duplicati requires a full match, including the trailing path separator, before a path is considered matched. An example for Windows and Linux/MacOS:
Windows
Folders
C:\Users\john\
X:\data\
Files
C:\Users\myfile
X:\data\file.bin
Linux/MacOS
Folders
/home/john/
/usr/share/
Files
/home/myfile
/usr/file.bin
For brevity, the remainder of this page will only use the Linux/MacOS format in examples, but the same applies to Windows paths.
Duplicati supports 4 different kinds of filters: paths, globbing, regex, and predefined groups. The simplest type of filter is the path. To use a path-type filter, simply provide the full path to the file or folder to target.
While it would be possible to maintain an ever-growing list of paths in a filter, it can quickly become hard to manage. For cases where there is some similarity between multiple file or folder paths, it is possible to target multiple paths with a file-globbing syntax. The wildcard character *
matches any number of characters (including zero), and the character ?
matches exactly one character. Unlike other glob implementations, the path separator is also matched by wildcards in Duplicati filters.
An example of glob expressions:
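Reconstructed to match the description below, the three expressions could be:

```
/home/john/IMG_????.jpg
/home/*/Download/
*.iso
```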
The first expression matches files with the 4 ?
characters replaced by any character, the second matches the Download
folder for any user, and the third matches any file with the .iso
extension.
If the paths to match are more complicated than what can be expressed with globbing, it is also possible to use regular expressions, which are a common way of expressing a string pattern. Understanding regular expressions and applying them can be a challenging task, and will most often require some testing to ensure it is working as expected. Also note that since Duplicati is written in C#, it uses the .NET variant of regular expressions.
Regular expressions are provided by wrapping the expression in square brackets [ ]
:
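For example (illustrative patterns only; note the doubled backslashes in the Windows path, as described below):

```
-[/home/.*/cache/.*]
-[C:\\Users\\.*\\AppData\\.*]
```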
Note that for Windows, the path separators must be escaped with a backslash, \
so each separator becomes a double backslash \\
.
Some files are commonly excluded on many systems, and to make it easier to exclude such files, Duplicati has a number of built in filter groups:
SystemFiles
Files that are not real files, such as /proc
or System Volume Information.
OperatingSystem
Files that are provided by the operating system, such as /bin
or C:\Windows\
CacheFiles
Files that are part of application or operating system caches, such as the browser cache.
TemporaryFiles
Files that are stored temporarily by applications as part of normal operations
Applications
Binary applications, such as /lib/
or C:\Program files\
DefaultExcludes
All the above filters in one group
To use a filter group, supply one or more names inside curly braces { }
, separated with commas. As an example:
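A sketch of applying two of the groups listed above on the command line:

```
duplicati-cli backup <target-url> /home/john --exclude="{SystemFiles,CacheFiles}"
```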
By default, Duplicati will recurse the source folders and include every file and folder found. For this reason, most of the filters will be exclude filters that removes something from the backup. Include filters are prefixed with a +
and exclude filters are prefixed with a -
.
When Duplicati evaluates filters, it considers only the first full match and does not evaluate further. It also evaluates folders before files, meaning it is not possible to include a file if its parent folder is excluded. Importantly, the filters are processed in the order they are supplied, which makes it possible to express advanced rules. As an example:
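Reconstructed from the explanation below, the rule set could be:

```
+/usr/share/*.txt
-*.txt
-*.bin
+/usr/share/*.bin
```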
In the example, the first rule is applied before the second, which means that all .txt
files in /usr/share/
are included, but any other .txt
files are excluded. The inverse goes for the .bin
files: because the exclude rule comes before the include rule, the files are excluded, even though an include rule exists.
If we append a rule:
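For instance, appending a folder exclude:

```
-/usr/share/
```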
Even though this rule is last, it will exclude the entire folder. Since the folder is excluded, the include rule is never evaluated for files inside it. This cut-off at the folder level makes it possible to skip processing subfolders entirely, which could otherwise be time consuming.
This page describes the local database associated with a backup
Duplicati uses two kinds of databases: one for the Server and one for each backup. This page describes the overall purpose of the local database and how to work with it. The database itself is stored in the same folder as the server database and has a randomly generated name.
If you have access to the backup files generated by Duplicati, you only need the passphrase to restore files. As described in the migration section, it is also everything that is needed to continue the backup. But to increase the performance and reduce the number of remote calls required during regular operations, Duplicati relies on a database with some well-structured data.
The database is essentially a compact view of what data is stored at the remote destination, and as such it can always be created from the remote data. The only information that is lost if the database is recreated are log messages and the hashes of the remote volumes. The log messages are mostly important for error-tracing but the hashes of the remote volumes are important if the files are not encrypted, as this helps to ensure the backup integrity.
Prior to running a backup, Duplicati does a quick scan of the remote destination to ensure it looks as expected. This check is important, as making a backup on the assumption that data exists could result in backups that can only be partially restored. If the check fails for some reason, Duplicati will exit with an error message explaining the problem.
For some errors it is possible to run the repair command and have the problem resolved. This works if all required data is still present, but may fail if there is no way to recover. In that case, there may be additional options in the section on recovering from failure.
In rare cases, the database itself may become corrupted. If this seems to be the case, it is safe to delete the local database and run the repair command. Note that it may take a while to recreate the database, but no data is lost in the process, and restores are possible without the database.
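From the command line, the rebuild could look like this (target url and passphrase are placeholders):

```
duplicati-cli repair <target-url> --passphrase=<passphrase>
```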
This page describes how to use Duplicati with Linux
Before you can install Duplicati, you need to decide on three different parameters:
The type of instance you want: TrayIcon, Server, Agent or CLI
Your package manager: apt
, yum
or something else
Your machine CPU type: x64, Arm64 or Arm7
To use Duplicati on Linux, you first need to decide which kind of instance you want: GUI (aka TrayIcon), Server, Agent, CLI. The section on Choosing Duplicati Type has more details on each of the different types.
Next step is checking what Linux distribution you are using. Duplicati supports running on most Linux distros, but does not yet support FreeBSD.
If you are using a Debian-based operating system, such as Ubuntu or Mint, you can use the .deb
package, and for RedHat-based operating system, such as Fedora or SUSE, you can use the .rpm
packages.
For other operating systems you can use the .zip
package, or check if your package manager already carries Duplicati.
Finally you need to locate information on what CPU architecture you are using:
x64: 64bit Intel or AMD based CPU. This is the most common CPU at this time.
Arm64: 64bit ARM based CPU. Used in Raspberry Pi Model 4 and some Laptops and Servers.
Arm7: 32bit ARM based CPU. Used in Raspberry Pi Model 3 and older, and some NAS devices.
Once you have decided on the (type, distro, cpu) combination, you are ready to download the package. The full list of packages can be obtained via the main download page by clicking "Other versions". Refer to the installation page for details on how to install the packages, or simply use your system's package manager.
For users with a desktop environment and no special requirements, the TrayIcon instance is the recommended way to run Duplicati. If you are using either the .deb or .rpm package, you should see Duplicati in the program menu, and you can launch it from there. If you do not see Duplicati in the program menu, you can start it with:
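A sketch of a manual launch, assuming the packages place the TrayIcon binary on the PATH as duplicati (check your package's file list if the name differs):

```shell
# Start the TrayIcon; it registers with the desktop environment
duplicati
```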
When running the TrayIcon in a user context, it will create a folder in your home folder, typically ~/.config/Duplicati, where it stores the local databases and the Server database with the backup configurations.
The Server is a regular executable and can simply be invoked with:
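For example, using the duplicati-server binary name referenced elsewhere on this page:

```shell
# Start the server in the current user context
duplicati-server
```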
When invoked as a regular user, it will use the same folder, ~/.config/Duplicati, as the TrayIcon and share the configuration.
Besides the configuration listed below, it is also possible to run Duplicati in Docker.
If you would like to run the Server as a service, the .rpm and .deb packages include a regular systemd service. If you are installing from the .zip package, you can grab the service file from the source code and install it manually on your system.
If you need to pass options to the server, edit the settings file, usually at /etc/default/duplicati. Make sure you only edit the configuration file and not the service file, as the latter will be overwritten when a new version is installed. The settings file should look something like this:
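A minimal sketch of what the file may contain; the exact contents shipped by the packages can differ:

```shell
# /etc/default/duplicati
# Extra options passed to duplicati-server on startup
DAEMON_OPTS=""
```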
You can use DAEMON_OPTS to pass arguments to duplicati-server, such as --webservice-password=<password>.
To enable the service to auto-start, reload configurations, start the service and report the status, run the following commands:
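Assuming the systemd unit installed by the packages is named duplicati.service, the steps could look like:

```shell
sudo systemctl daemon-reload                 # reload configurations
sudo systemctl enable duplicati.service      # enable auto-start on boot
sudo systemctl start duplicati.service       # start the service now
sudo systemctl status duplicati.service      # report the status
```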
The server is now running and will automatically start when you restart the machine.
Note: the service runs in the root user context, so files will be stored in /root/.config/Duplicati on most systems, but in /Duplicati on other systems. Use DAEMON_OPTS to add --server-datafolder=<path to storage folder> if you want a specific location.
To check the logs (and possibly obtain a signin link), the following command can usually be used:
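Assuming the unit name duplicati.service:

```shell
# Show recent log entries, including any signin link emitted on startup
sudo journalctl -u duplicati.service --no-pager -n 50
```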
With the Agent there is minimal setup required: register the machine with the Duplicati Console. When installing either the .rpm or .deb package, it will automatically register duplicati-agent.service for startup. If you are using the .zip installation, you can find the agent service in the source code and manually register it:
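A hedged sketch, assuming the service file from the source tree is named duplicati-agent.service:

```shell
# Copy the service file from the extracted source into systemd's search path
sudo cp duplicati-agent.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now duplicati-agent.service
```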
When the Agent starts, it will emit a registration link to the log, and you can usually see it with the following command:
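Assuming the unit name duplicati-agent.service:

```shell
# Follow the agent log; the registration link appears shortly after startup
sudo journalctl -u duplicati-agent.service -f
```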
If you are using a pre-authenticated link, you can run the following command to activate the registration:
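A hypothetical invocation; verify the exact subcommand and argument with the agent's built-in help:

```shell
# Hypothetical: activate registration with a pre-authenticated link
duplicati-agent register <pre-authenticated-link>
```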
After registration is complete, restart the service to pick up the new credentials:
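Assuming the unit name duplicati-agent.service:

```shell
sudo systemctl restart duplicati-agent.service
```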
Using the CLI is simply a matter of invoking the binary:
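A sketch of a backup run, assuming the CLI binary is named duplicati-cli (following the duplicati-* naming used by the support tools); the destination URL, source path, and passphrase are placeholders:

```shell
duplicati-cli backup "file:///mnt/backup/duplicati" ~/Documents --passphrase=<passphrase>
```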
Since the CLI also needs a local database for each backup, it will use the same location as described for the Server above to place databases. In addition, it keeps a small file called dbconfig.json in the storage folder, which maps URLs to databases. The intention is to avoid manually specifying the --dbpath parameter on every invocation.
If you specify the --dbpath parameter, it will not use the dbconfig.json file and will not store anything in the local data folder.
Each package of Duplicati contains a number of support utilities, such as the RecoveryTool. Each of these can be invoked from the commandline with a duplicati-* name, and all contain built-in help. For example, to invoke ServerUtil, run:
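Assuming ServerUtil follows the duplicati-* naming convention described above:

```shell
# Show the built-in help for ServerUtil
duplicati-server-util help
```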
This page describes the different retention settings available in Duplicati
Even though Duplicati tries hard to reduce storage use as much as possible, it is inevitable that the remotely stored data grows as new versions of files are added. To avoid running out of space or paying for excessive storage use, it is important that unnecessary backups are removed regularly.
In Duplicati there are a few different settings that can be used to configure when a "snapshot" is removed. All of these options are invoked automatically at the end of a backup to ensure that removal follows a new version. If you use the Command Line Interface, it is possible to disable the removal and run the delete command as a separate step.
After deleting one or more versions, Duplicati will mark any data that can no longer be referenced as waste, and may occasionally choose to run a compact process that deletes unused volumes and creates new volumes with no wasted space.
Despite all deletion rules, Duplicati will never delete the last version, keeping at least one version available.
The most intuitive option is to choose a period that data is stored for, and then consider everything older than this period as stale data. The right period depends on your use case, but it could be 7 days, 1 year, or 5 years, for example.
This option is usually the preferred choice if the backups happen regularly, such as a daily backup that keeps the last 3 months.
If the backups are running irregularly, where the backups are triggered by some external event, there may be long periods where there are no backups. For this case you can choose a number of versions to keep and Duplicati will consider anything outside that count as outdated.
Another special case is when the source data has not changed at all, which is uncommon: Duplicati will not make a new version, as it would be identical to the previous one. In such a setup, it may be preferable to use a version count, despite regularly scheduled backups.
The retention policy is a "bucket" based strategy, where you define how many backups to keep in each "bucket" and what a "bucket" covers. With this strategy, it is possible to get something similar to grandfather-father-son style backup rotations.
The syntax for the retention policy uses the time format to define the bucket and the contents of that bucket. The bucket size is first, then a colon separator, and then the duration in the bucket. Multiple buckets can be defined with commas. As an example:
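A policy with the two buckets described below could be written as (assuming the --retention-policy option):

```
--retention-policy="7D:U,1Y:1W"
```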
The first bucket is defined as being 7 days, and the value U means an unlimited number of backups in this bucket. In other words: for the most recent 7 days, keep all backups.
The second bucket is defined as 1 year, keeping a backup for each 1 week, resulting in roughly 52 backups after the first 7 days.
Any backups outside the buckets are deleted, meaning anything older than a year would be removed.
In the UI, a helpful default is called "Smart retention" which sets the following retention policy:
Translated, this policy means that:
For the most-recent week, store 1 backup each day
For the last 4 weeks, store 1 backup each week
For the last 12 months, store 1 backup each month
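Expressed as a policy string in the syntax above, this corresponds to:

```
--retention-policy="1W:1D,4W:1W,12M:1M"
```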
This page describes common scenarios for configuring Duplicati with MacOS
Before you can install Duplicati, you need to decide on two different parameters:
Your machine CPU type: Arm64 or x64
To use Duplicati on MacOS, you first need to decide which kind of instance you want: GUI (aka TrayIcon), Server, Agent, CLI. The section on Choosing Duplicati Type has more details on each of the different types. For home users, the common choice is the GUI package in .dmg format. For enterprise rollouts, you can choose the .pkg packages.
Your Mac is most likely using Arm64 with one of the M1, M2, M3, or M4 chips. If you have an older Mac, it may use the Intel x64 chipset. To see what CPU you have, click the Apple icon and choose "About this Mac". In the field labelled "Chip" it will either show Intel (x64) or M1, M2, M3, M4 (Arm64).
The packages can be obtained via the main download page. The default package shown on the page is the MacOS Arm64 GUI package in .dmg format. If you need another version, click the "Other versions" link at the bottom of the page.
If you are using the .dmg package, the installation works like other applications: simply open the .dmg file and drag Duplicati into Applications. Note that with the .dmg package, Duplicati is not set to start automatically with your Mac, but if you restart with the option to re-open running programs, Duplicati will start on login.
If you are using the .pkg package, Duplicati will install a launchAgent that ensures Duplicati starts on reboots. The CLI package installs a stub file that is not active, so you can edit the launchAgent and have it start the Server if you prefer.
If you have installed the GUI package, you will have Duplicati installed in /Applications and it can be started like any other application. Once Duplicati is started, it will place itself in the menu bar near the clock and battery icons. Because Duplicati is meant to be a background program, there is no Duplicati icon in the dock.
On the first start Duplicati will also open your browser and allow you to configure your backups. If you need access to the UI again later, locate the TrayIcon in the status bar, click it and click "Open". If you install the CLI or Agent packages, the Duplicati application is not available.
If you install the CLI package, Duplicati binaries are placed in /usr/local/duplicati and symlinked into /usr/local/bin, and you can start the server simply by running:
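For example, using the symlinked binary name used elsewhere in the documentation:

```shell
duplicati-server
```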
When invoked as a regular user, it will use the same folder, ~/Library/Application Support/Duplicati, as the TrayIcon and share the configuration.
Note: If you install the GUI package or install from homebrew, Duplicati's binaries are not symlinked into the paths searched by MacOS. You can invoke the binaries by supplying the full path:
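A sketch, assuming the standard GUI install location in /Applications:

```shell
# Invoke the server binary inside the app bundle directly
/Applications/Duplicati.app/Contents/MacOS/duplicati-server
```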
With the Agent there is minimal setup required: register the machine with the Duplicati Console. When installing the Agent package, it will automatically register a launchAgent that starts Duplicati in Agent mode.
If the Agent is not registered with the Console, it will open the default browser and ask to be registered. Once registered, it will run in the background and be available on the Duplicati Console for management.
If you have a pre-authenticated link for registering the machine, you can place a file in /usr/local/share/Duplicati/preload.json with content similar to:
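A hypothetical sketch; the exact argument name for supplying the registration link should be verified against the Agent's built-in help:

```json
{
  "args": {
    "agent": ["--registration-url=<pre-authenticated link>"]
  }
}
```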
Using the CLI is simply a matter of invoking the binary:
Since the CLI also needs a local database for each backup, it will use the same location as described for the Server above to place databases. In addition, it keeps a small file called dbconfig.json in the storage folder, which maps URLs to databases. The intention is to avoid manually specifying the --dbpath parameter on every invocation.
If you specify the --dbpath parameter, it will not use the dbconfig.json file and will not store anything in the local data folder.
Note: If you install the GUI package or install from homebrew, Duplicati's binaries are not symlinked into the paths searched by MacOS. You can invoke the binaries by supplying the full path:
Each package of Duplicati contains a number of support utilities, such as the RecoveryTool. Each of these can be invoked from the commandline with a duplicati-* name, and all contain built-in help. For example, to invoke ServerUtil, run:
Note: If you install the GUI package or install from homebrew, Duplicati's binaries are not symlinked into the paths searched by MacOS. You can invoke the binaries by supplying the full path:
This page describes how Preload settings are applied
The preload settings allow configuring machine-wide or enterprise-wide default settings with a single file. Because of this use case, all settings are applied only if they are not already present. For example, a commandline argument could be set up to change the default blocksize, but if the user has applied another setting via the commandline or a parameters file, the preload setting has no effect.
For single-machine users, the preload settings are a convenient way to change the arguments passed to either TrayIcon, Server, or Agent, without needing to edit shortcuts or service files.
To support different ways of deploying the settings file, 3 locations are checked:
%CommonApplicationData%\Duplicati\preload.json (see this SO thread for details), which is usually:
Linux: /usr/share/Duplicati/preload.json
MacOS: /usr/local/share/Duplicati/preload.json
Windows: C:\ProgramData\Duplicati\preload.json
Inside the installation folder
The file pointed to by DUPLICATI_PRELOAD_SETTINGS
For security reasons, all these paths are expected to be writeable only by Administrator/root, so unprivileged users cannot modify the values. If the settings contain secrets, make sure that only the relevant users can read them.
Loading of the files is silent by default, even if parsing fails, but setting the environment variable DUPLICATI_PRELOAD_SETTINGS_DEBUG=1 will enable loader debug information to help investigate issues.
The implementation here follows the format:
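A sketch of the overall shape, using placeholder names; the three sections and the * / per-executable split follow the description below:

```json
{
  "env": {
    "*": { "VARIABLE_NAME": "value" },
    "tray": { "VARIABLE_NAME": "value" }
  },
  "db": {
    "server": { "setting-name": "value" }
  },
  "args": {
    "*": ["--option=value"],
    "server": ["--option=value"]
  }
}
```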
The file has 3 sections that are all similar and all optional: env, db, and args. Each section can apply to all executables (*) or a specific executable. The executable names can be seen in the source, but the most common ones are tray and server.
In the case where the * section and a specific executable define the same variable, the specific one is used. If multiple settings files are found, they are loaded in the order described above, so the last file loaded can overwrite the others. The * settings are collected from all three files, as are the executable-specific options, and only after all parsing is done are the specific executable options applied (see below for an example).
Note that some executables will load others, such that TrayIcon, Service, and WindowsServer will load Server.
env
The env section contains environment variables that are applied inside the process after starting. Each entry under an executable is a key-value pair, where the key is the name of the environment variable and the value is the contents of the environment variable.
The environment variables are only set if they are not already set, allowing a custom base set while preferring local machine variables.
In the case where one binary loads another, the starting application environment variables are applied first, and then any unset environment variables are applied for the loaded executable.
db
For the db section it is possible to use *, but the settings are currently only applied when running the server, so for future compatibility this section should use server only. The settings under an executable in the db section are automatically prefixed with -- to ensure they are valid options, and are saved as the "application wide" settings, also visible in the UI under Settings -> Advanced Options.
The settings here are applied to the database if they are changed, meaning a change to the settings will overwrite settings the user has already applied. This check is performed on startup.
The database settings are not passed on from a binary when it loads another, so the only database settings that are loaded are done by Server, even if any are supplied by tray (this may change in the future).
The commandline arguments support both the * and specific executable names. The arguments are expected to be switches in the format --name=value but can be any commandline argument. The general logic in Duplicati is that "last option wins", and the resolver applies that logic to arrive at the most sensible combination of arguments.
If the following fragment is supplied:
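One env fragment consistent with the values described below:

```json
{
  "env": {
    "*": { "E1": "a", "E2": "b" },
    "tray": { "E1": "c", "E3": "d" }
  }
}
```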
The Server executable will get the settings from *, and the TrayIcon will get the values: E1=c E2=b E3=d.
If the above fragment is found in the first file, but this fragment is found in a later file:
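For example, a later file such as:

```json
{
  "env": {
    "*": { "E3": "f" },
    "tray": { "E1": "g" }
  }
}
```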
First the * variables are collected, giving E1=a E2=b E3=f, then the tray variables give E1=g E3=d, and then they are combined to give E1=g E2=b E3=d for tray.
The same combination logic is applied for both the db and args sections, but since the args section does not contain key-value pairs, and order matters, it is done by collecting the arguments first and then reducing them:
In this case the arguments are collected, with * first, then the executable specifics, giving:
Since this contains 3 options named --test, they are reduced and appended so it ends up with:
The intention here is to stay as close as possible to the original line that was entered. If the commandline arguments already contain --test, the values are not applied.
This page describes how to set up and run a self-hosted OAuth Server
If you are using one of the backends that requires login via OAuth (Google, Dropbox, OneDrive, etc) you will need to obtain a "clientId" and a "clientSecret". These are given by the service providers when you are logged in, and are usually free.
If you prefer to avoid the hassle of setting this up, you can opt to use the Duplicati provided OAuth server, where Duplicati's team will handle the configuration. This OAuth server is the default way to authenticate. If you prefer to be more in control of the full infrastructure, you can use this guide to set up and use your own self-hosted OAuth Server.
For example, this guide will show how to set up an OAuth server for internal use in an organization, granting Duplicati instances full access to the Google Drive files.
If you need to set up a provider other than Google, see the configuration defaults, which has links to the pages where the Client ID and Client secret can be found for other services.
The first step is to sign up for Google Cloud Services if you are not already a customer. Once you are signed up, you can create a new project as shown here:
Once you have created a project where the OAuth settings can live, you need to enable the "Google Drive API". Go to the top-left menu, choose "API & Services" and then "Enabled APIs & Services". From here, search for "Google Drive API", click it, and enable it:
Before you can get the values, you need to configure the consent screen that is shown when users log in with your OAuth service. You can choose "Internal" here, unless you need to provide access to people outside your organization; choosing "External" also requires a Google review. On the consent screen, you only need to fill in the required fields: the app name and some contact information:
The last step in the consent is choosing the scopes (meaning the permissions) it is possible to grant with this setup. In this example we choose the auth/drive scope, granting full access to all files in the user's Drive. For regular use, it is safest to use auth/drive.file, which will only grant Duplicati access to files created by Duplicati. However, in some cases Google Drive will drop your permissions and refuse to let Duplicati access the files. There is no way to change the permissions on the files, so if this happens, your only choice is to use auth/drive and obtain full access:
You can now click update, save the consent screen, and proceed to setting up the credentials needed. Click "Create Credentials" and choose "OAuth client ID". On the next page, choose the type "Web application". In the "Authorized redirect URIs" field you need to enter the URL for the server that is called after login. The Duplicati OAuth server uses a path of /logged-in, so make sure it ends with that. In the screenshot, the server is hosted on a single machine, so the setup is for https://localhost:8080/logged-in:
When you are done, click "Save" and a popup will show the credentials that were generated. Use the convenient copy buttons to get the "Client ID" and "Client secret", or download the JSON file containing them. If you lose them, you can get them again via the "Credentials" page. The credentials shown here are redacted:
With the credentials available, create a JSON text file similar to this:
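A sketch of the shape such a file could have; the key names here are placeholders, so check the OAuth server documentation for the exact names expected:

```json
{
  "client-id": "<Client ID from the credentials page>",
  "client-secret": "<Client secret from the credentials page>"
}
```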
If you are setting up a secure server, you should use SharpAESCrypt to encrypt the file after you have created it. If you do, make a note of the passphrase used. Save the file either as secrets.json, or secrets.json.aes if you have encrypted it.
In the following, we will only set up Full Access Google Drive, which for legacy reasons is called "googledocs" in the OAuth server. If you are looking to set up one of the other services, see the configuration document and pick the ids you need.
In the following, the services are configured to just googledocs, but it can be a comma-separated list of services if you want to enable multiple. The storage here is simply a local folder that stores encrypted tokens, but you can also use an S3-compatible storage if needed. See the OAuth server readme for more details.
If you are using Docker, you can run the OAuth server image directly and simply add environment variables:
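A hedged sketch; the image name, environment variable names, and mount paths are illustrative and should be checked against the OAuth server readme:

```shell
docker run -d -p 8080:8080 \
  -e SERVICES=googledocs \
  -e HOSTNAME=localhost:8080 \
  -e URLS="http://*:8080" \
  -v /path/to/secrets:/secrets \
  -v /path/to/tokens:/tokens \
  duplicati/oauth-server
```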
The hostname here MUST match the one set as the redirect URI, or the authorization will fail. The URLs parameter is what the internal Docker engine thinks it is running. For this setup there is no TLS/SSL certificate, so the URL here is http, but note that we used https in the redirect URI, and these two must match in the end. Here it is assumed that some other service provides the SSL layer.
If you need to serve the certificate directly from the Docker container, generate a certificate .pfx file and use a configuration such as:
To run without Docker, first download the OAuth Server binaries for your operating system and extract them to a suitable place. The binaries are self-contained, so they will run without any additional framework installation.
To run the server, invoke it with a setup like this:
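A hedged sketch; the switch names are illustrative, so consult the server's built-in help for the exact options:

```shell
./oauth-server --hostname=localhost:8080 --urls="http://*:8080" \
  --services=googledocs --secrets=./secrets.json --storage=file://./tokens
```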
The hostname here MUST match the one set as the redirect URI, or the authorization will fail. The URLs parameter is what the process thinks it is running locally. For this setup there is no TLS/SSL certificate, so the URL here is http, but note that we used https in the redirect URI, and these two must match in the end. Here it is assumed that some proxy service provides the SSL certificate.
If you need to serve the certificate directly from the binary, generate a certificate .pfx file and use a configuration such as:
Once the service is running, you can navigate to the page and generate an AuthID:
The final step is to instruct Duplicati to use the self-hosted OAuth server instead of the regular instance. This is done by visiting the "Settings" page in the Duplicati UI and adding the advanced option --oauth-url=https://localhost:8080/refresh:
Don't forget to click "OK" to save the settings. Once configured, the "AuthID" links in the UI will point to your self-hosted OAuth server, and all authorization is done purely through the self-hosted OAuth server.
This page explains how to recover as much data as possible from a broken remote storage
This page describes how to work with encrypted files outside of normal operations
In normal Duplicati operations, the files at the remote destination should never be handled by anything but Duplicati. Changing the remote files will always result in warnings or errors when Duplicati needs to access those files.
However, in certain exceptional scenarios, it may be required that the file contents are accessed manually.
And similarly, to encrypt a file, you can use:
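A sketch using the SharpAESCrypt command line tool, where e encrypts and d decrypts; the file names are placeholders:

```shell
SharpAESCrypt e <passphrase> duplicati-b1.dblock.zip duplicati-b1.dblock.zip.aes
```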
This page describes what a "destination" is to Duplicati and lists some of the available providers
Duplicati makes backups of files, called the source, and places the backup data at a destination chosen by the user. To make Duplicati as versatile as possible, each of the destinations are implemented as a "destination" (or "backend"), each with different properties.
Some storage providers support multiple protocols, each with their own strengths. You can generally pick whichever storage destination provider you like, but if there is a specific implementation for a given storage provider, that is usually the best pick.
Each storage destination has a number of options that can be provided via a URL-like format. The options should preferably be provided as part of the URL, but can also be provided via regular commandline options. For instance, the --use-ssl=true flag can also be added to the URL with &use-ssl=true. If both are provided, the URL value is used.
Destinations in this category are general purpose enough, or commonly used, so they can be used across a range of storage providers. Destinations in this category are:
Storage destinations in this category are specific to one particular provider and implemented using either their public API description, or by using libraries implemented for that provider. Destinations in this category are:
Storage destinations in this category are also specific to one particular provider, but these storage provider products are generally intended to be used as file synchronization storage. When they are used with Duplicati, the backup files will generally be visible as part of the synchronization files. Destinations in this category are:
Storage destinations in this category are utilizing a decentralized storage strategy and requires knowledge about each system to have it working. Some of these may require additional servers or intermediary providers and may have different speed characteristics, compared to other storage providers. Destinations in this category are:
This page describes how to use the file destination provider to store backup data on a local drive.
The most basic destination in Duplicati is the file backend. This backend simply stores the backup data somewhere that is reachable from the file system. The destination can be a network based storage as long as it is mounted when needed, a fixed disk, or a removable media.
The file backend can be chosen with the file:// prefix, where the rest of the destination URL is the path.
Windows example:
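A sketch with an assumed drive letter and folder:

```
file://D:\Backups\Duplicati
```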
Linux/MacOS example:
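A sketch, assuming a mounted backup disk:

```
file:///mnt/backup/duplicati
```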
For most cases it will also work without the file:// prefix, but adding the prefix makes the intention clear.
Since Duplicati is intended to be used with remote systems, it will make a temporary file, and then copy the temporary file to the new location. This enables various retry mechanisms, progress reporting and failure handling that may not be desired with local filesystems.
To change this logic to instead use the operating system move command to move the file into place, avoiding a copy, set the option --use-move-for-put on the file backend and also set --disable-streaming-transfers. With these two options, all special handling is removed and the transfer speed should be the best possible on the current operating system. Note that with --disable-streaming-transfers, no progress is shown during transfers in the UI, because the underlying copy or move method cannot be monitored.
Because a local storage destination is expected to have very low latency, the file backend will verify the length of the file after copy. This additional call is usually very fast and does not impact transfer speeds, but can be disabled for slightly faster uploads with --disable-length-verification.
For removable drives, the mount path can sometimes change when inserting the drive. This is most prominent on Windows, where drive letters are assigned based on the order in which drives are connected. To support different paths, you can supply multiple alternate paths with --alternate-target-paths, where each path is separated with the system path separator (; on Windows, : on Linux/MacOS):
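For example, on Windows (paths assumed):

```
--alternate-target-paths="D:\Backup;E:\Backup;F:\Backup"
```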
If you would like to support any drive letter, you can also use * as the drive letter (Windows only):
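For example:

```
--alternate-target-paths="*:\Backup"
```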
Because using multiple paths could end up attempting to make a backup to the wrong drive, you can use the option --alternate-destination-marker to provide a unique marker filename that needs to exist on the destination:
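For example, with an assumed marker filename:

```
--alternate-destination-marker=duplicati-backup-drive.txt
```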
Using this option will scan all paths provided, either using the * drive letter or --alternate-target-paths, and check if the folder contains a file with the given name.
To use authentication, provide the --auth-username and --auth-password arguments to the query. Since authentication in Windows is tied to the current user context, it is possible that the share is already mounted with different credentials that may not have the correct permissions. To guard against this, it is possible to drop the current authentication and re-authenticate prior to accessing the share by adding the --force-smb-authentication option.
This page is not yet completed.
This page is not yet completed.
The files encrypted with the default AES encryption follow the AES Crypt file format, so any compatible tool can be used to decrypt and encrypt these files. For convenience, Duplicati also ships with a command line binary named SharpAESCrypt that uses the same library used by Duplicati. This tool can be used to decrypt the remote volume files with the encryption passphrase, as well as to encrypt files.
Files encrypted with GPG can use one of many modes; a general overview of how GPG works can be found in the GPG documentation. When using the default options, Duplicati will use the symmetric mode for GPG. In this mode, you can use this command to decrypt a file:
If you need to switch from GPG to AES, or vice versa, you can use the RecoveryTool to automatically process all files on the storage destination. The recovery tool also supports recompressing or changing the compression method.
If you use this method, make sure to .
File (any path in the filesystem)
SFTP (SSH)
rclone (binary required)
Storj (aka Tardigrade)
Note that for Windows network shares, you may want to use the SMB destination instead.
On Windows, the shares can be authenticated with a username and password (not with integrated authentication), by authenticating prior to accessing the share.
This page describes the OpenStack storage destination
Duplicati supports storing files with OpenStack, a large-scale object storage similar to S3. With OpenStack you store "objects" (similar to files) in "containers", which define various properties shared between the objects. If you use a / in the object prefix, the objects can be displayed as virtual folders when listing them.
If you are using OpenStack with version 2 of the protocol, you can either use an API key or a username/password/tenant combination. To use the password based authentication, use a URL format like this:
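A hedged sketch; the option names are assumed to follow the --openstack-* / --auth-* pattern used in this section:

```
openstack://container/prefix?openstack-authuri=<authentication url>&auth-username=<username>&auth-password=<password>&openstack-tenant-name=<tenant>
```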
If you are using an API key, leave out the --auth-password and --openstack-tenant-name parameters and add --openstack-apikey=<apikey>.
If you are using OpenStack with version 3 of the protocol, you must supply: username, password, domain, and tenant name:
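A hedged sketch, extending the v2 form with domain information; the option names are assumed to follow the same pattern:

```
openstack://container/prefix?openstack-version=v3&openstack-authuri=<authentication url>&auth-username=<username>&auth-password=<password>&openstack-domain-name=<domain>&openstack-tenant-name=<tenant>
```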
The authentication response will contain a set of endpoints to be used for the actual transfers. In some cases, this response can contain multiple possible endpoints, each with a different region. To prefer a specific region, supply it with --openstack-region. If any of the returned endpoints has the same region (case-insensitive comparison), the first matching endpoint will be selected. If no region is specified, or no region matches, the first region in the response is used.
This page describes the S3 storage destination
The Simple Storage Service, S3, was originally described, developed and offered by Amazon via AWS. Since then, numerous other providers have adopted the protocol and offer S3-compatible services. While these services are mostly compatible with the core S3 protocol, a number of additional AWS-specific settings are usually not supported and will be ignored.
This page deals with S3 in general, for a specific setup on AWS S3, refer to the AWS specific page.
When storing data in S3, the storage is divided into a top-level "folder" called a "bucket", and each bucket has "objects", similar to files. For most providers, an object name with / characters will be interpreted as subfolders in some way.
In the original S3 specification, the bucket name was used as part of the hostname, causing some issues with bucket names that are not valid hostnames, and some delays for new buckets caused by DNS update speeds. Newer solutions use a single shared hostname and provide the bucket name as a parameter.
For AWS S3, and most other providers, the bucket name is a global name, shared across all users. This means that simple names, such as backup or data, will likely be taken, and attempts to use them will cause permission errors. For AWS, the recommendation is to use a GUID in the bucket name to make it unique. The Duplicati UI will recommend prefixing the account id to the bucket name to make it unique.
To use S3 as the storage destination, use a format such as:
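As a hedged sketch of what such a destination URL can look like (the bucket name, prefix, and server name are placeholders, and the s3-server-name, auth-username, and auth-password query parameters are assumed from Duplicati's usual conventions):

```shell
# Hypothetical S3 destination URL; replace all values with your own
s3://my-bucket/backup-prefix?s3-server-name=s3.example.com&auth-username=ACCESS_KEY&auth-password=SECRET_KEY
```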
Note that the default for S3 is to use unencrypted connections. The connections are secured with signatures, but all data transferred can be captured on the network. If the provider supports SSL/TLS, which most do, make sure to add --use-ssl=true
to also encrypt the connection.
Make sure you consult the provider documentation to get the server name you need for the bucket region. If you are using AWS, see the AWS S3 description.
The S3 storage destination can either use the AWS S3 library or Minio library, and you can choose the library to use with --s3-client=minio
.
Generally, both libraries will work with most providers, but the AWS library has some defaults that may not be compatible with other providers. While you can configure the settings, it may be simpler to use Minio with the default settings.
Since the bucket defines where data is stored, a bucket needs to be created before it can be used. All providers offer a way to do this through their UI, and allow you to set various options, such as the geographical region the bucket is located in.
If you use Duplicati to create the bucket, you can also set the option --s3-location-constraint
to provide the desired location. Support for this, and available regions, depends on the provider.
With S3 it is also possible to set the storage class which is sometimes used to fine-tune the cost/performance/durability of the files. The storage class is set with --s3-storage-class
, but the possible settings depend on the provider.
This page describes the FTP storage destination
The FTP protocol is widely supported, but it is generally considered a legacy protocol with security issues, even when correctly implemented. Due to its continued ubiquity, it is still supported by Duplicati, using FluentFTP.
To use the FTP backend, you can use a URL such as:
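As a hedged sketch (hostname, folder, and credentials are placeholders; credentials may alternatively be supplied via the auth-username and auth-password query parameters):

```shell
# Hypothetical FTP destination URL
ftp://username:password@ftp.example.com/backup-folder
```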
Despite FTP being a well documented standard, there are many different implementations of the protocol, so the FTP backend supports a variety of settings for configuring the connection. You can use a non-standard port through the hostname, such as ftp://hostname:2121
.
Due to the way FTP works, it requires multiple connections to transfer data, and the method for selecting the connection mode has a number of quirks. The default setting is "AutoPassive", which works well for most setups, leaving the burden of configuring the firewall to the server.
Use the option --ftp-data-connection-type
to choose a specific connection mode if the default does not work for your setup.
To enable encrypted connections, you can use the option --ftp-encryption-mode
and set it to either Implicit
or Explicit
. The Implicit
setting creates a TLS connection where everything is encrypted, whereas Explicit
is more commonly used; it creates an unencrypted connection and then upgrades it to an encrypted session.
The default setting is --ftp-encryption-mode=None
which uses unencrypted FTP connections.
The setting --ftp-encryption-mode=Auto
is the most compatible setting, but also insecure: it connects in unencrypted mode and then attempts to switch to encrypted mode, but continues unencrypted if the switch fails.
To further lock down the encryption mode, the option --ftp-ssl-protocols
can be used to limit the accepted protocols. Note that, due to unfortunate naming in .NET, the option --ftp-ssl-protocols=None
means "use the system defaults".
To support self-signed certificates, the FTP destination also supports the --accept-specified-ssl-hash
option, which takes a SHA1 certificate digest and approves the certificate if it matches that hash. This is similar to manual certificate pinning and allows trusting a specific certificate outside the operating system's normal trust chain.
For testing, it is also possible to use --accept-any-ssl-certificate
which will bypass certificate checks completely and enable man-in-the-middle attacks on the connection.
The FTP protocol uses Posix-style paths, where /
is the root folder and subfolders are separated by forward slashes. On some systems the filesystem is virtual, so the user only sees the root path and has no knowledge of the underlying real filesystem. On others, the paths map directly to the user's home folder, like /home/user
.
Use the option --ftp-absolute-path
to treat the source path as an absolute path, meaning that folder
maps to /folder
and not to /home/user/folder
.
A related option is the --ftp-use-cwd-names
option, which makes Duplicati keep track of the working directory and use the FTP server's CWD
command to set the working folder prior to making a request.
To verify that uploads actually work, the FTP backend will request the file after it has been uploaded, to check that it exists and has the correct size. This check is usually quite fast and does not impact backup speeds, but if needed it can be disabled with --disable-upload-verify
.
A related setting --ftp-upload-delay
adjusts the delay inserted after the upload but before verifying that the file exists, which is required on some servers to ensure the file is fully flushed before its existence is validated.
Because the FTP protocol can sometimes be difficult to diagnose, the option --ftp-log-to-console
will enable logging of various diagnostic output to the terminal. This option works best with the BackendTool or BackendTester application. The option --ftp-log-privateinfo-to-console
will also log the usernames and passwords being transmitted, to further track down issues. Neither option should be set outside of testing and evaluation scenarios.
aFTP
Prior to Duplicati 2.1.0.2 there were two different FTP backends, FTP and Alternative FTP (aFTP). This was done because the primary FTP backend was based on FtpWebRequest and lacked some features. The aFTP backend was introduced to keep the existing FTP backend intact while offering more features through the FluentFTP library.
With Duplicati 2.1.0.2 the codebase was upgraded to .NET8 which means that FtpWebRequest
is now deprecated. For that reason, the FTP backend was converted to also be based on FluentFTP, so both FTP backends are currently using the same library.
The aFTP
backend is still available for backwards compatibility, but is the same as the FTP backend, with some different defaults. The aFTP
backend will likely be marked deprecated in a future version, and eventually removed.
This page describes the Rclone storage destination
Duplicati has a wide variety of storage destinations, but the Rclone project has even more! If you are familiar with Rclone, you can configure Duplicati to utilize Rclone to transfer files and extend to the full set of destinations supported by Rclone.
If you are using Rclone, some features, such as bandwidth limits and transfer progress, do not work.
Duplicati does not bundle Rclone, so you need to download and install the appropriate binaries before you can use this backend. The URL format for the Rclone destination is:
If the remote repo is not a valid hostname, you can instead use this format:
If you need to change the Rclone local repo you can use the option --rclone-local-repository
which will otherwise be set to local
, which works for most setups.
If you need to supply options to Rclone, these can be passed via --rclone-option
. Note that the values must be URL encoded, and multiple options can be passed by separating them with spaces before encoding.
As an example, adding "--opt1=a --opt2=b" requires URL encoding and results in:
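The encoding itself can be done with any URL-encoding helper; for example, using python3's urllib.parse.quote (shown here only as one convenient way to produce the encoded value):

```shell
# URL encode the combined rclone options before passing them to --rclone-option
python3 -c 'import urllib.parse; print(urllib.parse.quote("--opt1=a --opt2=b"))'
# prints: --opt1%3Da%20--opt2%3Db
```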
This page describes the WebDAV storage destination
The WebDAV protocol is a minor extension of the HTTP protocol used for web requests. Because it is compatible with HTTP, it also supports SSL/TLS certificates and verification, similar to what websites use.
To use the WebDAV destination, you can use a url such as:
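As a hedged sketch (hostname, path, and the auth-username/auth-password query parameters are placeholders and assumptions based on Duplicati's usual URL conventions):

```shell
# Hypothetical WebDAV destination URL
webdav://webdav.example.com/backup-folder?auth-username=USERNAME&auth-password=PASSWORD
```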
You can supply a port through the hostname, such as webdav://hostname:8080/path
.
There are three different authentication methods supported with WebDAV:
Integrated Authentication (mostly on Windows)
Use --integrated-authentication=true
to enable. This works for some hosts on Windows and most likely has no effect on other systems as it requires a Windows-only extension to the request and a server that supports it.
Digest Authentication
Use --force-digest-authentication=true
to use Digest-based authentication
Basic Authentication
Sends the username and password in plain text. This is the default, but it is insecure unless an SSL/TLS encrypted connection is used.
You need to examine your destination server's documentation to find the supported and recommended authentication method.
To use an encrypted connection, add the option --use-ssl=true
such as:
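As a hedged sketch, the same hypothetical URL with the use-ssl option appended:

```shell
# Hypothetical WebDAV destination URL over an encrypted connection
webdav://webdav.example.com/backup-folder?auth-username=USERNAME&auth-password=PASSWORD&use-ssl=true
```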
This will then use an HTTPS secured connection subject to the operating system certificate validation rules. If you need to use a self-signed certificate that is not trusted by the operating system, you can use the option --accept-specified-ssl-hash=<hash>
to specifically trust a certain certificate. The hash value is reported if you attempt to connect and the certificate is not trusted.
This technique is similar to certificate pinning: it blocks man-in-the-middle attacks, but it also prevents rotating the certificate without updating the option.
For testing setups you can also use --accept-any-ssl-certificate
that will disable certificate validation. As this enables various attacks, it is not recommended outside of testing.
This page describes the Rackspace CloudFiles storage destination
Duplicati supports storing files with Rackspace CloudFiles, which is a large-scale object storage, similar to S3. With CloudFiles you store "objects" (similar to files) in "containers" which define various properties shared between the objects. If you use a /
in the object prefix, they can be displayed as virtual folders when listing them.
To use CloudFiles, you can use the following URL format:
The default authentication will use the US endpoint, which will not work if you are a customer of the UK service. To choose the UK account, add --cloudfiles-uk-account=true
to the request:
If you need to use a specific host, you can also provide the authentication URL directly with the --cloudfiles-authentication-url
option. If you are providing the URL, the --cloudfiles-uk-account
option will be ignored.
This page describes the Alibaba Cloud Object Storage Service, also known as Aliyun OSS.
Duplicati supports storing files on Alibaba Cloud Object Storage Service, aka Aliyun OSS, which is a large-scale object storage, similar to S3. In Aliyun OSS you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a /
in the object prefix, they can be displayed as virtual folders when listing them.
Note that the bucket id is globally unique, so it is recommended to use a name that is not likely to conflict with other users, such as prefixing the bucket name with the project id or a similar unique value. If you use a simple name, like data
or backup
it is likely already associated with another project and you will get permission errors when attempting to use it.
To use Aliyun OSS, you can use the following URL format:
The endpoint is defined by Aliyun and needs to match the region the bucket was created in. The access key can be obtained or created in the Cloud Console.
This page describes the Backblaze B2 storage destination
Duplicati supports storing files with Backblaze B2, which is a large-scale object storage, similar to S3. With B2 you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a /
in the object prefix, they can be displayed as virtual folders when listing them.
To use the B2 storage destination, use the following URL format:
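As a hedged sketch (the bucket and prefix are placeholders; mapping the application key id and application key to auth-username and auth-password is an assumption based on Duplicati's usual URL conventions):

```shell
# Hypothetical B2 destination URL
b2://my-bucket/backup-prefix?auth-username=KEY_ID&auth-password=APPLICATION_KEY
```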
You can use the Backblaze UI to create your buckets, but if you need to create buckets with Duplicati, this is also possible. The default is to create private buckets, but you can create public buckets with --b2-create-bucket-type=allPublic
.
You can change the size of file listings to better match pricing and speed through --b2-page-size
, which defaults to 500, meaning there will be one list request for each 500 objects. Setting this higher may reduce the number of requests, but each request may be priced as a more expensive request.
If you prefer downloads from your custom domain name, you can supply it with --b2-download-url
. This setting does not affect uploads.
This page describes the iDrive e2 Destination
Duplicati supports storing files on iDrive e2, which is a large-scale object storage, similar to S3. In iDrive e2 you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a /
in the object prefix, they can be displayed as virtual folders when listing them.
Note that the bucket id is globally unique, so it is recommended to use a name that is not likely to conflict with other users, such as prefixing the bucket name with the project id or a similar unique value. If you use a simple name, like data
or backup
it is likely already associated with another project and you will get permission errors when attempting to use it.
Note that iDrive has a similar offering called iDrive Cloud Backup, which is not currently supported by Duplicati.
To use iDrive e2, you can use the following URL format:
This page describes the Box.com storage destination
Duplicati supports using box.com as a storage destination. Note that Duplicati stores compressed and encrypted volumes on box.com and does not store files so they are individually accessible from box.com.
To use box.com, use the following URL format:
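As a hedged sketch (the folder name is a placeholder, and the authid query parameter is assumed from Duplicati's usual OAuth conventions):

```shell
# Hypothetical box.com destination URL
box://backup-folder?authid=AUTHID
```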
To use box.com you must first obtain an AuthID
by using a Duplicati service to log in to box.com and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.
When files are deleted from your box.com account, they will be placed in the trash folder. To avoid old files taking up storage in your account, you can add --box-delete-from-trash
which will then also remove the file from the trash folder.
This page describes the Mega.nz storage destination
To use the storage destination, you can use the following URL format:
NOTE: This destination currently relies on a third-party client library that is no longer maintained. Since there is little documentation on how to integrate with Mega.nz, using this storage destination is no longer recommended.
This page describes common scenarios for configuring Duplicati with Windows
Before you can install Duplicati, you need to decide on three different parameters:
Your machine CPU type: x64, Arm64, or x86 (32-bit)
To use Duplicati on Windows, you first need to decide which kind of instance you want: GUI (aka TrayIcon), Server, Agent, CLI. The section on Choosing Duplicati Type has more details on each of the different types.
Finally you need to locate information on what CPU architecture you are using:
x64: 64bit Intel or AMD based CPU. This is the most common CPU at this time.
Arm64: 64bit ARM based CPU. Some laptops, tablets and servers use it.
x86: 32bit Intel or AMD based CPU. Note that Windows 10 was the last version to support 32 bit processors.
If you are in doubt, you can try the x64 version, or use Microsoft's guide for determining the CPU.
Once you have decided on the package you want, you are ready to download it. The default version shown on the main download page is the x64 GUI version in .msi
format. The full list of packages can be obtained via the main download page, and then clicking "Other versions".
For users with a desktop environment and no special requirements, the TrayIcon instance is the recommended way to run Duplicati. If you are using the .msi
package to install Duplicati, you will see an option to automatically start Duplicati, as well as create a shortcut on your desktop and in the start menu. If you need to manually start Duplicati, you can find the executable in:
When running the TrayIcon in a user context, it will create a folder in your home folder, typically C:\Users\<username>\AppData\Local\Duplicati
where it stores the local databases and the Server database with the backup configurations.
The Server is a regular executable and can simply be invoked with:
When invoked as a regular user, it will use the same folder, C:\Users\<username>\AppData\Local\Duplicati
, as the TrayIcon and share the configuration.
If you want to run Duplicati as a Windows Service, you can use the bundled service tool to install/uninstall the service:
When installing the Service it will automatically start, and likewise, uninstalling it will stop the service. If you need to pass options to the server, you can provide them to the INSTALL command:
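As a hedged illustration (the exact set of options depends on your setup; --webservice-port is shown only as an example of a server option):

```shell
# Run from an elevated command prompt; the port value is a placeholder
Duplicati.WindowsService.exe INSTALL --webservice-port=8200
```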
You can also use the preload.json file to pass settings to the Server when running as a service, which allows you to change the settings without the uninstall/install cycle (you still need to restart the service).
Note: When running the Windows Service, Duplicati will default to using port 8200 and fail if that port is not available. If you are running the TrayIcon, that will run a different instance, usually at port 8300. If you want to connect the TrayIcon to the Windows Service, edit the shortcut to Duplicati:
With the Agent there is minimal setup required: registering the machine with the Duplicati Console. The default installation installs the Agent as a Windows Service, meaning it runs in the LocalService system account instead of as the local user. Because of this, it cannot open the browser and start the registration process for you. Instead, you must look in the Windows Event Viewer and extract the registration link from there.
You can also register the Agent using the Agent executable:
After the Agent has been registered, restart the service, and it will then be available on the Duplicati Console.
If you have a pre-authenticated link for registering the machine, and would like to automate the process, you can place a file in C:\ProgramData\Duplicati\preload.json
with content similar to:
Using the CLI is simply a matter of invoking the binary:
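As a hedged sketch (the storage URL, source path, and passphrase are placeholders, and the exact executable name may vary per package):

```shell
# Hypothetical backup invocation; replace the URL and path with your own
Duplicati.CommandLine.exe backup "s3://my-bucket/prefix?auth-username=ID&auth-password=SECRET" "C:\Users\me\Documents" --passphrase=MY_SECRET
```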
Since the CLI also needs a local database for each backup, it will use the same location as described for the Server above to place databases. In addition to this, it will keep a small file called dbconfig.json
in the storage folder where it maps URLs to databases. The intention of this is to avoid manually specifying the --dbpath
parameter on every invocation.
If you specify the --dbpath
parameter, it will not use the dbconfig.json
file and it will not store anything in the local datafolder.
Each package of Duplicati contains a number of support utilities, such as the RecoveryTool. Each of these can be invoked from the commandline with their executable name and all contain built-in help. For example, to invoke ServerUtil, run:
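For example, assuming the executable naming follows the same pattern as the other tools (the exact file name may differ per package):

```shell
# Show the built-in help for ServerUtil
Duplicati.CommandLine.ServerUtil.exe help
```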
This page describes the SFTP (SSH) storage destination
The SFTP destination uses the ubiquitous SSH system to implement a secure file transfer service. Using SSH allows secure logins with keys and is generally a secure way to connect to another system. The SSH connection is implemented with Renci SSH.NET.
To use the SFTP destination you can use a URL such as:
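As a hedged sketch (hostname, folder, and credentials are placeholders):

```shell
# Hypothetical SFTP destination URL
ssh://username:password@sftp.example.com/backup-folder
```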
You can supply a non-standard port through the hostname, such as ssh://hostname:2222/folder
.
It is very common, and more secure, to use key-based authentication, and Duplicati supports this as well. You can either provide the entire key as part of the URL or give a path to the key file. If the key is encrypted, you can supply the passphrase with --auth-password
.
To use a private key inline, you need to url encode it first and then pass it to --ssh-key
. An example with an inline private key:
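The encoding step can be done with any URL-encoding helper; for example, with python3's urllib.parse.quote (the two-line string stands in for the real PEM contents; newlines become %0A):

```shell
# Hypothetical: URL encode placeholder key data and prefix it with sshkey://
python3 -c 'import urllib.parse,sys; print("sshkey://" + urllib.parse.quote(sys.argv[1], safe=""))' 'line1
line2'
# prints: sshkey://line1%0Aline2
```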
Note that you need both the prefix sshkey://
and you need to URL encode the contents.
If you have the SSH keyfile installed in your home folder, you can use the file directly with --ssh-keyfile
:
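As a hedged sketch (the key path and passphrase are placeholders; these options accompany the destination URL):

```shell
# Hypothetical options for key file authentication
--ssh-keyfile=/home/username/.ssh/id_rsa
--auth-password=KEY_PASSPHRASE
```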
Note that Duplicati does not currently support key agents, so you must supply the password here.
For best security, it is recommended to use a separate identity and key file for the user, so a compromise of the keys does not grant more permissions than required.
Since SSH does not have a global key registry, as HTTPS has for certificates, it is possible to launch a man-in-the-middle attack on an SSH connection. To prevent this, Duplicati and other SSH clients use key pinning, where the previously recorded host key hash is saved, and any change to the host key must be handled manually by the user.
On the first connection to the SSH server, Duplicati will throw an exception that explains how to trust the server host key, including the host key fingerprint. Once you obtain the host key fingerprint, you can supply it with the --ssh-fingerprint
option.
If the host key changes, you will get a different message, which also reports the new host key so you can update it. The option --ssh-accept-any-fingerprints=true
is only recommended for testing and not for production setups as it will disable the man-in-the-middle protection.
If you are using the UI, you can click the "Test connection" button and it will guide you to set the host key parameters based on what the server reports.
By default, Duplicati will assume that the connection works once it has been established. If the SSH server is malfunctioning it may cause operations to hang. To guard against this case, you can set the --ssh-operation-timeout
option to enforce a maximum time the operation may take.
A different kind of timeout is when firewalls and other network equipment monitors the connections and closes them if there is no activity. Because Duplicati may open a connection and then perform a long operation locally, it may cause the connection to be closed due to inactivity. The option --ssh-keepalive
can be used to define a keep-alive interval where messages are sent if there is no other activity.
Both options are default disabled and should only be enabled if there are special conditions in a setup where the options are needed.
This page describes the Jottacloud storage destination
Within Jottacloud, each registered machine is a device that can be used for storage, and within each device you can choose the mount point. By default, Duplicati will use the special device Jotta
and the mount point Archive
.
If you need to store data on another device, you can use the options --jottacloud-device
and --jottacloud-mountpoint
to set the device and mount point. If you only set the device, the mount point will be set to Duplicati
.
If you need to tune the performance and resource usage to match your specific setup, you can adjust the two parameters:
--jottacloud-threads
: The number of threads used to fetch chunks
--jottacloud-chunksize
: The size of chunks to download with each thread
This page describes the Tencent COS storage destination
To use Tencent COS, you can use the following URL format:
Note that the bucket must be created from within the Cloud Console prior to use.
NOTE: The ARCHIVE
and DEEP_ARCHIVE
storage classes do not work well with Duplicati. Because Duplicati frequently verifies that things are working as expected, you need to disable these checks. You also need to disable cleanup of data after deleting versions. Restores are tricky, because you need to manually restore data to the standard storage class before Duplicati can access it.
To use the storage destination, you can use the following URL format:
To use Jottacloud you must first obtain an AuthID
by using a Duplicati service to log in to Jottacloud and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.
Duplicati supports storing files on Tencent COS, which is a large-scale object storage, similar to S3. In Tencent COS you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a /
in the object prefix, they can be displayed as virtual folders when listing them.
The bucket name is user-chosen, and the region must match the region where the bucket was created. The remaining values can be obtained from the Cloud Console.
The uploaded objects can use different storage classes, which can be set with --cos-storage-class
.
This page describes the Google Cloud Storage destination
Duplicati supports storing files on Google Cloud Storage, aka GCS, which is a large-scale object storage, similar to S3. In GCS you store "objects" (similar to files) in "buckets" which define various properties shared between the objects. If you use a /
in the object prefix, they can be displayed as virtual folders when listing them.
Note that the bucket id is globally unique, so it is recommended to use a name that is not likely to conflict with other users, such as prefixing the bucket name with the project id or a similar unique value. If you use a simple name, like data
or backup
it is likely already associated with another project and you will get permission errors when attempting to use it.
To use GCS, you can use the following URL format:
To use Google Cloud Storage you must first obtain an AuthID
by using a Duplicati service to log in to Google and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.
You can create a bucket from within the Google Cloud Console and here you can set all options as desired. If you prefer to let Duplicati create the bucket, you can also set the parameters from Duplicati.
You set the project the bucket belongs to with --gcs-project=<project id>
and the desired location with --gcs-location=<location>
. You can get the project id from the Google Cloud Console and see the possible GCS bucket locations in the GCS documentation.
When creating the bucket you can also choose the storage class with --gcs-storage-class
. You can choose any of the storage class values shown in the GCS documentation, even if they are not reported as possible by Duplicati.
These options have no effect if the bucket is already created.
This page describes the Azure Blob Storage destination
Duplicati supports backing up to Azure Blob Storage, which is a large scale object storage, similar to S3.
To use the Azure Blob Storage destination, you can use the following URL format:
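As a hedged sketch (the container name is a placeholder; mapping the storage account name and access key to auth-username and auth-password is an assumption based on Duplicati's usual URL conventions):

```shell
# Hypothetical Azure Blob Storage destination URL
azure://my-container?auth-username=ACCOUNT_NAME&auth-password=ACCESS_KEY
```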
You can create the container via the Azure portal, but if you prefer, you can also let Duplicati create the container for you. Container names are unique within the storage account and have a number of restrictions.
If you use the UI, the "Test connection" button will prompt you if the container needs to be created.
Instead of using a traditional Access Key, you can also use a SAS token. To use this, supply it instead of the access key, for example:
This page describes the Microsoft Group storage destination
Duplicati supports using Microsoft Groups as a storage destination. To use the destination, use the following URL format:
To use MS Group you must first obtain an AuthID
by using a Duplicati service to log in to Microsoft and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.
You can either provide the group email via --group-email
or the group id via --group-id
. If you provide both, they must resolve to the same group id.
If you need to gain more performance you can fine-tune the performance of chunked transfers with the options:
--fragment-size
--fragment-retry-count
--fragment-retry-delay
For most uses, it is recommended that these are kept at their default settings and only changed after confirming that there is a gain to be made by changing them.
This page describes the CIFS storage destination
The Common Internet File System (CIFS) backend provides native support for accessing shared network resources using the CIFS/SMB protocol. This backend enables direct interaction with Windows shares and other CIFS-compatible network storage systems.
To use the CIFS destination, you can use a url such as:
CIFS supports two distinct transport protocols, each with its own characteristics:
DirectTCP (directtcp)
Port: 445
Characteristics:
Faster performance
Modern implementation
Preferred for newer systems
Direct TCP/IP connection
Lower overhead
NetBIOS (netbios)
Port: 139
Characteristics:
Legacy support
Compatible with older systems
Additional protocol overhead
Slower performance
Uses NetBIOS naming service
--
Defines the read buffer size, in bytes, for SMB operations (will be capped automatically by SMB negotiated values; values below 10000 bytes will be ignored)
--
Defines the write buffer size, in bytes, for SMB operations (will be capped automatically by SMB negotiated values; values below 10000 bytes will be ignored)
This page describes the pCloud storage destination
The pCloud provider was added in Duplicati v2.1.0.100, and is not yet included in a stable release.
To use pCloud, use the following URL format:
Due to the way the pCloud authentication system is implemented, the generated AuthID is not stored by the OAuth server and cannot be revoked via the OAuth server. To revoke the token, you must revoke the Duplicati app from your pCloud account, which will revoke all issued tokens.
This also means that after issuing the pCloud token, you do not need to contact the OAuth server again, unlike other OAuth solutions.
This page describes the Dropbox storage destination
To use Dropbox, use the following URL format:
This page describes how to use the AWS S3 storage destination
To use the AWS S3 destination, use a format such as:
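As a hedged sketch (bucket, prefix, region, and credentials are placeholders; the query parameter names are assumptions based on the options described on this page):

```shell
# Hypothetical AWS S3 destination URL using a region instead of a hostname
s3://my-bucket/backup-prefix?s3-location-constraint=us-east-1&auth-username=ACCESS_KEY&auth-password=SECRET_KEY&use-ssl=true
```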
If you do not supply a hostname, but instead a region, such as us-east-1
, the hostname will be auto-selected, based on the region. If the region is not supported by the library yet, you can supply the hostname via --server-name=<hostname>
.
Beware that S3 by default will not use an encrypted connection, and you need to add --use-ssl=true
to encrypt the connection.
When creating a bucket, it will be created in the location supplied by --s3-location-constraint
. In the case no constraint is supplied, the AWS library will decide what to do. If the bucket already exists, it cannot be created again, so the --s3-location-constraint
setting will not have any other effect than choosing the hostname.
Glacier storage does not work well with Duplicati. Because Duplicati frequently verifies that things are working as expected, you need to disable these checks. You also need to disable cleanup of data after deleting versions. Restores are tricky, because you need to retrieve data manually from Glacier before Duplicati can work with it.
CIFS Backend is available on Canary release from
Duplicati supports using pCloud as a storage destination. Note that Duplicati stores compressed and encrypted volumes on pCloud and does not store files so they are individually accessible from pCloud.
To use pCloud you must first obtain an AuthID
by using a Duplicati service to log in to pCloud and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.
Duplicati supports using Dropbox as a storage destination. Note that Duplicati stores compressed and encrypted volumes on Dropbox and does not store files so they are individually accessible from Dropbox.
To use Dropbox you must first obtain an AuthID
by using a Duplicati service to log in to Dropbox and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.
The AWS S3 storage destination is implemented with the general S3 storage destination, so all details from that page apply here as well, but some additional features are supported by AWS.
By default, the objects are created with the "Standard" storage class, which has optimal access times and redundancy. More information about the different storage classes is available from AWS. You can choose the storage class with the option --s3-storage-class
. Note that you can provide any string here that is supported by your AWS region, despite the UI only offering a few different ones.
This page describes the SharePoint v2 storage destination
Duplicati supports using Microsoft SharePoint as a storage destination. This page describes the SharePoint destination that uses the Graph API; for the SharePoint provider that uses the legacy API, see SharePoint.
To use SharePoint, use the following URL format:
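The URL format block is not reproduced here. As a hedged sketch, the v2 provider follows the same shape as the other OAuth-based destinations: a scheme name, the site and folder path, and an authid parameter. The scheme name and site path below are assumptions; consult the destination overview for the exact format:

```shell
# Scheme name "sharepoint-v2" and the site path are assumptions
duplicati-cli backup "sharepoint-v2://contoso.sharepoint.com/sites/Team/Backups?authid=..." /data
```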
To use SharePoint v2 you must first obtain an AuthID
by using a Duplicati service to log in to Microsoft and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.
If you need to gain more performance you can fine-tune the performance of chunked transfers with the options:
--fragment-size
--fragment-retry-count
--fragment-retry-delay
For most uses, it is recommended that these are kept at their default settings and only changed after confirming that there is a gain to be made by changing them.
This page describes the OneDrive storage destination
Duplicati supports using Microsoft OneDrive as a storage destination. Note that Duplicati stores compressed and encrypted volumes on OneDrive and does not store files so they are individually accessible from OneDrive.
To use OneDrive, use the following URL format:
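A sketch of the URL format, assuming the onedrivev2 scheme used by recent Duplicati versions (verify the scheme name for your version):

```shell
# "Backups/machine1" is a hypothetical folder inside the OneDrive
duplicati-cli backup "onedrivev2://Backups/machine1?authid=..." /home/user/data
```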
To use OneDrive you must first obtain an AuthID
by using a Duplicati service to log in to Microsoft and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.
A default drive will be used to store the data. If you require another drive to be used to store data, such as a shared drive, use the --drive-id=<drive id>
option.
This page describes the Storj storage destination
Duplicati supports backups to the Storj network which is a large-scale decentralized storage network. The destination supports two different ways of authenticating: Access Grant and Satellite API.
To use the access grant method, use the following URL format:
To use a satellite API, use the following URL format:
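Sketches of the two forms, with placeholder values; the query parameter names are assumptions derived from the option names mentioned below:

```shell
# Access grant form (parameter name is an assumption)
duplicati-cli backup "storj://duplicati/folder?storj-shared-access=<access-grant>" /data

# Satellite API form (parameter names are assumptions)
duplicati-cli backup "storj://duplicati/folder?storj-api-key=<key>&storj-secret=<passphrase>&storj-satellite=<satellite-host>" /data
```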
If the --storj-satellite option is omitted, it defaults to a US-based endpoint. To choose the bucket where data is stored, use the --storj-bucket option, which defaults to duplicati. If further differentiation is needed, use --storj-folder to specify a folder within the bucket where data is stored.
This page describes the SharePoint storage destination
Duplicati supports using Microsoft SharePoint as a storage destination. This page describes the SharePoint provider that uses the legacy API; for the SharePoint provider that uses the Graph API, see SharePoint v2.
To use SharePoint, use the following URL format:
If you are on Windows, it may be possible to use the current user's credentials to authenticate. Support for this depends on many details and is not available in all cases. To use integrated authentication, use the following URL format:
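Sketches of both forms; the mssp scheme and the integrated-authentication parameter are assumptions based on older Duplicati releases, and the site path is hypothetical:

```shell
# Explicit credentials (scheme name and site path are assumptions)
duplicati-cli backup "mssp://contoso.sharepoint.com/sites/Team/Documents/Backup?auth-username=user@contoso.com&auth-password=..." /data

# Integrated authentication on Windows (parameter name is an assumption)
duplicati-cli backup "mssp://contoso.sharepoint.com/sites/Team/Documents/Backup?integrated-authentication=true" /data
```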
Instead of deleting files directly, they can be moved to the recycle bin by setting the option --delete-to-recycler.
This gives some additional safety if a version removal was unintended, but is not generally recommended, as it is a manual process to recover from a partial delete.
The options --web-timeout
and --chunk-size
can be used to fine-tune performance that matches your setup, but generally it is recommended to keep them at their default values.
If you are running Duplicati in a data center with a very stable connection, you can use the option --binary-direct-mode
to enable direct transfers for optimal performance.
This page describes the Google Drive storage destination
Duplicati supports using Google Drive as a storage destination. Note that Duplicati stores compressed and encrypted volumes in Google Drive and does not store files so they are individually accessible from Google Drive.
To use Google Drive, use the following URL format:
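A sketch of the URL format, assuming the googledrive scheme (verify the scheme name for your version; the folder name is hypothetical):

```shell
duplicati-cli backup "googledrive://Backups/machine1?authid=..." /home/user/data
```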
To use Google Drive you must first obtain an AuthID
by using a Duplicati service to log in to Google and approve the access. See the page on the OAuth Server for different ways to obtain an AuthID.
Duplicati can work with limited access to Google Drive, where it only has access to its own files. This access is recommended, because it prevents accidents where files not relevant for Duplicati can be read or written. On the community server, this option is called "Google Drive (limited)".
Unfortunately, the security model in Google Drive sometimes resets the access, cutting off Duplicati from accessing the files it has created. If this happens, it is not currently possible to re-assign access to Duplicati, and in this case you must grant full access to the Google Drive for Duplicati to work. On the community server, this option is called "Google Drive (full access)".
If you need to use a Team Drive, set the option --googledrive-teamdrive-id
to the ID for the Team Drive to use. If this is not set, it will use the personal Google Drive. For example:
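The example block is not reproduced here; a sketch, with the scheme name assumed and <team-drive-id> as a placeholder:

```shell
# <team-drive-id> is a placeholder for the Team Drive's ID
duplicati-cli backup "googledrive://Backups/machine1?authid=...&googledrive-teamdrive-id=<team-drive-id>" /data
```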
This page describes the OneDrive For Business storage destination
Duplicati supports using Microsoft OneDrive for Business as a storage destination. Note that Duplicati stores compressed and encrypted volumes on OneDrive and does not store files so they are individually accessible from OneDrive.
To use OneDrive For Business, use the following URL format:
If you are on Windows, it may be possible to use the current user's credentials to authenticate. Support for this depends on many details and is not available in all cases. To use integrated authentication, use the following URL format:
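Sketches of both forms; the od4b scheme, the integrated-authentication parameter, and the paths are assumptions, so confirm them against the destination overview:

```shell
# Explicit credentials (scheme and path are assumptions)
duplicati-cli backup "od4b://contoso-my.sharepoint.com/personal/user_contoso_com/Documents/Backup?auth-username=user@contoso.com&auth-password=..." /data

# Integrated authentication on Windows (parameter name is an assumption)
duplicati-cli backup "od4b://contoso-my.sharepoint.com/personal/user_contoso_com/Documents/Backup?integrated-authentication=true" /data
```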
Instead of deleting files directly, they can be moved to the recycle bin by setting the option --delete-to-recycler.
This gives some additional safety if a version removal was unintended, but is not generally recommended, as it is a manual process to recover from a partial delete.
The options --web-timeout
and --chunk-size
can be used to fine-tune performance that matches your setup, but generally it is recommended to keep them at their default values.
If you are running Duplicati in a data center with a very stable connection, you can use the option --binary-direct-mode
to enable direct transfers for optimal performance.
This page describes the Sia storage destination
Duplicati supports backups to the Sia network, a large-scale decentralized storage network. To use the Sia destination, use this URL format:
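A sketch using the defaults described below (host port 9980, path /backup); the sia-password parameter name is an assumption:

```shell
# Password may be omitted if the host allows unauthenticated connections
duplicati-cli backup "sia://localhost:9980/backup?sia-password=<password>" /data
```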
If the host supports unauthenticated connections, you can omit the password. If not supplied, the port defaults to 9980 and the path defaults to /backup.
To adjust the amount of redundancy in the Sia network, use the option --sia-redundancy
. Note that this value should be more than 1
.
This page describes the command line interface (CLI)
The commandline interface allows running all Duplicati operations without a server instance. This is useful if your setup does not benefit from a UI and you want to use an external scheduler to perform the operations.
The binary is called Duplicati.CommandLine.exe
on Windows and duplicati-cli
on MacOS/Linux. All commands from the commandline interface follow the same structure:
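The structure can be sketched as follows; the exact placement of arguments varies by command, so treat this as a shape rather than a specification:

```shell
duplicati-cli <command> <remote-url> [arguments] [options]
```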
Each command also accepts the option --dbpath=<path to local database>. If it is not supplied, Duplicati will use a shared JSON file in the settings folder to keep track of which database belongs to each backup. Since no state is given, the remote url is used as a key, because it is expected to uniquely identify each backup. If no entry is found, a new entry is created and subsequent operations will use that database.
Most options have no relationship and can be applied in any order, but some options, mostly the filter options, are order sensitive and must be supplied in the order they are evaluated. The remote url is a url-like representation of the storage destination and options. The destination overview page provides an overview of what is currently supported.
The list of options that are supported is quite extensive and only the most common options are described on this page. For the sensitive options: --passphrase
, --auth-username
, and --auth-password
, these can also be supplied through the matching environment variables: PASSPHRASE
, AUTH_USERNAME
, and AUTH_PASSWORD
. For further safeguarding of these values, see the section on using the secret provider.
All commands support the --dry-run
parameter that will simulate the operations and provide output, but not actually change any local or remote files.
The help command
The commandline interface has full documentation for all supported options and small examples for each of the supported operations. Running the help command will output the possible topics:
To list all options supported by the commandline interface, run the following command:
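A sketch of the two invocations; the argument used to list all options is an assumption, so check the help output of your installation:

```shell
# List the available help topics
duplicati-cli help

# Show the full option list (topic name is an assumption)
duplicati-cli help options
```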
Note that the number of options is quite large, so you will likely need to use some kind of search functionality to navigate the output.
The most common command is clearly the backup command, and the related restore command. To run a backup, use the following command:
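A sketch of the backup invocation, using a local file destination as a placeholder:

```shell
duplicati-cli backup "<remote-url>" <source-path> [options]

# e.g. back up a documents folder to a mounted drive
duplicati-cli backup "file:///mnt/backup" /home/user/documents --passphrase=secret
```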
The source path
argument can be repeated to include multiple top-level folders. By default, backups are encrypted on the remote destination, and if no passphrase is supplied with --passphrase
, the commandline interface will prompt for one. If the backups should be done unencrypted, provide the option --no-encryption
.
The most common additional option(s) supplied are the filter options. The filters can selectively change what files and folders are excluded from the source paths. The page on filters describe the format of filters. Filters are supplied with the --include
and --exclude
options. For example:
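The example block is not reproduced here; a sketch with hypothetical paths, where the include is listed first so it is evaluated before the broader exclude:

```shell
# Keep one important subfolder, exclude the rest of the cache;
# quoting prevents the shell from expanding special characters.
duplicati-cli backup "file:///mnt/backup" /home/user \
    --include="/home/user/.cache/important/" \
    --exclude="/home/user/.cache/"
```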
When supplying only exclude filters, any file not matching an exclude will be included; likewise, if only include filters are present, anything not matching an include will be excluded. The order of the arguments defines the order in which the filters are evaluated. Beware that some symbols, such as * and \, need to be escaped on the commandline, and the rules vary by operating system and terminal application/shell.
If either of the --keep-time
, --keep-versions
, or --retention-policy
options are set, a successful backup will subsequently invoke the delete and compact operations as needed. This enables a single command to run all required maintenance, which can optionally be invoked as manual steps instead.
The restore command is equally as important as the backup command and can be executed with:
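A sketch of the restore invocation, with a hypothetical file path:

```shell
duplicati-cli restore "<remote-url>" [<filenames>] [options]

# e.g. restore a single file from the latest version
duplicati-cli restore "file:///mnt/backup" "/home/user/documents/report.txt" --passphrase=secret
```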
The restore command in this form will restore the specified file(s) to their original location. If a file is already present in the original location, the restored copy gets a timestamp added to its name. If no files are specified, or the filename is *
, all files will be restored.
To restore to a different location than the original, such as to a staging folder, use the option --restore-path=<destination>
. The restore will find the shortest common path for the files to restore, and make a minimal folder structure to restore into.
If you are sure you want to restore the files, and potentially lose existing files, use the option --overwrite
.
The restore command will restore from the latest version of the backup, but other versions can be selected with the --version=<version>
. As with backups, the --include
and --exclude
options can be used to narrow down the files to restore.
The find command is responsible for locating files within the backups:
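A sketch of the find invocation, with a hypothetical file path:

```shell
duplicati-cli find "<remote-url>" [<filename>] [options]

# e.g. list every version of a single file
duplicati-cli find "file:///mnt/backup" "/home/user/documents/report.txt" --all-versions
```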
If no filename is specified, the command will instead list all known backup versions (or "snapshots"). Multiple filenames can be specified, and they are all treated as filter expressions. If a full file path is specified, the find command will instead list all versions of that file.
To list files in a specific version, use the --version=<version>
option. To search across all versions, use the --all-versions
option.
As with backup and restore, the --include
and --exclude
filters can be added to assist in narrowing down the search output.
A related operation is the "compare" command, which will show a summary of differences between two versions.
For normal use, the backup, restore, and find commands should be sufficient. However, in some exceptional cases it may be necessary to fix a problem manually. If such a situation occurs, Duplicati will abort the backup and give an error message that indicates the problem.
If the local database is missing or somehow out-of-sync with the remote storage, it can be rebuilt with the repair command. The repair command is invoked with:
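A sketch of the repair invocation:

```shell
duplicati-cli repair "<remote-url>" [options]
```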
If the local database is missing, it is recreated from the remote storage. If the local database is present, the repair command will attempt to recreate any data that is missing on the remote storage. This is only possible if the missing data is still available on the local system; if the required data cannot be found, the repair command fails with an error message explaining what is missing.
The command list-broken-files will check which remote files are missing or damaged and report what files can no longer be restored due to this:
The related command "affected" can give a similar output, reporting what files would be lost if the given remote files were damaged. It is possible that files can be partially restored despite damaged remote files; for handling partial restores, see the section on disaster recovery.
If the remote files cannot be recovered, but you would like the backup to continue, you can use the purge-broken-files command to rewrite the remote storage to simply exclude the files that are no longer restorable:
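Sketches of the two commands used in this workflow:

```shell
# Report which files can no longer be restored
duplicati-cli list-broken-files "<remote-url>"

# Rewrite the backup to exclude the unrestorable files
duplicati-cli purge-broken-files "<remote-url>"
```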
After successfully purging the broken files, the local database and remote storage will be in sync and you can continue making backups.
The related command "purge" can be used to selectively remove files from the backup.
After purging files, you can run the compact command to release space that was held by the removed files.
This page describes the supporting commandline tools
In addition to the general commandline interface, Duplicati ships with a number of supporting commandline tools. With the exception of ServerUtil, these tools are intended for special circumstances, outside the expected normal operation of Duplicati.
BackendTester, Snapshots, AutoUpdater, and SecretTool are intended to be used for testing functionality on the actual setup, ahead of making changes or running backups.
The BackendTool and SharpAESCrypt tools are intended to work directly with the remote storage files.
The RecoveryTool can work directly with the remote storage without using the regular Duplicati code; it can both recover files from a damaged remote destination and re-upload existing files.
This page describes the Duplicati TrayIcon executable
The main application in the Duplicati installation is the TrayIcon program, called Duplicati.GUI.TrayIcon.exe
on Windows and simply duplicati
on Linux and MacOS.
The TrayIcon executable is a fairly small program that has as the primary task to register with the operating system desktop environment, and place a status icon in the desktop tray, menu, or statusbar.
The TrayIcon is connected to the server and changes the displayed icon based on the server state. Opening the associated context menu provides options to quit, pause/resume, or open the UI.
The second task the TrayIcon is usually responsible for is hosting the Server component. The server handles stored backup configurations, provides a user interface, runs scheduled tasks, and more. When launching the TrayIcon, it will also transparently launch and host the server, and it uses this hosted instance to subscribe to changes, so it can update the icon to signal the server state.
By default, Duplicati uses port 8200 to communicate with the hosted server. Should that port be taken, usually because another instance of Duplicati is running in another user context, Duplicati will automatically try other ports from the sequence: 8200
, 8300
, 8400
, ...
, 8900
.
Once an available port is found, this port is stored in the server database and attempted first on next run.
By default, the Duplicati TrayIcon will use the operating system's standard method for opening the system-default browser. If this is not desired, it is possible to choose the binary used to launch the webpage with the option:
In some cases it may be useful to run the server in one process and the TrayIcon in another. For this setup, the TrayIcon can run without a hosted server. To disable the hosted Server, start the TrayIcon application with the commandline option:
This will cause the TrayIcon to connect to a Server that is already running. If the Server is not running on the same machine, or using a different port, this can be specified with the commandline option:
It may also be required to provide the password for the server in the detached setup, as outlined in Duplicati Access Password. An alternative to providing the password is to use the option:
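A hedged sketch of a detached setup; the option names --no-hosted-server and --hosturl are used by current builds, but confirm them with the help output before scripting against them:

```shell
# Run the server in its own process ...
duplicati-server --webservice-port=8200 &

# ... and attach a TrayIcon that does not host a server itself
duplicati --no-hosted-server --hosturl=http://localhost:8200
```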
The TrayIcon will then attempt to extract signing information from the local database, provided that the TrayIcon process has read access to the database and that sign-in tokens are not disabled.
It may be convenient to use preload settings to provide arguments to both the Server and the TrayIcon when running in detached mode.
If the server is using a self-signed certificate (or a certificate not trusted by the OS), the connection will fail. To manually allow a certificate, obtain the certificate hash, and provide it with:
When the TrayIcon is hosting the server, or has access to the database settings, it will automatically extract the certificate hash, so that particular certificate is accepted. This technique is secure and very similar to certificate pinning.
For testing and debugging purposes, the certificate hash *
means "any certificate". Beware that this setting is very insecure and should not be used in production settings.
When hosting the server, the TrayIcon also accepts all the server settings and will forward any commandline options to the hosted server when starting it.
It is possible to run Duplicati in "portable mode", where it runs from removable media such as a USB stick; see the server data location section for more details.
The page describes the Service and WindowsService programs
The Duplicati.WindowsService.exe
executable only exists for Windows and serves two purposes: managing the Windows Service registration and running the server as a Windows Service.
The registration of the Windows Service is done by executing the WindowsService binary:
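A sketch of the registration, with a server option passed through as an example:

```shell
# Register the service; extra arguments are forwarded to the Server on startup
Duplicati.WindowsService.exe INSTALL --webservice-port=8200
```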
The arguments can be any of the arguments supported by the Server and will be passed on to the Server on startup. The service will be registered to automatically restart and start at login. These details can be changed from the Windows service manager.
From version 2.1.0.0 and forward, the service will automatically start after installation. The command can be changed to INSTALL-ONLY
to avoid starting the service.
To remove the service, use the UNINSTALL
command:
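The invocation can be sketched as:

```shell
Duplicati.WindowsService.exe UNINSTALL
```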
The Service binary is a small helper program that simply runs the Server executable and restarts it if it exits. Its purpose is to help keep the Server running, even in the face of errors. The Service binary is called Duplicati.Service.exe on Windows and duplicati-service on Linux and MacOS.
This page describes the TahoeLAFS storage destination
Duplicati supports backups to the Tahoe Least-Authority File Store, Tahoe-LAFS. To use the TahoeLAFS destination, use this URL format:
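A sketch of the URL, assuming the tahoe scheme with the node's web port and a directory cap; the host, port, and cap below are placeholders:

```shell
# The directory cap (URI:DIR2:...) identifies the target folder in the Tahoe grid
duplicati-cli backup "tahoe://localhost:3456/uri/URI:DIR2:..." /data
```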
This page describes the AutoUpdater tool in Duplicati
The AutoUpdater is intended to support automatic updating of Duplicati. In the current version, the name is a bit misleading as it only supports checking for a new version, it does not yet support actually installing a new version automatically.
The binary is called Duplicati.CommandLine.AutoUpdater.exe
on Windows and duplicati-autoupdater
on Linux and MacOS.
To use the AutoUpdater, simply invoke it from the commandline:
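A sketch of the invocation; the subcommand name is an assumption, so consult the tool's help output:

```shell
duplicati-autoupdater check
```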
This will check if there is a newer version available and report the running version number.
It is also possible to download an updated installer package:
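A sketch of the download invocation; the subcommand name is an assumption:

```shell
duplicati-autoupdater download
```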
The download feature checks which package Duplicati is currently installed with, obtains the most recent URL for that package, and downloads it to the current directory. This feature only works if the installed package can be determined and there is an updated package available. If not, the download page is reported to the terminal for manual download.
By default, Duplicati uses the domains updates.duplicati.com
and alt.updates.duplicati.com
to find updates. If you are running Duplicati within a controlled environment, you can use the environment variables to change where Duplicati is looking for the updates:
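A sketch of overriding the update location; the variable name is an assumption, and the mirror URL is hypothetical:

```shell
# Assumed variable name; point it at your own mirror of the update feed
export AUTOUPDATER_Duplicati_URLS="https://updates.example.com/stable/"
```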
Duplicati will detect the /stable/
part of the url and replace with the channel the user has chosen.
It is also possible to set the channel with an environment variable:
This page describes the backend tester tool
Before trusting a storage location with your backups, it's essential to verify its reliability. The built-in Storage Testing Tool helps validate your backup destination through comprehensive integrity testing.
The BackendTester binary is called Duplicati.CommandLine.BackendTester.exe on Windows and duplicati-backend-tester on Linux and MacOS. The tool is mostly intended for system administrators who need to verify that a certain storage solution works as expected, or for developers writing a new storage destination provider.
How the Storage Test Works:
The tool automatically creates test files:
Generates files of varying sizes
Uses randomized file names
Creates the number of files you specify
Performs a complete backup simulation:
Uploads all test files to your chosen storage location
Downloads each file back to verify retrieval
Validates file integrity using hash verification
Repeats this cycle multiple times for confidence
Provides detailed test results:
Success/failure status of each operation
Upload and download performance metrics
Data integrity confirmation
Customizable Test Parameters:
File count: Choose how many test files to generate
File sizes: Set minimum and maximum file sizes
Filename parameters: Configure allowed characters
Test iterations: Specify how many test cycles to run
This page describes the Duplicati SecretTool
The SecretTool is a small utility tool that can be used to test the secret provider configuration.
The SecretTool is called Duplicati.CommandLine.SecretTool.exe
on Windows and duplicati-secret-tool
on Linux and MacOS.
To use the tool, invoke it with a configuration and some secrets to locate:
Multiple secrets can be provided and the tool will attempt to resolve each of them. See the secret provider section for details on how to use and configure the secret providers. Commandline help is also available with:
Note that to protect the secrets, the tool will not report the actual values, but just report if it was able to obtain a value from the secret provider.
This page describes the backend tool in Duplicati
The BackendTool is intended to provide commandline access to the remote destination. This can be used to create remote folders, locate remote files, and fetch remote files.
The BackendTool is called Duplicati.CommandLine.BackendTool.exe
on Windows and duplicati-backend-tool
on Linux and MacOS.
The basic usage for the tool is:
There are 5 supported commands: GET, PUT, DELETE, LIST, CREATEFOLDER.
The GET, PUT, and DELETE commands will download, upload, and delete a file, respectively. The filename parameter refers to the remote filename and is matched to a local file of the same name; it is not possible to use different filenames on the remote and local system with this operation. Note that any change to the remote storage will likely require a recreate of the local database.
The LIST command will simply list all files found on the remote location and has no side effects. The CREATEFOLDER command can be used to create folders in preparation for making a backup or moving files.
This page describes the SharpAESCrypt commandline encryption tool
The SharpAESCrypt commandline tool uses the provided AES encryption library but exposes it as a commandline tool.
To encrypt a file, use the syntax:
And similarly to decrypt a file:
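Sketches of both invocations, following the conventional SharpAESCrypt argument order of mode, password, input, and output (verify against the tool's help output; filenames are placeholders):

```shell
# Encrypt: mode "e", then password, input file, output file
duplicati-aescrypt e "my-password" plain.txt encrypted.aes

# Decrypt: mode "d", same argument order
duplicati-aescrypt d "my-password" encrypted.aes restored.txt
```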
For decryption, it is possible to use the "optimistic mode", which will leave the decrypted file on disk even if it does not pass validation. This is insecure, because the file contents may have been tampered with if the integrity checks fail, but in some cases it can help recover lost data:
To enable the compatibility check for regular Duplicati operations, add the environment variable:
The SharpAESCrypt tool is called Duplicati.CommandLine.SharpAESCrypt.exe on Windows and duplicati-aescrypt on Linux and MacOS. The library and commandline tool implement the AESCrypt file format, so the commandline tool is compatible with any other tool using that format.
If you are encrypting files with a different tool, note that SharpAESCrypt adds an additional integrity check, which is not part of the AESCrypt specification. This does not change the file format, but makes it harder to inject trailing bytes. However, since other tools do not add this check, decryption will reject such (otherwise valid) files. To decrypt such files, enable the compatibility mode:
This page describes the Duplicati ServerUtil helper program
The ServerUtil executable is a helper program that can interact with a running Duplicati Server instance. The main use-case for this program is to allow scripted or programmatic interactions with the server, without resorting to loading the web UI.
The ServerUtil is a replacement for a contributed duplicati_client script that is no longer maintained. Both approaches work by accessing the Duplicati server API and issuing the same requests the user interface would otherwise make.
The ServerUtil binaries are called Duplicati.CommandLine.ServerUtil.exe
on Windows and duplicati-server-util
on Linux and MacOS.
The ServerUtil needs to authenticate with the Server, which requires a connection url and a password. To avoid needing these, the ServerUtil will attempt to read the Server database and obtain the information from there. If this succeeds, the ServerUtil will automatically configure an authenticated session with the server without needing additional input.
If the database is encrypted, write protected, or otherwise inaccessible, the caller needs to provide both the url and the password on the commandline.
If the tool is intended to be invoked from a script, it is possible to secure a refresh token by calling the login command:
This will cause the ServerUtil to store a refresh token in the settings file, such that future operations do not need the password (but will still need the hosturl). To safeguard the token, it is possible to provide --settings-encryption-key=<key>
that will encrypt the settings file. The secret provider can be used to further secure this key, or can be used to provide the password on the commandline.
To revoke the stored refresh token, run the logout command with the host url:
To show the backups currently configured, run the list-backups
command:
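The invocation can be sketched as:

```shell
duplicati-server-util list-backups
```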
Each backup configuration has a name and an ID associated with it. All operations that work on one or more backups accept either the ID or the name the backup has in the server (case insensitive). Using the ID is preferred, as it is stable across backup renames, but the name may be more convenient.
Once you know the name or ID of a backup configuration, you can schedule the backup:
This will put the backup into the running queue and start the backup as soon as the queue is empty.
With the backup ID or name, it is also possible to export the backup configuration for later use:
This will export the configuration to a local file, encrypted with AESCrypt. If you do not supply a passphrase, the exported configuration will not include the passphrase or storage credentials. Use --export-passwords=true
to force the passwords to be exported to a plain-text file.
You can later import a backup that was previously exported with the command:
Note that this will create a new backup with the same configuration, so make sure you have removed the previous backup configuration first.
A common use for the ServerUtil is to pause and resume the server, which can be done to avoid running backups during peak hours. To pause the server, invoke the ServerUtil with a duration value:
This will cause the scheduler to pause and not start new backups until 5 minutes have passed. If no duration is given, the server will pause until resumed.
To resume the server, run the following command:
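Sketches of the pause and resume invocations; the duration format "5m" is an assumption, so check the tool's help output:

```shell
# Pause the scheduler for five minutes (duration format assumed)
duplicati-server-util pause 5m

# Resume the scheduler
duplicati-server-util resume
```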
As explained in the section on the access password, it is possible to use the ServerUtil to change the password. In the general case, this can be done with access to the server database, but in some cases it requires knowing the previous password. Change the password with the command:
Note that this will not revoke access that is already granted, as such access lives in refresh and access tokens. Restart the Server with --webservice-reset-jwt-config=true
as explained in the Server section.
The "issue-forever-token
" command was added to Duplicati beta 2.1.0.3 and canary 2.0.102.
All requests to the Duplicati server need to be authenticated with a valid token. Usually the token is obtained by providing the password to the server and receiving a token in the response. Some advanced setups, especially those running Duplicati behind an authenticating proxy server, already have a layer of authentication; in such a setup, the Duplicati password is an unwanted "double authentication".
In a setup where there is another layer of authentication, it is possible to issue a token that lasts 10 years, significantly longer than the 15 minutes regular tokens last. To prevent unintended use of the feature, it requires three steps to configure:
Stop Duplicati and start duplicati-server
with --webservice-enable-forever-token=true
Run the command: duplicati-server-util issue-forever-token
Stop Duplicati and start without --webservice-enable-forever-token
The commandline option --webservice-enable-forever-token
toggles the ability to issue the token. The API is implemented such that it will only issue a single token per server start.
Once the API is enabled, the server-util
can call the API and obtain the single token. If something goes wrong, you can restart the Server and try again.
Once the token is obtained, it is important to remove the --webservice-enable-forever-token
again, so regular users cannot issue such a token.
With the token in hand, configure the proxy to attach the header to each request:
With this header present, all requests to Duplicati will be authenticated. If you need to revoke a forever token, start the server once with --webservice-reset-jwt-config
which will immediately invalidate any issued token.
This page describes the database kept by the Duplicati Server
When the Server is running, either stand-alone or as part of the TrayIcon or Agent, it needs a place to store the configuration. All configuration data, logs and settings are stored inside the file Duplicati-server.sqlite
. As the file extension reveals, this is an SQLite database file and as such can be viewed and updated by any tool that works with SQLite databases.
The database file is by default located in a folder that belongs to the user account running it. See the section on the database location for details on where this is and how to change it.
Due to the nature of Duplicati, this database can contain a few secrets that are vital to ensuring the integrity and security of the backups and also the Duplicati server itself. These secrets include both the user-provided secrets, such as the backup encryption passphrase and the connection credentials, but also server-provided secrets, such as the token signing keys, and optionally an SSL certificate password.
Even though the database is located on the machine that makes the backup, it is important to prevent unauthorized access to the database, as it could be used for privilege escalation. And should the database ever be leaked, it is also important to ensure the contents are not accessible.
To protect the database, Duplicati has support for a field-level encryption password. When activated, any setting that is deemed sensitive will be encrypted before being written to the database. This method ensures that the SQLite database itself is still readable, but the secrets are not readable without the encryption passphrase.
To supply the field-level encryption password, start the Server, TrayIcon, or Agent with the commandline option --settings-encryption-key=<key>
. As the commandline can usually be read by other processes, it is also possible to supply this key via the environment variable SETTINGS_ENCRYPTION_KEY=<key>
.
If you are aware of the risks, you can also set the commandline argument --disable-db-encryption=true
instead of the key. This will remove existing encryption and not warn that the database is not encrypted.
The simplest way to apply an encryption key is to locate the server database and create the file preload.json
if it does not already exist. The file should contain the following:
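A minimal sketch of such a preload.json, setting the encryption key as an environment variable for the server (the section layout shown here, with an "env" section keyed by executable name, is an assumption about the preload file format, and the key value is a placeholder):

```json
{
  "env": {
    "server": {
      "SETTINGS_ENCRYPTION_KEY": "example-encryption-key"
    }
  }
}
```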
Both the commandline arguments and environment variables can be set with the Preload settings file, which makes it simpler to apply the same settings across executables, and removes the need for changing the service or launcher files.
For additional protection of the encryption key, the operating system Keychain, or an external secret provider, can be used to further secure the encryption key.
When running Duplicati for the first time, it will find a place where it can store the configuration database. Some versions of Duplicati change the location where they look for the databases, but this is always done in a backwards-compatible way, so new versions will also find databases in previous locations. Due to this logic, the location can vary a bit depending on which version of Duplicati was originally installed.
It is possible to pick a different location for the database with the commandline option --server-datafolder=<path>
or use the environment variable DUPLICATI_HOME
.
To change the folder of an existing instance of Duplicati, perform these steps:
Stop Duplicati
Move the Duplicati
folder from the old location to the new location
Change the startup parameters (environment variables, commandline arguments, or preload.json)
Start Duplicati again
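On a Linux machine running Duplicati as a systemd service, the steps above might look like the following sketch (the service name and both paths are examples, not fixed values):

```shell
# Stop the running service (assumed unit name)
sudo systemctl stop duplicati

# Move the data folder to its new location (example paths)
sudo mv /root/.config/Duplicati /srv/duplicati-data

# Point Duplicati at the new folder, e.g. by adding to /etc/default/duplicati:
#   DUPLICATI_HOME=/srv/duplicati-data

# Start the service again
sudo systemctl start duplicati
```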
The default location for users running Duplicati is %LOCALAPPDATA%\Duplicati
which usually resolves to something like C:\Users\username\AppData\Local\Duplicati
. This folder is the non-roaming folder. Older versions of Duplicati used %APPDATA%\Duplicati
which is the roaming folder, causing files to be synchronized across machines. However, since Duplicati does not benefit from roaming, it now uses the non-roaming folder.
When running Duplicati as a Windows Service, the %LOCALAPPDATA%\Duplicati
folder resolves to:
Since this folder is under C:\Windows
the contents may be deleted on major Windows upgrades (usually when the version number changes). For that reason, Duplicati will detect an attempt to store files in the C:\Windows
folder and emit a warning. From version 2.1.0.108 and forward, Duplicati will choose to use C:\Users\LocalService\Duplicati
as the storage folder, if it would otherwise be under C:\Windows
.
The default location when running Duplicati on Linux is ~/.config/Duplicati
. For most distros, running Duplicati as a service means running it as the root user, resulting in /root/.config/Duplicati
.
However, a compatibility mapping sometimes drops the home-folder prefix, causing Duplicati data to be stored in /Duplicati
. From version 2.1.0.108, this location is avoided and the location /var/lib/Duplicati
is used instead, if possible.
The default location when running Duplicati on MacOS is ~/Library/Application Support/Duplicati
. Duplicati version 2.0.8.1 and older used the Linux-style ~/.config/Duplicati
but this is avoided since version 2.1.0.2.
This page describes how to use the Duplicati Snapshots tool
The Snapshots tool is intended to test the system snapshot capability, and will invoke the same system calls as Duplicati to set up and tear down a system snapshot.
The Snapshots tool is called Duplicati.CommandLine.Snapshots.exe
on Windows and duplicati-snapshots
on Linux and MacOS.
To run the tool, invoke it with a folder to use for testing. To work correctly the folder should be on the filesystem/disk/volume/etc that will be part of the snapshot:
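For example, on Linux (the folder path is an example; elevated privileges are usually required, as noted below):

```shell
# Run the snapshot test against a folder on the volume of interest
sudo duplicati-snapshots /home/user/snapshot-test
```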
The tool will do the following:
Create the folder if it does not exist
Place a file named testfile.bin
inside the folder
Lock the file
Verify that the file is locked
Create a snapshot containing the folder
Check that the file can be read from the snapshot
Tear down the snapshot
On Windows, this will use VSS to create snapshots, which require elevated privileges, usually Administrator.
On Linux, this will use LVM and a set of shell scripts to obtain the vgroup
and manipulate it. These scripts are located in the source folder lvmscripts
and are named:
find-volume.sh
: Locates the volume that contains the given folder path.
create-lvm-snapshot.sh
: Creates the LVM snapshot and returns the path to it.
remove-lvm-snapshot.sh
: Removes a created snapshot
Usually, the operations require elevated privileges, for example root permissions.
On MacOS, snapshots are currently not supported.
About Duplicati Inc & its relation to the open source Duplicati
Duplicati Inc. is a US-based for-profit entity incorporated in Delaware in March 2024. Duplicati Inc. helps develop the open source Duplicati client and pays for various infrastructure costs. The Duplicati client is fully open source and free to use with no limitations.
Duplicati Inc. was founded on an Open Core model: the open source client continues to be developed as open source, while additional enterprise-focused tools and services are offered as paid features. Being an Open Core company, we believe that a strong open source client and a vibrant open source community are our strongest assets. At the same time, the for-profit model enables us to take on larger development and maintenance tasks that would otherwise not be sustainable in a purely volunteer-based project.
MIT License
Copyright (c) 2024 Duplicati
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This page describes the different channels and how releases are used.
When using any software, it is important to use an updated version, but each update also carries a risk of containing a bug or a change that requires intervention on the machine. To balance these two concerns, Duplicati uses channels to push updates at different speeds. Builds start out as canary builds and, once stability is achieved, move up through the channels unless a breaking issue is discovered.
Installations work the same on any channel so you may choose to uninstall one version and install another. By default, the built-in update checker will use the channel of the package you installed to check for new versions.
The stable channel is the slowest moving channel. Builds in this channel are considered well tested and robust. This channel is recommended for most users.
The beta channel is generally used as a staging ground before a stable release. Releases in this category are more frequent than in the stable channel, but it is still a relatively slow-moving channel. This channel is recommended for users who want to stay on top of new developments. For larger installations, it may make sense to keep a few machines on the beta channel to discover changes before they affect the entire setup.
Releases in the experimental channel usually contain a new experimental setting or algorithm that is not yet battle-proven across a large set of systems. These releases are generally considered safe for general use but may contain features that will be removed again or do not work in all environments.
The canary builds are regular builds that are cut from the latest development work. Releases in this channel can have bugs and are generally not recommended for production use. These builds are usually the first time the changes are tested on machines not managed by developers. They are mostly recommended for users who want to follow development closely and give feedback on its direction and on feature development.
This page describes the Agent executable
The Duplicati Agent is one of the primary ways to run Duplicati, similar to the Server and TrayIcon. The Agent can be deployed in settings where there is no desktop or where user interaction is not desired. The Agent needs to connect to a remote control destination from where it can be controlled, and because of this, the Agent employs a number of additional settings that prevent applications running on the same machine from interacting with the Agent.
A benefit from using the Agent is that it will only communicate over TLS encrypted connections and does not require you to manually handle the configuration of certificates for the Server.
The Agent binary is called Duplicati.Agent.exe
on Windows and duplicati-agent
on Linux and MacOS.
When the Agent starts for the first time, it will attempt to register with the Duplicati Console. To do this, it will open a browser window where the user can accept the registration and add the machine to their account. If the Agent needs to be registered without user interaction, a pre-authorized link can be generated on the Duplicati Console registration page:
To register the Agent, run the following command:
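A sketch of the registration call, assuming the pre-authorized link generated on the Console registration page is passed as an argument (the `register` subcommand shape is an assumption; check `duplicati-agent help` for the exact syntax on your version):

```shell
# Register using a pre-authorized link and exit once registration completes
duplicati-agent register "https://<pre-authorized-link>" --agent-register-only=true
```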
This will cause the Agent to register using the token from the url and the --agent-register-only
option will cause it to exit after registration has completed. If the Agent is already registered, it will simply exit.
To remove the registration information, use the command:
After the settings are cleared, the agent can be registered again.
The Agent settings are stored in a file called agent.json
in the same folder where the Server database is stored. The file path can be supplied with --agent-settings-file
and the file can be encrypted with the setting --agent-settings-file-passphrase
.
To protect the settings file passphrase, it is possible to use the secret provider.
The Agent is not intended to be accessible locally and for that reason, it is locked down with a number of settings. If you need to configure the Server, most of the options can be given to the Agent and passed on to the server. This includes --webservice-port
and --settings-encryption-key
.
The hosted agent server will use the port 8210
by default, to not clash with the regular Duplicati instance on port 8200
.
To make the hosted server fully accessible from the local machine that it is running on, add the following settings:
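Combining the three options explained here, a local-access invocation could look like this sketch (the `run` subcommand and the password value are placeholders/assumptions):

```shell
# Allow local UI access to the Agent's hosted server (port 8210 by default)
duplicati-agent run \
  --disable-pre-shared-key=true \
  --webservice-api-only=false \
  --webservice-password=<password>
```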
The first option, --disable-pre-shared-key
, will disable the random key that is required for all requests to the webserver. This key is a random value generated on each start and kept only in memory, blocking requests from other local applications to the Duplicati API.
The second option, --webservice-api-only=false
will enable the access to the static .html
, .css
, and .js
files that provide the UI.
The last option sets the UI password, which would otherwise be a randomly generated password.
You may also want to re-enable the signin tokens with --webservice-disable-signin-tokens=false
.
The Duplicati.WindowsService.exe
installer can also install the Agent as a service:
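A sketch of the install command (the `--agent` flag is an assumption for selecting the Agent instead of the Server; check the installer's help output for the exact flag):

```shell
Duplicati.WindowsService.exe install --agent
```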
Note that since the Agent cannot open a browser from within the service context, it will instead write the link used to claim the Agent to the Windows Event Log. You need to find the link there and open it in a browser to claim the machine. Alternatively, use the method outlined above to register the machine, but beware that you need to run in the same context as the service, or the agent.json
file will be placed in another folder.
Similarly, you can uninstall the Agent service with:
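A sketch of the uninstall command (assumed to mirror the install invocation; verify against the installer's help output):

```shell
Duplicati.WindowsService.exe uninstall
```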
On Linux-based installations, the Agent installer will create the service files, which can be used to automatically start and run the Agent:
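On systemd-based distros this is typically done with systemctl (the unit name `duplicati-agent` is an assumption; check the names created by your package):

```shell
# Enable the service at boot and start it immediately
sudo systemctl enable --now duplicati-agent.service
```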
As is common for other services, additional start parameters can be added to /etc/default/duplicati
.
Note that when running the service, the Agent does not have access to the desktop environment (if one even exists) and it cannot open the registration url in the browser. Instead, it will emit a url in the system logs that you need to open to register the machine. Alternatively, use the method outlined above to register the machine, but beware that you need to run in the same context as the service, or the agent.json
file will be placed in another folder.
When installing on MacOS, the packages will register a launchagent that will start the Agent on each login. The assumption here is that the desktop context contains a browser, so the Agent will open the registration url in the default browser.
To use a pre-authenticated url, use the method outlined above to register the machine, and then restart the service to have it pick up the updated agent.json
file.
This page describes the OAuth login used by some providers
Many large providers only allow access via OAuth, requiring the user to authorize Duplicati to access resources on their behalf. This generally works by initiating a login request, redirecting the web browser to the login page, and then delivering a secure access token to Duplicati.
For web-based applications, this is a very smooth process, but for a tool such as Duplicati that needs to run even when there is no browser or UI available, it is not an ideal solution. The workaround developed for Duplicati is to pre-authenticate with a long-lived token from a place where a browser is available. Once the token is created, it is returned to the user in the form of an AuthID
string.
This service is the default as it is the most convenient for most users. To generate a token, simply visit:
Click the button for your preferred provider, complete the login, and obtain the AuthID, which you can then use on another machine as needed.
If you are using the UI, you can click the AuthID
label/link to start the process. Once you complete it, the UI will automatically fill in the ID, no interaction required.
After you have set up the server, use the option --oauth-url=<local server url>
to configure Duplicati to use another server to authenticate with.
OAuth is a protocol that allows applications to securely obtain third-party access on behalf of legitimate users without exposing details such as real names or passwords.
This AuthID can then be used by Duplicati to access resources on the user's behalf, acting as a kind of API key. Further details on how the OAuth server works are described in its documentation.
Duplicati has a hosted OAuth service that can be used to get access to a variety of different storage providers.
If you want to remove access, you can revoke a specific AuthID at the same place where you created it. You can also go to the provider, say Dropbox or OneDrive, and remove the authorization for Duplicati, which will immediately revoke all tokens issued for your account.
If you prefer to manage the full cycle and not send tokens through a service outside your control, you can host the OAuth server yourself. The server is Docker enabled.
Refer to its documentation for how to configure it. Before you can use the server, you need to obtain a Client ID and Client Secret for the provider you want to support. Refer to the default providers file to see the links to each service, or consult your service provider for details on how to obtain these values.
Welcome to Duplicati's support community! As an open-source project, we believe in the power of community collaboration. Users can find help by raising issues on our GitHub repository or joining the discussions at the Duplicati forum, where years of shared knowledge from both users and developers create an invaluable resource for troubleshooting and best practices.
For our corporate customers, we offer dedicated support through our integrated support system. If you have other inquiries, please don't hesitate to reach out to us - we're here to help you protect your valuable data. Your success with Duplicati matters to us, and we're committed to providing the support you need.
This page describes how to downgrade Duplicati from a newer version to an older version
Upgrading to a new version of Duplicati is part of the test process, so any upgrade is intended to keep things working the same as before. In some cases an update will start to give a warning on backups that were previously running without one. These warnings describe what has changed and explain what to do to remove the warning.
Such warnings generally relate to a feature that will be removed or renamed but has not yet been removed. The warnings give you a heads-up to avoid issues in the future and are generally simple to address by editing a backup.
In rare cases a feature can no longer be supported, such as when a storage provider stops offering a service. For these, the feature will be removed and this will be mentioned in the release notes.
Downgrades are usually not supported automatically: because the old version was created before the current one, its code cannot know what was changed. To avoid data loss, this process is controlled by version numbers inside the database.
Each update to the data format increments the version number of the database, so that when an older version runs, it detects a higher number than it knows and stops there.
When a version upgrades the database, it will create a backup of the current database before upgrading. You can look for the database and backups in:
~/.config/Duplicati
on Linux
~/Library/Application Support/Duplicati
on MacOS
%LOCALAPPDATA%\Duplicati
on Windows.
If you have been using the new version you may have changes in the current database that would be lost by restoring the pre-upgrade database. In that case, you can ask on the forum for advice on how to downgrade.
This page describes how to downgrade from Duplicati 2.1.0.2 to 2.0.8.1
To downgrade from 2.1.0.2 to an earlier version, note that the two are built on different core technologies (.NET8 vs .NET4/Mono). If you have previously been able to run 2.0.8.1, you should be able to downgrade by installing the previous version as before.
Before you downgrade, you should make sure you have removed database encryption. You can do this by stopping all running instances, and then running Server or TrayIcon with:
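Using the options described on the server database page, this could look like the following (shown for the Linux server binary; the key value is a placeholder):

```shell
# Remove field-level encryption before downgrading (supply the current key)
duplicati-server --settings-encryption-key=<key> --disable-db-encryption=true
```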
This will remove the field-level encryption in the server database. After starting with this parameter, stop the server, uninstall 2.1.0.2 and install 2.0.8.1.
Since both the server database and the local databases were updated, you need to downgrade both. Note that there is one local database for each backup you have configured, and all of those may need to be downgraded.
To downgrade the server database, use an SQLite tool, such as SQLite Browser. Open the database and run the following query:
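The query sets the tracked schema version back to 6 (the table and column names here are assumptions based on the Duplicati database schema; verify them in your SQLite tool before running):

```sql
-- Downgrade the server database schema marker to version 6
UPDATE "Version" SET "Version" = 6;
```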
This will downgrade the server database to version 6, and allow it to properly upgrade later if needed.
For each of the local databases, run the following:
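As above, the query sets the schema marker, this time to version 12 (table and column names are assumptions; verify before running):

```sql
-- Downgrade the local database schema marker to version 12
UPDATE "Version" SET "Version" = 12;
```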
This will downgrade the database to version 12, and allow it to upgrade later if needed.
Close the SQLite editor, and then start Duplicati 2.0.8.1.
The installer packages for 2.0.8.1 are available on GitHub. You can browse the list of releases for other versions you may want.
This page describes the Duplicati recovery tool
This tool performs a recovery of as much data as possible in small steps that must be performed in order. We recommend that you use duplicati-cli to do the restore, and rely only on this tool if all else fails.
The recovery tool is called Duplicati.CommandLine.RecoveryTool.exe
on Windows and duplicati-recovery-tool
on Linux and MacOS.
1: Download: Download files from the remote store and keep them unencrypted on a location available in the local filesystem.
2: Index: Builds an index file to figure out what data is contained inside the files downloaded
3: Restore: Restores the files to a destination you choose
4: List: Shows what files are available and tests filters
5: Recompress: Ability to change compression type of files on remote backend e.g. from 7z to ZIP
Downloads all files matching the Duplicati filenames from the remote storage to the current directory, and decrypts them in the process. The remote url must be one supported by Duplicati. Use duplicati-cli help backends
to see backends and options.
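As a sketch, the download step could be invoked like this (the remote URL and passphrase are placeholders, and the exact argument shape may differ between tool versions):

```shell
# Download and decrypt all backup files from the remote storage
duplicati-recovery-tool download "ssh://user@example.com/backup" --passphrase=<passphrase>
```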
Examines all files found in the current folder and produces an index.txt
file, which is a list of all block hashes found in the files. The index file can be rather large. It defaults to being stored in the current working directory, but can be specified with --indexfile
. Some files are created in the system temporary folder, use --tempdir
to set an alternative temporary folder location.
Restores all files to their respective destinations. Use --targetpath
to choose another folder where the files are restored into. Use the filters, --exclude
, to perform a partial restore. Version can be either a number, a filename or a date. If omitted the most recent backup is used.
The restore process requires a fast lookup, which is optimal if all the hashes can be kept in memory. Use the option to --reduce-memory-use=true
to toggle a slower low-memory restore. If the process is interrupted for any reason, note the file counter and use --offset=<count>
to start the restore after the last restored file.
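A sketch of a restore invocation using the options described above (the folder, version, and target path are examples; verify the argument order against the tool's help output):

```shell
# Restore the most recent version into an alternative target folder
duplicati-recovery-tool restore . --targetpath=/tmp/restore
```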
Advanced performance options are:
--reduce-memory-use
: Disables keeping all hashes in memory; use if memory is limited on the restoring machine
--disable-file-verify
: Disables the initial hashing of the restored file
--disable-wrapped-zip
: Disable using the faster .NET native ZIP archive in favor of the more resilient one in Duplicati
--max-open-archives
: Sets the number of archives to keep open for faster access (uses some memory per archive); default 200
Lists the contents of backups. Version can be either a number, a filename, or a date. If [version] is omitted, a list of backup versions is shown; if [version] is supplied, files from that version are listed. Use the filters, --exclude
, to show a subset of files.
Downloads the whole remote storage to the current working folder.
Recompresses files from the existing compression type to the chosen compression format.
If --reencrypt
is supplied, the files are re-encrypted using the same passphrase (files must be decrypted to change the compression type)
If --reupload
is supplied, files with the old compression are deleted and the recompressed files are uploaded back to remote storage (it is recommended to keep at least a temporary copy of the remote storage before enabling this switch)
Warning: If --reupload
is supplied it is advisable to specify --reencrypt
otherwise the files will be uploaded unencrypted!
Warning: Delete the local database before recompressing, and recreate it after recompressing, before executing any other operation on the backup. This allows Duplicati to read the new file names from remote storage.
The backend modules support all their normal options. To see what options a specific backend supports, type:
The environment variables AUTH_USERNAME
and AUTH_PASSWORD
are supported. The options --parameters-file
and --tempdir
are supported.
This page describes the Duplicati server component
The Duplicati server is the primary instance, and is usually hosted by the TrayIcon in desktop environments. The server itself is intended to be a long-running process, usually running as a service-like process that starts automatically. The binary executable is called Duplicati.Server.exe
on Windows and duplicati-server
on Linux and MacOS.
The server is responsible for saving backup configurations, starting scheduled backups, and providing the user interface. The user interface is provided by a webserver hosted inside the process. This webserver serves both the static files and the API that is needed to control the server.
When the server runs any operation, such as a backup or restore, it will configure an environment from various settings, primarily the backup configuration. The actual implementation is the same code that is executed by the command line interface, but runs within the server process.
Unlike the command line interface, the Server keeps track of the local database to ensure the database is present for all operations. This is possible because the server has additional state in the server database and the path to the local database is kept there.
During the operation, the server will report progress and log messages, which can be viewed if a client is attached during the run. After the run, the Server will record metadata and log data in the database, to assist in troubleshooting later.
As described in the access password section, it is possible to set or reset the server password by starting the server with the option:
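For example, using the server's password option (the password value is a placeholder):

```shell
duplicati-server --webservice-password=<new-password>
```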
This new password is stored in the server database and does not need to be supplied on future launches. Note that changing the password does not invalidate tokens that are already issued. To clear any issued tokens, which should be done if there is a suspicion that the signing keys are leaked, start with the following option:
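The option, as also mentioned in the reverse-proxy section, is:

```shell
duplicati-server --webservice-reset-jwt-config=true
```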
This will generate new token signing keys and immediately invalidate any previously issued tokens. You can start the server with this parameter on each launch if you do not rely on a refresh token stored in the browser.
It is also possible to disable the use of signin tokens, which are used in some flows instead of requiring the password. This can be set with the option:
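Using the same option named in the Agent section:

```shell
duplicati-server --webservice-disable-signin-tokens=true
```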
Since the server database is a critical resource to protect, it is possible to set a field-level encryption password:
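This is the option described on the server database page (the key value is a placeholder):

```shell
duplicati-server --settings-encryption-key=<key>
```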
If the server starts without a settings encryption key, it will emit a warning in the logs explaining the problem. If any fields are already encrypted, Duplicati will refuse to start without the encryption key. If no fields are encrypted, but an encryption key is supplied, the fields will be encrypted.
If you need to remove the encryption key for some reason, provide the key as above, and additionally supply the option:
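Combined with the key, the invocation looks like (key value is a placeholder):

```shell
duplicati-server --settings-encryption-key=<key> --disable-db-encryption=true
```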
If this flag is supplied, Duplicati will not emit a warning that the database is not encrypted. If the database was encrypted, it will be decrypted. After the database is decrypted, it can be re-encrypted with a different password.
To prevent ever starting the Server without an encryption key, provide the option:
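A sketch of the option (the flag name `--require-db-encryption-key` is an assumption; verify against `duplicati-server help`):

```shell
duplicati-server --require-db-encryption-key=true
```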
Note that this is exclusive with --disable-db-encryption
and that the server will not start if the fields are encrypted and no encryption key is provided.
The server will by default only listen to requests on the local machine, which ensures that requests from the local network cannot access the Duplicati instance. However, any application running on the same machine will be able to send commands to Duplicati. To prevent local privilege escalation attacks, Duplicati requires a password and a valid token for all requests.
To activate access from the local network, the server must be started with:
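The interface option, with `any` to listen on all interfaces:

```shell
duplicati-server --webservice-interface=any
```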
It is also possible to specify loopback
(the default value) or the IP address to listen on.
When accessing the server from an external machine, it will only respond to requests that use an IP address as the hostname. This security mechanism is meant to combat fast-flux DNS attacks that could expose the local API to a website. If you need to access Duplicati from an external machine, you need to explicitly allow the hostname(s) that you will be using, by starting the server with:
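A sketch of the allowed-hostnames option (the hostname is an example):

```shell
duplicati-server --webservice-allowed-hostnames=duplicati.example.com
```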
Multiple hostnames can be supplied with semicolons: host1;host2.example.com;host3
.
The server will attempt to use port 8200
and terminate if that port is not available. Use the commandline option to set a specific port:
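Using the port option mentioned in the Agent section (the port number is an example):

```shell
duplicati-server --webservice-port=8300
```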
To ensure all communication is secure, Duplicati supports adding a TLS certificate. The certificate can be a self-signed certificate, but in this case the browser will not accept it, and extra tweaks must be made.
To create a trusted certificate, it is easiest to use one of the many tools that manage this, such as mkcert, which can generate the various components and configure your system to trust these certificates. Beware that this requires good operational security, as the generated certificate authority can issue certificates for ANY website, including ones you do not own, and eavesdrop on your traffic.
Once you have the desired certificate, in .pfx
aka .p12
format, you can provide it to the Server on startup:
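A sketch of providing the certificate on startup (the flag names are assumptions based on the `--webservice-` option family; the path and password are placeholders):

```shell
duplicati-server \
  --webservice-sslcertificatefile=/path/to/certificate.pfx \
  --webservice-sslcertificatepassword=<password>
```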
After starting the server with an SSL certificate, the certificate is stored in the server database with a randomly generated password. Any subsequent launches of the server will then use the certificate and the server will only communicate over https.
To change the certificate, exit all running instances, then run again once with the new certificate path, as shown above, and the internally stored certificate will be replaced.
If you need to revert to unencrypted http communication, you can use the option:
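A sketch of the option (the flag name `--webservice-remove-sslcertificate` is an assumption; verify against the server's help output):

```shell
duplicati-server --webservice-remove-sslcertificate=true
```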
It is also possible to temporarily disable the use of the certificate, without removing it, with:
If you are developing a new UI for Duplicati, or prefer to use a customized UI, it is possible to configure the server to serve another UI, or none at all. If you want to use the Server component and only manipulate it with another tool, such as the ServerUtil, start with this option:
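This is the same option the Agent passes by default:

```shell
duplicati-server --webservice-api-only=true
```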
This option will fully disable the serving of static files and only leave the API available.
If instead, you would like to serve a different folder, you can use the option to set the webroot:
To better support SPA type applications, the Server can be started with:
For the SPA enabled path, any attempt to access a non-existing page will serve the index.html
file, which can then render the appropriate view. Multiple paths can be supplied with semicolons.
Internally, all time operations are recorded in UTC to avoid issues with daylight savings and changes caused by changing the machine timezone. The only difference to this rule is the scheduler, which is timezone aware.
The scheduler needs to be timezone aware so scheduled backups run at the same local time, even during daylight savings time. On the initial startup, the system timezone is detected and stored in the server database. It is possible to change the timezone from the UI, but it can also be set with the commandline option:
Duplicati will log various messages to the server database, but it is possible to also log these messages to a log file for better integration with monitoring tools or manual inspection. To configure file-based logging, provide the two options:
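For example, writing warnings and above to a file (the path is an example; the level names are listed below):

```shell
duplicati-server --log-file=/var/log/duplicati-server.log --log-level=Warning
```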
By default, the --log-level
parameter is set to only log warnings, but can be configured to any of the log levels: Error
, Warning
, Information
, Verbose
, and Profiling
.
The log data that is stored in the database is by default kept for 30 days, but this period can be defined with the option:
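A sketch of the retention option (the flag name `--log-retention` and the `30D` timespan format are assumptions; verify against the server's help output):

```shell
duplicati-server --log-retention=30D
```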
On Windows, it is also possible to log data to the Windows Eventlog. To activate this, set the options:
By default, Duplicati will use the location that is recommended by the operating system to store "Application Support Files" or "Application Data":
Windows: %LOCALAPPDATA%\Duplicati
Linux: ~/.config/Duplicati
MacOS: ~/Library/Application Support/Duplicati
These paths are sensitive to the user context, meaning that the actual paths will change based on the user that is running the Server. This is especially important when running the server with elevated privileges, because this usually causes it to run in a different user context, resulting in different paths.
To force a specific folder to be used, set the option:
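Using the option described on the server database page (the path is an example):

```shell
duplicati-server --server-datafolder=/srv/duplicati-data
```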
This can also be supplied with the environment variable:
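For example (the path is an example):

```shell
export DUPLICATI_HOME=/srv/duplicati-data
duplicati-server
```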
If both are supplied, the commandline options are used.
For the server options, it is also possible to supply them as environment variables. This makes it easier to toggle options in Docker-like setups where it is desirable to have the entire service config in a single file, and where setting commandline arguments may be error prone.
Any of the commandline options for the server can be applied by transforming the option name into an environment variable name. The transformation is to upper-case the option, change the hyphen, -
, to underscore, _
, and prepend DUPLICATI__
.
For example, to set the commandline option --webservice-api-only=true
with an environment variable:
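The transformation can be reproduced in the shell; for the option `--webservice-api-only=true`:

```shell
# Build the environment variable name from the option name:
# upper-case, hyphens to underscores, prefix with DUPLICATI__
opt="webservice-api-only"
envname="DUPLICATI__$(echo "$opt" | tr 'a-z' 'A-Z' | tr '-' '_')"
echo "$envname"   # DUPLICATI__WEBSERVICE_API_ONLY

# Set it for the server process
export DUPLICATI__WEBSERVICE_API_ONLY=true
```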
Any arguments supplied on the commandline will take precedence over an environment variable, as commandline arguments are considered more "local".