Tutorial¶
In this tutorial we will make our first Deptool deployment against a single host. Before we start, follow the build instructions to build a static binary; prebuilt binaries are not yet available.
Prerequisites¶
In this tutorial we are going to manage a host named webserver. This host must be reachable via SSH — either because it has a DNS name, or because it’s defined in your ~/.ssh/config. To manage it, Deptool needs root access on this host. If root SSH is allowed, you can add a User line in your ~/.ssh/config. Alternatively, if your default user is allowed to use passwordless sudo, that also works. Confirm that this works:
$ ssh webserver 'sudo cat /etc/hostname'
webserver
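If the check fails, an entry in your ~/.ssh/config along these lines usually helps (the address and key path here are placeholders, not values used elsewhere in this tutorial):

```
Host webserver
    HostName 203.0.113.10
    User root
    IdentityFile ~/.ssh/id_ed25519
```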
Preparing a store¶
Deptool uses a store to track its deployment history and cluster state. Under the hood the store is a bare Git repository, by default located at .deptool. To populate the store, we need a directory to hold the cluster configuration: the config tree. This directory is named after the cluster, and it lives next to the store.1 Let’s create a cluster named prod:
$ deptool init prod
Initialized store at '.deptool'.
Created cluster directory 'prod' and recorded it as the default.
In the cluster directory, we create one directory per target host:
$ mkdir prod/webserver
On a host, Deptool manages apps. Let's say we want to manage a configuration file Caddyfile, for an app named caddy. Then we’d set up the file tree like so:
prod
└── webserver
    └── caddy
        └── Caddyfile
Let’s create it:
$ mkdir prod/webserver/caddy
$ vim prod/webserver/caddy/Caddyfile
Upon deploy, Deptool will commit this directory tree to its store.
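Deptool's store format is an implementation detail, but the underlying idea, committing a config tree into a bare repository, can be tried with plain Git (store.git and the commit identity below are made-up names for illustration):

```shell
# Create a bare repository to act as a store, next to a small config tree.
git init --bare --quiet store.git
mkdir -p prod/webserver/caddy
printf 'example.com {\n\trespond "hello"\n}\n' > prod/webserver/caddy/Caddyfile

# Commit the tree into the bare repository without a normal checkout,
# by pointing Git at the bare repo and using the tree as the work tree.
git -C prod --git-dir=../store.git --work-tree=. add -A
git -C prod --git-dir=../store.git --work-tree=. \
    -c user.name=operator -c user.email=operator@example.com \
    commit --quiet -m 'deploy prod'

# The bare repository now records the full tree as one commit.
git --git-dir=store.git ls-tree -r --name-only HEAD
```

Deptool performs the equivalent commit for you on each deploy; you never need to run Git against the store by hand.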
Deploying¶
Let’s deploy this to our 1-host cluster!
$ deptool deploy
webserver
add caddy
+ Caddyfile
Auto-rollback if deploy fails.
Apply to 1 host in cluster 'prod'? [y/N/d]
Before even connecting to a host, Deptool shows us the plan. In this case, the plan is to add a new app caddy on host webserver, which contains a new file Caddyfile. Press d to see the full diff, and then y to deploy.
Because we haven’t connected to this host before, the first thing Deptool will do is copy the deptool binary to /var/lib/deptool/bin on the host. Then it executes deptool agent on the remote host over an SSH connection. The agent is short-lived: it runs only during our deployment. It provides a channel through which deptool deploy can send data and commands to the host over a single SSH connection. If the latency to the host is not too bad, this should all happen within a second:
Apply to 1 host in cluster 'prod'? [y/N/d] y
webserver: done
Changes deployed successfully to 1 host in 0.78s.
This created a directory /var/lib/deptool on the target host:
root@webserver ~ $ tree /var/lib/deptool
/var/lib/deptool
├── apps
│   └── caddy
│       ├── 8bb051121b
│       │   └── Caddyfile
│       └── current -> 8bb051121b
├── bin
│   └── deptool-0.1.0-cd51d88f1b
└── store
    ├── HEAD
    ├── objects
    └── ...
- apps/caddy contains a directory named after the commit that Deptool created for this deployment. It contains the Caddyfile that we wanted to deploy. apps/caddy/current is a symlink to that directory.
- bin contains a binary deptool named after the version and the commit it was built from.5
- store contains a bare Git repository holding a copy of the local store.
That’s our first deployment completed! The Caddyfile is now managed by Deptool, and we can find the latest version at /var/lib/deptool/apps/caddy/current/Caddyfile.
Making changes¶
Back on the operator machine, let’s make a change to our Caddyfile and deploy again:
$ vim prod/webserver/caddy/Caddyfile
$ deptool deploy
webserver
update caddy
~ Caddyfile
Auto-rollback if deploy fails.
Apply to 1 host in cluster 'prod'? [y/N/d]
When we deploy the new configuration, Deptool indicates that the file Caddyfile, part of app caddy on host webserver, has changes. Press d to diff the contents of Caddyfile, y to deploy.
Apply to 1 host in cluster 'prod'? [y/N/d] y
webserver: done
Changes deployed successfully to 1 host in 0.67s.
On the target host, the caddy directory has changed:
root@webserver ~ $ tree /var/lib/deptool/apps/caddy
/var/lib/deptool/apps/caddy
├── 8bb051121b
│   └── Caddyfile
├── e87dcde346
│   └── Caddyfile
├── current -> e87dcde346
└── previous -> 8bb051121b
The current symlink now points to directory e87…, the new commit that Deptool created for this deploy. The previous version is still around for debugging purposes.
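The symlink arrangement itself is easy to experiment with locally. A minimal sketch, reusing the revision names from the listing above (the directories are recreated by hand here, not by Deptool):

```shell
# Recreate the two revision directories from the listing above.
mkdir -p apps/caddy/8bb051121b apps/caddy/e87dcde346
echo 'old config' > apps/caddy/8bb051121b/Caddyfile
echo 'new config' > apps/caddy/e87dcde346/Caddyfile

# Point 'previous' at the old revision and flip 'current' to the new one.
# ln -sfn replaces an existing symlink instead of descending into it.
ln -sfn 8bb051121b apps/caddy/previous
ln -sfn e87dcde346 apps/caddy/current

cat apps/caddy/current/Caddyfile    # new config
readlink apps/caddy/current         # e87dcde346
```

Because readers always go through current, rolling back amounts to flipping the symlink to the revision recorded in previous.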
Adding a systemd unit¶
So far we’ve been managing files in /var/lib/deptool/apps. That’s nice and self-contained — just dropping files there can’t do damage to the rest of your system — but it’s also fairly limited. We still need to somehow tell Caddy to load its configuration from /var/lib/deptool/apps/caddy/current, and how do we manage that configuration?
One answer is to run Caddy under systemd, with a unit much like this one:
[Unit]
Description=Caddy webserver
After=network-online.target nss-lookup.target

[Service]
ExecStart=/bin/caddy run --config /var/lib/deptool/apps/caddy/current/Caddyfile
# Other configuration keys omitted here for brevity.

[Install]
WantedBy=multi-user.target
We can manage this systemd unit with Deptool as well. If we place it in the systemd subdirectory of the caddy app, then Deptool will automatically make this unit available by creating a symlink to it in /etc/systemd/system. That means systemd knows about this unit, but it doesn’t yet activate it.2 We want Caddy to run, so we also enable the unit by adding the following manifest.json3 to the app:
{
  "systemd": {
    "units_enabled": ["caddy.service"]
  }
}
Our prod directory now looks like this:
webserver
└── caddy
    ├── Caddyfile
    ├── manifest.json
    └── systemd
        └── caddy.service
Let’s deploy it:
$ deptool deploy
webserver (rollback unavailable)
update caddy
+ manifest.json
+ systemd/caddy.service
link unit caddy.service
enable unit caddy.service
Rollback unavailable for some hosts.
Apply to 1 host in cluster 'prod'? [y/N/d]
The plan tells us a few things:
- We’re going to deploy to host webserver, where we modify the caddy app. Because we’re adding a new systemd unit, rollback is not available.
- The files manifest.json and systemd/caddy.service are going to be newly created inside the app directory.
- A symlink to caddy.service is going to be placed in /etc/systemd/system.
- The unit caddy.service is going to be enabled.
Furthermore, Deptool warns that rollback is not available. This is fine; we’ll dive into the details of rollback later. Press y to accept.
Apply to 1 host in cluster 'prod'? [y/N/d] y
webserver:
● caddy.service - Caddy webserver
Loaded: loaded (/etc/systemd/system/caddy.service; enabled; preset: disabled)
Active: active (running) since Sat 2026-04-18 20:51:02 UTC; 307ms ago
Main PID: 1040 (caddy)
Apr 18 20:51:02 webserver caddy[1040]: {"level":"info","ts":1776545462.829505,"msg":"serving initial configuration"}
Apr 18 20:51:02 webserver caddy[1040]: {"level":"info","ts":1776545462.8388124,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:./caddy"}
Apr 18 20:51:02 webserver caddy[1040]: ...
webserver: done
Changes deployed successfully to 1 host in 0.68s.
When an app contains enabled systemd units, Deptool prints the status of the unit, so you can see that it activated correctly — or when it didn’t, to help you diagnose why it failed.
Restarting systemd units¶
Let’s update our Caddy configuration again, and deploy:
$ vim prod/webserver/caddy/Caddyfile
$ deptool deploy
webserver
update caddy
~ Caddyfile
restart unit caddy.service
Auto-rollback if deploy fails.
Apply to 1 host in cluster 'prod'? [y/N/d]
This time the plan tells us:
- The Caddyfile in the caddy app will change. Press d to view the diff.
- The systemd unit caddy.service will be restarted.
- Rollback is available.
The change to Caddyfile is intentional: it’s the change we are trying to deploy. When a deployment changes an app in any way, Deptool also restarts all of the systemd units that are listed as enabled in the app’s manifest.4 Rollback means that if caddy.service fails to start (for example, because we introduced a syntax error in the Caddyfile), then Deptool will point the current symlink back at the previous revision again, and restart systemd units once more so they pick up the previous known-good configuration. This ensures that we don’t leave caddy.service in a failed state, with no webserver running. Let’s accept:
Apply to 1 host in cluster 'prod'? [y/N/d] y
webserver:
● caddy.service - Caddy webserver
Loaded: loaded (/etc/systemd/system/caddy.service; enabled; preset: disabled)
Active: active (running) since Sat 2026-04-18 21:17:48 UTC; 308ms ago
Main PID: 1174 (caddy)
Apr 18 21:17:48 webserver caddy[1174]: {"level":"info","ts":1776547068.4650128,"msg":"serving initial configuration"}
Apr 18 21:17:48 webserver caddy[1174]: {"level":"info","ts":1776547068.468667,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:./caddy","instance":"25b86653-5307-42de-a4a7-de691f59428a","try_again":1776633468.4686666,"try_again_in":86399.999999798}
Apr 18 21:17:48 webserver caddy[1174]: {"level":"info","ts":1776547068.468865,"logger":"tls","msg":"finished cleaning storage units"}
Apr 18 21:17:48 webserver caddy[1174]: ...
webserver: done
Changes deployed successfully to 1 host in 0.72s.
Creating symlinks¶
When we control the configuration files and we write the systemd units, we can put all the files we need in /var/lib/deptool/apps. Sometimes though, we need to manage files at prescribed locations in the filesystem, and we don’t get to choose the path. For example, we may need to add files in /etc/sudoers.d or /etc/tmpfiles.d. To handle this, Deptool can create symlinks at arbitrary filesystem locations that point to files in /var/lib/deptool. Let’s add a tmpfiles entry.
$ echo 'd /var/lib/caddy 0700 caddy caddy - -' > prod/webserver/caddy/tmpfiles.conf
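For reference, the fields of a tmpfiles.d line are type, path, mode, user, group, age, and argument; the d type creates the directory if it does not exist:

```
# type  path            mode  user   group  age  argument
d       /var/lib/caddy  0700  caddy  caddy  -    -
```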
Next we update manifest.json to include a symlinks section:
{
  "systemd": {
    "units_enabled": ["caddy.service"]
  },
  "symlinks": {
    "/etc/tmpfiles.d/caddy.conf": "tmpfiles.conf"
  }
}
Deploy this:
$ deptool deploy
webserver (rollback unavailable)
update caddy
~ manifest.json
+ tmpfiles.conf
link /etc/tmpfiles.d/caddy.conf -> tmpfiles.conf
restart unit caddy.service
Rollback unavailable for some hosts.
Apply to 1 host in cluster 'prod'? [y/N/d]
This time the plan includes the new file tmpfiles.conf, and the new symlink at /etc/tmpfiles.d/caddy.conf. This symlink points through current:
root@webserver $ readlink /etc/tmpfiles.d/caddy.conf
/var/lib/deptool/apps/caddy/current/tmpfiles.conf
This means that if we make another change, the symlink will not change, only the target file. For example, let’s change the group owner from caddy to www:
$ echo 'd /var/lib/caddy 0770 caddy www - -' > prod/webserver/caddy/tmpfiles.conf
$ deptool deploy
webserver
update caddy
~ tmpfiles.conf
restart unit caddy.service
Auto-rollback if deploy fails.
Apply to 1 host in cluster 'prod'? [y/N/d]
This time the plan does not mention the symlink /etc/tmpfiles.d/caddy.conf, because it does not need to change. Press d to double-check the diff, then y to accept.
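The stability of the outer symlink is easy to verify locally. A sketch with hypothetical revision names rev1 and rev2, and local etc/ and deptool/ directories standing in for the real paths:

```shell
# Two revisions of the managed file, and a 'current' symlink between them.
mkdir -p deptool/apps/caddy/rev1 deptool/apps/caddy/rev2 etc/tmpfiles.d
echo 'd /var/lib/caddy 0700 caddy caddy - -' > deptool/apps/caddy/rev1/tmpfiles.conf
echo 'd /var/lib/caddy 0770 caddy www - -'   > deptool/apps/caddy/rev2/tmpfiles.conf
ln -sfn rev1 deptool/apps/caddy/current

# The outer symlink points through 'current', not at a specific revision.
ln -s ../../deptool/apps/caddy/current/tmpfiles.conf etc/tmpfiles.d/caddy.conf

# Flipping 'current' changes what the outer symlink resolves to,
# while the outer symlink itself stays untouched.
ln -sfn rev2 deptool/apps/caddy/current
cat etc/tmpfiles.d/caddy.conf       # the rev2 contents
readlink etc/tmpfiles.d/caddy.conf  # unchanged
```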
If we remove this symlink again from the manifest (in fact, we can remove the entire symlinks section), Deptool will remove the symlink from the host:
$ rm prod/webserver/caddy/tmpfiles.conf
$ vim prod/webserver/caddy/manifest.json
$ deptool deploy
webserver
update caddy
~ manifest.json
- tmpfiles.conf
unlink /etc/tmpfiles.d/caddy.conf
restart unit caddy.service
Auto-rollback if deploy fails.
Apply to 1 host in cluster 'prod'? [y/N/d]
This time the plan says:
- There was a change to the manifest.
- The file tmpfiles.conf will be removed.
- The unit caddy.service will be restarted, since there is a change to the app and Deptool can’t tell that a restart is not needed.
- The symlink /etc/tmpfiles.d/caddy.conf will be removed.
Because Deptool knows exactly which files it manages and what is currently deployed, it can clean up after itself and delete symlinks that are no longer included in a new revision of the configuration. As an additional safeguard, it will only remove symlinks that point into /var/lib/deptool.
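That safeguard is in essence a path check. A hypothetical helper (not Deptool's actual code) might look like this, using a local managed/ directory in place of /var/lib/deptool:

```shell
# Remove a symlink only if its target resolves inside the managed prefix.
remove_managed_symlink() {
    link=$1 prefix=$2
    target=$(readlink -f -- "$link") || return 1
    case $target in
        "$prefix"/*) rm -- "$link" ;;
        *) echo "refusing to remove $link: target outside $prefix" >&2
           return 1 ;;
    esac
}

mkdir -p managed unmanaged
touch managed/tmpfiles.conf unmanaged/passwd
ln -s "$PWD/managed/tmpfiles.conf" safe.conf
ln -s "$PWD/unmanaged/passwd" unsafe.conf

remove_managed_symlink safe.conf   "$PWD/managed"            # removed
remove_managed_symlink unsafe.conf "$PWD/managed" || true    # refused, kept
```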
Removing the app¶
To remove an app, simply remove it from the configuration, and Deptool will remove it from the host. In our somewhat artificial tutorial, this would leave the host directory empty, which will make Deptool ignore the host: just like Git, Deptool ignores empty directories. We can work around this by adding an empty file:
$ rm -fr prod/webserver/caddy
$ touch prod/webserver/intentionally-left-blank
$ deptool deploy
webserver
remove caddy
- Caddyfile
- manifest.json
- systemd/caddy.service
disable unit caddy.service
unlink unit caddy.service
Auto-rollback if deploy fails.
Apply to 1 host in cluster 'prod'? [y/N/d] y
webserver: done
If we check the host, the app is indeed gone:
root@webserver $ ls -l /var/lib/deptool/apps
total 0
root@webserver $ systemctl status caddy.service
Unit caddy.service could not be found.
/var/lib/deptool does still exist on the host, but it does not interfere with anything.
Conclusion¶
In this tutorial we deployed configuration for a single app on a single host. The cluster configuration resides in a directory tree, which we can apply against the cluster with deptool deploy. To add more hosts and more apps, simply create more directories in the configuration directory.
The store is located outside of the config directory prod, so that you can easily delete the entire config directory. This is because in larger cluster configurations, this tree is supposed to be generated rather than written by hand, and if we can delete and regenerate the config directory, then we can’t accidentally forget to delete files that are no longer generated by the generator. ↩
Deptool does not automatically activate all available units, because some units are not meant to be activated directly. For example, a unit may be activated through socket activation instead, or by a timer. ↩
The manifest is a JSON file, and not a YAML or TOML file, because in larger cluster configurations it’s supposed to be generated, not written by hand. ↩
While many applications can reload configuration, Deptool opts to keep things simple, so it always restarts all affected systemd units. ↩
This ensures that the driver side and agent side — deptool deploy, which runs on the operator machine, and deptool agent, which runs on the target host — run exactly the same binary, so there are never any compatibility issues with the wire protocol. Deptool automatically deletes older versions from this directory to prevent it from filling up the disk. ↩