Title: Declaratively manage your Qubes OS
Author: Solène
Date: 02 June 2023
Tags: qubesos salt qubes
Description: In this article, you will learn how to use Qubes OS'
internal Salt Stack configuration management to manage the system
programmatically
# Introduction

As a recent Qubes OS user, but also a NixOS user, I want to be able to
reproduce my system configuration instead of fiddling with files
everywhere by hand and being clueless about what I changed since
installation time.

Fortunately, Qubes OS is managed internally with Salt Stack (similar to
Ansible, if you didn't know about Salt), so we can leverage Salt to
modify dom0 or the Qubes templates/VMs.

Qubes OS official project website
Salt Stack project website
Qubes OS documentation: Salt
# Simple setup

In this example, I'll show how to write simple Salt state files,
allowing you to create/modify system files, install packages, add
repositories, etc.

Everything will happen in dom0, so you may want to install your
favorite text editor in it. Note that I'm still trying to figure out a
nice way to keep this configuration in a git repository and synchronize
it somewhere, but I still can't find a solution I like.

The dom0 Salt configuration can be found in `/srv/salt/`, this is where
we will write:

* a .top file that associates state files with the hosts they apply to
* a state file that contains the actual instructions to run
Quick extra explanation: there is a directory `/srv/pillar/` where you
store things named "pillars"; see them as metadata you can associate
with remote hosts (AppVMs / templates in the Qubes OS case). We won't
use pillars in this guide, but if you want to write more advanced
configurations, you will surely need them.
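To give a rough idea of what pillars look like, here is a minimal
sketch (the file names, the `mypillar` name, and the `packages` key are
all hypothetical, nothing later in this article uses them): a pillar
top file assigns pillar data to hosts, and states can then read that
data through Jinja.

```yaml
# /srv/pillar/custom.top -- hypothetical example: assign the pillar
# file "mypillar" to every host matching fedora-*
base:
  'fedora-*':
    - mypillar

# /srv/pillar/mypillar.sls -- the pillar data itself
packages:
  - git

# a state file could then read the value with Jinja, for example:
# {% for p in pillar.get('packages', []) %} ... {% endfor %}
```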
# dom0 management

Let's use dom0 to manage itself 🙃.

Create a text file `/srv/salt/custom.top` with the content (YAML
format):

```yaml
base:
  'dom0':
    - dom0
```
This tells Salt that hosts matching `dom0` (2nd line) will use the
state named `dom0`.

We need to enable that .top file so it will be included when Salt
applies the configuration:

```command
qubesctl top.enable custom
```
Now, create the file `/srv/salt/dom0.sls` with the content (YAML
format):

```yaml
my packages:
  pkg.installed:
    - pkgs:
      - kakoune
      - git
```

This uses the Salt module named `pkg` and passes it options in order to
install the packages "git" and "kakoune".

Salt Stack documentation about the pkg module
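The `pkg` module can do more than install: for instance, it can also
make sure a package stays absent. A small sketch (the package name is
just an example, not something I actually remove on my system):

```yaml
unwanted packages:
  pkg.removed:
    - pkgs:
      - nano
```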
On my computer, I added the following piece of configuration to
`/srv/salt/dom0.sls` to automatically assign the USB mouse to dom0
instead of being asked every time; this implements the instructions
explained in the documentation link below:

Qubes OS documentation: USB mice
```yaml
/etc/qubes-rpc/policy/qubes.InputMouse:
  file.line:
    - mode: ensure
    - content: "sys-usb dom0 allow"
    - before: "^sys-usb dom0 ask"
```

Salt Stack documentation: file line
This snippet makes sure that the line `sys-usb dom0 allow` is present
in the file `/etc/qubes-rpc/policy/qubes.InputMouse`, above the line
matching `^sys-usb dom0 ask`. This is a more reproducible way of adding
lines to a configuration file than editing it by hand.

Now, we need to apply the changes by running Salt on dom0:

```command
qubesctl --target dom0 state.apply
```
You will obtain a list of the operations done by Salt, with a diff for
each task, so it's easy to know if something changed.

Note: `state.apply` used to be named `state.highstate`; if you used
Salt a while ago, don't be confused, it's the same thing.
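Salt also has a dry-run mode that is handy before applying a new state:
passing `test=True` to `state.apply` reports what would change without
modifying anything (this should pass through `qubesctl`, which wraps
salt-call):

```command
qubesctl --target dom0 state.apply test=True
```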
# Template management

Using the same method as above, we will add a match for the fedora
templates in the custom top file.

In `/srv/salt/custom.top`, add under the `base:` key:

```yaml
  'fedora-*':
    - globbing: true
    - fedora
```
This example is slightly different from the dom0 one, where we matched
the host named "dom0". As I want my Salt files to require the least
maintenance possible, I won't write the template names verbatim;
instead I use globbing (the name for simple wildcards like `foo*`) to
match everything starting with `fedora-`. I currently have fedora-37
and fedora-38 on my computer, so they both match.
Create `/srv/salt/fedora.sls`:

```yaml
custom packages:
  pkg.installed:
    - pkgs:
      - borgbackup
      - dino
      - evolution
      - fossil
      - git
      - pavucontrol
      - rsync
      - sbcl
      - tig
```
In order to apply, we can type `qubesctl --all state.apply`. This will
work, but it's slow, as Salt will look for changes in each VM /
template (and we only added changes for the fedora templates here, so
nothing would change except for them).

For a faster feedback loop, we can specify one or multiple targets; for
me it would be `qubesctl --targets fedora-37,fedora-38 state.apply`,
but that's really a matter of me being impatient.
# Auto configure Split SSH

An interesting setup with Qubes OS is to keep your SSH key in a
separate VM and use Qubes OS' internal RPC to use that SSH agent from
another VM, with a manual confirmation on each use. However, this setup
requires modifying files in multiple places; let's see how to manage
everything with Salt.

Qubes OS community documentation: Split SSH

Reusing the file `/srv/salt/custom.top` created earlier, we add
`split_ssh_client.sls` for the AppVMs that will use the split SSH
setup. Note that you should not deploy this state to your vault VM: it
would reference itself for SSH and prevent the agent from starting
(been there :P):
```yaml
base:
  'dom0':
    - dom0
  'fedora-*':
    - globbing: true
    - fedora
  'MyDevAppVm or MyWebBrowserAppVM':
    - split_ssh_client
```
Create `/srv/salt/split_ssh_client.sls`: this will add two files
containing the environment setup, sourced from `/rw/config/rc.local`
and `~/.bashrc`. It's actually easier to keep the bash snippets in
separate files and `source` them, rather than using Salt to insert the
snippets directly in place where needed.
```yaml
/rw/config/bashrc_ssh_agent:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        SSH_VAULT_VM="vault"
        if [ "$SSH_VAULT_VM" != "" ]; then
          export SSH_AUTH_SOCK="/home/user/.SSH_AGENT_$SSH_VAULT_VM"
        fi

/rw/config/rclocal_ssh_agent:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        SSH_VAULT_VM="vault"
        if [ "$SSH_VAULT_VM" != "" ]; then
          export SSH_SOCK="/home/user/.SSH_AGENT_$SSH_VAULT_VM"
          rm -f "$SSH_SOCK"
          sudo -u user /bin/sh -c "umask 177 && exec socat 'UNIX-LISTEN:$SSH_SO…
        fi

/rw/config/rc.local:
  file.append:
    - text: source /rw/config/rclocal_ssh_agent

/rw/home/user/.bashrc:
  file.append:
    - text: source /rw/config/bashrc_ssh_agent
```
Edit `/srv/salt/dom0.sls` to add the SshAgent RPC policy:

```yaml
/etc/qubes-rpc/policy/qubes.SshAgent:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        MyClientSSH vault ask,default_target=vault
```
Now, run `qubesctl --all state.apply` to configure all your VMs: the
templates, dom0 and the matching AppVMs. If everything went well, you
shouldn't get any errors when running the command.
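To check that the setup works, you can open a shell in one of the
client AppVMs and ask the agent for its keys; assuming the vault's
ssh-agent actually holds a key, this should trigger the RPC
confirmation prompt and then list the public keys:

```command
ssh-add -L
```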
# Use a dedicated AppVM for web browsing

Another real world example: using Salt to configure your AppVMs to open
links in a dedicated AppVM (named WWW for me).

Qubes OS Community Documentation: Opening URLs in VMs

In your custom top file `/srv/salt/custom.top`, you need something
similar to this (please adapt if you already have top files or state
files):
```yaml
base:
  'dom0':
    - dom0
  'fedora-*':
    - globbing: true
    - fedora
  'vault or qubes-communication or qubes-devel':
    - default_www
```
Add the following text to `/srv/salt/dom0.sls`; this is used to
configure the RPC policy:

```yaml
/etc/qubes-rpc/policy/qubes.OpenURL:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        @anyvm @anyvm ask,default_target=WWW
```
Add this to `/srv/salt/fedora.sls` to create the desktop file in the
template:

```yaml
/usr/share/applications/browser_vm.desktop:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        [Desktop Entry]
        Encoding=UTF-8
        Name=BrowserVM
        Exec=qvm-open-in-vm browser %u
        Terminal=false
        X-MultipleArgs=false
        Type=Application
        Categories=Network;WebBrowser;
        MimeType=x-scheme-handler/unknown;x-scheme-handler/about;text/html;text…
```
Create `/srv/salt/default_www.sls` with the following content; this
will run xdg-settings to set the default browser:

```yaml
xdg-settings set default-web-browser browser_vm.desktop:
  cmd.run:
    - runas: user
```
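One caveat: a plain `cmd.run` state executes on every apply. If that
bothers you, Salt's `unless` argument can guard it. Here is a sketch,
assuming `xdg-settings check` prints "yes" when the handler is already
set (verify the behaviour on your template first):

```yaml
xdg-settings set default-web-browser browser_vm.desktop:
  cmd.run:
    - runas: user
    # skip the state when the browser is already configured
    - unless: xdg-settings check default-web-browser browser_vm.desktop | grep -q yes
```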
Now, run `qubesctl --target fedora-38,dom0 state.apply`.

From there, you MUST reboot the VMs that will be configured to use the
WWW AppVM as the default browser: they need to have the new file
`browser_vm.desktop` available for `xdg-settings` to succeed. Then run
`qubesctl --target vault,qubes-communication,qubes-devel state.apply`.

Congratulations, you will now get an RPC prompt when an AppVM wants to
open a URL, asking whether to open it in your browsing AppVM.
# Conclusion

This method is a powerful way to manage your hosts, and it's ready to
use on Qubes OS. Unfortunately, I still need to figure out a nicer way
to export the custom files written in /srv/salt/ and track the changes
properly in a version control system.

Erratum: I found a solution to manage the files :-) stay tuned for the
next article.