<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
 <channel>
   <title>Solene'%</title>
   <description></description>
   <link>https://dataswamp.org/~solene/</link>
   <atom:link href="https://dataswamp.org/~solene/rss.xml" rel="self" type="application/rss+xml" />
   <item>
 <title>Hardware review: ergonomic mouse Logitech Lift</title>
 <description>
   <![CDATA[
<pre># Introduction

In addition to my regular computer mouse, at the end of 2024 I bought a Logitech Lift, a wireless ergonomic vertical mouse.  This was the first time I used such a mouse; although I regularly use a trackball, the experience is really different.

=> https://www.logitech.com/en-gb/shop/p/lift-vertical-ergonomic-mouse.910-006475 Logitech.com : Lift product

I wanted to write this article to give some feedback about this device: I enjoy it a lot, and I cannot really go back to a regular mouse now.

# Specifications

The mouse works with a single AA / LR6 battery which, after nine months of heavy daily use, is still reported as 30% charged.

The Lift connects using Bluetooth, but Logitech provides a small USB dongle for a perfect "out of the box" experience with any operating system.  The dongle can be stored within the mouse when travelling, or when not using it.  There is a small button on the bottom of the mouse and three LEDs, which allow the mouse to be switched between different computers: two over Bluetooth, one for the dongle.  The first profile is always the dongle.  This allows you to connect the mouse to two different computers over Bluetooth and switch between them.  This works very well in practice.

About the buttons: nothing fancy with the standard two, there are extra "back / next" buttons within easy reach, and one button to cycle the laser resolution / sensitivity.  The wheel is excellent, precise and easy to use; if you give it a good kick it will spin a lot without entering a free-wheel mode like some other wheels, which is super handy to scroll through a huge chunk of text.

Due to its design, the mouse is not ambidextrous, but Logitech makes versions for both left-handed and right-handed users.

# Experience

The first week with the mouse was really weird: I kept switching back and forth with my old SteelSeries mouse because I was less accurate and not used to it yet.

After a week, I got used to holding it and moving it, and it became a real joy and source of fun to sit at the computer and use this mouse :)

Then, without noticing, I started using it exclusively.  A few months later, I realized I had not used the previous mouse for a long time and gave it a try.  This was a terrible experience: I was surprised by how poorly it fit my hand, so I disconnected it, and it has been stored in a box since then.

It is hard to describe the feeling of this ergonomic mouse: the hand position is really different, but it is so much more enjoyable that I do not consider using a non-ergonomic mouse ever again.

I was reluctant to use a wireless mouse at first, but not having to deal with the cable acting as a "spring" is really appreciable.

I can definitely play video games with this mouse, except fast-paced FPS games (maybe with some training?).

# Conclusion

The price tag could be a blocker for many, but at the same time it is an essential peripheral when using your computer.  If you feel some pain in your hand when using your computer mouse, consider giving ergonomic mice a try.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-hardware-review-logitech-lift</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-hardware-review-logitech-lift</link>
 <pubDate>Fri, 05 Sep 2025 00:00:00 GMT</pubDate>
</item>
<item>
 <title>URL filtering HTTP(S) proxy on Qubes OS</title>
 <description>
   <![CDATA[
<pre># Preamble

This article was first published as a community guide on Qubes OS forum.  Both are kept in sync.

=> https://forum.qubes-os.org/t/url-filtering-https-proxy/35846

# Introduction

This guide is meant for users who want to allow a qube to reach some websites but not the whole Internet, while facing the issue that the firewall does not work well for DNS names resolving to frequently changing IPs.

⚠️ This guide is for advanced users who understand what an HTTP(S) proxy is, and how to type commands or edit files in a terminal.

The setup will create a `sys-proxy-out` qube that defines a list of allowed domains, and use qvm-connect-tcp to allow client qubes to use it as a proxy. Those qubes can have no netvm and still reach the filtered websites.

I based it on Debian 12 Xfce, so it is easy to set up and will be supported long term.

# Use case

* an offline qube that needs to reach a particular website
* a web browsing qube restricted to a list of websites
* mix multiple netvm / VPNs into a single qube

# Setup the template

* Install debian-12-xfce template
* Make a clone of it, let's call it debian-12-xfce-squid
* Start the qube and open a terminal
* Type `sudo apt install -y squid`
* Delete and replace `/etc/squid/squid.conf` with this content (the default file is not suitable at all)

```
acl localnet src 127.0.0.1/32

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

acl permit_list dstdomain '/rw/config/domains.txt'
http_access allow localnet permit_list

http_port 3128

cache deny all
logfile_rotate 0
coredump_dir /var/spool/squid
```

The configuration file only allows the proxy to be used for ports 80 and 443, and disables the cache (which would only apply to port 80).

Close the template, you are done with it.

# Setup an out proxy qube

This step could be repeated multiple times, if you want to have multiple proxies with different lists of domains.

* Create a new qube, let's call it `sys-proxy-out`, based on the template you configured above (`debian-12-xfce-squid` in the example)
* Configure its firewall to allow destination `*` on TCP port 443, and also `*` on TCP port 80 (this covers basic needs for HTTP/HTTPS). This is an extra safety measure to be sure the proxy will not use other ports.
* Start the qube
* Configure the domain list in `/rw/config/domains.txt` with this format:

```
# for a single domain
domain.example

# for qubes-os.org and all its direct subdomains
# this works for doc.qubes-os.org for instance, but not foo.doc.qubes-os.org
.qubes-os.org
```

ℹ️ If you change the file, reload with `sudo systemctl reload squid`.

ℹ️ If you want to check that squid started correctly, type `systemctl status squid`.  You should read that it is active, and that there are no errors in the log lines.

⚠️ If you have a line with a domain already included by another line, squid will not start as it considers this an error! For instance, `.qubes-os.org` includes `doc.qubes-os.org`.

⚠️ As far as I know, it is only possible to allow a hostname or a wildcard of this hostname, so you at least need to know the depth of the hostname. If you want to allow `anything.anylevel.domain.com`, you could use `dstdom_regex` instead of `dstdomain`, but it seems to be a regular source of configuration problems, and it should not be needed by most users.

In dom0, using the "Qubes Policy Editor" GUI, create a new file named 50-squid (or edit the file `/etc/qubes/policy.d/50-squid.policy`) and append the configuration lines, adapted to your needs from the following example:

```
qubes.ConnectTCP +3128 MyQube @default allow target=sys-proxy-out
qubes.ConnectTCP +3128 MyQube2 @default allow target=sys-proxy-out
```

This will allow qubes `MyQube` and `MyQube2` to use the proxy from `sys-proxy-out`. Adapt it to your needs.

# How to use the proxy

Now that the proxy is set up and `MyQube` is allowed to use it, a few more things are required:

* Start qube `MyQube`
* Edit `/rw/config/rc.local` to add `qvm-connect-tcp ::3128`
* Configure http(s) clients to use `localhost:3128` as a proxy

It is possible to define the proxy user-wide, so it should be picked up by all running programs, using this:

```
mkdir -p /home/user/.config/environment.d/
cat <<EOF >/home/user/.config/environment.d/proxy.conf
all_proxy=http://127.0.0.1:3128/
EOF
```
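
To verify the variable was picked up after your next login, a quick check (a sketch, assuming a systemd user session manages your environment):

```
systemctl --user show-environment | grep all_proxy
```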

# Going further

## Using a disposable qube for the proxy

The sys-proxy-out could be a disposable. In order to proceed:

* mark sys-proxy-out as a disposable template in its settings
* create a new disposable qube using sys-proxy-out as a template
* adapt the dom0 rule to have the new disposable qube name in the target field

## Checking logs

In the proxy qube, you can check all requests in `/var/log/squid/access.log`.  You can filter with `grep TCP_DENIED` to see denied requests, which can be useful to adapt the domain list.
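
For instance, to look at the most recently denied requests (a sketch; the exact log columns may vary with the squid version):

```
grep TCP_DENIED /var/log/squid/access.log | tail -n 20
```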

## Test the proxy

### Check allowed domains are reachable

From the http(s) client qube, you can try this command to see if the proxy is working:

```
curl -x http://localhost:3128 https://a_domain_you_allowed/
```

If the output is not `curl: (56) CONNECT tunnel failed, response 403` then it's working.

### Check non-allowed domains are denied

Use the same command as above, but with a domain you did not allow:

```
curl -x http://localhost:3128 https://a_domain_you_did_not_allow/
```

The output should be `curl: (56) CONNECT tunnel failed, response 403`.

### Verify nothing is getting cached

In the qube `sys-proxy-out`, inspect `/var/spool/squid/`: it should be empty. If not, please report it on the forum topic, as this should not happen.

Some log files exist in `/var/log/squid/`; if you do not want any hints about queried domains to remain, configure squid accordingly. Privacy-specific tweaks are beyond the scope of this guide.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-qubes-os-filtering-out-proxy</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-qubes-os-filtering-out-proxy</link>
 <pubDate>Fri, 29 Aug 2025 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Introduction to Qubes OS when you do not know what it is</title>
 <description>
   <![CDATA[
<pre># Introduction

Qubes OS can appear as something weird and hard to grasp for people who have never used it.  With this article, I would like to help others understand what it is, and when it is useful.

=> https://www.qubes-os.org/ Qubes OS official project page

Two years ago, I wrote something that was mostly a list of Qubes OS features, but it did not really help readers understand what Qubes OS is, beyond the fact that it does XYZ stuff.

While Qubes OS is often tagged as a security operating system, what it really offers is a canvas for handling compartmentalized systems that work as a whole.

Qubes OS gives its user the ability to do cyber risk management the way they want, which is unique.  A quick word about it if you are not familiar with risk management: for instance, when running software at different levels, you should ask "can I trust this?".  Can you trust the packager?  The signing key?  The original developer?  The transitive dependencies involved?  It is not possible to entirely trust the whole chain, so you might want to take actions like handling sensitive data only when disconnected.  Or you might want to ensure that if your web browser is compromised, the data leak and damage will be reduced to a minimum.  This can go pretty far and is complementary to defense in depth or security hardening of operating systems.

=> https://dataswamp.org/~solene/2023-06-17-qubes-os-why.html 2023-06-17 Why one would use Qubes OS?

In this article, I will skip some features that I do not think are interesting for introducing Qubes OS, or that could be too confusing, so no need to tell me I forgot to talk about feature XYZ :-)

# Meta operating system

I like to call Qubes OS a meta operating system, because it is not a Linux / BSD / Windows based OS: its core is Xen (a hypervisor, some kind of virtualization-oriented kernel).  Not only is it Xen based, but by design it is meant to run virtual machines, hence the name "meta operating system": an OS meant to run many OSes.

Qubes OS comes with a few virtual machines templates that are managed by the development team:

* Debian
* Fedora
* Whonix (a Debian-based distribution hardened for privacy)

There are also community templates for Arch Linux, Gentoo, Alpine, Kali, Kicksecure, and certainly others you can find within the community.

Templates are not just templates: they are ready-to-work systems, installable in one click/command, that integrate well within Qubes OS.  It is time to explain how virtual machines interact together, as this is what makes Qubes OS great compared to any Linux system running KVM.

A virtual machine is named a "qube": it is a set of information and integration (template, firewall rules, resources, services, icons, ...).

# Virtual machines synergy and integration

The host system, which holds some kind of "admin" powers with regard to virtualization, is named dom0 in Xen jargon.  On Qubes OS, dom0 is a Fedora system (using a Xen kernel) with very few things installed, no networking and no USB access.  Those two device classes are assigned to two qubes, respectively named "sys-net" and "sys-usb", in order to reduce the attack surface of dom0.

When running a graphical program within a qube, it shows as a dedicated window in the dom0 window manager; there is no big window per virtual machine, so running programs feels like a unified experience.  This seamless windows feature works through a specific graphics driver within the qube; official templates support it, and there is a Windows driver for it too.

Each qube has its own X11 server running, its own clipboard, kernel and memory.  There is a feature to copy the clipboard of one qube and transfer it to the clipboard of another qube.  This can be configured to prevent clipboard transfers where they should not happen.  This is rather practical if you store all your passwords in a qube and you want to copy/paste them.

There are also file copy capabilities between qubes, which go through Xen channels (an interconnection between Xen virtual machines allowing data transfers), so no network is involved.  File copy can also be configured: for example, one qube may be able to receive files from any other, but never be allowed to transfer files out.
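
As a quick illustration, here is what a file copy looks like from inside a qube (a sketch; "work" and the file name are just examples):

```
# run inside the qube "work"; dom0 will prompt for the target qube
qvm-copy report.pdf

# the file then appears in the target qube under:
# ~/QubesIncoming/work/report.pdf
```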

In operations involving RPC features like file copy, a GUI prompt in dom0 asks the user for confirmation (with a tiny delay to prevent hitting Enter before having a chance to understand what is going on).

As mentioned above, USB devices are assigned to a qube named "sys-usb", which provides a program to pass a device to a given qube (still through Xen channels), so it is easy to dispatch devices where you need them.

# Networking

Qubes OS offers tree-like networking, with sys-net (holding the hardware networking devices) at the root and a sys-firewall qube below it; from there, you can attach qubes to sys-firewall to get network access.

Firewall rules can be configured per qube, and they are applied by the qube providing network to the one being configured; this prevents a qube from removing its own rules, because filtering is done one level higher in the tree.

Tree-like networking also allows running multiple VPNs in parallel and assigning qubes to each VPN as you need.  In my case, when I work for multiple clients, they all have their own VPN, so I dedicate a qube to connecting to each client's VPN, then I attach the qubes I use to work for this client to the corresponding VPN qube.  With a firewall rule set on the VPN qube to prevent any connection except to the VPN endpoint, I have the guarantee that all traffic for that client's work will go through their VPN.

It is also possible to give a qube no network at all, so it is offline and unable to connect to anything.

Qubes OS comes out of the box (unless you uncheck the box during installation) with a qube routing all its network traffic through the Tor network (incompatible traffic like UDP is discarded).

# Templates (in Qubes OS jargon)

I talked about templates earlier, in the sense of "ready to be installed and used", but a "Template VM" in Qubes OS has a special meaning.  In order to make things manageable when you have a few dozen qubes, like handling updates or installing software, Qubes OS introduced Template VMs.

A Template VM is a qube that you almost never use directly, except when you need to install software or make a system change within it.  The Qubes OS updater will also make sure, from time to time, that installed packages are up to date.

So, what are they for if they are not used?  They are templates for a type of qube named "AppVM".  An AppVM is what you work with the most.  It is an instance of the template it is configured to use, always reset to pristine state when starting, with a few directories persistent across reboots for this AppVM.  The persistent directories are all in `/rw/` and symlinked where useful: `/home` and `/usr/local/` by default.  You can have a single Debian 13 Template VM and a dozen AppVMs, each with their own data: if you want to install "vim", you do it in the template, and all AppVMs using the Debian 13 Template VM will have "vim" installed (after a reboot following the change). Note that this also works for emacs :)

With this mechanism, it is easy to switch an AppVM from one Linux distribution to another: just switch the qube's template to use Fedora instead of Debian, reboot, done.  This is also useful when switching to a new major release of the distribution in the template: Debian 13 is buggy?  Switch back to Debian 12 until it is fixed and continue working (do not forget to write a bug report to Debian).
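
In practice, switching a qube's template is a one-liner in dom0 (a sketch; "work" and the template names are just examples):

```
qvm-shutdown --wait work
qvm-prefs work template debian-12-xfce
qvm-start work
```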

# Disposables templates

You learned about Template VMs and how an AppVM inherits everything from its template, reset to a fresh state every time.  What about an AppVM that could itself be run from a pristine state the same way?  They did it: it is called a disposable qube.

Basically, a disposable qube is a temporary copy of an AppVM with all its storage discarded on shutdown.  This is the default for the sys-usb qube handling USB: if it gets infected by a device, it will be reset to a fresh state at the next boot.

Disposables have many use cases:

* running a command on a non-trusted file, to view it or try to convert it to something more trustworthy (a PDF into a BMP?)
* running a known-good system for a specific task, being sure it will work exactly the same every time, like when using a printer
* as a playground to try stuff in an environment identical to another

# Automatic snapshot

Last but not least, a pretty nice but hidden feature is the ability to revert the storage of a qube to a previous state.

=> https://www.qubes-os.org/doc/volume-backup-revert/ Qubes OS documentation: volume backup and revert

Qubes use virtual storage that can stack multiple changes: a base image with different layers of changes stacked on top of it over time.  Once the configured number of revisions to keep is reached, the oldest layer above the base image is merged into it.  This simple mechanism allows reverting to any checkpoint between the base image and the last one.

Did you delete important files, and restoring a backup is way too much effort?  Revert the last volume.  Did a package update break an important piece of software in a template?  Revert the last volume.

Obviously, this comes with an extra storage cost: deleted files are only freed from storage once they no longer exist in any checkpoint.
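
Volume reverts are done from dom0 (a sketch; "work" is just an example qube name):

```
# list the available revisions of the private volume
qvm-volume info work:private

# revert the private volume to the latest saved revision
qvm-volume revert work:private
```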

# Downsides of running Qubes OS

Qubes OS has some drawbacks:

* it is slower than running a vanilla system, because all the virtualization involved has a cost; most notably, all 3D rendering is done on the CPU within qubes, which is terrible for eye candy effects or video decoding.  It is possible, with a lot of effort, to assign a second GPU (when you have one) to a single qube at a time, but as this already too long sentence is telling out loud, it is not practical.
* it requires effort to get into, as it is different from your usual operating system; you will need to learn how to use it (which sounds rather logical when picking up a tool)
* hardware compatibility is a bit limited due to the Xen kernel; there is a compatibility list curated by the community

=> https://www.qubes-os.org/hcl/ Qubes OS hardware compatibility list

# Conclusion

I tried to give a simple overview of the major Qubes OS features.  The goal was not to make you, the reader, an expert, or to cover every single feature, but to allow you to understand what Qubes OS can offer.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-introduction-to-qubes-os</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-introduction-to-qubes-os</link>
 <pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
</item>
<item>
 <title>How to trigger a command on a running Linux laptop when disconnected from power</title>
 <description>
   <![CDATA[
<pre># Introduction

After thinking about the BusKill product, which triggers a command once its USB cord is disconnected, I thought about a simple alternative.

=> https://www.buskill.in BusKill official project website

When using a laptop connected to power most of the time, you may want it to power off once it gets disconnected; this can be really useful if you use it in a public area like a bar or a train.  The idea is to protect the laptop if it gets stolen while in use and unlocked.

Here is how to proceed on Linux, using a udev rule triggered by a change in the power_supply subsystem.

For OpenBSD users, it is possible to use apmd as I explained in this article:

=> https://dataswamp.org/~solene/2024-02-20-rarely-known-openbsd-features.html#_apmd_daemon_hooks Rarely known OpenBSD features: apmd daemon hooks

In the example, the script will just power off the machine; it is up to you to do whatever you want, like destroying the LUKS master key or triggering the coffee machine :D

# Setup

Create a file `/etc/udev/rules.d/disconnect.rules`; you can name it whatever you want as long as it ends with `.rules`:

```
SUBSYSTEM=="power_supply", ENV{POWER_SUPPLY_ONLINE}=="0", ENV{POWER_SUPPLY_TYPE}=="Mains", RUN+="/usr/local/bin/power_supply_off"
```

Create a file `/usr/local/bin/power_supply_off` that will be executed when you unplug the laptop:

```
#!/bin/sh
echo "Going off because power supply got disconnected" | systemd-cat
systemctl poweroff
```

This simple script will add an entry in journald before triggering the system shutdown.

Mark this script executable with:
```
chmod +x /usr/local/bin/power_supply_off
```

Reload udev rules using the following commands:

```
udevadm control --reload-rules
udevadm trigger
```

# Testing

If you unplug your laptop's power, it should power off, and you should find an entry in the logs.

If nothing happens, look at the systemd logs to see if something is wrong in udev, like a syntax error in the file you created or an incorrect path for the script.
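
You can also dry-run the rule against your power supply device without actually unplugging anything (a sketch; the device name `AC` varies between machines, check `ls /sys/class/power_supply/`):

```
udevadm test /sys/class/power_supply/AC 2>&1 | grep power_supply_off
```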

# Script ideas

Depending on your needs, here is a list of actions the script could do, from gentle to hardcore:

* Lock user sessions
* Hibernate
* Proper shutdown
* Instant power off (through sysrq)
* Destroy LUKS master key to make LUKS volume unrecoverable + Instant power off
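
As an illustration of the hardcore end of this list, here is a sketch of an alternative script body (the partition path is a placeholder to adapt; this is destructive and irreversible without a LUKS header backup):

```
#!/bin/sh
# wipe all LUKS key slots, making the volume unrecoverable
cryptsetup luksErase --batch-mode /dev/nvme0n1p3
# instant power off through sysrq, skipping a clean shutdown
echo o > /proc/sysrq-trigger
```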

# Conclusion

While BusKill is an effective / unusual product that is certainly useful for a niche, protecting a running laptop against thieves adds an extra layer of security when working outside.

Obviously, this use case works only when the laptop is connected to power.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-linux-killswitch-on-power-disconnect</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-linux-killswitch-on-power-disconnect</link>
 <pubDate>Sat, 31 May 2025 00:00:00 GMT</pubDate>
</item>
<item>
 <title>PDF bruteforce tool to recover locked files</title>
 <description>
   <![CDATA[
<pre># Introduction

Today, I had to open a password protected PDF (a medical report); unfortunately, it is a few years old and I did not remember the password format (usually something based on a name and birthdate -_-).

I found a nice tool that can try a lot of combinations, and it gets even better: if you roughly know the password format, you can easily generate patterns to test.

=> https://github.com/mufeedvh/pdfrip pdfrip GitHub page

# Usage

The project page offers binaries for some operating systems, but you can compile it using cargo.

The documentation in the project's README is quite clear and easy to understand.  It is possible to generate simple patterns, try all combinations of random characters, or use a dictionary (some tools exist to generate dictionaries).
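
For the name + birthdate case, a pattern query is the most efficient approach.  A sketch from memory of the README (double check the exact syntax with `pdfrip --help`; the file name and pattern are just examples):

```
# try a 4-digit numeric password
pdfrip -f report.pdf range 1000 9999

# try a name followed by a birth year
pdfrip -f report.pdf custom-query 'DUPONT{1950-2010}'
```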

Inside a virtual machine with 4 vCPUs, I was able to achieve 36,000 checks per second; on bare metal, I expect this to be higher.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-test-pdf-passwords</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-test-pdf-passwords</link>
 <pubDate>Sun, 09 Mar 2025 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Blog activity for 2025</title>
 <description>
   <![CDATA[
<pre># Introduction

Hello, you may have wondered why the blog has not been very active this year, so let's talk about it :-)

# Retrospective

## Patreon

First, I decided to stop the Patreon page, for multiple reasons.  It was an interesting experiment and helped me a lot in 2023 and part of 2024, as I went freelance and did not earn much money.  Now the business is running fine, and I would prefer my former patrons to support someone else who is more active or needs the money.

The way I implemented Patreon support was like this: people supporting me financially had access to blog posts 2 or 3 days before the public release; the point was to give them a little something for their support without creating a paywall for some content.  I think it worked quite well in that regard.  A side effect of the "early access publishing" was that, almost every time, I used this extra delay to add more content / fix issues that I did not think about when writing.  As a reminder, I usually just write, proofread quickly, and publish.

Having people pay money for early access to my blog posts created some kind of expectation from them in my mind, so I tried to raise the bar in terms of content, to the point that I started to procrastinate because "this blog post will not be interesting enough" or "this will just take too long to write, I'm bored".  My writing cadence got delayed: I was able to sustain one post a week at first, then moved to twice a month.  I have no idea if readers actually had "expectations", but I imagined they did and acted as if it were a thing.

For each blog post I was publishing, this also created extra work for me:

* publish in early access
* write short news about it on Patreon
* wait a few days and republish not in early access

It is not much more work, but it was still more work to think about and schedule.

Cherry on the cake: Patreon was already bloated when I started using it, but it has become more and more aggressive in terms of marketing and selling features, which disgusted me at some point.  I was not using any of this, but I felt bad that people supporting me had to deal with it.

I used Patreon to publish an "I am stopping Patreon support but the blog continues" news post, but it seems this is poorly handled by Patreon: when you freeze a creator's page, subscribers are not able to see anything anymore?!  Sorry for the lack of news, I thought it was working fine :/

## Different contribution place

The blog started, and has lived, as the place where I shared my knowledge during my continuous learning journey.  The thing is, I learn less nowadays, and what I learn is more complicated knowledge that is hard to share, because it is super niche and certainly not fascinating to most, and because sharing it correctly may be hard.

Most of the blog is about OpenBSD; there was no community place to share this kind of content, so I self-hosted it.  Then, I started to write about NixOS and got invited by the people I worked with at that time (at the company Tweag) to contribute to the NixOS documentation; after all, it made sense not to write something only I could update and that could not be fixed by others.  I did it a bit, but also continued my blog in parallel to share experience and ideas, not really "documentation".

I have now been using Qubes OS daily for more than a year.  I wrote a bit about it, but I started to contribute actively to the community guides hosted on the project's forum.  As a result, there was less content to publish on the blog, because it just makes sense to centralize all the documentation in one place that can be managed by a team instead of here.

I spent a lot of time contributing to Qubes OS community guides, mostly about networking/VPNs, and in early 2025 I officially joined the Qubes OS core team as a documentation maintainer (concretely, this gives commit rights on some repositories related to the website/documentation).  The Qubes OS team is super nice, and the way the work is handled is cool.  I will spend a lot of contribution time there (there is a huge backlog of changes to review first), so still less time and incentive to write here.

## New real job and new place

As stated earlier, I finally found a workplace that I enjoy and that keeps me busy; my last two employers were not really able to figure out how to use my weird skill set.  I had a lot of time to kill at work in previous years, hence time to experiment and write; I just have a lot less time now because I am really busy doing cool things at work.

My family also moved to a new place in 2024; there is a lot of work and gardening to handle, so between this and my job, I just do not have many things to share on the blog at the moment.

# Conclusion

The blog is not dead.  I think I will be able to resume activity soon, now that I have turned the page on Patreon and identified why I was not writing here (I like writing here!).

I have a backlog of ideas, and I may also write simpler blog posts when I want to share an idea or a cool project without covering it entirely.

Thank you everyone for your support!
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-2025-news</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-2025-news</link>
 <pubDate>Sun, 16 Feb 2025 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Introduction to GrapheneOS</title>
 <description>
   <![CDATA[
<pre># Introduction

This blog post is an introduction to the smartphone and security oriented operating system GrapheneOS.

=> https://grapheneos.org/ GrapheneOS official project web page

Thanks to my patrons' support, last week I was able to replace my 6.5-year-old BQ Aquaris X, which had been successfully running LineageOS all that time, with a Google Pixel 8a now running GrapheneOS.

Introducing GrapheneOS is a daunting task; I will do my best to present the basic information you need to figure out whether it might be useful for you, and leave a link to the project FAQ, which contains a lot of valuable technical explanations I do not want to repeat here.

=> https://grapheneos.org/faq GrapheneOS FAQ

# What is GrapheneOS?

GrapheneOS (written GOS from now on) is an Android-based operating system that focuses on security.  It is only compatible with Google Pixel devices, for multiple reasons: availability of hardware security components, long-term support (the 8 and 9 series are supported for at least 7 years after release), and a good quality / price ratio for the hardware.

The goal of GOS is to give users a lot more control over what their smartphone is doing.  A main profile is used by default (the owner profile), but users are encouraged to do all their activities in a separate profile (or multiple profiles).  This may remind you of the Qubes OS workflow, although it does not translate entirely here.  Profiles cannot communicate with each other, encryption is done per profile, and some permissions can be assigned per profile (installing apps, running applications in the background when a profile is not in use, using the SIM...).  This is really effective for privacy or security reasons (or both): you can have a different VPN per profile if you want, or use a different Google Play login, different application sets, whatever!  The best feature here, in my opinion, is the ability to completely stop a profile, so you are sure it does not run anything in the background once you exit it.

When you make a new profile, it is important to understand that it is like booting your phone again: on the first login to the profile, you will be asked questions as if you had started the system for the first time.  All settings have their default values, and any change is limited to the profile only; this includes ringtones, sound, default apps, themes…  Switching between profiles is a bit painful: you need to pull the top-to-bottom dropdown menu to full size, tap the bottom right corner icon, choose the profile you want to switch to, and type the PIN of that profile.  Only the owner profile can toggle important settings like the 4G/5G network, do SIM operations, and other "lower level" settings.

GOS has a focus on privacy, but leaves the user in charge.  Google Play and Google Play Services can be installed in one click from a dedicated GOS app store, which is limited to GOS apps only, as you are supposed to install apps from Google Play, F-Droid or Accrescent.  Applications can be installed in a single profile, but they can also be installed in the owner profile, which lets you copy them to other profiles.  This is actually what I do: I install all apps in the owner profile, always unchecking the "network permission" so they just cannot do anything, and then I copy them to the profiles where I will use them for real.  There is no good or bad approach; pick whatever fits your needs in terms of usability, privacy and security.

Just to make it clear: it is possible to use GOS totally Google-free, but if you want to use Google services, it is made super easy to do so.  Google Play can even be used in a dedicated profile if you only ever need it once.

# Installation and updates

The installation was really simple, as it can be done from a web page (from a Linux, Windows or macOS system) by just clicking buttons in the correct order on the installation page.  The image integrity check can be done AFTER installation, thanks to the TPM features in the phone which guarantee that only valid software can boot; this allows you to generate a proof of boot that is basically a post-install checksum (more explanations on the GOS website).  The whole process took approximately 15 minutes between plugging the phone into my computer and using the phone.

It is possible to install from the command line, I did not test it.

Updates are 100% over-the-air (OTA), which means the system is able to download updates over the network.  This is rather practical, as you never need to run any adb command to push a new image, which has always been a stressful experience for me when using smartphones.  GOS automatically downloads base system updates and offers to reboot to install them, while GOS apps are just downloaded and updated in place.  This is a huge difference from LineageOS, which always required manually downloading new builds, with application updates being part of the big image update.

# Permission management

A cool thing with GOS is the tight control offered over applications.  First, this is done per profile, so if you use the same app in two profiles, you can give it different permissions; second, GOS allows you to define a scope for some permissions.  For example, if an application requires the storage permission, you can list which paths are allowed; if it requires contacts access, you can give it a list of contact entries (or an empty one).

The GOS Google Play installation (not installed by default) is sandboxed to restrict what it can do; they even succeeded at sandboxing Android Auto (more details in the FAQ).  I have a dedicated Android Auto profile; the setup was easy thanks to the FAQ, as a lot of permissions must be manually granted for it to work.

GOS does not allow you to become root on your phone, though; it just gives you more control through permissions and profiles.

# Performance

I did not try CPU/GPU intensive tasks so far, but there should be almost no visible performance penalty when using GOS.  Many extra security features are enabled, which may lead to a few percent of extra CPU usage, but there are no benchmarks, and the few reviews by people who played demanding video games on their phone did not notice any performance change.

# Security

The GOS website has a long and well detailed list of hardening done on top of the stock Android code; you can read about it at the following link.

=> https://grapheneos.org/features#exploit-protection GrapheneOS website: Exploitation Protection

# My workflow

As an example, here is how I configured my device.  This is not the only way to proceed; I just share it to give readers an idea of what it looks like for me:

* my owner profile has Google Play installed and is used to install most apps.  All apps are installed there with no network permission, then I copy them to the profiles that will use them.
* a profile that looks like what I was doing on my previous phone: allowed to do phone/SMS, with a web browser, IM apps, and a TOTP app.
* a profile for multimedia, where I store music files, run audio players and use Android Auto.  The profile is not allowed to run in the background.
* a profile for games (local and cloud).  The profile is not allowed to run in the background.
* an "other" profile used to run crappy apps.  The profile is not allowed to run in the background.
* a profile for each of my clients, so I can store any authentication app (TOTP, Microsoft Authenticator, whatever) and use any required app.  The profile is not allowed to run in the background.
* a guest profile that can be used if I need to lend my phone to someone who wants to look something up on the Internet.  This profile always starts freshly reset.

After a long week of use, I came up with this.  At first, I had a separate profile for TOTP, but having to switch back and forth to it a dozen times a day created too much friction.

# The device itself

I chose to buy a Google Pixel 8a with 128 GB, as it was the cheapest of the 8 and 9 series, which have 7 years of support, and it also got a huge CPU upgrade compared to the 7 series.  The device can be bought for 300€ on the second-hand market and 400€ brand new.

The 120 Hz OLED screen is a blast!  Colors are good, black is truly black (hence dark themes on OLED reduce battery usage and look really great), and it is super smooth.

There is no SD card support, which is pretty sad, especially since almost every Android smartphone supports it; I guess they just want you to pay more for storage.  I am fine with 128 GB though, as I do not store much data on my smartphone, but being able to extend it would have been nice.

The camera is OK.  I do not use it a lot and I have no point of comparison; the reviews I have read say it is just average.

Wi-Fi 6 works really fine (latency, packet loss, range and bandwidth) although I have no way to verify its maximum bandwidth because it is faster than my gigabit wired network.

The battery lasts long.  I use my smartphone a bit more now, and the battery drops by approximately 20% over a day of usage.  I did not test the charging speed.

# Conclusion

I am really happy with GrapheneOS: I finally feel in control of my smartphone, which I never considered a safe device before.  I never really used a manufacturer's Android ROM or iOS; I bet they can provide a better user experience, but they cannot provide anything like GrapheneOS.

LineageOS was actually OK on my former BQ Aquaris X, but there were often regressions, and it did not provide anything special in terms of features, except that it still provided updates for my old phone.  GrapheneOS, on the other hand, provides a whole new experience that may be what you are looking for.

This system is not for everyone!  If you are happy with your current Android, do not bother buying a Google Pixel to try GOS.

# Going further

Stock Android supports profiles (this can be enabled in System -> Users -> Allow multiple users), but there is no way to restrict what profiles can do; it seems they are all administrators.  I have been using this on our Android tablet at home, and it is available on every Android phone as well.  I am not sure it can be used as a security feature as is.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-intro-to-grapheneos</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-intro-to-grapheneos</link>
 <pubDate>Tue, 14 Jan 2025 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Systemd journald cheatsheet</title>
 <description>
   <![CDATA[
<pre># Introduction

This blog post is part of a series that will be about Systemd ecosystem, today's focus is on journaling.

Systemd got a regrettable reputation upon its arrival around 2010.  I think this is due to Systemd being radically different from traditional tooling, and people got lost, without having had a chance to be warned beforehand that they would have to deal with it.  The transition was maybe rushed a bit with a half-baked product, in addition to the fact that users had to learn new paradigms and tooling to operate their computer.

Nowadays, Systemd is working well, and there are serious non-Systemd alternatives, so everyone should be happy. :)

# Introduction to journald

Journald is the logging system that was created as part of Systemd.  It handles logs created by all Systemd units.  A huge difference compared to traditional logging is that there is a single journal file acting as a database to store all the data.  If you want to read logs, you need to use the `journalctl` command to extract data from the database, as it is not plain text.

Most of the time, journald logs data from units by reading their standard error and output, but it is possible to send data to journald directly.

On the command line, you can use `systemd-cat` to run a program, or pipe data to it, so the output ends up in the logs.

=> https://www.man7.org/linux/man-pages/man1/systemd-cat.1.html systemd-cat man page
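
A couple of quick examples (the tag and script names are just examples):

```
# log a message with a custom tag and priority
echo "backup finished" | systemd-cat -t backup-script -p info

# wrap a command so its stdout/stderr go to the journal
systemd-cat -t mytask ./mytask.sh
```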

# Journalctl 101

Here is a list of the most common cases you will encounter:

* View new logs live: `journalctl -f`
* View last 2000 lines of logs: `journalctl -n 2000`
* Restrict logs to a given unit: `journalctl -u nginx.service`
* Pattern matching: `journalctl -g somepattern`
* Filter by date (since): `journalctl --since="10 minutes ago"` or `journalctl --since="1 hour ago"` or `journalctl --since=2024-12-01`
* Filter by date (range): `journalctl --since="today" --until="1 hour ago"` or `journalctl --since="2024-12-01 12:30:00" --until="2024-12-01 16:00:00"`
* Filter logs since boot: `journalctl -b`
* Filter logs to previous (n-1) boot: `journalctl -b -1`
* Switch date time output to UTC: `journalctl --utc`

You can use multiple parameters at the same time:

* Last 200 lines of logs of nginx since current boot: `journalctl -n 200 -u nginx -b`
* Live display of nginx log lines matching "wp-content": `journalctl -f -g wp-content -u nginx`

=> https://www.man7.org/linux/man-pages/man1/journalctl.1.html journalctl man page

# Send logs to syslog

If you want to bypass journald and send all messages to syslog to handle your logs with it, edit the file `/etc/systemd/journald.conf` and add the line `ForwardToSyslog=yes` in the `[Journal]` section.

This will make journald relay all incoming messages to syslog, so you can process your logs as you want.
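
The relevant part of the file would look like this:

```
# /etc/systemd/journald.conf
[Journal]
ForwardToSyslog=yes
```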

Restart the journald service: `systemctl restart systemd-journald.service`

=> https://www.man7.org/linux/man-pages/man8/systemd-journald.service.8.html systemd-journald man page
=> https://www.man7.org/linux/man-pages/man5/journald.conf.5.html journald.conf man page

# Journald entries metadata

Journald entries contain a lot more information than just the log line (the raw content).  Traditional syslog files contain the date and time, maybe the hostname, and the log message.

This is just for information; only system administrators will ever need to dig through this, but it is important to know it exists in case you need it.

## Example

Here is what journald stores for each line (pretty-printed from the JSON output), using a Samba server as an example.

```
# journalctl -u smbd -o json -n 1 | jq
{
 "_EXE": "/usr/libexec/samba/rpcd_winreg",
 "_CMDLINE": "/usr/libexec/samba/rpcd_winreg --configfile=/etc/samba/smb.conf --worker-group=4 --worker-index=5 --debuglevel=0",
 "_RUNTIME_SCOPE": "system",
 "__MONOTONIC_TIMESTAMP": "749298223244",
 "_SYSTEMD_SLICE": "system.slice",
 "MESSAGE": "  Copyright Andrew Tridgell and the Samba Team 1992-2023",
 "_MACHINE_ID": "f23c6ba22f8e02aaa8a9722df464cae3",
 "_SYSTEMD_INVOCATION_ID": "86f0f618c0b7dedee832aef6b28156e7",
 "_BOOT_ID": "42d47e1b9a109551eaf1bc82bd242aef",
 "_GID": "0",
 "PRIORITY": "5",
 "SYSLOG_IDENTIFIER": "rpcd_winreg",
 "SYSLOG_TIMESTAMP": "Dec 19 11:00:03 ",
 "SYSLOG_RAW": "<29>Dec 19 11:00:03 rpcd_winreg[4142801]:   Copyright Andrew Tridgell and the Samba Team 1992-2023\n",
 "_CAP_EFFECTIVE": "1ffffffffff",
 "_SYSTEMD_UNIT": "smbd.service",
 "_PID": "4142801",
 "_HOSTNAME": "pelleteuse",
 "_SYSTEMD_CGROUP": "/system.slice/smbd.service",
 "_UID": "0",
 "SYSLOG_PID": "4142801",
 "_TRANSPORT": "syslog",
 "__REALTIME_TIMESTAMP": "1734606003126791",
 "__CURSOR": "s=1ab47d484c31144909c90b4b97f3061d;i=bcdb43;b=42d47e1b9a109551eaf1bc82bd242aef;m=ae75a7888c;t=6299d6ea44207;x=8d7340882cc85cab",
 "_SOURCE_REALTIME_TIMESTAMP": "1734606003126496",
 "SYSLOG_FACILITY": "3",
 "__SEQNUM": "12376899",
 "_COMM": "rpcd_winreg",
 "__SEQNUM_ID": "1ab47d484c31144909c90b4b97f3061d",
 "_SELINUX_CONTEXT": "unconfined\n"
}
```

The "real" log line is the value of `SYSLOG_RAW`, everything else is created by journald when it receives the information.

## Filter

As the logs can be extracted in JSON format, it becomes easy to parse them properly using any programming language able to deserialize JSON data; this is far more robust than piping lines into AWK / grep, although that can work "most of the time" (until it does not, due to a weird input).

On the command line, you can query/filter such logs using `jq`, which is a bit like the awk of JSON.  For instance, if I output all the logs of "today" and filter for lines generated by the binary `/usr/sbin/sshd`, I can use this:

```
journalctl --since="today" -o json | jq -s '.[] | select(._EXE == "/usr/sbin/sshd")'
```

This command will report each log line whose "_EXE" field is exactly "/usr/sbin/sshd", with all the metadata.  This kind of data can be useful when you need to filter tightly for a problem or a security incident.

The example above is a bit silly in its simple form: filtering on the SSH server can be done with `journalctl -u sshd.service --since=today`.

# Conclusion

Journald is a powerful logging system; journalctl provides a single entry point to extract and filter logs in a unified way.

With journald, it became easy to read logs of multiple services over a time range, and log rotation is now a problem of the past for me.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-systemd-journald-cheatsheet</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-systemd-journald-cheatsheet</link>
 <pubDate>Wed, 25 Dec 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Presentation of Pi-hole</title>
 <description>
   <![CDATA[
<pre># Introduction

This blog post is about the project Pi-hole, a libre software suite to monitor and filter DNS requests over a local network.

=> https://pi-hole.net/ Pi-hole official project page

Pi-hole is Linux based: it is a collection of components and configuration that can be installed on Linux, or used from a Raspberry Pi image ready to be written to a flash memory card.

=> static/img/pihole-startrek.png The top of the Pi-hole dashboard, Star Trek skin

# Features

Most of Pi-hole's configuration happens in a clear web interface (which is available with a Star Trek skin, by the way), but there is also a command line utility and a telnet API if you need to automate some tasks.

## Filtering

The most basic feature of Pi-hole is filtering DNS requests.  While it comes with a default block list from the Internet, you can add custom lists using their URLs; the import supports multiple formats, as long as you tell Pi-hole which format to use for each source.

Filtering is done for all queries by default, but you can create groups that will not be filtered and assign LAN hosts to them; in some situations there are hosts you may not want to filter.

Resolution can be done using big upstream DNS servers (Cloudflare, Google, OpenDNS, Quad9 ...), but also custom servers.  It is also possible to use a recursive resolver by installing unbound locally.

=> https://docs.pi-hole.net/guides/dns/unbound/ Pi-hole documentation: how to install and configure unbound

## Dashboard

A nice dashboard allows you to see all queries with the following information:

* date
* client IP / host
* domain in the query
* result (allowed, blocked)

It can be useful to understand what is happening if a website is not working, but also to see how many queries are blocked.

It is possible to choose the privacy level of the logging, because you may only want statistics about the number of queries allowed / blocked, without knowing who asked what (monitoring this on your LAN may even be illegal).

=> https://docs.pi-hole.net/ftldns/privacylevels/ Documentation about privacy levels

## Audit log

In addition to the lists, the audit log displays two columns with the 10 most frequently allowed / blocked domains appearing in queries that have not yet been curated through the audit log.

Each line in the "allowed" column has "Blacklist" and "Audit" buttons.  The former adds the domain to the internal blacklist, while the latter just acknowledges this domain and removes it from the audit log.  If you click on audit, it means "I agree with this domain being allowed".

The column with blocked queries shows "Whitelist" and "Audit" buttons that can be used to definitely allow a domain, or to just acknowledge that it is blocked.

Once you add a domain to a list or click on audit, it is removed from the displayed list, and you can continue manually reviewing the new top 10 domains.

## Disable blocking

There is a feature to temporarily disable blocking for 10 seconds, 30 seconds, 5 minutes, indefinitely, or a custom duration.  This can be useful if an important website misbehaves and you want to be sure the DNS filtering is not involved.
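
The same toggle is available from the command line; a quick sketch, assuming the `pihole` utility is in your PATH:

```
# disable blocking for 5 minutes
pihole disable 5m

# re-enable it right away
pihole enable
```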

## Local hostnames

It is possible to add custom hostnames that resolve to whatever IP you want, which makes it easy to give nice names to your machines on your LAN.  There is nothing really fancy here, but the web UI makes this task easy to handle.

## Extra features

Pi-hole can provide a DHCP server to your LAN, has self-diagnosis, and offers easy configuration backup / restore.  There are probably more features I did not see or never used.

# Conclusion

While Pi-hole requires more work than configuring unbound on your LAN and feeding it a block list, it provides a lot more features, flexibility and insight into your DNS than unbound.

Pi-hole works perfectly fine on low end hardware, it uses very little resources despite all its features.

# Going further

I am currently running Pi-hole in a container with podman, from an unprivileged user.  This setup is out of scope here, but I may write about it later (or if people ask for it), as it required some quirks due to replying to UDP packets through the local NAT, and the use of port 53 (which is usually restricted to root).
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-pi-hole</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-pi-hole</link>
 <pubDate>Sat, 21 Dec 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Getting started to write firewall rules</title>
 <description>
   <![CDATA[
<pre># Introduction

This blog post is about designing firewall rules, not focusing on a specific operating system.

The idea came after I made a mistake on my test network: due to overly simplistic firewall rules, I exposed LAN services to the Internet after setting up a VPN with a static IPv4 address on it.  While discussing this topic on Mastodon, some people mentioned they never know where to start when writing firewall rules.

# Firewall rules ordering

Firewall rules are evaluated one by one, and the evaluation order matters.

Some firewalls are of the "first match" type, where the first rule matching a packet is the one applied.  Other firewalls are of the "last match" type, where the last matching rule is the one applied.

# Block everything

The first step when writing firewall rules is to block all incoming and outgoing traffic.

There is no other way to correctly configure a firewall: if you plan to only block the services you want to restrict and let a default allow rule do its job, you are doing it wrong.

# Identify flows to open

As all flows should be blocked by default, you have to list what should go through the firewall, inbound and outbound.

In most cases, you will want to allow all outbound traffic, except if you have a specific environment in which you only want to allow outgoing traffic to certain IPs / ports.

For inbound traffic, if you do not host any services, there is nothing to open.  Otherwise, make a list of the TCP, UDP, or other ports that should be reachable, and of who should be allowed to reach them.

# Write the rules

When writing your rules, whether they are inbound or outbound, be explicit whenever possible about this:

* restrict to a network interface
* restrict the source addresses (maybe a peer, a LAN, or anyone?)
* restrict to required ports only
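
As a sketch of what "explicit" means, here is an nftables example (assuming an existing `inet filter` table with an `input` chain, a LAN of 192.168.1.0/24, and an interface named eth0):

```
# allow SSH only from the LAN, only on the LAN-facing interface
nft add rule inet filter input iifname "eth0" ip saddr 192.168.1.0/24 tcp dport 22 accept
```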

In some situations, you may also want to filter by source and destination port at the same time.  This is usually useful when two servers communicate over a protocol enforcing both ports.

This is actually where I failed and exposed my LAN Minecraft server to the wild.  After setting up a VPN with a static IPv4 address, I only had an "allow tcp/25565" rule on my firewall, as I was relying on my ISP router not to forward traffic.  This rule became too permissive once traffic was received from the VPN, while it would have filtered correctly had it been restricted to a given network interface or a source network.

If you want to restrict access to a critical service to some users (one or more) who do not have a static IP address, consider putting this service behind a VPN and restricting access to the VPN interface only.

# Write comments and keep track of changes

Firewall rules will evolve over time; you may want to write down, for your future self, why you added this or that rule.  Ideally, keep the firewall rules file in a version control system, so you can easily revert changes or go through the history to understand a change.

# Do not lock yourself out

When applying firewall rules for the first time, you may have made a mistake, and if this is on remote equipment with no (or complicated) physical access, it is important to prepare an escape hatch.

There are different methods; the simplest is to run, in a second terminal, a command that sleeps for 30 seconds and then resets the firewall to a known state.  You have to run this command just before loading the new rules: if you are locked out after applying them, just wait 30 seconds for the rules to be fixed.
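
A minimal sketch of such an escape hatch (the nftables variant resets to an empty, default-accept ruleset; the pf variant assumes you saved a known good file beforehand):

```
# Linux / nftables: run BEFORE loading the new rules
sleep 30 && nft flush ruleset

# OpenBSD / pf: reload the last known good rules
sleep 30 && pfctl -f /etc/pf.conf.known_good
```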

# Add statistics and logging

If you want to monitor your firewall, consider adding counters to rules: they will tell you how many times a rule was evaluated/matched and how many packets and how much traffic went through it.  With nftables on Linux they are named "counters", whereas OpenBSD Packet Filter calls them "labels".
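
For example (assuming the same `inet filter` table as in the earlier sketch):

```
# nftables: count packets/bytes matching the rule
nft add rule inet filter input tcp dport 443 counter accept

# OpenBSD pf.conf equivalent, using a label
pass in proto tcp to port 443 label "https-in"
```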

It is also possible to log packets matching a rule; this can be useful to debug an issue on the firewall, or if you want your logs to alert you when a rule is triggered.

# Conclusion

Writing firewall rules is not a hard task once you have identified all the flows.

While companies have to maintain flow tables, I do not think they are useful for a personal network (your mileage may vary).
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-writing-firewall-rules</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-writing-firewall-rules</link>
 <pubDate>Wed, 11 Dec 2024 00:00:00 GMT</pubDate>
</item>

 </channel>
</rss>