<?xml version="1.0" ?>
<rss version="2.0">
<channel>
   <title>Free Thoughts</title>
   <link>gopher://aussies.space/1/~freet/phlog</link>
   <description>
The free-floating phantasms resident in the mind of The Free
Thinker, brought home to you in 68 columns of plain-text purity by
the kind generosity of your local neighbourhood Gopher.
   </description>
   <language>en-au</language>
   <docs>https://cyber.harvard.edu/rss/rss.html</docs>
   <pubDate>Mon, 23 Jun 2025 09:25:57 +1000</pubDate>
   <item>
    <title>OpenWRT Surgery</title>
    <link>gopher://aussies.space/0/~freet/phlog/2025-06-23OpenWRT_Surgery.txt</link>
    <description>
OPENWRT SURGERY

After a while when I didn't have ideas for new phlog posts, now I
have ideas and no time to write them. But I did finally get around
to setting up my final OpenWRT configuration for my home router:
OpenWRT 23, built to fit within my router's 32MB of RAM much
better than the default image.
It's final because, as noted in my post 2025-01-09Hit_and_MIPS.txt
(oh dear, this took me six months to get back to!), OpenWRT 24 has
dropped the build option for router boards in my router's hardware
family, which at least gives me the excuse to hack deeper into the
works without fearing breakage after upgrades.

I'd like to write my usual long ramble, but since allowing time for
writing this post is under the excuse of it being notes for my own
later reference, I'll go into quick dot-point mode:

* Building with OpenWRT Image Builder on x86_64 VPS (because the
image builder is a huge download so I don't want to do that over my
home internet connection)

https://openwrt.org/docs/guide-user/additional-software/imagebuilder
 - Make a directory named "files" in the root of the image builder
directory, pointed to with the FILES argument to make. In here go
all files to be added/changed compared with the default router file
system, with all their final directory paths and permissions (you
need root permissions to make them).
 - Instructions say not to run the Image Builder Makefile as root,
but if you want to include custom files that are only readable by
root, it will fail to read them unless you run it as root. It runs
fine as root.
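 - For reference, a minimal sketch of the layout (the two file
names here are just hypothetical examples, not my real overlay):
     openwrt-imagebuilder-23.05.5-bcm63xx-generic.Linux-x86_64/
       files/
         etc/rc.local
         usr/bin/telnetd
     # then, from inside that directory:
     make image PROFILE="brcm_bcm96358vw" FILES="files"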

* OpenWRT releases supported for six months after release of next
major version:
   https://openwrt.org/docs/guide-developer/security#support_status
 OpenWRT 23 therefore supported until July this year. Latest
release is 23.05.5.

* The OpenWRT Wiki page for my router pointed to the "SMP" images
(until it became officially unsupported after OpenWRT 19 due to
lack of RAM). This was wrong: it only has one processor, so an
SMP kernel is bloat. I should have been using the image builder
configured without SMP support, called "generic":

https://downloads.openwrt.org/releases/23.05.5/targets/bcm63xx/generic/openwrt-imagebuilder-23.05.5-bcm63xx-generic.Linux-x86_64.tar.xz
 https://openwrt.org/docs/techref/targets/bcm63xx

* OpenWRT comes with DHCPv6 via a separate DHCP server package. I'm
not using IPv6 so I don't need that and can just use the IPv4 DHCP
server in dnsmasq:
- Add "option dhcpv6 'disabled'" in /etc/config/dhcp.
- Also disable automatic IPv6 support for USB modem in
/etc/config/network
- Also comment-out IPv6 rules in /etc/config/firewall
 - "disable_ipv6" option is not supported since OpenWrt 22.
- Remove default packages odhcp6c, odhcpd-ipv6only, and odhcpd6c
from image builder PACKAGES
- Disable odhcpd in /etc/config/dhcp by setting, in the odhcpd
section:
     option dhcpv6 'disabled'
     option ra 'disabled'
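- As a rough sketch, the relevant parts of /etc/config/dhcp end up
looking something like this (the surrounding option lines are from
a stock config, so check them against your own file):
     config dhcp 'lan'
             option interface 'lan'
             option start '100'
             option limit '150'
             option leasetime '12h'
             option dhcpv6 'disabled'

     config odhcpd 'odhcpd'
             option maindhcp '0'
             option leasefile '/tmp/hosts/odhcpd'
             option leasetrigger '/usr/sbin/odhcpd-update'
             option loglevel '4'
             option dhcpv6 'disabled'
             option ra 'disabled'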

* Disable other unused default services
 - ntpd (NTP daemon)
  - Router doesn't have an RTC so this means it won't know the
correct time anymore (will count from the date of the current
OpenWRT version's release after booting), but I don't care.
  - Can be disabled in /etc/config/system (see the sketch at the
end of this list) or put "sysntpd" in DISABLED_SERVICES for image
builder.
 - urngd (Entropy harvester daemon)
  - This might be important for encrypted WiFi (for which drivers
sometimes need a fresh entropy source), but I'm not using that so
it gets the axe.
  - Remove default urngd package from image builder PACKAGES.
  - Put urngd in DISABLED_SERVICES for image builder.
 - logd (syslog daemon)
  - Networking wouldn't come up when I disabled this in the old
image, so I left it enabled at boot but put this at the start of
/etc/rc.local to kill logd at the end of the boot process:
    # Wait until firewall configuration is finished
    while killall -q -0 hotplug-call || killall -q -0 mac80211.sh || killall -q -0 fw4
    do
      sleep 5
    done

    # Kill syslog daemon
    killall logd
  - Use "/etc/init.d/log start" to start it after boot for
debugging purposes. Read syslog afterwards with the "logread"
command.
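 - For the sysntpd option mentioned above: as far as I can tell
the NTP client lives in the timeserver section of
/etc/config/system and can be switched off like this (option
names from memory, so verify against your own config):
     config timeserver 'ntp'
             option enabled '0'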

* Ditch Dropbear and SSH/SCP. Telnet and FTP have less overhead and
won't break when new clients demand newer server versions than the
last OpenWrt 23 dropbear package.
 - Remove "dropbear" package from image builder PACKAGES.
 - put "dropbear" in DISABLED_SERVICES for image builder.
 - put GNU Inetutils telnetd and inetd static binaries (see
2025-01-09Hit_and_MIPS.txt) in custom files path:
    /usr/bin/telnetd
    /usr/bin/inetd
 - Make symlinks:
    /usr/local/bin -&amp;gt; /bin
    /usr/local/var -&amp;gt; /var
 - Disable vsftpd service, since I'll use it in inetd mode which
takes up much less RAM, by putting vsftpd in DISABLED_SERVICES for
image builder.
 - Make /etc/inetd.conf:
    telnet stream tcp nowait root /usr/bin/telnetd telnetd
    ftp stream tcp nowait root /usr/sbin/vsftpd vsftpd
 - Set "listen=NO" in /etc/vsftpd.conf
 - Create this directory for vsftpd at start-up with this command
in /etc/rc.local:
    mkdir -m 0755 -p /var/run/vsftpd
   (/var/run is in tmpfs so can't put the directory in FILES)
 - Inetd now needs a custom init script to start it at boot, in
/etc/init.d/inetd:
     #!/bin/sh /etc/rc.common

     # start after and stop before networking
     START=19
     STOP=50

     start() {
       SERVICE_USE_PID=1 service_start /usr/bin/inetd /etc/inetd.conf
     }

     stop() {
       service_stop /usr/bin/inetd
     }

     reload() {
       service_reload /usr/bin/inetd
     }
   - Make the script executable and owned+writable by root only.
The symlinks to this script in /etc/rc.d are created by the image
builder.
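 - A quick sanity check from another machine once it has booted
(assuming the usual 192.168.1.1 LAN address):
     telnet 192.168.1.1   # expect a login prompt from telnetd
     nc 192.168.1.1 21    # expect a "220 (vsFTPd ...)" banner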

* If the router does run out of RAM, processes get killed by the
kernel "OOM Reaper", but respawned by the (OpenWRT-specific) init
system. I tweaked the "respawn" settings to be more forgiving and
better suit my usage. Following docs here:

https://openwrt.org/docs/guide-developer/procd-init-scripts#service_parameters
 /etc/init.d/network - in start_service(): procd_set_param respawn 3600 20 0
  - This sets a 20 second wait between respawns if the network
process is killed, and retries indefinitely.
 /etc/init.d/dnsmasq - in dnsmasq_start(): procd_set_param respawn 3600 40 0
  - This sets a 40 second wait between respawns if the DNS server
process is killed, and retries indefinitely.
 /etc/init.d/log - in start_service_daemon(): #       procd_set_param respawn 5 1 -1
  - Disable respawning the log process if it is killed by
commenting out this line.
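 - For reference, my reading of the generic form from those docs
(I may have the parameter names slightly off):
     procd_set_param respawn ${threshold} ${timeout} ${retry}
     # threshold - seconds the service must stay up to count as
     #             a successful start
     # timeout   - seconds to wait before respawning it
     # retry     - respawn attempts before giving up (0 = forever)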

* Probably not an issue anymore, but before the above changes I had
trouble with running out of RAM at boot because the OpenWRT init
system starts all services in the background and therefore has
their init scripts all running in parallel. That's great for boot
speed (at least if I had multiple processors), but terrible for RAM
usage since there needs to be room for everything to run at once (a
really odd choice from the OpenWRT developers). Disabling dnsmasq
in DISABLED_SERVICES then launching it at the end of the boot
sequence before starting the mobile broadband modem avoids the
choke point, and can be achieved with this in /etc/rc.local (first
part already included under "Disable other unused default services"
above):
    # Wait until firewall configuration is finished
    while killall -q -0 hotplug-call || killall -q -0 mac80211.sh || killall -q -0 fw4
    do
      sleep 5
    done

    # Start these services at the end of the boot process
    # to avoid running out of RAM
    sleep 30
    /etc/init.d/dnsmasq start
    sleep 15
    ifup wan

    # Restart firewall if it was killed during ifup
    sleep 45
    /etc/init.d/firewall boot

* Finally, my idata.sh script (from 2024-03-10Off_and_On_Line.txt)
used SSH to run the data-collector script on the router. Looking at
the inetd docs, I discovered I could turn that into its own
protocol using TCPMUX and make requests using "nc" instead of
dropbear without changing the router-side script at all.
- Put these lines in /etc/inetd.conf:
  tcpmux stream tcp nowait root internal
  tcpmux/+idata stream tcp nowait root /root/send_idata.sh idata
- But all the best things in computing are obsolete, so I needed
to add the tcpmux protocol to the /etc/services file which is
missing it by default:
   tcpmux              1/tcp
- Now in idata.sh, instead of:
    # Grab data stats from router via SSH
    newdata="`DROPBEAR_PASSWORD=$ROUTER_PASSWORD dbclient $ROUTER_SSH $ROUTER_SCRIPT`"
  I've got:
    # Grab data stats from router via TCPMUX
    newdata="`echo idata | nc $ROUTER 1 | tail -n 1`"
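- In case it helps, the "tail -n 1" is needed because, as I read
the inetd docs, the "+" in the inetd.conf entry makes inetd send
its own "+..." acknowledgement line (per RFC 1078) before handing
the connection to the script, so a raw test looks something like
(router address hypothetical):
    $ echo idata | nc 192.168.1.1 1
    +...                      (inetd's acknowledgement line)
    ...output of the router-side script...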

* This is my final OpenWRT Image Builder build command. Note that
LuCI was never included in these builds by default (hence I've only
ever known OpenWRT configuration via SSH), so it doesn't need to be
excluded. I'm not sure if that's the case for other build targets:

make image FILES="files" PROFILE="brcm_bcm96358vw" PACKAGES="chat comgt hostapd-basic hostapd-common iw-full iwinfo kmod-b43 kmod-bcma kmod-cfg80211 kmod-crypto-hash kmod-mac80211 kmod-mac80211-hwsim kmod-nls-base kmod-usb2 kmod-usb-acm kmod-usb-core kmod-usb-ehci kmod-usb-ohci kmod-usb-serial kmod-usb-serial-option kmod-usb-serial-wwan kmod-usb-uhci libiwinfo librt libusb-1.0 terminfo usb-modeswitch vsftpd wireless-regdb wireless-tools coreutils-stty picocom kmod-usb-net-sierrawireless kmod-usb-serial-sierrawireless kmod-mii -odhcpd-ipv6only -odhcpd6c -odhcp6c -urng -dropbear" DISABLED_SERVICES="urngd dnsmasq sysntpd dropbear vsftpd"
- Note that the "kmod-" packages are specific to my router
hardware. No longer needing to figure out when these change
(without documentation of the changes) is another big win from not
upgrading to new major OpenWRT versions anymore.

* Still takes me a few tries to flash the firmware via TFTP (via
serial console command). Dunno why the connection always seems to
fail the first few times, but it works eventually.

The stats for this new image compared to the previous default SMP
one which I'd tried to optimise after installation are significant.

Specs: 32MB RAM, 8MB Flash

Old OpenWRT 23 (SMP) installation:
 Total RAM (minus kernel size):        23504KB
 RAM available after boot:             4148KB
 Flash free after boot:                568KB

New OpenWRT 23 (non-SMP) installation:
 Total RAM (minus kernel size):        24712KB
 RAM available after boot:             9512KB
 Flash free after boot:                1960KB

But most importantly, it now starts up in a reasonable time again
(it was taking over 5min before) and no longer has random crashes
when the mobile broadband modem signal drops out. Before, it would
run out of RAM while trying to reconnect, sometimes failing, or
killing other important services in the process. Also the router I
thought was broken when I first installed OpenWRT 23 on it is
actually fine: it was just an early case of those issues causing
the firewall process to be killed, which made it look like the
network interface was intermittent. It has been running solidly
for days since installing this lightweight build.

Oh and typically now that I've finally discovered the RAM/storage
advantage of the Linux kernel without SMP support, the kernel
developers are talking about removing that option:
https://www.phoronix.com/news/Linux-6.17-Maybe-SMP-Uncond

Linux seems to be looking less attractive lately, clearly aiming to
serve a different user-base than me. Also see this tally of how the
size of OpenWrt has increased over the years:
https://openwrt.org/supported_devices/432_warning#analysis_of_firmware_size_growth

I wonder if a BSD-based router OS would have done better, such as
ZRouter ( https://zrouter.org - looks dead)? Although drivers would
be a roadblock with that anyway. Still, I don't think there's any
good reason that a basic router now needs better specs than these
ones have. Plus newer basic ISP-supplied models that are common
second-hand in Australia have little-better specs and never got any
sort of OpenWRT support to begin with.

- The Free Thinker
    </description>
   </item>
   <item>
    <title>Finding Your Server Limits</title>
    <link>gopher://aussies.space/0/~freet/phlog/2025-06-22Finding_Your_Server_Limits.txt</link>
    <description>
FINDING YOUR (SERVER) LIMITS

A little postscript to my last post,
2025-06-07My_Session_with_the_Bots.txt, written before I forget the
details completely. Last time I ended up with an Apache
MaxRequestWorkers setting of 350, since 450 caused the server's
1GB of RAM to run out before queuing connections to the PHP
script which was getting
DDoS'ed for some reason. Of course having written that I soon
discovered my server bogged down again, stuck on that Apache
process limit. Gradually raising it, I found that the RAM
requirements just to serve the 403 error response to the
now-blocked requests containing PHPSESSID were less than I thought
before. I was able to up the process limit to 950 and actual
simultaneous Apache processes topped out around 800 while logs
showed hundreds of requests from random Brazilian/SE-Asian IPs
being denied per second.

This is obviously a limitation of Apache's configuration options,
because before blocking the requests based on that pointless
PHPSESSID query string the Apache processes used much more RAM to
serve the dynamic PHP page rather than just a few headers with an
error response (the bots didn't retrieve the error page). What I
really need is a way to automatically scale the process limit based
on available RAM and average RAM usage per Apache process, but
strangely that doesn't seem to be available.

It seems to me that it should be possible to write a script to do
this by editing the MaxRequestWorkers setting and reloading
Apache. But testing it would be time consuming, and at this point
my tally of jobs to do vs. time to do them is already hopeless.
Plus any new job that doesn't immediately serve a profit motive
ought to be excluded given my current situation, and so long as
my website works now, improvements ought first and foremost to go
towards making/finding products to sell there that people want to
buy.
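
For what it's worth, this is roughly the sort of thing I have in
mind - an untested sketch only, with the config path and the 80%
safety margin being guesses rather than anything I've run:

    #!/bin/sh
    # Untested sketch of the idea: divide the RAM the kernel says
    # is available by the average Apache process size and rewrite
    # MaxRequestWorkers to match. The config path and 80% margin
    # are guesses for my setup, not tested values.
    CONF=/etc/apache2/mods-enabled/mpm_prefork.conf

    # Average resident size of the running apache2 processes (KB),
    # falling back to a 20MB guess if none are running
    avg=$(ps -C apache2 -o rss= |
          awk '{s+=$1; n++} END {print (n ? int(s/n) : 20000)}')

    # Available RAM (KB), and how many workers fit in 80% of it
    avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    workers=$((avail * 80 / 100 / avg))

    # Rewrite the setting and reload Apache (ServerLimit still
    # caps this, and only changes on a full restart, so it needs
    # to already be set high enough)
    sed -i "s/^\(\s*MaxRequestWorkers\).*/\1 $workers/" "$CONF"
    apache2ctl graceful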

I haven't got Apache's caching module enabled, something I'm
considering now, but presumably the different PHPSESSID values
would have defeated that anyway. In the end the attack slowly died
away in request volume and went back to the old noise of one to
three Amazonbot hits per second.

By the way, isn't it odd that in order to provide a website that
few real people actually want to look at, one now has to cater
for demand equivalent to it being viewed by millions of people a
day?
How is the internet even still working when this sort of load is
being placed on it? Yes other people like to block large IP ranges
rather than trying to absorb the bots, or use one of the
bot-blocker services that are suddenly blocking me everywhere now,
but that's really just breaking the internet even more directly. As
one of the comparatively few real humans online, I demand not to
have every other website accuse me of being a bot unless I run
Firefox with lots of random scripts allowed through NoScript.
Scripts from third-party services who doubtless have
side-businesses of collecting and selling data on this real living
human that they caught out there in the sea of bots.

At least it is kind-of cool that the cheapest VPS I found could
serve millions of dynamic webpage requests per day from real
humans, if they weren't all accompanied by a larger swarm of hungry
bots. The lightweight code and small size of content on my site
makes it super fast even though Apache (chosen because I didn't
expect to be dealing with such loads) isn't the best choice for
performance. Imagine how low-spec a system just to serve human
requests could be? A Raspberry Pi Zero would be huge overkill. But
then it's that sort of cheap modern processing power that allows
people to run these DDoS/scraper bots, so it's all a vicious cycle,
and I only won this time because this attacker/scraper was so
idiotic as to just hit one PHP script with a pointless query string.

But all this broad thinking is obviously where I'm going wrong in
life. Stuff the whys and wheres, I need to make money somehow. Hmm,
what if I made a huge bot farm running half-arsed code to scrape
all the websites on the internet to death and feed it onto some AI
model I can sell to people making their own half-arsed AI junk?
Yeah, great idea, that'd put food on the table.

- The Free Thinker
    </description>
   </item>
   <item>
    <title>My Session with the Bots</title>
    <link>gopher://aussies.space/0/~freet/phlog/2025-06-07My_Session_with_the_Bots.txt</link>
    <description>
MY SESSION WITH THE BOTS

On Monday I got emails from failed cron jobs on the VPS that runs
my website, caused by failed connections to other websites. I tried
to SSH in, but it couldn't connect, nor could a web browser, oh
dear. Onto the VPS control panel website to piss away my home
internet data quota using their web-based VNC, where a hopelessly
laggy stream of errors like this was pouring out over the virtual
fbcon:

nf_conntrack: nf_conntrack: table full, dropping packet

A quick web search revealed that this meant there were too many
connections for "nf_conntrack" to handle, solved by the
dodgy-sounding solution of setting
/proc/sys/net/netfilter/nf_conntrack_max to some random huge
number. So I typed that in blindly over the laggy stream of
errors in the VNC terminal and eventually saw my commands scroll
past, then SSH finally worked.
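
For the record it was something along these lines, with the value
here just an arbitrary example of a "random huge number":

    echo 131072 &amp;gt; /proc/sys/net/netfilter/nf_conntrack_max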

Still no luck in a web browser though: it turns out Apache was at
its 150 process limit serving endless simultaneous requests for the
same sub-section of my website by hundreds of bots with random
User-Agents and IP addresses (Brazil and Thailand seemed to be
favourites for the latter). Upping the process limit with
"ServerLimit 450" and "MaxRequestWorkers 450" in
/etc/apache2/mods-enabled/mpm_prefork.conf worked for a little
while, but the bot connections edged up to over 400 Apache
processes (probably queuing up as it got slower to respond) and the
RAM ran out.
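
For anyone following along at home, that file ends up something
like this (the other values are the Debian defaults as far as I
remember, so don't take them as gospel):

&amp;lt;IfModule mpm_prefork_module&amp;gt;
        StartServers            5
        MinSpareServers         5
        MaxSpareServers         10
        ServerLimit             450
        MaxRequestWorkers       450
        MaxConnectionsPerChild  0
&amp;lt;/IfModule&amp;gt;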

I wasn't sure if that was the dodgy nf_conntrack_max setting, since
I gather huge values have RAM implications, but although I found
some better docs, and spent a silly amount of time trying to make
sense of them, I couldn't. It's one of those annoying things in the
Linux kernel that look like they're documented, but it's really all
too vague to be useful:
https://www.kernel.org/doc/html/latest/networking/nf_conntrack-sysctl.html

This page goes into much more detail, but somehow still loses me,
and it's clearly outdated compared to the way things are described
in the official docs:
https://wiki.khnet.info/index.php/Conntrack_tuning

But it does mention a maximum default value of 8192, which was what
/proc/sys/net/netfilter/nf_conntrack_max was set to before.
Although the official docs say for nf_conntrack_max: "This value is
set to nf_conntrack_buckets by default", and for
nf_conntrack_buckets: "If not specified as parameter during module
loading, the default size is calculated by dividing total memory by
16384". "free -b" shows 1007349760 bytes total physical RAM, so
1007349760 / 16384 = 61483. So I set both to that in
"/etc/sysctl.conf", which is apparantly the tidy place to put these
settings in Devuan rather than "echo"ing to /proc at start-up:

net.netfilter.nf_conntrack_buckets=61483
net.netfilter.nf_conntrack_max=61483
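
Then "sysctl -p" should load them without a reboot (that's the
standard tool anyway - I haven't checked exactly what Devuan runs
at boot to apply this file):

    sysctl -p /etc/sysctl.conf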

Still not enough RAM though, Apache was eating it all. But only one
sub-section of my website was being hit, generated by a PHP script,
so I gave up and took it down by replacing it with a very short
HTML file, and Apache processes dropped down to around 300.

That gave me time to address the other problem of the Apache access
logs, which were going to be GBs per day in size. Logrotate has an
option to rotate log files early if they exceed a certain size.
Setting "maxsize 100M" in /etc/logrotate.d/apache2 and moving the
logrotate cron job from /etc/cron.daily/ to /etc/cron.hourly/ made
it compress and rotate Apache logs early if they grow above 100MB
each. It was already set to delete the 15th copy, so now instead of
two weeks of logs I get about two or three days, but oh well. To
think I used to keep web access logs permanently!
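
For reference the relevant bits of /etc/logrotate.d/apache2 now
read roughly like this (the rest of the stanza is the Debian
default, which I haven't reproduced exactly):

/var/log/apache2/*.log {
        daily
        maxsize 100M
        rotate 14
        compress
        delaycompress
        missingok
        notifempty
        ...
}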

Looking at the log files closely, they all accessed the page with a
PHPSESSID URL parameter, but that part of the site doesn't use
session tracking, so I turned PHPSESSID use off with "php_flag
session.use_trans_sid off" in .htaccess and enabled the PHP script
again. But no good! In a web browser with cookies disabled I no
longer got links with PHPSESSID in them, but the bots kept on
requesting URLs with PHPSESSID set to random values like nothing
had changed! It seemed they weren't crawling the site then, looking
to feed an AI with content, but trying new session strings
themselves. Why? A brute force attempt to hijack other user's
sessions/accounts (non-existant there anyway)? But why not do that
with cookies, which are more widely used? Or a deliberate DDoS
attack on my website? But why just one sub-section even though it
links out to lots of other parts of my website including other PHP
scripts?

In the end I gave up asking questions and was just thankful for
their stupidity because now that the PHP script shouldn't be making
links with PHPSESSID in them, I can block requests with PHPSESSID
in their query string. So I put this in .htaccess:

&amp;lt;If "%{QUERY_STRING} =~ /PHPSESSID/"&amp;gt;
 Require all denied
&amp;lt;/If&amp;gt;

Sure enough, it blocked them all, and they never picked up the
PHPSESSID-less URLs. Still huge numbers of requests, but with the
short 403 response the server dealt with them quicker, so only
around 100-150 simultaneous server processes were required, each
using
about half the RAM presumably because they didn't have to load
mod_php anymore.

Still it continued for days, before eventually stopping. Just to
confuse my attempts to understand their motivation, the logs now
show "Amazonbot" (from Amazon IPs, so probably legit) still trying
the old URLs with PHPSESSID today, but at a comparatively sedate
maximum of three denied requests per second compared to the 60-75
denied requests per second I saw before.

At least I do now know the safe Apache MaxRequestWorkers setting is
about 350 (note that ServerLimit defaults to 256 and also limits
this) with 1GB of RAM on my site. I've also now disabled cookies
with "php_flag session.use_cookies off" in .htaccess where that PHP
script lives, since that was pointless too. Half the trouble with
modern computer software is knowing what you should disable - I
also wonder if I could avoid having nf_conntrack enabled, but it's
hard to understand exactly how it's used.
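
To collect the .htaccess bits from this post in one place, that
file ends up containing roughly this (alongside whatever else was
already there):

php_flag session.use_trans_sid off
php_flag session.use_cookies off
&amp;lt;If "%{QUERY_STRING} =~ /PHPSESSID/"&amp;gt;
 Require all denied
&amp;lt;/If&amp;gt;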

- The Free Thinker
    </description>
   </item>
   <item>
    <title>The Labyrinth</title>
    <link>gopher://aussies.space/0/~freet/phlog/2025-05-28The_Labyrinth.txt</link>
    <description>
THE LABYRINTH

I've seen in a few places online people talking about setting up
auto-generated labyrinth websites designed to endlessly trap AI
crawlers in some rather petty effort to protest against them. In my
experience crawlers seem to get trapped in my online services
without me even trying - I even had to block a Gopher crawler that
got stuck in GophHub for weeks. But anyway what really strikes me
about this idea of an AI-consuming labyrinth is how the internet is
already built, often rather deliberately, as a labyrinth to occupy
us alleged human intelligences.

I mused over this last night on the couch in one of my long
sessions of thinking silently, naked, in dim light, while the room
got colder as I'd turned off the heater in preparation for bed,
after the usual opening dream of finding a girlfriend gave way in
hopelessness to more academic concepts. Unusually I caught myself
near the beginning and chose to write down my thoughts on a
crumpled scrap of paper rather than let them float away harmlessly
into the ether as usual. So this morning I awoke determined to
transcribe them here, a reboot of the crazed philosophical tone I
had when I started this phlog, albeit absent the start which was
already lost unrecorded:


I mean I don't know whether I envy or fear the state of mind where
this is anything but a waste of time. Do you? I mean one way or
other it's all a game and who knows who they're playing against?
Who cares who they're playing against? Who even wants to know? All
we care is to play, all we know is to play. AI might be a crappy
approximation of humanity's creativity, but it's an accurate model
of humanity's consumption. In that respect it's a perfection all of
its own. Endless energy consumed, endless data fed in, endless
bullshit out. AI is the internet, the internet is AI. The
intelligence of all humanity spewed out without an index, churned
over by search engines, fed back to itself via page ranking
algorithms, and now regurgitated by AI. It's all the same shit.
It's not human, it's not machine, it's a life-form of its own.
Never alive and never dead. We sustain it and it sustains us. We
control it and it controls us. We look to it to find ourselves and
find only a billion other eyes peering deeper into the darkness.
Deep down there it lives, but we will never see it, never know what
it sees back of us, or what it does.

- The Free Thinker
    </description>
   </item>
   <item>
    <title>Catching a Release</title>
    <link>gopher://aussies.space/0/~freet/phlog/2025-05-18Catching_a_Release.txt</link>
    <description>
CATCHING A RELEASE

I compile various programs from source for my 'Internet Client' SBC
where I run internet-related software remotely on my other (older)
systems. The idea is that I keep the internet-related software on
it up to date while ignoring that for the other systems. But when
I'm compiling things myself (or downloading cross-distro static
binaries etc.), that means I need a way to know when new versions
are released.

Projects on GitHub which use its releases system can be watched via
an ATOM feed, eg.
https://github.com/mobile-shell/mosh/releases.atom

GNU projects also have ATOM feeds for releases if they use
"Savannah", eg.
http://savannah.gnu.org/news/atom.php?group=mailutils

I use rss2email to send me emails when these feeds have new posts
(although I'd prefer to find something that's not Python based).
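
In case it's useful, the rss2email side of that is roughly this
(the "r2e" version 3 syntax; the address and feed name are just
made-up examples):
----------------------------------
r2e new me@example.com
r2e add mosh https://github.com/mobile-shell/mosh/releases.atom
r2e run
----------------------------------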

Other projects like OpenWRT already have a dedicated mailing list
for release announcements, so that's even easier:
https://lists.openwrt.org/mailman/listinfo/openwrt-announce

Firefox (whose Linux binaries I use) has one too, but they use
Google Groups for it, which I don't want to touch, and it doesn't
filter for normal and ESR releases so ESR users like me get lots of
noise:
https://groups.google.com/a/mozilla.org/g/announce

Mozilla seem to have been willfully ignoring calls for official
RSS/ATOM feeds for years, but RSSHub claims to have one:
https://rsshub.app/firefox/release/desktop

Unfortunately their feed for ESR releases isn't working anymore,
and it was unreliable in various ways before that too:
https://rsshub.app/firefox/release/organizations

It also spits out posts with all the HTML tags from the releases
page left in, which is pretty ugly.

Of course Firefox itself will tell you, repeatedly, when it has an
update available. But I don't like programs that phone home
(sharing all your system details with Mozilla at the same time) and
nag me, so I disable that by creating an
/etc/firefox/policies/policies.json file as follows:

----------------------------------
{
  "policies": {
               "DisableAppUpdate": true
              }
}
----------------------------------

Finally I've written my own shell script that checks for new
Firefox versions within the chosen release branch and if a new one
is found scrapes the release notes page for details. It can be run
by Cron to send you the output as an email, or actually I add " |
sendmail" after the multi-line echo command so the "Subject:" line
is used in the email. It displays the new version and the changes
in plain text, with a link to detail on the security
vulnerabilities. We'll see how long it lasts until they change the
page layout...
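
For reference the cron side is just an ordinary crontab entry,
something like this (the path is made up; without the sendmail
bit, cron's own MAILTO mailing is what delivers the output):
----------------------------------
0 9 * * *  $HOME/scripts/ffversion.sh
----------------------------------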

The only trouble is that when two (or more) releases happen in
the same branch at the same time, you miss the first one. That's
happening now with ESR since they're supporting v128 and v115 on
old Windows and MacOS, but that won't be for much longer.

That "ffversion.sh" script is here in the scripts section for all
the nobodies who care:
gopher://aussies.space/1/~freet/scripts/

Example output (I didn't go to that much effort with the layout):
----------------------------------
$ ffversion.sh
Subject: Firefox Release 139.0beta

Firefox 139.0beta released on April 29, 2025.

Release Notes:
https://www.mozilla.org/firefox/139.0beta/releasenotes/

             New

     Full-Page Translations are now available within Firefox
extension pages.

             Fixed

     PNG images with transparency now keep their transparency when
pasted into Firefox.

     The upload performance of HTTP/3 been significantly improved,
particularly on resumed connections (QUIC 0-RTT) and high bandwidth
and high delay connections.

     Fix for MFSA 2025-36 included in Firefox 139.0b10 and newer.

             Developer

     Developer Information

             Web Platform

     window.getSelection().toString() now correctly returns the
text serialization when text is selected in a text control,
improving cross-browser interoperability on some sites.

     Closed &amp;lt;details&amp;gt; elements are now searchable and can be
automatically expanded if found via find-in-page.

     Timer throttling for Workers is now supported.

     The Temporal proposal is now enabled by default in Firefox.
Temporal is a better version of Date, for more details, please see
https://spidermonkey.dev/blog/2025/04/11/shipping-temporal.html and
https://tc39.es/proposal-temporal/docs/.

     Added support for the WebAuthn largeBlob extension.

     Added support for requestClose() to HTMLDialogElement.

     Service Workers are now available in Private Browsing Mode.
This enhancement builds on our efforts to support IndexedDB and the
DOM Cache API in Private Browsing through encrypted storage. With
this change, more websites, especially those that rely on
background tasks, will be able to benefit from Service workers.

     Firefox now supports the hidden=until-found attribute,
allowing content to be found via find-in-page when it's otherwise
hidden by default.

     The built-in editor for contenteditable and designMode now
handles collapsible white-space(s) before block boundaries and
white-space sequences between visible content more consistently
with Chrome. As a result, Gecko no longer inserts a padding
&amp;lt;br&amp;gt; element after white-space before a block boundary,
aligning behavior with other browsers.

Security Advisories:
https://www.mozilla.org/security/advisories/mfsa2025-36/
----------------------------------

Now if only I didn't have to use such a damn slow, bloated,
over-complicated, poorly-documented piece of software like Firefox
in the first place...

- The Free Thinker
    </description>
   </item>
   <item>
    <title>Book of Life</title>
    <link>gopher://aussies.space/0/~freet/phlog/2025-05-11Book_of_Life.txt</link>
    <description>
BOOK OF LIFE

A quick post before I go back to making a mess of my car,
attempting various servicing tasks. Somehow the ones I've done many
times before never seem to come free of mistakes for me. If
anything I fall into the trap of over-confidence and mess things up
even more.

Having finished a day's spanner-hammering, hair bathed in dirt and
oil, I'm still going back to reading Seven Pillars of Wisdom. I'm
near the end now, in the last "book", although reading less
frequently because the weather has become too cold to sit upright
for reading on most nights. So of course, head filled with
knowledge of guerrilla warfare and camel husbandry, I'm looking to
the next book on which to feast my eyes.

Continuing in the spirit of adventure I was thinking of The
Kon-Tiki Expedition, in part just so I can finally watch the
movie, since I've skipped it when shown on TV in order to read
the book first.
However as academically interesting as these non-fiction works
are, I wonder if reading them serves me best. I already avoid
fictional works in order to gain some true knowledge of the world
from my reading, but really how far does this knowledge advance me?
To someone with the security of wealth it's an equal advancement to
obtain any such accurate knowledge, yet here I seem to struggle
with every practical task I set myself. With tasks I set myself in
order to make money, and as often with tasks I must complete myself
due to no capacity for spending money (or skill in finding paid
people to do what I want properly anyway).

In many things it's not a problem simply solved by reading. I've
read a book on welding, yet it's clear I'll need much physical
practice to figure out MIG welding thin metals, once I finally
stump up the cash for the equipment that I've delayed buying all
year (the shielding gas is a real pain). But for cars at least
there are voluminous texts aimed at non-professionals. In
particular I picked up for $2 at a garage sale parts of an
excessively-long subscription series called On The Road, which
swerves its way randomly through many detailed DIY car servicing
topics from the general to the excessively specific. This should be
a fine candidate for material I can profitably read rather than
tales of 20th century adventure.

But even then, am I missing the point? If I'm bending my recreation
time to serve me for learning tasks I can't afford to pay for,
should I go further and spend all available time towards gaining
knowledge I can use to make money? For my big website idea I'd
surely benefit from a full rounded understanding of different
database designs and architectures. Much as it's a rude word here
on Gopher, AI is something I'll need to look closely into for it as
well. Shouldn't I spend all my time reading up on these topics?

Really, yeah. But the entertainment value then is largely gone.
Maybe I can sort-of get into that, but I'd be more into reading
about electronics (old fashioned computer-free electronics) or
mechanics. Unlike those, database theory and AI aren't things I
start reading out of my own natural inclination; they're topics I
rationalise myself into thinking I _should_ read up on. But maybe
avoiding that is the failure of my life, in thinking I can make
money within a niche of activity that I really enjoy. That's a lie
taught in youth; it doesn't apply to unsociable people like me
unless they're very lucky, and I'm evidently not. I'm sure I'm not
alone; ruthlessness in business must often be motivated by a desire
to escape from this world of self enslavement, into that wealthy
ideal where one can indulge in academic recreations without fear of
missing practical opportunities.

But which is right and which is wrong? Is it the need for wealth
which is my illusion? Should I abandon myself to fate while
indulging in academic thoughts regardless of wealth or future?
Should I abandon personal desire to try and transform my interests
to serve a society that I don't myself wish to emulate? Or the
middle ground of trying to sacrifice the former to make up for the
shortfall of the latter by skillfully sustaining myself in ways
most people aren't able to? The choice of book becomes a choice of
life. The middle road (and therefore "On The Road") appeals to me
most, I being a sucker for compromise, but without much conviction
that it's correct.

- The Free Thinker
    </description>
   </item>
</channel>
</rss>