=============================
Computing contingency plans
=============================

Time for another rant! A rather old one, but I have an urge to write
it down.

When technologies (and software in particular) are used, it's a good
idea to have a contingency plan, possibly involving a fallback to a
more reliable (and likely simpler in some way) technology in case of
a failure. There are plenty of examples, but the most common
computing-related ones are perhaps fallback fonts and system
runlevels (or "safe modes", "recovery modes").

Some software falls into a hierarchy (or a chain) of sorts rather
naturally: when a GUI doesn't work, a TUI may still work; when
curses-based TUIs don't, a plain CLI might; when there's no Internet
connection, the things that don't require it are still likely to
work; when resource-demanding software doesn't run, something
lightweight could. And there are often mechanisms to assist recovery,
such as reserved resources (disk space, connection slots, etc.) for
superusers/administrators.

Network protocols and systems can also be ordered: centralised ones
are the most likely to become unavailable unexpectedly (even more so
if there is just a single client, which can itself be unavailable),
followed by (particular servers of) federated/decentralised ones,
followed by P2P/distributed ones. But there's a
conventional/historical/standard order too: for instance, RFC 2142
defines support mailbox names for a few common services, so that
their operators are reachable over email. And there is WHOIS
information, including a phone number and an address, so the fallback
chain, in principle, extends to an in-person meeting (which would be
quite practical and usable in some fairy-tale world, I guess).
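
That email-and-WHOIS fallback is simple enough to sketch in Python
with nothing but raw sockets: the snippet below assembles RFC 2142
mailbox names for a given domain, and performs a bare WHOIS query
(RFC 3912) over TCP port 43. Using whois.iana.org as the starting
server is merely an assumption; a real client would follow its
referrals::

  import socket

  # A few of the mailbox names that RFC 2142 reserves for common
  # services and roles.
  RFC2142_MAILBOXES = ("postmaster", "hostmaster", "webmaster",
                       "abuse", "noc", "security")

  def contact_addresses(domain):
      """RFC 2142 mailboxes one may try before escalating further."""
      return ["%s@%s" % (name, domain) for name in RFC2142_MAILBOXES]

  def whois(query, server="whois.iana.org"):
      """Bare WHOIS (RFC 3912): send the query, read until EOF."""
      with socket.create_connection((server, 43), timeout=10) as sock:
          sock.sendall(query.encode("ascii") + b"\r\n")
          chunks = []
          while True:
              data = sock.recv(4096)
              if not data:
                  break
              chunks.append(data)
      return b"".join(chunks).decode("utf-8", errors="replace")

  print(contact_addresses("example.com"))
  print(whois("example.com"))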

RFCs themselves are a nice and related example: they are distributed
as both static HTML and plain text, and they include all the major
Internet standards, so one can conceivably implement and access more
sophisticated technologies starting from just the basic ones. Man
pages and info manuals are somewhat similar: rather accessible and
reliable, and ideally they would contain the information needed to
debug or implement things up to the point where networking and other
information sources become available (though that is not the actual
state of things, at least on common Linux distributions).
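
To illustrate that accessibility: a plain-text RFC can be retrieved
with the most basic of HTTP clients, no HTML parsing or JS involved
(the URL layout below is the one rfc-editor.org currently uses, and,
somewhat ironically, it is served over HTTPS)::

  import urllib.request

  def fetch_rfc(number):
      # Plain-text RFCs under rfc-editor.org follow this URL layout
      # (assumed stable here); the result is readable as-is.
      url = "https://www.rfc-editor.org/rfc/rfc%d.txt" % number
      with urllib.request.urlopen(url) as response:
          return response.read().decode("utf-8", errors="replace")

  # RFC 2142: mailbox names for common services, roles and functions.
  print(fetch_rfc(2142)[:400])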

So, one may hope that whatever happens, they can fall back a bit, and
then climb back (recover) to where they were, as quickly and easily
as possible. Of course, that's not the case.

A personal computing example: most systems ship neither the sources
nor the documentation sufficient to debug network issues. Another
one: failing to boot into graphical mode because a newer graphics
driver is needed may require running one of the major graphical
browsers (the ones with JS support) just to download that driver.
Both have happened to me in past years, and I imagine plenty more
happen to others. Yet another example is the acquisition of such a
graphical web browser: recently I tried to access websites from a
Windows XP VM, but virtually nothing worked, because they require
HTTPS these days, and the protocol versions supported by IE 6 are too
old for them. So I decided to grab Firefox, but apparently the only
protocol mozilla.org serves it over is HTTPS.
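
One can see the problem by probing which TLS versions a given server
still accepts; here is a rough sketch using Python's ssl module
(3.7+), with www.mozilla.org merely as the example host::

  import socket
  import ssl

  def accepts(host, version, port=443):
      # Pin both ends of the allowed range to a single TLS version,
      # then see whether the handshake succeeds.
      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
      ctx.minimum_version = version
      ctx.maximum_version = version
      ctx.check_hostname = False     # a probe, not a secure channel
      ctx.verify_mode = ssl.CERT_NONE
      try:
          with socket.create_connection((host, port), timeout=5) as s:
              with ctx.wrap_socket(s, server_hostname=host):
                  return True
      except OSError:                # ssl.SSLError is a subclass
          return False

  for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                  ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
      print(version.name, accepts("www.mozilla.org", version))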

Online services have similar circular dependencies: as mentioned
above, web services may fall back to (or simply use as the primary
means of communication) email, but both regular email usage and mail
server maintenance these days involve rather heavy use of web
services: DNSWLs/DNSBLs, web-based abuse report forms, unsubscription
forms, sometimes hosters' control panels; the infrastructures are
rather interdependent. Perhaps this is not a practical issue, just an
awkward one (except for cases like the ones I wrote about last year,
where attempts to report website issues by mail led to autoreplies
redirecting back to the broken website's page where that address was
found).
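
A DNSBL check, incidentally, is itself just a DNS query: the
address's octets are reversed and looked up under the list's zone,
and an answer in 127.0.0.0/8 means "listed". A sketch, with
zen.spamhaus.org as the example zone (which, mind you, may refuse
queries arriving via large public resolvers)::

  import socket

  def dnsbl_lookup(ip, zone="zen.spamhaus.org"):
      # 1.2.3.4 is checked by resolving 4.3.2.1.<zone>: an A record
      # means "listed" (its value encodes the reason), NXDOMAIN
      # means "not listed".
      query = ".".join(reversed(ip.split("."))) + "." + zone
      try:
          return socket.gethostbyname(query)
      except socket.gaierror:
          return None

  # 127.0.0.2 is the conventional always-listed test address.
  print(dnsbl_lookup("127.0.0.2"))
  print(dnsbl_lookup("127.0.0.1"))  # expected: None (not listed)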

Then there are more important, commonly used services, such as ISPs
(needed to access other online services at all), banking and payment
systems (usually needed to get the others running in the first
place), and government services. Those are interdependent too (online
banking requires an Internet connection, ISPs require payments), and
they tend to depend on the clunkiest combinations of technologies, in
some cases even with the addition of mandatory proprietary components
(on the client side; the server side apparently always is
proprietary), barely working even under perfect conditions. One good
thing about such services is that they usually have a somewhat usable
fallback: one can visit their offices and perform operations there,
observing how the employees fight their crashing desktop software.
Though even that seems to be getting redirected to online services
now.

I'm not sure which group computer hardware would go into, but both
online stores and manufacturers' websites tend to be barely usable,
filled with marketing junk, and to require those major web browsers
with JS in order to function. And hardware is yet another thing you
need to build the rest upon.

All of those are solvable, and at worst one can retrace the steps a
new computer user may have to take (that is, starting with getting a
new computer from a store, while having incomplete information). And
there are usable fallbacks in some cases, so minor computer issues
rarely require starting over. But it still feels weird and wrong that
in order to use some basic/reliable/accessible technologies, I have
to regularly use (and depend on) their opposites.

Update (2020-05-13, COVID-19 lockdown): now one has to have an
Internet-connected computer and/or a mobile phone in order to request
a pass for moving around the city, which also involves solving
reCAPTCHA and executing other JS. Food/grocery delivery services are
also heavily dependent on JS, so the situation has worsened quite
notably.


----

:Date: 2019-11-02