2022-08-13

A fellow SDF member, f6k, has been writing a series of phlogs describing
his methods of offline computing. I have been avidly following that
series because a few years ago I made a similar decision, one that led
me to develop my own set of tools for offline computing.

The most interesting thing is that the solutions f6k arrived at and the
ones I used are quite different, which reflects our differences in
philosophy and use cases. It's always fun to see how different people
solve the same problems with different tools.

So the problem we are both trying to solve is the same: not having a
permanent broadband connection available at all times while still wanting
to get updated internet content at regular intervals.

However, one of the first differences between f6k's offline computing
philosophy and mine is driven by the initial use case each of us had for
our computers. In f6k's second post about offline computing, he mentions
that one of his core drivers was being able to apply for jobs by sending
and receiving emails. You can infer that this requirement for two-way
communication was a pretty important driver, because a lot of his other
tooling has a strong emphasis on using his email client as the center of
his workflow.

In contrast, I had no such requirement for two-way communication via
email, and this stems from the different use case I had for the internet.
First, a little background on how I came to need offline computing.

A few years ago I had a realization. I had a regular full-time job where
I worked 9-10 hours a day. After I came home from that job, I would
spend the remaining 4-5 hours before bed mindlessly watching youtube.

Watching youtube became an addiction as the algorithms that youtube used
kept on pushing more and more content in my face. It was getting so bad that
I was losing sleep and sometimes making it into work late. I knew that I had
to make a drastic change since it was obvious that I didn't really have
the self-control to quit mindlessly clicking around youtube myself.
So I essentially cut the cord and canceled my broadband internet cold turkey.

I want to be clear here and say that the problem wasn't watching my favorite
youtube channels and content creators; it was being caught in the web of
constantly clicking on the next interesting video, unable to resist the
dopamine hit you get when novelty strikes.

Anyways, with that background out of the way, you can clearly see that my
internet use case is nearly 100% on the content consumption side. I still
wanted to watch my favorite youtube content; I just wanted to limit the
addictive, algorithm-driven side of youtube.

As a consequence, I never developed any tooling around email like f6k did.
The vast majority of my tooling was built to fetch content like youtube
videos and website articles in a fixed manner, with no additional content
discovery.

This resulted in a lot of tooling based on RSS feeds. I essentially exported
my youtube subscriptions as an OPML file and wrote a script that fetched a
list of all of the youtube videos published within a certain time range
(like the last 2 days) and then passed that list on to the youtube-dl tool.
Every few days, whenever I went to starbucks, I would just run youtube-dl
with my list and download as many videos as I could before I left.[1]
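
To make the idea concrete, here is a minimal python sketch of that kind
of script. The file names (subscriptions.opml, list.txt) are illustrative,
and it assumes each OPML outline's xmlUrl attribute points at youtube's
per-channel Atom feed:

  #!/usr/bin/env python3
  # Sketch: build a youtube-dl batch file from an OPML subscription export.
  import urllib.request
  import xml.etree.ElementTree as ET
  from datetime import datetime, timedelta, timezone

  DAYS = 2                                     # keep videos from the last N days
  ATOM = {"a": "http://www.w3.org/2005/Atom"}  # youtube feeds are Atom
  cutoff = datetime.now(timezone.utc) - timedelta(days=DAYS)

  opml = ET.parse("subscriptions.opml")
  feeds = [o.get("xmlUrl") for o in opml.iter("outline") if o.get("xmlUrl")]

  with open("list.txt", "w") as out:
      for url in feeds:
          with urllib.request.urlopen(url) as resp:
              feed = ET.parse(resp)
          for entry in feed.iter("{http://www.w3.org/2005/Atom}entry"):
              published = datetime.strptime(
                  entry.find("a:published", ATOM).text,
                  "%Y-%m-%dT%H:%M:%S%z")
              if published >= cutoff:
                  # the entry's link points at the watch page
                  out.write(entry.find("a:link", ATOM).get("href") + "\n")

The resulting list.txt is exactly the batch file that the youtube-dl
command in the footnote expects.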

HTML web content worked pretty similarly. I had a large OPML file containing
all of the RSS feeds for my favorite websites. I wrote a similar script that
would go through all of my RSS feeds and essentially wget all of the articles
into a folder for each website, with the article date appended. Eventually
I tied it all together into a mini website with a master index HTML file.
After running the script to pull in all of the articles, I would just open
the index HTML file in my browser and click through to the saved HTML pages.
With this program I didn't even need my RSS feed reader.
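
And a rough sketch of the website half, under the same caveats: the
feeds.opml and index.html names are illustrative, it assumes plain RSS 2.0
feeds (item/title/link/pubDate), and linking the index to the dated folder
is a simplification, since wget saves the page under the site's hostname
inside that folder:

  #!/usr/bin/env python3
  # Sketch: wget every article from my RSS feeds into dated per-site
  # folders, then emit a master index.html for offline browsing.
  import os
  import subprocess
  import urllib.request
  import xml.etree.ElementTree as ET

  saved = []  # (site, article title, local folder)

  for outline in ET.parse("feeds.opml").iter("outline"):
      feed_url = outline.get("xmlUrl")
      if not feed_url:
          continue
      site = outline.get("title", "unknown")
      with urllib.request.urlopen(feed_url) as resp:
          feed = ET.parse(resp)
      for item in feed.iter("item"):  # RSS 2.0 entries
          title = item.findtext("title", "untitled")
          link = item.findtext("link")
          # e.g. "Wed, 10 Aug 2022" -> "Wed,_10_Aug_2022"
          date = item.findtext("pubDate", "nodate")[:16].replace(" ", "_")
          folder = os.path.join(site, date)
          os.makedirs(folder, exist_ok=True)
          # -p grabs page requisites (images, css), -k rewrites links
          # so the saved copy browses offline
          subprocess.run(["wget", "-q", "-p", "-k", "-P", folder, link])
          saved.append((site, title, folder))

  with open("index.html", "w") as idx:
      idx.write("<html><body><h1>Saved articles</h1><ul>\n")
      for site, title, folder in saved:
          idx.write('<li>[%s] <a href="%s/">%s</a></li>\n'
                    % (site, folder, title))
      idx.write("</ul></body></html>\n")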

So basically, every 2 or 3 days I would just go to the starbucks next to my
house and spend an hour doing some browsing or online shopping while my
scripts were running. And when I got home I would have all of my favorite
videos and website articles downloaded and ready for me to consume at my
leisure. One interesting side effect of this method was that when I first
started doing this, I would end up with a lot of videos and articles that I
thought I was interested in but never ended up watching or reading. So I
actually ended up trimming a lot of content subscriptions.

This content trimming reminds me of all-you-can-eat buffets. When you have
unlimited broadband, your content consumption is like food at an unlimited
buffet: you just end up filling your plate with way too much food. When you
don't have broadband internet, it's like you are at a regular restaurant;
you really have to choose carefully what you want to eat. I think a lot of
people could use some content trimming, to be honest.

So anyways, this method worked great for me for about a year... then the
pandemic lockdowns came and I had to make a change. But that's a story for
another time.

[1] youtube-dl --continue --ignore-errors --no-overwrites -f "[filesize<200M]" --batch-file list.txt