<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<atom:link href="https://triapul.cz/feed/gopher.xml" rel="self" type="application/rss+xml" />
 <title>triapul.cz gopher</title>
 <link>gopher://triapul.cz</link>
 <description>Field notes from the magnetic plains.</description>
 <language>en-US</language>
<item>
 <title>sin done in water</title>
 <guid>gopher://triapul.cz/0/phlog/2024-04-12-sin-done-in-water.txt</guid>
 <link>gopher://triapul.cz/0/phlog/2024-04-12-sin-done-in-water.txt</link>
 <description><![CDATA[
<pre>

   SIN
 DONE
     IN
   WATER


"The enslaved divinity underneath your fingertips turned
into a propellant of the eternal wheel of the market is
not something to be celebrated.

Technological progress that discards machines that never
reach their full potential could hardly be labeled
as such. A wasteful disdain shielding the egotistical
maniacs who obsess over high numbers, nothing more.

Just good enough today is yesterday's unattainable.
And still - the greater the limits, the bigger
the dreams."


The machine paused and hung its head in despair. The wooden staff
with a single green twig growing out of it clenched in its mechanical
hand swayed from side to side, as the sentient computer contemplated
its message.

The child sat before it, an elder laptop with a large crack in its
bezel purring in its lap, both patiently awaiting the machine's next
words, but nothing else left its speakers and the staff slowly
stopped swaying.
</pre>
]]>
</description>
 <pubDate>Fri, 12 Apr 2024 02:00:00 +0000</pubDate>
</item>
<item>
 <title>sp dl</title>
 <guid>gopher://triapul.cz/0/phlog/2024-03-12-sp-dl.txt</guid>
 <link>gopher://triapul.cz/0/phlog/2024-03-12-sp-dl.txt</link>
 <description><![CDATA[
<pre>
#sp-dl

South Park is one of my guilty pleasures. All of its episodes
are available online for free[1], but they cannot be streamed
easily (or at all) without a modern browser and javascript.
Worse yet, it's impossible to get the full list of episodes without
the same.

Fortunately, the yt-dlp[2] tool, which is used for downloading
and streaming videos from youtube and virtually any website that is
not riddled with proprietary drm, can download south park episodes
as well. Thus the only trouble that remains is the lack of a list
of urls that could be fed to yt-dlp.

I have compiled it[0].

You can now feed yt-dlp with a url of an episode and download it.
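
yt-dlp can also read urls in bulk with its -a/--batch-file option
(one url per line), so the whole list can be queued at once:

```
$ yt-dlp -a south-park-episodes.txt
```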


usage:

Pick an episode url from the file and execute:

$ yt-dlp URL

ie:

$ yt-dlp https://www.southparkstudios.com/episodes/mfcnvu/south-park-damien-season-1-ep-10

The command will download the episode in four or five parts.
The episodes are split like this to serve ads between them when
watching on the website. Since the parts are titled appropriately,
you can easily play them in sequence (using the previous example):

$ mpv *Damien*

You can also use ffmpeg to combine the parts into a single
file, like so:

$ ffmpeg -f concat -i file.txt -c copy combined_file.mp4

Where 'file.txt' is a file containing the names of the files
to be combined.
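
Note that ffmpeg's concat demuxer expects each line of 'file.txt'
in the form file 'name', not bare filenames. A sketch (the part
names here are made up; use whatever yt-dlp actually produced):

```
file 'South Park - Damien part 1.mp4'
file 'South Park - Damien part 2.mp4'
file 'South Park - Damien part 3.mp4'
file 'South Park - Damien part 4.mp4'
```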

That is all.

[0] gopher://triapul.cz/0/files/south-park-episodes.txt
[1] https://southparkstudios.com
[2] https://github.com/yt-dlp/yt-dlp
</pre>
]]>
</description>
 <pubDate>Tue, 12 Mar 2024 02:00:00 +0000</pubDate>
</item>
<item>
 <title>ceske noviny vol3</title>
 <guid>gopher://triapul.cz/0/phlog/2024-03-08-ceske-noviny-vol3.txt</guid>
 <link>gopher://triapul.cz/0/phlog/2024-03-08-ceske-noviny-vol3.txt</link>
 <description><![CDATA[
<pre>
CESKE NOVINY VOLUME 3

To the Czech visitors of this place, or those who can read
Czech: the popular 'news' section has received a rework.[1]

Most notably, individual files are now in utf-8, and there are
a lot more of them at the moment.

For compatibility's sake, everything is still parsed into
ascii. [2]


[1] gopher://triapul.cz/1/news
[2] gopher://triapul.cz/1/news/ascii.gph
</pre>
]]>
</description>
 <pubDate>Fri, 08 Mar 2024 02:00:00 +0000</pubDate>
</item>
<item>
 <title>reading html emails in mutt with links2</title>
 <guid>gopher://triapul.cz/0/phlog/2024-03-02-reading-html-emails-in-mutt-with-links2.txt</guid>
 <link>gopher://triapul.cz/0/phlog/2024-03-02-reading-html-emails-in-mutt-with-links2.txt</link>
 <description><![CDATA[
<pre>
                   Reading html emails in mutt with links2

  Sometimes you get an email that doesn't come with a plain text version,
  making it hard to read without a web browser. There are easier ways than
  the following, but they require more/different software. This method
  parses an email through links2 and displays it.

code

  Save the following as an executable. Preferably inside $PATH. I call it
  'mr', placed inside ~/bin.

#!/bin/sh
tmp=/tmp/muttmail.html
cat - > "$tmp"
[ "$1" = "" ] && (links -dump "$tmp" | less) || links -g "$tmp"
:> "$tmp"

  Since links doesn't understand '-' redirection, this first cats the
  email into a temporary file 'muttmail.html'. The html extension ensures
  links knows to parse the html tags into a readable form (this can also be
  forced on any file with the -force-html flag).

  The test line checks if 'mr' is run with any arguments. If it is not, it
  dumps the email into less. Typing 'mr' with any argument - ie: 'mr g' opens
  the email in graphical links, with clickable urls and (depending on your
  links configuration) loads images. Lastly the temporary file is emptied.

in practice

  Select an email. Press v. Select the text/html file and press '|' and type
  'mr', the parsed email opens in less. Or type 'mr whatever', the email opens
  in graphical links.
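
  Alternatively, mutt can run html parts through a filter automatically.
  This is a different route than the script above, using mutt's standard
  mailcap/auto_view mechanism (a sketch; adjust paths to taste):

```
# ~/.mailcap
text/html; links -dump %s; copiousoutput

# ~/.muttrc
auto_view text/html
```

  With this, the html part is rendered inline in the pager without
  pressing v and piping by hand.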
</pre>
]]>
</description>
 <pubDate>Sat, 02 Mar 2024 02:00:00 +0000</pubDate>
</item>
<item>
 <title>firefox user agents</title>
 <guid>gopher://triapul.cz/0/phlog/2024-02-29-firefox-user-agents.txt</guid>
 <link>gopher://triapul.cz/0/phlog/2024-02-29-firefox-user-agents.txt</link>
 <description><![CDATA[
<pre>
FIREFOX USER AGENTS
===================

This gopher hole and its plain web sibling now host a file, which
contains a daily-updated list of the most recent Firefox user-agents.

gopher://triapul.cz/0/files/firefox-user-agents.txt
https://triapul.cz/firefox-user-agents.txt

The idea is to be able to quickly retrieve the list and/or incorporate
it into custom scripts. For example:



Start the lynx browser with the following user-agent (3rd line of the file),
using curl:
   Mozilla/5.0 (X11; Linux i686; rv:123.0) Gecko/20100101 Firefox/123.0

```
$ lynx -useragent "$(curl -s https://triapul.cz/firefox-user-agents.txt | sed -n '3p')"
```



Start the links browser in graphical mode with the following
user-agent (first line of the file), using netcat:
   Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0

```
$ links -g -http.fake-user-agent "$(echo "/files/firefox-user-agents.txt" | nc triapul.cz 70 | head -n1)"
```
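
You can also rotate user-agents instead of always taking a fixed
line. A minimal sketch, assuming 'ff-ua.txt' is a local copy of the
list (one user-agent per line) fetched beforehand; the fallback
sample below only exists to keep the sketch self-contained:

```shell
#!/bin/sh
# local copy of the list; refresh it e.g. daily with:
#   curl -s https://triapul.cz/firefox-user-agents.txt -o ff-ua.txt
list=ff-ua.txt
# built-in fallback sample so the sketch runs on its own
[ -f "$list" ] || printf '%s\n' \
  'Mozilla/5.0 (X11; Linux i686; rv:123.0) Gecko/20100101 Firefox/123.0' \
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0' \
  > "$list"
# pick one line at random
ua="$(shuf -n 1 "$list")"
echo "$ua"
```

Feed "$ua" to lynx -useragent, links -http.fake-user-agent,
curl -A, or anything else that takes a user-agent string.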

If you can find a use for this, go for it.

That is all.
~pmjv
</pre>
]]>
</description>
 <pubDate>Thu, 29 Feb 2024 02:00:00 +0000</pubDate>
</item>
</channel>
</rss>