Synchronized Gopherholes (18-1-11)

  _Happy New Year, by the way, and a warning upfront: the following post
  is mostly an outline of an idea!_

  Although I was on vacation over the holidays, I was completely
  inactive on the phlogging side. Family consumed quite a bit of my
  time (in a good way, though), and I was not very motivated anyway,
  although I partly followed the activity on SDF.

  With the recent hiccups on SDF, I was thinking about where best to
  establish one's [ph|g|b]log presence so as to be least affected by
  downtime. No single host lasts forever, though, so one could instead
  establish a system of _interconnected gopherholes_ which
  automatically synchronize their contents, and publish all the
  locations so that people can look elsewhere if one node goes down.
  Actually, this is not restricted to blogs etc., but applies to one's
  online presence in general.

  Of course, neither the problem nor the ideas to solve it are
  anything new. Think of torrents, or of solutions like the
  [1]InterPlanetary File System, [2]Freenet, [3]zeronet etc. The
  problem with these systems is that they require installing a
  specific client or using some kind of gateway. I want a solution
  that works with normal clients, especially for gopher.

  In principle, this might be accomplished by dynamic DNS or even a
  round-robin DNS setup. However, that is again a single point of
  failure: recently, some of my solutions temporarily broke because
  mdns.org (hosted by SDF) was offline.

  In total, it boils down to "search all places where the content
  could be and try to connect to them", which must be done either by a
  client (zeronet etc.) or by the audience themselves (publish a set
  of entry points to the same content on different servers, and people
  try to find a working one). Currently, I don't want to use the
  former approach, and therefore a package for easy installation of a
  synchronization system seems the best solution. Obviously, the
  servers themselves could well use one of the former systems in the
  background for their synchronization.
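
  As a sketch of the latter approach: a reader (or a small helper
  script) could simply probe the published entry points and use the
  first one that answers. A minimal example in Python; the mirror
  names are invented for illustration:

    import socket

    # hypothetical mirrors all serving the same gopherhole
    MIRRORS = [("gopher.example.org", 70), ("mirror.example.net", 70)]

    def first_reachable(mirrors, timeout=5):
        """Return the first (host, port) accepting a TCP connection."""
        for host, port in mirrors:
            try:
                with socket.create_connection((host, port), timeout):
                    return host, port
            except OSError:
                continue
        return None

    entry = first_reachable(MIRRORS)
    print("use:", entry if entry else "no mirror reachable")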

  As my [4]plog suite already contains code for git-based handling of
  source texts and for publication to websites and gopher directories,
  it might be a good starting point. The final goal is an
  interconnected system of servers providing the following:
    * source texts can be injected into any running server, and after
      some hours, every other running server updates to the new
      content
    * conflicts, e.g. new texts arriving on different servers, are
      handled automatically (git should mostly do this already, with
      some additional script logic; see the first sketch after this
      list)
    * each server runs a web and a gopher server, automatically
      generates its static pages from the source texts, and may also
      run mailing lists (as already implemented in plog -- but there
      is a problem to solve, as only one e-mail per post and recipient
      should be sent; see the second sketch below)
    * source texts can generate plog entries, but may also update
      existing pages, for managing synchronized static sites (web and
      gopher)
    * servers coming up after being offline try to find another server
      to update themselves from, and if a new server is added to the
      swarm, this information is automatically shared among the
      others, i.e. a server only needs to know about a single one and
      will learn from it about the others
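
  To make the synchronization part more concrete, here is a very rough
  sketch of a node's periodic sync job, assuming the source texts and
  a peers file live together in one git repository. All paths, file
  names and the publish command are invented for illustration:

    import pathlib
    import subprocess

    REPO = pathlib.Path("/srv/plog")  # hypothetical content repository
    PEERS = REPO / "peers.txt"        # one git URL per line; the file
                                      # is part of the repo, so new
                                      # peers propagate like content

    def git(*args):
        return subprocess.run(["git", "-C", str(REPO), *args],
                              capture_output=True, text=True)

    def head():
        return git("rev-parse", "HEAD").stdout.strip()

    def sync_once():
        before = head()
        for url in PEERS.read_text().split():
            # pull-only gossip: each node fetches and merges from all
            # peers it knows, so new content (and new peer entries)
            # spread hop by hop without any central server
            git("pull", "--no-edit", url, "master")
        if head() != before:
            # regenerate static web and gopher pages from the sources
            subprocess.run(["plog-publish"])  # hypothetical command

    sync_once()  # e.g. run from cron every few hours

  Because the peer list is itself synchronized content, a freshly
  added server only needs one working URL and will learn about all the
  others on its first pull.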
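
  For the mailing problem, one simple idea (again just a sketch with
  invented names): keep a log of already-notified post/recipient pairs
  inside the synchronized repository itself, so whichever node mails
  first wins and the others skip. Note that this narrows the race
  window but does not fully close it, as two nodes might still both
  mail before their next sync:

    import pathlib

    # the sent log lives inside the synchronized repo (made-up path)
    SENTLOG = pathlib.Path("/srv/plog/sentlog.txt")

    def already_sent(post_id, recipient):
        """True if some node already mailed this post to recipient."""
        if not SENTLOG.exists():
            return False
        entry = f"{post_id} {recipient}"
        return entry in SENTLOG.read_text().splitlines()

    def mark_sent(post_id, recipient):
        # appended lines from different nodes usually merge cleanly
        # in git, as long as each node adds its own entries
        with SENTLOG.open("a") as f:
            f.write(f"{post_id} {recipient}\n")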

  Perhaps I should rewrite the plog suite; currently it's a bit messy, as
  the functional parts are rather intermingled. It might be better to
  have separate scripts or functions for (git) synchronization, source
  text conversion, publication, and mailing.

  Actually, it might even be best to completely separate the git
  synchronization logic from all the other components, because it
  could be used for any kind of content, like private repos, backups
  etc.

  Anyway, we'll see which goals I'll manage to implement in the near
  future!

  .:.

References

  1. https://ipfs.io/
  2. https://freenetproject.org/
  3. https://zeronet.io/
  4. https://gitlab.com/yargo/plog