A dataset is a tree of directories and files that can be transferred
as a unit and that shares an authorization profile (i.e. a given
group or groups of users have the same authorization for all
information in the dataset).
It occurs to me that it is wiser to design backups around the target
datasets than to take a host-oriented approach.
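Below is a minimal sketch of what driving backups from datasets could
look like; the Dataset record, hostnames, paths, and the rsync
invocation are all illustrative assumptions, not the actual setup.

#+BEGIN_SRC python
# Dataset-oriented backup: each dataset names its own file tree,
# target hosts, and authorization profile, and the backup loop is
# driven by datasets rather than by hosts.  All names, paths, and
# targets here are illustrative assumptions.
import subprocess
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    root: str           # local root of the file tree
    targets: list       # "host:path" destinations to mirror to
    authorization: str  # e.g. "me (all), others (read)"

green = Dataset(
    name="green",
    root="/home/me/green/",  # trailing slash: sync contents, not the dir
    targets=["sdf.org:gopher/", "meta.sdf.org:green/"],
    authorization="me (all), others (read)",
)

def backup(dataset):
    """Mirror the whole dataset, as a unit, to each of its targets."""
    for target in dataset.targets:
        subprocess.run(["rsync", "-a", "--delete", dataset.root, target],
                       check=True)

backup(green)
#+END_SRC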
* green
- Description :: Public information authored by me.
- Host(s) ::
1. sdf.org (Gopher)
2. papa.motd.org (HTTP) (File tree of 1. served via Blosxom)
3. meta.sdf.org (SSH) (Same file tree)
4. iza, lixi, shiro (local) (Mirror sets for off-line work)
(Want to put the dataset under version control with Subversion on
ma.sdf.org (SSH) and to replace 2. with service from papa.sdf.org
(HTTP).)
- Authorization :: me (all), others (read)
- The work-set mirrors already serve as an on-line backup.
- HTTP service via a link from papa.motd.org to the Gopher file tree
would avoid the need to duplicate files on ma (sketched below).
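The link could be as simple as a symlink from the web root into the
Gopher tree. A minimal sketch; both paths below are assumptions, not
the real locations on papa.

#+BEGIN_SRC python
# Point the HTTP document root at the existing Gopher tree instead of
# keeping a second copy.  Both paths are illustrative assumptions.
import os

gopher_tree = os.path.expanduser("~/gopher/green")  # assumed Gopher root
web_entry = os.path.expanduser("~/html/green")      # assumed HTTP root

# Create the link only if nothing (file, dir, or link) is already there.
if not os.path.lexists(web_entry):
    os.symlink(gopher_tree, web_entry)
#+END_SRC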