Subj : Fmail (2018 Questions)
To : Wilfred van Velzen
From : mark lewis
Date : Sat Feb 16 2019 02:24 pm
On 2019 Feb 16 12:04:48, you wrote to Ozz Nixon:
ON>> I skimmed the DOCs, and had seen ECHOMAIL.JAM statement - but,
ON>> assumed that was a RA 2.x feature and didn't want to go back through
ON>> their STRUCT files to find it... thus, I asked. ;-)
WV> It's not in the fmail or golded docs. And my guess is, it's a RA thing...
no... RA/FD/FE implemented it after it was added as part of a feature
request... it was taken up by many others as well... the problem is that there
are two or three similar files that provide the same information in slightly
different ways... echomail.jam and netmail.jam are for _outbound_ local posts
that need to be scanned out of the message base... some try to use these files
for inbound mail, which is the exact opposite of what they are intended for...
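on the posting side there isn't much to it... a minimal sketch in C, assuming
the record format i remember (one line appended per new local post, holding
the area's path and base filename without extension)... check that against
your tosser's docs... the function name and the semaphore directory parameter
are mine, just for illustration...

  /* append the area's base path to ECHOMAIL.JAM after a local post so
   * the tosser knows to scan that base out... format is from memory... */
  #include <stdio.h>

  int flag_outbound_post(const char *semaphore_dir, const char *area_base)
  {
      char path[260];
      FILE *fp;

      snprintf(path, sizeof path, "%s\\ECHOMAIL.JAM", semaphore_dir);
      fp = fopen(path, "a");             /* append; create if missing */
      if (fp == NULL)
          return -1;
      fprintf(fp, "%s\r\n", area_base);  /* eg C:\JAM\FIDO\PASCAL */
      fclose(fp);
      return 0;
  }

netmail.jam works the same way for local posts in the netmail base...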
the other files have different names and may be more generally used for "sole
occupant" type setups like single-user BBSes or point systems... especially
those that use local sysop reader/editor setups like golded, timed, msged and
similar... those readers can use the inbound files to sort the areas with new
mail to the top of the mail listing if you want to see them first... i don't
see them working very well for multi-user setups like a BBS, though... the
lastread pointers of the users still need to be consulted no matter what mail
arrived recently...
on searching for new messages since a user's last visit, it should be easy
enough for any JAM capable package to maintain the two lastread pointers
(lastread and highread) for each user... it should also be able to quickly
scan through the JLR files, comparing each user's lastread record against the
number of messages in the area that JLR file belongs to... if a system is too
slow doing that, they may want to examine their algorithm and/or system
setup...
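a minimal sketch of that check in C, using the record layouts straight from
the JAM spec... the function name is mine and error handling is thin... it
assumes a little-endian machine, same as the on-disk format...

  #include <stdio.h>
  #include <stdint.h>

  struct jam_lastread {        /* one 16-byte record per user in <area>.JLR */
      uint32_t user_crc;       /* CRC-32 of the lowercased user name */
      uint32_t user_id;
      uint32_t last_read_msg;  /* last message actually read */
      uint32_t high_read_msg;  /* highest message number read */
  };

  /* 1 = user has unread mail in the area, 0 = none, -1 = error...
   * "base" is the area path without extension, eg C:\JAM\FIDO\PASCAL */
  int area_has_new_mail(const char *base, uint32_t user_crc)
  {
      char name[260];
      FILE *fp;
      struct jam_lastread lr;
      uint32_t high_read = 0;  /* no record yet means everything is new */
      uint32_t base_msg_num;
      long idx_size;

      /* find the user's lastread record in the .JLR file */
      snprintf(name, sizeof name, "%s.JLR", base);
      if ((fp = fopen(name, "rb")) != NULL) {
          while (fread(&lr, sizeof lr, 1, fp) == 1)
              if (lr.user_crc == user_crc) {
                  high_read = lr.high_read_msg;
                  break;
              }
          fclose(fp);
      }

      /* BaseMsgNum sits at offset 20 of the .JHR fixed header... */
      snprintf(name, sizeof name, "%s.JHR", base);
      if ((fp = fopen(name, "rb")) == NULL)
          return -1;
      fseek(fp, 20, SEEK_SET);
      if (fread(&base_msg_num, 4, 1, fp) != 1) { fclose(fp); return -1; }
      fclose(fp);

      /* ...and the highest message number is BaseMsgNum plus the count
       * of 8-byte .JDX index records, minus one */
      snprintf(name, sizeof name, "%s.JDX", base);
      if ((fp = fopen(name, "rb")) == NULL)
          return -1;
      fseek(fp, 0, SEEK_END);
      idx_size = ftell(fp);
      fclose(fp);
      if (idx_size < 8)
          return 0;                      /* empty area */

      return high_read < base_msg_num + (uint32_t)(idx_size / 8) - 1;
  }

loop that over every JLR on disk and the whole scan is a handful of small
sequential reads per area...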
i do recall, on DOS systems, that there was a major slowdown of scanning files
which JAM brought to the forefront... the problem was actually in the file
system and the way the OS handled FAT directories with large numbers of
files... folks thought they were putting (eg) 400 message areas in one
directory but, since each JAM area is four files (.JHR, .JDT, .JDX and .JLR),
they were really putting 1600 files in one directory... splitting the areas
into directories with fewer than 255 _files_ (63 JAM areas) per directory sped
the processing up by at least an order of magnitude... access speed problems
may also be seen on networked drive shares... JAM really should be used from
local fast HDs, IMHO...
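the split itself can be as dumb as bucketing areas by number... a sketch, with
a made-up layout, that keeps every bucket at or under 63 JAM areas (252
files)...

  #include <stdio.h>

  /* eg root=C:\JAM, area 130 -> C:\JAM\D002\A0130 ... the D###/A####
   * naming is invented for illustration... */
  void jam_area_path(char *out, size_t outlen,
                     const char *root, unsigned area_num)
  {
      snprintf(out, outlen, "%s\\D%03u\\A%04u",
               root, area_num / 63, area_num);
  }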
i like the OOP aspect of the pascal JAM library i used in my code... it was
fast and did everything i needed... i don't know how non-OOP linear procedural
code would compare... whether it would gather the needed information from the
message base files as quickly, or whether additional reads would be needed
that could slow the scanning process down...
FWIW: when i was using JAM, i was splitting my areas into a structured
directory tree something like this...
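  C:\JAM\                     (drive and root here are illustrative)
  C:\JAM\ALLFIX\FILE          (the ALLFIX_FILE area)
  C:\JAM\ALLFIX\HELP          (the ALLFIX_HELP area)
  C:\JAM\COMP\LANG\PASCAL     (a gated comp.lang.pascal)
  C:\JAM\AUTOADD\             (autoadded areas wait here)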
it made things faster and also easier to maintain... autoadded areas were
placed into a special directory until their record was updated in the tosser
configuration program, at which time the area was moved to the proper
directory in the above tree... i used whatever separator appeared between the
words of the area tag as the directory divider... the ALLFIX_FILE and
ALLFIX_HELP areas above are good examples of that... same with gated
newsgroups, where you use the dots between the portions of the group name as
the directory splitter...
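the mapping is mechanical enough to automate... a minimal sketch, with a
made-up function name, that turns an area tag into the matching subtree...

  #include <string.h>

  /* caller's buffer must hold strlen(tag)+1 bytes...
   * "ALLFIX_FILE"      -> "ALLFIX\FILE"
   * "comp.lang.pascal" -> "comp\lang\pascal" */
  void tag_to_subdirs(char *out, const char *tag)
  {
      char *p;

      strcpy(out, tag);
      for (p = out; *p; p++)
          if (*p == '_' || *p == '.')
              *p = '\\';
  }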
Always Mount a Scratch Monkey
Do you manage your own servers? If you are not running an IDS/IPS yer doin' it
wrong...
... You say I'm a bastard like it's a bad thing.
---
* Origin: (1:3634/12.73)