Subj : Committing file changes
To : David Noon
From : George White
Date : Mon Aug 21 2000 01:09 am
Hi David,
On 20-Aug-00, David Noon wrote to George White:
DN> Replying to a message of George White to Coridon Henshaw:
CH>>> It's not intermediate commits I need: what I need is some way to
CH>>> flush out write operations made to files which might be open for
CH>>> days or weeks at a time.
GW>> The only reliable way I know of is _NOT_ to keep the files open
GW>> but to open and close them as needed. It is the _only_ way I know
GW>> which is guaranteed to update the directory information (Inode
GW>> under *NIX) so that a chkdsk won't cause you that sort of grief.
GW>> In a similar situation I ended up opening and closing the file
GW>> during normal operation to ensure the on-disk information and
GW>> structures were updated. Originally I opened the file on start-up
GW>> and kept it open.
DN> This is true when one is keeping things simple, such as using
DN> sequential file structures.
Which is how Coridon appears to be doing things at present.
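For what it's worth, the pattern I settled on back then was nothing
cleverer than the fragment below. It's a rough sketch from memory,
not the actual code (the real thing had error logging and retries),
but the idea is just open, append, close on every record:

  #include <stdio.h>

  /* Append one record, then close at once so the file system
     brings the directory entry (size, allocation) up to date on
     every write. Keeping the handle open for days was what caused
     the grief in the first place. */
  int append_record(const char *path, const void *rec, size_t len)
  {
      FILE *fp = fopen(path, "ab");    /* open for append, binary */

      if (fp == NULL)
          return -1;
      if (fwrite(rec, 1, len, fp) != len) {
          fclose(fp);
          return -1;
      }
      return fclose(fp);          /* the close is the "commit" */
  }

Slower than holding the file open, obviously, but it was the only
thing that kept chkdsk quiet.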
DN> A genuine database [and that is what Coridon claims he is coding]
DN> does not restrict itself to simple file structures. The usual
DN> approach is to allocate and pre-format a suitably large area of
DN> disk, known as a tablespace in DB2, and then maintain
DN> database-specific structural data within that. The pre-format
DN> operation finishes by closing the physical file, thus ensuring the
DN> underlying file system has recorded the number and size of all
DN> disk extents allocated to the file. The DBMS is then free to
DN> "suballocate" the disk space as and how it sees fit. It also takes
DN> on the responsibility to ensure the consistency of the database's
DN> content.
From Coridon's description of his problem after the kernel trap, he is
not working that way, but appending variable-length records to the
file. That of course means that normal file operations can leave the
file in an inconsistent state. Certainly pre-allocating the data space
and working within it means that the file should never have to be
closed. In my experience, some of the file caching on the PC platform
does not seem to handle the situation where a particular part of the
data structures (a sector or cluster in the underlying file system) is
repeatedly written to and read from while the file is kept open;
opening and closing the file seems to overcome the problem. I never
put in the work to confirm this suspicion; I just found a way to get
reliable operation and got on with coding other things. I didn't have
the time when it arose, and now that the project is history I have no
inclination to look into it...
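If Coridon does take the pre-allocation route you describe, the
start-up step itself is not hard. Something like the sketch below
would do it; the file name and size are invented for illustration,
and a real DBMS would pre-format proper page structures rather than
just zero-fill:

  #include <stdio.h>
  #include <string.h>

  #define TABLESPACE       "coridon.dat"          /* invented name */
  #define TABLESPACE_SIZE  (16L * 1024L * 1024L)  /* say, 16 MB    */

  /* Pre-allocate the whole data area in one go, then close it so
     the file system records the number and size of every extent.
     After this the file's size never changes; the DBMS suballocates
     within it and takes over responsibility for consistency. */
  int preformat_tablespace(void)
  {
      static char zeros[4096];
      long done;
      FILE *fp = fopen(TABLESPACE, "wb");

      if (fp == NULL)
          return -1;
      memset(zeros, 0, sizeof zeros);
      for (done = 0; done < TABLESPACE_SIZE; done += sizeof zeros) {
          if (fwrite(zeros, 1, sizeof zeros, fp) != sizeof zeros) {
              fclose(fp);
              return -1;
          }
      }
      return fclose(fp);    /* extents are now on record on disk */
  }

Once that close has happened, in-place updates via fseek()/fwrite()
never change the file's size, so a crash can at worst leave stale
page contents - never the truncated or cross-linked file that an
interrupted append can produce.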
DN> We will see how Coridon implements such a database system.
Like you, I'm watching with interest.
George
--- Terminate 5.00/Pro
* Origin: A country point under OS/2 (2:257/609.6)