
Thread: When is the FAT saved?

  1. #1

    Default When is the FAT saved?

    Hallo allemaal,

    Background: I'm writing my own OS with its own file system, just for fun. When adding or deleting files, the BAM (the Commodore equivalent of the FAT) has to be edited whenever a sector is added or deleted. At the moment I save the particular BAM sector after every change. I test things on a virtual PC, but I can imagine that on a real PC the many writes could stress the mechanism of the drive. "Could", because if most BAM sectors are on the same track, almost no head movements are needed.

    One idea I had was writing a BAM sector back only when another one is needed (I only keep one sector in memory). But then you run the risk that no other BAM sector is read before the PC is powered off, leaving you with an incorrect sector on disk. The next idea is to let the process that caused the changes give the command to save the in-memory BAM sector to disk at the end of the task. Should be safe enough IMHO, comments are welcome!
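    To make that second idea concrete, this is roughly what I have in mind (a rough C sketch; all names are invented, and read_sector()/write_sector() stand in for my own low-level disk routines):

    Code:
    /* One BAM sector cached in memory with a dirty flag; the flag is
       cleared by bam_flush(), called at the end of the task that made
       the changes. */

    #define SECTOR_SIZE 256

    extern void read_sector(unsigned int no, unsigned char *buf);
    extern void write_sector(unsigned int no, const unsigned char *buf);

    static struct {
        unsigned int  sector_no;          /* which BAM sector is loaded   */
        unsigned char data[SECTOR_SIZE];  /* its contents                 */
        int           dirty;              /* changed but not written back */
    } bam = { 0xFFFFFFFFu };              /* "nothing loaded yet"         */

    /* called whenever a sector is allocated or freed */
    void bam_mark(unsigned int bam_sector, unsigned int bit, int in_use)
    {
        if (bam.sector_no != bam_sector) {
            if (bam.dirty)
                write_sector(bam.sector_no, bam.data); /* flush old one first */
            read_sector(bam_sector, bam.data);
            bam.sector_no = bam_sector;
            bam.dirty = 0;
        }
        if (in_use)
            bam.data[bit / 8] |=  (unsigned char)(1u << (bit % 8));
        else
            bam.data[bit / 8] &= (unsigned char)~(1u << (bit % 8));
        bam.dirty = 1;
    }

    /* called once, at the end of the task that caused the changes */
    void bam_flush(void)
    {
        if (bam.dirty) {
            write_sector(bam.sector_no, bam.data);
            bam.dirty = 0;
        }
    }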

    But now the subject: how does MS-DOS handle the FAT?

    Thanks for any info!
    With kind regards / met vriendelijke groet, Ruud Baltissen

    www.baltissen.org

  2. #2
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    33,665
    Blog Entries
    18

    Default

    There are good ways and bad ways to handle things.

    Generally, one updates the allocation table when the allocation changes. If your allocation is at the sector level, you're going to get a bunch of rewrites. MSDOS and most other operating systems allocate in groups of sectors (blocks or clusters). Still, it may not matter much for most solid-state media (CF, SD, DOM), because such devices have wear-leveling built in.
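    To illustrate the difference, a hypothetical sketch in C (all names made up): with 8-sector clusters, the allocation map only needs touching once per cluster of file growth, not once per sector.

    Code:
    /* Hypothetical cluster allocator: one bit per cluster of 8 sectors,
       so the on-disk map needs a rewrite only once per 8 sectors of
       file growth. */

    #define SECTORS_PER_CLUSTER 8
    #define TOTAL_CLUSTERS      1024

    static unsigned char cluster_map[TOTAL_CLUSTERS / 8]; /* 1 bit per cluster */

    /* returns a cluster number, or -1 if the disk is full */
    int alloc_cluster(void)
    {
        int c;
        for (c = 0; c < TOTAL_CLUSTERS; c++) {
            if (!(cluster_map[c / 8] & (1 << (c % 8)))) {
                cluster_map[c / 8] |= (unsigned char)(1 << (c % 8));
                /* only now does the on-disk map need a rewrite */
                return c;
            }
        }
        return -1;
    }

    /* first sector belonging to a cluster */
    unsigned int cluster_to_sector(int c)
    {
        return (unsigned int)c * SECTORS_PER_CLUSTER;
    }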

    However, I don't believe that MSDOS updates the file length until a file is closed, either explicitly or implicitly.

    Consider that if you updated the file information every time the file length changed, a program doing a bunch of one-byte writes would force the file information to be rewritten for every single byte, which you don't want.
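    As a hypothetical sketch of that (invented names, update_dir_entry() standing in for whatever writes the directory entry): the growing length lives only in the in-memory handle, and the on-disk entry is rewritten once, at close.

    Code:
    /* The size is tracked in RAM while the file is open; the on-disk
       directory entry is rewritten only when the file is closed. */

    extern void update_dir_entry(unsigned int dir_sector,
                                 unsigned int dir_offset,
                                 unsigned long size);

    struct open_file {
        unsigned int  dir_sector;   /* where the directory entry lives */
        unsigned int  dir_offset;
        unsigned long size;         /* current length, in memory only  */
        int           dirty;
    };

    void file_append_byte(struct open_file *f)
    {
        /* ... put the byte into a data buffer ... */
        f->size++;                  /* no metadata write happens here  */
        f->dirty = 1;
    }

    void file_close(struct open_file *f)
    {
        if (f->dirty)               /* one directory write, at close   */
            update_dir_entry(f->dir_sector, f->dir_offset, f->size);
        f->dirty = 0;
    }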

    The point is to avoid allocating the same block for two different files. It's one of the nastier errors to sort out. Orphan blocks can be dealt with pretty harmlessly.
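    For what it's worth, a repair pass along these lines (a hypothetical sketch, invented names) shows why: a cluster referenced by two chains needs real untangling, while an orphan can simply be returned to the free pool.

    Code:
    /* Count how many directory entries reference each cluster by walking
       the FAT chains.  refs[c] > 1 means cross-linked (the nasty case);
       a cluster marked in use but referenced by nobody is an orphan. */

    #define TOTAL_CLUSTERS 1024
    #define FAT_FREE       0
    #define FAT_EOF        0xFFFF

    extern unsigned short fat[TOTAL_CLUSTERS];        /* next-cluster table */
    extern int file_count(void);                      /* from the directory */
    extern int first_cluster_of_file(int file_index);

    void check_allocation(void)
    {
        static unsigned char refs[TOTAL_CLUSTERS];
        int i, c, hops;

        for (i = 0; i < file_count(); i++)
            for (c = first_cluster_of_file(i), hops = 0;
                 c != FAT_EOF && hops < TOTAL_CLUSTERS;  /* guard against loops */
                 c = fat[c], hops++)
                refs[c]++;

        for (c = 0; c < TOTAL_CLUSTERS; c++) {
            if (refs[c] > 1) {
                /* cross-linked: two files claim this cluster -- needs repair */
            } else if (refs[c] == 0 && fat[c] != FAT_FREE) {
                fat[c] = FAT_FREE;  /* orphan: quietly reclaim it */
            }
        }
    }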

  3. #3

    Default

    You can buffer it in RAM and periodically write out updates. Of course, you have to worry about power shutdowns, but that is life. Writing the updated FAT out every one or two seconds is about right. Have a light on your machine that indicates the FAT has been updated but not yet written. You'd treat it just like any floppy disk on an older machine: you only turn the power off when the light is off and you know it is safe to do so.
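    Something like this (a rough C sketch; set_light() and write_fat_to_disk() are made-up names for your own routines):

    Code:
    /* A timer tick (say every second or two) writes the FAT back only if
       it has changed; a front-panel light mirrors the dirty flag so you
       know when it is safe to power off. */

    extern void set_light(int on);
    extern void write_fat_to_disk(void);

    static int fat_dirty;

    void fat_changed(void)          /* call from the allocation code */
    {
        fat_dirty = 1;
        set_light(1);               /* "don't power off yet" */
    }

    void timer_tick(void)           /* called every 1-2 seconds */
    {
        if (fat_dirty) {
            write_fat_to_disk();
            fat_dirty = 0;
            set_light(0);           /* safe to power off */
        }
    }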
    Dwight

  4. #4
    Join Date
    May 2009
    Location
    Connecticut
    Posts
    4,628
    Blog Entries
    1

    Default

    MS-DOS 1 and 2 write out the updated FAT as part of flushing dirty buffers on every disk read. Only files too large to be written out in a single pass will have the FAT on disk in a temporarily inconsistent state.

  5. #5
    Join Date
    Mar 2011
    Location
    Atlanta, GA, USA
    Posts
    1,548

    Default

    You could always buffer it until an idle back-stop timer expires, causing a flush. Keep it short, on the order of tens of milliseconds. That way you don't rewrite the same data more often than needed, and everything gets flushed a reasonable time after the last user activity (in case the power switch is flipped).
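    Roughly like this (a sketch; now_ms() and write_fat_to_disk() are invented names): every update pushes the deadline out, and the flush only fires once things have been quiet long enough.

    Code:
    /* Back-stop timer: each change resets a short deadline; the flush
       happens only after activity has stopped for IDLE_FLUSH_MS. */

    #define IDLE_FLUSH_MS 50

    extern unsigned long now_ms(void);
    extern void write_fat_to_disk(void);

    static unsigned long flush_deadline;
    static int fat_dirty;

    void fat_changed(void)
    {
        fat_dirty = 1;
        flush_deadline = now_ms() + IDLE_FLUSH_MS;  /* push the deadline out */
    }

    void idle_poll(void)            /* called from the main loop or a timer */
    {
        if (fat_dirty && now_ms() >= flush_deadline) {
            write_fat_to_disk();
            fat_dirty = 0;
        }
    }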
    "Good engineers keep thick authoritative books on their shelf. Not for their own reference, but to throw at people who ask stupid questions; hoping a small fragment of knowledge will osmotically transfer with each cranial impact." - Me

  6. #6
    Join Date
    Mar 2017
    Location
    New Jersey, USA
    Posts
    553

    Default

    Quote Originally Posted by Ruud View Post
    One idea I had was writing a BAM sector back only when another one is needed (I only keep one sector in memory). But then you run the risk that no other BAM sector is read before the PC is powered off, leaving you with an incorrect sector on disk. The next idea is to let the process that caused the changes give the command to save the in-memory BAM sector to disk at the end of the task. Should be safe enough IMHO, comments are welcome!
    This is basically what you need to do, and the command to save the BAM (and any other dirty buffers) is commonly called "closing a file".

    If you are feeling truly inspired, you might consider immediately updating any BAM sector you read with a marker indicating that the sector has a potential update in progress. When you write the sector back you clear the marker. The idea here is that if the system is powered down with a BAM update unwritten, you will be able to detect this situation and do a quick check to see if the sector allocation based on following file pointers matches the allocation found in the BAM. The downside of course is that if no BAM update was actually needed (i.e. no sectors needed to be added or deleted) you have 2 extra BAM writes -- one to set the flag and another to clear it.
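    Something along these lines (a rough sketch; the marker offset and the sector routines are invented names):

    Code:
    /* A byte inside the BAM sector says "update may be in progress".
       It is set when the sector is loaded for modification and cleared
       when the sector is written back cleanly.  At boot, a set marker
       triggers a consistency check of that sector. */

    #define BAM_MARKER_OFFSET 255     /* last byte of the sector, for example */

    extern void read_sector(unsigned int no, unsigned char *buf);
    extern void write_sector(unsigned int no, const unsigned char *buf);

    void bam_load_for_update(unsigned int sector_no, unsigned char *buf)
    {
        read_sector(sector_no, buf);
        if (buf[BAM_MARKER_OFFSET] == 0) {
            buf[BAM_MARKER_OFFSET] = 1;     /* "update in progress"     */
            write_sector(sector_no, buf);   /* first extra write        */
        }
    }

    void bam_write_back(unsigned int sector_no, unsigned char *buf)
    {
        buf[BAM_MARKER_OFFSET] = 0;         /* clean again              */
        write_sector(sector_no, buf);       /* second write clears it   */
    }

    int bam_needs_check_at_boot(unsigned int sector_no, unsigned char *buf)
    {
        read_sector(sector_no, buf);
        return buf[BAM_MARKER_OFFSET] != 0; /* power lost mid-update    */
    }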

    Another option is to use a timeout to write the BAM sector after some inactivity period has elapsed. This has a big performance impact on a floppy disk, especially if the drive motor may have turned off. Also users might find it suspicious/alarming if the drive motor turns on or the head seeks, without them having done anything specific to trigger it. On a fast disk like a hard disk or SSD this works fine though.

    It's sensible to use different behaviors depending on whether the disk is removable or not. Also sensible to assume that any removable disk has the same performance constraints as a floppy disk (i.e. you want to minimize seeks).

    It's common on a single-user system to handle this the simple way -- instruct users not to power down until their files are closed.
    Last edited by kgober; January 9th, 2020 at 07:34 AM.

  7. #7
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    33,665
    Blog Entries
    18

    Default

    I'll add that one of the strategies that has been used to insure against data loss is to preallocate storage for the file. In other words, the area to be used by file data is known right from the point when the file is opened to the point where it's closed or a system interruption happens.

    The downside is that you have to have a pretty good idea of how much storage the file will occupy--and this requires some knowledge on the program's part. When the file is complete, however, the program can return excess storage to the free pool--and you don't have to deal with routine file fragmentation. You may need to "compress" the medium if too many "holes" develop, though.
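    A hypothetical sketch of the idea (alloc_extent()/free_extent() are invented names): the whole extent is claimed in the allocation map when the file is created, so an interruption can never leave two files sharing blocks, and the unused tail is handed back at close.

    Code:
    /* Preallocation: reserve the expected extent up front, then return
       whatever wasn't used when the file is closed. */

    extern unsigned int alloc_extent(unsigned int blocks);  /* 0 = failure */
    extern void free_extent(unsigned int first_block, unsigned int blocks);

    struct prealloc_file {
        unsigned int first_block;
        unsigned int blocks_reserved;
        unsigned int blocks_used;
    };

    int file_create(struct prealloc_file *f, unsigned int expected_blocks)
    {
        f->first_block     = alloc_extent(expected_blocks); /* map updated once */
        f->blocks_reserved = expected_blocks;
        f->blocks_used     = 0;
        return f->first_block != 0;
    }

    void file_close(struct prealloc_file *f)
    {
        /* return the unused tail of the reservation to the free pool */
        if (f->blocks_used < f->blocks_reserved)
            free_extent(f->first_block + f->blocks_used,
                        f->blocks_reserved - f->blocks_used);
    }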

    I've worked with a few systems that operated this way--all had provision for an "emergency" extension of file storage (up to, say, 4 times). A couple were for microprocessors and a couple were for mainframes. Although one may lose track of where the end of file is, data is not lost through structural failure.

  8. #8

    Default

    First, for everybody: thank you for responding! Your comments gave me a lot of input and a better insight into how I can handle files.

    Quote Originally Posted by kgober View Post
    This is basically what you need to do, and the command to save the BAM (and any other dirty buffers) is commonly called "closing a file".
    I already reserved some routines for handling files, including closing one, but I hadn't thought of using the closing of a file to update the BAM. It fits neatly into what I already have, and I don't have to program an exception. So thank you very much for this solution!
    With kind regards / met vriendelijke groet, Ruud Baltissen

    www.baltissen.org
