
CP/M question...



ziloo
November 2nd, 2017, 08:36 AM
Andy Laird, in "The CP/M Handbook", states that:

....hardware designers arrange for some initial instructions to be forced into memory at
location 0000H and onward. It is this feat that is like pulling yourself up by your own
bootstraps. How can you make the computer obey a particular instruction when there is
"nothing" (of any sensible value) inside the machine?

There are two common techniques for placing preliminary instructions into
memory:

1- Force-feeding

With this approach, the hardware engineer assumes that when the RESET
signal is applied, some part of the computer system, typically the floppy
disk controller, can masquerade as memory. Just before the CPU is unleashed,
the floppy disk controller will take control of the computer system
and copy a small program into memory at location 0000H and upward.
Then the CPU is allowed to start executing instructions at location 0000H.
The disk controller preserves the instructions even when power is off
because they are stored in nonvolatile PROM-based firmware. These
instructions make the disk controller read the first sector of the first track
of the system diskette into memory and then transfer control to it.

2- Shadow ROM
.....

Now this force-feeding business is all Greek to me. Can some of the folks here explain how it is possible to quiet the CPU at time zero and let the FD controller take over?

ziloo :mrgreen:

JonB
November 2nd, 2017, 08:55 AM
Reading it, it looks the same as shadow ROM in effect, because you have the boot code in an EPROM. The only difference is that it is copied into RAM (via DMA) by the FDC - which would need to hold the CPU in reset while doing so. That answers your specific question - the FDC, or another chip with a known initial state, keeps the CPU in reset while copying the boot code to RAM, then releases the reset line so the CPU can run it. For example, the Superbrain does this (holds a CPU in reset) for CPU2 using a line of the PIA chip at cold boot (although not to copy stuff into RAM).

"Force feeding" seems a bit over engineered to me. Which FDCs have this ability?

Shadow ROM (or paged, switchable ROM) is a far more common approach in my (admittedly limited) experience.

durgadas311
November 2nd, 2017, 09:01 AM
Of course, "2- Shadow ROM" is pretty much the universal solution these days, and even for older, CP/M-era, machines. But some computers would "halt" or "hold" the CPU (effectively stop the clock) until the floppy (or other I/O) completes it's "bootstrap" command. In those cases, the I/O device would need to have DMA capability, as the CPU cannot be involved in the I/O if it has no code to run. In some cases, such as the Honeywell 200/2000 mainframes, it is a fairly manual process - the CPU powers up "halted" and the operator mounts the I/O device and starts the bootstrap operation (loads the code to run), then releases the CPU to run the code that was loaded. I think there were some "specialty" PDP-8 systems (dedicated word-processors, etc) that worked in a more-automated fashion. In those cases, when you power-up, the CPU is held idle until the floppy controller can read the bootstrap sectors. It will wait forever for you to insert the diskette, and if that is bad or not "bootable" then the hardware keeps looping - never releasing the CPU.

I think even some modern CPUs have similar capabilities, especially for embedded use. But with ROM being so inexpensive these days, it doesn't make much sense to add extra hardware to enable a bootstrap function on an I/O device.

The Z80 had a couple of methods for this: one was the WAIT signal, which could be applied by I/O bootstrap logic on the initial instruction fetch from 0000H. Another was the BUSREQ/BUSACK logic (used by DMA devices), which I presume one could make work for that. I'm not aware of anyone ever doing either of those, though.

Chuck(G)
November 2nd, 2017, 09:13 AM
I think Andy was thinking of things like Don Tarbell's floppy controller. It has a small bipolar ROM in it that loads the first sector of a disk into memory and jumps to it; said ROM program forces its contents onto the data bus at location 0 after reset, during which a sector is read into 0000-0080h, with the controller managing the bus. When the read is finished, a jump to 007DH is performed and the ROM no longer appears on the bus.

There's a copy of the Tarbell disk document on bitsavers.

Right from the very first Altair S100 spec, there is a signal called STSDSBL (or something like it) on pin 18, which disables (tristates) the MPU's status drivers. Assert this signal and take control of the bus yourself.

If you have a ROM in high memory, there are two approaches to getting the MPU to go there after a reset. One is to force a jump instruction onto the bus at reset (not that much different from the Tarbell scheme, above); the other is to force 00 (no-ops) onto the bus, using a comparator to check the address the MPU thinks it's reading from and then releasing the bus when the target has been reached.

glitch
November 2nd, 2017, 11:04 AM
And, the most forceful of the "force feeding" techniques: clobbering MREAD with a transistor to ground or some paralleled bus drivers and causing an intentional bus conflict to effect power-on jump! Despite having built the status disable line into the bus spec, this is the method that the MITS Turnkey board uses, as well as at least the TDL SMB and SMB 2.

From the description, "force feeding" sounds more like DMA. There were certainly floppy controllers that could do DMA, such as the Morrow Disk Jockey DJDMA. Never personally used one, so I don't know if they also DMA in the bootstrap code.

Shadow ROM could use the above (either intentional bus conflict or properly using the status disable line), the *PHANTOM line if system memory supported it, or bank switching if the system supported it. There were also CPU boards that could jam a JMP onto the bus by just disabling external transceivers, or switch in an onboard ROM. Using the *PHANTOM line (which, when pulled low, tells memory boards that acknowledge *PHANTOM to not respond to the bus cycle) sounds like the "masquerading" part of the "force feeding" description. But that's also how Shadow ROM can work, so who knows what the exact intention was...

Chuck(G)
November 2nd, 2017, 11:08 AM
The cool thing about Tarbell's design is that it worked on almost any system, z80 or 8080. You can see what's going on--he's got a 74LS367 driving the STSDSBL line as well as the now-floating status lines. What I thought was unusual was that Tarbell's boot code is storing data into the same addresses that it's executing from--0000-0080h.

glitch
November 2nd, 2017, 11:18 AM
The cool thing about Tarbell's design is that it worked on almost any system, z80 or 8080. You can see what's going on--he's got a 74LS367 driving the STSDSBL line as well as the now-floating status lines. What I thought was unusual was that Tarbell's boot code is storing data into the same addresses that it's executing from--0000-0080h.

IIRC I've got at least one board that does that, so that you can copy the ROM contents into real RAM at 0x0000 as part of the bootstrap. Forget which.

The Dajen SCI took the proper approach to power-on jump with disabling the status lines and driving them itself. I don't know why more boards didn't use the approach early on, there was no reason to clobber MREAD like that!

Dwight Elvey
November 2nd, 2017, 01:33 PM
I have one of the early disk systems, made by "Digital Systems". On reset, it would take over the processor (I forget which line it uses) and DMA-load the disk's first sector into RAM at address 0. Assuming one had a front panel, one would wait for the disk light to go out and hit RUN. CP/M likes to use 0 to 100H, so the low-level disk I/O can then be loaded by the bootstrap program in those first few bytes. Once clear of 0 to 100H, loading CP/M is no issue. I know how it works because it came with no software and I had to get it up and running myself. All normal disk accesses are done by DMA, so the additional hardware to handle the bootstrap reset is minimal, from a design aspect.

Most later disk controllers used a shadow ROM. These usually have the ROM mirrored in two locations. The first bit of code jumps to the normal ROM address; the act of addressing the normal address flips a flip-flop that was cleared on reset to enable the shadowing. This is less complicated with the standard disk controller chips than doing the DMA load. The DMA load has the advantage that when the boot is complete, there is no need for any ROM in the memory address space; one can use 100% of the RAM (or at least that part not used by some video boards and disk I/O). One can always restart CP/M to get back to normal usage.
Dwight

JohnElliott
November 2nd, 2017, 01:43 PM
The Amstrad PCW (http://www.chiark.greenend.org.uk/~jacobn/cpm/pcwboot.html) uses a variant of the force-feeding technique -- at startup, memory fetches come from the printer controller, which emits 779 bytes (instructions and data reads). The CPU executes these to generate a 256-byte program in RAM. That program then reads the first sector from disc, and jumps to it.

durgadas311
November 2nd, 2017, 02:02 PM
I've seen a lot of different ways of mapping in a "bootstrap" ROM. One was to repeatedly map the ROM (which was ORG'ed in high memory) over the entire address space; the JMP at 0000H would then jump to the desired "high memory" address, after which the code would usually copy the ROM to RAM (employing "write under ROM" circuitry) and turn off the ROM mapping.

The Kaypro mapped (and ORG'ed) the ROM at 0000H and then disabled it after booting, using a CP/M BIOS that switched the ROM back on for doing I/O (after making special arrangements for the user data to be copied into high memory). The H89 supported CP/M as an afterthought, and its ROM (mapped/ORG'ed at 0000H) was never used again after CP/M was booted (but was used directly for HDOS - which does not use low memory). I seem to recall some that would lock out the ROM once you booted CP/M (requiring a hardware RESET).

I've never seen one that actually had a ROM in an I/O device and then transferred that ROM as an I/O command. That would be an interesting approach, and perhaps was needed to integrate into existing hardware. You still need to halt the CPU until the I/O completes, but perhaps it got away from requiring a diskette to be inserted. There are probably as many different ways of doing this as there are different machines.
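To make the "write under ROM" trick concrete, here is a minimal C sketch. The 4K ROM size and the rom_disable() latch are invented for illustration; every machine wired this differently.

#define ROM_SIZE 0x1000  /* assume a 4K boot ROM mapped (and ORG'ed) at 0000H */

static volatile unsigned char *const mem = (volatile unsigned char *)0x0000;

/* Invented hardware latch: on a real machine this might be an OUT to a
   port or a write to a magic address. */
static void rom_disable(void) { }

void bootstrap_copy(void)
{
    unsigned i;
    /* With "write under ROM" circuitry, a read at address i fetches the
       ROM while a write to the same address lands in the RAM behind it,
       so the copy can be done in place: */
    for (i = 0; i < ROM_SIZE; i++)
        mem[i] = mem[i];
    rom_disable();  /* from here on, 0000H reads back the RAM copy */
}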

Chuck(G)
November 2nd, 2017, 02:30 PM
The Amstrad PCW (http://www.chiark.greenend.org.uk/~jacobn/cpm/pcwboot.html) uses a variant of the force-feeding technique -- at startup, memory fetches come from the printer controller, which emits 779 bytes (instructions and data reads). The CPU executes these to generate a 256-byte program in RAM. That program then reads the first sector from disc, and jumps to it.

What's remarkable is that the Tarbell controller used a 32-byte bipolar PROM to do its work.

PhilipA
November 10th, 2017, 10:49 AM
I have a feeling my Lanier uses this method, being as it also has a tiny PROM to boot from and sits, waiting for the floppy drive, displaying nothing on the screen.

It's given me a direction to go in, at least. Appreciate this insight. Might be able to "force feed" from something that is pretending to be a floppy drive and get some signs of life...


--Phil

ziloo
November 11th, 2017, 12:18 AM
Hello Folks,

Has there been any attempt to provide cp/m with
a hierarchical file system and subdirectories? Is it
even possible?

ziloo :mrgreen:

JohnElliott
November 11th, 2017, 01:52 AM
In CP/M-86, yes; CP/M-86 Plus for the Apricot PC has native support for DOS-formatted discs.

durgadas311
November 11th, 2017, 04:11 AM
I remember Gary Kildall and Tom Rolander giving an ad-hoc lecture on the evils of hierarchical filesystems and why CP/M was better... Of course, in modern times we have robust and reliable hierarchical filesystems. Back then, it was a bit easier to believe that - at least for "weak" CPUs like the x86 family (or 8-bit'ers) - it made no sense to have hierarchical filesystems. Of course, it was a matter of opinion, but it's too bad DRI never spent the time to develop one... we might not have been subjected to the horrors of FAT filesystems for so long...

Chuck(G)
November 11th, 2017, 08:08 AM
Kildall was focused on floppy file systems initially, so CP/M does make a bit of sense for small volumes. When Shugart brought out their first Winchester hard drives (the SA-4000, a 14" drive), they sent us one to work with. What they supplied was about 40MB (I still have the drive). My reaction was "what the heck am I going to do with this much storage and an operating system that can't work with it?"

Hierarchical filesystems make sense on large drives, but even there, I've used "flat" file systems on mainframes that worked quite well. Hierarchy is simply one method of compartmentalizing files. There are other ways that are just as effective--and perhaps more secure.

MikeS
November 11th, 2017, 11:32 AM
Hierarchical filesystems make sense on large drives, but even there, I've used "flat" file systems on mainframes that worked quite well. Hierarchy is simply one method of compartmentalizing files. There are other ways that are just as effective--and perhaps more secure.

I'm very interested; any details anywhere?

m

krebizfan
November 11th, 2017, 11:53 AM
Not Chuck(G) but Wikipedia has a long article on the CMS file system which lasted a long time.

Smaller systems stuck with flat file systems even longer as shown by RT-11 and UCSD P-system though P-system had a clunky method of creating subvolumes.

Chuck(G)
November 11th, 2017, 12:00 PM
Bitsavers, in the mainframe area, is full of this sort of thing. I'll give you an example.

CDC SCOPE/NOS starts a user out in a session or job with only two files: INPUT and OUTPUT; the former is connected to the standard input device and the latter, to the standard output device. There are no other files (with perhaps the exception of PUNCH). If you want to work on an existing file, you access the permanent file system with the ATTACH command, which specifies the job-local name, the permanent file name, user ID, cycle (version--you can have up to 1,000 versions of the same file) and access passwords (read/write/control are different). Any other local file created in the session is discarded unless explicitly saved at creation or otherwise pre-disposed of by the user.

It's quite secure--simply logging on as a valid user gets you no file access.

An added feature is that a permanent file can be offline (i.e. kept on removable media not physically present on the system; e.g. a tape). When an ATTACH command is given, the system will look up where the file resides and instruct the operator to mount the medium, if not already mounted.

durgadas311
November 11th, 2017, 03:05 PM
I think the appearance of microprocessors really ushered in a new era of filesystem standardization. Of course, floppy disks were a part of that. Other than "IBM Tape" interchange (sequential storage), I think most disk storage was at most interchangeable within the same computer series. With 8" floppies came the chance to exchange diskettes between various computer models. One early example, perhaps even pre-dating CP/M, was FDOS-II. I recall being able to read FDOS-II files that were written on a 6800-based computer with an 8080-based computer. Of course, programs were not runnable. A CP/M 8" SD SS floppy (IBM 3740) could be read on any CP/M system that supported 8" floppies. Of course, CP/M started out as strictly 8080 (and compatible CPUs).

Perhaps there were other, earlier examples of standardization for disk storage? Were disk packs ever interchangeable between dissimilar computers? Seems like the microprocessors really began the idea of industry standardization of disk formats. Although, it may have been the IBM 3740 that drove that.

Chuck(G)
November 11th, 2017, 04:02 PM
In the 1960s and 70s, tape was pretty much it--and while it may have been 7- or 9-track, there were very few conventions.

For example, you may be given a 9-track tape from a system. Very often, the density isn't mentioned (800 NRZI, 1600 PE, 6250 GCR--and rarely, 3200 PE), so there's the first hurdle. But then, what does the data on the tape mean? You might think that you're transferring 8-bit bytes, but that's not a given. In my cases, not even the track-edness is mentioned, so Kyread is very useful.

For example, I'm handling tapes from a Univac 1100-series mainframe. That's 36-bit words, 6-bit characters, framed two words in 9 frames, packed 6/2 4/4 2/6... but the same data can appear on an identically-appearing tape and be 7-track 556 bpi. The character set for printable characters is probably Univac Fieldata, which isn't the same as IBM BCD or CDC Display code (all 6-bit codes). If the tape contains machine-dependent data, such as floating point words in binary, you have to decipher the floating point format.
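For the curious, unpacking those 6-bit characters is just sequential bit extraction; pulling consecutive 6-bit codes out of consecutive 8-bit frames naturally produces the 6/2, 4/4, 2/6 split described above. A small C sketch, with made-up frame data:

#include <stdint.h>
#include <stdio.h>

/* Extract the n-th 6-bit character from a big-endian frame stream. */
static unsigned get6(const uint8_t *frames, unsigned n)
{
    unsigned bit = n * 6;                    /* absolute bit offset */
    unsigned byte = bit / 8, shift = bit % 8;
    unsigned word = (frames[byte] << 8) | frames[byte + 1];
    return (word >> (10 - shift)) & 0x3F;    /* top 6 of the window */
}

int main(void)
{
    /* two 36-bit words = twelve 6-bit chars = 9 frames (plus padding) */
    uint8_t frames[10] = {0x04, 0x20, 0xC4, 0x14, 0x61,
                          0x85, 0x18, 0x71, 0xC7, 0x00};
    unsigned i;
    for (i = 0; i < 12; i++)
        printf("%02o ", get6(frames, i));    /* e.g. Fieldata codes, in octal */
    putchar('\n');
    return 0;
}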

Then there's record structure--fixed-length, delimited, control-word or something totally bizarre, such as CDC 00008, but only in the low-order 12 bits of a 60 bit word.

My first run-in with DEC interchange was a tape given to me by a DEC CE from a PDP-10 system that contained the source for the "Adventure" game. 36-bit words, so 6 frames on 7-track tape per word, but for some odd reason, packed five 7-bit ASCII characters per word, with the sign bit unused. It was FORTRAN, with a database file (travel tables), so it wasn't too difficult to get running on a CDC 6600.

You get the idea--like the dog walking on its hind legs, as Boswell's quote of Samuel Johnson put it, it's not done well, but you're surprised that it's done at all.

It keeps life interesting. :)

krebizfan
November 11th, 2017, 04:48 PM
Made me happy I did not encounter 9-track until much later when routines to import and export standard tape formats were available. I had enough problems massaging distant mainframe data into the necessary local structures without also having to figure out the physical tape format.

There was something a bit strange in retrospect having two non-IBM systems required to use an IBM format solely to enable data exchange. That applies to both big tape and floppy disks.

Chuck(G)
November 11th, 2017, 06:52 PM
A lot of non-IBM systems used HASP RJE. For example, we used it between a VAX 11/750 and a CDC Cyber 180. And SDLC/HDLC was all over the place.

Sort of like an Uzbek conversing with a Peruvian in English because English is everywhere.

ziloo
November 14th, 2017, 11:00 AM
As far as 8-bit cp/m systems are concerned, what is the sector size in:

a) single sided, double density
b) double sided, double density

Thank you

ziloo :mrgreen:

krebizfan
November 14th, 2017, 11:14 AM
As far as 8-bit cp/m systems are concerned, what is the sector size in:

a) single sided, double density
b) double sided, double density

Thank you

ziloo :mrgreen:

For which computer? The 8" single density typically had 128-byte sectors, with double density bringing that up to 256 bytes; 5.25" might typically have 256-byte, 512-byte, or 1024-byte sectors, though I am sure someone did 128-byte sectors as well. At least CP/M used multiples of 128 bytes, which keeps one from having to deal with some of the really weird formats. I can't think of a system that had different sector sizes for single-sided versus double-sided at the same density, but I would not be surprised if there was one.

The shareware version of 22Disk lists a large number of CP/M formats in detail.

Variety: Good for the diet; bad for data exchange.

Chuck(G)
November 14th, 2017, 12:09 PM
Yeah, including some oddballs, such as 128-byte double-density (MFM) sectors, up to 1024 bytes in general. Organized in a wide variety of ways.

Dwight Elvey
November 14th, 2017, 12:25 PM
As I recall, CP/M always used 128-byte buffers and used a de-blocking interface for other sector sizes. I also think most systems expected the first track to be 128-byte SD sectors; after that it was whatever the desired sector size was. The sector size is really just a function of the BIOS; CP/M itself expects 128-byte buffers, one at a time (as I recall).
Dwight

Chuck(G)
November 14th, 2017, 12:29 PM
Dwight, your first item is correct. CP/M's BDOS interface assumes 128-byte sectors, but your second is not--many, many formats used the same sector size throughout. Curiously (or not), PC/MS-DOS 1.x's FCB operations also assumed 128-byte sectors.

ziloo
November 14th, 2017, 12:49 PM
.... Organized in a wide variety of ways.


... Good for the diet; bad for data exchange.

For a single-sided single density 8" diskette everything is nice in the FCB table; but
when the diskette capacity is double density, then things begin to get complicated:

1- For single density: The record count (RC) in the FCB is the actual record count;
2- For double density: The RC is not the actual record count but is related to it by
some obscure algorithm.

Would you please explain...


Terminology: A record is equivalent to 128 bytes; while sector size may vary from
one system to another, the record size is always 128 bytes.


ziloo :mrgreen:

Chuck(G)
November 14th, 2017, 01:07 PM
It's more complicated than that. You can certainly have MFM floppies in which the one-byte record count is the actual count in the extent. But there's also an overflow field in the 13th byte (counting from 1) of the directory entry. So, although the RC byte records values of 0-127 128-byte blocks, it can overflow into the EX byte.

The whole thing is based on having 16 bytes per extent to enumerate the blocks belonging to a file. A block is a power-of-2 multiple of 128 bytes in length, the smallest being 1024 bytes (8 128-byte sectors). So, if each allocation byte in a directory entry can describe a block of 8 sectors, you have 16*8 or 128 sectors. This worked well for single-density, single-sided 8" floppies with a capacity of (77*26)/8, or about 250 blocks, but falls apart for larger disks (e.g. double-sided).

So, CP/M gives you two choices (or a combination thereof). You can extend the block numbering to 16 bit quantities, but only hold 8 block ordinals in a directory entry. Or you can make the allocation block size larger by a power of 2 and have larger blocks, but waste more space in a block. Or you can combine both, if for example, you're working with a hard disk. In any case, when there are more than 128 128-byte "logical" sectors, in a directory entry you record the overflow in the EX field.

So, for example, a double-sided 8" single-density floppy could use an allocation block size of 2048 and still keep 16 block ordinals in a directory entry.

The EX field is actually divided into two fields--the low-order one relates to the overflow from the RC field; the upper-order bits relate to the directory ordinal.

So, for example, a directory entry describing 256 blocks would use the low-order bit of the EX field as overflow and the bits to the left of it to number directory extents. The EXM value in the BIOS DPB for the drive is a mask that indicates where the division is.
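For reference, the 32-byte CP/M 2.2 directory entry Chuck is describing is usually documented with the layout below (a C sketch; the AL field holds either 16 one-byte or 8 two-byte block ordinals, depending on the disk's parameters):

#include <stdint.h>

struct cpm_dirent {
    uint8_t st;        /* user number, or 0xE5 for a deleted entry         */
    uint8_t name[8];   /* file name, space padded                          */
    uint8_t type[3];   /* file type; high bits carry the R/O and SYS flags */
    uint8_t ex;        /* extent: low bits overflow RC, high bits number   */
                       /* the directory extents (split chosen by EXM)      */
    uint8_t s1;        /* unused in CP/M 2.2                               */
    uint8_t s2;        /* extent high byte (overflow from EX)              */
    uint8_t rc;        /* 128-byte records used in the last logical extent */
    uint8_t al[16];    /* allocation block ordinals                        */
};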

ziloo
November 15th, 2017, 06:02 AM
..... The EXM value in the BIOS DPB for the drive is a mask that indicates where the division is.

Do you mean the two fields are not necessarily 4-bits each and they could vary....
Would you please give an example?


ziloo :mrgreen:

durgadas311
November 15th, 2017, 06:30 AM
Just to be concise, CP/M's file interface (the BDOS) uses a "logical sector" (a.k.a. record) size of 128 bytes. For CP/M 2.2 and older, the BIOS read/write interface also uses 128 bytes, and the BIOS is responsible for adapting to the physical sector size. CP/M 3 allows for the BDOS to know about the physical sector size and to handle the blocking/deblocking.
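As a sketch of that BIOS-side blocking/deblocking (assuming 512-byte physical sectors; read_physical() here is a stand-in for the real controller driver):

#include <stdint.h>

#define RECLEN 128                    /* CP/M logical record size     */
#define SECLEN 512                    /* assumed physical sector size */
#define RECS_PER_SEC (SECLEN / RECLEN)

/* Stand-in for the real controller driver. */
static int read_physical(unsigned track, unsigned sector, uint8_t *buf)
{
    unsigned i;
    (void)track; (void)sector;
    for (i = 0; i < SECLEN; i++)
        buf[i] = (uint8_t)i;          /* dummy data */
    return 0;
}

/* Read logical record `rec`: fetch the physical sector that holds it
   and copy out the right 128-byte slice. (A real BIOS would also cache
   the sector buffer and do read-modify-write on writes.) */
int read_record(unsigned track, unsigned rec, uint8_t out[RECLEN])
{
    static uint8_t secbuf[SECLEN];
    unsigned sector = rec / RECS_PER_SEC;
    unsigned offset = (rec % RECS_PER_SEC) * RECLEN;
    unsigned i;
    if (read_physical(track, sector, secbuf) != 0)
        return -1;
    for (i = 0; i < RECLEN; i++)
        out[i] = secbuf[offset + i];
    return 0;
}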

The EXM field of the CP/M DPB specifies the mask to be used on the directory entry extent byte, to separate the logical vs. physical extent numbering. Whenever CP/M completes 128 records, it always increments the directory entry EXT byte. However, whether that increment results in allocating a new directory entry depends on applying EXM. Basically, if the high-order portion (according to EXM) of the EXT byte changes, then a new directory entry is allocated.
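In code form, that test is just a mask comparison (a sketch):

/* A new directory entry is needed only when the bits of EXT above the
   EXM mask change. With EXM = 1, EX going 0 -> 1 stays in the same
   entry; 1 -> 2 allocates a new one. */
static int needs_new_dirent(unsigned ext_old, unsigned ext_new, unsigned exm)
{
    return (ext_old & ~exm) != (ext_new & ~exm);
}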

Chuck(G)
November 15th, 2017, 08:55 AM
Do you mean the two fields are not necessarily 4-bits each and they could vary....
Would you please give an example?

Sure. Consider a double-sided drive with 250 logical blocks of 2048 bytes (16 128-byte sectors) each. You can fit 16 of these ordinals in an extent, right? That makes 32KB per extent. For a larger file, the fully-filled extents on disk will have extent numbers of 01, 03... etc. The EX field uses the lowest-order bit for overflow from the RC byte, with the remaining bits denoting the extent number. When CP/M's BDOS searches for a file, it uses the DPB extent mask (which will be 1) in the comparison to ignore the state of the lowest-order bit.

Similarly, you could have a disk twice the size use 4096-byte blocks. The EX field would then use the lowest 2 bits as overflow, the extent mask for file searching would be 3, and the full extents on disk would be 03, 07...

As in FAT filesystems, large allocation blocks are wasteful for small files. CP/M offers an "out" on this in much the same way that a FAT16 does over FAT12--simply use 2 bytes to enumerate the extents in the directory entries. The penalty is that you double the number of directory slots a file requires--and remember that CP/M is not hierarchical in its directory structure--when you use up all the directory slots, that's the end of the story, regardless of how much free space might remain.
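The usual CP/M 2.2 relationship between block size, ordinal width, and the extent mask can be sketched like this (ignoring the illegal 1024-byte-block/two-byte-ordinal combination):

/* BLS = allocation block size in bytes; DSM = highest block number.
   With DSM >= 256, block ordinals take two bytes, an extent holds half
   as many blocks, and EXM halves accordingly. */
static int extent_mask(unsigned bls, unsigned dsm)
{
    if (dsm < 256)
        return bls / 1024 - 1;   /* 16 one-byte ordinals per entry */
    else
        return bls / 2048 - 1;   /*  8 two-byte ordinals per entry */
}
/* e.g. extent_mask(2048, 249) == 1 and extent_mask(4096, 249) == 3,
   matching the examples above. */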

It's kind of obvious that Kildall wrote CP/M with the 8" SSSD floppy in mind, then had to adapt his scheme for larger disks. In some respects, it resembles DEC's RT11 file system in that allocation information is derived from the directory and that no byte-level length information is retained--just blocks.

ziloo
November 15th, 2017, 10:08 AM
Thank you very much for your explanations. I will get back to this topic again as
I am still scratching my head! :confused1:

There are two more bytes following the EX slot, namely S1 and S2. Would you please
explain the purpose of these two highly secret (at its cp/m days) parameters?

ziloo :mrgreen:

Chuck(G)
November 15th, 2017, 10:20 AM
S2 is a byte used for overflow from the EX byte. Recall that EX is limited to the range 0-31 (5 bits). To the best of my knowledge, S1 had no function in CP/M 2.2.

krebizfan
November 15th, 2017, 10:28 AM
That gets into tricky version specific CP/M usage. Under version 2.2 and later, one of the S* bytes is used to increase the number of extents. Under version 3.1, the other S* byte indicates how much of the last block is used. You may find CP/M inspired OSes with alternate uses for the bytes.

http://www.seasip.info/Cpm/format22.html
http://www.seasip.info/Cpm/format31.html

will explain the fields in more detail. Note how date/time and passwords were added to the system. IIRC, one of the CP/M clones implemented date/time in a more sensible fashion by creating a special date/time extent. A bit wasteful of directory space, but it did not require carefully arranging all the directory entries. For such a simple OS, CP/M became quite complex to handle.

ziloo
November 15th, 2017, 11:00 AM
Thank you all for your brilliant comments!

Many of these topics have not been well explained in books on CP/M, so if at any time you are in the mood for more discussion of esoteric CP/M, please feel free to entertain us.....


ziloo :mrgreen:

krebizfan
November 15th, 2017, 02:44 PM
What do you consider esoteric? These sorts of strange details will happen with any file system in use for a time as new features get plugged into the few bytes unused in the original design. The other solution was joining Unix's File System of the Month Club.

Chuck(G)
November 15th, 2017, 04:12 PM
There was nothing wrong with these at their inception--but technology marches on, and stretches and breaks old methods. When you've designed a system for floppies, extending it to a 1GB hard disk seems almost ludicrous.

At least for portable (flash) devices, it looks like we'll be stuck with some variation of the FAT filesystem for a long time...

ziloo
November 15th, 2017, 05:34 PM
.... The other solution was joining Unix's File System of the Month Club.

Was it really a whole new concept each time or a bug fix?


ziloo :mrgreen:

Dwight Elvey
November 15th, 2017, 08:26 PM
Was it really a whole new concept each time or a bug fix?


ziloo :mrgreen:

It was more blindness to what was possible and how soon it might happen.
Moore's law could be applied to a lot of things.
Dwight

ziloo
November 17th, 2017, 05:01 AM
In the following statement:

"With the introduction of password protection and date/time stamping of files in CP/M 3,
changes had to be made to the CP/M 2.2 format disc directories. To retain compatibility,
a separate program (INITDIR.COM by Amstrad for Amstrad?) was supplied which re-formats
the directory of a CP/M 2.2 format disc so that every fourth directory entry holds
the passwords and date/time stamps of the previous three entries."

Did this method become popular, or was it a fad...:hammers:


ziloo :mrgreen:

durgadas311
November 17th, 2017, 06:36 AM
INITDIR is standard in CP/M 3.

Chuck(G)
November 17th, 2017, 07:44 AM
CP/M 2.2 has been the most popular, with CP/M 3 occupying a very small part of the market. While file dating and passwords (weren't those borrowed from MP/M?) are nice to have, few applications made use of the features--and the number of available directory entries was reduced by 25%.

krebizfan
November 17th, 2017, 08:38 AM
Amstrad was a major user of CP/M 3 and of enhanced versions of CP/M-86 with the date functions, so if ziloo is in a region where encountering them is common, he will have to keep that in mind.

Then there is always the fun chance of encountering one of the alternate third-party add-on date systems for CP/M 2.2. Nothing like a directory that shows up as corrupt because I don't have the software that made the disk.

Chuck(G)
November 17th, 2017, 09:01 AM
But if you're writing "portable" applications, stick with 2.2 as your Bible--and it's not a bad idea to stay away from Z80-specific instructions. There were plenty of 8080/85 CP/M systems out there.

durgadas311
November 17th, 2017, 09:58 AM
CP/M 3 and MP/M-II share a great deal of code, and also share much/all of the directory/file extensions. The BDOS source code has if-defs for MP/M, although I don't know if that was how the MP/M-II BDOS was actually created. Most of the MP/M-specific features were added through the XDOS component.

Chuck(G)
November 17th, 2017, 10:13 AM
I ported MP/M 1.0 before CP/M 3 (or CP/M plus) was available. When I saw the CP/M 3.0 literature, the similarity immediately struck me.

My experience with MP/M 1.0 was very mixed--stability was a real issue. MP/M 2.0 was much better, but mostly useful only if you had applications that were designed for it. I still have my MP/M 2.0 OEM kit, including DRI's sales literature for the soon-to-be CP/M-86.

My feeling was that CP/M 3 came out of MP/M because of the very small market for the latter. Getting a BIOS (BIOS+XIOS) going can be a bit challenging, particularly if you don't have the regular method of bankswitching (16KB banks). Since our system was entirely interrupt-driven (even to the screen refresh), MP/M was a pretty good fit.

durgadas311
November 17th, 2017, 10:46 AM
A lot of the DRI advances came at a point when DOS was taking over - regardless of which was the better technology. Another example was choosing the 8086 over the 68K (or NS16032/32032, or...). DRI was working on graphical support, multi-tasking, multi-user, networking, etc. But all of this got passed over because it wasn't DOS. One major use for MP/M was for CP/NET servers - so in a multi-tasking role as opposed to multi-user. The lesson is, it's not who gets there first or has the best technology, it's who your friends are... We all know the stories of why IBM went with MS-DOS instead of CP/M... but I'm not convinced all of it is true, based on my interactions with DRI.

Chuck(G)
November 17th, 2017, 11:55 AM
CP/M, at least until the time of DOS cloning, could not seem to wean itself from the single flat-mode directory structure. I believe that this was their downfall. Once you admit hard disks and multiple users, the problems with the original scheme become obvious. Around 1979, we were looking at hard drives and Shugart sampled us the SA4000--but in its 40MB form, which was more storage than CP/M 2.2 could handle at the time. If this represented just the start, we were going to have serious problems. I recall writing a draft of a proposed disk label at the time, wondering if 32-bit sector numbers would be adequate in a few years.

Atari ST GEM, for example, was provided on a DOS-like OS, even though CP/M-68K was proposed originally.

ziloo
November 18th, 2017, 02:54 AM
Would you please explain the following key parameters:

1- EX = Extent counter, low byte - takes values from 0-31
2- S2 = Extent counter, high byte

why low byte and high byte?

An extent is the portion of a file controlled by one directory entry.
If a file takes up more blocks than can be listed in one directory entry,
it is given multiple entries, distinguished by their EX and S2 bytes. The
formula is: Entry number = ((32*S2)+EX) / (Exm+1) where Exm is the
extent mask value from the Disc Parameter Block.

3-RC - Number of records (1 record=128 bytes) used in this extent, low byte.
The total number of records used in this extent is

(EX & Exm) * 128 + RC

If RC is 80h, this extent is full and there may be another one on the disc.
File lengths are only saved to the nearest 128 bytes.

ziloo :mrgreen:

krebizfan
November 18th, 2017, 07:26 AM
The low byte only covers 32 extents, which means the largest possible file would be 512 kB - plenty for a 250 kB floppy disk. CP/M 2.2 increases the maximum file size to 8 MB through the use of 512 extents; CP/M 3 gives a further increase to a maximum file size of 32 MB with 2048 extents. The high byte is multiplied by 32 to get to those larger extent numbers.

Just to generate a lot of confusion, CP/M refers to logical extents (file size in 16 kB increments), which these calculations apply to, and physical extents, which are the entries in the directories. Each directory entry gets the value of what its matching logical extent number would have been. The dead hand of compatibility forces these strange structures in order to keep working with earlier code. Partially this is necessary because disks had a very limited number of directory entries; 512 extents may need as few as 32 directory entries using this method. 512 directory entries would need 16 kB to process, which wouldn't leave much memory for anything else on a 64 kB Z80 system.
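A worked example of the arithmetic, using the formula quoted earlier (entry = ((32*S2)+EX)/(EXM+1)); the S2/EX/EXM values here are illustrative only:

#include <stdio.h>

int main(void)
{
    unsigned s2 = 1, ex = 5, exm = 1;
    unsigned logical_extent = 32 * s2 + ex;           /* = 37 */
    unsigned dir_entry = logical_extent / (exm + 1);  /* = 18 */
    printf("logical extent %u -> directory entry %u\n",
           logical_extent, dir_entry);

    /* file-size ceilings: each logical extent covers 16 kB */
    printf("32 extents   * 16 kB = %5u kB\n",   32 * 16);  /* 512 kB */
    printf("512 extents  * 16 kB = %5u kB\n",  512 * 16);  /*  8 MB  */
    printf("2048 extents * 16 kB = %5u kB\n", 2048 * 16);  /* 32 MB  */
    return 0;
}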

A better explanation and detailed examples of a bunch of disk formats can be seen at http://www.sharpmz.org/succpminfo06.htm Due to a typesetting error, certain values do not show the superscript indicating that it is a power of 2.

ziloo
November 18th, 2017, 08:07 AM
Thank you for the explanation; now let me retell the directory story;

1- each time a record is added to the file-> RC is incremented by 1
RC is one byte;

what does it mean...low byte ?

ziloo :mrgreen:

Chuck(G)
November 18th, 2017, 08:18 AM
A file's total size is determined by three bytes containing bit fields in the directory, going from least significant to most significant:

RC (0-127)
EX (0-31)
S2 (0-127?)

Concatenating the three in the last directory entry will give you the apparent total size in 128-byte records.

Why "apparent"? Because of the notion of "sparse" files, which describe a file larger than actually allocated (the notion doesn't exist in the FAT system). CP/M can create files with directory entries where (a) one or more allocation block specifiers are zero and (b) entire extents are not recorded in the directory. So, a file might appear as if it's 65KB long, but actually occupy only 4KB of allocated space.

This is actually useful in some cases, where some sort of hash code is used to compute a position in a randomly-accessed file. There's no need to actually allocate blocks that are never accessed. I believe that Windows NTFS can also do this.
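Chuck's concatenation can be written out directly (a sketch; RC = 80h, meaning "extent full", counts as 128 records here):

/* Apparent size in 128-byte records, from the last directory entry. */
static unsigned long size_in_records(unsigned s2, unsigned ex, unsigned rc)
{
    return (unsigned long)s2 * 32 * 128    /* each S2 step = 32 extents */
         + (unsigned long)ex * 128         /* each extent = 128 records */
         + rc;
}
/* e.g. size_in_records(0, 1, 0x10) == 144 records == 18,432 bytes */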

ziloo
November 18th, 2017, 08:20 AM
Now taking my previous post and the unique explanation given by
chuck (not found in any other reference):

"The EX field is actually divided into two fields--the low-order one relates to
the overflow from the RC field; the upper-order bits relate to
the directory ordinal (?) .

So, for example, a directory entry describing 256 blocks would use
the low-order bit of the EX field as overflow and the bits to the left of it
to number directory extents.

The EXM value in the BIOS DPB for the drive is a mask
that indicates where the division is (?)."

Now back to my directory story:

1- each time a record is added to the file-> RC is incremented by 1
until RC reaches 127 (01111111); then RC would overflow;

Where would RC overflow into?

krebizfan
November 18th, 2017, 09:13 AM
If everything works the way I understand it, once RC reaches 128 records, indicating the logical extent is full, RC would be set to 0 and EX would be increased by 1. If EX hits 32, it gets reset to 0 and S2 increments by 1. Fortunately, it is rare for a program to update a file in just 128-byte chunks, so usually the system simply has to figure out the final RC, EX, and S2 values.
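That carry chain, written out as a sketch (a real BDOS would also be allocating data blocks and directory entries along the way):

struct fcb_counts { unsigned rc, ex, s2; };

/* Append one 128-byte record, carrying RC -> EX -> S2. */
static void add_record(struct fcb_counts *f)
{
    if (++f->rc >= 128) {      /* logical extent now full */
        f->rc = 0;
        if (++f->ex >= 32) {   /* EX wraps into S2 */
            f->ex = 0;
            f->s2++;
        }
    }
}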

Matching this to the need to add additional allocation entries and directory entries is something I would have to look into. Especially what looks like a corner case with 256 kB physical extents, where each allocation unit uses 32 kB, or 2 logical extents. How CP/M would manage to assign a second logical extent to a single allocation unit as file size increases is something I am unsure about.

Low byte and high byte refer to the EX/S2 combo; the high byte holds the more significant part of the extent number. If one referred to the two bytes in hex, 0A1C would indicate at least 320 extents in the S2 byte (value of 10 * 32) plus another 28 extents according to the EX byte, for a total of 348 extents.