
View Full Version : CP/M 3 and memory bank questions - what good really was it?



alank2
February 8th, 2018, 05:23 PM
So I was reading in the CP/M 3 manual about banked vs. non banked and if I'm reading it right, it looks like the maximum TPA was still around 60K or so. How much of an improvement was this really over 2.2? Where are the benefits?

krebizfan
February 8th, 2018, 05:42 PM
The common CP/M 2.2 TPA was 56K, and another 3K would be needed for the Command Processor, so about 7K more if one does not plan on restarting the system after exiting a program. Additional benefits: the system bank had more room for a BIOS supporting more hardware, plus larger disk buffers, which should make the system faster. Date stamps were possible and passwords were available.

Not a huge improvement and some of the improvements tended to be ignored. Date stamps were sometimes omitted to keep disks interchangeable with CP/M 2.2 systems.

lowen
February 8th, 2018, 05:43 PM
The CP/M 3 on the REH CPU280 manages 62K TPA.

durgadas311
February 8th, 2018, 06:15 PM
There were a lot of features that only the banked version of CP/M 3 provided. And, in my experience, 56K TPA was about the best you could get from CP/M 2.2. In systems with more hardware capabilities - or hardware that required more software support - you got quite a bit less than 56K. With enough extra memory to implement directory hash buffers, you could really speed up operation, too. Command line history and editing was another feature you only got with banked systems. Also, you could use a CP/M 3 disk (with timestamps) on CP/M 2.2 - you just couldn't view the timestamps and CP/M 2.2 did not update them. The Systems Guide should give a decent run-down of the features only available in banked systems.

I did a lot of development on CP/M systems back in the day, and CP/M 3 was a major improvement for me. It was difficult to go back and work in CP/M 2.2 after having CP/M 3.

Alphasite
February 8th, 2018, 07:03 PM
My CP/M Plus for the Model 4 has a 61K TPA.

alank2
February 8th, 2018, 07:49 PM
So is there a magic amount of memory that is useful for CP/M 3 and banking? The system guide shows banks 0, 1, and 2. Is there ever a bank 3, 4, etc.? Would they be used? What is ideal? 128K 192K 256K?

krebizfan
February 8th, 2018, 08:51 PM
In theory you can have up to 16 banks, using close to a megabyte in total if the shared memory region is small, though I'm not sure any CP/M 3 system had that much. Multitasking variants can take advantage of more, as each extra program needs a bank as well.

128K is a good starting point. Some memory won't be used, but that will provide a big bank for the BDOS and a big bank for the program. I have forgotten too much to advise on making use of a third or fourth bank.
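As a rough sanity check on that "close to a megabyte" figure, here is the arithmetic in a quick Python sketch (the 16-bank limit and the bank/common split come from the post above; the function name is just for illustration):

```python
def total_banked_ram(banks: int = 16, common_kb: int = 4) -> int:
    """Total addressable RAM in KB: one shared common region plus
    `banks` switched regions of (64 - common) KB each.  With 16 banks
    and a small 4K common area this approaches a megabyte."""
    return common_kb + banks * (64 - common_kb)
```

With the defaults this gives 964K; a more typical 128K machine with a 32K common area and two banks works out to 96K of usable RAM.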

durgadas311
February 8th, 2018, 10:26 PM
One key feature if you want to take advantage of more than 3 banks is direct bank-to-bank copy (either via DMA or an MMU that supports using a different bank for read vs. write). Without that feature (i.e. implementing XMOVE BIOS call) you can't really use more than 3 banks. The only real use for extra memory is for buffers (of one type or other). MP/M could take advantage of more, but MP/M is not well-suited for user features - it is more of a server environment. In other words, the individual user experience is somewhat diminished on MP/M, in favor of being able to run multiple programs/users at once.

JohnElliott
February 8th, 2018, 11:19 PM
In theory you can have up to 16 banks, using close to a megabyte in total if the shared memory region is small, though I'm not sure any CP/M 3 system had that much.

There were Amstrad PCWs with up to 2MB -- they used banks 3 and higher for the RAMdisc.

alank2
February 9th, 2018, 05:16 AM
The system guide talks about the common area being 4K - 32K. It seems some chose 16K and then have a 48K bank to swap. What is the benefit of choosing a large or small common area? TPA gets to use what is available in it anyway. I would think hardware-wise it would be easier to swap the lower 32K and leave the upper 32K common - are there any downsides to that, or what would make the 48K/16K or possibly even 60K/4K split better?

lowen
February 9th, 2018, 05:29 AM
In theory you can have up to 16 banks, using close to a megabyte in total if the shared memory region is small, though I'm not sure any CP/M 3 system had that much. Multitasking variants can take advantage of more, as each extra program needs a bank as well.


The CPU280 can have up to 4MB of RAM; my own CPU280 sitting on my desk is loaded with 4MB. The excess RAM is used as a RAMdisk, and works well in that role. I would have to go through the source again to remind myself how much RAM is banked and directly usable by CP/M, though.

krebizfan
February 9th, 2018, 08:27 AM
The system guide talks about the common area being 4K - 32K. It seems some chose 16K and then have a 48K bank to swap. What is the benefit of choosing a large or small common area? TPA gets to use what is available in it anyway. I would think hardware-wise it would be easier to swap the lower 32K and leave the upper 32K common - are there any downsides to that, or what would make the 48K/16K or possibly even 60K/4K split better?

The bank holding the BDOS needs to be large enough to store the BDOS itself plus items like directory hash tables; that could total more than 32K depending on your system. The shared memory needs to be large enough to hold the OS bank-switching routines and shared buffers. I suggest starting with 48K banks and adjusting if problems occur. Getting the best possible setup may take several tries.

In MP/M, making common memory as small as possible was optimal to give each program a larger bank, but that wasted memory, as some banks weren't completely used. CP/M 3 is effectively a single-tasking MP/M, and using common memory for the single program's TPA does not cause problems.

Note: I am making these comments based on the idea that the system in use can easily have 128K or more memory.

alank2
February 9th, 2018, 09:17 AM
The system guide shows a third bank being used, but it looks like it just has the same contents as the bank 0 example. Is a third bank really needed, or will two banks give just as large a TPA?

Plasmo
February 9th, 2018, 09:43 AM
Are there many programs that need 60K TPA and can't work with 50K TPA? Put differently, has the 60K maximum TPA materially limited the growth of CP/M? Would a 1 meg or even 10 meg TPA be the game changer? I'm pretty new to CP/M, so I don't know the challenges and compromises facing the software developers of the time.

krebizfan
February 9th, 2018, 10:10 AM
Are there many programs that need 60K TPA and can't work with 50K TPA? Put differently, has the 60K maximum TPA materially limited the growth of CP/M? Would a 1 meg or even 10 meg TPA be the game changer? I'm pretty new to CP/M, so I don't know the challenges and compromises facing the software developers of the time.

Need 60K TPA? Few. Most programs ran fine with the default 48K TPA provided by MP/M. However, a larger TPA helps a lot with programs designed for CP/M 2.2. WordStar installed in minimal memory would need to hit the disk every time the user changed pages; an extra 10K means another 5 pages of the document can be stored, thereby reducing disk access. An even larger TPA would have been a game changer, but difficult with a CPU only able to address 64K. An 8080 program converted to 8086 for use with MS-DOS or CP/M-86 would effectively be running with a 63K TPA; some memory would be lost to the inefficiency of the conversion. With minimal redesign, an 8080 program converted to 8086 could have the code and data segments split apart, so the program has both much larger data to work with and can keep more code overlays loaded at any time. Faster programs that do more; what's not to like?
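The WordStar figure above implies roughly 2K of text per in-memory page. A toy Python calculation of that arithmetic (the names are hypothetical and the ~2K/page value is inferred from the 10K -> 5 pages figure, not from WordStar documentation):

```python
# ~2K per page, inferred from the post's "extra 10K means another 5 pages"
PAGE_BYTES = 10 * 1024 // 5

def extra_pages_in_ram(extra_tpa_bytes: int) -> int:
    """How many more document pages an editor like WordStar could keep
    resident, given some extra TPA (toy arithmetic, not real behavior)."""
    return extra_tpa_bytes // PAGE_BYTES
```

So the jump from a 48K to a 56K TPA would keep roughly four more pages in memory before the editor has to touch the disk.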

durgadas311
February 9th, 2018, 12:15 PM
The system guide shows a third bank being used, but it looks like it just has the same contents as the bank 0 example. Is a third bank really needed, or will two banks give just as large a TPA?

Extra banks don't affect TPA size. One immediate use of a third bank is for directory hash buffers (the BDOS does not require XMOVE for these, nor that they be in bank 0/common). Directory hashing really speeds up access to files. Depending on the number of disks supported, and their size, you might overflow one bank with directory hash buffers. If you have XMOVE, then you can move disk data buffers out of common memory, which does give you more TPA. But there's really no functional improvement going from 2 banks to more; it just enables optimizations.
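To illustrate why directory hashing speeds up file access: instead of scanning every directory entry linearly, the BDOS can hash the filename and jump near the right entry. A toy illustration in Python (this is NOT DRI's actual hash algorithm, just the general idea):

```python
def dir_hash(name: str, buckets: int = 256) -> int:
    """Hash an 8.3 filename into a bucket index so a lookup can start
    near the matching directory entry rather than scanning the whole
    directory.  Toy algorithm for illustration only."""
    h = 0
    # An FCB stores the name as NAME(8) + TYPE(3), space-padded, upper-case
    for ch in name.upper().replace(".", "").ljust(11):
        h = (h * 31 + ord(ch)) & 0xFF
    return h % buckets
```

The buffers holding these hash tables are what can fill a whole extra bank on a system with several large drives.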

alank2
February 9th, 2018, 12:19 PM
My main goal would be to maximize TPA size. If I can do that with one 128K SRAM and two 48K banks with a 16K common then that sounds good.

durgadas311
February 9th, 2018, 12:27 PM
Are there many programs that need 60K TPA and can't work with 50K TPA? Put differently, has the 60K maximum TPA materially limited the growth of CP/M? Would a 1 meg or even 10 meg TPA be the game changer? I'm pretty new to CP/M, so I don't know the challenges and compromises facing the software developers of the time.

CP/M programs are limited to a 64K address space, so the best you can do is perform tricks to get most of the BDOS/BIOS out of common memory. The effect of a large TPA shows up in programs like editors, which can edit larger files (remember, both the resident BDOS/BIOS and the program code itself take up memory, so a 56K TPA does not mean you can edit a 56K file), and in assemblers/compilers that often run memory-constrained and need to push less out to temp files, making them faster. Business apps may also be able to take advantage of more memory. But your upper limit will be 64K. Single-user OSes like CP/M 3 still only give you a single TPA that is < 64K. MP/M with more memory can run more programs, but each is limited by the common memory boundary - which itself depends on the size of the resident BDOS+XDOS+BIOS+XIOS. It is difficult to get as large a TPA on MP/M as on CP/M 3, but you can run more programs at the same time - which is a somewhat orthogonal goal.

durgadas311
February 9th, 2018, 12:34 PM
The system guide talks about the common area being 4K - 32K. It seems some chose 16K and then have a 48K bank to swap. What is the benefit of choosing a large or small common area? TPA gets to use what is available in it anyway. I would think hardware-wise it would be easier to swap the lower 32K and leave the upper 32K common - are there any downsides to that, or what would make the 48K/16K or possibly even 60K/4K split better?

Yes, for CP/M 3 the common memory boundary does not affect TPA size. But it does affect bank size, which affects how much memory can be used for buffers, etc. I.e., with a 48K bank size you can only add 48K worth of hash buffers to bank 2, or you have less space in bank 0 for additional buffers (and possibly CCP or other program copies).

For MP/M, the bank size limits the max TPA for a program. In that case, 48K vs. 56K (or 60K) is a pretty big deal. It's been a while since I ran MP/M or even dug into the manuals, but I believe a program cannot use the space between the common memory boundary and the start of the BDOS.

alank2
February 9th, 2018, 02:05 PM
Ok, so I have thought through how I could do the banking with an AND gate and a NAND gate. The AND gate can be built from two NAND gates, leaving three NAND gates in total.

I would have a signal called BANK: low=0=bank 0, high=1=bank 1. It would be fed into the AND gate as input 1. The output of the AND gate feeds A16 on the SRAM. The AND gate's input 2 comes from the output of the NAND gate fed with A14 and A15. This way bank 1 can be selected only when A14 or A15 is low; if both A14 and A15 are high, it is the top common 16K area and A16 remains low.
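That gate logic can be sanity-checked with a quick truth-table simulation in Python (a sketch of the circuit as described above, not tested hardware; the function name is made up):

```python
def a16(bank: int, addr: int) -> int:
    """Simulate the proposed bank-select logic for a 128K SRAM:
    A16 = BANK AND NAND(A14, A15).  The top 16K (A14=A15=1) forces
    A16 low, so it is common to both banks."""
    a14 = (addr >> 14) & 1
    a15 = (addr >> 15) & 1
    nand_out = 0 if (a14 and a15) else 1   # NAND of A14, A15
    return bank & nand_out                 # the AND gate (two NANDs in hardware)
```

Walking the address space confirms the intent: with BANK high, addresses below 0xC000 map to the second 64K half of the SRAM, while 0xC000-0xFFFF (the common 16K) always stays in the low half.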

SECOND question: I once read about a Z80 control panel somehow feeding instructions directly into the processor by manipulating the control signals, so you could feed a jump instruction, etc., without it having to be in SRAM. Does anyone know anything about this? I can't remember where I read it.

alank2
February 12th, 2018, 04:33 AM
One key feature if you want to take advantage of more than 3 banks is direct bank-to-bank copy (either via DMA or an MMU that supports using a different bank for read vs. write). Without that feature (i.e. implementing XMOVE BIOS call) you can't really use more than 3 banks.

Can you explain this in more detail? Why would you need to copy data from bank to bank? Would CP/M request this or run more smoothly with it?

Do most disk transfers on CP/M use DMA where the disk controller gets/puts the contents directly to memory, or does the Z80 send/receive the data to/from the disk controller and it gets/puts the data in memory?

durgadas311
February 12th, 2018, 04:50 AM
DMA hardware was not common on CP/M-era machines, at least not to begin with. Most disk I/O BIOS routines would use input/output instructions. Making matters worse, most disk formats used sector sizes larger than 128 bytes (the CP/M record size), so there had to be a sector-sized buffer used to "block and deblock" data between the disk and CP/M. CP/M 3 made great strides to improve that by doing the deblocking in the BDOS, and establishing a scheme for LRU (sector-sized) buffers. But, for performance reasons, they chose not to do extra copying between banks via common memory. So, without XMOVE, CP/M 3 requires disk data buffers to be in common memory - where they (the records) can be directly copied into the user buffer. Disk directory buffers, used internally by the BDOS, could be in bank 0 (or common memory).
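The block/deblock idea can be sketched in a few lines of Python (the 128-byte record size is CP/M's; the 512-byte sector and the class/function names are illustrative, and the single-sector cache stands in for CP/M 3's LRU buffer list):

```python
RECORD = 128   # CP/M logical record size
SECTOR = 512   # a typical physical sector size (illustrative)

class Deblocker:
    """Minimal sketch of BDOS-style deblocking: the program asks for
    128-byte records, but the disk only transfers whole sectors, so a
    sector-sized buffer sits between them."""
    def __init__(self, read_sector):
        self.read_sector = read_sector   # BIOS-level whole-sector read
        self.cached_no = None
        self.buf = b""

    def read_record(self, rec_no: int) -> bytes:
        sec_no, offset = divmod(rec_no * RECORD, SECTOR)
        if sec_no != self.cached_no:     # cache miss: hit the disk
            self.buf = self.read_sector(sec_no)
            self.cached_no = sec_no
        return self.buf[offset:offset + RECORD]
```

Four consecutive record reads cost only one physical sector read, which is why moving this buffering into the banked BDOS (with multiple LRU buffers) was such a win.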

Direct bank-to-bank copy, either via DMA hardware or via an MMU that supports it, allows data to be copied into the user buffer from any bank/location, without an intermediate copy operation.

Some other places where this feature becomes very handy is for stowing a copy of the CCP (you can direct-copy the CCP into bank 1 on warm boot) and for implementing RAMdisk - where you can direct copy from the RAMdisk to user buffer.

It is possible to "fake it" to the BDOS by implementing an XMOVE that does an intermediate copy through common memory, but performance is likely to be abysmal.
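What that "fake" XMOVE amounts to, sketched in Python with banks modeled as bytearrays (the names are hypothetical; a real implementation would be Z80 code toggling the bank-select hardware between the two copies):

```python
COMMON = bytearray(128)   # bounce buffer living in common memory

def xmove_fake(banks, src_bank, src, dst_bank, dst, n=128):
    """'Fake' bank-to-bank move: without DMA or an MMU that can read
    one bank while writing another, each record is staged through a
    bounce buffer in common memory -- two copies instead of one,
    hence the poor performance."""
    COMMON[:n] = banks[src_bank][src:src + n]    # copy 1: source bank -> common
    banks[dst_bank][dst:dst + n] = COMMON[:n]    # copy 2: common -> destination bank
```

Hardware with true bank-to-bank copy does the whole transfer in a single pass, with no bounce buffer eating into common memory.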

lowen
February 12th, 2018, 04:53 AM
...
Do most disk transfers on CP/M use DMA where the disk controller gets/puts the contents directly to memory, or does the Z80 send/receive the data to/from the disk controller and it gets/puts the data in memory?
It depends upon the hardware and the particular BIOS. CP/M for the TRS-80 Model 4, for instance, has to do programmed-I/O because the Model 4 has no DMA controller (floppy transfers on the 4 typically use NMI, but with tight coding can use straight programmed-I/O; I haven't studied the Montezuma Micro BIOS closely enough to see which it actually does, since the Z80 NMI and CP/M data structures interfere a bit).

CP/M for the TRS-80 Model II, on the other hand, can use DMA transfers since the Model II has a Z80DMA and the floppy controller is set up to use it.

The CP/M 3 for the CPU280 uses DMA for floppy access; for that one I've read the source and confirmed it.