View Full Version : What Z80 SBC's are out there for a CPM 2.2 setup



alank2
February 8th, 2018, 01:04 PM
Hi Guys,

I want to mess around with CP/M a bit and not having any real hardware is an issue. I keep looking at some of the machines on eBay like the Kaypro II and such, but honestly I've got a vt420 and I'm thinking maybe I should just put together a Z80 SBC and use my terminal with it. Probably use CF or SD for the storage. What boards are out there for this? I've messed around with Grant's FPGA CP/M and it was pretty cool, but I'd like a real Z80 on it. How fast can you run a Z80 on a SBC?

Thanks,

Alan

gslick
February 8th, 2018, 01:28 PM
The P112 was one of the popular ones using a Z80182 at 16MHz. I was tempted to get one of those when they were available.

https://661.org/p112/

glitch
February 8th, 2018, 01:42 PM
There are a bunch of CP/M targeted Z80 single-boards out there. Probably my favorite of the vintage boards is the Ampro LittleBoard, designed to bolt to the bottom of a 5.25" disk drive, much as the Ferguson Big Board bolted to an 8" floppy drive.

Sergey came up with a similar design that attaches to a 3.5" drive, the Zeta SBC series:

http://www.malinov.com/Home/sergeys-projects/zeta-sbc

I've got one of his rev 2 boards, but haven't put it together yet!

There are of course the N8VEM ECB bus boards, I have no idea how available those are nowadays.

Plasmo
February 8th, 2018, 02:03 PM
There is a Z80 SBC on the retrobrew site that runs CP/M 2.2, 3.1, and others. The mass storage can be CF or SD card:
https://www.retrobrewcomputers.org/doku.php?id=builderpages:rhkoolstar:sbc-2g-512
The Z80 & SIO run at 6MHz. The board is 80mm x 100mm, so it should be quite cheap.

PS, I'm working on a cheap Z280 SBC. Just got CP/M 2.2 running on it, but it's still a long way from being real.
https://www.retrobrewcomputers.org/forum/index.php?t=msg&th=255&start=0&

Chuck(G)
February 8th, 2018, 02:06 PM
Are some of the eZ80 evaluation boards running CP/M?

lowen
February 8th, 2018, 04:12 PM
Re: P112. If you change out the 16 MHz Z80182 for a 33 MHz chip and upgrade the RAM speed, you can get 36 MHz out of the little rig, and at least one has been run at 49 MHz. I bought two kits while they were available this last time.

Re: eZ80 and CP/M. The eZ80F91 development boards do have a CP/M available, but for some reason this hasn't really taken off. Which is a shame, really, since the eZ80F91 is still available new, and at 50 MHz, plus a nearly threefold efficiency improvement from the pipelining, an eZ80F91 should outrun a 150 MHz Z80. There is a port of MP/M to an eZ80F91 board; it needs a fairly substantial add-on board, but it will support six users.

Of course, I'm more than a little biased towards the black sheep of the Z80 family, the Z280, and I have run two ten-board batches of the REH CPU280, which runs CP/M 3.

Re: how fast can you run a straight Z80? 20MHz is the official max, using the Z84C0020PEC.

Re: Zeta SBC. There are boards on ebay. Also check out http://retrobrewcomputers.org

alank2
February 8th, 2018, 05:07 PM
Thanks - lots of interesting options here. Your project looks great, plasmo. I saw a project where someone was using a flip-flop and an AVR to emulate some of the peripherals (serial ports, microSD, etc.). I've got a lot of experience with AVRs and I am intrigued by the idea.

Chuck(G)
February 8th, 2018, 05:13 PM
You could take a medium-scale ARM board (e.g. STM32F4) and emulate the whole thing, CPU, peripherals and all.

alank2
February 8th, 2018, 05:21 PM
You could take a medium-scale ARM board (e.g. STM32F4) and emulate the whole thing, CPU, peripherals and all.

I've thought about that too, but for some reason part of me wants a real Z80 type CPU on there!

Chuck(G)
February 8th, 2018, 06:00 PM
Another alternative is to stick it in something with an x80-type CPU; for example, a credit-card terminal or a FAX machine... :) The more outre, the better.

lowen
February 9th, 2018, 06:17 AM
Well, I seem to remember from a few years back that some DVD burners have a Z80 or eZ80 CPU in them. I can't quickly find one as an example, as one of the forums that discussed such things, forum.rpc1.org, is defunct. But if you can find one with a Z80, just add a serial console and play! It would definitely be unique to say that you ported CP/M to a DVD burner.

JonB
February 9th, 2018, 06:31 AM
How about one implemented in FPGA?

http://searle.hostei.com/grant/Multicomp/ and http://searle.hostei.com/grant/Multicomp/cpm/fpgaCPM.html

I built one and I doubt it cost more than 30. It runs CP/M 2.2 at 25 MHz and is very quick (compared to a real Z80 board).

lowen
February 9th, 2018, 11:57 AM
Grant's multicomp and Will Sowerbutts' SocZ80 are both good FPGA implementations, but I think the OP has already tried that and has the appetite whetted for real Z80 iron. At least that was the impression I got.

MarsMan2020
February 9th, 2018, 07:42 PM
Hi Guys,

I want to mess around with CP/M a bit and not having any real hardware is an issue. I keep looking at some of the machines on eBay like the Kaypro II and such, but honestly I've got a vt420 and I'm thinking maybe I should just put together a Z80 SBC and use my terminal with it. Probably use CF or SD for the storage. What boards are out there for this? I've messed around with Grant's FPGA CP/M and it was pretty cool, but I'd like a real Z80 on it. How fast can you run a Z80 on a SBC?

Thanks,

Alan

On the 'stand alone' front - Sergey's Zeta V2 (https://www.retrobrewcomputers.org/doku.php?id=boards:sbc:zetav2:start) is a very friendly build and you can add the ParPortProp (https://www.retrobrewcomputers.org/doku.php?id=boards:other:parportprop:start) board to add a Propeller-based console without dealing with building an ECB bus system. My first build ever was a Zeta V1 + ParPortProp. It makes a nice demo system to show to others as well.

On the 'SBCs that can be expanded' front there is always the original SBC V2 (https://www.retrobrewcomputers.org/doku.php?id=boards:sbc:sbc_v2:start). John Coffman's Mark IV (https://www.retrobrewcomputers.org/doku.php?id=boards:sbc:z180_mark_iv:z180_mark_iv) is a bit of an 'advanced' build with a Z180.

All 3 of the boards above are supported by the excellent RomWBW CP/M 2.2 (https://github.com/wwarthen/RomWBW).

There is also a newer S-100 Z80 SBC (http://www.s100computers.com/My%20System%20Pages/SBC%20Z80%20Board/SBC%20Z80%20CPU%20Board.htm) that was done by John Monahan. I have provided a basic CP/M 3 version (https://www.retrobrewcomputers.org/doku.php?id=software:firmwareos:zsos:start) to get this one started for new people.

alank2
February 16th, 2018, 04:13 PM
I've ordered a few parts, so I am going to try my luck at getting a Z80 running with an AVR (ATMEGA64A, as it has two UARTs). My plan is to use the ATMEGA for serial I/O, the disk interface to microSD, a real-time clock (via a DSxxxx connected to the AVR), and maybe anything else I can get the AVR to do: random number generator, SPI, parallel port, etc.

I've used FatFs before, so what I want to do is use that for access to the microSD so it can be a standard filesystem (FAT/FAT32/exFAT). It will have files in the root like 0.dsk, 1.dsk, 2.dsk, etc. My goal is that in CP/M there will be 4 drive letters (A: to D:) and you can mount any disk into these 4. I would make a utility to manage this, so you could do something like:

MOUNT A: 0 (mount 0.dsk in A:)
MOUNT A: (unmount A: no disk present)
MOUNT A: name (mount by name, I will keep a file of names and assign up to a 14 char name to each numbered disk file).
MOUNT DIR (see all the disks available on the microSD in a dir like listing)

That is the idea anyway. I would have to have commands to delete disks, create disks, copy disks, rename disks, etc.
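The mount table behind those commands can be tiny. A minimal host-side C sketch of the idea - the names (`mount_disk`, `mounted_image`) and layout are illustrative, not from any existing code, and the real version would sit on FatFs and open the numbered `n.dsk` files:

```c
#include <assert.h>

/* Drive-letter -> image-number table for the hypothetical MOUNT utility.
   -1 means "no disk present" in that drive. */
#define NDRIVES 4                       /* CP/M drives A: through D: */
static int mount_tab[NDRIVES] = { -1, -1, -1, -1 };

/* MOUNT A: 3  ->  mount_disk(0, 3);  MOUNT A:  ->  mount_disk(0, -1) */
static int mount_disk(int drive, int image) {
    if (drive < 0 || drive >= NDRIVES)
        return -1;                      /* no such drive letter */
    mount_tab[drive] = image;           /* real code would f_open("3.dsk") here */
    return 0;
}

/* The BIOS read/write hooks would consult this before touching the card. */
static int mounted_image(int drive) {
    return (drive >= 0 && drive < NDRIVES) ? mount_tab[drive] : -1;
}
```

The CP/M-side MOUNT.COM would then just ship the drive/image pair to the AVR, which updates this table.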

I have a few questions.

My other thread asks about blocking/deblocking; for CP/M 2.2 it is a BIOS feature that must be implemented. I am wondering, however, if it would be better to implement it on the AVR side. That way the CP/M side would think it is dealing with 128-byte sectors, while the AVR side does the blocking/deblocking. The goal would be to make the BIOS on the CP/M side smaller and simpler. If I did it this way, though, CP/M 3.1's BDOS blocking/deblocking wouldn't do much either, since it would be dealing with 128-byte sectors, right? Would sticking with 128-byte sectors limit the size of each disk? What other addressable components (track, head (is there one?), sector, etc.) are there? What is the max disk size with 128-byte sectors?
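Whichever side does it, the deblocking arithmetic is small. A host-side C sketch, assuming the usual 512-byte physical sectors (so 4 logical 128-byte sectors per physical one); the in-memory array stands in for the SD card or image file:

```c
#include <string.h>
#include <assert.h>

/* CP/M asks for 128-byte logical sectors; the card side works in 512-byte
   physical sectors. 512/128 = 4 logical per physical (assumed figures). */
enum { PHYS = 512, LOG = 128, SPP = PHYS / LOG };

/* Byte offset of a logical sector within the disk image. */
static unsigned long logical_to_offset(unsigned long lsec) {
    unsigned long phys = lsec / SPP;          /* which 512-byte sector */
    unsigned long off  = (lsec % SPP) * LOG;  /* where inside it */
    return phys * PHYS + off;
}

static unsigned char disk[PHYS * 8];          /* tiny fake disk image */

/* The BIOS-visible read: always 128 bytes; deblocking hidden on this side.
   Real code would read the physical sector from SD into a cache first. */
static void read_logical(unsigned long lsec, unsigned char *buf) {
    memcpy(buf, &disk[logical_to_offset(lsec)], LOG);
}
```

Writes need the classic read-modify-write of the enclosing physical sector, which is where a small sector cache on the AVR pays off.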

I've messed around with disks on Grant's CP/M FPGA and I think he set them to 8MB, and some of the SIMH CP/M implementations do something similar. It would be nice to have reasonably large disks, 2M/4M/8M, so are/were there any standard storage devices in that range that would be smart to emulate or pattern after?

Another question, in terms of CP/M itself, did it differentiate between a floppy or hard disk, or were they both simply block storage devices to it?

durgadas311
February 16th, 2018, 05:00 PM
Another question, in terms of CP/M itself, did it differentiate between a floppy or hard disk, or were they both simply block storage devices to it?

CP/M BDOS didn't have direct knowledge of the physical disk, but you could infer some things like "fixed" vs. "removable". Vendors sometimes provided features and utilities to differentiate.

CP/M 3 allowed for larger disks and files, but it used the same basic filesystem scheme.

alank2
February 18th, 2018, 08:17 AM
I keep noticing other people asking about MMUs for their Z80 that can swap memory in or out in 16K chunks. They would split the address space into 4 banks and then swap each bank independently, for example. My question is: for CP/M 2.2 or 3.1, is there any benefit to such a complicated MMU? I am planning on a simple single bank bit that can be 0 or 1, which swaps the lower 48K and keeps a common upper 16K. Is there any reason or benefit to make it more complicated than that?

lowen
February 18th, 2018, 08:28 AM
The Z180 MMU is available in a testing form from OpenCores. You could blow it into something like an Altera MAX 7128 or similar and use it on a Z80 system. Or you could just use a Z180 in the first place. There are several simple MMU designs out there, but there is tested software for the Z180 MMU.

durgadas311
February 18th, 2018, 08:37 AM
I keep noticing other people asking about MMUs for their Z80 that can swap memory in or out in 16K chunks. They would split the address space into 4 banks and then swap each bank independently, for example. My question is: for CP/M 2.2 or 3.1, is there any benefit to such a complicated MMU? I am planning on a simple single bank bit that can be 0 or 1, which swaps the lower 48K and keeps a common upper 16K. Is there any reason or benefit to make it more complicated than that?

If you have only two banks (128K), that is probably sufficient for CP/M 3. CP/M 2.2 can't use the extra memory. For a ramdisk, a more complicated MMU helps to allow copying ramdisk data to user buffers, but then it's probably direct bank-to-bank copy that is the most useful.

I did a relatively simple MMU for a Kaypro that does that, and allows a user-selectable common page boundary. I have a schematic for the whole mod here: http://sims.durgadas.com/kaypro/ram256k.pdf, although that is more than you'd need. There is a latch for the MMU data, then a 74'85 that does the common page boundary selection, and a 74'151 that selects the DRAM address lines. This allows direct bank-to-bank copy by having separate bank selects for write vs. read. For normal CP/M 3 bank select, both RD and WR banks are the same; for XMOVE operations, they can differ.

This MMU does not allow arbitrary selection of pages, so some memory is not accessible (the pages above the common boundary for banks 1..). With arbitrary page selection, the setups for each "bank" are more complicated, but you can directly copy between any RAM page and the user buffer.

Chuck(G)
February 18th, 2018, 10:09 AM
There's also the TI 74LS613 and its relatives. Used in the PC AT to provide extra address bits for the 8237 DMA controller. Might be simpler and more flexible than doing it in TTL glue.

alank2
February 22nd, 2018, 05:14 PM
How important is a parallel port to a Z80 SBC? I plan on having a pair of serial ports and I'm trying to decide about having a parallel port or not.

Chuck(G)
February 22nd, 2018, 05:17 PM
It depends on what you'd like to do.

A parallel port can be invaluable if you're trying to talk to a GPIB device, or even a SCSI device.

alank2
February 23rd, 2018, 02:06 PM
I am at a design fork in the road: how to clock the Z80 and the AVR. After looking at the timing diagrams again, I don't think I am going to attempt having the AVR set its data bus to output and then switch it back to input before the next Z80 T1 cycle. I'm going to do it like the z80ctrl project does and control the Z80 clock myself from the AVR. That means the maximum speed I can get that way is clk/2, half of the AVR speed. I might be able to use a multiplexer to switch between the AVR pin and a real clock and get full speed as well, but that is a whole separate issue to try.

What I am trying to decide is the clock speed. The Z80 can go up to 20 MHz and the ATMEGA64A up to 16 MHz. I also want to control the Z80 speed with an OUT instruction, so you can tell it to run at a specific speed. If I go with a 16 MHz crystal, I can get speeds of 8 MHz, 4 MHz, 2 MHz, and possibly 16 MHz if I implement it. These are nice common Z80 speeds, but 16 MHz is not a serial-friendly clock for baud rate division. My baud rates under 57600 are all very workable at 0.16% error or less; 57600 baud is 0.79% error, but 115200 baud is 2.12% error. That is the downside of 16 MHz. The datasheet says 115200 will work at -3.9%/+4% error, but it recommends at most half of that because each side might have clock error. I'm thinking 2.12% would probably work, but who knows. Or I could just drop 115200, since it is probably pretty fast for the Z80 anyway, but again I'm not familiar with that.

The alternative is to use a 14.7456 MHz crystal and get all perfect baud rates, but then the Z80 would be clocked at 7.3728 MHz, 4.9152 MHz, 3.6864 MHz, 2.9491 MHz, 2.4576 MHz, 2.1065 MHz, or 1.8432 MHz, which bear no relation to how real machines were clocked; again, not that it matters that much. What are your thoughts?
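The error figures quoted above fall straight out of the AVR USART divider. A quick C check, assuming double-speed (U2X=1) mode, where baud = f_osc / (8 × (UBRR + 1)) and UBRR is rounded to the nearest integer:

```c
#include <assert.h>
#include <math.h>

/* AVR USART, double-speed (U2X=1) mode: baud = f_osc / (8 * (UBRR + 1)).
   UBRR is an integer, so most crystal/baud pairs carry a small error. */
static double baud_error_pct(double f_osc, double baud) {
    long ubrr = (long)(f_osc / (8.0 * baud) - 1.0 + 0.5);  /* nearest UBRR */
    double actual = f_osc / (8.0 * (double)(ubrr + 1));
    return (actual - baud) / baud * 100.0;
}
```

With these assumptions, 16 MHz gives +2.12% at 115200 and -0.79% at 57600, matching the figures above, while 14.7456 MHz divides exactly to 115200.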

Chuck(G)
February 23rd, 2018, 03:13 PM
You want faster, use an ARM. Many of the lower range are 5V tolerant with plenty of drive capability. Maximum clock frequency is about f(ARM)/2. For an F103-type, that f is 72MHz; for an F4, 180MHz.

You could also consider PIC32, with similar 5V tolerance and clock speeds.

alank2
February 23rd, 2018, 04:16 PM
I've done a few things with STM32, but not a lot. I love AVRs, so I'd like to do it with one of them if I can work it out. I've had a different idea: instead of making the AVR emulate and deal with MANY OUT and IN operations as a communication method, why not make the AVR a bus master and have it DMA everything instead? Each OUT instruction might take 20-30 cycles by the time an interrupt fires on the AVR and it changes clock modes and processes it. That would probably be a significant slowdown. But if the AVR used BUSREQ to take control of the bus, it could read a sector out of the SRAM directly much faster, do what it needs to do, and leave a status code in memory for the Z80-side BIOS code to look for. I just need a way to trigger the AVR to take control of the bus and kick off a command. I suppose the IORQ signal and a dummy OUT instruction to assert IORQ would be one way.

Chuck(G)
February 23rd, 2018, 05:55 PM
You could also move to a mid-range PIC (PIC18 or PIC32MX) and operate it as an I/O device in PSP mode.

Or use shared memory and leave all the I/O to an AVR--I used the ATMega162 and 256 years ago; those allowed for external memory, but after that I moved to 3V MCUs and haven't kept track of what AVRs are doing.

durgadas311
February 23rd, 2018, 06:51 PM
I've done a few things with STM32, but not a lot. I love AVRs, so I'd like to do it with one of them if I can work it out. I've had a different idea: instead of making the AVR emulate and deal with MANY OUT and IN operations as a communication method, why not make the AVR a bus master and have it DMA everything instead? Each OUT instruction might take 20-30 cycles by the time an interrupt fires on the AVR and it changes clock modes and processes it. That would probably be a significant slowdown. But if the AVR used BUSREQ to take control of the bus, it could read a sector out of the SRAM directly much faster, do what it needs to do, and leave a status code in memory for the Z80-side BIOS code to look for. I just need a way to trigger the AVR to take control of the bus and kick off a command. I suppose the IORQ signal and a dummy OUT instruction to assert IORQ would be one way.

You could implement MMIO (memory-mapped I/O) on the Z80. That was not common in the old days, but it was used in some places. You decode a small set of addresses high in memory, and those signals activate/interrupt/signal your co-processor (whatever it is). The MMIO space might actually be RAM, but the signals also go to the other processor. It might even be just one byte that signals the co-processor (address 0FFFFH is pretty easy to decode). Then, as you said, the co-processor can use BUSREQ/BUSACK to access the Z80 memory as needed to perform whatever operation was requested, and send an interrupt to the Z80 on completion. Or the Z80 could poll a memory location. As long as no caching is going on, that would be fairly simple.

If you used such a technique to "function ship" BIOS calls to the co-processor, your BIOS would be very small.
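The mailbox flow sketches out in a few lines. This is a host-side C simulation of the scheme, not real firmware; the 0FFFFH mailbox, the status byte next to it, and the command codes are illustrative choices:

```c
#include <assert.h>
#include <string.h>

/* Shared 64K Z80 address space; writes to the decoded MAILBOX address
   also strobe the co-processor (here modeled as a direct function call). */
static unsigned char ram[0x10000];
#define MAILBOX 0xFFFF
#define STATUS  0xFFFE      /* hypothetical layout: status next to mailbox */

enum { CMD_NONE = 0, CMD_READ_SECTOR = 1, ST_OK = 0, ST_BUSY = 0xFF };

/* What the co-processor does when strobed: DMA the shared RAM (BUSREQ/BUSACK
   on real hardware), then leave a status byte for the Z80 BIOS to poll. */
static void coprocessor_service(void) {
    if (ram[MAILBOX] == CMD_READ_SECTOR) {
        memset(&ram[0x8000], 0xE5, 128);    /* pretend: sector into a buffer */
        ram[STATUS]  = ST_OK;
        ram[MAILBOX] = CMD_NONE;
    }
}

/* Z80-side store: a write to MAILBOX is decoded and triggers the co-processor. */
static void z80_write(unsigned addr, unsigned char val) {
    ram[addr & 0xFFFF] = val;
    if ((addr & 0xFFFF) == MAILBOX) {
        ram[STATUS] = ST_BUSY;              /* busy until service completes */
        coprocessor_service();
    }
}
```

On the Z80 side the whole BIOS disk read then collapses to "store command byte, poll status byte".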

alank2
February 24th, 2018, 09:12 AM
Thank you both for the help!

I don't have any experience with PICs so far. I am using an ATMEGA64A that has some sort of memory interface, but they multiplexed the data bus with half of the address bus. I'm guessing using it would require a latch. I'm not sure if I am going to use it or just bit-bang reading and writing memory. I still need to look at it in the datasheet.

durgadas311 - is it worth the extra logic to implement MMIO? Do you think my plan of using an OUT instruction to trigger a transaction with the AVR will work? I agree that I could trigger it on a memory write with decoding, but this would require external logic - though I suppose so does triggering on IORQ/WR/any address lines I want to restrict it to. In your MMIO example, just writing to a memory location would trigger it, and that would be cool (and more efficient). How much logic would it take to check that all address lines are 1's (0xFFFF) along with MREQ and WR? Would I use logic decoders for this?

One of my goals is "function shipping" everything I can off the CP/M BIOS into AVR code to maximize TPA.

Chuck(G)
February 24th, 2018, 09:27 AM
Don't ignore the possibility that many higher-end MCUs support USB host mode. That could be a boon if you want to use keyboards or storage devices.

durgadas311
February 24th, 2018, 10:47 AM
durgadas311 - is it worth the extra logic to implement MMIO? Do you think my plan of using an OUT instruction to trigger a transaction with the AVR will work? I agree that I could trigger it on a memory write with decoding, but this would require external logic - though I suppose so does triggering on IORQ/WR/any address lines I want to restrict it to. In your MMIO example, just writing to a memory location would trigger it, and that would be cool (and more efficient). How much logic would it take to check that all address lines are 1's (0xFFFF) along with MREQ and WR? Would I use logic decoders for this?


True, it's probably not worth trying to decode 16 address lines. Back in the early days of TTL, they used to put together an inverter and a bunch of diodes to make multi-input NAND gates, but that's pretty messy. There's probably no other I/O on the Z80 at all, so just combining IORQ and WR would be enough to signal the co-processor.

If you function-ship everything then you may be able to get the BIOS and BDOS down to one page (256 bytes) each. Each of those is normally expected to start at a page boundary, so that's about minimum - provided you can do the setup and response code in one page. Technically, both BIOS and BDOS could share the same code to do the function shipping (split between the two pages). So, you might be able to get it down to a 63.5K TPA.

Chuck(G)
February 24th, 2018, 11:37 AM
A 63.5K TPA? I don't see how, if you want to keep the relationships documented in the "Alteration Guide".

When I wrote 22NICE, I had the option of making a very tiny resident, as all of the BDOS and BIOS functions were implemented in a different memory segment in x86 code. CCP didn't exist, as 22NICE uses the MSDOS command processor. All I had to provide was a very simple BIOS jump table and a few bytes of interface code (emulator trap exit)--and a minimal bit of stack space.

Many programs worked--and some didn't. Some depended on the relationship between BDOS and BIOS areas. I found that 61KB/60KiB was about the upper limit for reliable operation. My test basis was a mix of commercial software and the SIG/M, CPMUG and SIMTEL user libraries.

durgadas311
February 24th, 2018, 01:38 PM
No commercial software should make any assumptions about the relationship between BIOS and BDOS. In fact, commercial software should not be using the BIOS at all. But any software that needs to use the BIOS should be getting the address from the JMP at location 0000H, and for the BDOS/TPA it should be getting the address from the JMP at location 0005H. Why would any software care about whether the BIOS entry and BDOS entry were 256 bytes apart or 3328?

Magnolia Microsystems produced a co-processor for the H89, a 4MHz Z80 with 64-256K RAM that communicated via parallel port (I/O bus, actually) to the H89. It function shipped BIOS and BDOS calls to the H89 - which was running the traditional CP/M. There was no known commercial software that did not run.

Chuck(G)
February 24th, 2018, 02:12 PM
For me, "should" didn't matter--and, in fact, in a market dominated by hobbyists and software written by them, there really isn't a "should" if you want to sell product that works with the largest body of software.

As another example, no programmer "should" assume the support of undocumented instructions. But many did.

"Should" anyone write a commercial (or otherwise) product that requires a 63KB TPA? You can, but you're probably committing product suicide.

Just stating reality.

durgadas311
February 25th, 2018, 08:57 AM
One of my goals is "function shipping" everything I can off the CP/M BIOS into AVR code to maximize TPA.

So, one drawback to function shipping everything is that you need to run some sort of BDOS on the co-processor. I vaguely recall hearing of a BDOS converted to C, but I can't find a reference to it. You'd need to simulate what the BDOS does. At least for the initial implementation, you may want to just run a BDOS on the Z80.

alank2
February 25th, 2018, 10:56 AM
Oh, I misunderstood. I'm not going that far! I'm just going to write the BIOS in such a way as to move as much to the AVR as possible.

Chuck(G)
February 25th, 2018, 11:19 AM
I think it'd be a hoot to couple a Z80-ish MPU with a not-yet-released Propeller 2 (https://docs.google.com/document/d/1bqZifEV1829USP5WoXPffmqdxHcfl4ldyd3vS6QUGDs/edit).

Reminds me of my old CDC days...

alank2
February 26th, 2018, 04:27 AM
I see a lot of designs where people went all out with a MMU that can do 16K pages, 4 of them each indexed to any particular area of SRAM. It is flexible, but what uses it? MP/M? Something else? I didn't see CP/M 3.1 benefitting from it necessarily.

durgadas311
February 26th, 2018, 05:00 AM
I see a lot of designs where people went all out with a MMU that can do 16K pages, 4 of them each indexed to any particular area of SRAM. It is flexible, but what uses it? MP/M? Something else? I didn't see CP/M 3.1 benefitting from it necessarily.

CP/M 3 doesn't much benefit from that complexity. One area is better use of the available memory. For example, if you have 256K and use a 16K common area, you have 48K wasted (a whole other bank) unless you have this feature. So, with a simple MMU, 256K, and 16K common, you get only 4 banks, but with arbitrary page selection you get 5 banks.

Another area it helps is for implementing a ramdisk, so you can map an arbitrary segment of ramdisk into a convenient address for your code.

alank2
February 26th, 2018, 07:31 AM
I've got an SN74HC00N all on its own that I am using for the banking. It is fed BANK (to select banks) and A14/A15 from the Z80, and it outputs A16 for the SRAM. The datasheet lists typical 9 ns (18 ns max) from "A or B input to Y output". The signal goes through 2 gates between the A14/A15 select inputs and the A16 output. Will this be fast enough to update the A16 line? How fast does the Z80 require the SRAM to be? I've seen anywhere from 10 ns to 55 ns. I might want to run the Z80 up to 20 MHz, but probably more like 16 MHz.

durgadas311
February 26th, 2018, 08:04 AM
It's been a while since I thought in that space. As I recall, the issue for address lines will be the "setup time" for the SRAM - not (just) access time. The Z80 should stabilize address lines ahead of the leading edge of RD or WR, and then the trailing edge is where the action is. RD or WR should be approximately 1 clock cycle (50ns at 20MHz), but you have a little more time since address lines are set first. If the propagation delay in your select circuit is 18ns, you have that much less time for the SRAM to set up address select internally. You may need to study timing diagrams for both the Z80 and the SRAM. If your SRAM access time is 55ns, though, that is not going to work for a 20MHz Z80 (without WAIT states).
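The budget arithmetic can be written down directly. A C sketch of the method only - the window and delay figures below are placeholder assumptions, and a real check needs the address-delay/setup numbers from both the Z80 and SRAM datasheets:

```c
#include <assert.h>

/* Nanoseconds per Z80 clock at a given frequency. */
static double ns_per_clock(double clk_mhz) {
    return 1000.0 / clk_mhz;
}

/* window_clks: clocks from address-valid to data-sample (a fraction of the
   machine cycle, taken from the Z80 timing diagram); gate_ns: bank-select
   propagation delay sitting in the address path; sram_ns: SRAM access time.
   Returns 1 if the SRAM fits in the remaining budget. */
static int sram_fast_enough(double clk_mhz, double window_clks,
                            double gate_ns, double sram_ns) {
    double budget = window_clks * ns_per_clock(clk_mhz) - gate_ns;
    return sram_ns <= budget;
}
```

For example, with an assumed 1.5-clock window at 20 MHz (75 ns) and the 18 ns gate delay, a 70 ns part misses the budget, while the same part clears it easily at 10 MHz.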

alank2
February 27th, 2018, 07:19 AM
I got the two-bank system working: 48K/48K with 16K common. I am using a signal BANK with two NAND gates, so the signal has to travel through 2 gates, which in my tests last night with HC parts was around 15ns.

The problem with this is that I have 112K usable of the 128K SRAM, which isn't a huge problem, but I wonder if it could be improved.

Instead of doing the 48K/48K - 16K common, I could switch to a 32K/32K/32K - 32K common. I could fully address all of the 128K. Instead of the two NAND gates in series, I could do this with two OR gates, one for A15 and one for A16. If the A15 from the Z80 is high, then both A15/A16 would go high; otherwise they would follow the BANK0 and BANK1 selection signals.

Is there a downside to a 32K/32K/32K - 32K common config vs. a 48K/48K - 16K common, other than not having that 48K contiguous bank 0? Any downside for CP/M 3.1 other than possibly having to switch to bank 2 instead of bank 0?

durgadas311
February 27th, 2018, 07:32 AM
I got the two bank system working 48K/48K with 16K common. I am using a signal BANK with two NAND gates, so the signal has to travel through 2 gates, which in HC my tests last night were around 15ns.

The problem with this is that I have 112K usable of 128K sram, which isn't a huge problem, but I wonder if it could be improved.

Instead of doing the 48K/48K - 16K common, I could switch to a 32K/32K/32K - 32K common. I could fully address all of the 128K. Instead of the two NAND gates in series, I could do this with two OR gates, one for A15 and one for A16. If the A15 from the Z80 is high, then both A15/A16 would go high, else it would follow the BANK0 and BANK1 selection signals.

Is there a downside to a 32K/32K/32K - 32K common config vs. a 48K/48K - 16K common other than you don't have that 48K contiguous bank0? Any downside for CP/M 3.1 other than possibly having to switch to bank 2 instead of bank 0?

So, tradeoffs for 32K vs 48K banks. Smaller banks mean less space for buffers and code. Also, without an MMU that supports direct bank-to-bank copy, you can only use bank 0 and common memory for disk buffers, so bank 2 may not be of much use anyway. I think you can still put directory hash buffers there, but 32K is not much space for that. I'd say you're close to the point of diminishing returns. My advice - for what it's worth - would be to make peace with 112K, or build a better MMU and go to 256K.

alank2
February 27th, 2018, 07:46 AM
Actually, my idea of using OR gates can scale. With a single quad OR gate IC I could go to a 512K SRAM. As I'm planning on using an AVR to control the bank signals, it can also be used to do DMA, like XMOVE.

a15 from z80 OR bank0 --> a15
a15 from z80 OR bank1 --> a16
a15 from z80 OR bank2 --> a17
a15 from z80 OR bank3 --> a18

When a15 is low, then the a18/a17/a16/a15 come from bankX. When a15 is high, a18/a17/a16/a15 are all high.

alank2
February 28th, 2018, 05:52 AM
I'm still trying to work out the question of 32K/32K vs 48K/16K.

Does CP/M 3.1 really use up more than 32K in bank 0 for its use with buffers and structures typically? If I go 32/32, will I have to put some CP/M 3.1 things in bank 2?

I have two ideas for very simple MMU's. I know there are complicated designs where the user can assign any page to each 16K region, but those require more logic and complication than these two ideas. I will be controlling the bankX signals from an AVR. Using a 512K SRAM.

__

The first is my simplest 32K banked / 32K common scheme:

512K SRAM means 32K pages from 0-15.

A0-A14 --> sram A0-A14
z80 A15 || avr BANK0 --> sram A15
z80 A15 || avr BANK1 --> sram A16
z80 A15 || avr BANK2 --> sram A17
z80 A15 || avr BANK3 --> sram A18

bank3210 + z80 a15 --> sram a18/a17/a16/a15 (page)
xxxx + 1 --> 1111 0x8000-0xffff always points to page 15 (common).
0000 + 0 --> 0000 0x0000-0x7fff points to page 0 (selected by bank0-3)
0001 + 0 --> 0001 0x0000-0x7fff points to page 1 (selected by bank0-3, and so on).

Only needs one part (a quad OR gate) and you can access all memory through sixteen 32K pages.
__

The second modifies it to have a 48K banked / 16K common scheme:

512K SRAM means 16K pages from 0-31.

z80 A14 AND z80 A15 are combined with an AND gate to become z80 A14&A15

A0-A14 --> sram A0-A14
z80 A15 || avr BANK0 --> sram A15
z80 A14&A15 || avr BANK1 --> sram A16
z80 A14&A15 || avr BANK2 --> sram A17
z80 A14&A15 || avr BANK3 --> sram A18

bank3210+z80a15a14 --> sram a18/a17/a16/a15/a14 (page)
xxxx + 11 --> 11111 0xc000-0xffff always points to page 31 (common).
0000 + 00 --> 00000 0x0000-0x3fff points to page 0
0000 + 01 --> 00001 0x4000-0x7fff points to page 1
0000 + 10 --> 00010 0x8000-0xbfff points to page 2

You can get to page 3, but only by changing BANK0 to a 1 resulting in this odd arrangement:
0001 + 00 --> 00010 0x0000-0x3fff points to page 2
0001 + 01 --> 00011 0x4000-0x7fff points to page 3
0001 + 10 --> 00010 0x8000-0xbfff points to page 2

Changing BANK1 to a 1 gives the next logical progression and so on:
0010 + 00 --> 00100 0x0000-0x3fff points to page 4
0010 + 01 --> 00101 0x4000-0x7fff points to page 5
0010 + 10 --> 00110 0x8000-0xbfff points to page 6

It needs two parts (a quad OR gate and an AND gate), but you get a 48K banked / 16K common configuration. You can access eight 48K banked areas easily. There are also eight 16K banked areas that can be accessed - not in a clean linear way, but workable. A downside is that you are propagating through two gates in series (AND then OR) and not just one, but it would probably still work fine with fast enough logic.

Now, I _could_ drop the BANK0 signal from A15 and lose access to those 16K regions, thinking of it as a 16K common / eight 48K banked configuration: a 400K system instead of 512K.
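Both gating schemes above can be checked bit-for-bit in software before soldering anything. A C model of each mapping, wired exactly as described (Z80 A15 OR'd into the bank lines for the first; the extra A14&A15 AND gate for the second):

```c
#include <assert.h>

/* Scheme 1: 32K banked / 32K common on a 512K SRAM.
   sram A18..A15 = BANK3..0, each OR'd with Z80 A15, so any access with
   A15=1 lands in page 15 (common); A15=0 selects the BANK page. */
static unsigned long map32(unsigned z80_addr, unsigned bank) {
    unsigned a15  = (z80_addr >> 15) & 1;
    unsigned page = (bank & 0xF) | (a15 ? 0xF : 0);   /* the four OR gates */
    return ((unsigned long)page << 15) | (z80_addr & 0x7FFF);
}

/* Scheme 2: 48K banked / 16K common on a 512K SRAM.
   sram A15 = Z80 A15 | BANK0; sram A18..A16 = (A15 & A14) | BANK3..1;
   A14..A0 pass straight through. */
static unsigned long map48(unsigned z80_addr, unsigned bank) {
    unsigned a15 = (z80_addr >> 15) & 1;
    unsigned a14 = (z80_addr >> 14) & 1;
    unsigned hi  = a15 & a14;                         /* the AND gate */
    unsigned sram_a15 = a15 | (bank & 1);
    unsigned upper = ((bank >> 1) & 7) | (hi ? 7 : 0);  /* A18..A16 OR gates */
    return ((unsigned long)upper << 16)
         | ((unsigned long)sram_a15 << 15)
         | (z80_addr & 0x7FFF);
}
```

Dividing the resulting SRAM address by the page size (>> 15 for 32K pages, >> 14 for 16K pages) reproduces the tables above, e.g. 0xC000 in any bank lands in 16K page 31.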

lowen
February 28th, 2018, 06:31 AM
While I understand the need to work out your own solution to the MMU problem, I would recommend you read up on what other hobbyists did during the first wave of 'retro-CP/M' homebrew boards, back in the late '90s. Tim Olmstead wrote a comprehensive paper on interfacing dynamic RAMs with the Z80, and beginning on page 15 he discusses memory management and MMUs. I know you're planning to use static RAM, which is great, but you might still find the MMU information in Tim's paper informative. The paper, without schematics, in PDF form can be found at http://www.cpm.z80.de/download/dram.pdf, a ZIP of the paper plus schematics can be found at http://www.cpm.z80.de/download/dramfull.zip, and please be sure to read the full memorial page at http://www.cpm.z80.de/tim.htm.

Now, Tim was developing a distinctly non-CP/M system, but working through his reasoning is helpful, and seeing how he leveraged the 74xx189 register file for an MMU is eye-opening. Another MMU design you could look at that has been very successful with CP/M 3 systems is that of the HD64180/Z180, and there is a lot of example code out there for it.

Now, looking at your 32K/32K banking idea, I am reminded of the TRS-80 Model 4, which uses exactly that banking. The 128K model 4 is set up with 4 32K banks, with the mapping window either in the lower 32K or the upper 32K. The CP/M implementations for the Model 4 might give you some ideas, and I read on the Tandy subforum here that a new CP/M Plus port has been made by member Alphasite; see: http://www.vcfed.org/forum/showthread.php?62272-TRS-80-Model-4-CP-M-Plus.

Hope that helps.

durgadas311
February 28th, 2018, 07:04 AM
Does CP/M 3.1 typically use more than 32K in bank 0 for its buffers and structures? If I go 32/32, will I have to put some CP/M 3.1 things in bank 2?


This question really depends on how large the banked part of your BIOS is, and how many directory buffers you want to provide (and your largest physical sector size). If you want to also save a copy of CCP.COM, that's more space.

I can't over-emphasize the benefits of the direct bank-to-bank copy MMU feature, if you plan on trying to use a large amount of memory in CP/M 3. If you're only going to implement 128K, then it may not matter. But without bank-to-bank copy you really only focus on bank 0 and bank 1, in which case you probably don't want to limit that to 32K.

alank2
February 28th, 2018, 07:41 AM
This question really depends on how large the banked part of your BIOS is, and how many directory buffers you want to provide (and your largest physical sector size). If you want to also save a copy of CCP.COM, that's more space.

All disk I/O will be via a microSD card, which is pretty fast already, so I'm not too worried about storing a copy of CCP.COM; I don't think it will be too slow loading from disk. The largest sector size is still something to consider. I plan on implementing four mountable drives (A: - D: ). The size of each drive could be 8M, or something a little smaller. Not sure about the block size. I do want the AVR to present all disk I/O to the Z80 in terms of 128-byte records so the Z80 doesn't have to deblock/block anything; I'll do that on the AVR side. I plan on having files 0.dsk, 1.dsk, 2.dsk, etc. on the microSD that I can mount into drives; e.g. "mount a: 0" would mount 0.dsk to A:. Mount will do a warm boot after exiting.
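For what it's worth, the AVR-side offset math is trivial if each .dsk image is just a flat array of 128-byte records. A sketch - the records-per-track geometry here is purely an assumption for illustration, not anything settled in the thread:

```python
RECORD = 128        # CP/M logical record size
SPT = 64            # records per track -- an assumed geometry

def dsk_offset(track, record):
    """Byte offset into an N.dsk image file for a BIOS track/record
    pair, assuming the image is a flat sequence of 128-byte records."""
    return (track * SPT + record) * RECORD

def drive_size(tracks):
    """Total image size in bytes for a given track count."""
    return tracks * SPT * RECORD
```

With this geometry, 1024 tracks gives exactly an 8M drive, and the AVR can seek straight to the record in the image file with no blocking/deblocking on the Z80 side.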


I can't over-emphasize the benefits of the direct bank-to-bank copy MMU feature, if you plan on trying to use a large amount of memory in CP/M 3. If you're only going to implement 128K, then it may not matter. But without bank-to-bank copy you really only focus on bank 0 and bank 1, in which case you probably don't want to limit that to 32K.

I am going to go to 512K now. I am pretty sure I want to do the 32K/32K common config because it is so simple. I did get the AVR external memory interface working last night so I can map a 32K section of the SRAM directly into the AVR address space and read or write to it very quickly. Bank to bank copy should be no problem. XMOVE/MOVE bios calls will request that the AVR do it, and the AVR will take over the bus and knock it out.

I've only seen CP/M 3 talk about banks 0, 1, and 2. When you use "use a large amount of memory in CP/M 3" do you mean the OS or a program? Do programs ever use other banks for their purposes? I suppose if the code to swap banks was above 32K and below BDOS that they could.

durgadas311
February 28th, 2018, 09:58 AM
I've only seen CP/M 3 talk about banks 0, 1, and 2. When you use "use a large amount of memory in CP/M 3" do you mean the OS or a program? Do programs ever use other banks for their purposes? I suppose if the code to swap banks was above 32K and below BDOS that they could.

Programs don't normally use any bank except the TPA (remaining common memory and bank 1). It would be difficult to use all of 512K for BDOS3; there's only so much buffering to be done. For large disks with large directories, hash buffers make sense - they really speed up CP/M access to the drives. If the AVR is doing disk I/O, the speed will depend on the interface used to transfer data. If the AVR is doing DMA directly into the CP/M address space, you should be good to go. If it's a serial interface, you may want other methods to increase speed, like buffering. Without bank-to-bank copy (or AVR DMA directly into buffers) you'll need to have disk data buffers in common memory, which reduces TPA. If you plan on using the remaining memory for a ramdisk, bank-to-bank copy is also a major benefit. There really aren't any strict rules, but depending on your planned usage you have some options to consider.

Hash buffers tend to be large, and with a 32K bank size you may need more than one bank for them. BDOS3 allows hash buffers in any bank, I think - even without bank-to-bank copy.

Alphasite
February 28th, 2018, 06:07 PM
Now, looking at your 32K/32K banking idea, I am reminded of the TRS-80 Model 4, which uses exactly that banking. The 128K model 4 is set up with 4 32K banks, with the mapping window either in the lower 32K or the upper 32K. The CP/M implementations for the Model 4 might give you some ideas, and I read on the Tandy subforum here that a new CP/M Plus port has been made by member Alphasite; see: http://www.vcfed.org/forum/showthread.php?62272-TRS-80-Model-4-CP-M-Plus.

Hope that helps.

Technically it's not a new CP/M Plus port I'm working on, as I wrote it 30+ years ago. I just dug it back out and have been filling in the missing bits and updating it. I wrote it for myself so I could run CP/M software, so a certain polish wasn't there (it also started as a CP/M 2.2 port). For example, there's nothing in the config program mentioning how to go up a level, and there's no indication the config is written to drive A.

As for the banking, I was limited to what the Model 4 had. Were I to design my own Z80 system, I'd probably go with a 16K common area and 16K banked pages that could be mapped anywhere in the lower three 16K memory pages. That would allow easy bank-to-bank moves.

alank2
March 1st, 2018, 07:43 AM
I noticed that there are peripherals to the Z80 that some people use/include in their SBC's. CTC, DMA, SIO, etc.

Did CP/M use any of these specifically? DMA perhaps by the BIOS that uses XMOVE?

Did any programs use these periphs?

I am wondering about their importance in an SBC. If my AVR does DMA in its own way, will that be a problem for software that is looking for a DMA controller? Was there such a thing or was that always a custom solution?

durgadas311
March 1st, 2018, 08:19 AM
I noticed that there are peripherals to the Z80 that some people use/include in their SBC's. CTC, DMA, SIO, etc.

Did CP/M use any of these specifically? DMA perhaps by the BIOS that uses XMOVE?

Did any programs use these periphs?

I am wondering about their importance in an SBC. If my AVR does DMA in its own way, will that be a problem for software that is looking for a DMA controller? Was there such a thing or was that always a custom solution?

Those were just easy ways to get peripherals onto a Z80 system. The Z80-XXX ones were rather spendy, so (non-Zilog) alternatives were also used. I think Mostek started second-sourcing the Z80-XXX chips at some point, which coincided with price relief. But "good" CP/M programs should not have been going directly to the chips, unless the chip had a highly specialized purpose. In those cases, the device on the other end of the chip's external interface was probably equally important. So unless you plan on implementing industrial control or some such, it probably doesn't matter. If your Z80 interface to the AVR is sufficiently robust, you may not need a local DMA chip.

alank2
March 1st, 2018, 09:56 AM
Thank you!! I appreciate the help you guys are giving me; I'd be lost without it!

alank2
March 1st, 2018, 06:25 PM
Spent some time with the CP/M 3 system guide tonight. Wow, that is quite a process. When you enter the memory banks it has available to it, does it then decide how and where it wants to put all the buffers, hashes, etc. ? It looks a bit daunting, but I suppose one step at a time...

Is CP/M 3 easier to setup than CP/M 2? or vice versa?

durgadas311
March 1st, 2018, 06:42 PM
Spent some time with the CP/M 3 system guide tonight. Wow, that is quite a process. When you enter the memory banks it has available to it, does it then decide how and where it wants to put all the buffers, hashes, etc. ? It looks a bit daunting, but I suppose one step at a time...

Is CP/M 3 easier to setup than CP/M 2? or vice versa?

Entry of all the bank information is only necessary when you are letting/making GENCPM do the allocation of buffers. If you pre-allocate everything in your code, or do it dynamically at boot time, then GENCPM becomes fairly simple.

CP/M 3 is definitely more complicated than CP/M 2.2, but there are some pitfalls in CP/M 2.2 as well; some things are easier with CP/M 3.

alank2
March 5th, 2018, 07:59 PM
Thanks. I've been battling SRAM over the weekend. I tried to move from a 128K to a 512K IC, but the difference in speed seemed to be a problem. I was using a 55ns part before (128K), but the 512K one I ordered was 10ns. I couldn't get it to work despite many attempts. I'm not completely sure what the cause was, but maybe it is just too fast to get working on a breadboard. It was generating a lot of ground bounce and seemed to be really sensitive to the latch. It might run a few passes and then fail, or fail as soon as you did anything to it - probed anything, etc. The reason I tried it was that I figured 55ns would not be fast enough to run the Z80 at 20 MHz, but I went back to the 55ns RAM (128K) and it is working beautifully with the AVR memory interface, even at 20 MHz. I've got a 512K 55ns IC coming tomorrow and hopefully it will work as well as the 128K version.

durgadas311
March 6th, 2018, 03:32 AM
That is strange that the faster part would not work, assuming everything else stayed the same. Like you say, maybe just some electrical characteristics coupled with the breadboard environment. I was reviewing the Z80-CPU timing diagrams, and the RD pulse is longer than 1 clock cycle, closer to 2 (although instruction fetch might be more like 1.5). It's hard to tell just how literally to take the timing diagrams. WR is more like 1 clock cycle, though. But setup time for both looks to be around 2.5 cycles. So, depending on your decode logic and the actual timing constraints of the memory, you might be just fine. But I'm pushing my area of expertise here, so maybe others have better input.

alank2
March 6th, 2018, 04:16 AM
I noticed with the 55ns SRAM that there is a 55ns access time from address setup, but only 30ns from read enable. Writing needs 50ns from address setup, but only 45ns from write enable. I'm looking at a Z80 PDF that always shows the address setup 1/2 clock cycle ahead of the RD/WR pulse, which helps. The read looks like it has plenty of time (2 clock cycles) and the write is a bit tighter at only 1 clock cycle, but the data is available for 1.5 clock cycles (75ns @ 20 MHz).

My 20MHz AVR running memory test patterns ran all last night - 30000+ passes with zero errors.

I also thought it was strange about the 10ns part. Part of me wants to get one from a different manufacturer to see if it behaves the same, but I don't want to sink any more time into it.
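For reference, an address-in-address pattern test of the sort described might look like the following (a host-side sketch, not the author's actual AVR test; the AVR version would run the same loop over the external memory interface):

```python
def run_memtest(mem):
    """Two-pass pattern test: write each cell with the low byte of its
    address, verify, then repeat with the complemented pattern so every
    bit in every cell gets exercised both ways. Returns the error count."""
    errors = 0
    for invert in (0x00, 0xFF):
        for a in range(len(mem)):           # write pass
            mem[a] = (a ^ invert) & 0xFF
        for a in range(len(mem)):           # verify pass
            if mem[a] != (a ^ invert) & 0xFF:
                errors += 1
    return errors
```

The address-in-address pattern also catches shorted or stuck address lines, since two aliased cells would read back the wrong byte.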

Plasmo
March 6th, 2018, 04:44 AM
Memory access time is typically defined from assertion of chip select to valid data out. Generally read access is the long pole; write time can be faster. The write pulse can definitely be much shorter than the memory access time. I have no Z80 design experience, but looking at the datasheet there are 3 machine cycles in a basic bus cycle. At 20MHz, the bus cycle is at least 150ns. The RD pulse is 75ns and the WR pulse is 50ns, so I think a 55ns static RAM should work fine at 20MHz.

10ns RAM in the breadboard environment is definitely a problem; it is effectively a 100MHz part. When 8 data lines switch simultaneously on a read, they send out large, fast pulses that can disrupt grounds all around them. The severity of the disruption depends on the data pattern. It is hard to get a solderless breadboard to run faster than 10MHz. Wirewrap prototypes can run faster, but you have to take special care with the ground grid - connecting each part with multiple ground wires in a grid-like network. Adding additional ground wires is a good practice even with 55ns RAM.

durgadas311
March 6th, 2018, 05:14 AM
Wirewrap prototypes can run faster, but you have to take special care with the ground grid - connecting each part with multiple ground wires in a grid-like network. Adding additional ground wires is a good practice even with 55ns RAM.

I worked at a place that did wire-wrap early in my career. They used a PCB pattern with ground and Vcc grids in traces, and traces to ground and Vcc pins on the sockets (the pattern defaulted to 16-pin DIPs, and we would cut traces and add solder-on clips for other packages). Strong ground and Vcc, plus the pattern, allowed for lots of bypass caps. The wire-wrap boards were not prototypes - they were the actual delivered product - so they had to be reliable. Of course, most of this was 2MHz. But it just confirms what you say about strong ground and power being essential. Bypass caps do wonders, also.

alank2
March 6th, 2018, 06:14 AM
Memory access time is typically defined from assertion of chip select to valid data out. Generally read access is the long pole; write time can be faster. The write pulse can definitely be much shorter than the memory access time. I have no Z80 design experience, but looking at the datasheet there are 3 machine cycles in a basic bus cycle. At 20MHz, the bus cycle is at least 150ns. The RD pulse is 75ns and the WR pulse is 50ns, so I think a 55ns static RAM should work fine at 20MHz.

10ns RAM in the breadboard environment is definitely a problem; it is effectively a 100MHz part. When 8 data lines switch simultaneously on a read, they send out large, fast pulses that can disrupt grounds all around them. The severity of the disruption depends on the data pattern. It is hard to get a solderless breadboard to run faster than 10MHz. Wirewrap prototypes can run faster, but you have to take special care with the ground grid - connecting each part with multiple ground wires in a grid-like network. Adding additional ground wires is a good practice even with 55ns RAM.

This is exactly what was happening, Plasmo. Here is my question - how is that different from when I switch an entire PORT on my AVR from 0x00 to 0xff? That is 8 lines all switching at once. I measured the rise/fall time on my scope last night and it was around 3.8ns. Wouldn't that also cause the same type of issues? Or is it that nothing I've ever driven before was sensitive enough to respond that quickly? I did try to solidify the ground a bit, which reduced the ground bounce I had going on, but it still failed. The guys at the eevblog were suggesting I add resistors in the lines to slow things a bit, but that would have been too hard with the already difficult breadboard wiring I had going.

WSM
March 6th, 2018, 06:33 AM
Recently I've been mostly working with the Z8S180, which has slightly different albeit very similar timing. I've been using 10ns SRAMs without any issues at a 33MHz clock rate and without wait states. One possible difference is that I'm using 3.3V SRAMs connected via FET bus switches.

If you look carefully at the Z80 timing diagram you'll notice that #9 [MREQr] is only a maximum and no minimum is specified. Therefore, for worst-case read timing without wait states I use (1.5 * TcC) - #8[MREQf] - decode time = 35ns - decoder delay at 20MHz. For writes, I use (2 * TcC) - #8[MREQf] - decode time = 60ns - decoder delay at 20MHz. However, another critical specification is #31 [TwWR, /WR minimum pulse width] = 25ns at 20MHz.
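Plugging numbers into those worst-case expressions is easy to script. Here #8 [MREQf] is treated as 40ns, which is what the 20MHz figures above imply - an assumption, so check the datasheet for your particular mask:

```python
def z80_mem_windows(clock_mhz, mreqf_ns=40.0, decode_ns=0.0):
    """Worst-case memory access windows per the formulas above:
    read = 1.5*TcC - MREQf - decode, write = 2*TcC - MREQf - decode.
    The 40ns default for MREQf is an assumption, not a datasheet value."""
    tcc = 1000.0 / clock_mhz            # clock period in ns
    read = 1.5 * tcc - mreqf_ns - decode_ns
    write = 2.0 * tcc - mreqf_ns - decode_ns
    return read, write
```

At 20MHz with no decoder delay this gives 35ns for reads and 60ns for writes, matching the figures quoted; add your decoder's propagation delay to see how much margin a 55ns part actually has.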

I have scoped several of my circuits and noted that the actual timings are considerably longer than the theoretical minimums. However, I haven't done that across the full temperature range and all mask versions so I choose to use the theoretical minimums. Per the datasheet footnotes, capacitance may come into play especially for larger systems and slow down the rise/fall times of various signals.

Re: Ground bounce - I was using DRAM on a HD64180 system and detected a LOT of ground bounce and power rail noise. This prototype had the DRAM decoupling capacitors connected from the DRAM supply pins to the nearest ground points. Simply changing the cap position to connect them directly across the DRAM's power and ground pins made a HUGE difference in quieting the noise.

Plasmo
March 6th, 2018, 07:09 AM
This is exactly what was happening, Plasmo. Here is my question - how is that different from when I switch an entire PORT on my AVR from 0x00 to 0xff? That is 8 lines all switching at once. I measured the rise/fall time on my scope last night and it was around 3.8ns. Wouldn't that also cause the same type of issues? Or is it that nothing I've ever driven before was sensitive enough to respond that quickly? I did try to solidify the ground a bit, which reduced the ground bounce I had going on, but it still failed. The guys at the eevblog were suggesting I add resistors in the lines to slow things a bit, but that would have been too hard with the already difficult breadboard wiring I had going.

The rise/fall time of 3.8ns is nothing compared to that of 10ns SRAM. The faster the edge rate, the greater the coupling effects; the more lines switching at the same time, the greater the coupling effects. Mixing 30 years of technology presents some challenges. The older-technology parts frequently are not affected by the fast glitches, but the newer parts may react to them, resulting in positive feedback. An example is the data bus driving causing address lines to change, which in turn causes the data to change.

Good ground is the first line of defense against noise. A good VCC tied to ground via a number of bypass caps is also desirable. To minimize noise, the connections should be point-to-point, not the pretty wire bundles routed between parts. Do the data/address buses first and the control lines last. This way the control lines are separated from the noisy bus and easier to reroute away from a particularly noisy part. Resistors on the outputs of the fast part are a possible solution; lowering the supply voltage of the board or inserting a diode in the fast part's VCC sometimes works (lower supply voltage means slower parts). Biasing the data bus so it floats to the halfway point between VCC and ground is another possible solution. Temperature affects different technologies differently: CMOS slows down but TTL speeds up at higher temperature. That is a technique used to isolate the problem when a board contains mixed technology.

Are you prototyping on a solderless board?

alank2
March 6th, 2018, 08:48 AM
That all makes sense, Plasmo. I'm using solderless boards - I did wonder whether the problem would go away or stay if I put it on a proper PCB, though. I figured it would be fine if I just ran it slowly enough, but I guess that doesn't change the part's edge rate.

I also thought about the age difference of the technologies.

alank2
March 14th, 2018, 07:18 PM
The good news is that I was able to write some opcodes into SRAM and release the Z80 to run them. It then executed an OUT instruction, which triggers a flip-flop to assert BUSREQ and allows the AVR access to the SRAM. I then wrote a HALT instruction to the very next address the Z80 would execute, released BUSREQ, and the Z80 picked back up and ran the HALT. So the concept of having the AVR process commands this way seems to work very well, even at 20 MHz.

The bad news is that my breadboard is a mess. Too many wires going all over the place, so I am going to make some stackable boards (2x20 stackable header on the left for Z80 signals and a 2x20 on the right for aux signals like my BANK0-3 signals, etc.).

A board for the Z80, one for SRAM, one for the AVR, one for the serial ports, etc.

I've got the Z80 schematic done so far, but no board layout yet, if anyone wants to review it (attached PDF). It is basically a DC barrel power input with a power switch, some main caps and a TVS, and the Z80 wired to the left bus (Z80 bus); the only pins it uses from the right bus (aux bus) are two signals to control the speed. It will be software-switchable between either 16/8/4/2 or 20/10/5/2.5 MHz, controlled from the AVR, so it has some logic on board to divide and select the clock as well. If you see any bugs or problems let me know!

durgadas311
March 14th, 2018, 10:16 PM
Schematic makes sense to me, at least from a first look.

One thought: do you need a power-on-reset circuit? I'm wondering if you'll find that the power-on state of the CPU is somewhat indeterminate as it is, unless/until you press the RESET button.

Are you planning any peripherals at all? I'm guessing that the only purpose for the external bus connectors is for the AVR?

alank2
March 15th, 2018, 04:23 AM
Thanks.

No reset circuit needed, the AVR will control reset and preload the SRAM with instructions to execute. It will also monitor the warm reset and switch back to bank 0 if it sees it. I may move the warm reset functionality to the AVR so it performs it instead of having to watch for it.

Yes, I want it to be open for adding peripherals, but the idea is that the AVR will do most of these tasks.

I am only planning on decoding 4 bits of the OUT address as that only takes one 74AHC138. This means OUT 0xF0-0xFF will trigger the AVR, but that leaves 15 other 16-byte ranges for expansion even if I took the same approach for other devices.

The guys at the eevblog think I need more grounds mixed in on the z80 bus connector...

durgadas311
March 15th, 2018, 04:33 AM
The guys at the eevblog think I need more grounds mixed in on the z80 bus connector...

That probably would be a good idea... but you are not connecting a cable to these, right? These are just stacks of PCBs connected directly together? I guess if the daughter boards get their power from the main board it might be more important to make sure that both power and ground are strong.

Plasmo
March 15th, 2018, 06:16 AM
You may want to sacrifice one pin on the AUX_BUS as a key so boards rotated 180 degrees can't stack together. Alternatively, you can define the ground/VCC pins on the AUX_BUS such that if a board is rotated 180 degrees and stacked, the power will short to ground.

alank2
March 15th, 2018, 07:01 AM
Yes, they will be stacked. I do plan on them all getting power from the bottom board though so I should make sure there is solid ground and vcc between them. I'm going to put a 220uF main cap on each board.

Plasmo - I'm going to offset the right 2x20 down a half inch or so, so that if you rotate the board it will be clearly out of alignment. That won't stop someone from plugging it in wrong if they force it, unless the female headers have enough margin to prevent it, but hopefully it will be a good indicator that something is out of whack.

Should the GND and VCC pins be the same amount such as 6 of each, or should I have more GND's as in 8 GND's and 4 VCC's?

durgadas311
March 15th, 2018, 08:12 AM
Should the GND and VCC pins be the same amount such as 6 of each, or should I have more GND's as in 8 GND's and 4 VCC's?

Perhaps someone with more electrical engineering experience should answer, but I think it's a matter of both current capacity and grounding. The ground and Vcc paths will both carry the same current, so there's an argument for them being equal in number. But there may be some RF (or other high-frequency) benefit to having more ground pins; I'm not sure. Perhaps that comes down more to the surface area of the ground foil on the PCB (i.e. shielding).

Plasmo
March 15th, 2018, 03:46 PM
A stacked assembly has lower inductance than the traditional motherboard/backplane assembly (e.g. S100). Assuming you are using the low-cost 10cm x 10cm 2-layer PC boards with a maximum stack of 4 boards, 2 power and 2 ground pins should be adequate. Since the AUX_BUS connector has a large number of spare pins, perhaps another ground pin can provide marginal benefit. The inductance of the ground interconnects is small compared to that of the circuit board itself, unless you go to a power/ground-plane design rule - an expensive approach that's not justified by the speed of the Z80, IMO.

alank2
March 15th, 2018, 04:55 PM
Thanks Plasmo; I am hoping to get it to work reliably at 20MHz. We'll see how far it can go. I will be using low-cost 10cm x 10cm 2-layer PCBs - the bottom layer will be a ground plane that I will try to keep solid and cut with as few traces as possible. The top layer will be for signals and will also have a ground pour connected with ground vias to the bottom layer. I added many more grounds up and down both the left and right connectors to hopefully keep the grounds solid and free from bouncing. I'll attach a revised schematic with the new grounds/pin layout.

Plasmo
March 15th, 2018, 05:11 PM
Definitely more grounds than needed, but more ground can't hurt. A ground plane like that is robust enough to handle a 50MHz eZ80F91.

alank2
March 15th, 2018, 06:41 PM
Thanks Plasmo - I will be pleased if I can run 20 MHz on it reliably.

alank2
March 28th, 2018, 05:16 AM
I'm going to order some pcb's today or tomorrow hopefully.

Let me know if you guys see any problems.

SRAM layout and schematic:
http://home.earthlink.net/~alank2/SRAM.pdf

Z80 layout and schematic:
http://home.earthlink.net/~alank2/Z80.pdf

Hopefully once I get these in hand and built up, I can clean up my breadboard mess a bit.

smbaker
March 28th, 2018, 08:41 AM
One possible difference is that I'm using 3.3V SRAMs connected via FET bus switches.

Could you elaborate on the bus switches? I've had an application in the back of my mind where I might want to attach 5V memory to a 3.3V microprocessor, and I hadn't quite figured out the best way to handle the bidirectional data bus.

Scott

WSM
March 28th, 2018, 09:30 AM
Could you elaborate on the bus switches?
I've been using IDTQS3245 bus switches mostly because I've got a large supply of them. They're basically eight 0.25ns bidirectional FET switches and for simple voltage translation, the OE* pin can be tied to ground for always active. Application note AN-11A (https://www.idt.com/document/apn/11-5v-and-3v-conversion-zero-delay) shows how they can be used as 3.3V to/from 5V translators by using an external dropping diode between 5V and Vcc.

Another option is TI's SN74CBT series or the SN74CBTD (Diode within the IC). I've used the SN74CBTD3861 which has 10 switches and doesn't require an external diode. There's also the 3384 (10 switches, 2 * OE*), 16210 (2 x 10 switches) etc.

smbaker
March 28th, 2018, 09:59 AM
I've been using IDTQS3245 bus switches mostly because I've got a large supply of them. They're basically eight 0.25ns bidirectional FET switches and for simple voltage translation, the OE* pin can be tied to ground for always active. Application note AN-11A (https://www.idt.com/document/apn/11-5v-and-3v-conversion-zero-delay) shows how they can be used as 3.3V to/from 5V translators by using an external dropping diode between 5V and Vcc.

Another option is TI's SN74CBT series or the SN74CBTD (Diode within the IC). I've used the SN74CBTD3861 which has 10 switches and doesn't require an external diode. There's also the 3384 (10 switches, 2 * OE*), 16210 (2 x 10 switches) etc.

Thanks. I wish there was a through-hole version, but I think these will do. ;) Other than picking the one you had most supply of, and whether or not an external diode is required, do you have any other preference between the two families?

WSM
March 28th, 2018, 12:30 PM
Other than picking the one you had most supply of, and whether or not an external diode is required, do you have any other preference between the two families?
There are probably other manufacturers that have similar FET switches and might have a DIP version but I haven't checked. I tend to just use the IDTQS3245 and SN74CBTD3861 in order to minimize my stock of unique parts. My choice then becomes whether I need x8 or x10 and whether I want to add the external diode. Some of my boards use both devices due to the number of signals that need translation.

One thing to note about the IDT chips is that most are also available with a built-in 25 ohm series resistor. Although I can see some overshoot / ringing on these signals, so far I haven't experienced a real need to go with the resistor versions.

alank2
April 9th, 2018, 01:24 PM
I got the PCBs in today, and after building them, they at least draw the right amount of current. No chance for testing beyond that yet. This should simplify the breadboard I am using considerably. I'm buried in other projects, so hopefully I can get back to this and start working on a CP/M BIOS for it soon!


alank2
April 16th, 2018, 01:50 PM
Does all of this make sense?

This is the flip flop circuit I plan to use to allow Z80 IN/OUT/DMA with an AVR.

Z80 executes an OUT to 0x80-0x8F:

"0111" A7/A6/A5/A4 along with IORQ# asserted and FFCLEAR# deasserted trigger the flip flop through PRE#.
The flip flop output goes to a logic level N mosfet to assert WAIT# causing the Z80 to wait.
The AVR will notice FFSTATE# asserted on the flip flop and knows it is being addressed.
The AVR evaluates RD# or WR# to determine IN or OUT.

IN instruction:
The AVR grabs A8-A15, A0-A3 (we already know A4-A7) and determines what command the Z80 is trying to input.
The AVR sets data port to output and provides the data on D0-D7.
The AVR asserts BUSREQ# and then asserts FFCLEAR#.
The Z80 will be released from the wait, capture the data IN, and hold at BUSREQ# by asserting BUSACK#.
The AVR waits for BUSACK# assertion, then sets data port back to input.
If command requires DMA, the AVR can do it here.
The AVR deasserts FFCLEAR# (IORQ# is released now and it won't retrigger), then deasserts BUSREQ#.

OUT (with DMA):
The AVR grabs A8-A15, A0-A3 (we already know A4-A7) and determines what command the Z80 is trying to output.
The AVR grabs the data on D0-D7.
The AVR asserts BUSREQ# and then asserts FFCLEAR#.
The Z80 will be released from the wait, and hold at BUSREQ# by asserting BUSACK#.
The AVR waits for BUSACK# assertion.
The AVR can do the DMA here.
The AVR deasserts FFCLEAR# (IORQ# is released now and it won't retrigger), then deasserts BUSREQ#.

OUT (without DMA - if this works):
The AVR grabs A8-A15, A0-A3 (we already know A4-A7) and determines what command the Z80 is trying to output.
The AVR grabs the data on D0-D7.
The AVR asserts FFCLEAR#.
The Z80 will be released from the wait.
The AVR quickly waits for IORQ# to be deasserted and then deasserts FFCLEAR# before any other possible IN/OUT instruction can execute.
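The address decode the AVR performs in those sequences can be sketched in a few lines (the 0x80-0x8F window follows the post; the field names and command encoding are illustrative only, nothing is settled yet):

```python
def decode_io(addr16):
    """Split the 16-bit Z80 I/O address as the AVR would see it:
    A4-A7 select the AVR window, while A0-A3 and A8-A15 are free to
    carry command/parameter bits (A8-A15 come from A or B during I/O)."""
    selected = ((addr16 >> 4) & 0xF) == 0x8    # OUT/IN 0x80-0x8F
    cmd = addr16 & 0xF                         # A0-A3
    param = (addr16 >> 8) & 0xFF               # A8-A15
    return selected, cmd, param
```

This gives 16 sub-commands from the low nibble plus an 8-bit parameter for free on every IN/OUT, before any data byte is transferred.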


Does the forum convert PNG's to JPG's and reduce them???

Here is a link to the image:

http://home.earthlink.net/~alank2/sch.png

durgadas311
April 16th, 2018, 02:48 PM
This is certainly a lot more complicated than what I imagined. I think you've got your work cut out for you trying to do live input/output between the Z80 and the AVR. I guess you don't have DRAM so you don't need to worry about starving the REFRESH cycles, but it still seems unusual to hold the WAIT line low for such a long time. It also does not allow for any sort of background activity (or interrupts), like most other I/O devices allow - you can normally start a command from the Z80 and then perform other tasks while checking for completion (or even getting an interrupt).

I was imagining something simpler. I was thinking that the Z80 would place into a well-known chunk of memory the "command" (and data) it wishes to send. Then the Z80 sets this FF and waits for it to be cleared (possibly doing other, non-conflicting, activities while waiting). The AVR is notified of the FF being set, then it reads the command/data out of memory and (optionally) places response data into memory and clears the FF. The AVR would, of course, use the BUSREQ/BUSACK semantics of the Z80 to ensure it does not conflict on memory cycles.
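As a purely illustrative sketch (not the actual design), the mailbox idea above might look like this; the address, command codes, and function names are all invented, the flip-flop is modeled as a flag, and the BUSREQ/BUSACK arbitration around the AVR's memory accesses is omitted:

```python
# Illustrative Python model of the shared-memory mailbox scheme:
# the Z80 leaves a command in a well-known chunk of memory and sets a
# flip-flop; the AVR notices, services the command, and clears the FF.

MAILBOX = 0x8000              # hypothetical well-known address
memory = bytearray(0x10000)   # the shared 64K SRAM
ff_set = False                # the command flip-flop

def z80_send_command(cmd, payload=b""):
    """Z80 side: place command and data in memory, then set the FF."""
    global ff_set
    memory[MAILBOX] = cmd
    memory[MAILBOX + 1:MAILBOX + 1 + len(payload)] = payload
    ff_set = True             # Z80 now polls (or does other work) until cleared

def avr_service():
    """AVR side: if the FF is set, handle the command and clear it."""
    global ff_set
    if not ff_set:
        return None
    cmd = memory[MAILBOX]
    memory[MAILBOX] = 0x00    # leave a response/status byte behind
    ff_set = False            # releases the Z80
    return cmd

z80_send_command(0x01, b"\x12\x34")   # e.g. a hypothetical "read sector"
assert avr_service() == 0x01
assert not ff_set
```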

alank2
April 16th, 2018, 04:56 PM
I started out thinking of a DMA-only method where I would trigger BUSREQ and then work on memory only, but that didn't allow me to do simple IN/OUT instructions without having to set up a memory buffer first and then check the results after. I'm going to test it to find out where each method has the most speed - I suspect I'll find a crossover point where transfers of more than X bytes should be pushed to DMA.

This method would actually provide a way to do blocking or non-blocking transfers. For example, you could have a command that says grab this sector and it could be implemented as blocking where we move immediately from the OUT instruction to BUSREQ and then the AVR transfers the sector in using DMA. You could just as easily implement a command that asks for a sector and then you can go on and process instructions and come back to ask if the data is ready and for the data to be transferred. In CP/M are most commands like console in, console out, going to be blocking?

glitch
April 16th, 2018, 06:08 PM
Yeah, most of the time you're sitting there polling the console device to see if it's ready.

Might just use a dual-port SRAM if you're bent on DMA but don't want the hassle of syncing everything up. They're cheap nowadays, and simple to use from both sides.

durgadas311
April 16th, 2018, 06:24 PM
Well, it's up to you. A couple comments on your circuit.

One minor note is that the 74138 is decoding I/O address 7x, not 8x. Just be aware when you start writing code.

The second might be a bigger problem. When the CPU sets up the I/O to 7x it will assert PRE# on the FF which forces the Q output high, asserting WAIT# and ensuring PRE# stays asserted. But the PRE# input will still be asserted when the AVR does FFCLEAR# (asserts CLR#) and I think the 7474 will be in an invalid, unstable, state (both PRE# and CLR# asserted). I'm not sure you'll be able to come out of the WAIT state - at least not predictably. Take a look at the truth-table for the 7474. I'm not sure just what Z80 inputs are still working during a WAIT state, probably RESET# works but not much else. Using a long WAIT state may be prone to hardware hangs, even if you work out the PRE#/CLR# situation. This FF absolutely *must* be cleared by RESET# in order to have any hope of recovery without a power cycle.

alank2
April 16th, 2018, 07:40 PM
One minor note is that the 74138 is decoding I/O address 7x, not 8x. Just be aware when you start writing code.

Ahh yes, you are right! I was originally thinking of using 0xFx, but I wanted to use the same signal for FFCLEAR# to do two things (1) clear the flip flop and (2) make sure the 138 is no longer triggering the flip flop. As a result I had to use G1 for FFCLEAR# instead of one of the G2's, meaning that I can't use "1111", but have to use "0111" instead. I was thinking of adding a jumper or solder bridge so you could pick which output on the 138 to connect to the 74's PRE# that way you could trigger on 0x0x - 0x7x.


The second might be a bigger problem. When the CPU sets up the I/O to 7x it will assert PRE# on the FF which forces the Q output high, asserting WAIT# and ensuring PRE# stays asserted.

Until FFCLEAR# is asserted on the 138. It is connected to G1, so it will disable the 138 from asserting PRE# on the 74 and at the same time assert CLR# on the 74. Does that make sense? (I came up with it earlier tonight so it might not! That is why I am asking!)


Using a long WAIT state may be prone to hardware hangs, even if you work out the PRE#/CLR# situation

It shouldn't be too long. Just long enough for the AVR to grab the OUT or supply the IN and release it. I plan to do anything that takes more time in a BUSREQ/BUSACK cycle using DMA.


This FF absolutely *must* be cleared by RESET# in order to have any hope of recovery without a power cycle.

I have RESET# on a pin change interrupt so the AVR will know that it was taken into reset. I also have control of the RESET# with the AVR, so it can detect reset and make sure the flip flop is cleared along with resetting the memory bank to 0, etc.

Attaching the PDF of the AVR board....45043

durgadas311
April 17th, 2018, 04:42 AM
Yes, I had seen the FFCLEAR# connection to the 74138, then ignored it. There is probably some minimum pulse width for FFCLEAR# (since WAIT# is clocked and so you must keep it de-asserted until the Z80 finishes the I/O cycle), but that should work.

I guess what makes me feel uneasy about the WAIT# is that it is software controlled. Traditionally, WAIT# was (mostly) always hardware controlled and, probably, more deterministic. Being a software engineer, I just don't trust software!

alank2
April 27th, 2018, 06:35 PM
The PCB is ordered so hopefully I'll have them next week sometime.

I have another question. I've read that Z80's were not as readily controllable via a control panel as the 8080 was. I remember reading something about how an Altair 8800 would set the PC from the control panel - can the same thing be done with a Z80? I could see that BUSREQ would certainly stall the Z80 and take it into Hi-Z so you could do what you wanted with SRAM, but I don't see how you could set the PC unless you somehow were able to clock in a PC...

alank2
May 5th, 2018, 09:56 AM
Finally got it built and can do some development on it!

45338

It seems to work at 20 MHz, though some of my pullups need to be a bit stronger, I think.

alank2
May 7th, 2018, 06:49 AM
I'm thinking of moving in a different direction with this, one that is just two boards so you could pick it up and operate it as a handheld. The idea is that the top board would have a control panel that, like the Altair's, is a pure hardware control panel requiring no Z80-side code to run, but unlike the Altair it does not use binary lights and switches; instead it has four 5x7 LED displays for the address and two 5x7 LED displays for the data. I am already using an AVR anyway for I/O such as disk and serial, so I wonder if I could make that AVR also perform the tasks of controlling the Z80 from a hardware control panel. You could pick it up, flip it on, and literally type in some opcodes, store them to memory, and then single-step or run them.

Does anyone have experience with a hardware type of debugger that did this in the past? I know I can use a flip flop to trigger on the M1 and MREQ signals to issue a WAIT. I would need to then disable the SRAM off the data bus and try to control the Z80 directly by making the AVR drive the databus. For example, if we stopped at address 1234 and the memory is reading the value from the location, I would disable the memory. I can grab the PC from the address lines, but to allow the user to read or write memory in stop mode, I really need to be in BUSREQ/BUSACK mode, so the current instruction needs to be finished somehow. I can take the sram off and make the AVR provide a NOP instruction and then assert busreq and deassert wait to let it finish that machine cycle and move to busreq/busack mode where I can do whatever I like to memory. Ultimately though, I still need to be able to execute instructions to make it JP back to the right address or a new address, or perhaps to even evaluate registers and show them on the display.

I found this page talking about the special reset for the Z80:
http://www.primrosebank.net/computers/z80/z80_special_reset.htm

Does anyone have any documentation on hardware meant to control the Z80 from a hardware debugging point of view? Or know of any devices designed to do this that I could look at the schematic, design, etc., for?

durgadas311
May 7th, 2018, 07:52 AM
Interesting idea, but as you point out you can only access/alter memory from the AVR - not processor registers (or state).

Another problem will be detecting the end of an instruction, as M1 is issued for each opcode fetch - which means that some Z80 instructions will have more than one M1 cycle.

Some interrupt modes will execute a single instruction if it is forced on the bus, but that is not really what you want either. It may be tricky getting that to work as needed without disrupting the CPU enough to allow continuation.

A couple examples from my experience are the TI990 front panel - which is completely software driven, and the Honeywell mainframes where the control panel is deeply embedded in the CPU circuitry. Since you don't have access to the Z80 CPU internals, you may have to stick with software front panels. You could theoretically use memory banking to hide the debugger code and RAM, and then use interrupts of some sort to do single-step and "halt" functions. That's still tricky if you allow Z80 software to run any interrupt mode it wants. But typically any software must conform (acquiesce) to the interrupt constraints of the platform, which you control.

alank2
May 7th, 2018, 10:07 AM
The way I plan to do that is to have the AVR be able to disable the SRAM temporarily so it can pretend to be the SRAM by providing opcodes and/or data on the data bus. If I want to know the contents of the A register, I can feed it an opcode to save A to some place in memory and then grab that off the bus (without it affecting memory), etc.

I was originally thinking about using the normal Z80 clock and trying to use WAIT and BUSREQ/BUSACK to control it, but I don't think that will give me cycle-level control to run instructions like that. Instead I am going to have the normal clock at 4/8 MHz, etc., and then an AVR-supplied clock where I can clock it manually. Then when I'm in a wait or BUSREQ/BUSACK, I can switch between them depending on the situation.

I've seen that for some instructions like JP, the M1 signal is asserted on the first byte fetch, but bytes 2/3 for the address do not have M1 asserted. I've also seen that some instructions that are 4 bytes long have an opcode prefix (or is it named something else?) that asserts M1 for the first and second bytes, etc. What happens if it processes the first one - I think some of them are 0xDD - and then doesn't get the rest of the opcode? Let's say it processes 0xDD and then I feed it a NOP? Is that valid to do?

Processing an IN/OUT: IORQTRIG is enabled, so when IORQ goes low, the flipflop asserts WAIT. The AVR fires an interrupt responding to the WAIT and checks RD/WR to see what it needs to do. Then it asserts BUSREQ and disables IORQTRIG so it can then clear the flipflop which deasserts the WAIT. The Z80 runs to BUSACK. Once the AVR sees that, it knows it can enable IORQTRIG again and it releases BUSREQ to let the Z80 run. If this were an IN instruction, it would set the data lines back to input before releasing BUSREQ as well.

Running (not tracing for breakpoints) would just do the above and have the M1TRIG disabled, but if we wanted to stop for any set breakpoints, M1TRIG would be enabled so a wait is issued every time an M1 occurs and the AVR can read the address A0-A15 to see if it matches a breakpoint. If not, the wait is cleared and the Z80 runs to the next M1. Tracing instead of running would seriously slow performance, having each and every M1 cycle compared to a breakpoint list, but if that could be done in 100 clocks on the AVR, maybe it could still run at 10% speed. If the user hits stop or a breakpoint matches, we can, at the wait, switch to the controlled clock, disable the SRAM, and then feed it opcodes to read/write/control it.

That is the idea so far anyway! I am hoping you guys with the Z80 experience can fill in the blanks of what I have wrong!

durgadas311
May 7th, 2018, 03:47 PM
Well, you've definitely got a big project there.

Basically, the Z80 has M1 cycles (opcode fetch), and rd/wr cycles (includes I/O), and then internal cycles. Most of the Z80 instructions have multi-byte opcodes, the majority of those are 2-byte but I think there are some 3-byte (and maybe 4). The M1 cycles are not really distinguishable from each other, so I don't believe there is any way to tell the first M1 (start of an instruction) from subsequent ones. At least all M1 cycles are at the start of an instruction. Maybe closer study of the Z80 documentation might reveal some external indication. So, failing some external indicator, the trick will be staying in sync. When the AVR is supplying the instruction stream, I guess you can keep track of that yourself. But, when running user code it will be trickier - and easier to get out of sync. You will essentially have to be decoding the instruction stream as it executes. That's one reason to use interrupts for single-stepping: the CPU will only service the interrupt between whole instructions.

Basically, interrupts break between instructions, BUSREQ/BUSACK break between machine cycles, and WAIT suspends a machine cycle (that involves RD/WR), essentially breaking between clock cycles.

I thought I recalled that at least one of x80 CPUs has a minimum clock rate (i.e. you could not run an arbitrarily slow clock). Even if the Z80 does not have a minimum, there may be other issues with any asymmetry or inconsistency of clock pulses. The Z80 clock input is used to create several different internal timing signals, and that circuitry may not work correctly if the input clock is malformed. I think you could get into a lot of trouble if you stopped the clock in the middle of a machine cycle (and did not restart accordingly). Perhaps there is some hardware-oriented Z80 documentation that describes this better - I only have a document mostly oriented to the programmers.

It will be interesting to see if you can get this to work.

alank2
May 7th, 2018, 05:38 PM
Well, you've definitely got a big project there.

I agree, it will be interesting if I can make it work.


Basically, the Z80 has M1 cycles (opcode fetch), and rd/wr cycles (includes I/O), and then internal cycles. Most of the Z80 instructions have multi-byte opcodes, the majority of those are 2-byte but I think there are some 3-byte (and maybe 4). The M1 cycles are not really distinguishable from each other, so I don't believe there is any way to tell the first M1 (start of an instruction) from subsequent ones. At least all M1 cycles are at the start of an instruction.

I think I need to start here to see what I can find out. I did find this excellent page with covers decoding in detail:

http://www.z80.info/decoding.htm#cb

Some of the cycles I've tested show that M1 is run twice. For example, the 0xDD 0x36 0x12 0x34 instruction I tried as a test fired M1 on the first and second bytes, but not the last two. What I wonder is if the "special" 0xDD 0xCB displacement opcode ones will fire M1 on the first, second, and fourth bytes, since the fourth is technically an opcode. I will have to test this. Basically though, I will have it trigger on every M1 and look at the data bus to see what is there. If it is 0xDD for example, I can note that the next M1 is not part of a new instruction, so don't consider it the beginning of an instruction. The worst case if it is wrong is that a breakpoint wouldn't work on what is really the beginning of an instruction; or, if they hit stop, I can grab a stream of M1 values until I can determine the state and know that an instruction has finished. Obviously the problem there is that if it didn't finish the instruction, you certainly can't feed it something different without expecting everything to go badly. Still, with the information on the decoding page above, I am hopeful it can be done.


That's one reason to use interrupts for single-stepping: the CPU will only service the interrupt between whole instructions.

Good idea - now this might be a whole lot easier. I can issue an NMI, and then clock it until I can see that the current instruction has finished, but before the NMI does anything to really change the processor state, or at least I can undo anything it does. Might be good for stop, but I'm not sure if this approach would be fast enough for breakpoints without slowing things critically.


I thought I recalled that at least one of x80 CPUs has a minimum clock rate (i.e. you could not run an arbitrarily slow clock).

The CMOS ones are static, but the NMOS ones aren't.


Even if the Z80 does not have a minimum, there may be other issues with any asymmetry or inconsistency of clock pulses. The Z80 clock input is used to create several different internal timing signals, and that circuitry may not work correctly if the input clock is malformed. I think you could get into a lot of trouble if you stopped the clock in the middle of a machine cycle (and did not restart accordingly). Perhaps there is some hardware-oriented Z80 documentation that describes this better - I only have a document mostly oriented to the programmers.

Good point. I was thinking that during the wait or busreq/busack cycle I could switch the clock, but it might be that one will and the other won't. Or perhaps if I wait a number of clocks after changing it before releasing the wait/busreq it might work.


It will be interesting to see if you can get this to work.

I appreciate your help - I need to first try to get a breadboard going with these connections and start trying things!

alank2
May 7th, 2018, 06:18 PM
A quick test of DD CB 00 01, which should follow the "two prefix bytes, displacement byte, opcode" format, shows that only the first two fire M1, so **-- where * is M1 and - is no M1. That is better than **-*, which is what I was concerned it might be. If this is valid, then any given M1 will be the first (real instruction start) or possibly the second (if the previous was CB, DD, ED, or FD). I'll have to study the page I linked to see if synchronization can be established. The only time I should have to do that is stopping from a run, where I have no idea whether it is the first (real) or second M1.
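The M1-per-byte behavior seen in this test and the earlier DD 36 12 34 one can be captured in a small function. This is an illustrative Python sketch based only on the patterns observed here (prefix bytes and the opcode byte assert M1, operands don't, and the DD CB / FD CB form never asserts M1 on its trailing opcode byte):

```python
# Which fetches of an instruction assert M1, per the tests in this
# thread. Illustrative only; instr is a list of instruction bytes.

PREFIXES = {0xCB, 0xDD, 0xED, 0xFD}

def m1_pattern(instr):
    """Return one bool per byte: True where the fetch asserts M1."""
    # DD CB d op / FD CB d op: only the two prefix bytes assert M1
    if len(instr) >= 2 and instr[0] in (0xDD, 0xFD) and instr[1] == 0xCB:
        return [True, True] + [False] * (len(instr) - 2)
    pattern, i = [], 0
    while i < len(instr) and instr[i] in PREFIXES:
        pattern.append(True)   # prefix fetch: M1 asserted
        i += 1
    if i < len(instr):
        pattern.append(True)   # opcode fetch: M1 asserted
        i += 1
    pattern += [False] * (len(instr) - i)  # operands/immediates: no M1
    return pattern

assert m1_pattern([0xDD, 0x36, 0x12, 0x34]) == [True, True, False, False]
assert m1_pattern([0xDD, 0xCB, 0x00, 0x01]) == [True, True, False, False]
```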

alank2
May 7th, 2018, 06:59 PM
To establish sync when it is unknown: trigger on the first M1 encountered. At this point we are using WAIT to hold up the CPU and we don't know whether this M1 is the beginning of an instruction or in the middle of one. Hopefully during the wait we can switch from the 8 MHz clock to our AVR-controlled clock. Then we release the wait and start cycling the clock, allowing the Z80 to move forward. As we do that we watch M1 until we see a non-M1 cycle. Now we know that the NEXT M1 cycle is the beginning of an instruction. I suppose this would be a problem if memory were filled with one-byte M1-only opcodes, though...

alank2
May 7th, 2018, 07:06 PM
cb
could be a first prefix
could be a second prefix
could be an opcode

ed
could be a prefix
could be an opcode

dd
could be a prefix
could be a repeated prefix
could be an opcode

fd
could be a prefix
could be a repeated prefix
could be an opcode

else
must be an opcode (not necessarily the first), but the next M1 will be the start of a new instruction

alank2
May 8th, 2018, 06:58 AM
Here is what I've come up with - hopefully it will work.

Z80 instruction synchronizer thoughts - I found these two very helpful pages:

http://www.z80.info/decoding.htm#cb
http://clrhome.org/table/

The problem is that the M1 signal occurs more than one time in a single instruction if some of the prefix opcodes are used. I need to know when the M1 I am looking at is the first byte in an instruction.

The special opcodes are 0xcb (could be a first prefix, could be a second prefix, could be an opcode), 0xed (could be a prefix, could be an opcode), 0xdd (could be a prefix, could be a repeated prefix, could be an opcode), or 0xfd (could be a prefix, could be a repeated prefix, could be an opcode)

The page above talks about two different formats for an instruction:

[prefix byte,] opcode [,displacement byte] [,immediate data]
- OR -
two prefix bytes, displacement byte, opcode

The second one is for sequences 0xdd 0xcb or 0xfd 0xcb. Fortunately however, the 4th byte for these while called an opcode, does NOT assert the M1 signal. In the first format, M1 is asserted for the prefix byte if present and the opcode. In the second format, M1 is asserted for both prefix bytes. We can essentially treat both formats the same as far as how they assert M1.

If we look at the previous two M1 values (I hesitate to call them prefixes or opcodes because we don't know which they are), we can determine or synchronize to the beginning of an instruction. Let's call (0) the current M1 value, (1) the previous M1 value, and (2) the one before that. There are non-M1 values, but we ignore them when queueing these up. (2) (1) (0)

if (1) is 0xcb, what could have come before it?
if (2) is 0xcb, then (1) must be an opcode and (0) must be start of a new instruction
if (2) is 0xdd or 0xfd, then (1) must be a 2nd prefix and (0) must be start of a new instruction
else (0) is not the start of a new instruction

if (1) is 0xed, what could have come before it?
if (2) is 0xcb, then (1) must be an opcode and (0) must be start of a new instruction
else (0) is not the start of a new instruction

if (1) is 0xdd or 0xfd, what could have come before it?
if (2) is 0xcb, then (1) must be an opcode and (0) must be start of a new instruction
if (0) is 0xdd or 0xed or 0xfd, then (1) is essentially a nop and (0) must be start of a new instruction
else (0) is not the start of a new instruction

else ((1) is not 0xcb/0xed/0xdd/0xfd), then (1) must be an opcode and (0) must be start of a new instruction

This would normally take three M1 cycles to fully evaluate whether (0) is the beginning of a new instruction, but what about when we are first starting out and don't have a (1) or (2) with valid data to review? When starting, we will assume that we are starting on the beginning of an instruction. If we set (1) to a non-prefix value like 0x00, it should work fine: going through the above tests will indicate we are on a new instruction (based on that assumption). Once we advance and have a valid (1) and (0), we can use (1) to determine if we are on a new instruction; when it evaluates (2), if (1) is a prefix, our assumption that (1) was the start of a new instruction is still valid, so (2) being 0x00 is fine - it keeps us from assuming (1) was an opcode rather than a prefix. Finally we will have (2), (1), and (0), and all will evaluate normally.

If we step on an instruction start, then at the next M1, we should be able to determine whether the next M1 should be stopped at or not.
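As a sketch, the decision tree above transcribes almost directly into code. This is an illustrative Python version, where b2 and b1 are the two previous M1 bytes (oldest first) and b0 is the current one:

```python
# Direct transcription of the (2)/(1)/(0) decision tree above.
# Returns True if b0 is the start of a new instruction.

def is_new_instruction(b2, b1, b0):
    if b1 == 0xCB:
        # (1) was an opcode only if preceded by CB, DD, or FD
        return b2 in (0xCB, 0xDD, 0xFD)
    if b1 == 0xED:
        return b2 == 0xCB
    if b1 in (0xDD, 0xFD):
        # b2 == CB makes (1) an opcode; a following prefix makes (1) a NOP
        return b2 == 0xCB or b0 in (0xDD, 0xED, 0xFD)
    return True   # (1) was a plain opcode

# Seeding history with a non-prefix placeholder 0x00, per the text:
assert is_new_instruction(0x00, 0x00, 0x3E)       # fresh start
assert not is_new_instruction(0x00, 0xCB, 0x21)   # 0xCB was a prefix
assert is_new_instruction(0xCB, 0xCB, 0x3E)       # CB CB: second CB was an opcode
assert not is_new_instruction(0x00, 0xDD, 0x21)   # DD prefixing 0x21
assert is_new_instruction(0x00, 0xDD, 0xDD)       # repeated DD acts as a NOP
```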

durgadas311
May 8th, 2018, 07:31 AM
I think that's the right decision tree for parsing M1 cycles. But I'd need to think through all combinations to see if you can actually predict the next instruction in all cases.

I guess you have two situations: 1) Stopping a free-running program, and 2) single-stepping. With single-stepping, you know when the instruction starts and can decode it and determine when it ends. With trying to stop a free-running program, you'll have to do some predictive work as you described.

I guess then, once you can reliably detect the first M1 of an instruction, you can disable RAM and inject instructions from the AVR. But, I think the AVR is going to have to effectively execute each instruction (that it supplies) in parallel with the Z80, in order to know what to do with each subsequent cycle. For example, read cycles might be fetching additional instruction parameters or might be fetching data from memory. The AVR would have to know which it is and what to put on the data bus (or what to do with written data). I'm guessing the set of instructions you'd use would be very limited - maybe just PUSH and POP.

daver2
May 8th, 2018, 07:51 AM
I haven't read the thread fully - but what I think you are looking for is 'the guts' of a Z80 ICE (In-circuit emulator).

One I know of (now defunct) is here http://www.tauntek.com/z80-in-circuit-emulator.htm

Dave

alank2
May 9th, 2018, 04:10 AM
Dave you are right, there is a lot of overlap with an ICE. What I'm thinking of is basically something that can be controlled like the Altair was where you can turn it on and it doesn't even have a byte of code in it and you could program it with opcodes directly, step through them, etc. Also, it could start by preloading sram with a ROM image, etc. as well.

My decoding plan above went bad with the instructions DD CB DD CB DD CB DD CB. Is it a "DD CB" instruction or a "CB DD" instruction!!! In this case going back far enough is not the solution, but knowing which was the beginning of an instruction from possibly very far back is. Still thinking on it... I'd like to be able to decode by evaluating M1 values only and without having to fully decode them (keeping it simple) if that is possible.

durgadas311
May 9th, 2018, 06:21 AM
Dave you are right, there is a lot of overlap with an ICE. What I'm thinking of is basically something that can be controlled like the Altair was where you can turn it on and it doesn't even have a byte of code in it and you could program it with opcodes directly, step through them, etc. Also, it could start by preloading sram with a ROM image, etc. as well.

My decoding plan above went bad with the instructions DD CB DD CB DD CB DD CB. Is it a "DD CB" instruction or a "CB DD" instruction!!! In this case going back far enough is not the solution, but knowing which was the beginning of an instruction from possibly very far back is. Still thinking on it... I'd like to be able to decode by evaluating M1 values only and without having to fully decode them (keeping it simple) if that is possible.

It's possible you could look for cycles *without* M1 and then know the next M1 is the start of an instruction. But, I think you could still have trouble finding a cycle without M1 in some cases. One I know of is HALT - that will appear on the bus as continual re-execution of the HALT instruction, so all you see are M1 cycles. I think basic loops will always have at least one non-M1 cycle somewhere (all forms of jump should have at least one non-M1 cycle) - although it's not clear just how long you could go without seeing a non-M1 cycle.
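A sketch of that sync-up rule, with bus cycles modeled as (is_m1, byte) pairs (illustrative only; note that a HALT loop yields nothing but M1 cycles and so never syncs):

```python
# Once a non-M1 cycle has been seen, the next M1 cycle must begin an
# instruction. Cycles are modeled here as (is_m1, byte) pairs.

def find_instruction_start(cycles):
    """Index of the first M1 cycle known to begin an instruction."""
    seen_non_m1 = False
    for i, (is_m1, _) in enumerate(cycles):
        if not is_m1:
            seen_non_m1 = True
        elif seen_non_m1:
            return i
    return None  # e.g. a HALT loop: nothing but M1 cycles

# DD 36 12 34 (M1, M1, operand, operand) followed by a fresh opcode fetch:
trace = [(True, 0xDD), (True, 0x36), (False, 0x12), (False, 0x34),
         (True, 0x3E)]
assert find_instruction_start(trace) == 4
```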

alank2
May 9th, 2018, 03:18 PM
I thought about that too. I have a pin to monitor HALT as well. I think I have the sequencer working now - as long as you start on an instruction start, it will keep perfect sync despite all the prefixes. I made up a list of all the possible sequences with the prefixes and tested the sequencer by running them randomly through it and it is working well.

I came up with a new idea on the WAIT triggering today as well. I am going to use a quad NAND to invert the M1 and IORQ signals and feed them to the 7474 CLK inputs. Then I can use the 7474 D as an enable/disable for whether I want M1 or IORQ to trigger a wait, and combine the two FF outputs to control WAIT. Now only a high-to-low transition on M1 or IORQ will trigger the flip flop. This allows me to clear the flip flop while M1/IORQ are still active, so I can avoid having to do a BUSREQ/BUSACK cycle to end an M1 or OUT instruction. I will still have to use BUSREQ for an IN instruction so I can switch the data port back to input to take it off the data bus, however.

One question I have for those who know Z80 assembly well: what is the easiest way to make the Z80 expose registers on the bus? I think you mentioned PUSH and POP before, and those would work, but you would have to do both a PUSH and a POP to make sure the SP doesn't change. Obviously any opcodes I feed the Z80 while stopped or stepping through instructions must not clobber flags, etc. I saw that JP ** doesn't clobber any flags, and that many other instructions don't either. To change a register, I could use the load-immediate opcode and then clock in the value directly. Then I thought about the IN instruction, which does basically the same thing, but from an input device rather than from memory - I could do that as well. I am assuming there are instructions to load directly from or save directly to a relative or absolute memory location. As I have full bus access and can clock it directly now, what I'm looking for is the shortest instruction that allows me to expose all registers and also change all registers.

The idea with the M1 trigger is that if I am tracing (stopping each M1 cycle to see if the address matches a breakpoint list), then it will trigger wait on each M1 cycle. I'll feed that to my sequencer and know which ones are the start of an instruction. If I decide to stop and take control of the Z80, I'll disable SRAM and start feeding it opcodes manually. If I decide to keep running, I can just clear the flip flop and let it run to the next M1.

The only issue I have is that if I am running (not stopping for M1's), I have no idea where the instruction starts are - especially a sequence like DD CB DD CB DD CB, which could be a string of "DD CB" instructions or "CB DD" instructions. In the case of trying to stop from a run, I'm going to simply look for any M1 opcode that is NOT a prefix (CB DD ED FD), and then I'll know the NEXT M1 opcode is the start of an instruction. The only problem with this approach is that if for some bizarre reason memory is filled with only prefix codes, you wouldn't ever be able to stop. To solve that, I'm going to make it so that if it overflows from 0xFFFF to 0x0000 while trying to find a non-prefix M1 opcode, it will just abort trying to stop and give the error "unable to stop", then do a reset to put the Z80 in a known state after the error is cleared. It shouldn't ever happen, though.
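The stop-from-run scan described above might be sketched like this (illustrative Python; the input is the sequence of bytes seen on successive M1 fetches, and the 64K limit models the 0xFFFF overflow abort):

```python
# Stall on successive M1 fetches until one is not a prefix byte
# (CB/DD/ED/FD); the M1 after that one starts an instruction.
# Give up after 64K fetches ("unable to stop").

PREFIXES = {0xCB, 0xDD, 0xED, 0xFD}

def stop_from_run(m1_bytes, limit=0x10000):
    """Index into m1_bytes of a known instruction start, or None."""
    for i, b in enumerate(m1_bytes[:limit]):
        if b not in PREFIXES:
            # b was an opcode, so the next M1 starts an instruction
            return i + 1 if i + 1 < len(m1_bytes) else None
    return None  # "unable to stop": nothing but prefix bytes seen

# The ambiguous DD CB DD CB... run resolves at the first non-prefix M1:
assert stop_from_run([0xDD, 0xCB, 0xDD, 0xCB, 0x06, 0x3E]) == 5
assert stop_from_run([0xDD] * 8) is None
```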

I got the displays working, though - six beautiful 5x7 LED displays. I'll attach a picture.

45427

durgadas311
May 9th, 2018, 07:00 PM
For stopping a running program, I think you can look for non-M1 cycles and then, combined with the HALT signal, you should be able to detect the start of an instruction fairly quickly. Although a long sequence of one-byte instructions (that take only one machine cycle) would prolong the sync-up, I can't imagine any practical code sequence that does not involve instructions with RD/WR (non-M1) cycles, so you should detect it quickly. Since we're talking about a human interface, the difference between stopping at the exact instruction where the user flips the switch vs. running on for 10 or even 20 instructions is really not significant. No user is going to be able to anticipate a specific instruction to stop at. And single step is done differently, since you know the starting point.

For getting registers in and out, you might consider doing a full context save/restore on every stop/start, rather than trying to let the CPU hold the values you display and getting them on demand (where get/set of many registers requires altering other registers). You would do a full save of the processor when the user stops the system, and a full restore when the user starts again. Then you only need to write the code to do the save/restore, and between the two events it doesn't matter what values are left behind in the CPU. This is essentially the same philosophy used for interrupt routines. For some (like AF) you may have to use PUSH/POP, so perhaps using that for most/all might make sense. Getting PC and SP may require a little finesse. But designing this like the save/restore of interrupt routines means you will end up with the SP in the same place as you started. Of course, PC will be off by each instruction you execute, so you may have to work that out. Perhaps this tilts the discussion back to actually using interrupts: it not only solves the detection of instruction boundaries but also handles the PC more naturally.

durgadas311
May 11th, 2018, 04:09 PM
Here's some code that does a full-CPU context save/restore. I modeled this as an interrupt routine, to suggest another idea for this as well.

Looking at the Z80 interrupt acknowledge timing, you could use that not only to handle the instruction-boundary issue but also to get the current PC. During the M1+IORQ cycle you can capture the PC off the address bus. Depending on whether you want to assume that the Z80 always has a "usable" SP, you can either let the interrupt push the PC (and RETI/RETN restore it) or use the AVR to extract the PC and force a jump when returning. My example uses the HALT instruction to suspend and let the AVR perform the "front panel" operations, since HALT produces an external signal and makes it easy for the AVR to detect. That eliminates the need to stop the CPU. In fact, you could put this code in a special bank of ROM and select it during the interrupt acknowledge sequence.

Anyway, just some ideas.


; context save/restore for an interrupt
; saves all CPU registers.
; restores all CPU registers except R.

        cseg                    ; might be ROM
intr:                           ; PC was saved during interrupt acknowledge
        ld      (savstk),sp     ; optionally save SP
        ld      sp,istk         ; (ditto)
        push    iy
        push    ix
        push    hl
        push    de
        push    bc
        push    af
        ex      af,af'
        exx
        push    hl
        push    de
        push    bc
        push    af
        ld      a,i
        ld      b,a
        ld      a,r             ; not really useful
        ld      c,a
        push    bc
; -------------------
        halt                    ; triggers AVR to do its thing
; AVR releases Z80...
        pop     bc
        ld      a,b
        ld      i,a
        ; don't set R
        pop     af
        pop     bc
        pop     de
        pop     hl
        exx
        ex      af,af'
        pop     af
        pop     bc
        pop     de
        pop     hl
        pop     ix
        pop     iy
        ld      sp,(savstk)
        ei
        reti                    ; or retn if NMI

; AVR may examine/change register values here:
        dseg                    ; RAM or equivalent
        ds      16              ; safety buffer, if using RAM
        ds      2               ; I/R (R is read-only)
        ds      2               ; AF'
        ds      2               ; BC'
        ds      2               ; DE'
        ds      2               ; HL'
        ds      2               ; AF
        ds      2               ; BC
        ds      2               ; DE
        ds      2               ; HL
        ds      2               ; IX
        ds      2               ; IY
istk:
savstk: ds      2               ; SP

        end

Hope this is some help.

alank2
May 12th, 2018, 04:40 AM
Thank you - for a Z80 instruction newbie like myself this will help greatly!

alank2
March 1st, 2019, 04:16 AM
So after recently building a Zeta V2 SBC, I'm again thinking of how I can bring a Z80 and AVR together for something interesting.

After talking with someone else who also did a Z80/AVR type project, I've figured out that one can use the WAIT and BUSRQ signals to handle the IN and OUT instructions. Instead of an open design that stays true to how the Z80 traditionally did things, I'm planning on putting only the AVR/SRAM/Z80 on the bus for simplicity. All I/O to the outside world will then go through the AVR. In this case I don't need logic to check whether a port access is IN or OUT, since all ports go to the AVR and I can use the port number as a command number. I didn't think the AVR could handle effective INs, but if I use a flip-flop to assert WAIT on IORQ going low, then the AVR can supply the value and also assert BUSRQ, so that when it releases WAIT the Z80 runs on to the BUSRQ hold, where it waits until the AVR can set the data lines back to inputs and let it continue. Also, commands like reading a 128-byte sector can be handled through BUSRQ DMA, where the AVR loads/saves SRAM itself instead of going through hundreds of IN/OUT instructions in a loop.
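The handshake described above can be sketched as plain Python so the ordering of the signal transitions is explicit. The signal names come from the post; everything else (the log list, the helper structure) is just an illustration, not a real AVR API.

```python
log = []

def service_z80_in(port, devices):
    # A hardware flip-flop has already asserted /WAIT when /IORQ went
    # low, freezing the Z80 mid-IN-cycle before this code ever runs.
    value = devices[port]               # the port number doubles as a command number
    log.append(("drive_data", value))   # AVR puts the byte on D0-D7
    log.append("assert_busrq")          # queue a bus request for the hand-off
    log.append("release_wait")          # Z80 latches the byte, runs until the BUSRQ hold
    log.append("tristate_data")         # AVR sets data lines back to inputs
    log.append("release_busrq")         # Z80 resumes execution
    return value

service_z80_in(0x10, {0x10: 0x5A})
```

The key point the sequence captures is that WAIT must not be released until BUSRQ is already pending, so the Z80 cannot start another I/O cycle before the AVR has reclaimed the data bus.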

I plan to have the AVR handle four serial interfaces, a four-drive virtual floppy interface to a FAT-compatible SD card using FatFs, virtual ROM (preloading SRAM with a ROM image), an RTC, a simple MMU/XMOVE, a piezo, and I/O with chained MCP23S17 16-bit I/O expanders.

Going to start with CP/M 2.2 and then try CP/M Plus, and maybe others after that. My goal is to function-ship as much to the AVR as I can and leave the maximum TPA available for CP/M 2.2.

The AVR will mostly be a coprocessor waiting on instructions from the Z80 to do its bidding, but it begins in charge of things. A reset pulldown will keep the Z80 in reset until the AVR actively drives the RESET output high to enable it. At startup, the AVR will preload SRAM with some sort of disk boot loader (or equivalent), and then I am going to try to use the INT line to feed a JMP instruction to the Z80 so that it executes the disk boot loader wherever it is. The DBL will then load the first sector of the CP/M 2.2 disk (the cold boot loader?) and boot normally.

The disk boot loader will be on my SD card as a ROM with a name like "0000 0xFF00 Disk Boot Loader.ROM", which indicates that it is ROM 0, to be loaded at 0xFF00, and that its name is "Disk Boot Loader". The idea is that you can then tell the AVR which ROM you want to boot and which DISK you want mounted. I plan on having a zero button on the PCB that resets it to ROM 0 / A:=DISK 0, and as long as you have a valid, bootable ROM 0 / DISK 0, you should always be able to get back to a working boot, because you could tell the AVR to boot ROM 1 or ROM 2, or eject the disk in A:, or change it to a different disk.

This raises the question, though, of why do the normal boot process at all? I could just have a 64K ROM with the entire CP/M system loaded and ready to use and skip all the disk boot loading, but I think for authenticity it would be nice to boot it the way it should be booted, using loaders and from disk.
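The ROM naming convention described above could be parsed on the AVR side with logic like this (a Python sketch for illustration only; the field layout follows the example filename in the post, and `parse_rom_name` is a hypothetical helper, not part of any existing tool):

```python
import re

def parse_rom_name(filename):
    """Parse a ROM filename of the form
    '<index> <load-address> <description>.ROM',
    e.g. '0000 0xFF00 Disk Boot Loader.ROM'.
    Returns (rom_index, load_address, description)."""
    m = re.match(r"(\d+)\s+(0x[0-9A-Fa-f]+)\s+(.+)\.ROM$", filename)
    if m is None:
        raise ValueError("not a ROM image name: " + filename)
    return int(m.group(1)), int(m.group(2), 16), m.group(3)

index, addr, name = parse_rom_name("0000 0xFF00 Disk Boot Loader.ROM")
```

Keeping the index and load address in the filename means the AVR never needs a separate configuration file to know where in SRAM to place each image.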

Question # 1 - back in the days of CP/M 2.2, people had to use the tools available to them, like MOVCPM, to generate a new version. I seem to remember that Grant's implementation compiled the CP/M 2.2 source - is that an easier way to bring up a new CP/M 2.2 system now than the tools that were available originally?

Question # 2 - were there any floppy disk formats that were 1MB-ish in size? I somewhat recall an 8" disk being like this, but I think I'm wrong. I dumped all the DISKDEF definitions from cpmtools, converted them into an Excel sheet, sorted them, and I don't see any.

Question # 3 - If looking at a size like 128 bytes per sector * 400 tracks * 16 sectors per track (800K), would it be smarter to do 128 * 200 * 32 so that 16 blocks could be fit into a directory entry instead of just 8? I am thinking of a 2K block size.


All thoughts and idea welcome.

durgadas311
March 1st, 2019, 06:16 AM
Re: Q1, Having the source definitely makes it simpler, although if your system image is static then you don't really need to relocate it. If you are allowing systems with less than 64K, or if your BIOS can be customized for specific hardware (and thus changes size), then you might need a scheme for creating different images.

Re: Q2, You could get over 1M on a DD DS 8" floppy. Formats used on Heathkit (Zenith) computers provided that capacity, and I'm sure others did as well. But it all depends on how you implement the disk interface, and whether such details are exposed/necessary. As long as the BDOS has the correct/matching DPB, the application code should not care (unless it is low-level code like FORMAT.COM or SYSGEN.COM).

Re: Q3, I don't think the low-level disk geometry matters to the efficiency of each directory entry. The block size (you said 2K) and the total number of blocks on the disk determine how the directory entries are used. Both of your geometries provide the same disk capacity, so they would not change how many blocks CP/M can put in the directory entries. Only changing the block size (or disk capacity) will affect that.
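A quick sanity check of the point above, in Python: the two geometries from Question 3 yield identical capacity, and tracks/sectors never enter the directory math, only capacity and block size do.

```python
# bytes/sector * tracks * sectors/track for each proposed geometry
geom_a = 128 * 400 * 16   # 128-byte sectors, 400 tracks, 16 sectors/track
geom_b = 128 * 200 * 32   # 128-byte sectors, 200 tracks, 32 sectors/track
block_size = 2048

print(geom_a == geom_b)        # True: both are 819200 bytes (800K)
print(geom_a // block_size)    # 400 blocks either way
```

Since the block count is the same in both cases, the BDOS builds exactly the same DPB-level view of the disk regardless of which geometry the hardware uses.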

alank2
March 1st, 2019, 08:08 AM
Thanks for the reply.

Not looking to change the RAM part, just talking about an initial build of 2.2 for it. Is it easier to use tools like MOVCPM, or just to compile the CP/M 2.2 source, for example?

Do you have any links to the Heathkit/Zenith formats?

I've been reading the Programmer's CP/M Handbook, and one thing I noticed is that the 16 bytes used for recording which blocks are allocated can hold 8 words or 16 bytes, depending on the total number of blocks on the drive. My Q#3 above was worded wrong. I should have said: if I have 800K and use 2K blocks, there are 400 of them, requiring words in the directory entries, but if I use 4K blocks, there are only 200 of them, requiring only bytes in a directory entry. Which is the better trade-off, smaller block size or more efficient directory entries?

Plasmo
March 1st, 2019, 10:13 AM
The current trend in low-chip-count Z80 SBCs is using a powerful second processor to do I/O and bootstrap for the Z80. However, the performance of the resulting dual-processor design is often less than what a standalone Z80 can achieve, which is 20+MHz for a CMOS Z80, and the complexity of the software makes software changes rather difficult (here I'm speaking for myself).

Perhaps because I'm more comfortable with logic design than complex software design, my own solution to a low-chip-count Z80 SBC is using a modest 5V CPLD as the Z80 glue logic, with extra logic designed in to implement a serial bootstrap function. Serial bootstrap allows battery-backed RAM to be loaded with bootstrap code and the serial port to be reused for the console, so the ROM-less Z80 SBC consists of RAM, Z80, CPLD, a non-volatile controller, and a battery. The simplicity of the logic allows the Z80 to run at 22MHz with a conventional single-processor programming model. The SBC is fairly small and compact, about 2" x 6" including the CF disk.

The problem I'm running into with the 22MHz Z80 is that not many traditional Z80 peripherals can run at that speed. So I'm redesigning the Z80 SBC into a motherboard with 3 RC2014 expansion slots, so I can develop faster peripherals for the 22MHz Z80. https://www.retrobrewcomputers.org/doku.php?id=builderpages:plasmo:z80mb64

I'm in the middle of writing a series of projects on Hackaday on how to build & test a 22MHz Z80 in baby steps. https://hackaday.io/project/163786-building-a-22mhz-z80-computer-in-4-stages
Bill
Edit: the name of Z80 motherboard is z80mb64 and the URL is:
www.retrobrewcomputers.org/doku.php?id=builderpages:plasmo:z80mb64

alank2
March 1st, 2019, 11:16 AM
Good thoughts Plasmo - and a good point that performance will be limited by the Z80 waiting on the coprocessor. In some cases it won't be a big deal, because the Z80 would be waiting for disk I/O that was slow anyway, but in other cases it might take the AVR a few cycles to run an interrupt and capture an outgoing serial byte, for example. I'm going to do what I can to avoid bottlenecks, and some things done on the AVR might actually be faster - the AVR loading a 128-byte sector directly into SRAM may beat the Z80 requesting the sector byte by byte and doing it itself. I'm going to run the Z80 at 20MHz and the AVR at a baud-friendly 18.432MHz.

durgadas311
March 1st, 2019, 01:14 PM
Thanks for the reply.

Not looking to change the RAM part, just talking about an initial build of 2.2 for it. Is it easier to use tools like MOVCPM, or just to compile the CP/M 2.2 source, for example?

Do you have any links to the Heathkit/Zenith formats?

I've been reading the Programmer's CP/M Handbook, and one thing I noticed is that the 16 bytes used for recording which blocks are allocated can hold 8 words or 16 bytes, depending on the total number of blocks on the drive. My Q#3 above was worded wrong. I should have said: if I have 800K and use 2K blocks, there are 400 of them, requiring words in the directory entries, but if I use 4K blocks, there are only 200 of them, requiring only bytes in a directory entry. Which is the better trade-off, smaller block size or more efficient directory entries?

Right, that makes more sense - comparing 2K and 4K block sizes. The trade-off is that the larger the block, the more "wasted" space for small files, or even for larger files that happen to be a few bytes larger than a block (or a multiple of blocks). You have to decide whether you want to target large files or small files. If most of your files are going to be one block, or a small number of blocks, then they are going to have one directory entry in either case. You sort of have to pick your favorite sweet spot and design for that.
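To put rough numbers on that trade-off, here is a Python sketch using the usual CP/M 2.2 rules: with more than 256 blocks on the disk, block pointers take two bytes, so a 16-byte allocation map holds 8 pointers instead of 16. The slack figures assume whole-block allocation; the helper functions are my own illustration, not from the discussion above.

```python
def layout(disk_bytes, block_size):
    """Block count, pointers per 16-byte directory map, and bytes one map covers."""
    blocks = disk_bytes // block_size
    pointers = 16 if blocks <= 256 else 8   # 1-byte vs 2-byte block numbers
    return blocks, pointers, pointers * block_size

def slack(file_size, block_size):
    """Bytes wasted rounding file_size up to whole blocks."""
    used = -(-file_size // block_size) * block_size   # ceiling division
    return used - file_size

for bs in (2048, 4096):
    print(bs, layout(800 * 1024, bs), slack(100, bs), slack(5000, bs))
```

On the 800K disk, 2K blocks give 400 blocks and only 8 pointers per entry (16K mapped), while 4K blocks give 200 blocks and 16 pointers per entry (64K mapped), but roughly double the slack on small files.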

Here is an old CP/M manual from a provider of CP/M for Heath/Zenith H89 computers: http://sebhc.durgadas.com/mms89/docs/mms-cpm-224.pdf . On page 60 you'll see a table of known diskette formats at that time. You have to do the math to compute capacity, but you'll see some 8" formats that have 607 blocks of 2K, or about 1.2M.

alank2
March 1st, 2019, 03:05 PM
Thanks - that gives me something to check out - I appreciate it!

alank2
March 1st, 2019, 03:12 PM
Also, why is there an 8MB limit if it deals in terms of blocks - shouldn't a 4K block size have a larger limit than a 2K block size? Does it have something to do with 128 * 65536 = 8MB?

durgadas311
March 1st, 2019, 05:45 PM
I think the limitation had to do with internal BDOS calculations - the record number uses 16 bits (so, yes, 128 * 65536). In CP/M 3 that limitation is raised to 512M - which is the size of 32768 blocks of 16K (max DSM is 32767, not 65535 - not sure why, perhaps they actually use the sign bit for something).
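The arithmetic behind both limits mentioned above is easy to verify:

```python
RECORD = 128                      # CP/M logical record size, in bytes

# CP/M 2.2: the BDOS tracks records with 16-bit numbers
cpm22_limit = 65536 * RECORD
print(cpm22_limit)                # 8388608 bytes = 8M

# CP/M 3: up to 32768 blocks (max DSM 32767) of 16K each
cpm3_limit = 32768 * 16 * 1024
print(cpm3_limit)                 # 536870912 bytes = 512M
```

So the 8M ceiling falls out of the 16-bit record number, independent of the block size chosen, which answers the "shouldn't 4K blocks raise the limit?" question.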

alank2
March 1st, 2019, 06:28 PM
So because a file is limited to 8M, that places a limit on the total file system size as well? Since the filesystem is managed as blocks and not records, I would have thought it would not be constrained that way. I can see why a file would be, though, if it tracks the number of 128-byte records...

durgadas311
March 2nd, 2019, 03:06 AM
The filesystem is organized (allocated) as blocks, but it is accessed as records. The relative record within a file is translated to an actual record number on the filesystem. I have not closely examined the BDOS 2.2 code, but I believe the limitation has to do with how that arithmetic is done in the BDOS. Because it only uses 16-bits, and operates on records, it has the 65536 * 128 limitation. CP/M 3 takes more care when doing that arithmetic, and thus is able to handle larger capacities (512M). It also increased the file limit to 32M.

alank2
March 2nd, 2019, 08:27 AM
In cpmtools3 they have a "seclen" field, but CP/M 2.2 has no such field, correct? (because it only deals in terms of 128 byte sectors). How does cpmtools3 turn an entry like this into a valid CP/M 2.2 diskdef?

# CP/M 86 on 1.44MB floppies
diskdef cpm86-144feat
seclen 512
tracks 160
sectrk 18
blocksize 4096
maxdir 256
skew 1
boottrk 2
os 3
end

Does it multiply something like the sectors per track by the seclen/128 factor or something?

I made an Excel calculator and came up with this so far for the disk format to support (the left is mine, the right is the 1.44 from above), but again I don't see how it works in CP/M 2.2 unless the seclen becomes 128 and the sectors per track become 72.

51491

This forum's bitmap reduction is annoying.

http://home.earthlink.net/~alank2/cpmdisk.png

durgadas311
March 2nd, 2019, 08:56 AM
In cpmtools3 they have a "seclen" field, but CP/M 2.2 has no such field, correct? (because it only deals in terms of 128 byte sectors). How does cpmtools3 turn an entry like this into a valid CP/M 2.2 diskdef?


I think cpmtools doesn't need to convert it into a CP/M 2.2 DPB. It knows whether the diskdef is CP/M 3 or 2.2 and handles it accordingly. No CP/M BDOS ever sees those diskdefs; they are only used by cpmtools.

If you have 128-byte physical sectors, you generally cannot get as much data per track as with larger sector sizes, due to formatting overhead. According to the above data, at 128x32 per track you get 4096 bytes per track, and with 512x18 you get 9216 bytes per track (formatted). If you are translating the 512x18 diskdef to a CP/M 2.2 DPB, you then use an SPT of 72. I would surmise that the 512x18 diskdef is mapping both sides into a single track.
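The translation above can be sketched as a one-liner: the CP/M 2.2 DPB's SPT field counts 128-byte records, so a physical geometry converts via total bytes per track. (The helper is my own illustration; cpmtools' actual internals may differ.)

```python
def spt_in_records(seclen, sectrk):
    """SPT for a CP/M 2.2 DPB: 128-byte records per physical track."""
    bytes_per_track = seclen * sectrk
    assert bytes_per_track % 128 == 0   # must divide evenly into records
    return bytes_per_track // 128

print(spt_in_records(512, 18))    # 72: the DPB SPT for the 1.44M diskdef above
print(spt_in_records(128, 32))    # 32: 128-byte sectors map one-to-one
```

The BIOS then deblocks each 512-byte physical sector into four 128-byte records when the BDOS asks for them.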

alank2
March 14th, 2019, 01:59 PM
I'm back at it: getting a Z80 and AVR to work together. So far it is going really well. The picture below shows my massive rat's nest of wiring on breadboards. I've been able to get IN and OUT instructions working perfectly with the AVR, and I also figured out how to disable my SRAM and feed the Z80 instructions directly, so I can give it a JMP instruction to start somewhere besides 0x0000.

51699