So I've been playing around with the z88dk cross compiler, and its maintainer has been great about fixing issues I've come across.
Its C runtime does support file operations for CP/M, but they are unbuffered, and because of this it only accesses files with the random-access BDOS calls. Still, there is some equipment, like the Epson PX-8 cassette drive, that only supports sequential operations.
This got me thinking: could it switch between sequential and random operations as needed? I've read that the sequential file position after a random operation is set to the beginning of the record that was transferred. This seems odd to me, as you would think it would be the end, but perhaps it is more a byproduct of the BDOS having to change the FCB values to reach the random record than it being concerned with where the file position ends up.
I've read this page - http://www.seasip.info/Cpm/fcb.html
And it says: "You can rewind a file by setting EX, RC, S2 and CR to 0."
Towards the end it also mentions that:
CR = current record, ie (file pointer % 16384) / 128
EX = current extent, ie (file pointer % 524288) / 16384
S2 = extent high byte, ie (file pointer / 524288). The CP/M Plus source code refers to this use of the S2 byte as 'module number'.
So essentially CR is the lowest 7 bits of the record number, EX is the next 5 bits, and if using CP/M Plus, S2 is perhaps 4 or more bits above that.
Questions:
Q#1 - I am thinking these values were easier for CP/M to deal with, perhaps even kept for compatibility with 1.3 or thereabouts. Instead of putting R0/R1/R2 in the FCB in the first place to handle both sequential and random access, were these values more convenient to store and use in these odd bit distributions?
Q#2 - I've read there was a 512 K sequential file limit for 2.2. Is this because you only have 7 + 5 = 12 bits, i.e. 2^12 = 4096 records = 512 K?
Q#3 - What is the purpose of RC? It isn't mentioned at all in their footnote showing how the values are distributed.
Q#4 - Can these values be manipulated manually? For example, suppose I do a file operation as random and then want to switch back to sequential mode, but I don't want to waste a disk operation on a single sequential read just to get to the end of that random record. Can CR simply be incremented? If it overflows, EX is incremented; if EX reaches 32, S2 is incremented. Or is this not possible because RC also plays a role and must be updated properly as well?
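In other words, the increment I have in mind would look something like this. This is only a sketch against the usual 36-byte FCB layout (EX at offset 12, S2 at 14, RC at 15, CR at 32), and it deliberately leaves RC alone, which is exactly what Q#5 is asking about:

```c
#include <stdint.h>

/* The standard 36-byte CP/M FCB, laid out as a struct for clarity. */
struct fcb {
    uint8_t drive;                 /* 0 = default, 1 = A:, ... */
    uint8_t name[8], type[3];
    uint8_t ex, s1, s2, rc;        /* extent, reserved, module, record count */
    uint8_t alloc[16];             /* allocation map */
    uint8_t cr;                    /* current record within extent */
    uint8_t r0, r1, r2;            /* random record number */
};

/* Advance the sequential position by one 128-byte record, propagating
   the carry through CR (7 bits) -> EX (5 bits) -> S2, as described above.
   Whether RC must also be touched is the open question. */
static void fcb_advance_one_record(struct fcb *f)
{
    if (++f->cr >= 128) {          /* CR overflowed its 7 bits */
        f->cr = 0;
        if (++f->ex >= 32) {       /* EX overflowed its 5 bits */
            f->ex = 0;
            f->s2++;               /* bump the module number (CP/M Plus) */
        }
    }
}
```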
Q#5 - If RC must be updated, is the rule consistent, or does its value work differently across CP/M drive configurations, such as 1-byte vs 2-byte extents?