PCJr lesson learned - STAY AWAY FROM DOS!

deathshadow

One of the big reasons for rewriting Paku Paku and the engine it's based on from scratch was to make it more viable on an unexpanded 128k PCjr... which, laughably, lowers the system target from the original 5150 concept... a LOT.

I've got my own internal profiler that lets me break the code into slices while running them in their normal order. It lets me dial in things a normal profiler wouldn't tell me -- isolating spikes in execution in realtime that would be missed by 'classic' profiling methods.

I was thinking that blitting the screen updates was the big bottleneck, but isolated to its own slice it wasn't taking any more or less time -- yet the slice it was originally in was still eating up massive time... Move it back and instead move out the refresh of the old data to the backbuffer... and that's not it either? The only thing left in there is keyboard... KEYBOARD? That makes no sense. Wait... it's spiking on every keypress (or key-repeat)! Something that on a "real" PC takes effectively zero time (below the 2400ths-of-a-second accuracy of my profiler) is taking almost a sixtieth of a second?

Then it hit me -- the Junior's keyboard process is REALLY convoluted.

1) messages are handled by the NMI.

2) The ISR then uses a lookup table to turn those scancodes into PC/XT scancodes just so "normal" software can work at all.

3) My code is based on replicating TP's CRT keypressed and readkey -- which call int 21h functions 0x07 and 0x0B... and DOS is in RAM.

4) Those int 21h calls themselves call int 16h in BIOS...

5) Both of which mean pushing/popping a lot of registers to RAM.

6) ... and RAM on an unexpanded PCjr is slow as molasses, since it's the same speed as video RAM.

Add that mess all together? OUCH. Almost the same amount of time as an entire frame of video, sucked down JUST by checking whether a key has been pressed and reading its value.

That's why this original code:

Code:
; function readkey:char;
pProcNoArgs readkey
	mov    ah, 0x07          ; DOS "direct console input without echo"
	int    0x21              ; waits for a key, returns it in AL
	retf

; function keypressed:boolean;
pProcNoArgs keypressed
	mov    ah, 0x0B          ; DOS "check standard input status"
	int    0x21              ; AL = 0xFF if a key is waiting, 0x00 if not
	retf

is a disaster on the Junior... when a key is actually pressed, running both of them takes longer than blitting all 9 game sprites (4 pellets, 4 ghosts, player) from the backbuffer to the screen AND updating the score.

So... thinking on how to fix the problem.

1) I was looking at it and int16h AH=0x00 returns PC/XT scancodes regardless of the underlying hardware... so why am I dicking with ASCII and char0/extended codes?

2) Int16h AH=01 (equivalent to keypressed) only checks if 0x0040:001A == 0x0040:001C, so rather than have the overhead of an INT involved, why not just check that my damned self?

3) The CASE statement to check which key is pressed would be faster if there were fewer values -- so how about a lookup table for the scancodes to translate the ones that do the same thing, like turning arrow keys into WASD? Sucks down a handful of RAM, but would also let me filter oddities like the AT&T and Tandy numpad 6 returning 0xFC when num-lock is off (instead of the proper response of 0x77).

4) Since the Jr. has a function to do it, disable key repeat. For good measure, set the repeat rate really slow with a long delay on non-Jr systems (see the sketch after this list).

5) Trap pointless repeated keystrokes and strip them out of the buffer.

6) You know the Junior's memory scheme is crap when ROM is faster than RAM.
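
For item 4, a minimal sketch of what the typematic calls might look like, going by the documented INT 16h AH=0x03 subfunctions (AL=0x04 turns repeat off on the Jr; AL=0x05 with BH/BL sets delay/rate on AT-class BIOSes and should be harmlessly ignored on older machines). The routine name is made up for illustration -- a real version would probably check the model byte at F000:FFFE (0xFD on the Jr) and pick one call or the other:

Code:
; procedure quietKeyRepeat;  (illustrative -- not from the actual game source)
pProcNoArgs quietKeyRepeat
	mov    ax, 0x0304        ; AH=0x03 typematic control, AL=0x04 = repeat OFF (PCjr)
	int    0x16
	mov    ax, 0x0305        ; AL=0x05 = set rate/delay (AT-class; no-op elsewhere)
	mov    bx, 0x031F        ; BH=0x03 longest delay (~1 sec), BL=0x1F slowest rate (~2 cps)
	int    0x16
	retf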

So the routine I came up with is this:
Code:
; function gameKey:char;
pProcNoArgs gameKey
	xor    ax, ax
	mov    es, ax            ; ES = 0 so the BDA can be reached directly
	mov    dh, [lastKey]     ; scancode returned on the previous call
.loop:
	xor    ax, ax            ; AH = 0 selects int 16h "read key"; AL = 0 doubles as the "no key" result
	mov    cx, [es : 0x041A] ; keyboard buffer head (0040:001A)
	cmp    cx, [es : 0x041C] ; head == tail means the buffer is empty
	je     .noKey
	int    0x16              ; pull the keystroke: scancode in AH, ASCII in AL
	cmp    ah, dh
	je     .loop             ; same scancode as last time? key repeat -- discard it
	mov    al, ah
	mov    bx, controlRemap
	xlat                     ; remap the scancode (arrows -> WASD etc.), 0 if unrecognized
.noKey:
	mov    [lastKey], ah
	retf

Since no key is assigned scancode zero, simply returning zero for "no key pressed" OR an unrecognized key seems good enough. Oh, cute trick in there: since the BDA is in the bottom 64k, you can use segment zero to access it. Notice I store the real scancode for the compare, not the XLAT'd one.
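
The controlRemap table that XLAT indexes isn't shown above, but conceptually it's just a 256-byte scancode-to-scancode map. Here's a minimal sketch of the idea -- the key choices, the init routine and its name are purely illustrative, not the game's actual table:

Code:
; 256 bytes, indexed by scancode via XLAT; 0 = "not a game key"
controlRemap:
	times 256 db 0

; one-time init: fold the arrows onto WASD (standard set-1 scancodes)
pProcNoArgs initControlRemap
	mov    byte [controlRemap + 0x11], 0x11   ; W
	mov    byte [controlRemap + 0x48], 0x11   ; up arrow    -> W
	mov    byte [controlRemap + 0x1E], 0x1E   ; A
	mov    byte [controlRemap + 0x4B], 0x1E   ; left arrow  -> A
	mov    byte [controlRemap + 0x1F], 0x1F   ; S
	mov    byte [controlRemap + 0x50], 0x1F   ; down arrow  -> S
	mov    byte [controlRemap + 0x20], 0x20   ; D
	mov    byte [controlRemap + 0x4D], 0x20   ; right arrow -> D
	mov    byte [controlRemap + 0x01], 0x01   ; Esc passes through
	retf

The whole table could just as easily be built with db/times at assembly time; runtime init only keeps the listing short here.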

... and this simple change reduces keyboard handling back to being 'unnoticeable' just like on a real PC

This probably explains why int 21h functions 0x09 and 0x06 inhale sharply upon the proverbial equine of short stature as well... strings sent through function 0x09 output to the screen slower than old-school 300 baud modem communications.

I'd probably use my own output routines there if not for trying to keep the size under control... though it LOOKS like I may end up with some spare space. If once the game is fully functional and playable the memory footprint is less than 64k, I may go back and write something better for those too.
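
For the curious, "my own output routines" for text would likely boil down to poking character/attribute pairs straight into video RAM instead of letting DOS tease them out one byte at a time. A minimal sketch, assuming standard colour text mode at B800:0000 -- the name and register calling convention here are illustrative, not the game's:

Code:
; DS:SI = ASCIIZ string, DI = offset into the page (row*160 + col*2), AH = attribute
pProcNoArgs putString
	mov    bx, 0xB800
	mov    es, bx
	cld
.next:
	lodsb                    ; grab the next character
	or     al, al
	jz     .done             ; zero terminator
	stosw                    ; write char (AL) + attribute (AH) straight to video RAM
	jmp    short .next
.done:
	retf

Even on the Jr, where B800 still lands in the same slow shared RAM, this skips the whole DOS-to-BIOS teletype chain.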

THEN there was a really strange error: if I tried to play with the keyboard when the joystick was enabled, the system went off to never-never land. Play with the joystick, fine... keyboard with the joystick disabled, fine... joystick enabled but playing with the keyboard, totally banjaxed... and I was unable to recreate this on the Tandy 1000, my Sharp PC7000, or in DOSBox...

See my first 'problem' above -- keyboard is handled by the NMI? The keyboard NMI expects a valid stack with enough room on it for whatever it is it thinks it's doing. My joystick routine was repurposing SP to store "0" for adc in an attempt to make 'inside the loop' faster since that routine doesn't need a stack... rather than disable the NMI, I just switched those back to "adc r16, 0" instead of "adc r16, sp"
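
For anyone who hasn't seen the trick, here's the rough shape of it -- an illustrative sketch only, NOT the actual joystick routine: "adc r16, sp" assembles to two bytes against three for "adc r16, 0", and on a fetch-starved 8088 every byte counts.

Code:
; illustrative only -- not the game's actual joystick routine
pProcNoArgs spZeroSketch
	cli                      ; masks IRQs... but NOT the PCjr's keyboard NMI
	mov    es, sp            ; park the real stack pointer in ES (unused in the loop)
	xor    sp, sp            ; SP = 0, a "free" zero register
	xor    bx, bx            ; X axis count
	xor    dx, dx            ; Y axis count
	mov    cx, 4096          ; fixed number of polls
	out    0x201, al         ; kick the game port one-shots (value written doesn't matter)
.poll:
	in     al, 0x201
	shr    al, 1             ; bit 0 (stick A, X axis) -> carry
	adc    bx, sp            ; bx += carry... "adc bx, 0" one byte shorter
	shr    al, 1             ; bit 1 (stick A, Y axis) -> carry
	adc    dx, sp
	loop   .poll
	mov    sp, es            ; put the real stack back before anything can push
	sti
	retf

The moment the Jr's keyboard NMI fires in the middle of that, it happily pushes its return frame through SP = 0... hence never-never land.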

Laughably, changes elsewhere (thanks guys for noticing some of my flubs) tripled the output range of that routine anyway, so switching back to an immediate zero is no biggie. No more crashes there -- and I'm getting a VERY usable range of 1..28 for the joystick reads on the Junior, 0..94 on the 1000SX in "Slow".

Cute how on a real PC nothing ever fires the NMI to go poking at your stack mid-game, but the Junior? Pfft.

... and thanks to these changes, I've dragged my codebase kicking and screaming into being fast enough on an unexpanded 128k Jr.

Of course I'm sitting here with sockets, 41256's and a 128k external expansion ... and not bothering for two reasons.

1) moving stuff into faster high memory is "cheating" and not what I'm testing.

2) My Parkinsonism is so bad right now I'm not certain I trust myself soldering anything that detailed... Hoping that maybe come spring I'll be up for it, if not I might have to ask if anyone out there would be willing to do that for me. At least I have a nice 1" wide dual-tip 300 watt baton for pulling the old chips in one yank without snipping the legs.

In any case, if you care about speed on the Junior, stay the hell away from Int 21h (DOS) when you have a viable alternative... and since Turbo Pascal's CRT unit uses the same calls, that's just another reason NOT to use CRT. It's also a laugh that something that takes almost no time on a real PC was burying the game on the Junior.
 
Nice work - I knew the architecture of the PCJr was quite different to that of the original PC but wow....

It strikes me that IBM must have designed the PCJr to be 'less than optimal', especially when it comes to software compatibility. I guess that this was a marketing decision so that the lower cost Jr did not compete with the higher cost PC/XT.
 
First, everybody knows the PCjr was a budget system. Corners were cut. The slow memory, ROM being faster than the first 128K of RAM, the use of NMI for the keyboard, and the additional translation steps needed to map scancodes to their PC equivalents are well documented. The machine was also around half the cost of a comparable IBM PC at the time. (Also on the 11 o'clock news tonight, water is still wet.)

For anybody who wants a good read of what the Jr has to do to be compatible:

http://brutman.com/PCjr/pcjr_keyboard_handling.html

Second, fast keystroke processing is generally not a requirement for computing. How fast do you type? At 100 words per minute and 5 chars per word you would need to process 500 keyboard interrupts per minute, or about 8 keystrokes a second. I'm sure Flight Simulator and other games that ask about your keyboard layout ran into similar problems and did something similar.

Third, overrunning your stack is a rookie mistake. Sorry.

Fourth, I think it's BIOS you want to avoid, not DOS. Pretty much all of the added overhead you describe is at the BIOS level, not the DOS level.

-------------

I choose to do all of my benchmarking and compatibility tests on a PCjr because if it can work there, it works well anywhere. It's an amazingly capable system for what is effectively the only IBM PC near clone that just happens to be made by IBM.
 
Yes the Jr's keyboard handling is notorious. If you are downloading a file over a modem at any speed, you don't dare press a key on the keyboard - or pooof... z-modem timeout!

I've done a hack in the past on my Jr to completely disable the on-board 128K for system RAM and get it all from expansion. However, I ran into issues with the PCjr's OWN BIOS not using the standard CGA video window (which the Jr's hardware also implements) to access and pan around the now-128K of totally dedicated video RAM after the hack. And it created compatibility problems with code meant to fix compatibility problems between the Jr and the PC. Go figure. You can read more about it here:

http://www.brutman.com/forums/viewtopic.php?f=1&t=377&sid=1f817f5d62cf4e4f5907389f7c47691c

Do you have a Jr-IDE card?
 
I concur... nice work!

Out of curiosity, would you be willing to post pseudocode for your profiler? I guess it's as simple as reprogramming the PIT to 2400 ints/sec, and then reserving BSS for profiler vars, but it still looks interesting.

Third, overrunning your stack is a rookie mistake. Sorry.
... I do that once a week if not more :(.
 
First, everybody knows the PCjr was a budget system.
No, it's not news, but sometimes the differences are so mind-numbingly stupid and reek of the type of back-room penny pinching that only happens when non-engineers, or engineers too clever for their own good, aren't being grounded by people with the common sense to say "spend the five cents". In a lot of ways some of this stuff reminds me of Woz's "cost saving shortcuts" that never explained why the Apple II was twice the price or more of its competitors (other than the ignorance of the average person who buys Apple stuff).

But worse, the "cost cutting" claim doesn't explain why so much expensive but ultimately useless crap made its way into the Jr. Like the IR keyboard, or multiple cartridge ports (with insufficient bus width to put anything "real" on a cart) -- again reeking of marketing executives throwing "gee ain't it neat" garbage at the wall to see what sticks, while intentionally hobbling the device because they didn't want it to compete. It was designed to be a failure on purpose.

... and of course the "cost cutting" excuse is given the lie the moment you look at the clone industry just a couple years later -- do you really think IBM's deep pockets couldn't have done better at that price point? Much less the marketing-idiot stupidity of "we don't want to compete with the PC or XT" when they should have been thinking "we want to sell a lot of these"; you are NEVER in competition with yourself -- a notion a lot of marketing exec types NEVER seem to grasp!

Second, fast keystroke processing is generally not a requirement for computing. How fast do you type? At 100 words per minute and 5 chars per word you would need to process 500 keyboard interrupts per minute, or about 8 keystrokes a second. I'm sure Flight Simulator and other games that ask about your keyboard layout ran into similar problems and did something similar.
... True, but it shouldn't be consuming so much time that you're left with nothing.

Third, overrunning your stack is a rookie mistake. Sorry.
Not so much an overrun as not expecting anything to use the stack with normal interrupts disabled -- which is why it's FINE on a real PC. All I was doing was moving SP into ES (since I didn't need ES inside the loop), setting SP to zero to use in an ADC, and restoring it, the whole thing wrapped in a CLI/STI. There's NO reason for the NMI to fire during gameplay or joystick reads on a real PC. I had forgotten about the Junior's convoluted keyboard handling using the NMI... which likely explains why certain programs make the Junior flake out.

Let's face it, what trips an NMI on a 'real' PC? RAM failure, FPU exception and... and... AND?

Fourth, I think it's BIOS you want to avoid, not DOS. Pretty much all of the added overhead you describe is at the BIOS level, not the DOS level.
MOST of the overhead was from calling INT 0x21 functions 0x07 and 0x0B. As I said: 1) they wrap INT 0x16, 2) they run from slow RAM, 3) they push/pop to RAM just to call something that pushes/pops to RAM. Reducing it to calling int 16h just ONCE and checking the head/tail directly reduced the execution time from "equal to an entire vertical refresh" to "can't even measure it".

... and laughably I tried making my own ISR9 handler, and on the Junior the BEST I could come up with was SLOWER than calling the BIOS because, again, ANYTHING run from RAM is so painfully slow on the Junior that even ROM is faster.

I choose to do all of my benchmarking and compatibility tests on a PCjr because if it can work there, it works well anywhere.
That I can agree with; it's a great "minimum target" as pretty much ANYTHING is better than it. You code for the lowest target you can find, and you're good everywhere. Though the caveat is it can also do harm -- that's why game programs for the "minus 60" sucked, as most devs targeted the C=16 as their minimum target...

It's an amazingly capable system
I wouldn't go that far, at least not stock. You can MAKE IT remarkably capable if you don't mind doubling the case width with expansion boxes.

Do you have a Jr-IDE card?
No real reason to - anything I'm doing fits on a 360k floppy alongside DOS with room to burn -- and since I do my coding and compiling on modern systems and then sneaker-net it over, I don't have anything big enough to warrant even having a HDD on the Jr. Witness the 60 meg RLL in my 1000SX, which 99.99% of the time is used for nothing but booting the machine.
 
It strikes me that IBM must have designed the PCJr to be 'less than optimal', especially when it comes to software compatibility. I guess that this was a marketing decision so that the lower cost Jr did not compete with the higher cost PC/XT.
A lot of the Jr.'s failings can be excused by "budget," I guess, but I suppose we shouldn't rule out deliberate misdesign, given that we know for a fact that IBM hobbled the AT to prevent it from encroaching on their low-end minis.
 
Out of curiosity, would you be willing to post pseudocode for your profiler? I guess it's as simple as reprogramming the PIT to 2400 ints/sec, and then reserving BSS for profiler vars, but it still looks interesting.

It works based on my time-slicing system. I put each major 'block' that can't be broken up into its own 'slice' and stay with my stock 120Hz timer. Each slice "waits" to see how much time is left: every timer tick the word-width "tickCounter" is incremented, and the "wait" routine loops until it's non-zero, keeping track of how many loops it ran along the way.

Code:
; procedure waitTimer;
pProcNoArgs waitTimer
	mov   ax, 0              ; loop counter
.loop:
	or    ax, ax
	js    .cmpTickCounter    ; once the count goes negative (32K+ loops) stop counting
	inc   ax
.cmpTickCounter:
	cmp   [cs : tickCounter], WORD 0
	je    .loop              ; spin until the 120Hz ISR bumps tickCounter
	dec   WORD [cs : tickCounter]
	mov   [timerWaited], ax  ; how many loops of "free" time this slice had
	retf

I use the sign bit as the trigger to say "if it's looped more than 32K times, just stop counting and use that value."

For each time slice, before I call waitTimer I output tickCounter's value -- this shows how many timeslices are 'backed up' in an overload situation. Then after each waitTimer call I output the value in timerWaited showing how much 'free' CPU time there was before the loop ran.
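
In other words, each slice ends up bracketed something like this -- all the names below are illustrative, not the engine's actual labels, and the 120Hz ISR's contribution to the scheme is presumably little more than an "inc word [cs:tickCounter]":

Code:
; one instrumented slice (illustrative)
	mov    ax, [cs : tickCounter]
	call   logWord           ; non-zero here = earlier slices have already overrun
	call   sliceWork         ; whatever work this slice is responsible for
	call   waitSlice         ; same logic as waitTimer above -- burn off the rest of the 1/120th
	mov    ax, [timerWaited]
	call   logWord           ; loops of headroom left when the tick finally arrived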

On a stock 4.77mhz PC the normal output on each timeslice usually returns tickCounter = 0 and a timerWaited = 240..260. You switch to the junior, and timerWaited drops to anywhere from 3..8 on most slices. An empty slice (used to slow down slower game levels) returns the limit on PC and around 20 on the junior.

I round to ten, and at a 120Hz timer that's a rough 2400ths-of-a-second accuracy. Isolating the keyboard handler to its own slice on the PC still returned that 32768... but on the Junior it returned an overflow of 1 on the next slice AND, of course, 0 timerWaited. With an empty slice after it, I got a timerWaited of 2 on the slice after that -- meaning keyboard reads were consuming an entire timeslice AND about 95% of the allocated time for the slice after. :(

... and now it doesn't even register, just like PC.

I actually display these values as a bar graph in realtime in addition to logging them, so at a glance I can see what's going on, rearrange the code in each slice, compile and run again... This is made easier by my now having the 360k floppy drive in my "middle" machine (5x86-133 running at 150) visible on the LAN and mapped as disk A in DOSBox on my workstation; since I do all my development using modern hardware, I simply test on the real deal. 99% of my code is written using Flo's Notepad2, an EXCELLENT programmer's editor based on Scintilla without being a total piece of SCITE.

I use timeslicing because I don't want to have to hook audio support for a dozen different sound cards into the timer ISR (that would suck HARD), but I need at LEAST 120Hz audio updates to recreate in software what Pac Man can do with its custom PSG in hardware. MOST PC sound cards (especially early ones) can only generate a fixed frequency at a fixed volume -- if you want something more complex than "beep" for a sound effect at anything faster than a quarter note, you need to update that frequency far, FAR faster. Even the simple "siren" in the background needs at LEAST 60Hz, and sounds far better at 120Hz.
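
For reference, a per-tick speaker update is nothing more than reloading PIT channel 2 with a new divisor -- a minimal sketch, assuming the speaker gate bits on port 0x61 were already turned on at startup, and very much generic PC-speaker code rather than the game's actual sound driver (which also has to cover the Tandy/Jr PSG and friends):

Code:
; near helper -- AX = divisor, where divisor = 1193182 / desired frequency in Hz
speakerSetDivisor:
	push   ax
	mov    al, 0xB6          ; PIT channel 2, lobyte/hibyte, mode 3 (square wave)
	out    0x43, al
	pop    ax
	out    0x42, al          ; low byte of the divisor
	mov    al, ah
	out    0x42, al          ; high byte of the divisor
	ret

Call that 120 times a second with fresh divisors and the "fixed frequency at a fixed volume" limitation stops mattering nearly as much.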

Even the simple "chomp" sound -- you only get the sound every other pellet(ish), and it lasts about two-thirds of the movement rate at the fastest game speed. That works out to 8 ticks at 60Hz -- so only 8 separate frequency changes, which sounds... meh. Double it? Suddenly you're able to do so much more.

Unlike the PacMan hardware, where they could just say "play this sound" and the PSG handled it, every sound has to be generated from software. Of course 120Hz is also handy for doing 2-voice arpeggio on the speaker, which greatly improves how the game sounds when there is no real sound card.
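
A 2-voice arpeggio on the one-channel speaker is then just a matter of flipping between two divisors on alternating ticks -- again purely a sketch (voiceA, voiceB, frameCount and the routine name are all made up for illustration), reusing the speakerSetDivisor helper sketched above:

Code:
; illustrative: alternate two notes every other 120Hz tick
arpeggioTick:
	mov    ax, [voiceA]           ; divisor for the first note
	test   byte [frameCount], 1   ; odd or even tick?
	jz     .set
	mov    ax, [voiceB]           ; odd ticks get the second note
.set:
	call   speakerSetDivisor
	ret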

It also means simplifying the game logic -- on the real hardware they could get away with a 60Hz timer and Bresenham-style integer maths to decide when the ghosts and player move, allowing for wildly different rates of movement. This appeared smooth in gameplay (for the most part) because of the higher resolution (224x248 playfield), and they could afford the extra math because sound and sprites were in hardware.

In my implementation, with only an 84x93 playfield available, having them not update at the same time is painfully jerky and visible, so I had to throw the idea of sprites moving at different speeds in the trash -- WORSE, because I'm having to do both the sprites and the sound in software, there's no CPU time free for the different movement rates anyway. The obvious solution was to update the sprites in unison and just vary the frame rate per level... the slowest levels being 20fps, the middle speeds 24fps, and the fastest 30fps.

... and as luck would have it, 120hz is an even multiple of all three.
120 / 4 = 30
120 / 5 = 24
120 / 6 = 20

Design it to 4 timeslices, and on slower levels add one or two dummy/empty slices to slow it down.
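
Schematically a frame then looks something like the sketch below -- purely illustrative (the slice names and layout are made up), but it shows how the dummy slices drop the frame rate without touching any of the game maths:

Code:
; frame pacing sketch (illustrative labels)
gameFrame:
	call   sliceOne          ; the four "real" slices, whatever each one holds
	call   waitSlice
	call   sliceTwo
	call   waitSlice
	call   sliceThree
	call   waitSlice
	call   sliceFour
	call   waitSlice
	xor    cx, cx
	mov    cl, [padSlices]   ; 0 on the fastest level, 1 or 2 on the slower ones
	jcxz   .done
.pad:
	call   waitSlice         ; an empty slice just burns one 120Hz tick
	loop   .pad
.done:
	ret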

Something you'd be VERY hard pressed to pull off with flat execution in realtime without making a massively complex ISR for sound and making the movement math far more complex. There's a reason RTOSes -- particularly in motor control -- operate in the same manner. There's a reason the original game could get by on a 3MHz Z80 -- the CPU didn't have to do the heavy lifting of sound or video... ALL it really had to handle was game logic. If I didn't pull all these stunts it would have the same jerky, disjointed, piss-poor behavior as games like Round 42. Love the video mode, but that game's "play" makes me cringe.

But back on the subject of "profiling": what I'm doing isn't so much measuring how long the code takes to run as measuring how much time is left over after each "piece" for doing other stuff. That means I can dial in what's actually affecting useful performance, instead of the vague/useless reports most "real" profiling gives... where it would sit there bitching that "waitTimer" is sucking down >50% of the execution time at the 20fps level -- OF COURSE IT IS, that's NOT useful information! Overall, at the fastest speed, the four main video operations are >60% of the execution time -- that's also NOT useful information when you have a spike throwing it all awry that normal profiling would "smooth out" and pretty much fail to report. That spike may only be 2% of the overall execution time, but what happens when it hits at the wrong time?
 
A lot of the Jr.'s failings can be excused by "budget," I guess, but I suppose we shouldn't rule out deliberate misdesign, given that we know for a fact that IBM hobbled the AT to prevent it from encroaching on their low-end minis.

That's my gut reaction to it - a LOT of the design choices strike me as costing more, not less, and many, many more feel like deliberate and willful crippling. NORMALLY I'd say never attribute to malice that which is adequately explained by stupidity, but I REALLY don't think the people working on it were that stupid.
 
The ironic thing about the Peanut is that by the time it came out, one could buy a 5160 clone for less than it would take to similarly equip a Peanut. A friend was determined to hang onto his and spent far too much getting it into a clone-comparable state.
 
To be fair, there were plenty of bad computers in 1983-1984 -- Commodore Plus/4, Mattel Aquarius, Coleco Adam, Sinclair QL, Timex-Sinclair 2068, etc. -- the PCjr was just the most highly publicized flop, due to IBM's stature in the marketplace and tons of hype from computer magazines saying that the PCjr would take the home market by storm and put all the competitors out of business. Of course, IBM's $40 million advertising campaign for the PCjr helped too, including getting Ziff-Davis to publish the first issue of PCjr Magazine even before the product actually shipped.
 
To understand IBM it helps to have worked there. I did my 18 years - it wasn't during the golden age of the PC era, but it gives me a lot of insight.

First, it is well known that IBM did not expect the original PC to sell well. It was an accident. The PC division (which was held quite apart from the rest of IBM) caught lightning in a bottle.

The PCjr can best be described as people who now think they know everything trying to repeat the lightning in a bottle. Confuse it with some market research, competitive analysis, cost cutting, and inexperience and this is what happens. This is not a new story.

Let's consider some of things the machine did well:
  • CGA graphics built in. And with more capability than the standard CGA graphics. Video memory shared with system memory is a fairly common technique now too.
  • The sound was far better ...
  • Joystick ports, serial port, etc. all included.
  • As much as you hate the keyboard mapping routines, that technique became the standard for how everything with a non-standard keyboard made itself compatible with the PC BIOS.
  • Simplified system setup - you have a dedicated memory slot, modem slot and a dedicated diskette drive slot. The sidecars were easier to deal with for novices than expansion cards, but in retrospect they were not so great.

The gimmicks - cartridge slots and an infrared keyboard. But you can see how cartridges could find their way onto a machine destined for the home market, especially in 1983. And those cartridges are actually pretty forward thinking - they look exactly like a ROM BIOS extension, so not only could you put a game on it you could put BASIC programs, DOS software, and system ROM enhancements.

A different organization might have done better; IBM is a very expensive place to do business inside of. IBM has never been able to build commodity items. The PCjr was an attempt to do a cut rate PC compatible that was close enough to run most software, but affordable for home. I think they mostly hit the mark.
 
On a stock 4.77mhz PC the normal output on each timeslice usually returns tickCounter = 0 and a timerWaited = 240..260. You switch to the junior, and timerWaited drops to anywhere from 3..8 on most slices. An empty slice (used to slow down slower game levels) returns the limit on PC and around 20 on the junior.

I wonder what the results would be with this little change. Except for the size reduction it's probably not that useful if it's already hitting the limit on a PCjr, but I wanted to remind you how useful SAHF can be when you just want to test some bit(s) in AH.

Code:
; procedure waitTimer;
pProcNoArgs waitTimer
	xor   ax, ax
.loop:
	sahf                     ; SF = bit 7 of AH = bit 15 of AX, no OR needed
	js    .cmpTickCounter
	; etc...
 
It's an interesting idea, but it wouldn't really make an impact since that's the routine that's supposed to sit there with its thumb up its ass when there's a surplus of CPU time. I'm going to implement it though, since it allows for a fraction more loops, meaning more accuracy on how much time is left.

Though now that I've glued the primary parts together, I'm discovering that even running it flat without the interval timer I'm coming up 20% short on speed on the unexpanded Junior... and most of what's sucking down the CPU is already stripped right to the bone. I'm probably going to have to add a timer reset/override for when the overload state surpasses the number of timer slices in a frame, and live with the fact that the fastest frame rate won't be quite as stable or fast as it is on the PC. Unless I can come up with some miracle cure, I think this is as far as I'm gonna get.

I might try reducing the number of function calls to see if that squeezes a wee bit more out of it. The "behavior" code in particular is passing too many vars and nesting too many functions deep.
 
The PCjr was an attempt to do a cut rate PC compatible that was close enough to run most software, but affordable for home. I think they mostly hit the mark.

Something closer to the Apple IIc would've nailed it -- smaller, cheaper, and self-contained, but still having a decent keyboard and being 100% software compatible with its big brother. The PCjr had an odd mix of being both too avant-garde and too restricted.
 
Something closer to the Apple IIc would've nailed it -- smaller, cheaper, and self-contained, but still having a decent keyboard and being 100% software compatible with its big brother. The PCjr had an odd mix of being both too avant-garde and too restricted.

In other words, a Tandy 1000 EX or HX.
 
Something closer to the Apple IIc would've nailed it -- smaller, cheaper, and self-contained, but still having a decent keyboard and being 100% software compatible with its big brother.

Yes, sounds a bit like what Commodore did, when they reworked the A1000 into the A500/A2000.
Somehow hardly anyone ever did an 'Amiga 500'-like PC.
I know one that a friend of mine used to have, a Vendex Headstart Explorer. It had one expansion slot, only for short cards:
[attached photos of the Headstart Explorer's internals]

They even made the keyboard fold up. It could probably have been smaller if they hadn't done that. [attached photo]
 
There were a lot of PC compatible designs that were shoehorned into a keyboard. Most wound up being targeted at business. The "zero footprint PC" that got rebadged as Commodore USA's Amiga system several years ago is one notable example. I never liked them. The oversized keyboard tended to be clunky, early hard drives disliked getting moved, and it is always easy to slide a tower under the desk.

IBM's PS/2 Model 30 is close to what a more conventionally designed PCjr would have been. All the useful ports (plus a parallel port) are embedded in the motherboard, but with standard plugs. Video is a different, better CGA, though less capable than the video given to the business line. The same pair of 3.5" drive bays as the PC Jx. Three general-purpose internal ISA slots instead of 3 special-design slots. No sound, cartridges, or sidecars -- but had the PCjr not failed like it did, those concepts might have found their way into normal production.
 
A funny story on how fast the legacy of the Peanut dimmed.

An engineering friend who saw a goldmine in developing products for anything made by IBM owned a full-blown PC Jr. setup, complete with extra memory and a hard disk. After he discovered that not everything IBM made was gold, he offered me his setup for $1500, which was substantially less than he'd put into it. I declined. A year later he offered it to me for nothing. I still declined.
 
IBM's PS/2 Model 30 is close to what a more conventionally designed PCjr would have been. All the useful ports (plus a parallel port) are embedded in the motherboard, but with standard plugs. Video is a different, better CGA, though less capable than the video given to the business line. The same pair of 3.5" drive bays as the PC Jx. Three general-purpose internal ISA slots instead of 3 special-design slots. No sound, cartridges, or sidecars -- but had the PCjr not failed like it did, those concepts might have found their way into normal production.

And then IBM screwed up their next attempt at the home market: the PS/1, with an oddball proprietary "pseudo-all-in-one" design -- for example, the power supply was in the monitor, you had to pay extra for an add-on expansion chassis to get ISA slots, RAM expansion was done via a proprietary module that plugged into the front panel, etc. The PS/1 also brought back the PCjr's 3-voice sound chip, except it wasn't compatible with the PCjr or Tandy 1000, so games had to be specifically written for PS/1 audio (and few were).

Ironically, shortly after that, IBM released a new PS/1 "Pro" that was just a PS/2 Model 30 with a 386SX upgrade and a PS/1 badge slapped on it -- a stopgap until the much more standardized "PS/1 Consultant" desktop and tower models arrived.
 