
Thread: A new article about x86 processors

  1. #61
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    29,203
    Blog Entries
    20

    Default

    Intel was irritated by NEC's attempt to extend its x80 licensing terms to the x86; therein lay a significant lawsuit. Prior to this, NEC and Intel had enjoyed a mutually beneficial relationship. Zilog, on the other hand, was in the throes of massive disorganization.

  2. #62
    Join Date
    Jul 2010
    Location
    Silicon Forest, Oregon, USA
    Posts
    680

    Default

    Quite an interesting discussion you've got going here...

    A few notes about the original article, and the Intel part particularly:
    1. In the 1970s Intel wasn't really that interested in microprocessors. They were more of a side product to help sell memory ICs. This explains the reluctance to improve the i8080, and Federico Faggin eventually leaving Intel to start Zilog. It also explains why Intel wasn't very competitive with Zilog or anyone else until the late 1970s (see page 65 here).
    2. As several people mentioned above, the 8085 was fairly popular for embedded applications (anything from printers to spacecraft). Using the 8155/8156/8755 companion ICs it was possible to build a system with just 3 ICs. The "undocumented" instructions were probably left undocumented for the sake of future compatibility with the 8086. Although the 8086 was released a few years later, its design work started a bit before the 8085's release, and 8080 compatibility was one of the primary design goals. As far as I know the Soviet 80C85 clone, the IM1821VM85A, is just that: a clone. No new instructions... I have a couple of these CPUs here, so I can test that for you.
    3. It is funny how the author is fascinated by the 8086 memory segmentation, and yet complains about the same segmentation in the 80286. IMHO segmentation was just as much of a problem in the 80286 as it was in the 8086. The main issue with the 80286 was that protected mode was not compatible with real mode, which was, as the article correctly states, later fixed in the 80386 with VM86 mode.
    4. Another funny thing in the article is using cycles per instruction for performance comparison. It is true that if an 80286 and an 80386SX ran at the same frequency, in some cases the 80286 would be faster (for example see this). Yet the 80286 topped out at 20 MHz, while the 80386SX was available at 33 MHz... And that's not even counting the 80386DX, which had the advantage of a data bus twice as wide and, in some later systems, cache memory.
    5. Both the 80186 and 80286 were designed before it was clear that IBM PC compatibles would take over the world of personal computers. Both have some IBM PC incompatibilities; since the 80186 implements on-chip peripherals, it has more of them. But there are a few things in the 80286 that could have been implemented differently to improve compatibility with the IBM PC (the A20 gate, the interrupt vectors used for exceptions, compatibility of protected mode with real mode).
    6. NEC vs. Intel and the 80386 second-source story. By the mid-80s Japanese companies had figured out how to make memory cheap, and Intel was losing money there. At the same time the PC business picked up, so Intel (Andy Grove, specifically) made the decision to pull out of the memory business and make processors the main product. One question was how to deal with the competition, such as AMD and the same Japanese companies that had taken the memory business.

    First of all, Intel decided not to second-source the 80386. Back in the late 1970s IBM had requested that Intel second-source the 8088 to ensure a steady supply, and Intel actually asked AMD to manufacture it. In the mid-80s the situation was different: x86 was the de-facto standard for PCs, and IBM was less interested in the 80386 at that time.

    Also, to prevent unauthorized cloning, Intel set a legal precedent with NEC. As I understand it, it is not possible to copyright a schematic (the CPU design), but it is possible to copyright software. And so Intel sued NEC over microcode copyright violation in their V20/V30 CPUs. Intel was actually able to prove that NEC copied the microcode (I blame that on bad lawyers on the NEC side... the CPU design was different enough), but the case was dismissed, since Intel hadn't enforced the copyright previously and had let many others manufacture its designs without attribution. Just for the fun of it, Google AMD 8088 images... you'll see that chips made before 1986 carry an AMD copyright, while later ones carry an Intel copyright. But Intel achieved its goal: although AMD had its 80386 design ready, it was not able to sell it until 1989-1990, when Intel already had the 80486.
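    The cycles-per-instruction point (4 above) can be put in numbers with a back-of-the-envelope sketch. The CPI figures here are purely illustrative assumptions, not measured values; real CPI varies wildly with instruction mix, wait states, and bus width:

    ```c
    #include <stdio.h>

    /* Back-of-the-envelope throughput comparison. The CPI (cycles
     * per instruction) values are assumed for illustration only:
     * even granting the 80286 a better CPI, the 80386SX's higher
     * clock ceiling more than makes up the difference. */
    int main(void) {
        double cpi_286   = 4.5; /* assumed average CPI for an 80286 */
        double cpi_386sx = 5.0; /* assumed slightly worse CPI for an 80386SX */

        double mips_286   = 20.0 / cpi_286;   /* 80286 at its 20 MHz ceiling */
        double mips_386sx = 33.0 / cpi_386sx; /* 80386SX at 33 MHz */

        printf("80286 @ 20 MHz:   ~%.1f MIPS\n", mips_286);
        printf("80386SX @ 33 MHz: ~%.1f MIPS\n", mips_386sx);
        return 0;
    }
    ```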
    Last edited by sergey; December 6th, 2018 at 02:34 AM.

  3. #63

    Default

    Quote Originally Posted by sergey View Post
    Quite an interesting discussion you've got going here...
    3. It is funny how author is fascinated by the 8086 memory segmentation, and yet he complains about the same segmentation in 80286 IMHO segmentation in 80286 was just as much of the problem as it was in 8086. The main issue with 80286 was that the protected mode was not compatible with real mode, which was, as the article correctly states, later fixed in 80386 with vm86 mode.
    The 286 is almost entirely compatible with real mode 8086 code, if you use it like Intel intended: in "small" model with separate 64K address spaces for code and data (possibly stack too), or by allocating all memory through OS functions and treating the returned segment values like opaque handles. The application manual didn't even mention how a physical address is formed, because that was seen as an implementation detail and likely to be changed in the future.
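    For reference, the rule that manual deliberately left out is the one documented for the 8086: physical = segment * 16 + offset. A tiny sketch of that rule (and of why opaque-handle code keeps working, since in protected mode the same selector value just picks a descriptor whose base is program-invisible):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Real-mode address formation as documented for the 8086:
     * physical = segment * 16 + offset. On a 286 in protected mode
     * the segment value instead selects a descriptor whose base is
     * program-invisible, which is why code treating segment values
     * as opaque handles runs unchanged in both modes. */
    static uint32_t real_mode_phys(uint16_t seg, uint16_t off) {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void) {
        /* the familiar reset vector 0xF000:0xFFF0 maps to 0xFFFF0 */
        assert(real_mode_phys(0xF000, 0xFFF0) == 0xFFFF0u);
        /* highest real-mode address, 0xFFFF:0xFFFF = 0x10FFEF,
         * the source of the A20-gate wraparound headaches */
        assert(real_mode_phys(0xFFFF, 0xFFFF) == 0x10FFEFu);
        return 0;
    }
    ```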

    At the time the 8086 - or even the 286 - was designed, a limit of 64K per segment seemed big enough, since 8-bit systems often had less memory in total. Later it became too confining for large data structures, and the 286 didn't help there. Given the choice between real and protected mode, most applications saw no benefit in the latter (loading segment registers is also slower in protected mode), so protected mode was rarely used until 32-bit became mainstream.

    Apparently DRI made a version of Concurrent DOS that used the undocumented LOADALL instruction to emulate real mode in protected mode, but I can't find it anywhere online. I am slightly tempted to hack together something like this, just to see how ridiculously slow it would be - trap every segment load, emulate the instruction and then reload the new CPU state.
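    For anyone tempted to try it, here is a hypothetical sketch of the bookkeeping such an emulator would do on each trapped segment load. All the names here are invented; as far as I know nothing about the actual Concurrent DOS 286 internals is documented:

    ```c
    #include <stdint.h>

    /* 8-byte 286 segment descriptor, per the documented layout. */
    struct descriptor286 {
        uint16_t limit;    /* bytes 0-1: segment limit */
        uint16_t base_lo;  /* bytes 2-3: base bits 0-15 */
        uint8_t  base_hi;  /* byte 4: base bits 16-23 */
        uint8_t  access;   /* byte 5: present, DPL, type */
        uint16_t reserved; /* bytes 6-7: must be zero on the 286 */
    };

    /* On a protection fault from a segment-register load, the
     * monitor would build a descriptor that makes selector `seg`
     * behave like a real-mode segment: base = seg * 16, limit 64K-1. */
    static void emulate_segment_load(struct descriptor286 *d, uint16_t seg) {
        uint32_t base = (uint32_t)seg << 4;
        d->limit    = 0xFFFF;
        d->base_lo  = (uint16_t)(base & 0xFFFF);
        d->base_hi  = (uint8_t)(base >> 16);
        d->access   = 0x92; /* present, DPL 0, writable data */
        d->reserved = 0;
    }
    ```

    After patching the descriptor, the monitor would reload the saved CPU state (this is where LOADALL comes in, since it can write the hidden descriptor caches directly) and resume. The fault-plus-bookkeeping cost on every segment-register write is exactly why it would be so slow.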

    One "feature" of the 286 that I have never found an explanation for is that exiting protected mode was only possible via reset. Maybe some military contract demanded this for "security"? Theoretically you could make descriptor tables inaccessible and keep physical addresses secret and randomized, but ring 0 code can still always crash the system.

  4. #64

    Default

    The reason why the 286 couldn't return to Real Mode is because Intel never intended it to do so! They expected operating systems to enter the wonderful world of protected mode and stay there.

  5. #65
    Join Date
    Jul 2010
    Location
    Silicon Forest, Oregon, USA
    Posts
    680

    Default

    Quote Originally Posted by dreNorteR View Post
    The 286 is almost entirely compatible with real mode 8086 code, if you use it like Intel intended: in "small" model with separate 64K address spaces for code and data (possibly stack too), or by allocating all memory through OS functions and treating the returned segment values like opaque handles. The application manual didn't even mention how a physical address is formed, because that was seen as an implementation detail and likely to be changed in the future.
    And yet, DOS applications frequently accessed the hardware directly, so the whole "as Intel intended" model broke down. Quite possibly during the 80286 design Intel did not expect that the IBM PC and DOS would be that successful... otherwise they would have implemented something like VM86 in the 80286.

    Quote Originally Posted by dreNorteR View Post
    At the time the 8086 - or even the 286 - was designed, a limit of 64K per segment seemed big enough, since 8 bit systems often had less memory in total. Later, it became too confining when using large data structures, and the 286 didn't help there. Given the choice between real and protected mode, there was simply no benefit for most applications (also loading segment registers is slower in protected mode), so it was rarely used until 32 bit became mainstream.
    The 80286 was clearly designed with multitasking OS support in mind. Xenix and Coherent were available at the time, and OS/2 and Windows/286, released quite a bit later, also used 80286 protected mode. So there is a benefit to using protected mode (just not for DOS applications running on an 80286).
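    The kind of protection a multitasking OS got out of it can be shown with a small sketch. This is a simplification of the actual 286 rules, and `can_load_data_segment` is a made-up name for illustration:

    ```c
    #include <assert.h>

    /* Simplified 286 privilege check for a data-segment load:
     * a selector with requested privilege RPL, loaded by code
     * running at CPL, may reference a segment of privilege DPL
     * only if max(CPL, RPL) <= DPL (numerically lower = more
     * privileged, so ring 0 is the most trusted). */
    static int can_load_data_segment(int cpl, int rpl, int dpl) {
        int epl = cpl > rpl ? cpl : rpl; /* effective privilege level */
        return epl <= dpl;
    }

    int main(void) {
        assert(can_load_data_segment(0, 0, 3));  /* kernel may touch user data */
        assert(!can_load_data_segment(3, 3, 0)); /* user code is blocked from
                                                    kernel data by hardware */
        return 0;
    }
    ```

    This hardware check, rather than any addressing improvement, is what Xenix and OS/2 actually gained over running on an 8086.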

    Quote Originally Posted by dreNorteR View Post
    Apparently DRI made a version of Concurrent DOS that used the undocumented LOADALL instruction to emulate real mode in protected mode, but I can't find it anywhere online. I am slightly tempted to hack together something like this, just to see how ridiculously slow it would be - trap every segment load, emulate the instruction and then reload the new CPU state.
    Concurrent DOS 286 is mentioned in this Wikipedia article. It describes the performance issues, and also that Intel released fixes in later 80286 steppings to improve performance.

    Quote Originally Posted by dreNorteR View Post
    One "feature" of the 286 that I have never found an explanation for, is how exiting from protected mode was only possible via reset. Maybe some military contract demanded this for "security"? Theoretically you could make descriptor tables inaccessible, have physical addresses be secret and randomized, but ring 0 code can still always crash the system.
    It is a misfeature. I guess Intel assumed that the real-to-protected mode switch was a one-way road. After all, why would anyone want to switch back to real mode when the 80286 offered a so much better protected mode?
    Last edited by sergey; December 6th, 2018 at 05:23 PM.

  6. #66
    Join Date
    May 2009
    Location
    Connecticut
    Posts
    4,043
    Blog Entries
    1

    Default

    It would have been difficult for the 80286 design to be impacted by the IBM PC since less than 6 months separated the release of the IBM PC from the release of the 80286.

    Intel probably expected the same thing would happen that happened with the 8080 to 8086 transition: automatic translation would make most of the code work until proper designs could be done.

  7. #67

    Default

    Quote Originally Posted by dreNorteR View Post
    Apparently DRI made a version of Concurrent DOS that used the undocumented LOADALL instruction to emulate real mode in protected mode, but I can't find it anywhere online. I am slightly tempted to hack together something like this, just to see how ridiculously slow it would be - trap every segment load, emulate the instruction and then reload the new CPU state.
    This is evidently from an Intel document regarding the use of LOADALL and protection exceptions to emulate real mode from 286 protected mode. I've been tempted to do something with this too...

  8. #68

    Default

    @sergey Thanks for the interesting links and reply. IMHO the benefit from segment registers depends on the memory size. For DOS applications on a typical PC before the mid-80s they were a big advantage, but for Xenix/286 with more than 1 MB of RAM they were rather an inconvenience.

  9. #69
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    29,203
    Blog Entries
    20

    Default

    One thing that has never been clear to me is why real-mode segment granularity is a paltry 16 bytes. Why not 64 or 256 bytes? That would certainly have expanded the addressing capability. Is it that a 20 bit latch was the most Intel could muster in silicon? They could have certainly aped the 68K strategy of making the address register long, but bringing out only the lower bits, leaving the upper ones for future expansion.
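    The question can be quantified with a quick sketch: the top real-mode address is 0xFFFF shifted by the granularity, plus an offset of 0xFFFF. The 8086 shipped with a shift of 4 (16-byte paragraphs); the larger shifts below are the hypothetical alternatives:

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Top reachable address for a given segment-shift amount:
     * top = (0xFFFF << shift) + 0xFFFF. shift = 4 is what the
     * 8086 actually used; 6 and 8 are the what-ifs (64- and
     * 256-byte granularity). */
    int main(void) {
        for (int shift = 4; shift <= 8; shift += 2) {
            uint32_t top = ((uint32_t)0xFFFF << shift) + 0xFFFF;
            printf("shift %d (granularity %3d bytes): top 0x%07X (~%u MB)\n",
                   shift, 1 << shift, top, (top + 1) >> 20);
        }
        return 0;
    }
    ```

    So 64-byte granularity would have bought roughly 4 MB of address space and 256-byte granularity roughly 16 MB, from the same 16-bit segment registers.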

  10. #70
    Join Date
    Mar 2011
    Location
    Atlanta, GA, USA
    Posts
    1,288

    Default

    Quote Originally Posted by Chuck(G) View Post
    They could have certainly aped the 68K strategy of making the address register long, but bringing out only the lower bits, leaving the upper ones for future expansion.
    But then they wouldn't have thought of doing that in reverse for the 386sx!
    "Good engineers keep thick authoritative books on their shelf. Not for their own reference, but to throw at people who ask stupid questions; hoping a small fragment of knowledge will osmotically transfer with each cranial impact." - Me
