
Thread: Worst x86 CPUs over the years

  1. #71

    I would like to nominate the Vortex86DX for this competition.

    The 800 MHz Vortex86 DX2 was released in 2009. Its performance is, however, WORSE than that of a Cyrix 6x86MX at 250 MHz.

    The Vortex86DX2 draws less than 1 watt, and it is an 11-year-old CPU, so of course it is hard to put it into perspective nowadays.

    HOWEVER, it was released 16 years after the Pentium 1; it has 256 KB of L2 cache, a DDR2 memory controller, and an 800 MHz clock speed.
    Yet it is unable to beat the Pentium 1 or the Cyrix 6x86MX (which don't even have any L2 cache at all).

    When benchmarking the FPU of the Vortex86DX, it only delivers 33 megaflops. Either the FPU runs on a separate 33 MHz clock, or it is implemented entirely in microcode, without any support at the hardware level.

    In theory, the ALU has two pipelines; it is an in-order superscalar design. However, the Pentium 1 and the Cyrix 6x86MX are also in-order superscalar, so they have probably messed up something very, very badly in the Vortex86DX.

    The Vortex86 family is based on the Rise mP6, but the performance of the Vortex86DX is very far from the mP6 clock for clock.

    When running my game engine, the Cyrix 6x86MX does 5.9 fps, and the Vortex86DX also does 5.9 fps. (Again, the Cyrix 6x86MX runs at 250 MHz, and the Vortex86DX runs at 800 MHz.)

    It's not just my game engine that is slow (it also uses floating point in its vertex processor). Even loading the Linux kernel itself is quite slow; decompressing the kernel takes about twice as long as on the Cyrix machine.

    It's very hard to find any test where the Vortex86DX outperforms the Cyrix 6x86MX.

    By the way, the Vortex86MX is a perfectly nice CPU; they fixed this performance issue (it's about 2x-3x faster than the Vortex86DX at the same clock speed).

  2. #72


    I just looked in my spare CPU junk box and found an SX 25, an SX 33, and an SX/2 50 CPU, along with an IDT WinChip 200-something-or-other, plus the more common CPUs.

    The only time I ever saw those SX chips was at a recycler I used to raid, which stripped budget OEM machines along with more common machines, so I snagged a few for my collection.

    This also goes to show how stupid I was back then. The owner would let me go through the CPU box and snag what I wanted, and I paid probably 2x scrap rates, which he figured out by weighing the chips and averaging out a rate (pre gold rush, I might add), so it was probably a buck or two a CPU. I basically grabbed a few 486 CPUs I never owned or didn't currently have, some 486 Evergreen upgrades, and misc oddball stuff like the above SX chips and Socket 5 stuff. At the time the owner's sidekick was stealing the PPros and reselling them to another scrapper on the sly, but everything else was fair game for me. I should have snagged more than I did.
    What I collect: 68K/Early PPC Mac, DOS/Win 3.1 era machines, Amiga/ST, C64/128
    Nubus/ISA/VLB/MCA/EISA cards of all types
    Boxed apps and games for the above systems
    Analog video capture cards/software and complete systems

  3. #73


    Quote Originally Posted by Geri View Post
    I would like to nominate the Vortex86DX for this competition.

    The 800 MHz Vortex86 DX2 was released in 2009. Its performance is, however, WORSE than that of a Cyrix 6x86MX at 250 MHz.
    I have an embedded thin client PC with a Vortex86DX running at 1 GHz. And yes, except for its low power consumption, it's pretty useless.

  4. #74

    vwestlife: you are lucky to have the 1 GHz model.
    Mine is a card form factor; it looks like the RPi, manufactured by ICOP.
    It comes only in the 800 MHz model, with 256 MB of RAM (it would be far more usable with 512 MB).
    And this was made long after the Vortex86MX had shipped...
    It's a mystery why anyone still thought the DX version would be okay to use.

    In the video, you mention:
    "uncompressed Linux - they must have compressed it to fit it on the disk module"
    No, that's normal on every system; the kernel is decompressed at boot.
    Last edited by Geri; January 28th, 2020 at 01:10 AM.

  5. #75


    Is the Vortex86 related to the SiS550 somehow? I haven't run any benchmarks on mine.

  6. #76


    My pick is the original 8088/8086. Sure, my 10 MHz Turbo XT clone gave me access to a huge range of great games and productivity software; it got the job done. Essays written, games played, all great. I still have that motherboard and the case it lived in.

    But was it a good CPU design? No.

    The designers left a lot of architectural baggage behind. Design choices that were probably unwise at the time became ever more troublesome as future x86 chips were developed. For a very long time, just about every firm that designed a non-DOS system could go get a faster chip for the same money.

    Here are a couple of things:

    Segment:Offset addressing
    Let's have a 20-bit address range but only 16-bit registers. When you access memory, it uses a segment register shifted 4 places to the left plus another register. Your code gets littered with all kinds of wacky non-standard pointer types like "near", "far" and "huge". There is extra overhead accessing a far pointer, and even more for a huge one.

    Eagle-eyed readers will also have spotted that there are many ways to form any address, e.g. C001:0000 == C000:0010. To have "normal" pointers the way other systems do, they have to be huge pointers. That means more overhead normalising pointers so you can compare them, and more overhead on pointer arithmetic to deal with the offset running out of range. You can tell the compiler to make all pointers huge by default to spare your sanity, but your performance will suffer. You may wish you were on a 68000.

    This pain didn't go away until we got the 386, the DOS/4GW extender, and other ways to write 32-bit code. Programming then got a lot more pleasant, as long as you were prepared to drop support for older chips.

    Instruction Format
    I once got a long way into writing an x86 emulator to run on ARM CPUs, and I studied the instruction encoding, with its prefix bytes and varied instruction formats and lengths. The only easy way to parse instructions is to feed the memory one byte at a time into a state machine. I questioned my design choices, did some research, and found out that is exactly how the early CPUs worked. If you have ever looked at how the original 32-bit ARM instruction set is encoded, you can see how easy it is for the hardware to decode it quickly: read a 32-bit instruction, shove it into the decoder, and decode it in one go. Easy and fast.

  7. #77

    Quote Originally Posted by mcs_5 View Post
    Is the Vortex86 related to the SiS550 somehow? I haven't run any benchmarks on mine.
    Rise MP6 --> SiS550 --> Vortex86SX --> Vortex86DX --> Vortex86MX

  8. #78


    I've voiced my distaste for the original 8086 implementation before. One thing that hit me in the first couple of days, back in 1980 when I was first confronted with the responsibility of programming the thing, was the wacky addressing -- and the lack of instructions to accommodate it. My first example:

    Given two long/far addresses, compute the distance between them, even if said distance is more than 64 KB. On a system with linear addressing, it's easy: just subtract the two addresses. Now try that with two segmented addresses that can be in any form and see how many instructions it takes.

    The point is that addresses and data must be considered as a whole. If you're going to have 20-bit addresses, you should have 20-bit registers.

  9. #79


    Segmentation did make sense from a certain 1970s point of view, especially if you think of the 8086 as slotting in somewhere between an 8080 on steroids on the low end and a drunken take on the address-extended version of the PDP-11 architecture on the high end. But it was definitely not a forward-looking decision to assume that 64K was going to be enough (easily handled) per-process memory for anyone for more than a few years.
    My Retro-computing YouTube Channel (updates... eventually?): Paleozoic PCs

  10. #80


    128 kB processes, if you please: 64 KB of code and 64 KB of data, just like the PDP-11. Yes, the programmer had to do more work, but the system builder didn't need an MMU. Take 8080 code that is squeezed for room and double the memory available to it with little effort; move the OS into its own pair of segments with plenty of room for buffers. That improved performance for a few years, until the chip of the future became ready to handle the influx of affordable memory.

