
Benchmarking vintage computers against modern computers and mobile phones

simmiv

Member
Joined
Nov 2, 2017
Messages
42
Location
Australia, NSW
Hi everyone,
I, like many of you, have some old vintage computers: Tandy TRS-80 Model 1, Apple IIe, IBM PC, XT, AT, PS/2s, etc., well into the '90s. I'm planning to organize a vintage computer show sometime in the future for charity fundraising. One of the things I'm hoping to achieve is to engage with the younger generation somehow.
One idea is to get them to use their mobile phones to compare their own phone's computing power with the computing capability of the vintage PC they are looking at. This means I have to come up with a benchmark which can span different computer architectures and makes, and which can be replicated on modern mobile phones. If someone is looking at an Apple IIc or IBM PC, a sign would show its benchmarking figure and advise them to load a URL on their mobile phone to compare their phone's computing power to the vintage computer's.
This is not meant to be accurate or precise, simply relative and for fun. Of course the main focus of the benchmark would be the capability of the CPU, as graphics is not something these vintage computers did well. I've seen a few benchmarks using BASIC as the foundation due to its cross-platform capability. Just wondering if something similar could be done on a web page for a mobile phone, or are there some other benchmarks possible to get a relative difference in computing power? Looking for some advice!
Simmi
 
If it's just for fun and not accuracy, I would suggest using "how many integer additions can the system do per second". This is something one can imagine, and it can easily be done in BASIC as well as in JavaScript within a web page.
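A minimal JavaScript sketch of that idea (my own illustration, nothing from this thread; `additionsPerSecond` and the batch size are made up) could run right in a phone's browser or in Node:

```javascript
// Sketch: count how many integer additions the device manages per second.
// Additions are done in batches so the Date.now() calls don't dominate the loop.
function additionsPerSecond(durationMs = 1000) {
  let count = 0;
  let sum = 0; // keep a live result so the loop isn't trivially optimized away
  const end = Date.now() + durationMs;
  while (Date.now() < end) {
    for (let i = 0; i < 100000; i++) sum += i;
    count += 100000;
  }
  return Math.round(count * (1000 / durationMs));
}
console.log(`This device: roughly ${additionsPerSecond()} additions/second`);
```

The BASIC version would be the same idea: increment a counter in a FOR loop until the timer runs out, then print the counter.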
 
Ye Olde Sieve. Always a benchmarking favorite. On the website promoting the show, add a benchmark page and write a version that runs in JavaScript in the local browser so folks can compare their phones to your machines directly. If you're motivated, do the JS benchmark with green, pixelly fonts. I'm pretty sure there's a VT220 font out there that works in web browsers; that would work. Spend some time to get the cursor to blink as well.
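For reference, a JavaScript take on the classic Byte magazine sieve (size 8190, which famously counts 1899 primes per pass) is only a few lines. This is a sketch, not a calibrated benchmark:

```javascript
// The classic "Eratosthenes Sieve" benchmark (Byte, 1981): 8191 flags,
// index i maps to the odd candidate 2*i + 3; each pass counts 1899 primes.
function sieve(size = 8190) {
  const flags = new Uint8Array(size + 1).fill(1);
  let count = 0;
  for (let i = 0; i <= size; i++) {
    if (flags[i]) {
      const prime = i + i + 3;
      for (let k = i + prime; k <= size; k += prime) flags[k] = 0;
      count++;
    }
  }
  return count;
}
const t0 = Date.now();
for (let iter = 0; iter < 10; iter++) sieve();
console.log(`10 iterations: ${Date.now() - t0} ms (${sieve()} primes per pass)`);
```

Time the same number of iterations on the vintage machine and put the two millisecond figures side by side.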

Or/Also

Even better.

Create a page per machine, each running a local simulator, so that visitors can run the actual benchmark you're running (in BASIC, natch) on the JS simulator. Have a switch that turns off "compatibility" mode: with it on, the simulator runs at the same speed as the real machine; with it off, it runs as fast as possible. This way you can get into an introduction not just to the machines, but to how folks are doing hardware and software simulators of these machines today.
 
Hi all,
Thanks for your input. It has given me some food for thought. I'm not very good at coding; I'm more an electronics HW guy! Looking back at benchmarks for vintage computers, I found this list of Dhrystone benchmarks for many systems. See here. I also found this online Dhrystone test which can run from any browser, desktop, mobile or tablet, see here. There are two URLs doing Dhrystone testing, aligned and unaligned. Not sure what the difference is, but either one works. Once run, the output looks like this on a mobile phone:
[Attachment: Screenshot_20210206-134431_Chrome.jpg - Dhrystone result in Chrome on a mobile phone]
I've sent the gentleman an email to inquire what version of Dhrystone he uses. If it works, it would be nice to have an individual QR code for each machine; once scanned, the user would run the test on their mobile and out would pop something along the lines of "An IBM PC had a score of 0.22 Dhrystones, your mobile phone scored 355, which makes it 1,613 times more powerful."
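The sign text could be generated from two scores with a few lines of JavaScript. This is a hypothetical helper (`comparisonLine` is my own name, not anything from the linked test page); the ratio is truncated the same way as the 1,613 in the example:

```javascript
// Hypothetical sign generator: phone-to-vintage ratio from two Dhrystone scores.
function comparisonLine(machine, machineScore, phoneScore) {
  const ratio = Math.floor(phoneScore / machineScore); // truncate, as in the example
  return `${machine} had a score of ${machineScore} Dhrystones, your mobile phone ` +
         `scored ${phoneScore}, which makes it ${ratio} times more powerful.`;
}
console.log(comparisonLine("An IBM PC", 0.22, 355));
```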

What do you think!

Rgds,
Simmi
 
Dhrystone is of course one of the standard benchmarks for computers. The issue is that no normal person can actually imagine what 0.22 Dhrystones means. But for raw comparison, it's okay, I guess.
 
Not wanting to start an argument here, but aside from all of the numbers from various apps, if you can load Doom on your DOS machine and it runs, then I think you are good to go.
 
Running BASIC games may be indicative of something, but when it comes to crossing architectures, it's hardly very definitive. You're depending largely on the BASIC implementation for your measurements. Speaking from experience with old supercomputers, it's hard enough when the test is well-defined and standardized. There were tweaks to compilers, as well as to instruction sets, to best the competition in established benchmarks. The one that sticks out in my mind was Saxpy, whose corporate name was based on a Linpack test.
 
With slightly later vintage machines, Mandelbrot set drawing makes for a good visual demonstration of performance differences while being complex enough that modern hardware will still have a computational delay.

Ultimately, a benchmark should show performance of a system and it makes no difference if one system is slow because of a sluggish CPU, lots of activity stealing potential CPU cycles, poor language implementation, or lackluster screen display routines. The old Kilobaud BASIC benchmarks were often a test of the screen display routines with some completing in much less than half the time if the PRINT commands were commented out.
 
Let's take this one to see the problem:

1 REM FRACTAL.BAS IN TEXT MODE
5 START=TIMER
10 FOR Y=-12 TO 12
20 FOR X=-39 TO 39
30 CA=X*.0458
40 CB= Y*.08333
50 A=CA
60 B=CB
70 FOR I=0 TO 15
80 T=A*A-B*B+CA
90 B=2*A*B+CB
100 A=T
110 IF (A*A+B*B)>4 THEN GOTO 200
120 NEXT I
130 PRINT " ";
140 GOTO 210
200 IF I>9 THEN I=I+7
205 PRINT CHR$(48+I);
210 NEXT X
220 PRINT
230 NEXT Y
235 PRINT "TIME TO RUN:";TIMER-START;"SECS"

This should run easily on every Microsoft-ish BASIC interpreter (or compiler): CP/M MBASIC.COM, the different versions of Commodore BASIC (C64, VC20, C128, C264 series, CBM series), the Tandy TRS-80 Model 100 and its friends (Olivetti M10, etc.), MS-DOS GW-BASIC, the original Atari ST BASIC, Omikron BASIC, Atari 400/800/XL/XE BASIC, etc. You may only have to replace the TIMER calls with something system-specific, or time it manually with a stopwatch. For example, under MBASIC on CP/M this runs in about 5-10 minutes, depending on the specific computer model. On a modern computer, if you find a BASIC interpreter which can understand this code, it will run in nanoseconds, faster than the accuracy of the clock. So how do you compare, if the time on the modern machine is about zero-point-zero-zero-zero... seconds?
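For the modern side of the comparison, here is a line-for-line JavaScript port of FRACTAL.BAS above (my own sketch): it builds the same 25x79 character picture and times itself, so the same work can be run in a phone's browser.

```javascript
// Port of FRACTAL.BAS: same constants, same escape test, same digit mapping.
function fractal() {
  const rows = [];
  const t0 = Date.now();
  for (let y = -12; y <= 12; y++) {          // 10 FOR Y=-12 TO 12
    let row = "";
    for (let x = -39; x <= 39; x++) {        // 20 FOR X=-39 TO 39
      const ca = x * 0.0458, cb = y * 0.08333;
      let a = ca, b = cb, i;
      for (i = 0; i <= 15; i++) {            // 70 FOR I=0 TO 15
        const t = a * a - b * b + ca;
        b = 2 * a * b + cb;
        a = t;
        if (a * a + b * b > 4) break;        // 110 ... GOTO 200
      }
      if (i > 15) {
        row += " ";                          // 130: loop ran out, point is inside
      } else {
        if (i > 9) i += 7;                   // 200: digits 0-9, then letters
        row += String.fromCharCode(48 + i);  // 205 PRINT CHR$(48+I);
      }
    }
    rows.push(row);
  }
  console.log(rows.join("\n"));
  console.log(`TIME TO RUN: ${Date.now() - t0} ms`);
  return rows;
}
fractal();
```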
 
One idea is to get them to use their mobile phones to compare their own mobile phone's computing power with the computing capability of the vintage PC they are looking at..

For me, the issue is that the difference in power is such that it becomes incomprehensible; it's inconceivable how much faster they are. My laptop is quite capable of emulating a mainframe some 10 times or so faster than the one that ran a whole business around 1995. You also won't be able to factor in multiple CPUs. I think I would just compare basic clock speeds. No calculation required. Most of the machines you mention are slow, so:

TRS-80 Model 1 - 1.77 MHz
Apple II - 1.023 MHz
PC XT - 4.77 MHz
PC AT - 8 MHz

Samsung Galaxy S10+ - 2,730 MHz
iPad 5 - 1,200 MHz
Typical laptop - 2,000 to 3,000 MHz

Also ignores multiple CPUs and bus width, but it's a good raw number...
 
Also ignores multiple CPUs and bus width, but it's a good raw number...
It's not and never was. Clock speed doesn't tell you how much faster a CPU is. It doesn't even tell you whether a CPU is actually faster. An ARM CPU, e.g., is way faster at the same clock speed than an x86 CPU. And even within one family it doesn't work (e.g. Pentium III against Pentium 4). Comparing clock speeds only works for CPUs that are otherwise exactly the same. And even then it says nothing about the speed of the whole system.
 
It's not and never was. Clock speed doesn't tell you how much faster a CPU is. It doesn't even tell you whether a CPU is actually faster. An ARM CPU, e.g., is way faster at the same clock speed than an x86 CPU. And even within one family it doesn't work (e.g. Pentium III against Pentium 4). Comparing clock speeds only works for CPUs that are otherwise exactly the same. And even then it says nothing about the speed of the whole system.

True, it's not exact, but is that "bad"? In the context of museums, someone once told me (and I feel it's true) that, just as the first few minutes of a job interview are important, if an exhibit doesn't capture the audience in the first few seconds, you have lost their interest forever. So I believe that as soon as a "normal person" sees "integer multiplications", or even worse "Dhrystones", they turn off. Even MHz as a speed may turn them off, but I would say it's less important. If you want accuracy, put something like "Dhrystones" afterwards, but really "performance" is not about "absolute performance"; it's about "performance in doing the task at hand", which is much harder to assess. A 1960s mainframe may have had only limited CPU power and been very poor at floating-point arithmetic, so it would have a low Dhrystone figure, but it earned its keep because, relatively speaking for the time, it could do I/O quickly. A PC XT could do integer maths as fast, but it didn't overlap disk I/O...
 
Isn't one of the users of this forum doing this already, using "pi spigot" as the benchmark? http://litwr2.atspace.eu/pi/pi-spigot-benchmark.html

Clock speeds are misleading for comparing 'power.' Take the 6502 versus the Z80: clock for clock, the 6502 averages twice as fast.

Going more modern, compare Pentium III S versus Pentium 4: I have an IBM eSeries x330 with dual 1.4GHz Tualatins (the S version with 512KB of L2) and 4GB of RAM; it holds its own against a dual 2.8 GHz Xeon.

For that matter, my third-gen Core i7 laptop at 2.6 GHz is nearly thirty times faster than a 2.6 GHz NetBurst Xeon.

Even on older architectures, the difference can be staggering; take the Z80 versus the eZ80 (used in the TI-84 Plus CE, which is a handheld computer, really). Clock for clock, the eZ80 averages 4 times faster across most instructions, so the 48 MHz eZ80 in the TI-84 Plus CE is at least as fast as a 192 MHz Z80.

No benchmark is exact, for sure, but raw clock speed isn't even a good ballpark comparison.

And which clock are we even talking about? Take a J11, for instance. You have four clocks per microcycle; so an 18MHz J11 has an actual microcycle speed of 4.5MHz; which number should be considered the clock speed? Or Z280, which has a clock divider; the crystal says 24MHz on a 12 MHz Z280.

The STM32H755ZI I'm working with has IIRC an 8 MHz clock; internally a PLL multiplies that up to several programmable clocks (The STM32 clock architecture is huge); I'm running the core at 400MHz.

I wouldn't mind a straight clock comparison if a single definition of clock speed could be obtained.

For instance, take that 6502 again; it has a true two-phase clock, so the chip is actually cycling at twice the frequency of either phase. A 1MHz 'clock' in this case is effectively a 2MHz 'clock speed.'

As to the I/O business, the flash storage in my phone is not even as fast as some of these older technologies.
 
For example, under MBASIC on CP/M this runs in about 5-10 minutes, depending on the specific computer model. On a modern computer, if you find a BASIC interpreter which can understand this code, it will run in nanoseconds, faster than the accuracy of the clock. So how do you compare, if the time on the modern machine is about zero-point-zero-zero-zero... seconds?

This is not a problem.

Y'all are missing the big picture.

This is not a "race". It's not a MIPS measuring contest.

It's simply a lens through which to provide context to the folks looking at vintage machines.

Who cares if it takes nanoseconds on a modern machine; that's kind of the point. Modern machines are fast, older machines, not so much.

Who cares if computer X's BASIC was slower than computer Y's. Folks still wrote code in BASIC. It's still an apples-to-apples comparison since it's benchmarking BASIC programs.

Nobody cares if the Apple II was faster (or not) than the C64. Not today.

That's why I suggested web-based simulators that can run the same code as the vintage devices, with a switch that turns off the cycle accuracy so they can run the BASIC benchmarks as fast as a modern computer can simulate them.

Does the quality of the simulator matter? No, it does not. We're not benchmarking simulators. We're simply trying to demonstrate the "old world", and how far we've come in the new world.

We can't easily do that with benchmarks designed for modern computers, ported backwards. They're far, far too slow on old hardware. So we demonstrate it by bringing old benchmarks forward.

A benchmark that takes 10-15 minutes on old hardware isn't going to work. Who is going to sit there and wait for it to finish? As a hands-on display, it's nice to have people run it themselves, "feel" its performance, then see it on modern machines.

What would be great is a display that illustrates how fast the Apollo 11 computer could run the benchmark. Mostly as a demonstration: "This computer is demonstrably very, very slow, but it landed us on the moon anyway." Could just be a simple chart.

My favorite vintage computing anecdote is the line: "What you're carrying in your pocket has more computing power than what is on the Voyager spacecraft. I'm not talking about your phone; I'm talking about the key fob for your car."
 
Fully agree. And because of that, let me quote my suggestion:
If it's just for fun and not accuracy, I would suggest using "how many integer additions can the system do per second". This is something one can imagine, and it can easily be done in BASIC as well as in JavaScript within a web page.
If a C64 can do 200 and your smartphone can do 1.2 billion, that's a figure people understand.
 
For instance, take that 6502 again; it has a true two-phase clock, so the chip is actually cycling at twice the frequency of either phase. A 1MHz 'clock' in this case is effectively a 2MHz 'clock speed.'

I'm reminded of a list of BASIC benchmarks from the 1980s that was published--I still have the article. One of the contenders that blew away (by orders of magnitude) the personal computers was the benchmark run on a CDC Cyber 74 mainframe, which has a clock speed of--10MHz! But words are 60 bits, multiple segmented functional units, advanced (for the time) instruction scheduling made the big difference--and the benchmark was compiled, not interpreted.
 
This is not a problem.

...
Who cares if computer X's BASIC was slower than computer Y's.

...

That's why I suggested web based simulators that can run the same code as the vintage devices,

So you don't want to compare different BASIC dialects on different machines, but you do want to compare simulators? That doesn't make sense!

We can't easily do that with benchmarks designed for modern computers, ported backwards.

So take a modern 3D benchmark, like the "Valley Benchmark", recompile it for the 8088, Z80, or 6502, and let it run on the historic computer... It would take years to calculate one frame, and remember, that historic processor isn't even able to address the 3D data.

Another example: open a fresh JPEG photo from your current digital camera on a modern PC, 10-25 megapixels, true color. You double-click, and in the same moment you see it, downscaled to the resolution of the screen. Now take an Atari TT, with a 68030 plus 68882 FPU at 32 MHz (and a 256-color graphics card). There is, for example, TruePaint or Photoline, which can also open JPEG pictures; even after shrinking the photo down to 1024x768 on the PC and transferring it to the TT, it takes 10 minutes to open. And it's not only rendering the picture: the TT's SCSI interface has a maximum data rate of 1.8 MB/s, so just reading the file already takes more than 20 seconds (if it fits in RAM)...

There are web browsers for the Atari TT, like "cab", "highwire" and "netsurf"; they even support SSL. Now imagine opening a website with one of them, like this forum (which is without SSL - why!?!?). I tried it once and will never do it again, because it is a waste of time. After 30 minutes I could see something. Some guys I have met in classic computer forums said: oh, there is a web browser, there is an email program for my TT or Amiga, so why do I need a modern computer which can be attacked by a virus? No hacker will expect that I am online with my historic machine...

People cannot imagine that modern computers are so many light years ahead of our historic treasures in speed.
 
One idea is to get them to use their mobile phones to compare their own mobile phone's computing power with the computing capability of the vintage PC they are looking at..

So you don't want to compare different BASIC dialects on different machines, but you do want to compare simulators? That doesn't make sense!

In the context of what this thread is about? No, it's utterly unimportant.

He wanted the young people to somehow grasp how their powerful handheld phones differ from the vintage machines of the day.

That was why I suggested, again, still, an unlocked web-based simulator. Something that can run a benchmark, of any kind, that they're seeing on real hardware, at the same speed as the original hardware (because the simulator can be reasonably cycle-accurate), and then, with a mouse click, run it again at "full speed". This tells them intrinsically "how much faster" their phone is than the device they're standing in front of. They can see, touch, and feel it.

They will not care one iota about CPU differences, BASIC implementation details, cycle-stealing graphics implementations, etc. That's all noise to them. Instead they'll see, "Ok, so that's that..." and then... "Wow!"

A related anecdote: back in the day, my friend took his sister to see Star Wars in the theater. It was her first time.

With the very first scene, when the rebel cruiser appeared she said "Wow, that's a big ship!".

Little did she know.

That's what you want the young people to hold in their hand.

I watched the classic Sieve run on an Apple II. It takes far too long, 500+ seconds. So it should either be shortened (fewer iterations), or another benchmark should be used.

You can always make a presentation of benchmarks over time across different machines. Start with something that an 8-bit can finish (like the Sieve), but when it gets too short for later hardware, bring up another benchmark that is more suitable, and see how it progresses to modern hardware. That's probably more interesting as a chart.

But the web-based simulator is still, I think, the best, most tangible thing you can offer them. There's an Apple II one (the first search hit), but out of the box it only goes up to 4 MHz. I imagine it could be uncorked; it's open source. I haven't looked for a TRS-80, C64, or DOSBox-on-the-web kind of thing yet. You don't have to have them all represented. One would be enough.
 
I think my BASIC program is more understandable. And yes, there are differences in run time from BASIC interpreter to BASIC interpreter. But the BASIC interpreter is part of the historic system. On some of them, like the 8-bit Commodores and Ataris, or the first version of the IBM PC, it's even built into ROM. Others have it on the system disk by default, like GW-BASIC in DOS. And the BASIC code is easily understandable, and it should be possible to rewrite it in JavaScript to run on recent machines. You can see and understand what it does. Just take the BASIC code above, modify it so that it runs in a loop for one hour, count the number of loops per hour on different systems, and compare. The final result you can see is that computers have improved not only in MHz, but also in arithmetic performance, parallelized threads, and software techniques, which all together increase speed.
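Counting completed passes in a fixed window neatly sidesteps the clock-resolution problem: a bigger count is always measurable, however fast the machine. A JavaScript sketch of that idea (my own; the one-hour window is shortened here purely for illustration):

```javascript
// Run a benchmark function repeatedly for a fixed window; report completed runs.
// Fast machines finish many runs, slow machines few - the count is the score.
function loopsPerWindow(benchmark, windowMs = 5000) {
  let runs = 0;
  const end = Date.now() + windowMs;
  while (Date.now() < end) {
    benchmark();
    runs++;
  }
  return runs;
}
// Example with a trivial stand-in workload (swap in the fractal or the sieve):
const score = loopsPerWindow(() => { let s = 0; for (let i = 0; i < 1e6; i++) s += i; }, 200);
console.log(`${score} runs completed in the window`);
```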
 
Including video, hard drive, and memory speed. Even 1 wait state vs 0 wait states makes a big difference.

One of the things about the 1 MHz 6502 that blew my mind was that in many ways it's faster than an original 4.77 MHz 8088. The Intel chip takes many clock cycles to perform some instructions where the 6502 needs only a few.

...Then there were the early AMD K5 processors that in theory performed faster than an equivalent Pentium, as long as you didn't need floating-point calculations. It really hurt when games like Quake became popular.
 