
the IBM/370 in comparison

vol.litwr

I've tried to compare the IBM/370 with other contemporary architectures in my blog entry. I will be glad if somebody can add or correct it. Thank you.
 
None of the architectures that you cite in your blog entry are precisely contemporary with the S/370, which is, after all, the 1964 S/360 on steroids. A more accurate comparison might be between the offerings of the "seven dwarfs" and those of Snow White.
 
You have missed the whole point of the S/370: the channel. A S/360 I/O channel could support up to 256 devices and had a bandwidth of 8 MBytes (not bits) per second with no intervention from the main CPU other than the initial Start IO instruction, which passed the channel program to the channel. A channel program allows scatter/gather I/O and can also loop. A S/370 channel can do 4.5 MBytes/second. Just to put that in perspective, it's five times faster than 10Base5 or 10Base2 Ethernet. A 4381 could have up to 16 channels. Some disk I/O subsystems had RPS (rotational position sensing) and could re-order an I/O list so that it was done in the order the sectors came round.
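The channel-program idea above can be sketched in (decidedly non-period) Python. The CCW field widths follow the S/360 channel architecture, but the device data, buffer addresses, and command code here are invented for illustration:

```python
# Toy model of a System/360 channel program: a chain of CCWs.
# A real CCW is 8 bytes: command code (8 bits), data address (24 bits),
# flag bits, and a byte count. Here we model only the scatter/gather idea:
# one device read spread across several buffers, no CPU work per byte.
from dataclasses import dataclass

CD = 0x80  # chain-data flag: continue the same operation into the next CCW

@dataclass
class CCW:
    command: int   # e.g. 0x02 = read (device-dependent; invented here)
    address: int   # main-storage address of this buffer
    flags: int
    count: int     # bytes to transfer into this buffer

def run_channel_program(ccws, device_data, memory):
    """Execute a data-chained read against a simulated device stream."""
    pos = 0
    for ccw in ccws:
        chunk = device_data[pos:pos + ccw.count]
        memory[ccw.address:ccw.address + len(chunk)] = chunk
        pos += len(chunk)
        if not ccw.flags & CD:   # no chain-data flag: end of the chain
            break
    return memory

memory = bytearray(64)
# Scatter a 12-byte record into three separate 4-byte buffers.
program = [CCW(0x02, 0, CD, 4), CCW(0x02, 16, CD, 4), CCW(0x02, 32, 0, 4)]
run_channel_program(program, b"AAAABBBBCCCC", memory)
```

The point the post makes is that this whole chain runs in the channel after a single Start IO; the CPU is only interrupted when the chain completes.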

They were machines designed to do IO rather than CPU, or rather IO and CPU without interference. Nothing like it on the planet.....
 
Oh, and the original machine at LCM was a real 4361. It was off the air for a while with a PSU issue. Someone has put a replacement running on the Hercules emulator on-line.
 
Oh, and the release is wrong: VM/370 R6 was the last free version, and that is the one you have, without XEDIT. It never had XEDIT; the full-screen editor add-on was called EDGAR.
VM/SP 6 (SP == System Product) had XEDIT.

So we had:-

CP-40 and CP-67 => early versions: CP-40 for a modified 360/40 (internal use only) and CP-67 for the 360/67
VM/370 R1 through R6 => "Free releases" for S/370. Most folks with Hercules run these.
VM/SP R1 to R6 => System Product, so paid-for and chargeable. The LCM system runs SP5 and has XEDIT.
VM/XA SF and VM/XA SP => Initial 31 bit offering. Mainly for MVS migration.
VM/ESA => Production quality 31 bit offering
zVM => Current offering. Now 64bit.

There was also SEPP and BSEPP which were enhancements to VM/370 and had to be installed as separate products.
VM/HPO was an (expensive) add-on for VM/SP, needed for larger configurations.
 
They were machines designed to do IO rather than CPU, or rather IO and CPU without interference. Nothing like it on the planet.....

Study CDC architectures much? The STAR (ca 1969) that I worked with used a 544-bit wide channel for disk accesses. Other I/O was managed through SBUs (standalone 16-bit tightly-coupled minis).

Other big iron manufacturers had analogous capabilities.
 
An interesting article but from my perspective there is some bias since comparing a S/360 and derivatives to a microprocessor is an apples to oranges comparison. The S/360 architecture was designed as a data center multi-tasking machine running business applications rather than as a personal computer with fancy graphics. Thus the inclusion of floating point and packed decimal formats from the very beginning. One just has to look at the vast number of COBOL applications that were written, many of them still running today fifty years later. Likewise the S/360 I/O architecture was based on a plethora of multi-tasked I/O devices such as punched cards, tape drives, disk drives and screens with varying capacities and transfer speeds.
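Packed decimal, mentioned above, is simple enough to sketch: two BCD digits per byte, with the sign in the low nibble of the last byte (0xC = plus, 0xD = minus), as used by the S/360 decimal instructions. The helper names and example values below are mine:

```python
# Packed-decimal (BCD) encoding, S/360 style: two digits per byte,
# sign nibble (0xC plus, 0xD minus) in the low nibble of the last byte.

def pack(value: int, length: int) -> bytes:
    """Encode an integer into `length` bytes of packed decimal."""
    sign = 0xC if value >= 0 else 0xD
    digits = str(abs(value)).rjust(2 * length - 1, "0")
    nibbles = [int(d) for d in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

def unpack(data: bytes) -> int:
    """Decode packed decimal back to an integer."""
    nibbles = [n for b in data for n in (b >> 4, b & 0xF)]
    value = int("".join(str(n) for n in nibbles[:-1]))
    return -value if nibbles[-1] == 0xD else value

# -1234 in three bytes: digits 0,1,2,3,4 plus the minus sign nibble.
assert pack(-1234, 3) == bytes([0x01, 0x23, 0x4D])
```

This is why COBOL money arithmetic worked exactly on these machines: decimal quantities were computed in decimal, with no binary rounding.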

For reference, my background includes writing tens of thousands of lines of assembler on S/360 and derivatives, both for data center type work and also multi-tasking routines for real time process control systems running entire plants and many dozens of screens, all on a single S/370. I've also written a *LOT* of assembler code for the Z80/180/eZ80 plus various other microprocessors such as PICs, the 808x, the 68HC11, etc. The S/360 architecture will always have a very fond place in my memory for its uniformity and consistency. Likewise the S/360's assembler macro processor, even the basic one let alone ASM-H, far exceeds anything I've come across on other processors.

The article uses the term "archaic" in reference to the use of various mnemonics for groupings such as L, LR and LH yet I believe they're much more explicit and self identifying than some of the subtle issues that can get an 8086 assembler programmer into real problems. And yes I much prefer Zilog mnemonics versus Intel ones, although they would have been even more self evident to the programmer if there was both a LOAD and a STORE rather than just a LOAD and the more subtle bracketing.

I take exception when people say that the S/360 4K addressing is a major limitation. In my opinion that is naive, and I generally hear it from persons who have not written a lot of S/360 code and don't understand the power of index registers and DSECTs. 4K of efficient assembler is a lot of code and can easily be extended to perhaps 32K with multiple base registers, which is MUCH more than I'd recommend to any assembler programmer. Large amounts of code should be able to be broken into logical subroutines which the linker can resolve. Likewise there is a very consistent subroutine protocol used by "standard" S/360 code that should prevent errors and makes it easy to link code from different languages. For things like trigonometric functions in assembler, I often called the FORTRAN library functions rather than try writing them from scratch.
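The addressing rule being defended is base + index + 12-bit displacement. A minimal Python sketch of the effective-address calculation (the register numbers and "DSECT" field offsets below are invented for illustration):

```python
# S/360 RX-format effective address: contents of a base register, plus
# contents of an index register, plus a 12-bit displacement (0..4095).
# A DSECT is just a set of named displacements applied through whatever
# base register currently points at the data.

DISP_BITS = 12

def effective_address(regs, base, index, disp):
    assert 0 <= disp < (1 << DISP_BITS), "displacement must fit in 12 bits"
    addr = disp
    if base:                 # register number 0 means "no base register"
        addr += regs[base]
    if index:                # likewise for the index
        addr += regs[index]
    return addr & 0xFFFFFF   # 24-bit addressing on S/370

# "DSECT" for a record: symbolic displacements, no storage of their own.
CUSTNAME, CUSTBAL = 0x000, 0x040

regs = {13: 0x20000}         # R13 points at the current record
assert effective_address(regs, 13, 0, CUSTBAL) == 0x20040
```

Re-pointing the base register at a different record makes the same 4K of displacements address new data, which is the idiom the post describes.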

Another very powerful aspect of the base-index register architecture is that it allows the programmer to write reentrant, reusable and relocatable code, which is not easily possible on some linear architectures. When I see multi-megabyte modules on a PC I often wonder how many repetitious copies of various subroutines are included, let alone layers and layers of APIs.

In the end, I think most programmers become biased to the architectures and language they either used first and/or the most. I prefer to write efficient assembler code rather than use giga-cycles and giga-bytes. As to the underlying architecture ... base-index or linear model are both usable but consistent and understandable mnemonics and instructions without quirky exceptions are more important.
 
Back in the day, my biggest gripe with the S/360 was the awful floating point implementation--24 bit mantissa and 8 bit exponent, normalized to 4 bit boundaries. Vastly inferior to the previous generation 7090.
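The format being complained about is S/360 hexadecimal floating point: 1 sign bit, a 7-bit excess-64 base-16 exponent, and a 24-bit fraction. Because normalization is to a hex digit rather than a bit, up to three leading fraction bits can be zero, so effective precision wobbles between 21 and 24 bits. A small Python decode of the 32-bit "short" format (the test words are the standard encodings of 1.0 and 0.5):

```python
# S/360 short hexadecimal float: sign (1 bit), excess-64 base-16
# exponent (7 bits), fraction (24 bits). value = sign * 0.F * 16**(e-64)

def decode_hfp_short(word: int) -> float:
    sign = -1.0 if word >> 31 else 1.0
    exponent = ((word >> 24) & 0x7F) - 64        # excess-64, base 16
    fraction = (word & 0xFFFFFF) / float(1 << 24)
    return sign * fraction * 16.0 ** exponent

# 1.0 is 0x41100000: exponent 1, fraction 1/16 -- one hex digit used.
assert decode_hfp_short(0x41100000) == 1.0
# 0.5 is 0x40800000: the fraction's top hex digit is 8, wasting 3 bits
# of precision relative to a binary-normalized format.
assert decode_hfp_short(0x40800000) == 0.5
```

The nibble normalization is exactly the "normalized to 4 bit boundaries" gripe: a binary-normalized 24-bit mantissa would always deliver the full 24 bits.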

Back in the day, if you were a systems-level implementer, it was a good idea to be conversant in all of the competition's architectures--particularly when writing responses to RFPs. As in "We're better than xxx corp. because..."
 
Study CDC architectures much? The STAR (ca 1969) that I worked with used a 544-bit wide channel for disk accesses. Other I/O was managed through SBUs (standalone 16-bit tightly-coupled minis).

Other big iron manufacturers had analogous capabilities.

Others of the BUNCH had similar structures, but none so uniform as the S/360. As far as I can tell the STAR dates from about five years later than the S/360, which introduced the concept.

And yes, even the IBM 7090 had something similar before, but the great thing about the channel was that it was structured, architected, and uniform. It's amazing to think you could take a peripheral from a 1965 S/360 and plug it into an ES9370 some 25 years later and it would still work. I can't think when the last bus-and-tag interface totally vanished, but even in 1999 they were available for the newly announced MP3000 "mini mainframe".

By the way, the CDC machines were awesome, especially for number crunching. I haven't studied the I/O architecture, perhaps because I was so stunned by the CPU design, which was just pushing the boundaries of what could be done at the time. But of course Seymour Cray did cut his teeth there.
 
The STAR was a Jim Thornton design (he of the 6400) after Seymour couldn't get backing for his 8600 project. The PPU thing of the 6000 series dates from 1964 and is basically a single 160A-type core that's multiplexed between 10 memory and register sets (i.e. "slot in the barrel"). The 7600 got rid of the multiplexing (10 independent PPUs) and assigned each to a fixed buffer area in memory, rather than giving all unfettered access to CPU memory.

STAR was a wholly different affair, quite separated from mainline CDC efforts. I think it would not be an exaggeration to say that most CDC employees were unaware of its existence. A big (physically large) virtual memory vector machine based on 64-bit words with wide-pipelined functional units. Ultimately ended with the ETA liquid-nitrogen cooled systems. A wild ride...

Cray simply dropped the idea of implementing peripherals with intelligent channels and just gave customers a fast interface to the CPU. S/370 systems used as I/O processors were not uncommon with the Cray I.

The OP would benefit by looking at the Univac 1100/2200 systems; details of which seem to have escaped popular notice, but very sophisticated stuff, particularly in the OS. It seems odd to see 6-bit character systems being manufactured well into the 1980s and 90s, but there was a good reason for that--DoD MIL-STD-188.
 
Back in the day, my biggest gripe with the S/360 was the awful floating point implementation--24 bit mantissa and 8 bit exponent, normalized to 4 bit boundaries. Vastly inferior to the previous generation 7090.

Back in the day, if you were a systems-level implementer, it was a good idea to be conversant in all of the competition's architectures--particularly when writing responses to RFPs. As in "We're better than xxx corp. because..."

The float was horrid. Single precision is useless; you need double. They really cut corners on the floating point and had to add guard bits, which meant retrofitting all the machines already in the field...
 
Contrast with the CDC 6600 (exactly contemporaneous with the S/360). Single-precision 60 bit used a 48 bit mantissa. Double-precision used 96 bits. Included rounding options in the ISA.

One of the things that kept Unisys on 36 bits was that the S/360 single-precision float did not meet DoD requirements, but the 1100 series with 27 bit mantissa did (DP used a 60 bit mantissa), bit-normalized, not nibble-normalized like the S/360.
 
You have missed the whole point of the S/370. ...
Thank you very much. I have added a phrase "It is also worth noting that mainframes used channels, specialized processors that made I/O very fast, much faster than it was for most other computers of those years".

Oh and the original machine at LCM was a real 4361. It was off air for a while with a PSU issue. Some one has popped a replacement running on the Hercules emulator on-line.
Thank you, but why are people so cryptic about the model number? I also noted that timings for some string instructions were changed. How is that possible on real hardware?

Oh, and the release is wrong: VM/370 R6 was the last free version, and that is the one you have, without XEDIT. It never had XEDIT; the full-screen editor add-on was called EDGAR.
VM/SP 6 (SP == System Product) had XEDIT.

Thank you very much. Excuse me, but I am still missing something. I am using VM/370 Release 6 "SixPack" version 1.2 and CMS VERSION 6, made by Robert O'Hara in October 2010. Is my system VM/SP 6 or does it have another title? My system doesn't have XEDIT or EDGAR. :( I also don't have IND$FILE there.

I take exception when people say that the S/360 4K addressing is a major limitation. In my opinion that is naive, and I generally hear it from persons who have not written a lot of S/360 code and don't understand the power of index registers and DSECTs. 4K of efficient assembler is a lot of code and can easily be extended to perhaps 32K with multiple base registers, which is MUCH more than I'd recommend to any assembler programmer. Large amounts of code should be able to be broken into logical subroutines which the linker can resolve. Likewise there is a very consistent subroutine protocol used by "standard" S/360 code that should prevent errors and makes it easy to link code from different languages. For things like trigonometric functions in assembler, I often called the FORTRAN library functions rather than try writing them from scratch.

Another very powerful aspect of the base-index register architecture is that it allows the programmer to write reentrant, reusable and relocatable code, which is not easily possible on some linear architectures. When I see multi-megabyte modules on a PC I often wonder how many repetitious copies of various subroutines are included, let alone layers and layers of APIs.

In the end, I think most programmers become biased to the architectures and language they either used first and/or the most. I prefer to write efficient assembler code rather than use giga-cycles and giga-bytes. As to the underlying architecture ... base-index or linear model are both usable but consistent and understandable mnemonics and instructions without quirky exceptions are more important.

Thank you very much. IMHO it is not an easy task to write giga-code. :) However, I don't understand your point about the base-index register architecture, because the x86 uses it too. The 68000 was an exception because it can use only 8-bit offsets with base-index addressing, but it allows you to use PC-relative addressing, which is often the best way to do a memory access.

4095-byte offsets may not be enough if you need a lot of global variables. Even if you spent 10 registers on basing, it would give you only 40 KB, which is not much. System/z therefore provides 20-bit offsets.

The OP would benefit by looking at the Univac 1100/2200 systems; details of which seem to have escaped popular notice, but very sophisticated stuff, particularly in the OS. It seems odd to see 6-bit character systems being manufactured well into the 1980s and 90s, but there was a good reason for that--DoD MIL-STD-188.

It is really fantastic how many different computer architectures were produced in the USA before the 80s. The rest of the world had almost nothing to compare with that number. The UK and the USSR also had interesting computer architectures, but far fewer of them.
 
Thank you very much. IMHO it is not an easy task to write giga-code. :) However, I don't understand your point about the base-index register architecture, because the x86 uses it too. The 68000 was an exception because it can use only 8-bit offsets with base-index addressing, but it allows you to use PC-relative addressing, which is often the best way to do a memory access.

4095-byte offsets may not be enough if you need a lot of global variables. Even if you spent 10 registers on basing, it would give you only 40 KB, which is not much. System/z therefore provides 20-bit offsets.
Your example typifies what I've heard many times from inexperienced assembler programmers and my response is "Who needs 10,000 uniquely named 32-bit variables in an assembler program?". Large amounts of data are typically stored in vectors, matrices or structures defined by DSECTs in S/370 parlance. Likewise, large database structures often use a linked list structure.

One S/370 register can directly address 1K of 32-bit labeled variables when used as a base register (i.e. displacement addressing), or 4 million such variables when used as a 24-bit index register. Similarly in a simple matrix layout, 1K * 4M = 4 gig of 32-bit variables. If those variables are structure pointers, that single register could be used to address 4G of 4K structures = 16TB of data. Thus the S/370 addressing limitations are not in the instruction format but rather in the address space provided by the operating system.
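The arithmetic in that paragraph can be checked directly (assuming 4-byte variables, a 12-bit displacement, and 24-bit index addressing, as on the S/370):

```python
# Sanity-checking the address-space arithmetic: how many 32-bit
# variables one register can reach via displacement vs. index.

VAR = 4                       # bytes per 32-bit variable
disp_vars  = 2**12 // VAR     # reachable via the 12-bit displacement alone
index_vars = 2**24 // VAR     # reachable via a 24-bit index register

assert disp_vars == 1024                  # "1K of 32-bit labeled variables"
assert index_vars == 4 * 1024**2          # "4 million such variables"
assert disp_vars * index_vars == 2**32    # "1K * 4M = 4 gig" of variables

# If those 4G variables are pointers to 4K structures:
assert 2**32 * 4096 == 16 * 1024**4       # = 16 TB of addressable data
```

So the instruction format itself is not the bottleneck; the operating system's address space is, which is the post's conclusion.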

I think this points out the difference in high-level language programmers versus experienced assembler programmers. High-level programmers tend to think in terms of a flat data model with everything directly addressable whereas good assembler programmers think in terms of data structures and efficient data addressing techniques.
 
It is really fantastic how many different computer architectures were produced in the USA before the 80s. The rest of the world had almost nothing to compare with that number. The UK and the USSR also had interesting computer architectures, but far fewer of them.

In terms of modern thinking, the S/360 is nothing special. Consider some of the older architectures, such as the immediate predecessors of the S/360--say, the 7090/7094 or the 7070/7080 (for commercial work). Very different beasts, conceptually. (Some may argue that the 7030 STRETCH was the legitimate parent, but few saw real use in the field.)

My soft spot for "friendly" ISAs is the 1620, particularly the Model I CADET. More regular and straightforward than the 1401, with interesting "quirks" that Edsger Dijkstra loathed. Yet a system that can be operated by a student just learning assembly language.

Of course, in the annals of ISAs, the Burroughs 5000/5500 is legend.
 
Re: versions:-

IBM treats the various versions of VM as separate "products" and re-uses the release numbers.

So the original 370 product was called VM/370, was free, and had 6 releases. You have VM/370 Release 6: no XEDIT or REXX. Robert's 6-pack has BREXX, not IBM REXX.
Next came the VM/System Product, or VM/SP. This is a different product and also had releases R1 through R6, but these numbers don't have anything to do with the VM/370 releases.
VM/SP releases 4, 5 & 6 have XEDIT and REXX. It was licensed by IBM, so it's not possible to legally get a copy.
The LCM machine ran VM/SP Release 5 under a special licence from IBM and that is now running on emulation at tty.livingcomputers.org port 24.
There are guest IDs so you can try XEDIT and IBM REXX.

So in historical order,
VM/370 R1 -> R6,
VM/SP R1 -> R6
VM/XA SF
VM/XA SP
VM/ESA
zVM

The machine at LCM may have been a prototype, or upgraded.
 
Of course, in the annals of ISAs, the Burroughs 5000/5500 is legend.
Thank you for mentioning those computers. I have read some information about them; it has been really interesting. Only mainframes from IBM and ICL were known in the USSR. I would even suppose that 99% of the mainframes there were IBM/360-compatible and 1% were from ICL, and most of ICL's mainframes were IBM/360-compatible too.
 