They were machines designed to do I/O rather than just CPU work; or rather, I/O and CPU work without interference between the two. Nothing like it on the planet.
Study CDC architectures much? The STAR (ca 1969) that I worked with used a 544-bit wide channel for disk accesses. Other I/O was managed through SBUs (standalone 16-bit tightly-coupled minis).
Other big iron manufacturers had analogous capabilities.
Back in the day, my biggest gripe with the S/360 was the awful floating point implementation--24-bit fraction and 7-bit base-16 exponent, normalized to 4-bit boundaries. Vastly inferior to the previous generation 7090.
Back in the day, if you were a systems-level implementer, it was a good idea to be conversant in all of the competition's architectures--particularly when writing responses to RFPs. As in "We're better than xxx corp. because..."
Thank you very much. I have added a phrase: "It is also worth noting that mainframes used channels, specialized processors that made I/O very fast, much faster than it was for most other computers of those years."

You have missed the whole point of the S/370. ...
Thank you, but why are people so cryptic about the model number? I also noted that timings for some string instructions were changed. How is that possible on real hardware?

Oh, and the original machine at LCM was a real 4361. It was off the air for a while with a PSU issue. Someone has put a replacement on-line, running on the Hercules emulator.
Oh, and the release is wrong: VM/370 R6 was the last free version, and that is the one you have, without XEDIT. It never had XEDIT. The full-screen editor add-on was called EDGAR.
VM/SP 6 (SP == System Product) had XEDIT.
I take exception when people say that the S/360 4K addressing is a major limitation. In my opinion that is naive, and I generally hear it from people who have not written a lot of S/360 code and don't understand the power of index registers and DSECTs. 4K of efficient assembler is a lot of code and can easily be extended to perhaps 32K with multiple base registers, which is MUCH more than I'd recommend to any assembler programmer. Large amounts of code should be broken into logical subroutines which the linker can resolve. Likewise, there is a very consistent subroutine protocol used by "standard" S/360 code that should prevent errors and makes it easy to link code from different languages. For things like trigonometric functions in assembler, I often called the FORTRAN library functions rather than trying to write them from scratch.
Another very powerful aspect of the base-index register architecture is that it allows the programmer to write reentrant, reusable and relocatable code, which is not easily possible on some linear architectures. When I see multi-megabyte modules on a PC, I often wonder how many repetitious copies of various subroutines are included, let alone layers and layers of APIs.
In the end, I think most programmers become biased toward the architectures and languages they used first and/or the most. I prefer to write efficient assembler code rather than use giga-cycles and giga-bytes. As to the underlying architecture ... base-index and linear models are both usable, but consistent, understandable mnemonics and instructions without quirky exceptions are more important.
The OP would benefit from looking at the Univac 1100/2200 systems, details of which seem to have escaped popular notice, but very sophisticated stuff, particularly in the OS. It seems odd to see 6-bit character systems being manufactured well into the 1980s and 90s, but there was a good reason for that--DoD MIL-STD-188.
Your example typifies what I've heard many times from inexperienced assembler programmers, and my response is "Who needs 10,000 uniquely named 32-bit variables in an assembler program?" Large amounts of data are typically stored in vectors, matrices, or structures defined by DSECTs in S/370 parlance. Likewise, large database structures often use a linked-list structure.

Thank you very much. IMHO it is not an easy task to write giga-code. However, I don't understand your point about the base-index register architecture, because the x86 uses it too. The 68000 was an exception because it could use only 8-bit offsets with base-index addressing, but it allows you to use PC-relative addressing, which is often the best way to do a memory access.
4095-byte offsets may not be enough if you need a lot of global variables. Even if you spent 10 registers on basing, that gives you only 40 KB, which is not much. z/Architecture therefore provides 20-bit offsets.
It is really fantastic how many different computer architectures were produced in the USA before the 80s. The rest of the world had almost nothing comparable in number. The UK and the USSR also had interesting computer architectures, but far fewer of them.
Thank you for mentioning those computers. I have read some information about them; it has been really interesting. Only mainframes from IBM and ICL were known in the USSR. I would even suppose that 99% of the mainframes there were IBM/360 compatible and 1% were from ICL, and most of ICL's mainframes were IBM/360 compatible too.

Of course, in the annals of ISAs, the Burroughs 5000/5500 is legend.