
Modern Mainframes



ziloo
December 29th, 2017, 02:33 AM
There are many people in this forum who know the previous generation
of mainframe computers inside out. When people read about "big-iron"
computers, they usually find it hard to believe how large and sprawling
they were, and how much faster and more capable today's home
computers are. As usual I want to ask a trivial question:

Is there any task that mainframe computers of previous generations
can do better than present-day PCs?

I read that IBM is now one of the main manufacturers of modern
mainframes, but I have no idea as to the "super power" of these
machines. I want to have a mental image of what these computers
are capable of doing; what is so spectacular about them?

ziloo :mrgreen:

cruff
December 29th, 2017, 03:56 AM
Mainframes are usually designed with "RAS" - reliability, availability and serviceability (see https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_RAS.htm for more details). Basically, they are built to handle large amounts of data and lots of simultaneous transactions while providing extremely high uptime.

For example, a zSeries mainframe is provisioned with extra CPUs that are not enabled for customer use and that can be used as spares if another CPU fails, or to provide a "capacity on demand" option where the customer can temporarily lease extra CPU capacity to handle a spike in load. The zSeries also has the capability to be logically partitioned into multiple systems, with the ability to assign fractional amounts of CPU capacity to a partition. For example, it is possible to assign 30% of a CPU's capacity to a partition, which can save on software licensing costs. This makes it easy to have a testing partition where you can verify changes before they are rolled out to the production partition.
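A rough sketch of how weight-based capacity sharing like that might be computed (hypothetical partition names and weights; the real PR/SM dispatcher is far more involved):

def lpar_entitlement(weights, shared_cpus):
    """Convert LPAR weights into fractional CPU entitlements (illustrative only)."""
    total = sum(weights.values())
    return {name: shared_cpus * w / total for name, w in weights.items()}

# Two hypothetical partitions sharing 10 physical CPUs: TEST ends up entitled
# to about 0.3 of a CPU, roughly the "30% of a CPU" case described above.
print(lpar_entitlement({"PROD": 970, "TEST": 30}, 10))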

It is also possible for the partitions to be running entirely different OS versions or entirely different OS types, e.g. z/OS, z/VM and Linux. These partitions can intercommunicate using high-speed channels that are internal to the physical system. The partitioning support enforces an additional measure of separation between the partitions, such that if one of the partitions is broken into, it cannot even detect the I/O devices or memory in use by the other partitions.

I/O devices can be attached to or detached from systems and partitions on the fly if desired, with support from the OS of course.

Another feature (or possibly a misfeature as some might say) is that the operating systems have had years of work done to them removing bugs and maintaining compatibility with software written long ago. This lets you continue to run software paid for years ago, possibly for which the source can no longer be found. The downside is that you get to use software that has arcane names for commands and the impenetrable control structures and formats of something like IBM's JCL.

Mr. Horse
December 29th, 2017, 06:12 AM
The biggest thing I can think of with mainframes, supercomputers, and IPC systems is reliability over home systems.
Hot swapping is a thing that is not often seen on home systems, but many mainframes allow the removal and installation of add-in cards while the system is running.

Then with mainframes and supercomputers you have (most of the time) a far faster FPU than even new home computers.

Chuck(G)
December 29th, 2017, 07:01 AM
In absolute terms, no. Moore's law and time always win the game. The other corollary is the "light nanosecond" advantage of smaller technologies. This was obvious when the first integrated circuits came out--and progress has never even slowed.

In relative terms, maybe. Large mainframes were probably better equipped in the I/O department, being able to transfer data at memory bandwidth speeds. Supporting a hundred remote terminals on a machine with a 10MHz clock wasn't unusual.

Progress, however, is uneven. Have memory and I/O kept up with processor speeds?

Another aspect is that old mainframe code didn't waste a lot of time on graphical user interfaces and other niceties. Code bloat was anathema, while today it's a way of life.

I like the older systems because of their differentness. In the old days, we were still figuring a lot of things out and system architectures varied wildly. Today, it seems that 8-bit bytes, byte-addressable binary machines are a given. That hasn't always been true by a long shot.

pearce_jj
December 29th, 2017, 07:56 AM
To be blunt, Amazon Web Services has completely superseded what can be achieved with any mainframe or supercomputer in terms of application scale, uptime, or cost (minimisation), and this is why AWS skills are currently so valuable. The auto scaling and cross-region capabilities make it possible to architect applications on AWS that can scale to ridiculous proportions, and then scale back in during quiet times to control costs.

Chuck(G)
December 29th, 2017, 08:23 AM
James, not every computational task yields to massive parallelism; cf. Amdahl's law.
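For anyone who hasn't run the numbers, a quick sketch of Amdahl's law (the generic formula, not tied to any particular machine):

def amdahl_speedup(parallel_fraction, n_workers):
    """Overall speedup when only parallel_fraction of the work can be spread out."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)

# A job that is 95% parallelizable tops out near 20x, even with 1024 workers.
print(amdahl_speedup(0.95, 1024))   # ~19.6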

ziloo
December 29th, 2017, 09:02 AM
....Amazon Web Services has completely superseded what can be
achieved with any mainframe or super computer ......

So....what is AWS made of?

pearce_jj
December 29th, 2017, 09:21 AM
Commodity hardware and some very capable software-defined networking at the infrastructure layer, plus a number of platform services such as managed databases, big data, and AI; there are something like 1,500 products in it.

krebizfan
December 29th, 2017, 09:21 AM
So....what is AWS made of?

Racks and racks of standard servers. Microsoft Azure is similar but more public with the details of how it is set up internally. https://www.nextplatform.com/2016/11/01/microsoft-azure-goes-back-rack-servers-project-olympus/

IBM Z-series uses special chips that are faster clock for clock than offerings from Intel or AMD. IBM's EC12 was listed as running at 5.5 GHz, which would roughly equal a current Intel chip running at about 8 GHz. Each IBM chip had 6 cores at that speed, and a fully configured system offered 101 such cores for customer use. Intel-based servers would have better performance per watt, provided that the workloads could scale over the much larger number of cores needed.

ziloo
December 29th, 2017, 10:24 AM
Do I understand correctly that:

-Modern mainframes are still designed around purpose-built hardware
innovations for dedicated workloads.

-Parallel and distributed computing is more of a brute-force method that
breaks a task down into simpler sub-tasks to be handled by multiple computing units.

krebizfan
December 29th, 2017, 10:45 AM
A more powerful single-threaded CPU is probably more of a brute-force method. Parallel computing, which includes distributed methods, is a smarter solution but requires a lot of work from the programmer, who needs to figure out how to divide up the work. Some problems have no known method of spreading across multiple cores. Other problems are easily handled by parallel systems. Facebook has many thousands of low-performance threads running at any time, perfect for a system with huge numbers of weak cores.

Even the specialized mainframes use chips similar to standard designs. IBM's mainframe chips are a high-performance variation on the Power lineup. Supercomputers are geared more to very parallel workloads and are often designed around large numbers of graphics-card chips. Nvidia Tesla may not be as good as a specialized parallel design, but mass production drops the cost to a more reasonable level. See https://en.wikipedia.org/wiki/Sierra_(supercomputer) for an example.

All current big computers are parallel relying on lots of cores. Some just start off with more powerful cores in the requisite large numbers. Unfortunately, no one has figured out a good method for greatly increasing single threaded performance that does not send power consumption through the roof.

NeXT
December 29th, 2017, 11:29 AM
I read that IBM is now one of the main manufacturers of modern
mainframes, but I have no idea as to the "super power" of these
machines. I want to have a mental image of what these computers
are capable of doing; what is so spectacular about them?

ziloo :mrgreen:

Linus of all people made a pretty handy video that best describes why Mainframes are still a thing. Don't really try thinking of them as either really high-end servers or a type of supercomputer.
https://youtu.be/ximv-PwAKnc

Basically the I/O of a mainframe is untouchable by PCs, servers or supercomputers.


To be blunt, Amazon Web Services has completely superseded what can be achieved with any mainframe or supercomputer in terms of application scale, uptime, or cost (minimisation), and this is why AWS skills are currently so valuable. The auto scaling and cross-region capabilities make it possible to architect applications on AWS that can scale to ridiculous proportions, and then scale back in during quiet times to control costs.

No, it has not. Additionally, there are a lot of things with regard to security that make it more ideal to run your own mainframe than to outsource.

Charles Anthony
December 29th, 2017, 02:11 PM
Multics running on the Honeywell DPS8-M mainframe achieved a B2 security rating.

This was possible because much of the security was built into the hardware.

The current economics are so MIPS/dollar driven that CPU designers pass the cost of security downstream to the software.

There are probably contemporary CPU designs more secure than the DPS8-M, but they are a tiny, specialized, niche market.

Chuck(G)
December 29th, 2017, 06:58 PM
Yup, but Multics never ran ROVER. :) There was a lot going on in screen rooms that has never been discussed openly.

g4ugm
December 30th, 2017, 01:48 AM
Having worked on both modern mainframes and virtualized Intel server farms, IMHO the main reason mainframes are retained is that the sheer cost of reworking the code totally exceeds the cost of keeping a "mainframe" running. IBM is the only manufacturer of mainframes and keeps the market alive because big users such as banks and airlines have risk policies that won't let them use unsupported software. IBM leverages this by having a strict support life cycle for both hardware and software, and by ensuring that new supported software releases won't run on older hardware, so in order to stay supported both hardware and software must be upgraded together. Unlike Microsoft software, every version of IBM mainframe software has its own licence. There are no downgrade rights, for example...

As for "is there anything you can only do on a mainframe" I don't believe that technically there is anything you can do on a mainframe that you can't do on a big Intel box. As some one else said the CPU power is in many cases not the limiting factor, its IO that limits the system. You may have to think a bit harder but its doable. So for example UK Air Traffic control no longer uses IBM mainframes. It still uses some of the IBM code under custom emulation, but no IBM boxes are involved. The last time it failed it was a table overflow in the emulated IBM code....

https://www.nats.aero/wp-content/uploads/2015/02/v3%200%20Interim%20Report%20-%20NATS%20System%20Failure%2012%20December%202014.pdf

of course there are things.....

.... and whilst mainframes are fun, having managed a VMware cluster running a couple of hundred Windows images doing live load balancing by moving running systems between servers, I would say in some ways Intel server hardware is ahead of "mainframes"...

ziloo
December 30th, 2017, 03:05 AM
Without going into too much in-depth detail that would be a
burden to you, I would like to know:

If I compare the schematics of, let's say, an S100 computer and a
1970s-generation IBM mainframe, what would be the major differences
between the two?


ziloo :)

Chuck(G)
December 30th, 2017, 11:22 AM
Microprogramming, I/O channels, orthogonal register sets, virtual memory, greater variety of I/O, multiprocessor configuration, multiuser OS...

Pretty much everything. How do you hook up an S100 box to an IBM 1360 (https://en.wikipedia.org/wiki/IBM_1360)?

ziloo
December 30th, 2017, 12:16 PM
.... How do you hook up an S100 box to an IBM 1360 (https://en.wikipedia.org/wiki/IBM_1360)?

Over a candlelight dinner, maybe.......? :kiss:



ziloo :mrgreen:

RobS
January 3rd, 2018, 12:54 AM
Old mainframes persisted because they were reliable and their owners didn't want to replace the software that they were using unless they really had to. A company in Pennsylvania continued to use Honeywell 200 series mainframes first developed in the 1960's right up until the year 2000, when they had to replace the software because of the millennium bug, so scrapped the machines then as well. The site maintenance engineer told me that the only work that he had to do was replace cooling fans when the bearings failed and clean out the air filters. No doubt the electronics kept working faultlessly because all the components more prone to failure had already been replaced over the decades.

Nowadays one has to consider what a mainframe is or was exactly. I assume that the term refers to the physical structure, i.e. rows of racking with interconnections into which many smaller components are fitted. In the early days a single plug in component might have been just one logic gate, then later modules contained entire functions. Subsequent machines like the Honeywell DPS6 series, which could be as large as some small mainframes, had a complete specialised processor unit within each module. Consequently modern central installations with racks of servers are simply a continuation of the evolution of the mainframe. It's just that the functionality of a single component has continued to increase. Looked at that way a mainframe is simply a roomful of racking frames containing interconnected easily replaced components that benefit from the efficiency and security of all being close together. Hence a modern cloud may physically be a mainframe. The other way to view a mainframe is as a device which has unique central components which aren't replicated, so are a weak point in the design, but on this basis virtually all home computers and even smartphones are mainframes.

Not all old mainframes had all the benefits mentioned however. Using backplanes with sockets made replacement of components easy but cost a lot for the connectors on both the backplanes and the plug-in units. I know because nowadays about half of the cost of getting replica boards made for my Honeywell 200 project is in gold-plating the edge connectors. In some mainframes this cost was avoided by using wire-wrapped connections to the component modules instead of plugs and sockets. Restorers of these old machines have a hard time tracing faults and rectifying them while I routinely remove, swap around and replace modules in my machine because they are the more expensive plug-in type. That is another reason why I view the term mainframe to refer as much to the expensive racking system that contains the electronics as to the electronic components themselves. It really just refers to the central part of any system that is intended to be the most reliable because it is the most essential. Nowadays we may have virtual mainframes in clouds but functionally they are still the same thing, aren't they?

I conclude that in a way the term "big iron" really referred to the racking systems as much as anything and they are certainly still around and essential to any large installation.

ziloo
January 3rd, 2018, 03:26 AM
....The other way to view a mainframe is as a device which has unique
central components which aren't replicated, so are a weak point in the design,
but on this basis virtually all home computers and even smartphones are mainframes...



Which IBM/Honeywell/.... mainframe models still used transistors/discrete components
instead of VLSI in their architecture (no integrated CPU, no ICs at all)?

ziloo :mrgreen:

Chuck(G)
January 3rd, 2018, 07:03 AM
"VLSI" is a loaded term, subject to shift according to the times. If you want to be precise, speak about the level of integration (e.g., gates per square mm).

CDC mainframes, for example, were discrete transistor into the early 70s. Subsequent IC designs didn't improve performance measurably (a Cyber 74 runs at essentially the same speed as a 6600). Individual circuit element speed was less important than nanosecond-foot constraints and cooling issues. That's what, for instance, made early Cray machines so revolutionary--it wasn't so much the architecture, but the fact that he managed to squeeze a lot of circuitry into a relatively small volume (e.g. witness a Cray I backplane's construction). His CDC 7600 was laid out in traditional straight lines, not a circle, and wasted precious nanoseconds in getting signals from one place to another. The cost, of course, was MTTR. Consider the Cray 2 "Bubbles" as an extreme example. The Cray 3 was even more extreme, but never went into production.

I'm speaking about "big iron" not the little DEC, GE, Honeywell, XDS or NCR systems.

Sometimes IC integration could backfire. I recall an attempt that Honeywell made to convert one of its systems to ECL IC technology. The result ran slower (and hotter) than the discrete design--and cost more to build.

ziloo
January 3rd, 2018, 08:19 AM
Brilliant remarks....thank you Rob and chuck!

And as a tribute to Seymour Cray, I found a quote from him:

"Parity is for the farmers"

Apparently he didn't like parity checking...


ziloo :mrgreen:

ziloo
January 3rd, 2018, 08:36 AM
Seymour Cray once asked:

If you were plowing a field, which would you rather use...
...... 2 strong oxen or 1024 chickens?

well the answer is very simple:

It all depends on how wet the field is....


ziloo :mrgreen:

Chuck(G)
January 3rd, 2018, 08:54 AM
You have to understand the context. IBM used parity checking extensively on even their earliest machines.

Seymour's contention was that a well-tested core memory would exhibit few errors (this holds even for DRAM today--parity/ECC failures in tested memory are comparatively rare)--and that a detected parity error would bring the machine to its knees in any case. Recall that Seymour tried to squeeze every available cycle out of the hardware, and that included taking advantage of the write-after-read regeneration of core data for instructions like the exchange jump.

Seymour used parity where appropriate; for example, in the large bulk-core ECS units.

When the 7600 rolled out, it was really pushing core technology to its limits, to the extent that hitting one location too often would create errors because of heating. So the 7600 implemented parity in memory.
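For reference, the parity check being argued about amounts to a population count over each stored word; a toy sketch (not CDC's actual circuitry, which did this with XOR trees):

def parity_bit(word, width=60):
    """Even-parity bit for a memory word (60-bit words on the 6600); illustrative only."""
    word &= (1 << width) - 1          # keep just the word's bits
    return bin(word).count("1") & 1   # 1 if the word has an odd number of ones

stored = 0o1234567012345670123        # a sample 60-bit word, in octal
print(parity_bit(stored))             # bit that would be stored alongside the word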

Sometimes folks read too much into an offhand quip.

ziloo
January 3rd, 2018, 09:17 AM
...Sometimes folks read too much into an offhand quip....

Not really.......but it is always a great conversation starter...
and that is why we are all here.....aren't we? :wink:


ziloo :mrgreen:

cruff
January 3rd, 2018, 03:47 PM
(e.g. witness a Cray I backplane construction).

The "backplane" of a Cray 1 is in fact made up of twisted pair wires that run between the relevant card connections. Fixing issues apparently could get interesting if the service person needed to thread a replacement pair into the mass of wires. What I found interesting in the evolution of the Cray systems is that the Cray 3 fit into a much smaller volume due to the improvement in IC density that it didn't need to be placed into a physical circle to reduce the inter-card latency.

Chuck(G)
January 3rd, 2018, 03:59 PM
I've spent time on the rear of a CDC taper-pin backplane, gun and all, so I know whereof you speak. I believe that Cray used women almost exclusively for the backplane wiring because of their smaller physical size.

https://tr1.cbsistatic.com/hub/i/r/2010/05/24/95973382-c3b0-11e2-bc00-02911874f8c8/resize/770x/3edf5fa7ca33690c1d6c7a8b9780243f/4333788453_a0c4186b47_b.jpg

ziloo
January 4th, 2018, 02:32 AM
Great photo from the never ending resources of Chuck (G)!
By the way chuck...as I have asked you before....how do you
get the large picture format in your post?

Did they use point-to-point wiring because of high current
requirement in the system or ....what?

ziloo :mrgreen:

bear
January 4th, 2018, 04:23 AM
to shorten the component interconnects as much as possible, for the purposes of reducing signal propagation delay, as has already been pointed out in this thread.

Chuck(G)
January 4th, 2018, 08:48 AM
I use the [ img ] tagging for externally-hosted jpegs, nothing to it.

Many of the longer lines (note that they are all twisted-pair) are of a specific length for timing. One of my old bosses remarked that his first job as a brand-new EE graduate from the U of Minnesota was to measure the lines on the backplane to which Seymour had attached tags saying "TUNE". Remember what I said about nanosecond-feet? Light (and electrical signals) propagate at about one nanosecond for every foot traveled. So delays on a backplane like that shown are calculated. The wires themselves attach to the sockets for the "cordwood" modules via tapered silver-plated pins. The advantage is that it's relatively easy to modify the setup; very unlike, say, wire-wrap or PCB. You may have to dig through the mat with both hands and a penlight in your mouth to see what you're doing, but it's doable. Modifications on these big mainframes were not uncommon, to provide features as needed.

https://gordonbell.azurewebsites.net/craytalk/img034.gif

The buttons on the front of the module were for test points.
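A back-of-the-envelope version of the timing math described above, using the roughly one-nanosecond-per-foot figure (illustrative numbers only):

NS_PER_FOOT = 1.0   # approximate propagation delay quoted above

def wire_delay_ns(length_ft):
    """Delay contributed by a backplane wire of the given length."""
    return length_ft * NS_PER_FOOT

def wire_length_ft(target_delay_ns):
    """Length of wire needed for a desired amount of deliberate delay."""
    return target_delay_ns / NS_PER_FOOT

# Adding about 2.5 ns of skew to a "TUNE" line means roughly 2.5 feet of wire.
print(wire_length_ft(2.5))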

ziloo
January 4th, 2018, 09:24 AM
.....Many of the longer lines (note that they are all twisted-pair).......

All these wires twisted and laid together.....wouldn't it cause crosstalk?



...You may have to dig through the mat with both hands and
a penlight in your mouth to see what you're doing, but it's doable...

Where are the labels to identify each wire?


ziloo :mrgreen:

Chuck(G)
January 4th, 2018, 10:46 AM
Ah, grasshopper, those twisted wires are differential pairs and are generally quite noise-immune (https://en.wikipedia.org/wiki/Differential_signaling)

Why, for example, does your 100 Mbit Ethernet use UTP (unshielded twisted pair) signalling, while the old 10Base2 coaxial cable was limited to about 10 Mbit/second?

Very high-speed nonsaturating logic, such as ECL, used differential signalling throughout. The other obvious benefit is that a gate draws the same current whether it's on or off, so no big power-supply rail spikes. The stuff, since it's never truly "off", does drink a lot of current.

For identifying wires, you followed them to the point of attachment, which was labeled. Pretty much foolproof, as opposed to relying on someone's (misplaced) tag.

ziloo
January 4th, 2018, 12:08 PM
.....ECL, used differential signalling throughout.....

ECL....emitter-coupled logic as compared to TTL......

I see that differential signalling is presently used in RS-485, USB, and Serial ATA;
but as far as ECL in mainframes was concerned, was it used in I/O only,
or within the system as well?

ziloo :mrgreen:

Chuck(G)
January 4th, 2018, 12:47 PM
I should show you a photo of one of the Eurocard ECL wirewrap boards--almost no single conductors--all twisted pair.

If you think that high-speed mainframes of old were made with TTL, you'd be mistaken--WikiP article on ECL (https://en.wikipedia.org/wiki/Emitter-coupled_logic)

It's very different from the cookbook TTL/CMOS designs.

ziloo
January 4th, 2018, 08:22 PM
...I/O devices can be attached to or detached from systems and
partitions on the fly if desired, with support from the OS of course...


Does this mean...attaching/detaching while the system was on and operating?


ziloo :mrgreen:

cruff
January 5th, 2018, 03:59 PM
Does this mean...attaching/detaching while the system was on and operating?

Yes. Just like you can do with USB devices, eSATA, etc.

Doug G
January 5th, 2018, 04:22 PM
Your picture of the backplane wiring brought back memories of when I got a tour of the innards of a Cyber 205. I was friends with one of the CE's assigned to the machine and he got me in to visit in person. IIRC he told me the twisted pairs were also cut to specific lengths and they would replace with longer/shorter runs to tweak the timing as needed.

I don't recall how many 205's CDC made, I left the company shortly after they were introduced. The one I saw was one of the early ones, located in Fort Collins, CO.

Chuck(G)
January 5th, 2018, 04:40 PM
I worked at CDC when the STAR-100 was still a product. I remember discussions we had with Neil Lincoln's crew about the 201 (CDC was changing all of the old names to CYBER-something or other. The STAR-100 became the CYBER 200; the 6600 became the CYBER 74, etc.), but I'd left before it hit any sort of production. I didn't touch base with Neil again until the ETA-10 around 1983, when a few of us STAR old-timers landed a contract to do the FORTRAN compiler for it. I remember logging onto the 205 at ETA (being used as a "bridge" for ETA-10 development) and being shocked to find that they were still using some of my old bootleg code for the 100 (e.g., OGNATE - "oh god, not another text editor"). Did Neil ever realize his "box of Chiclets" model? That was the idea that a box about the size of a Chiclets container held all of the types of ICs used in the machine.

Neil was fun and full of ideas and stories. I remember attending a department meeting at ADL concerning his "Super X by-God" proposal. He is missed.

Now Ziloo--there's a mainframe for you--the ETA-10--immersed in a cryostat full of liquid nitrogen. :)

Those were the days...

ziloo
January 5th, 2018, 11:35 PM
The harder they fall...????????


http://ethw.org/w/images/thumb/5/58/Eta_1.jpg/300px-Eta_1.jpg


ziloo :(

ziloo
January 5th, 2018, 11:45 PM
...I/O devices can be attached to or detached from systems and
partitions on the fly if desired, with support from the OS of course...




Yes. Just like you can do with USB devices, eSATA, etc.

Is/was there some sort of serial bus in mainframes?


ziloo :mrgreen:

RobS
January 6th, 2018, 03:34 AM
Now Ziloo--there's a mainframe for you--the ETA-10--immersed in a cryostat full of liquid nitrogen. :)

Those were the days...

Don't they have to do that with quantum computers now? What goes around comes around, doesn't it? Like most of us, quantum computers like a quiet working environment, so I understand ... not that I understand anything much about quantum computers really. Does anyone though, even the people who build them?

cruff
January 6th, 2018, 04:08 AM
Is/was there some sort of serial bus in mainframes?

Do you mean serial communication links? If so, yes, the IBM mainframes have both ESCON and FICON (Fibre Channel) controllers using fibre optic cables to talk to peripherals. These could be attached to switches.

Ethernet was also used internally to connect the service laptop (mounted inside the rack cabinet) and the external PC-class management system to the mainframe hardware management controller. On the z890 I used, they ran OS/2, and the external system was used to control the mainframe partitioning, startup and shutdown. You could also run a virtual terminal on it as a console terminal for software installation.

Chuck(G)
January 6th, 2018, 09:09 AM
CDC for certain made some boneheaded strategic decisions during the 1980s. Consider that when I was with them, they'd just won an antitrust lawsuit against IBM and picked up some cash and IBM's SBC, along with a bar against IBM's competing in the service bureau business. It was a very large company, encompassing operations like Ticketron, Commercial Credit, etc.

What killed off CDC was the fossilized management structure. When I was with them, they boasted 128 vice-presidents. I remember remarking to Neil that, in spite of fundamental changes in the industry, ETA just adopted the fossils of CDC, hook, line and sinker. He agreed with me. When I first learned about the ETA-10 (then called the GF-10), I said "Of course, you'll be running Unix". He thought that was a good idea, but couldn't get technical management to go along with the proposal. A couple of years later, after I'd moved back to microcomputing, I received a phone call from one of his people asking if I would like to head up a Unix port project at ETA. Of course, I couldn't pick my people--and I'd have to do things the ETA way. I declined. Sometimes, you're best off not boarding a sinking ship, no matter how well the buffet is laid out...

A sad personal postscript to this was that one of my old CDC co-workers committed suicide after he was laid off.

g4ugm
January 6th, 2018, 01:39 PM
Is/was there some sort of serial bus in mainframes?


ziloo :mrgreen:

Historically there were "bus and tag" devices. This was the standard I/O connection on mainframes. It was an 8-bit wide bus; one cable carried the data, one the control signals. Given that the slowest ran at one MHz, that gave roughly 1 MByte/sec (8 Mbits/sec), so around as fast as Ethernet, back in 1964. It was the reason they ate up work. Later versions went faster...

They also allowed devices to be powered on and off but originally it had to be configured into the host at startup. Later there was dynamic configuration...

https://en.wikipedia.org/wiki/IBM_System/360#Channels
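A quick sanity check on that bandwidth figure, treating the channel as a parallel bus moving one byte per cycle (rough numbers only):

def channel_throughput(width_bits, clock_mhz):
    """Peak throughput of a parallel bus as (Mbit/s, MByte/s), one transfer per cycle."""
    mbit = width_bits * clock_mhz
    return mbit, mbit / 8

# An 8-bit bus-and-tag channel clocked at 1 MHz: 8 Mbit/s, about 1 MByte/s.
print(channel_throughput(8, 1))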

Chuck(G)
January 6th, 2018, 04:38 PM
But that was a standard hookup from many vendors, and certainly not bit-serial.

g4ugm
January 7th, 2018, 02:14 AM
But that was a standard hookup from many vendors, and certainly not bit-serial.

so no bit serial but still dynamic device control...

ziloo
January 7th, 2018, 04:43 AM
According to Wikipedia:

"All computers before 1951, and most of the early massive parallel processing machines used
a bit-serial architecture—they were serial computers."


ziloo :confused4:

g4ugm
January 7th, 2018, 01:41 PM
According to Wikipedia:

"All computers before 1951, and most of the early massive parallel processing machines used
a bit-serial architecture—they were serial computers."


ziloo :confused4:




I wouldn't call any machine from before 1951 a mainframe. They were serial as you didn't have much choice: the main store was either Williams/Kilburn tubes (e.g. Manchester Mk1, IBM 701) or mercury delay lines (e.g. EDSAC, EDVAC, CSIRAC, UNIVAC, and ACE), both of which are serial devices. You could run delay lines or Williams tubes in parallel, but you need a lot of them for any sensible word size, and because you have to wait longer for a word to appear at the output I am not sure it's quicker or even makes sense. As soon as core memory became available it was possible to build parallel machines.

Chuck(G)
January 7th, 2018, 02:15 PM
Ziloo-you asked about a serial bus in mainframes. Before 1951, it was a matter of opinion whether computers of the time had bus structures at all.

Chuck(G)
January 7th, 2018, 02:27 PM
so no bit serial but still dynamic device control...

Requiring a reboot between configuration changes, but yes. On big CDC mainframes, after the initial programs were loaded from the deadstart tape, the operator was presented with an EST (Equipment Status Table), to which s/he could make changes before continuing. The DS tape, by default, was tape unit 0, but could be changed with the deadstart panel switches.

On a later "ocean of drives" system I worked on, individual drives could be taken on- and offline while the system was running. A job accessing a newly-offlined drive would be suspended until either killed or the drive became available again.

ziloo
January 7th, 2018, 10:36 PM
From the pages of history:

"....Though the CDC-6500 was off-the-shelf hardware (aside from a custom interface to our network),
the software we used was largely home-grown. The operating system was called SCOPE/Hustler.
It was based on Control Data's batch-oriented SCOPE 3.2 (System Control Of Program Execution) OS.

MSU's primary innovation was to add interactive service in a way that was well-integrated into
the vendor's batch OS. "Hustler" was taken from the Paul Newman movie of the same name,
which was popular when MSU was beginning its design of this homebrew OS. Evidently
the programmers found OS data structure terminology--with queues (sounds like "cues"--get it?),
tables, and pools--too tempting to resist the cute name. They even invented a "pocket" data structure.
But the operating system lacked "balls"..... :) "

Source is here (www.60bits.net/msu/mycomp/cdc6000/65hist.htm)

ziloo :mrgreen:

Chuck(G)
January 8th, 2018, 08:54 AM
You might also investigate Purdue's use of the 6500. I think Saul Rosen wrote a couple of papers on it. IIRC, it was coupled to a pair of 7094s for I/O (at the time, 7094s were being phased out for the S/360 machines and were comparatively inexpensive). Purdue's 6500 OS was based on Dave Calender and Greg Mansfield's MACE. Dave's passion was bats; he was one of Harold Edgerton's students and suggested using Edgerton's new strobe to study them. Greg was a friend (I introduced him to gelato) who wound up at Cray eventually. (MACE stands for "Mansfield's Answer for Customer Engineers".) MACE started out as a "bootleg" lightweight system used by CEs and was, at a program level, largely compatible with SCOPE (which itself was a takeoff on COS, the Chippewa Operating System). Greg did much of his work at night, using machines on the QA floor at Arden Hills.

MACE morphed into KRONOS and was used for timesharing tasks such as the PLATO learning system. SCOPE was used for batch jobs. Sometime around 1973 or so, both were re-christened--SCOPE became NOS/BE and KRONOS became NOS--eventually, the internal differences were reconciled and a single product line, NOS, was offered.

Perhaps the fundamental difference between SCOPE and KRONOS was the deployment of resources. SCOPE used a PP-centric approach and a filesystem oriented toward high performance. KRONOS put some of the OS into the CPU and used a simplified file system designed for quick response. For example, SCOPE sorted and prioritized disk requests to get the highest performance; KRONOS serviced them on a strict first-in-first-out basis. Almost all system calls to SCOPE resulted in loading a PP program to handle the request; KRONOS handled the simpler, non-I/O requests in the CPU. Both approaches made sense--SCOPE was great for heavy compute and big I/O workloads; KRONOS was great for light multi-user real-time tasks.
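A toy contrast of the two request-ordering policies described above (hypothetical cylinder numbers; nothing like the actual PP code):

def fifo_order(requests):
    """KRONOS-style: service disk requests strictly in arrival order."""
    return list(requests)

def nearest_first_order(requests, head_at=0):
    """SCOPE-style idea: reorder requests to minimize head movement."""
    pending, position, serviced = list(requests), head_at, []
    while pending:
        nxt = min(pending, key=lambda cyl: abs(cyl - position))
        pending.remove(nxt)
        serviced.append(nxt)
        position = nxt
    return serviced

arrivals = [40, 5, 92, 12, 60]            # hypothetical cylinder numbers
print(fifo_order(arrivals))               # [40, 5, 92, 12, 60]
print(nearest_first_order(arrivals))      # [5, 12, 40, 60, 92]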

There were other (often classified) operating systems as well used for government work.

ziloo
January 8th, 2018, 09:48 AM
Chuck......during your involvement with the big ones, was there any
serious talk about "Artificial Intelligence" and some "deep stuff"
like "Heuristic Algorithm"?


ziloo :mrgreen:

Chuck(G)
January 8th, 2018, 10:07 AM
Nope--I was primarily a low-level systems guy. I didn't do any of the theoretical stuff.

ziloo
January 8th, 2018, 10:14 AM
Low-level???........I don't think so........ :winking:


ziloo :mrgreen:

ziloo
January 9th, 2018, 02:58 AM
Ziloo--here's a shot of CDC "big iron"--a vector supercomputer....

The CPU is the line of units in the background. The boxes on the right with the CRTs are SBUs (station buffer units), which essentially handle I/O. Basically 16-bit minis with a resident drum and a bunch of core and device/channel interfaces. On the left, you can just see a 405 card reader and the rear of the operator's console. What's not visible are the tape drives and disk drives.

http://archive.computerhistory.org/resources/still-image/Control_Data_Corporation/102627351.03.01.lg.jpg



It is a good place to ask.......why so many terminals?...


ziloo :mrgreen:

m_thompson
January 9th, 2018, 03:56 AM
ECL....emitter-coupled logic as compared to TTL......

I see that differential signalling is presently used in RS-485, USB, and Serial ATA;
but as far as ECL in mainframes was concerned, was it used in I/O only,
or within the system as well?

ziloo :mrgreen:

The DEC PDP-10 KL10 used 10k ECL for the whole CPU.

Chuck(G)
January 9th, 2018, 07:14 AM
It is a good place to ask.......why so many terminals?...

I thought I answered that. They're the interfaces to the SBUs--which are I/O processors for the machine in the background and fairly complete computers in their own right. Normally, an operator only pays attention to the display terminal for the MCU, which is a special type of station with its own local drum storage. But given the cost of these things, the terminal interface was a minor blip and a convenience for the CEs. One could, for instance, stop and reload a station with different firmware without halting the main CPU.

This sort of thing was not atypical on supercomputers. A Cray I, for example, made use of another (non-Cray) mainframe to perform I/O. The purpose of this big iron was to compute and not get involved in dedicating expensive cycles to managing mechanical I/O devices. It would make a lousy machine to play games on, unless the game were something like chess.

I recall that one of Neil's designs dedicated a hard-wired 16Kw (8 bytes per word) to the operating system. When I asked about this, his response was "If you can't fit an operating system in 16K, you don't belong in this business." The design was never implemented, but it was fun to talk about it.

ziloo
January 9th, 2018, 12:17 PM
The DEC PDP-10 KL10 used 10k ECL for the whole CPU.

Thank you m_thompson....

I read that the PDP-11 originally used the asynchronous Unibus.
What kind of bus is that?


ziloo :mrgreen:

ziloo
January 15th, 2018, 06:50 AM
.....They're the interfaces to the SBUs--which are I/O processors for
the machine in the background and fairly complete computers in their own right......

In reading about the PDP-10 computer.... I found out that, for the proper operation of
the main computer, they used frontend computer systems that extended the functionality
of the systems to which they were connected.

The frontend computer would first be booted from a disk drive or tape drive,
and then commands could be given to the frontend computer to start the main
processor. In some systems they actually used an 8080 CPU to load microcode
from a disk or magnetic tape and then start the main processor. The 8080 would then
switch modes after the operating system booted and control the console
and remote diagnostic serial ports.

Any additional discussion would be appreciated...

ziloo :mrgreen:

Chuck(G)
January 15th, 2018, 08:11 AM
You might want to research the meaning of the acronym SPOOL (Simultaneous Peripheral Operation Off Line) for similar setups.

ziloo
January 15th, 2018, 09:14 AM
Back in college days, I remember executing numerical Fortran programs by running my
punched cards through the card reader. Then we would wait in the reception area for
our output sheets to be printed out. During the final weeks of the semester,
there would be a loooooong line of students waiting for their prints.

And I vividly remember the Post-Mortem Dump.... :cloudmad:

ziloo :mrgreen:

ziloo
March 12th, 2019, 09:52 AM
Reviewing some information about the ARM series of micros, out of curiosity......
I wondered whether "superscalar" and "pipelining" were post-mainframe
innovations. As I found out, there were early implementations of these concepts
during the 70s in the mainframe universe.

Any comment would be appreciated!

ziloo :mrgreen:

Chuck(G)
March 12th, 2019, 10:12 AM
Goes back farther than that--try the IBM 7030 (https://en.wikipedia.org/wiki/IBM_7030_Stretch) for example--or the CDC 6600 (https://en.wikipedia.org/wiki/CDC_6600). Vector instructions are slowly making their way into MPUs, but mainframes had them as of the late 1960s.

ziloo
March 12th, 2019, 11:23 AM
Goes back farther than that......

I wonder whether there will ever be documentation about the
people who came up with the idea/design of these concepts...

ziloo :mrgreen: