Thread: Modern Mainframes

  1. #1
    Join Date
    Feb 2006
    Location
    in the basement
    Posts
    888

    Default Modern Mainframes

    There are many people in this forum who know the ins and outs of
    the previous generation of mainframe computers. When people
    read about "big-iron" computers, they usually find it hard to believe
    how large those machines were, and how much faster and more
    capable today's home computers are. As usual I want to ask a
    trivial question:

    Is there any task that mainframe computers of previous generations
    can do better than present-day PCs?

    I read that IBM is now one of the main manufacturers of modern
    mainframes, but I have no idea as to the "super power" of these
    machines. I want to have a mental image of what these computers
    are capable of doing; what is so spectacular about them?

    ziloo

  2. #2
    Join Date
    Dec 2012
    Location
    Colorado
    Posts
    220
    Blog Entries
    1

    Default

    Mainframes are usually designed with "RAS" - reliability, availability and serviceability (see https://www.ibm.com/support/knowledg.../zconc_RAS.htm for more details). Basically, they are built to handle large amounts of data and lots of simultaneous transactions while providing extremely high uptime.

    For example, a zSeries mainframe is provisioned with extra CPUs that are not enabled for customer use; these can serve as spares if another CPU fails, or provide a "capacity on demand" option where the customer can temporarily lease extra CPU capacity to handle a spike in load. The zSeries can also be logically partitioned into multiple systems, with the ability to assign fractional amounts of CPU capacity to a partition. For example, it is possible to assign 30% of a CPU's capacity to a partition, which can save on software licensing costs. This makes it easy to have a testing partition where you can verify changes before they are rolled out to the production partition.
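
    As a rough sketch of how that fractional assignment works out (the partition names, weights, and pool size below are made up for illustration; real LPAR configuration is done through the HMC, not a script):

    Code:
    # Hypothetical LPAR weights dividing a shared CPU pool -- illustrative only.
    physical_cpus = 1                       # a single shared CPU, for simplicity
    weights = {"PROD": 70, "TEST": 30}      # relative weights, made-up values
    total = sum(weights.values())
    for lpar, w in weights.items():
        share = w / total                   # fraction of the shared pool
        print(f"{lpar}: {share:.0%} of the pool, about {share * physical_cpus:.2f} CPUs")
    # TEST ends up with 30% of one CPU -- the kind of fractional assignment
    # that can keep software licensing costs down on a test partition.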

    It is also possible for the partitions to run entirely different OS versions or entirely different OS types, e.g. z/OS, z/VM, and Linux. These partitions can intercommunicate using high-speed channels that are internal to the physical system. The partitioning support enforces an additional measure of separation between the partitions, such that if one partition is broken into, it cannot even detect the I/O devices or memory in use by the other partitions.

    I/O devices can be attached to or detached from systems and partitions on the fly if desired, with support from the OS of course.

    Another feature (or possibly a misfeature, as some might say) is that the operating systems have had years of work put into removing bugs and maintaining compatibility with software written long ago. This lets you continue to run software paid for years ago, possibly software for which the source can no longer be found. The downside is that you get to use software with arcane command names and the impenetrable control structures and formats of something like IBM's JCL.

  3. #3

    Default

    The biggest thing I can think of with mainframes, supercomputers, and HPC systems is reliability over home systems.
    Hot swapping is a thing that is not often seen on home systems, but many mainframes allow the removal and installation of add-in cards while the system is running.

    Then with mainframes and supercomputers you have (most of the time) a far faster FPU than even new home computers.

  4. #4
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    34,340
    Blog Entries
    18

    Default

    In absolute terms, no. Moore's law and time always win the game. The other corollary is the "light nanosecond" advantage of smaller technologies. This was obvious when the first integrated circuits came out, and progress has never slowed since.
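
    To put a number on that "light nanosecond" point (just arithmetic; the clock rates are example figures, including the 10 MHz class of machine mentioned below):

    Code:
    # How far a signal could possibly travel in one clock cycle, at vacuum
    # light speed; real signals in wiring are slower, so the bound is tighter.
    c = 299_792_458                       # metres per second
    for clock_ghz in (0.01, 1.0, 5.5):    # 10 MHz, 1 GHz, 5.5 GHz
        period_s = 1.0 / (clock_ghz * 1e9)
        print(f"{clock_ghz:5.2f} GHz: {c * period_s * 100:8.1f} cm per cycle")
    # A 10 MHz machine has ~30 m of slack per cycle; at 5.5 GHz it is ~5.5 cm,
    # which is one reason shrinking the hardware matters as clocks rise.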

    In relative terms, maybe. Large mainframes were probably better equipped in the I/O department, being able to transfer data at memory bandwidth speeds. Supporting a hundred remote terminals on a machine with a 10MHz clock wasn't unusual.

    Progress, however, is uneven. Have memory and I/O kept up with processor speeds?

    Another aspect is that old mainframe code didn't waste a lot of time on graphical user interfaces and other niceties. Code bloat was anathema, while today it's a way of life.

    I like the older systems because of their differentness. In the old days, we were still figuring a lot of things out and system architectures varied wildly. Today, it seems that 8-bit-byte, byte-addressable binary machines are a given. That hasn't always been true by a long shot.

  5. #5

    Default

    To be blunt, Amazon Web Services has completely superseded what can be achieved with any mainframe or supercomputer in terms of application scale, uptime, or cost (minimisation), and this is why AWS skills are currently so valuable. The auto scaling and cross-region capabilities make it possible to architect applications on AWS that can scale out to ridiculous proportions, and then scale back in during quiet times to control costs.
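
    The scale-out/scale-in behaviour boils down to threshold rules on a load metric. A toy sketch of the idea (this is not actual AWS tooling or API calls, and the thresholds and instance counts are arbitrary):

    Code:
    # Toy threshold-based auto scaling: add instances when average CPU is high,
    # remove them when it is low, within fixed bounds. Illustrative only.
    MIN_INSTANCES, MAX_INSTANCES = 2, 20
    SCALE_OUT_AT, SCALE_IN_AT = 70, 30        # percent average CPU, arbitrary
    def desired_count(current, avg_cpu):
        if avg_cpu > SCALE_OUT_AT:
            return min(current + 1, MAX_INSTANCES)
        if avg_cpu < SCALE_IN_AT:
            return max(current - 1, MIN_INSTANCES)
        return current
    instances = 2
    for cpu in (85, 90, 75, 40, 20, 15):      # sample load readings
        instances = desired_count(instances, cpu)
        print(f"avg CPU {cpu:3d}% -> {instances} instances")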

  6. #6
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    34,340
    Blog Entries
    18

    Default

    James, not every computational task yields to massive parallelism; cf. Amdahl's law.
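
    A quick back-of-the-envelope sketch of Amdahl's law (the 95% parallel fraction is just an assumed figure):

    Code:
    # Amdahl's law: speedup is limited by the serial fraction of the work.
    def speedup(p, n):
        """Speedup on n processors if fraction p of the work parallelizes."""
        return 1.0 / ((1.0 - p) + p / n)
    p = 0.95                          # assume 95% of the task can run in parallel
    for n in (1, 10, 100, 10_000):
        print(f"{n:>6} processors -> {speedup(p, n):5.1f}x speedup")
    # No matter how many processors you add, the speedup can never exceed
    # 1 / (1 - p) = 20x with this parallel fraction.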

  7. #7
    Join Date
    Feb 2006
    Location
    in the basement
    Posts
    888

    Default

    Quote Originally Posted by pearce_jj View Post
    ....Amazon Web Services has completely superseded what can be
    achieved with any mainframe or super computer ......
    So....what is AWS made of?

  8. #8

    Default

    Commodity hardware and some very capable software-defined networking at the infrastructure layer, plus a number of platform services such as their managed databases, big data, and AI; there are something like 1,500 products in it.

  9. #9
    Join Date
    May 2009
    Location
    Connecticut
    Posts
    4,718
    Blog Entries
    1

    Default

    Quote Originally Posted by ziloo View Post
    So....what is AWS made of?
    Racks and racks of standard servers. Microsoft Azure is similar but more public with the details of how it is set up internally. https://www.nextplatform.com/2016/11...oject-olympus/

    IBM Z-series uses special chips that are faster clock for clock than offerings from Intel or AMD. IBM's EC12 was listed as running at 5.5 GHz, which would roughly equal a current Intel chip running at about 8 GHz. Each IBM chip had 6 cores at that speed, and a fully configured system could have up to 101 such processor cores enabled. Intel-based servers would have better performance per watt, provided that the workloads could scale over the much larger number of cores needed.
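
    Working the arithmetic on those figures (rough numbers quoted above, not benchmark data):

    Code:
    # Implied per-clock advantage from the quoted figures -- not benchmark data.
    z_clock_ghz = 5.5
    intel_equivalent_ghz = 8.0
    print(f"implied per-clock factor: about {intel_equivalent_ghz / z_clock_ghz:.2f}x")
    # ~1.45x: each zEC12 cycle would do roughly 45% more work than an Intel
    # cycle, under the rough equivalence stated above.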

  10. #10
    Join Date
    Feb 2006
    Location
    in the basement
    Posts
    888

    Default

    Do I understand correctly that:

    -Modern mainframes are still designed around hardware innovations
    dedicated to specific purposes.

    -Parallel and distributed computing is more of a brute-force method that
    breaks a task down into simpler tasks to be done by multiple computing units.
