
Thread: Honeywell 200 resurrection

  1. #11 (Kent, England; joined Sep 2012; 103 posts)

    I don't even know whether the Dataplex works now. I haven't taken it off the shelf for years and I'll probably put my back out when I do. One card? When I acquired equipment I'd always take everything else connected with it from the engineer's room, operator's room, encoding room (not the girls though, well only once and I put her back later), but at least manuals, media and spare parts. That's why I have over 300 spare light bulbs for a H-200 which doesn't exist yet. At least if I build it it'll be a while before I have to convert it to LEDs. That said, I did have to buy some spare bulbs for the Dataplex as it has several photocell motion detectors in each card drive which sort of rely on them to work.

    Thanks for mentioning that the project is worthy. I need encouragement. I put it off for two and a half years to write a novel instead but at the moment I think this project has more support than the novel. That's strange because it's about people living in the 1960s who are able to communicate with people living in the twenty-first century. It's science fiction of course.

  2. #12

    Cool! I wish you luck.

    An H200 was the first computer I ever programmed, though I only saw it once. Back in 1975 my high school offered a FORTRAN class. We rented a keypunch and sent our card decks across town every night with the school's janitor who lived near the computer facility. The next day we got our printouts back (I still have some of them). The long turn-around taught me early to debug in my head. One day we took a field trip and saw the real thing.

    The next year the school bought two Altairs with MS cassette BASIC.

  3. #13 (Kent, England; joined Sep 2012; 103 posts)

    Even in our business environment our development work was run overnight, so we also got only one shot a day at running a programme. I agree that it is good training. Personally I somehow never acquired the habit of making mistakes in the first place because I did what I was taught and nobody taught me to make them. The first programme that I ever wrote on a training course was immediately flawless and I carried on that way for years. Our managers were scared by the way that I worked, using all the available scheduled time to design my programmes with virtually no time left to correct errors as I didn't plan to make any, but they got used to it.

    Nowadays trial and error development is so fast that even I resort to it, but mainly because modern computer systems don't necessarily do what they're supposed to in reality or they are inadequately documented. I have to be extremely careful building my H200 as the parts that I have are irreplaceable and burning any out could end the project.

    I went on a FORTRAN training course in 1972 because our company normally used COBOL but valuation of our liabilities involved complicated actuarial calculations which would have been inefficient in COBOL and our actuaries used FORTRAN for their research tasks. I discovered that FORTRAN couldn't read or write the large COBOL tape files that we used then, so instead I wrote a COBOL programme to handle the files with an EASYCODER module embedded in it to do the calculations. EASYCODER was the assembly language of the H200 but the brilliant design of the H200 hardware meant that it was halfway between more modern low level assembler languages and something like BASIC, so not too great a strain on the brain. This wasn't anything brand new but just another step in the work already done by IBM, Honeywell and others. By building the replica H200 I can demonstrate the versatility of the H200 machine language better than by simply writing an emulator, which I also need to do anyway.

    Computer architecture is a balancing act between cost, performance and complexity. Magnetic core memory was extremely expensive when it was made by hand, so early computer logic did as much as possible in one instruction to keep programmes small. Semiconductor RAM became very cheap and processors became much faster, so instructions could do less and RISC processors became viable. Then processors became so complex that they could do highly specialised tasks again, like the video processors in modern gaming computers. The H-200 is an example of where this balancing act started.

    I have several Honeywell Level 6 and DPS6 minicomputers which I am about to donate to a computer museum. They are an interesting transitional phase in computer architecture, having RISC-style bit-slice processors executing microcode from ROM to implement the more complex machine language that the programmes actually use. If you changed the internal plug-in ROMs in a DPS6 it could behave like some other sixteen-bit computer, which would be fun. Perhaps one could even be converted into a PC. I've heard that they were versatile enough to be used on the Space Shuttle.

  4. #14

    I've wondered whether the compiler we used was from Honeywell or converted from IBM.

    The job page header said:
    FORTRAN D SYSTEM TAPE REVISION NUMBER 6.0
    and the listing page said:
    FORTRAN 200 SOURCE LISTING AND DIAGNOSTICS

  5. #15 (Kent, England; joined Sep 2012; 103 posts)

    My Honeywell training course in 1972 was for FORTRAN D and they used their own compiler. FORTRAN is intended to be a very portable language, so one would use the native compiler for the computer on which it is run. My reason for opting to do my calculations in EASYCODER was that even then our computer had no hardware multiply or divide and the software versions built into the Honeywell compilers weren't that great. In those days writing good routines to do those tasks was a game played by aspiring programmers. Our company required us to use actuarial rounding in our calculations as financial calculations need different consideration from scientific ones when choosing a rounding method, so we incorporated the rounding into our routines when we designed them.
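    The distinction between scientific and financial rounding can be illustrated with Python's decimal module. This is only a generic sketch of that distinction; the actual actuarial rounding rule RobS's company used isn't described in the thread, and the function name here is invented for the example:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

def to_pennies(amount, mode):
    """Round a monetary amount to two decimal places under a chosen rule."""
    return Decimal(amount).quantize(Decimal("0.01"), rounding=mode)

# The two rules disagree exactly on the half-penny boundary:
commercial = to_pennies("2.345", ROUND_HALF_UP)    # rounds the tie upward
bankers = to_pennies("2.345", ROUND_HALF_EVEN)     # rounds the tie to even
```

    Half-up rounding of many half-penny ties drifts totals upward, while half-even rounding cancels the drift on average, which is one reason financial work cares which rule is built into a routine.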

    The progressive calculation of Pi that I've written as a demonstration programme for the H-200 doesn't use conventional multiplication or division instructions and wouldn't benefit from them much, even on a computer which had them. Also it doesn't use a return stack as the most basic H-200 didn't have that facility either. Memory was too small and expensive then to waste it on a stack. In fact where a conventional modern programme would use a stack this programme contrives to use a first-in-first-out queue, which is more useful because it is continually looping and taking the oldest data to use in the newest calculations. Top down structured programming with subroutines is a common style of programming now but it isn't the best solution for every occasion. In the days when computers had little brains programmers had to exercise theirs a lot to compensate. The "Go To" instruction didn't die out; it lives on with an assumed name under a witness protection programme in every computer.
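    RobS's EASYCODER programme isn't shown in the thread, but the properties he describes (digits produced left to right, only small constant arithmetic, and a delay of a few digits so that carries can settle) are shared by the published Rabinowitz-Wax spigot algorithm. The Python below is a minimal sketch of that algorithm, offered purely as an illustration of the same style of progressive calculation, not as his implementation:

```python
def pi_digits(n):
    """Emit the first decimal digits of Pi left to right using the
    Rabinowitz-Wax spigot algorithm.

    Digits are held back while a run of 9s is pending, because a later
    carry could still turn them into 0s: the same carry-propagation
    delay described in the post above."""
    size = 10 * n // 3 + 1
    a = [2] * size                       # mixed-radix representation of Pi
    digits, predigit, nines = [], 0, 0
    for _ in range(n):
        q = 0
        for i in range(size - 1, 0, -1):
            x = 10 * a[i] + q * (i + 1)
            a[i] = x % (2 * i + 1)
            q = x // (2 * i + 1)
        x = 10 * a[0] + q
        a[0] = x % 10
        q = x // 10
        if q == 9:
            nines += 1                   # hold: a carry may still arrive
        elif q == 10:
            digits.append(predigit + 1)  # carry ripples through the 9s
            digits.extend([0] * nines)
            predigit, nines = 0, 0
        else:
            digits.append(predigit)
            digits.extend([9] * nines)
            predigit, nines = q, 0
    digits.append(predigit)
    return digits[1:]                    # drop the initial placeholder
```

    Calling pi_digits(25) begins 3, 1, 4, 1, 5, 9, ...; note that a digit is only committed once the next quotient shows no carry can reach it.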

  6. #16 (Walled Lake, MI; joined Sep 2008; 3,601 posts; 6 blog entries)

    Quote Originally Posted by RobS View Post
    The progressive calculation of Pi that I've written as a demonstration programme for the H-200 doesn't use conventional multiplication or division instructions and wouldn't benefit from them much, even on a computer which had them.
    Just curious, Rob: how far out did you take Pi?

  7. #17

    Quote Originally Posted by RobS View Post
    The progressive calculation of Pi that I've written as a demonstration programme for the H-200 doesn't use conventional multiplication or division instructions and wouldn't benefit from them much, even on a computer which had them.
    What multiplication algorithm are you using? The H-200 does BCD math, right?

    Are you taking advantage of the variable-length word size?

  8. #18 (Kent, England; joined Sep 2012; 103 posts)

    My target using a 2k byte memory was the Feynman point, 767, and the programme reached 770 after a lot of head-scratching on my part. I don't specify the end-point; the programme just stops when the calculations in progress overflow available memory. The smallest memory size available in an H-200 was 2k and that was all the core memory I had anyway, which is why I set that target.

    As I have recently been offered several original Honeywell 4k memory modules I can now consider building a 4k machine. Any more memory than that would need "address mode 3", as "address mode 2" only used twelve-bit addresses, but address mode 3 also provided indexed and indirect addressing, and building that in would involve a lot more hardware for which I probably won't have the parts or the patience, so I am still limiting my initial design to 4k and the most basic machine marketed.

    To answer your question completely, I just this moment cleared a path through my workroom and across the desktop (something akin to navigating between icebergs with all the boxes balancing everywhere at present) to find my development PC and rerun the emulator with 4k of memory, and very spookily it stopped at 1767 decimal places, exactly 1000 past the Feynman point. Therefore adding 2k gave me another 1000 places, which is pretty good memory usage. I also have the "Pi Factory" algorithm written as a demonstration C programme to run on a PC, which is not optimised for memory usage, so I could take it much further, but I agree with Richard Feynman that it is adequate to "end" Pi with "...999999 and so on."

  9. #19 (Kent, England; joined Sep 2012; 103 posts)

    I believe that a significant difference between the H-200 and the IBM 1401 was that the H-200 was able to do both BCD and binary arithmetic. Being a character machine with unlimited word length, it could do either to any number of significant digits as well, which was fun. Having binary arithmetic also meant that one could do address modification. In the most basic model of H-200, which didn't have indexed addressing, that meant one could index a programme loop through an array simply by adding values to the appropriate parts of the instructions, which was okay until the addresses overflowed by mistake and the carries changed the operation codes.

    There is no consolidated multiplication or division in my algorithm because it only calculates one decimal place at a time. The only multiplications required are by integer constants, so they are done by the time-honoured process of doubling and adding. Divisions just provide a single decimal digit as quotient with a remainder, so they are done by repeated subtraction. In order to calculate the digits progressively, all the calculations involved have to be done simultaneously, one place at a time. As binary values take less space than BCD I use purely binary numbers but scale them by ten each time around the loop, so they effectively convert into decimal as they are used, a sort of extended BCD if you like. Only the final accumulator is a real BCD field.

    This field is necessary because I have to allow a delay of a few digits before printing the top one so that carries can propagate through the answer. This final carrying is the only part of the calculation of Pi that demands right-to-left operations; everything else can be done left-to-right, even though that runs contrary to our usual view of arithmetic. One of my reasons for choosing the Feynman point as my target was that passing it successfully proves that the propagation of carries is working where it is most likely to fail.
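    The two arithmetic primitives described here, multiplication by doubling and adding and division by repeated subtraction, can be sketched in a few lines. This Python is a generic illustration of the techniques, not a transcription of the EASYCODER routines:

```python
def times_const(x, k):
    """Multiply x by a small integer constant k using only doubling
    and addition -- no multiply instruction needed."""
    total = 0
    while k:
        if k & 1:            # this binary digit of k contributes a copy of x
            total += x
        x += x               # double x for the next binary digit
        k >>= 1
    return total

def div_by_subtraction(x, d):
    """Divide by repeated subtraction, returning (quotient, remainder).
    In the Pi programme the quotient is only ever a single decimal
    digit, so this loop stays short in practice."""
    q = 0
    while x >= d:
        x -= d
        q += 1
    return q, x
```

    For example, times_const(37, 13) forms 37 + 74 + 296 from the set bits of 13, and div_by_subtraction(47, 5) counts nine subtractions and leaves 2 over.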

    I do use variable length words in the FIFO queue but that alone wasn't enough to hit my target, so I have added half-byte compaction to eliminate half-byte zeroes. The memory used by the compaction and expansion routines is less than the savings in the queue size, so it was worth doing ... but I still didn't hit the target then. The H-200 has an additional bit, the item mark, on each character and that tends to sit around doing nothing most of the time, so I use it to mark bytes where a half-byte zero has been suppressed. Making the values variable length isn't an enormous saving though as I only ever store remainders from divisions, which are all small numbers by definition. I have put so much memory optimisation into the H-200 version of the programme that it is now difficult to see how the actual calculation of Pi works in amongst all the optimisation code.
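    As an illustration only: the exact packing rules of RobS's compaction routines aren't given, so the scheme below (at most one suppressed zero nibble per byte, flagged by a simulated item-mark bit) is a hypothetical reconstruction of the idea in Python:

```python
def compact(nibbles):
    """Pack 4-bit digits two per byte, suppressing zero nibbles.

    Each output cell is a (byte, mark) pair.  A marked cell stands for
    three input nibbles (a suppressed leading zero plus the two stored
    ones); an unmarked cell stands for two.  The mark plays the role of
    the H-200's otherwise-idle item-mark bit.  Trailing input is padded
    with a zero nibble to fill the last byte."""
    cells, i = [], 0
    while i < len(nibbles):
        mark = False
        if nibbles[i] == 0 and i + 1 < len(nibbles):
            mark = True                  # suppress this zero nibble
            i += 1
        hi = nibbles[i] if i < len(nibbles) else 0
        lo = nibbles[i + 1] if i + 1 < len(nibbles) else 0
        cells.append(((hi << 4) | lo, mark))
        i += 2
    return cells

def expand(cells):
    """Reverse compact(): re-insert each suppressed zero ahead of the
    marked byte's two stored nibbles."""
    nibbles = []
    for byte, mark in cells:
        if mark:
            nibbles.append(0)
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0xF)
    return nibbles
```

    Six nibbles containing two zeroes pack into two bytes instead of three, which is the kind of saving that makes the compaction routines cheaper than the queue space they recover.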

    Having achieved sufficient compaction in my software I now have to be equally adept at compacting the necessary logic into my hardware, as I only have sufficient backplane space for 200 logic boards even though I have almost a thousand boards to hand. It would be very nice to find another H-200 backplane somewhere, even a small one out of a Honeywell disk drive or elsewhere. Even compatible individual edge connectors (single-sided, 0.125 inch pitch, 40 pin) don't seem that easy to find now. Well, it wouldn't be fun if it was too easy.

  10. #20 (Kent, England; joined Sep 2012; 103 posts)

    I forgot to mention that the H-200 would take one hour nine minutes forty eight point four seconds to calculate the 1767 decimal places, which would get rather boring as a demonstration. With 4k of memory I think I'd trade in a few decimal places to speed it up.
