
Thread: Testimonies of using BASIC back in the day

  1. #51
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    31,526
    Blog Entries
    20

    Default

    I'll check in here with some tales from the past--and how I see things.

    In my mainframe days, the bulk of my programming was done in assembly. Many thousands of lines, all keypunched. The experience taught me two lessons--the value of coding standards and the value of a really good macro assembler. Coding standards, obviously for maintenance--realize that this was before on-line editors--you sat with a listing with statement sequence numbers/identifiers and worked out directives for the source library program to make your changes--as in "delete these statements, insert these statements, etc." At the same time, you knew that there were other people writing directives, perhaps on the same section of code you were working on. Good documentation and coordination/communication were essential and paid off handsomely.

    A good macro assembler would allow you to do just about anything that you could imagine. Remote/deferred assembly, character manipulation, syntax extensions, macros that define macros all could simplify something that would be a nightmare in straight assembly to something that a human could understand. It's a shame that not very many assemblers exist today that can do the same.

    About the only other language that I used back then was FORTRAN--you could find it on just about any platform--at least, I know of no mainframe where it wasn't offered. You used it to write utility programs where possible--that is, unless peculiar machine features or speed of execution demanded assembly. Some FORTRANs were very good indeed, able to allocate registers and schedule instructions as well as the better assembly programmers.

    BASIC wasn't an option back then--the language was too limited and usually was interpreted, not compiled.

    I moved to very large vector systems with the emphasis on number-crunching. Huge instruction set with instructions like "SEARCH MASKED KEY BYTE" with up to 6 operands. For that, I used a derivative of FORTRAN called IMPL--and also made changes to the compiler to improve code generation. If you had something specific in mind, there were ways to express assembly instructions inline.

    At about that time, I built my first personal microcomputer from a kit (Altair 8800). I'd been following the action at Intel and still have the notes from the 8008 announcement, faded though they may be. A disk was out of the question, so I used an audio tape recorder and the guts from a Novation modem for offline storage. It worked, so I didn't have to toggle things in, or type them from the console. BASIC was one of those programs that I typed in the hex code for, byte by byte. It worked, but not quickly--interpreter, again. So I used a memory-resident assembler which worked for a time. Eventually, I put together a system with Don Tarbell's disk controller and a couple of 8" floppy drives that I scrounged. It wasn't too long before I got CP/M 1.4 (or thereabouts) going, which gave me more possibilities for software development. But still assembly.

    Professionally, at about the same time, I took a job with a startup and used an Intel MDS-800 running ISIS-II. Intel had a language that was vaguely reminiscent of PL/I called PL/M-80. It wasn't bad--you could actually make good use of its capabilities, although it was not an optimizing compiler in any sense, so the size of the executable code and its speed wasn't up to assembly. For "quick and dirty", however, it was great.

    Eventually, as disk systems got affordable, other languages made their appearance. Various flavors of BASIC (few were true compilers--and there's a reason for that), FORTRAN, COBOL, SNOBOL4, FORTH...you name it. Anyone remember DRI's ISV program that promoted their PL/I? Yes--a remarkably feature-rich PL/I for the 8080. There were Cs--but they weren't all that good, for a very good reason:

    The 8-bit Intel platform lacks certain features that make C practical. C assumes a stack architecture, derived from the PDP-11. The PDP-11 is a 16-bit machine; the 8080 is not. Addressing stack-resident local variables on a PDP-11 is quite straightforward; on the 8080, it's a nightmare. Among other things, the 8080 has neither stack-relative addressing nor indexed addressing. 16-bit addresses have to be calculated the hard way--move the stack pointer to HL, load another register pair with the offset, add it to HL, then access the variable byte by byte. Really ugly. While the Z80 does have IX and IY indexed addressing, it's quite limited, and handling even simple 16-bit integer stack-resident variables--particularly if the local area is more than 256 bytes long--is again very complicated. You simply can't generate good C code for an 8080. FORTRAN--sure. No stack-resident variables--in fact, no stack required at all. That's why FORTRAN could run on an 8KW PDP-8, but no such luck for C--C imposes certain demands on the architecture.
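    The addressing headache is easiest to see from the C side. A minimal sketch (the function name and frame offsets are invented for illustration): every access to a or b below is a single stack-relative instruction on a PDP-11, but a multi-step computed-address sequence on an 8080.

    ```c
    #include <assert.h>

    /* A trivial C function with stack-resident locals. On a PDP-11 each
     * access to a or b is one instruction (roughly MOV 4(SP),R0); on an
     * 8080, with no stack-relative or indexed addressing, every access
     * means: copy SP to HL, add the offset, then move the two bytes of
     * the variable through the accumulator, one at a time. */
    int sum_of_squares(int x, int y)
    {
        int a = x * x;   /* hypothetical frame offset 0 */
        int b = y * y;   /* hypothetical frame offset 2 */
        return a + b;    /* two more computed-address loads on an 8080 */
    }
    ```

    Three lines of C; on an 8080 each of the local references expands into that HL arithmetic, which is exactly why early 8-bit C compilers produced such bloated code.
    
    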

    Come the 8086 in 1979 and, later, the IBM PC, and all of a sudden things get less complicated, although handling large (more than 64KB) data structures is quite awkward. But for the first time, you had a microprocessor with an ISA that could do justice to C. Disks were relatively inexpensive, so you had a full-scale development system. Assembly could be used to write fast and/or small programs, but for the tedious stuff, C was great. Microsoft even endorsed it--and they didn't have a C compiler at the time. They recommended the Lattice C compiler--a basic K&R thing that did the job.

    BASIC made sense for business applications--I wrote a BASIC incremental compiler (to P-code) for a company to port the large suite of MCBA applications to an 8085. There was a good reason for the P-code approach: if you were to write a compile-to-native-code BASIC, you'd wind up with a program full of code that did little more than set up arguments to subroutines that performed the basic operations. At best, the 8080 could do inline 16-bit arithmetic as long as you didn't need to multiply or divide, but BASIC originally had no explicit type declaration statements--you had numbers and you had strings. The other problem was that 8080 code is not self-relocating. P-code yields smaller programs, location-independent code, and even multitasking. The result can be quite small and fast.
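    A toy version of the P-code idea, with an invented opcode set (not MCBA's actual encoding): each BASIC-level operation compiles to one compact, position-independent byte code, and only the interpreter itself contains machine code.

    ```c
    #include <assert.h>

    /* Minimal stack-machine P-code interpreter, in the spirit described
     * above. Opcodes and encoding are hypothetical. Because the "program"
     * is just data indexed by a program counter, it is inherently
     * relocatable -- a property 8080 native code lacks. */
    enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

    int run(const int *code)
    {
        int stack[16], sp = 0;
        for (int pc = 0; ; ) {
            switch (code[pc++]) {
            case OP_PUSH: stack[sp++] = code[pc++];          break;
            case OP_ADD:  sp--; stack[sp - 1] += stack[sp];  break;
            case OP_MUL:  sp--; stack[sp - 1] *= stack[sp];  break;
            case OP_HALT: return stack[sp - 1];
            }
        }
    }
    ```

    A compiled expression like (2 + 3) * 4 becomes the nine-word sequence PUSH 2, PUSH 3, ADD, PUSH 4, MUL, HALT--far smaller than the equivalent run of call-and-argument setup in native 8080 code.
    
    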

    As far as languages go, from a compiler-writer's viewpoint, they're all the same at the back end. You take a tree or other abstract representation of the compiled and optimized source code and you translate it into native instructions, perhaps doing some small optimizations. What the front-end eats isn't important. I've been on projects where the same back-end was used for C, FORTRAN and Pascal.

    My perpetual gripe with C is that it lacks a decent preprocessor. For some odd reason, preprocessor directives are considered to be evil by the C community. Yet, look at PL/I's preprocessor, complete with compile-time variables, conditionals and other statements. Incredibly useful, if you know how to use it. Yes, C++ has features that make a preprocessor less important, but there you get the whole complex world of what amounts to a different language, when all you wanted was a way to write a general macro to initialize an I/O port. There were times when I've found C++ quite handy for abstracting things, but I like the simplicity of C.
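    For what it's worth, the standard C preprocessor can at least manage the I/O-port case mentioned above. A hedged sketch with invented addresses (UART_BASE and the divisor formula are illustrative, not any real board's layout)--though this is still a long way from PL/I's compile-time variables and loops:

    ```c
    #include <assert.h>

    /* General macros to describe an I/O port at compile time.
     * All names and numbers here are hypothetical. */
    #define UART_BASE        0x3F8u                 /* invented base address */
    #define UART_REG(offset) (UART_BASE + (offset)) /* register address     */

    /* Compute a baud-rate divisor entirely at compile time. */
    #define UART_DIVISOR(baud) (115200u / (baud))
    ```

    Everything folds to constants at compile time, which is the point: no run-time cost, and the port layout is described in one place.
    
    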

    So, for the last 20-odd years, I've written a lot of C, with a smattering of assembly support. But much more C than assembly. And almost no BASIC, FORTRAN, COBOL or Ada at all--but I'd use any of the above if there were an advantage to using it in any particular application.

  2. #52

    Default

    Quote Originally Posted by KC9UDX View Post
    How often does anyone actually use FP? Even when I was rendering 3D wireframes I always used scaled integers.
    You'd be shocked by 3d programming from pretty much 3rd generation Pentium onwards. The age of the MMX and the 3DNow, the time of the sword and axe is nigh, the time of the wolf's blizzard. Ess'tuath esse!

    Somehow some math nerds who knew jack shit about programming got together and convinced EVERYONE in the 386 era that matrix multiplies were somehow more efficient and effective than the direct math for translations, rotations, and so forth. HOW they managed to convince people that 64 multiplies of 32 memory addresses into 16 more addresses was faster than four multiplies, three additions, and one subtraction of 4 addresses into two, I'll never understand... That matrix math started being used for TRANSLATIONS (what should be three simple additions) was pure derp... but it got worse...
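    The operation counts being argued about can be shown in a few lines of C (routine names are mine; the 2D case is the simplest form of the argument): direct rotation costs four multiplies, one add, and one subtract, while the general homogeneous-matrix route spends nine multiplies and six adds per point, most of them against constant 0s and 1s.

    ```c
    #include <assert.h>
    #include <math.h>

    /* Direct 2D rotation: four multiplies, one add, one subtract. */
    void rotate_direct(double *x, double *y, double c, double s)
    {
        double nx = *x * c - *y * s;
        double ny = *x * s + *y * c;
        *x = nx; *y = ny;
    }

    /* The same result via a general 3x3 homogeneous matrix multiply:
     * nine multiplies and six adds per point, mostly against constant
     * 0s and 1s -- the overhead the post is complaining about. */
    void rotate_matrix(double *x, double *y, const double m[3][3])
    {
        double v[3] = { *x, *y, 1.0 }, out[3] = { 0, 0, 0 };
        for (int r = 0; r < 3; r++)
            for (int col = 0; col < 3; col++)
                out[r] += m[r][col] * v[col];
        *x = out[0]; *y = out[1];
    }
    ```

    Both produce identical coordinates; the matrix form wins only when you need to compose many transforms into a single pass, which is the counter-argument the post leaves out.
    
    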

    So 1) everything moved to floating point, and 2) rather than argue it, processor makers created hardware instructions to do it. You know MMX? 3DNow? That's about ALL those do! Hardware matrix multiplies shoving massive amounts of memory around just to do a rotation or translation.

    Pretty much by the time Glide was fading, all 3D math on PCs was floating point, typically double precision. OpenGL? DirectX? Vulkan? Double precision floats. Even WebGL in the browser does it now, and they had to change JavaScript to add strictly typed arrays (to a loosely typed language) to do it! Though that change has opened the doors to doing a lot of things JavaScript couldn't before, making it even more viable as a full-stack development option.

    You can't even argue it now with 'professional' game programmers, even when the situation calls for something matrices and normal projections can't handle, as they are so used to "the API does that for me". Implementing things like arctangent polar projections (which, with a lookup table at screen-resolution depth, can be many, MANY times faster even CPU-bound than a GPU projection) is agonizing because the rendering hardware just won't take the numbers unless you translate it all from polar to Cartesian, a process that eliminates the advantages.

    Laughably I wrote a game engine about twenty years ago that used GLIDE (3dFX's proprietary API which really didn't do much 3d, it was just a fast textured triangle drawing engine) built ENTIRELY in polar coordinates -- until the view rotation that was in fact handled as a translation -- using 32 and 64 bit integer math that in a standup fight could give the equivalent rendering in OpenGL on a 'similar performing' card that had hardware 3d math a right round rogering.

    If you're working on the CPU and dealing with off the shelf 3d model formats now? Double precision floats. If you're working on the GPU through a major API? Double precision floats.

    Which for a LONG time left ARM crippled or at the whims of the GPU (which laughably STILL aren't even up to snuff with Intel HD on processing power) until they added the option for a "VFP" extension -- vector floating point, which is a big fancy way of saying MMX on ARM. It's bad enough that an ARM Cortex A8 at 1GHz delivers integer and memory performance about equal to a 450MHz PII (since they are more obsessed with processing per watt than processing per clock); when you realize that things like WebGL or OpenGL ES want to work in double precision floats, and that there is no floating point in hardware on a stock ARM prior to the Cortex A8 (and it's optional on A9s), you're looking at 487-scale performance in that regard. (Thankfully VFP and SIMD extensions are now commonplace, but a LOT of cheaper devices still omit them.)

    Even more of a laugh when you realize most low end ARM video hardware is just overglorified 20 year old Permedia designs with faster clocks shoved at it.

    Part of why without a major overhaul, now that Intel is gunning for that space ARM could be in for a very rough ride in the coming years. VFP is a stopgap at best, even the best offerings in Mali OpenGL ES video for ARM gets pimp slapped by even piddly little Intel HD on some of the new low wattage Celerons. The only real hope ARM has moving forward is existing momentum and if nVidia's new low power strategy for desktop/notebook trickles its way down into the Tegra line.

    ... and honestly I wouldn't hold my breath on that, I get the feeling nVidia is starting to consider walking away from the mobile space even if their "shield" technology relies on it. It hasn't been the success they hoped for.
    Last edited by deathshadow; May 12th, 2017 at 01:30 PM.
    From time to time the accessibility of a website must be refreshed with the blood of owners and designers. It is its natural manure.
    CUTCODEDOWN.COM

  3. #53
    Join Date
    May 2009
    Location
    Connecticut
    Posts
    4,275
    Blog Entries
    1

    Default

    Floating point might not have made much sense in games since displaying partial pixels is not beneficial. In scientific software, it was common to go with floating point with as many bits of accuracy as possible. Sometimes a good idea, sometimes it just meant the PDP-11 ran all weekend.

  4. #54

    Default

    Quote Originally Posted by krebizfan View Post
    Floating point might not have made much sense in games since displaying partial pixels is not beneficial.
    Until you get into anti-aliasing, sub-pixel hinting, etc, etc...

  5. #55
    Join Date
    Dec 2014
    Location
    The Netherlands
    Posts
    2,024

    Default

    Games just used whatever method was fastest at the time.
    In the early 2d era, it often made most sense to just use integer coordinates, and work with a coordinate system that maps 1:1 to the pixel grid on screen.
    With more advanced stuff (scaling/rotating 2D and such, 2.5D or real 3D), you would need additional precision over the screen resolution. So you want some kind of solution that can handle fractional coordinates as well.
    Obviously, before FPUs were commonplace, full floating point wasn't very efficient. So games used fixedpoint notation (basically just integers scaled up by a certain power-of-2 value, to get fractional precision).
    Another advantage of using integers is that they are very predictable and numerically stable. There's no fancy scaling or rounding that can affect precision in unwanted ways. So if you are writing some kind of rasterizing routine (e.g. a line-drawing or polygon routine), an integer-based solution is guaranteed to render consistently and touch all intended pixels.
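    A minimal sketch of the fixedpoint scheme described above, using the common 16.16 split (macro names are invented): values are plain integers scaled by 2^16, so adds and compares are ordinary integer operations, and only multiplication needs a widening shift.

    ```c
    #include <assert.h>

    /* 16.16 fixed-point: the integer part lives in the top 16 bits,
     * the fraction in the bottom 16. Add/subtract/compare work as
     * plain integer ops; multiply needs a 64-bit intermediate so the
     * doubled scale factor can be shifted back out. */
    typedef int fixed;                                        /* 16.16 */
    #define FX(x)       ((fixed)((x) * 65536))                /* to fixed   */
    #define FX_MUL(a,b) ((fixed)(((long long)(a) * (b)) >> 16))
    #define FX_INT(a)   ((a) >> 16)                           /* int part   */
    ```

    This is the style of arithmetic the DOOM-era engines mentioned below were built on, long before the Pentium's FPU made single-precision floats the faster option.
    
    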

    But as FPUs became commonplace, it became more efficient (and flexible) to perform certain calculations with floating point.
    In games there's a pretty obvious transition point: in the era of DOOM, Descent and such, everything was still done with fixedpoint integers (the 486 made the FPU commonplace, but it wasn't a very efficient FPU, so you'd avoid it like the plague for high-performance calculations). Then Quake came around, and a lot of calculations were done with floating point (the Pentium happened, and its FPU could do single-precision operations like fmul and fdiv much faster than integer mul and div. And perhaps more importantly: the FPU instructions could run in parallel with integer ones. For the perspective divide it would fire off one fdiv for every 16 horizontal pixels. The fdiv was effectively 'free' because it ran in the background while the inner loop was outputting textured pixels. By the time it had rendered 16 pixels, the fdiv was completed and the result available on the FPU stack.)

  6. #56
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    31,526
    Blog Entries
    20

    Default

    ...and let's not forget the fixed-point DSPs. Despite lack of floating point, they are/were quite useful.

  7. #57

    Default

    Some corrections for deathshadow: MMX is for integer/fixed-point operations, not floating-point. Single-precision maths is typical for most non-scientific GPU-based work, not double. Double-precision maths is supported by GPUs but it's avoided since it performs at far less than half the rate of single-precision operations in all but the highest-end GPUs, i.e. ones not intended for gaming. In ARM processors VFP has been supplemented by NEON which performs much better with vector operations than VFP does. The Cortex-A8 implements NEON well but has a crippled VFP unit compared to the one in the A9. Even pre-ARMv7 processors, such as the ARMv6 ARM11 used in the original Raspberry Pi greatly outperform it.

  8. #58
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    31,526
    Blog Entries
    20

    Default

    Let's hope that RISC-V makes some headway. I'd hate to think that we'll turn into an ARM world.

  9. #59
    Join Date
    Dec 2005
    Location
    Toronto ON Canada
    Posts
    7,137

    Default

    Quote Originally Posted by krebizfan View Post
    Half the code I dealt with in the late 70s involved floating point so I guess I have a slightly different perspective. I wanted accurate results with speed.
    The exact opposite of almost all the code I dealt with; in the business world floating point was generally slower and less accurate because it tended to introduce rounding errors.

  10. #60
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    31,526
    Blog Entries
    20

    Default

    ...which is one of the reasons why spreadsheets initially implemented decimal floating-point math--and why CBASIC did, for example.
