
Thread: Testimonies of using BASIC back in the day

  1. #61
    Join Date
    May 2009
    Location
    Connecticut
    Posts
    3,102
    Blog Entries
    1

    Default

    Quote Originally Posted by MikeS View Post
    The exact opposite of almost all the code I dealt with; in the business world floating point was generally slower and less accurate because it tended to introduce rounding errors.
    Alas, electron orbits do not lend themselves to easy decimal notations.

  2. #62
    Join Date
    Dec 2005
    Location
    Toronto ON Canada
    Posts
    6,620

    Default

    Quote Originally Posted by krebizfan View Post
    Alas, electron orbits do not lend themselves to easy decimal notations.
    Indeed; just pointing out that what's fast and accurate in your realm (science, games, etc.) is slow, and not necessarily accurate either, in the (often ignored here) business realm.

    Quote Originally Posted by Chuck(G) View Post
    ...which is one of the reasons why spreadsheets initially implemented decimal floating-point math--and why CBASIC did, for example.
    'Precision as Displayed' is set by default in all my Excel sheets; it's disconcerting when 'IF A1=2' fails, or when FoxPro says that 1 + 1 = 1 for that matter...
    Last edited by MikeS; May 15th, 2017 at 12:04 PM.

  3. #63
    Join Date
    Jun 2016
    Location
    Guisborough, England
    Posts
    59

    Default

    Surely that can be a problem with ANY computer language.

    Over the years, I've had problems with various languages, including the C7 I've done most of my 'serious' work with. Where such things mattered, I always took the precaution of using a special rounding function on both sides of the comparison (as in the sketch below) to make sure that if the two numbers were the same, then the computer recognised them as such. If I didn't, there was always a possibility that one number or the other might still have a stray (VERY 'stray') 0.000000000001 or suchlike hanging in there!

    Geoff
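
    A minimal sketch (in C, not Geoff's actual code) of that kind of guarded comparison; the helper name, the number of decimal places, and the rounding method are illustrative assumptions:

    #include <math.h>
    #include <stdio.h>

    /* Round x to a fixed number of decimal places before comparing.
       Illustrative only: assumes non-negative values and a place count
       chosen to swallow the stray 0.000000000001-type residue. */
    static double round_places(double x, int places)
    {
        double scale = pow(10.0, places);
        return floor(x * scale + 0.5) / scale;
    }

    int main(void)
    {
        double a = 0.1 + 0.2;   /* binary floating point gives 0.30000000000000004 */
        double b = 0.3;

        printf("direct  compare: %s\n", (a == b) ? "equal" : "NOT equal");
        printf("rounded compare: %s\n",
               (round_places(a, 9) == round_places(b, 9)) ? "equal" : "NOT equal");
        return 0;
    }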

  4. #64
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    23,837
    Blog Entries
    20

    Default

    Note by "decimal", I don't necessarily mean that a number is expressed in, say, BCD, although that would make for convenience.

    Simply maintaining the exponent as a power of 10 rather than 2 (or 16, as in S/360 floating point) is sufficient. While most have heard of IEEE 754 floating point, few are familiar with IEEE 854. At any rate, interest in decimal radix is very much alive. Somewhere, even recall reading about a decimal coprocessor done in FPGA.

    Students studying numerical methods are often given a problem that involves trig functions near their limits, usually expressed as a quotient. The naive learner simply codes the expression as stated and discovers that the result is pure garbage. It's an object lesson in not blindly trusting the computer to come up with the "right" answer.
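
    A small illustration of that kind of trap (my own example, not the original assignment): the naive form of (1 - cos x)/x^2 collapses near x = 0, while an algebraically equivalent rewrite using 2*sin^2(x/2) keeps its digits:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* (1 - cos x) / x^2 tends to 0.5 as x tends to 0, but the naive
           form subtracts two nearly equal numbers and loses most digits. */
        double x = 1e-4;
        for (int i = 0; i < 5; i++, x /= 10.0) {
            double naive  = (1.0 - cos(x)) / (x * x);
            double stable = 2.0 * pow(sin(x / 2.0), 2) / (x * x);
            printf("x = %.0e  naive = %.15f  stable = %.15f\n", x, naive, stable);
        }
        return 0;
    }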

  5. #65
    Join Date
    Dec 2005
    Location
    Toronto ON Canada
    Posts
    6,620

    Default

    What Chuck said.

    Quoting from his link:

    "Most computers today support binary floating-point in hardware. While suitable for many purposes, binary floating-point arithmetic should not be used for financial, commercial, and user-centric applications or web services because the decimal data used in these applications cannot be represented exactly using binary floating-point".

    As someone who spent 40 years or so with systems and software for the accounting and financial services markets, I learned that truth very early on...
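
    A tiny demonstration of that truth (a sketch, with "work in integer cents" standing in for real decimal floating point, which keeps a power-of-ten exponent instead of a power-of-two one):

    #include <stdio.h>

    int main(void)
    {
        /* Add one cent a hundred times: the binary double drifts,
           while the scaled-integer (cents) representation stays exact. */
        double dollars = 0.0;
        long long cents = 0;

        for (int i = 0; i < 100; i++) {
            dollars += 0.01;
            cents   += 1;
        }

        printf("binary double : %.17f\n", dollars);   /* not exactly 1.0 */
        printf("scaled integer: %lld.%02lld\n", cents / 100, cents % 100);
        return 0;
    }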

  6. #66
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    23,837
    Blog Entries
    20

    Default

    I'm trying to remember an article from the HP Journal from years back. As best I can recall, it was a way of doing decimal floating point by expressing the mantissa in groups of 20(?) bits, each group having the range 0-999999 in binary, or something to that effect. The benefit was that you could get 6 digits of significance, where doing the same in BCD would only get you 5. Does anyone remember the article?
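
    Whichever article it was, the size argument checks out: 2^20 = 1,048,576, so a 0-999999 group (six decimal digits) fits in 20 bits, whereas BCD spends four bits per digit and gets only five digits out of the same 20 bits. A rough sketch of the packing idea (not the HP format itself):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Six decimal digits stored as a plain binary value fit in 20 bits,
           since 2^20 = 1048576 > 999999; BCD would need 24 bits for them. */
        uint32_t group = 987654;            /* six decimal digits          */
        uint32_t field = group & 0xFFFFF;   /* masked to a 20-bit field    */

        printf("packed field: 0x%05X\n", (unsigned)field);
        printf("unpacked    : %06u\n", (unsigned)field);  /* all six digits back */
        return 0;
    }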

  7. #67
    Join Date
    Dec 2014
    Location
    The Netherlands
    Posts
    1,582

    Default

    Quote Originally Posted by Chuck(G) View Post
    Students studying numerical methods are often given a problem that involves trig functions near their limits, usually expressed as a quotient. The naive learner simply codes the expression as stated and discovers that the result is pure garbage. It's an object lesson in not blindly trusting the computer to come up with the "right" answer.
    That reminds me of an issue that many raytracers have:
    If you calculate the intersection of a ray and some surface (sphere, plane, cylinder, etc.), then the intersection point is never exactly *on* the surface, of course, due to the limited precision.
    So your intersection point is either in front of or behind the surface, depending on which way the rounding turned out.

    Now, if you don't take this into account, and then proceed to reflect the ray at the intersection point, you will often find that your rendered surface will have 'holes': random black pixels.
    Why? Simple: when you reflect your ray, you implicitly assume that it is bounced against the surface, so it should be on the 'outside'. But if due to rounding your intersection point was actually on the 'inside' of the surface, then the reflected ray will bounce back to the surface again, and the light may get 'trapped' inside the object for the remaining 'bounces' to be calculated. That's why you'll get those random black pixels.

    Some implementations just leave it at that... Others try to use brute force to 'fix' it: either they increase the supersampling to a level where the black pixels 'blend in' with their correct neighbours, so the issue is not apparent, or they just use double-precision floating point everywhere (or worse, if the FPU supports it). Neither gives correct results.

    Smart coders can get it working fine without any supersampling, using just single precision. You can use one of these elegant solutions:
    1) When you calculate the intersection point, step back along the ray by a certain epsilon value. With a well-chosen epsilon, the intersection point is now always on the correct side of the surface, so you never 'overshoot' into the object.
    2) Keep track of what object your ray last bounced from. If it is the same object as the nearest intersection at the current bounce, then it is suspect. In that case, if the length of the ray (distance between the previous and current intersection point) is shorter than a given epsilon, discard this nearest intersection and take the next-nearest one instead, because you have likely bounced against the same surface again.
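
    A minimal sketch of solution 1 in C (the structures, names, and the 1e-4 epsilon are illustrative assumptions, not any particular tracer's code):

    #include <stdio.h>

    typedef struct { double x, y, z; } Vec3;

    /* Illustrative value: a real tracer tunes the epsilon to its scene
       scale and precision. */
    #define HIT_EPSILON 1e-4

    /* Step back along the ray by HIT_EPSILON so that, despite rounding,
       the point used as the next ray origin lies on the outside of the
       surface rather than just inside it. */
    static Vec3 offset_hit_point(Vec3 origin, Vec3 dir, double t)
    {
        double t_safe = t - HIT_EPSILON;
        Vec3 p = { origin.x + dir.x * t_safe,
                   origin.y + dir.y * t_safe,
                   origin.z + dir.z * t_safe };
        return p;
    }

    int main(void)
    {
        Vec3 origin = { 0.0, 0.0, 0.0 };
        Vec3 dir    = { 0.0, 0.0, 1.0 };    /* unit direction */
        Vec3 next   = offset_hit_point(origin, dir, 5.0);
        printf("next ray starts at (%g, %g, %g)\n", next.x, next.y, next.z);
        return 0;
    }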

