
Thread: Understanding CPUs and hexadecimal

  1. #1
    Join Date
    Jan 2011
    Location
    Vancouver, BC
    Posts
    4,579
    Blog Entries
    3

    Default Understanding CPUs and hexadecimal

    Do CPUs like the 6502, etc receive instructions in hex and convert them to binary internally? Or do they rely on the 'computer' to translate what is in memory for them beforehand? I've always assumed everything has to be fed in as binary, and that hex is there as a human-friendly way to manage things. Am I correct in that?
    Last edited by falter; August 31st, 2019 at 01:48 PM.

  2. #2
    Join Date
    Jan 2005
    Location
    Principality of Xeon (NJ)
    Posts
    1,278

    Default

    As soon as the CPU acquires adequate power, it starts pulling binary "codes", or instructions, from permanent memory. In reality the computer understands nothing about binary or hexadecimal math; it simply takes the voltages present on its data bus and acts accordingly. Both instructions and data arrive on the data bus, which carries the contents of whichever memory location has been "dialed up" for access via the voltages placed on its address bus.

    That explanation may not be perfectly clear. But to answer your question simply: both memory addresses and instructions/data are placed, or received, on their respective busses as voltages (for simplicity, 5 volts for high/on, 0 volts for low/off) that are effectively represented by binary numbers. One set of pins on the micro is designated for memory addressing (collectively the address bus, labelled A0 - A15 on a 6502), and another set for the actual memory reading/writing (the data bus, D0 - D7 on its 8-bit data path).
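
    If it helps to see that in code rather than in volts, here is a rough sketch in C. The memory array, the bus_read() name and the values stored at the reset vector are all invented for illustration; the point is that the chip only ever sees bit levels, and hex appears only when a human prints the result.
    Code:
    #include <stdio.h>
    #include <stdint.h>

    /* Toy model of a fetch: the CPU drives 16 address-line levels (A0-A15),
       the memory system answers with 8 data-line levels (D0-D7).  There is
       no "hex" anywhere in here.  Names are invented for illustration. */

    static uint8_t memory[65536];          /* 64 KB of byte-wide storage    */

    static uint8_t bus_read(uint16_t address)
    {
        return memory[address];            /* each bit = one pin high/low   */
    }

    int main(void)
    {
        memory[0xFFFC] = 0x00;             /* 6502-style reset vector, low  */
        memory[0xFFFD] = 0x80;             /* and high byte, just bits      */

        uint8_t lo = bus_read(0xFFFC);
        uint8_t hi = bus_read(0xFFFD);
        unsigned start = (unsigned)hi << 8 | lo;

        /* Same 16 bits, three human spellings of them: */
        printf("decimal: %u  octal: %o  hex: %04X\n", start, start, start);
        return 0;
    }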

  3. #3
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    32,250
    Blog Entries
    18

    Default

    Hexadecimal (and octal, as well as base 32 and 64) are simply convenient systems of notation. They are binary, but with binary digits grouped in 3 bits or 4 bits (or 5 and 6 bits). There's nothing special about them--and even the notation for hex using 0-9 and A-F wasn't common until the mid 1960s.

    A = 1010 in binary, for example. A 12-bit word can be expressed as 12 binary digits, 4 octal digits or 3 hex digits.

    DEC stubbornly hewed to octal with the 16-bit PDP-11, leading to some confusion. A 16 bit word of binary ones would be FFFF in hexadecimal and FF, FF if viewed as two 8-bit bytes. However, the same word would be 177777 in octal, with the two 8-bit halves viewed as 377 377.
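
    To make the radix comparison concrete, here is a trivial C sketch (nothing PDP-11 specific about it) printing the same sixteen one-bits the four ways just described:
    Code:
    #include <stdio.h>

    int main(void)
    {
        unsigned word = 0xFFFF;                      /* sixteen one-bits    */
        unsigned hi = word >> 8, lo = word & 0xFF;

        printf("hex word:     %04X\n", word);        /* FFFF                */
        printf("hex bytes:    %02X %02X\n", hi, lo); /* FF FF               */
        printf("octal word:   %06o\n", word);        /* 177777              */
        printf("octal bytes:  %03o %03o\n", hi, lo); /* 377 377             */
        return 0;
    }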

    I once worked on a 64-bit word machine where addresses were in bit granularity, but indexes were in bytes (shift 3 bits), quarter-words (shift 4 bits), halfwords (shift 5 bits) or words (shift 6 bits). I valued our TI SR-22 desktop calculator for plowing through dumps...

  4. #4
    Join Date
    Jan 2005
    Location
    Principality of Xeon (NJ)
    Posts
    1,278

    Default

    And you must always remember: whereas you'll never meet someone who lives at 0 Lois Ln. or 0 Sunset Blvd., 0 is the first memory location on every microprocessor system on the planet, and it can contain an actual data item or instruction.

  5. #5
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    32,250
    Blog Entries
    18

    Default

    Well, there are binary number systems with no zero or one. Take RFC 4648 and base 32 encoding, for example.

    Code:
                         Table 3: The Base 32 Alphabet
    
         Value Encoding  Value Encoding  Value Encoding  Value Encoding
             0 A             9 J            18 S            27 3
             1 B            10 K            19 T            28 4
             2 C            11 L            20 U            29 5
             3 D            12 M            21 V            30 6
             4 E            13 N            22 W            31 7
             5 F            14 O            23 X
             6 G            15 P            24 Y         (pad) =
             7 H            16 Q            25 Z
             8 I            17 R            26 2
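
    For the curious, here is a rough C sketch of how that table gets used: every 5 input bits select one character, and '=' pads out the final 8-character group. This is only an illustration of the RFC 4648 scheme, not a hardened implementation.
    Code:
    #include <stdio.h>
    #include <stdint.h>

    static const char *B32 = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

    /* Encode len bytes of in[] into out[] as RFC 4648 base 32 (sketch). */
    static void base32_encode(const uint8_t *in, size_t len, char *out)
    {
        size_t i = 0, o = 0;
        while (i < len) {
            /* Load up to 5 bytes (40 bits) into an accumulator. */
            uint64_t acc = 0;
            size_t n = (len - i < 5) ? len - i : 5;
            for (size_t k = 0; k < n; k++)
                acc |= (uint64_t)in[i + k] << (8 * (4 - k));
            i += n;

            /* One character per full or partial 5-bit group, '=' pads. */
            size_t chars = (n * 8 + 4) / 5;
            for (size_t k = 0; k < 8; k++)
                out[o++] = (k < chars) ? B32[(acc >> (35 - 5 * k)) & 0x1F] : '=';
        }
        out[o] = '\0';
    }

    int main(void)
    {
        char buf[64];
        base32_encode((const uint8_t *)"foobar", 6, buf);
        printf("%s\n", buf);   /* RFC 4648 test vector: MZXW6YTBOI====== */
        return 0;
    }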

  6. #6
    Join Date
    Feb 2011
    Location
    NorthWest England (East Pondia)
    Posts
    2,227
    Blog Entries
    10

    Default

    Quote Originally Posted by falter View Post
    Do CPUs like the 6502, etc receive instructions in hex and convert them to binary internally? Or do they rely on the 'computer' to translate what is in memory for them beforehand? I've always assumed everything has to be fed in as binary, and that hex is there as a human-friendly way to manage things. Am I correct in that?
    You are correct, HEX is a human-friendly way of representing binary numbers. If you are entering HEX data via a low-level monitor, then each HEX character gets converted to its 4-bit binary equivalent. A pair of values is combined and stored in memory as a single binary byte. On output the reverse happens: each byte is split into nibbles and output as two ASCII characters....
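
    A minimal sketch of that round trip in C (the function names are made up; a real monitor does the same thing in a few lines of assembler). Two keystrokes in, one byte stored, two characters back out; 0xA9 happens to be the 6502's LDA-immediate opcode.
    Code:
    #include <stdio.h>
    #include <ctype.h>
    #include <stdint.h>

    /* One ASCII hex character -> its 4-bit value, or -1 if not hex. */
    static int nibble(char c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        c = (char)toupper((unsigned char)c);
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        return -1;
    }

    int main(void)
    {
        /* Input: the two keystrokes 'A' and '9' become one stored byte. */
        uint8_t byte = (uint8_t)(nibble('A') << 4 | nibble('9'));   /* 0xA9 */

        /* Output: split the byte into nibbles, print two characters.   */
        const char *digits = "0123456789ABCDEF";
        printf("%c%c\n", digits[byte >> 4], digits[byte & 0x0F]);   /* A9  */
        return 0;
    }
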
    Dave
    G4UGM

    Looking for Analog Computers, Drum Plotters, and Graphics Terminals

  7. #7

    Default

    The grouping of binary bits is usually done in powers of 2. Hex is a relatively compact way to represent a binary number, but it is not always the best way to look at any particular processor. Many microprocessors tend to group basic operations into groups of 8 or 16. Take, for example, the 8080: it has 8 addressable register codes (some shared for operations where a register doesn't make sense). Grouping the 8080 instruction bits into octal digits makes learning 8080 machine code a lot easier, which is why Heathkit first released the H8 with an octal keyboard. Octal was clumsy for addresses, though, because the basic unit was still the byte, or 8 bits. With addresses being 16 bits, they created what they called split octal, where each byte is written as its own three octal digits, 000 through 377. Incrementing the split octal address 000 377 gives 001 000 rather than 000 400, which takes some getting used to. So a more popular compact way of showing the bits won out: what we call hex (really hexadecimal), where the same two addresses are 00FF and 0100 and adding 1 behaves the way you expect.
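
    Here is a small C illustration of why octal reads so nicely on the 8080. The 01 ddd sss layout of MOV and the register codes (0=B 1=C 2=D 3=E 4=H 5=L 6=M 7=A) are the 8080's own; everything else is invented for the sketch.
    Code:
    #include <stdio.h>

    static const char REG[] = "BCDEHLMA";    /* 8080 register codes 0..7   */

    int main(void)
    {
        unsigned op = 0x78;                  /* MOV A,B on the 8080        */

        printf("opcode in hex:   %02X\n", op);   /* 78  - fields hidden    */
        printf("opcode in octal: %03o\n", op);   /* 170 - fields jump out  */

        if ((op >> 6) == 1 && op != 0x76)    /* 0x76 is HLT, not a MOV     */
            printf("MOV %c,%c\n", REG[(op >> 3) & 7], REG[op & 7]);
        return 0;
    }
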
    Also, many financial computations are done in BCD (binary coded decimal). In this form 199 plus 1 is nicely 200. The computer can at most only temporarily use the binary values 1010, 1011, 1100, 1101, 1110 and 1111; it must correct with the appropriate carries to keep each digit in the range 0 to 9 (coded in binary). Although people have gotten used to decimal (because of a birth defect of only having 0A hex digits on our hands), computers work happily with single binary bits (1 or 0). This makes hex a handy way to write large binary numbers that are easily converted to computer-happy binary. As an example, 10000 in hex is 65536 in decimal, an awful number in decimal but an easy one to translate to binary. Some mathematical operations actually work better in other bases. As an example, say you wanted to print out the 400th digit of Pi. There is actually a way to print the 400th hex digit of Pi without calculating all the intermediate digits.
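
    A quick sketch of that BCD correction in C. It is a toy adder, not how any particular CPU implements its decimal-adjust instruction, but it shows the "add 6 and carry" fix that keeps every nibble in the range 0 to 9.
    Code:
    #include <stdio.h>
    #include <stdint.h>

    /* Add two packed-BCD values, one decimal digit per 4-bit nibble. */
    static uint16_t bcd_add(uint16_t a, uint16_t b)
    {
        uint16_t result = 0;
        unsigned carry = 0;
        for (int shift = 0; shift < 16; shift += 4) {
            unsigned d = ((a >> shift) & 0xF) + ((b >> shift) & 0xF) + carry;
            carry = (d > 9);
            if (carry) d += 6;               /* skip codes 1010-1111       */
            result |= (uint16_t)((d & 0xF) << shift);
        }
        return result;
    }

    int main(void)
    {
        /* 0x0199 is the packed-BCD encoding of decimal 199. */
        printf("%03X\n", (unsigned)bcd_add(0x0199, 0x0001));   /* 200      */
        return 0;
    }
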
    So, maybe there is something more fundamental in nature about powers of 2 compared with powers of 10 ( our primary numerical choice ).
    So, to end the story, when you download a hex file into your Arduino, it is actually converting each ASCII hex digit into 4 binary bits before writing it into its Flash memory. All this to make it easier for you to read a dump file of the data.
    So, as a thought experiment, think of a trinary computer. Values would be +, 0, -. We could group three trits at a time and write numbers in base 27, the way hex groups four bits into base 16. 10000 hex would be 3 8 24 7 in base-27 digits (call it 38O7, with O standing for 24), and 10000 in base 27 would be 531441 in decimal. What a nice way to say about half a million tri-bits.
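
    And a few lines of C to check that base-27 arithmetic (purely illustrative):
    Code:
    #include <stdio.h>

    int main(void)
    {
        unsigned n = 65536, digits[8], count = 0;
        while (n) {                      /* peel off base-27 digits        */
            digits[count++] = n % 27;
            n /= 27;
        }
        printf("65536 (10000 hex) in base 27:");
        while (count)                    /* most significant digit first   */
            printf(" %u", digits[--count]);
        printf("\n");                    /* 3 8 24 7                       */

        unsigned p = 1;
        for (int i = 0; i < 4; i++) p *= 27;
        printf("10000 in base 27 = %u decimal\n", p);   /* 531441          */
        return 0;
    }
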
    Dwight

  8. #8
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    32,250
    Blog Entries
    18

    Default

    Quote Originally Posted by Dwight
    The grouping of binary bits is usually done in powers of 2.
    I'm not following again. Do you mean 2 bits, 4 bits, 8 bits, 16 bits, etc.?

    If so, that was not the rule pretty much throughout the 1960s. 36 bit machines using 6 bit characters and 7 track tapes were far more common than 32-bit ones. Octal was the radix of choice for that reason. Decimal machines also weren't uncommon.

    Hexadecimal was largely introduced with the IBM S/360 in 1964; IBM decided to use a macaronic word, combining a Greek prefix with a Latin suffix ("hex" and "decimal") rather than the accepted, at the time, "sexadecimal". IBM suits couldn't stand the idea of a term starting with the letters "sex", however etymologically correct.

    As mentioned, the few computers that used base 16 notation didn't standardize on digits. IIRC, one of the conventions was 0123456789UVWXYZ. Univac 1100-series were 36 bit machines with variable byte length: 6, 9 or 12 bits--but not 8. But almost all of the classic big iron before the S/360 used octal or decimal. Even after 9 track tapes had seen wide adoption, you'd see 36 bits encoded as 4 1/2 frames. Makes for entertaining twiddling.
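
    Just for illustration, a few lines of C printing the same 16-bit value with that old digit convention next to the modern one (the value itself is arbitrary):
    Code:
    #include <stdio.h>

    int main(void)
    {
        const char *old_style = "0123456789UVWXYZ";   /* pre-A-F convention */
        const char *modern    = "0123456789ABCDEF";
        unsigned value = 0xBEEF;

        for (int shift = 12; shift >= 0; shift -= 4)
            putchar(old_style[(value >> shift) & 0xF]);
        putchar(' ');
        for (int shift = 12; shift >= 0; shift -= 4)
            putchar(modern[(value >> shift) & 0xF]);
        putchar('\n');                   /* prints "VYYZ BEEF"              */
        return 0;
    }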

    And of those 6-bit characters, few vendors shared the same encoding. Even IBM had several different versions of BCDIC.

    Even in the early microprocessor days, octal was very common. Consider the GI CP1600, one of the first monolithic 16-bit MPUs: the literature is in octal. Even with the 8080, the world was divided. IMSAI allowed the front panel switches to be configured as colored groups of 3 or 4.

    Shrug--it's all the same to me. At one time I was programming a 60 bit word system in octal, a 64 bit word system in hex, and a 32 bit system in octal, all at the same time. It's good mental exercise to be able to do math in either radix.

    The world has not always been 8 bit characters and ASCII. Not by a long shot.

  9. #9

    Default

    What's more fun than a non-linear character set? Hexadoku...

  10. #10
    Join Date
    Jan 2007
    Location
    Pacific Northwest, USA
    Posts
    32,250
    Blog Entries
    18

    Default

    It makes for a great deal of entertainment for me. You get a bunch of 10.5" reels of tape.

    First off, you have to decide if they're 7 or 9 track, and what density--and if they need treatment before reading.

    Then you read them (after treating them). Then you get to figure out what system produced them. It helps to have a background in big iron.

    Then you translate the ancient encodings into something that modern hardware can understand.

    It's very entertaining--making bits mean something.
