
View Full Version : It’s all Bass Ackward if you ask me



Lorne
October 13th, 2010, 01:46 PM
In most countries in the world people read from left to right, and top to bottom.
And that’s what most of us do, except apparently, when we’re dealing with electronics.

I just went through this crap over the last two days, and thought I’d share my experience.

My Osborne 1 had a RAM problem.
With direction/help from another forum member, I used a small program called UMPIRE.COM, which is on FOG CP/M disk #55 (available at Bitsavers in the Software – User Groups section), to test the DRAM.
The program pointed to bit 5 in the second bank as being the bad DRAM.

Now, here’s the mainboard for the Osborne:

[attachment 4668]

As you can see, the components are arranged in vertical columns, numbered from left to right starting at 1 on the far left and ending at 27 on the far right, and in horizontal rows, labeled A through E from top to bottom.
The DRAM banks occupy rows A through D and columns 20 through 27.
For those who didn’t get that part: that’s left to right, top to bottom, column numbering starting at 1 (not zero), which is all quite logical.

So, now we want to find the bad DRAM in the second row, at bit 5.
Easy enough, right?
The second row would be row B.
Yep, we’re good so far.

Now, we still need to find bit 5 of the 8 bits.
So we count 5 chips from the left side, which would give us the DRAM chip at location B-24, right?
Wrong !
It seems that in electronics we start counting at zero, not one, so the chip in question is actually the sixth chip, in position B-25 (i.e. the one in the freshly soldered-in IC socket).
Now, if someone were to hand you six eggs and say "I want you to count the eggs out loud, and then tell me how many eggs there are", would you say:
“One, two, three, four, five, six, there are six eggs”
Or, would you say:
“Zero, one, two, three, four, five, there are six eggs”?
Huh?
I mean, WTF?
Who the hell counts like that?
It's just plain bass ackward.

But wait, there’s more !
The 5th bit in the second bank isn’t actually in position B-25 at all.
Believe it or not, it’s in position B-22 because it seems the DRAM is counted from right to left !
I.e., time to get the soldering station out again.
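For what it's worth, Lorne's hard-won rule can be written down as a tiny sketch. The helper name and the column arithmetic here are my own inference from the post, not from an Osborne schematic: bits are 0-based and the chips run right to left across columns 20 to 27, so bit n of bank b sits at column 27 - n.

```python
# Hypothetical helper inferred from the post (not from a schematic):
# banks are 1-based rows A-D, bits are 0-based, and the chips run
# right to left across board columns 20-27.
BANK_ROWS = "ABCD"

def chip_location(bank, bit):
    """Board location (e.g. 'B-22') for a 1-based bank number and a
    0-based bit number as reported by UMPIRE.COM."""
    row = BANK_ROWS[bank - 1]   # bank 2 -> row B
    column = 27 - bit           # bit 0 sits in the rightmost column, 27
    return f"{row}-{column}"

print(chip_location(2, 5))  # -> B-22, the chip Lorne had to replace
```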

Am I the only one who thinks that a lot of this electronics stuff is completely illogical?
Am I the only one who gets totally confused working on electronics stuff?

The Osborne’s DRAM problem has been repaired (twice!) and it is working fine, but now I have a splitting headache, which I think will only be relieved by a six pack of Bass Ale.

Chuck(G)
October 13th, 2010, 01:48 PM
Everything computer-related is utterly logical, unless it's not.

I like beer too. :)

glitch
October 13th, 2010, 02:02 PM
What's even better is wire-wrapping a 16-bit address bus, and you're short one wire when you get to the end. Only then do you realize that, while everyone else calls the first address bit A_0, the device you're connecting starts at A_1. That, and Cypress's desire to introduce a "compatible" 8K x 8 6264 SRAM in which the .3" and .6" spacing packages have different pinouts caused much headache in wire-wrap prototyping.

"Why is the RAM dump showing each byte duplicated twice? It's almost like A_0 is connected to something other than the first address bit..."
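One plausible model of that symptom (assuming the chip ends up seeing the CPU's address bus shifted down by one line, so the true A_0 never reaches it):

```python
# If the chip's lowest address pin is actually driven by the CPU's A1,
# the chip sees addr >> 1 and every byte shows up at two CPU addresses.
ram = bytes(range(8))  # pretend contents: 00 01 02 ... 07

def read_miswired(addr):
    return ram[addr >> 1]  # the CPU's A0 never reaches the chip

dump = [read_miswired(a) for a in range(8)]
print(dump)  # -> [0, 0, 1, 1, 2, 2, 3, 3]: each byte duplicated
```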

per
October 13th, 2010, 02:02 PM
The thing is that it is ten times easier to program when using numeric integer sequences starting at 0 instead of 1. This is especially true when lots of tables and lists are used, since the index values can then be translated directly into memory offsets without any unnecessary complications.
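per's offset argument in miniature (the base address and record size below are made up for illustration):

```python
# With 0-based indices, element i of a table of fixed-size records is
# simply base + i * size - no "subtract one first" correction needed.
base = 0x8000     # hypothetical table address
size = 4          # hypothetical record size in bytes

def record_address(i):      # i is a 0-based index
    return base + i * size

print(hex(record_address(0)))  # -> 0x8000: the first record IS the base
print(hex(record_address(5)))  # -> 0x8014
```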

You can test whether you have found the right memory chip by "piggybacking" a working DRAM on top of the one you think is faulty. If you are correct, the error goes away; if you are wrong, the error is still there. When you find the faulty one, you should obviously replace it.

---

Of course, for the general user it may be really awkward, especially since some systems actually start counting at 1 when it comes to the hardware. It's just a thing one gets used to after working with computers for a bit.

Chuck(G)
October 13th, 2010, 02:20 PM
So, for those who think things are logical, which end of a byte or word is bit 0? Why?

per
October 13th, 2010, 02:30 PM
So, for those who think things are logical, which end of a byte or word is bit 0? Why?

I would say the least significant bit, as it would be illogical for a byte to consist of bits 8 to 15 (in the case where a word is bits 0 to 15). Also, when we write numbers, we start from the right-hand side with the lowest decimal place.

Yes, I know there are exceptions, especially when you get to shift-register based logic.

Lorne
October 13th, 2010, 02:37 PM
So, for those who think things are logical, which end of a byte or word is bit 0? Why?

I'll bite (not byte).

Logically, because we read from left to right, I'd say the left side.
However, as electronics are bass ackward, I'd have to stab in the dark with the right side.

Lorne
October 13th, 2010, 02:42 PM
Also, when we write numbers, we start from the right hand side with the lowest decimal.



If Per is correct, then I'm correct - it really is bass ackward.
I've been dealing with numbers for the last 35 years, and I've never written one from right to left.
As long as I know it's always backward, I might be able to figure this stuff out.
I always suspected there was something secret and dastardly about electronics - maybe that's it.

MikeS
October 13th, 2010, 03:00 PM
Not electronics; binary numbers.

While the chip locations are columns and rows arbitrarily numbered 1 or A to whatever, the bits represent a binary number, i.e. powers of two, so it's quite logical that the lowest value is 0 and is on the right, exactly the same way that we write decimal numbers with the lowest power of ten (also 0) on the right, extending to the left in increasing powers.

If you started with bit 1 (2^1, =2) you couldn't have any odd numbers 8)

@Lorne: I think you do write numbers with the lowest value digit on the right and do addition, subtraction etc. right to left, dontcha? When you have to put a number like 6349125 into little boxes representing powers of ten, don't you have to start on the right or at least count the digits?

To use your example, how many positions from the right would the digit representing 10^5 be?
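MikeS's question has a mechanical answer: the "position" of a digit, counted from the right starting at zero, is just the power of the base. A quick check:

```python
# Digit at a given power of the base, counting positions from the right
# starting at 0 - the same rule for decimal digits and for binary bits.
def digit_at(n, power, base=10):
    return (n // base**power) % base

n = 6349125
print(digit_at(n, 0))  # -> 5, the ones digit
print(digit_at(n, 5))  # -> 3, the 10^5 digit, six places in from the right
```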

per
October 13th, 2010, 03:18 PM
If Per is correct, then I'm correct - it really is bass ackward.
I've been dealing with numbers for the last 35 years, and I've never written one from right to left.
As long as I know it's always backward, I might be able to figure this stuff out.
I always suspected there was something secret and dastardly about electronics - maybe that's it.
It seems like there is a slight misunderstanding here.

When you have forty-two, that translates to four tens and two ones in the regular decimal system. If I were to start with the lowest decimal place (the ones, of which we have two) on the left, as it seems to me you suggest, that would be "24". Since numbers are written with the lowest decimal places on the right side, the proper way to write it is "42" (tens = high place on the left, ones = low place on the right).

Chuck(G)
October 13th, 2010, 03:28 PM
I'll bite (not byte).

Logically, because we read from left to right, I'd say the left side.
However, as electronics are bass ackward, I'd have to stab in the dark with the right side.

Per, dig out your red "System/360 Principles of Operation" and tell me what it says about bit numbering. (This scheme remained in force for IBM mainframes for decades).

I've worked on systems that were fixed-word size (64 bits), but bit-addressable. Logically, calling the bit with the lowest physical address bit 0 makes sense, even if it represents the most significant bit of a word or byte.

In fact, the x86 "little endian" representation of numbers makes a tremendous amount of sense. The problem is that we cloud the picture with our own pre-computer idea of how to write numbers, left-to-right with the leftmost digit being most significant. If, instead we wrote the numbers from 0000 to 1111 in the following way:

0000, 1000, 0100, 1100, 0010, 1010, 0110, 1110...1111, it would make perfect sense. If we used hexadecimal notation, we'd have to swap the digits in a byte, so that A1 would equal our decimal 26 and be written 0101 1000.

We're all mental prisoners of some obscure ancient Arabic scholar. But given the way Arabic is written (right to left), it makes perfect sense.

This idiocy extended to systems like the IBM 1620, where numeric fields were addressed by their least-significant digit (highest address), but non-numeric fields were addressed by the lowest-address digit. This made for a lot of fancy mental arithmetic.

Lou - N2MIY
October 13th, 2010, 03:35 PM
Quote:

Originally Posted by Chuck(G)
So, for those who think things are logical, which end of a byte or word is bit 0? Why?


Even within DEC there were two factions. I looked around the house and the pdp-8 and 10 consoles have the MSB labeled as bit 0, while the 11 has the MSB as bit 15. However, the MSB is always at the left end of the console, and the LSB is at the right.

I'm sure almost everyone has read this, but it's educational for those who haven't: http://en.wikipedia.org/wiki/Endianness

Lou

per
October 13th, 2010, 03:43 PM
Per, dig out your red "System/360 Principles of Operation" and tell me what it says about bit numbering. (This scheme remained in force for IBM mainframes for decades).

I've worked on systems that were fixed-word size (64 bits), but bit-addressable. Logically, calling the bit with the lowest physical address bit 0 makes sense, even if it represents the most significant bit of a word or byte.

In fact, the x86 "little endian" representation of numbers makes a tremendous amount of sense. The problem is that we cloud the picture with our own pre-computer idea of how to write numbers, left-to-right with the leftmost digit being most significant. If, instead we wrote the numbers from 0000 to 1111 in the following way:

0000, 1000, 0100, 1100, 0010, 1010, 0110, 1110...1111, it would make perfect sense. If we used hexadecimal notation, we'd have to swap the digits in a byte, so that A1 would equal our decimal 26 and be written 0101 1000.

We're all mental prisoners of some obscure ancient Arabic scholar. But given the way Arabic is written (right to left), it makes perfect sense.

This idiocy extended to systems like the IBM 1620, where numeric fields were addressed by their least-significant digit (highest address), but non-numeric fields were addressed by the lowest-address digit. This made for a lot of fancy mental arithmetic.

That explains a whole lot.

I have mostly only worked with x86 and Z80 stuff, and I have neither seen nor shown much interest in any of the older mainframes. Both the Z80 and x86 CPUs use combined registers, and they're laid out so that the "Arabic" binary numbering makes more sense.

Of course, if you are used to the IBM left-to-right scheme, then I can imagine it can get very confusing.

MikeS
October 13th, 2010, 03:43 PM
...If, instead we wrote the numbers from 0000 to 1111 in the following way:

0000, 1000, 0100, 1100, 0010, 1010, 0110, 1110...1111, it would make perfect sense. If we used hexadecimal notation, we'd have to swap the digits in a byte, so that A1 would equal our decimal 26 and be written 0101 1000.

We're all mental prisoners of some obscure ancient Arabic scholar. But given the way arabic is written (right to left), it makes perfect sense.
Well, if you start labelling ICs and schematics with bit 0 being the MSb and a bit's value depending on how many bits are in the address or data bus instead of being a power of two it would certainly add a bit of a challenge...

Left to right or right to left is pretty arbitrary, but I prefer the Arabic way; I'm too old to change and I'd be worried about writing a cheque as $901 instead of $109...

MikeS
October 13th, 2010, 03:53 PM
So let's see; if we label the outputs of a binary counter with bit 0 being the MSB then instead of 1,2,3,4,5,6 we'd count: 8,4,12,2,10,6... Yeah, that'd make things interesting; I think I'll stick with 1 - 16...
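His arithmetic checks out; relabelling a 4-bit counter so that "bit 0" is the MSB is the same as reading the bits in reverse:

```python
# Read a 4-bit value with its bit order reversed, i.e. with "bit 0"
# relabelled as the most significant bit.
def relabelled(n, width=4):
    return int(format(n, f"0{width}b")[::-1], 2)

print([relabelled(n) for n in range(1, 7)])
# -> [8, 4, 12, 2, 10, 6], exactly the sequence above
```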

Lorne
October 17th, 2010, 06:18 PM
OK, so I've read a bunch of stuff lately (on hex, for one thing) and I understand what Per and others were saying about counting digits from the right. Counting in base 16 is still nuts, but at least I understand it now.

I also read the article Lou provided about little- and big-endianness.

That's the part that still boggles my mind, and probably why I started this thread in the first place. How can something like the endianness debate exist in this day and age? With where computers are at now, I would have expected that to have been settled years ago.

In architecture, structural, mechanical and electrical engineering, there are conventions (standards) that are used, so that everyone coming along later can understand what has been done, and carry on in the same fashion. Those conventions can probably be attributed to trade industry groups that were formed to license practitioners in their fields, and develop those standards.

Granted those professions have been around a whole lot longer than electronics engineering, but I would have thought that somewhere along the line (say in the last 35 years), a trade industry group would have been formed to develop and publish some sort of standards/procedures for electronics design (and if not, simply to test and license people to practice in the trade).

In the 1980's I grew up as a consumer of computers, and not a designer, so my questions to those who grew up in that time as the designers would be:

1) were/are there any industry trade groups who test/license electronics engineers, and are they developing/adopting standards that all their members are following?

2) did the endianness thing (and other non-standard things, e.g. IBM's twist in the dual floppy cable) occur because computers were so new and everyone was on their own? (i.e. not sharing design information as they were trade secrets)

3) are there now standards in hardware design that everyone is following (other than things like ATX PSU wiring), or is everyone still out there doing their own thing?

Chuck(G)
October 17th, 2010, 07:53 PM
The endian debate still exists because it mostly makes sense both ways.

On a big-endian type of CPU, it makes sense that the MSB of a word is at the lowest address if type alignment is required. In other words, a word must be located at a word boundary, a halfword at a halfword boundary and a byte at a byte boundary.

On a little-endian type of CPU, it makes sense if no particular alignment requirement is observed. That is, 48, 48 00, 48 00 00 00 all represent the number 48 and all can be addressed at the same address. This is not true of the big-endian model. If you have the value 00 00 00 48 in memory at address n, then to get the same value into a word, halfword and byte requires that you address n+0, n+2 or n+3 correspondingly.
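That addressing argument is easy to check with Python's struct module (a sketch of the same memory layouts Chuck describes):

```python
import struct

# Little-endian: the byte, halfword and word views of the value 0x48
# all start at the same offset in memory.
little = struct.pack("<I", 0x48)
print(little)                              # b'H\x00\x00\x00'
print(little[0], little[0:2], little[0:4])  # all start at offset n+0

# Big-endian: the same stored word must be addressed at n+3, n+2 or
# n+0 depending on whether you want a byte, halfword or word.
big = struct.pack(">I", 0x48)
print(big[3], big[2:4], big[0:4])
```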

Crack the egg at the big end or the little end? Both work and both have their advantages.

Some CPUs even allow you to change between the two modes via a software command.

commodorejohn
October 18th, 2010, 06:31 AM
Quite so. I'd be a little-endian fan all the way if it weren't for the fact that big-endian handling allows easier word-length handling of bitmapped graphics. Say we have a 1-bit or planar bitmap, with each pixel represented by one bit in a plane. When packed in the usual way, the leftmost pixels are in the highest bits of each chunk (typically a byte, though IIRC the Amiga uses words.) Each successive chunk in memory sits progressively further to the right on the screen, until it wraps around at the end of the line. So if we have two bytes in memory representing sixteen pixels, it looks like this:

Bit 76543210 76543210
---------------------
Pix 01234567 89ABCDEF
When loaded as a word into a big-endian processor, it comes out like so:

Bit FEDCBA9876543210
Pix 0123456789ABCDEF
In other words, it winds up being in the exact same order in the register as it is to the video generator, which means that you can do word-length (or even longer) operations as if it were a single value, whereas on a little-endian processor, you either have to restrict yourself to byte-length operations or spend time shuffling things about in the register.
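The layout he describes can be sketched with made-up pixel data, plain Python standing in for the CPU load:

```python
# 16 pixels packed MSb-first into two bytes, then loaded as ONE
# big-endian word: pixel k lands in bit 15-k, the same left-to-right
# order the video generator scans out.
pixels = [1, 0, 1, 1, 0, 0, 0, 1,  0, 1, 0, 0, 1, 1, 1, 0]  # made up

b0 = int("".join(map(str, pixels[:8])), 2)   # leftmost pixel -> bit 7
b1 = int("".join(map(str, pixels[8:])), 2)
word = int.from_bytes(bytes([b0, b1]), "big")

recovered = [(word >> (15 - k)) & 1 for k in range(16)]
print(recovered == pixels)  # -> True: register order matches screen order
```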

Why yes, I have been banging my head against this particular wall lately; what tipped you off?

Chuck(G)
October 18th, 2010, 08:38 AM
But that only makes sense if your bit-mapped graphics need to use one word (rather than a byte or bit) per pixel. If you're using one byte or bit, then there's no difference. And more advanced processors have specialized instructions for handling graphics anyway, so it hardly matters in the real world. I believe that both the PPC and the i860/960 could be switched between modes with a simple status bit change.

Any big-endian CPU can be made to act as a little-endian CPU by simply complementing the low-order bits of the address bus on selected data accesses. However, programming a big-endian 8086 would make me want to hit my head against the wall...
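The address-complement trick he mentions, sketched for byte reads out of one 32-bit big-endian word (a toy model, not any particular chip's bus logic):

```python
# Big-endian memory holding the 32-bit word 0x00000048 at address 0.
mem = bytes([0x00, 0x00, 0x00, 0x48])

def read_byte_le_view(addr):
    # Complementing the two low address bits on byte accesses makes the
    # big-endian memory hand out its bytes in little-endian order.
    return mem[addr ^ 3]

print(hex(read_byte_le_view(0)))  # -> 0x48: address 0 yields the LSB,
                                  #    just as a little-endian CPU expects
```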