# Why is 1 byte = 8 bits, and 1 KB = 1024 bytes, not 1000?

Computer stuff was created by humans; it isn't natural selection, right? So why did we make 1 byte = 8 bits, but not 10? Why do we use increments of 8 or 16, but not 10?

• 9 years ago

Honestly I don't know the full history, but it comes down to binary (base 2) math. Humans naturally use base 10 (we group in units of 10, e.g. a decade, a century, a millennium). Computers group in powers of 2, so sizes go 2, 4, 8, 16, and so on: 2^9 is 512, and double that (1024 = 2^10) got called a kilobyte, since it's the closest power of 2 to an even 1000.
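To illustrate the point above, here's a small Python sketch (Python chosen just for illustration) showing that among nearby powers of 2, 1024 is the one closest to 1000:

```python
# Powers of 2 around 1000: 256, 512, 1024, 2048.
powers = [2 ** n for n in range(8, 12)]
print(powers)  # [256, 512, 1024, 2048]

# 2**10 = 1024 is the closest power of 2 to an even 1000.
closest = min(powers, key=lambda p: abs(p - 1000))
print(closest)  # 1024
```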

• 9 years ago

Computers use electronics, and it is easier to build electronic circuits that store data as ones and zeros. If you know how to store a billion (1 gig) of the decimal digits 0-9 efficiently and electronically, then computer chip makers might be interested; but realize that computers also store text, and there are 26 letters in the English alphabet, so decimal is inefficient for text as well.
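As a quick illustration of the text-storage point (a sketch in Python, not part of the original answer): 8 bits give 2^8 = 256 distinct values, comfortably more than enough for the whole ASCII character set of letters, digits, and punctuation.

```python
# With 8 bits per byte, each byte can take one of 2**8 = 256 values.
bits_per_byte = 8
distinct_values = 2 ** bits_per_byte
print(distinct_values)  # 256

# Every ASCII character fits in a single byte.
print(ord('A'))  # 65
print(ord('z'))  # 122
```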

• 9 years ago

Because that's how the hardware has historically worked. If you build a memory system, it's addressable such that its number of memory locations is a power of 2: n address lines can select 2^n distinct locations.
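The addressing point can be sketched like this (an illustrative Python snippet, not from the original answer): each added address bit doubles the number of addressable locations, so memory sizes naturally land on powers of 2.

```python
# n address bits select 2**n distinct memory locations.
for address_bits in (10, 16, 20, 32):
    locations = 2 ** address_bits
    print(address_bits, "address bits ->", locations, "locations")

# 10 bits -> 1024 (1 KiB of byte-addressable memory),
# 20 bits -> 1048576 (1 MiB), 32 bits -> 4294967296 (4 GiB).
```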

They commonly use powers of 2 in sizes, like cache sizes and buffers, for similar reasons.

Disk drives are different: at some point, disk makers agreed to list their capacities such that an MB is 1,000,000 bytes, not 1,048,576.

• Anonymous
9 years ago

For the same reason 1 foot is 12 inches and not 10, like centimeters.