Sunday, August 25, 2013

Missed Technology Thursday, didn't we?

Sorry about that!  But, here to make up for it, is the missing post:


How does my computer compute?

Computers have come a long way since the days when they were programmed with a pair of pliers and a soldering iron!

(Do you know where the term “bug” came from?  Back when a single computer was big enough to fill a room, one of those early machines just wouldn’t work one day.  So the operators laboriously started searching through the thousands of physical connections and switches to see where the problem was.  It turned out a moth had gotten itself stuck in one of the relays, jamming it.  Hence, so the story goes, the term “bug” came from having a real bug gum up the works!)

When they were first used, they were thought of as big, fast calculators.  They were used to perform calculations of a size and scope people had never been able to do manually, simply because of the sheer amount of work needed to complete them.

Eventually, someone realized they could be used for more, and someone invented other programs for them, including word processing, spreadsheets, collating and sorting long lists of information and so forth.  Databases were invented for storing tremendous amounts of information where it could be manipulated, sorted, recalled and regurgitated in ways nobody had ever been able to do before.

Why? Because computers are, at heart, big FAST calculators.  But as you’ve probably heard before, they are stupid.  Dumb as a rock.

Then why is your smartphone so smart?  How does it do all that stuff?

Basics.  Let’s take a look.

At the very most basic level, your computer only understands two symbols.  A zero (0) and a one (1).  This is called a “binary” system.  Binary meaning “two”.  So, how does it know when you hit a key whether that represents an “a” or an “A”, or a “z” or a “Z”?

Code.  It uses something called an ASCII code (American Standard Code for Information Interchange) to know which letter or number is which.  The very most basic bit of information is called just that, a “bit”.  That is a one or a zero.  So, how does that translate into an A or a Z?  You string bits together.  Kind of like Morse code, only with exactly eight elements per character instead of a varying string of dots and dashes.

A computer chip, or CPU (Central Processing Unit), processes information one cycle at a time.  It does it VERY fast, on the order today of billions of cycles per second, but still, only so many bits at a time.  Early personal computers were only able to handle eight bits at a time.  They were called, therefore, 8 bit systems.  Eventually, that increased, but because of that, even today’s systems are built on the eight bit base.

A single character is represented by eight bits of ones and zeros.  So, for instance, that lowercase “a” would be 01100001 in binary code.  For simplicity’s sake, that string of eight bits is called a Byte.  Pronounced “bite”.  The ASCII code, extended today, covers just about any symbol you might want to type, including spaces, special symbols, foreign language letters, and so forth.
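For the curious, here is how that character-to-bits translation looks in a few lines of Python (the post itself doesn’t use code; this is just an illustrative sketch):

```python
# Each character has an ASCII code number, and that number
# can be written out as exactly eight binary digits (one byte).
for ch in "aAzZ":
    code = ord(ch)               # the character's ASCII code number
    bits = format(code, "08b")   # that same number as eight bits
    print(ch, code, bits)

# The lowercase "a" comes out as 97, which in binary is 01100001 --
# exactly the byte mentioned above.
```

Capital “A” works out to 65, or 01000001, which is why the computer has no trouble telling “a” from “A”: they are simply two different bytes.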

So, to unravel a mystery to most consumers, this reveals the meaning of those mysterious bits and bytes you see on the packages of electronics you see at Staples.

It works like this: When you store information on your local C: drive (or your Macintosh HD for Macs), a single letter takes up one byte of space.  If you keep typing, 1024 bytes makes up a Kilobyte (kilo meaning thousand).  The next step up is the Megabyte (MB), which is, again, 1024 Kilobytes (written KB), which is 1,048,576 bytes.  A million bytes, approximately.  Remember, computers count in powers of two, which is why these come out to 1,024 instead of an even 1,000!

So, a Gigabyte (GB) is 1024 Megabytes, or 1,073,741,824 bytes.  About a billion bytes.  And a Terabyte (TB) is 1024 Gigabytes, roughly a trillion bytes.   (Note that the second letter “B” is capitalized.  This is because we are working with BYTES.  Bits are abbreviated with the lowercase b.  We’ll see that in a moment.)
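The whole ladder of storage units can be written out in a few lines of Python (again, just an illustration, not part of the original post):

```python
# Each step up the ladder multiplies by 1024 (2 to the 10th power).
KB = 1024          # Kilobyte
MB = 1024 * KB     # Megabyte = 1,048,576 bytes (about a million)
GB = 1024 * MB     # Gigabyte = 1,073,741,824 bytes (about a billion)
TB = 1024 * GB     # Terabyte = 1,099,511,627,776 bytes (about a trillion)
print(KB, MB, GB, TB)
```

Notice each number is a little bigger than the round million, billion, or trillion, because we are multiplying by 1,024 rather than 1,000 at every step.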

So, when you see MB, GB, or TB, you now know we are talking storage space - how much information a hard drive or a flash drive can hold.

Now, I mentioned bits, and how they are abbreviated with the lowercase b.  Why?  Because bits are used to describe the amount of data that can be TRANSMITTED over a network.  Here, the prefixes use the plain base ten we humans count in, not powers of two.  So, when you see an Internet Service Provider that claims to have a speed of 25 Mbps, it isn’t talking about Megabytes - that would be storage - it’s saying that its network can transmit 25 million BITS of information per second.  At eight bits per byte, that works out to a little over 3.1 million bytes (or characters) per second.
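That bits-to-bytes conversion is simple enough to check yourself (a quick sketch; the 25 Mbps figure is just the example speed from above):

```python
# Network speeds use bits and plain decimal prefixes:
# 25 Mbps means 25,000,000 bits every second.
speed_bits_per_sec = 25 * 1000000

# There are eight bits in a byte, so divide by eight
# to get characters per second.
speed_bytes_per_sec = speed_bits_per_sec / 8

print(speed_bytes_per_sec)  # 3,125,000 bytes per second
```

That 3,125,000 bytes per second is the “little over 3.1 million” mentioned above, and it is why a “25 meg” internet connection downloads a 25 MB file in roughly eight seconds, not one.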

That’s pretty decent speed in the US.

In later posts, I’ll explain a bit more about operating systems and such, but I think I’ve stretched the average user’s mind enough for one day!
