Notational Systems | CompTIA Tech+ FC0-U71 | 1.2

In this post on notational systems we’ll be exploring the four major number systems:  binary, hexadecimal, decimal, and octal.  Each of these systems plays a crucial role in computing, and understanding how they work is essential for anyone preparing for the CompTIA Tech+ FC0-U71 exam.

Decimal System

Let’s begin with the decimal system, which is the one most of us are familiar with.

The decimal system is also known as base-10.  It uses ten digits, ranging from 0 to 9, to represent numbers.  Each digit in a number has a place value that is a power of 10.  This is the system we use in everyday life for counting and performing arithmetic.

For example, in the number 453:

  • The 4 represents four hundreds, or 4 x 10² = 400
  • The 5 represents five tens, or 5 x 10¹ = 50
  • The 3 represents three ones, or 3 x 10⁰ = 3.

So, 453 in decimal equals 400 + 50 + 3.
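To see the place-value idea in action, here is a short Python sketch (not required for the exam, just an illustration) that rebuilds 453 from its digits and their powers of 10:

```python
# Expand 453 by place value: each digit is multiplied by a power of 10.
digits = [4, 5, 3]
value = sum(d * 10 ** i for i, d in enumerate(reversed(digits)))
print(value)  # 453
```

The same loop works for any base if you swap 10 for 2, 8, or 16, which is exactly what the rest of this post does by hand.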

While this system is great for human use, computers find it less efficient because they operate electronically in a way that’s better suited to a binary system.  

That brings us to our next topic:  binary.

Binary System

The binary system is the fundamental language of computers.  It is also known as base-2 and uses only two digits:  0 and 1.

In binary, each place value represents a power of 2, starting from 2⁰, then 2¹, 2², and so on.  Binary is used by computers because at the hardware level, everything is represented as electrical states – typically on or off – which can be efficiently represented by 1s and 0s.

Let’s take an example.  The binary number 1011 is represented like this:

  • The leftmost 1 is in the 2³ position:  1 x 2³ = 8
  • The 0 is in the 2² position:  0 x 2² = 0
  • The next 1 is in the 2¹ position:  1 x 2¹ = 2
  • The last 1 is in the 2⁰ position:  1 x 2⁰ = 1.

When you add those up, 8 + 0 + 2 + 1 = 11, so the binary number 1011 equals 11 in decimal.
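You can verify this conversion with a couple of lines of Python, which sums the same powers of 2 and then double-checks the result with the built-in int() base conversion:

```python
# Convert binary 1011 to decimal by summing powers of 2.
bits = "1011"
value = sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))
print(value)           # 11
# Python's int() can parse a string in any base from 2 to 36.
print(int("1011", 2))  # 11
```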

Why is binary important?  Computers use binary because digital circuits, such as processors and memory, only understand two states:  on and off.  These states can be represented perfectly by 1s and 0s in binary notation.

Hexadecimal System

Next up is the Hexadecimal System, or base-16.

The hexadecimal system uses 16 symbols:  the digits 0 – 9, followed by the letters A – F.  The letters represent the decimal values 10 through 15.  So:

  • A = 10
  • B = 11
  • C = 12
  • D = 13
  • E = 14
  • F = 15

Hexadecimal is often used in computing because it can represent large binary numbers more compactly.  For example, one hexadecimal digit represents four binary digits, or bits.  This is why hex is frequently used in memory addresses and color codes in computing.

Let’s break down an example:  2F in hexadecimal.

  • The 2 is in the 16¹ place, so 2 x 16 = 32
  • The F represents 15, and it’s in the 16⁰ place, so 15 x 1 = 15

Adding these together gives 32 + 15 = 47, so 2F in hexadecimal is equal to 47 in decimal.
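The same check works in Python: int() parses the hex string, and format() converts a decimal number back into hexadecimal:

```python
# Hexadecimal 2F to decimal: 2 x 16 + 15 x 1 = 47.
print(int("2F", 16))    # 47
# And back again: format 47 as an uppercase hex string.
print(format(47, "X"))  # 2F
```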

Converting between binary and hexadecimal is also straightforward.  Let’s take the binary number 10101111.  If we group the binary digits into sets of four, we get 1010 and 1111.  These groups are:

  • 1010 is A in hexadecimal
  • 1111 is F in hexadecimal

So, the binary number 10101111 becomes AF in hexadecimal.
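The grouping step above can be sketched in a few lines of Python: slice the bit string into 4-bit nibbles, then convert each nibble to its hex digit:

```python
# Split an 8-bit binary string into 4-bit groups (nibbles),
# then map each group to a single hexadecimal digit.
bits = "10101111"
nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]  # ['1010', '1111']
hex_digits = "".join(format(int(n, 2), "X") for n in nibbles)
print(hex_digits)  # AF
```

This one-digit-per-four-bits mapping is why hex is so convenient for memory addresses and color codes.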

Octal System

The Octal System, or base-8, uses the digits 0 through 7.  Like binary and hexadecimal, octal is also used in computing, though it’s less common today than hexadecimal.

In the octal system, each place value represents a power of 8.  Octal was historically popular because binary numbers convert to it easily:  each group of three binary digits maps directly to one octal digit.

Let’s look at an example:  745 in octal.

  • The 7 is in the 8² place, so 7 x 64 = 448
  • The 4 is in the 8¹ place, so 4 x 8 = 32
  • The 5 is in the 8⁰ place, so 5 x 1 = 5

Adding these up gives 448 + 32 + 5 = 485, so 745 in octal equals 485 in decimal.
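Python confirms the arithmetic either way, with int() parsing the octal string or by spelling out the place values directly:

```python
# Octal 745 to decimal via built-in parsing.
print(int("745", 8))  # 485
# Or spell out the place values: 7 x 8^2 + 4 x 8^1 + 5 x 8^0.
print(7 * 8**2 + 4 * 8**1 + 5 * 8**0)  # 485
```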

Why was octal used?  In earlier computing systems, octal made it easier to read and process binary numbers because computers often worked with 12-bit, 24-bit, or 36-bit systems.  Since three binary digits perfectly convert to one octal digit, it was more efficient to use in certain environments.
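That three-bits-per-digit relationship can be demonstrated in Python.  The 9-bit value below is just an illustrative example (it happens to be 745 in octal, tying back to the worked example above):

```python
# Each group of three bits maps to exactly one octal digit.
bits = "111100101"  # example 9-bit value
groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]  # ['111', '100', '101']
octal = "".join(str(int(g, 2)) for g in groups)
print(octal)  # 745
# The grouped conversion agrees with converting the whole string at once.
print(int(octal, 8) == int(bits, 2))  # True
```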

Conclusion

To wrap up, here’s a quick recap:

  • Decimal (Base-10) is the system we use every day for counting and arithmetic.
  • Binary (Base-2) is the language of computers, representing data in 0s and 1s.
  • Hexadecimal (Base-16) is used to represent binary data more compactly, with digits ranging from 0 – 9 and A – F.
  • Octal (Base-8) was historically important in computing, especially with older systems, and uses digits 0 – 7.

Understanding these systems and knowing how to convert between them is crucial for working with computers and is a key part of the CompTIA Tech+ FC0-U71 exam.