Computer Mathematics: Number Systems & Representation (Decimal, Binary, Hex)
Jan 27, 2026
## Chapter 2: Number Systems & Representation (Decimal, Binary, Hex)
One lab mistake I see a lot:
A student writes:
“Binary `10` is ten.”
No shame. Your brain has been trained for base 10 since childhood.
But in binary, `10` is not “ten”. It’s “two”.
So the real problem is not binary. The real problem is **base switching**.
Your brain keeps assuming base 10 unless you force it to stop.
### The one idea that makes every base feel the same
All number systems are just place value systems.
Decimal places: powers of 10
Binary places: powers of 2
Hex places: powers of 16
Same game. Different base.
I usually explain it like a row of boxes. Each box has a weight.
In base 10, the weights are 1, 10, 100, 1000…
In base 2, the weights are 1, 2, 4, 8, 16…
In base 16, the weights are 1, 16, 256, 4096…
Then you just add the weights where the digit is non-zero.
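The box-and-weight idea is easy to sketch in a few lines of Python (the function name `digits_to_value` is mine, just for illustration; it is the same weighted sum for any base):

```python
def digits_to_value(digits, base):
    """Sum each digit times its place weight (base ** position)."""
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += digit * base ** position
    return value

# Same digits, different base, different value:
print(digits_to_value([1, 0], 10))  # base 10: ten
print(digits_to_value([1, 0], 2))   # base  2: two
print(digits_to_value([1, 0], 16))  # base 16: sixteen
```

Notice that the digits `1, 0` never change; only the weights do. That is the whole "base switching" problem in one function.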
### Why hex exists (human eyes)
Binary is perfect for circuits but terrible for humans.
Hex is a compact “human wrapper” for binary. One hex digit equals 4 bits.
That’s why:
- `0xF` is 4 bits of 1s (`1111`)
- `0xFF` is 8 bits of 1s (`11111111`)
Hex can look like a new language at first. It isn’t. It’s just grouped binary.
### Okay… but how do we convert between these systems?
Students think conversion is a “trick”. It’s not. It’s just two slow methods, and after some practice your brain starts doing shortcuts.
First, see the slow method. It shows what is really happening.
### Method A: Binary → Decimal (weights method)
This is the one we already started. But let me say it in a more “brain-friendly” way.
Binary is not a “number with weird digits”. Binary is a row of switches.
Each switch position has a fixed value. The rightmost switch is worth 1. The next is worth 2. Then 4. Then 8. Then 16.
So when a bit is:
- `0` → that switch is OFF → you add nothing
- `1` → that switch is ON → you add that position’s value
The only wrong move here is reading `10110` like a normal human number. Read it like: “which weights are turned on?”
Here’s the same thing, written like a small weight table:
```text
bits:     1    0    1    1    0
weights: 16    8    4    2    1
add:     16 +  0 +  4 +  2 +  0  =  22
```
That’s all.
If you want one more quick practice that feels less random:
Binary: `100000`
Only the 32-weight switch is ON → answer is 32.
Binary: `111111`
32+16+8+4+2+1 = 63.
After a while, you stop calculating and you start “seeing” these patterns.
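You can check each hand conversion with Python's built-in base parsing: `int(s, 2)` performs exactly this weighted sum internally, so it is a handy answer key while you practice:

```python
# int() accepts a base, so it can verify hand conversions.
for bits in ["10110", "100000", "111111"]:
    print(bits, "->", int(bits, 2))
# 10110 -> 22
# 100000 -> 32
# 111111 -> 63
```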
### Method B: Decimal → Binary (divide method)
This is the method that feels odd at first. That’s normal.
You keep dividing by 2 and collect the remainders. The remainders become your bits, from bottom to top.
I like to say it in class like this:
“When you divide by 2, the remainder is telling you whether the last bit is 0 or 1.”
### Method C: Binary ↔ Hex (grouping method)
Hex exists because of this nice fact:
- 1 hex digit = 4 bits
So you group binary into chunks of 4 (from the right). Each chunk becomes one hex digit.
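Here is a minimal sketch of the grouping method (the helper name `binary_to_hex` is mine, not a standard function): pad the bit string on the left to a multiple of 4, cut it into nibbles, and map each nibble to one hex digit.

```python
def binary_to_hex(bits):
    """Group bits into nibbles from the right; each nibble is one hex digit."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)  # pad left to a multiple of 4
    nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(nibble, 2), "X") for nibble in nibbles)

print(binary_to_hex("11111111"))  # FF
print(binary_to_hex("101101"))    # 2D (padded to 00101101 first)
```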
### A small reality check: where do images and audio fit here?
Students ask: “Sir, okay numbers are fine. But how does a photo become binary?”
This is the big idea:
The computer does not store “a photo”. It stores a long list of numbers.
And those numbers are stored as bits.
So the question becomes:
- What numbers represent a photo?
- What numbers represent audio?
- What numbers represent typed text?
And then, how do those numbers get turned into bits? (Answer: the same way any number is stored: as a bit pattern.)
### How keyboard input becomes binary (simple view)
When you press a key, your keyboard doesn’t send the letter “A” as a letter. Hardware can’t send “letters”. It sends tiny signals and codes.
The clean mental pipeline is:
**Key press → (hardware code) → OS event → character code → encoding → bytes → bits**
Let’s slow that down without drowning in details.
#### Step 1: Keyboard sends a “which key” signal
Your keyboard sends something like a *scancode* (basically “this physical key was pressed”).
This is not the character yet. It’s more like “the key in the second row, third column”.
It’s easy to assume the keyboard sends “A”. But “A” depends on:
- your keyboard layout (US/India/etc.)
- whether Shift is pressed
- language settings
So the keyboard only says: “this key”.
#### Step 2: OS converts it into a character (when it makes sense)
Your OS takes that key event + modifiers (Shift/Ctrl/Alt) and decides what character it becomes.
Now we are closer to “A”.
#### Step 3: A character is stored as a number (code point idea)
Inside the computer, a character is represented by a number (like an ID).
For basic English letters, this number lines up with ASCII history. For the full world (Hindi, emoji, everything), we use Unicode code points.
Don’t worry if “Unicode code point” sounds heavy. Think like this:
“Every character has a number name inside the machine.”
#### Step 4: That number becomes bytes using an encoding (UTF-8 is common)
To store or send text, we convert characters into bytes. UTF-8 is the most common encoding for this today.
Here’s a small, practical example that students actually enjoy because it feels real:
Text: `Hi`
- `H` is 72 decimal → `0x48` hex → `01001000` binary
- `i` is 105 decimal → `0x69` hex → `01101001` binary
So the bytes in a file (for simple English text) are basically:
```text
01001000 01101001
```
Two letters. Two bytes. Sixteen bits.
This is why you sometimes see “hex editors” in debugging. They are literally showing the stored bytes.
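You can watch this exact pipeline in Python. `str.encode` turns characters into bytes (UTF-8 here), and formatting shows each byte in decimal, hex, and binary:

```python
text = "Hi"
data = text.encode("utf-8")  # characters -> bytes

for byte in data:
    print(byte, hex(byte), format(byte, "08b"))
# 72 0x48 01001000
# 105 0x69 01101001
```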
#### Common mistake (text side): assuming 1 character = 1 byte always
For English letters, yes, often 1 byte.
But for many characters (like emoji), UTF-8 uses multiple bytes. So “counting characters” and “counting bytes” are not always the same.
This becomes a real bug in programming when you slice strings by byte length or assume fixed-width storage.
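A quick demonstration of the mismatch: the emoji below is one character (one code point) but four UTF-8 bytes, so character count and byte count disagree.

```python
for text in ["Hi", "🙂"]:
    encoded = text.encode("utf-8")
    print(repr(text), "chars:", len(text), "bytes:", len(encoded))
# 'Hi' chars: 2 bytes: 2
# '🙂' chars: 1 bytes: 4
```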
### How an image becomes binary (simple view)
A camera sensor measures light. But light is not a number. It’s an analog signal.
So the device does:
1) Measure light intensity (analog)
2) Convert to a number using an ADC (analog-to-digital converter)
3) Store the number for each pixel (often as R, G, B values)
4) Those numbers become bytes
5) Bytes become bits in storage
Same story: real world → numbers → bytes → bits.
Now let’s go a bit deeper, but still with a calm head.
#### Pixel = a small set of numbers
When you see an image on screen, your brain sees “a picture”.
The computer sees a grid. Each grid cell is a pixel. Each pixel stores numbers.
Common simple format: RGB
- Red intensity (0–255)
- Green intensity (0–255)
- Blue intensity (0–255)
So one pixel might be:
```text
R=12, G=200, B=90
```
If it’s 8 bits per channel, each of those is one byte. So one pixel is 3 bytes (24 bits).
#### “Resolution” is just how many pixels
If an image is 1920×1080, that’s about 2 million pixels.
If each pixel is 3 bytes, raw size is about:
2,073,600 × 3 = 6,220,800 bytes ≈ 6 MB
A 400 KB JPEG can make it look like the raw image is also small. It isn't: the JPEG is compressed. The raw pixel grid is much larger.
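The raw-size arithmetic as a quick check:

```python
width, height = 1920, 1080
bytes_per_pixel = 3  # 8-bit R, G, B channels

raw_bytes = width * height * bytes_per_pixel
print(raw_bytes, "bytes ≈", round(raw_bytes / 1_000_000, 1), "MB")
# 6220800 bytes ≈ 6.2 MB
```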
#### How compression fits in (without going deep)
There are two big styles:
- Lossless (PNG): tries to keep exact pixel values
- Lossy (JPEG): throws away details your eyes “usually won’t notice”
So “image to binary” happens in two layers:
1) image as pixel numbers
2) pixel numbers packed into a file format (with headers + compression) as bytes
And then bytes are stored as bits.
#### A tiny image example (so it stops feeling magical)
Imagine a 2×2 image. Four pixels.
Let’s store pixels in RGB (8-bit each channel):
```text
P1: (255, 0, 0) // red
P2: (0, 255, 0) // green
P3: (0, 0, 255) // blue
P4: (255, 255, 255) // white
```
In bytes (hex), that’s:
```text
FF 00 00 00 FF 00 00 00 FF FF FF FF
```
That’s “the picture” in the simplest raw sense: a sequence of numbers.
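Building those twelve raw bytes in Python makes the "a picture is just numbers" point concrete (a sketch of the raw layer only; real formats add headers and compression on top):

```python
pixels = [
    (255, 0, 0),      # P1: red
    (0, 255, 0),      # P2: green
    (0, 0, 255),      # P3: blue
    (255, 255, 255),  # P4: white
]

# Flatten the RGB tuples into one byte sequence.
raw = bytes(value for pixel in pixels for value in pixel)
print(raw.hex(" ").upper())  # FF 00 00 00 FF 00 00 00 FF FF FF FF
```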
### How audio becomes binary (simple view)
Same idea again:
1) Microphone measures air pressure changes (analog)
2) ADC samples it many times per second (like 44,100 times per second)
3) Each sample becomes a number (like a 16-bit signed integer)
4) Store the samples as bytes → bits
Compression (MP3/AAC) is later. First, the raw audio is just numbers.
Let’s deepen audio a little, because audio is where students’ brains do a small flip.
#### Audio is not “stored sound”. It’s stored measurements.
The microphone measures air pressure changes. The computer stores numbers that represent those measurements over time.
So audio is like:
“a long list of numbers, each one taken at a time moment”
#### Sampling rate = how many measurements per second
44,100 Hz means: 44,100 samples per second.
So 1 second of mono audio has 44,100 numbers.
If stereo, it’s two lists (Left and Right), so basically 88,200 numbers per second.
#### Bit depth = how detailed each measurement is
16-bit audio means each sample is stored using 16 bits.
And here’s where signed numbers come in:
Audio samples swing positive and negative around a “center line”.
That’s why you’ll see ranges like:
- -32768 to +32767 (for 16-bit signed PCM)
If you accidentally interpret signed samples as unsigned, the wave shifts upward and sounds wrong. This is a real bug in audio processing code.
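Python's standard `struct` module can show this bug directly: the same two bytes read as a signed 16-bit value (`"<h"`) versus an unsigned one (`"<H"`) give very different sample values.

```python
import struct

raw = struct.pack("<h", -5000)          # one 16-bit signed sample, little-endian
signed = struct.unpack("<h", raw)[0]    # correct read
unsigned = struct.unpack("<H", raw)[0]  # same bytes, misread as unsigned

print(signed, unsigned)  # -5000 60536
```

The misread value is `65536 - 5000`: the bit pattern didn't change, only the interpretation did.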
#### A tiny audio example (numbers first)
Imagine 8 samples (very tiny, just for intuition):
```text
0, 1000, 2000, 1000, 0, -1000, -2000, -1000
```
That’s a simple wave going up and down.
In real audio, there are tens of thousands of samples per second, so the wave looks smooth.
#### Where compression fits (quick, not deep)
Raw audio (WAV/PCM) is just those sample numbers stored.
MP3/AAC tries to store “what humans will hear” using clever math, not storing every detail exactly.
So again two layers:
1) audio as sample numbers
2) sample numbers packed/compressed into a file format as bytes → bits
#### Basic: binary to decimal without drama
Binary: `10110`
Weights: 16, 8, 4, 2, 1
So: `1×16 + 0×8 + 1×4 + 1×2 + 0×1 = 22`
If you can do this slowly, you can do any conversion. Speed comes later.
#### Common mistake: reading binary like decimal
Binary `10` is not ten. It’s two.
Why students slip: your brain sees “10” and auto-thinks “ten”.
Binary `10` means: `1×2 + 0×1 = 2`.
When you catch this mistake early, half your fear disappears.
#### Simple reason: decimal to binary using divide-by-2
Convert 13 (decimal) to binary.
Do this:
```text
13 ÷ 2 = 6 remainder 1
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1
```
Now read remainders bottom to top: `1101`.
So 13 is `1101` in binary.
The only trick is the direction: read the remainders bottom-to-top. If you read top-to-bottom you’ll get the reverse.
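The divide-by-2 method, written out so the bottom-to-top read is explicit (the function name `to_binary` is mine):

```python
def to_binary(n):
    """Repeated divide-by-2; collect remainders, then reverse them."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # remainder = the next bit (last first)
        n //= 2
    return "".join(reversed(remainders))  # "read bottom to top"

print(to_binary(13))  # 1101
```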
#### Practical: how typed text becomes bytes (and bytes become bits)
Let’s take the character `A`.
In common encodings:
- `A` often maps to the number 65 (decimal)
- 65 in hex is `0x41`
- 65 in binary is `01000001` (8 bits)
So when you type `A` and save a file, somewhere inside storage you literally have bits like:
```text
01000001
```
This is why programmers like hex for debugging: `0x41` is easier to read than `01000001`.
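Python's `ord()` and `format()` make the whole chain visible in three lines:

```python
ch = "A"
code = ord(ch)               # character -> number
print(code)                  # 65
print(format(code, "#04x"))  # 0x41 (hex, with the 0x prefix)
print(format(code, "08b"))   # 01000001 (8-bit binary)
```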
#### Practical: how an image pixel becomes numbers (and then bits)
Take a simple pixel color: `#FFAA00`.
That’s three bytes:
- `FF` for red (255)
- `AA` for green (170)
- `00` for blue (0)
So one pixel might be stored as:
```text
R = 255, G = 170, B = 0
```
Now each of those numbers has a binary form:
- 255 → `11111111`
- 170 → `10101010`
- 0 → `00000000`
So that one pixel is basically a small pack of bits.
Real images are just millions of pixels like this, plus some metadata and compression tricks.
#### Extra tip: audio samples are signed numbers
Raw audio (like WAV) often stores each sample as a 16-bit signed integer.
So one sample is a number in a range like:
- -32768 to +32767
This is why “signed vs unsigned” matters in audio.
If you read audio bytes as unsigned (0–65535) instead of signed, the waveform shifts and sounds wrong.
Now let’s actually understand how negative numbers are stored, because this shows up in every low-level topic.
---
#### Signed integers and two’s complement (how negative numbers live in bits)
Computers store integers as bits. For negative numbers, most systems use a method called two’s complement.
The reason people like two’s complement is simple: the same binary adder circuit can handle both positive and negative numbers.
##### Step 1: pick a bit-width first (example: 8-bit)
An 8-bit signed integer has 8 bits. One of those bits becomes the sign (but not in a “separate minus sign” way). It’s part of the pattern.
Range for 8-bit signed:
- minimum: -128
- maximum: +127
##### Step 2: convert a negative number using two’s complement
To get the 8-bit pattern for `-5`:
1) Write `+5` in 8-bit binary
`00000101`
2) Flip all bits (this is called one’s complement)
`11111010`
3) Add 1
`11111011`
So in 8-bit signed world:
```text
-5 = 11111011
```
##### Quick check (so you trust the idea)
If `-5` is `11111011`, then adding `5` should give `0`:
```text
  11111011   (-5)
+ 00000101   (+5)
----------
  00000000   (0; carry out is ignored in fixed-width math)
```
That “carry out ignored” line is important. Fixed-width integers always drop overflow bits.
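Python integers are unbounded, so the 8-bit box has to be simulated with a mask; here is a sketch of the flip-and-add-one recipe plus the addition check (the mask plays the role of "dropping the carry out"):

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0b11111111 keeps only 8 bits

def twos_complement(n):
    """8-bit pattern for n. For negatives: flip the bits of +|n|, then add 1."""
    if n >= 0:
        return n & MASK
    return (~(-n) + 1) & MASK

neg5 = twos_complement(-5)
print(format(neg5, "08b"))               # 11111011

# Check: adding +5 gives 0 once the carry out is dropped by the mask.
print(format((neg5 + 5) & MASK, "08b"))  # 00000000
```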
#### Binary addition, carry, and overflow (the exam + debugging version)
Binary addition is the same as decimal addition, just with base 2.
Small rules:
```text
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10 (sum 0, carry 1)
```
Overflow means: the real answer needs more bits than your box has.
- Unsigned overflow: value wraps around (like a clock)
- Signed overflow: the sign can flip and you get a “wrong-looking” negative/positive
This is the same reason you saw “my positive became negative” bugs in real code.
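Both kinds of overflow can be simulated with the same 8-bit mask trick (again, Python ints never overflow on their own, so the mask does the dropping):

```python
MASK = 0xFF  # keep only 8 bits, like a fixed-width register

# Unsigned overflow: 250 + 10 = 260, which wraps past 255 like a clock.
print((250 + 10) & MASK)  # 4

# Signed overflow: 127 + 1 produces the pattern 10000000,
# which a signed 8-bit reader interprets as -128.
pattern = (127 + 1) & MASK
signed = pattern - 256 if pattern > 127 else pattern
print(signed)  # -128
```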
#### Bitwise operations (AND, OR, XOR, NOT) and why programmers use them
Bitwise operations work directly on bits. You see them in:
- flags (multiple ON/OFF settings in one number)
- permissions (read/write/delete)
- fast checks (even/odd, power of two)
- masking bytes in hex
Example:
```text
A = 01011010
B = 00111100
```
```text
A AND B = 00011000 (only common 1-bits stay)
A OR B = 01111110 (any 1-bit stays)
A XOR B = 01100110 (1 when bits are different)
```
##### Bit masks (one of the most used low-level ideas)
Suppose a system stores permissions like this (one bit per permission):
```text
bit 0 = read
bit 1 = write
bit 2 = delete
```
So a user with read + write has:
```text
read = 001
write = 010
both = 011
```
Check if write permission is present:
- mask for write = `010`
- do AND, then compare with zero
```text
011 AND 010 = 010 (not zero, so write exists)
```
#### Shifts (left shift and right shift)
Shifts move bits left or right.
```text
00000101 (5)
<< 1 -> 00001010 (10)
```
Left shift by 1 is like multiplying by 2 (when it does not overflow).
Right shift by 1 is like dividing by 2 (integer division style).
This is why shifts appear in performance code and in low-level data packing.
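Shifts in Python work the same way (note that Python ints never overflow, so the multiply-by-2 claim holds unconditionally here, unlike in fixed-width languages):

```python
x = 0b00000101  # 5

print(x << 1)   # 10 -- one left shift doubles the value
print(x >> 1)   # 2  -- one right shift halves it (integer division)
```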
## Conclusion
In this chapter we covered the core of number systems and representation: place value in decimal, binary, and hex; the slow-but-reliable conversion methods (weights, divide-by-2, nibble grouping); how text, images, and audio all reduce to numbers and then bits; two's complement for negative numbers; and the bitwise operations that show up in flags, masks, and shifts. Practice the slow methods until the shortcuts start appearing on their own.