This section explains how to convert numbers between the decimal (base-10) and binary (base-2) systems. Understanding these conversions is fundamental to data representation in computer science.
Converting a decimal number to binary involves repeatedly dividing the decimal number by 2 and noting the remainders. The remainders, read in reverse order, form the binary equivalent.
Algorithm:
1. Divide the decimal number by 2.
2. Record the remainder (0 or 1).
3. Use the quotient as the new dividend and repeat until the quotient is 0.
4. Read the remainders in reverse order (last to first) to obtain the binary number.
Example: Convert the decimal number 25 to binary.
| Division | Quotient | Remainder |
|---|---|---|
| $25 \div 2$ | 12 | 1 |
| $12 \div 2$ | 6 | 0 |
| $6 \div 2$ | 3 | 0 |
| $3 \div 2$ | 1 | 1 |
| $1 \div 2$ | 0 | 1 |
Reading the remainders in reverse order (bottom to top) gives the binary equivalent: $25_{10} = 11001_2$.
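The repeated-division method above can be sketched in Python. This is a minimal illustration for non-negative integers; the function name `decimal_to_binary` is chosen here for clarity and is not from the original notes.

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative integer to a binary string by
    repeatedly dividing by 2 and collecting the remainders."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)       # quotient becomes the new dividend
        remainders.append(str(r)) # remainder is the next binary digit
    # Remainders come out least-significant first, so reverse them.
    return "".join(reversed(remainders))

print(decimal_to_binary(25))  # → 11001
```

Note that Python's built-in `bin(25)` performs the same conversion (returning `'0b11001'`); the explicit loop is shown only to mirror the algorithm's steps.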
Converting a binary number to decimal involves multiplying each digit of the binary number by the corresponding power of 2 and summing the results.
Algorithm:
1. Number the binary digits by position, starting from 0 at the rightmost (least significant) digit.
2. Multiply each digit by $2^i$, where $i$ is its position.
3. Sum all the products to obtain the decimal value.
Example: Convert the binary number $11001_2$ to decimal.

| Binary Digit | Position ($i$) | $2^i$ | Product |
|---|---|---|---|
| 1 | 4 | $2^4 = 16$ | $1 \times 16 = 16$ |
| 1 | 3 | $2^3 = 8$ | $1 \times 8 = 8$ |
| 0 | 2 | $2^2 = 4$ | $0 \times 4 = 0$ |
| 0 | 1 | $2^1 = 2$ | $0 \times 2 = 0$ |
| 1 | 0 | $2^0 = 1$ | $1 \times 1 = 1$ |

Summing the products: $16 + 8 + 0 + 0 + 1 = 25$. Therefore, $11001_2 = 25_{10}$, which confirms the result of the first example.
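The positional-sum method can likewise be sketched in Python. The function name `binary_to_decimal` is illustrative, not part of the original notes; it assumes the input string contains only the characters `0` and `1`.

```python
def binary_to_decimal(bits: str) -> int:
    """Convert a binary string to decimal by multiplying each digit
    by the power of 2 for its position and summing the products."""
    total = 0
    # Reversing the string makes the index i equal the power of 2.
    for i, digit in enumerate(reversed(bits)):
        total += int(digit) * (2 ** i)
    return total

print(binary_to_decimal("11001"))  # → 25
```

Python's built-in `int("11001", 2)` performs the same conversion; the loop is written out to match the table above step by step.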
Understanding the conversion between decimal and binary is crucial for comprehending how computers represent data. This knowledge is applied in various areas of computer science, including data storage, memory management, and digital logic.