This section explores how the amount of data a storage device can hold is measured. Understanding these concepts is fundamental to computer science.
Data storage is typically measured using units that represent different quantities of information. The most common units are the bit, byte, kilobyte (KB), megabyte (MB), gigabyte (GB), terabyte (TB), and petabyte (PB).
It's important to distinguish between binary and decimal representations of data storage. Computers use binary (base-2), while we often use decimal (base-10) in everyday life. The prefixes (kilo, mega, etc.) have different meanings in binary and decimal.
For example, $$1 \text{ KB}$$ interpreted with decimal prefixes is exactly 1000 bytes, whereas the corresponding binary unit is exactly $$2^{10} = 1024$$ bytes.
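The difference is easy to see with a short calculation. The sketch below (Python; the 500 KB file size is just an illustrative value) compares the decimal and binary interpretations of the same prefix:

```python
# Decimal (SI) prefixes are powers of 1000; binary prefixes are powers of 1024.
DECIMAL_KILO = 10 ** 3   # 1 KB (decimal)  = 1,000 bytes
BINARY_KILO = 2 ** 10    # 1 KB (binary)   = 1,024 bytes

file_size_kb = 500  # a hypothetical file described as "500 KB"

print(file_size_kb * DECIMAL_KILO)  # 500000 bytes using decimal prefixes
print(file_size_kb * BINARY_KILO)   # 512000 bytes using binary prefixes
```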
The following table summarizes the relationship between these storage units:
Unit | Abbreviation | Decimal Equivalent | Binary Equivalent |
---|---|---|---|
Bit | b | - | 1 bit |
Byte | B | 8 bits | 8 bits |
Kilobyte | KB | $$10^3$$ bytes (1,000) | $$2^{10}$$ bytes (1,024) |
Megabyte | MB | $$10^6$$ bytes (1,000,000) | $$2^{20}$$ bytes (1,048,576) |
Gigabyte | GB | $$10^9$$ bytes | $$2^{30}$$ bytes (1,073,741,824) |
Terabyte | TB | $$10^{12}$$ bytes | $$2^{40}$$ bytes |
Petabyte | PB | $$10^{15}$$ bytes | $$2^{50}$$ bytes |
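One way to internalise the table is to note that each binary unit is $$2^{10} = 1024$$ times larger than the previous one. A minimal sketch (the unit names and order are taken from the table above):

```python
# Each binary unit is 1024 times larger than the one before it.
units = ["byte", "kilobyte", "megabyte", "gigabyte", "terabyte", "petabyte"]

for power, name in enumerate(units):
    size_in_bytes = 1024 ** power
    print(f"1 {name} = {size_in_bytes:,} bytes = 1024^{power} bytes")
```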
To calculate the total storage capacity of a device, you need to know how many bytes it can hold. For example, a hard drive advertised as 1 TB uses decimal prefixes, so it can store $$10^{12} = 1{,}000{,}000{,}000{,}000$$ bytes of data. Measured in binary units of $$2^{40}$$ bytes, the same drive holds roughly 0.91 units, which is why an operating system often reports a smaller capacity than the one on the box.
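Because manufacturers advertise in decimal units while operating systems typically report in binary units, the same capacity appears smaller once the drive is connected. A small illustrative calculation (the 1 TB figure is just the example from the text):

```python
# An advertised 1 TB drive, using decimal prefixes (10^12 bytes).
advertised_bytes = 1 * 10 ** 12

# The same capacity expressed in binary terabytes (2^40 bytes each),
# which is roughly what an operating system reports.
reported_binary_tb = advertised_bytes / 2 ** 40

print(f"{advertised_bytes:,} bytes is about {reported_binary_tb:.2f} binary TB")
# 1,000,000,000,000 bytes is about 0.91 binary TB
```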
Data compression is a technique used to reduce the amount of storage space required for data. It works by removing redundancy in the data. There are two main types of compression: lossless compression, which removes redundancy in a way that allows the original data to be rebuilt exactly, and lossy compression, which permanently discards less important detail to achieve greater reductions in size.
Data compression is crucial for efficient data storage and transmission.
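As a concrete illustration of lossless compression, the sketch below uses Python's built-in zlib module (one of many possible lossless compressors, chosen here only for convenience) to shrink a string containing obvious redundancy and then restore it exactly:

```python
import zlib

# Highly repetitive data compresses well because redundancy can be removed.
original = b"AAAAABBBBBCCCCC" * 100

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), "bytes before compression")   # 1500 bytes
print(len(compressed), "bytes after compression")  # far fewer bytes
print(restored == original)                        # True: lossless, nothing was discarded
```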