A bit (short for binary digit) is the smallest unit of data in a computer. It can represent a 0 or 1, forming the basis of binary code.
A byte consists of 8 bits and is used to represent a single character, such as a letter or a symbol. For example, the letter 'A' in ASCII is stored as one byte.
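To make this concrete, here is a minimal C++ sketch (the variable name is illustrative) that prints the storage size, ASCII value, and 8-bit pattern of 'A':

    // Shows that 'A' occupies one byte, with ASCII value 65 and bit pattern 01000001.
    #include <bitset>
    #include <iostream>

    int main() {
        char c = 'A';
        std::cout << "size: " << sizeof(c) << " byte\n";              // 1
        std::cout << "ASCII value: " << static_cast<int>(c) << "\n";  // 65
        std::cout << "bits: " << std::bitset<8>(c) << "\n";           // 01000001
        return 0;
    }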
A byte size is called decimal when it is a power of 10, and binary when it is a power of 2.
Each successive unit is defined as a fixed multiple of the previous one, and only the first unit is defined directly in terms of plain bytes. On the decimal side, a kilobyte is 1000 bytes, a megabyte is 1000 kilobytes, and so on. On the binary side, a kibibyte is 1024 = 2^10 bytes, a mebibyte is 1024 kibibytes, and so on.
Each decimal unit has the form WXYZbyte, where WXYZ stands in for the prefix characters, such as WXYZ = mega. Given a decimal unit WXYZbyte, its binary counterpart keeps the first two letters of the prefix and appends "bi", giving WXbibyte (for example, megabyte becomes mebibyte).
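As a quick illustration, here is a short C++ sketch (the constant names are just labels) that builds each ladder of units as a fixed multiple of the previous one:

    // Decimal units multiply by 1000 at each step; binary units by 1024 = 2^10.
    #include <cstdint>
    #include <iostream>

    int main() {
        const std::uint64_t kilobyte = 1000ULL;             // 1000 bytes
        const std::uint64_t megabyte = 1000ULL * kilobyte;  // 1000 kilobytes
        const std::uint64_t kibibyte = 1024ULL;             // 2^10 bytes
        const std::uint64_t mebibyte = 1024ULL * kibibyte;  // 1024 kibibytes

        std::cout << "1 megabyte = " << megabyte << " bytes\n";  // 1000000
        std::cout << "1 mebibyte = " << mebibyte << " bytes\n";  // 1048576
        return 0;
    }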
The decimal system (base 10) is used by manufacturers to simplify sizes for general users.
The binary system (base 2) aligns with how computers process data and is used in operating systems and memory specifications.
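This split is why a drive advertised in decimal units appears smaller when the operating system reports it in binary units. A worked example in C++, assuming a nominal 500 GB drive:

    // A "500 GB" (decimal) drive expressed in binary gibibytes (GiB).
    #include <iostream>

    int main() {
        const double bytes = 500.0 * 1e9;                       // 500 GB, decimal
        const double gib = bytes / (1024.0 * 1024.0 * 1024.0);  // one GiB = 2^30 bytes
        std::cout << "500 GB = " << gib << " GiB\n";            // about 465.661
        return 0;
    }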
Common C++ data types and their typical sizes (may vary by system; see the sizeof sketch after this list):
char: 1 byte (8 bits).
short: 2 bytes (16 bits).
int: 4 bytes (32 bits).
long: 4 or 8 bytes (32 or 64 bits, depending on the system).
long long: 8 bytes (64 bits).
float: 4 bytes (32 bits).
double: 8 bytes (64 bits).
bool: 1 byte (8 bits).
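Because several of these sizes vary by platform, the reliable check is the sizeof operator on your own system. A minimal sketch:

    // Prints the actual size of each type on the system that compiles this.
    #include <iostream>

    int main() {
        std::cout << "char:      " << sizeof(char)      << " byte(s)\n";
        std::cout << "short:     " << sizeof(short)     << " byte(s)\n";
        std::cout << "int:       " << sizeof(int)       << " byte(s)\n";
        std::cout << "long:      " << sizeof(long)      << " byte(s)\n";
        std::cout << "long long: " << sizeof(long long) << " byte(s)\n";
        std::cout << "float:     " << sizeof(float)     << " byte(s)\n";
        std::cout << "double:    " << sizeof(double)    << " byte(s)\n";
        std::cout << "bool:      " << sizeof(bool)      << " byte(s)\n";
        return 0;
    }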
The difference between decimal and binary measurements exists because decimal prefixes (e.g., kilo, mega) are standard in the SI system, while binary prefixes (e.g., kibi, mebi) are used in computing for precision.