I hate to break it to you, but your internet speed is not 100 MB/s (megabytes per second); it is more like 100 Mb/s (megabits per second). As sysadmins, we sometimes blend the two—bits and bytes—but there is a difference between them. Then you get on a call with an ISP rep who promises you one, 10, 40, or 100 gigabytes/s of bandwidth, and you purchase the service only to find out the hard way that they meant gigabits/s. I'm not saying who that happened to…
Megabytes are typically for storage (RAM, HDD, SSD, NVMe, etc.), and megabits are typically for network bandwidth or throughput (network cards, modems, WiFi adapters, etc.). It can be easy to confuse the two because both bits/s and bytes/s represent data transmission speeds, but remember that, in the abbreviations for each, the uppercase "B" stands for bytes while the lowercase "b" stands for bits.
Bits/s vs Bytes/s
So, bits and bytes are both units of data, but what is the actual difference between them? A bit is the smallest unit of data measurement: a single 0 or 1. One byte is equivalent to eight bits. Computers process our instructions, and send and receive data, as sequences of these ones and zeroes. Whether data crosses a network or moves to and from storage, the information is streamed as bits. How we interpret the rate of the bits transmitted denotes how we communicate that rate of transmission. We could express it as "bits per [any measurement of time]"—minutes, hours, days, or even microseconds—but seconds became the customary standard. This gives us an easy way to estimate how long something is going to take.
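A minimal sketch of the bit-to-byte relationship described above (the function names here are just illustrative):

```python
BITS_PER_BYTE = 8  # one byte is always eight bits

def bits_to_bytes(bits):
    """Convert a count of bits to bytes."""
    return bits / BITS_PER_BYTE

def bytes_to_bits(nbytes):
    """Convert a count of bytes to bits."""
    return nbytes * BITS_PER_BYTE

print(bytes_to_bits(1))          # one byte is 8 bits
print(bits_to_bytes(8_000_000))  # 8,000,000 bits is a 1,000,000-byte (1 MB) payload
```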
When data is transmitted over a network medium, the rate is typically written in bits/s, kilobits/s (kbps), megabits/s (Mbps), or gigabits/s (Gbps). The following table describes this:

Unit                  Abbreviation   Bits per second
bits per second       bits/s         1
kilobits per second   kbps           1,000
megabits per second   Mbps           1,000,000
gigabits per second   Gbps           1,000,000,000
As network speeds have increased, it has become easier to describe transmission rates in higher units of measurement. We have gone from 9,600 bits/s to 14.4 kbits/s to 28.8 kbits/s to 56 kbits/s to 128 kbits/s. From there, we skyrocketed to 1 Mbit/s, then 100 Mbits/s, then 1,000 Mbits/s (1 Gbit/s) and 10,000 Mbits/s (10 Gbits/s). As the medium of transmission has changed over the years, so has the transmission rate.
Storing and retrieving data locally on a computer has always been faster than transmitting it over a network. Transmission over the network was (and still is) limited by the transmission medium used. As file sizes grew over the years, expressing rates in bytes made it easier to estimate how long it would take to store or retrieve a file. The key to understanding the terminology for storage is remembering that eight bits equals one byte. So a one-megabyte (MB) file is actually an 8,000,000-bit file. This means the file is composed of 8,000,000 ones and zeroes, and it can be stored at a rate of one MB/s, which is 8,000,000 bits/s.
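That eight-to-one relationship is also why a download takes eight times longer than a naive bytes-for-bits reading of your link speed suggests. A quick sketch of the estimate (an idealized calculation that ignores protocol overhead):

```python
BITS_PER_BYTE = 8

def transfer_seconds(file_size_bytes, link_speed_bits_per_s):
    """Estimate ideal transfer time: convert the file size to bits,
    then divide by the link speed in bits per second."""
    return (file_size_bytes * BITS_PER_BYTE) / link_speed_bits_per_s

# A 1 MB (8,000,000-bit) file over a 100 Mb/s link:
print(transfer_seconds(1_000_000, 100_000_000))  # 0.08 seconds

# The same file over a 100 MB/s (800 Mb/s) storage bus:
print(transfer_seconds(1_000_000, 800_000_000))  # 0.01 seconds
```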
Extra—MiB vs. MB
OpenShift 4 (OCP 4) can now "containerize" VMs (virtual machines)—yeah, the game has changed. In OpenShift, you may now see some file sizes labeled MiB. MiB stands for mebibyte, a contraction of "mega" and "binary": one MiB is 2^20 (1,048,576) bytes, whereas one MB is 10^6 (1,000,000) bytes. It's a newer way of writing storage capacity that offers some clarity about the words we use, and the actual math behind those words.
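The gap between the decimal and binary units is easy to see with a couple of lines of arithmetic:

```python
MB = 10**6    # megabyte: decimal, 1,000,000 bytes
MiB = 2**20   # mebibyte: binary, 1,048,576 bytes

print(MiB - MB)           # 48,576 bytes of difference per "mega"
print(f"{MB / MiB:.2%}")  # a decimal MB is about 95.37% of a MiB
```

That ~4.6% gap per "mega" compounds at larger prefixes, which is why a "1 TB" drive shows up as roughly 931 GiB.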
It's always an interesting conversation when network admins and system admins talk about speed. Remember to do the conversion and verify what is being expressed.