I grew up in the computer age when it could safely be assumed that the bandwidth of an internet connection was measured in bits per second: not gigabits per second, not megabits per second, nor even kilobits per second, but plain bits per second.
Because we knew that bandwidth was a scarce commodity, we were careful not to consume the entire network bandwidth, and we were equally careful to move the minimum amount of data. It probably took twice as long and twice the effort to write the software, but we had to do it.
Fast forward to today. I think most software writers assume that bandwidth is nearly unlimited and that the amount of data moved across a network is of no concern.
My example of this is Microsoft Windows 10. Each day when I turn on the computer, it opens more than 25 connections to a server that, according to the whois records, belongs to Microsoft Corporation. This ends up saturating my internet connection. If I'm lucky, it only lasts for a few minutes. If, like today, I'm unlucky, it goes on for several hours.
I live far enough out of the city that my best internet connection speed is 1.8 Mbps, and until those processes finish their connections to Microsoft, my internet experience reminds me of back in the day when the Teletype Model 33 and 35 ASRs were pretty much all that was commonly available. These ran at 110 bits per second; for text, which is all they printed, that works out to 10 characters per second!
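For anyone wondering how 110 bits per second becomes only 10 characters per second, the answer is serial framing: each character on those Teletypes occupied 11 bits on the wire (1 start bit, 8 data bits, and 2 stop bits, the usual start-stop framing for 110 baud machines). A quick back-of-the-envelope sketch:

```python
# Why 110 bits per second is only 10 characters per second on a Teletype:
# each printed character took 11 bits on the wire.
baud = 110
bits_per_char = 1 + 8 + 2      # start bit + data bits + stop bits = 11
chars_per_second = baud / bits_per_char
print(chars_per_second)        # 10.0

# For comparison, a 1.8 Mbps rural link is roughly 16,000 times faster.
print(1_800_000 / baud)
```

The two stop bits are the giveaway: faster asynchronous links typically use a single stop bit, but the electromechanical Teletypes needed the extra time to finish printing each character.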
I will probably head to the thrift store to see what "old" laptops are available for under $20 and find out how well Linux runs on one.
For what it’s worth, my favorite distribution is Slackware Linux. I use it at work on some test servers that I operate.