Interesting quantitative look at web performance and how designs made for people with high-end devices can be practically unusable for people on low-end devices, which disproportionately affects poorer people and people in developing countries. Also discusses how sites game Google’s performance metrics—maybe not news to the web devs among ye, but it was new to me. The arrogance of the Discourse founder was astounding.
RETVRN to static web pages.[1]
Also, from one of the appendices:
In principle, HN should be the slowest social media site or link aggregator because it’s written in a custom Lisp that isn’t highly optimized and the code was originally written with brevity and cleverness in mind, which generally gives you fairly poor performance. However, that’s only poor relative to what you’d get if you were writing high-performance code, which is not a relevant point of comparison here.
Although even static web pages can be fraught—see his other post on speeding up his site 50x by tearing out a bunch of unnecessary crap. ↩︎
Also, when it comes to loading times you're dealing with packets, which have a set size. So sometimes a single extra kilobyte can literally double the time it takes to load the site: if the whole site fits in the initial batch of packets the server is allowed to send, it arrives in one round trip, but as soon as it spills over, the browser has to wait for another round trip before everything finishes loading. I was reading an article about that a few months ago.
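Here's a rough back-of-the-envelope sketch in Python (my own illustration, not from the article; it assumes a typical ~1460-byte TCP payload per packet and an initial congestion window of 10 packets, with the window roughly doubling each round trip under slow start) of why crossing that boundary costs a whole extra round trip:

```python
# Rough illustration of TCP slow start and page size.
# Assumptions (not from the linked article): ~1460 bytes of payload per packet,
# initial congestion window of 10 packets, window doubles each round trip.

MSS = 1460          # assumed TCP payload bytes per packet
INITIAL_CWND = 10   # assumed initial congestion window, in packets

def round_trips(page_bytes: int) -> int:
    """Estimate how many round trips it takes to deliver page_bytes."""
    packets_needed = -(-page_bytes // MSS)  # ceiling division
    cwnd = INITIAL_CWND
    trips = 0
    sent = 0
    while sent < packets_needed:
        trips += 1
        sent += cwnd
        cwnd *= 2  # slow start roughly doubles the window every round trip
    return trips

for size in (14_000, 15_000, 30_000, 100_000):
    print(f"{size:>7} bytes -> {round_trips(size)} round trip(s)")
```

With those assumptions, a ~14 kB page fits in the first window and arrives in one round trip, while 15 kB needs a second one, which is the cliff the article is talking about.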
Thanks for sharing that article (and thanks to @ea6927d8@lemmy.ml for finding the link)! Really interesting stuff. I knew the basics of TCP and Ethernet frames, but I didn’t know about the TCP slow start thing. I’ve been thinking about building my own static website, so I’ll keep this in mind when I do tackle that project.
This one? https://endtimes.dev/why-your-website-should-be-under-14kb-in-size/
yes