Dude, I think you’re just ignorant of how web hosting works.
I run a managed hosting service for Mastodon and Lemmy, but yeah…
Every site you visit is hosted on probably dozens of servers or more, so that it can load-balance traffic and guarantee better uptime.
Hacker News: one single FreeBSD box. Not even a database.
Also, your cargo cult is showing… talking about “load balancing” as a guarantee of uptime is like justifying Mongo because it’s webscale.
Why not?
the instance should be able to provide capabilities to host those users.
Why? And who pays for that?
I’d argue the exact opposite. We should strive for more instances and for Lemmy’s userbase to be spread around. The fact that scaling out (more instances) is easier than scaling up (beefier servers) is a feature, not a bug.
When us older folks say “Anything you put on the public internet should be considered public and recorded forever”, it’s because of that.
What I really hope to see is some client-side algorithm that lets you track who voted for what. This way, you (your client) could ignore downvotes if you detect brigading or vote rings, and it could boost a particular post if it happened to be upvoted by a friend of yours.
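To make that concrete, here’s a rough sketch of the kind of client-side weighting I mean. Everything here is hypothetical — the `Vote` shape, the `FRIENDS` set, the thresholds — since Lemmy doesn’t currently expose per-voter data to clients like this:

```python
# Hypothetical client-side vote weighting. All names and thresholds are
# made up for illustration; no real Lemmy API is being used here.
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str
    post_id: str
    value: int  # +1 or -1

FRIENDS = {"alice", "bob"}  # boost upvotes from these accounts (assumption)
FRIEND_BOOST = 2            # weight multiplier for a friend's upvote
RING_THRESHOLD = 3          # shared downvote targets before a pair looks like a ring

def suspected_ring(votes):
    """Flag voters whose downvote targets overlap heavily with another voter's."""
    targets = {}
    for v in votes:
        if v.value < 0:
            targets.setdefault(v.voter, set()).add(v.post_id)
    suspects = set()
    voters = list(targets)
    for i, a in enumerate(voters):
        for b in voters[i + 1:]:
            if len(targets[a] & targets[b]) >= RING_THRESHOLD:
                suspects |= {a, b}
    return suspects

def score_post(post_id, votes):
    """Score one post: drop downvotes from suspected rings, boost friends' upvotes."""
    ring = suspected_ring(votes)
    score = 0
    for v in votes:
        if v.post_id != post_id:
            continue
        if v.value < 0 and v.voter in ring:
            continue  # ignore downvotes from accounts that downvote in lockstep
        weight = FRIEND_BOOST if v.value > 0 and v.voter in FRIENDS else 1
        score += v.value * weight
    return score
```

The real detection problem is obviously harder (rings can stagger their votes), but the point is that this can live entirely in the client: the same public vote data, weighted differently per user.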
Can you please stop with the unnecessary snark and this silly attempt at dick-measuring? Are you upset at something?
No. I am saying that the majority of websites out there don’t need to pay the costs or worry about this.
Good engineering is about understanding trade-offs. We can talk all day about the different strategies for getting 4, 5, or 6 nines of availability, but all of that is pointless if the conversation is not anchored in how much such a solution would cost to implement and operate.
Lemmy - like all other social media software - does not need that. There is nothing critical about it. No one dies if the server goes offline for a couple of minutes a month. No business stops making money if we take the database down for a migration instead of using blue-green deployments. Even the busiest instances are not seeing enough load to warrant more servers, and they can scale by simply (1) fine-tuning the database (which is the real bottleneck) and (2) launching more processes.
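For what it’s worth, “fine-tuning the database” here mostly means ordinary PostgreSQL resource settings (Lemmy runs on Postgres). A sketch of the kind of knobs involved — the values below assume a single 8 GB server and are starting points, not recommendations:

```ini
# postgresql.conf — illustrative values for a hypothetical 8 GB box.
shared_buffers = 2GB          # ~25% of RAM is the usual starting point
effective_cache_size = 6GB    # planner hint: shared_buffers + OS page cache
work_mem = 16MB               # per sort/hash operation, so multiply by concurrency
max_connections = 200         # keep modest; add a pooler before raising this
```

None of this requires more hardware, let alone a second server, which is exactly the point.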
Anyone criticizing Lemmy because “it can’t scale out” is either talking out of their ass or a bad engineer. Possibly both.