Absolutely can and will take action. Doesn’t always kill the right process (sometimes it kills big database engines for the crime of existing), but usually gives me enough headroom to SSH back in and fix it myself.
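For reference, the OOM killer's recent kills show up in the kernel log, and you can tell it to keep its hands off your lifeline processes. Rough sketch, assuming sshd is the thing you want to protect:

    # what did the kernel actually kill?
    dmesg -T | grep -i 'killed process'

    # make sshd effectively immune so you can always get back in
    # (oom_score_adj ranges from -1000 to 1000; -1000 disables OOM kills for that PID)
    echo -1000 | sudo tee /proc/"$(pidof -s sshd)"/oom_score_adj

    # or persistently, in the unit file of a service you care about:
    #   [Service]
    #   OOMScoreAdjust=-900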
Even better, you can swapoff swap too!
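One caveat: swapoff has to migrate everything currently sitting in swap back into RAM, so it can stall (or fail) if memory is already tight. Quick sketch of the usual dance:

    swapon --show    # which swap devices are active and how full they are
    free -h          # confirm there's enough free RAM to absorb what's in swap
    sudo swapoff -a  # pull swapped pages back into RAM and disable swap
    # ...do whatever you needed the predictable memory behavior for...
    sudo swapon -a   # re-enable everything listed in /etc/fstab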
IMO the joke is more “timeless” because it uses state names instead of company names.
Imagine if instead it mentioned Xerox computers, DEC terminals*, IPX, and Ethernet hubs. We’d say “wow that comic didn’t age well”. Even something as recent as “EVGA GPU” will end up in the history books instead of being commonplace.
*Yes, I am aware that the VT100 terminal spec is from DEC. But they don’t make DEC terminals anymore.
10 years down the road, we don’t know what tech will look like. But there is a high likelihood that the state of Pennsylvania will still exist and hold relevance.
C Hypercube… C Hyper… Hyper C… wait is this where HolyC fits in? (/s)
We have on prem and do all our upgrades by burning the OS and moving the data, with the exception of the hypervisor OS (which has a pretty resilient bulk self-upgrade built in, and we have a burn-the-OS plan documented for if they do crash). Even system file corruption of a random pet server? New VM and reattach the data disk. Need high availability? Throw F5 or HAProxy at the problem (assuming L7 protocol support).
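For the HAProxy part, the minimal “throw HAProxy at it” setup is something like the below. Sketch only: the backend names and addresses are made up, and a real config would want TLS, stats, and timeouts tuned for the app.

    # write a bare-bones config (run as root); 'check' enables basic health checks
    sudo tee /etc/haproxy/haproxy.cfg >/dev/null <<'EOF'
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend app_front
        bind *:80
        default_backend app_back

    backend app_back
        balance roundrobin
        server web1 10.0.0.11:8080 check
        server web2 10.0.0.12:8080 check
    EOF

    haproxy -c -f /etc/haproxy/haproxy.cfg   # validate before touching the running instance
    sudo systemctl reload haproxy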
Both cloud and on prem can work equally well when done right. The most important part is to understand that both have different types of cost (human, machine, developer) and to make the right choice based on your/your customer’s needs and any applicable laws or regulations about data locality. And yeah, sometimes one will be better for someone and not someone else.
Seven figures of cloud engineering can’t solve stupid, but neither can seven figures of datacenter. This isn’t some Sith/Jedi concept where you have hard definitions of dark and light or good and evil - though sometimes both will see each other as the enemy, and they are in a way competitors.
So that’s the nifty thing about Unix: stuff like this works. When you say “locked up”, I’m assuming you’re referring to being logged in to a graphical environment, like Gnome, KDE, XFCE, etc. To an extent, this can even apply to some heavy server processes: just replace most of the references to graphical with application access.
Even lightweight graphical environments can take a decent amount of muscle to run, or else they lag. Plus even at a low level, they have to constantly redraw the cursor as you move it around the screen.
SSH and plain terminals (Ctrl-Alt-F#, what number is which varies by distro) take almost no resources to run: SSH/getty (which are already running), a quick call to the password system, then a shell like bash or zsh. A single GUI application may take more standing RAM at idle than this entire stack. Also, if you’re out of disk space, the graphical stack may not even be able to stay alive.
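You can see the difference for yourself; rough sketch below, and the process names (agetty, gnome-shell) are just common defaults that vary by distro and desktop:

    # jump to a text console from a script (same as Ctrl-Alt-F3)
    sudo chvt 3

    # memory footprint of the text-mode login path
    ps -o pid,rss,comm -C agetty,sshd,bash

    # versus a single GUI compositor process
    ps -o rss= -C gnome-shell | awk '{sum+=$1} END {print sum/1024 " MB"}'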
So when you’re limited on resources, whether by a low-spec system or a resource exhaustion issue, it takes almost no overhead to have an extra shell running. It can squeeze into a tiny corner of what’s left over on your resource-starved computer.
Additionally, from a user experience perspective, if you press a key and it takes a beat to show up, it doesn’t feel as bad as if it had taken the same beat for your cursor redraw to occur (which also burns extra CPU cycles you may not be able to spare).