You can run LLMs on text-generation-webui, such as OpenLLaMA and GPT-2. It is very similar to the Stable Diffusion web UI.
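A rough sketch of getting it running, assuming the oobabooga project; the exact scripts and flags have changed over time, so treat them as assumptions and check the README first:

```
# Sketch: clone and launch text-generation-webui (oobabooga).
# Script names and flags are assumptions based on older versions
# of the project's README; verify against the current docs.
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Grab a small model (GPT-2 here, purely as an example), then start the UI.
python download-model.py gpt2
python server.py --model gpt2
```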
Yes, definitely. My biggest use is transparent filesystem compression, so I completely agree!
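For context, on btrfs (an assumption, but the common choice for this) transparent compression is just a mount option; a minimal sketch with placeholder device and mountpoint:

```
# Mount a btrfs filesystem with transparent zstd compression at level 3.
# /dev/sdb1 and /mnt/data are placeholders for your own setup.
mount -o compress=zstd:3 /dev/sdb1 /mnt/data

# Equivalent /etc/fstab entry:
# /dev/sdb1  /mnt/data  btrfs  compress=zstd:3  0 0

# Recompress files that already existed before the option was set:
btrfs filesystem defragment -r -czstd /mnt/data
```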
Well, when using zstd you tar first, something like `tar -I zstd -cf my_tar.tar.zst my_files/*`. You almost never call zstd directly; you always use some kind of wrapper.
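The full round trip, using the same file names as above; recent GNU tar also has a dedicated flag and detects the compression on its own when extracting:

```
# Create a zstd-compressed tarball; -I filters tar's output through zstd.
tar -I zstd -cf my_tar.tar.zst my_files/*

# GNU tar 1.31+ also has a dedicated flag for the same thing:
tar --zstd -cf my_tar.tar.zst my_files/*

# Extract; GNU tar detects the compression automatically when reading.
tar -xf my_tar.tar.zst
```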
I don’t post to GitLab because I use Gitea. We are not the same.
If I’m being honest, it is fairly slow. It takes a good few seconds to respond on a 6800 XT using the medium VRAM option. But that is the price you pay for running AI locally. Of course, a cluster should drastically improve the speed of the model.
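If the medium VRAM option here means the AUTOMATIC1111 Stable Diffusion web UI flag (an assumption on my part), the launch looks like this:

```
# Launch the AUTOMATIC1111 Stable Diffusion web UI with --medvram,
# which lowers VRAM usage at the cost of generation speed; that is
# the trade-off described above. Assumes you are in the repo root.
python launch.py --medvram
```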