That was my first thought. Server-side LLMs are extraordinarily expensive to run. Download the costs to users.
The real news here seems to be that Linus is going all in on ARM and now has an ARM based desktop workstation as well. He’s still using the MacBook.
By artist I mean the LLM. Do you punish the LLM (or company running it) for generating it, or the person who asked it to?
That’s tough though. Do you punish “the artist” or the person who commissioned them? Or both?
Roku, I already had you blocked at Pihole, but now I’m just straight up turning off your network access.
I mean it’s kinda amazing that there’s someone looking at a 14th gen Intel CPU sucking back 200+ watts, while it gets spanked by a 7800X3D running at 65 watts, and thinking “AMD is hurting consumers”. That’s some next level shit.
The Series 9 and the SE, I think it was. Apple rarely if ever discounts current generations of products.
I dunno, maybe it’s just gimmicky, but it feels so nice to have a desktop app over a web UI that still uses the server. I use Deluge for only that reason.
Sorry, so you’re basing this entirely on people commenting on things and not real-world reports of fragility? I think you’re getting worked up for no reason.
If you just want something small and fun to play with Linux and containers try looking for used Chromeboxes. Just bear in mind that going down the *arr app rabbit hole usually means building something with more storage.
Chill and do some reading. It’s not only opt-in but can be disabled at any time, and it’s opt-in per request. It’ll tell you before anything goes to ChatGPT and even then it’s anonymized.