also at beehaw

  • 1 Post
  • 20 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • So I’m no expert at running local LLMs, but I did download one (the 7B Vicuna model recommended by the LocalLLM subreddit wiki) and tried my hand at training a LoRA on some structured data I have.

    Based on my experience, the VRAM available to you is going to be way more of a bottleneck than PCIe speeds.

    I could barely hold a 7B model in 10 GB of VRAM on my 3080, so 8 GB might be impossible or very tight. IMO, to get good results with local models you really need large quantities of VRAM and to be running 13B or larger models.

    Additionally, when you’re training a LoRA the model + training data gets loaded into VRAM. My training dataset wasn’t very large, and even so, I kept running into VRAM constraints with training.

    In the end I concluded that in the current state, running a local LLM is an interesting exercise but only great on enthusiast level hardware with loads of VRAM (4090s etc).
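    As a back-of-the-envelope check on those numbers, the weight memory alone can be sketched like this (the function name and bytes-per-parameter figures are my own assumptions for a rough estimate, not anything from a model card):

```python
# Rough VRAM estimate for just holding an LLM's weights. This ignores
# activations, KV cache, optimizer state, and framework overhead, which
# add several more GB in practice -- especially during LoRA training.
def weight_vram_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Weights-only memory in GiB for a model of the given size."""
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

# fp16 = 2 bytes/param, 4-bit quantized = 0.5 bytes/param
print(f"7B  @ fp16:  {weight_vram_gb(7, 2):.1f} GB")   # ~13.0 GB
print(f"7B  @ 4-bit: {weight_vram_gb(7, 0.5):.1f} GB")  # ~3.3 GB
print(f"13B @ fp16:  {weight_vram_gb(13, 2):.1f} GB")  # ~24.2 GB
```

    A 7B model at fp16 already wants ~13 GB for weights alone, which matches it barely fitting in a 10 GB card; quantized builds are what make smaller cards plausible at all.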

  • I appreciate this point of view! My BA is in visual arts, but I’ve also leaned heavily into tech, programming as a hobby, etc.

    I think there are a lot of different topical threads at play when it comes to AI art (classism and fine art, what average viewers vs. trained viewers find appealing in a visual medium, etc.) – but the economic issues that you point out are really key. Many artists rely on their craft for their literal bodily survival, so AI art is very much a real threat to them.

    But when I first interacted with Midjourney, and saw my mom (just an average lady) get excited about AI-generated art, I couldn’t help but see it like photography – all of a sudden the average person gets access to a way of visually capturing things that make them happy, that they think look cool, something they saw in a dream but didn’t have the skill to create visually… and that doesn’t sound like an inherently bad thing to me.


  • That’s helpful; this sounds like a Docker or qBit issue, then. The default qBit location for torrents is /downloads, but you’d need to make sure to point it toward the container volume mapping you’re setting up in Docker.

    My relevant qBittorrent compose volume mapping is as follows:

        volumes:
          - /volume1/shared/torrents:/data/torrents
    

    Personally, I don’t separate my torrent downloads by type; I use incoming & completed folders. Here’s how I set up my qBittorrent config:

        Original Value                               New Value
        Session\DefaultSavePath=/downloads/          Session\DefaultSavePath=/data/torrents/1_completed/
        Session\TempPath=/downloads/incomplete/      Session\TempPath=/data/torrents/2_incoming/
        Downloads\SavePath=/downloads/               Downloads\SavePath=/data/torrents/1_completed/
        Downloads\TempPath=/downloads/incomplete/    Downloads\TempPath=/data/torrents/2_incoming/
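    For context, that volume line would sit inside a service definition something like the sketch below; I’m assuming the common linuxserver.io image here, and the image name, ports, and environment values are placeholders rather than my exact file:

```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - WEBUI_PORT=8080
    volumes:
      # host path : container path -- qBit must then be configured
      # to save under /data/torrents, not the default /downloads
      - /volume1/shared/torrents:/data/torrents
    ports:
      - 8080:8080
    restart: unless-stopped
```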



  • For me it was really the price of domain renewals. Namecheap has great starting deals, but e.g. I have a .studio domain and it costs $28.16 to renew at Namecheap and $21.09 at Porkbun. My .xyz domain costs $9.92 to renew at Porkbun, $14.16 at Namecheap. (Registrar comparison chart here.)

    In terms of pure price, Cloudflare is cheaper to renew for all the domains I have, but Porkbun is only a dollar or two off and I like supporting a smaller company. Edit: Porkbun offers free SSL which is nice if you don’t feel like bothering with LetsEncrypt yourself.

    (Also, I find Namecheap’s domain management console absolutely horrible to work with in terms of UI.)