• sorrus@programming.dev · 1 year ago

    I have a server with TrueNAS Scale installed, and I would love something as simple as a Lemmy app for TrueCharts. I know TrueNAS Scale uses k8s for apps, so this should be possible, but from my limited research it seems like something I'd have to figure out mostly on my own.

  • iso@lemmy.com.tr · 1 year ago

    Have you been able to load balance with multiple containers? I'm not really familiar with k8s.

      • redcalcium@c.calciumlabs.com · 1 year ago

        I also use Kubernetes to run my Lemmy instance. Sadly, pictrs uses its own "database" file, which can only be opened by a single pod: it refuses to run if the "database" lock is already held by another pod, making it impossible to scale up the number of pods. I wish they used Postgres instead of inventing their own database. I suspect this is one of the reasons why the large Lemmy instances have difficulty scaling up their servers.

        • Ducks@ducks.dev · 1 year ago

          You mean pictrs can't scale, or that the other pods can't either? I separated lemmy-ui, the backend, and pictrs into different pods. I haven't tried scaling anything yet, but I did notice the database lock issue with pictrs when using a rolling restart, and had to switch the deployment strategy to Recreate.
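          For anyone hitting the same lock issue, the switch mentioned above is a one-line change in the Deployment spec. A minimal sketch (the names, image tag, and PVC are illustrative, not the commenter's actual config):

          ```yaml
          # Recreate tears down the old pictrs pod before starting the new one,
          # so two pods never try to hold the pictrs database lock at once.
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: pictrs
          spec:
            replicas: 1          # pictrs cannot run with more than one pod
            strategy:
              type: Recreate     # instead of the default RollingUpdate
            selector:
              matchLabels:
                app: pictrs
            template:
              metadata:
                labels:
                  app: pictrs
              spec:
                containers:
                  - name: pictrs
                    image: asonix/pictrs:0.4
                    volumeMounts:
                      - name: data
                        mountPath: /mnt
                volumes:
                  - name: data
                    persistentVolumeClaim:
                      claimName: pictrs-data
          ```

          The trade-off is a short window of downtime for image serving during each deploy, which is unavoidable as long as only one pod can hold the lock.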

        • tyfi@wirebase.org · 1 year ago

          This is a really interesting observation. Curious whether the devs are aware that this breaks simple scalability efforts.

        • belidzs@fost.hu (OP) · 1 year ago

          Right, that’s a good point.

          So far it's working quite well, though for a micro-sized instance that's no surprise. Worst case, I can do the same thing the lemmy.world admins did: create a dedicated scheduling pod using the same Docker image as the normal ones, but exclude it from the Service's target so it won't receive any incoming traffic.

          The rest of the pods can then be dedicated to serve traffic with their scheduling functionality disabled.
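          The label-based split described above could look roughly like this; a sketch with illustrative names, where the serving Deployment (not shown) would label its pods `role: serve` and disable its scheduler:

          ```yaml
          # The scheduler Deployment and the Service share the app label,
          # but the Service selector requires role=serve, which the
          # scheduler pod doesn't carry, so it gets no incoming traffic.
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: lemmy-scheduler
          spec:
            replicas: 1
            selector:
              matchLabels:
                app: lemmy
                role: scheduler
            template:
              metadata:
                labels:
                  app: lemmy
                  role: scheduler   # not matched by the Service below
              spec:
                containers:
                  - name: lemmy
                    image: dessalines/lemmy:0.18.4
          ---
          apiVersion: v1
          kind: Service
          metadata:
            name: lemmy
          spec:
            selector:
              app: lemmy
              role: serve           # only the serving pods match
            ports:
              - port: 8536
                targetPort: 8536
          ```

          This keeps the scheduled jobs running exactly once while the serving pods scale freely behind the Service.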

  • frankblack@lemmy.world · 1 year ago

    Yeah, I was wondering if I should go with a Docker server or a single-node k8s cluster and add workers later if I need them. What do you think?
