• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 16th, 2023





  • I don’t really use it for this, but here are some things I do use it for:

    • metrics scraping on servers without needing to open ports or worry about SSL encryption. Works great for federating Prometheus instances or scraping exporters
    • secure access to machines not directly exposed to the internet, e.g. SSH access to my home box while I’m traveling
    • being an exit node for web traffic while traveling, e.g. if a bank gives you grief about logging in from abroad – masquerade that connection from your home IP

    I mostly just use it for metrics scraping though
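    For the federation case, a minimal scrape config on the central Prometheus might look like this (the tailnet hostname and match selector are placeholders for your own setup):

    ```yaml
    scrape_configs:
      - job_name: "federate-home"
        honor_labels: true
        metrics_path: "/federate"
        params:
          "match[]":
            - '{job!=""}'   # pull everything; narrow this selector in practice
        static_configs:
          # hypothetical Tailscale MagicDNS name of the remote Prometheus
          - targets: ["homebox.your-tailnet.ts.net:9090"]
    ```

    Because the traffic rides the tailnet, the remote instance never has to expose port 9090 publicly.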


  • liara@lemm.ee to Plex@lemmy.ca · System Architecture Feedback (edited · 10 months ago)

    I mean, you could just use Docker, and if all you want is a Plex container, that may be the way to go. Kubernetes is definitely a lot to learn if you’re hesitant to get started with it in the first place. I’d just say that a single-binary distribution like k0s basically becomes Docker on steroids when used in a single-node environment. I’ve become so familiar with k8s that going back to Docker feels like a massive downgrade for anything but a simple and straightforward task (which a single Plex container, admittedly, is).

    Do yourself a favour if you go that route: at least use docker-compose to template your container. Searching through your bash history to find the command you used to start the container is a recipe for frustration.
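    For example, a minimal compose file for Plex might look like this (image tag, paths, and claim token are placeholders – adjust for your setup):

    ```yaml
    services:
      plex:
        image: plexinc/pms-docker:latest
        container_name: plex
        network_mode: host          # simplest way to expose 32400 and the discovery ports
        environment:
          - TZ=Etc/UTC
          - PLEX_CLAIM=claim-XXXX   # placeholder token from plex.tv/claim
        volumes:
          - ./config:/config        # Plex metadata and database
          - /mnt/media:/data        # your media library
        restart: unless-stopped
    ```

    Then `docker compose up -d` recreates the container identically every time – no bash-history archaeology required.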

    One major feature that k8s has which Docker doesn’t (there are lots, tbh) is Helm charts. These are basically install templates: if you don’t like the defaults, you can provide your own values (assuming the chart author has written a good chart), and Helm will template your values into the default chart and spin up a bespoke version for you.

    For instance, a theoretical Helm chart for installing qBittorrent would likely provide the following:

    • the manifest to run a version of qbittorrent
    • a ClusterIP service to expose the qBittorrent web port internally
    • an “ingress” object to connect your nginx frontend to the qbittorrent web port so that you can go to mydomain.com/qbittorrent and qbittorrent appears
    • a volume mount to store your data in

    Now say this ingress doesn’t use SSL by default, but you want HTTPS when you enter the password on the web interface, while the rest of the chart’s defaults meet your needs. A well-written chart would let you provide values that template the SSL setup into the ingress object and have cert-manager go and provision certificates for your hostname from Let’s Encrypt.

    Helm charts basically are a way to provide a sane set of defaults which can be extended and customized to personal needs.
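    As a sketch, overriding such a chart might look like this – the exact keys depend entirely on how the chart author structured their values, so treat these as illustrative:

    ```yaml
    # values.yaml – hypothetical overrides for a qbittorrent chart
    ingress:
      enabled: true
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
      hosts:
        - host: mydomain.com
          paths:
            - path: /qbittorrent
      tls:
        - secretName: qbittorrent-tls
          hosts:
            - mydomain.com
    ```

    Installed with something like `helm install qbittorrent some-repo/qbittorrent -f values.yaml`, Helm merges these values over the chart’s defaults and renders the final manifests.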

    k8s (or the lightweight cousins) may not be ideal for you and, as I said, I’m biased because I’m a certified k8s admin, so tinkering with k8s resembles something like fun for me. Your mileage may vary :)

    Terms:

    • batteries: not a Kubernetes term, just a figure of speech. You got a new toy and it came with batteries – you didn’t have to supply them yourself. In other words, the installation was opinionated and shipped with a pre-existing notion of how you should use the application. I wrote that last comment from my phone, so I may have misused the term a bit
    • pod: a collection of containers running in tandem – for instance, if you had nginx and plex running in the same pod, then plex can find nginx ports as if they were sharing the same machine (127.0.0.1 is the same for both containers). If you had nginx and plex running in different pods, then you would need to use a service to allow them to communicate with each other (which they could easily do with the cluster’s dns service)
    • manifest: a yaml file containing the spec of your containers (name, container port, image to use, volumes to mount). This would basically equate to a docker compose file, but manifests can also define services, namespaces, volumes, etc. In that regard a manifest is a yaml file that defines an object in kubernetes
    • NodePort: a type of service. This one directs traffic to a given pod and can open that service to the outside. By default NodePorts are assigned in the 30000–32767 range and bind to the host’s network interface, so the service becomes reachable externally on that port. Another service type is ClusterIP, which only assigns an IP reachable from inside the cluster (i.e. by other pods/services – for instance, if you had a MySQL service you wanted to expose to a web/application pod but not to the world, ClusterIP lets you do this). The final service type is LoadBalancer, which is a bit more involved (TL;DR: these are frequently integrated with cloud providers to automatically spin up actual load balancer objects, for instance at AWS, but can also be used to bind services to privileged ports on your external IP with something like MetalLB)
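    To make those terms concrete, here’s a minimal manifest defining a pod and a NodePort service for it (names, image, and port numbers are just examples):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: web            # the service selects pods by this label
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort        # switch to ClusterIP and drop nodePort for internal-only access
      selector:
        app: web
      ports:
        - port: 80          # cluster-internal service port
          targetPort: 80    # container port
          nodePort: 30080   # must fall in the 30000-32767 default range
    ```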

  • liara@lemm.ee to Plex@lemmy.ca · System Architecture Feedback (edited · 10 months ago)

    A complete kubernetes cluster for a homelab probably would be overkill (unless you really wanted a kubernetes playground, which some folks do). However, yes, my recommendation these days would be k0s directly. I did use k3s up until recently but gave k0s a shot and realized it’s a bit lighter on resources, more configurable (for instance you can choose to run cri-o instead of containerd, which isn’t an option with k3s) and has some extra features like letting you put helm charts with their values directly in the k0s config.

    k0s vs k3s just comes down to personal preference but for me it came down to:

    • k0s feels a little less opinionated – out of the box I had already disabled a lot of k3s features anyway (swapped Flannel for Calico, used nginx ingress instead of Traefik), and k0s simply ships fewer batteries at initialization, which doesn’t bother me because I have my own preferences for certain parts of my stack
    • both can use SQLite as the data backend (and both do by default in single-node mode), which uses far fewer resources than running etcd as the data store
    • I find k0s uses a couple hundred MB less RAM for the control-plane components (roughly 700 MB vs 1 GB for k3s)
    • less constant CPU usage from the API server
    • both have good documentation for their distro-specific features, and of course Kubernetes itself is extremely well documented – its manifest format is the “language” used to define the services and pods
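    The Helm-charts-in-config feature looks roughly like this in k0s.yaml (the chart and values shown are just an example, not a recommendation):

    ```yaml
    apiVersion: k0s.k0sproject.io/v1beta1
    kind: ClusterConfig
    spec:
      extensions:
        helm:
          repositories:
            - name: ingress-nginx
              url: https://kubernetes.github.io/ingress-nginx
          charts:
            - name: ingress-nginx
              chartname: ingress-nginx/ingress-nginx
              namespace: ingress-nginx
              values: |
                controller:
                  kind: DaemonSet
    ```

    k0s reconciles these charts itself, so the ingress controller comes up together with the cluster instead of needing a separate `helm install` step.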

    As for the distro to run it on, I use MicroOS myself (an immutable OS, set to automatically update and reboot once a week), but Debian is my second choice and my general preference for server distros. The beauty of this setup is that the container host only needs the bare minimum to run the containers. There’s less that can break, because the containers are all managed upstream, so the main breakage concerns basically come down to: did the server boot, and did k0s start?

    NFS is fine and actually natively supported as a kubernetes volume type: https://kubernetes.io/docs/concepts/storage/volumes/

    One option is to mount it on the host first and then use a hostPath to mount it into the container, or you can just mount the NFS path directly into the pod. As for permissions, you may need to do some UID mapping, but Kubernetes also has security contexts that let you alter the UID the pod runs as. If you need the pod to run privileged as root, you can do that – and if you need UID 5124, you can do that too.
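    A sketch of both ideas combined – an NFS volume mounted directly into the pod, with a security context mapping the UID (server address, export path, image, and UID are placeholders):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: media-app
    spec:
      securityContext:
        runAsUser: 5124      # match the UID that owns the files on the NFS export
        runAsGroup: 5124
        fsGroup: 5124
      containers:
        - name: app
          image: nginx:1.27  # stand-in image for the example
          volumeMounts:
            - name: media
              mountPath: /data
      volumes:
        - name: media
          nfs:
            server: 192.168.1.10   # placeholder NFS server
            path: /export/media    # placeholder export path
    ```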

    If your goal right now is a Plex server and not much else to start then this makes things very easy:

    • spin up k0s
    • add a Plex pod/manifest
    • add a Service of type NodePort and expose Plex on a static nodePort of 32400 (we’re lucky that Plex’s default port falls inside the default NodePort range)
    • the GPU passthrough I admit will take some work, but it should be doable
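    The NodePort step might look like this (assuming the Plex pod carries an `app: plex` label):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: plex
    spec:
      type: NodePort
      selector:
        app: plex
      ports:
        - name: pms
          port: 32400
          targetPort: 32400
          nodePort: 32400   # valid because 32400 sits inside the 30000-32767 default range
    ```

    Clients can then reach Plex at `http://<node-ip>:32400/web` with no ingress controller involved.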

    You can add nginx ingress, cert-manager, MetalLB, etc. later on down the line if you get curious and want to expand a bit (Sonarr, Radarr, AdGuard Home, etc.).

    You could also just go full stupid with KubeVirt, though it’s not a project I’ve personally explored. IIRC it basically lets you provision more persistent VMs with k8s rather than containers.


  • liara@lemm.ee to Plex@lemmy.ca · System Architecture Feedback (10 months ago)

    I’m kind of over the whole idea of keeping “pets” around to serve my various self-hosting needs. Why make a hypervisor and then shard off pieces of this and create multiple operating systems that need to be maintained when you can just orchestrate all your hosting needs with a container orchestrator like k0s/k3s on the host? Even GPU passthrough can be done.

    I’m a bit biased because I’m also a CKA, but I was a die-hard “bare metal or bust” kind of person with my self-hosted stuff until I discovered Kubernetes. K8s takes a lot of resources on its own, but a distro like k0s really pares down the minimum requirements and, run as a single node, basically becomes a more featureful version of Docker.

    Eventually I came to understand that when your entire home stack is represented by a few hundred lines of YAML and a couple of directories of portable data, you can stop coddling the Linux install and just use the applications.