• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: June 5th, 2023

  • I’ve struggled to find an alternative to Google Photos that works well enough, and reliably enough, for me to feel comfortable fully replacing it. I’ve tried everything on the Awesome Selfhosted list that could be a potential competitor, but nothing comes close. Google Photos is honestly such a solid product that it’s really hard to find an open source/selfhosted replacement that works at least as well. And it’s just so convenient when it comes to shared albums; the whole experience is slick.

    My ideal solution would be to keep Google Photos as the source of truth, but have something else as a secondary backup. I looked into using Rclone to mount Google Photos and another backend (e.g. Wasabi), and periodically replicating from Google Photos to the other location. Unfortunately, at this time (and maybe forever), the Google Photos API doesn’t let you access photos/videos in their original form, only compressed. I want the originals, of course, so this doesn’t fly. The next thing I’ll look into when I have more time is periodically automating Google Takeout to fetch the original-quality photos/videos, then uploading them to a backup location. It’s such a janky idea and it rubs me the wrong way… but it might be the only way. Rclone would have been perfect if only it could get the original-quality content, but that’s on Google for not enabling that capability. A rough sketch of what the Rclone approach would look like is below.
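
    For anyone curious, this is roughly what the Rclone replication would look like. This is a minimal sketch under stated assumptions: the remote names (gphotos, wasabi) are placeholders for remotes you’d set up with rclone config, BUCKET_NAME follows the placeholder style of my other examples, and anything fetched this way is still compressed because of the API limitation described above.

    #!/bin/bash
    # Assumes two rclone remotes are already configured:
    #   gphotos - the Google Photos backend (rclone config, type "google photos")
    #   wasabi  - an S3-compatible remote pointing at Wasabi
    # NOTE: media fetched via the Google Photos API is compressed,
    # not original quality, which is the dealbreaker described above.

    year="$(date +'%Y')"

    # Replicate this year's media into a Wasabi bucket, skipping files
    # that already exist at the destination.
    rclone copy "gphotos:media/by-year/$year" "wasabi:BUCKET_NAME/google-photos/$year" --progress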


  • I have a single ASUS Chromebox M075U (i3-4010U) which I use as a Docker host. It’s neatly and inconspicuously tucked away under my TV, and it stays quiet even with the fan at full speed under a heavy workload.

    Main Specs:

    • Processor: Intel Core i3-4010U 1.7 GHz
    • Memory: 4 GB DDR3 1600 (I upgraded this to 16 GB)
    • Storage: 16 GB SSD (I upgraded this to 64 GB)
    • Graphics: Intel HD Graphics 4400
    • OS: Google Chrome OS (Currently running Ubuntu 22.04)

    Full Specs: https://www.newegg.ca/asus-chromebox-m075u-nettop-computer/p/N82E16883220591R

    I started off with a single-node Kubernetes cluster (k3s) a few years ago for learning purposes, and ran with it for quite a long time, but have since gone back to Docker Compose for a few reasons:

    • Less overhead and more lightweight
    • Quicker and easier to maintain now that I have a young family and less time
    • Easier to share examples of how to run certain stacks with people who don’t have Kubernetes experience

    For logs, I’m only concerned with container logs, so I use Dozzle for a quick view of what’s going on. I don’t care about keeping historical logs, only real-time ones: if there’s an ongoing issue I can troubleshoot it then and there, and that’s all I need. This also means I don’t need to store anything for logging, or run a heavier log-ingestion stack such as ELK or Graylog. Dozzle is nice and light and gives me everything I need. Running it is about as simple as the sketch below.
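
    A minimal Compose definition for Dozzle looks something like this (the host port is my choice; adjust to taste):

    services:
      dozzle:
        image: amir20/dozzle:latest   # or better, pin a specific release tag
        container_name: dozzle
        volumes:
          # Read-only access to the Docker socket so Dozzle can stream container logs
          - /var/run/docker.sock:/var/run/docker.sock:ro
        ports:
          - "8080:8080"               # Dozzle's web UI
        restart: unless-stopped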

    When it comes to container updates, I just do them manually, whenever I feel like it. Referencing the latest tag of a container image to pick up updates automatically is generally frowned upon because of the risk of random breaking changes, and I personally feel the same holds true for methods like Watchtower that automate container updates. I like to keep my containers running a specific version of an image until I feel it’s time to see what’s new and try an update. I can then safely back up the persistent data, see if all goes well, and if not, do a quick rollback with minimal effort (see the example below).
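
    As a sketch, the difference in a Compose file is just the image tag; the service and version here are illustrative, not from my actual stacks:

    services:
      jellyfin:
        # Risky: silently pulls whatever is newest on the next "docker compose pull"
        # image: jellyfin/jellyfin:latest
        #
        # Safer: stays put until I deliberately bump the tag, back up the
        # persistent data, and test the new version
        image: jellyfin/jellyfin:10.8.13

    Rolling back is then just a matter of restoring the persistent data and reverting the tag.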

    I used to think monitoring tools were cool, fun, and neat to show off (fancy graphs, right?), but I’ve since let go of that idea. I don’t have any monitoring set up besides Dozzle for logs (and it now shows some extra info such as memory and CPU usage, which is nice). In the past I had Grafana, Prometheus, and some other tooling for monitoring, but I never ended up looking at any of it once it was up and “done” (this stuff is never really “done”, you know?). So I felt it was all a waste of resources that could be spent actually serving my needs. At the end of the day, if I’m using my services and not having any trouble with anything, then it’s fine; I don’t need fancy graphs or metrics showing me what’s going on behind the curtain, because my needs are being served, which is the point, right?

    I do use Gotify for notifications, if you want to count that as monitoring, but that’s pretty much it.

    I’m pretty proud of the fact that I’ve got such a cheap, low-powered little server compared to what most people who selfhost likely have to work with, and that I’m able to run so many services on it without any performance issues that I myself can notice. Everything just works, and works very well. I can probably even add a bunch more services before I start seeing performance issues.

    At the moment I run about 50 containers across my stacks, supporting:

    • AdGuard Home
    • AriaNG
    • Bazarr
    • Certbot
    • Cloudflared
    • Cloudflare DDNS
    • Dataloader (custom service I wrote for ingesting data from a bunch of sources)
    • Dozzle
    • FileFlows
    • FileRun
    • Gitea
    • go-socks5-proxy
    • Gotify
    • Homepage
    • Invidious
    • Jackett
    • Jellyfin
    • Lemmy
    • Lidarr
    • Navidrome
    • Nginx
    • Planka
    • qBittorrent
    • Radarr
    • Rclone
    • Reactive-Resume
    • Readarr
    • Shadowsocks Server (Rust)
    • slskd
    • Snippet-Box
    • Sonarr
    • Teedy
    • Vaultwarden
    • Zola

    If you know what you’re doing and have enough knowledge across a variety of areas, you can squeeze a lot out of even the most basic/barebones hardware. Keeping things simple and tidy, and making effective use of DNS, caching, etc., goes a long way. Years of experience in full-stack development, infrastructure, and DevOps practices really helped me figure out how to squeeze every last bit of performance out of this thing lol. I’ve definitely taken my caching strategies to the next level, which is working really well. I want to do a write-up on it someday.


  • Containers really shine in the selfhosting world these days. Complete userspace isolation, basically no worries about dependencies or conflicts since everything is shipped and pre-configured inside the image, easy port mapping, immutable “system” files, and volume mounting for persistent data… and much more. If built properly, container images solve almost every problem you’d otherwise be grappling with. A sketch of what that looks like in practice is below.
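
    To illustrate those points, here’s a minimal, hypothetical Compose service showing port mapping, an immutable root filesystem, and a volume mount for the only data that needs to persist (the service name, image, and paths are all made up for the example):

    services:
      someapp:
        image: example/someapp:1.2.3   # hypothetical image, pinned to a version
        read_only: true                # immutable "system" files
        tmpfs:
          - /tmp                       # scratch space the app can still write to
        volumes:
          - ./data:/var/lib/someapp    # persistent data lives outside the container
        ports:
          - "8080:80"                  # easy port mapping: host 8080 -> container 80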

    I can’t imagine building another application without containerization ever again. I can’t remember the last time I installed any kind of server-side software directly on a host, with the exception of packages the host itself needs to support containers or to improve its security posture.

    In my (admittedly strong) opinion, it’s absolute madness, and dare I say reckless, to create a brand new product that doesn’t ship via container images in this day and age, assuming you have the required knowledge to make it happen (properly, following best practices of course), or the capacity to get up to speed in time to meet a deadline.

    I’m sure some would disagree or have special use-cases they could cite where containers wouldn’t be a good fit for a product or solution, but I’m pretty confident that those would be really niche cases that would apply to barely anyone.


  • Sure, I won’t post exactly what I have, but something like this could be used as a starting point:

    #!/bin/bash
    # Nightly backup: spin each Compose stack down, copy its directory up to
    # Wasabi (S3-compatible), then bring the stack back up.
    now="$(date +'%Y-%m-%d')"
    
    echo "Starting backup script"
    
    echo "Backing up directories to Wasabi"
    for dir in /home/USERNAME/Docker/*/
    do
        dir="${dir%/}"  # strip the trailing slash
        backup_dir_local="/home/USERNAME/Docker/${dir##*/}"
        backup_dir_remote="$now/${dir##*/}"
    
        echo "Spinning down stack"
        cd "$backup_dir_local" && docker compose down --remove-orphans
    
        echo "Going to backup $backup_dir_local to s3://BUCKET_NAME/$backup_dir_remote"
        aws s3 cp "$backup_dir_local" "s3://BUCKET_NAME/$backup_dir_remote" --recursive --profile wasabi
    
        echo "Spinning up stack"
        cd "$backup_dir_local" && docker compose up --detach
    done
    
    # Back up the script itself alongside the stacks
    aws s3 cp /home/USERNAME/Docker/backup.sh "s3://BUCKET_NAME/$now/backup.sh" --profile wasabi
    
    echo "Sending notification that backup tasks are complete"
    curl "https://GOTIFY_HOSTNAME/message?token=GOTIFY_TOKEN" -F "title=Backup Complete" -F "message=All container data backed up to Wasabi." -F "priority=5"
    
    echo "Completed backup script"

    I have all of my stacks (defined using Docker Compose) in separate subdirectories within the parent directory /home/USERNAME/Docker/; it’s the only directory on the host that matters. The backup script lives in that parent directory (in reality I have a few scripts in use, since my real setup is a bit more elaborate than the above). For each stack (i.e. subdirectory) I spin the stack down, make the backup and copy it up to Wasabi, then spin the stack back up, and progress through each stack until done. Lastly, I copy up the backup script itself (in reality I copy up all of the scripts I use for various things). Not included in the script, and outside the scope of the example, is the fact that I have the AWS CLI configured on the host with profiles for interacting with Wasabi, AWS, and Backblaze B2.

    That should give you the general idea of how simple it is. In the above example I’m not doing some things I actually do, such as creating a compressed archive, validating it to ensure there’s no corruption, and pruning files within the stacks that aren’t needed for the backup. So don’t take this as a “good” solution, just one that does the minimum necessary to have something. A rough sketch of the archive-and-validate step is below.
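
    For the compressed-archive idea, a minimal sketch could look like the following. The stack name is a placeholder, the paths follow the example above, and the checksum approach is just one way of catching corruption, not necessarily what I do in my real setup:

    #!/bin/bash
    now="$(date +'%Y-%m-%d')"
    stack="SOME_STACK"  # placeholder for one of the stack subdirectories
    archive="/tmp/${stack}-${now}.tar.gz"
    
    # Create a compressed archive of the stack directory
    tar -czf "$archive" -C /home/USERNAME/Docker "$stack"
    
    # Validate the archive: gzip -t checks the compression integrity
    if gzip -t "$archive"; then
        # Record a checksum so corruption in transit/storage can be detected later
        sha256sum "$archive" > "${archive}.sha256"
        aws s3 cp "$archive" "s3://BUCKET_NAME/$now/" --profile wasabi
        aws s3 cp "${archive}.sha256" "s3://BUCKET_NAME/$now/" --profile wasabi
    else
        echo "Archive failed validation, not uploading" >&2
        exit 1
    fi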


  • I keep all secrets and passwords in a selfhosted Bitwarden instance. I don’t maintain any kind of “documentation”, since my deployment files and scripts are clean and tidy enough that I can tell what’s going on at a glance by looking at them directly. They’re also constantly changing as I continuously harden services against ever-changing standards, so it’s more efficient for me to keep the codebase clean than to maintain separate documentation that I myself would likely never read again once I’ve written it.

    I’m the only one that needs to know how my own services are deployed and what the infrastructure looks like, and it’s way faster for me to just look at the actual content of whatever deployment files or scripts I have.

    It’s a different story for things I work on professionally, because you can’t assume that someone else maintaining the same things has the level of knowledge needed to “just know where to go and what to do”. But that doesn’t apply to personal projects, where I just want to get things done.



  • I run all of my services in containers, and intentionally keep my Docker host as barebones as possible so that it’s disposable (I don’t back up anything aside from the data belonging to the services themselves; the host could be launched into the sun without any backups and it wouldn’t matter). I like to keep things simple yet practical, so I just run a nightly cron job that spins down all my stacks, creates archives of everything as-is at that time, and uploads them to Wasabi, AWS S3, and Backblaze B2. Then everything spins back up, rinse and repeat the next night. I use lifecycle policies to keep the last 90 days’ worth of backups; a sketch of both pieces is below.
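
    As a rough sketch of the scheduling and retention: the schedule, log path, and bucket name are assumptions for illustration, and the lifecycle JSON is the standard S3 form, so it’s worth verifying that your particular provider honors it through the S3 API before relying on it.

    # Crontab entry: run the backup script nightly at 3:00 AM
    0 3 * * * /home/USERNAME/Docker/backup.sh >> /var/log/backup.log 2>&1
    
    # Apply a 90-day expiration lifecycle policy to the bucket
    aws s3api put-bucket-lifecycle-configuration \
        --bucket BUCKET_NAME \
        --profile wasabi \
        --lifecycle-configuration '{
            "Rules": [{
                "ID": "expire-backups-after-90-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 90}
            }]
        }'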