• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Dualbooting is possible and easy: just gotta shrink the Windows partition and install Linux next to it. Make sure not to format the whole thing by mistake, though. A lot of Linux installers want to format the disk by default, so you have to pick manual mode and make sure to shrink (not delete and re-create!) the Windows partition.
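    For illustration, a rough sketch of what the shrink can look like from a live Linux session; /dev/sda and the partition number are placeholders, and shrinking from Windows’ own Disk Management beforehand is the safer route:

    # Inspect the layout first; /dev/sda2 below stands in for the
    # actual Windows (NTFS) partition.
    lsblk -f
    sudo parted /dev/sda print
    # Shrink the NTFS filesystem to 200 GiB, then move the partition's
    # end to match (resizepart takes the new end position, not a size).
    # Graphical installers usually do both of these in one step.
    sudo ntfsresize --size 200G /dev/sda2
    sudo parted /dev/sda resizepart 2 200GiB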

    As for its usefulness, however… Switching the OS is incredibly annoying. Every time you want to do that, you have to shut the system down completely and boot it back up. That means you have to stop everything you’re doing, save all your progress, and then try to get back up to speed two minutes later. After a while, the constant rebooting gets really old.

    Furthermore, Linux is a completely different system that shares only some surface-level things with Windows. Switching to it basically means re-learning how to use a computer almost from scratch, which is also incredibly frustrating.

    The two things combined very quickly turn into a temptation to just keep using the more familiar system. (Been there, done that.)

    I think I’ll have to agree with people who propose Virtual Machines as a solution.

    Running Linux in a VM on Windows would let you play around with it, tinker a little and see what software is and isn’t available on it. From there you’ll be able to decide if you’re even willing to dedicate more time and effort to learning it.
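    If you go the VirtualBox route, for instance, a throwaway VM can even be set up from the command line; the VM name, sizes, and ISO path here are just placeholders:

    # Create and register the VM ("Linux_64" is the generic 64-bit type).
    VBoxManage createvm --name linux-test --ostype Linux_64 --register
    VBoxManage modifyvm linux-test --memory 4096 --cpus 2
    # 40 GB virtual disk, attached to a SATA controller.
    VBoxManage createmedium disk --filename linux-test.vdi --size 40960
    VBoxManage storagectl linux-test --name SATA --add sata
    VBoxManage storageattach linux-test --storagectl SATA --port 0 --device 0 --type hdd --medium linux-test.vdi
    # distro.iso stands in for whatever installer image you grabbed.
    VBoxManage storageattach linux-test --storagectl SATA --port 1 --device 0 --type dvddrive --medium distro.iso
    VBoxManage startvm linux-test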

    If you decide to continue, you can dual boot Windows and Linux: not to switch between the two, but to be able to back out of the experiment.

    Instead, the roles of the OSes could be reversed: a second copy of Windows could be installed in a VM, which, in turn, would run on Linux.

    That way, you’d still have a way to run some more picky Windows software (that is, software that refuses to work in Wine) without actually booting into Windows.
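    On the Linux side that could be QEMU/KVM through virt-manager, or scripted with virt-install; a sketch, assuming libvirt is set up, with the ISO path and sizes as placeholders:

    # Creates a Windows guest with 8 GiB RAM, 4 vCPUs and a 64 GiB disk.
    virt-install \
        --name win10 \
        --memory 8192 \
        --vcpus 4 \
        --disk size=64 \
        --cdrom ~/isos/Win10.iso \
        --os-variant win10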

    This approach would maximize exposure to Linux, while still letting you back out of the experiment at any moment.


  • S410@kbin.social to Linux@lemmy.ml · I dislike wayland · 4 months ago

    Wayland has its fair share of problems that haven’t been solved yet, but most of those points are nonsense.

    If that person lived a little over a hundred years ago and wrote a rant about cars vs horses instead, it’d go something like this:

    Think twice before abandoning Horses. Cars break everything!
    Cars break if you stuff hay in the fuel tank!
    Cars are incompatible with horse shoes!
    You can’t shove your dick in a car’s mouth!

    The rant you’re linking makes about as much sense.

  • “AI” models are, essentially, solvers for mathematical systems that we, humans, cannot describe, and therefore cannot write solvers for, ourselves.

    For example, a calculator for pure numbers is a pretty simple device, all the logic of which can be designed by a human directly. A language, though? Or an image classifier? Those are not possible to create by hand.

    With “AI”, instead of designing all the logic manually, we create a system that can end up in a finite, yet still nearly infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for some incredibly complex system.

    If we were to try to make a regular calculator that way, and all we gave the model was “2+2=4”, it would memorize the equation without understanding it. That’s called “overfitting”, and it’s something people building AI are trying their best to prevent. It happens when the training data contains too many repeats of the same thing.

    However, if there is no repetition in the training set, the model is forced to actually learn the patterns in the data, instead of the data itself.

    Essentially: if you’re training a model on a single copyrighted work, you’re making a copy of that work via overfitting. If you’re using terabytes of diverse data, overfitting is minimized. Instead, the resulting model has an actual understanding of the system you’re training it on.

  • OpenSUSE + KDE is a really solid choice, I’d say.

    The most important Linux advice I have is this: Linux isn’t Windows. Don’t expect things to work the same.
    Don’t try too hard to re-configure things to match the way they are on Windows. If there isn’t an easy way to get a certain behavior, there’s probably a reason for it.

  • Not OP, but I have the same setup.

    I have BTRFS on /, which lives on an SSD, and ext4 on an HDD, which is /home. BTRFS can do snapshots, which is very useful in case an update (or my own stupidity) bricks the system. Meanwhile, /home is filled with junk like cache files, games, etc., which doesn’t really make sense to snapshot, but that’s actually secondary: spinning rust is slow and BTRFS makes it even worse (at least on my hardware), which, in itself, is enough reason to avoid using it there.
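    For what it’s worth, taking a snapshot manually is a one-liner; this assumes / is itself a btrfs subvolume and that a /.snapshots directory exists (tools like snapper or Timeshift automate the rotation):

    # Read-only snapshot of the root subvolume, tagged with the date.
    sudo btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)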

  • On my devices (PCs, laptops, phones), Syncthing syncs all my .rc files, configs, keys, etc.

    For things like servers, routers, etc. I rely on OpenSSH’s ability to send over environment variables to send my aliases and functions.
    On the remote I have
    [ -n "$SSH_CONNECTION" ] && eval "$(echo "$LC_RC" | { { base64 -d || openssl base64 -d; } | gzip -d; } 2>/dev/null)"
    in whatever is loaded when I connect (.bashrc, usually)
    On the local machine
    alias ssh="$([ -z "$SSH_CONNECTION" ] && echo 'LC_RC=$(gzip < ~/.rc | base64 -w 0)') ssh"
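
    One caveat: this only works if sshd lets the variable through. LC_* names are a good pick because many distros’ default configs already allow them (Debian and Ubuntu do); otherwise it has to be permitted on both ends:

    # Server, /etc/ssh/sshd_config (reload sshd afterwards):
    AcceptEnv LANG LC_*
    # Client, ~/.ssh/config:
    Host *
        SendEnv LC_RC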

    That’s not the best way to do that by any means (it doesn’t work with dropbear, for example), but for cases like that I have other non-generic, one-off solutions.