• 0 Posts
  • 142 Comments
Joined 4 months ago
Cake day: March 8th, 2024


  • I guess that depends on the use case and how frequently both machines are running simultaneously. Like I said, that reasoning makes a lot of sense if you have a bunch of users coming and going, but the OP is saying it’s two instances at most, so… I don’t know if the math makes virtualization more efficient. It’d probably be more efficient by the dollar, if the server is constantly rendering something in the background and you’re only sapping whatever performance you need to run games when you’re playing.

    But the physical space thing is debatable, I think. This sounds like a chonker of a setup either way, and nothing is keeping you from stacking or rack-mounting two PCs, either. Plus if that’s the concern you can go with very space-efficient alternatives, including gaming laptops. I’ve done that before for that reason.

    I suppose it’s why PC building as a hobbyist is fun, there are a lot of balance points and you can tweak a lot of knobs to balance many different things between power/price/performance/power consumption/whatever else.


  • OK, yeah, that makes sense. And it IS pretty unique, to have a multi-GPU system available at home but just idling when not at work. I think I’d still try to build a standalone second machine for that second user, though. You can then focus on making the big boy accessible from wherever you want to use it for gaming, which seems like a much more manageable, much less finicky challenge. That second computer would probably end up being relatively inexpensive to match the average use case for half of the big server thing. Definitely much less of a hassle. I’ve even had a gaming laptop serve that kind of purpose just because I needed a portable workstation with a GPU anyway, so it could double as a desktop replacement for gaming with someone else at home, but of course that depends on your needs.

    And in that scenario you could also just run all that LLM/SD stuff in the background and make it accessible across your network, I think that’s pretty trivial whether it’s inside a VM or running directly on the same environment as everything else as a background process. Trivial compared to a fully virtualized gaming computer sharing a pool of GPUs, anyway.

    Feel free to tell us where you land, it certainly seems like a fun, quirky setup either way.


  • Yeah, but if you’re this deep into the self hosting rabbit hole, what circumstances lead to having an extra GPU laying around without an extra everything else, even if it’s relatively underpowered? You’ll probably be able to upgrade it later by recycling whatever is in your nice PC next time you upgrade something.

    At this point most of my household is running some frankenstein of phased out parts just to justify my main build. It’s a bit of a problem, actually.


  • Before I had to try twice for Fedi reasons, I was mostly pushing it for the joke.

    But honestly, this is so on brand for MS. They came up with a superficially marketable idea, botched the execution, then botched the marketing even harder. Then Apple came up with the same feature and everybody liked it.

    The idea that this is them playing the long game is hilarious. Not only is that not how big software companies work, it is definitely not how MS works. People just want to sound worldly and cynical and instead come across as paranoid and delusional. The idea that everybody working on this knew it sucked and they shipped it anyway is far more plausible.

    Can they execute? Sure! But can they also get stuck failing to push back on a bad idea until they end up shipping something nobody likes? Often, objectively. And almost always subjectively because they also consistently suck at branding their stuff, both the good and the bad.



  • OK, but why?

    Well, for fun and as a cool hobby project, I get that. That is enough to justify it, like any other crazy hobbyist project. Don’t let me stop you.

    But in the spirit of practicality and speaking hypothetically: Why set it up that way?

    For self-hosting why not build a few standalone machines and run off that instead? The reason to do this large scale is optimizing resources so you can assign a smaller pool of hardware to users as they need it, right? For a home set of two or three users you’d probably notice the fluctuations in performance caused by sharing the resources on the gaming VMs and it would cost you the same or more than building a couple reasonable gaming systems and a home server/NAS for the rest. Way less, I bet, if you’re smart about upgrades and hand-me-downs.



  • I don’t think that’s correct. Recall will not draw any data from any app you don’t actively display onscreen. In fact, it will not draw any data at all that you don’t specifically display on screen. Apple’s Recall equivalent will know about data stored in applications whether you open them or not, as it’s been explained, but it will work with specific applications drawing from specific data (and it does also look at your screen, although it’s not clear whether it does that constantly or on demand).

    Just to quote the current Apple Intelligence landing page. This is posted by Apple itself as promo materials:

    Apple Intelligence empowers Siri with onscreen awareness, so it can understand and take action with things on your screen. If a friend texts you their new address, you can say “Add this address to their contact card,” and Siri will take care of it.

    Awareness of your personal context enables Siri to help you in ways that are unique to you. Can’t remember if a friend shared that recipe with you in a note, a text, or an email? Need your passport number while booking a flight? Siri can use its knowledge of the information on your device to help find what you’re looking for, without compromising your privacy.

    Seamlessly take action in and across apps with Siri. You can make a request like “Send the email I drafted to April and Lilly” and Siri knows which email you’re referencing and which app it’s in. And Siri can take actions across apps, so after you ask Siri to enhance a photo for you by saying “Make this photo pop,” you can ask Siri to drop it in a specific note in the Notes app — without lifting a finger.

    That sure sounds to me like Siri now looks at your screen, logs your past activity, or at least searches through pre-existing system logs of your activity, and has access to and processes all your information.

    Again, Recall and “AppleI” will both draw different sets of data, but they are both drawing new data at the system level. And they’re both making context inferences on your data. Sure, the process is different, and they each have issues the other doesn’t (MS’s 1.0 version had glaring security holes and its data was too human-readable, while Apple’s version sends your data to a server for processing instead of keeping it all on-device), but each is fundamentally doing the same thing with the same startling access to your data. Both companies insist they’re not logging your data anywhere outside your device. To me, that’s not enough in either case.


  • The Giant Bomb site player specifically was way better than the contemporary Youtube player for a good long while. They were also better at prioritizing bitrate over resolution, since they weren’t obsessed with pretending they had a pixel count advantage over competitors while compressing content down to mush. If anything it’s ironic that Youtube will now try to sell you bitrate as part of their subscription without cranking up the resolution, presumably because their creators no longer even try to upload 4K anymore.

    Sorry, now I’m bringing up legacy gripes from a different decade. Carry on.


  • For the record, MS had also stated that users can exclude specific applications from Recall and devs can exclude specific screens or content from being recorded.

    I’m not sure that “Apple already indexes and has unfettered access to all your granular data” is a good defense, either. That’s… worse. Although for what it’s worth it does seem like this AI thing is way more intrusive than Spotlight, in that it’s not just searching keywords inside files it can parse, it is connecting data from multiple sources to generate context about you, some of which is being processed off-site. I don’t think it’s as easily exploitable as the 1.0 version of Recall MS described, but if your concern is with an AI or a corporation having access to your information, or with compromising information being accessed through easy search by anybody with local access… well, yeah, it’s all degrees of bad here.

    Didn’t you and I already litigate this in a different thread? I’d rather not rehash that.




  • Nope, it’s the exact opposite. Timeline looked a lot more like the Recall equivalent Apple is rolling out. It monitored your activity, not just your screen, and then stored the output on the “cloud” so it could be shared across computers.

    I don’t want either, but if you ask me if I want my screen recorded on-device or my logged activity shared across all machines and stored on MS’s servers I’ll take Recall any day.

    So my point is, Timeline didn’t “soften” anything. It went away on the launch of Win11. And nobody was “softened” because when it resurfaced as “Recall” everybody freaked right out immediately all over again. Bad ideas are bad ideas. You can wait for people to get over minor inconveniences or tradeoffs, or just live with whatever percentage of people find something to be a dealbreaker if the value you extract from it is way higher than the business you lose. But a bad idea is a bad idea.

    Also my point: people here don’t know how to take a win. Recall is gone, I’d expect it to never come back, unless Apple does MS’s job for them and when it resurfaces it works exactly like the Apple feature that works exactly like Recall without anybody freaking out about it.





  • OK, look, I don’t like the online auth requirement for Windows 11, I think it’s dumb and finicky. I’m not trying to defend it here, I was just trying to correct the record on a slightly misleading summary…

    …but come on, any user with those needs can work around the login in like five minutes.

    Retro gaming in 20 years will either work just fine on the next version of Windows or work on a Win11 install supporting an offline account. Heavy machinery shipping with Windows will presumably ship in a state where it can be authenticated, so it should have some way to be online or to update to a version of Windows that does have auth servers, if Win11 stops having those for some reason. Bad drivers or simply not having connectivity hardware just requires using a USB device. Your phone will USB tether long enough to log in to Windows on first install just fine, I’ve done it before.

    Don’t get me wrong, it shouldn’t be needed, and it’s a stupid annoyance. The real answer to all those use cases is using the known workarounds to support offline accounts on first boot that MS should continue to surface and offer as a supported option. But let’s not be disingenuously obtuse about how the software actually works. I’ve done way worse to keep a legacy OS running on an old machine.



  • Was that a work computer? I know on a work laptop I did have some time restrictions set by IT because they had some authentication policies, but my understanding is that on a Windows Home account you control there should be no time limit, although it may complain about your MS apps or treat it as a non-activated install after a while, I’m not sure. I admit that I have never put that to the test on a Win11 PC. I definitely did on MS-account enabled Win10, since I’ve stashed older PCs and then turned them back on offline later, but I don’t think I’ve had an idle Win11 machine more than three months yet.