  • My experience has been pretty similar. With Windows turning the invasive crap up to 11, I decided to try jumping to Linux. The catch has always been gaming. But, I have a Steam Deck and have seen firsthand how well Proton has been bridging that gap, so I finally decided to dip my toes back in. I installed Arch on a USB 3 thumbdrive and have been running my primary system that way for about a month now. Most everything has worked well. Though, by choosing Arch, I accepted some level of slamming my head against a wall to get things how I want them. That’s more on me than Linux. Games have been running well (except for an input bug in Enshrouded after a recent major update; that’s fixed now). I’ve had no issues with software, as I was already using mostly FOSS anyway. It’s really been a lot of “it just works” all around.




  • I think AI is good at giving answers to well-defined problems. The issue is that companies keep trying to throw it at poorly defined problems, and the results are less useful. I work in the cybersecurity space, and you can’t swing a dead cat without hitting a vendor talking about AI in their products. It’s the new, big marketing buzzword. The problem is that finding the bad stuff on a network is not a well-defined problem. So instead, you get unsupervised models faffing about, generating tons and tons of false positives. The only useful implementations of AI I’ve seen in these tools actually mirror your own: they can be scary good at generating data queries from natural language prompts. Which is, once again, a well-defined problem.

    Overall, AI is a tool and, used in the right way, it’s useful. It gets a bad rap because companies keep using it in bad ways, and the end result can be worse than not having it at all.



  • The answer to that will be everyone’s favorite “it depends”. Specifically, it depends on everything you are trying to do. I have a fairly minimal setup: I host a WordPress site for my personal blog and a NextCloud instance for syncing my photos/documents/etc. I also have to admit that my backup situation is not good (I don’t have a remote backup). So, my costs are pretty minimal:

    • $12/year - Domain
    • $10/month - Linode/Akamai containers

    The domain fee is obvious: I pay for my own domain. For the containers, I have two containers hosted by the bought-up husk of Linode. The first is just a Kali container I use for remote scanning and testing (of my own stuff and for work); not a necessary cost, but one I like to have. The other is a Wireguard container connecting back to my home network. This one is necessary because my ISP uses CG-NAT. The short version of that is, I don’t actually have a public IP address on my home network and so have to work around that limitation. I do this by hosting NGinx on the Wireguard container and routing all traffic over a Wireguard VPN back to my home router. The VPN terminates on the outside interface, and traffic on 443/tcp is then NAT’d through the firewall to my “server”. There, an NGinx container listens on 443 and, based on host headers, sends traffic to either the WordPress or the NextCloud container, which do their magic respectively. I also run a number of other services in containers on that server, things like PiHole and Octoprint, but none of those are hosted on the internet.
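
    For anyone curious what that host-header routing looks like, here’s a rough sketch of the NGinx config (all of the domain names, container addresses, and cert paths below are made up for illustration, not my actual setup):

    ```nginx
    # Two server blocks on 443; NGinx matches the request's host header
    # against server_name, then proxies to the matching container.
    server {
        listen 443 ssl;
        server_name blog.example.com;              # hypothetical WordPress host
        ssl_certificate     /etc/ssl/blog.pem;     # made-up cert paths
        ssl_certificate_key /etc/ssl/blog.key;

        location / {
            proxy_set_header Host $host;           # preserve the original host header
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://wordpress:8080;      # WordPress container
        }
    }

    server {
        listen 443 ssl;
        server_name cloud.example.com;             # hypothetical NextCloud host
        ssl_certificate     /etc/ssl/cloud.pem;
        ssl_certificate_key /etc/ssl/cloud.key;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://nextcloud:8080;      # NextCloud container
        }
    }
    ```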

    I don’t track costs for electricity, but that should be minimal for my server. The rest of the network equipment is a wash, as I would be using that anyway for home internet. So overall, I pay $11/month in fixed costs ($10 for the containers plus the $12/year domain), and then any upgrades/changes to my server have a one-time capital cost. For example, I just upgraded the CPU in it as it was struggling under the Enshrouded server I was running for my wife and me.


  • Attempt at serious answer (warning: may be slightly offensive)

    Wow, you are a fucking moron. But, there is an interesting question buried in there, you just managed to ask it in a monumentally stupid way. So, let’s pick this apart a bit. Assuming Trump gets re-elected and speed-runs the US into global irrelevancy, what happens to the various standards and standards bodies? tl;dr: Not much.

    • FIPS - This will be the most affected. If companies no longer need to care about working with the US Government (USG), no one is going to bother with FIPS. FIPS is really only a list of cryptographic standards which are considered “secure enough” for USG use. The standards themselves won’t actually change, and the USG may still continue to update FIPS; people would just stop noticing.
    • UNICODE - Right, so Unicode is a character set maintained by the Unicode Consortium. Maybe with the US being less dominant, we see the inclusion of more stuff; but, it’s just a way to define printable characters. It works incredibly well and there’s no reason it would be abandoned. Also, there are already plenty of other code pages; Unicode is just popular because it covers so much. Maybe the headquarters for the consortium ends up elsewhere.
    • ANSI - Isn’t a standard, it’s a standards body, and a private non-profit at that, not actually part of the US Government. So, assuming the US stops mattering, other countries/organizations would likely stop listening to its ideas. The ANSI standards which exist will continue to exist, and if ANSI continues to exist, it’ll probably keep publishing standards, but only the US would care about them.
    • ISO - Again, this isn’t a standard, it’s a Non-Governmental Organization, headquartered in Switzerland. Also, ISO is not an acronym; it’s borrowed from the Greek isos, meaning “equal”. And ya, this one would almost certainly keep chugging along. Probably a bit more Euro-centric than it is now, but mostly unchanged.

    For this reason, and a lot of other reasons, I am in favor of liberterianism because then, it would not be a government ran by octogenarians deciding standards for communication,

    It’s ok, I was young and stupid once too. The fact is that, while many telecommunications standards started off in the US, and some even in the USG, most of them have long since been handed off to industry groups. The Internet Engineering Task Force is responsible for most of the standards we follow today. It was spun off from the USG in 1993 and is a mostly consensus-driven organization with input from all over the world. In a less US-centric world, the makeup of the body might change some. But, I suspect things would keep humming along much as they have for the last few decades.

    Will we live in a post-standard world?

    This depends on the level of fracturing of networks. Over time, there has been a move towards standardization because it makes sense. Sure, companies resist, and all of them try to own the standard, but there has been a lot of pushback against that, often from outside the US. Take, for example, the EU’s law requiring common charging ports. In many ways, the EU is now doing more for standardization than the US.

    Worse, cryptography. Well, for ‘serious shit’, people roll their own crypto because…

    Tell me you know fuck all about security without saying you know fuck all about security. There is a well-accepted maxim called “Schneier’s Law”, based on this classic essay. It’s often shortened to “Don’t roll your own crypto”. And this goes back to that FIPS standard mentioned earlier. FIPS is useful mostly because it keeps various bits of the USG from picking bad crypto. The algorithms listed in FIPS are all bog-standard stuff, from things like the Advanced Encryption Standard (AES) selection process. The primitives and standards are the primitives and standards because they fucking work and have been heavily tested and shown to be secure over many years of really smart people trying to break them. Ironically, it was that same sort of open testing that resulted in the NSA being caught trying to create a crypto backdoor (the Dual_EC_DRBG random number generator).
    So no, for ‘serious shit’ no one rolls their own crypto, because that would be fucking dumb.
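
    In practice, “don’t roll your own crypto” means calling a vetted library that wraps the standard primitives instead of writing cipher code yourself. Here’s a minimal sketch in Python using the pyca/cryptography package (the key handling is purely illustrative; real systems need actual key management):

    ```python
    # Authenticated encryption with a bog-standard primitive (AES-256-GCM)
    # via a vetted library, rather than hand-rolled cipher code.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # random 256-bit key
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                     # 96-bit nonce, unique per message
    plaintext = b"serious shit"
    aad = b"metadata"                          # authenticated, but not encrypted

    ciphertext = aesgcm.encrypt(nonce, plaintext, aad)
    assert aesgcm.decrypt(nonce, ciphertext, aad) == plaintext
    ```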

    But what about primitives? For every suite, for every protocol, people use the same primitives, which are standardized.

    And ya, they would continue to be. As said above, they have been demonstrated over and over again to work. If they are found not to work, people stop using them (see: SHA1, MD5, DES). It’s funny that, for someone who is “in favor of liberterianism”, you seem to be very poorly informed of examples where private groups and industry are actually doing a very good job of things without government oversight.
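
    To make that lifecycle concrete, here’s a trivial Python sketch: the broken hash functions still ship in standard libraries for checking legacy artifacts, but nothing new gets built on them:

    ```python
    import hashlib

    data = b"example message"

    # MD5 and SHA1 remain available for legacy interop...
    print("md5:   ", hashlib.md5(data).hexdigest())
    print("sha1:  ", hashlib.sha1(data).hexdigest())

    # ...but anything designed today standardizes on the survivors.
    print("sha256:", hashlib.sha256(data).hexdigest())
    ```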

    Overall, you seem to have a very poor understanding of how these standards get created in the modern world. Yes, the US was behind a lot of them. But, as they have been handed over to private (and often international) organizations, they have moved further and further away from US Government control. Now, that isn’t to say that US-based companies don’t have a lot of clout in those organizations. Let’s face it, we are all at the mercy of Microsoft and Google way too often. But, even if those companies fall into irrelevance, the organizations they are part of will likely continue to do what they already do. It’s possible that we’d see a faster balkanization of the internet, something we already see a bit of. Countries like China, Iran, or Russia may do more to wall their people off from US/EU influence, if they don’t have an economic interest in keeping some communications open. Though, it’s just as likely that trade will continue to keep the flow of information as open as possible.

    The major change could really be in language. Without the US propping it up, English may lose its standing as the lingua franca of the world. As it stands right now, it’s not uncommon for two people, neither of whom speaks English natively, to end up conversing in English because that is the language the two of them share. If a new superpower rises, perhaps the lingua franca shifts and the majority of sites on the internet shift with it. Though, that’s likely to be a multi-generational change. And it could be a good thing. English is a terrible language; it’s less a language and more three languages dressed up in a trench coat pretending to be one.

    So yes, there would likely be changes over time. But, it’s likely more around the edges than some wholesale abandoning of standards. And who knows, maybe we’ll end up with people learning to write well-researched and thought-out questions on the internet, and not whatever drivel you just shat out. Na, that’s too much to hope for.






  • Windows 10 released in 2015. Windows 11 released in 2021. That’s pretty much in line with Microsoft’s other desktop OS release cycles.

    • XP -> Vista - about 6 years
    • Vista -> 7 - about 2 years (but everyone sane basically skipped Vista)
    • 7 -> 8 - 3 years, with a fourth year to get to 8.1
    • 8 -> 10 - about 3 years

    If you only look at the releases which mattered, XP -> 7 was 8 years and 7 -> 10 was 6. So, it seems like Microsoft kinda accepted reality this time around and we didn’t get some sort of asinine Windows Mojave shenanigans trying to polish a turd. That said, I’m still running 10 on my main system and my experiences with 11 are making me consider an upgrade path to Linux when Win10 goes EoL.



  • It’s always a “chicken or the egg” situation. Right now, there isn’t much need for a home router with anything faster than a 1Gbps port. In the prosumer space 10Gbps is available, but it’s not super cheap (about $300 with SFP module). But, if something like 50Gbps becomes common, manufacturers will be incentivized to make products for it. The economies of scale and the effects of competition will kick in and prices will come down.

    I’m old. I was at one of the events where Intel announced 1Gbps over copper. This was supposed to be impossible; there was no way to push 1Gbps over Cat-5 cables. But, with Cat-5e and Cat-6, they had cracked it. At the time, there was no way this was ever going to be a cheap technology, and it was intended for large enterprises for major switch interconnect runs. Now it’s everywhere.

    Maybe 50Gbps to the home won’t happen, and this is just some exec blowing smoke. But, maybe they’ll do it and kick off the market for cheaper equipment in that class. While I do agree that we’re lacking the “killer app” to make that much bandwidth to the home necessary, things like music and video streaming only came about after the advent of faster speeds. It wasn’t until we had DSL that people realized that streaming music, in real time, would be a thing. We needed the bandwidth to be there for the use cases to be discovered.