Both Intel and AMD invest a lot in open-source drivers, firmware, and userspace applications, but due to the nature of x86_64’s UEFI, a lot of the proprietary crap is loaded from ROM on the motherboard and as microcode.
I work with SoC suppliers, including Qualcomm, and can confirm: you need to sign an NDA just to get a heavily patched, old, orphaned kernel, often with drivers provided only as precompiled binaries, which prevents you from updating the kernel yourself.
If you want that source code, you also need to pay a lot of money yearly to be a Qualcomm partner, and even then you still might not have access to the sources for all the binaries you use. Even when you do get the sources, don’t expect them to be updated for new kernel compatibility; you’ve gotta do that yourself.
Many other manufacturers do this as well, but few are as bad. The environment is getting better, but proper open-source support still seems to be a feature that many large manufacturers feel they can live without.
I’ve seen some optometry equipment running RHEL.
About a year ago I moved to Hyprland & Wayfire for my NVIDIA & Intel boxes. I moved the NVIDIA box to Radeon a few months back and had mixed results.
Recently tried Plasma 6 for experimental HDR and am impressed.
I am curious why performance cores would go with a no-SMT implementation; the die-area savings are obvious, but how are they going to spend those savings to make up the performance gap?
AVX only really affects a small subset of applications, so I can’t see that going too far.
A better branch predictor could be a boon, but given how good they already are, I’m not sure how they could make up the 50% multithreaded loss.
Perhaps just cramming more physical cores together and a better cache sharing mechanism?
Most firewalls are at their safest when you first get them, i.e. by default they block everything coming in. As you start doing port forwarding and the like, you make the network selectively less secure; that’s when you have to pay attention.
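To make that concrete, here’s a minimal nftables sketch (the interface names and the 192.168.1.10 host are made up for illustration): both default policies drop everything, and each port forward is an explicit hole punched through them.

```
table inet filter {
    chain input {
        # default-deny: nothing unsolicited reaches the router itself
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
        iifname "lan0" accept
    }
    chain forward {
        # default-deny: nothing crosses the router unless explicitly allowed
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "lan0" oifname "wan0" accept
        # each port forward is a deliberate hole in the default posture
        iifname "wan0" ip daddr 192.168.1.10 tcp dport 8080 ct state new accept
    }
}
table ip nat {
    chain prerouting {
        type nat hook prerouting priority -100;
        iifname "wan0" tcp dport 8080 dnat to 192.168.1.10:8080
    }
}
```

Every rule like that 8080 forward is exactly the kind of “selectively less secure” change worth auditing periodically.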
I had an EdgeRouter X for years before I started my job. They are solid devices, and I’d definitely put them above most consumer routers.
Because they only charge for the hardware, they will eventually run into the same disincentive to provide consistent, timely updates. If you do buy a Ubiquiti or similar enthusiast brand, do still keep an eye out for the CVEs that don’t get patched.
I build Linux routers for my day job. Some advice:
- Your firewall should be an appliance first and foremost; you apply appropriate settings and then, other than periodic updates, you leave it TF alone. If your firewall is on a machine that you regularly modify, you will one day change your firewall settings unknowingly. Put all your other devices behind said firewall appliance. A physical device is best, since correctly forwarding everything to your firewall comes under the “will one day unknowingly modify” category.
- Use open-source firewall & routing software such as OpenWrt or pfSense. Any commercial router that keeps up to date and patches security vulnerabilities, you cannot afford.
It opens the door to more manufacturers, since there are no ISA licence fees. While the AMD/Intel duopoly is being fairly competitive at the moment, it really doesn’t have to be. Just think back to how bad it was from the late 2000s to 2015.
I imagine a plethora of core designers, SoC vendors, and platform creators filling their own niches: lowest cost, lowest power, hardware accelerators, highest core count, etc.
I don’t see the raw performance of AMD/Intel being surpassed soon, just because of the sheer number of R&D years each has behind it, but that doesn’t mean there aren’t other areas better suited to a different architectural approach.
I don’t see the problem, I also don’t see how this is a novel situation.
The technical merits of system-level protocols only really affect the user insofar as they make it easier for userspace application writers to build their software. This is why we have the distinction: users never have to change the underlying software, and when they choose to, it’s because everything just works.
Sure, but why open their code without getting the integration benefits?
Likely a combination of 4 things:
1. They have third-party firmware in their blobs whose source code is under NDA.
2. They believe the source code is a large part of their success and don’t want to reveal it.
3. They believe giving out the source code will allow many inferior variants of the software, impacting their brand.
4. Control: the more of their source code lives in Mesa, the more of their code can be rejected by Mesa. Keeping their stuff as blobs lets them put in whatever hacks they want.
NT is not the majority of Windows code, though; for Windows to be multi-architecture, all of Windows needs to work on the new architecture: NT, drivers & userspace.
For Linux, if an existing userspace application doesn’t work on aarch64, somebody somewhere will build a port. For Windows, so much of the stack is proprietary that Microsoft is the only one able to build that port.
Not because “Windows bad”; it’s just a consequence of such a locked-down system, which doesn’t have anything open source to inherit.
Memory safety is likely to prevent a lot of bugs. Not necessarily in the kernel proper; I honestly don’t see it being used widely there for a while.
Third-party drivers are where I see the largest benefit: there are plenty of manufacturers who will build a shitty driver for their device, say it targets Linux 4.19, and then never support or update it. I have seen quite a few third-party drivers through my work and I am not impressed: security flaws, memory leaks, sensible warnings disabled. Having future drivers written in Rust would force these companies to build a working driver that didn’t require months of trawling through to fix issues.
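As a toy illustration of the bug class this kills (plain userspace Rust, nothing like real kernel code): cleanup is tied to ownership, so the classic C-driver mistake of leaking a buffer on an early error return can’t happen silently.

```rust
// Hypothetical example: a buffer whose release is tied to ownership.
struct DmaBuffer {
    data: Vec<u8>, // stand-in for a real device allocation
}

impl DmaBuffer {
    fn new(len: usize) -> Self {
        DmaBuffer { data: vec![0u8; len] }
    }
}

impl Drop for DmaBuffer {
    fn drop(&mut self) {
        // In a real driver this would hand the buffer back to the device pool.
        println!("released {} bytes", self.data.len());
    }
}

fn handle_request(fail: bool) -> Result<(), &'static str> {
    let _buf = DmaBuffer::new(4096);
    if fail {
        // Early return: in C this is where the free() gets forgotten.
        return Err("device timeout"); // _buf is still dropped here
    }
    Ok(()) // ...and dropped here too
}

fn main() {
    let _ = handle_request(true);
    let _ = handle_request(false);
}
```

Every exit path runs `Drop`, no matter how sloppy the error handling is.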
Now that I think about it, in 10 years I’ll probably be complaining about massive unsafe blocks everywhere…
Until it marks you as unlicensed because you used a new motherboard.
I started using Linux maybe 5 years ago, just before DXVK and Proton became a thing. The difference between now and then for gaming is night and day.
If it’s on Steam, there is a pretty good chance it’ll work. If it’s not on Steam, it still might work through Lutris.
There are some holdouts like Riot Games, but I haven’t owned Windows in almost two years.
I’m honestly more worried about science communicators projecting life-changing properties onto it because “superconductor”, and then the public coming away with “these scientists are all hyperbolic hacks”.
This looks a lot less like diamagnetism than the previous videos, but is still using way larger magnets than they should need; still very sceptical.
Also a reminder to anybody reading a news article about this: LK-99 is likely a ceramic, so the attributes specific to metallic superconductors would not apply.
Kernel modules don’t have to be open source provided they follow certain rules, like not using GPL-only symbols. This is the same reason you can use the NVIDIA driver.
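Roughly how that rule shows up in practice, as a skeleton out-of-tree module (hypothetical and deliberately useless): the license string a module declares determines which exported symbols the loader will resolve for it.

```c
// Hypothetical skeleton module; builds against kernel headers with the
// usual obj-m Makefile, and does nothing useful.
#include <linux/init.h>
#include <linux/module.h>

static int __init demo_init(void)
{
	pr_info("demo module loaded\n");
	return 0;
}

static void __exit demo_exit(void)
{
	pr_info("demo module unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);

/*
 * With a non-GPL license string, loading the module taints the kernel,
 * and the module loader refuses to resolve any symbol exported via
 * EXPORT_SYMBOL_GPL(); only plain EXPORT_SYMBOL() symbols are linkable.
 * Swap this for MODULE_LICENSE("GPL") and the GPL-only symbols open up.
 */
MODULE_LICENSE("Proprietary");
MODULE_DESCRIPTION("GPL-only symbol rule demo");
```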
It’s not enforced so much by law as by what the FSF and the Linux Foundation can prove and are willing to pursue; going after a company that size is expensive, especially when it’s a Linux Foundation partner. A lot of major Linux Foundation partners are actively breaking the GPL.