  • Sorry to clarify: updates come as either security updates or feature updates. If I’ve already got a standard operating environment (SOE) with all the features I/staff need to do work, I don’t need new features.

    I then have to watch CVEs with my CVE trackers to know when software updates are needed, and then all devices running that software get updated and the SOE is updated.

    I could go on a rant about how the Linux kernel has recently made my life miserable, since someone’s policy now is that any kernel bug might be a security vulnerability, and therefore I have infinite noise in my CVE feed, which in turn makes decisions on how to mitigate security issues hard. But that is beyond this discussion.

    So in short, I’m only talking about applying security fixes when you update, not new software and features. Live patching security vulnerabilities is pretty much free: low effort, low impact, and in my personal opinion absolutely critical. But feature patching can be disruptive, offers little to be gained, and should really only be driven by a request for that feature, at which point it would also include an update to the SOE.
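
    As a rough illustration of that triage, here’s a minimal sketch of pulling only HIGH/CRITICAL entries for a watched package list from NVD’s public JSON API (the endpoint and field names follow NVD’s 2.0 schema; the package list is made up, and a real tracker would match CPEs rather than keywords):

    ```python
    # Hedged sketch: filter CVE noise down to high-severity hits for the
    # packages actually in the SOE. Uses only the standard library.
    import json
    import urllib.request

    WATCHED = ["nginx", "openssh"]  # hypothetical SOE package list

    def high_severity_cves(keyword):
        url = ("https://services.nvd.nist.gov/rest/json/cves/2.0"
               f"?keywordSearch={keyword}")
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        for item in data.get("vulnerabilities", []):
            cve = item["cve"]
            for metric in cve.get("metrics", {}).get("cvssMetricV31", []):
                if metric["cvssData"]["baseSeverity"] in ("HIGH", "CRITICAL"):
                    yield cve["id"]
                    break

    for pkg in WATCHED:
        for cve_id in high_severity_cves(pkg):
            print(pkg, cve_id)
    ```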





  • They probably have been using it for years. For more than a decade now I’ve been using Ubuntu as my main Linux distribution, since I have work to do and I’ll get to doing work faster in Ubuntu than in any other distribution.

    Why did I start with Ubuntu? 10+ years ago Ubuntu was light years ahead in community support for issues. Again, I had work to do; I wasn’t a hobbyist playing “fuck windows”.

    In fact, look at things like ROS, where you can get going with “apt install ros-noetic-desktop” and build your robotics stuff instantly. Every dependency you need to start, and all the other tooling, is there too. Sure, a bunch of people would now say “use nix”, but my autonomous robotics project doesn’t care; I’m trying to get lidar, cameras, motors, and SLAM algorithms to work. I don’t want to care or think about compiling ROS for some Arch distribution.
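
    To give a sense of how little ceremony that involves, here’s a minimal rospy publisher sketch (the node and topic names are made up; it assumes ros-noetic-desktop is installed and a roscore is running):

    ```python
    # Minimal ROS Noetic publisher: everything below ships with
    # ros-noetic-desktop, no extra compilation needed.
    import rospy
    from std_msgs.msg import String

    rospy.init_node("talker")                 # hypothetical node name
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rate = rospy.Rate(1)                      # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello from noetic"))
        rate.sleep()
    ```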

    I won’t say I don’t dabble with other distributions, but if I’ve got work to do, I’m going to use the tools I already know better than the back of my hand. And at the time, when I was selecting these tools, Ubuntu had it answered, and it has been stable enough to remain essentially unchanged for basically a decade.

    Oh, and if I needed to, I could pay and get support, so the CEO can hear that the risk is gone too (despite almost every other vendor we pay never actually resolving an issue before we find and fix it ourselves… though I do like being able to say “we have raised a ticket with vendor x and are waiting on a reply”).


  • From my perspective, if it’s used for work, automatic security updates should be mandatory. Linux is damn impressive with live patching. With thousands or even tens of thousands of endpoints, it’s negligent not to patch.

    Features? Don’t care. But security updates are essential in a large organisation.
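
    On Debian/Ubuntu fleets, the low-effort version of this is unattended-upgrades; a minimal sketch of switching it on (these are the real apt config keys that dpkg-reconfigure writes, and the stock policy restricts upgrades to security origins, but at fleet scale you’d push this via your config management tool):

    ```python
    # Hedged sketch: enable periodic security upgrades by writing the
    # standard apt periodic config. Must run as root.
    from pathlib import Path

    Path("/etc/apt/apt.conf.d/20auto-upgrades").write_text(
        'APT::Periodic::Update-Package-Lists "1";\n'
        'APT::Periodic::Unattended-Upgrade "1";\n'
    )
    ```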

    The worst part of the Linux fan base is the users who hate forced updates and also don’t believe in AV. OK, on your home network that’s not very risky, but compare that to a corporate network holding the personal information of a million students and staff, often with BYO devices only a network segment away, and APT groups targeting you because they know your reputation is worth something to ransom.


  • One rich company trying to claim money off the other rich companies using its software. The ROI on enforcing these licences will only come from those that really could have afforded to pay, and if they can’t pay, they shouldn’t have built on the framework. Let them duke it out. I have zero empathy for either side.

    The hopeful other side is that, with a “budget” for the licence, a company can consider using that money to weigh up open source contributions and expertise, allowing those projects to have experts who have an income. Even if it’s only a few companies that then hire for the role of porting over and contributing back the features they need, more of that helps everyone.

    The same happened in security: there used to be no budget for it, it was a cost centre. But then insurance providers wouldn’t provide cyber insurance without minimum standards being met (after they lost billions), and now companies suddenly have a budget. Security is thriving.

    When companies value something, because they have to weigh the opportunity cost, they’ll find the money.



  • Hold them all to account, no single points of failure. Make them all responsible.

    When talking about VS Code especially, those users aren’t your mum and dad. They’re technology professionals or enthusiasts.

    With respect to vendors (Microsoft): for too long they have lived off an expectation that it’s always the end user’s or the publisher’s responsibility, not theirs, even when they’re offering a brokering service (a store or whatever). They’ve tried using words like ‘custodian’ when describing the service to further deflect responsibility and fault.

    Vendors of routers, firewalls, and other network-connected IoT for the consumer space are now being legislatively forced to adhere to bare-minimum responsible practices, such as ‘push to change’ configuration updates, automated security firmware updates, and the long-awaited mandatory random password with a reset on first configuration (no more admin/Admin).

    It’s clear this burden will cost those providers. Good. Just as we should take a stance against polluters freely polluting, so too should we make providers take responsibility for reasonable security defaults instead of making the world less secure.

    That then makes it even more the user’s responsibility to own whatever they then do insecurely, since security should be the default by design. Going outside those bounds is at your own risk.

    Right now it’s the Wild West, and telling what is and isn’t secure is a roll of the dice, since it’s just users telling users that they think it’s fine. Are you supposed to just trust a publisher? What if they act in bad faith? That problem needs solving. Once an app/plugin/device has millions of people using it, its reputation is publicly seen as OK even if completely undeserved.

    Hmm rant over. I got a bit worked up.


  • You’re right. Both cloud services (like Microsoft 365, measured by licensing) and Azure are each individually about double Windows. Together they make up over half of Microsoft’s earnings, while Windows is around 16%. Then you’ve got games and LinkedIn and others filling out the smaller percentages.

    Microsoft doesn’t need Windows; you can run your Office 365 off Mac or Linux for all they care. Just host all your virtual workloads on Azure regardless of OS (if they’re not serverless), and they’re fine taking that money.





  • Tailscale can act as a site-to-site VPN, but it’s best used as a mesh VPN, IMO, with as many things as possible in it.

    Why? Because the dynamic DNS (MagicDNS) is so powerful. Every hostname is automatically available on every other Tailscale-joined computer. My NAS (TrueNAS in my case) is just “nas”, so to access it it’s just https://nas. Same with my RustDesk server on https://rustdesk. Jellyfin? You guessed it: https://jellyfin.

    Why is this cool? I moved my box between networks and it just worked again. No IPs changed.

    I take it to work. It just works. I keep one server at my parents’ place? It just works.

    But my printer doesn’t have the ability to join the tailnet, so I use subnet routing to create a node on that network that acts as a NAT router to get to and from the printer (see the sketch at the end of this comment).

    You can even define exit nodes, so if I install Tailscale on my parents’ TV in another state, their internet traffic can exit via my home, which has my IP, and therefore Netflix counts the TV as inside my residence.

    Anyway, just some considerations. I generally use subnet routing as a last resort. My 3-node Proxmox cluster is all joined, and if I took a node to my parents’ it would literally just work, if slower, as a cluster member. Crazy. Very cool.
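
    For the curious, the subnet-routing and exit-node pieces boil down to a couple of CLI flags; here’s a hedged sketch wrapping the real tailscale CLI from Python (the subnet and hostname are made up, each call runs on a different node, and advertised routes still need approval in the admin console):

    ```python
    import subprocess

    # On the node sitting next to the printer: advertise its LAN.
    subprocess.run(
        ["tailscale", "up", "--advertise-routes=192.168.1.0/24"],
        check=True,
    )

    # On the home server: offer to be an exit node.
    subprocess.run(["tailscale", "up", "--advertise-exit-node"], check=True)

    # On the parents' TV: send all traffic out via that exit node.
    subprocess.run(["tailscale", "up", "--exit-node=home-server"], check=True)
    ```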






  • The messaging around this so far doesn’t make me want to follow the fork in production. As a sysadmin, I’m not rushing out to swap my reverse proxy.

    The problem is (and I’m speculating) that it seems the developer was only continuing to develop on the condition that they retained control over nginx decision-making.

    So currently it looks like, as a user of nginx, the CVE registration is protecting me through open communication. From a security aspect, a security researcher probably needs that CVE for it to count as a bug bounty.

    From the developer’s perspective, F5 broke the pact of decision control resting with the developer. But for me, I would rather the CVE be registered and be informed, even if I know my configuration doesn’t use the affected feature.

    Again, I’m assuming a lot here. But I agree with F5: that feature, even in beta, could be running in a dev or test environment. That’s enough reason to want to know.

    Edit: Long term, I don’t know where I’ll land. Personally I’d rather be with the developer, except I need to trust that the solution is open not just in source but in communication. It’s a weird situation.


  • IP and routing are layer 3; broadcast is layer 2, with MAC addresses shared within a broadcast domain (often a VLAN/LAN), and the only requirement for layer 2 is a switch: you don’t need routers. Devices on a LAN talk only via switches, which switch based on MAC address tables. You don’t learn the MAC addresses of devices past your broadcast domain; that’s what a router handles.

    So in network practice (nothing Linux related), if you are on a broadcast network that’s a /24 subnet, what should happen is that all devices within that subnet talk to each other without using a router; instead, each learns a MAC address and the associated IP from an ARP broadcast sent by the device which owns it.

    If you tell your device that it’s only on a /32, then it should discard every ARP it hears as invalid, which means it won’t learn any neighbouring LAN devices.
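
    You can see that on-link decision with Python’s stdlib ipaddress module (the addresses are made up):

    ```python
    import ipaddress

    peer = ipaddress.ip_address("192.168.1.20")

    # With a /32, the "network" contains only the host itself, so the
    # peer is off-link and its ARP broadcasts are useless to us.
    me32 = ipaddress.ip_interface("192.168.1.10/32")
    print(peer in me32.network)   # False

    # With a /24, the peer is inside the broadcast domain, so we can
    # learn its MAC via ARP and talk to it directly.
    me24 = ipaddress.ip_interface("192.168.1.10/24")
    print(peer in me24.network)   # True
    ```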

    Your single device with the /32 probably still works OK for reaching other networks (routed networks like the internet or other VLANs), because traffic to other networks goes via the router, and the router probably learned your MAC and IP on whatever VLAN/interface your device is connected to.

    But unless you’re trying to do something unconventional, devices on a LAN should match the router’s expected subnet. That way devices can trust the assumption that, within their subnet, they communicate with other local devices by learning their network addresses via ARP and talking directly in unicast to the IPs learned from that ARP. If a destination is outside the subnet, they look to the gateway. They trust the gateway, and the gateway should route to the right interface or next hop.

    If you really wanted to make this work, though, routers can usually do proxy ARP. In this case you tell the router to proxy ARP, so it answers broadcasts for your IP; devices on your LAN looking for your IP will find the router’s MAC address, and then, using destination network address translation, you can redirect the incoming connection from a LAN device to your device via the router. Then your /32 IP can probably work.

    Usually this is done when someone has put a static IP with a wrong subnet on a device attached to a VLAN with another subnet, so the device’s ARPs are ignored by the router and the other network devices. Using the router to proxy ARP basically gives the local LAN devices an IP to hit that they expect, which you can then translate to the misconfigured device. This is generally considered a temporary band-aid until a vendor or technician can fix the misconfiguration. I do not recommend it.
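
    For completeness, on a Linux box acting as the router, the band-aid above is roughly a proxy-ARP sysctl plus a DNAT rule. A hedged sketch (the interface name and IPs are made up, it needs root, and again: only as a stopgap):

    ```python
    import subprocess
    from pathlib import Path

    # Answer ARP requests on eth0 on behalf of other addresses (proxy ARP).
    Path("/proc/sys/net/ipv4/conf/eth0/proxy_arp").write_text("1\n")

    # Redirect traffic aimed at the IP the LAN expects to the
    # misconfigured device's actual address.
    subprocess.run(
        ["iptables", "-t", "nat", "-A", "PREROUTING",
         "-d", "192.168.1.50", "-j", "DNAT",
         "--to-destination", "10.0.0.50"],
        check=True,
    )
    ```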