• 37 Posts
  • 1.36K Comments
Joined 2 years ago
Cake day: June 7th, 2023

  • Remote access in devices can be a good thing. The issue is one of control. Given the software-driven nature and complexity of devices, bugs are inevitable. Having a way for the manufacturer to distribute fixes remotely is a good thing, as it lowers costs and makes it more likely the updates actually get deployed. That said, the ability to enable and disable that remote access needs to be in the hands of the customer, not the manufacturer.

    As an example, many years ago I worked for a company which manufactured physical access control systems (think those stinking badges and readers at office buildings). And we had two scenarios come up which illustrate the issue quite well. In the first case, the hardware which controlled the individual doors had a bug which caused the doors to fail unlocked. And based on the age of the hardware the only way to update the firmware was to physically go to the device and replace an EEPROM. I spent a very long day wandering a customer’s site climbing a ladder over and over again. This was slow, expensive and just generally not a great experience for anyone involved. In the second case, there were database issues with a customer’s server. At that time, these systems weren’t internet connected so that route for support didn’t exist. However, we shipped each system with a modem and remote access software. So, the customer hooked up the modem, gave us a number to dial in and we fixed the problem fairly quickly. The customer then unplugged the modem and went about breaking the system again.

    Having a way for the manufacturer to connect and support the system is important. They just shouldn’t have free run of the system at all times. The customer should also be told about the remote support system before buying, and should be able to turn it off. Sure, it’s possible to have reasonably secure remote logins on the internet (see: SSH or VPN), but it’s far more secure to just not have the service exposed at all. How many routers have been hacked because the manufacturers decided to create and leave in backdoors?



  • There are rather a lot of reports of heads remaining conscious for up to 30 seconds or so after being separated from their body.

    Given the rather precipitous drop in blood pressure going to the brain, this claim seems pretty dubious. Twitching and motion would certainly be possible as autonomic functions go haywire, but actual consciousness seems far fetched.

    At the same time, a shotgun to the back of the head doesn’t have that issue, although it does make a bit more of a mess.

    If I had to choose, I’d probably pick this over the guillotine as well. Seems like a lot less setup time and general anticipation.
    Overall, inert gas asphyxiation might be the better choice (assuming one is forced into it).


  • The main thing I have from that time is several large boxes hanging about taking up shelf space and a burning hatred of MMOs. My wife and I got into WoW during late Vanilla. We stood in line at midnight to get the collector’s edition box for WotLK and later again for Cataclysm (we weren’t that far gone when The Burning Crusade released). Shortly after Cataclysm released, there was the Midsummer Fire Festival, and as we were playing through it, we hit that wall where further progress was locked behind “Do these daily quests 10,000 times to progress,” and the whole suspension of disbelief just came crashing down. I had already hated daily quests and the grindy elements of the game, but at that moment I just said, “fuck this” and walked away.

    I do look back fondly on some of the good times we had in the game. Certainly in Vanilla there was some amazing writing and world crafting. We met some good people and had a lot of fun over the years and I don’t regret the time or money spent. However, one thing it taught me is just how pointless MMOs are. They are specifically designed to be endless treadmills. And this can be OK, so long as the treadmill itself is well designed and fun. But, so many of the elements exist just to eat time. Instead of being fun, they suck the fun out of the game and turn it into a job.

    We even tried a few other MMOs after that point (e.g. Star Wars) just because we wanted something to fill that niche in our gaming time. But invariably, there would be the grind mechanics which ruined the game for us. Or worse yet, pay to win mechanics where the game would literally dangle offers of “pay $X to shortcut this pointless grind” (ESO pops to mind for this). If the game is offering me ways to pay money to not play the game, then I’ll take the easier route and not play the game at all, thank you very much.

    So ya, WoW taught me to hate MMOs and grinding in games. And that’s good, I guess.




  • What you are trying to do is called P2V, for Physical to Virtual. VMware used to have tools specifically for this. I haven’t used them in a decade or more, but they likely still work. That should let you spin up the virtual system in VMware Player (I’d test this before wiping the drive) and you can likely convert the resulting VM to other formats (e.g. VirtualBox). Again, test it out before wiping the drive; nothing sucks like discovering you lost data because you just had to rush things.
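    If the P2V step leaves you with a VMDK disk image, one way to get it into VirtualBox is to convert it with qemu-img. This is only a rough sketch, assuming qemu-img is installed; the file names are placeholders for whatever the P2V tool actually produces:

    ```python
    # Hypothetical helper: convert a VMDK produced by a P2V tool into
    # VirtualBox's VDI format by shelling out to qemu-img.
    # "converted-machine.*" are placeholder file names.
    import subprocess

    src = "converted-machine.vmdk"  # assumed output of the P2V conversion
    dst = "converted-machine.vdi"   # VirtualBox-native disk format

    subprocess.run(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "vdi", src, dst],
        check=True,  # fail loudly if the conversion goes wrong
    )
    ```

    VirtualBox can usually attach the VMDK directly as well, so the conversion is optional. Either way, keep the original machine intact until you’ve booted the result at least once.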



  • It would be interesting to see someone with the background to understand the arguments involved in the paper give it a good review.

    That said, I’ve never bought the simulation hypothesis, on the simple grounds of compute resources. Part of the argument tends to be the idea of an infinite recursion of simulations, making the possible number of simulations infinite. This has one minor issue: where are all those simulations running? If the top level (call it U0 for Universe 0) is running a simulation (U1) and that simulation decides to run its own simulation (U2), where is U2 running? While the naive answer is U1, this cannot actually be true. U1 doesn’t actually exist; everything it is doing is actually being run up in U0. Therefore, for U1 to think it’s running U2, U0 needs to simulate U2 and pipe the results into U1. And this logic continues for every sub-simulation run. They must all be simulated by U0. And while U0 may have vast resources dedicated to their simulation, they do not have infinite resources and would have to limit the number of sub-simulations which could be run.
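    To put some toy numbers on it: since every nested universe is really just more work for U0’s hardware, a finite budget caps how deep the stack can go. The figures below are made up purely for illustration:

    ```python
    # Toy illustration: the cost of every nested simulation lands on U0,
    # so a finite budget bounds the number of levels. Numbers are arbitrary.
    U0_BUDGET = 1_000        # total resources U0 devotes to simulating
    COST_PER_LEVEL = 100     # cost for U0 to simulate one universe

    def cost_on_u0(levels: int) -> int:
        """Resources U0 pays to run `levels` nested simulations,
        since U1, U2, ... have no hardware of their own."""
        return levels * COST_PER_LEVEL

    depth = 0
    while cost_on_u0(depth + 1) <= U0_BUDGET:
        depth += 1

    print(f"U0 can afford at most {depth} nested simulations")  # 10, not infinity
    ```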






  • Location: ~87% of respondents are from Canada

    As others mentioned, this would be an interesting data point to validate. I’m not familiar with the server side of Lemmy, but does the server provide any logs which could be used with GeoIP to get a sense of the relative number of connections from different countries? While there is likely to be some misreporting due to VPN usage and the like, it’s likely to be a low enough number of connections to be ignored as “noise” in the data. Depending on the VPNs in question, it may also be possible to run down many of the VPN IP addresses in the connection logs and report “VPN user” as a distinct category. This would also be interesting to see broken out by instance (e.g. what countries are hitting lemmy.world versus lemmy.ml versus lemmy.ca etc.).
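    For whoever runs the server, here’s a rough sketch of the idea, assuming an nginx-style access log where the client IP is the first field and a local copy of MaxMind’s free GeoLite2 country database (the file paths are placeholders, not anything Lemmy ships):

    ```python
    # Tally connection counts per country from a web server access log
    # using the geoip2 library (pip install geoip2) and a GeoLite2 database.
    from collections import Counter

    import geoip2.database
    import geoip2.errors

    counts = Counter()
    with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader, \
            open("access.log") as log:
        for line in log:
            fields = line.split()
            if not fields:
                continue
            ip = fields[0]  # client IP in common/combined log format
            try:
                country = reader.country(ip).country.iso_code or "Unknown"
            except (geoip2.errors.AddressNotFoundError, ValueError):
                country = "Unknown"
            counts[country] += 1

    for country, n in counts.most_common(10):
        print(country, n)
    ```

    It wouldn’t catch people behind VPNs, but it would at least give a baseline to compare the survey against.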

    All that said, thank you for sharing. These sorts of exercises can be interesting to understand what a population looks like.




  • My bet is on it never getting completed. It’s going to be a running grift over the next few years. There will be delay after delay after delay, with multiple “independent” contractors rolling through to deal with whatever the current delay is. Those contractors will be chosen via a competitive bid process, with the company bidding the highest kickbacks to Trump being awarded the contract. At the end of the Trump administration, anything actually constructed on the grounds will need to be torn down due to engineering failures and the multitudes of bugs planted by foreign spy agencies.





  • If the goal is stability, I would have likely started with an immutable OS. That provides some assurance that the base OS stays in a known good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self-contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list: it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but that has risks. By using something like docker-compose and tying services to the “:latest” tag, images would auto-update. However, it’s possible to have stacks where a breaking change is made in one service before another service is able to deal with it. So, I tend to pin things to specific versions and update those manually. Finally, while I really like AppImages, updating them is 100% manual.

    This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there is not a Flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff, but you don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
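    As a sketch of what that might look like (the base image, package name and command here are placeholders, not a recommendation for any particular software):

    ```dockerfile
    # Minimal example Dockerfile for wrapping a single package in its own container.
    FROM debian:12-slim

    # Install only what this one package needs; the host OS is never touched.
    RUN apt-get update \
        && apt-get install -y --no-install-recommends some-package \
        && rm -rf /var/lib/apt/lists/*

    # Don't run as root inside the container either.
    RUN useradd --create-home appuser
    USER appuser

    CMD ["some-package", "--serve"]
    ```

    A handful of lines like that keeps the package’s dependencies completely out of the base system, which is the whole point.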


  • An economy is really just a way to distribute finite resources in a world with infinite wants. Even the most egalitarian of systems is going to require deciding who gets something and who doesn’t (winners and losers). It’s perfectly valid to be frustrated by being on the “doesn’t” end of that equation. And we (the US and other Western democracies) could certainly do a lot more to shift some of the resources away from the few who are hoarding a lot of them, even without a radical “tear the system down” approach. The difficulty is the political will to do so.

    Unfortunately, mustering political will for a collective good which may come with some individual losses can be a tough sell, especially when large parts of a population are comfortable. Not only do you have to convince people that the collective good is an overall good for them, you also have to convince them that the individual losses either won’t affect them or will be mitigated by the upsides of the collective good. And given people’s tendency to weigh short-term risks more heavily than long-term ones, this can be especially hard. But that doesn’t mean you should give up, just that you need to sharpen your arguments and find ways to convince more people that things can be better for them, if they are willing to take that step.