

As someone with this exact size/model of TV, you are absolutely correct. An AVR and speakers were just assumed to be part of how it was going to be used. I would have preferred it didn't have built-in speakers at all.
Btrfs is a copy-on-write (COW) filesystem, which means that whenever you modify a file it can't be modified in place. Instead, a new block is written, and then a single atomic operation flips that new block to become the location of that data.
This is a really good thing for protecting your data from things like power outages or system crashes, because the data is always in a good state on disk. Either the update happened or it didn't; there is never any in-between.
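A rough way to picture that atomic flip is the classic write-new-then-swap pattern. Here is a minimal Python sketch at the file level rather than the block level, just to illustrate the failure behaviour (btrfs does the equivalent with block pointer updates in its metadata, not renames):

```python
import os

def cow_style_update(path: str, new_data: bytes) -> None:
    """Write the new data somewhere else, then atomically swap it in.

    Mirrors the COW idea at the file level: the old contents stay intact
    until the final rename, so a crash leaves either the old version or
    the new one on disk, never a half-written mix.
    """
    tmp_path = path + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(new_data)
        f.flush()
        os.fsync(f.fileno())      # make sure the new copy has hit the disk
    os.replace(tmp_path, path)    # the single atomic "flip"
```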
While COW is good for data integrity, it isn't always good for speed. If you are doing lots of updates that are smaller than a block, you first have to read the rest of the block and then seek to a new location and write out the new block. On SSDs this isn't an issue, but on HDDs it can slow things down and fragment your filesystem considerably.
Btrfs has a defragmentation utility though, so fragmentation is a fixable problem. If you were using ZFS there would be no way to reverse that fragmentation.
Other filesystems like ext4/XFS are "journaling" filesystems. Instead of writing new blocks or updating each block immediately, they keep the changes in memory and write them to a "journal" on the disk. When there is time, those changes from the journal are flushed to the disk to make the actual changes happen. Writing the journal to disk is a sequential operation, making it more efficient on HDDs. In the event that the system crashes, the filesystem replays the journal to get back to the latest state.
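As a toy illustration of the journaling idea (nothing like ext4's actual implementation), here is a hypothetical write-ahead journal in Python: intended changes are appended sequentially to the journal first, applied later, and replayed after a crash:

```python
import json
import os

JOURNAL = "journal.log"  # hypothetical journal file for this toy example

def journaled_write(path: str, data: str) -> None:
    # 1. Record the intended change in the journal (a cheap sequential append).
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"path": path, "data": data}) + "\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. Apply the change to its real location whenever there is time.
    with open(path, "w") as f:
        f.write(data)

def replay_journal() -> None:
    # After a crash, re-apply every journaled change to reach the latest state.
    if not os.path.exists(JOURNAL):
        return
    with open(JOURNAL) as j:
        for line in j:
            entry = json.loads(line)
            with open(entry["path"], "w") as f:
                f.write(entry["data"])
    os.remove(JOURNAL)  # safe to clear once everything has been applied
```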
ZFS has a journal equivalent called the ZFS Intent Log (ZIL). You can put the ZIL on fast SSDs while the data itself lives on your HDDs. This also helps with the fragmentation issues for ZFS, because ZFS writes incoming data to the ZIL and then flushes it to disk every few seconds. That means fewer, larger writes to the HDDs.
Another downside of COW is that, because the filesystem is assumed to be so good at preventing corruption, in some extremely rare cases where corruption does get written to disk you might lose the entire filesystem. There are lots of checks in software to prevent that from happening, but occasionally hardware issues may let the corruption past.
This is why anyone running ZFS/btrfs for their NAS is recommended to run ECC memory. A random bit flip in RAM might mean the wrong data gets written out, and if that data is part of the filesystem's own metadata, the entire filesystem may be unrecoverable. This is exceedingly rare, but it is a risk.
Most traditional filesystems, on the other hand, were built assuming they had to clean up corruption from system crashes, etc. So they have fsck tools that can go through and recover as much as possible when that happens.
Lots of other posts here talk about other features that make btrfs a great choice. If you were running a high-performance database, a journaling filesystem would likely be faster, but maybe not by much, especially on SSDs. For an end-user system the snapshots/file checksumming/etc. are far more important than a tiny bit of performance. For the potential corruption issues, if you are lacking ECC, backups are the proper mitigation (and as of DDR5, on-die ECC is in all RAM sticks).
Agreed. The nonstandard port helps too. Most script kiddies aren’t going to know your service even exists.
Take it a step further and remove the default backend on your reverse proxy so that requests to anything but the correct DNS name are dropped (bots are just probing IPs), and you basically don't have to worry at all. Just make sure to keep your reverse proxy up to date.
The reverse proxy ends up enabling security through obscurity, which shouldn't be your only line of defence, but it is an effective first line of defence, especially for anyone who isn't the target of foreign-government-level attacks.
Adding basic auth to your reverse proxy endpoints extends that a whole lot further. Form-based logins on your apps might be a lot prettier, but it's a lot harder to probe for what's running behind your proxy when every single URI just returns 401. I trust my reverse proxy doing basic auth a lot more than I trust some PHP login form.
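To show what that looks like from a scanner's point of view, here is a minimal Python sketch of a gate that answers every URI with 401 until valid basic auth is presented (in practice you would just use your reverse proxy's built-in basic auth, e.g. nginx's auth_basic; the credentials below are made up):

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical credentials for the example only.
EXPECTED = "Basic " + base64.b64encode(b"admin:correct-horse-battery").decode()

class AuthGate(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Authorization") != EXPECTED:
            # Every path looks identical to a probe: just a 401 challenge.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="home lab"')
            self.end_headers()
            return
        # An authenticated request would be proxied to the real app here.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello, authenticated user\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AuthGate).serve_forever()
```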
I always see posts on Lemmy about elaborate VPN setups as the only way to access internal services, but that seems like awful overkill to me.
A VPN is still needed for some things that are inherently insecure or that just should never be exposed to the outside, but if it is a web service that requires authentication, a reverse proxy is plenty of security for a home lab.
You are paying for reasonably well polished software, which for non technical people makes them a very good choice.
They have one-click module installs for a lot of the things that self-hosted people would want to run. If you want Plex, a OneDrive clone, photo sync on your phone, etc., just click a button and they handle installing and most of the maintenance of running that software for you. Obviously these are available on other open source NAS appliances now too, so this isn't much of a differentiator for them anymore, but they were one of the first to do this.
I use them for their NVR which there are open source alternatives for but they aren’t nearly as polished, user friendly, or feature rich.
Their backup solution is also reasonably good for some home lab and small business use cases. If you have a VMware lab at home, for instance, it can connect to your vCenter and do incremental backups of your VMs. There is an agent for Windows machines as well, so you can keep laptops/desktops backed up.
For businesses there are backup options for Office 365/Google Workspace where it can keep backups of your email/calendar/OneDrive/SharePoint/etc. So there are a lot of capabilities there that aren't really well covered by open source tools right now.
I run my own built NAS for mass storage because anything over two drives is way too expensive from Synology and I specifically wanted ZFS, but the two drive units were priced low enough to buy just for the software. If you want a set and forget NAS they were a pretty good solution.
If their drives are reasonably priced maybe they will still be an okay choice for some people, but we all know the point of this is for them to make more money, so that is unlikely. There are alternatives like QNAP, but unless you specifically need one of their software components, either build it yourself or grab one of the open source NAS distros.
“The party told you to reject the evidence of your eyes and ears. It was their final, and most essential command.” - George Orwell, 1984
They stick 9.81 m/s² in for the acceleration, so that is presumably gravity.
Often in dry pipe setups there is still a stopper in all of the sprinkler heads that has to melt to let the water out. This is common in places like datacenters or other places where accidentally hitting the sprinkler head would cause major damage from the water.
Basically smoke/heat detectors trigger the pipe to fill, then heat from the fire releases sprinklers wherever it is hot enough to melt the stopper.
But I suppose there are cases where the fire might be expected to spread so fast that they don’t put the stoppers in and just let all of them go.
I've had one of these 3D-printed keys in my wallet for 5 years now as a backup in case I get locked out. I certainly don't use it often, but yeah, it holds up fine.
The couple of times I have used it, it worked fine, but you certainly want to be a little extra careful with it. My locks are only 5-ish years old, so they all turn rather easily, and I avoid the door with the deadbolt when I use it because that would probably be too much for it.
Mine is PETG, but at that thickness it flexes a lot. I figured flexing is better than snapping off, but I think PLA or maybe a polycarbonate would work better. A nylon would probably be too flexible, like the PETG.
Netflix already had Dolby Vision and HDR; this is just adding HDR10+. HDR10+ is similar to Dolby Vision in that it gives your TV dynamic metadata for the HDR, constantly adjusting the min/max brightness of each scene.
For dynamic metadata, Dolby Vision support is much more common in TVs; some brands like LG don't have any support for HDR10+ even in their high-end TVs.
I am pretty sure from a content perspective Dolby Vision is also much more prevalent. It does look like most streamers support HDR10+, but I don’t think much of their content is available in HDR10+.
Anyway, still a good change. HDR10+ is royalty-free, unlike Dolby Vision, and it is backwards compatible with regular HDR10 TVs.
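A toy way to see why per-scene metadata matters (made-up numbers, not a real tone-mapping algorithm): with static HDR10 metadata every scene is scaled against the brightest scene in the whole film, while dynamic metadata lets dim scenes through untouched:

```python
# Hypothetical per-scene peak brightness values, in nits.
scene_peaks = [350, 900, 4000, 120]
display_peak = 800  # what this particular TV can actually reach

# Static metadata (plain HDR10): one number for the whole film, so every
# scene gets compressed as if it were the 4000-nit one.
global_peak = max(scene_peaks)
static_scale = [display_peak / global_peak for _ in scene_peaks]

# Dynamic metadata (HDR10+/Dolby Vision): per-scene values, so only the
# scenes that actually exceed the display get compressed.
dynamic_scale = [min(1.0, display_peak / peak) for peak in scene_peaks]

print(static_scale)   # [0.2, 0.2, 0.2, 0.2]
print(dynamic_scale)  # [1.0, 0.888..., 0.2, 1.0]
```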
In areas that don't have variable rates, like where I am, it is just a straight discount per kWh no matter when you use the power.
However, the power company puts in a separate meter which gets this lower electric rate for the things you want on the off-peak service (the charger in this case). That meter has a unit that they can remotely control to cut the power whenever they choose.
So when the power company sees that their grid is nearing capacity, they start shutting off customers' off-peak meters for a couple of hours at a time. This usually happens in the middle of the night in winter when it is really cold, or in the mid to late afternoon in the summer when it is really hot.
Traditionally this was for homes with electric heat. The power company would only allow it when you had a second heat source like a furnace, the point being that they are effectively shifting you from electric heat to some sort of fossil fuel. A lot of homes from before the 70s/80s had multiple heat sources because fuel shortages forced a lot of homeowners to add electric heat, but they still had oil furnaces they could fall back on.
Also, many of these chargers are installed on off-peak meters so that you can get a few cents per kWh off. In the winter, in cold areas like Minnesota, peak shaving happens in the middle of the night because many homes are on electric heat.
So if it is cold enough for the electric company to be peak shaving, you may lose several hours of charging through the night.
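For a rough sense of the trade-off, here is a back-of-the-envelope calculation with made-up rates (check your own utility's numbers):

```python
# Illustrative numbers only: actual rates vary by utility.
standard_rate = 0.13       # $/kWh on the regular meter
offpeak_rate = 0.08        # $/kWh on the off-peak meter
monthly_ev_charging = 300  # kWh of charging per month

savings = (standard_rate - offpeak_rate) * monthly_ev_charging
print(f"~${savings:.2f}/month saved")  # ~$15.00/month at these rates
```

Whether that is worth occasionally losing a few hours of overnight charging is the real question.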
All of the modern YubiKeys (and it looks like the Nitrokeys as well) can have FIDO2 enabled so that you can use them as a hardware token for sites that support passkeys. I think YubiKeys come with only OTP enabled, so you need to download their utility to enable the other modes.
If you are a Linux user (that's required to be on Lemmy, right?) you can use either the FIDO2 or CCID (smart card through PKCS#11) mode to keep SSH keys protected. The FIDO2 SSH key type (ed25519-sk) hasn't been around that long, so some services might not support it. The PKCS#11 route gives you a normal RSA key, but it is harder to get set up, and there is no way to verify user presence if you want that extra security. With FIDO2 you can optionally require a physical touch of the key after entering the PIN.
There are also PKCS#11 and FIDO2 PAM modules, so you can use it as a way to log in/sudo on your system with an easy-to-use PIN.
And if you have a LUKS-encrypted volume, you can unlock that volume with your PIN at boot using either PKCS#11 or FIDO2.
Unlocking LUKS2 volumes with TPM2, FIDO2, PKCS#11 Security Hardware on systemd 248
If you are on an Ubuntu-based distro, initramfs-tools doesn't build the initramfs with the utilities required for doing that. The easiest way to fix that is to switch to dracut.
Dracut is officially “supported” on 24.10 and is planned to be the default for Ubuntu 25.10 forward, but it can work on previous versions as well. For 24.04 I needed hostonly enabled and hostonly_mode set to sloppy. Some details on that in these two links:
https://discourse.ubuntu.com/t/please-try-out-dracut/48975
So a single hardware token can handle your passkeys, your ssh keys, computer login, and drive encryption. Basically you will never have to type a password ever again.
If your NAS has enough resources, the happy(ish) medium is to use your NAS as a hypervisor. The NAS can be on the bare hardware or in its own VM, and the containers can have their own VMs as needed.
Then you don’t have to take down your NAS when you need to reboot your container’s VMs, and you get a little extra security separation between any externally facing services and any potentially sensitive data on the NAS.
Lots of performance trade-offs there, but I tend to want to keep my NAS on more stable OS versions, and then the other workloads can be more bleeding-edge/experimental as needed. It is a good mix if you have the resources, and having a hypervisor for test VMs is always useful.
If you have Ethernet cables in your pile that are old or have damaged ends, just sacrifice them to make your own cable ties. Cut one into pieces as long as you need to wrap your other cables, and each section you cut gives you four twist ties (one per twisted pair inside the jacket).
Cheap, readily at hand, and if the cables were bad you can call it recycling.
If you are just using a self-signed server certificate, anyone can connect to your services. Many browsers/applications will fail to connect or give a warning, but it can be easily bypassed.
Unless you are talking about mutual TLS authentication (aka mTLS or two-way SSL). With mutual TLS, in addition to the server key+cert you also have a client key+cert for your client, and you set up your web server/reverse proxy to only allow connections from clients that can prove they have that client key.
So in the context of this thread, mTLS is a great way to protect your externally exposed services. Mutual TLS should be just as strong a protection as a VPN, and in fact many VPNs use mutual TLS to authenticate clients (i.e. if you have an OpenVPN file with certs in it instead of a pre-shared key), so they are doing the exact same thing. Why not skip all of the extra VPN steps and set up mTLS directly to your services?
mTLS prevents any web requests from getting through before the client has authenticated, but it can be a little complicated to set up. In reality, basic auth at the reverse proxy with a sufficiently strong password is just as good, and is much easier to set up and use.
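For a concrete sense of the enforcement, here is a minimal Python sketch of a TLS server that refuses any client without a certificate signed by your own CA. The file names are placeholders, and in practice you would set the equivalent options on nginx/Traefik rather than write your own listener:

```python
import socket
import ssl

# Placeholder paths: your server's key/cert plus the CA that signs client certs.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="client-ca.crt")
context.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        # A client without a valid cert fails the TLS handshake right here,
        # before any HTTP request ever reaches the application.
        conn, addr = tls_listener.accept()
        print("client cert subject:", conn.getpeercert().get("subject"))
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()
```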
Here are a couple of relevant links for nginx. Traefik and many other reverse proxies can do the same.
Assuming you are in the US and on Android, check out NOAA Weather Unofficial.
Ad-supported free version; the pro version I think is only $2, so it isn't unreasonable.
The daily forecast page appears to match the detailed daily descriptions, but the really powerful part is the hourly chart.
A bit cluttered if you aren't used to it, but you do get to pick which metrics you want to see. Feels like the old WeatherSpark days before Flash got killed off.
The radar section isn’t anything special so I use a separate app for that (MyRadar). I also have weather alerts running through MyRadar so I can’t say much about alert functionality in this app.
The biggest question is, are you looking for Dolby Vision support?
There is no open source implementation of Dolby Vision or HDR10+, so if you want to use those formats you are limited to Android/Apple/Amazon streaming boxes.
If you want to avoid the ads from those devices, then apart from sideloading APKs to replace home screens or something, the only way to get Dolby Vision with Kodi/standard Linux is to buy a CoreELEC-supported streaming device and flash it with CoreELEC.
List of supported devices here
CoreELEC is Kodi based so it limits your player choice, but there are plugins for Plex/Jellyfin if you want to pull from those as back ends.
Personally, it is a lot easier to just grab the latest-gen Onn 4K Pro from Walmart for $50 and deal with the Google TV ads (I never leave my streaming app anyway). The only downside with the Onn is the lack of Dolby TrueHD/DTS-HD Master Audio output, but it handles AV1 and more Dolby Vision profiles than the Shield does at a much cheaper price. It also handles HDR10+, which the Shield doesn't, but that format isn't nearly as common and many of the big TV brands don't support it anyway.
I've got about 30 Z-Wave devices, and at first the idea of the 900 MHz mesh network sounded like a really solid solution. After running them for a few years now, if I were doing it again I would go with wifi devices instead.
I can see some advantages to the mesh in a house lacking wifi coverage. However, I would guess most people implementing Zigbee/Z-Wave probably have a pretty robust wifi setup. But if your phone doesn't have great signal across the entire house, a light switch inside a metal box in the wall is going to be worse.
Z-Wave is rather slow because it is designed for reliability, not speed. Not that it needs to be fast, but when rebooting the controller it can take a while for all of the devices to be discovered, and if a device goes missing things break down quickly and the entire network becomes unresponsive even if there is another path in the mesh. Nothing is worse than hitting one of your automations and having everything hang, leaving you in the dark, because one outlet three rooms over is acting up.
It does have some advantages, like devices can be tied to each other (i.e. a switch tied to a light) and they will keep working even without your hub being up and running (I think even the Z-Wave controller can be down).
Z-Wave/Zigbee also guarantee some level of compatibility/standardization. A light switch is a light switch; it doesn't matter which brand you get.
On the security front, Z-Wave has encryption options, but they slow down the network considerably. Instead of just sending a message out onto the network, it has to negotiate the encrypted connection with a couple of back-and-forth packets each time it wants to send a message. You can turn encryption on per device, and because of the drawbacks the recommendation tends to be to only encrypt important things like locks and door controls, which isn't great for security.
For Z-Wave, 900 MHz is an advantage (sometimes). 900 MHz can be pretty busy in densely populated areas, but so can 2.4 GHz for Zigbee/wifi. If you have an older house with metal boxes for switches and plaster walls, the mesh and the 900 MHz penetration/range may be an advantage.
In reality though, I couldn't bridge reliably to my garage about thirty feet away, and doing so made me hit Z-Wave's four-hop limit, so I couldn't use that bridge to connect any additional devices further out. With wifi devices, connecting back to the house with a wifi bridge, a buried Ethernet cable, etc. can extend the network much more reliably. I haven't tried any of the latest generations of Z-Wave devices, which are supposed to have higher range.
The main problem with wifi devices is that they are often tied to the cloud, but a good number of them can be controlled over just your LAN. Each brand tends to have its own APIs/protocols though, so you need to verify compatibility with your smart hub before investing.
So if you go the wifi route, make sure your devices are compatible, and specifically check that your devices can be controlled without a cloud connection. It is especially good to look for devices like Shelly that allow flashing of your own firmware or have standardized connection methods in their stock firmware (Shelly supports MQTT out of the box).
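As an example of what LAN-only control can look like, here is a hedged sketch using the paho-mqtt library to toggle a Shelly relay through a local broker. The broker address and device ID are made up, and the topic follows Shelly's Gen1 MQTT scheme as I understand it, so check your device's docs:

```python
import paho.mqtt.publish as publish

# Hypothetical local broker and device ID; nothing here leaves your LAN.
BROKER = "192.168.1.10"
DEVICE_ID = "shelly1-aabbcc"

# Gen1 Shelly relays listen for "on"/"off" on <prefix>/relay/<n>/command.
publish.single(
    topic=f"shellies/{DEVICE_ID}/relay/0/command",
    payload="on",
    hostname=BROKER,
)
```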
All of the “snooping” is self contained. You run the network controller either locally on a PC, or on one of their dedicated pieces of hardware (dream machine/cloud key).
All of the devices connect directly to your network controller, no cloud connections. You can have devices outside of your network connected to your network controller (layer 3 adoption), but that requires port forwarding so again it is a direct connection to you.
You can enable cloud access to your network controller’s admin interface which appears to be some sort of reverse tunnel (no port forwarding needed), but it is not required. It does come in handy though.
As far as what "snooping" there is: there is basic client tracking (IP/MAC/hostnames) to show what is connected to your network. The firewall can track basics like bandwidth/throughput, and you can enable deep packet inspection, which classifies internet destinations (streaming/Amazon/Netflix sorts of categories). I don't think that classification reaches out to the internet, but that probably needs to be confirmed.
All of their devices have an SSH service which you can login to and you have pretty wide access to look around the system. Who knows what the binaries are doing though.
I know some of their WISP (airMAX) hardware for long-distance links has automatic crash reporting built in, which is opt-out. There is a pop-up to let you know when you first log in. No mention of that on the normal UniFi hardware, but they might have it running in the background.
I really like their APs, and having your entire network in the network controller is really nice for visibility, but my preference is to build my own firewall that I have more control over and then use UniFi APs for wireless. If I were concerned about the APs giving out data, I know I could cut that off at the firewall easily.
A lot of the UniFi APs can have OpenWRT flashed onto them, but the latest Wi-Fi 7 APs might be too locked down.
If they fire the “individual leaders” (aka supervisors/low level managers) they don’t have anyone to police the workers to make sure they are actually working.
So they make them go back to the office to be sure their workers are physically at their computers all day, even if they no longer have any idea if the work is getting done.