  • You ever see those Wired videos where they talk about a concept on five different levels ranging from beginner to expert?

    The first level answer is likely that, yes, you’re reasonably secure in your current setup. That’s true, but it’s also really simplified and it skips a lot of important considerations. (For example, “secure against what?”) One of the first big realizations that hit me after I’d been running servers for a little while and trying to chase security is the idea of a threat model. What protects me from a script kiddie trying to break into one of my web servers won’t do much for me against a phishing attack.

    The more you do this, though, the more I think you’ll realize that security is more of a process than an actual state you can attain.

    It sounds like you’re doing a good job moving cautiously and picking things up at each step. If the next step is remote access, you’ve got a pretty good situation for a mesh VPN like Tailscale, Netbird, or ZeroTier. They’ll help you deal with the CGNAT, and each one gives you a decent growth path: start out on a free tier and, if you outgrow it later, either buy into the product or self-host it.
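    If you go the Tailscale route, getting a box onto the tailnet is roughly this (Netbird and ZeroTier have similarly short installs; treat it as a sketch, not gospel):

        curl -fsSL https://tailscale.com/install.sh | sh   # official install script
        sudo tailscale up                                   # authenticate this machine to your tailnet
        tailscale ip -4                                     # the address you'd use for remote access, CGNAT or not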



  • It sure will handle a remote VPS; it’s just not as automatic to set up as it is with PVE.

    I put this off for a long time, but I finally did it this weekend.

    Basically, you install the proxmox-backup-client utility and then run it via cron or a systemd timer to do the backup however often you want.

    You’re responsible for getting the VPS to communicate with your backup server (like pretty much any self-hosted service), so some sort of VPN between them would be good. I used NetBird for that part and I have a policy that allows access from the client to PBS only on TCP port 8007.
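    For what it’s worth, the command the timer ends up running is short. Here’s a rough sketch; the hostname, datastore name, and user are placeholders for whatever you set up on your PBS box:

        # PBS reachable over the VPN as pbs.example.lan, with a datastore named "vps" (both made up)
        export PBS_REPOSITORY='backup@pbs@pbs.example.lan:vps'
        export PBS_PASSWORD='your-api-token-or-password'
        proxmox-backup-client backup root.pxar:/
        # wrap that in a small script and point cron or a systemd timer at it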


  • I’ve been quite happy with Proxmox Backup Server. I’ve had it running for years and it’s been pretty solid for all my VMs/containers. There’s also a bare-metal client, which I’m adding to a couple of cloud VPS machines this weekend. We’ll see how that goes.

    Also, since it’s just Debian under the hood, I use the PBS host as a replication target for my ZFS datasets via sanoid/syncoid.
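    The syncoid half of that is basically a one-liner from cron (dataset and hostname made up here; sanoid handles the snapshot schedule on the source side):

        # push local snapshots of tank/appdata to a pool on the PBS host over SSH
        syncoid -r tank/appdata root@pbs.example.lan:backup/appdata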


  • With this concept in mind, I recently put together a VDI setup for a person who spends half the year in one location and the other half in another. The idea is he’ll have a thin client at each location and connect to the same session wherever he is.

    I’m doing this via a VM on Proxmox and SPICE. Maybe there’s some idea in there you could use.
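    The Proxmox side of that isn’t much more than giving the VM a SPICE-capable display; the VM ID here is made up:

        qm set 101 --vga qxl
        # the thin client then opens the .vv file from the web UI's Console > SPICE option
        # (or from the API) with remote-viewer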




  • Hey, sorry for the late response—I missed the reply coming in.

    I like Docker volumes for multiple nodes because there’s no guarantee that different systems will have the same directory structure to bind-mount, and moving volumes between nodes is relatively straightforward config-wise, which is a reason you’d use them in k8s.
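    A quick illustration of the difference, with made-up names and paths:

        # bind mount: the host path has to exist (and match) on whatever node runs the container
        docker run -v /srv/app/data:/data some-image
        # named volume: Docker manages where it lives, so the host's directory layout doesn't matter
        docker volume create app-data
        docker run -v app-data:/data some-image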

    As for latency in streaming: I think of latency-sensitive operations as mostly things that need two-way communication. For example, if you’re playing a game over the network, you need the controls to respond to your input immediately; if you’re making a VoIP call, you want the two sides of the conversation to stay in sync. A video stream, on the other hand, doesn’t have to arrive in real time. The file fills a buffer on your computer ahead of where you’re watching, so the download isn’t happening in lockstep with playback unless there’s a serious network bottleneck.


  • Take this with a grain of salt: the more I re-read, the more I realize I’m making assumptions about your setup that may or may not be true. First, I’m assuming you’re using ACLs on Samba shares (and I know that system better on FreeBSD than Linux). I’m also assuming, based on your description, that you want everyone to have access to every share, but only the owning department to have write access.

    I think you could do an officewide group with read-only permissions on all of the shares and then set the unix group to the department.

    So, for your HR team you’d do chgrp -R hr /path/to/parent/shares/hr and setfacl -m d:g::rwx /path/to/parent/shares/hr (the empty field means the owning group you just set), then add the officewide group’s read-only perms: setfacl -m d:g:officewide:rx /path/to/parent/shares/hr. Rinse and repeat for each share.
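    Spelled out for every share, it’d look something like this; the department names other than hr are placeholders, and the -R applies it to what’s already in the directories:

        for dept in hr finance sales; do
            chgrp -R "$dept" /path/to/parent/shares/"$dept"
            # department group: read/write now and by default for new files
            setfacl -R -m g::rwX,d:g::rwx /path/to/parent/shares/"$dept"
            # officewide group: read-only now and by default for new files
            setfacl -R -m g:officewide:rX,d:g:officewide:rx /path/to/parent/shares/"$dept"
        done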

    Not sure if this is what you’re after, but maybe it’ll help lead in a good direction.





  • I don’t think there’s a right answer for most of these, but here are my thoughts.

    Data: I almost always prefer bind mounts. I find them easier to manage for data that I’ll need to deal with (e.g. with backups). Docker volumes make a lot of sense to me when you start dealing with multiple nodes and central management, where you want to move containers between nodes (like a swarm).

    Cache: streaming video isn’t super latency sensitive, so I can’t think of a need for this type of caching. With multiple users hitting the web interface all the time it might help, but I think I’d do that caching in my reverse proxy instead.

    User: I don’t use the *arr stack, but I’d imagine that suite of applications and Jellyfin all need to handle the same files, so I’d be inclined to use the same user (or at least group) on all of them.
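    As a sketch of what I mean, here’s the linuxserver flavor of that, which takes PUID/PGID; the 1000/1000 values and the /srv/media path are assumptions about your host:

        # run Jellyfin and (for example) Sonarr as the same uid/gid so they share the library cleanly
        docker run -d --name jellyfin -e PUID=1000 -e PGID=1000 \
            -v /srv/media:/data/media lscr.io/linuxserver/jellyfin
        docker run -d --name sonarr -e PUID=1000 -e PGID=1000 \
            -v /srv/media:/data/media lscr.io/linuxserver/sonarr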

    DLNA: this is a feature I don’t make much use of, but it allows for Jellyfin to serve media to devices that don’t run a Jellyfin client. It’s an open standard for media sharing among local devices. I don’t think I would jump through any hoops for it unless you have a use, but the default setup won’t get in your way.

    Hope that helps a little.