Just a basic programmer living in California

[HowTo] run Kodi media center

https://leminal.space/post/32909429

My wife brought home a TV from a thrift store. So I took the easy route of setting up a media center to make it play videos. I already had a headless Jellyfin server running on a small Beelink computer. I connected that to the TV, installed Kodi, installed the Yatse remote control app on my phone, and we're all set. It took me some research and trial and error to get the remote control app connected with Zeroconf auto discovery working, so I thought I'd share what I learned. Here is my entire Kodi module:

```nix
{ pkgs, ... }:
{
  # Enable a graphical shell
  services.xserver.enable = true;

  services.xserver.desktopManager.kodi = {
    enable = true;
    package = pkgs.kodi.withPackages (
      kodiPackages: with kodiPackages; [
        jellyfin
        netflix
      ]
    );
  };

  # To view plugins available in nixpkgs run:
  #
  #     $ nix repl
  #     > pkgs = import <nixpkgs> {}
  #     > builtins.attrNames pkgs.kodiPackages
  #
  # Or search for plugins on https://search.nixos.org/, and in the left sidebar
  # under "Package sets" click "kodiPackages"

  services.displayManager.autoLogin = {
    enable = true;
    user = "kodi";
  };
  users.users.kodi.isNormalUser = true;

  # Allow access to web UI & remote control API
  networking.firewall = {
    allowedTCPPorts = [
      8010 # the port I configured for "allow remote control via HTTP"
      9090 # also event server?
    ];
    allowedUDPPorts = [
      9777 # event server
    ];
  };

  # Allows Kodi to advertise to remote control apps using Zeroconf.
  services.avahi = {
    enable = true;
    publish.enable = true;
    publish.userServices = true;
  };
}
```

Kodi is a graphical shell. Like I said, this box was previously headless, so I enabled Kodi as my DE and set it up to automatically log in. To make the remote control work there are some necessary settings changes to make in the Kodi UI.
I had to connect a keyboard temporarily to set this up:

- Go to Settings > Services
- Enable General > Announce services to other systems (for Zeroconf auto discovery)
- In Control enable:
  - Allow remote control via HTTP (match the port number in the NixOS firewall settings)
  - Allow remote control from applications on this system
  - Allow remote control from applications on other systems
- I'm not sure if this is needed, but the Yatse docs recommend these settings in UPnP / DLNA for some streaming cases:
  - Share my libraries
  - Allow remote control via UPnP

Declarative settings would be nicer, but there doesn't seem to be a NixOS module that does that yet. It's the same situation with Jellyfin.
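Once "Allow remote control via HTTP" is enabled, you can sanity-check the connection before pairing a remote app. Kodi serves a JSON-RPC API at the `/jsonrpc` path on that HTTP port. Here is a minimal sketch, assuming the port 8010 from my firewall config, a hypothetical hostname `kodi-box.local`, and no HTTP username/password set in Kodi (if you set credentials, you'd need to add HTTP basic auth):

```python
import json
from urllib import request

def ping_payload() -> bytes:
    """Build the JSON-RPC 2.0 request body for Kodi's JSONRPC.Ping method."""
    return json.dumps({"jsonrpc": "2.0", "method": "JSONRPC.Ping", "id": 1}).encode()

def ping_kodi(host: str, port: int = 8010) -> str:
    """POST the ping to Kodi's HTTP API; a reachable Kodi replies "pong"."""
    req = request.Request(
        f"http://{host}:{port}/jsonrpc",
        data=ping_payload(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["result"]

# Usage against a live Kodi box (the hostname is hypothetical):
#   ping_kodi("kodi-box.local")
```

If the ping works but Yatse still can't auto-discover the box, the problem is likely on the Zeroconf/Avahi side rather than the HTTP API.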

The milk crate usurper!

https://leminal.space/post/30930361

Hacking a pocket gel pen

https://leminal.space/post/29832975

Fisher Bullet Space Pen with a gel refill [https://leminal.space/pictrs/image/df6d3b6e-bb24-4121-9108-89c9829894b2.jpeg]

This is my favorite pen mod! I like to have a pen and a few index cards in my pockets at all times for fleeting notes. A Fisher Bullet Space Pen is a convenient size to fit in my pocket easily, it looks nice, and it is comfortable to use. Unfortunately the refill it comes with is an awful ballpoint! I have a strong preference for gel pens. Refills are not a standard size, so swapping in a gel refill takes a little know-how.

One of my favorite gel pens is the Pilot Hi-Tec-C. Oddly, Pilot doesn't make refills for the full-sized version. But there is a multi-pen version, the Hi-Tec-C Coleto, with refills that are small and skinny enough to fit into a Space Pen pretty well. I cut 4mm off of the back of the refill, and with a little fiddling it fits. Or for another option there are instructions for using a Zebra JK refill here [https://penvibe.com/the-ultimate-guide-to-fisher-space-pen-refills/#Eleventh_Sub_Point_2].

There are downsides. The gel refill doesn't have any of the unique capabilities of the pressurized Space Pen refills, like writing upside-down. The tip of the Hi-Tec-C Coleto refill is a little wobbly. And the multi-pen refills have a lower capacity than most refills, so they don't write as long. This mod is best for an occasional-use pen that you always have on hand. (I keep my main pen with my journal because it's too big for my pocket.)

[HowTo] Selective VPN confinement in NixOS

https://leminal.space/post/28955630

cross-posted from: https://leminal.space/post/28955576 [https://leminal.space/post/28955576]

I learned how to do this recently, and I wanted to share. Once you know what to do, VPN confinement is easy to set up on NixOS.

The scenario: you want selected processes to run through a VPN, but you want everything else to not run through the VPN. On Linux you can do this with a network namespace. That's a kernel feature that defines a network stack that is isolated from your default network stack. Processes can be configured to run in a new namespace, and when they do they cannot access the usual not-VPN-protected network interfaces.

Network namespaces work along with other types of namespaces, like process namespaces, to allow Docker containers to function almost as though they are separate machines from the host system. Actually Docker containers are regular processes that are carefully isolated using namespaces, cgroups, and private filesystems. Because of that isolation Docker containers are a popular choice for VPN confinement. But since all you really need is network isolation you can skip the middleman and use network namespaces directly.

There is a third-party NixOS module that automates this, VPN-Confinement [https://github.com/Maroka-chan/VPN-Confinement]. Here's an example that runs a Borg backup job through a VPN connection. (This example also uses the third-party sops-nix [https://github.com/Mic92/sops-nix] module to encrypt VPN credentials.)

```nix
{ config, ... }:
let
  vpnNamespace = "wg";
in
{
  # Define the network namespace for VPN confinement. Creates a VPN network
  # interface in the namespace; creates a bridge; sets up routing; creates
  # firewall rules to prevent DNS leaking. The VPN-Confinement module requires
  # using Wireguard as the VPN protocol.
  vpnNamespaces.${vpnNamespace} = {
    enable = true;
    wireguardConfigFile = config.sops.secrets.wireguard_config.path;
  };

  # Set up whatever service should run via VPN
  services.borgbackup.jobs.homelab = {
    paths = "/home/jesse";
    encryption.mode = "none";
    environment.BORG_RSH = "ssh -i /home/jesse/.ssh/id_ed25519";
    repo = "ssh://offsite.sitr.us/backups/homelab";
    compression = "auto,zstd";
    startAt = "daily";
  };

  # Modify the systemd unit for your service to run its processes in the VPN
  # namespace.
  #
  # - sets Service.NetworkNamespacePath in the systemd unit
  # - sets Service.InaccessiblePaths = [ "/run/nscd" "/run/resolvconf" ] to prevent DNS leaking
  # - adds a dependency to the unit that brings up the VPN network namespace
  #
  # I found the name of the systemd service that services.borgbackup.jobs
  # creates by looking at the Borg module source. You can find the source for
  # NixOS modules by searching for config options on https://search.nixos.org/options
  systemd.services.borgbackup-job-homelab = {
    vpnConfinement = {
      enable = true;
      inherit vpnNamespace;
      # `inherit vpnNamespace;` has the same effect as `vpnNamespace = vpnNamespace;`
      # I used a variable to be certain that the value here matches the name
      # I used to set up the namespace above. If the names don't match then your
      # service won't run through the VPN.
    };
  };

  # Load your wireguard config file however you want. Your VPN provider probably
  # supports wireguard, and will likely generate a config file for you.
  sops.secrets.wireguard_config = {
    sopsFile = ./secrets.yaml;
    owner = "root";
    group = "root";
  };
}
```

This setup assumes using the Wireguard VPN protocol, and assumes that programs you want to be VPNed are run by systemd. VPN providers mostly support Wireguard, including Tailscale. But my understanding is that Tailscale's mesh routing requires additional setup beyond creating a Wireguard interface. So you'd likely want a different setup for confinement with Tailscale.
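The config above uses the `vpnNamespaces` and `vpnConfinement` options, which only exist once the VPN-Confinement module is imported. Here is a minimal flake sketch based on my reading of the module's repository; the input name `vpn-confinement`, the hostname `homelab`, and the file layout are my own choices, and `nixosModules.default` is the attribute the repo documents:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    vpn-confinement.url = "github:Maroka-chan/VPN-Confinement";
  };

  outputs = { nixpkgs, vpn-confinement, ... }: {
    nixosConfigurations.homelab = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        # Makes the vpnNamespaces.* and systemd.services.*.vpnConfinement
        # options available to the rest of the configuration.
        vpn-confinement.nixosModules.default
        ./configuration.nix
      ];
    };
  };
}
```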
You can run the Tailscale client in a network namespace (there is a start on such a setup here [https://jamesguthrie.ch/blog/multi-tailnet-unlocking-access-to-multiple-tailscale-networks/]); or you might use Tailscale's subnet router feature to blend VPN and local network traffic instead of selective confinement.

Normally when you turn on a VPN your VPN client software creates a network interface that transparently sends traffic through an encrypted tunnel, and configures a default route to send network traffic through that interface. So traffic from all programs is routed through the tunnel. VPN-Confinement creates that network interface in the isolated namespace, and sets that default route in the namespace, so that only programs running in the namespace are affected. There is much more detail in this blog post [https://www.samkwort.com/qbittorrent_nixos_module]. The VPN-Confinement module differs from the setup in that post in a couple of ways: it has some extra setup to block DNS requests that aren't properly tunneled; it creates a network bridge instead of a simple virtual ethernet cable for port forwarding; and it provides more options for firewall and routing configuration.

VPN-Confinement has an option to forward ports from the default network stack into the VPN namespace. This is useful if you want all outbound traffic to go through the VPN, but you want to accept inbound traffic from programs on the host, or from other machines on your local network, or anywhere else. This is handy if, for example, you're running a program on a headless server that provides a web UI for remote administration. Here's an expanded VPN namespace example:

```nix
vpnNamespaces.${vpnNamespace} = {
  enable = true;
  wireguardConfigFile = config.sops.secrets.wireguard_config.path;

  # Forward traffic to specified ports from the default network namespace to
  # the VPN namespace.
  portMappings = [{ from = 8080; to = 8080; }];

  accessibleFrom = [
    # Accept traffic from machines on the local network, and route through the
    # mapped ports.
    "192.168.1.0/24"
  ];
};
```

Requests to mapped ports from the host machine need to be addressed to the network bridge that VPN-Confinement sets up. You can configure its addresses using the bridgeAddress and bridgeAddressIPv6 options. By default the addresses are 192.168.15.5 and fd93:9701:1d00::1. If you're configuring addresses elsewhere in your NixOS config you can use an expression like this:

```nix
url = "http://${config.vpnNamespaces.${vpnNamespace}.bridgeAddress}:8080/";
```

If you look at the source for VPN-Confinement you'll see that namespace configuration and routing require a lot of stateful ip commands. I think it would be nice if there were an alternative, declarative interface to iproute2. But VPN-Confinement is able to encapsulate the stateful stuff in systemd ExecStart and ExecStopPost scripts.

I ran into an issue where mDNS stopped working while the VPN network namespace was active. I fixed that problem by configuring Avahi to ignore VPN-Confinement's network bridge:

```nix
services.avahi.denyInterfaces = [ "${vpnNamespace}-br" ];
```
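A quick way to check that confinement is actually working is to compare the public IP address seen from inside and outside the namespace. This is a sketch under the assumptions that the namespace is named `wg` as in the config above, and that ifconfig.me (an arbitrary what's-my-IP service) is reachable; `ip netns exec` runs a command inside a named network namespace and needs root:

```shell
# Public IP via the default network stack: should be your real ISP address.
curl https://ifconfig.me

# Public IP from inside the VPN namespace: should be the VPN exit address.
sudo ip netns exec wg curl https://ifconfig.me

# List interfaces inside the namespace; you should see the Wireguard
# interface that VPN-Confinement created there.
sudo ip netns exec wg ip addr
```

If both curl commands print the same address, your service is not actually confined.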

[HowTo] Selective VPN confinement in NixOS

https://leminal.space/post/28955576


Spotted Linux on the back of a car

https://leminal.space/post/25189798

Blue oyster mushrooms growing in a kit

https://leminal.space/post/21213244

My favorite bullet journal guide

https://leminal.space/post/21068541

I got into bullet journaling a few weeks ago. I looked at a bunch of resources that went into detail, but I felt like I didn’t have the big picture. The Absolute Ultimate Guide covers the motivation, what bullet journaling is all about, and details for getting started quickly, all in one relatively short post.

Patch interpreter path in embedded binary?

https://leminal.space/post/19386723

Israeli army knew of Hamas plot to take hostages before 7 Oct - Leminal Space

The Israeli military had advance notice of a plan by Hamas to raid southern Israel, including accurate predictions about the number of hostages the group would seize, a new report has found.

The internal report shared with Gaza Division commanders, entitled 'Detailed raid training from end to end', was released on 19 September and found that Hamas fighters had been training for a huge assault on Israel, a warning that was ignored by officers, Israeli broadcaster Kan reported. Three weeks later, Hamas and its allies launched a series of assaults into southern Israel, killing 1,190 Israelis and taking 251 captives back to Gaza, sparking a brutal Israeli war on the Palestinian enclave that has seen more than 37,000 people killed.

"I feel like crying, yelling and swearing," one of the authors of the report said about the ignored warnings, according to Kan.

October 7 was almost a mirror of how the report warned an assault would play out, even predicting the number of hostages who would be seized by Hamas at between 200 and 250, while detailing how fighters from its elite units would assault military posts and towns.

The 7 October events remain a hugely sensitive issue in Israel, with demands for commanders and politicians to step down due to the collapse of the southern front on that day. Israel's High Court issued an injunction on Sunday to suspend an investigation by State Comptroller Matanyahu Englman into the military and Shin Bet over alleged failings that led to Hamas's surprise attack on 7 October.