projectdp

@projectdp@infosec.exchange
646 Followers
1.5K Following
327 Posts

🛡️#DevSecOps | #OpenSource | #Tech | #Security | #InfoSec | #Hacker | #Networking | #OpenBSD | #NetBSD | #FreeBSD | #Linux | #Homelab | #Selfhosted | #fedi22 🔐

                  🦶

:github: GitHub 🐡https://github.com/projectdp
:birdsite: x 🐦https://x.com/projectdp
OpenPGP DOIP$argon2id$v=19$m=64,t=512,p=2$jMNVxRVgnVNgbWahv5tkTQ$I5KP38/0lYM+bXGQmtc8vQ
Cluster Rebuild Project

Everything deployed so far is in GitOps, and Renovate is functional. (A minimal example of how one app is wired up is sketched after the component list.)

Current features/components:
* external-dns
* cert-manager
* Cilium, providing Gateway API, Ingress, and load balancing
* CloudNative-PG for postgres DBs
* Forgejo
* Keycloak
* kube-prometheus-stack, which also deploys Grafana dashboards and Loki
* ArgoCD
* Renovate
* Rook-Ceph (object storage, block storage, and distributed filesystem)
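
To give a sense of how each component is wired up in GitOps, here's a minimal ArgoCD Application sketch; the repo URL, paths, and names are placeholders, not my actual layout:

```
# Hypothetical example: one Application manifest committed to the GitOps repo.
# Repo URL, file paths, and namespaces below are placeholders.
cat > apps/forgejo.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: forgejo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.lan/homelab/cluster.git   # placeholder repo
    targetRevision: main
    path: apps/forgejo
  destination:
    server: https://kubernetes.default.svc
    namespace: forgejo
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF
git add apps/forgejo.yaml && git commit -m "Add Forgejo application"
```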

That's the core of the cluster done, the heart of it. The next step is to get DB backups running properly; then I'll back up the DBs on the old cluster and restore them onto the new one. Data transfer via backup verification!
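
For the backup step, a rough sketch of the kind of CloudNativePG ScheduledBackup manifest I have in mind; the names, path, and schedule are placeholders, and it assumes the Cluster resource already has spec.backup.barmanObjectStore pointing at object storage (e.g. the Rook-Ceph RGW):

```
# Hypothetical sketch, not my actual manifest. Assumes the target Cluster
# already defines spec.backup.barmanObjectStore for its object storage.
cat > apps/forgejo/db-scheduledbackup.yaml <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: forgejo-db-daily      # placeholder name
spec:
  schedule: "0 0 2 * * *"     # CNPG cron format includes a leading seconds field
  backupOwnerReference: self
  cluster:
    name: forgejo-db          # placeholder cluster name
EOF
```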
#HomeLab #Kubernetes

Homelab update:

Upgraded the Proxmox cluster from 7 to 8. The process went smoothly for the cluster; the only issue was a Windows VM that wouldn't boot after migrating off a host, for reasons I haven't pinned down yet. I'll have to do some troubleshooting there.

Set up the first node of a nested XCP-ng cluster within Proxmox. I think I installed XOA, but I need to make sure: each time I let the installation run, the terminal times out and logs me out before I can see the result of the install script. I still need to find out where the XOA listener for the web interface ends up. Then I'll have to read up on how to bring up the rest of the nested XCP-ng cluster nodes.
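
One thing I'll try next so the timeout stops eating the output: run the install inside tmux and then look at what's actually listening for the web UI. The session name below is just a placeholder, and I haven't confirmed XOA's default ports:

```
# Keep the install running even if the SSH session times out
tmux new -s xoa-install        # re-attach later with: tmux attach -t xoa-install
# ...run the XOA/XO install script inside that tmux session...

# Afterwards, check which process/port the web interface is listening on
ss -tlnp
```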

Anyone else running a nested hypervisor cluster? XCP-ng in Proxmox or any other nested configuration? Any issues?

#homelab #proxmox #xcpng #xcp #xen

@solene Hi, I wanted to say thanks for all the helpful QubesOS-related articles you have published. They're definitely helping me get ramped up after switching back to Qubes.

I was having some issues with the OpenBSD configuration, but I'll go through it again in a week or so. I was using both your article and Xuni's article to get it going, and it was mostly Xuni's steps that gave me trouble.

Thanks for taking the time to do all the writeups!

https://dataswamp.org/~solene/index-full.html
https://dataswamp.org/~solene/2023-06-03-openbsd-in-qubes-os.html

#qubesos #framework #openbsd

I now have my Framework laptop running Fedora. KVM is installed, Proxmox is installed inside KVM with nested virtualization enabled on the host, and I'm running a restored VM from a Proxmox backup.

I had to do a bit of data juggling with my external NVMe SSD to get the backup into the right location for restoring within the Proxmox VM. I also had some initial issues getting the target guest VM to run, but noticed my vCPU allocation was a bit too high. After adjusting that and removing an extra vNIC and the SPICE audio virtual device, I got it running.

I'll have to figure out the best way to make these VMs portable. Ideally, if I can work out how Proxmox's vma backup format works, maybe I can skip the Proxmox VM entirely and load the guest VM directly into KVM on my Framework host.

Or better yet, if I can leverage LVM snapshotting to save out a volume of the entire VM and load that directly into KVM/QEMU on the laptop, that would be nice.
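
For the vma route, here's roughly what I'm picturing; the vma tool ships with Proxmox rather than Fedora, and the backup and disk file names below are just illustrative:

```
# On a Proxmox node (the vma tool is part of the PVE packages, not stock Fedora):
zstd -d vzdump-qemu-100.vma.zst                 # decompress the backup archive
vma extract vzdump-qemu-100.vma ./extracted     # yields a qemu config plus raw disk image(s)

# On the Fedora laptop: convert the raw disk and import it into KVM via libvirt
qemu-img convert -f raw -O qcow2 extracted/disk-drive-scsi0.raw vm100.qcow2
virt-install --name vm100 --memory 4096 --vcpus 2 \
  --disk path=vm100.qcow2,bus=virtio --import --os-variant generic
```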

#homelab #proxmox #kvm #qemu #virtualization #fedora #framework

Received my Framework laptop, running Fedora + Sway and it's great so far.

I'm working on setting up some KVM nested virtualization with Proxmox as a guest.

Currently compiling some rust tools per my previous post:
https://infosec.exchange/@projectdp/109377615889143904

If anyone has new Rust tools you like to use, please let me know; I want to try out more useful Rust-written tools.
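
For reference, installing a handful of the ones from that post is just a cargo install away (crate names sometimes differ from the binary names):

```
# crate names vs. binaries: fd-find -> fd, ripgrep -> rg, du-dust -> dust
cargo install fd-find bat ripgrep du-dust zoxide starship broot
```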

#homelab #rust #rustlang #linux #fedora #swaywm

projectdp :verifiedpurple: (@projectdp@infosec.exchange)

Some of my favorite #Rust CLI tools:
```
fd bat tab procs coreutils lsd broot exa onefetch du-dust
feroxbuster rustscan ripgrep diskonaut alacritty zoxide
t-rec starship navi xplr
```
What are yours?


Hi homelabbers,

I should be receiving my Framework laptop later this month, and I was thinking about how to make some of my VMs portable from my Proxmox cluster, so that before leaving my homelab I can migrate select VMs to the laptop for local use while I'm away from my lab gear. I do have a VPN, but that's not a solution for this scenario since I want to run the VMs locally.

I've tried a manual backup method that compresses a full VM, and I exported that to a disk, but I think I'd have to experiment with QEMU flags and parameters to get it launching cleanly from QEMU/KVM on the Framework laptop running some vanilla Linux OS. That's overhead I don't want to deal with if it's going to be too fiddly.

That got me thinking: would it be possible to run a light VM hosting a Proxmox node on the Framework laptop, attach it to the cluster, and migrate natively through Proxmox to use the VM that way? Has anyone tested running a Proxmox node virtualized on a laptop? I'm thinking that even as a nested VM it will run faster locally than over the VPN.
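
If that nested-node idea is viable, my rough understanding of the flow is below; the node name, VMID, and IP are placeholders, and I haven't verified how happy corosync is with a node that regularly disappears:

```
# On the nested Proxmox VM running on the laptop: join the existing cluster
pvecm add 192.0.2.10            # IP of an existing cluster node (placeholder)

# Then migrate a VM onto the laptop node (and back again later)
qm migrate 104 laptop-pve --online --with-local-disks
```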

#homelab #selfhosted

Great Scott Gadgets announced that their Universal Radio Test Instrument (URTI) project is under development. Check out some of the expected functionality of this portable tool:

Spectrum analyzer
Vector network analyzer
Vector signal generator
Vector signal analyzer
Antenna analyzer
Power meter
Frequency counter
Full-duplex SDR transceiver

https://greatscottgadgets.com/2023/05-04-development-of-a-universal-radio-test-instrument/

#sdr #radio #antenna #spectrumanalyzer #signal #hamradio #tools


I mentioned I missed a concept, so where am I failing?

Well, after migrating several VMs from the NFS storage over to the iSCSI+LVM storage, I noticed (by watching the port activity on the NAS) that I was still only using a single path to the NAS. There are two network paths available, and I figured multipath would handle that.

I then started testing by manually unplugging cables to force traffic onto the second path, but that didn't work. I also tried toggling the second iSCSI connection within Proxmox, with no luck. The way I'm adding the iSCSI connections is likely incorrect: I was using two different iSCSI portal IPs (the two IPs on the NAS), but they both point to the same iSCSI IQN on the NAS.

I'm pretty sure I now need to generate a new IQN targeting the same LUN on the Synology side, then re-add the second path IP pointing at the second IQN. I'm a bit hesitant to do this, since last time I broke the multipathing when I removed an iSCSI connection from Proxmox and had to do a lot of work to fix it. To be continued...
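
For anyone following along, these are the commands I've been using to check whether both paths are actually active:

```
# Show the multipath topology; ideally two active paths under the one device
multipath -ll

# Show the iSCSI sessions and which portal each one is connected to
iscsiadm -m session -P 3
```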

It has been a good exercise in figuring out how this all works. If you've done this before, it would be great to hear how it went and how you've configured things.

#homelab #selfhosted #proxmox #synology #iscsi #cluster #storage

I've been meaning to revisit running iSCSI multipathing in my Proxmox cluster. I previously had it set up with a TrueNAS machine providing storage but I'm now utilizing a Synology. During the initial cluster configuration I attempted the iSCSI config but failed to get it working across all cluster nodes.

Instead, for several months I went with NFS, since it supports the most Proxmox content types (Disk image, Container template, Container, Snippets, VZDump backup file, ISO image), whereas the iSCSI + LVM option is more limited (Disk image, Container).

I finally revisited this and was able to get iSCSI, multipath, and the LVM overlay working. I think I've missed one concept, though, and it's an important one I need to validate. Before I get there, I wanted to share the config steps:

1. Synology: Set up Storage Volume
2. Synology: Set up LUN
3. Synology: Generate iSCSI iqn
4. Synology: Add the initiator IQNs of the cluster hosts
5. Proxmox: Add iSCSI target and iqn at the cluster level
6. Proxmox: Add iSCSI target 2 at the cluster level
7. Proxmox shell: Install open-iscsi and multipath-tools if you haven't already
8. Proxmox shell: Verify the WWIDs of the newly created /dev/sdb and /dev/sdc (example disk names), ensuring the WWIDs match and belong to the correct iSCSI targets.
9. Proxmox shell: Configure /etc/multipath.conf to match your storage device, including blacklisting multipath management of all devices except the explicit WWID of your iSCSI devices (see the shell sketch after this list).
10. Proxmox shell: Restart multipathd. Once the multipath alias device appears you will be able to see it as an LVM Physical Volume (PV) with pvdisplay.
11. Proxmox shell: You may now generate an LVM Volume Group (VG) which will appear across the whole cluster.
12. Proxmox: You can now add an LVM overlay at the cluster level by selecting your new Volume Group.
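
A condensed shell sketch of steps 8 through 11; the WWID, alias, and VG name are placeholders for whatever your LUN reports:

```
# Step 8: confirm both block devices report the same WWID (same LUN, two paths)
/lib/udev/scsi_id -g -u -d /dev/sdb
/lib/udev/scsi_id -g -u -d /dev/sdc

# Step 9: ignore everything in multipath except that WWID, and give it an alias
cat >> /etc/multipath.conf <<'EOF'
blacklist {
    wwid ".*"
}
blacklist_exceptions {
    wwid "36001405aaaabbbbccccddddeeeeffff0"
}
multipaths {
    multipath {
        wwid  "36001405aaaabbbbccccddddeeeeffff0"
        alias syn-lun0
    }
}
EOF

# Step 10: restart multipathd and confirm the aliased device appears
systemctl restart multipathd
multipath -ll

# Step 11: create the PV (if not already present) and a Volume Group on the multipath device
pvcreate /dev/mapper/syn-lun0
vgcreate vg_syn_iscsi /dev/mapper/syn-lun0
```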

Now I'm able to use my iSCSI-backed LVM volume across all clustered nodes for HA of VMs and Containers.

#homelab #selfhosted #proxmox #synology #iscsi #cluster #storage