Cursed homelab update:

TL;DR: Comparing CPU performance-per-dollar doesn't make sense. Where do people go for good benchmarks?

So my next purchases for said cursed homelab are:

1. Gaming system

So I'm up to my usual tricks here: big spreadsheet matching total cost vs performance.

The problem I'm having is that the numbers I'm able to find from Cinebench 2026 seem to favour Intel. The best AMD CPU is the 9900X which is $100 more expensive (10%) than the Intel 265KF and apparently 7% slower despite having 4 more threads and a cheaper motherboard.

What am I doing wrong here? My understanding was that AMD was still the performance-per-dollar king, but these numbers (and numbers from PassMark) are telling me they aren't. Is this just the Australian tech market being useless? Are there "better" benchmarks out there? How do people sensibly compare CPUs these days?

2. Server

I think I have this one pretty close to sorted. I'm going to end up with a reconditioned DDR4-era Xeon server, most likely dual socket. Full depth, sadly, but it's either full depth, ancient, or dodgy eBay sellers, so this is a compromise I'm willing to make. The only questions I have are the exact final specs, what adapters I'll need to get two M.2 NVMe 2280 drives into it, and how noisy it'll be.

#CursedHomelab #homelab #tech #askfedi #Benchmarks

Cursed homelab update:

Because I've been sitting on this for long enough.

TL;DR: The Qotom micro-server blew up, so my gaming machine is now the hardware underneath server #3. So I need both a new gaming machine and a replacement server-class thing.

Part 1: The Qotomihilation

So the cheap Qotom server I purchased a while back blew up. Or rather, something in the power circuitry developed a dead short (ish), which caused the PSU to cut power to protect itself. I initially thought the PSU was bad, so I spent far too much buying a retail replacement (same brand, higher rating), and all that did was confirm that the problem was the board, not the PSU. (I also purchased fans to add to the server, as I'm 90% sure this failure was heat-related. They're going in the fan box.)

There's no obvious damage, so there's no repairing it without sending it back to the manufacturer, and if I'm going to do that, I might as well buy a replacement.

The obvious solution here is to replace it with something better - buying this server was a sensible decision at the time, but with hindsight, it was a bad solution to my problems.

Part 2: Rage Swap 2: Swap Harder

So the gaming machine got pressed back into server service. This is the one that had bad RAM, the one I was nervous about because it was still having problems after the known-bad RAM stick got pulled out.

And the one that, after I installed Bazzite on it, has been rock solid and survived multiple days of uptime without issue.

The biggest problem here was replacing what the Qotom box provided: a mini-SAS port on the back for bulk storage, 9 Ethernet ports, and two M.2 slots. So the simplest solution for bulk storage was to plug in the QNAP card I got with the external box, hook it up with the cable QNAP supplied, and have done. Finding two more Ethernet ports was as simple as finding the two PCI-e gigabit Ethernet cards I purchased back when I had dreams of running a high-availability router. Dealing with the other 2 Ethernet ports I was using was as simple as connecting the NBN box directly to server 3, and using the now-spare gigabit switch for the "IoT" network.

Which left the problem of the two M.2 slots. Bazzite was installed on an M.2 drive in one of them, but the other was missing the standoff and screw, probably because they had never been taken out of the baggie of screws that came with that motherboard.

So where was it?

Part 3: The curse of the rage-search

It was in the box for my gaming rig, obviously.

I didn't know that at the time, so I tore up the stratified pile of misc computer junk on (and in) the beautiful Silverstone desktop case Server 1 used to be in. No dice.

I then sat down and realised that it wasn't that the boxes "weren't there", it was that I couldn't see them. So I found the right box, installed the right screw, and it booted up first time. (And just to underscore this: no boot shenanigans were required at all.)

Part 4: 64GB of server in 24GB of RAM

The great thing about having space is that you can put stuff in it. The crap thing about space is that when it's gone, you can't fit everything in there anymore.

So I aggressively hacked at Kubernetes, Ceph and Elasticsearch to get everything to fit on that server without it running out of RAM (it deliberately has no swap) and it seems to now be stable.
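For flavour, "aggressively hacked" mostly means hard memory caps so the scheduler can't oversubscribe a swapless node. A hypothetical sketch of the kind of pod spec fragment involved (the container name and the numbers are illustrative, not my actual manifests):

```yaml
# Hypothetical Kubernetes pod spec fragment. With requests == limits,
# the scheduler reserves exactly what the container may use, and the
# kubelet OOM-kills the pod (not the node) if it exceeds the cap.
containers:
  - name: es-data                  # illustrative name
    env:
      - name: ES_JAVA_OPTS
        value: "-Xms2g -Xmx2g"     # pin the Elasticsearch JVM heap well below the limit
    resources:
      requests:
        memory: "4Gi"              # what the scheduler reserves on the node
      limits:
        memory: "4Gi"              # hard cap enforced by the kubelet
```

Keeping requests equal to limits puts the pod in the "Guaranteed" QoS class, which also makes it the last thing the kernel OOM killer goes after.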

Interesting fact: Linux seems to break IPv6 forwarding over virtual bridges when it OOMs.

Part 5: Next steps

I cleaned up the room (I now have a fan bag. Yay.), laid the computer down on its wrong side (it's a tower case), and it's been stable since I figured out what was eating all its RAM.

So now all I need is a new server (thanks @decryption for pointing me to https://www.bargainhardware.co.uk - Australia's server options are nonexistent and I've been primarily looking at retail options) and a new gaming rig to go in that beautiful Silverstone desktop case, and I'm having to do all of this way sooner than I'd planned to.

But needs must.

#homelab #cursedhomelab #tech #it #linux

#recreationalcomputertouching #cursedhomelab #yakshaving

In today's episode, a Windows 11 guest on my Proxmox machine is receiving SLAAC configuration for all three VLANs in my network, despite the network port the Proxmox host is connected to only being a member of the default VLAN, ID 1.

Cursed homelab update:

So it turns out that if you subtly misconfigure a WPAD server on your network (a microscopic HTTP server that returns a single[1] file) and have it point to a slightly unreliable proxy server, Windows will, by default, use that proxy. And when it discovers that said proxy is unreliable, it doesn't do anything to mitigate this in any way, or inform the user, or anything, leading to people complaining that a rock-solid internet connection is broken.

And some applications don't have any real sensible way to deal with unreliable proxy servers or internet connections and just break.

Or to put it another way, I just fixed my partner's internet problems by deleting a domain name.

Thanks Windows.

[1] If you don't hate yourself, you'll have two copies of the file, so you have a sane URL if you ever need to configure something manually.
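For the curious, the file WPAD serves (conventionally named wpad.dat) is just a tiny JavaScript file defining one function. A minimal sketch, with a placeholder proxy host name:

```javascript
// Minimal wpad.dat / PAC file sketch. Clients discover this file via
// DNS or DHCP and call FindProxyForURL for every request they make.
// "proxy.internal.example" is a placeholder, not a real host.
function FindProxyForURL(url, host) {
  // The "; DIRECT" suffix is supposed to be the fallback when the
  // proxy is down -- as noted above, Windows doesn't reliably take it.
  return "PROXY proxy.internal.example:3128; DIRECT";
}

console.log(FindProxyForURL("http://example.com/", "example.com"));
// → PROXY proxy.internal.example:3128; DIRECT
```

Delete the DNS name that points clients at this file and the whole mechanism silently turns off, which is exactly the fix described above.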

#homelab #CursedHomelab

Today in "It was DNS" news.

So connectivity external to my Kubernetes cluster wasn't working and I couldn't figure out why.

So something in a random pod would try to resolve www.example.com and it'd get the IP of my external connection.

Pause here if you want to figure this out yourself.

I have dynamic DNS set up with a wildcard record, so anything.my.domain resolves to the same address as my.domain.

I also use my.domain as the root of everything internal to my network, so if I have some-service.my.domain set to point to some internal IP, I can use some basic reverse proxying to allow HTTP access externally.
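That reverse-proxy layer is nothing fancy; something in the spirit of this hypothetical nginx fragment (host name and internal IP are placeholders, not my actual config):

```nginx
# Hypothetical nginx fragment: accept external HTTP for one internal
# name and pass it through to the internal service's IP.
server {
    listen 80;
    server_name some-service.my.domain;   # placeholder name

    location / {
        proxy_pass http://192.168.0.10:8080;   # placeholder internal IP
        proxy_set_header Host $host;           # preserve the requested name
    }
}
```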

However, the place where those internal names are registered has changed: instead of being on my Samba AD cluster, it's now on the router, because I've deleted the Samba AD cluster (it wasn't doing anything useful for me, and wasn't going to).

However the router doesn't think it's the authoritative source of my.domain DNS entries, so it forwards them externally, so if I resolve nonexistent-host.my.domain, it gets passed upstream, resolved by the wildcard, and ends up with the IP of my external connection.

However this was happening for nearly any domain inside Kubernetes, not just obviously incorrect ones.

Why? Because Kubernetes sets "option ndots:5" in every pod's resolv.conf, and adds my.domain to the end of the search list, so any sufficiently short name is resolved as short.name.my.domain before it is resolved as short.name.

This obviously caused a lot of problems as short.name.my.domain always resolved to an IP.
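The lookup order is easy to sketch. Here's a rough Python model of the candidate list a resolver builds under Kubernetes' default ndots:5 (a simplification of glibc's actual behaviour, with "my.domain" standing in for the real search list, which in Kubernetes also contains namespace.svc.cluster.local and friends):

```python
def candidate_names(name, search=("my.domain",), ndots=5):
    """Rough model of resolv.conf ndots/search semantics.

    If the name has fewer than `ndots` dots, the search domains are
    appended and tried *before* the name is tried as-is; otherwise the
    name is tried as-is first. A trailing dot skips the search list.
    """
    if name.endswith("."):                     # fully qualified, no search
        return [name]
    searched = [f"{name}.{dom}" for dom in search]
    absolute = [name]
    if name.count(".") < ndots:
        return searched + absolute             # search domains win
    return absolute + searched

# With ndots:5, even "www.example.com" (only 2 dots) gets the search
# domain appended first -- straight into the wildcard record:
print(candidate_names("www.example.com"))
# → ['www.example.com.my.domain', 'www.example.com']
```

Since www.example.com.my.domain matched the wildcard and returned an answer, the resolver never got to the second candidate.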

I fixed this by blocklisting my.domain on the router. It turns out that Unbound on OPNsense resolves names it knows about before applying blocklists, so this works as expected without my having to convince the router that it owns the domain.
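For reference, the raw Unbound equivalent of making the router actually own the zone would look something like this (an alternative I didn't need since the blocklist trick worked; the names and the IP are placeholders):

```
server:
  # Serve my.domain locally: answer from the local-data entries below
  # and return NXDOMAIN for everything else in the zone, instead of
  # forwarding upstream to the wildcard.
  local-zone: "my.domain." static
  local-data: "some-service.my.domain. IN A 192.168.0.10"
```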

Sigh.

At least things are working now.

#kubernetes #itwasdns #dns #CursedHomelab

Cursed homelab update:

Server #2's partial upgrade is complete:
1. New SAS card is in with shroud and fan
2. Noisy server fans replaced with Noctua fans
3. 4 more HDDs added (one is apparently dead and I haven't yet figured out which one)

So after 1 hour of uptime, temperatures (°C, before -> after) are:
PSU: 39 -> 39 - no change
CPU: 41 -> 41 - no change
Motherboard: 34/45 -> 33/47 - same ballpark
HDDs: 30-36 -> 36-45 - slightly alarming and I am also measuring temps from 3 more drives
ambient: 26 -> 28 - I'm not sure I trust the previous ambient temperature, however we have had some hot days, so this might be right

If the HDD temps don't decrease over the next couple of days, I might consider (partially) blocking the air vent at the top of the front of the case.

Fan duct appears to be working, there is a distinct hot spot on the back, fan is spinning and all that, so best guess is the duct and fan are doing their job. (And monitoring says that the fan is spinning at 1781 RPM)

Also the server is now dead quiet. I mean seriously I cannot hear it.

I'm calling this a success.

Well, a qualified success. It turns out that to do the fancy hot-swap fans in the case, Silverstone used a "normal" 4-pin socket instead of a "PC fan" one, so I had to bypass the hot-swap system and plug the 3 fans directly into one of the backplanes.

So it looks like I'm going to be cannibalising the "Low-Noise Adapters" Noctua provided to convert the Silverstone fans to PC fan connectors, and making some tiny adapter cables so I can plug regular PC fan connectors into the hot-swap system.

I'm not planning to install this until I have to open it up again, which will hopefully be for a CPU + RAM upgrade.

#homelab #CursedHomelab #3dprinting #noctua #tech #it

Cursed homelab update:

I now have a shroud for the hot SAS card.

It's a little cursed, but what isn't these days, and there's a bunch of stuff I'd do differently next time, but this is enough for this card in this computer today, so yeah.

Modelling this was a challenge. I started in OpenSCAD - my usual go-to - but found the shape I wanted difficult to build there, so I switched to Autodesk Fusion, which my partner has been playing with. But it's not on "my" laptop (there's no Linux version) and it's stupidly slow, which made it a pain to use (also, where are the fricking _details_ on how things work? WTF). So I installed FreeCAD on my laptop, and after 5 days of work, I have A Thing!

Good points:
- The card clip situation I designed holds the card a lot firmer than I'd expected it would - no rattle whatsoever - and because of this it is a lot harder to install than anticipated.
- The attachment is strong enough to hold my heavy stunt fan to the card without flexing it
- The Kobee matte PLA printed mostly fine and I dialled down the settings so the walls are thin, so it's just as translucent as the first duct (see a previous post)
- I got it close enough to right so that the first print is usable

Bad points:
- I didn't account for the tiny folded over edge of the card's support bracket, so the outflow part of the duct doesn't sit flush with it. This appears to actually be helping hold the duct in the right spot, so I'm not going to fix this today
- My decision to use a curved surface for the bottom side of the card attachment rails was a bad one as it's not as well attached as it could be, however the plastic is stiff enough that this isn't a problem
- The most awkward part of the design - the transition from the card attachment rails to a lip that fits over the back of the card as the duct flares out - is just as janky in person as it is on the screen, but I will literally be the only person who cares about it.
- The inner surface of the duct - a long flat surface that's sloping over the fan base - has lots of long stringy bits of filament stretched over it as they didn't adhere properly. This appears to be a problem with this specific filament and if I cared enough I could dial it in, but I don't so I'm just going to apply some glue to the area and hope for the best.

Next steps are to replace the thermal paste, fix the heatsink, put glue on the stringy surface, and put it all together, ready to go in the server!

#3dprinting #freecad #tech #cad #homelab #CursedHomelab

Just got my hands on my first "real" Noctua fans and if the fans are as good as the packaging, then .... well these will be excellent. (I'm not counting their grey fans as "real")

These are to make server #2 less noisy. I'm trading a lot of static pressure for reduced noise, but my understanding is that these should be near-silent.

I am worried that this will be a significant downgrade to this server's cooling, but temperatures don't lie, so if the temperatures are less than a couple of degrees warmer, this is a win.

Current temps (°C): PSU: 39, CPU: 41, Motherboard: 34/45, HDDs: 30-36, ambient: 26.

This upgrade will also include:
- adding a very hot SAS HBA with its own fan and cooling duct
- removing the two outflow fans on the back of the case as they should be unnecessary
- replacing the CPU fan with a grey Noctua

#CursedHomelab #Noctua #tech #it #homelab

So I'm modelling a fan duct for a SAS card (see the last #cursedhomelab update for details) and was originally going to do it in OpenSCAD like I did for the last one, however this one needs some very specific geometry that I can't quite wrap my head around, so I'm trying Autodesk Fusion instead.

Those specific bits of geometry are:
- some bits to clip it to the SAS card's PCB
- 2 turns in the duct to accommodate the connectors on the PCB and a small offset on the fan
- and some geometry to screw it to the support bracket

I learned PRO/Engineer at university and it uses the "standard" structure of defining planes, drawing sketches on those planes, then extruding solid objects, one way or another, from those sketches.

Autodesk Fusion feels very familiar, with a very similar flow, but it's been a while and I've forgotten a lot of the details of how to take the structure I can see in my head and turn it into CAD structures. That said, it's all coming back to me.

Lofting is currently causing me the most problems as the tools I have to hand look like they should do what I want, but I keep producing lumpy messes rather than the rounded cornered boxes I'm wanting for this, so I'm having to draw a lot of geometry to make this work.

But I'm getting there. Then to print it paper thin, attach an 80mm fan, and clip it to the card when I install it.

(Photo is the first duct I made to have a fan blow over the 4x HDDs protruding ~3cm from the 3x 5.25" bays in server #2's former case. It is illuminated from inside to demonstrate how thinly it was printed)

#3dprinting #make #tech #homelab #CursedHomelab #autodeskfusion #fusion360 #cad

Cursed Homelab Update:

Brought to you by acceptably loud fans.

So after solving my Cable Buying Problems I have transplanted Server #2 into the new case and ... the fans are acceptably loud for a datacentre. They're not acceptably loud for the room adjoining my bedroom, but they're "whooshy" loud, not "rattly" loud, so it's not bad.

NOT GOOD AT ALL, but not bad.

However this has revealed the next facet of my Cable Problems: they're too short for the fancy SAS card I bought last year.

You see, the cards I was replacing (ancient first-gen LSI SAS cards) had the connectors at the end of the card closest to the drives, so I roughly guesstimated the distance between the backplanes and the cards as 50cm and bought cables that long.

This card is one of the slightly cursed ones that have the connectors at the other end, almost as if they were trying to minimise differences between the ones with internal and external connectors.

So they didn't reach. Or rather, I couldn't plug one into the furthest connector on the card. Or it doesn't work. I don't know and I can't easily check, so: new SAS card time.

This time it's a Cisco-branded (apparently) LSI SAS 9271-8i, so I do my usual dance of upgrading the firmware to the latest version, making sure the BIOS is installed and enabled, then plugging in some test disks to check the ports.

Except they don't show up and the handy "jbod=on" command doesn't work.

And it turns out that card is Enterprise (adj, derogatory) so the latest few revisions of the firmware dropped JBOD support. Oh and it runs _dangerously_ hot. Like "burn yourself on the heatsink" hot. Like "temperature sensor reading 170 degrees might be correct" hot.

No JBOD only RAID. This is an enterprise card for enterprise customers, we'll have no homelabbers here.

Ugh.

Thankfully the server it's going into has good airflow, but I'm seriously thinking about re-doing the thermal paste, fixing the heatsink so it sits flat, and 3D printing a fan shroud. Y'know, for paranoia.

Oh, and the SAS to SATA cables I bought? Wrong way around. These are for drives, not controllers, so they don't work. Don't ask. SAS is nuts.

#homelab #cursed #cursedhomelab #lsi #sas