Tomorrow I will be doing a talk at #FrOSCon about a project that I have been working on for a while: #Signstar - a secure signing environment based on @nitrokey's #NetHSM

https://programm.froscon.org/2024/events/3139.html

#FrOSCon2024 #Rust #RustLang #DigitalSigning #ArchLinux #OpenPGP #SecureBoot #Packaging #Automation #HardwareSecurityModule #HSM

Lecture: Boring infrastructure: Building a secure signing environment | Sunday | Schedule FrOSCon 2024

@dvzrv Just watched it; it was an interesting talk.

I do have 1 specific question: you mentioned that N out of M build machines need to repro identical artifacts for automatic deployment. Why not M out of M and when that isn't the case fall back to manual intervention?

Seems a bit weird to just kinda ignore the ones where the rebuild wasn't identical. (I do know that currently a decent chunk of packages aren't reproducible, but I am not talking about those.)

@NekkoDroid thanks!
FWIW, the threshold signing is something we won't be able to do in the near term.

In that case, n out of m refers to our future build machines: if we have m in total (let's assume four), we would only need n of them (e.g. two out of four) to verify our reproducibility assumption.

Going for m out of m is somewhat of an economic question (rebuilding e.g. QEMU is costly and takes time), but also one of usefulness (do we need more than one or two validations?).
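To illustrate the idea, here is a minimal sketch of such an n-out-of-m check (hypothetical helper, not actual Signstar code): each of the m build machines reports an artifact digest, and automatic deployment is allowed only when at least n of them produced an identical digest.

```rust
use std::collections::HashMap;

/// Hypothetical helper: `hashes` holds one artifact digest per build
/// machine (m machines in total). Returns true when at least
/// `threshold` (n) machines agree on an identical digest, i.e. the
/// reproducibility assumption is considered verified.
fn reproducibility_quorum(hashes: &[&str], threshold: usize) -> bool {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for &h in hashes {
        *counts.entry(h).or_insert(0) += 1;
    }
    counts.values().any(|&count| count >= threshold)
}

fn main() {
    // m = 4 machines, n = 2: two identical digests are enough.
    let digests = ["abc123", "abc123", "def456", "abc123"];
    assert!(reproducibility_quorum(&digests, 2));

    // An m-out-of-m policy would fail here (one machine differs),
    // which is where manual intervention would kick in instead.
    assert!(!reproducibility_quorum(&digests, 4));
}
```

With n < m, the one diverging machine doesn't block deployment; with n = m, any single divergence forces a manual decision, which is the trade-off discussed above.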

@dvzrv makes sense, I thought M referred to the number of systems the build for package X was run on, and only N of those needed to agree on the repro status.

Having 3 for smaller packages and 2 for larger ones makes sense, as with the larger ones it's more likely that something is gonna differ.

@dvzrv Thanks for the mention and wishing you a great talk! 💪 😎