I want to see if the thoughts of others align with mine.

An FPGA toolchain should optimise for:
- highest worst-case Fmax: 61.4%
- highest average Fmax: 27.3%
- highest best-case Fmax: 11.4%
@lofty Depends how many parallel jobs I am allowed to submit to get that golden seed 😅
@wren6991 I'm computing distributions of 100 runs, so

@lofty If I'm comparing RTL changes for fmax then I'll usually do ~100 runs and take the median. Seems reasonable because 8 runs aiming for median frequency should have only a 0.4% chance of all failing. Most client machines have 8 parallel threads of execution available these days.

I don't particularly care about the mean because if I get a worst-case result then you can bet I'm re-rolling, so the tail values don't matter.

I don't think it's *good* that seed sweeping is the optimal way of using the tools, but the fact is it reduces variance as well as increasing expected fmax.

@lofty Also bear in mind my comparison point here is Vivado which has a >> 0.4% chance of just segfaulting for no apparent reason, so I find that kind of probability acceptable
@wren6991 @lofty one trick i've seen for dealing with fpga toolchains that i really like is to LD_PRELOAD this bad boy. makes them much less flakey.

@mei @lofty @wren6991 Woah... That is... Something special.

So it deals with use-after-free and miscalculated memory sizes, as well as buffer overruns?

@loke @mei @lofty I bet that serialising all the free/malloc calls also hides some races
@wren6991 @loke @mei @lofty about that, I believe the orig_malloc should happen before the pthread_mutex_unlock... Or did I miss something?
@f4grx @loke @mei @lofty Oh yeah, you're right. It doesn't serialise the calls