Source-Level Debugging of Compiler-Optimised Code: Ill-Posed, but Not Impossible, https://dl.acm.org/doi/10.1145/3689492.3690047. There's almost no research in this area, so it's good to see someone thinking about these challenges. I'd have liked to see a discussion of undefined behavior specifically, since there's a paradox: C programmers are most likely to reach for a debugger exactly when UB is present, and UB poses fundamental challenges to source-level debugging.
I do think there is a potential resolution, but it requires the parallel source-level simulation to have a complete runtime UB detector along the lines of Rust's Miri, which involves a level of simulation overhead orders of magnitude beyond what you'd normally expect (or accept) for deoptimized code or source-level simulation. Speaking of Miri, there's a forthcoming POPL paper, "Miri: Practical Undefined Behavior Detection for Rust": https://plf.inf.ethz.ch/research/popl26-miri.html
And I don't see how you could do the paper's proposed attach-on-demand simulation (intended to avoid simulating from the beginning of execution to catch up) for something like UB detection, since you potentially need to know the complete provenance of every pointer in the program state.
There are also well-known issues with record-and-replay-style approaches around fine-grained thread-race non-determinism. Existing systems like rr generally take control of scheduling and run everything serialized, so even with zero administrative overhead you'd still incur an upfront slowdown (before you even attach the debugger) proportional to the parallelism lost by running your program on one core. Microsoft's recorder takes a different approach but has other trade-offs.