The other day: “hmm, building Blender got quite a bit slower” (by 10-15%). What caused it? Of course, enabling the “plz C++20” compiler option instead of the previous “plz C++17”. This is without actually *using* any C++20 features yet.
😭
@aras This is a big and common thing with C++20.
My memory is that a bunch of things become constexpr and inline in C++20, significantly increasing the cost of the standard library headers in _every_ compile. =[
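A minimal sketch of how one could measure that header cost directly, assuming `g++` (or `clang++`) is on PATH; the headers chosen here are just illustrative:

```shell
# Compile a TU that only pulls in some common standard headers,
# once per language standard, and compare wall-clock times.
cat > tu.cpp <<'EOF'
#include <vector>
#include <string>
#include <algorithm>
EOF

# Same source, two builds; only the -std flag changes.
time g++ -std=c++17 -c tu.cpp -o /dev/null
time g++ -std=c++20 -c tu.cpp -o /dev/null
```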
@ktf @aras We've seen no meaningful improvements in compile time from concepts, TBH. Not that they're bad for compile times, but most of the time doesn't go to the extraneous instantiations in `enable_if` that concept checks avoid.
Much more of it comes from excessively slow/complex checking of inline function bodies in every translation unit.
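For context, a sketch of the two styles being compared here: the pre-C++20 `enable_if` SFINAE check versus a C++20 constraint. The function names are illustrative, not from the thread:

```cpp
#include <type_traits>

// Pre-C++20 SFINAE: rejecting a candidate still instantiates the
// std::enable_if machinery in every translation unit that sees it.
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
T twice(T x) { return x + x; }

#if defined(__cpp_concepts)
// C++20 constraint: the check is evaluated directly, without the
// extra template instantiations that enable_if needs.
template <typename T>
  requires std::is_floating_point_v<T>
T twice(T x) { return x + x; }
#endif
```

The point upthread is that while the concept form does skip those `enable_if` instantiations, that saving is small next to the cost of re-checking inline function bodies everywhere.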
@JSAMcFarlane @chandlerc @ktf @aras
Come to my talk at 'using std::cpp' (Madrid, Spain) to learn about my measurements from a real production codebase.
@aras libc++ in particular has done good work to factor its headers and generally keep the core STL headers from growing significantly because of ranges or similar things. Those you only pay for _if you use them_.
Don't get me wrong, ranges as standardized cause _obnoxiously_ slow compile times, and I would generally advise against ever including them in code. But by and large you shouldn't pay the price unless you actually include a ranges header... And if you do, it might be worth filing a bug.
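One way to follow that advice in practice is to stick to the classic iterator-pair algorithms from `<algorithm>`, which you pay for anyway, instead of pulling in `<ranges>`. A sketch (the function name is made up; the exact header-cost difference depends on your standard library):

```cpp
#include <algorithm>
#include <vector>

// Classic iterator-pair algorithm: same result as a ranges pipeline
// like `v | std::views::transform(...)`, without including <ranges>.
std::vector<int> doubled(const std::vector<int>& v) {
    std::vector<int> out(v.size());
    std::transform(v.begin(), v.end(), out.begin(),
                   [](int x) { return x * 2; });
    return out;
}
```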