100% code coverage is near-meaningless - but is there a good measure to use?

https://feddit.uk/post/443660


Is there some formal way of quantifying potential flaws or risks, and of ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or a risk assessment? Experience tells me I need to be extra careful around certain things: user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on. But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
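For the complexity angle, the usual candidate is McCabe cyclomatic complexity: a function with more decision points needs more test cases to exercise its paths, so high-complexity code in a risky area is a natural place to demand extra tests. Here's a minimal sketch of the idea (assuming Python; the `classify` function and the list of branch nodes are illustrative, and real tools such as `radon` do this far more thoroughly):

```python
import ast

# AST nodes that add a decision point; counting them approximates
# McCabe cyclomatic complexity as 1 + number of branches.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    branches = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))
    return 1 + branches

source = """
def classify(age):
    if age < 0:
        raise ValueError("negative age")
    if age < 18:
        return "minor"
    return "adult"
"""

for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.FunctionDef):
        print(node.name, cyclomatic_complexity(node))  # classify 3
```

The point wouldn't be the absolute number, but flagging complex functions in the risky areas you list and requiring more tests there, rather than chasing a flat coverage percentage.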

On top of, or better, in addition to mutation testing, some amount of property-based testing is always great where it counts.
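As a concrete illustration of property-based testing, here is a sketch assuming Python with the `hypothesis` library; the round-trip property and the stock JSON serializer stand in for whatever encode/decode or migration code you actually have:

```python
import json

from hypothesis import given, strategies as st

# Property: decoding the encoded value returns the original input.
# Hypothesis generates many dictionaries (including awkward unicode
# keys and edge-case integers) instead of a few hand-picked examples,
# and shrinks any failure to a minimal counterexample.
@given(st.dictionaries(st.text(), st.integers() | st.text()))
def test_json_roundtrip(data):
    assert json.loads(json.dumps(data)) == data
```

Run it with pytest; each test invocation executes the body against a batch of generated inputs. This pairs well with mutation testing: a mutation tool makes small edits to the code under test and checks that at least one test fails, which measures whether the suite can actually detect faults rather than merely execute lines.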