100% code coverage is near-meaningless - but is there a good measure to use?

https://feddit.uk/post/443660

Is there some formal way (or ways) of quantifying potential flaws or risk, and of ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or a risk assessment of some kind? Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on. But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
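On the "complexity measure" idea: cyclomatic complexity is the classic one - roughly, one plus the number of decision points in a function - and high-complexity code is a natural place to demand more tests. A minimal sketch in Python using only the stdlib `ast` module (a rough approximation of the idea, not the full McCabe definition):

```python
import ast

def complexity(source: str) -> int:
    """Crude cyclomatic-style score: 1 + the number of branch points.

    Counts if/for/while/except/ternary/assert, plus one extra branch
    for each additional operand in a boolean expression (a and b and c
    contributes two).
    """
    tree = ast.parse(source)
    score = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While,
                             ast.ExceptHandler, ast.Assert)):
            score += 1
        elif isinstance(node, ast.BoolOp):
            score += len(node.values) - 1
    return score

SRC = """
def f(x):
    if x > 0 and x < 10:
        return x
    for i in range(x):
        if i % 2:
            x += 1
    return x
"""

print(complexity(SRC))  # 1 + if + and + for + if = 5
```

Tools like radon (Python) or the built-in inspections in many linters compute the real metric; the point is that you could weight your coverage expectations by scores like this rather than chasing a flat percentage.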

80%. Much beyond that and you hit diminishing returns on the investment of writing the tests.
I think this is a good rule of thumb in general, but the best way to decide on the right coverage is to go through the uncovered code and make a conscious decision about it. In some classes 30% may be fine; in others you want to go all the way up to 100%.
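That "conscious decision" workflow can be made enforceable with coverage.py: set a global floor and explicitly annotate the lines you've decided not to test. A minimal `.coveragerc` sketch (the 80 here is just the rule-of-thumb figure from above, not a recommendation for any particular project):

```ini
[report]
# Fail the build if total coverage drops below the agreed floor.
fail_under = 80
show_missing = True

exclude_lines =
    # Deliberately untested lines are marked in the source, so every
    # exclusion is a visible, reviewable decision rather than a gap.
    pragma: no cover
```

Running `coverage report` then lists the remaining uncovered lines, which is exactly the list you'd walk through when deciding whether each one deserves a test or a `# pragma: no cover`.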
God I fucking wish my projects were like this