Meet Covpeek - a fast, language-agnostic coverage parser that extracts coverage data across projects. No more juggling formats - unify reports and focus on quality.

Try it today: https://github.com/Chapati-Systems/covpeek

#covpeek #codecoverage #devops #coverage #testing #ci #rust #golang


When writing tests, I do not optimize for code coverage

I have now been on both sides of the code coverage debate. I have advocated that test suites should achieve higher code coverage. I have also advocated against using code coverage for blocking PR merges.
And fair warning: what I say next is based on my lived experience as a developer, and may not apply to all software engineering contexts.

Some things I have learned to be true over my time researching and practicing software engineering:

  • You can execute the same line of code a million times without ever catching a bug in that line (see the sketch after this list).
  • Higher code coverage does not mean better tests.
  • Code coverage can be gamed.
  • Some code cannot be executed naturally in a test environment. A lot of UI and configuration code falls neatly in this category.
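
To make the first point concrete, here is a minimal Go sketch (my illustration, not from any particular codebase): the test executes every line of a buggy function, so line coverage reads 100%, yet the missing assertion means the bug is never caught.

```go
package calc

import "testing"

// Add is deliberately buggy: it should return a + b.
func Add(a, b int) int { return a - b }

// TestAdd executes every line of Add, so line coverage reports 100%.
// But it asserts nothing, so the bug survives no matter how many
// times the line runs.
func TestAdd(t *testing.T) {
	_ = Add(2, 2) // executed, result ignored
}
```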

Given that, I have landed on a very basic philosophy around writing tests, and how code coverage factors in:

I do not optimize my test code for higher code coverage. Instead, I use coverage data to prioritize what I need to test and what I can leave untested.

The coverage metric itself is not meaningful to me. I would rather have only 30% code coverage if I am testing the most critical sections of the codebase. I prefer that to covering 70% of the codebase when that 70% may be peripheral to the program's implementation.

Beyond the coverage metric itself, I am more interested in the set of lines that got executed by the tests, and I like to examine this data method by method or function by function. That tells me whether the lines I think should get executed are actually getting executed. I also use that insight to think about the program inputs I need to conjure to exercise the lines of code that are being skipped.
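
As one concrete (hypothetical) version of this workflow, here is a short Go sketch using the golang.org/x/tools/cover package to list the statement blocks that a `go test -coverprofile=cover.out ./...` run never executed:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/tools/cover"
)

func main() {
	// Parse the profile written by `go test -coverprofile=cover.out ./...`.
	profiles, err := cover.ParseProfiles("cover.out")
	if err != nil {
		log.Fatal(err)
	}

	// Report every statement block the tests skipped, per file, so you
	// can decide which inputs would exercise those lines.
	for _, p := range profiles {
		for _, b := range p.Blocks {
			if b.Count == 0 {
				fmt.Printf("%s: lines %d-%d never executed\n",
					p.FileName, b.StartLine, b.EndLine)
			}
		}
	}
}
```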

Indeed, that sort of analysis can be time consuming. But to me it is no different from the time I spend investigating bugs in production code with breakpoints and step-through debuggers. I use the full set of covered (and uncovered) lines as data in gaining a deeper understanding of how my test code is working.

Personal take: Code coverage is not something that should be optimized for. It should be treated as an honest reflection of what got executed by the test(s), and what did not. That gives real insight into the inner mechanics of the test code.

#codeCoverage #programming #softwareMetrics #softwareEngineering #testing

For a few weeks I have been working on improvements to @phpunit #codecoverage features.

Just released a blog post detailing the approach and all the ideas and results, including deep links to all the relevant pull requests.

https://staabm.github.io/2025/11/26/speedup-phpunit-code-coverage.html

Speedup PHPUnit code coverage generation

While working on the PHPStan codebase I recently realized we spend a considerable amount of time generating code-coverage data, which we need later on to feed into the Infection-based mutation testing process.

Supercharge Your Test Coverage with GitHub Copilot Testing for .NET

Boost your testing workflow with GitHub Copilot testing for .NET, available now in Visual Studio. Automatically generate, build, and run high-quality unit tests for files, projects, or entire solutions.

What is the name of the metric for when you have 100% code coverage, but with 2, 4, or 10 tests covering each line of code? Coverage depth? It would be a nice statistic to have #codecoverage
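
One way to make that idea concrete (calling it "coverage depth" for now): run each test in isolation with coverage enabled, record which lines each test executed, and count distinct tests per line. A minimal Go sketch with made-up data:

```go
package main

import "fmt"

func main() {
	// Hypothetical per-test line coverage, e.g. gathered by running
	// each test alone under a coverage profiler.
	coveredLines := map[string][]int{
		"TestParse":  {10, 11, 12, 20},
		"TestFormat": {10, 11, 30},
		"TestRender": {10, 20, 30},
	}

	// depth[line] = number of distinct tests that executed the line.
	depth := map[int]int{}
	for _, lines := range coveredLines {
		for _, line := range lines {
			depth[line]++
		}
	}

	for line, d := range depth {
		fmt.Printf("line %d covered by %d test(s)\n", line, d)
	}
}
```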

#PostgreSQL Differential Code Coverage—Now Automated Daily!

Just came across a fantastic initiative by [@nbyavuz](https://github.com/nbyavuz) that I think the Postgres community will appreciate.

They’ve built a script that generates **differential code coverage** between the latest release branch (currently `REL_18_STABLE`) and `HEAD`—a great way to track what’s being tested as the codebase evolves. Even better, it’s automated via GitHub Actions and published daily as an HTML report.

Script: https://github.com/nbyavuz/postgres-code-coverage/blob/main/code_coverage.sh
Live Report: https://nbyavuz.github.io/postgres-code-coverage
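
Not the author's actual script, but to make "differential coverage" concrete: a minimal Go sketch that reads two lcov traces (the `SF:`/`DA:` records lcov emits) and prints the lines covered on `HEAD` but not on the release branch:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// parseLCOV returns the set of "file:line" keys with a nonzero hit
// count from an lcov trace (its SF: and DA: records).
func parseLCOV(path string) (map[string]bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	covered := map[string]bool{}
	var file string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		switch {
		case strings.HasPrefix(line, "SF:"):
			file = strings.TrimPrefix(line, "SF:")
		case strings.HasPrefix(line, "DA:"):
			// DA:<line>,<hits>[,<checksum>]
			parts := strings.Split(strings.TrimPrefix(line, "DA:"), ",")
			if len(parts) >= 2 && parts[1] != "0" {
				covered[file+":"+parts[0]] = true
			}
		}
	}
	return covered, sc.Err()
}

func main() {
	if len(os.Args) != 3 {
		log.Fatal("usage: diffcov release.info head.info")
	}
	release, err := parseLCOV(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	head, err := parseLCOV(os.Args[2])
	if err != nil {
		log.Fatal(err)
	}
	for key := range head {
		if !release[key] {
			fmt.Println("newly covered:", key)
		}
	}
}
```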

Whether you're contributing to Postgres or just curious about test coverage trends, this is worth a look. Kudos to the author for making this public and maintainable!

If you have thoughts or suggestions, I’m sure they’d welcome feedback.

#CodeCoverage #OpenSource #DevTools #Testing #GitHubActions


Today I worked on phpunit/php-code-coverage to improve @phpunit performance when generating #codecoverage reports.

The just-pushed release improves my example use case by ~33%.

Consider sponsoring my open source work if you love such fixes 🙂

Understanding .NET code coverage is crucial for software quality. A badge can indicate your project's health, but it's essential to grasp its limitations. #CodeCoverage #dotnet

https://isaacl.dev/gox

Saw this during my morning run.
Made me think of generated unit tests... for generated code.
I came across this recently in a codebase, all in the noble pursuit of "code coverage."
#CodeThoughts #GeneratedTests #CodeCoverage #DeveloperLife #CodingInsights #SoftwareEngineering #ProgrammingHumor #iosdev