The "badwords" script we use in curl CI to detect words in documentation and source code that we want to avoid took 48 seconds to run just a few days ago.

Then it took 8 seconds after an optimization made by me.

Then 3.6 seconds once the regexes were improved by @icing.

Then 1.1 seconds with more tweaks on the regex use.
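The thread doesn't spell out what the regex tweaks were, but a common way to speed up this kind of word checker is to fold many per-word patterns into one compiled alternation so each line is scanned once instead of once per word. A minimal sketch of that idea (the word list and function names here are hypothetical, not taken from the actual curl script):

```python
import re

# Hypothetical word list; the real badwords script maintains its own.
BAD_WORDS = ["utilize", "very", "simply"]

# Slow shape: one compiled pattern per word, so every line of input
# is scanned once for each word in the list.
PER_WORD = [re.compile(r"\b%s\b" % re.escape(w), re.I) for w in BAD_WORDS]

def scan_naive(text):
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for pat in PER_WORD:
            if pat.search(line):
                hits.append((lineno, pat.pattern))
    return hits

# Faster shape: all words joined into a single alternation, compiled once,
# so each line is scanned in a single pass.
COMBINED = re.compile(
    r"\b(?:%s)\b" % "|".join(map(re.escape, BAD_WORDS)), re.I)

def scan_combined(text):
    return [(lineno, m.group(0))
            for lineno, line in enumerate(text.splitlines(), 1)
            for m in COMBINED.finditer(line)]
```

With a word list in the hundreds, collapsing the per-word loop into one alternation cuts the number of passes over the input proportionally, which is the kind of change that can turn seconds into fractions of a second.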

Now @vsz is working on taking it yet a little further...

@bagder @icing It started at 2 minutes, and saw an initial speed-up to 1.5 minutes by moving the runner from Intel to ARM: https://github.com/curl/curl/commit/86190dccb34981e43a00bbcc9215ed7e48ba0755 (I found that nice, but unexpected)

The goal was also to make them finish as fast as before, or faster overall. checkdocs/proselint before: https://github.com/curl/curl/actions/runs/21255607528/job/61169136666 (22s); after: https://github.com/curl/curl/actio...

@vsz @icing ah right, my timings are on my local machine