I haven't been focused on security research since ~November of last year for a reason: I wanted to wait and see how VRPs and bug bounty programs would adapt to an infosec landscape rapidly changing due to AI tooling.

TL;DR: It was hard to make decent money from VRPs. It's now even harder. It's not researchers' fault.

Thread below.

An avalanche of reports, of both good and bad quality, has strained existing processes and teams. At the same time, improved tooling has helped defenders do better internal research and speed up patching, among other processes.

Overall, security research might be better with AI tooling, but we also need to throw more humans at the problem. And pay them well.

Recent trends are generally good for users (better security, hopefully). But it's bleak for security researchers who focus on bug bounties, myself included.

I can probably no longer make a decent income from VRPs, and I don't know how many others can either, given the widespread suspension of VRPs and the narrowing of scopes to higher-difficulty issues (~good) with historically low rewards (awful). I had seen this coming for a year or so, but it's still disappointing.

The tech industry is now reaping what it started sowing years ago with irresponsible AI rollouts and AI-centric budgets.

There are plenty of macroeconomic reasons, but within tech I mainly blame the higher-level executives who push for limited human budgets and treat everything as a problem AI can solve.

That's not even mentioning the limited upsides of most AI deployments compared to their impact on Earth's environment, human lives, and economic systems.

But the impacts of AI have been discussed at length. I've generally been pessimistic about where tech and capitalism are going, so I'm not going to pretend things will get better for society if current trends continue.