Vito Botta (@vitobotta)
181 Followers · 144 Following · 143 Posts
Lead Architect at Brella. Creator of SprintPulse - The Retrospective Tool Teams Actually Love - Try it out at https://sprintpulse.io/
About: https://vitobotta.com/
SprintPulse: https://sprintpulse.io/
GitHub: https://github.com/vitobotta
Location: Finland 🇫🇮

New functionality may be overlooked by some hunters, and so may undocumented API endpoints or features - that's where the gold is, in my opinion, and why it's worth monitoring the target company's changelogs or release notes. Even if a new feature *is* documented, some aspects of it may not be, and those are worth investigating.
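A rough sketch of the changelog-monitoring idea above: keep yesterday's copy of a target's release notes and diff each new fetch against it. The fetching itself is left out; only the diff helper is shown, and the sample strings are invented.

```python
def new_lines(old_text: str, new_text: str) -> list[str]:
    """Lines present in the new changelog but not the old one --
    candidate new or changed features worth digging into."""
    seen = set(old_text.splitlines())
    return [line for line in new_text.splitlines() if line not in seen]

# Invented sample data, just to show the shape of the output:
old = "v1.0 - initial release"
new = "v1.1 - new export endpoint\nv1.0 - initial release"
print(new_lines(old, new))  # ['v1.1 - new export endpoint']
```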

2/2

In bug bounty hunting-related communities - Discord, Reddit etc - I keep seeing people suggest looking for bugs, or even focusing on a specific class of bugs.

IMO that can lead to wasted time and lost opportunities, depending on the type of target. I'd say instead: stop looking for bugs. Start looking for *features* nobody documented, or features that were added or changed very recently, and invest time in understanding them well and finding ways to abuse them.

1/2

It's kinda weird because I wouldn't expect the agent to make much of a difference, compared to the model, but yeah, it seems it does for me.

Just observations from daily use. Your mileage may vary depending on what you work on.

Any other tool I missed that is worth trying?

Been testing the main AI coding tools over the past few months, including Claude Code, OpenCode, Droid, Cline, Kilo Code, Roo Code and others. Each has its pros and cons.

But my favourite so far is OpenCode. Somehow it handles complex tasks better than anything else I've tried, not sure why. It seems to follow my prompts more closely, and it also feels faster overall.

1/2

Every time I try Mistral models, I want them to be good. A European AI champion would be brilliant. But in practice? They just feel behind. Maybe the new release will change that. I will probably give it a go. But honestly, I have lost hope in European AI companies delivering something that genuinely competes.

Kubernetes is becoming the default for AI infrastructure. CNCF highlighted the convergence - same orchestration benefits for distributed training and inference.

NVIDIA's Nemotron 3 Super targets agentic AI with 5x throughput, shipping Kubernetes deployment cookbooks for vLLM and SGLang.

If ML and containers are separate in your stack, time to reconsider.
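For a concrete picture of what "ML on Kubernetes" means in practice, here's a minimal, hedged sketch of a vLLM inference Deployment - the model name and resource numbers are placeholders, not taken from NVIDIA's cookbooks:

```yaml
# Sketch only: a single-replica vLLM server on a GPU node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-inference
spec:
  replicas: 1
  selector:
    matchLabels: { app: vllm }
  template:
    metadata:
      labels: { app: vllm }
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest          # public vLLM image; pin a tag in production
          args: ["--model", "your-org/your-model"] # placeholder model name
          resources:
            limits:
              nvidia.com/gpu: 1                    # schedules the pod onto a GPU node
          ports:
            - containerPort: 8000                  # vLLM's OpenAI-compatible API port
```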

They grabbed the PyPI publishing token from a .env file and pushed malicious code.

The fix? Pin your GitHub Actions to specific commits, not version tags. And maybe stop putting secrets in .env files.
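What the pinning advice looks like in a workflow file - the action shown is Trivy's real GitHub Action, but the commit SHA below is a made-up placeholder, not LiteLLM's actual pipeline:

```yaml
# Tag-pinned (mutable - the tag can be repointed at malicious code):
- uses: aquasecurity/trivy-action@0.28.0

# Commit-pinned (immutable - resolves to exactly one snapshot of the code;
# the SHA here is a placeholder, keep the tag as a comment for readability):
- uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567  # 0.28.0
```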

If you pulled either of those versions, rotate your credentials now.

2/2

LiteLLM just got hit by a supply chain attack. 95M monthly downloads, and two versions (1.82.7 and 1.82.8) had credential-stealing code slipped in.

The attack path is worth noting. The attackers compromised Trivy, a vulnerability scanner used in LiteLLM's CI/CD pipeline.

1/2

What we're seeing isn't just growing pains - it's perhaps a signal that the model needs some changes.

But how to adapt? The answer certainly isn't AI vs. humans. It's AI + humans. But too many wannabe hunters without real skills are using AI to find potential bugs, which they report without validating and most often without even understanding.

I think that addressing this huge issue is going to be the biggest challenge for bug bounty programs and platforms in the AI era.

2/2

The bug bounty landscape is shifting dramatically in 2026.

AI-generated reports are flooding programs - some platforms report that up to 70% of submissions are now AI "slop." The curl project even shut down its program to stop it. Unbelievable.

This isn't just noise. It's a fundamental challenge to the model:

- triagers are overwhelmed
- legitimate researchers compete with AI
- signal-to-noise ratio is collapsing

1/2