As declared by an expert in cryptography who knows how to guide the LLM through debugging low-level cryptography, that's good.
It is quite different if you are not a cryptographer or a domain expert.
"you think"
Well, I know that when one is vibe coding, they are not necessarily "thinking" about the details of how something is built; they just roll the dice, let the agent generate the details, and don't know whether the result is even correct or free of critical bugs.
That is the 'gambling', with Anthropic acting as the casino: giving usage limits, promotions, and free tokens until they pull the rug (which they just did), leaving users to suffer withdrawal symptoms from their boosted usage.
There you go. So when Azure has an outage, so will Anthropic (and Github).
Now expect both of them to have unstable uptime and outages every week.
Just in time, too: the AI slot machine owner will stop their promotional offers [0] for free $20 spins on March 28th (it was originally March 27th, and they recently added an extra day of gambling).
Now, with this additional change, their usage limits are there to get you to spend more on rolling the dice at the casino.
Have fun!
[0] https://support.claude.com/en/articles/14063676-claude-march...
Something the 1M+ people studying for interviews and throwing pieces of paper (CVs, cover letters, degrees) at job applications do not have:
A verifiable track record beyond the CV: valuable experience that is extremely hard to fake and that you did not know you needed.
As I said before:
1. Open source contributions to high-profile / major repositories (with code review in the open with core maintainers). No hello-world / demo projects.
2. Production-grade shipped projects / side projects with paying customers or high-profile companies using them, bringing in recurring revenue.
3. Several presentations at conferences, whether about your project as a library author or maintainer, or at a company showcasing your engineering expertise.
All are extremely difficult to fake, easy to verify, and require a level of effort from the applicant that filters out 90% of the noise. Years of experience is not a requirement but a bonus.
The other methods, like LeetCode, HackerRank, take-home projects, or quiz trivia, waste time for both the interviewer and the candidate, and all of them can be cheated easily using AI.
It is that simple.
Given the issues at AWS with Kiro, and at GitHub, we already have a few high-profile examples of what happens when AI is used at scale, even when you let it generate tests, which is something you should absolutely not do.
Otherwise in some cases, you get this issue [0].
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.
Disagree.
So when there is a bug, outage, or error due to "automatic programming", are you ready to be first in line to accept accountability (the LLM cannot be) when it all goes wrong in production? Even then, I am not sure that would be enough, or that this would work in the long term.
No excuses like "I prompted it wrong" or "Claude missed something" or "I didn't check over because 8 other AI agents said it was "absolutely right"™".
We will then see lots of issues like this case study [0], where everything looks fine at first and all tests pass, but in production the logic had been misinterpreted by the LLM during a refactor, down to a single wrong keyword.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
This tells you all you need to know about how weak a C executable like QuickJS is against LLMs exploiting it (if you, as an infosec researcher, prompt them correctly to find and exploit vulnerabilities).
> Leak a libc Pointer via Use-After-Free. The exploit uses the vulnerability to leak a pointer to libc.
I doubt Rust would save you here unless the binary makes very limited calls to libc, but it would be much harder for a use-after-free to happen in Rust code.
> Pwno is a AI cybersecurity startup...
We all know that LLMs were used to find these vulnerabilities, specifically on high impact projects. That's fine.
However, my only question is who actually provided the patch: The maintainers of FFmpeg? The LLM that is being used? Or the security researchers themselves after finding the issue?
It seems that these two statements about the issue are in conflict:
> We found and patched 6 memory vulnerabilities in FFmpeg in two days.
> Dec, 2025: avcodec/exif maintainer provided patch.