145 Followers
62 Following
74 Posts
An open-eyed man falling into the well of weird warring state machines. I mostly speak on (offensive) cybersecurity issues.
twitter: https://twitter.com/udunadan
age key: age1cvckqvwfqcx76mnppys8zleaxjwrnp7s7upktlydkhh7l8wns5ws9qpad4
old posts: https://ioc.exchange/@udunadan
Exploit for CVE-2022-4262. Fukin finally! Shoutout to @_clem1 for finding the ITW exploit. And shoutout to @5aelo, @bjrjk, @alisaesage for their RCAs and prior analysis of the vuln :).
https://github.com/mistymntncop/CVE-2022-4262
Rare footage of a vulnerability researcher seeing his bug being reported by someone else
Sometimes I think the only substantial progress I've made in developing code review skills has come from switching between completely different programs and applying the experience gained studying one to the other.
This is one of the hardest parts, because all you're left with is sitting and thinking. No code to look up, just trying to be creative. An immediate block on all progress, with no automatable or repetitive steps. Swimming alone in an ocean of possibilities with zero of them in sight.
Me trying to figure out how to trigger a complex condition in the program
Whenever I see that a finding was made with the help of an automated tool, it gives me hope: automation is limited, so there is a chance for me to think deeply and review things manually. But when it's a manual find by somebody more qualified, there is less hope. Even if you are less qualified (which isn't guaranteed, and is a dynamic variable in a dynamic space in itself), you can outpace other actors by allocating more time, since time is a finite thing. But I still hold that it's a less optimistic prospect than competing with automation.
Many-eyes bias (thinking there are a lot of analysts looking at what you're looking at, with similar ideas) is hard to bypass at times, because once you gain a technical understanding of a piece of a program, it seems obvious and easy to understand, while that may not be the case at all.
Strengthening the Shield: MTE in Heap Allocators

Introduction In 2018, with the release of ARMv8.5-A, a brand-new chip security feature emerged: MTE (Memory Tagging Extensions). Five years later, in 2023, the first smartphone to support this feature was released — Google Pixel 8 — marking MTE's official entry into the consumer market. Although the feature is not yet enabled by default, developers can turn it on themselves for testing. Despite MTE being a powerful defense against memory corruption, no comprehensive analysis of its defensive boundaries, capabilities, and performance impact has yet appeared online.

DARKNAVY
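The article above goes into the allocator details; as a rough intuition for why MTE catches use-after-free, here's a toy Python model of the tagging scheme (class and method names are mine, not a real MTE API — real MTE keeps a 4-bit tag per 16-byte granule in hardware, stores the matching tag in the pointer's unused top byte, and faults on any access where the two disagree):

```python
GRANULE = 16   # MTE tags memory in 16-byte granules
TAG_BITS = 4   # 4-bit tags: 16 possible values

class TaggedHeap:
    """Toy software model of MTE-style tag checking (illustration only)."""

    def __init__(self, granules=64):
        self.mem = bytearray(GRANULE * granules)
        self.tags = [0] * granules  # one tag per granule
        self._next_tag = 0

    def alloc(self, granule_idx):
        # Give the granule a fresh tag; real hardware randomizes this (IRG).
        self._next_tag = (self._next_tag + 1) % (1 << TAG_BITS)
        self.tags[granule_idx] = self._next_tag
        # A "pointer" here is (address, tag); hardware packs the tag
        # into the top byte of the 64-bit pointer instead.
        return (granule_idx * GRANULE, self._next_tag)

    def free(self, ptr):
        addr, _ = ptr
        # Retag on free, so any stale (dangling) pointer stops matching.
        idx = addr // GRANULE
        self.tags[idx] = (self.tags[idx] + 1) % (1 << TAG_BITS)

    def load(self, ptr):
        addr, tag = ptr
        if self.tags[addr // GRANULE] != tag:
            raise MemoryError("tag check fault")  # SIGSEGV under real MTE
        return self.mem[addr]

heap = TaggedHeap()
p = heap.alloc(0)
heap.mem[p[0]] = 42
print(heap.load(p))  # 42: pointer tag matches the granule tag
heap.free(p)
# heap.load(p) would now raise MemoryError: the use-after-free is caught
```

Note the probabilistic limit this toy model makes visible: with only 4 tag bits, a stale pointer still matches a retagged granule 1 time in 16, which is part of why the defensive boundary (rather than just the mechanism) is worth analyzing.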

While it's common to bash vendors for their incorrect responses and evaluations of bug reports, I'd argue that the same level of absurdity occurs on the researcher side as well. And to judge that, you don't have to be an NSO team lead; just make up a scenario of using a bug yourself.

Say you have an unstable PoC that reproduces once in 1000 runs, works only on version 69.420.1.0.1 but not 1.0.0, relies on experimental features, and requires a couple of other bugs to get to valuable code exec. If you went blackhat, how much money would that make you? Not much, I'd guess. Now imagine writing and stabilizing it for all versions in use out there. A whole lot of other things become obvious once you put yourself in a client's position. Many researchers don't, and that puts them close to vendors doing inaccurate risk assessments.

When trying to learn a topic (for example, a memory allocator) or acquire a skill (say, debugger scripting), it might turn out that nothing stays in your head afterwards. That's because the knowledge wasn't called for; learning only when you actually need something is what cements it.

Let your learning process follow your needs. They are a great measure for resource allocation.