5 Followers
241 Following
894 Posts
Only EBNF's manual should start with a section called "Quick Guide to EBNF".

People worry a lot about losing knowledge — about "burned-down libraries".

Comparatively few people seem to worry about what happens if you take a billion books full of auto-generated, often-untrue junk text and *add* them all to the library.

In theory, nothing is lost. In reality, everything is lost, because nothing useful can now be found.

"In the beginning, there was man. And for a time... it was good." https://www.youtube.com/watch?v=sU8RunvBRZ8
The Animatrix - The Second Renaissance Part I (1/2) [HD]


In any event, if you want quote toots and functioning search, head on down to https://infosec.place

Happy Saturday

Akkoma

This is dope.

Ffuf adds hashing for blind payloads and regex-based response parsing.

VERY cool @joohoi 👏👏👏
---
RT @joohoi
ffuf 2.0 is out! There are a couple of new major features introduced, as well as updates to the project in general. I had way more to say than fits in the birdsite format, so here's a thread on a more applicable platform:

https://infosec.exchange/@joohoi/109806822104162973
https://twitter.com/joohoi/status/1621871589707993089

Joo N/A (@[email protected])

Attached: 1 image

ffuf v2.0 is out! There's a lot to tell about the release, updates on the status of the project, etc., so please bear with me through this wall of text. https://github.com/ffuf/ffuf

There are two major features introduced with this release. The first one I want to tell you about is the scraper functionality. In its simplest form, the scraper can be described as a feature that extracts specific data from responses. In the spirit of the design principles behind ffuf, this feature is very flexible and can be harnessed for a multitude of use cases depending on your individual needs and creativity. With this feature set, you can turn ffuf into a lightweight albeit blazing-fast web vulnerability scanner by creating a set of fingerprints for known-vulnerable pieces of technology in the stack. Adding rules to identify credentials like API keys will help you catch leaked sensitive data, and so on. All of this was designed in a way that gives you full control over how you wish to use it - the feature won't get in your way, while still powering all your web fuzzing runs in the future. You can define scraper rule groups to be always active or to be activated on demand. They can be applied to all responses, regardless of whether they would otherwise get filtered out by the filters configured for the run. Never miss the low-hanging fruit again!

There are two different flavors for creating the rules themselves. First: the absolute powerhouse called the regular expression, which many people in the field have a love-hate relationship with. Second: as regexes are notoriously bad at parsing HTML, which large parts of the intertubes consist of, there's another syntax available: jQuery-like selectors, which allow you to do things like pick up page titles simply with a rule like `html > head > title`. I wrote a bit more documentation about the scraper in the new ffuf wiki: https://github.com/ffuf/ffuf/wiki/Scraper So whether you just want to gather some simple technological data from the matches in your ffuf runs or build a vuln-scanning behemoth, now you have the tools to do so.

Another major feature in this release is FFUFHASH - request backtracking. I have seen many occasions where blind payloads have been sprayed all over, and when one of them finally fires, the user has another hard task ahead of them: figuring out which one of those 200k requests it might have been. To alleviate this, ffuf now provides a dynamic hash that can be mapped back to the initial request without filling up disk space by saving each and every raw request sent out. This works by saving the ffuf options for each run and deriving part of the hash from that individual run. The other part simply tells ffuf the payload's position in the fuzz input queue. So remember to include the FFUFHASH keyword as part of your interactsh SSRF callback subdomains and make backtracking as easy as `ffuf -search FFUFHASH`. More detailed documentation about this feature can be found in the wiki: https://github.com/ffuf/ffuf/wiki/Ffufhash-mapping

In addition to these, there's a ton of small changes and fixes. ffuf now uses XDG paths by default when creating its configuration directories, and the configurable requests-per-second rate is more robust and can be changed during the run in interactive mode, to mention a few.

Additionally, a few words about the status of the project & "sponsorware". I have decided to scrap the sponsorware model altogether, as having additional exclusive forks of the project etc. added too much maintenance overhead and effectively discouraged me from jumping on something whenever I had 15 minutes of time. While I love all of the people sponsoring my work on GitHub Sponsors, and I would love to give them something extra, I believe this is the way to go. I'm open to ideas for this! Seeing the email notification that someone considers the time I spend on the project worthy of a couple of dollars often lifts my spirits more than I care to admit, and works as a trigger to jump on a new cool feature or similar. While I'm not motivated by money, those few dollars carry a lot more weight than their nominal value - they're a concrete token of appreciation.

Phew, sorry for the wall of text, that's it. Remember to ffuf responsively y'all!
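The scraper idea - named rules run over every response body to pull out specific data - can be illustrated with a small standalone sketch. This is plain Python, not ffuf's actual rule engine or rule-file format; the rule names and patterns are made up for illustration:

```python
import re

# Toy scraper rules in the spirit of ffuf's scraper: each rule pairs a name
# with a regex that is run against every response body. (Illustrative only;
# real ffuf scraper rules live in its own rule files, documented in the wiki.)
RULES = {
    "page_title": re.compile(r"<title>(.*?)</title>", re.IGNORECASE | re.DOTALL),
    "aws_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # classic AWS access key ID shape
}

def scrape(body: str) -> dict[str, list[str]]:
    """Run every rule against one response body; collect all matches per rule."""
    return {name: rx.findall(body) for name, rx in RULES.items()}

response = "<html><head><title>Admin login</title></head>AKIAIOSFODNN7EXAMPLE</html>"
print(scrape(response))
# {'page_title': ['Admin login'], 'aws_key_id': ['AKIAIOSFODNN7EXAMPLE']}
```

Like the rule groups described above, a set-up like this runs against *every* response, which is why a leaked key gets caught even on pages a filter would otherwise discard.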
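The FFUFHASH scheme described above - one part of the ID derived from the run's saved options, the other part encoding the payload's position in the input queue - can be sketched in a few lines. This is a toy illustration of the concept, not ffuf's actual hashing algorithm:

```python
import hashlib

def ffufhash_like(run_config: str, payload_index: int) -> str:
    """Toy per-request ID: the first 8 hex chars identify the run (hash of its
    saved options); the rest encodes the payload's queue position. Enough to
    map a fired blind payload back to its exact request without storing every
    raw request on disk."""
    run_part = hashlib.sha256(run_config.encode()).hexdigest()[:8]
    return f"{run_part}{payload_index:06x}"

config = "-w payloads.txt -u https://target.example/FUZZ"  # hypothetical run options
hashes = {ffufhash_like(config, i): i for i in range(200_000)}

# A blind-payload callback fires carrying one of the IDs (e.g. embedded in an
# interactsh subdomain); map it straight back to the originating request:
fired = ffufhash_like(config, 1337)
print(hashes[fired])  # 1337
```

The point of the split is that only the run options need saving: both halves are recomputable, so 200k requests cost a few bytes of bookkeeping instead of 200k stored raw requests.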


RELEASE: 120 gigabytes from the Russian internet provider #Convex, revealing pervasive Russian surveillance of internet and phone activities, including the previously unknown Green Atom #surveillance program.
https://ddosecrets.org/wiki/Convex

In 2015, the European Court of Human Rights warned in Zakharov v. Russia that the legislation underpinning #Russia's System for Operative Investigative Activities surveillance system did "not provide for adequate and effective guarantees against arbitrariness and the risk of abuse which is inherent in any system of secret surveillance" and that the requirements for legal authorization could be circumvented. In 2016, the Yarovaya Law was passed and went into effect in 2018, requiring that all communications information be provided to authorities without a court order.

According to the hackers, the Green Atom data confirms the extent to which these legal structures are abused. They say the internet provider captured and mirrored virtually all data from every switch in Russia's largest regions, which was then passed on to Moscow for use by the security services.

Data: https://ddosecrets.org/wiki/Convex

Computational Foundations for the Second Law of Thermodynamics - Why is the Second Law of Thermodynamics always true? A fascinating story of computational irreducibility https://writings.stephenwolfram.com/2023/02/computational-foundations-for-the-second-law-of-thermodynamics/ #physics (yep, as said, that account is not InfoSec-only ;p)

Stephen Wolfram applies lessons learned from the Wolfram Physics Project to construct a proper framework to explain why--and to what extent--the Second Law of thermodynamics is true.

Here are the one-month uptime statistics from my personal #Akkoma instance running on a 4 GB RAM, 2-core VPS.

Memory seems sufficient, disk consumption looks tolerable.

And I can #search my timeline.

(I'm following <250 accounts, mostly from infosec.exchange)


New p-code emulator for fuzzing, based on Ghidra's SLEIGH. Full-system fuzzing perf comparable to QEMU, CmpLog support, etc.

Icicle: A Re-designed emulator for greybox firmware fuzzing https://arxiv.org/pdf/2301.13346.pdf

https://github.com/icicle-emu/icicle