Good and interesting presentation by Joe Bialek:
Pointer Problems – Why We’re Refactoring the Windows Kernel:
For anyone looking to adjust their media diet, now’s a great time to consider escaping The Algorithms with RSS. Here are some of the blogs, newsletters, and independent news sites I follow: https://www.mollywhite.net/blogroll/
For feed readers, I use Inoreader, but there are many other good options.
also see:
something I wrote about responsible fuzzing https://blog.regehr.org/archives/2037
the CVC5 theorem prover's guidelines for people doing fuzzing https://github.com/cvc5/cvc5/wiki/Fuzzing-cvc5
so, it turns out I created a method of producing fully self-contained portable distributions of Python that support arbitrary native modules and don't require recompiling anything https://github.com/whitequark/superlinker?tab=readme-ov-file#python
I accidentally
After a lot of testing with different prompts and models, I ended up with a good query for decompiling with r2ai/decai.
The first screenshot shows the results from Claude 3.5, GPT-4o, and Qwen2.5 (local) for a password-checking function in Swift.
The second one is from r2ghidra, but the GHIDRA/IDA/BN results are at the same level of uselessness.
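For context, the kind of target being compared here is a small password-checking routine; a minimal Swift sketch of such a function might look like the following (the names and the hard-coded secret are hypothetical, not taken from the screenshots):

```swift
import Foundation

// Hypothetical example: a naive password check of the sort often used
// to compare decompiler output across tools and models.
func checkPassword(_ input: String) -> Bool {
    // Hard-coded secret, purely illustrative.
    let secret = "s3cr3t"
    // A plain string comparison; the interesting part for a decompiler
    // is recovering this control flow and the literal, not good practice.
    return input == secret
}

print(checkPassword("guess"))   // false
print(checkPassword("s3cr3t"))  // true
```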
Generative Language Models gained significant attention in late 2022 / early 2023, notably with the introduction of models refined to act consistently with users' expectations of interacting with AI (conversational models). Arguably the focal point of public attention has been such a refinement of the GPT-3 model -- ChatGPT -- and its subsequent integration with auxiliary capabilities, including search as part of Microsoft Bing. Despite the extensive prior research invested in their development, the performance of these models and their applicability to a range of everyday tasks remained unclear and niche. However, their wider adoption, requiring no technical expertise and made possible in large part by conversational fine-tuning, has revealed the extent of their capabilities in real-world environments. This has garnered both public excitement about their potential applications and concern about their capabilities and potential malicious uses. This review aims to provide a brief overview of the history, state of the art, and implications of Generative Language Models in terms of their principles, abilities, limitations, and future prospects -- especially in the context of cyber-defense, with a focus on the Swiss operational environment.