Blog | https://sempf.net |
Amazon | https://www.amazon.com/William-Sempf/e/B00DDON3UO |
OWASP | https://owasp.org/www-chapter-columbus/ |
Github | https://github.com/sempf |
Amateur Radio | KE8PCT |
Header | Medeco Biaxial |
I asked Venice AI why it doesn't write secure code, and it was pretty damn honest.
The security of code generated by AI depends on the context in which it is used and the implementation details. Here’s a direct breakdown of relevant factors:
1. AI models, including myself, do not "write secure code" by default because security requires domain-specific knowledge, threat modeling, and adherence to best practices that may not be explicitly included in a prompt. For example, if you ask for a script without specifying input validation, it may lack protections against injection attacks, buffer overflows, or insecure dependencies.
2. Code security is inherently tied to execution environments. Venice.ai does not execute or sandbox code; it only provides text-based responses. This means any code examples shared are theoretical and should never be used in production without rigorous auditing.
3. Security flaws often arise from oversight, such as hardcoded credentials, misconfigured permissions, or failure to sanitize inputs. These issues are not unique to AI-generated code but are common in human-written code as well.
4. Venice.ai prioritizes privacy (user data is not stored on servers) but does not guarantee security outcomes for code examples. Users are responsible for validating and testing any code they implement.
5. If you’re referring to the Venice Token (VVV) or its infrastructure, security would depend on the blockchain protocols, smart contract audits, and wallet practices—not the AI’s outputs.
In short, code security is a collaborative responsibility. Always treat generated code as a starting point, not a finished product, and apply security testing frameworks or consult experts for critical applications.
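That first point deserves a concrete illustration. Below is a minimal Kotlin sketch of the difference between the injection-prone query an unguided generator tends to spit out and a parameterized version. The users table, the JDBC URL, and the in-memory H2 database are my own assumptions for the demo, not anything Venice produced.

```kotlin
import java.sql.Connection
import java.sql.DriverManager

// Hypothetical lookup function; the "users" table and column names are assumptions.
fun findUser(conn: Connection, name: String) {
    // Injection-prone version (what you often get if you don't ask for validation):
    // a name like  x' OR '1'='1  rewrites the query's meaning.
    // val rs = conn.createStatement()
    //     .executeQuery("SELECT id FROM users WHERE name = '$name'")

    // Safer version: parameterized query, so the input is bound as data, not as SQL.
    conn.prepareStatement("SELECT id FROM users WHERE name = ?").use { ps ->
        ps.setString(1, name)
        ps.executeQuery().use { rs ->
            while (rs.next()) println(rs.getInt("id"))
        }
    }
}

fun main() {
    // In-memory H2 database, assumed on the classpath purely for the demo.
    DriverManager.getConnection("jdbc:h2:mem:demo").use { conn ->
        conn.createStatement().execute("CREATE TABLE users(id INT, name VARCHAR(50))")
        conn.createStatement().execute("INSERT INTO users VALUES (1, 'alice')")
        findUser(conn, "alice")
    }
}
```

The point isn't this particular API. The point is that the safe version only shows up if somebody, human or model, thinks to ask for it.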
The article explains how the method works—it’s not so much traditional virtualization as it is a series of small, deceptive programs. The “host” app downloads a collection of fake apps and analyzes legitimate apps to closely mimic their behavior. When a user tries to open a real app, the host app intervenes: it closes the real app and launches a lookalike dummy app instead, using accessibility permissions.
To the user, it appears to be the legitimate app, but in reality, all input and data are being captured by the fake one. With accessibility access, the malicious app can monitor every tap, gesture, and keystroke. It can even track which apps are opened or detect when the lock screen appears—enabling it to build even more convincing fake versions of other apps and extract more personal information.
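For anyone curious what that accessibility hook looks like at the code level, here's a minimal, purely defensive Kotlin sketch of an AccessibilityService watching foreground window changes. The class name and logging are mine, it still needs the usual manifest registration (BIND_ACCESSIBILITY_SERVICE) and the user flipping the accessibility toggle, and it is not GodFather's actual implementation, just the general shape of the capability being abused.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.util.Log
import android.view.accessibility.AccessibilityEvent

// Defensive illustration only: shows the visibility an accessibility grant provides.
class ForegroundWatcher : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        // Fired (among other things) whenever the foreground window changes.
        if (event?.eventType == AccessibilityEvent.TYPE_WINDOW_STATE_CHANGED) {
            val pkg = event.packageName?.toString() ?: return
            Log.d("ForegroundWatcher", "Foreground app is now: $pkg")
            // Malware with this callback knows the instant a banking app opens,
            // which is what lets it swap in a lookalike screen instead.
        }
    }

    override fun onInterrupt() {
        // Required override; nothing to clean up in this sketch.
    }
}
```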
This is a prime example of why you should never install unofficial apps or grant permissions unless you’re absolutely certain the app is trustworthy—sometimes, not even then.
Hope that helps clarify things. It isn't accurate in every detail, but it's a general description that should make the process easier to understand.
I would love to see this malware and how they pulled it off. Virtualization on devices has always been an interest of mine.
https://zimperium.com/blog/your-mobile-app-their-playground-the-dark-side-of-the-virtualization
Zimperium zLabs has uncovered a sophisticated evolution of the GodFather banking malware that leverages an advanced on-device virtualization technique to hijack several legitimate applications, with a focus on mobile banking and cryptocurrency applications.
Aaaaand this would be why we should be testing the supply chain.
https://www.reversinglabs.com/blog/threat-actor-banana-squad-exploits-github-repos-in-new-campaign
Here's an interesting take on "testing the supply chain."
GitPhish is an automated tool that exploits GitHub's OAuth device code flow to gain unauthorized access to repositories, secrets, and CI/CD systems. Learn how it works, how to use it, and how to mitigate this emerging phishing technique.
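For context, the device code flow itself is a legitimate, documented GitHub feature; the phish is in who starts it and who approves it. Here's a rough Kotlin sketch of the first leg of that flow against GitHub's published endpoints. The client_id is a placeholder, this is not GitPhish's code, and the comments describe how a lure of this style would misuse the result.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Placeholder: an attacker would use the client_id of a well-known OAuth app.
const val CLIENT_ID = "your_oauth_app_client_id"

fun main() {
    val client = HttpClient.newHttpClient()

    // Step 1: ask GitHub for a device_code / user_code pair.
    val codeReq = HttpRequest.newBuilder(URI.create("https://github.com/login/device/code"))
        .header("Accept", "application/json")
        .header("Content-Type", "application/x-www-form-urlencoded")
        .POST(HttpRequest.BodyPublishers.ofString("client_id=$CLIENT_ID&scope=repo"))
        .build()
    val codeResp = client.send(codeReq, HttpResponse.BodyHandlers.ofString())
    println(codeResp.body()) // contains user_code, verification_uri, device_code, interval

    // Step 2 (the social engineering): the attacker sends the victim the user_code and
    // https://github.com/login/device, framed as a routine "verification" step.
    // Step 3: the attacker polls https://github.com/login/oauth/access_token with the
    // device_code; once the victim approves, GitHub returns a token for the victim's account.
}
```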
Hey, everybody. This is just a reminder to support your Mastodon instance's administrator. For all the homies here at infosec.exchange, that's @jerry!
Please take a look at this instance's "About" page for all the details of how to make a monetary donation to support our community and to thank Jerry.