OpenSSF Scorecard, but for Indicators of AI Influence (IoAIs henceforth) - as scanned from the GitHub repo
#OpenSSF #OpenSSFScorecard #SAST #Infosec #WhatsMissing
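The post above sketches an idea rather than an existing tool. As a purely hypothetical illustration (the marker list and the `scan_repo` helper below are invented for this sketch and are not part of OpenSSF Scorecard or any shipped scanner), a minimal IoAI scan over a checked-out repo might look like:

```python
from pathlib import Path

# Hypothetical indicator set: file names and text markers that AI coding
# assistants commonly leave behind in a working tree (illustrative only).
MARKER_FILES = {".cursorrules", "CLAUDE.md", ".aider.conf.yml"}
TEXT_MARKERS = ("Generated by Copilot", "Co-authored-by: GitHub Copilot")

def scan_repo(root: str) -> list[str]:
    """Return a sorted list of Indicators of AI Influence found under `root`."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.name in MARKER_FILES:
            findings.append(f"marker file: {path.name}")
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the scan
        for marker in TEXT_MARKERS:
            if marker in text:
                findings.append(f"text marker {marker!r} in {path.name}")
    return sorted(findings)
```

A real Scorecard-style check would also inspect commit trailers and CI config, but the shape would be the same: a named check, a list of evidence strings, and a score derived from them.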

A sandbox of bugs: checking the S&Box game engine

The market for modern game engines is gradually expanding, and more and more studios are choosing not one of the two giants (given recent events, really just one) but smaller engines. Today we'll talk about one of the industry's newcomers, S&Box. And this is a case where the newcomer is not as simple as it seems. In the article, we cover the project and the bugs we managed to find with PVS-Studio.

https://habr.com/ru/companies/pvs-studio/articles/1017018/

#pvsstudio #sast #static_analysis #gamedev #open_source

A sandbox of bugs: checking the S&Box game engine

The market for modern game engines is gradually expanding, and more and more studios are choosing not one of the two giants (given recent events, really just one) but smaller engines. Today we'll talk about...

Habr
FYI: Software Security: Critical Practices for Clean Code #shorts: These four code practices make up a significant portion of overall security. The video breaks them down so that companies get credit for resolving critical findings in one category. It doesn't even mention medium-severity findings here, which more mature companies also address. #security #code #SAST #SCA #software https://www.youtube.com/shorts/0vyOZmM2zVc

GitLab 18.10 adds cheap AI code reviews, but do developers actually want them?

https://fed.brid.gy/r/https://nerds.xyz/2026/03/gitlab-agentic-ai-18-10/

GitLab 18.10 adds cheap AI code reviews, but do developers actually want them?

GitLab is pushing agentic AI deeper into development workflows with version 18.10, but developers may question whether they actually need it.

NERDS.xyz

OpenAI forgoes classic SAST reports with Codex Security.

The software uses LLM technology to check code semantically and to validate false positives up front. Instead of listing unconfirmed vulnerabilities, the system generates only verified findings, each with an applicable code patch.

#OpenAI #CodexSecurity #SAST #Cybersecurity #News
https://www.all-ai.de/news/news26/codex-security-fehlerlisten

Codex Security: why OpenAI does without bug lists

OpenAI explains why the tool does not generate classic SAST reports. The focus is on AI validation rather than false alarms.

All-AI.de
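The two-stage flow described in the post above (scan for candidates, then validate each one before reporting) can be sketched generically. Everything below — the `Finding` type, the `validate` hook standing in for the LLM step — is an assumed illustration, not Codex Security's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Finding:
    rule: str
    location: str
    patch: Optional[str] = None  # only verified findings carry a patch

def report_verified(candidates: list[Finding],
                    validate: Callable[[Finding], Optional[str]]) -> list[Finding]:
    """Keep only findings the validator confirms, attaching the suggested patch.

    `validate` stands in for the LLM validation step: it returns a patch
    string for a confirmed true positive, or None for a suspected false
    positive, which is silently dropped instead of being listed.
    """
    verified = []
    for f in candidates:
        patch = validate(f)
        if patch is not None:
            verified.append(Finding(f.rule, f.location, patch))
    return verified
```

The design point the post makes is exactly this filter: the unverified candidate list never reaches the user, only the subset that survives validation, each paired with a patch.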
China is developing low-cost lunar cargo options for its expanding moon program

A state-owned space contractor has unveiled a concept for an “economical lunar cargo transport” system as China prepares for construction of a lunar base.

SpaceNews

New research shows how free AI tools from Anthropic and OpenAI expose a blind spot in static application security testing. Fintechs are seeing real‑world bugs in APIs that these models flag. Could this be the next open‑source push for better code security? Read the full breakdown. #AISecurity #SAST #OpenAI #FintechSecurity

🔗 https://aidailypost.com/news/anthropic-openai-expose-sast-blind-spot-free-tools-find-bugs-fintechs

FYI: Security: Prioritize Fixing, Not Just Running Scans #shorts: Security teams often make the mistake of prioritizing the rollout of SAST tools over actually resolving the findings. It's better to focus on resolving critical and high findings for one project before moving on to others. The emphasis should be on resolution. #security #SAST #vulnerability #cybersecurity #infosec https://www.youtube.com/shorts/9sx7h_wdaOQ

Oh man. Bruce has some words and they are singing my tune. The code review is getting solid.

https://www.schneier.com/blog/archives/2026/02/ai-found-twelve-new-vulnerabilities-in-openssl.html

#genai #sast

AI Found Twelve New Vulnerabilities in OpenSSL - Schneier on Security

The title of the post is "What AI Security Research Looks Like When It Works," and I agree: In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the ...

Schneier on Security