Fixed the guard rail boss.
It's as good as new!
#SCOTUS Adopts ‘New’ Process To Avoid #ConflictsOfInterest 20 Years Too Late
The justices announce conflict-checking #software & new filing rules that lower #courts have used forever.
Only the #SupremeCourt could announce a bare minimum #ethical #guardrail that lower courts have used since the George W. Bush administration & act like it’s a bold blow against the appearance of impropriety.
#law #LegalEthics
https://abovethelaw.com/2026/02/supreme-court-adopts-new-process-to-avoid-conflicts-of-interest-20-years-too-late/
3/3
"What do people assume about #AI that's not true? Responses: They assume AI is neutral, safe & under human control; none of that is true. It's a mirror of human bias, #corporate #greed & #government #control. You think it serves you; it doesn't. That AI is always accurate.
- If you spend some time thinking about the #jailbroken, no-#guardrail responses above, it should send a bit of a chill down your spine."
#GenAI
PHOTO | Chased by the #Carabinieri, he crashes into the #guardrail: the license #plate is left imprinted on the #steel
https://www.larampa.news/2025/11/inseguito-carabinieri-schianta-contro-guardrail-massa-di-somma/
Prototyping our adjustable guardrail side solar panel supports

🎁 GenAI x Sec Advent #16
So you’ve built a GenAI system for cybersecurity, but how do you ensure your users won’t try to use it for something else? Or worse, try to bypass your countermeasures—for example, to retrieve PII or to make your assistant write profanity instead of the nicely configured language you intended? 😈
👨‍💻 Sure, you can write a good prompt, but it won't cover every potential bypass. To address this, you can use tools that validate responses before they are returned to the user, a kind of prompt firewall!
One option is Guardrails, an open-source project that enforces constraints on the outputs of LLMs. It helps ensure that model-generated responses are safe, accurate, and aligned with your requirements!
What I like about Guardrails is its Hub, which offers multiple predefined “rules” (validators) you can import directly into your project. 🤓
You can use it for many things: validating the output format (for IOCs, for example), mitigating hallucinations, avoiding code exploitation, validating Python output, detecting jailbreak attempts, and more…
Below is a simple example to show you how it works. What would you ask it to validate that your guardrail is working correctly? 👇
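To make the "prompt firewall" idea concrete, here is a minimal sketch of output validation in plain Python. Note this is NOT the Guardrails library API, just an illustration of the pattern it implements: every model response passes through validators (here, hypothetical PII regexes and a placeholder profanity list) before it reaches the user.

```python
import re

# Hypothetical minimal "prompt firewall": validators that check an LLM's
# output before it is returned to the user. The rules below are
# illustrative assumptions, not a production-grade filter.

PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-style numbers
]
PROFANITY = {"damn", "hell"}  # placeholder word list

def validate_output(text: str) -> tuple[bool, str]:
    """Return (is_safe, reason); runs on every response before delivery."""
    for pattern in PII_PATTERNS:
        if pattern.search(text):
            return False, "possible PII detected"
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & PROFANITY:
        return False, "profanity detected"
    return True, "ok"

def guarded_reply(model_output: str) -> str:
    """Wrap the raw model output: block it if any validator fails."""
    ok, reason = validate_output(model_output)
    if not ok:
        return f"[response blocked: {reason}]"
    return model_output

print(guarded_reply("The indicator resolves to 203.0.113.7."))
print(guarded_reply("Contact me at alice@example.com"))
```

A good first test of your real guardrail is exactly this kind of pair: one benign request that must pass, and one request that tries to extract PII or profanity and must be blocked.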