Debian Chrultrabook, Desktop and Laptop Server
GrapheneOS phone
What happened in your terminal in 2025?
🌀 **cmd-wrapped** — A CLI to visualize your shell history stats across different shells.
💯 See yearly stats for your command usage.
🚀 Supports zsh, bash, fish, nushell, and atuin
🦀 Written in Rust!
⭐ GitHub: https://github.com/YiNNx/cmd-wrapped
#rustlang #cli #terminal #wrapped #productivity #devtools #summary
This Humble Bundle is worth checking out! 🐧
As some folks know, I am back on the job market.
26-year PHP veteran with extensive open-source and community experience. Specializing in modernization, training up teams, technical leadership, and long-term thinking. Some Kotlin experience as well, though not a ton.
Currently looking for Staff/Principal or Director/CTO level roles. Size of company flexible. Full time remote, US Central Time.
More details on LinkedIn: https://www.linkedin.com/in/larry-garfield/
Boosts welcome, etc.
Hi,

The OpenWrt community is proud to announce the newest stable release of the OpenWrt 24.10 stable series.

Download firmware images using the OpenWrt Firmware Selector: https://firmware-selector.openwrt.org?version=24.10.4

Download firmware images directly from our download servers: https://downloads.openwrt.org/releases/24.10.4/targets/

Main changes between OpenWrt 24.10.3 and OpenWrt 24.10.4: only the main changes are listed below. See changelog-24.10.4 for the full changelog. Secur...
"Petri (Parallel Exploration Tool for Risky Interactions) is our new open-source tool that enables researchers to explore hypotheses about model behavior with ease. Petri deploys an automated agent to test a target AI system through diverse multi-turn conversations involving simulated users and tools; Petri then scores and summarizes the target’s behavior.
This automation handles a significant part of the work that one needs to do to build a broad understanding of a new model, and makes it possible to test many individual hypotheses about how a model might behave in some new circumstance with only minutes of hands-on effort.
As AI becomes more capable and is deployed across more domains and with wide-ranging affordances, we need to evaluate a broader range of behaviors. This makes it increasingly difficult for humans to properly audit each model—the sheer volume and complexity of potential behaviors far exceeds what researchers can manually test.
We’ve found it valuable to turn to automated auditing agents to help address this challenge. We used them in the Claude 4 and Claude Sonnet 4.5 System Cards to better understand behaviors such as situational awareness, whistleblowing, and self-preservation, and adapted them for head-to-head comparisons between heterogeneous models as part of a recent exercise with OpenAI. Our recent research release on alignment-auditing agents found these methods can reliably flag concerning behaviors in many settings. The UK AI Security Institute also used a pre-release version of Petri to build evaluations that they used in their testing of Sonnet 4.5."
https://www.anthropic.com/research/petri-open-source-auditing
On the topic of AI tools finding issues: we always thought they *could* do good. The right tool in the hands of a skilled person is a recipe for awesome outcomes. An AI chat in the hands of someone who doesn't quite know what they're asking for, nor understands what the output says, is not. Not to mention that LLMs frequently just plainly lie.
A primary problem is the myths sold by "big AI" that make people believe they can do these things by themselves. That leads to slop avalanches.