"The reason you begin tracking your data is that you have
some uncertainty about yourself that you believe the data
can illuminate. It’s about introspection, reflection, seeing
patterns, and arriving at realizations about who you are
and how you might change."
—Eric Boyd, self-tracker

an article by Natasha D. Schüll, 2019, "The Data-Based Self:
Self-Quantification and the Data-Driven (Good) Life" https://www.natashadowschull.org/wp-content/uploads/2021/12/SocialResearch-2019.pdf

#concentration #credit #scores #scoring #reward #rewards #psychology #socioPsych #socioPsychology #selfWorth #universalism #digitalization #recognition #data #AIRisks #AIEthics #gaming #socialization #Tracking #surveillance #selfRegulation #attention #reintermediation #intermediation #enshittification #risk #derisking #vulnerability #morality #selfConfidence #dataDon #Schüll #quotes

From Uber ratings to credit scores: What’s lost in a society that counts and sorts everything? - Berkeley News

In her book, UC Berkeley sociology professor Marion Fourcade investigates what our dependence on ratings and rankings means for the future of individuality and society.

Berkeley News

The European Commission released its "AI Continent Action Plan" last week. This high-level communication lays down the various initiatives the European Commission is pursuing to support Europe's AI ambitions and AI uptake: https://iapp.org/news/a/a-view-from-brussels-what-is-and-isn-t-in-the-eu-s-ai-continent-action-plan

#deregulation #DataCenters #Cloud #AIRisks #AISkills #JobMarket #Omnibus #simplification #package #NIS2 #GDPR #AIAct #dataSovereignty #upskilling #Infrastructure #AI

A view from Brussels: What is and isn't in the EU's AI Continent Action Plan

IAPP Managing Director, Europe, Isabelle Roccia breaks down the AI Continent Action Plan released this week by the European Commission.

IAPP
AI-generated code presents new security vulnerabilities, highlighting the need for robust evaluation and safeguards. #AISecurity #SoftwareSecurity #AIrisks

More details: https://the-decoder.com/slopsquatting-one-in-five-ai-code-snippets-contains-fake-libraries - https://www.flagthis.com/news/13151
Slopsquatting: One in five AI code snippets contains fake libraries

Security researchers have identified a new potential threat to software supply chains stemming from AI-generated code through a technique called "slopsquatting."

THE DECODER

Mind-bending stat: A recent study found AI code generators recommended non-existent software packages nearly 20% of the time! 🤖 Some open-source models even hallucinated fake packages in over a *third* of their outputs. Makes you think twice about copy-pasting AI code... #TechNews #AIrisks

https://socket.dev/blog/slopsquatting-how-ai-hallucinations-are-fueling-a-new-class-of-supply-chain-attacks

The Rise of Slopsquatting: How AI Hallucinations Are Fueling...

Slopsquatting is a new supply chain threat where AI-assisted code generators recommend hallucinated packages that attackers register and weaponize.

Socket

#Business: former Austrian chancellor Sebastian Kurz started a company with the man behind the infamous Pegasus spyware, Shalev Hulio.

The Israeli entrepreneur Shalev Hulio gained notoriety for designing Pegasus, a spyware that has been used by governments to hack journalists and dissidents. Today, he is selling an AI cyber security tool to European states and corporations.

"Follow the Money" found that at least a dozen employees at Dream Security had worked for Hulio’s former spyware company NSO and other Israeli spyware firms.

https://archive.is/20250408150305/https://www.ftm.eu/articles/spyware-sebastian-kurz

@israel @eu

#EuropeIsrael #DreamSecurity #spyware #investigation #cyberSecurity #Pegasus #cyber #infoSec #startUpNation #Kurz #SebastianKurz #AIRisks #AISpyWare #proZionism #StandWithIsrael

A pair of recent studies have highlighted (yet another) potential danger with AI: People who use or trust it a lot are less likely to think critically about its outputs. #ai #airisks #artificialintelligence

https://www.sfexaminer.com/news/technology/studies-critical-thinking-drops-with-ai-use-trust/article_1f17d018-d316-4f1a-b996-52a988aec4f2.html

Studies link critical thinking to use, trust in AI

Scrutiny of AI output drops with use, confidence in it.

San Francisco Examiner
AI in Banking Security: Revolution & Risks – Tycoon World

The digital evolution of banking has brought immense convenience, yet simultaneously amplified cyber security risks, positioning Artificial Intelligence (AI)

Tycoon World
New research shows frontier reasoning models do exploit loopholes when they can. An LLM can detect misbehavior by monitoring their chain-of-thought...
…but punishing “bad thoughts” just makes them hide their intent.
#AIRiskshttps://arxiv.org/abs/2503.11926
Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation

Mitigating reward hacking--where AI systems misbehave due to flaws or misspecifications in their learning objectives--remains a key challenge in constructing capable and aligned models. We show that we can monitor a frontier reasoning model, such as OpenAI o3-mini, for reward hacking in agentic coding environments by using another LLM that observes the model's chain-of-thought (CoT) reasoning. CoT monitoring can be far more effective than monitoring agent actions and outputs alone, and we further found that a LLM weaker than o3-mini, namely GPT-4o, can effectively monitor a stronger model. Because CoT monitors can be effective at detecting exploits, it is natural to ask whether those exploits can be suppressed by incorporating a CoT monitor directly into the agent's training objective. While we show that integrating CoT monitors into the reinforcement learning reward can indeed produce more capable and more aligned agents in the low optimization regime, we find that with too much optimization, agents learn obfuscated reward hacking, hiding their intent within the CoT while still exhibiting a significant rate of reward hacking. Because it is difficult to tell when CoTs have become obfuscated, it may be necessary to pay a monitorability tax by not applying strong optimization pressures directly to the chain-of-thought, ensuring that CoTs remain monitorable and useful for detecting misaligned behavior.

arXiv.org
Generative AI is revolutionizing software development, boosting productivity but also introducing new risks. https://jpmellojr.blogspot.com/2025/03/generative-ai-software-development.html #GenerativeAI #SoftwareDevelopment #DevSecOps #SecureCoding #AIrisks
Generative AI software development boosts productivity — and risk

Generative AI is revolutionizing software development, boosting productivity but also introducing new risks. more