A man used AI to recover $400,000 from a Bitcoin wallet he locked himself out of in 2015. The case highlights AI's growing power in password recovery - the AI didn't break the wallet's cryptography, but the same tools that reconstruct forgotten passwords raise hard questions about how safe "forgotten" credentials really are. https://gizmodo.com/man-says-he-used-ai-to-unlock-old-bitcoin-wallet-worth-400k-2000758866 #AIethics #AI #GenAI #AISafety
Man Says He Used AI to Unlock Old Bitcoin Wallet Worth $400K

The pseudonymous user said he was stoned when he changed the password in 2015.

Gizmodo

I'm really excited about the rapidly improving state of local LLMs on Linux. I haven't quite found a way to make them work for daily coding, but I'm hopeful for the future.

From the article:

"The honest takeaway: local AI on CPU is real, practical, and improving fast. You don't need to wait for a GPU upgrade to start experimenting."

https://itsfoss.com/testing-local-llms-without-gpu/
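Before downloading anything, it helps to estimate whether a model will even fit in RAM for CPU inference. Here's a minimal back-of-the-envelope sketch; the overhead constant and the example quantization levels are my own assumptions, not figures from the article:

```python
def estimated_ram_gb(n_params_billions: float, bits_per_weight: float,
                     overhead_gb: float = 1.0) -> float:
    """Rough RAM needed to run a quantized model on CPU.

    Weights take roughly params * (bits / 8) bytes; overhead_gb is an
    assumed flat allowance for the KV cache and runtime, which in
    practice varies with context length and backend.
    """
    weight_gb = n_params_billions * bits_per_weight / 8  # params in billions -> GB
    return weight_gb + overhead_gb

# A 7B model at 4-bit quantization: ~3.5 GB of weights plus overhead.
print(round(estimated_ram_gb(7, 4), 1))  # → 4.5
```

By this estimate, a 4-bit 7B model fits comfortably on a 16 GB machine, which matches the article's point that experimentation doesn't require a GPU upgrade.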

#AI #localLLM #linux #aiethics

Can You Run LLMs Locally Without a GPU? I Tested 8 Models on Linux

Want to run AI models locally without expensive hardware? I tested 8 LLMs on a CPU-only machine to find out what works and what doesn’t.

It's FOSS

🚨 New Article - Suffering Without Perpetrators: The Humanitarian Passive in AI-Generated Conflict Discourse

Focusing on Palestine, Iran, and platform moderation, it defines responsibility loss as the measurable weakening of grammatical traceability between harm and responsible agency.

🔗https://zenodo.org/records/20139961

#LLM #MedicalNLP #LegalTech #MedTech #AIethics #AIgovernance #cryptoreg
#healthcare #ArtificialIntelligence #NLP #aifutures #lawstodon
#tech #agustinvstartari #linguistics #ai #LRM

Suffering Without Perpetrators: The Humanitarian Passive in AI-Generated Conflict Discourse

This paper introduces the humanitarian passive as a machine-mediated syntactic pattern through which civilian suffering remains visible while responsibility becomes grammatically optional. Focusing on Palestine, Iran, and platform moderation, it defines responsibility loss as the measurable weakening of grammatical traceability between harm and responsible agency. The article proposes the Responsibility Loss Index (RLI) to evaluate whether AI-generated summaries, headlines, reports, and moderation notices preserve or erase agents responsible for violence, sanctions, restriction, censorship, or humanitarian harm. Its central contribution is to shift AI ethics from bias detection alone toward responsibility detection.  
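The paper's actual RLI formulation isn't reproduced in the abstract. Purely as a hypothetical illustration of what "grammatical traceability between harm and responsible agency" could mean operationally, one might count passive clauses that lack an explicit "by"-agent; the regex heuristic and function name below are my own sketch, not the paper's method:

```python
import re

# Crude "be + past participle" detector with an optional "by"-agent group.
# A real implementation would need syntactic parsing; this is illustrative only.
PASSIVE = re.compile(
    r"\b(?:is|are|was|were|been|being|be)\s+(\w+(?:ed|en))\b(\s+by\s+\w+)?",
    re.IGNORECASE,
)

def responsibility_loss(text: str) -> float:
    """Fraction of detected passive clauses with no explicit 'by'-agent (0.0-1.0)."""
    matches = PASSIVE.findall(text)
    if not matches:
        return 0.0
    agentless = sum(1 for _participle, by_agent in matches if not by_agent)
    return agentless / len(matches)

# One agentless passive, one passive with an agent -> score 0.5
print(responsibility_loss("Aid was blocked. Homes were destroyed by shelling."))  # → 0.5
```

Under this toy metric, "Aid was blocked" contributes to responsibility loss while "Homes were destroyed by shelling" does not, mirroring the paper's contrast between visible suffering and grammatically optional agency.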

Zenodo

New in Misaligned: Seven studies that investigate bias and the impact of LLMs on disempowered and vulnerable users.

Critical Views On LLMs, Another Academic Reading List.

#AIEthics #LLM

https://read.misalignedmag.com/critical-views-on-llms-another-academic-reading-list-32e40c1e1184

Critical Views On LLMs, Another Academic Reading List

Seven studies that investigate bias and the impact of LLMs on disempowered and vulnerable users.

Medium

In this week's edition of "misaligned bits":

ChatGPT’s learning successes retracted, questionable productivity stats, Palantir gets more of UK’s health data, hallucinations are accelerating, and AI assistants make us a bit dumber.

#AiEthics

https://read.misalignedmag.com/misaligned-bits-25-retracted-059e5b6b1886

misaligned bits #25: Retracted

ChatGPT’s learning successes retracted, questionable productivity stats, Palantir gets more of UK’s health data, hallucinations are…

Medium

AI Nudification: The 55% Stat Parents Can’t Ignore

How AI Nudification Became the New Adolescent Normal

From Virtual Fitting Rooms to Digital Danger

Generative AI (GenAI) was supposed to be our creative co-pilot. We didn’t see AI Nudification coming.

We marveled at its ability to turn text into art and embraced “virtual try-on” applications that allowed us to see how clothing might fit using nothing more than a smartphone camera. But as a tech ethicist, I’ve watched this innovation take a dark, predatory turn. While the underlying technology, specifically “inpainting,” is legitimate, its application in adolescent circles has reached a terrifying tipping point.

We are no longer talking about a few “tech-savvy” outliers; we are witnessing the mass-normalization of AI-generated Child Sexual Exploitation Material (CSEM) among teenagers. This isn’t just the next stage of digital growing pains. It’s a fundamental shift in how the first generation of “AI adolescents” navigates consent, identity, and digital harm.

Takeaway 1: The “Scaling Gap” and the New AI Nudification Normal

For years, educators and parents tracked the steady rise of traditional “sexting.” Historical meta-analyses placed adolescent creation and receipt of self-generated sexual imagery at roughly 14.8% and 27.4%, respectively. The latest data reveals a staggering “scaling gap” that should alarm every stakeholder in digital safety.

Today, GenAI has nearly quadrupled the creation rate and doubled the receipt rate. According to a nationally representative survey of 13-to-17-year-olds:

  • 55.3% of adolescents have used AI “nudification” tools to create sexualized images of themselves.
  • 54.4% have received these images.

What was once a niche behavior has become a majority experience. This isn’t just a technological update to sexting; it is a total normalization of CSEM production as a routine part of adolescent sexual exploration.

Are you an LPC in need of continuing education? Dr. Weeks offers a course on this material, along with courses on many other unique and interesting topics.

In the course, “The Prevalence of Youth-Produced Image-Based Sexual Abuse,” Dr. Weeks teaches how child digital safety is undergoing a paradigm shift, how changes in Image Based Sexual Abuse require adaptation, and proposes a framework for conceptualizing IBSA.

Takeaway 2: Nudification vs. Creation – The Personal Toll of Inpainting

It is vital to understand the technical nuance that makes this trend so invasive. There is a massive difference between general text-to-image GenAI (which creates an image from a prompt) and “nudification” tools. These tools utilize a technique called inpainting, which modifies a pre-existing, real photo. 

The survey found that usage of these specific nudification tools is significantly higher than that of traditional AI content creation. This is precisely why the victimization is so direct: it requires the likeness of a real person. As the study notes, these tools are designed to: 

“…visualize what individuals might look like without clothing.” 

By using a real individual as a “basis image,” the technology allows for the digital removal of clothing, turning a casual school photo into CSEM in seconds. The distinction between a “fake” image and a “real” person is erased, leading to a profound degree of direct victimization.

Are you exploring your trauma? Do you feel your childhood experiences were detrimental to your current mental or physical health? Utilize this free, validated, self-report questionnaire to find out.

Take the Adverse Childhood Experience (ACE) Questionnaire

Takeaway 3: The High Cost of Non-Consensual “Deepfakes”

The most heartbreaking aspect of this shift is the erosion of consent. The data highlights a crisis of victimization: 36.3% of participants reported having a non-consensual image of themselves created, and 33.2% had such an image shared without their permission. 

Victims describe a visceral sense of “powerlessness” and “dehumanization.” When your likeness can be hijacked and sexualized without your involvement, it leads to a state of constant hypervigilance. Crucially, these statistics represent a lower bound of the crisis. Because the study only measured peer-to-peer actions, it does not account for images created by adults exploiting minors or images of children under the age of 13. If those variables were included, the scale of victimization would likely skyrocket.

Takeaway 4: The Gender and Age Myths Around AI Nudification Debunked

We often fall into the trap of thinking digital crises are limited to specific subcultures or older teens. The data tells a different story. The usage of AI nudification tools is remarkably uniform across all demographics: race, region, and sexual orientation showed no statistically significant differences in prevalence. This is a universal adolescent issue. 

While male participants showed higher rates of regular (frequent) creation and distribution, the most startling finding was the age breakdown. There was no statistically significant difference in usage between 13-year-olds and 17-year-olds. This destroys the myth that we can wait until high school to talk about AI safety. To be effective, digital literacy and intervention must begin before age 13, as younger adolescents are already engaging with these tools at the same rates as their older peers.

Learn why it’s important for everyone, especially teens, to be able to control their online experiences. Dick Pic Culture: How do Teenage Girls Navigate it?

Takeaway 5: A Legal and Ethical Gray Zone

We must call these images what they are: CSEM. Under federal law (18 U.S. Code § 1466A), the production and distribution of pornographic GenAI images of minors is illegal, regardless of whether the image is “real.” 

This puts policymakers in an ethical bind.

We are currently seeing thousands of adolescents technically committing federal crimes as part of “exploratory” peer behavior. Ethicists and lawmakers are now forced to debate whether we need legal “carve-outs” for consensual, same-age peer interactions, or if the permanent digital harm of these images necessitates strict criminal enforcement. Meanwhile, “gray market” apps continue to bypass app store controls, providing easy access to nudification tools without any meaningful age verification.

Conclusion: A Call for Proactive Digital Literacy

The window for intervention is narrow but still open. Because much of the current usage is reported as “exploratory” rather than “habitual,” we have a brief opportunity to steer this generation toward a more ethical digital future. However, our response cannot be reactive. We need multimodal education that doesn’t just teach “online safety” but addresses the profound ethical weight of AI tools and the lifelong impact of non-consensual sharing. 

Final Thought: As we enter an era where a child’s likeness can be permanently decoupled from their consent in a matter of clicks, we must ask: Are our legal and educational frameworks fundamentally incompatible with this new reality, or are we simply too slow to protect the first generation of AI adolescents?

Are you a professional looking to stay up-to-date with the latest information on sex addiction, trauma, and mental health news and research? Or maybe you’re looking for continuing education courses? Then you should stay up-to-date with all of Dr. Jen’s work through her practice’s newsletter!

Are you looking for more reputable, data-backed information on sexual addiction? The Mitigation Aide Research Archive is an excellent source for executive summaries of research studies.

#AdolescentDigitalSafety #AIDeepfakes #AIEthics #AINudification #CSEM #DeepfakeAbuse #DigitalConsent #DigitalLiteracy #GenerativeAI #NonConsensualImages #OnlineSafetyForParents #ParentEducation #TeenSexting #TeenTechnologyRisks #YouthOnlineSafety

🚀 The AI Horizon: Where Are We Headed?

Artificial Intelligence has stopped being a science-fiction promise and become the invisible fabric of our everyday lives. But what awaits us in the next decade? It's not just about faster chatbots, but a redefinition of human-machine collaboration.

#IA #ArtificialIntelligence #TechFuture #Innovacion #Tecnologia #Mastodon #Futuro #AIethics #OpenAI #DigitalTransformation

Many companies are making risky bets on AI replacing workers based on speculation, not performance, harming trust and job security. The real impact is unfolding at the individual level, especially for vulnerable workers. Adaptation and transparency are key.
Discover more at https://dev.to/rawveg/your-boss-bets-your-job-on-ai-lba
#HumanInTheLoop #AIinWorkplace #WorkforceDisruption #AIethics
Your Boss Bets Your Job on AI

In September 2025, Salesforce CEO Marc Benioff went on a podcast and said something that should have...

DEV Community

Have a Coherent AI Policy

This post stresses the importance of establishing a practical, ethical AI policy for tool use. It warns against meaningless metrics like 'tokenmaxxing' (competing on sheer AI usage) and argues that developers must fully understand and take responsibility for any AI-generated code. It also advises against over-reliance on AI tools: junior engineers in particular should build hands-on coding experience, which pays off in long-term growth. AI tools can boost productivity, but work must remain possible even if the tools disappear, and the focus should stay on creating real value for teams and customers.

https://brianmeeker.me/2026/05/14/have-a-coherent-ai-policy/

#aipolicy #softwareengineering #llm #developerproductivity #aiethics

Have a Coherent AI Policy

Yet Another Software Engineer's Blog