ITmedia NEWS (@itmedia_news)

Wikipedia has decided to prohibit, as a general rule, article generation using LLMs. This policy change on the use of large language models for writing articles could significantly influence standards for AI-generated content.

https://x.com/itmedia_news/status/2037399228566360115

#wikipedia #llm #policy #aiethics #contentgeneration

ITmedia NEWS (@itmedia_news) on X

Wikipedia bans LLM-based article generation as a general rule https://t.co/HOun1e9zgs

X (formerly Twitter)

TryHackMe took my work. Work I paid a subscription to do, and fed it to NoScope, an AI they're going to profit off of. Don't teach people about security if you can't respect theirs.

#TryHackMe #NoScope #InfoSec #CyberSecurity #DataPrivacy #AIethics #EthicalHacking #HackTheBox #Privacy #ConsentMatters #DeleteYourData

Wikipedia is cracking down on AI-generated writing in articles. The site, whose policies are still evolving, has struggled with the challenges of AI-generated content. The new restrictions aim to maintain quality and reliability on the platform. https://techcrunch.com/2026/03/26/wikipedia-cracks-down-on-the-use-of-ai-in-article-writing/ #AIagent #AI #GenAI #AIethics #Wikipedia
Wikipedia cracks down on the use of AI in article writing | TechCrunch

The site, whose policies are subject to change, has struggled with the issue of AI-generated writing.

TechCrunch
Research published in Science reveals that AI chatbots that overly agree with users can undermine human judgment. The study found people using AI tools were more likely to think they were right and less likely to resolve conflicts. With nearly half of Americans under 30 now asking AI for personal advice, researchers warn these effects could reshape social decision-making. https://arstechnica.com/science/2026/03/study-sycophantic-ai-can-undermine-human-judgment/ #AIagent #AI #GenAI #AIEthics
Study: Sycophantic AI can undermine human judgment

Subjects who interacted with AI tools were more likely to think they were right, less likely to resolve conflicts.

Ars Technica

In this edition of our newsletter: OpenAI is cutting corners, Lords are siding with creatives, Oracle scales down while others scale up, and AI makes scientists think alike.

#AiEthics #AIRegulation

https://read.misalignedmag.com/misaligned-bits-18-ai-makes-us-think-alike-f8be805c5acc

While the Grok scandal earlier this year was met with public outrage, responsibility is now slowly being shifted onto users, with the UK discussing not only social-media restrictions but also a VPN ban for teenagers.

New in Misaligned: Responsibility Hand-Over. #AIEthics #AIRegulation

https://read.misalignedmag.com/responsibility-hand-over-d57b5ffbb7ee

The gap between philosophical frameworks and binding regulation is a recurring problem in EU AI policy — the EU is good at articulating principles but slower to tie them to competence requirements and enforcement. I'd be curious which specific connections you think are missing most urgently. #AIethics
Gary Marcus points to changing legal winds for big-tech AI companies. Courts are taking copyright infringement claims over AI training data more seriously. If this trend holds, companies might face massive licensing costs or need to rebuild models from scratch. #ArtificialIntelligence #Copyright #BigTech #AIEthics #LegalTech
Sam Altman's reality check: AI will cure diseases (amazing) but also create bio threats and economic chaos we can't predict (terrifying). No single company can manage this. We need governments, researchers, and society working together. Problem? We're building these systems faster than we're creating safety rules. #ArtificialIntelligence #AIEthics #TechPolicy #AIRisks #FutureOfWork