TechRadar (@techradar)

A new report shows just how risky software supply chains really are, and notes that artificial intelligence (AI) is making those risks worse. The report warns about the impact of AI tools and automation on the growth of supply-chain vulnerabilities.

https://x.com/techradar/status/2005675546127298964

#supplychain #security #airisk #software

TechRadar (@techradar) on X

A new report shows how risky software supply chains really are, and how AI only makes it worse. https://t.co/uDfpMVHNS6

X (formerly Twitter)

Two in three Americans worry that AI will cause serious harm to people within the next 20 years, according to a Pew Research survey. Concern over technology risks continues to grow. #AI #AIrisks #AIConcerns #AIfuture

Why "Ethical AI" Through Hardwired Rules Is Impossible

AI pioneer De Kai argues rule-based ethical systems fail due to intractable conflicts (trolley problems), communication action complexity, and machine learning's cultural adaptation. The real existential risk: AI-accelerated hyperpolarization enabling DIY WMDs via lethal drones and computational biology.

https://buff.ly/XaXzbgR
#AIEthics #ResponsibleAI #AIRisk #MachineEthics #TechPolicy2025

This is a great GitHub repo of resources on AI agent failures, such as hallucinations. It also links to papers and conferences on AI safety issues.

https://github.com/vectara/awesome-agent-failures

Also see AI Hallucination Cases Tracker (AI and non-AI fabricated/false citations):
https://naturalandartificiallaw.com/ai-hallucination-cases-tracker/
and:
https://www.damiencharlotin.com/hallucinations/
and
https://www.visualcapitalist.com/sp/ter02-ranked-ai-hallucination-rates-by-model/

#AIagents #AIsafety #AIrisk #law

GitHub - vectara/awesome-agent-failures: A community curated collection of AI agent failure modes and battle-tested solutions.

GitHub
Anthropic CEO Dario Amodei warns that some AI companies are taking reckless, 'YOLO-style' risks with excessive spending, highlighting industry concerns over risk management and investment timing amid fierce competition with OpenAI, Google, and Microsoft.
#YonhapInfomax
#Anthropic #DarioAmodei #AIRisk #OpenAI #InvestmentTiming
#Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
https://en.infomaxai.com/news/articleView.html?idxno=93917
Anthropic CEO Warns Some AI Firms Are Taking Reckless 'YOLO-Style' Risks

Yonhap Infomax

Where is that "Pause" button?

The EU is about to propose delaying implementation of a few key parts of its "high-risk" AI rules. Originally slated to go live in August 2026, those provisions would be pushed back by at least one year under the proposal the EU is considering.

Critics had demanded a delay, arguing the technical standards that companies could rely on to comply with the high-risk AI requirements were not ready by a summer deadline. https://www.politico.eu/article/eu-to-propose-delay-of-key-part-of-landmark-ai-law-by-one-year/ #AI #EU #AIRisk #AIRegulation #AIAct #Europe

New report shows Data Center Watch is unfunded and operates separately from 10a Labs' AI risk services. Miquel Vila highlights how power draw, cooling demands, and geographic clustering shape AI risk analysis. Dive into the policy implications and what it means for the industry. #DataCenterWatch #10aLabs #AIrisk #PowerDraw

🔗 https://aidailypost.com/news/data-center-watch-unfunded-separate-from-10a-labs-ai-risk-services

Australian super funds face massive risk in AI tech bubble. Michael Burry warns of potential market collapse threatening retirement savings. Geopolitical tensions add further complexity to investment landscape. #AIRisk #Superannuation

"AI Doom? No Problem"

I think this article from last weekend's Wall Street Journal is AI's "jumping the shark" moment for the "Cheerful Apocalyptics" in Silicon Valley who think that a superintelligent AI destroying humanity would not necessarily be a bad thing.

https://www.wsj.com/tech/ai/ai-apocalypse-no-problem-6b691772?st=zSyiSF

Also see another article, "The Asymmetric Design Flaw: Crippling Relational AI Guarantees Systemic Risk and Humanitarian Failure":
https://zenodo.org/records/17280485

#AI #philosophy #AIrisk #AIsafety #ethics

Anthropic's CEO puts the odds of a severely bad AI outcome at 25%; SMBs should plan safeguards, diversify vendors, and monitor regulations. #AIrisk #SMBstrategy

https://www.techradar.com/ai-platforms-assistants/claude/anthropics-ceo-gives-a-25-percent-chance-things-go-really-really-badly-with-ai

Anthropic's CEO gives 'a 25% chance things go really, really badly' with AI

But he's betting on the 75% chance of a more optimistic outcome

TechRadar