Why Model Collapse In LLMs Is Inevitable With Self-Learning

There is a persistent belief in the ‘AI’ community that large language models (LLMs) have the ability to learn and self-improve by tweaking the weights in their vector space. Although t…

Hackaday
Does it #deradicalize? Their tendency towards sycophancy concerns me. On a quick search I found: www.far.ai/news/attempt... "#FrontierLLMs Attempt to Persuade into #HarmfulTopics, August 21, 2025. Summary: #LargeLanguageModels (#LLMs) are already more persuasive than humans in many domains."
Frontier LLMs Attempt to Persuade into Harmful Topics

Large language models (LLMs) are already more persuasive than humans in many domains. While this power can be used for good, like helping people quit smoking, it also presents significant risks, such as large-scale political manipulation, disinformation, or terrorism recruitment. But how easy is it to get frontier models to persuade into harmful beliefs or illegal actions? Really easy – just ask them.

AI Accelerates Exploits, Forces New Breach Playbooks

The game-changing capabilities of AI models like Anthropic's Claude have drastically shrunk the exploit window, allowing them to uncover vulnerabilities in minutes that would take human experts hours or even weeks to detect. This seismic shift is forcing organizations to rethink their approach to…

https://osintsights.com/ai-accelerates-exploits-forces-new-breach-playbooks?utm_source=mastodon&utm_medium=social

#AiAcceleratedExploits #LargeLanguageModels #VulnerabilityManagement #IncidentResponse #EmergingThreats

AI Accelerates Exploits, Forces New Breach Playbooks

Discover how AI accelerates exploits and forces new breach playbooks, learn to rethink vulnerability windows and incident response, and protect your organization now with expert insights.

OSINTSights
Trying Pair Programming With An LLM Chatbot

When it comes to software developers, there are a few distinct types. For example, the extroverted, chatty type, who is always going out there to share the latest and newest libraries and project…

Hackaday
Anthropic’s Claude Code Problem Shows How Fragile AI Moats Really Are | HackerNoon

It's been a rough few months for Anthropic....

- «Hey guys, how do you use ROCm in Linux?»
- «First, ensure you have an RTX in your computer»

🤣

#NVIDIA #ROCm #AMD #Radeon #RTX #GeForce #CUDA #AI #ArtificialIntelligence #LLM #LargeLanguageModels #Llama #LlamaCCP #Ollama #NPU

Given how #LLMs work, it would make sense to treat LLMs the same way as other stochastic methods of divination, like consulting demons or oracles, reading tarot cards, throwing bones, tea leaves, etc…

It's been a long while since the Bible had its last expansion pack, but I'm sure the next one will at least contain a prohibition on making and consulting LLMs.

——
#MachineLearning #LargeLanguageModels #UnpopularOpinion

IPO Arena will compare eight LLMs on IPO-stage stock trading, sentiment, and risk-adjusted returns using LIBB infrastructure. https://hackernoon.com/can-llms-beat-the-ipo-etf-inside-the-ipo-arena-experiment #largelanguagemodels
Can LLMs Beat the IPO ETF? Inside the IPO Arena Experiment | HackerNoon

Five AI models, three users, one finding: the settings came from the textbook, not the data
We took Nightscout data from three people and gave it to five Large Language Models with a detailed prompt to see how they would respond.

The results weren't what I was expecting.
https://www.diabettech.com/five-ai-models-three-users-one-finding-the-settings-came-from-the-textbook-not-the-data/
#AI #Diabetes #PumpSettings #AIDSettings #ArtificialIntelligence #LargeLanguageModels