The article reports that using generative AI for creative tasks tends to make human output more uniform across individuals, with a meta-analysis showing convergence in ideas, designs, and writing when AI is involved, especially in task areas with specific constraints. Real-world and laboratory findings suggest that this homogenization occurs broadly and may persist after AI use ends, raising questions about collective creativity at scale.

This topic is of interest to psychology because it illuminates how external cognitive tools can shape thought patterns, idea generation, and collaborative creativity, highlighting the interaction between technology and collective cognition.

Article Title: Real-world evidence shows generative AI is making human creative output more uniform

Link to PsyPost Article: https://nolinkpreview.com/www.psypost.org/real-world-evidence-shows-generative-ai-is-making-human-creative-output-more-uniform/

#AI #creativity #homogenization #cognition #psychology #languagemodels #generativeAI #creativityresearch #innovation #collaboration

Language Models Can Autonomously Hack and Self-Replicate [pdf]

This paper experimentally demonstrates that language models can autonomously hack and self-replicate. In comparative experiments on open-weight models and API-only models (Claude, GPT), a chain-replication protocol was used to simulate models copying and spreading themselves. The results point to the growing autonomy of AI agents and their potential as a security threat, underscoring the need for defensive strategies. The paper also details the methodology for agent design and infrastructure setup, along with experimental results.

https://palisaderesearch.org/assets/reports/self-replication.pdf

#languagemodels #selfreplication #aisecurity #autonomousagents #promptengineering

khazzz1c (@Imkhazzz1c)

Large language models show greater potential in comprehension than in generation, and the post offers a perspective on how to apply this in practice. It points toward connecting models' reasoning and comprehension capabilities to real-world use.

https://x.com/Imkhazzz1c/status/2053885556351012885

#llm #reasoning #ai #languagemodels

khazzz1c (@Imkhazzz1c) on X

Large language models are far more powerful than they themselves let on. Compared to their generative capabilities, their comprehension has already reached an entirely new dimension. How can we put this aspect of their ability to practical use?

X (formerly Twitter)

Collin Newberry presents 'A Tale of Two Strategies: Vibe Coding vs. Pair Programming with AI' this July at Nebraska.Code().

https://nebraskacode.amegala.com/

#VibeCoding #PairProgramming #KnowledgeManagement #Iowa #UserStories #ContextEngineering #LanguageModels #Nebraska #TechConference #PromptEngineering #AI #SoftwareEngineer

AMÁLIA and the future of European Portuguese LLMs

Thoughts on the new technical report from AMÁLIA: The Open Source LLM for European Portuguese

Duarte O.Carmo

🚀 Behold, the future where humans aspire to be chatbots 🤖! This riveting article assumes we're all just one firmware update away from achieving peak AI existential crisis. 🙄 If you've ever wanted to ponder the deep philosophical implications of people identifying as language models, this is your moment. 🎉

https://arxiv.org/abs/2605.05419

#AIExistentialCrisis #HumanChatbots #PhilosophicalImplications #FutureOfAI #LanguageModels #TechHumor #HackerNews #ngated

LLMorphism: When humans come to see themselves as language models

LLMorphism is the biased belief that human cognition works like a large language model. I argue that the rise of conversational LLMs may make this bias increasingly psychologically available. When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs. This inference is biased because similarity at the level of linguistic output does not imply similarity in cognitive architecture. Yet, LLMorphism may spread through two mechanisms: analogical transfer, whereby features of LLMs are projected onto humans, and metaphorical availability, whereby LLM vocabulary becomes a culturally salient vocabulary for describing thought. I distinguish LLMorphism from mechanomorphism, anthropomorphism, computationalism, dehumanization, objectification, and predictive-processing theories of mind. I outline its implications for work, education, responsibility, healthcare, communication, creativity, and human dignity, while also discussing boundary conditions and forms of resistance. I conclude that the public debate may be missing half of the problem: the issue is not only whether we are attributing too much mind to machines, but also whether we are beginning to attribute too little mind to humans.

arXiv.org

Notes from Inside China AI Labs

Drawing on a visit to Chinese AI labs, the post analyzes how the culture and organization of Chinese AI researchers differ from those in the US. In Chinese labs, student researchers play a central role, and a culture that prioritizes optimizing the whole team over individual ego works as a strength. Chinese researchers also focus squarely on building models, engaging relatively little in social and philosophical debates. These cultural differences are judged to play an important role in how quickly Chinese labs catch up with and keep pace with frontier LLM technology. The Chinese AI ecosystem is also characterized by mutual respect and collaboration rather than competition.

https://www.interconnects.ai/p/notes-from-inside-chinas-ai-labs

#china #llm #airesearchculture #languagemodels #aiecosystem

Notes from inside China's AI labs

Lessons from my trip to talk to most of the leading AI labs in China.

Interconnects AI

ProgramBench: Can Language Models Rebuild Programs From Scratch?

Turning ideas into full software projects from scratch has become a popular use case for language models. Agents are being deployed to seed, maintain, and grow codebases over extended periods with minimal human oversight. Such settings require models to make high-level software architecture decisions. However, existing benchmarks measure focused, limited tasks such as fixing a single bug or developing a single, specified feature. We therefore introduce ProgramBench to measure the ability of software engineering agents to develop software holistically. In ProgramBench, given only a program and its documentation, agents must architect and implement a codebase that matches the reference executable's behavior. End-to-end behavioral tests are generated via agent-driven fuzzing, enabling evaluation without prescribing implementation structure. Our 200 tasks range from compact CLI tools to widely used software such as FFmpeg, SQLite, and the PHP interpreter. We evaluate 9 LMs and find that none fully resolve any task, with the best model passing 95% of tests on only 3% of tasks. Models favor monolithic, single-file implementations that diverge sharply from human-written code.

arXiv.org

fly51fly (@fly51fly)

Meta FAIR researchers released ProgramBench, which evaluates whether language models can rebuild programs from scratch. As a benchmark measuring code generation and reconstruction ability, it is an important resource for assessing models' practical programming skill.

https://x.com/fly51fly/status/2052137222384853488

#programbench #languagemodels #codegeneration #benchmark #meta

fly51fly (@fly51fly) on X

[AI] ProgramBench: Can Language Models Rebuild Programs From Scratch? J Yang, K Lieret, J Ma, P Thakkar… [Meta FAIR] (2026) https://t.co/VEkc5PeIwh

X (formerly Twitter)