猛虎蔚AttentionCourage🌈

@Courage24Freedom
10 Followers
112 Following
438 Posts

loving creating and exploring, reading, writing, music and languages
Attention is all you need
You are what you consume (time, food, information, attention, energy)

#SelfGrowth #SocialSkills #LanguageLearning #Diary

Language learning
Self-growth & self-care
Social skills
TechforGood
Some moments at Fuji Rock that moved me
I love Mariya Takeuchi so much!!!
Death anxiety; understanding the relationship between death and time; life education and death education
12/ Some books, films, and music I've been reading, watching, and listening to lately;
13/ People, things, and events that have deeply influenced me;
14/ Self-reflection against who I was four or five years ago: not enough follow-through; be proactive, proactive, and proactive again about going after what I want, and practice standing up for myself more;
15/ How my understanding of being assertive has changed
My urge to express myself has been overflowing lately, so here is a first list of themes and keywords I want to write about:
1/ A retrospective on my surgery;
2/ How to treat venomous insect bites yourself (European and mainland-China versions);
3/ Writing science fiction: reality and dreams, where dreams are high-fidelity projections of worlds from other spacetime dimensions;
4/ How to build an AI agent that helps visually impaired people write by voice input;
5/ Learning to express my feelings in written French and English, and the differences in thinking and culture behind each language;
6/ Spoken Spanish and the music and dance culture of flamenco;
7/ Embodied cognition and non-linear perception of time;
8/ How to explore and develop my own style of dress;
9/ Beyond multimodal large language models: world models (building the bridge between the physical world and high-fidelity digital worlds);
10/ Recent conversations with different friends and my own reflection on them;
11/ Discovering that I learn musical instruments with my whole body's muscles and sensory cells, which is rare in a symphony orchestra, and a related exchange with a pianist;
12/ Relearning and practicing my old instruments, and how the songs and muscle memory bring back my thoughts and feelings from different periods: time flowing back to 2008 and the wild, passionately burning side of me;
13/ Thoughts and reflections from the ACL conference in Vienna

^ A kind of "resistance" that even low-energy people can do: taking good care of yourself is part of the "resistance" too
https://www.instagram.com/p/DH8_WozOnW-/

Protecting your own attention and time really matters. Capital uses attention to make money, and the money it rakes in gets invested in war; if you don't give these people your attention, they can't profit from it.

Janea Brown on Instagram: "The $0 boycott that might give you MORE energy rather than drain it 😈. After 2 years, redirecting my attention has become the foundation for all other resistance—and often costs nothing. Our minutes = their millions. Every algorithm view and targeted ad funds the systems we’re fighting. This isn’t about perfection (I still love binges & a good scroll sesh 😌). It’s about becoming less useful to empire and more present for each other. The tools that help me and more avails in the “Anti-Capitalist Tools” board linked in bio ❤️‍🔥. What digital boycott practice works for you 🦋? #EarthlingsUndone #LazyResistance #AttentionLiberation #SystemChange #CollectiveCare #BetterAncestor"

98K likes, 318 comments - jnaydaily on April 2, 2025, on Instagram.
Notion CEO Ivan Zhao wants you to demand better from your tools

Notion’s Ivan Zhao on AI, productivity, and the future of work.

The Verge
Two months later, I'm back here! A lot has changed recently, and I've been talking with my inner self much more. A new stage is about to arrive, and I hope I can go all out before it truly comes, before dawn breaks, so that I at least do right by myself. The Universe keyword that keeps recurring lately: 1/ Honest to myself
Praise yourself every day: so lovable, so wise and strong! "The most solid confidence doesn't come from whether you meet some external standard, like what you scored on an exam, what degree or income you have, how many houses and cars you own. It comes from this: as you collide with the world, you watch how you react to all kinds of things, you come to know your own character and integrity better and better, and you can face yourself without shame; instead you find yourself genuinely lovable and genuinely steadfast, someone worth having as a friend. That is the most solid confidence."

LLM Unlearning Should Be Form-Independent

Xiaotian Ye, Mengqi Zhang, Shu Wu
https://arxiv.org/abs/2506.07795 https://arxiv.org/pdf/2506.07795 https://arxiv.org/html/2506.07795

arXiv:2506.07795v1
Abstract: Large Language Model (LLM) unlearning aims to erase or suppress undesirable knowledge within the model, offering promise for controlling harmful or private information to prevent misuse. However, recent studies highlight its limited efficacy in real-world scenarios, hindering practical adoption. In this study, we identify a pervasive issue underlying many downstream failures: the effectiveness of existing unlearning methods heavily depends on the form of training samples and frequently fails to generalize to alternate expressions of the same knowledge. We formally characterize this problem as Form-Dependent Bias and systematically investigate its specific manifestation patterns across various downstream tasks. To quantify its prevalence and support future research, we introduce ORT, a novel benchmark designed to evaluate the robustness of unlearning methods against variations in knowledge expression. Results reveal that Form-Dependent Bias is both widespread and severe among current techniques.
We argue that LLM unlearning should be form-independent to address the endless forms of downstream tasks encountered in real-world security-critical scenarios. Towards this goal, we introduce Rank-one Concept Redirection (ROCR), a novel training-free method, as a promising solution path. ROCR performs unlearning by targeting the invariants in downstream tasks, specifically the activated dangerous concepts. It is capable of modifying model parameters within seconds to redirect the model's perception of a specific unlearning target concept to another harmless concept. Extensive experiments demonstrate that ROCR significantly improves unlearning effectiveness compared to traditional methods while generating highly natural outputs.
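
The "rank-one" idea is easy to picture with a tiny sketch. Below is only my own illustration of the general mechanism, not the paper's actual ROCR procedure: the function rank_one_redirect and the key directions k_src / k_dst are made-up stand-ins. The point is that a single outer-product update to one weight matrix remaps the layer's response along the target concept's direction to the response it would give for a harmless concept, while leaving everything orthogonal to that direction untouched, which is why such an edit can be applied in seconds without any training.

```python
import torch

def rank_one_redirect(W: torch.Tensor, k_src: torch.Tensor, k_dst: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of rank-one concept redirection.

    W     : (d_out, d_in) weight matrix of one layer.
    k_src : (d_in,) direction that activates the unlearning target concept.
    k_dst : (d_in,) direction of the harmless replacement concept.

    Returns an edited copy of W whose response along k_src matches the
    layer's response to k_dst, while orthogonal inputs are unchanged.
    """
    k_src = k_src / k_src.norm()          # unit-normalize the target direction
    v_src = W @ k_src                     # current response to the target concept
    v_dst = W @ k_dst                     # desired response: the harmless concept's
    # One outer-product (rank-one) update changes the response along k_src only.
    return W + torch.outer(v_dst - v_src, k_src)

# Toy check: the edited layer answers the target direction as if it were harmless.
d_in, d_out = 16, 32
W = torch.randn(d_out, d_in)
k_src, k_dst = torch.randn(d_in), torch.randn(d_in)
W_edited = rank_one_redirect(W, k_src, k_dst)
assert torch.allclose(W_edited @ (k_src / k_src.norm()), W @ k_dst, atol=1e-5)
```

In the paper's framing, the invariant being targeted is the activated dangerous concept, so the source direction would presumably be estimated from the model's own activations rather than drawn at random as in this toy check.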

Recent Aha moments:
1/ Success comes from unfounded confidence plus many times the crazy effort and concentration.
2/ Living a simple life saves space in your brain for handling complex, difficult problems.