Ars Technica (@arstechnica)

An open-source large genome model trained on trillions of base pairs has been announced. Built on massive genomic data, this open-source AI can be applied immediately to genome analysis, prediction, and bioinformatics, and its potential for research reproducibility and community-driven improvement suggests it will significantly shape future life-science and medical AI development.

https://x.com/arstechnica/status/2029321245217702070

#genomics #opensource #largemodel #bioinformatics #ai

Ars Technica (@arstechnica) on X

Large genome model: Open source AI trained on trillions of bases https://t.co/gwDL5Qgu5C

X (formerly Twitter)

techAU (@techAU)

A brief announcement tweet that a large language model, "quen 3.5", with roughly 400 billion (≈400B) parameters has been released. Only the model name and scale are mentioned; it reads as the release of a new LLM with a very large parameter count.

https://x.com/techAU/status/2023449550254731354

#quen #llm #largemodel #release

techAU (@techAU) on X

~400B parameter quen 3.5 is out.


Qwen3-Omni-Flash-2025-12-01: a next-generation native multimodal large model

https://qwen.ai/blog?id=qwen3-omni-flash-20251201

#HackerNews #Qwen3OmniFlash #NextGen #AI #Multimodal #Model #LargeModel

Qwen

Qwen Chat offers comprehensive functionality spanning chatbot, image and video understanding, image generation, document processing, web search integration, tool utilization, and artifacts.

Analyzing And Editing Inner Mechanisms Of Backdoored Language Models

#ResearchHighlights

"We can successfully insert a weak backdoor mechanism in the benign model, even without also editing the embeddings of the trigger words."

"Our framework can reverse-engineer backdoor mechanisms in toy and large models for the first time, scale the strength of the backdoor mechanism ..."

https://arxiv.org/abs/2302.12461

#ai #llm #pcpablation #mlp #toymodel #largemodel #backdoor #backdooredlanguagemodel #chatgpt

Analyzing And Editing Inner Mechanisms Of Backdoored Language Models

Poisoning of data sets is a potential security threat to large language models that can lead to backdoored models. A description of the internal mechanisms of backdoored language models and how they process trigger inputs, e.g., when switching to toxic language, has yet to be found. In this work, we study the internal representations of transformer-based backdoored language models and determine early-layer MLP modules as most important for the backdoor mechanism in combination with the initial embedding projection. We use this knowledge to remove, insert, and modify backdoor mechanisms with engineered replacements that reduce the MLP module outputs to essentials for the backdoor mechanism. To this end, we introduce PCP ablation, where we replace transformer modules with low-rank matrices based on the principal components of their activations. We demonstrate our results on backdoored toy, backdoored large, and non-backdoored open-source models. We show that we can improve the backdoor robustness of large language models by locally constraining individual modules during fine-tuning on potentially poisonous data sets. Trigger warning: Offensive language.
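The core of the paper's PCP ablation is replacing a transformer module with a low-rank linear map built from the principal components of that module's recorded activations. Below is a minimal NumPy sketch of that idea, not the authors' implementation: the function names (`pcp_ablation_matrix`, `pcp_replacement`) and the toy data are illustrative assumptions, and the rank-k projection stands in for whatever parametrization the paper actually uses.

```python
import numpy as np

def pcp_ablation_matrix(activations, k):
    """Build a rank-k linear replacement for a module from the
    principal components of its recorded activations.

    activations: (n_samples, d) array of the module's outputs.
    k: number of principal components to keep.
    Returns (mean, P), where P = U_k @ U_k.T projects onto the
    top-k principal subspace of the centered activations.
    """
    mean = activations.mean(axis=0)
    centered = activations - mean
    # Right singular vectors of the centered activations are the PCs.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    u_k = vt[:k].T            # (d, k) top-k principal directions
    P = u_k @ u_k.T           # (d, d) rank-k projection matrix
    return mean, P

def pcp_replacement(x, mean, P):
    """Low-rank stand-in for the original module: project the
    centered input onto the principal subspace, then un-center."""
    return (x - mean) @ P + mean

# Toy demo: activations that vary almost entirely along one direction,
# mimicking a module whose behavior is dominated by a single component.
rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 1)) @ np.array([[3.0, 0.0, 0.0, 0.0]])
acts += 0.01 * rng.normal(size=(500, 4))   # small off-axis noise
mean, P = pcp_ablation_matrix(acts, k=1)
approx = pcp_replacement(acts, mean, P)
# The rank-1 replacement reproduces these activations almost exactly.
print(np.allclose(approx, acts, atol=0.1))
```

Because `P` has rank k, the replacement keeps only the dominant directions of the module's behavior, which is what lets the paper reduce a module "to essentials for the backdoor mechanism" and then remove, insert, or rescale it.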

arXiv.org