"Generative design of novel bacteriophages with genome language models"

#BioInformatics #GenAI #Genome #LanguageModel #ReSearch ... In brief: generation and experimental testing of genomes generated from known ones: new discoveries!

https://www.biorxiv.org/content/10.1101/2025.09.12.675911v1

Generative design of novel bacteriophages with genome language models

Many important biological functions arise not from single genes, but from complex interactions encoded by entire genomes. Genome language models have emerged as a promising strategy for designing biological systems, but their ability to generate functional sequences at the scale of whole genomes has remained untested. Here, we report the first generative design of viable bacteriophage genomes. We leveraged frontier genome language models, Evo 1 and Evo 2, to generate whole-genome sequences with realistic genetic architectures and desirable host tropism, using the lytic phage ΦX174 as our design template. Experimental testing of AI-generated genomes yielded 16 viable phages with substantial evolutionary novelty. Cryo-electron microscopy revealed that one of the generated phages utilizes an evolutionarily distant DNA packaging protein within its capsid. Multiple phages demonstrate higher fitness than ΦX174 in growth competitions and in their lysis kinetics. A cocktail of the generated phages rapidly overcomes ΦX174-resistance in three E. coli strains, demonstrating the potential utility of our approach for designing phage therapies against rapidly evolving bacterial pathogens. This work provides a blueprint for the design of diverse synthetic bacteriophages and, more broadly, lays a foundation for the generative design of useful living systems at the genome scale.

Competing Interest Statement: B.L.H. acknowledges outside interest in Arpelos Biosciences and Genyro as a scientific co-founder. S.H.K. and B.L.H. are named on a provisional patent application applied for by Stanford University and Arc Institute related to this manuscript. All other authors declare no competing interests.

Arc Research Institute (https://ror.org/00wra1b14); Stanford Institute for Human-Centered Artificial Intelligence

bioRxiv

AISatoshi (@AiXsatoshi)

Tencent has released a state-of-the-art LLM called 295B-A21B. It appears to be a recent language model with a large parameter count; this is news of a new AI model release.

https://x.com/AiXsatoshi/status/2047331705095332275

#tencent #llm #languagemodel #aimodel #foundationmodel

AI✖️Satoshi⏩️ (@AiXsatoshi) on X

From Tencent as well: a state-of-the-art 295B-A21B LLM

X (formerly Twitter)

fly51fly (@fly51fly)

Research has been introduced showing that Micro Language Models enable instant responses. It is a 2026 paper by researchers from Meta AI and the University of Washington, covering a line of work that achieves fast inference and real-time responsiveness with smaller models.

https://x.com/fly51fly/status/2047069038665482678

#languagemodel #smallmodel #inference #metai #arxiv

fly51fly (@fly51fly) on X

[CL] Micro Language Models Enable Instant Responses W Cheng, T Chen, K Helwani, S Srinivasan… [University of Washington & Meta AI] (2026) https://t.co/aRW3IkD7RA

X (formerly Twitter)

fly51fly (@fly51fly)

A Together AI paper proposing a new language-model approach called Introspective Diffusion Language Models was shared.

https://x.com/fly51fly/status/2044169743876420065

#diffusion #languagemodel #research #togetherai #arxiv

fly51fly (@fly51fly) on X

[LG] Introspective Diffusion Language Models Y Yu, Y Jian, J Wang, Z Zhou… [Together AI] (2026) https://t.co/nJ67v074H3

X (formerly Twitter)

Github Awesome (@GithubAwesome)

GuppyLM, a 9-million-parameter language model, has been newly released. It is a 6-layer vanilla transformer trained from scratch, using neither SwiGLU nor RoPE. It can be trained in 5 minutes on a free Colab T4 GPU, and the entire pipeline is public, making it useful for training and reproducing small models.

https://x.com/GithubAwesome/status/2041322379071426758

#languagemodel #opensource #transformer #smallmodel #airesearch

Github Awesome (@GithubAwesome) on X

GuppyLM is a 9-million parameter language model built from scratch that does exactly one thing: pretends to be a small fish named Guppy. No SwiGLU, no RoPE. Just a pure vanilla 6-layer transformer. Trains in 5 minutes on a free Colab T4 GPU. The entire pipeline is exposed: data

X (formerly Twitter)

🎉 Wow, someone finally made a language model that blubbers like a goldfish! 🐠 With a whopping 9 million parameters, it's a marvel of "innovation" that could probably answer "Hello?" if you asked it thrice. #GitHub must be thrilled to host yet another #techno-novelty no one asked for! 🙄

https://github.com/arman-bd/guppylm #languageModel #innovation #goldfish #9millionparameters #HackerNews #ngated
GitHub - arman-bd/guppylm: A ~9M parameter LLM that talks like a small fish.

A ~9M parameter LLM that talks like a small fish. Contribute to arman-bd/guppylm development by creating an account on GitHub.

GitHub
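For readers curious what "vanilla 6-layer transformer, no SwiGLU, no RoPE" means in practice, here is a minimal numpy sketch of such an architecture. This is not GuppyLM's actual code; all dimensions, the single-head attention, and the tanh-approximate GELU are illustrative assumptions. The point is the contrast: positions enter as additive embeddings at the input (rather than RoPE inside attention), and the feed-forward layer is a plain GELU MLP (rather than a gated SwiGLU unit).

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # standard LayerNorm over the feature dimension (no learned scale/shift here)
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention(x, p):
    # single-head causal self-attention; no RoPE, positions were added at the input
    T, d = x.shape
    q, k, v = x @ p["Wq"], x @ p["Wk"], x @ p["Wv"]
    scores = q @ k.T / np.sqrt(d)
    scores[np.triu(np.ones((T, T)), k=1).astype(bool)] = -1e9  # causal mask
    return softmax(scores) @ v @ p["Wo"]

def block(x, p):
    # pre-norm transformer block: attention, then a plain GELU MLP (no SwiGLU gating)
    x = x + attention(layer_norm(x), p)
    x = x + gelu(layer_norm(x) @ p["W1"]) @ p["W2"]
    return x

rng = np.random.default_rng(0)
d, T, n_layers = 64, 16, 6  # illustrative sizes, not GuppyLM's real config
shapes = [("Wq", (d, d)), ("Wk", (d, d)), ("Wv", (d, d)), ("Wo", (d, d)),
          ("W1", (d, 4 * d)), ("W2", (4 * d, d))]
params = [{k: rng.normal(0, 0.02, s) for k, s in shapes} for _ in range(n_layers)]

# token embeddings plus additive positional embeddings (randomly initialised here)
x = rng.normal(0, 0.02, (T, d)) + rng.normal(0, 0.02, (T, d))
for p in params:  # 6 stacked identical blocks
    x = block(x, p)
print(x.shape)  # (16, 64)
```

With a small vocabulary and tied embedding/output weights, a stack like this lands in the single-digit-millions parameter range, which is why such a model can train in minutes on a free T4.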

fly51fly (@fly51fly)

Researchers from Sakana AI and NVIDIA have published a paper proposing sparser, faster, and lighter transformer language models. It is architectural work aimed at improving the efficiency of large language models, and is important reading for AI developers interested in model compression and faster inference.

https://x.com/fly51fly/status/2036923500737511620

#transformer #languagemodel #efficiency #sparsity #research

fly51fly (@fly51fly) on X

[LG] Sparser, Faster, Lighter Transformer Language Models E Cetin, S Peluchetti, E Castillo, A Naruse… [Sakana AI & NVIDIA] (2026) https://t.co/wnqkpVcmYQ

X (formerly Twitter)

⬆️ >> #AI got the blame for #Iran school bombing…

Excellent example of how a #languageModel is NOT the same as #worldModel or #realTime #realWorld data.

The #Maven system that #Palantir embedded into the #US military infrastructure relies on BOTH #LLM and #realTime #realWorld data, but it cannot prevent catastrophes when there is a failure in either or both of them.

In this case, it was #staleData at the very least— possibly a faulty/imprecise language model as well.

https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying

AI got the blame for the Iran school bombing. The truth is far more worrying

LLMs-gone-rogue dominated coverage, but had nothing to do with the targeting. Instead, it was choices made by human beings, over many years, that gave us this atrocity

The Guardian

Teaching AI Ethics

Update: since I wrote this original post covering the nine areas, I've expanded each one into a complete article. Have a read through this post, and then when you're ready to dive deeper into AI ethics, check out the full series here. If you linked to this post as part of a course or university resource, I suggest updating your links with the complete series. https://leonfurze.com/ai-ethics/ As we head into the start of Term 1 it's already looking like Artificial Intelligence is going to be […]

https://leonfurze.com/2023/01/26/teaching-ai-ethics/

Using ChatGPT for Conferencing and Feedback

I've used conferencing for years as my main form of feedback and assessment. I stopped collecting piles of books, stopped writing margin notes that no-one ever read, and stopped correcting work like a human spell-checker. Aside from the hours of time saved by not "correcting" work, I also built stronger relationships with students as a result of regularly sitting with them 1:1 to go through their work. At the moment, ChatGPT has been banned by the Department of Education in most states […]

https://leonfurze.com/2023/02/08/using-chatgpt-for-conferencing-and-feedback/