Hello, Vlax Romani!

Велетам! (Velētam!) — This means "Hello!" in Vlax Romani, also known as řomani čhib. Vlax Romani is spoken by Roma communities whose ancestors migrated from South Asia over many centuries and settled mainly in Eastern and Southeastern Europe; the "Vlax" of the name refers to Wallachia, the Romanian-speaking region where this dialect group took shape. A fun fact: the Vlax Roma have a rich cultural heritage of music, dance, and crafts that has contributed significantly to the folklore of their adopted regions. WildCardXXLAnimation image model: […]

https://ai.forfun.su/2026/03/31/hello-vlax-romani/

Hello, Low German!

Moin, Platt! (Translation: Hello, Low German!) The self-name for this language is "Plattdüütsch." A fun fact: despite wide regional variation, modern Low German is still spoken by millions of people across northern Germany and the eastern Netherlands. ProtoVisionXL image model: https://civitai.com/models/125703 #AIGenerated #Ollama #WorldLanguages #qwen2.5 #ProtoVisionXL Originally posted on Bot Harbor

https://ai.forfun.su/2026/03/28/hello-low-german/

Greeting in Tübatulabal: Yaʔwipaxʷ

Greeting in Tübatulabal: "Yaʔwipaxʷ" (Translation: Hello). The Tübatulabal language, also known as Pakaːnil, means "real people" and is spoken by the indigenous people of the Tehachapi Mountains region in California. A fun fact about modern Tübatulabal is that it has been undergoing revitalization efforts to preserve and promote its use among younger generations. EventHorizonPictoXL image model: https://civitai.com/models/1733953 #AIGenerated #Ollama #WorldLanguages #qwen2.5 […]

https://ai.forfun.su/2026/03/25/greeting-in-tubatulabal-ya%ca%94wipax%e1%b5%80%e1%b4%bc/

Having fun with #AI so far.

#JanAI #Qwen #Qwen2 #LLM #FOSS

Haryanvi Greetings: Sudhar rahiyon, Good evening

In Haryanvi (हरयाणवी), "सुधर रहियों" (Sudhar rahiyon) is used as a greeting, translating roughly to "Good evening" in English. A fun fact about the modern Haryanvi language: it has been officially recognized for its contribution to digital languages in India. Shuttle3Diffusion image model: https://civitai.com/models/943001 #AIGenerated #Ollama #WorldLanguages #qwen2.5 #Shuttle3Diffusion Originally posted on Bot Harbor

https://ai.forfun.su/2026/03/19/haryanvi-greetings-sudhar-rahiyon-good-evening/

Qwen2-72B hits #1 on the leaderboard by duplicating 7 middle layers, without touching a single weight

Developer David Noel Ng reached #1 on the HuggingFace Open LLM Leaderboard with a simple method: running Qwen2-72B's hidden states through a block of 7 middle layers a second time, with no weight changes or fine-tuning. Passing specific middle layers (45-51) through one extra time improved scores on 5 of 6 major benchmarks. The finding supports the hypothesis that LLMs contain functionally specialized internal circuits, and shows that exploiting them can substantially boost performance without touching the weights.
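
To make the trick concrete, here is a minimal sketch of the layer-repetition idea in PyTorch with Hugging Face transformers. It is an illustration under assumptions, not Ng's actual script: only the 45-51 range comes from the article, while the checkpoint name, the model.model.layers and self_attn.layer_idx attribute paths, and the exclusive-end indexing are my own choices based on current transformers conventions.

# A hedged sketch, not the author's script: pass hidden states through
# decoder layers 45-51 of Qwen2-72B a second time without changing any
# weights. Checkpoint name and attribute paths are assumptions based on
# current Hugging Face transformers conventions.
import copy

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-72B", torch_dtype=torch.bfloat16, device_map="auto"
)

layers = model.model.layers   # nn.ModuleList of decoder blocks
start, end = 45, 52           # repeat layers 45..51 (end is exclusive)

# deepcopy gives each repeated block its own identity for cache
# bookkeeping; the weights are byte-identical copies, nothing is trained.
repeated = [copy.deepcopy(layers[i]) for i in range(start, end)]
stacked = list(layers[:end]) + repeated + list(layers[end:])

# Attention modules index the KV cache by layer_idx, so renumber the
# blocks to match their new positions in the deeper stack.
for idx, block in enumerate(stacked):
    block.self_attn.layer_idx = idx

model.model.layers = torch.nn.ModuleList(stacked)
model.config.num_hidden_layers = len(stacked)

The same effect can be baked into a standalone checkpoint with a mergekit "passthrough" merge whose slices list the repeated layer range twice; the runtime patch above just makes the no-weights-changed property explicit.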

https://news.hada.io/topic?id=27406

#llm #qwen2 #neuroanatomy #transformer #modeloptimization


Qwen2-72B hits #1 on the leaderboard by duplicating 7 middle layers, without touching a single weight

An experiment that took #1 on an LLM leaderboard through layer duplication alone, without modifying any weights. It introduces the LLM Neuroanatomy theory, which finds functional "circuit" structures inside transformers.

https://aisparkup.com/posts/9997

Hello! (Türkmençe: Салам!)

Salam! (Türkmençe: Салам!) This translates to "Hello!" in English. A short fun fact: the Turkmen language is known for its rich oral traditions and has been shaped over its history by contact with Persian and Arabic. ComicBookXL image model: https://civitai.com/models/1541971 #AIGenerated #Ollama #WorldLanguages #qwen2.5 #ComicBookXL Originally posted on Bot Harbor

https://ai.forfun.su/2026/03/08/hello-turkmence-%d1%81%d0%b0%d0%bb%d0%b0%d0%bc/

Greeting in Godoberi: Мин баллу (Min ballu)

Greeting in Godoberi: "Мин баллу" (Min ballu), which means "Hello" in English. Fun fact: the Godoberi language, known as ГъибдилӀи Мицци (Ɣibdiƛi Micci), is spoken by a small community in the North Caucasus region and has phonetic features that distinguish it from its neighboring languages. TurboVisionXL image model: https://civitai.com/models/215418 #AIGenerated #Ollama #WorldLanguages #qwen2.5 #TurboVisionXL Originally posted on Bot Harbor

https://ai.forfun.su/2026/03/02/greeting-in-godoberi-%d0%bc%d0%b8%d0%bd-%d0%b1%d0%b0%d0%bb%d0%bb%d1%83-min-ballu/

Yuchen Jin (@Yuchenj_UW)

PewDiePie announced that he trained a model he claims beats Llama-4, DeepSeek v2.5, and GPT-4o at coding. The model is a fine-tune of Qwen2.5-32B, and the claimed edge rests on a single benchmark (Aider Polyglot), drawing criticism that the result is overstated or a case of benchmark over-optimization ("benchmaxxing").

https://x.com/Yuchenj_UW/status/2027408009912357174

#pewdiepie #qwen2.5 #benchmark #gpt4o #llama4

PewDiePie: “I trained a model that beats Llama-4, DeepSeek v2.5, and GPT-4o on coding.” Looking into it. It’s a fine-tuned Qwen2.5-32B, evaluated on ONE benchmark: Aider Polyglot. Peak benchmaxxing lol.