Long and fiery winter night it is ! User:Dragons_Bot is importing frequency lists for 500 more #languages from Unilex into #Lingualibre. The Dragons Bot script is running tonight, editing Lili persistently. We will then have common-word lists for 1001 languages, ready for you to record. At step 3 of the Recording Studio, click "local list", then search List:{your_iso}/Unilex and you are good to go ! If your community's languages aren't there, let me know below. 🎉
https://lingualibre.org/wiki/Special:RecordWizard

🤖🐲Another long day with User:Dragons_Bot!

Months ago, I activated several #SignLanguages on #LinguaLibre: people can video-record signed words. While doing activity stats, a #SPARQL query showed missing data on 467 language items. Dragons_Bot just fixed those. It will be useful for the incoming 3rd recording type, for #WhistledLanguages. 😉

Today, I use the Lingualibre Wikibase as a calm pad for coding my bot. Some day, I will move to #Wikidata for live editing on languages. 🎉
https://en.wikipedia.org/wiki/Whistled_language


🤖🐲 User:Dragons_Bot to the rescue ! Doing cleanups !

Did you know ? Lingualibre has 219 #languages recorded, but a #SPARQL query will return 221 languages. Why ? Because Chinese, for example, is erroneously present twice 😲 :
- ❌ as Q130, iso `zho`, for written #Chinese
- ✅ as Q113, iso `cmn`, for Mandarin Chinese
Tonight, I am coding a script to move all records to `cmn`, on both #Lingualibre's items and #Commons' file wikipages. Fighto ! ò__ó
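For the curious, the migration is essentially a one-field swap per record. Below is a minimal Python sketch of that logic, assuming each record is a dict carrying a language Q-id and an ISO code; the field names and the helper itself are hypothetical illustrations, not Dragons_Bot's actual code.

```python
# Hypothetical sketch of the zho -> cmn remapping, assuming records are
# dicts with "language" (Lingua Libre Q-id) and "iso" fields.
WRONG = {"qid": "Q130", "iso": "zho"}   # duplicate "written Chinese" item
RIGHT = {"qid": "Q113", "iso": "cmn"}   # Mandarin Chinese, the correct target

def remap_language(record: dict) -> dict:
    """Return a copy of the record pointing to the correct language item."""
    fixed = dict(record)
    if fixed.get("language") == WRONG["qid"] or fixed.get("iso") == WRONG["iso"]:
        fixed["language"] = RIGHT["qid"]
        fixed["iso"] = RIGHT["iso"]
    return fixed
```

Records already on `cmn` pass through unchanged, so the script can safely be re-run.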

Script running tonight, still ~2 hours to go. 3 languages and ~1,000 recordings to fix to get back to clean data. I wonder how the other scripts of the toolchain will follow, to be honest. But for now : sleep !
[2:25am edit: Well, I added a few hours and finished that task ò__ó]
https://lingualibre.org/wiki/Help:SPARQL_for_maintenance#Counts

This new year, I'm working on an open-licence dump of 8,598 #Chinese audio recordings.

# The source
Those files were part of the original project, the Shtooka recorder (2005-2016), which @Wikimedia_Fr's Nicolas Vion recoded and renamed into #Lingualibre (2016+). The full dumps of 150~300k audios have been lying there for 8 years now, in need of processing and migration to #WikimediaCommons.

# Scraping
In past years, I noticed the webpages of this archive project had collapsed. The data still seemed available through one or two access points. Yesterday I scraped all I could get:

# Download all from Shtooka
$ wget -r -np -nc -nH -A .flac,.html https://packs.shtooka.net/ --no-check-certificate
>FINISHED --2024-01-03 14:06:55--
>Total clock time: 1h 26m 49s
>Downloaded: 114574 files, 7.9G in 21m 56s (6.14 MB/s)

😿 Many filepaths failed
💖 #Chinese HSK succeeded
🔉 高低 gāodī: height

# INSPECT
I now have 8,598 Chinese #HSK audios. As usual, we progress with a small sample of files. The cycle goes :
- investigate
- code
- fix
- expand.
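The "small sample" step above can be sketched in Python: pick a reproducible handful of files from the scraped archive before running the full pipeline. The folder layout and helper name are illustrative assumptions, not the actual tooling.

```python
# Hypothetical sketch: select a fixed, reproducible sample of scraped files
# to investigate before expanding to the whole archive.
import random
from pathlib import Path

def pick_sample(folder: str, pattern: str = "*.flac", n: int = 10, seed: int = 42):
    """Return up to n files; the fixed seed gives the same sample every run."""
    files = sorted(Path(folder).glob(pattern))
    rng = random.Random(seed)  # local RNG, leaves global random state untouched
    return rng.sample(files, min(n, len(files)))
```

Running the investigate/code/fix loop on those ten files first keeps mistakes cheap before the 8,598-file expansion.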

Thank goodness I have documented most of my actions on #Lingualibre for the past decade ! Dozens of Help pages to onboard junior programmers.

https://lingualibre.org/wiki/Help:Converting_audios#Helpers

I updated the command a bit thanks to ChatGPT's suggestions. To print a Shtooka file's rich metadata, try :

$ ffprobe -hide_banner ./data/cmn-0a0a8a8b.ogg

*HSK version 1, from the 2000s


# METADATA OF INTEREST
Among the twenty metadata fields this Chinese audio file contains, several are of interest.

```
- speaker name
- speaker LL id
- speaker gender
- uploader username
- Wikidata language id
- word
- date of creation
- open license
```

Great ! I need those values for WikimediaCommons' Template:Lingua_libre_record. 😉
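To give an idea of the mapping, here is a hedged Python sketch turning those fields into template wikitext. The tag keys and the template parameter names below are illustrative assumptions; the real Template:Lingua_libre_record parameters should be checked on Commons.

```python
# Hypothetical sketch: map extracted audio metadata to wikitext for
# Commons' Template:Lingua_libre_record. Both the input tag keys and the
# template parameter names are assumptions, not the real schema.
def build_wikitext(tags: dict) -> str:
    params = {
        "speaker": tags.get("speaker_name", ""),
        "author": tags.get("uploader", ""),
        "language": tags.get("wikidata_language", ""),
        "transcription": tags.get("word", ""),
        "date": tags.get("date", ""),
        "license": tags.get("license", ""),
    }
    body = "\n".join(f"| {key} = {value}" for key, value in params.items())
    return "{{Lingua_libre_record\n" + body + "\n}}"
```

Missing tags simply become empty parameters, which is easier to review in a dry run than a crash mid-batch.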

# WIKIBOT
Dragons_Bot stands upon NodeJS and #WikiapiJS, a powerful JS framework I use. (I love this project, so I was also involved in its documentation.) It has 44 stars ⭐ on GitHub :

https://kanasimi.github.io/wikiapi/

As of now, I have a decent script which creates the suitable wikitext and filename for each upload, the audios converted to .wav, and a clean data file to run the whole thing.
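As an illustration, the filename part can be sketched like this. The pattern mirrors filenames seen on existing Lingua Libre uploads ("LL-Q<lang> (<iso>)-<speaker>-<word>.wav"), treated here as an assumption rather than the script's actual code.

```python
# Hedged sketch of a Commons filename builder, following the
# "LL-Q<lang> (<iso>)-<speaker>-<word>.wav" pattern observed on
# existing Lingua Libre uploads (an assumption, not the real script).
def make_filename(lang_qid: str, iso: str, speaker: str, word: str) -> str:
    """Build the target Commons filename for one recording."""
    return f"LL-{lang_qid} ({iso})-{speaker}-{word}.wav"
```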

Sidenote: A more popular alternative for JS devs wanting a #wikibot is #WTF_wikipedia (750 ⭐ ) :
https://github.com/spencermountain/wtf_wikipedia/graphs/contributors


[Temporary toot]
Ok ! I'm done for today. As visible in the previous screenshot, I got a lot of Shtooka metadata from the audio files (via ffprobe). I need enlightenment from Nicolas Vion to know whether the duplications I see there are indeed duplications or not. Phoning him later. When I get the green light, I can mass-import those HSK audios.

Chinese + Lingualibre + WikimediaCommons + Wikibot thread ➡️ cc @Ash_Crow @harmonia_amanda

Will you notice the differences ?

@lingualibre being #OpenSource, web developer and Wikimedian @elfix jumped in and switched our SPARQL endpoint URL. This revives an important but heavy query to document and visualize #Gender biases, in order to counter them.

#LinguaLibre, like most @Wikimedia projects, reproduces gender and diversity biases. Our movement therefore leads explicit efforts for inclusivity and diversity.

- Prod https://lingualibre.org/LanguagesGallery/index.html
- Dev https://hugolpz.github.io/LanguagesGallery/


**90% of Wikipedia's Editors Are Male—Here's What They're Doing About It.**

The group that oversees the free encyclopedia is trying to fix a years-old problem.
By Robinson Meyer, The Atlantic

https://www.theatlantic.com/technology/archive/2013/10/90-of-wikipedias-editors-are-male-heres-what-theyre-doing-about-it/280882/


🎉 Finally ! Wikimedia Commons was blocking the upload of files & filenames containing characters from some minority #languages' writing systems.

With today's change, those characters and filenames are now accepted on Commons and therefore on #Lingualibre. *It allows Lingualibre & Wikimedia to support more, and smaller, languages & cultures.*

It doesn't seem like much, but Lili members have been pushing this issue since 2021.

It's now fixed thanks to MW dev @LucasWerkmeister and User:Nikki. ❤️ https://commons.wikimedia.org/w/index.php?title=MediaWiki%3ATitleblacklist&diff=845765023&oldid=829500280


Woow !! #LinguaLibre just made a +6 languages jump tonight ! I saw some unfamiliar ISO codes yesterday in the recent changes logs; I need to investigate.

We are now at 227 open content audio lexicons !
https://hugolpz.github.io/LanguagesGallery/


240 languages !
Thanks to Indonesia's #Haji language, there are now 240 #languages on #lingualibre.
- Stats: https://hugolpz.github.io/LanguagesGallery/
- Wikidata: https://wikidata.org/wiki/Q5639933
- Wikipedia: https://en.wikipedia.org/wiki/Haji_language

Lingualibre continues to provide user-friendly systems for all language communities, including the smaller ones, to rapidly record their local vocabulary.


What a week for #Lingualibre !
Lingualibre is rich in #languages data but also in huge need of web developers and outreach efforts.
I've been monitoring potential grants for a while, gathering all those I could identify in a Grants table.
https://meta.wikimedia.org/wiki/Template:Grants

@Wikimedia_Fr and the French Ministry of Culture are providing a helpful yearly lifeline for software maintenance.

But we need more to stay above water and be bold.

@wikimediafoundation's Technology Fund was my main hope. But after monitoring it for 2 years, and despite it being a very necessary lifeline for Wikimedia's open source projects, it hasn't started yet. 😢 I still hope it will someday, but ... https://meta.wikimedia.org/wiki/Grants:Programs/Wikimedia_Research_%26_Technology_Fund


Two months ago, the @wikimediafoundation was accepted into the Google Summer of Code 2024 ! #gsoc24

The GSoC is a @Google-funded summer program where Google pays an internship for junior developers to contribute to open source codebases.

So I put on my grant writer / mentor hat, and submitted not one, but TWO Lingualibre coding projects.

## LinguaLibre v3

The first is to speed up the deployment of the next version of Lingualibre, with an easier-to-maintain, easier-to-query application. It is critical for every open source project to be easy to dive into, to fix, and to expand. This effort will be co-mentored with @Poslovitch.

And gosh ! In just a few days, *7* junior developers expressed interest in this #Lingualibre GSoC24. I onboarded 5 of them, 3 of whom have already cloned the repository and started hacking around the #Django/#VueJS codebase. 😮🎇

## LinguaLibre SignIt

The second is to revamp #SignIt, the click-to-translate web extension we use for #SignLanguage. This extension is cute, awesome, and can help us all learn sign languages ! But it is in a deadly impasse for two reasons :
- it's Firefox-only, with only <5% market share nowadays
- #WebExtensions are phasing out the API version we use.
A whole revamp is needed !

🔥IMPORTANT: We are looking for a 2nd mentor, with solid git/GitHub (and ideally webextension) know-how.

What a week of #Wikimedia projects !
1. Whistled language map
2. Label Culture Libre Silver recipient for Toulouse University
3. #Wikimania 2024 sessions submissions x2
4. « Sign language for welcoming Libraries » deployment !

I just landed back in Toulouse after a 6-hour (nuclear-powered) train journey. I need sleep, so I'll tell you more about those projects tomorrow !

1. Whistled Occitan : leveraging #Lingualibre's speed at recording #languages vocabularies, we are collaborating with one of the last 5 practitioners of Whistled #Occitan. Complemented with #Wikidata and #SPARQL, we successfully prototyped an interactive multimedia map of local toponyms, so the public can explore their ancestral land and hear place names being whistled to them in this endangered language. The prototype works, but it also needs further care.
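A sketch of the #SPARQL side of such a map, in Python for consistency with the earlier snippets: a Wikidata query fetching toponyms with coordinates (wdt:P131 "located in" and wdt:P625 "coordinate location" are real Wikidata properties; the region Q-id is a placeholder), plus a small parser for the JSON result format.

```python
# Hedged sketch of the toponym query behind such a map. wdt:P131 (located in)
# and wdt:P625 (coordinate location) are real Wikidata properties; the region
# Q-id below is a placeholder to replace with the actual target area's item.
TOPONYMS_QUERY = """
SELECT ?place ?placeLabel ?coords WHERE {
  ?place wdt:P131* wd:QXXXXX ;     # placeholder: target region's Q-id
         wdt:P625 ?coords .        # coordinate location
  SERVICE wikibase:label { bd:serviceParam wikibase:language "oc,fr,en". }
}
"""

def parse_bindings(results: dict) -> list:
    """Flatten a SPARQL JSON result set into (label, coordinates) pairs."""
    return [
        (b["placeLabel"]["value"], b["coords"]["value"])
        for b in results["results"]["bindings"]
    ]
```

Each (label, coordinates) pair can then be placed on the map and linked to its whistled recording.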

2. Label Culture Libre: following a year of efforts from our Documentation service (SICD) team, assisting the shift toward open-content culture, training 300+ staff myself during my Wikiresidence, and leading a dozen open-content projects... We are happy to announce that the University of #Toulouse received the Label Culture Libre, Silver level, from @Wikimedia_Fr.

- UT SICD announcement: https://bibliotheques.univ-toulouse.fr/actualites/label-culture-libre-wikimedia-2024-le-sicd-labellise-niveau-argent
- Presentation : https://docs.google.com/presentation/d/13ECIt5qg2WOR3YOOypRzI5YS9R5Pwhvz/edit?usp=sharing&ouid=100658033193494547613&rtpof=true&sd=true


3. #Wikimania 2024, Katowice, Poland.

Looking forward to joining my 10th yearly global Wikimedian conference and sharing with wiki peers, I submitted 2 session proposals :

- Supporting #minority #languages and Wikimedia's global community // Round table // 40mins
The panel will discuss needs, network and capabilities.

- Lingualibre in review : achievements, changes, analysis and pilot projects. // Presentation // 25mins
After passing the 1M recordings milestone, let's do a sum-up. #SignIt

4. « Basic #SignLanguage lexicon for welcoming Libraries » (🇫🇷 Lexique #LSF pour des bibliothèques inclusives), on track for the past 4 months as well, is starting to bear tangible fruit with the prototyping of a webpage showcasing an essential set of useful French Sign Language words to welcome signing patrons in French libraries. The project's videos are contributed by the local sign language association IRIS.

1/3 Alert on #Lingualibre !

3 toots to learn how a wide-range Wikimedia IP/proxy block broke our Wikimedia tools, and how we fixed it.

It began 3 days ago, when half a dozen users converged to report a vague but critical bug : LinguaLibre uploads were failing entirely when sending files to #Commons !

2/3 It took me 2 workdays of investigation, diving into #Lingualibre + #Commons + Meta's #SPARQL queries, logs and APIs (recentchanges with tagfilter=lingualibre, userrights).

But there we are ! ✌️
1) the massive collapse of uploads was observed
2) the bug was replicated and the error message identified
3) an unblock request was made ! Thanks @ancilu, and the wide ban block was reconfigured.
4) we are back online !
https://public-paws.wmcloud.org/User:Yug/QueryLingualibre-monthly.ipynb
https://commons.wikimedia.org/wiki/File:2024.05_Lingualibre_IP_ban_bug.png
https://meta.wikimedia.org/w/index.php?title=Steward_requests/Global&oldid=26774369#Unregistered_users_only_block_for_the_range_2001:41D0:0:0:0:0:0:0/32
https://commons.wikimedia.org/wiki/Special:RecentChanges?hidebots=1&translations=filter&hidecategorization=1&hideWikibase=1&tagfilter=OAuth+CID%3A+1735&limit=500&days=30&urlversion=2


3/3 That bug had been ongoing since... May 2nd ! Thanks to all the volunteers who reported the bug and helped solve this CRITICAL #Lingualibre issue. 🌻 Feedback is critical to start our investigations and solve open source issues. In our case, we had to reconfigure the IP ban; User:EPIC did it. ✌️

@Hugo @Wikimedia_Fr Great work by the University of Toulouse and its teams !

We hope to soon see the University of Lorraine among these recipients ! @fresseng 😉

@Hugo bravo 🙂
@Wikimedia_Fr Thank you ! 💛
Congrats to the WMfr staff as well.
@Hugo #alt4you
1. Screenshot with on top "Languages (221)" and underneath "Don't see your language in LinguaLibre yet?" with two buttons, one "Log in and record few words" and the other "Search by language name"
2. Identical screenshot except the first line, which now says "Languages (227)"
@Hugo by the way, a question: why isn't it "*a* few words" ? Recording a few words makes more sense than recording few words, no ?

@harmonia_amanda, thank you for the `alts`. I took them. (I was in a rush this morning.)

I implemented your typo fix: https://github.com/hugolpz/LanguagesGallery/commits/main/
Thank you <3

Refreshing the page should show the fix.
