Done mapping all 25 #barangays of Minalabac, Camarines Sur, #Philippines 🇵🇭 in #OpenStreetMap, creating/updating their #Wikidata items, and linking the two with each other.

Wanna play around? Here is the Overpass Turbo query: https://overpass-turbo.eu/s/2mza

And here is the Wikidata Query Service (#WDQS) query: https://w.wiki/K6ne

Previously: https://en.osm.town/@seav/114750346738105697

#LinkedOpenData #OpenData #Mapstodon #gischat

@wikidata the query service hasn't been able to generate short URLs for a while now. Is this a known issue? #Wikidata #WDQS

That's a list I've been looking for for some time: https://www.mediawiki.org/wiki/Wikibase/Indexing/RDF_Dump_Format#WDQS_data_differences

It shows the differences between the RDF source of a Wikidata item and the way that item is actually stored in the RDF used by the #Wikidata Query Service #WDQS (and apparently by #qlever as well).

I'd stumbled over the fact that "?item a wikibase:Item" doesn't return any results. The link above explains why.
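In concrete terms (a sketch; the failing query is the one mentioned above, while the `wikibase:sitelinks` workaround is my own assumption, based on the counter predicates WDQS does store on item nodes):

```sparql
# Returns no results on WDQS: the explicit "a wikibase:Item" typing
# triples from the RDF dumps are dropped before indexing.
SELECT ?item WHERE { ?item a wikibase:Item } LIMIT 10

# Assumed workaround: match on a counter predicate such as
# wikibase:sitelinks, which WDQS stores directly on item nodes.
SELECT ?item WHERE { ?item wikibase:sitelinks [] } LIMIT 10
```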

Maybe late to the party, but these days I learned that the #Wikidata query service (#WDQS) now technically enforces descriptive user agents, as mandated by the Wikimedia User-Agent Policy: https://foundation.wikimedia.org/wiki/Policy:Wikimedia_Foundation_User-Agent_Policy

In various places I've read that the Wikimedia User-Agent Policy requires including an e-mail address in the user agent. But according to the policy, a project URL or similar would also be fine, right?

@wikidata
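For what it's worth, this is how I set such a header (a minimal sketch; the tool name, version, and contact details are placeholders, not anything the policy prescribes verbatim):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder identity: tool name/version plus a way to reach you.
# The policy asks for a descriptive user agent; a project URL or an
# e-mail address (or both) can serve as the contact part.
USER_AGENT = "MyWikidataTool/1.0 (https://example.org/my-project; mail@example.org)"


def build_wdqs_request(sparql: str) -> Request:
    """Prepare a WDQS GET request that carries the descriptive User-Agent."""
    url = "https://query.wikidata.org/sparql?" + urlencode(
        {"query": sparql, "format": "json"}
    )
    return Request(url, headers={"User-Agent": USER_AGENT})


req = build_wdqs_request("SELECT * WHERE { ?s ?p ?o } LIMIT 1")
```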

📢 #ABECTO version 3.1.5 has been released:
🔗 https://github.com/fusion-jena/abecto/releases/tag/v3.1.5

ABECTO is an #OpenSource #CLI tool that compares #RDF graphs to spot errors 🪲 and assess completeness 📊, intended for use in #CICD pipelines.

Versions 3.1.3, 3.1.4, and 3.1.5 add rudimentary HTTP rate-limit handling, some bug fixes, and (most importantly) a descriptive HTTP user agent to comply with the #Wikidata query service (#WDQS) user-agent policy 🛂.

OK, it now seems as if #qlever has mapped the data namespace (data:Q42 a schema:Dataset) onto the wd namespace (wd:Q42 a wikibase:Item), which makes it more compatible with #wikidata. I still find it a bit confusing, since it differs from the RDF source. If you want to get, say, the number of sitelinks of an item, you now have to write "wd:Q42 wikibase:sitelinks ?n" in your #SPARQL, both in QLever and in #WDQS. Previously, IIRC, you had to write "?dataset schema:about wd:Q42 . ?dataset wikibase:sitelinks ?n".
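Spelled out as queries (both forms taken from what's described above; wd:Q42 is just the usual example item):

```sparql
# Today, in both WDQS and QLever: the sitelink count hangs
# directly off the item.
SELECT ?n WHERE { wd:Q42 wikibase:sitelinks ?n }

# Previously in QLever (IIRC): reach it via the schema:Dataset node.
SELECT ?n WHERE {
  ?dataset schema:about wd:Q42 .
  ?dataset wikibase:sitelinks ?n
}
```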

Tonight (20:00–21:00) in the Free Knowledge Habitat: Wikidata Live Querying! Together we'll come up with interesting SPARQL queries for Wikidata ^^ (Do come along, because on my own I won't have that many ideas!) https://pretalx.wikimedia.de/39c3-2025/talk/VTELBK/

#39c3 #Wikimedia #Wikidata #WDQS #SPARQL

Wikidata Live Querying - Free Knowledge Habitat, 39c3

Let's write fun Wikidata queries together!

If you want to know more about Wikidata, Andrew McAllister and I are doing an Intro to SPARQL and Wikidata Query Service at 8:30pm today, also in the Free Knowledge Habitat: https://pretalx.wikimedia.de/39c3-2025/talk/RNWUH8/

#39c3 #Wikimedia #Wikidata #WDQS

Intro to SPARQL and Wikidata Query Service (EN) - Free Knowledge Habitat, 39c3

The Wikidata Query Service (WDQS) is one of the main services for getting data out of Wikidata. In this workshop we'll explore the SPARQL query language, which allows us to access that data, as well as the WDQS UI, a helpful interface for writing and running queries. The goal is that by the end of the session you'll be able to write your own queries and explore Wikidata's data.

In yesterday's #SPARQL workshop, the question came up why you would still use the #WDQS service at all rather than going straight to the more performant #wikidata endpoint from #qlever. For me, the reasons at the moment still are:

- autocomplete works better in #wdqs, in the sense that you can trigger it more deliberately
- more options for visualizing results
- readily usable code snippets
- fresher data

I often write my queries in WDQS and then run them in QLever.

I'll try and ask here as well: Does anybody know if there's an overview of where #WDQS differs from standard #SPARQL? I mean, e.g., the mapping of the data namespace onto wd at the RDF level, the different handling of dates (now() - ?date is possible in WDQS), etc. With alternatives such as QLever becoming more popular, it'd be useful to know what works only in WDQS and what doesn't.

#wikidata
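As an example of the date handling mentioned above (a sketch; I'm assuming the Blazegraph-specific subtraction yields a day count, which standard SPARQL 1.1 doesn't define for dates):

```sparql
# WDQS only (Blazegraph extension): "-" on xsd:dateTime values.
# wdt:P569 is date of birth.
SELECT ?person ?days WHERE {
  ?person wdt:P569 ?dob .
  BIND(NOW() - ?dob AS ?days)
} LIMIT 10
```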