https://drops.dagstuhl.de/entities/collection/dblp
https://doi.org/10.4230/dblp.rdf.ntriples
https://doi.org/10.4230/dblp.xml
"ActivityStreams: Where do you want to go today?" might be a slogan we borrowed from Microsoft.
The question is whether #ActivityStreams, besides all the things it is already being used for, should also be used to map to file systems.
The #LinkedData nature of #ActivityPub is generally shunned in favor of plain #JSON. That in itself is fine, as long as:
a) information still represents valid #JSONLD.
b) information models still follow data modeling best practices.
c) information models are designed with #interoperability in mind.
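To make points a)–c) concrete, here is a minimal sketch (all names and URLs beyond the AS2 namespace are hypothetical) of an ActivityStreams 2.0 object that stays valid JSON-LD while carrying an app-specific term, declared under its own namespaced `@context` entry instead of being overloaded onto an existing AS property:

```python
import json

# A minimal sketch: an ActivityStreams 2.0 object that remains valid JSON-LD
# while adding an app-specific extension term. The extension lives in its own
# @context entry ("ex:" namespace is hypothetical), so it neither collides
# with nor overloads the core AS vocabulary.
note = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {
            "ex": "https://example.org/ns#",        # hypothetical namespace
            "fileSystemPath": "ex:fileSystemPath",  # hypothetical term
        },
    ],
    "type": "Note",
    "content": "A note that also maps to a file.",
    "fileSystemPath": "/home/alice/notes/hello.md",
}

# Plain-JSON consumers just read the keys; JSON-LD-aware consumers can
# expand "fileSystemPath" to its full IRI via the context.
print(json.dumps(note, indent=2))
```

Interop-minded consumers that don't know the extension can safely ignore the term, which is exactly the behavior a shared extension process should guarantee.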
Not saying your approach is good or bad, just observing that everyone mapping and overloading their own app-specific semantics onto the poor AS vocab looks to me like a worst practice. We get away with it because we made post-facto interop the poor man's accepted practice, lacking a more rigorous extension process and guidance.
There are likely existing standardized ontologies.
Live SPARQL Query Page Links:
[1] https://tinyurl.com/Query-Definition
[2] https://tinyurl.com/Query-Solution-Page
#Wikidata #SPARQL #VirtuosoRDBMS #LODCloud #LinkedData #KnowledgeGraphs #SemanticWeb
TIL that ORCID identifiers are available as #LinkedData :
https://gist.github.com/edsu/9be9658f9c6d300c569bae9b1016e108
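A quick sketch of what the linked gist describes: ORCID iDs resolve via HTTP content negotiation, so asking `https://orcid.org/<id>` for an RDF media type should return Linked Data (the exact `Accept` value here is an assumption; the iD below is ORCID's documented sample iD):

```python
from urllib.request import Request, urlopen


def orcid_rdf_request(orcid_id: str, accept: str = "text/turtle") -> Request:
    """Build a content-negotiation request for an ORCID iD.

    Assumption (per the linked gist): https://orcid.org/<id> serves RDF
    serializations such as Turtle when asked via the Accept header.
    """
    return Request(f"https://orcid.org/{orcid_id}", headers={"Accept": accept})


# ORCID's published sample iD (Josiah Carberry test record)
req = orcid_rdf_request("0000-0002-1825-0097")
print(req.full_url, req.get_header("Accept"))

# Uncomment to actually fetch the Turtle (network access required):
# with urlopen(req) as resp:
#     print(resp.read().decode("utf-8")[:500])
```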
RE: https://mastodon.social/@SteveRudolfi/116279083767770070
"If there’s one insight we all need to focus on most, it’s this: your job is no longer to build a destination. It’s to build a parts library. And one that’s well documented so that when an AI agent re-assembles those parts for the human on the other side, the parts are put together in a way you wish to be represented.
The web has always evolved in ways that reduced brand control over the user journey. Ads replaced organic rankings. Featured snippets replaced clicks. AI Overviews replaced visits. This patent is the logical next step in that progression. The question isn’t how to stop this from happening, it’s how to make sure your parts are the ones AI wants to work with."
In short, this sounds like part of what the Semantic Web & Linked Data vision was about: autonomous software agents crawling, querying, extracting & re-assembling information on demand for a user. The issue here is the plan to take Google's hyper-centralized, ubiquitous rent-seeking to whole new levels: replacing entire websites with what are essentially machine-readable repos of data/asset descriptors, then generating filtered/personalized/optimized websites on the fly, obviously for a more or less mandatory fee (not partaking likely ends in invisibility)...
It's component-driven and reactive design taken to its ice-cold logical conclusion... Cue a whole new set of "industry standards" (agreements between the main AI companies), frameworks, breathless consultants, and an economy of "agentic arbitration", "agentic SEO", heck, even "agentic premium themes" arising around this... An army of human and machine middlemen, all just to mediate for the biggest middleman of all! It's the on-demand, ephemeral, realtime web we've always dreamed of!
Zero permanence.
Zero record/archive.
Zero accountability.
Zero shared reality.
Zero leverage.
(Ps. After 15+ years, maybe even https://schema.org will finally have its moment of glory as part of all this...)
Finally got around to rebuilding my personal website: https://naturzukunft.de
It explains what I'm working on: open, federated infrastructure so small initiatives — repair cafés, community gardens, food co-ops — become visible to AI systems. Not another platform. Open rails.
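One way such visibility can work without a platform is plain schema.org JSON-LD embedded in each initiative's own page. A minimal sketch (all names, URLs, and values here are hypothetical, not taken from the site above):

```python
import json

# A minimal sketch (hypothetical data): describing a repair café as
# schema.org JSON-LD so crawlers and AI agents can discover it directly
# from the initiative's own site, with no central platform in between.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Repair Café Example",                  # hypothetical name
    "url": "https://example.org/repair-cafe",       # hypothetical URL
    "location": {
        "@type": "Place",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Exampletown",
            "addressCountry": "DE",
        },
    },
    "knowsAbout": ["repair", "reuse", "community"],
}

# Embedded in a page as <script type="application/ld+json">, this makes the
# initiative machine-readable: open rails rather than another platform.
print(json.dumps(org, indent=2))
```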