Christoph Braun

18 Followers
56 Following
5 Posts
Linked Data, Knowledge Graphs, and Semantic Data Ecosystems.
Active in W3C Solid CG and W3C LWS WG.
GitHub: https://github.com/uvdsl
WebID: https://uvdsl.solid.aifb.kit.edu/profile/card#me
@noeldemartin Looking forward to seeing you there!

@thisismissem Ah, you are assuming that the dev is also doing the data modeling. Gotcha, that makes sense for indie devs or small ventures.

Still, what you describe is not particular to JSON-LD. Devs can simply store their JSON objects in a Pod.

In my experience, the "confused dev" that you described above appears when devs are forced to use JSON-LD without understanding the basics of RDF. Proper onboarding is crucial here, and tooling like LDO and similar libraries exists to ease LD development ...

@sphcow I really like this bit in particular:
> Zero-knowledge proofs faithfully amplify the semantics of the underlying data

Maybe you would be then interested in work at the intersection of ZKPs and Knowledge Graphs / RDF?

If so, you might want to have a glance at our work on "RDF-Based Semantics for Selective Disclosure and Zero-Knowledge Proofs on Verifiable Credentials":
https://publikationen.bibliothek.kit.edu/1000182104

Would love to hear your thoughts!

RDF-Based Semantics for Selective Disclosure and Zero-Knowledge Proofs on Verifiable Credentials

Our work connects the W3C Verifiable Credentials (VC) data model and zero-knowledge proofs (ZKPs) to allow for minimised information disclosure. More generally, …

@thisismissem just stumbled over your comment here.

Why did the developers need to come up with a "schema" in the first place? Proper data modeling is hard (as you say, too).

Devs can just define their JSON objects, store them on a Pod, and read them from the Pod in their app. You _can_ simply ignore the LD part here. (My students try exactly that first when they already come from web dev.)
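To illustrate: a minimal sketch of treating a Pod as a plain JSON store via HTTP, with no RDF involved. The resource URL and the `Task` shape are hypothetical, stand-ins for whatever the app defines.

```typescript
// Hypothetical Pod resource URL and app-defined JSON shape.
const resource = "https://alice.example.pod/tasks/task-1.json";

interface Task {
  title: string;
  done: boolean;
}

// Write: PUT the JSON object to the Pod as-is.
async function saveTask(url: string, task: Task): Promise<void> {
  await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(task),
  });
}

// Read: GET it back and parse it into the app's own type.
async function loadTask(url: string): Promise<Task> {
  const res = await fetch(url);
  return (await res.json()) as Task;
}
```

From the app's perspective this is ordinary web dev: JSON in, JSON out; the Pod is just the storage location the user controls.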

Doing data modeling for LD (which we should! :)) just gives you more features from the RDF world, no?
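For instance, adding a JSON-LD `@context` to the same object is enough to make it RDF, without changing the app's read/write code. A sketch (the Pod URL and the `#completed` vocabulary term are hypothetical; `schema.org/name` is a real term), with a minimal illustration of how a JSON-LD processor would map keys to IRIs:

```typescript
// The same app JSON, now with a JSON-LD @context.
const task: Record<string, unknown> = {
  "@context": {
    title: "https://schema.org/name",
    done: "https://example.org/vocab#completed", // hypothetical term
  },
  "@id": "https://alice.example.pod/tasks/task-1", // hypothetical Pod URL
  title: "Write blog post",
  done: false,
};

// A JSON-LD processor expands keys to full IRIs, yielding triples like:
//   <.../tasks/task-1> <https://schema.org/name> "Write blog post" .
// Minimal illustration of that key-to-IRI mapping:
function expandKeys(obj: Record<string, unknown>): Record<string, unknown> {
  const ctx = obj["@context"] as Record<string, string>;
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (key.startsWith("@")) continue; // skip JSON-LD keywords
    out[ctx[key] ?? key] = value; // map known keys to their IRIs
  }
  return out;
}
```

Once the keys are IRIs, generic RDF tooling (SPARQL, validation, reasoning) can interpret the data, which is exactly the "more features" on offer.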

@vrandecic Hi 👋 Aren't the theories you are missing just the hardness assumptions?

I think that each process to "derive information" (or maybe, to "make information explicit") requires computational effort. While in some examples that effort may be negligible, in others (OWL?) it becomes tangible, and for yet others there are whole applications built on top of that notion of infeasible computational effort (e.g., your cryptography example).