Research question for machine learning folks. Are there ever situations in which it is desirable to divide learning up in parallel to several models, where each model can only locally learn some component of a computation but never the full computation?

So the differentiation algorithm would compose the components into some global knowledge, but would be the only one with access to that global knowledge. And the training data would be decomposed so that each model can access only well-defined fragments of it?
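To make the setup concrete, here is a minimal toy sketch of one reading of this (all names invented, and the linear setup is purely illustrative): a coordinator trains the composition f(x) = B(A(x)), but each worker only ever holds its own component and receives only the gradient for that component; neither worker sees the labels or the full computation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4)) * 0.1    # component held privately by worker A
B = rng.normal(size=(2, 3)) * 0.1    # component held privately by worker B
true_W = rng.normal(size=(2, 4))     # target map the composition should learn

X = rng.normal(size=(4, 64))         # inputs, visible to the coordinator
Y = true_W @ X                       # labels, visible to the coordinator

lr = 0.1
for step in range(2000):
    H = A @ X                        # coordinator runs the composed forward pass
    E = (B @ H - Y) / X.shape[1]     # global error, never shown to workers
    grad_B = E @ H.T                 # sent only to worker B
    grad_A = B.T @ E @ X.T           # sent only to worker A
    B -= lr * grad_B                 # each worker updates only its own component
    A -= lr * grad_A

print(np.linalg.norm(B @ A - true_W))  # the composition approaches the target
```

Each worker learns a factor that is useless on its own (A and B are not unique; only the product B @ A is pinned down), which seems close in spirit to "locally learn a component but never the full computation."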

We have a class of functions for which we can do this, I think, but we aren't sure who cares. Knowing who cares would point us to relevant related work and help us figure out what we might want to do next.

It's tingling the neurons in my brain associated with all of threshold schemes, cryptosystems, differential privacy, parallel learning, federated learning, and complexity theory, but I don't have anything more specific there than vibes.

@TaliaRinger Sounds like federated learning. I've heard of use cases for ML at the edge, and maybe institutions with proprietary data they don't want to share but are willing to mix into a shared model.
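For reference, the core loop of federated averaging (the standard federated learning baseline) is tiny. A hedged sketch, assuming a shared linear model and synthetic per-client data (all names invented): each client trains locally on data the server never sees, and the server only averages the returned weights.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])

# Each client holds a private fragment of the data; the server never sees it.
clients = []
for _ in range(4):
    X = rng.normal(size=(32, 3))
    y = X @ true_w + 0.01 * rng.normal(size=32)
    clients.append((X, y))

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A client's local training: plain gradient steps on its own data."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(3)
for rnd in range(20):
    # Server broadcasts the model; clients train locally and return weights.
    local_ws = [local_sgd(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)   # server averages the updates

print(np.round(w_global, 2))
```

Note the difference from the original question, though: in federated averaging every client trains a copy of the *full* model on its data fragment, rather than each party learning only one component of the computation.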