Research question for machine learning folks. Are there ever situations in which it is desirable to divide learning across several parallel models, where each model can only locally learn some component of a computation but never the full computation?

So the differentiation algorithm would compose the components into some global knowledge, but would be the only one with access to that global knowledge. And the training data would be decomposed so that each model can access only a well-defined fragment of it?
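A minimal sketch of one reading of this (the setup, names, and numbers here are mine, purely illustrative): two models each see only their own column fragment of the data, the coordinator alone composes their outputs and sees the global residual, and each model receives back only the gradient of the global loss with respect to its own component.

```python
# Illustrative sketch only: two local models, each restricted to its
# own fragment of the data; a coordinator composes their outputs,
# alone sees the global loss, and returns per-component gradients.
import numpy as np

rng = np.random.default_rng(0)

# Full data, split column-wise into two well-defined fragments.
X = rng.normal(size=(100, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w

frags = [X[:, :2], X[:, 2:]]     # model i only ever sees frags[i]
ws = [np.zeros(2), np.zeros(2)]  # local parameters, never shared

lr = 0.1
for _ in range(500):
    # Each model computes only its local component.
    parts = [f @ w for f, w in zip(frags, ws)]
    # Only the coordinator composes the components and sees the residual.
    resid = sum(parts) - y
    # Each model gets the gradient of the global (squared) loss with
    # respect to its own component alone, and updates locally.
    for i in range(2):
        ws[i] -= lr * frags[i].T @ resid / len(y)

learned = np.concatenate(ws)
```

Here the composition is just a sum, so the per-model gradient is cheap; the interesting question in the thread is which richer classes of compositions still admit this split.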

We have a class of functions for which we can do this, I think, but we aren't sure who cares. Knowing who cares would point us at relevant related work and help us figure out what we might want to do next.

It's tingling the neurons in my brain associated with threshold schemes, cryptosystems, differential privacy, parallel learning, federated learning, and complexity theory, but I don't have anything more specific there than vibes.

@TaliaRinger

The problem is learning to bid for electricity as a group: only the group as a whole is big enough for day-ahead market access, and the members add their individual bids into the group bid.

If the members are geographically close enough, the group can share common weather and market models, and each member only needs to learn how it differs from the average.
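One way to sketch that (a toy setup of mine, not anyone's actual market model): a shared model fit on pooled data captures the group-average weather response, each member privately fits only its mean deviation from that shared model, and the group bid is the sum of the members' corrected forecasts.

```python
# Illustrative sketch: shared weather model + private per-member residuals.
import numpy as np

rng = np.random.default_rng(1)
n, members = 200, 3
weather = rng.normal(size=(n, 1))

# Each member's demand = common weather response + a private offset.
common_slope, offsets = 2.0, np.array([0.5, -1.0, 1.5])
demands = common_slope * weather + offsets  # shape (n, members)

# Shared model: fit the group-average demand (pooled, shareable).
avg = demands.mean(axis=1)
shared_slope, shared_icpt = np.polyfit(weather[:, 0], avg, 1)

# Each member privately fits only its deviation from the shared model.
resids = [
    (demands[:, i] - (shared_slope * weather[:, 0] + shared_icpt)).mean()
    for i in range(members)
]

# Group bid for a new weather forecast: members add their forecasts in.
w_new = 1.0
member_bids = [shared_slope * w_new + shared_icpt + r for r in resids]
group_bid = sum(member_bids)
```

Each member's private residual model can stay small precisely because the shared model already carries the global structure.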

This sharing might count as access to global knowledge, but every addition into the group bid is done with privacy.
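For the private additions, a classic trick is a secure sum: members blind their bids with pairwise random masks that cancel when everything is added, so the coordinator learns the group total but no individual bid. A toy version (illustrative numbers, honest-but-curious setting, no dropout handling):

```python
# Illustrative secure-sum sketch: pairwise masks cancel in the total.
import numpy as np

rng = np.random.default_rng(2)
bids = np.array([2.5, 1.0, 3.5])  # each member's private bid
m = len(bids)

# Pairwise masks: for i < j, member i adds masks[i, j] to its bid
# and member j subtracts the same value from its bid.
masks = np.triu(rng.normal(size=(m, m)), k=1)
masked = bids + masks.sum(axis=1) - masks.sum(axis=0)

# The coordinator sees only the masked bids; the masks cancel in the sum.
total = masked.sum()
```

Real protocols (e.g. secure aggregation in federated learning) derive the masks from shared secrets and handle members dropping out, but the cancellation idea is the same.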