@[email protected] @[email protected] Personally, I'm also concerned that the neural networks inside some of these systems are effectively black boxes that will not be opened anytime soon, if ever. Having access to the source code does not give you insight into what the neural networks in the guts of the code are actually doing computationally, especially when the architecture hasn't been fully shared and the network has hundreds of billions of internal parameters trained on data no single person could ever comprehend.
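The point is easy to demonstrate even at toy scale. Here's a minimal, purely illustrative sketch (a tiny two-layer network in NumPy, nothing from any real system): you can read every line of the "source code", yet the behavior lives entirely in arrays of floats that carry no human-readable meaning.

```python
import numpy as np

# Full "source code" access: a complete two-layer ReLU network in a few
# lines. The learned behavior, though, lives entirely in the weight
# matrices below, and inspecting those numbers tells you nothing about
# what the network actually computes. Real systems have hundreds of
# billions of such numbers, not 80.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 16))   # input -> hidden weights
W2 = rng.standard_normal((16, 1))   # hidden -> output weights

def forward(x):
    return np.maximum(x @ W1, 0) @ W2   # ReLU MLP

n_params = W1.size + W2.size
print(n_params)  # 80 opaque floating-point parameters
```

Scaling this sketch up changes nothing about the opacity; it only makes the pile of uninterpretable numbers larger.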
This is a potential vector for all sorts of bad things:
- I fully expect that within a decade we'll regularly be hearing about cybersecurity incidents traceable back to opaque neural networks; we are already seeing the contours of what some of these might look like
- An engineered artifact has not been properly engineered until its behavior can be scoped and its failure modes understood, which is by definition not possible when there are black boxes inside the system
- I don't see how you can possibly write a scientific paper whose results can be replicated, or even understood, if the system under test has an opaque black box in it. "To replicate these results, insert <<black box full of 1 trillion floating-point numbers>> into your system and press 'OK'" doesn't feel like science to me.