@eff Regarding the Omnibus Bill, Japanese experts appear to be lobbying your organization and other institutions. I believe the following points require attention:
1) In Japan it has become common to treat the regulation of decisions made about individuals as the core of personal data protection, but this is a mistake. First, models trained on personal data, or insights gained from statistical analysis of personal data, can affect individuals even when they are used to make decisions about a group rather than about specific individuals. Second, such insights and models can be used by anyone, not only by those who performed the analysis or training. Moreover, once personal data is accumulated in a rich form it can be put to many kinds of analysis, so the accumulation itself can be a threat.
2) Therefore, the core of personal data protection regulation is to curb both the diversion of personal data beyond the purpose for which it was originally received and the collection and distribution of personal data without any limitation of purpose. Japanese law (as in the Omnibus Bill) defines personal data relatively: the same data may be personal data for one entity but not for another entity that cannot identify the individual. As a result, this curb does not work well in Japan, and it has caused confusion and complexity in real business situations. I believe the EU should not repeat Japan's mistakes.
3) Allowing the training of general-purpose AI on personal data as a "legitimate interest" is tantamount to abandoning the curb described in paragraph 2) above. Even if it were proven that current LLMs cannot recognize individuals in an integrated manner across multiple training data items or RAG entries, that would only mean such recognition has not been achieved with the current state of the technology; I believe this training should be permitted only if it is explicitly stipulated as an exception.