If they aren't going to be open source, #OpenAI needs to at least be genuinely open (transparent) about what public data they're using. The CTO comes across as very dodgy in this interview, which detracts from how impressive the actual technology is.

https://www.youtube.com/watch?v=mAUpxN-EIgU


@martin #OpenAI and the other commercial providers are operating under the definition: "If we can access it, it's public, and anything we want to do with it is fair use." Their customers and investors clearly have no objection, so only regulation has any chance of getting this horse back into the barn. (I think the barn has burned down, fallen over, and sunk into a swamp.)

@emdalton Regardless of fair use interpretations, if #OpenAI feels they did nothing wrong, then they must logically and ethically be OK with tracking and sharing at least descriptions of all the “public data” they used. This cannot be treated as “secret sauce”.

Then we can all have much more grounded research, discussion and regulation about what is fair use and what isn't, what is necessary for #AI to function and what isn't, and what is better for society and what isn't - all of which we really need to do.

Perhaps some people want to opt out of use by AI, and that's their right. Perhaps there needs to be profit-sharing, perhaps not.

But none of this can happen properly, in my opinion, until we have solid data. Right now OpenAI is blocking that, and the only plausible reason seems to be avoiding those consequences.

@martin I completely agree. It's clear that #OpenAI knows there will be pushback to their handling of data, and they don't want to deal with that. Again, their investors and customers don't seem to care. In an international environment, how can openness be encouraged or enforced?