OpenAI's terms are now pretty clear (or at least, with the obvious strategic ambiguity) on IP which I assume would apply to both ChatGPT and DALL-E: "OpenAI hereby assigns to you all its right, title and interest in and to Output."

But this is new as of a TOS update in December, and the previous version does NOT say this - but I have a vague recollection that there were explicit copyright terms for DALL-E. Does anyone remember what/where this was?

And what I mean by strategic ambiguity here is that OpenAI doesn't claim that they HAVE any rights, just that if they did have them, they're assigning them to you.

@cfiesler interestingly, this is subject to a user's compliance with the terms.¹ And the terms prohibit users from passing off machine-generated content as human-generated.² So if you try to claim the output as made by you alone, you lose any ownership, and presumably it stays with OpenAI (if it exists).

See https://openai.com/terms

__
¹ Section 3(c) "...subject to your compliance..."
² Section 2(c)(v) "You may not ... represent that output from the Services was human-generated when it is not"

@Colarusso oh yeah I was wondering about that too!! though the issue still remains that maybe no one has rights heh
@cfiesler yeah, it's a nice hedge. It also gives the terms a lot of power should the rights exist. It gives them some teeth that they normally wouldn't have. Normally if someone violated a site's ToS, both parties would just cut ties. Here OpenAI could claim ownership over the output. And what happens when someone makes $$ off their (OpenAI's) property? It's not entirely clear what would happen, but I suspect in most cases it could be really powerful.

@cfiesler If you trust OpenAI, this is a good thing from a safety angle because it means they could actually enforce responsible use constraints on their tool. That's something that has been hard to do historically. E.g., I'm not sure things like the Responsible AI License really have teeth.¹

My lab tried our hand at this but ended up using Trust Law. See https://spot.suffolklitlab.org/trusts

Of course, whether or not it's good depends on if you trust OpenAI. ;)

__
¹ https://www.licenses.ai/ai-licenses


@cfiesler is this like when youtube videos have "no copyright intended no copyright intended" in the description of the video?