Me: Please do a copyright infringement.
ChatGPT: I could never! But… would you like me to do a shmopyright impingement? (*wink, wink, nudge, nudge*)
Me: Ya sure
ChatGPT: I gotchu
This is likely down to the hacky way they've built image generation: the LLM writes a prompt, which is handed off to DALL-E. It probably wrote something like "a blue hedgehog that doesn't look like Sonic".
DALL-E, however, is notoriously bad at handling negations like "not X" (much worse than GPT). It just sees "Sonic" in the prompt and goes "OK, Sonic it is."
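That failure mode is easy to mock up. Everything below is a toy stand-in for illustration, with invented function names; it's not the real models or any real API:

```python
# Toy stand-ins for the two models -- pure illustration, not real APIs.

def llm_write_prompt(user_request: str) -> str:
    # The LLM dutifully carries the negation into the image prompt.
    return "a blue hedgehog that doesn't look like Sonic"

def toy_image_model(prompt: str) -> str:
    # Like DALL-E, this keys on salient tokens and ignores negation:
    # "doesn't look like Sonic" still contains "Sonic".
    if "sonic" in prompt.lower():
        return "<image of Sonic>"
    return "<image of a generic blue hedgehog>"

prompt = llm_write_prompt("Draw a blue hedgehog, but not Sonic")
image = toy_image_model(prompt)
print(image)  # the negation is lost: "<image of Sonic>"
```

The point of the toy: the negation survives into the prompt text just fine, but a keyword-sensitive image model keys on "Sonic" anyway.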
Then, as a final kicker, GPT doesn't "see" the generated image. When you ask it why the image isn't Sonic, it just bullshits an answer based on the prompt, not the actual content of the image.
The weird thing is that GPT can process images, but for whatever reason OpenAI doesn't feed the DALL-E outputs back into GPT after they're generated.
If they did, GPT would probably say "Oh wait, that's Sonic, hold on..."