I keep telling people #LLM #chatbots read prose, not lists of facts. But maybe examples help.
In the first example, the scene contains far more than the objects mentioned: the situation, the umbrella, the motion all carry implications. It's not just "a person now holds an object named 'umbrella'". Yet people keep prompting these systems as if word choice and subtext didn't matter.
Sure, the hottest new programming language is English, but are you sure you know what that means? It's not the neutral engineer-English you use for documentation.
The second example is a dog barking at night. Is that just "an animal makes a noise, plus a timestamp"? No! It brings a whole world of associations and implications with it.
These systems will read your code and your JSON as prose too. It matters how the properties are named and what their contents are. Even the names of people you happen to have in there will conjure worlds of association that guide the chatbot in its decisions.
Did you think the names of your classes aren't executable code and hence don't matter? Well, now they are!
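Here's a rough sketch of what I mean. The field names and values are made up for illustration; the point is that the two payloads have the same structure, but a model reading them as prose will take away very different things:

```python
import json

# Two structurally identical payloads. A rule-based system treats them
# the same; a model that reads field names and values as words does not.
neutral = {
    "record_type": "event",
    "actor": "user_17",
    "action": "item_pickup",
    "item": "obj_204",
}

loaded = {
    "record_type": "incident",
    "actor": "Dr. Faust",
    "action": "seized",
    "item": "umbrella",
}

prompt_template = "Summarize what happened:\n{payload}"

for payload in (neutral, loaded):
    # Same schema, same "facts" in the formal sense, very different prose.
    print(prompt_template.format(payload=json.dumps(payload, indent=2)))
```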