Instructing LLMs is like writing laws. Not in the sense of penalties and such, but in the sense that the instructions must be correctly understood by the agents who follow them, without an explosion of exceptional cases and fluff.
The domain expert who writes the instructions should apply the same care a parliamentary committee does when drafting and amending laws.
What happens if the agent follows this text as it is read? What are the repercussions in the wider system? Is each "law" as simple as possible while unambiguous and complete? Is there enough information available for the agent to actually follow the instructions well?
When writing LLM instructions, you must have empathy for the machine: see the task from its perspective, and be able to read the instructions as they will be read, not as they were meant. Many people are simply incapable of this. Capacity for empathy isn't universal, especially when the "other" is a non-living being, a nominal subordinate.
People have a tendency toward optimism: a belief that whatever trash they write, the reader will magically know what they were thinking instead of what they wrote. Writing skills are even less universal than the capacity for empathy.
The writer of the law must also know the domain. Nothing is worse than people who know nothing about, say, forest ecology writing laws on nature protection. You get things like a prohibition on picking mushrooms. Yes, I'm looking at you, Switzerland.
Instructing AIs is not a science. It's also more than a craft, or an art. It requires a very specific person, and one prompt engineer cannot simply be swapped for another. The AIs grow as extensions of that person, and reflect who they are, not unlike children reflect their parents, or countries their governments.
If you need someone who has empathy for machines, I am an #AI generalist with over 25 years of experience, and #OpenToWork. I'm hoping to work remotely from Spain, for example through an #EOR service, for a company *not* in Spain.