Getting good at using generative AI means being effective at working with a tool that makes things up 10% to 50% of the time.

Many smart people struggle with this because they either:
1. Get frustrated with a non-deterministic tool whose output they can’t trust, or
2. Decide to blindly trust it because “Claude/ChatGPT said so.”

Both are common failure patterns.

Being good at using LLMs includes:

1. Being able to provide context and craft prompts that limit the risk of hallucinations, AND
2. Having processes and frameworks to vet the quality of the output from the LLM instead of blindly trusting it.
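Point 2 can be made concrete with even a tiny vetting harness: run the model's answer through explicit, deterministic checks instead of eyeballing it. A minimal sketch in Python; the check functions and the sample answer are illustrative assumptions, not a prescribed framework:

```python
# Minimal sketch of point 2: vet LLM output with explicit checks
# instead of trusting it blindly. All names here are illustrative.

def vet_output(output: str, checks: list) -> dict:
    """Run each check against the LLM output and collect pass/fail results."""
    return {check.__name__: check(output) for check in checks}

# Example checks: each encodes one property we can verify deterministically.
def non_empty(output: str) -> bool:
    return len(output.strip()) > 0

def cites_a_source(output: str) -> bool:
    # Crude heuristic: require at least one URL in the answer.
    return "http://" in output or "https://" in output

answer = "According to https://example.com/docs, the flag was added in v2.1."
report = vet_output(answer, [non_empty, cites_a_source])
print(report)  # {'non_empty': True, 'cites_a_source': True}
```

Real pipelines replace these toy heuristics with domain checks (does the cited page exist, does the code compile, do the numbers add up), but the shape is the same: a list of verifiable properties applied to every output.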

@carnage4life I find #2 is skipped a lot. Modern software engineering is all about measuring and eval'ing the quality of your outputs. We should be doing the same with our agents.
@carnage4life 1 is the ability to consciously shape your language to match that of the community whose info you seek, and 2 is the ability to use logic to build determinism. So: the mythical analytical person who intimately understands human language communities :)

@carnage4life genuinely curious:

can 2 be delegated, even partly, to LLMs, or does it necessarily require human involvement?

@carnage4life I think you missed 3: having SMEs who can identify and fix hallucinations and errors, because for them AI is an accelerator, not a replacement.