Just fantastic technology all around. Absolutely no worry where this is all going to go.

Is it just me? Am I using this wrong or am I asking questions that are too hard?

Here’s an example of a hallucination that happened while the model was explaining away another hallucination I had called it out on. I rarely have experiences other than these.

Useful things I have learned from people so far:
- try to use newer models and “thinking mode” (I just did the latter on Gemini, although it feels so very slow)
- you can try prompt engineering to demand more truthfulness via cross-checking (perhaps this is what thinking mode does?)
- using it for recall is better than using it to learn new things
- obscure historical research in general is not going to feel great
- being as specific as you can helps
- be aware of biases perpetuated by AI and counter them
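The cross-checking tip above can be sketched in code: ask the same question several times and only trust an answer the model repeats. `ask_model` here is a hypothetical stub standing in for whatever LLM API you actually use (it just cycles through canned answers so the sketch runs); the pattern is the point, not the stub.

```python
from collections import Counter
from itertools import cycle

# Hypothetical stub in place of a real LLM call; cycles through canned
# answers so the example is runnable without any API.
_canned = cycle(["Paris", "Paris", "Lyon"])

def ask_model(question: str) -> str:
    return next(_canned)

def cross_check(question: str, rounds: int = 3) -> str:
    # Ask the same question several times.
    answers = [ask_model(question) for _ in range(rounds)]
    answer, count = Counter(answers).most_common(1)[0]
    # Disagreement across rounds is treated as a hallucination signal.
    if count <= rounds // 2:
        return "low confidence: answers disagreed"
    return answer
```

With the canned answers above, two of three rounds agree, so `cross_check` returns "Paris"; a real deployment would swap the stub for actual API calls and tune `rounds`.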

@mwichary Research modes or thinking modes are better in many cases, but it's funny because they're just an LLM running on top of an LLM. It's the LLM refining and adding to the prompt before arriving at the answer. Like everything in this field, it doesn't resolve the core problems inherent to LLMs, just tries to brute force it into something resembling accuracy and truth.
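The "LLM running on top of LLM" pattern described above is roughly a two-pass pipeline: one call rewrites the prompt, a second call answers the rewritten prompt. This is a loose sketch of that idea; `call_llm` is a hypothetical stub, not any real API.

```python
# Sketch of a refine-then-answer pipeline: the first pass rewrites the
# user's question, the second pass answers the rewritten version.

def call_llm(prompt: str) -> str:
    # Hypothetical stub: echoes the prompt so the pipeline runs offline.
    return f"[model output for: {prompt}]"

def refine_then_answer(question: str) -> str:
    refined = call_llm(
        "Rewrite this question to be maximally specific and unambiguous: "
        + question
    )
    # The answering pass sees only the refined prompt, not the original.
    return call_llm(refined)
```

Note that both passes are still ordinary LLM calls, which is the commenter's point: refinement reshapes the prompt but doesn't change what the underlying model can or can't know.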

@fcloth @mwichary

"brute force it into something resembling accuracy and truth" is exactly how all "AI" works.

@McCrankyface @fcloth Yeah, this part makes all the AGI arguments so fallacious.