Just fantastic technology all around. Absolutely no worry where this is all going to go.

Is it just me? Am I using this wrong or am I asking questions that are too hard?

Here’s an example of a hallucination that happened while explaining away another hallucination I called it out on. I rarely have experiences other than these.

@mwichary this is 100% typical of my attempts to use chatgpt. as soon as the need for concrete factual information comes up in the interaction (especially stuff that is actually difficult to research!) the model generates plausible-looking answers that turn out to be false, then generates apology-oid text that doubles down on the plausible-looking answers and contains still more falsehoods
@aparrish Yeah… I can see it potentially being useful as an accelerant to get to some information (although the ethical and ecological concerns remain as well), but it's nowhere near as groundbreaking as people believe…
@mwichary even aside from the ethical and ecological concerns, i worry that the generated text is often, like, worse than wrong, in that it might predispose you to pursue certain lines of research that favor (broadly conservative) pre-existing ideas about what you're researching… as in this case, where the generated text kinda implies that among the most important ways women contribute to UX is by being mothers and wives of famous men
@aparrish That’s a great point!
@aparrish @mwichary anchoring bias! I think about this whenever people say "I'm just using it to generate a starting point, it's fine" (it is not fine)