It's striking how many random questions LLMs can answer. I was essentially raised on tech forum posts — questions I'd never have thought to ask, or how to refine them toward an answer through inquiry.
I hope my own model of learning gets subsumed by someone else leveraging the infinite patience of an LLM. But I worry there's a certain kind of competency you just don't get that way. We'll see.
Even now I'm holding back on posting fairly general, trivial questions because everybody knows I could just ask an LLM. I should override that instinct and post them anyway.
@SwiftOnSecurity LLM prompt needs to include "worst wrong answers only"
@SwiftOnSecurity even before the LLMs there were the LMGTFY/RTFM people. There will always be. Doesn't mean we should stop asking questions because of them
@SwiftOnSecurity I feel like the safest approach is to simply never use them, not even once. I've never even visited one of their websites, because I know how it works, and I know that they're exploiting our tendency toward anthropomorphisation to create an ongoing dependency that can be monetised.
@SwiftOnSecurity when it comes to niche trivia, LLMs are really prone to hallucination. For example, try asking Google’s AI for warm-blooded reptiles. It can’t reason that “reptile” is NOT synonymous with “lizard”.

@SwiftOnSecurity

Ask the questions. We've got lots of people with truly reasoning models bouncing around in our skulls that may be able to answer.

@SwiftOnSecurity Depends. Trivial questions and their answers can be reassurance of knowledge. Someone's trivial questions are total news to others. Might be really worth it.
@SwiftOnSecurity fuck LLMs. Shout your idea into cyberspace
@SwiftOnSecurity LLMs don’t know facts. Ask LLMs for probabilistic associations, but not facts.
@SwiftOnSecurity new social game idea:
Pub Quiz
But you're not allowed to use your brain at all
*Only* & *exclusively* ONE LLM that you choose & declare beforehand
@SwiftOnSecurity the failure to form the right question, or to be concise with your thoughts, is going to force everyone to communicate through the LLMs... we will have the Tower of Babel, but the monitoring will leave us all vulnerable.

@SwiftOnSecurity

Back in the card catalogue days, seeing neighboring cards often led to finding new things to research.

Wikisurfing's -kinda- like that with the adjacency of the links, but rather misses the more random-walk encounters encyclopedias gave.

Amazon et al. have had to implement 'recommendations' to make up for not traversing the aisles of a store.

Discovery of unexpected things is a joy and I am disappointed at the inability of modern tech to match the analog versions.

@SwiftOnSecurity

Also gods know everything 'novel' or 'interesting' I've ever come up with has been due to my weird brain doing cross-context associations.

@munin @SwiftOnSecurity the only reason I ended up pivoting into learning a bunch of signal integrity stuff, which then led to me more seriously picking electronics back up, is because I was searching for something about a server motherboard and stumbled onto a Robert Feranec video where he analysed various high speed features on the OCP Project Olympus dual-Xeon motherboard. a random keyword match, unrelated to the intent of the query, resulting in a profound impact on my life's trajectory.
@munin @SwiftOnSecurity it makes me sad to see every tech company trying to "optimise" information discovery processes away from this kind of accidental outcome.

@SwiftOnSecurity

100% this.

An LLM will answer a question. Sometimes even correctly! But it won't tell you "there's a better way to accomplish your goal" or talk incidentally about related subjects that give you an epiphany about the whole thing. Serendipity is an important thing.

@SwiftOnSecurity @tbortels I think the nuance of solving the XY problem remains human-only.