It's always the most mediocre people who are worried about AI replacing them or taking their jobs or becoming sentient and/or murderous.
@evacide an interesting observation. LLMs are by far the best bullshitters on the planet.
@femaven @evacide I am getting a 404 on the link
Expert Insight: Dangers of Using Large Language Models Before They Are Baked

Today's LLMs pose too many trust and security risks.

Dark Reading
@noplasticshower @evacide it’s about using Markov chains to fake knowledge about wine http://markallenthornton.com/blog/wine-markov/
Automatically generating wine tasting notes with Markov chains

Creating pseudorandom wine back labels customized by price, rating, type, or region using data from Wine.com

Mark Allen Thornton
@femaven @evacide fun and games. LLMs are much better bullshitters than Markov chains in my experience. It would be fun to prompt ChatGPT into pontificating about wine.

@noplasticshower @femaven @evacide

LLMs are literally Markov chains. They just have really, really complicated state and transition functions.

The way they work is: you give them a sequence of N tokens (that's the state) and they predict the most likely next token (that's the transition function). Now you've got N+1 tokens, so lather, rinse, repeat. Once the number of tokens exceeds some context limit M, you drop tokens off the front of the text. So basically it's a Markov chain, just a really big one.
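That loop can be sketched in a few lines. This is a toy illustration only, not a real language model: `next_token` here is a hypothetical stand-in for an LLM's learned next-token predictor, and the vocabulary and window size M are made up. The point is the Markov property: the next token depends only on the current state (the last M tokens).

```python
M = 4  # context window size (real models use thousands of tokens)

# Hypothetical transition function: a real LLM would compute a probability
# distribution over its vocabulary from the state and sample from it.
# Here we just pick deterministically from a toy wine-note vocabulary.
def next_token(state):
    vocab = ["oak", "cherry", "tannin", "finish"]
    idx = sum(len(t) for t in state) % len(vocab)
    return vocab[idx]

def generate(prompt, steps):
    tokens = list(prompt)
    for _ in range(steps):
        state = tokens[-M:]               # state = last M tokens only
        tokens.append(next_token(state))  # transition; lather, rinse, repeat
    return tokens

print(" ".join(generate(["notes", "of"], 6)))
```

Swapping in a real model only changes `next_token`; the state-and-transition structure of the generation loop stays the same.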

@dlakelan @femaven @evacide I see your perspective but disagree with that characterization
@noplasticshower @evacide the summary of the Markov chain for wine comes to a similar conclusion in the 2nd article: "Overall, the results of the Markov generation process were mixed - some are nonsensical, self-contradictory, or ungrammatical, but many are quite coherent and convincingly human. Ultimately, perhaps they're best considered as the tasting notes a critic might write after finishing the bottle."