Eric William Lin

@ericwilliamlin
49 Followers
72 Following
14 Posts
Award-winning data experience designer/creative technologist. Tech/data/design nerd. Coffee and cocktail-obsessed. Used to be a composer. Canto + Taiwanese-American. šŸ‡ŗšŸ‡øšŸ‡¹šŸ‡¼šŸ‡­šŸ‡°šŸ³ļøā€šŸŒˆ Work on: www.ericwilliamlin.com
Genuine question: why do we expect LLMs like GPT-x to be both truth/knowledge machines and fabulists (i.e., creative generators)? Aren't the two tasks inherently contradictory? If it needs to be accurate AND needs to be able to lie (make up arbitrary facts, say, for a piece of fiction), how does the same model do both reliably?

I really, truly think enthusiasm is a social media hack.

People who follow me on Twitter (maybe a majority) have no career focus on computer security.
I follow farmers on YouTube just talking through their tractor-repair tribulations with perseverance. I will absolutely never drive a tractor or put any direct tractor knowledge they share into practice. But I care, because they communicate larger fundamental insights into finding solutions.

This is a magic ingredient. You can't really fake it, and people know it when they see it. It's one of those inescapably human manifestations the brain recognizes from micro-signals.

New clarified milk punch: Glenlivet 12yr single malt Scotch, Angostura 1919 Caribbean rum, Felix Roasting Ethiopian Gera Estate natural anaerobic coffee, Amaro Montenegro, Giffard banana šŸŒ liqueur, Chinola passion fruit, šŸ‹ lemon juice, milk-clarified.
@mrSaver @zalcarik @mmasnick It's not just the tech journalists. Also seeing a lot of breathless hype from the tech/startup crowd...and not just the opportunistic crypto refugees haha
@mrSaver @zalcarik @mmasnick oh for sure! I'm not expecting anything else! Again, very useful for some cases and lots of potential. It's the "this is gonna replace x and it's the end of y" folks who are overhyping these LLMs that annoy me. Let's celebrate the work but acknowledge what it can't do… and never can in its current state.
@zalcarik @mmasnick so it's not just about being "funny". It's all domains. Same thing with its sonnets and haikus. Tell it to write a poem and it defaults to trying to rhyme, but often without any understanding of overall rhyme schemes. I wonder if LLMs could be designed to have an understanding of time and cause and effect. (Or have a memory of what they wrote 3 paragraphs earlier.)
@zalcarik @mmasnick I think it's everything mapped onto a generic, statistically plausible median… which means it always has a believable shape, but it's very mediocre. I think this is generalizable across domains. I experimented a lot with cocktail-related prompts, for example… and you basically get a mediocre blandness to everything. It "knows" what a Negroni variation means… but it can never really understand how to break patterns.
The existence of various decentralized servers is both a major strength and a major downside of Mastodon. It's not hard to understand as a techie… but it's probably wildly unintuitive for most people coming from Twitter.
@jeremybowers I'm still trying to find an easy way just to recapture my data viz/news graphics network on here. It's so tedious to rebuild the same follow/following graph even in just one domain...
Looking forward to the day when we stop talking about Twitter here, have a thriving community spanning all sorts of topics, and no longer have to think about a particular pathetic person.