I'm on the latest episode of the Rooftop Ruby podcast with @collin and @joeldrapper talking about Large Language Models

It was a really excellent conversation - we covered a huge amount of ground

I'm trying something new: I put together my own transcript with Whisper, then cleaned that up and added inline links and section headings. Here's the result, complete with an embedded audio player that can jump to each different section: https://simonwillison.net/2023/Sep/29/llms-podcast/

Talking Large Language Models with Rooftop Ruby

I’m on the latest episode of the Rooftop Ruby podcast with Collin Donnell and Joel Drapper, talking all things LLM. Here’s a full transcript of the episode, which I generated …
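For anyone curious about the Whisper step, it can be done in a few lines with the openai-whisper package. This is a rough sketch, not the exact setup used for the episode — the model size and filename are illustrative placeholders:

```python
# Rough sketch: transcribe a podcast episode with openai-whisper, then turn
# each segment's start time into an HH:MM:SS label for section links.
# The "base" model and "episode.mp3" path are illustrative placeholders.

def format_timestamp(seconds: float) -> str:
    """Convert a segment start time in seconds to an HH:MM:SS label."""
    total = int(seconds)
    hours, remainder = divmod(total, 3600)
    minutes, secs = divmod(remainder, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"

def transcribe(path: str):
    import whisper  # pip install openai-whisper
    model = whisper.load_model("base")
    result = model.transcribe(path)
    # Each segment dict includes "start" (seconds) and "text" keys.
    return [(format_timestamp(seg["start"]), seg["text"])
            for seg in result["segments"]]

if __name__ == "__main__":
    for stamp, text in transcribe("episode.mp3"):
        print(stamp, text)
```

The timestamped segments are what make jump-to-section links possible later — each heading in the cleaned-up transcript can point at a known offset in the audio.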

@collin @joeldrapper I used GPT-4 to help build my own custom audio player, with a 3x speed button!

Here's that GPT-4 transcript: https://chat.openai.com/share/4ea13846-6292-4412-97e5-57400279c6c7


@collin @joeldrapper Here's the full list of topics we covered. You can click through to each of these to jump directly to that point in the audio (or just read the annotated transcript) https://simonwillison.net/2023/Sep/29/llms-podcast/

@collin @joeldrapper From the podcast, here are my thoughts on whether leaning on LLM assistance is likely to help or hurt new programmers:
https://simonwillison.net/2023/Sep/29/llms-podcast/#does-it-help-or-hurt-new-programmers

@simon With the side benefit of not having to deal with all the asshole seniors and StackOverflow jockeys who can’t wait to mock you for needing help
@mattmay hard to overstate how important that is - being able to ask dumb questions with zero chance of judgement is wonderful

@simon @collin @joeldrapper

I find LLM code gen works best as a way to generate standalone functions with unit tests that can be integrated into existing apps. As such, it is like generating a custom library instantly.

Generating whole apps is much harder, especially if a UI is involved. It can be quite tedious to explain all the UI behaviors of a webapp at this point. Ultimately it will boil down to specifications-as-programming, which has long been a goal.
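As an illustration of that "custom library instantly" pattern, a hypothetical LLM-generated helper might arrive as a standalone function bundled with its own unit tests, ready to drop into an existing app. The `slugify` function and its tests below are invented purely for illustration:

```python
# Hypothetical example of the pattern described above: an LLM-generated
# standalone function shipped together with its own unit tests, usable
# like an instant single-purpose library.
import re

def slugify(title: str) -> str:
    """Turn an arbitrary title into a URL-safe slug."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of non-alphanumerics
    return slug.strip("-")

# Unit tests generated alongside the function:
assert slugify("Talking Large Language Models!") == "talking-large-language-models"
assert slugify("  LLMs: help or hurt? ") == "llms-help-or-hurt"
```

Because the function has no dependencies on the surrounding app and carries its own tests, it can be reviewed and integrated in one step — which is exactly why this shape of output works so well.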

@baclace @collin @joeldrapper Yeah, UI programming is definitely a lot less well served - I can get bits and pieces out of it, but fundamentally these models don't have a great sense of 2D space yet, so they're not the best for interface work

I'm looking forward to seeing if that changes with the new GPT-4 image inputs