| Location | San Francisco |
did a rewrite of the writing flow tool and added...
✨ sections,
✨ a diff view for reviewing & editing suggested changes,
✨ the start of a flow editor view,
✨ and a way to edit the generated paragraph summaries
coming up next: adding drag & drop reordering in the flow editor view 🙌
starting the day with @andy_matuschak's article "Cultivating depth and stillness in research" was exactly what I needed to remind myself to move more slowly and deliberately this year.
https://andymatuschak.org/stillness
Leading to a lovely day of printing words and marking them up over tea - feels like the best work is created with a blend of frenzied creating and slow marinating
It’s so fun indexing your own data via embeddings and being able to query a GPT-3 bot empowered with your context.
And federating with all kinds of data sources (e.g. for me: Remix, Shopify, web.dev, etc.) #genai
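The embeddings-plus-context workflow can be sketched roughly like this: embed your documents once, retrieve the most similar ones for a question, and prepend them to the bot's prompt. This is a minimal, self-contained illustration — the toy bigram-hash `embed` function stands in for a real embedding model, and all names here are hypothetical:

```python
import math

def embed(text):
    # Toy stand-in for a real embedding model (normally an API call);
    # hashes character bigrams into a small fixed-size unit vector.
    vec = [0.0] * 64
    for a, b in zip(text, text[1:]):
        vec[(ord(a) * 31 + ord(b)) % 64] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(u, v):
    # Both vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(a * b for a, b in zip(u, v))

# Index personal documents once (hypothetical examples).
docs = [
    "Remix uses nested routes and loaders for data fetching",
    "Shopify apps authenticate with OAuth access tokens",
    "web.dev recommends lazy-loading offscreen images",
]
index = [(d, embed(d)) for d in docs]

def retrieve(question, k=1):
    # Rank documents by similarity to the question embedding.
    q = embed(question)
    ranked = sorted(index, key=lambda p: cosine(q, p[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# The retrieved context gets prepended to the LLM prompt.
question = "How do Remix loaders work?"
prompt = f"Context: {retrieve(question)[0]}\nQuestion: {question}"
```

In a real setup the embedding call and the final completion call would both go to a hosted model; the retrieval step in the middle is what "empowers the bot with your context."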
Some reflections on my experience building the Twemex browser extension, and why tweaking existing software can be nice:
https://www.geoffreylitt.com/2023/01/08/for-your-next-side-project-make-a-browser-extension.html
"A consistent challenge in my development as a researcher has been: how to cultivate deep, stable concentration in the face of complex, ill-structured creative problems?"
Newly unlocked Letter from the Lab in the spirit of new year reflections/planning: https://andymatuschak.org/stillness/
Memory Augmented Large Language Models are Computationally Universal
We show that transformer-based large language models are computationally universal when augmented with an external memory. Any deterministic language model that conditions on strings of bounded length is equivalent to a finite automaton, hence computationally limited. However, augmenting such models with a read-write memory creates the possibility of processing arbitrarily large inputs and, potentially, simulating any algorithm. We establish that an existing large language model, Flan-U-PaLM 540B, can be combined with an associative read-write memory to exactly simulate the execution of a universal Turing machine, $U_{15,2}$. A key aspect of the finding is that it does not require any modification of the language model weights. Instead, the construction relies solely on designing a form of stored instruction computer that can subsequently be programmed with a specific set of prompts.
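The core idea — a frozen model acting as a fixed transition function, with all mutable state living in an external associative memory — can be illustrated with a toy simulator. This is not the paper's Flan-U-PaLM / $U_{15,2}$ construction, just a sketch of why read-write memory lifts a bounded-context system to unbounded computation; the unary-increment machine and all names are illustrative:

```python
# The rule table plays the role of the frozen model: it is never modified.
# All state lives in the external key -> value memory.
RULES = {  # (state, symbol) -> (write, move, next_state)
    ("scan", "1"): ("1", +1, "scan"),   # skip over existing 1s
    ("scan", "_"): ("1", +1, "halt"),   # write one more 1, then halt
}

def run(tape_str, max_steps=100):
    memory = {i: c for i, c in enumerate(tape_str)}  # associative read-write memory
    state, head = "scan", 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = memory.get(head, "_")               # read from memory
        write, move, state = RULES[(state, symbol)]  # fixed "prompted" transition
        memory[head] = write                         # write back to memory
        head += move                                 # move the head
    return "".join(memory[i] for i in sorted(memory))

print(run("111"))  # appends a 1 to the unary string
```

The tape can grow without bound even though the transition table is finite and fixed — the same separation of frozen weights from external storage that the paper exploits.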