Kyle Baxter

254 Followers
100 Following
150 Posts
The lite brite is now black and white. Applying large language models to the financial audit space and beyond.
“Building Cooperative Embodied Agents Modularly with Large Language Models”: https://huggingface.co/papers/2307.02485
Google teases Project Tailwind — a prototype AI notebook that learns from your documents

At its I/O developer conference, Google teased Project Tailwind, a prototype AI notebook that learns from your personal documents. Google said the software could act as a tutor or writing assistant.

The Verge
Apple Music, I dunno if I’d say Directions to See a Ghost is chill. Falling into a psychedelic oblivion maybe, but probably not chill. All the same, I like where your head is at.
Back in 2015 I had a nugget of an idea for an app for note taking and creating outlines, which would make it easy to link concepts across notes/outlines, and would automatically create an index of terms/n-grams across notes/outlines. Thinking about how trivial it would be to do this now with modern LLMs (and a lot more).
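The automatic-index part of that idea is simple enough to sketch even without an LLM. Here's a toy Python version that extracts terms and n-grams from a set of notes and maps each one to the notes it appears in; the note names and the `max_n=2` cutoff are just illustrative.

```python
from collections import defaultdict

def ngrams(tokens, n):
    """Yield all n-grams (as space-joined strings) from a token list."""
    for i in range(len(tokens) - n + 1):
        yield " ".join(tokens[i:i + n])

def build_index(notes, max_n=2):
    """Map each term/n-gram to the set of note titles it appears in.

    `notes` is a dict of {title: text}; the structure is a hypothetical
    stand-in for whatever the app would store.
    """
    index = defaultdict(set)
    for title, text in notes.items():
        tokens = text.lower().split()
        for n in range(1, max_n + 1):
            for gram in ngrams(tokens, n):
                index[gram].add(title)
    return index

notes = {
    "Outline A": "large language models for audit",
    "Outline B": "audit tooling with language models",
}
index = build_index(notes)
# Terms appearing in more than one note become candidate cross-note links:
shared = {gram for gram, titles in index.items() if len(titles) > 1}
```

An LLM would improve on this by linking *concepts* rather than exact surface strings, but the shared-term index above is the skeleton of the feature.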
“Collaborating with language models for embodied reasoning”: https://arxiv.org/abs/2302.00763
Collaborating with language models for embodied reasoning

Reasoning in a complex and ambiguous environment is a key goal for Reinforcement Learning (RL) agents. While some sophisticated RL agents can successfully solve difficult tasks, they require a large amount of training data and often struggle to generalize to new unseen environments and new tasks. On the other hand, Large Scale Language Models (LSLMs) have exhibited strong reasoning ability and the ability to adapt to new tasks through in-context learning. However, LSLMs do not inherently have the ability to interrogate or intervene on the environment. In this work, we investigate how to combine these complementary abilities in a single system consisting of three parts: a Planner, an Actor, and a Reporter. The Planner is a pre-trained language model that can issue commands to a simple embodied agent (the Actor), while the Reporter communicates with the Planner to inform its next command. We present a set of tasks that require reasoning, test this system's ability to generalize zero-shot and investigate failure cases, and demonstrate how components of this system can be trained with reinforcement-learning to improve performance.

arXiv.org
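The Planner–Actor–Reporter split the abstract describes boils down to a simple loop. This is a minimal sketch of that loop, not the paper's implementation; the function signatures and environment API are all assumptions.

```python
# Sketch of the Planner–Actor–Reporter loop from the abstract:
# the Planner (an LLM) issues commands, the Actor executes them in the
# environment, and the Reporter describes outcomes back to the Planner.

def run_episode(planner, actor, env, max_steps=10):
    """Run one episode; returns the Reporter's transcript."""
    observation = env.reset()
    transcript = []
    for _ in range(max_steps):
        command = planner(observation, transcript)    # LLM picks next command
        observation, done = actor(env, command)       # embodied agent acts
        report = f"After '{command}': {observation}"  # Reporter summarizes
        transcript.append(report)
        if done:
            break
    return transcript
```

The transcript is what lets a text-only model "interrogate" an environment it cannot observe directly: each report becomes context for the Planner's next command.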

@skoda Here’s a fun throwback mid-development screenshot I found of the feature (from a decade ago!).

After launch, I wanted to add a feature where users could select their preferred measuring units (imperial or metric), and Basil would parse measures in ingredients, and convert them to their preferred units. So I built on top of the great foundation Chuck had given me to parse measures and do conversions. What a learning experience that was.
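The core of that feature (spot a measure in an ingredient line, convert it, rewrite the line) can be sketched in a few lines. The regex, the supported units, and the conversion factors below are illustrative only, not Basil's actual parser.

```python
import re

# Toy conversion table: imperial unit -> (metric unit, factor).
CONVERSIONS = {
    "cup": ("ml", 236.588),
    "oz": ("g", 28.3495),
}

MEASURE_RE = re.compile(r"(\d+(?:\.\d+)?)\s*(cup|oz)s?\b", re.IGNORECASE)

def to_metric(ingredient):
    """Rewrite imperial measures in an ingredient line as metric."""
    def convert(match):
        qty = float(match.group(1))
        unit = match.group(2).lower()
        target, factor = CONVERSIONS[unit]
        return f"{qty * factor:.0f} {target}"
    return MEASURE_RE.sub(convert, ingredient)
```

For example, `to_metric("2 cups flour")` yields `"473 ml flour"`. The hard part in practice is the long tail of measure formats (fractions, ranges, "a pinch"), which is where a real parser earns its keep.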

Zion is mesmerizing in the winter
when you chain GPT-3 prompts together in an app for the first time
Threw on Trans while working to own the haters