48 Followers
337 Following
11 Posts
I'm a mathematician working on natural language processing at Finch Computing. My principal interest is applying mathematics to understand machine learning, artificial intelligence, and causality, with the ultimate goal of fighting disinformation on the Internet.
Marvin looks like an amazing tool for building large language model apps: https://www.askmarvin.ai/ #AI #nlp #llm
Welcome to Marvin: A batteries-included library for building AI-powered software.
Train large language models on consumer hardware! Thanks HuggingFace! https://github.com/huggingface/peft #nlp #ai
GitHub - huggingface/peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
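As a concrete illustration of why PEFT puts fine-tuning within reach of consumer hardware, here is a minimal LoRA setup; the base model and hyperparameters are illustrative assumptions on my part, not anything prescribed by the repo.

# Minimal LoRA fine-tuning setup with 🤗 PEFT (model and hyperparameters are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "facebook/opt-350m"  # assumption: any causal LM from the Hub works similarly
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the base weights and trains small low-rank adapter matrices instead.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the adapter matrices
    lora_alpha=32,    # scaling factor for the adapter output
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

Because only the adapter matrices receive gradients, the optimizer state and gradient memory shrink to a small fraction of full fine-tuning, which is what makes training on a single consumer GPU feasible.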
Finally! A resource to help understand the menagerie of open-source LLMs, fine-tuning techniques, and publicly available datasets: The Flan Collection: Designing Data and Methods for Effective Instruction Tuning (https://arxiv.org/abs/2301.13688v1) #NLP #AI
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning

We study the design decisions of publicly available instruction tuning methods, and break down the development of Flan 2022 (Chung et al., 2022). Through careful ablation studies on the Flan Collection of tasks and methods, we tease apart the effect of design decisions which enable Flan-T5 to outperform prior work by 3-17%+ across evaluation settings. We find task balancing and enrichment techniques are overlooked but critical to effective instruction tuning, and in particular, training with mixed prompt settings (zero-shot, few-shot, and chain-of-thought) actually yields stronger (2%+) performance in all settings. In further experiments, we show Flan-T5 requires less finetuning to converge higher and faster than T5 on single downstream tasks, motivating instruction-tuned models as more computationally-efficient starting checkpoints for new tasks. Finally, to accelerate research on instruction tuning, we make the Flan 2022 collection of datasets, templates, and methods publicly available at https://github.com/google-research/FLAN/tree/main/flan/v2.

Just got an early Christmas gift! #optimization #informationgeometry #deeplearning