Raphael Tang

Lead research scientist at Comcast AI, interested in model compression, explainable AI, probability theory, manifold learning, geometric deep learning, and physics.
Website: http://ralphtang.com

What the DAAM: with our attribution maps, we uncover entanglement in Stable Diffusion. Cohyponyms, such as "zebra" and "giraffe," worsen generation when they appear together, and adjectives attend beyond the nouns they modify.

Tweet: https://twitter.com/ralph_tang/status/1600912260540817409

Paper: https://arxiv.org/abs/2210.04885

Demo: https://huggingface.co/spaces/tetrisd/Diffusion-Attentive-Attribution-Maps

Codebase: https://github.com/castorini/daam
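Conceptually, a DAAM-style heat map aggregates the cross-attention that a prompt token receives across heads, layers, and denoising timesteps, upsampled to a common spatial resolution. A minimal NumPy sketch of that aggregation step, with illustrative shapes and names (this is not the daam package's API):

```python
import numpy as np

def word_heat_map(attn_maps, token_idx, out_size=64):
    """Aggregate cross-attention into a per-token heat map.

    attn_maps: list of arrays shaped (heads, h*w, tokens) -- one per
    (layer, timestep) pair, possibly at different spatial resolutions.
    Returns an (out_size, out_size) map normalized to [0, 1].
    """
    acc = np.zeros((out_size, out_size))
    for a in attn_maps:
        _, hw, _ = a.shape
        side = int(round(hw ** 0.5))
        # average over heads, take the token's attention column, reshape to 2-D
        m = a.mean(axis=0)[:, token_idx].reshape(side, side)
        # nearest-neighbor upsample to a common resolution before summing
        reps = out_size // side
        acc += np.kron(m, np.ones((reps, reps)))
    # normalize to [0, 1] for visualization
    acc -= acc.min()
    if acc.max() > 0:
        acc /= acc.max()
    return acc
```

Thresholding maps like this per word is what lets one check, for instance, whether "zebra" and "giraffe" attend to overlapping image regions.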

In our EMNLP 2022 paper, SpeechNet (https://arxiv.org/abs/2211.11740), we productionize Wav2vec 2.0 in the resource-constrained setting, building datasets with weak supervision and accelerating the model with CUDA graph pools. Check it out!
SpeechNet: Weakly Supervised, End-to-End Speech Recognition at Industrial Scale

End-to-end automatic speech recognition systems represent the state of the art, but they rely on thousands of hours of manually annotated speech for training, as well as heavyweight computation for inference. Of course, this impedes commercialization since most companies lack vast human and computational resources. In this paper, we explore training and deploying an ASR system in the label-scarce, compute-limited setting. To reduce human labor, we use a third-party ASR system as a weak supervision source, supplemented with labeling functions derived from implicit user feedback. To accelerate inference, we propose to route production-time queries across a pool of CUDA graphs of varying input lengths, the distribution of which best matches the traffic's. Compared to our third-party ASR, we achieve a relative improvement in word-error rate of 8% and a speedup of 600%. Our system, called SpeechNet, currently serves 12 million queries per day on our voice-enabled smart television. To our knowledge, this is the first time a large-scale, Wav2vec-based deployment has been described in the academic literature.
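The CUDA-graph pool idea can be illustrated by the routing logic alone: graphs are pre-captured at a set of fixed input lengths chosen to match the production traffic distribution, and each incoming query is padded up to the smallest length that fits. A hedged Python sketch, with bucket sizes and names invented for illustration (graph capture and replay are elided):

```python
import bisect

class GraphPool:
    """Route queries to pre-captured CUDA graphs keyed by padded length.

    Each bucket length stands in for a CUDA graph captured at that fixed
    input size; the actual capture/replay machinery is omitted here.
    """

    def __init__(self, bucket_lengths):
        self.buckets = sorted(bucket_lengths)

    def route(self, query_len):
        # smallest pre-captured length that can hold the query
        i = bisect.bisect_left(self.buckets, query_len)
        if i == len(self.buckets):
            raise ValueError(f"query length {query_len} exceeds largest bucket")
        return self.buckets[i]

    def pad(self, samples):
        # zero-pad the waveform to its bucket's fixed length
        target = self.route(len(samples))
        return samples + [0.0] * (target - len(samples))
```

Because replaying a captured graph avoids per-kernel launch overhead, the trade-off is a little padding compute in exchange for much lower latency; choosing bucket lengths from the observed traffic distribution keeps that padding small.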
