Gowthami Somepalli

218 Followers
81 Following
288 Posts
Grad student at UMD! Interested in #MachineLearning. She/her.
Website: https://somepago.github.io

RT @haldaume3
📢 I'm thrilled to announce our new @NSF Institute on Trustworthy AI in Law & Society!

@trails_ai's premise is: Participation Builds (Appropriate) Trust.

🌐 Details: http://trails.umd.edu
💼 Postdoc, Director, ...: https://www.trails.umd.edu/getinvolved
🗓️ Events: https://www.trails.umd.edu/events


NSF Institute for Trustworthy AI in Law & Society (TRAILS)

RT @srush_nlp
👋 to grad students. I'm (in theory) a senior professor. Reading tweets like this makes me feel like a disgusting failure.

Just want to say that I, and ICLR, love your research and your pace. You're doing great work.

RT @kamalgupta09
We introduce LilNetX, a framework to train large neural networks that take up a fraction of the disk space and are much faster at inference!

Visit our poster @iclr_conf #iclr2023 in Rwanda to learn more!
When 🗓: May 02, 11:30 AM (Local Time)
Code & Video: https://lilnetx.github.io

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification


RT @togelius
Not long ago, breakthroughs in AI research often came from lone academics or small teams using desktop hardware. These days, not so much. Are you anxious about how to stay competitive in AI as an academic?
@yannakakis and I wrote this piece for you:
https://arxiv.org/abs/2304.06035
Choose Your Weapon: Survival Strategies for Depressed AI Academics

Are you an AI researcher at an academic institution? Are you anxious you are not coping with the current pace of AI advancements? Do you feel you have no (or very limited) access to the computational and human resources required for an AI research breakthrough? You are not alone; we feel the same way. A growing number of AI academics can no longer find the means and resources to compete at a global scale. This is a somewhat recent phenomenon, but an accelerating one, with private actors investing enormous compute resources into cutting-edge AI research. Here, we discuss what you can do to stay competitive while remaining an academic. We also briefly discuss what universities and the private sector could do to improve the situation, if they are so inclined. This is not an exhaustive list of strategies, and you may not agree with all of them, but it serves to start a discussion.

arXiv.org

RT @jbhuang0604
SUPER excited about Ben Poole @poolio's visit at UMD @umdcs next Tuesday!

Looking forward to learning the insanely cool research on 2D priors for 3D generation! 🤩

This is exactly how I've been feeling for the last few months. It is becoming almost impossible to keep up with the current pace of research.

#phdlife #machinelearning
---
RT @natolambert
Almost everyone I know working in AI these days feels one step away from total burnout. I took the time to take you behind the curtain so you can see what people working on state-of-the-art AI are struggling with:

https://robotic.substack.com/p/behind-the-cu…
https://twitter.com/natolambert/status/1643751135856164864

Excited to talk about our #CVPR23 paper, "Investigating Data Replication in #Diffusion Models" at @ml_collective tomorrow! Drop by if you want to learn more about this topic!

⏰ - Mar 31, 1 pm Eastern
💻 - https://mlcollective.org/dlct/
📃 - https://arxiv.org/abs/2212.03860

#MachineLearning

ML Collective


RT @_akhaliq
ASIC: Aligning Sparse in-the-wild Image Collections

abs: https://arxiv.org/abs/2303.16201
project page: https://kampta.github.io/asic/


We present a method for joint alignment of sparse in-the-wild image collections of an object category. Most prior works assume either ground-truth keypoint annotations or a large dataset of images of a single object category. However, neither of the above assumptions hold true for the long-tail of the objects present in the world. We present a self-supervised technique that directly optimizes on a sparse collection of images of a particular object/object category to obtain consistent dense correspondences across the collection. We use pairwise nearest neighbors obtained from deep features of a pre-trained vision transformer (ViT) model as noisy and sparse keypoint matches and make them dense and accurate matches by optimizing a neural network that jointly maps the image collection into a learned canonical grid. Experiments on CUB and SPair-71k benchmarks demonstrate that our method can produce globally consistent and higher quality correspondences across the image collection when compared to existing self-supervised methods. Code and other material will be made available at \url{https://kampta.github.io/asic}.
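The matching step the abstract describes (pairwise nearest neighbors from pre-trained ViT features, kept only when they are mutual, as noisy keypoint matches) can be sketched roughly as follows. This is a minimal illustration, not the authors' released code; the function name and the use of plain NumPy on pre-extracted feature arrays are assumptions for the example.

```python
import numpy as np

def mutual_nearest_neighbors(feat_a, feat_b):
    """Return index pairs (i, j) where patch i of image A and patch j of
    image B are each other's nearest neighbor in cosine-similarity space.

    feat_a: (Na, D) array of patch features for image A (e.g. from a ViT).
    feat_b: (Nb, D) array of patch features for image B.
    """
    # Normalize rows so the dot product equals cosine similarity.
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    sim = a @ b.T                # (Na, Nb) similarity matrix
    nn_ab = sim.argmax(axis=1)   # best match in B for each patch of A
    nn_ba = sim.argmax(axis=0)   # best match in A for each patch of B
    # Keep only mutual (cycle-consistent) matches; these are the sparse,
    # noisy correspondences that the method then densifies and refines.
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

In the paper's pipeline these sparse matches are only the starting point; a neural network mapping each image into a learned canonical grid is then optimized to turn them into dense, globally consistent correspondences.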


This thread is too fun.

TLDR: How to bargain with a #GPT4-based carpet salesman?
---
RT @mayfer
haha ok one more GPT-4 game:

try to buy the rug for as cheap as possible.
play link: https://aiadventure.spiel.com/carpet

my record so far is $400
https://twitter.com/mayfer/status/1638356816836059136

GPT-4 Carpet Salesman

RT @shlokkkk
New paper: "Hyperbolic Contrastive Learning for Visual Representations beyond Objects", accepted to #CVPR2023.
Paper: https://arxiv.org/pdf/2212.00653.pdf
Code: https://github.com/shlokk/HCL/