Gowthami Somepalli

218 Followers
81 Following
288 Posts
Grad student at UMD! Interested in #MachineLearning. She/her.
Website: https://somepago.github.io
RT @tomgoldsteincs
Today we're launching the TRAILS institute at UMD and GW. This $20M federally funded project will study the safety and public policy issues raised by artificial intelligence from a human-centric perspective.
https://www.forbes.com/sites/michaeltnietzel/2023/05/04/nsf-announces-140-million-investment-in-seven-artificial-intelligence-research-institutes/?sh=1d9ccbde4d21
NSF Announces $140 Million Investment In Seven Artificial Intelligence Research Institutes

The U.S. National Science Foundation (NSF), along with several other federal agencies and higher education institutions, has announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes (AI Institutes).


RT @haldaume3
📢 I'm thrilled to announce our new @NSF Institute on Trustworthy AI in Law & Society!

@trails_ai's premise is: Participation Builds (Appropriate) Trust.

🌐 Details: http://trails.umd.edu
💼 Postdoc, Director, ...: https://www.trails.umd.edu/getinvolved
🗓️ Events: https://www.trails.umd.edu/events


NSF Institute for Trustworthy AI in Law & Society (TRAILS)

RT @srush_nlp
👋 to grad students. I'm (in theory) a senior professor. Reading tweets like this makes me feel like a disgusting failure.

Just want to say that I, and ICLR, love your research and your pace. You're doing great work.

RT @kamalgupta09
We introduce LilNetX, a framework to train large neural networks that take up a fraction of the disk space and are much faster at inference!

Visit our poster @iclr_conf #iclr2023 in Rwanda to learn more!
When 🗓: May 02, 11:30 AM (Local Time)
Code & Video: https://lilnetx.github.io

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification

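The actual code is at the link above; purely as a rough illustration of the general idea (not LilNetX's method or API), here is a minimal sketch of compression-aware training using a group-sparsity penalty that drives whole neurons to zero, so the pruned network is both smaller on disk and cheaper at inference. The toy model, data, and penalty weight are all assumptions.

```python
# Illustrative sketch only -- NOT the LilNetX implementation
# (see https://lilnetx.github.io for the real code).
import torch
import torch.nn.functional as F

def group_sparsity_penalty(model):
    # Group lasso: sum of L2 norms over each output neuron's weight row,
    # which pushes entire rows to exactly zero so they can be pruned.
    penalty = torch.zeros(())
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            penalty = penalty + module.weight.norm(dim=1).sum()
    return penalty

model = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-4  # assumed strength of the size/accuracy trade-off

# Stand-in for a real data loader.
for x, y in [(torch.randn(32, 64), torch.randint(0, 10, (32,)))]:
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + lam * group_sparsity_penalty(model)
    loss.backward()
    optimizer.step()
```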

RT @togelius
Not long ago, breakthroughs in AI research often came from lone academics or small teams using desktop hardware. These days, not so much. Are you anxious about how to stay competitive in AI as an academic?
@yannakakis and I wrote this piece for you:
https://arxiv.org/abs/2304.06035
Choose Your Weapon: Survival Strategies for Depressed AI Academics

Are you an AI researcher at an academic institution? Are you anxious you are not coping with the current pace of AI advancements? Do you feel you have no (or very limited) access to the computational and human resources required for an AI research breakthrough? You are not alone; we feel the same way. A growing number of AI academics can no longer find the means and resources to compete at a global scale. This is a somewhat recent phenomenon, but an accelerating one, with private actors investing enormous compute resources into cutting-edge AI research. Here, we discuss what you can do to stay competitive while remaining an academic. We also briefly discuss what universities and the private sector could do to improve the situation, if they are so inclined. This is not an exhaustive list of strategies, and you may not agree with all of them, but it serves to start a discussion.


RT @jbhuang0604
SUPER excited about Ben Poole @poolio's visit at UMD @umdcs next Tuesday!

Looking forward to learning the insanely cool research on 2D priors for 3D generation! 🤩

As Rohan correctly pointed out… looks like we are at a crossroads again!
---
RT @rohanchandra30
@gowthami_s Wondering if this is what AI folks went through in 2012 with DL :-)

It's so exciting; so many new problem statements and applications to play with now!
https://twitter.com/rohanchandra30/status/1644163331052142598

On one hand, I like what I'm doing and want to spend more time understanding things; on the other, you hear opinions (from some Twitter "expert") like: if you don't move to LLM research, you'll be left behind.

This is exactly how I've been feeling for the last few months. It is becoming almost impossible to keep up with the current pace of research.

#phdlife #machinelearning
---
RT @natolambert
Almost everyone I know working in AI these days feels one step away from total burnout. I took the time to take you behind the curtain and share what people working on state-of-the-art AI are struggling with:

https://robotic.substack.com/p/behind-the-cu…
https://twitter.com/natolambert/status/1643751135856164864

RT @syhw
Do you need to quantize models? Try diffq, `pip install diffq` and https://github.com/facebookresearch/diffq#usage
GitHub - facebookresearch/diffq: DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight or group of weights, in order to achieve a given trade-off between model size and accuracy.

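For reference, a minimal training-loop sketch following the usage pattern in the linked README: a DiffQuantizer wraps the model, hooks into the optimizer, and adds a differentiable model-size term to the loss. The toy model, data, and penalty value are illustrative assumptions; consult the README for the authoritative usage.

```python
# Minimal sketch of DiffQ's differentiable quantization, based on the
# pattern in https://github.com/facebookresearch/diffq#usage.
import torch
import torch.nn.functional as F
from diffq import DiffQuantizer

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
# Per the README, create the optimizer before the quantizer.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
quantizer = DiffQuantizer(model)
quantizer.setup_optimizer(optimizer)  # registers the bit-width parameters

penalty = 1e-3  # assumed weight on the model-size term
model.train()
# Stand-in for a real data loader.
for x, y in [(torch.randn(8, 64), torch.randint(0, 10, (8,)))]:
    optimizer.zero_grad()
    # Task loss plus the differentiable model size (in MB).
    loss = F.cross_entropy(model(x), y) + penalty * quantizer.model_size()
    loss.backward()
    optimizer.step()

# Report the size with proper bit packing, then dump the quantized weights.
print(f"Compressed size: {quantizer.true_model_size():.2f} MB")
torch.save(quantizer.get_quantized_state(), "quantized.th")
```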