Soumith Chintala

1.9K Followers
101 Following
9 Posts

Co-creator and lead of @pytorch at Meta A.I.
I also dabble in robotics these days.

AI is delicious when it is open-source and easy-to-use.

If you installed PyTorch-nightly on Linux between Dec. 25 and Dec. 30, uninstall it and torchtriton immediately and use the latest nightly binaries.

Read the security advisory here: https://pytorch.org/blog/compromised-nightly-dependency/
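The advisory's remediation boils down to uninstalling the affected packages (including torchtriton), purging pip's cache so the bad wheels can't be restored from cache, and then reinstalling a current nightly. A sketch of those steps (the exact package list depends on what you installed; treat this as guidance, not a script to run blindly):

```shell
# Remove the potentially compromised nightly packages and the
# malicious torchtriton dependency
pip3 uninstall -y torch torchvision torchaudio torchtriton

# Purge pip's cache so the compromised wheels aren't reinstalled from cache
pip3 cache purge

# Then reinstall the latest nightly binaries using the current
# install command from pytorch.org (selector: Nightly build)
```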

Compromised PyTorch-nightly dependency chain between December 25th and December 30th, 2022. – PyTorch

Ran a practical search today that Google totally failed to answer; Perplexity.ai failed too, but ChatGPT gave an answer that sounds plausible.
Now the conundrum is that I don't know whether ChatGPT made stuff up or gave an accurate answer, haha.

OpenAI just released a 3D version of DALL-E called Point-E.
github: https://github.com/openai/point-e
arXiv: https://arxiv.org/abs/2212.08751

I've seen a few 3D point cloud generators over the last year. I don't know the area well. Anyone know if there is anything significantly different going on here?

[EDIT: and "bigger and faster" is an okay answer, though I suspect there is more to it]

GitHub - openai/point-e: Point cloud diffusion for 3D model synthesis


Just created @pytorch here.

Start following it for updates (will set up syndication shortly).

toot toot!
Just getting started here. Start following us for updates.

The tooot app has been pretty good for Mastodon on iOS. Fast, responsive, small.

https://apps.apple.com/us/app/tooot/id1549772269


CLIP-Fields won the Outstanding Paper Award at the LangRob workshop @ CoRL 2022. (Congrats @notmahi@twitter!)

This is a starting point for NLP-powered spatial memory, and it's strictly better than using each pre-trained model separately.
The framework makes it easy to add in more signals (like GraspNet).

Read more about our work here:
https://mahis.life/clip-fields/

CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory

Teaching robots in the real world to respond to natural language queries with zero human labels — using pretrained large language models (LLMs), visual language models (VLMs), and neural fields.
