Andrey Kurenkov

35 Followers
16 Following
1.9K Posts
CS PhD at Stanford Vision Lab. Creator/lead editor http://skynettoday.com 🤖 Into AI, robots, cinema, coding, photography. Russian.
www.andreykurenkov.com
And another thing, in the spirit of sharing uplifting things.

Just want to share this lovely sentiment from the legendary artist Patti Smith

Indeed, in these times, and as things get more dire (as they will), let's not count out levity

(screenshot since I don't think it exists outside of Instagram on this platform)

One benefit of #WFH - can take breaks by finally catching up on reading this pile of magazines that has mostly just been collecting dust for months. And while relaxing on this excellent hammock 😊

Also, oh boy is this apartment going to get clean...

Pretty crazy how just a few weeks ago it seemed like it'd be fine to travel home for spring break and after that spend a weekend in SF going to fun events.

A bummer to have to cancel, but of course it's the least I can do (for my own sake and for the sake of others).

Great succinct and clear summary of things to know wrt #COVID19.

I've been trying to educate myself from multiple sources, and this aligns with my understanding as well.

Especially if you are in the bay area, read this or other overviews and be informed!

As today is #IWD2020, fun fact time: did you know Women's Day is a major holiday for Russians culturally?

It is common for men to get flowers and express appreciation to women in their lives!

Thankful for all the awesome women in my life 😊

http://masterrussian.com/russianculture/womens_day_march8.htm

March 8th - International Women's Day - Russian Holidays

How to celebrate March 8th and what to give as a gift to your woman, mother, sister or grandmother. March 8th is a Russian public holiday also known as International Women's Day. Find March 8 poems in Russian and what Russians think about this holiday.

For others in the #BayArea, a good overview article about current situation with #COVID19: https://www.sfchronicle.com/bayarea/article/Wuhan-coronavirus-Here-s-what-we-know-15000563.php

There are now 31 confirmed cases in the bay area, with 14 in Santa Clara county near @[email protected] and big tech companies: https://www.mercurynews.com/2020/03/04/map-coronavirus-cases-in-the-bay-area-northern-california/

Stay informed!

Coronavirus hits Bay Area: What residents need to know

The number of confirmed cases of coronavirus is climbing each day, and there are signs the virus is spreading from person to person. Here is the key information San Francisco Bay Area residents need to know.

It's primary vote day here in CA!

Voting was super quick and painless, which was nice.

If your state is also part of super Tuesday, do your part and #vote! :)

#SuperTuesday #CaliforniaPrimary

Some more fun statistics from the paper deadline week by @[email protected]. Apparently I do a lot of pacing when writing 😅 also, it makes sense I slept less, but why did I climb so many floors 🤔

@[email protected]

RT @[email protected]

Recent studies have suggested that the earliest iterations of DNN training are especially critical. In our #ICLR2020 paper with @[email protected] and @[email protected], we use the lottery ticket framework to rigorously examine this crucial phase of training.

https://arxiv.org/abs/2002.10365

🐦🔗: https://twitter.com/arimorcos/status/1234517600962596865

The Early Phase of Neural Network Training

Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent moves into a small subspace (Gur-Ari et al., 2018), and the network undergoes a critical period (Achille et al., 2019). Here, we examine the changes that deep neural networks undergo during this early phase of training. We perform extensive measurements of the network state during these early iterations of training and leverage the framework of Frankle et al. (2019) to quantitatively probe the weight distribution and its reliance on various aspects of the dataset. We find that, within this framework, deep networks are not robust to reinitializing with random weights while maintaining signs, and that weight distributions are highly non-independent even after only a few hundred iterations. Despite this behavior, pre-training with blurred inputs or an auxiliary self-supervised task can approximate the changes in supervised networks, suggesting that these changes are not inherently label-dependent, though labels significantly accelerate this process. Together, these results help to elucidate the network changes occurring during this pivotal initial period of learning.