Vlad Ayzenberg

126 Followers
88 Following
83 Posts
Postdoc in the Behrmann lab, former PhD in the Lourenco lab. Interested in: Cognition. Computation. Neuroscience. Development. he/him.
Website: vlad-ayzenberg.com
Twitter: https://twitter.com/vayzenberg90

Seems to be back online!
---
RT @vayzenberg90
FYI, the #pavlovia website seems to be having issues. Tasks will seemingly start and run just fine, but will throw an error at the end when saving the data.

Hopefully this saves someone else from getting to the end of a session only to lose the participant's data...
https://twitter.com/vayzenberg90/status/1653799303779872774
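One generic safeguard for this failure mode is to keep a client-side copy of the data so a failed server-side save is recoverable. The sketch below is a minimal, hypothetical example, not the actual Pavlovia/PsychoJS API: it serializes an array of trial objects to CSV and offers it as a local download; both function names and the idea of wiring `downloadCsv` into an experiment's error handler are assumptions for illustration.

```javascript
// Minimal sketch (assumed helper names, not the PsychoJS API): serialize
// trial data to CSV so it can be recovered locally if the upload fails.
function trialsToCsv(trials) {
  if (trials.length === 0) return "";
  const headers = Object.keys(trials[0]);
  // Quote fields containing commas, quotes, or newlines (RFC 4180 style).
  const escape = (value) => {
    const s = String(value);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const rows = trials.map((t) => headers.map((h) => escape(t[h])).join(","));
  return [headers.join(","), ...rows].join("\n");
}

// Browser-only fallback: trigger a local download of the CSV. This would be
// called from whatever error handler fires when saving to the server fails
// (the handler itself is experiment-specific and not shown here).
function downloadCsv(csv, filename) {
  const blob = new Blob([csv], { type: "text/csv" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();
}
```

Even a crude fallback like this means a session that errors out at the save step still leaves the participant's responses on the experimenter's (or participant's) machine.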


RT @sami_r_yousif
After I left his lab of my own volition, a former adviser started making extreme claims about me out of nowhere in a seemingly retaliatory way. I spent four years trying to have those claims and his conduct investigated, largely unsuccessfully. Read more: https://yaledailynews.com/blog/2023/04/28/up-close-when-graduate-student-adviser-relationships-go-awry/
UP CLOSE: When graduate student-adviser relationships go awry

Advisers are graduate students’ lifelines at Yale: they provide guidance in academic and professional matters and boost students into their postgrad lives. But when these relationships turn sour, few resources exist for graduate students to equitably resolve faculty conflicts.

Yale Daily News
---
RT @mkpsyx
Excited to announce that my first first-authored paper is out now in @ScienceAdvances http://doi.org/10.1126/sciadv.add2981 We collected 1 million+ memory ratings to show that semantic properties best explain what we remember. Major thanks to @WilmaBainbridge @martin_hebart @Chris_I_Baker
However, @action_brain pointed out that our DNN simulations primarily compared classification of rectilinear vs. curvilinear shapes, not within-category exemplar discrimination, as they did in their original study
---
RT @action_brain
The critical point we raise is that RV can discriminate between different curvilinear shapes – and between different rectilinear ones as well. We were not testing if she can tell curvilinear from rectilinear shapes (which she undoubtedly can as well).
https://twitter.com/action_brain/status/1628073511733407744

In our recent reply to @action_brain and David Milner, we used DNNs to argue that patients with parietal lesions could discriminate their stimuli by relying on local visual features, not global shape
---
RT @vayzenberg90
We received another commentary on our @TrendsCognSci paper "Does the ventral visual pathway represent object shape?"

This time from @action_brain and David Milner. See their commentary here: https://doi.org/10.1016/j…
https://twitter.com/vayzenberg90/status/1628043940065849344

RT @tomstello_
1 / 🧵Thrilled to present a new article in @TrendsCognSci exploring the limitations of the left-right spectrum in political psychology. We propose a more nuanced, data-driven approach (w/ co-authors @LeorZmigrod & @ArberTasimi).
(https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(23)00074-8?rss=yes)
Viewpoint-dependent object recognition
---
RT @lxeagle17
This is so unsettling.
https://twitter.com/lxeagle17/status/1646277625076391938
RT @levels_of
New paper! Now out in Open Mind, @PraveenKenderla, Sung-Ho Kim, & I found that objects’ structural properties (specifically topology) competed with objects’ surface features (both shape and color) in children’s extension of labels to novel objects! 1/ https://direct.mit.edu/opmi/article/doi/10.1162/opmi_a_00073/115301/Competition-Between-Object-Topology-and-Surface
Competition Between Object Topology and Surface Features in Children’s Extension of Novel Nouns

Abstract. Objects’ topological properties play a central role in object perception, superseding objects’ surface features in object representation and tracking from early in development. We asked about the role of objects’ topological properties in children’s generalization of novel labels to objects. We adapted the classic name generalization task of Landau et al. (1988, 1992). In three experiments, we showed children (n = 151; 3–8-year-olds) a novel object (the standard) and gave the object a novel label. We then showed children three potential target objects and asked children which of the objects shared the same label as the standard. In Experiment 1, the standard object either did or did not contain a hole, and we asked whether children would extend the standard’s label to a target object that shared either metric shape or topology with the standard. Experiment 2 served as a control condition for Experiment 1. In Experiment 3, we pitted topology against another surface feature, color. We found that objects’ topology competed with objects’ surface features (both shape and color) in children’s extension of labels to novel objects. We discuss possible implications for our understanding of the inductive potential of objects’ topologies for making inferences about objects’ categories across early development.

MIT Press

RT @ljuba_pi
DNNs and humans may in part rely on different object features for visual recognition @SfNJournals

https://www.jneurosci.org/content/43/10/1731?fbclid=IwAR31IOlt3LRB5fPzjUuvkv6PJO71L5VJUHNtJwVqd0E9p6GjwuxmH7wn7W8

Deep Neural Networks and Visuo-Semantic Models Explain Complementary Components of Human Ventral-Stream Representational Dynamics

Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. We address this issue by asking which representational features are currently unaccounted for in neural time series data, estimated for multiple areas of the ventral stream via source-reconstructed magnetoencephalography data acquired in human participants (nine females, six males) during object viewing. We focus on the ability of visuo-semantic models, consisting of human-generated labels of object features and categories, to explain variance beyond the explanatory power of DNNs alone. We report a gradual reversal in the relative importance of DNN versus visuo-semantic features as ventral-stream object representations unfold over space and time. Although lower-level visual areas are better explained by DNN features starting early in time (at 66 ms after stimulus onset), higher-level cortical dynamics are best accounted for by visuo-semantic features starting later in time (at 146 ms after stimulus onset). Among the visuo-semantic features, object parts and basic categories drive the advantage over DNNs. These results show that a significant component of the variance unexplained by DNNs in higher-level cortical dynamics is structured and can be explained by readily nameable aspects of the objects. We conclude that current DNNs fail to fully capture dynamic representations in higher-level human visual cortex and suggest a path toward more accurate models of ventral-stream computations. SIGNIFICANCE STATEMENT When we view objects such as faces and cars in our visual environment, their neural representations dynamically unfold over time at a millisecond scale. These dynamics reflect the cortical computations that support fast and robust object recognition. 
DNNs have emerged as a promising framework for modeling these computations but cannot yet fully account for the neural dynamics. Using magnetoencephalography data acquired in human observers during object viewing, we show that readily nameable aspects of objects, such as 'eye', 'wheel', and 'face', can account for variance in the neural dynamics over and above DNNs. These findings suggest that DNNs and humans may in part rely on different object features for visual recognition and provide guidelines for model improvement.

Journal of Neuroscience