Interested in Network Science and Machine learning to create computational “macroscopes” that enable us to look into our society at a greater resolution with big and connected data. Assistant Professor at Binghamton University.
Network Science
Representation Learning
Science of Science

Check out our new paper about predicting the auction price of contemporary art pieces! 🎨📈

https://www.nature.com/articles/s41598-024-60957-z

In this paper, led by Kangsan Lee & Jaehyuk Park (with @samgoree and David Crandall), we examined a comprehensive dataset of art auctions of contemporary artists to understand what really determines the price of art. 🧵

Social signals predict contemporary art prices better than visual features, particularly in emerging markets - Scientific Reports

What determines the price of an artwork? This article leverages a comprehensive, novel dataset on art auctions of contemporary artists to examine the impact of social and visual features on the valuation of artworks across global markets. Our findings indicate that social signals predict the price of an artwork exceptionally well, even approaching professionals’ prediction accuracy, while visual features play a marginal role. This pattern is especially pronounced in emerging markets, supporting the idea that social signals become more critical when quality is harder to assess. These results strongly suggest that the value of an artwork is largely shaped by social factors, particularly in emerging markets, where there is a stronger preference for “buying an artist” than for “buying an artwork.” Additionally, our study shows that it is possible to boost experts’ performance, highlighting the potential benefits of human-machine models in uncertain or rapidly changing markets, where expert knowledge is limited.

We divided the features of each art piece into two categories: visual 👁️ and social 👀. Visual features include traditional computer vision features as well as high-level features from a CNN. Social features are all the non-content information we have about the artist, the market, etc. In a way, social features are what we can know when an artist says "I have created a 4x3 painting but I can't show it to you." 🧵
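To make the visual/social split concrete, here is a minimal sketch with simulated data (the feature names, weights, and model are illustrative assumptions, not the paper's actual dataset or method): we generate toy log-prices driven mostly by social signals, then compare the fit of social-only vs. visual-only linear models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Toy visual features: e.g. canvas dimensions plus a few CNN-like embedding
# dimensions (purely illustrative placeholders).
visual = rng.normal(size=(n, 5))
# Toy social features: e.g. artist exhibition count, prior auction count,
# gallery prestige (also illustrative placeholders).
social = rng.normal(size=(n, 3))

# Simulate log-prices dominated by social signals, echoing the paper's finding.
log_price = (social @ np.array([1.5, 1.0, 0.8])
             + visual @ np.array([0.1, 0.05, 0.0, 0.0, 0.0])
             + rng.normal(scale=0.5, size=n))

def r2(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

print(f"visual-only R^2: {r2(visual, log_price):.2f}")
print(f"social-only R^2: {r2(social, log_price):.2f}")
```

Because the simulated prices load mostly on the social features, the social-only model explains far more variance, which is the shape of the result the thread describes.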

python_dependency: Python Dependency Network

Python's package dependency network. Nodes are Python packages registered on PyPI; edges are dependencies between packages.

The network has 58,743 nodes and 108,399 edges.

Tags: Technological, Software, Unweighted
https://networks.skewed.de/net/python_dependency

#networks #data #networkscience #netzschleuder
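As a toy illustration of the data model (package nodes, dependency edges), here is a tiny adjacency-dict sketch; the package names and dependencies below are examples, not drawn from the PyPI snapshot:

```python
# Tiny directed dependency graph: package -> set of packages it depends on.
# Names and dependencies are illustrative, not taken from the dataset.
deps = {
    "app": {"requests", "numpy"},
    "scraper": {"requests"},
    "requests": {"urllib3", "idna"},
    "numpy": set(),
    "urllib3": set(),
    "idna": set(),
}

nodes = set(deps) | {d for targets in deps.values() for d in targets}
edges = [(pkg, dep) for pkg, targets in deps.items() for dep in targets]

# In-degree = how many packages depend on you (a rough popularity measure).
in_degree = {n: 0 for n in nodes}
for _, dep in edges:
    in_degree[dep] += 1

print(len(nodes), "nodes,", len(edges), "edges")
print("most depended-upon:", max(in_degree, key=in_degree.get))
```

For the real 58,743-node network, the Netzschleuder copy can be fetched directly, e.g. with graph-tool's collection loader (`graph_tool.collection.ns["python_dependency"]`), assuming graph-tool is installed.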


We've all been laughing at the obvious fails from Google's AI Overviews feature, but there's a serious lesson in there too about how it disrupts the relational nature of information. More in the latest Mystery AI Hype Theater 3000 newsletter:

https://buttondown.email/maiht3k/archive/information-is-relational/

Information is Relational

Google's AI Overviews Fails Helpfully Highlight a Source of Danger. By Emily. "The local volcano that did erupt during my lifetime: Mt St Helens on May 18,..."

🚨Preprint Alert🚨 Benchmarks guide #MachineLearning, but is the core benchmark for #GraphML, the link prediction task, guiding us correctly? Our latest preprint questions its validity, reveals a bias that substantially skews the evaluation, and proposes a degree-corrected link prediction benchmark. Let's dive in!

https://arxiv.org/abs/2405.14985

More on 👉 https://twitter.com/skojaku/status/1795413358818013431
Implicit degree bias in the link prediction task

Link prediction, the task of distinguishing actual hidden edges from random unconnected node pairs, is one of the quintessential tasks in graph machine learning. Despite being widely accepted as a universal benchmark and a downstream task for representation learning, the validity of the link prediction benchmark itself has rarely been questioned. Here, we show that the common edge-sampling procedure in the link prediction task has an implicit bias toward high-degree nodes and produces a highly skewed evaluation that favors methods overly dependent on node degree, to the extent that a "null" link prediction method based solely on node degree can yield nearly optimal performance. We propose a degree-corrected link prediction task that offers a more reasonable assessment and aligns better with performance on the recommendation task. Finally, we demonstrate that the degree-corrected benchmark can more effectively train graph machine-learning models by reducing overfitting to node degrees and facilitating the learning of relevant structures in graphs.
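A minimal sketch of the "null" predictor idea on a toy graph (pure Python, illustrative only; this is not the paper's code, and the +1 degree smoothing is a choice made for this sketch): score each candidate pair by the product of its endpoint degrees, then check how often a held-out edge outranks a sampled non-edge, an AUC-style measure. On a hub-heavy graph, degrees alone already rank held-out edges well, which is the bias the preprint describes.

```python
import itertools
import random

random.seed(0)

# Toy graph with a hub (node 0 linked to 29 leaves) plus a sparse chain.
edges = {(0, i) for i in range(1, 30)} | {(30, 31), (31, 32), (32, 33)}
nodes = sorted({u for e in edges for u in e})

# Hold out ~20% of edges as positives; the rest is the observed training graph.
edge_list = sorted(edges)
random.shuffle(edge_list)
k = len(edge_list) // 5
held_out = edge_list[:k]
train = edge_list[k:]

deg = {n: 0 for n in nodes}
for u, v in train:
    deg[u] += 1
    deg[v] += 1

def score(u, v):
    # Degree-based "null" score; +1 smoothing so endpoints whose only edge was
    # held out still get a nonzero score (an assumption of this sketch).
    return (deg[u] + 1) * (deg[v] + 1)

# Negatives: unconnected node pairs (pairs are stored with u < v throughout).
non_edges = [p for p in itertools.combinations(nodes, 2) if p not in edges]

# AUC-style comparison: how often a held-out edge outscores a sampled non-edge.
wins = ties = trials = 0
for pos in held_out:
    for neg in random.sample(non_edges, 30):
        sp, sn = score(*pos), score(*neg)
        wins += sp > sn
        ties += sp == sn
        trials += 1

auc = (wins + 0.5 * ties) / trials
print(f"degree-product AUC on toy graph: {auc:.2f}")
```

The held-out edges here mostly attach to the hub, so the degree-only score separates them from random non-edges despite using no graph structure beyond degrees.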
