"#AI training introduces a significant shift – individual decisions no longer terminate with the present but… influence the future behavior of scalable algorithms. This amplifies the impact of individual actions, creating lasting #externalities. Yet, the aggregation of data from many individuals may lead to diffused #responsibility, weakening the sense of pivotality. … leading to less prosocial behavior compared to a situation with high perceived pivotality for algorithmic outcomes.
… removing pivotality led to increased #selfishness in how humans trained the algorithm. Importantly, this change in revealed #socialPreferences was driven by a shift in individual responsibility (the power over one’s own or others’ fate) rather than by the incentive structure (the expected additional payoff from one’s current decisions via the AI’s training).
… findings reveal a positive correlation between participants’ beliefs about others’ revealed preferences in generating training data and their own AI training choices when they were pivotal for others’ payoffs. This pattern points to a potential #falseConsensus effect or belief-distortion mechanism: participants justify selfish behavior by assuming others are also selfish, rather than offsetting others’ selfishness through prosocial actions."
#ExperimentalEcon