Drew

@__hoyt
1 Follower
12 Following
18 Posts
Here for a good time, not necessarily a long time.
@wonderofscience If only we could all get this far away from Putin and Trump. Maybe we could stick them on one of Elon’s rockets with him, and we can ship them all off to Saturn!
@Strandjunker His followers are already justifying why he should get more years in office….
@streetartutopia I wish Americans had 1% of the courage.
Pretty poor officiating in this Birds game… AJ Brown absolutely was holding Lattimore’s facemask during the play and then proceeded to rip his helmet off… and the flag went against the Commanders….
@kcarruthers @GossiTheDog OpenVibe is good as well.
@KrajciTom @dgoldsmith @mekkaokereke Now compare your findings to economic status per capita and education levels reached, and I’m guessing you’ll find the answers you seek.
@CyberSecJsmes @briankrebs Can you fill all tires with nitrogen? Where do you get them filled?
TAMU wants to lose? They need 10 points, so they go for it on 4th and goal? Wtf
@jonmsterling There has been some good research done and papers written about this topic in the last year: https://arxiv.org/abs/2404.03502
AI and the Problem of Knowledge Collapse

While artificial intelligence has the potential to process vast amounts of data, generate new insights, and unlock greater productivity, its widespread adoption may entail unforeseen consequences. We identify conditions under which AI, by reducing the cost of access to certain modes of knowledge, can paradoxically harm public understanding. While large language models are trained on vast amounts of diverse data, they naturally generate output towards the 'center' of the distribution. This is generally useful, but widespread reliance on recursive AI systems could lead to a process we define as "knowledge collapse", and argue this could harm innovation and the richness of human understanding and culture. However, unlike AI models that cannot choose what data they are trained on, humans may strategically seek out diverse forms of knowledge if they perceive them to be worthwhile. To investigate this, we provide a simple model in which a community of learners or innovators choose to use traditional methods or to rely on a discounted AI-assisted process and identify conditions under which knowledge collapse occurs. In our default model, a 20% discount on AI-generated content generates public beliefs 2.3 times further from the truth than when there is no discount. An empirical approach to measuring the distribution of LLM outputs is provided in theoretical terms and illustrated through a specific example comparing the diversity of outputs across different models and prompting styles. Finally, based on the results, we consider further research directions to counteract such outcomes.

arXiv.org
Low Quality Facts (@[email protected])

Attached: 1 image
They can just pick any random number.

Mastodon 🐘