dee homak

@aalien
45 Followers
46 Following
116 Posts
we used to hate people
now we just make fun of them
it's more effective that way
pronouns: he/she/MAY/HEM
telegram: @dhomak
twitter (is it dead yet?): @aalien
@jepyang oh, thank you! i’ll shop around a bit, then

PSA: this is a shitposting account with cats and shitposts. you are being warned.

opinions are my own and, frankly, outrageous.

you are valid.

our time will come.

@lilydavay * is bewildered, in Tel-Avivian *

well, uh, so what exactly did they find so scary in that dialogue?

catposting commence!
#cats #cat
new paper from @abebab shows that the more you scale up the data in your web-scraped models, the more hate and racism you get: https://arxiv.org/abs/2306.13141
On Hate Scaling Laws For Data-Swamps

"Scale the model, scale the data, scale the GPU-farms" is the reigning sentiment in the world of generative AI today. While model scaling has been extensively studied, data scaling and its downstream impacts remain under explored. This is especially of critical importance in the context of visio-linguistic datasets whose main source is the World Wide Web, condensed and packaged as the CommonCrawl dump. This large scale data-dump, which is known to have numerous drawbacks, is repeatedly mined and serves as the data-motherlode for large generative models. In this paper, we: 1) investigate the effect of scaling datasets on hateful content through a comparative audit of the LAION-400M and LAION-2B-en, containing 400 million and 2 billion samples respectively, and 2) evaluate the downstream impact of scale on visio-linguistic models trained on these dataset variants by measuring racial bias of the models trained on them using the Chicago Face Dataset (CFD) as a probe. Our results show that 1) the presence of hateful content in datasets, when measured with a Hate Content Rate (HCR) metric on the inferences of the Pysentimiento hate-detection Natural Language Processing (NLP) model, increased by nearly 12% and 2) societal biases and negative stereotypes were also exacerbated with scale on the models we evaluated. As scale increased, the tendency of the model to associate images of human faces with the "human being" class over 7 other offensive classes reduced by half. Furthermore, for the Black female category, the tendency of the model to associate their faces with the "criminal" class doubled, while quintupling for Black male faces. We present a qualitative and historical analysis of the model audit results, reflect on our findings and its implications for dataset curation practice, and close with a summary of our findings and potential future work to be done in this area.

arXiv.org
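
for anyone wondering what a Hate Content Rate style measurement could look like in practice, here is a minimal sketch using the pysentimiento hate-speech analyzer the abstract mentions. this is not the paper's pipeline: the caption list, the "any sample with a hateful label counts" rule, and the function name are my own assumptions for illustration.

```python
# Minimal sketch of an HCR-style measurement over text captions using pysentimiento.
# Assumptions (not from the paper): the input is a plain list of caption strings,
# and a sample counts as hateful if the analyzer emits the "hateful" label for it.
from pysentimiento import create_analyzer


def hate_content_rate(captions: list[str]) -> float:
    """Return the fraction of captions flagged as hateful by the hate-speech model."""
    analyzer = create_analyzer(task="hate_speech", lang="en")
    flagged = 0
    for text in captions:
        result = analyzer.predict(text)
        # result.output is a list of labels, e.g. ["hateful", "aggressive"]
        if "hateful" in result.output:
            flagged += 1
    return flagged / len(captions) if captions else 0.0


if __name__ == "__main__":
    # Toy placeholder captions, not data from LAION or the paper.
    sample = ["a cat sleeping on a keyboard", "a person walking a dog"]
    print(f"HCR: {hate_content_rate(sample):.4f}")
```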
@[email protected] aaaaaaaaaa

@[email protected] any advice for legacy users? i registered there back in 2017 and my acct was in semi-hibernation all these years.

is there a way to migrate without much suffering and without ghost accounts?

(ignore if it’s not your area of expertise)

@lilydavay well, i'm still going to PEDANTICALLY CLARIFY anyway
i'm not dismissing it, i'm just freaking out in a different way than you ++++ wildest traumatic flashbacks, sorry for that outburst
@lilydavay i really want to say that today literally millions of people felt exactly what i felt in november 2012, when roskomnadzor showed up: wait, what. wait, how. but our contracted banners haven't finished their runs. what do you mean "banned entirely"…