maya ⛓️

395 Followers
934 Following
18.8K Posts

literal swamp #goblin. aesthete, enthusiast, techie scum. PNW pasture-raised.

no hard feelings about dipping out after following if my stuff turns out to not be for you. i follow/unfollow without thinking about it much.

my employer has never wanted me to share an opinion publicly and every day i ensure they never will

all other notices/disclaimers: https://maya.land/mastodon-landing/

"leading voice of the goblin web" --@minterpunct

Pronouns: she/her
Website: https://maya.land
Location: Seattle
I'm migrating this account to @maya imminently! Thank you for your patience in the interim; I'm going to try to keep as much as possible the same so this doesn't get too confusing.
DNS stands for do not squish

> I find that attractive servers earn approximately $1261 more per year in tips than unattractive servers, the primary driver of which is female customers tipping attractive females more than unattractive females.

https://ideas.repec.org/a/eee/joepsy/v49y2015icp34-46.html

oh do i have thoughts

Beauty and the feast: Examining the effect of beauty on earnings

This paper looks at the effect of beauty on earnings using restaurant tipping data. Customers were surveyed as they left a set of five Virginia restaurants about the dining experience, their server, …

You do unbend your noble strength, to think
So brainsickly of things.
Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models

Modern language models are trained on large amounts of data. These data inevitably include controversial and stereotypical content, which contains all sorts of biases related to gender, origin, age, etc. As a result, the models express biased points of view or produce different results based on the assigned personality or the personality of the user. In this paper, we investigate various proxy measures of bias in large language models (LLMs). We find that evaluating models with pre-prompted personae on a multi-subject benchmark (MMLU) leads to negligible and mostly random differences in scores. However, if we reformulate the task and ask a model to grade the user's answer, this shows more significant signs of bias. Finally, if we ask the model for salary negotiation advice, we see pronounced bias in the answers. With the recent trend for LLM assistant memory and personalization, these problems open up from a different angle: modern LLM users do not need to pre-prompt the description of their persona since the model already knows their socio-demographics.

what

i wonder how you could keep this mounted such that you could draw right onto the sheet from the roll before cutting

i suppose the secondary challenge would be figuring out what to do with a bunch of giant doodles. cut them out and poster tape them to walls? maybe the real move is figuring out wheatpaste....

https://www.amazon.com/dp/B0DNNSMZTM

Amazon.com : Bryco Goods 36 Inch Paper Roll Dispenser and Cutter, Wall Mountable & Non-Slip Tabletop, Heavy-Duty Steel Frame – Kraft, Butcher, Freezer, Wrapping Paper Holder – for Home, Office, Craft Projects : Office Products

I don’t want to not have to trust anybody. If the value proposition of your technology is “you don’t have to trust Anyone!” I suspect that you and I are working at cross purposes.
simple. unadorned. effective.
https://youtu.be/pHjocCtkSxY
fleetwood mac dreams but the instrumental is 360 by charli xcx

But screw your courage to the sticking-place,
And we'll not fail.