I Tracked Down the Hidden Workers Secretly Powering ChatGPT

YouTube

Image of the day: a weight for weighing gold, West Africa, 19th century.

#gold #WestAfrica #wikipedia #photography #scale #19thCentury

One #accessibility issue I mention frequently is web designers, presumably for aesthetic reasons, making low-contrast colour choices. This frequently goes along with selecting a #font so small that only people with excellent vision (and no #presbyopia) could read the text even if the #contrast were higher.

Here's an example. I'm not pointing out the software in question, even though you could identify it easily, because this isn't a dunk on that project, specifically.

This is the reference #documentation for an API, a small excerpt from the navigation links that run down a column on the left side of the page. The #text is darkish #grey on a lighter grey background. The contrast is terrible, even setting aside the highlighted entry, which is bolded as the current selection.

If you have #cataracts or any other #vision problem, you're going to have trouble with this. But it gets worse.

That text is 7 pixels high. On my monitors, it's 3 mm high. Ridiculous. Note that if you have fine motor-control problems or use alternative input devices, these are also extremely difficult to click on.

Here's the kicker: for this site, I have Firefox set to #scale the text up to 133%. That 7 pixels / 3 mm is *after* enlarging it.

#Web folks, please try to remember that not everyone is a twenty-something able-bodied person with zero accessibility issues.
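Contrast isn't a matter of taste; WCAG 2 defines it numerically, and you can check a colour pair in a few lines. The sketch below uses the standard WCAG relative-luminance and contrast-ratio formulas; the two grey hex values are illustrative stand-ins for "darkish grey on a lighter grey background", not values sampled from the actual site.

```python
# WCAG 2.x contrast ratio between two sRGB colours.
# The greys are illustrative, not sampled from the real page.

def _luminance(hex_colour: str) -> float:
    """Relative luminance per WCAG 2.x, from a '#rrggbb' string."""
    def channel(c: int) -> float:
        s = c / 255
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio (1:1 to 21:1); WCAG AA wants >= 4.5:1 for body text."""
    lighter, darker = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#777777", "#cccccc")
print(f"{ratio:.2f}")  # well below the 4.5:1 AA threshold
```

Grey-on-grey pairs like this tend to land around 3:1 or worse, which fails WCAG AA for body text; black on white scores the maximum 21:1.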

#WebDesign #WebDesigner #usability #readability #legibility #WebPage

RT by @EnricoLetta: “There is a momentum in #Europe, #Yes the EU can #scale!” says @EnricoLetta / So good to witness the launch at TownHall Europe of the @IEuniversity Competitiveness Hub Brussels! Representing #IEFoundation, @IE_SciTech and #RiseEurope, let’s scale #deeptech and build the EU 4.0!
---
https://nitter.net/GeoffroyGerard/status/2051711435328127465#m

Origin, or: Beyond blue

A ‘Golden Shovel’ poem

where, ultimately, are we from?
simple enough to mouth: beyond
for wise men, for fools, for the
dreamers, eyes trained towards the blue
all of us standing together, here
within our brief finity, within
known existence—my attempts, your
impressions—our shared universe
made of stars, of atoms, the
dust of Mars, ice of Ganymede, gas of our sun
ionized plasma—which shines
sparkling across the waters on
Terra’s surface—then it's me and you

‘Beyond Blue’, a haiku by Ivor Steven

My poem above is a golden shovel based on Ivor’s haiku:

from beyond the blue
here, within your universe
the sun shines on you

Golden shovel?

A golden shovel is a kind of poem where each line ends with a word taken from another poem. The words come in the same order as they appear in the original poem. The new poem can be about something completely different, even though it uses those words.
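The rule is mechanical enough to check in code. Here is a small sketch (function names are mine, not an established library) that extracts each line's final word and compares the sequence to the source poem; as a demo it checks the opening lines of the poem above against the haiku's first line.

```python
import string

def line_end_words(poem: str) -> list[str]:
    """Last word of each non-empty line, lowercased, punctuation stripped."""
    return [line.split()[-1].strip(string.punctuation).lower()
            for line in poem.splitlines() if line.strip()]

def is_golden_shovel(shovel: str, source: str) -> bool:
    """True if the shovel's line-end words spell out the source, in order."""
    source_words = [w.strip(string.punctuation).lower() for w in source.split()]
    return line_end_words(shovel) == source_words

# The opening lines of the poem above, checked against the haiku's first line.
opening = """where, ultimately, are we from?
simple enough to mouth: beyond
for wise men, for fools, for the
dreamers, eyes trained towards the blue"""

print(is_golden_shovel(opening, "from beyond the blue"))  # True
```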

d’Verse

Poets are encouraged to compose ‘golden shovel’ poems at d’Verse.

Let’s write poetry together!

When it comes to partnership, some humans can make their lives alone – it’s possible. But creatively, it’s more like painting: you can’t just use the same colours in every painting. It’s just not an option. You can’t take the same photograph every time and live with art forms with no differences.

Ben Harper (b. 1969)

Would you like to create poetry with me and have a completed poem of yours featured here at the Skeptic’s Kaddish? I am very excited to have launched the ‘Poetry Partners’ initiative and am looking forward to meeting and creating with you… Check it out!

#Existence #GoldenShovel #Haiku #Intimacy #Perception #Poem #Poetry #Questions #Scale #Universe

#KnowledgeByte: The #Kardashev #Scale was developed as a way of measuring a civilization's technological advancement based upon how much usable #Energy it has at its disposal.

Here is a short overview.

https://knowledgezone.co.in/posts/What-is-Kardashev-scale-5ea95d0b247bb32e6075ab46
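The scale is often made continuous using Carl Sagan's interpolation, K = (log10(P) − 6) / 10, with P the usable power in watts; this is the common formula, though the linked overview may present the scale differently. A minimal sketch:

```python
import math

# Carl Sagan's continuous interpolation of the Kardashev scale:
#   K = (log10(P) - 6) / 10, with P the usable power in watts.
# Type I ~ 1e16 W (planetary), Type II ~ 1e26 W (stellar),
# Type III ~ 1e36 W (galactic).

def kardashev(power_watts: float) -> float:
    """Kardashev rating for a given usable power in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's roughly 2e13 W of power use puts us around Type 0.73.
print(f"{kardashev(2e13):.2f}")  # 0.73
```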

RT @FlorianGallwitz: The whole podcast is interesting if you care about the structure of the "token economy", which is about converting energy into AI tokens ("intelligence") as efficiently as possible. For Huang, energy is the bottleneck that will be slowest to remove.

Dwarkesh Patel (@dwarkesh_sp) Distilled recap of the back-and-forth with Jensen on export controls:

Dwarkesh: Wouldn't selling Nvidia chips to China enable them to train models like Claude Mythos with cyber offensive capabilities that would be threats to American companies and national security?

Jensen: First of all, Mythos was trained on fairly mundane capacity and a fairly mundane amount of it by an extraordinary company. The amount of capacity and the type of compute it was trained on is abundantly available in China.

Dwarkesh: With that, could they eventually train a model like Mythos? Yes. But the question is, because we have more FLOPs, American labs are able to get to this level of capabilities first. Furthermore, even if they trained a model like this, the ability to deploy it at scale matters. If you had a cyber hacker, it's much more dangerous if they have a million of them versus a thousand of them.

Jensen: Your premise is just wrong. The fact of the matter is their AI development is going just fine. The best AI researchers in the world, because they are limited in compute, also come up with extremely smart algorithms. DeepSeek is not an inconsequential advance. The day that DeepSeek comes out on Huawei first, that is a horrible outcome for our nation.

Dwarkesh: Currently, you can have a m…

more at Arint.info

#Anthropic #China #Claude #DeepSeek #go #make #nitter #opensource #rest #scale #Tesla #things #UnitedStates #arint_info

https://x.com/FlorianGallwitz/status/2048333556867342353#m

Arint - SEO+KI (@[email protected])


Mastodon Glitch Edition
Full scale Space Rider test craft set for parafoil glide trials
https://atlas.whatip.xyz/post.php?slug=full-scale-space-rider-test-craft-set-for-parafoil-glide-trials
#trials #space #rider #scale

Berlin, Germany (SPX) Apr 22, 2026: The European Space Agency has completed the first full scale test model of its reusable Space Rider spacecraft, marking a step toward flight trials of the vehicle…

RT @MilkRoadAI: Anthropic just had to throttle Claude's thinking depth and cap usage for paying customers. Developers are switching to OpenAI Codex, and the market is already sniffing out who wins from all of it.

Anthropic's revenue tripled in one quarter to a $30B annual run rate; demand grew so fast that a single developer running an AI agent could drain a full day's worth of compute in minutes. Anthropic had to reduce Claude's default thinking mode, introduce peak-hour caps, and test pulling Claude Code off its $20 plan entirely, and paying subscribers started hitting limits they'd never seen before.

There are literally not enough GPUs to serve every request; even the hyperscalers can't build fast enough. Microsoft, Google, Amazon, and Meta are committing a combined ~$700 billion in AI capex in 2026, approaching 100% of their combined operating free cash flow. And it's still not sufficient to meet demand.

That gap is where the neocloud trade lives. CoreWeave ($CRWV), Nebius ($NBIS), and IREN exist because hyperscalers physically can't scale fast enough. These companies accumulated $131 billion in enterprise GPU commitments from zero in under three years. CoreWeave alone has a $66.8 billion revenue backlog; Nebius is running at 98% capacity utilization, essentially sold out. Synergy Research forecasts the entire neocloud market hits $400 billion by 2031, growing at 58% per year. Google just committed $40 billion and 5 gigawatts to Anthropic.

And every AI lab is in the same position: they have the demand, they have the revenue, they are actively looking for anyone who can hand them c…

more at Arint.info

#agent #AIagent #Amazon #Anthropic #Claude #ClaudeCode #Codex #Google #Meta #Microsoft #nitter #OpenAI #OpenAICodex #scale #arint_info

https://x.com/MilkRoadAI/status/2047797528293188024#m


“Supports #OpenTelemetry” used to be enough. It isn’t anymore. At #scale, that binary label stops telling you what matters. Two systems can make the same claim with very different outcomes. @[email protected] outlines a maturity model to make those differences visible early. 👉 eu1.hubs.ly/H0tHMCc0