Liquid Glass’ blurred content everywhere is especially cruel to those of us who use reading glasses or progressives.

The reflex when we see blurry text on our phones is to adjust our viewing angle or distance to sharpen it. But of course it isn’t our eyes’ fault, the text never sharpens, and the effort just causes eyestrain.

Text on my phone should never be blurry.

You may ask, “How many people could this possibly affect?”

Well…

@marcoarment Which obviously includes all the leaders at Apple, which really makes you wonder if anyone is reviewing any of this

@RyanHyde @marcoarment

They're all rich enough to have had their eyes replaced with the eyes of youths from a developing country.

@marcoarment i think designers are forced to retire at 38yo
@agiletortoise if only, somehow Alan Dye persists
@agiletortoise @marcoarment Always relearning the lessons from the last design debacle.
@marcoarment Any excuse to wheel out the best eye test! https://youtu.be/dx4nN0HkByg?feature=shared
@marcoarment I'm gonna be that guy, but ChatGPT is not the right tool for this

@outadoc @marcoarment I think LLMs are the perfect tool for this. I'm curious why you don't agree.

LLMs are great at parsing text and aggregating it. Their entire existence is based on modeling languages. World-knowledge LLMs search the internet far better than plain old Google in my experience. Factual hallucinations are still an issue, but they have been dramatically reduced in the last year.

After a few minutes of "regular" Googling, everything in this screenshot is accurate.

@jimmylittle @marcoarment You have no way to fact-check it. I think it's okay sometimes, but if you post it to many followers, you might as well be spreading misinformation: you have no way to know for sure. Marco has good intentions, but sourcing is going to be more and more important in a world where anyone could just make up text that looks legit.

@outadoc @marcoarment Of course you have a way to fact-check it; it just doesn't fit into a pithy social media screenshot. Just asking "Sources?" after the initial response returned dozens of sources, from presbyopia physicians, Wikipedia (which has further sources), NIH, NOA, and more.

In my view, posting the screenshot makes it NOT misinformation. It lets us know where the info came from, and how we interpret that info can be gauged against that source.

If he just typed "80% of people have trouble seeing up close", then that would be a much different (and worse) situation.

@jimmylittle to be clear: they really aren’t great at it: https://www.bbc.com/mediacentre/2025/bbc-research-shows-issues-with-answers-from-artificial-intelligence-assistants

and even the example only seems to hold up for north america but makes it sound like it’s globally applicable https://mastodon.social/@hagen/114778841385753624

@hagen 50% accuracy is probably a better hit rate than above-the-fold Google result relevance, but that aside…

You’re right about most tech being US-centric, which is a huge problem. American companies (and all the big tech companies are American) tend to forget they are global corporations with worldwide users.

@jimmylittle comparing llm outputs to google results when it comes to accuracy ignores the biggest flaw of the technology though: the confidence with which the information is presented makes llm users way more susceptible to believing the output. it’s literally triggering the same mechanism con-men use to make us trust them. there is a reason why studies of actual usage patterns show how harmful the impact of llm chatbots is.

@hagen I totally agree with this. The false confidence in the summaries and answers is a problem, but that’s not how I use them.

I use LLMs as an intern, not an assistant. They’re great at gathering sources and info, but (currently) below average at processing it.

They may or may not get better at the processing part, but letting them handle the tedium of link-gathering is extraordinarily helpful.

@jimmylittle the problem is that everybody else uses them that way and that’s simply harmful to society as a whole. and i think ignoring that bit because a few people understand what they actually do and use them accordingly is very dangerous.

@hagen You may be right, time will tell. But since “everybody else” is already using it, you can’t put the toothpaste back in the tube.

I’m old enough to remember newspaper articles about how misuse of the internet would lead to the demise of humanity. The internet certainly f’ed up a lot of stuff along the way (lookin’ at you Zuck), but has been a net positive for society as a whole.

I’d rather use and understand emerging technologies than try to play catch-up in the future.

@jimmylittle yes you very much can. it’s not toothpaste. it’s a tool that isn’t making money, without a clear path to generating revenue, but with legal proceedings against it that could rule the business model illegal.

this whole inevitability narrative is bullshit. there is no law of nature that tells us technology has to be used. and chatbots are not like the internet. they aren’t infrastructure. they are an incredibly bad interface for llms.

@hagen There will be legislation, I hope. The initial scraping of data to train these models is certainly unethical and may be illegal (tho the recent Anthropic ruling leans towards "fair use", which I wholeheartedly disagree with).

But that doesn't mean the whole business model is illegal. Adobe has proven that reliable models can be trained on properly licensed data. NYT has made deals to license content for training. AP has done the same. Training models on owned content isn't illegal; doing it without permission is (or should be...)

Google faced similar lawsuits in the early days. Scraping the web? Summarizing content? It's all been litigated before, and everyone is fine with it as long as there are guardrails and financial incentives for both parties.

@jimmylittle that is not the correct read on the situation. the part where anthropic bought physical books and scanned them is fair use. torrenting libgen and ingesting youtube against their terms of service isn’t, and is piracy times millions (which alone could bankrupt them). it also only concerns training, not the output.

the deals made by publishers are to build a marketplace, which is helpful for copyright lawsuits because it establishes that there is a value in their content for training.

@jimmylittle the comparison to google doesn’t work because we live in different times, where sentiment is very much stacked against big tech. also, google’s mission was to send people to the websites; chatbots are built to keep people in the system. VERY different, and important when talking about the pillars of fair use.

and yes, adobe showed how it is possible. but they also aren’t building a chatbot, and theirs is a very specialized use case.

@jimmylittle and like you said: that’s only training, not output. completely different things and output simply hasn’t been ruled against. but you know who has entered the chat? disney. who REALLY aren’t in the habit of losing copyright lawsuits. this really isn’t done yet. especially if you consider that in the us and the eu every party is (rightfully) mad at tech companies.

@hagen For sure. There are lots of legal issues to resolve, and honestly it's probably not going to go the way we want it to, given the current US administration.

I heard someone (I think it was on the ATP podcast...) compare it to music in the early 2000s. Napster was a disruptor that allowed everyone to steal music. The industry responded by making music easier to buy, then easy to stream. It did not really work out well for the artists (mostly because of the record companies), but the legalities were worked out, deals were made, and customers were ultimately better served.

@jimmylittle @outadoc @marcoarment You start out with saying LLMs are “the perfect tool for this” and a moment later say “factual hallucinations are still an issue” when the entire post from Marco is about a factual item he asked ChatGPT. Can’t have it both ways.

@Aaron @outadoc @marcoarment Trust but verify.

Same thing I do with a Google search.

@jimmylittle @outadoc @marcoarment But that’s literally double the work! Just freaking use Google or Kagi or DDG or StartPage and get the answer by clicking through to the correct results. Why ask a bullshit machine for information and then have to go out and verify it’s accurate? HOWWWW is that easier or even close to on-par with just searching in the first place?

@Aaron @outadoc @marcoarment Is the first Google result always right? Should I just listen to a subreddit?

The thing with LLM answers is they summarize and cite multiple sources, so you’re more likely to find a reliable source than just trusting a search engine. SEO has ruined reliable search results on the mainstream search engines.

@jimmylittle @outadoc @marcoarment That your misinfo is hard to fact check because it was invented by your prompt is not a reason to trust it 🫣

@boxed @outadoc @marcoarment It’s not hard to fact-check: it’s easier to fact-check a ChatGPT answer than just about anything else. It literally lists the sources.

But everyone has their process. I like an LLM answer with cited sources. Some people just pick the first Google result. Some go right to Wikipedia or Reddit.

Every single one of those “sources” is confidently wrong as often as not; it’s just down to each person’s experience with them. Trust but verify.

@jimmylittle @outadoc @marcoarment what's a 'factual' hallucination as distinct from... what?

@oscarjiminy @outadoc @marcoarment There are all kinds of hallucinations. LLMs currently have a problem of presenting things that aren’t there as though they were facts.

A guy on mushrooms seeing dancing pink elephants is a different kind of hallucinating.

There is an important distinction. 🍄

@jimmylittle @outadoc @marcoarment it's a specious distinction in that we are not addressing human subjects

so far as machine output goes, 'factual' hallucinations are identical to any other output you might term a hallucination

@jimmylittle @outadoc @marcoarment any visual output, however photorealistic, might be categorised as machine hallucination, given machines have no (and cannot have any) 'experience' of the material world
@jimmylittle @outadoc @marcoarment the same is true of any kind of linguistic output

@oscarjiminy @outadoc @marcoarment To be clear. *I* don’t term them as hallucinations, the industry does.

I consider them bugs in the output.

@jimmylittle @outadoc @marcoarment so what's a factual bug?

hallucination's a cultural adaptation to describe an emerging phenomenon

@jimmylittle @outadoc @marcoarment it's a perfectly cromulent term

@jimmylittle @outadoc @marcoarment calling them bugs in the output is demonstration of a similar category error

they are 'bugs' given the expectation. the expectation being coherent, logical output that is in full agreement with the material world/linguistic norms and expectations

those expectations are a bug in the wetware. they are absurd expectations

@oscarjiminy @outadoc @marcoarment I disagree. LLMs are just software. Software has bugs, and if done correctly, gets updated so there are fewer and fewer bugs as time goes on.

LLMs look bad when the expectation is 100% accuracy, but the reality is no search software is 100% accurate.

My guess is the sources that LLMs cite (NOT the text they spit out, but the sources they cite) are far more relevant to the question than the top 5 Google results (of which 3 are ads, one link is useful, and one is an SEO Link Farm).

The beauty of modern web-based software is that there are lots of options. You can always just _not use_ the ones you don't like.

@jimmylittle @outadoc @marcoarment ok so we are not addressing software, strictly speaking we are addressing a cultural entity (however it may be bound in/defined in software). that’s a distinction in so far as the bug is in *output* (as you yourself asserted). The output is faulty given expectation.

llms are not search engines; at least that’s not the primary function. the function is to aggregate, collate, extrapolate through imitation. that function (the output) is fundamentally different from what google is oriented to deliver (historically speaking).

there are so many different products that are conflated into ai (or/and that claim to be enhanced/informed by ai) that it can be tricky to speak generally but in so far as objections go a fundamental one is as to how they are framed. these services cannot reason.

for folks who believe they can or/and that cognition/sentience is not a necessarily embodied phenomenon the agi singularity is an article of faith. it is not rational.

@jimmylittle @outadoc @marcoarment citation is also a category error.

there is no citation as such with llm output. many hallucinations pertain to non-existent references, but specifically (in so far as reason, cognition, understanding go), beyond approximation there is currently no effective way to establish what needs to be cited (in academic terms) and what can merely inform the output.

this is entirely outside of consideration of hallucinated refs.

@outadoc correct, a quick search led me to this well-sourced article, which indicates that chatgpt took north american numbers and made them sound like they are true for every continent – which they don’t seem to be.

https://www.contactlenses.co.uk/education/presbyopia-stats

doesn’t invalidate marco’s broader point, but of course the information is incorrect. because it fucking always is. (not true, it‘s just 50% of the time when it comes to correctly regurgitating news. that’s fine, right? https://www.bbc.com/mediacentre/2025/bbc-research-shows-issues-with-answers-from-artificial-intelligence-assistants)

Presbyopia statistics worldwide in 2025

"As of 2025, presbyopia affects a very large portion of the world’s population. Estimates suggest that around 1.8 billion people worldwide are presbyopic."

@marcoarment @outadoc, it isn’t. However, long-sightedness *is* very common: doi.org/10.1016/j.ophtha.2018.04.013
@marcoarment I’m not there yet, but last year my optician told me I have maybe 6 years left before I’ll need glasses! I can already feel it creeping in if I’m particularly tired
@marcoarment it happened to me all of a sudden in the last 6 months. One day I was good at focusing up close, the next day it was gone...

@Coach @marcoarment

I hit 42 and my eyes stopped focusing.

@marcoarment this answer is from chatgpt, and therefore any resemblance to reality is purely coincidental.
@marcoarment Don’t ask an LLM a factual question that you can’t test!
@siracusa @marcoarment And if you're trying to demonstrate a point, a link to a live/active source will always be better than a screenshot, since readers can follow the link to help decide how much trust to put in it. 🙏

@siracusa @marcoarment
I interpret Figure 3 of this study as showing presbyopia peaking at about 72% in a given age group, with about 50% of 40-45 year olds affected.
After reading some Cleveland Clinic and Mayo Clinic articles, I understand why ChatGPT thinks 100% of 65 year olds are affected: those sources cite 65 as the age by which all your presbyopia has happened to you.
Caveat: I don’t actually know what this line from the cited study means: “It is important to stress that this describes the number of people who would be vision impaired at near without adequate optical correction, not simply those who can not accommodate from distance to near. The latter essentially would be everyone from approximately 55 years of age onward.”

https://www.aaojournal.org/article/S0161-6420(17)33797-1/fulltext

@siracusa @marcoarment
This! Otherwise your source is just "the internet".
@siracusa @marcoarment Moreover, the pertinent question is not whether you are farsighted, but whether it’s being corrected. For some of you, I am sure, your arms are still long enough.
@siracusa at least use one that's hooked up to a search engine and cites sources!
@marcoarment It’s the new grey on grey. Only when I pulled that kind of shit as a junior web designer, someone senior would shout “FUCK NO”, give me a (metaphorical) slap, and tell me to do it properly. Where are all the people at Apple yelling FUCK NO?

@marcoarment That captures it well. Been playing with it on my iPad and while it isn't as terrible as I feared at first, I totally hate this design.

Designers need to get it through their heads that transparency is a mistake.