Liquid Glass’ blurred content everywhere is especially cruel to those of us who use reading glasses or progressives.

The reflex when we see blurry text on our phones is to adjust our viewing angle or distance to sharpen it. But, of course, it's not our eyes' fault, the text doesn't sharpen, and it just causes eyestrain.

Text on my phone should never be blurry.

You may ask, “How many people could this possibly affect?”

Well…

@marcoarment Which obviously includes all the leaders at Apple, which really makes you wonder if anyone is reviewing any of this

@RyanHyde @marcoarment

They're all rich enough to have had their eyes replaced with the eyes of youths from a developing country.

@marcoarment i think designers are forced to retire at 38yo
@marcoarment Any excuse to wheel out the best eye test! https://youtu.be/dx4nN0HkByg?feature=shared
Father Jack's Eye Test | Father Ted

@marcoarment I'm gonna be that guy, but ChatGPT is not the right tool for this

@outadoc correct, a quick search led me to this well-sourced article indicating that chatgpt took north american numbers and made them sound true for every continent – which they don't seem to be.

https://www.contactlenses.co.uk/education/presbyopia-stats

doesn’t invalidate marco’s broader point, but of course the information is incorrect. because it fucking always is. (not true, it’s just 50% of the time when it comes to correctly regurgitating news. that’s fine, right? https://www.bbc.com/mediacentre/2025/bbc-research-shows-issues-with-answers-from-artificial-intelligence-assistants)

Presbyopia statistics worldwide in 2025

As of 2025, presbyopia affects a very large portion of the world’s population. Estimates suggest that around 1.8 billion people worldwide are presbyopic.

@marcoarment @outadoc, it isn’t. However, long-sightedness *is* very common: doi.org/10.1016/j.ophtha.2018.04.013
@marcoarment I’m not there yet, but last year my optician told me I have maybe 6 years left before I’ll need glasses! I can already feel it creeping in if I’m particularly tired
@marcoarment it happened to me all of a sudden in the last 6 months. One day I was good at focusing up close, the next day it was gone...

@Coach @marcoarment

I hit 42 and my eyes stopped focusing.

@marcoarment this answer is from chatgpt, and therefore any resemblance to reality is purely coincidental.
@marcoarment Don’t ask an LLM a factual question that you can’t test!
@siracusa @marcoarment And if you're trying to demonstrate a point, a link to a live/active source will always be better than a screenshot, since readers can follow the link to help decide how much trust to put in it. 🙏

@siracusa @marcoarment
I interpret Figure 3 of this study as showing presbyopia peaking at about 72% in a given age group, with about 50% of 40–45-year-olds affected.
After reading some Cleveland Clinic and Mayo Clinic articles, I understand why ChatGPT thinks 100% of 65-year-olds are affected: those sources cite 65 as the age by which all your presbyopia has happened to you.
Caveat: I don’t actually know what this line from the cited study means: “It is important to stress that this describes the number of people who would be vision impaired at near without adequate optical correction, not simply those who can not accommodate from distance to near. The latter essentially would be everyone from approximately 55 years of age onward.”

https://www.aaojournal.org/article/S0161-6420(17)33797-1/fulltext

@siracusa
This! Otherwise your source is just "the internet".
@marcoarment
@siracusa @marcoarment Moreover, the pertinent question is not whether you are farsighted, but whether it’s being corrected. For some of you, I am sure, your arms are still long enough.
@siracusa at least use one that's hooked up to a search engine and cites sources!
@fifthrocket The sources are only useful if you click the links and read them! In which case, maybe just try a web search instead.
@siracusa both of these provide website snippets that I find are quite satisfactory (Gemini’s UI attached)
But I think the important part is that I have much much more confidence in an LLM’s ability to summarize content that’s loaded into the context window.
@fifthrocket Still gotta click through the links. LLM summaries can be wrong.
@siracusa I’m not sure how likely that really is.
That was your whole point back in episode 589. If the answer is in the context window, you shouldn’t be surprised if it gets it right. You should expect it to be right basically every time! Pulling stuff out of the context window is quite easy.

@siracusa the notification summarization from Apple Intelligence is uh... less good than leading-edge models. (and should not be a baseline for your expectations.)

I apologize for going on and on, but also, those aren’t summaries, they’re website snippets.

@fifthrocket You can expect all you want, but in practice it’s sometimes wrong. You’ve gotta click the link, otherwise what are you citing? A probabilistic summary of a thing? Maybe it’s an accurate summary. Maybe it’s not. If only there were some way to tell…
@siracusa that “sources” view shows snippets, not ai summaries.
@fifthrocket Are you sure? Only one way to find out…

@siracusa I mean, yea, I checked before telling you they’re snippets. But one doesn’t need to do that each time.

You wouldn't re-read the terms and conditions every time you open an app

@fifthrocket Google search used to show snippets too, but you still need to click the links.
@siracusa on desktop you still get the standard snippet for that. Would you have really admonished @marcoarment for posting this??
@fifthrocket I think many (most?) Google “snippets” are now generated summaries. And that thing at the top in your screenshot looks like the generated “answer” that Google has been putting at the top of its search results for years now.
@siracusa that’s a “featured snippet”. It’s not generated and never has been as far as I know.
@fifthrocket It’s hard to keep up with the ever-changing Google search results page (which may even vary from user to user). The point is, you can’t cite a search results page. If you can’t easily test the answer yourself, click the links.

@siracusa but why does it end there? You’re deciding whether to believe the Google snippet is really on the NIH website. And why do you think it’s accurate just because it’s published on the NIH website? You haven’t looked at the spreadsheet they loaded the results into. Maybe they made a math mistake. And that’s only one study. What you really want is a literature review or meta-analysis.
No such analysis exists, but a search-results screenshot can fit a bunch of snippets from multiple sources all next to each other.
Maybe a screenshot of a bunch of snippets is as good as it gets for an answer to this question in the short-form post format. And an LLM summary of the literature might be nearly as good.

(I’m so sorry to be pedantic, but I actually didn’t realize this was my opinion going in. I appreciate the chance to figure this out through discussion)

@fifthrocket The info on the NIH website could be right or wrong, but citing it is straightforward: this is what the NIH says. Citing instead what your friend thinks might be on the NIH website is a waste of everyone’s time. Cite the NIH or don’t, but adding an extra, unreliable party in the middle is unserious.
@siracusa AI overviews are much newer and are generated summaries, as I understand it. Pretty clearly marked though

@siracusa @fifthrocket

Yep. I've gone through and checked ChatGPT's sources and it is frequently wrong about what the source says. When I "confront" ChatGPT about the mistake, it says: "You're absolutely right; that source doesn't say blah blah blah."

Unless there are mountains of well-established research on a topic, you can't trust any LLM, even if it gives you sources. And if such mountains of research do exist, you're still better off going to a trusted website run by experts.

@zhaozilong you're responding to a screenshot of the snippets view though. I think a bunch of snippets of authoritative websites should be strong evidence, possibly stronger than a screenshot of a quote on a single authoritative website.

@marcoarment The text on my S25 Ultra is perfectly clear. Trade in value for my 16PM was great too.

Edit: I wear varifocals (progressives in Trumpistanish?) and can't see a damned thing without them.

@marcoarment xScope's tools have a setting to simulate this for 20-something designers.

I suspect it's never been used.

@chockenberry @marcoarment I’ve used it along with the colorblind simulators many times. It’s wonderful!
@stegrainer @marcoarment Thanks! Good to know I'm not alone! 😀
@chockenberry @marcoarment some day we find out this was the inspiration for Liquid Glass.
@chockenberry @marcoarment great feature! I won’t be needing it unfortunately.
@chockenberry @marcoarment Used that xScope feature a lot, and not least to create awareness for #A11Y. It’s literally eye-opening.

@chockenberry @marcoarment I'm pretty sure no one at Apple uses it. Reading the station platform or exit number in Apple Maps is impossible with more than +1, and accessibility settings do not make this thing larger.

I believe that, like making architects spend a day in a wheelchair, a day wearing sight-ruining glasses should be required for anyone who calls themselves a "UI/UX designer".

@chockenberry @marcoarment I had used xScope for years but had completely forgotten about it. Now I need to renew my license because I need to use it today!
@marcoarment Too difficult of a question to just punch into a search engine, huh?
Global Prevalence of Presbyopia and Vision Impairment from Uncorrected Presbyopia: Systematic Review, Meta-analysis, and Modelling - PubMed

There is a significant burden of VI from uncorrected presbyopia, with the greatest burden in rural areas of low-resource countries.

@marcoarment Please try not to use ChatGPT or other LLMs to make factual claims. A reliable, published source with a link is a MUCH better way to share such information (and it helps with verification).
@marcoarment You wanting to use ChatGPT to *find* a source for things is one thing (though arguably still problematic), but just citing LLM answers directly as a supposed source is wildly irresponsible.