Liquid Glass’ blurred content everywhere is especially cruel to those of us who use reading glasses or progressives.

The reflex when we see blurry text on our phones is to adjust our viewing angle or distance to sharpen it. But, of course, our eyes aren’t the problem, the text doesn’t sharpen, and the effort just causes eyestrain.

Text on my phone should never be blurry.

You may ask, “How many people could this possibly affect?”

Well…

@marcoarment I'm gonna be that guy, but ChatGPT is not the right tool for this

@outadoc @marcoarment I think LLMs are the perfect tool for this. I'm curious why you don't agree.

LLMs are great at parsing text and aggregating it. Their entire existence is based on modeling language. World-knowledge LLMs search the internet far better than plain-old-Google in my experience. Factual hallucinations are still an issue, but they have been dramatically reduced in the last year.

After a few minutes of "regular" Googling, everything in this screenshot is accurate.

@jimmylittle @outadoc @marcoarment what's a 'factual' hallucination as distinct from... what?

@oscarjiminy @outadoc @marcoarment There are all kinds of hallucinations. LLMs currently have a problem of presenting things that aren’t there as though they were facts.

A guy on mushrooms seeing dancing pink elephants is a different kind of hallucinating.

There is an important distinction. 🍄

@jimmylittle @outadoc @marcoarment it's a specious distinction in that we are not addressing human subjects

so far as machine output goes, 'factual' hallucinations are identical to any other output you may term a hallucination

@oscarjiminy @outadoc @marcoarment To be clear: *I* don’t term them hallucinations, the industry does.

I consider them bugs in the output.

@jimmylittle @outadoc @marcoarment calling them bugs in the output is a demonstration of a similar category error

they are 'bugs' given the expectation. the expectation being coherent, logical output that is in full agreement with the material world/linguistic norms and expectations

those expectations are a bug in the wetware. they are absurd expectations

@oscarjiminy @outadoc @marcoarment I disagree. LLMs are just software. Software has bugs and, if done correctly, gets updated so there are fewer and fewer bugs as time goes on.

LLMs look bad when the expectation is 100% accuracy, but the reality is no search software is 100% accurate.

My guess is the sources that LLMs cite (NOT the text they spit out, but the sources they cite) are far more relevant to the question than the top 5 Google results (of which 3 are ads, one link is useful, and one is an SEO Link Farm).

The beauty of modern web-based software is that there are lots of options. You can always just _not use_ the ones you don't like.

@jimmylittle @outadoc @marcoarment ok so we are not addressing software; strictly speaking, we are addressing a cultural entity (however it may be bound up in/defined in software). that’s a distinction in so far as the bug is in *output* (as you yourself asserted). The output is faulty given the expectation.

llms are not search engines, or at least that’s not their primary function. the function is to aggregate, collate, and extrapolate through imitation. that function (the output) is fundamentally different from what google is oriented to deliver (historically speaking).

there are so many different products that are conflated into ai (and/or that claim to be enhanced/informed by ai) that it can be tricky to speak generally, but in so far as objections go, a fundamental one is how they are framed. these services cannot reason.

for folks who believe they can, and/or that cognition/sentience is not a necessarily embodied phenomenon, the agi singularity is an article of faith. it is not rational.

@jimmylittle @outadoc @marcoarment citation is also a category error.

there is no citation as such with llm output. many hallucinations pertain to non-existent references, but specifically (in so far as reason, cognition, and understanding go), beyond approximation there is currently no effective way to establish what needs to be cited (in academic terms) and what can merely inform the output.

this is entirely outside of consideration of hallucinated refs.