Jaime Henriquez, Ph.D.

@GiantGroundSloth
0 Followers
11 Following
34 Posts

Teacher, writer, interdisciplinary scholar, “big picture” person. Tech/Humanities.

A cynical optimist and student of the “No Stop Signs” intersection of technology and human nature.

I like to think about something interesting, then ponder and research until I think I understand it as much as I want to. Then go tell someone, or sit here stalling and fine-tuning. :/

I love playing with language; it's a technology like politics, computers, MMOs, Pa Kua, logic - most any tool.

Cheers!

I'm just encountering "Who to follow" here on Mastodon, and I thought I had an innate sense of what the typology[?] was, but getting a "personalized suggestion" from the European Commission is beyond me.

Jaime

Another AI Company Wrote Us and Here’s Our Response

https://warandpeas.substack.com/p/another-ai-company-wrote-us-and-heres


Why the current hype around AI is a slap in the face for creatives.

War and Peas

@dangillmor
Rebecca Solnit’s piece in Saturday’s Guardian:

“[The MSM] have become a stampeding herd producing an avalanche of stories suggesting Biden is unfit, [...] They do this while ignoring something every scholar and critic of journalism knows well and every journalist should. As Nikole Hannah-Jones put it: ‘As media we consistently proclaim that we are just reporting the news when in fact we are driving it. What we cover, how we cover it, determines often what Americans think is important and how they perceive these issues yet we keep pretending it’s not so.’ They are not reporting that he is a loser; they are making him one.”

https://www.theguardian.com/commentisfree/article/2024/jul/06/biden-trump-race-rebecca-solnit

Why is the pundit class so desperate to push Biden out of the race?

Yes, Biden had a bad debate – but so did Trump. The media is once again repeating the mistakes of 2016

The Guardian

Dear @rbreich,

I am enjoying your videos on economic issues, particularly on monopolistic pricing. Thank you, sir.

Something that has long puzzled me is what seems like a simple question but probably isn't: how many competitors does it take to make a "free" market? One is clearly insufficient, and four is apparently not enough in the food "market". So is there a magic number or range that non-economists (or regulators) can use as a rule of thumb?

Best regards, and thanks for reading - Jaime

As a PhD student, I was fascinated to see how many bright peers were developing price-fixing algorithms, under the rationale of market optimization and helping businesses.

This month, the FTC reminded everyone that this is still illegal.

https://www.ftc.gov/business-guidance/blog/2024/03/price-fixing-algorithm-still-price-fixing?utm_campaign=landlords_and_property_ma&utm_content=1709317844&utm_medium=social&utm_source=linkedin

Price fixing by algorithm is still price fixing

Landlords and property managers can’t collude on rental pricing. Using new technology to do it doesn’t change that antitrust fundamental. Regardless of the industry you’re in, if your business uses an algorithm to determine prices, a brief filed by the FTC and the Department of Justice offers a helpful guideline for antitrust compliance: your algorithm can’t do anything that would be illegal if done by a real person.

Federal Trade Commission

There's only one regulation of Big Tech that has no downside for the rest of us: Break up these companies, and require interoperability, to create the conditions that make genuine competition possible.

Why crap-detection literacy is essential, not only for online info, but for using LLMs

Hallucination is Inevitable: An Innate Limitation of Large Language Models

https://arxiv.org/abs/2401.11817

#ai #llm

"In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs."


Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all the computable functions and will therefore inevitably hallucinate if used as general problem solvers. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.

arXiv.org

#ChatGPT just answered a burning question of mine that two human pharmacists could not answer. Yippee!

https://chat.openai.com/share/08da5a60-93ea-456d-be95-467da614ff78


Young people: The GOP is floating an amendment to raise the voting age from 18 to 25. Vivek Ramaswamy is openly promoting it. They’re scared of your votes and want to silence you. Act appropriately in 2024.

I remember the 2000 Election. There were enough votes for Ralph Nader to hand the election to George W. Bush.

I remember the 2016 Election. There were enough votes for Jill Stein to hand the election to Donald Trump.

In 2024 the Green Party is running Cornel West. Let's not repeat this history, friends.

Your vote is not a marriage. You’re not choosing a life partner. It’s a chess move for what’s best for the country and the world.