@Infoseepage @ben @ProPublica @lenfestinstitute I respect propublica a lot but this throws almost all of it away in one single post
@aburka @Infoseepage @ben @ProPublica @lenfestinstitute yeah, I’m gutted that my donations to ProPublica are going toward LLMs and content theft from other authors and journalists.
@mattsains @Infoseepage @ben @ProPublica @lenfestinstitute and the "however you feel about it" language is just gross techbro drivel. Really expected better from Ben
Ben Werdmuller (@[email protected], Werd Social): "There is no such thing as neutral technology."

@aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute Correct: there isn't.

The JD and post are deliberately worded. This is a research post. Is there a way to use these technologies in a way that's aligned with our values and need for safety? "No" is a very viable outcome here. Either way, we need to do our due diligence rather than knee-jerk it away or join the hype cycle.

The position is funded by Lenfest and doesn't come out of our existing donations.

@ben @mattsains @Infoseepage @ProPublica @lenfestinstitute that is not what the JD says. Please don't try to gaslight me. It says this highly paid individual, who has experience with LLM prompt engineering (so presumably a proponent) will "prototype tools with potential for production use". Point me to the part where it says you will be evaluating whether using a plagiarism machine is beneficial at all.
@ben @aburka @Infoseepage @ProPublica @lenfestinstitute there has already been extensive research on AI and specifically LLMs, including their deleterious effects on intellectual property rights, the environment, discrimination, and their impact on the information landscape. There’s no need to rehash this research at the cost of ProPublica’s reputation
@ben @aburka @Infoseepage @ProPublica @lenfestinstitute I also reject the notion that the commenters here are “knee jerk”ing. I’ve been following and studying sociological impacts of AI for the last two years, and my comments are based on that knowledge. I imagine the same is true for at least some of my peers in this thread
@mattsains @ben @Infoseepage @ProPublica @lenfestinstitute it's telling that they require LLM experience for the job, hence folks like me who refuse to use them on ideological grounds would not be eligible
@mattsains @aburka @Infoseepage @ProPublica @lenfestinstitute I don't doubt that, and it's worth checking my blog if you think I'm a shill for the vendors or the tech. Definitely going into this with eyes open.
@ben @mattsains @Infoseepage @ProPublica @lenfestinstitute That's a big part of why it's so disappointing to see the manner in which you wrote the JD and the post. I wasn't being facetious when I said I had expected better from you and your organization. If I thought you were completely a lost cause, I wouldn't be engaging here
@ben @aburka @Infoseepage @ProPublica @lenfestinstitute to my ears, the question here seems akin to “can smoking a pack a day help cure my lung cancer?” What research question do you want the candidate to answer that isn’t already settled by the scientific body of evidence out there?
@mattsains @ben @Infoseepage @ProPublica @lenfestinstitute however you feel about it, cigarettes are a big part of the health landscape out there today

@aburka @mattsains @ben @Infoseepage @ProPublica @lenfestinstitute I generally compare LLMs to fossil fuels and nuclear weapons. Yes, they may be widespread and difficult to eliminate, but our continued existence depends on us getting rid of these technologies. However you feel about fossil fuels, our civilization can't afford to use them anymore.

Cigarettes are unhealthy but not an inherently self-terminating technology.

@ben @mattsains @Infoseepage @ProPublica @lenfestinstitute
> The position is funded by Lenfest

Based on the link in the JD, this appears to be a carefully worded non-truth (more gaslighting). The funding comes from OpenAI and is officially geared towards increasing AI adoption. Something tells me Microsoft and OpenAI will not take no for an answer. I just can't believe Ben is so incredibly naive as to not understand this. It's so frustrating to be lied to!

@aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute Oh, I want to be very clear on this point: nobody involved (Lenfest, OpenAI, Microsoft) has any say or input on what we say or do. Nor do we need to use any specific technology.
@ben @mattsains @Infoseepage @ProPublica @lenfestinstitute oh come ON man. This kind of crap is why all scientific studies have to declare their funding sources and conflicts of interest. What would you say to a study on the health effects of cigarettes funded by Philip Morris, but they disclaimed any influence on the results?
@ben @mattsains @Infoseepage @ProPublica @lenfestinstitute and it's not even a study! In the press release about the grant and in the JD you posted, it explicitly says the idea is to use AI in the newsroom! What are we even doing here.

@ben @aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute "Nor do we need to use any specific technology." This does not seem accurate. The job description requires:

- Experience using generative AI and large language model APIs.
- Familiarity with LLM prompt engineering, fine-tuning or evaluation techniques.

Generative AI and LLMs are specific unethical technologies that have no business in journalism because they cannot preserve truth or accuracy by their nature.

@skyfaller @ben @aburka @Infoseepage @ProPublica @lenfestinstitute and I suppose all $5 million of unused service credit donation from those companies will go to underprivileged plagiarists in need or whatever. The more I engage in this thread the more disheartened I am that journalism is finally dead, so I’m going to unsubscribe now and just go touch some grass

@ben @aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute If this job were not focused on generative AI and LLMs, and funded by OpenAI, I'd be cautiously optimistic because of the goodwill and trust that ProPublica and Ben have built up over time.

"AI" is a marketing term, and not everything that's been marketed that way over the years is evil. iNaturalist's vision model is a useful and ethical example of machine learning.

But OpenAI and its products are existential threats.

@skyfaller @ben @mattsains @Infoseepage @ProPublica @lenfestinstitute no need to use any specific technology, but here's a cool $5 million in free credits for my specific technology, no pressure lmao

@skyfaller @ben @mattsains @Infoseepage @ProPublica @lenfestinstitute I mean it's SO OBVIOUS

1. embrace: fund nonprofit to "explore" AI, tempt them with no-strings-attached funding and free credits
2. extend: nonprofit starts using the tech, relying on it more and more, slowing down hiring, changing research practices etc
3. extinguish: jack up the price, no more nonprofit journalism!

And that's not even taking into account how they can influence what ProPublica even *discovers* when using AI to "parse large troves of data" (an explicit goal called out in the JD) due to the biases and hallucinations built into the model.

@aburka @ben @mattsains @Infoseepage @ProPublica @lenfestinstitute Oh man, I missed that detail, yes this is extremely specific technology. Specific proprietary LLMs made by the worst people.

OpenAI and Sam Altman's disgusting actions aside (have you given your eyeballs to Worldcoin yet?), Microsoft is complicit in the genocide in Gaza and is doubling down on assisting the Israeli government in murdering children.

@skyfaller @ben @mattsains @Infoseepage @ProPublica @lenfestinstitute One wonders if they'll internally kill stories related to stuff like this for fear of losing funding for the position
@ben @skyfaller @mattsains @Infoseepage @ProPublica @lenfestinstitute of course you say that and I believe you think so, honestly, but the organization's decision processes are tainted just by taking the money
@skyfaller @ben @aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute Sounds like hiring a snake oil salesman to help determine if snake oil is right for you.
@ben @aburka @Infoseepage @ProPublica @lenfestinstitute Lenfest is funded in large part by Microsoft and OpenAI, who stand to gain the most from ProPublica staining its reputation to shill for LLMs. It’s so transparent it wouldn’t take an AI-assisted journalist to figure out what motives are at play here