@aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute Correct: there isn't.
The JD and post are deliberately worded. This is a research post. Can these technologies be used in a way that's aligned with our values and need for safety? "No" is a very viable outcome here. Either way, we need to do our due diligence rather than knee-jerk it away or join the hype cycle.
The position is funded by Lenfest and doesn't come out of our existing donations.
@aburka @mattsains @ben @Infoseepage @ProPublica @lenfestinstitute I generally compare LLMs to fossil fuels and nuclear weapons. Yes, they may be widespread and difficult to eliminate, but our continued existence depends on us getting rid of these technologies. However you feel about fossil fuels, our civilization can't afford to use them anymore.
Cigarettes are unhealthy but not an inherently self-terminating technology.
@ben @mattsains @Infoseepage @ProPublica @lenfestinstitute
> The position is funded by Lenfest
Based on the link in the JD, this appears to be a carefully worded non-truth (more gaslighting). The funding comes from OpenAI and is officially geared towards increasing AI adoption. Something tells me Microsoft and OpenAI will not take no for an answer. I just can't believe Ben is so incredibly naive as to not understand this. It's so frustrating to be lied to!
@ben @aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute "Nor do we need to use any specific technology." This does not seem accurate. The job description requires:
- Experience using generative AI and large language model APIs.
- Familiarity with LLM prompt engineering, fine-tuning, or evaluation techniques.
Generative AI and LLMs are specific unethical technologies that have no business in journalism because they cannot preserve truth or accuracy by their nature.
@ben @aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute If this job were not focused on generative AI and LLMs, nor funded by OpenAI, I'd be cautiously optimistic because of the goodwill and trust that ProPublica and Ben have built up over time.
"AI" is a marketing term, and not everything that's been marketed that way over the years is evil. iNaturalist's vision model is a useful and ethical example of machine learning.
But OpenAI and its products are existential threats.
@skyfaller @ben @mattsains @Infoseepage @ProPublica @lenfestinstitute I mean it's SO OBVIOUS
1. embrace: fund nonprofit to "explore" AI, tempt them with no-strings-attached funding and free credits
2. extend: nonprofit starts using the tech, relying on it more and more, slowing down hiring, changing research practices, etc.
3. extinguish: jack up the price, no more nonprofit journalism!
And that's not even taking into account how they can influence what ProPublica even *discovers* when using AI to "parse large troves of data" (an explicit goal called out in the JD), due to the biases and hallucinations built into the model.
@aburka @ben @mattsains @Infoseepage @ProPublica @lenfestinstitute Oh man, I missed that detail, yes this is extremely specific technology. Specific proprietary LLMs made by the worst people.
OpenAI and Sam Altman's disgusting actions aside (have you given your eyeballs to Worldcoin yet?), Microsoft is complicit in the genocide in Gaza and is doubling down on assisting the Israeli government in murdering children.