@aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute Correct: there isn't.
The JD and post are deliberately worded. This is a research post: is there a way to use these technologies that aligns with our values and need for safety? "No" is a very viable outcome here. Either way, we need to do our due diligence rather than knee-jerk it away or join the hype cycle.
The position is funded by Lenfest and doesn't come out of our existing donations.
@ben @mattsains @Infoseepage @ProPublica @lenfestinstitute
> The position is funded by Lenfest
Based on the link in the JD, this appears to be a carefully worded non-truth (more gaslighting). The funding comes from OpenAI and is officially geared towards increasing AI adoption. Something tells me Microsoft and OpenAI will not take no for an answer. I just can't believe Ben is so incredibly naive as to not understand this. It's so frustrating to be lied to!
@ben @aburka @mattsains @Infoseepage @ProPublica @lenfestinstitute "Nor do we need to use any specific technology." This does not seem accurate. The job description requires:
- Experience using generative AI and large language model APIs.
- Familiarity with LLM prompt engineering, fine-tuning or evaluation techniques.
Generative AI and LLMs are specific unethical technologies that have no business in journalism because they cannot preserve truth or accuracy by their nature.
@aburka @ben @mattsains @Infoseepage @ProPublica @lenfestinstitute Oh man, I missed that detail, yes this is extremely specific technology. Specific proprietary LLMs made by the worst people.
OpenAI and Sam Altman's disgusting actions aside (have you given your eyeballs to Worldcoin yet?), Microsoft is complicit in the genocide in Gaza and is doubling down on assisting the Israeli government in murdering children.