Some people were apparently using OpenAI's advanced API parameters to derive scientific insights about GPT-3.5. This will break in under a week. Thank you, OpenAI, for making our point so eloquently and efficiently for us

(first image: introduction of our CUI '23 paper https://doi.org/10.1145/3571884.3604316; second image: OpenAI announcing it will cut off access to prompt/output probabilities, for whatever reason)

#OpeningUpChatGPT #proprietary #reproducibility #openscience

Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators | Proceedings of the 5th International Conference on Conversational User Interfaces

Credit where credit is due: sometimes folks releasing LLM+RLHF systems actually do admit there are legal complications

"While I/we are proud of FAIR's open values/history, on this particular one we are still trying to navigate. Using Clueweb may have been a mistake in this regard as that is not free.. in general the product+legal landscape is very difficult these days :( .. just getting the paper release itself approved took many weeks..."
(FAIR=Facebook AI Research)

#LLM #OpeningUpChatGPT

IEEE Spectrum's Michael Nolan covers our work on #OpeningUpChatGPT: "An assessment of openness among ostensibly open-source LLM models finds that few live up to the claim" https://spectrum.ieee.org/openai-not-open

With some choice quotes from @andreasliesenfeld and me

LLAMA and ChatGPT Are Not Open-Source

New on the blog: some background on our #OpeningUpChatGPT paper — moderately ranty and with shoutouts to true originals like @emilymbender @timnitGebru @abebab @mmitchell_ai and the impressive BigScience collaboration that tops our openness list https://ideophone.org/opening-up-chatgpt-evidence-based-measures-of-openness-and-transparency-in-instruction-tuned-large-language-models/
Opening up ChatGPT: Evidence-based measures of openness and transparency in instruction-tuned large language models – The Ideophone