https://www.rollingstone.com/politics/politics-news/tiktok-served-nazi-propaganda-jan-6-committee-found-1234656268/
| Website | https://wp.nyu.edu/bonikowski/ |
Really impressive paper (and #rstats 📦) on using dependency parsers for extracting semantic relationships from text. #TextAsData
https://journals.sagepub.com/doi/full/10.1177/00491241221099551
Seems like Peter Mohrbacher has a much better approach to #AIart than Greg Rutkowski.
'Does that mean you will be able to monetize when everyone can do the same thing equally? This highlights the many jobs required to become a prominent artist beyond just making a JPG. AI tools just make the barrier for entry lower. The job of an artist is complicated: creating a vision and direction, curating a portfolio, and building a brand of art. Those challenges remain.'
We are just a few weeks shy of the two-year anniversary of the January 6, 2021 insurrection at the United States Capitol, the culmination of a series of events engineered by Donald Trump and his allies to disrupt the peaceful … Continue reading My Reflections on the Release of the January 6 Committee Report on Trump’s Attempted Election Subversion and the Expected Passage This Week of Electoral Count Act Reform: Gratitude, Awe, and Partial Relief
"Language models are better than humans at next-token prediction" — Two distinct experiments to directly compare humans and language models to see who is better at next-token prediction.
Paper: https://arxiv.org/abs/2212.11281
#AI #CL #NewPaper #DeepLearning #MachineLearning
Current language models are considered to have sub-human capabilities at natural language tasks like question answering or writing code. However, language models are not trained to perform well at these tasks; they are trained to accurately predict the next token given the previous tokens in tokenized text. It is not clear whether language models are better or worse than humans at next-token prediction. To try to answer this question, we performed two distinct experiments to directly compare humans and language models on this front: one measuring top-1 accuracy and the other measuring perplexity. In both experiments, we find humans to be consistently *worse* than even relatively small language models like GPT3-Ada at next-token prediction.
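The two metrics the abstract names are simple to compute once you have a predictor's outputs. A minimal sketch (not the paper's code; the function names and toy data are hypothetical): top-1 accuracy counts how often the predictor's highest-ranked token matches the actual next token, and perplexity is the exponential of the average negative log probability the predictor assigned to each actual next token.

```python
import math

def top1_accuracy(top_predictions, actual_tokens):
    """Fraction of positions where the predictor's top-ranked token
    equals the token that actually came next."""
    hits = sum(1 for pred, actual in zip(top_predictions, actual_tokens)
               if pred == actual)
    return hits / len(actual_tokens)

def perplexity(assigned_probs):
    """Perplexity from the probabilities the predictor assigned to each
    actual next token: exp of the mean negative log probability.
    Lower is better; a perfect predictor (all probs = 1) scores 1.0."""
    avg_neg_log = -sum(math.log(p) for p in assigned_probs) / len(assigned_probs)
    return math.exp(avg_neg_log)

# Toy illustration (invented data, not from the paper):
preds = ["the", "cat", "sat", "on"]
actual = ["the", "dog", "sat", "on"]
print(top1_accuracy(preds, actual))  # 0.75 — one miss out of four

probs = [0.5, 0.1, 0.4, 0.6]  # probability assigned to each actual token
print(perplexity(probs))
```

This framing makes the human-vs-model comparison concrete: for humans, top-1 accuracy comes from asking people to guess the next token, while eliciting a full probability for perplexity requires a separate protocol, which is presumably why the paper runs the two experiments distinctly.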