It looks like Meta just quietly (?) rolled out a new large language model trained on the scientific literature. At first glance it's not terrible (I've reviewed worse), but it starts talking nonsense after a while.

Still, it's interesting, and it makes me think about the research papers I'll be receiving from students in the coming years.

https://galactica.org/?prompt=literature+review+on+costly+signaling+theory&max_new_tokens=1400

literature review on costly signaling theory - Galactica

"If the seller does this, then the buyer will buy the phone at a higher price than it is worth, and the seller will lose money."

Feeling a bit bad now TBH.

Maybe I didn't give them enough credit.

I thought it was just a language model that produced middling results. I didn't realize this was "a new interface to access and manipulate what we know about the universe."

Currently having a blast playing with lex.page, thanks to @tonic.

This is quite a nice plot twist it threw in there.

@ct_bergstrom I’ll make sure to connect you with the creator. There’s something about this GPT-3 AI that’s almost there (GPT-4 is coming soon!). Combine the lex page with something similar for spreadsheets, and it can kind of code an algorithm too; I’m guessing that’s what will be in the most basic toolbox for just about anyone.
@tonic @ct_bergstrom One last thing: try writing something like an undergrad essay on a “science topic”. It’s pretty good at it!
@tonic @ct_bergstrom Isn't this a low bar? Undergrads are terrible at writing essays. 😉
It's true, but some people even need to teach them, so it could be quite a nice tool. You could probably design a full hour-long presentation at undergrad level in 30 minutes or so with the thing... maybe...