There's been a lot of chatter over the past day about whether platforms should slow down the release of large language models. I'm persuaded that we should:

https://www.platformer.news/p/the-ai-industry-really-should-slow

The AI industry really should slow down a little

This year has given us a bounty of innovations. We could use some time to absorb them

Platformer
Btw it’s a good night to subscribe to Hard Fork https://www.nytimes.com/column/hard-fork
Hard Fork

Each week, journalists Kevin Roose and Casey Newton explore and make sense of the rapidly changing world of tech.

@caseynewton booooo I was hoping the trans people and allies at NYT were hard forking and making their own news org.
@caseynewton Y’all are all in on this Bard Fork rebrand
@caseynewton loved the user responses in the last episode. Some amazing use cases.
@caseynewton huge get! Congratulations.
@caseynewton I’m really enjoying the podcast. Nice mix of conversation and edited news with focus.
@caseynewton Companies probably will not care, but they need to see that a bad outcome could also destroy their company at a speed they can't recover from.
I asked ChatGPT to write a rebuttal.
@numbertheory it's talking as if it's one of us: "We should be thoughtful..."! WE? :))
@omidmnz "We're all trying to find out who the rogue AI is. It could be any one of us!"
@caseynewton I feel like it is a Pandora's box situation. The only way we can really control it is by educating the public to be more discerning and not give in to biases so quickly. Yes, it's going to be a disaster.
@caseynewton can we pause with the emojis too?
@caseynewton I definitely hear what you’re saying, but I think it’s too late? People are doing all sorts of crazy things with these leaked and serviced models all over the world. There’s *so* much happening, I think you might not be able to get the beast back in the cage.
@2happy1sad this sort of preemptive defeatism is not going to serve anyone well
@caseynewton If you've not yet read this paper, you should probably have a look, given your thesis. I think it's important to the discussion of what it takes to hit AGI. Bottom line: it may be surprisingly easy at this stage. https://gwern.net/scaling-hypothesis
The Scaling Hypothesis

On GPT-3: meta-learning, scaling, implications, and deep theory. The scaling hypothesis: neural nets absorb data & compute, generalizing and becoming more Bayesian as problems get harder, manifesting new abilities even at trivial-by-global-standards-scale. The deep learning revolution has begun as foretold.