Altman would rather talk about regulation of sci-fi scenarios and not about the real world consequences of how AI can be used right now to do harm. Things like protection from discrimination, fairness, privacy, ability to receive remedy.

Regulatory capture is just one part of this. Regulation of fantastical scenarios and not real-world scenarios is tantamount to no regulation.

Anyway, if he wants rules about sentience and self-replication, that’s fine. I’m not opposed to that.

But that should be in addition to the FTC regulating applications that harm consumers. Those might not even affect a company that provides API services and not direct-to-consumer applications.

@Riedl Would be surprised if that’s not strategic. He might have watched the Zuckerberg hearings and decided to just flood the zone with bs.
@b3n unclear. I think he might believe it albeit also be savvy enough to know it serves his purpose to use the event to drive his PR

@Riedl The mind virus is that strong in The Valley?

On the other hand I wonder how much of it is an inability to join another (non-SV) discussion. You don’t need AGI for “AI” to kill people; control over cars, plus an “idea” from anywhere that it’s cool to do so, is enough. Take a functional definition of intelligence, intention etc. and some things do map. No “singularity”, but it’s a sleight of hand to get rid of that…

@b3n the mind virus is very prevalent in SV.

@Riedl I became quite a heavy twitter user during Covid. Looking back, I’m as fascinated as terrified to see what it did to the way I saw the world - and I always stayed at least at arm’s length from the earlier rationality movement, and turned away from it before EA became big.

The ease with which self-selection and an algo are able to keep us in a stable self-organized narrative world is quite something. (“Culture”, duh, but that it doesn’t need any face to face…?)

@Riedl It was an insightful experience though. I often have the impression that social scientists who only study it from afar miss much of the nuance of the experience, the chaos, weirdness, pure self-organization.

It’s the same with LLMs, which makes it hard to shut off the line of Altman et al. completely. Stuff like this is too alive to be understood just in theory. (Well, alive is a very bad word here of course ;))

@b3n @Riedl Are you sure it was Zuckerberg he was watching? It could just as easily have been SBF.
@inspired @Riedl Yeah, there's a similar playbook. I guess the most important thing to watch in either case is the questions, anyway - then you know that no serious issues will get raised once the conversation turns to fairy dust.

@Riedl another Very Serious Person. This is where we are now. Our leaders of every variety incapable of engaging with reality and trying to mold society into the dreamworld they already inhabit. They don't know they make me wanna set them on fire, or maybe they do, maybe they're flaunting their insane ideas just to piss me off, show me the breathless coverage they're getting. It's working.

Anyway, can't wait for the revolution.

@Riedl That last paragraph of Sam Altman's is a really good example of a thing that he and many other SV types do: they take a term (here 'regulatory capture') that is gaining public notice and a growing consensus on valence, but one they view with the opposite valence. When they find they can't flip the consensus to match their view, they try to redirect the term to mean something completely different, where the valence helps them.
@Riedl Eg in the example above (and elsewhere), they are trying to get people to use 'regulatory capture' to mean 'capture of companies/innovation by those dastardly regulators', while keeping the strongly negative valence that has already formed around 'regulatory capture'. This neatly redirects how people's feelings translate into public pressure - from something that threatens the SV owning class to something that benefits it.
@Riedl @timnitGebru what does he mean by ‘capability threshold’? Does it literally mean scale?

@shiwali @timnitGebru it is unclear what a “capability threshold” would mean. Could be performance on benchmarks. In earlier remarks he talked about sentience and self-replication.

It might just mean “whatever OpenAI has”, because it would be a huge badge of honor to be the only company that must be regulated because it is too powerful. (And such a burden - OpenAI taking on regulation in the name of protecting humanity.) Very effective PR.

@Riedl right??? I’m almost impressed at how “regulation should take effect above a capability threshold” is out-of-touch in basically every way a statement about AI can be out-of-touch. Like at this point I genuinely don’t care if it’s ignorance or malice, it’s so clear that folks like this are unwilling/unable to approach this in an ethical way

@Riedl @timnitGebru I am very shocked that these people are the ones with a voice in the policy space.

I raised this question with DARPA I2O’s director. The US government has made significant investments in AI over the past several decades - long before these companies existed - so there is a larger, more nuanced context. NLP and vision benchmarks were created under DARPA programs. The director says they are working very closely with policymakers, but it’s not common knowledge.