Padraig X. Lamont

45 Followers
134 Following
106 Posts

I enjoy applying software to unique problems!

Founder of https://RoyalUr.net, developer of https://MisinfoGame.com

Website: https://padraiglamont.com
GitHub: https://github.com/Sothatsit
@yonomitt Ahhh JavaScript, forever the friendliest language to developers
@yonomitt Is that because they’re stored using floating point?
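(The quirk being alluded to is presumably that JavaScript has no separate integer type: every number is an IEEE 754 double-precision float, which produces some famous surprises. A minimal sketch:)

```javascript
// All JavaScript numbers are IEEE 754 double-precision floats.
// Decimal fractions like 0.1 cannot be represented exactly:
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// Integers are only exact up to 2^53 - 1; beyond that,
// adjacent "integers" collapse onto the same float:
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
console.log(9007199254740992 === 9007199254740993); // true
```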
@orbiterlab This is why I think every action the AI proposes should be approved by the human who asked it to act. Not only would this be a better user experience (fewer unexpected actions), it would also help avoid issues like this.
@garymarcus We need better regulation and AI safety talks and action. A 6-month pause is not going to have any effect on that at all. Someone has already died by suicide after talking to a much smaller model based on GPT-J. We need to focus on these issues, not on the potential for more emergent properties from bigger models that could change the world even more than LLMs already have. The tech, and the potential for damage, is already here and already widespread.

@garymarcus None of the arguments I’ve seen have addressed these issues. Most arguments I’ve seen were one of the following:

(1) OpenAI will become a monopoly, we don’t want that.

(2) People just wanted to increase the visibility of this as an issue.

(3) This will give regulatory and safety committees time to catch up.

I disagree that a 6-month pause would affect any of these things, other than point (2). This has been a good marketing tool for the issue, which I think is good.

@garymarcus (3) It doesn't stop more powerful models from being developed: due to the emergent properties of LLMs, I think it is likely that our next advancements will not come solely from scaling them up. Sam Altman said that they used hundreds of small improvements to train GPT-4. To get more emergent intelligence, I expect these kinds of small improvements to keep having a big impact, beyond just increasing the parameter count. A pause would have no effect on their development.

@garymarcus My problem with the letter is that time is such a bad lever to use to control the development of AI. I’m all for increased investment in safety and regulation, but pausing development of models more capable than GPT-4 is:

(1) Badly defined: Are we just talking about more parameters? A lot of research is about using parameters more effectively now. We can now do much more with less compute than ever before.

@garymarcus (2) Doesn’t stop any damage: People are already using these LLMs at scale. The difference between GPT-3.5 and GPT-4 is not that huge. Why do you think that a bigger model would have a much more significant difference?

@Riedl This is terrible. It is quite scary to think about the power that LLMs can have in convincing people to think certain ways, with very limited oversight over what is appropriate. This makes me appreciate OpenAI's stance on securing their models. I really, really hope this at least serves as an important case study for future AI research and regulation.

The thing that scares me is that these models can already be used for manipulation of vulnerable people en masse. How do we even stop that?

AI Written, AI Read cartoon - Marketoonist | Tom Fishburne

One piece of slang that has long embodied the short-attention-span Internet age is TL;DR, short for "too long; didn't read." With the explosion of generative AI tools, we're rapidly entering the age of TL;DW: "too long; didn't write." A January survey from Fishbowl found that 40% of nearly 12,000 workers have used ChatGPT…
