Once again Ted Chiang has it exactly right. The immediate danger from #AI is not that it will become sentient and do whatever it wants. The danger is that it will do what it’s being designed to do: help rich corporations destroy the working class in pursuit of ever-greater profits and thus concentrate wealth in fewer and fewer hands.

https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey

@JamesGleick

I assume that Google, Facebook, and Microsoft have accidentally killed themselves. Since LLaMA was leaked, the cost of creating an LLM has fallen to about $100 (as described in a document leaked from Google). Now there are 8 billion potential competitors to Google, Facebook, and Microsoft (provided they keep their current business strategy with respect to LLMs).

#dotComStartupsFinallyJumpedTheShark

@Life_is @JamesGleick the code to train the model was always open source. What leaked was the weights, which let you use their exact trained model but do little to help with training new variants. The code for setting up the model's structure and training it is what really changes the cost of a new model, because at that point the job is dataset assembly rather than R&D.

@danielleigh @JamesGleick

It was open source in principle, but not actually available until the leak happened (and the leak was legal, precisely because the code was open source).

Whatever the details: it is as if someone invented the automobile, patented it, and the patent unexpectedly expired after a month.

@Life_is @JamesGleick the code was released in February, and the weights weren't leaked until March. The training dataset remains private to Facebook and the weights are simply their trained version of the model. They intentionally released the code when they announced the model. This is the part that makes it easier for others to train their own models, as the hard part of building neural nets is figuring out the architecture and training strategy.
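The code-versus-weights distinction in the posts above can be sketched with a toy example. This is a hypothetical illustration (a one-parameter linear "model", nothing like Facebook's actual LLaMA code): the *code* defines the architecture and training procedure, while the *weights* are just the learned numbers that fall out of training on a dataset.

```python
# Toy illustration of "code" vs "weights" -- hypothetical, not anyone's real model.
import random

def make_model():
    """The 'code': defines the architecture (here, a single linear unit).
    Anyone with this can train a fresh model from scratch."""
    return {"w": random.random(), "b": random.random()}

def predict(weights, x):
    """Running the model only needs the weights plus the architecture code."""
    return weights["w"] * x + weights["b"]

def train(weights, data, lr=0.01, epochs=200):
    """Training needs the code *and* a dataset -- the expensive ingredients."""
    for _ in range(epochs):
        for x, y in data:
            err = predict(weights, x) - y
            weights["w"] -= lr * err * x   # gradient step on the slope
            weights["b"] -= lr * err       # gradient step on the bias
    return weights

# "Releasing the code" = publishing make_model/predict/train.
# "Leaking the weights" = publishing the trained dict below, which lets anyone
# run the exact trained model without redoing the training (or having the data).
data = [(x, 2 * x + 1) for x in range(5)]  # toy private dataset: y = 2x + 1
trained = train(make_model(), data)
```

After training, `trained` is the analogue of a leaked weights file: handing it to someone lets them call `predict` immediately, but tells them almost nothing about how to train a different model on new data.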