Maybe I'm just crazy, but I feel like there is not enough freaking out going on vis-a-vis recent AI developments. It seems to me that science, the arts, transportation, engineering... everything is going to be changed, if not in 10 years then certainly in 30. It feels to me like there's a lot of denial going on, of the form "well, it can't do [this one thing]" — except that was said about chess programs in the 90s, and they're now absolutely unbeatable.
@ZachWeinersmith A lot of current AI has many more hidden (and often very poorly paid) humans inside than is obvious to observers. Much training data is literally generated on Amazon "Mechanical Turk", and it's not just labelers: the delivery robots and chat assistants in production use have call centers full of backup humans. Even ChatGPT required a significant amount of tedious manual human work to create the tuning model that differentiates it from plain GPT-3.
@ZachWeinersmith This is also one of many reasons these systems are all currently very far from being profitable. In addition to the cost of the labeling work, training the models that appear impressive today is far from cheap, and using them isn't cheap either: a single ChatGPT response requires several orders of magnitude more computing power than a Google search, and the way these models are constructed makes that very difficult to do anything about.