Very interesting and provocative article (not sure I agree with conclusions, but they are worth considering seriously).

https://cacm.acm.org/magazines/2023/1/267976-the-end-of-programming/fulltext#comments

The End of Programming

The end of classical computer science is coming, and most of us are dinosaurs waiting for the meteor to hit.

"The engineers of the future will, in a few keystrokes, fire up an instance of a four-quintillion-parameter model that already encodes the full extent of human knowledge (and then some)"

But what does "the full extent of human knowledge" entail? *A lot more* than most people are aware of, I would say. It is not at all clear that these models can attain it solely via training on written text (or programs).

@melaniemitchell how do models handle being trained on two very strongly held, but conflicting, opinions? #ai #ml
@melaniemitchell will this extent of human knowledge contain how I tie my shoelaces? I suspect not. Or better yet, what the first kiss after years of drought tastes like?
@melaniemitchell Curious what effect you think it would have to train models not only on text, images, audio, and video, but also on data obtained from embodied bots in the physical world (and simulated worlds)?
@melaniemitchell Most human knowledge is tacit; no textual description of it exists.
@melaniemitchell “replacing the entire concept of writing programs with training models” - reality check: how expensive, in terms of kWh, is that training and its random output? Being able to program and control a machine using one algorithm, or a suite of them, will remain fundamental in the future.
@melaniemitchell Rob Horning writes some quite cogent thoughts about generative AI. In one article he writes: "AI models presume that thought is entirely a matter of pattern recognition, and these patterns, already inscribed in the corpus of the internet, can be mapped once and for all, with human “thinkers” always already trapped within them. The possibility that thought could consist of pattern breaking is eliminated."
Lots of reflection needed.
https://robhorning.substack.com/p/what-of-the-national-throat
https://robhorning.substack.com/p/every-answer
What of the national throat?

I’ve been reading articles about ChatGPT all week, ordering them in my mind to make the discourse about it into a kind of coherent narrative that has ebbed and flowed from excitement to panic to backlash to counter-backlash. It’s apparently never too late to say “it’s early days” with generative AI, or to rehash concerns that have been aired with each new development in the means of mechanical reproduction.

@melaniemitchell We will see simple prompt-to-app (Android/iOS) solutions as early as next year, I guess. #ChatGPT sometimes hallucinates (with great confidence), but programming is the easy case for answer verification by CI processes. Maybe this is just a few weeks away.
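The verification loop alluded to here can be sketched minimally: have the model propose code, run it against a predefined test suite, and accept it only if every test passes. This is a hypothetical illustration, not an actual CI setup; the `generate_solution` stub stands in for an LLM call, and the `add` task and its tests are invented for the example.

```python
# Minimal sketch of answer verification for model-generated code:
# accept a candidate only if it passes a fixed test suite.

def generate_solution() -> str:
    # Stand-in for an LLM call; returns candidate source code as text.
    return "def add(a, b):\n    return a + b\n"

def verify(source: str, tests: list) -> bool:
    namespace: dict = {}
    try:
        exec(source, namespace)      # load the candidate code
        fn = namespace["add"]        # the function the prompt asked for
        return all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return False                 # any crash or missing name counts as failure

tests = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
candidate = generate_solution()
print("accepted" if verify(candidate, tests) else "rejected")
```

In a real pipeline the tests would come from the spec (or be written by humans), which is exactly why programming is a comparatively easy case: the acceptance check is mechanical, unlike verifying a factual essay.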
@melaniemitchell Very good read. Imho there will always be domains where 100% deterministic software as a result of multi-human intelligence beats the stochastic approach of ML, and thus will stay relevant. At least I wouldn't want to fly a plane with ChatGPT as copilot. Question remains, will there be enough young developers to program deterministically in 10-15 years, or will they all have been consumed by the dark force of programming (ML)?

@melaniemitchell
The end of radiologists awaiting the end of bus drivers awaiting the end of ...

like the end of programmers ...

@melaniemitchell I liked Simson Garfinkel's reply to the post. Thanks for sharing!

@melaniemitchell Who builds the models? Other models?

How are we feeling about the arguments for the singularity? If you could get one that builds a better one, and that process doesn't stop, you get the singularity. But the "ifs" there are pretty massive.

@melaniemitchell
Excuse my intrusion.
It's a rather interesting article, and very challenging, as someone else put it.
However, I don't think any AI will ever be able to write code to solve a puzzle. 😉

@melaniemitchell 1/2-thread: I'm missing the counter-evidence to the highly optimistic, abstract arguments, e.g.,

Study: #AI assistants help developers produce code that's insecure -- ... make developers believe their code is sound
https://www.theregister.com/2022/12/21/ai_assistants_bad_code/

Stack Overflow bans #ChatGPT as 'substantially harmful' for coding issues -- High error rates mean thousands of AI answers need checking by humans
https://www.theregister.com/2022/12/05/stack_overflow_bans_chatgpt/


@melaniemitchell 2/2-thread: Self-Driving Cars Are Going Nowhere After $100 Billion Spent? - Grit Daily News
https://gritdaily.com/self-driving-cars-going-nowhere/

And there is the unsolved fundamental problem of "the ironies of automation" (see the image below, from James Reason, Human Error, 1990).
