@NicholasLaney @Daojoan Quality won't be much of an issue for very long; the amount of manual work and postprocessing the output needs is falling rapidly as AI engineers refine and fine-tune their models. Right now we can have a long-dead actor appear in a few short scenes, but in a few years we will see entire feature films with lead actors from long ago, even doing things they never did on screen (as long as other people have done it on camera, so the machine learning model can learn those movement patterns and apply them to a different character).
It can be done, and somebody will do it. However, rendering individual scenes or a five-minute music video is one thing; creating a film that runs 90-120 minutes without boring the audience to death is quite another. A text generator can produce a rough first draft of a screenplay, but it still takes a lot of manual work to turn that into something halfway decent. The main problem is that current AI models don't understand anything; they just learn to spot recurring patterns in the training data and synthesise new data with those patterns. And then there is the problem of sheer raw computing power: AI only seems cheap because investors are pouring billions of dollars into data centres, but once you pay for all of it yourself, you will find that it is either quite expensive or very slow. You can run AI models on your own hardware, but a €2000 graphics workstation or gaming rig with the latest Nvidia GPU is barely enough to run a very small LLM, and while it can generate a still image with Stable Diffusion or Flux within seconds, it still needs to run for many hours to produce a few seconds of video. On a more humble PC or laptop, a single still image takes hours, and you can forget about video or text entirely.
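To put rough numbers on that last point, here's a back-of-the-envelope sketch. The per-frame generation time is an assumption for illustration, not a measured benchmark:

```python
# Back-of-the-envelope estimate of local video generation time.
# minutes_per_frame is an assumed figure, not a benchmark.

seconds_of_video = 5
fps = 24
minutes_per_frame = 2.5  # assumed diffusion time per frame on one consumer GPU

total_frames = seconds_of_video * fps              # 120 frames
total_minutes = total_frames * minutes_per_frame   # 300 minutes
total_hours = total_minutes / 60                   # 5 hours

print(f"{total_frames} frames -> {total_hours:.1f} hours "
      f"for {seconds_of_video} s of video")
```

Even with generous assumptions, a few seconds of footage costs hours of GPU time on a single machine; a feature film at that rate is a data-centre job, not a desktop one.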
Since our own brains run on far less power than a high-end PC, there should be much more efficient hardware architectures for running artificial neural networks. With our current digital neuron models, we will never reach anything close to human-level intelligence, and by trying, we are wasting resources.
So #AI isn't going to end Hollywood, but it does give us new tools. Video and audio generators aren't going to replace actors, movie sets, and film crews, but they open up new options for SFX and postprocessing. Voice actors dubbing movies into other languages might soon be a thing of the past, since we can now have the original actors speak any language in the world in their own voice (with resynthesised video for natural-looking lip movement). Prop makers will have less to do: generative 3D models can produce many variations of a single object, and 3D printing can turn them into physical props that only need some assembly and paint, so a much smaller team of prop makers can make more props faster. AI tools are becoming part of all kinds of software. In fact, machine learning and artificial neural networks have been part of graphics software like Photoshop for quite a while; how else do you think content-aware fill works?
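For anyone curious what "filling from the surroundings" means in practice, here's a toy sketch of the core idea. Real content-aware fill uses patch-based synthesis (PatchMatch-style algorithms that copy whole texture patches), but the principle is the same: unknown pixels are synthesised from the content around them. This naive version just averages neighbours repeatedly:

```python
# Toy illustration of the core idea behind content-aware fill:
# unknown pixels are repeatedly replaced by the average of their
# neighbours until the hole blends into its surroundings.
# Real tools use patch-based synthesis; this is only the principle.

def naive_fill(image, mask, iterations=50):
    """image: list of rows of floats; mask: same shape, True = unknown."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    # average of the in-bounds axis-aligned neighbours
                    nbrs = [img[ny][nx]
                            for ny, nx in ((y - 1, x), (y + 1, x),
                                           (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(nbrs) / len(nbrs)
        img = nxt
    return img

# A flat grey image with a hole punched in it fills back to grey.
img = [[0.5] * 9 for _ in range(9)]
mask = [[False] * 9 for _ in range(9)]
for y in range(3, 6):
    for x in range(3, 6):
        mask[y][x] = True
        img[y][x] = 0.0

filled = naive_fill(img, mask)
```

On a flat background this converges to the surrounding colour; the patch-based algorithms in real image editors do the same thing with textures instead of single pixels.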
The idea that we can just replace all the humans with AI in the very near future is bullshit, of course; it isn't going to happen with the kind of hardware and software we have. We can, however, have fewer people do more work faster, just as with any other power tool. Just as a lumberjack with a chainsaw can do the work of a whole crew with hand saws, a 3D modelling artist working on a computer game can now make an object, have the AI generate dozens of variations, and then pick the best of those for some final manual adjustments if needed.
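That generate-and-select workflow can be sketched in a few lines. Everything here is a stand-in: a real pipeline would call a generative 3D model instead of jittering parameters, and a human artist (not a distance function) would judge the results:

```python
import random

# Sketch of the "generate variations, pick the best" workflow.
# make_variation and score are placeholders for a generative model
# and a human/learned quality judgment respectively.

random.seed(42)  # reproducible for this sketch

def make_variation(base_params):
    """Jitter each parameter of a base object by up to +/-20%."""
    return {name: value * random.uniform(0.8, 1.2)
            for name, value in base_params.items()}

def score(params, target):
    """Placeholder quality score: closeness to some target proportions."""
    return -sum(abs(params[k] - target[k]) for k in target)

base = {"height": 2.0, "width": 1.0, "depth": 1.0}
target = {"height": 2.1, "width": 0.95, "depth": 1.05}

variations = [make_variation(base) for _ in range(24)]
best = max(variations, key=lambda p: score(p, target))
```

The human stays in the loop at the selection step; the machine just makes the "generate dozens of candidates" step nearly free.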