Last year I gave a presentation at #ADC24 about #AbletonMove, and finally that presentation is online. It's part of a much longer video, so I've shared a link to where my bit starts, around the 1 hour 21 minute mark. I was told to keep it to 10 minutes, so that's what I did. https://youtu.be/ZkZ5lu3yEZk?si=zXyftn3mqr0hJHu3&t=4859
Workshop: Inclusive Design within Audio Products - What, Why, How? - Accessibility Panel - ADC 2024

@pkirn sending this your way in case you hadn't seen it. Was a fun one.
@FreakyFwoof Well done sir. I wish they hadn't used DXRevive on the microphones though, the very first model they released did not work too well on the voices. Still, well done indeed.
@erion I thought it was the adobe podcast thing.
@FreakyFwoof I am 99% sure this was DX Revive.
@erion I haven't heard of it; does it use the same technology? During the bass line, for example, it was going 'dumb dumb duh dumb' as if it were trying to talk, which Adobe would also have done in the same situation.

@FreakyFwoof Essentially yes, it's just better and it's local as well.

Yep, the reason it did that is that it tried to imitate the bass with a voice, since the frequency ranges were similar.

@erion @FreakyFwoof Well, "better"... I'd argue against that. DX Revive is basically a "fill the gap" tool for dialogue, where hard-to-understand voiceovers kinda get auto-completed and polished. Thing is, it trains on your voice in the material, and in parts that are really hard to understand it tries to guess what you just said, and it can go terribly wrong with that. I once tried to fix a bad WhatsApp recording of two dudes, and it messed up everything that was said really badly.

@ToniBarth @FreakyFwoof There is no training involved; it simply uses your source material as a reference. It has a model trained on various voices in various languages, which it uses for multiple processing steps (these are what you described as polish). Compared to Adobe, it does more, and it runs locally on your computer without you having to send your audio files to their servers for processing. That's definitely better in my book, but of course you may think otherwise.

As far as quality goes, it largely depends on the source material; you will usually get better results if you use one plugin instance per voice. Some models may work better for a specific voice; for example, their Studio 2 model handles lower frequencies better.

@erion @FreakyFwoof "It simply uses your source material as a reference": that is training, lol. Apart from that, yeah, being able to use it locally is definitely a plus, and I'm not saying it can't do well, but in my experience it heavily depends on the source material. Sometimes it even makes up words that were never spoken and don't make sense at all, which IMO is important to remember.
@ToniBarth @FreakyFwoof If it were indeed training, it would be able to reproduce the same result without your audio source, which it can't. It is not trained on it; it uses it as a reference to learn the characteristics of the voice or voices, then enhances them by isolating the voices, recovering missing frequencies by synthesizing them from the AI model, balancing them against an EQ profile, and so on.
@erion @FreakyFwoof I guess that might have been an edge case where it just couldn't follow along, and usually it does way better. But instead of simply improving quality, de-reverbing, compressing, etc. like Adobe Podcast does, it really applies AI to your voice and makes something of it, which can go really well, but also really badly. With Adobe Podcast you kinda know what you'll get when you use it; with DX Revive, less so.
@FreakyFwoof That's quite neat!
@FloatingOnion It's a device I have hours of fun with. I love it.
@FreakyFwoof I've seen you mentioning it, so I enjoyed seeing how you used it.