Just bought a license for Redux Sampler from Renoise after playing around with it for the last week or two. It is my opinion that this is the BEST native Linux sampler available!
Combining Redux with VCV Rack to produce some tracks based around found sounds and field recordings.
Ambient Experimental Beats are on the horizon.
#redux #vcvrack #linux #linuxaudio #electronicmusic
#samplers #renoise
@vampirdaddy @jamie yeah, because in practice you have "collecting societies" like #GEMA that will literally demand one prove that no content they represent is being played, or face huge [retroactive] fines and license payments.
OFC this is #NotLegalAdvice, and @wbs_legal, a law firm specialized in media law, did a good writeup on this issue.
It's also the reason why one can buy 8-12hr #samplers with #BackgroundMusic that is "GEMA-free" for €120+: even a small venue will face €300+ in monthly (!) licensing fees if it chooses to just play the local radio station (on top of TV/radio licensing fees!)

The Gesellschaft für musikalische Aufführungs- und mechanische Vervielfältigungsrechte (GEMA) represents the usage rights of music creators and providers in Germany. At present it holds a monopoly position in the German market, strengthened by international cooperation with collecting societies in other countries. This is also reflected in the so-called "GEMA presumption" (GEMA-Vermutung), which, through the case law of the BGH […]
A user came on Discord today complaining about how hard it was to configure RAW mode on BlueSCSI - they were right! So I automated it all.
Seems sampler users prefer the RAW partitioned SD workflow over disk images.
This also lets you pop a SCSI2SD card into your BlueSCSI for an easy upgrade.
70-track album
Rethinking Losses for Diffusion Bridge Samplers
https://arxiv.org/abs/2506.10982
#HackerNews #Rethinking #Losses #Diffusion #Bridge #Samplers #MachineLearning #Research #Arxiv
Diffusion bridges are a promising class of deep-learning methods for sampling from unnormalized distributions. Recent works show that the Log Variance (LV) loss consistently outperforms the reverse Kullback-Leibler (rKL) loss when using the reparametrization trick to compute rKL-gradients. While the on-policy LV loss yields identical gradients to the rKL loss when combined with the log-derivative trick for diffusion samplers with non-learnable forward processes, this equivalence does not hold for diffusion bridges or when diffusion coefficients are learned. Based on this insight, we argue that for diffusion bridges the LV loss does not represent an optimization objective that can be motivated, like the rKL loss, via the data processing inequality. Our analysis shows that employing the rKL loss with the log-derivative trick (rKL-LD) not only avoids these conceptual problems but also consistently outperforms the LV loss. Experimental results with different types of diffusion bridges on challenging benchmarks show that samplers trained with the rKL-LD loss achieve better performance. From a practical perspective, we find that rKL-LD requires significantly less hyperparameter optimization and yields more stable training behavior.
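To make the abstract's distinction concrete, here is a toy sketch (my own illustration, not from the paper) of the two gradient estimators it contrasts for the reverse KL, KL(q_θ ‖ p): the log-derivative (score-function) trick versus the reparametrization trick. Instead of a diffusion bridge, the sampler is reduced to a one-parameter Gaussian q_θ = N(θ, 1) with target p = N(0, 1), so KL = θ²/2 and the true gradient is exactly θ; all names and sample sizes here are assumptions for the demo.

```python
import random

def rkl_gradients(theta, n=200_000, seed=0):
    """Estimate d/dtheta KL(q_theta || p) two ways, for q_theta = N(theta, 1), p = N(0, 1)."""
    rng = random.Random(seed)
    score_sum = 0.0    # log-derivative ("score function") estimator
    reparam_sum = 0.0  # reparametrization-trick estimator
    for _ in range(n):
        eps = rng.gauss(0.0, 1.0)
        x = theta + eps                      # reparametrized sample x ~ q_theta
        f = theta * x - theta * theta / 2.0  # log q(x) - log p(x) for these two Gaussians
        score_sum += f * (x - theta)         # f(x) * d/dtheta log q(x; theta)
        reparam_sum += x                     # d/dtheta [-log p(theta + eps)] = theta + eps
    return score_sum / n, reparam_sum / n

g_score, g_reparam = rkl_gradients(1.5)
# Both are unbiased estimates of the true gradient theta = 1.5;
# in this toy case the reparametrization estimate has much lower variance.
```

Both estimators target the same quantity here, which mirrors the abstract's point that the interesting differences only appear in richer settings (diffusion bridges, learned diffusion coefficients) where the LV/rKL equivalence breaks down.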
Dummy's Guide to Modern LLM Sampling
#HackerNews #DummyGuide #LLM #Sampling #ModernTech #AI #Learning #Samplers