https://atlas.whatip.xyz/post.php?slug=between-resource-scarcity-and-orbital-inflation-rethinking-the-space-model
Latest: The space sector is booming…
#rethinking #inflation #resource #scarcity
Duplication Isn't Always an Anti-Pattern
https://medium.com/@HobokenDays/rethinking-duplication-c1f85f1c0102
#HackerNews #Duplication #Anti-Pattern #Rethinking #CodeQuality #SoftwareDevelopment #MediumArticle
With the End of 10 already in play, I'm having recurring conversations at our local #RepairCafe about what to suggest people do when they can't upgrade their device to Windoze 11.
Which way do you direct people?
Rethinking the Linux cloud stack for confidential VMs
https://lwn.net/Articles/1030818/
#HackerNews #Rethinking #Linux #cloud #stack #confidential #VMs #LinuxCloud #ConfidentialComputing #SecurityTech #OpenSource
Rethinking Losses for Diffusion Bridge Samplers
https://arxiv.org/abs/2506.10982
#HackerNews #Rethinking #Losses #Diffusion #Bridge #Samplers #MachineLearning #Research #Arxiv
Diffusion bridges are a promising class of deep-learning methods for sampling from unnormalized distributions. Recent works show that the Log Variance (LV) loss consistently outperforms the reverse Kullback-Leibler (rKL) loss when using the reparametrization trick to compute rKL-gradients. While the on-policy LV loss yields identical gradients to the rKL loss when combined with the log-derivative trick for diffusion samplers with non-learnable forward processes, this equivalence does not hold for diffusion bridges or when diffusion coefficients are learned. Based on this insight, we argue that for diffusion bridges the LV loss does not represent an optimization objective that can be motivated, like the rKL loss, via the data processing inequality. Our analysis shows that employing the rKL loss with the log-derivative trick (rKL-LD) not only avoids these conceptual problems but also consistently outperforms the LV loss. Experimental results with different types of diffusion bridges on challenging benchmarks show that samplers trained with the rKL-LD loss achieve better performance. From a practical perspective, we find that rKL-LD requires significantly less hyperparameter optimization and yields more stable training behavior.
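For orientation, both losses have standard forms in the diffusion-sampler literature; the following is a minimal sketch from those usual definitions, with notation assumed here rather than taken from the paper. Writing \mathbb{P}_\theta for the path measure of the learned sampler and \mathbb{Q} for the target path measure:

\[
\mathcal{L}_{\mathrm{rKL}}(\theta) = D_{\mathrm{KL}}\!\left(\mathbb{P}_\theta \,\|\, \mathbb{Q}\right) = \mathbb{E}_{X \sim \mathbb{P}_\theta}\!\left[\log \frac{d\mathbb{P}_\theta}{d\mathbb{Q}}(X)\right],
\qquad
\mathcal{L}_{\mathrm{LV}}(\theta) = \mathrm{Var}_{X \sim \mathbb{W}}\!\left[\log \frac{d\mathbb{P}_\theta}{d\mathbb{Q}}(X)\right],
\]

where \mathbb{W} is a reference measure (on-policy: \mathbb{W} = \mathbb{P}_\theta). The log-derivative (score-function) gradient of the rKL loss, which the rKL-LD estimator mentioned in the abstract is built on, follows from the identity \mathbb{E}_{\mathbb{P}_\theta}[\nabla_\theta \log \mathbb{P}_\theta] = 0:

\[
\nabla_\theta \mathcal{L}_{\mathrm{rKL}} = \mathbb{E}_{X \sim \mathbb{P}_\theta}\!\left[\log \frac{d\mathbb{P}_\theta}{d\mathbb{Q}}(X)\,\nabla_\theta \log \mathbb{P}_\theta(X)\right],
\]

in contrast to the reparametrization-trick gradient, which differentiates through the sampled trajectory itself.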
What can #spiritual #ecology do in concrete terms?
If the evil lies in our assumptions, in our #worldview, then that is where we must start. That is why #education is the first concrete measure.
As a society, we have become entrenched in a stubborn #materialistic way of thinking. But even convinced materialists must admit that this attitude toward #nature has put our planet in danger.
So the solution lies in #rethinking. Viewing all life as something #sacred is a reasonable alternative.