fly51fly (@fly51fly)

A tweet announcing that the paper 'Blind denoising diffusion models and the blessings of dimensionality' (Z. Kadkhodaie, A. Pooladian, S. Chewi, E. Simoncelli; Simons Foundation & Yale University, 2026) has been posted on arXiv. It introduces work on blind denoising diffusion models and the benefits of high dimensionality.

https://x.com/fly51fly/status/2021704465548726357

#diffusionmodels #denoising #arxiv #research


[LG] Blind denoising diffusion models and the blessings of dimensionality Z Kadkhodaie, A Pooladian, S Chewi, E Simoncelli [Simons Foundation & Yale University] (2026) https://t.co/sOEbQeqrcZ


Someone at AI Research Roundup made a video about our SharpXR: Structure-Aware Denoising for Pediatric X-Rays
https://www.youtube.com/watch?v=UXFYh7GPDR4

#AI #MachineLearning #DeepLearning #MedicalImaging #Denoising #PediatricRadiology #ChestXRay


**Title:**
**Open-source AI upscaling for anime: the best free tools that preserve the style**

**Intro:**
Upscaling anime images is a tricky business: generic algorithms easily turn fine linework and flat color fills into blurry mush. Fortunately, there are **open-source solutions** trained specifically on anime and manga. They use neural networks to increase resolution, remove noise, and restore detail while preserving the visual language of the original. Below is a curated list of free tools you can run locally on Windows, macOS, or Linux, with no subscriptions and no closed "black boxes."

Below is a selection of **free and open (#OpenSource) tools** for improving anime image quality with **AI upscaling (#AIUpscaling)**. All of them aim to preserve the anime aesthetic (#AnimeStyle), work for manga and illustrations (#Anime #Manga #DigitalArt), and are distributed as open source (#FOSS).

### 1. Waifu2x (including Waifu2x-ncnn-vulkan)

A classic neural upscaler (#Waifu2x) trained specifically on anime images. Used to increase resolution (#Upscaling), remove noise (#Denoising), and keep contours sharp without distortion. The ncnn-vulkan build accelerates processing on the GPU (#Vulkan #GPU).

### 2. Anime4K

A set of algorithms (#Anime4K) for fast upscaling and denoising of anime video and images, including in real time (#Realtime). Most often used inside video players, but it works for still images as well (#AnimeVideo).

### 3. Upscayl (Real-ESRGAN Anime 6B)

A simple GUI tool (#Upscayl) built on Real-ESRGAN (#ESRGAN) that supports anime models such as Anime 6B and mangascale. It handles low-resolution inputs and compression artifacts well and supports batch processing (#BatchProcessing).

### 4. Cupscale

A graphical front end (#Cupscale) for Real-ESRGAN with finer quality control. Suited to advanced users who want to compare models and fine-tune the result (#ImageProcessing).

### 5. Clarity AI

A web tool (#ClarityAI) for upscaling and enhancing images right in the browser (#WebTool). It has a dedicated anime mode and an adjustable degree of stylization (#StyleControl).

### 6. chaiNNer

A node-based editor (#chaiNNer) for building complex image-processing pipelines. It lets you combine upscaling and denoising in a single graph (#NodeBased #Workflow).

All of the tools listed use neural networks for image processing (#NeuralNetworks) and in most cases benefit from a GPU (#GPUAcceleration). For a quick start, try **Waifu2x** or **Upscayl**; for complex workflows, **chaiNNer**.
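As a point of reference for what these tools automate, the basic "denoise, then upscale" graph they all implement can be sketched with classical filters in a few lines of Pillow. This is a hypothetical baseline for comparison, not how any of the tools above work internally (they use trained neural networks); a median filter stands in for the learned denoiser and Lanczos resampling for the learned upscaler:

```python
from PIL import Image, ImageFilter

def upscale_2x(path_in, path_out):
    """Classical baseline pipeline: denoise first, then resample at 2x."""
    img = Image.open(path_in).convert("RGB")
    img = img.filter(ImageFilter.MedianFilter(size=3))   # simple denoising pass
    w, h = img.size
    img = img.resize((2 * w, 2 * h), Image.LANCZOS)      # 2x upscale
    img.save(path_out)
    return img
```

On anime linework this baseline exhibits exactly the failure mode described in the intro (softened edges, blurred flat fills), which is why the anime-trained models above exist.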

Open-source AI upscaling for anime: the best free tools that preserve the style
https://orwellboxxx4.blogspot.com/2026/01/open-source.html

The Math Art of Artist 0thernes “Not The Typical”

Prompt share for math art

Medium

"To generate images, diffusion models use a process known as denoising. They convert an image into digital noise (an incoherent collection of pixels), then reassemble it. It’s like repeatedly putting a painting through a shredder until all you have left is a pile of fine dust, then patching the pieces back together. For years, researchers have wondered: If the models are just reassembling, then how does novelty come into the picture? It’s like reassembling your shredded painting into a completely new work of art.

Now two physicists have made a startling claim: It’s the technical imperfections in the denoising process itself that lead to the creativity of diffusion models. In a paper that will be presented at the International Conference on Machine Learning 2025, the duo developed a mathematical model of trained diffusion models to show that their so-called creativity is in fact a deterministic process — a direct, inevitable consequence of their architecture.

By illuminating the black box of diffusion models, the new research could have big implications for future AI research — and perhaps even for our understanding of human creativity. “The real strength of the paper is that it makes very accurate predictions of something very nontrivial,” said Luca Ambrogioni, a computer scientist at Radboud University in the Netherlands."
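The shred-and-reassemble loop the quote describes can be sketched abstractly. Everything here is illustrative: `denoise_step` is a hypothetical stand-in for a trained network (a real diffusion model predicts and subtracts noise it learned from data), and the schedule is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(x, t, T):
    """Forward process: mix the image with Gaussian noise.
    At t == T nothing of the signal remains (the 'pile of fine dust')."""
    keep = 1.0 - t / T                       # fraction of signal kept
    return keep * x + np.sqrt(1.0 - keep**2) * rng.normal(size=x.shape)

def denoise_step(x_t, t, T):
    """Stand-in for a trained denoiser: gently shrink the sample.
    A real model would predict the noise with a neural network."""
    return x_t * (1.0 - 1.0 / T)

def sample(shape, T=50):
    """Reverse process: start from pure noise and denoise step by step."""
    x = rng.normal(size=shape)               # pure digital noise
    for t in range(T, 0, -1):
        x = denoise_step(x, t, T)
    return x

img = sample((8, 8))                         # a "generated" 8x8 image
```

The paper's point, in these terms, is that small systematic imperfections in `denoise_step`, accumulated over many iterations of this loop, are what produce samples that differ from the training data.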

https://www.quantamagazine.org/researchers-uncover-hidden-ingredients-behind-ai-creativity-20250630/

#AI #GenerativeAI #DiffusionModels #Denoising #Creativity

Researchers Uncover Hidden Ingredients Behind AI Creativity | Quanta Magazine

Image generators are designed to mimic their training data, so where does their apparent creativity come from? A recent study suggests that it’s an inevitable by-product of their architecture.

Aiarty Video Enhancer: Desktop AI Tool for Upscaling, Denoising & Deblurring – Faster Than Ever!

As imaging and video technology keep evolving, capturing and delivering sharp, clean visuals without losing time has become essential.

PetaPixel

Wait, three of the Technical Oscars went to different denoising algorithms? That seems like a lot. Maybe this should be a new category.

#AcademyAwards #oscars #MachineLearning #denoising

A fully automated, faster noise rejection approach to increasing the analytical capability of chemical imaging for digital histopathology.
PLoS ONE 14(4): e0205219, 2019
https://doi.org/10.1371/journal.pone.0205219
#denoising #openaccess

Chemical hyperspectral imaging (HSI) data is naturally high dimensional and large. There are thus inherent manual trade-offs between acquisition time and data quality. Minimum Noise Fraction (MNF), developed by Green et al. [1], has been extensively studied as a method for noise removal in HSI data. It, too, entails a manual speed-accuracy trade-off, namely the process of manually selecting the relevant bands in the MNF space. This process currently takes roughly a month to acquire and pre-process an entire TMA with an acceptable signal-to-noise ratio. We present three approaches, termed ‘Fast MNF’, ‘Approx MNF’ and ‘Rand MNF’, that reduce the computational time of the algorithm and fully automate the band-selection process. This automated approach is shown to perform at the same level of accuracy as MNF while achieving large speedup factors, allowing the same task to be accomplished in hours. The different approximations produced by the three algorithms show the trade-off between reconstruction accuracy, storage (50×), and runtime speed (60×). We apply the approach to automating the denoising of different tissue histology samples, in which the accuracy of classification (differentiating between the different histologic and pathologic classes) strongly depends on the SNR (signal-to-noise ratio) of the recovered data. Therefore, we also compare the effect of the proposed denoising algorithms on classification accuracy. Since denoising HSI data is unsupervised, we also use a metric that assesses the quality of denoising in the image domain between the noisy and denoised images in the absence of ground truth.
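For intuition, the core MNF operation (noise-whitening followed by a principal-component projection that keeps the k highest-SNR bands) can be sketched in NumPy. This is a generic illustration, not the paper's Fast/Approx/Rand variants, and the neighbor-difference noise estimate is one common convention:

```python
import numpy as np

def mnf_denoise(X, k):
    """Minimum Noise Fraction denoising (generic sketch).
    X: (pixels, bands) hyperspectral matrix; k: number of bands to keep."""
    mu = X.mean(axis=0)
    # Estimate the noise covariance from differences of neighboring pixels
    N = np.diff(X, axis=0)
    noise_cov = N.T @ N / N.shape[0]
    # Whiten the data with respect to the noise covariance
    evals, evecs = np.linalg.eigh(noise_cov)
    W = evecs / np.sqrt(np.maximum(evals, 1e-12))
    Xw = (X - mu) @ W
    # PCA in the whitened space: keep the k highest-variance (highest-SNR) bands
    U, s, Vt = np.linalg.svd(Xw, full_matrices=False)
    Xw_k = (U[:, :k] * s[:k]) @ Vt[:k]
    # Map the truncated representation back to the original band space
    return Xw_k @ np.linalg.inv(W) + mu
```

The manual step the paper automates is the choice of which bands to keep; its contribution is performing that selection automatically while also cutting the computational cost.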

“Self-Rectifying Diffusion Sampling with Perturbed-Attention Guidance”

Perturbed-Attention Guidance (#PAG) is a diffusion sampling guidance technique that improves sample quality in both conditional and unconditional settings, without additional training or the integration of external modules. PAG improves the structure of synthesized samples during the #denoising process by manipulating selected self-attention maps in the diffusion #UNet.

🔗 https://ku-cvlab.github.io/Perturbed-Attention-Guidance/
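Whatever the attention perturbation looks like inside the UNet, the guidance combination itself is a single extrapolation step. A minimal sketch (the function name and the idea of passing in two precomputed noise predictions are assumptions about the interface, not the project's actual API):

```python
import numpy as np

def pag_combine(eps_normal, eps_perturbed, scale):
    """Combine the two noise predictions for Perturbed-Attention Guidance.

    eps_normal:    noise predicted with the usual self-attention maps
    eps_perturbed: noise predicted with selected self-attention maps
                   perturbed, which degrades sample structure
    scale:         guidance strength; scale == 0 recovers plain sampling
    """
    return eps_normal + scale * (eps_normal - eps_perturbed)

# Extrapolating away from the structurally degraded prediction:
guided = pag_combine(np.ones(4), np.zeros(4), 3.0)  # each entry: 1 + 3*(1 - 0) = 4
```

The design mirrors classifier-free guidance, but the "bad" branch comes from perturbing attention rather than dropping the condition, which is why it also works unconditionally.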


'On Efficient and Scalable Computation of the Nonparametric Maximum Likelihood Estimator in Mixture Models', by Yangjing Zhang, Ying Cui, Bodhisattva Sen, Kim-Chuan Toh.

http://jmlr.org/papers/v25/22-1120.html

#hessian #denoising #likelihood
