#approximation : a drawing, advancing or being near
- French: approximation
- German: die Annäherung
- Italian: ravvicinamento
- Portuguese: aproximação
- Spanish: aproximación
------------
Try our new word guessing game @ https://24hippos.com
mistakes were made, who needs perfect symmetry anyway?
Word of The Hour's Annual Survey @ https://wordofthehour.org/r/form

Your responses to the questions below will directly impact the future of Word of The Hour. Your support and kindness has really meant a lot over the past three years. Thank you so much! Michael Wehar https://wordofthehour.org [email protected]
Is KAN really better than MLP? A depth-separation property between the two architectures
Last summer a new neural-network architecture called Kolmogorov-Arnold Networks (KAN) was released. At the time, the KAN paper caused a stir in the machine-learning world, since KAN showed a substantial improvement in the quality of approximating various complex functions: the error of the new networks falls significantly faster as the number of parameters grows. However, everything has a price, and the price of such small loss values is slow training: KAN trains roughly 10 times slower than the good old MLP. This raises the question: how appropriate is it, after all, to use the new architecture instead of the familiar MLP? This article constructs a function that can be realized by a two-layer KAN of polynomial width, but cannot be approximated by any two-layer ReLU MLP of polynomial width.
https://habr.com/ru/articles/929972/
#kan #mlp #approximation #math #machine_learning #deep_learning #science #neural_networks #research
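The architectural contrast behind the post's claim can be sketched in a few lines. This is an illustrative toy, not the article's construction: a real KAN learns spline edge functions, while here they are plain Python callables.

```python
import math

def mlp_layer(x, W, b):
    # MLP: a fixed nonlinearity (ReLU) applied to a learned weighted sum,
    # y_j = ReLU(sum_i W[j][i] * x[i] + b[j]).
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + bj)
            for row, bj in zip(W, b)]

def kan_layer(x, phis):
    # KAN: a learnable 1-D function on every edge, summed at the node,
    # y_j = sum_i phi[j][i](x[i]).  The KAN paper uses splines for phi;
    # any callables stand in for them here.
    return [sum(phi(xi) for phi, xi in zip(row, x)) for row in phis]
```

The depth-separation result is about exactly this difference: stacking sums of learnable 1-D functions can realize certain compositions compactly, while a two-layer ReLU network needs super-polynomial width to approximate them.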
Pi Approximation Day (22/7) actually falls on 21.9911485751286/7 = 07-21T23:47:15.237...
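The arithmetic behind that timestamp (pi times 7, read as a day-of-July with a fractional part) can be checked directly:

```python
import math

# 22/7 overshoots pi, so the "true" Pi Approximation Day falls slightly
# before July 22: pi * 7 = 21.9911... days, i.e. July 21 plus a fraction.
days = math.pi * 7
day = int(days)                      # 21 -> July 21
frac = days - day                    # fractional part of that day
hours = frac * 24
minutes = (hours - int(hours)) * 60
seconds = (minutes - int(minutes)) * 60
stamp = f"07-{day}T{int(hours):02d}:{int(minutes):02d}:{seconds:06.3f}"
print(stamp)                         # matches the timestamp in the post
```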
It is well known that the "spherical cow" is an overly aggressive approximation. Hence:
The spherical cow approximation is widely used in the literature, but is rarely justified. Here, I propose several schemes for extending the spherical cow approximation to a full multipole expansion, in which the spherical cow is simply the first term. This allows for the computation of bovine potentials and interactions beyond spherical symmetry, and also provides a scheme for defining the geometry of the cow itself at higher multipole moments. This is especially important for the treatment of physical processes that are suppressed by spherical symmetry, such as the spindown of a rotating cow due to the emission of gravitational waves. I demonstrate the computation of multipole coefficients for a benchmark cow, and illustrate the applicability of the multipolar cow to several important problems.
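In the same spirit, the lowest-order moments of a "cow" can be computed from a point-mass model. The masses and positions below are made-up illustration values, not from the abstract; the monopole term is the spherical-cow approximation, and the higher moments capture the departure from sphericity.

```python
# Toy multipole expansion of a discrete "cow" as point masses (mass, x, y, z).
cow = [
    (300.0,  0.8, 0.0, 0.0),   # front half
    (350.0, -0.8, 0.0, 0.0),   # rear half
    (50.0,   1.5, 0.0, 0.3),   # head
]

# Monopole: total mass -- all the spherical-cow approximation keeps.
monopole = sum(m for m, *_ in cow)

# Dipole: first moment of the mass distribution.
dipole = [sum(m * r[i] for m, *r in cow) for i in range(3)]

# Traceless quadrupole tensor Q_ij = sum m * (3 x_i x_j - r^2 delta_ij);
# this is the leading moment that sources gravitational-wave spindown.
quad = [[sum(m * (3 * r[i] * r[j]
                  - (i == j) * (r[0]**2 + r[1]**2 + r[2]**2))
             for m, *r in cow)
         for j in range(3)] for i in range(3)]
```

By construction the quadrupole tensor is traceless, so its diagonal sums to zero regardless of the cow's shape.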
Approximating floating-point multiplication with "addition"
#Murmuring #754 #addition #approximate #approximation #float #ieee #integer #multiplication
Seen via "Why Does Integer Addition Approximate Float Multiplication? (probablydance.com)"; the original article is "Why Does Integer Addition Approximate Float Multiplication?". The floating-point formats here are mainly IEEE 754-1985 (32-bit single precision and 64-bit double precision); because of how they are represented, "addition" happens to behave a bit like multiplication. The "addition" deliberately put in quotes here means directly taking the IEEE 754...
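The trick the article describes can be sketched for 32-bit floats: reinterpret the bit patterns as integers, add them, and subtract the bit pattern of 1.0f, which cancels the doubled exponent bias. This is a sketch of the idea, not the article's exact code.

```python
import struct

def f2i(x):
    # Bit pattern of a 32-bit float, as an unsigned integer.
    return struct.unpack('<I', struct.pack('<f', x))[0]

def i2f(n):
    # Inverse: reinterpret an unsigned integer as a 32-bit float.
    return struct.unpack('<f', struct.pack('<I', n))[0]

BIAS = 0x3F800000  # bit pattern of 1.0f (exponent bias 127 << 23)

def approx_mul(a, b):
    # Adding bit patterns adds the exponent fields (the "log" part) and
    # roughly adds the mantissas, so the result approximates a * b for
    # positive, normal floats.  Error stays within a few percent.
    return i2f(f2i(a) + f2i(b) - BIAS)
```

For example, `approx_mul(3.0, 5.0)` gives 14.0 instead of 15.0, about 7% off; the approximation is exact whenever both mantissas line up, as with powers of two.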