but I was talking about the sun
Approximations
Elizabeth Hazen
#Poetry #ElizabethHazen #Approximations #TheShore #Sun #Proton #Anger #Twinkies #MiddleAge
https://www.theshorepoetry.org/elizabeth-hazen-approximations
'Laplace Meets Moreau: Smooth Approximation to Infimal Convolutions Using Laplace's Method', by Ryan J. Tibshirani, Samy Wu Fung, Howard Heaton, Stanley Osher.
http://jmlr.org/papers/v26/24-0944.html
#convolutions #laplace #approximations
France24 speaks French like a "vache espagnole" (that is, badly).
And these guys are journalists.... (or is it #ia_generative?)
'Correction to "Wasserstein distance estimates for the distributions of numerical approximations to ergodic stochastic differential equations"', by Daniel Paulin, Peter A. Whalley.
http://jmlr.org/papers/v25/24-0895.html
#ergodic #wasserstein #approximations
'Low-rank Variational Bayes correction to the Laplace method', by Janet van Niekerk, Haavard Rue.
http://jmlr.org/papers/v25/21-1405.html
#variational #hyperparameters #approximations
Fast Kernel Methods for Generic Lipschitz Losses via $p$-Sparsified Sketches
Tamim El Ahmad, Pierre Laforgue, Florence d'Alché-Buc
Action editor: Makoto Yamada.
Non-asymptotic approximations of Gaussian neural networks via second-order Poincaré inequalities
The real problem there is when #QFT (for example) is no better overall at explaining. If the leading-edge theories were sufficiently developed, there would be easy-to-follow modelling.
Also, the notion that #maths is capable of doing this without proper models & explanations leads to false impressions. ("Shut up & calculate" exemplifies this.)
I think appeals to authority are part of it. When anyone who is "supposed to know" can't actually explain without resorting to contradictory analogies, they just fall back on the pressure of "this is the version that will be given points on the test" for practical consideration, sprinkling in enough "mystery & #paradox" to hold the superior emotional center.
This creates a #society educated into false consensus, with little trust in pursuing #reason or truth, and instead places the most value on marching forward without such clarity. The #information trickle down effect places teachers right in the middle of this dilemma.
Note that this is a necessary element we've evolved with; we won't eliminate it. We can, however, spend more time #teaching about the shortcomings of the #approximations students are expected to learn, and encourage them to contribute to the next level of understanding rather than to #fear trying & failing.
Solving #brain dynamics gives rise to flexible machine learning models | #MIT CSAIL
Liquid neural networks made an order of magnitude faster and more scalable through closed-form #approximations — the "closed-form continuous-time" (CfC) neural network.
#LiquidNeuralNetworks #DeepLearning #ContinuousTime #ODE #AI #MachineLearning
https://www.csail.mit.edu/news/solving-brain-dynamics-gives-rise-flexible-machine-learning-models
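The core idea behind CfC networks — replacing a numerical ODE solver with a closed-form expression for the hidden state — can be sketched roughly as a time-gated blend of two learned branches. This is a minimal NumPy illustration of that gating idea, not the CSAIL/MIT implementation; all weight names and shapes here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    # Clip for numerical stability before exponentiating.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def cfc_cell(x, t, Wf, Wg, Wh, bf, bg, bh):
    """Sketch of one closed-form continuous-time (CfC) style update.

    Instead of integrating an ODE step by step with a solver, the
    state at elapsed time t is written in closed form: a sigmoidal
    time gate blends an "initial" branch g with a "steady-state"
    branch h. As t grows, the gate decays and h dominates.
    """
    f = x @ Wf + bf            # controls the time-constant gate
    g = np.tanh(x @ Wg + bg)   # branch dominant at small t
    h = np.tanh(x @ Wh + bh)   # branch dominant at large t
    gate = sigmoid(-f * t)     # ~1 at t=0, decays toward 0 for f > 0
    return gate * g + (1.0 - gate) * h
```

Because the whole trajectory is an explicit function of `t`, evaluating the state at any time costs one forward pass — no solver iterations — which is the source of the speedup the article describes.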