Andrew Geng

22 Followers
29 Following
179 Posts
Math artist, composer, code monkey.
Website: https://pteromys.melonisland.net
GitHub: https://github.com/pteromys

Hey all, I made a header-only C++ library where it's one line of code to init, then you can start writing to pixels on the screen.

I call it thirteen.h, as it's inspired by the simplicity of the mode 13h days.

Examples include a Mandelbrot viewer and a playable Minesweeper game.

MIT licensed.

https://github.com/Atrix256/Thirteen

Anyway it's an amazing feeling to realize that, because of questionable choices you made a long time ago, you have exactly the tool you need now.

And this is why I will be a hoarder when I'm old.

In https://arxiv.org/abs/1706.02515, the authors give a computer-assisted proof of this (under independence assumptions on the matrices that we'll just pray stay true long enough to do training) and explicitly pick out a formula that makes the fixed point (0, 1).
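You can sanity-check that fixed point numerically. A minimal sketch (the λ and α constants below are the standard SELU values from the paper; the Monte Carlo check is mine, not theirs): feed standard-normal inputs through SELU and confirm the output mean and variance come back out as (0, 1).

```python
import numpy as np

# SELU constants solved for in arXiv:1706.02515 so that (mean, variance) = (0, 1)
# is the fixed point of the layer map.
LAMBDA = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    # lambda * x for x > 0, lambda * alpha * (e^x - 1) otherwise
    return LAMBDA * np.where(x > 0, x, ALPHA * np.expm1(x))

# Monte Carlo check: standard-normal inputs in, (mean, variance) ~ (0, 1) out.
rng = np.random.default_rng(0)
y = selu(rng.standard_normal(1_000_000))
m, v = y.mean(), y.var()
print(m, v)  # both should be close to 0 and 1 respectively
```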

Beware there might be some subtleties to the shape—at https://old.reddit.com/r/MachineLearning/comments/6g5tg1/ , Unterthiner remarked that forcing left and right to match slopes broke attempts to solve for a fixed point.

Self-Normalizing Neural Networks

Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.

What's SELU, you might ask? A trick to make deeper neural networks self-normalizing—

As best I've understood it, the idea seems to be that repeatedly passing a column vector through alternating steps of:

1. multiply by a matrix whose rows are nearly mean=0, variance=1
2. pass each entry through an "activation function" with the right shape

...draws the (mean, variance) of its entries toward a fixed point, which gives you some hope of doing without batch normalization even as you add more layers.
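The two steps above can be sketched directly (a toy experiment under my assumptions, not the paper's exact setup: Gaussian weights with variance 1/n, so each row has entries of mean ≈ 0 and squared norm ≈ 1). Start with a deliberately off-center vector and watch (mean, variance) get pulled toward (0, 1):

```python
import numpy as np

# Standard SELU constants from arXiv:1706.02515.
LAMBDA = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    return LAMBDA * np.where(x > 0, x, ALPHA * np.expm1(x))

rng = np.random.default_rng(0)
n = 1024
# Start deliberately off the fixed point: mean 0.8, variance 4.
x = 0.8 + 2.0 * rng.standard_normal(n)
for _ in range(32):
    # Entries ~ N(0, 1/n): each row has mean near 0 and squared norm near 1.
    w = rng.normal(0.0, n ** -0.5, (n, n))
    x = selu(w @ x)
print(x.mean(), x.var())  # drawn toward (0, 1)
```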

Want to see a demo of SELU (Klambauer+Unterthiner+Mayr+Hochreiter 2017)? I just added it to https://pteromys.melonisland.net/neuralnets/

1. Click into the 2D tab, hit the reroll button, and then run it for about 2000 steps just to see what the default settings do.
2. Change tanh to selu, enter 8,8,8,8,8,8,8,8,8,8,8,8 in the next box, then reroll and run for 500 steps.
3. Switch back to tanh and see how long it takes to get the same level of detail.
4. Go for a walk—watching too long gives you motion sickness.

General reminder:

The domain name putty.org is *NOT* run by the #PuTTY developers. It is run by somebody not associated with us, who uses the domain to interpose advertising for their unrelated commercial products. We do not endorse those products in any way, and we have never given any kind of agreement for PuTTY's name to be used in promoting them.

Please do not perpetuate the claim that putty.org is the PuTTY website. If anyone is linking to it on that basis, please change the link. The PuTTY website is https://www.chiark.greenend.org.uk/~sgtatham/putty/ and it always has been.

You can check this by downloading the source code, which cites that URL in many places (the README, the documentation, some strings in the actual code), or by using the "Visit Web Site" menu options in the official Windows binaries (the ones signed with my personal Authenticode certificate). The true PuTTY website is the one that PuTTY itself says it is.

Many search engines list putty.org above chiark. I don't know if this is due to active SEO on the part of the domain owner, or a heuristic in the rankings. Either way, don't believe them. It's not our site.

PuTTY: a free SSH and Telnet client

dook is now installable using `uv` from https://pypi.org/project/dook/ !

The biggest changes are a new config format (if I broke your config, sorry—post an issue on my GitHub if you have trouble migrating), bash and zsh completions, and tons of languages.

The silliest change is the `-i` flag for case-insensitive search, which attempts some questionable transformations if you specify it twice.

This month's high-resolution render for patrons of level Square and up!

We are inside the icosahedral quasicrystal obtained by projecting the 6-dimensional grid to 3 dimensions.

We view this as a 360° panorama from inside a cut-out cylindrical hole.

Full size: 25600×8640 pixels

#creativecoding
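For anyone curious, the cut-and-project construction behind this is easy to sketch in miniature (a 2D→1D analogue of the 6D→3D projection, my own toy version, not the render's actual code): take a strip around a line of slope 1/φ through the integer grid, keep the lattice points inside it, and project them onto the line. Out comes the Fibonacci chain—gaps of exactly two lengths, in golden ratio.

```python
import math

phi = (1 + math.sqrt(5)) / 2
s = math.hypot(phi, 1)  # normalization for the projection directions

pts = []
for m in range(-20, 20):
    for n in range(-20, 20):
        par = (phi * m + n) / s    # coordinate along the line
        perp = (-m + phi * n) / s  # perpendicular offset from the line
        # Accept points whose offset lies in a window as wide as the
        # perpendicular shadow of a unit square (the "cut").
        if 0 <= perp < (1 + phi) / s:
            pts.append(par)
pts.sort()
gaps = sorted({round(b - a, 6) for a, b in zip(pts, pts[1:])})
print(gaps)                # exactly two distinct gap lengths
print(gaps[1] / gaps[0])   # their ratio is phi
```

The icosahedral version is the same game with a 3-dimensional "line" and a 3-dimensional window inside a 6D lattice.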

Penrose cooling process. I learned this from Thomas Fernique.

The left part is nearly chaotic, the right part is the standard Penrose tiling, and the middle is in between.

Updated https://crates.io/crates/dook — mostly, a long-overdue change to disable recursion by default because it keeps getting bogged down searching for things it doesn't know are language builtins. Plus some more thoroughness detailed in the changelog: https://github.com/pteromys/dook/blob/main/CHANGES.md
crates.io: Rust Package Registry