exploring the phonetic space by just setting large chunks of the dimensions to arbitrary values
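(for the curious: this kind of probing is just clamping a slice of the latent and decoding. a minimal numpy sketch, with a random vector standing in for a real encoded word and the actual decoder left out:)

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((1, 1, 128))  # stand-in for a real encoded latent

# clamp a big contiguous chunk of dimensions to an arbitrary constant
z_probe = z.copy()
z_probe[..., 32:96] = 2.5

# running z_probe through the VAE decoder + spelling model (not shown)
# would give one decoded "word" per chunk/value combination you try
```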
visualizing the vectors in the latent phonetic space while interpolating between "abacus" and "mastodon." (this is after inferring the latent vectors via orthography->phoneme features->VAE). I just reshaped the vectors from (1, 1, 128) to (8, 16) for display, so the 2d patterns are arbitrary. still interesting to see what it's actually learning!
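roughly what that looks like as code: linear interpolation between the two latents, each frame reshaped to (8, 16) for imshow. the random vectors here are stand-ins for the real encodings of "abacus" and "mastodon":

```python
import numpy as np
import matplotlib.pyplot as plt

def interpolate(z_a, z_b, steps=8):
    # straight linear interpolation in the latent space
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

# stand-ins; in practice these come from orthography -> phoneme features -> VAE
z_abacus = np.random.randn(1, 1, 128)
z_mastodon = np.random.randn(1, 1, 128)

frames = interpolate(z_abacus, z_mastodon)
fig, axes = plt.subplots(1, len(frames), figsize=(16, 2))
for ax, z in zip(axes, frames):
    ax.imshow(z.reshape(8, 16))  # the (8, 16) layout is arbitrary
    ax.axis("off")
plt.show()
```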
I extracted 45k noun phrases from the wikipedia pages for every unicode character in the 'punctuation' and 'symbol' categories. here are 24 sampled at random
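in case anyone wants the recipe: unicodedata gives you the P* (punctuation) and S* (symbol) categories, and spacy's noun_chunks handles the noun phrase part. the wikipedia_text_for helper below is hypothetical (page fetching not shown), and this assumes en_core_web_sm is installed:

```python
import random
import sys
import unicodedata

import spacy

nlp = spacy.load("en_core_web_sm")

# every assigned codepoint in a punctuation (P*) or symbol (S*) category
chars = [
    chr(cp) for cp in range(sys.maxunicode + 1)
    if unicodedata.category(chr(cp)).startswith(("P", "S"))
]

def wikipedia_text_for(char):
    """hypothetical helper: return the plain text of the character's
    wikipedia page (e.g. via the wikipedia API); fetching not shown."""
    ...

noun_phrases = []
for char in chars:
    text = wikipedia_text_for(char)
    if not text:
        continue
    noun_phrases.extend(chunk.text for chunk in nlp(text).noun_chunks)

print(random.sample(noun_phrases, min(24, len(noun_phrases))))
```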
ah yes, my favorite john mayer song
decoding the same underlying vectors from the VAE using the french spelling model, for some reason, sure, whatever
using the phonetic VAE to interpolate between US state names in a grid
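one way to build such a grid: bilinear interpolation between four corner latents (here random stand-ins for four encoded state names), then decode every cell:

```python
import numpy as np

def bilerp_grid(z00, z01, z10, z11, rows=5, cols=5):
    # bilinear interpolation between four corner latent vectors
    grid = []
    for u in np.linspace(0.0, 1.0, rows):
        row = []
        for v in np.linspace(0.0, 1.0, cols):
            top = (1 - v) * z00 + v * z01
            bottom = (1 - v) * z10 + v * z11
            row.append((1 - u) * top + u * bottom)
        grid.append(row)
    return grid

# stand-ins for the encoded latents of four state names
corners = [np.random.randn(128) for _ in range(4)]
latent_grid = bilerp_grid(*corners)
# decoding each cell of latent_grid yields the in-between "state names"
```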
love these new ornamental dingbats in unicode
chart of the day: voiced obstruents (i.e., phonemes like /b/, /g/, /z/) in pokémon names, by evolution level
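(the counting itself is simple once you have transcriptions: tally ARPAbet voiced obstruents per evolution level. the transcriptions and levels below are made-up examples, not the real dataset:)

```python
from collections import defaultdict

# ARPAbet voiced obstruents: voiced stops, fricatives, and affricates
VOICED_OBSTRUENTS = {"B", "D", "G", "V", "DH", "Z", "ZH", "JH"}

# hypothetical rows: (name, arpabet transcription, evolution level)
pokemon = [
    ("bulbasaur", ["B", "AH", "L", "B", "AH", "S", "AO", "R"], 1),
    ("ivysaur",   ["AY", "V", "IY", "S", "AO", "R"], 2),
    ("venusaur",  ["V", "IY", "N", "AH", "S", "AO", "R"], 3),
]

counts = defaultdict(lambda: [0, 0])  # level -> [voiced obstruents, all phonemes]
for name, phones, level in pokemon:
    counts[level][0] += sum(p in VOICED_OBSTRUENTS for p in phones)
    counts[level][1] += len(phones)

for level in sorted(counts):
    voiced, total = counts[level]
    print(f"level {level}: {voiced}/{total} phonemes are voiced obstruents")
```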
going back to the regular seq2seq networks, I'm trying to do some quantitative evaluation. the phoneme-features-to-orthography model gets... ~60% of words wrong, and ~12% of letters wrong (evaluated on a sample of a few thousand words from cmudict), but its guesses seem... reasonable? not sure how to talk about this
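one reasonable way to compute those two numbers: exact match for the word rate, and edit distance over reference letters for the letter rate. the pairs below are toy examples, not the real cmudict sample:

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def error_rates(pairs):
    # pairs of (predicted spelling, reference spelling)
    word_err = sum(pred != ref for pred, ref in pairs) / len(pairs)
    letter_err = (sum(levenshtein(pred, ref) for pred, ref in pairs)
                  / sum(len(ref) for _, ref in pairs))
    return word_err, letter_err

word_err, letter_err = error_rates([("tomatoe", "tomato"), ("cat", "cat")])
print(f"word error: {word_err:.0%}, letter error: {letter_err:.0%}")
```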
(a) minimalist definition of narrative (b) name of a hit new YA series (c) phrase from a movie review that damns with faint praise (d) something rad to memorize & recite as your last words ("well, that sure was...") (e) all of the above