My #introduction needs a refresh.

I'm a #PhD student at the University of #Vermont, studying the #Evolution of #Evolvability. I'm into #AI, #ALife, #Biology, and #Philosophy, because I want to understand #life, #adaptation, and #intelligence using my native language of #ComputerScience. I share my musings and #research on my #blog. I love #science generally, and am full of bitchy #AcademicChatter.

I was a #SoftwareEngineer in #SiliconValley for many years, but left in 2021. I'm glad I did, and now I feel a bit betrayed by the #TechIndustry. I've been going back to my #FOSS roots, and gradually #DeGoogle ing my life. I still love to talk about #code #craft, #UX, and healthy #engineering #culture. Recently I've been enjoying #gpu #programming, mostly in #taichilang.

I have a wife and a #cat. I love #nature, #photography, #cooking, and #yoga.

All kinds of people are valid and worthy, but #trans people, folks on the #autism spectrum, and #bipoc get a shout out right now because they need our support.

I've been trying Jax for an Alife programming project, and that's just made me appreciate Taichi more.

The big selling point of Jax for a lot of people is vmap, an easy way to vectorize Python code (and, paired with jit, compile it too). It can get you a huge performance boost for little effort on custom code operating on Numpy-style arrays. That's already a boon for many projects! It's also perfect for "glue code" between GPU-based libraries or neural networks, since it avoids memory transfers over the PCIe bus. What it doesn't do is use all the threads on your GPU, which is a shame, because there are thousands of them. For that, you have to write a custom kernel using an underdeveloped side library.
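To make the vmap point concrete, here's a minimal sketch (the function names are my own, just for illustration): you write a function for one sample, then vmap batches it and jit compiles the whole thing.

```python
import jax
import jax.numpy as jnp

# A function written for a single input vector...
def affine(w, x):
    return jnp.dot(w, x) + 1.0

# ...vectorized over a batch of x's (axis 0), then jit-compiled.
# w is broadcast unchanged via in_axes=None.
batched = jax.jit(jax.vmap(affine, in_axes=(None, 0)))

w = jnp.ones(3)
xs = jnp.arange(12.0).reshape(4, 3)  # batch of 4 vectors
result = batched(w, xs)  # → [4., 13., 22., 31.]
```

No loops, no manual batching, and the whole pipeline stays on-device.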

Taichi requires more thoughtful coding than Jax, but it lets you write kernels that use your whole GPU in a simple, clean way without manually managing grid sizes and memory allocations. This is a huge win for big simulation jobs, in terms of performance and ease of use.

#jax #taichilang #numpy #python #programming #alife

Hi folks! I am starting a research blog on creative coding and (non-generative) AI in music - mostly notes to my future self πŸ˜€ https://helenbledsoe.com/research-blog/

My "normal" blog for flutists and composers will still continue πŸ˜€ https://helenbledsoe.com/blog/

#artistresearch
#artisticresearch
#maxmsp
#taichilang
#python
#audiovisual
#creativecoding
#aimusic

Research Blog in Creative Coding and AI

Exploring creative coding and AI in music

@isaaclyman
That's pretty much how #Python optimizing compilers like #cython, #mypyc, #numba, and #TaichiLang work, and iiuc is the idea behind #MOJOlang.

As for leaky abstractions, I'd mitigate that by moving the lower-level algorithms into a separate module and limiting the optimization pass to that module. Higher-level modules, like CLI entry points or API server route handlers, shouldn't need the extra optimization.
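A tiny sketch of the layering I mean (hypothetical module names; the comments mark where a compiler like numba or Cython would be pointed):

```python
# hot_math.py -- hypothetical low-level module. The optimization pass
# (Cython, mypyc, numba's @njit, etc.) would target ONLY this file,
# so its restrictions stay contained here.
def dot(xs, ys):
    # Tight numeric loop: exactly the kind of code worth compiling.
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

# cli.py -- hypothetical high-level entry point. Stays plain Python,
# so any leaky compiler abstractions never reach this layer.
def main(argv):
    xs = [float(a) for a in argv]
    return dot(xs, xs)

result = main(["1", "2", "3"])  # → 14.0
```

The high-level code calls the hot module through an ordinary function boundary, so you can swap the compiled version in and out without touching the callers.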

I've finally finished my traditional NEAT-based evolver of "interesting" Game of Life patterns. It sucks! But, now I'm convinced that it sucks, not because of my code, but because of the limitations of this algorithm. That's perfect! 'Cuz now I get to use it as my control for something way more interesting.

Now for the fun part. 

#programming #EvolutionaryComputation #taichilang #gecco2024

I'm having a lot of fun developing my custom, GPU-accelerated implementation of the NEAT algorithm, even though it's extremely challenging work. Currently beginning to tune the evolutionary algorithm for managing a diverse population of neural networks.

#programming #EvolutionaryComputation #taichilang

Ugh, I hate running into language / compiler bugs. It's hard to understand the problem when the computer isn't actually doing what your code says it should do.

Currently being forced to rethink my designs to work around a surprise language limitation.

#programming #taichilang

I'm having a blast doing some challenging parallel programming work for my evolutionary robotics final project.

Honestly, the tech industry gave me very few opportunities to write interesting algorithms! Academia and the massively parallel nature of my research give me a good excuse to go nuts.

Also, I love how parallel programming engages the visuospatial parts of my mind! For me, it's a new level of coding bliss. Even when I'm pulling out my hair.

#programming #cuda #taichilang

I adore #taichilang. One of my all-time favorite #programming languages, I think. It's a simple, elegant way to do high performance parallel programming and graphics work that integrates seamlessly into #python code. It's got some annoyances, but overall it's a delight to use, IMO.

I’m learning #TaichiLang. It’s got a lot of promise and a few major frustrations. I hope it continues to grow and improve!

Mostly I love the #programming model. It gives you the power to implement complex parallel algorithms for the #GPU, in an elegant way, from idiomatic #Python. It hides a lot of #CUDA complexity, and does an amazing job computing grid layouts and optimizing performance for you. Automatic differentiation is a killer feature for my weird #AI projects, though I wish there were better #NeuralNetwork training utilities.

On the other hand, the language can be pretty fiddly to work with. Taichi does some pretty intensive rewriting of kernel code, and that only works if it’s structured in just the right way. I find the limitations easy to work around, but hard to anticipate, so I do a lot of rewriting. Also, the error messages are garbage. I basically just comment out code and insert print statements until I find the offending line, then fiddle away.