Voyager probes: built to last.
Let's be clear: changes to tax laws do not 'force' the wealthy to leave the country.
The wealthy are choosing not to increase their contribution to the country they are (currently at least) living in.
Sure, that is a choice they are free to make, but it's their choice; no one is forcing them into anything.
They already have enough money, so refusing to pay higher taxes is a choice about what they wish to support in their (for now) home country!
They are not victims!
Sorry for the late responses. I've been traveling.
I don't hate on anyone using Python. We're scientists. The goal is to get the right answer. If Python works for you, great. You be you. I've come to the conclusion that it doesn't work for me or for what I want to do code-wise. What I was objecting to was the Python evangelism that is the usual response to my "so what else is there besides Matlab?" query.
1. The main examples of Python + numpy being inefficient have been in representations of large vectorized data files. Python tends to create lots of extra variables unnecessarily. @neuralreckoning's example of X .* Y + Z, which creates X .* Y as an intermediate variable, is a classic case. But I see this everywhere, often in unexpected places.
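To make that concrete, here is a minimal numpy sketch of the same expression (the X, Y, Z names follow the example above; the size is illustrative): the naive form materializes a hidden full-size temporary for the product, while ufuncs with an explicit out= buffer avoid it.

```python
import numpy as np

# Three large arrays, as in the X .* Y + Z example above
# (names and size are illustrative, not from any real dataset).
n = 10_000_000
X = np.random.rand(n)
Y = np.random.rand(n)
Z = np.random.rand(n)

# Naive form: numpy first materializes X * Y as a full n-sized
# temporary array, then allocates another array for the sum --
# roughly one extra n-sized buffer beyond the result itself.
result = X * Y + Z

# In-place form: reuse one preallocated buffer, so no hidden
# temporaries are created.
out = np.empty_like(X)
np.multiply(X, Y, out=out)  # out = X * Y, written in place
np.add(out, Z, out=out)     # out += Z, still in place
```

Expression compilers such as numexpr can fuse this kind of expression into a single pass with no temporaries at all, though that is yet another dependency, which cuts against the packaging point in (3) below.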
I am usually working at the limits of my computers' efficiency. The difference between fitting all of the data into memory and having to push some out to disk is obviously a big deal. And if my computer gets faster, I'm going to push more data through, to do the most I can within that limit.
The three concrete examples that we have experienced all entailed loading separate files of signals (say LFPs or voltage signals), putting them into a large matrix (say one row per signal), and then doing some big operation on them (say correlation, or vectorized mink, or something). Importantly, while it might be possible to do these efficiently in numpy, three separate highly experienced and talented programmers ended up with inefficient code that took hours to run and used massive amounts of memory, and thus ran into limitations on scale, while I was able to get Matlab to do them in minutes. Which brings me to what I think is really going on:
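For what it's worth, here is one hedged sketch of how that load-then-operate pattern can stay memory-bounded in numpy: preallocate the full matrix once and fill it row by row, rather than collecting per-file arrays in a list and stacking them at the end (which briefly needs roughly twice the final memory). The filenames, file format (.npy), and sizes are illustrative assumptions, not the actual pipelines described above.

```python
import numpy as np

# Hypothetical per-channel signal files (names and counts are
# illustrative; the real data could be LFPs, voltage traces, etc.).
files = [f"lfp_{i:03d}.npy" for i in range(128)]
n_samples = 1_000_000

# Preallocate the full (signals x samples) matrix once.  Appending
# each file's array to a Python list and calling np.vstack at the
# end would briefly hold both the list and the stacked copy.
signals = np.empty((len(files), n_samples), dtype=np.float32)
for row, fname in enumerate(files):
    signals[row, :] = np.load(fname)[:n_samples]

# One big vectorized operation on the whole matrix: the pairwise
# correlation between signals mentioned above (rows are variables).
corr = np.corrcoef(signals)
```

If the matrix outgrows RAM entirely, the same fill loop can target a disk-backed array created with np.lib.format.open_memmap, at the cost of exactly the disk traffic described above.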
2. One of the major issues is also code readability. In my experience, writing good Matlab code that is vectorized, fast, and efficient is relatively simple, looks readable, and is easy to parse. I mean, if I really wanted efficiency but didn't care about readability, I could always go back to #APL (where "invert the matrix" is a single character).
This is a particularly important issue for *training*. In my experience, getting a newbie to write efficient Matlab code is easy (not obvious, but very teachable), while getting a newbie to write efficient vectorized Python code is very, very hard.
3. Re packaging: @neuralreckoning is right. We maintain an intra-lab codeset that does not require an external package process. Because a lot of what I consider basic functionality (plotting functions!) is an external package in Python, I find that getting something with basic functionality requires massive dependencies, which are often incompatible.
Moreover, enforcing a system where everyone is using the same set of packages and the same codeset is particularly difficult. In my experience, running a lab entails an unruly balance between treating the lab like a company (where everyone is forced to do things exactly the same way because "I said so") and treating the lab like a department (a set of independent researchers who get to drive their own practices in the way that makes them the most comfortable). In practice, our intra-lab Matlab codeset has worked well. The Python package libraries have not.
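As an illustration of what that enforcement problem looks like on the Python side, here is a minimal, hypothetical version-pinning guard that a shared codeset could run at import time; the package names and version numbers are assumptions for the sketch, not our actual setup.

```python
# Hypothetical guard a shared intra-lab codeset could run on import
# to catch environments drifting from the lab's pinned versions.
import importlib.metadata

PINNED = {"numpy": "1.26.4", "scipy": "1.13.0", "matplotlib": "3.8.4"}

def check_environment():
    """Warn when installed packages differ from the pinned versions."""
    for pkg, expected in PINNED.items():
        try:
            installed = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            print(f"WARNING: {pkg} is not installed (codeset pins {expected})")
            continue
        if installed != expected:
            print(f"WARNING: {pkg} {installed} != pinned {expected}")

check_environment()
```

Even so, a warning is not enforcement; it only makes the drift visible, which is part of why the company-vs-department balance stays unruly.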
And, finally, there's just something deeply uncomfortable about the wasted disk space that these ecosystem package libraries require. Again, disks are cheap, but I am, once again, pushing the limits of whatever hardware I have. Not because the hardware is expensive, but because the closer we are to those limits, the more science we can do.
Game-changing innovation generally happens when we create space for ideas that seem impossible today. When we fund research without demanding immediate returns. And when we recognize that breakthrough discoveries can come from places and people we might not expect.
History teaches us that our nation wins when we bet on human curiosity before we know where it leads.
@mathworks - how is *downloads* not the first thing you restored? 🤬
All the other stuff is gravy. If you have downloads, people can install their own license servers and run Matlab. How is this not the first thing fixed?
ps. Does anyone have a link to @mathworks so we can actually talk to them?
Every underfunded scientific lab and institution may represent a future industry we're giving away to competitors.
Each PhD student who leaves for better opportunities abroad takes tomorrow's breakthroughs with them. America built its booming economy by betting on science before we knew where it would lead.
Basic research feels expensive until it creates trillion-dollar industries. #uspol /2