Whoever wrote #pandas and #numpy has a special place in hell for their function naming conventions. Sure, why not use undiscoverable, completely un-self-documenting function names? Why not write them like it's still 1978 and you need to conserve that 16 KB of RAM?

It's so clearly written by mathematicians and not software engineers that it makes my head spin. Every modern language moved on to verbose naming conventions sometime around 1994, but Python? Nope. You're still stuck writing like you're K&R on a goddamn PDP-11 at Bell Labs.

To all the new #ml / #ai students and programmers: no, this is not normal, and no, no sane programmer in the last 30 years has written software that follows Pandas or NumPy's conventions (outside of microcontrollers).

@cguess I'm sorry, this is just yet another iteration of "software engineer does not understand the use case". A function name that is short and close to what it looks like in the formula you just translated into code is a feature.
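To make that point concrete (my own illustration, not from the thread): terse numpy names let code read almost symbol-for-symbol like the formula it implements. Here is the standard normal density, f(x) = exp(-x²/2) / √(2π), as a hedged sketch:

```python
import numpy as np

# f(x) = exp(-x**2 / 2) / sqrt(2 * pi)
# The code below is nearly a transliteration of the formula above,
# which is exactly the "feature" being argued for.
x = np.linspace(-3.0, 3.0, 7)          # sample points -3, -2, ..., 3
f = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
```

With names like `numpy.exponential_of_each_element`, the correspondence between code and formula would be much harder to see at a glance.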
@sqrt2 So the formulas being absolutely impenetrable is a feature as well? Mathematical notation and naming is not a goal I'd aspire to.
@cguess It's not impenetrable, just governed by a set of conventions that you're not used to.

@sqrt2 I've studied university-level maths and have been programming for 20+ years. What I did not study are the very specific subfields of math that share notation (but not the meaning of that notation) with other fields.

"select_rows" is for getting multiple rows (and takes a Dictionary for some reason), but "iloc" is for getting one row. That's just bad usability, no matter what field you're in.

@cguess That's a complaint about pandas, which, unlike numpy, isn't about translating formulas. And, err... you can get multiple rows with iloc, and there is no select_rows. (Not in numpy either.)
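For anyone following along, a quick sketch of what `iloc` actually does (my own example, not from the thread): it is purely positional, and it handles single rows, slices, and lists of positions alike.

```python
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 30], "b": [1, 2, 3]})

row = df.iloc[0]         # one row by position, returned as a Series
rows = df.iloc[0:2]      # multiple rows via a slice, returned as a DataFrame
picked = df.iloc[[0, 2]] # multiple rows via a list of positions
```

So `iloc` is not a "one row only" accessor; it mirrors Python's own indexing conventions (integers, slices, lists) rather than requiring a separate function per shape of selection.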