Jairaj Devadiga

@jairajdevadiga
18 Followers
10 Following
216 Posts
Economist
Email: jairajdevadiga@gmail.com
Website: https://jairajdevadiga.com
Tech bros don’t want to save the world. They want to own it, brand it, license it, and charge you a monthly fee to visit.
Literally three weeks between these two images. Who could POSSIBLY have predicted something like this would happen?
Ever wondered why car ergonomics have gone down the drain as touch screens replaced buttons to drive down costs? This reader comment on an FT article on the topic says it all.

Teaching my son about right to repair by explaining why Nintendo chose to use like 8 Tri-wing screws and 6 different fasteners to get into the DVD drive we had to replace on our old Wii.

He said "do they just want you to like buy another one or something?"

He learns quickly.

Personally I always had issues with the concept of a "country" and "borders"

Like, who are you to tell me where I can and cannot go on this planet?

Everyone should be free to travel wherever they want on this planet.

Just because some human claimed land centuries ago, that's what decides how we live? It's pretty messed up if you ask me.

"So, you're against using genAI agents in education? What's your stance on calculators then? Gotcha here!"

My dude (why is it always a dude? No, don't answer me), your argument has more holes than a colander.

Calculators work. They do exactly what the people who want you to buy/use them claim they do. Not so with genAI chatbots.

Perhaps even more importantly, the existence of calculators does not negate the importance of teaching kids basic arithmetic. That is because what's important is not just the result, but also how you get to it, and why you want it in the first place. So even if genAI chatbots were foolproof, I would still want students to be able to do the work themselves, because what matters is not so much the outcome as the process.

And just as using a calculator properly requires some notion of what the result should look like (to reduce the very real possibility of user error), so using genAI assistants requires the ability to look critically at their output in order to fix all the errors (which, again, aren't due to user input so much as to the tools being error-prone by nature).

If you, like me, grew up with parents who said "it could always be worse" to dismiss all your pain and trauma—including everything they caused—then you might feel that you don't deserve help or empathy unless you're in the WORST situation you can imagine (which is impossible, because it can always be compared to death).
AOC is quite on point with this.