In honor of my trolls, I'm going to evangelize the things they accused me of. Because if the trolls are against it, then it must not be that bad!

So here's me evangelizing for:
1. Effective Altruism
2. Population growth
3. Neoliberalism
4. Evolutionary psych

1. I think of Effective Altruism as "Altruism studies" - a discipline that exists largely outside of academia. The dominant paradigm within this discipline involves utilitarianism, RCTs, rationalism, longtermism, concerns about AGI, etc. However, many reject some or all of these elements. If you have good arguments against any element, I invite you to contribute by posting your criticism on the EA forum. EA is so young and is in desperate need of good, compelling critiques.
2. Population ethics is generally an area of philosophy I stay away from. Where else do logic and intuition diverge so sharply? But here's my thought on population growth: Malthus was right until he wasn't. He saw the trends, realized they were bad, and rightfully sounded the alarm. But then we got synthetic fertilizer and GMOs, which greatly reduced how much land we need, and which are likely the two inventions that have saved more lives than any other.
So if you are concerned about population growth but want kids, then have kids and offset the extra land and resources they need by eating more GMOs. GMOs have repeatedly been shown to be safe and no different from conventional breeding techniques, and they are better for the environment.
3. Neoliberalism is ill-defined because it was originally meant as an insult. But when my trolls accused me of being a neoliberal apologist, I can only assume they meant "capitalism", so here's my defence of capitalism: it's good, actually. You need strong markets if you are going to fund a welfare state. That's what Denmark has done, and we should follow their example. Yay for markets!
4. EvoPsych has some bad apples, bad assumptions, and bad methodologies. I could focus on that, but to honor my trolls I will only offer defences of the field. And here's my defence of EvoPsych: I'm glad somebody is doing it. I am glad that someone is taking the current paradigm seriously and seeing where it leads. If you disagree, you should develop the new paradigm and challenge them. I will even read your paper.
So please, everyone, I ask you to contribute to EA! Contribute to population growth! Contribute to our markets! And contribute to EvoPsych! The world will be better off as you engage in these ways.
@jtpeterson Ad 1, the good-faith critique of MacAskill is "he gets everything wrong." He says 15 degrees of global warming are survivable, which no serious climatologist thinks; he cites conversations with scientists and other experts who say they've never talked to him; he brushes off real problems like genocide as non-existential. All of this can be said in EA forums, but how seriously should I take a movement that has a warmer reaction to cranks and white supremacists than to Timnit Gebru?
@Alon @jtpeterson The "longtermist" 4000 billion human AI stuff is a dangerous cult, but it seems to me (although I know very little on the subject) that normie effective altruism ideas like trying to measure how much "good" charities do for a given amount of funding and giving to the most efficient ones are still valuable. I wonder if there should be a new label to set those ideas apart from pro-global-warming BS.
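The "measure good per dollar and give to the most efficient charity" idea above can be sketched as a toy calculation. All charity names and figures here are made up for illustration, not real cost-effectiveness estimates:

```python
# Toy sketch of the "normie EA" idea: rank charities by estimated
# good done per dollar and give to the most efficient one.
# All names and numbers below are hypothetical.

charities = {
    "Charity A": {"cost_usd": 5_000, "outcomes": 1_000},
    "Charity B": {"cost_usd": 20_000, "outcomes": 1_500},
    "Charity C": {"cost_usd": 2_000, "outcomes": 100},
}

def efficiency(entry):
    """Outcomes achieved per dollar spent."""
    return entry["outcomes"] / entry["cost_usd"]

# Sort charity names from most to least cost-effective.
ranked = sorted(charities, key=lambda name: efficiency(charities[name]),
                reverse=True)
print(ranked[0])  # prints: Charity A
```

The hard part in practice is, of course, estimating "outcomes" at all, which is where the GIGO worry in the next post comes in.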
@scunneen @jtpeterson Yeah, normie stuff about malaria is good, but that's a little bit like saying Judaism is a good religion because "you shall not murder" and "you shall not steal" are good commandments. Everything else that makes EA distinctive is bad: longtermism and AI risk are GIGO analysis, and Earn to Give centers idle money over work (cf. skill-oriented charity like Doctors Without Borders).
@Alon @jtpeterson I don't think people with money should be given the acclaim that we give to human rights advocates or heroic doctors, but I would think that the average upper-middle-class person would probably accomplish more by giving the equivalent of, say, 200 hours' wages to charity rather than volunteering for 200 hours, since a given volunteer's skills are unlikely to be exactly what a charity needs, and wages in the low-income countries where many charities operate are low.
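A back-of-the-envelope version of that donate-vs-volunteer comparison. Both wage figures are hypothetical assumptions chosen only to illustrate the arithmetic:

```python
# Compare donating 200 hours' wages against volunteering 200 hours,
# under hypothetical wage assumptions (not data from the thread).

donor_hourly_wage = 50.0   # assumed upper-middle-class wage, USD/hour
hours = 200
donation = donor_hourly_wage * hours        # money given instead of time

local_hourly_wage = 2.0    # assumed wage where the charity operates
local_hours_funded = donation / local_hourly_wage

print(f"Donation: ${donation:,.0f}")
print(f"Funds {local_hours_funded:,.0f} hours of local labor "
      f"vs. {hours} hours volunteered")
```

Under these assumptions the donation funds 25x as many labor hours as the volunteer stint, which is the wage-arbitrage point being made above.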
@scunneen @jtpeterson Yeah, personally traveling to a poor country to do unskilled construction work has negative value. I think it's Bill Easterly (who a lot of good-EA obliquely references) who, after the Haiti quake, told people to give money, not food or clothes. But then there is skilled work, like medicine or engineering. Or advocating, as a finance worker, that your company divest from genocidal regimes. But that latter kind of action is collective and EA is really uncomfortable with it.

@Alon @scunneen @jtpeterson 80k has been saying since 2015 that most people who care about impartial impact (the EA movement's audience) should focus on direct work on high-impact stuff rather than earning to give.

https://80000hours.org/2015/07/80000-hours-thinks-that-only-a-small-proportion-of-people-should-earn-to-give-long-term/

80,000 Hours thinks that only a small proportion of people should earn to give long term

Norman Borlaug didn't make millions, his research just saved millions of lives. One of the most common misconceptions that we've encountered about 80,000 Hours is that we're exclusively or predominantly focused on earning to give. This blog post is to say definitively that this is not the case. Moreover, the proportion of people for whom we think earning to give is the best option has gone down over time.

@Alon @scunneen @jtpeterson Also, EA orgs put their money where their mouth is re: animal welfare. All the food I've had at EA conferences and events has been vegan.

@sunysh0re @scunneen @jtpeterson Yes, but.

1. To MacAskill himself, much of the high-impact stuff is "inventing AI," rather than, say, making sure AI algorithms don't lead to genocide as they did in North Arakan.

2. E2G is still a big enough thing in EA that there are people who've been convinced that they must overwork themselves in order to E2G. (In general, if I'm making personal assertions like this, you should assume it's from the Boston rat/EA community. Maybe SF is different.)

@Alon @scunneen @jtpeterson I live in NYC; the NYC EA community seems pretty chill to me. Bay Area EA seems more intense and AI-heavy due to its heavy overlap with the rat community there (s.t. "Bay Area rationalist" is a term), tho I haven't immersed myself in that community. Idk abt Boston.

1. That would be OpenAI. Some folks I know object to OpenAI's strategy of trying to win the "AI capabilities arms race" so they can invent aligned AI first. They'd prefer to work on alignment directly.

@Alon @scunneen @jtpeterson 1. (cont'd.) I can't speak for the entire AIS community on the importance of near-term AI ethics, but there are AIS people who think it's a distraction (like Yudkowsky) and those who think it's important to cooperate with near-term AI ethicists. Andrew Critch has a post about the relevance of near-term topics to AIS (fairness, accountability, and interpretability all rank pretty highly):

https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1

@Alon I'm especially excited about computational social choice because it promises to directly hold AI systems democratically accountable 🏦🗳️

And I think fairness *will* matter for steering super-AIs in the right direction. If super-AIs think it's more okay to screw over marginalized groups than dominant ones, then they're not aligned with the interests of all humanity 🙂

@sunysh0re The main criticism of AI alignment + OpenAI (by Gebru, again, and others) is that it still completely misses the way AI actually is used for ill today, such as the Facebook algorithm amplifying incitement to genocide in countries where no Facebook engineer speaks the language, or systematization of racial and gender discrimination in work and services. @scunneen @jtpeterson
@Alon never read MacAskill, never seen a white supremacist tolerated in the movement, and never heard of Timnit. EA is a big movement with people from many countries and all kinds of views (some of which I find very wrong). If you think something is wrong, I recommend making an argument for why it is wrong.

@jtpeterson You should read her. Her best-known work is about algorithmic bias, e.g. how AI algorithms used in tech perpetuate the racist/sexist stereotypes of the people working on them, the society that uses them, and the data they train them on. For example, see this book chapter of hers: https://arxiv.org/abs/1908.06165

(Re white supremacy: Scott Alexander is a race-and-IQ eugenicist. I know rationalism and EA aren't exactly the same but they tend to co-occur.)

Oxford Handbook on AI Ethics Book Chapter on Race and Gender

From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime recidivism rates, to the move towards automated health diagnostic systems, artificial intelligence (AI) is being used in scenarios that have serious consequences in people's lives. However, this rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial face recognition systems have much higher error rates for dark-skinned women while having minimal errors on light-skinned men. A 2016 ProPublica investigation uncovered that machine-learning-based tools that assess crime recidivism rates in the US are biased against African Americans. Other studies show that natural language processing tools trained on newspapers exhibit societal biases (e.g. finishing the analogy "Man is to computer programmer as woman is to X" with "homemaker"). At the same time, books such as Weapons of Math Destruction and Automating Inequality detail how people in lower socioeconomic classes in the US are subjected to more automated decision-making tools than those who are in the upper class. Thus, these tools are most often used on people towards whom they exhibit the most bias. While many technical solutions have been proposed to alleviate bias in machine learning systems, we have to take a holistic and multifaceted approach. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.
