I completely understand the position of people who don't want to use LLMs or consume any content produced with LLMs. I do not understand the position of "NO ONE should use LLMs at all" because how are you planning to make that happen? no one should be *forced* to use them, but plenty of people are using them now. it's not something you can wish away or achieve via moral condemnation.

@lzg simply stating that using a tool is immoral is not intended to be our only resistance to LLM use, nor is it a condemnation of anyone. it's a condemnation of an act people can choose whether or not to do. nuclear weapons should never be used. i can't stop nuclear superpowers from using them. they make the choice whether or not to use them.

if someone feels attacked by the well-supported argument that using LLMs is immoral, that does not mean they are actually being attacked. more likely, they are having trouble dismissing that view so they can continue to use LLMs believing they are morally justified, or that the immoral grime is worth the benefit.

@tyzbit I want to hear about a better world where the harm of LLMs is mitigated, or avoided altogether. What does that world look like? abstinence-only is not usually a good strategy.

@lzg making a moral statement is not a plan or an order. it is a philosophical view. if we want to minimize the harm at all, simply agreeing that it is harmful is a necessary first step. once we agree on that, an effective plan would have many facets, including discouraging use, but it might also:

  • make the real cost of LLMs more widely known (right now they are subsidized and obscured)
  • make the actual efficacy of LLMs better understood. the messaging essentially advertises magic, with an offhand footnote that the magic is not real. maybe further development will help this, but i know of no model that is actually trained morally.
  • seek justice for the fact that the companies making LLMs illegally pirated huge amounts of data
  • etc