our first AB position paper is out: https://www.w3.org/TR/llms-standards/

we tried to briefly discuss LLMs in the standards process: where they could work and where they could be a problem, including guardrails for doing it “responsibly”.

finding common ground in the AB is part of what we do and I'm glad our first attempt was published today!

Use of Large Language Models in Standards Work

As Large Language Models (LLMs) become increasingly synonymous with “AI”, and are used by people within our community, we want to highlight considerations around different ways in which LLMs can be useful or problematic when it comes to leveraging them in standards work at W3C.

@hdv I think the "subtle incorrectness" note is a really good callout. It can take a lot of maintainer/reviewer effort to QA/dispel/validate those kinds of mistakes.

I also liked (and think about often) that callout you made in a GitHub thread a while back around "asymmetry in thinking"... like if an expert spends time building a thoughtful reply or presenting months of research only to be met with "well claude said..." that's really damaging to thoughtful and informed discourse.

@davatron5000 thank you! these are def some of my biggest concerns with all this.