I was just in a meeting where someone used a tool called Fathom to get an 'AI' summary of the meeting. Aside from some understandable errors from not recognising terms of art and replacing them with common English words, one of its key conclusions was that A was faster than B. It reached this conclusion because it missed one of the digits in the time for A. This completely inverted the key takeaway from one important section of the meeting.

Do not use plausible-nonsense generators for anything important.

Page from a 1979 IBM training manual: "A computer can never be held accountable. Therefore a computer must never make a management decision."

@nicholas

It's not quite the same; people don't think of summarising as making a decision.

@david_chisnall @nicholas Agreed, but perhaps folks *should*.

To wit, corporate personhood in the US is a legal theory that derives entirely from an inaccurate summary of a court session.

https://en.wikipedia.org/wiki/Corporate_personhood#In_the_United_States


@xgranade @david_chisnall @nicholas If the input to the actual decision-maker runs through an LLM, that decision is not necessarily properly informed, even if the LLM was not instructed to produce a decision-like output.
@nicholas @david_chisnall a computer cannot be horny and thus should never make art