Today we had a fire alarm in the office. A colleague posted 'Fire alarm in the office building' to a Slack channel, starting a thread in case somebody knew any details. We have the AI assistant Glean integrated into Slack, and it replied to her privately: "today's siren is just a scheduled test and you do not need to leave your workplace". It was not a test or a drill; it was a real fire alarm. Someday, AI will kill us.
@tagir_valeev how would a conversational « agent » know about a real fire unless it’s somehow hooked up to sensors (in which case, you actually wouldn’t need any AI at all, anyway)…
People are way too ready to give up their thinking ability to chat bots that are fairly good at pretending to be human…
@metacosm nobody asked the AI for input at all. It was just configured in that particular channel to answer automatically if it thinks it can help faster than fellow humans (sometimes people ask something that was asked before, so the AI could be helpful). The configuration will be adjusted after this incident.
@tagir_valeev @metacosm You need to stop anthropomorphising LLMs. LLMs do not think! They do not even hallucinate! They just spit out the most probable next tokens given their training set, and the training set is all of the human knowledge they could plagiarize plus all of the human bullshit their crawlers could find on the web! If everyone turns them off, there will be fewer fires in the future (in both senses)!