Thomas Knox

@knoxilla
51 Followers
173 Following
276 Posts
Meeting all your knoxilla needs! 💻 webdev/devops at U Michigan
🌼 gardening at flickr.com/photos/knoxilla 🌊 he/him 🏳️‍🌈🏳️‍⚧️
Still looking at ways to make this fit my workflow and habits. Two steps forward, one step back!
https://smashingmagazine.com/2023/05/ai-tools-skyrocket-programming-productivity/
How To Use AI Tools To Skyrocket Your Programming Productivity — Smashing Magazine

The rise of Artificial Intelligence (AI) in recent times has incited fear in many over losing their jobs. However, that shouldn’t be the case. On the contrary, AI is an opportunity to take your programming to the next level when used tactfully with the right knowledge, as we’ll cover today.

Smashing Magazine
Well now I feel sheepish. https://t.co/pKMtxZm4y6
Andrew Heiss (🐘 @[email protected]) on Twitter

“regular PSA for those working with survey data and Likert scales: It's "lick-ert" not "like-ert" https://t.co/6nF0lRlkLW”

Twitter
The climes they are a-changin'! https://t.co/wIu9rvSWHG
How climate change could affect which trees grow near you

As greenhouse gas emissions nudge temperatures higher, projections show trees’ growing ranges are shifting northward.

The Washington Post
Ron Filipkowski on Twitter

“Yup, it’s real.”

Twitter
Dancing Figures and Natural Elements Coalesce in Jonathan Hateley's Elegant Bronze Sculptures — Colossal

Immersed in nature, female figures dance, reflect, and rest in Jonathan Hateley's limber bronze sculptures.

Colossal
Alas, no more raisin toast... End of an era! https://t.co/t9Pc7HMZko
Angelo’s Restaurant to close as University of Michigan plans $4.5M purchase

Famous for its raisin toast and sandwiches, the sale of the Ann Arbor staple will likely be approved at Thursday's regents meeting.

mlive

1) Insert chip into cocoon
2) Remotely pilot metamorphosed insect
3) ???

Sleep well, everybody!

https://t.co/QiRcvI8DtT https://t.co/6ZZe26yqMI

They really don't work, and in fact can make things worse. But they definitely help you check off some boxes re 'compliance' and 'inclusion'. No thanks. #a11y https://t.co/jdXpfzxPCe
Karl Groves (he/him) on Twitter

“778 people across the globe have endorsed the Overlay Factsheet. https://t.co/6ZKZeVSbfA”

Twitter
RT @EnglishOER: You can ask a language model to explain why it gave a particular output, and it will answer, but its answer is just another language prediction. It may not actually reflect the reasons. This study shows LLMs make up justifications when influenced by engineers' tweaks. https://t.co/54blHdyyRw
Miles Turpin on Twitter

“⚡️New paper!⚡️ It’s tempting to interpret chain-of-thought explanations as the LLM's process for solving a task. In this new work, we show that CoT explanations can systematically misrepresent the true reason for model predictions. https://t.co/ecPRDTin8h 🧵”

Twitter

RT @milesaturpin: ⚡️New paper!⚡️

It’s tempting to interpret chain-of-thought explanations as the LLM's process for solving a task. In this new work, we show that CoT explanations can systematically misrepresent the true reason for model predictions.
https://t.co/ecPRDTin8h

🧵 https://t.co/9zp5evMoaA

Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting

Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. This level of transparency into LLMs' predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs--e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always "(A)"--which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods.

arXiv.org