Sarah A Fisher

352 Followers
282 Following
152 Posts

I'm a researcher at UCL, working at the interface of philosophy and the social sciences.

I've spent several years thinking about linguistic framing effects and the role of context in meaning. I'm now focusing on online speech in particular.

Website: https://sites.google.com/view/sarahafisher/home
now that's just unfair

Of course, Hicks et al. got there before me with their wonderful (and wonderfully titled!) "ChatGPT is bullshit" https://doi.org/10.1007/s10676-024-09775-5

During peer review I had the opportunity to include discussion of their arguments and highlight the points where we agree / disagree.

That helped me improve my own arguments...and I guess this is the advantage of being the younger sibling 😅

ChatGPT is bullshit - Ethics and Information Technology

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

SpringerLink

I have a new article out today!

But it is destined to always sit in the shadow of its charismatic older sibling. So, like any good parent, let me give it a little boost 🤗

"Large language models and their big bullshit potential" https://doi.org/10.1007/s10676-024-09802-5

🗣️ I say there is a sense in which LLMs bullshit
🗣️ They produce meaningful content without checking for truth
🗣️ But equally, we might find ways to curb their bullshit...

#largelanguagemodels #chatgpt #bullshit

Large language models and their big bullshit potential - Ethics and Information Technology

Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.

SpringerLink

Vacancy: "Lecturer in Cognitive Science and AI" at @Radboud_uni

We're seeking a candidate who wants to pursue a career in academic teaching. Are you excited by teaching and student supervision? Do you want to inspire students to think critically about cognitive science and AI? Are you an interdisciplinary and collaborative academic? Are you committed to diversity, equity and inclusion? Do you have strong organisational skills? If so, this position may be for you!

https://www.ru.nl/en/working-at/job-opportunities/lecturer-in-cognitive-science-and-ai

Lecturer in Cognitive Science and AI | Radboud University

Working as a lecturer in Cognitive Science and AI at the Faculty of Science? Check our vacancy here!

The new Knight–Georgetown Institute are hiring an Associate Director to work on tech policy. Would be a cool job.

https://kgi.georgetown.edu/work-with-us/318-2/

Associate Director - The Knight–Georgetown Institute

Associate Director Location: Washington, DC The Knight-Georgetown Institute (KGI) is seeking its first Associate Director. KGI was recently established to translate research about technology and the online information environment for policy and industry audiences, and to connect and convene academics with policy stakeholders. The Associate Director will be a central member of KGI’s small start-up […]

The Knight–Georgetown Institute

Online event next Tuesday to launch a policy report produced by our lab at UCL and the think tank Demos #AI #generativeAI #elections #misinformation #syntheticmedia

https://demos.co.uk/event/synthetic-elections-are-we-prepared-for-generative-ai-in-2024/

Synthetic Elections: are we prepared for generative AI in 2024?

Demos is Britain’s leading cross-party think-tank. We produce original research, publish innovative thinkers and host thought-provoking events.

Demos

This piece has been a long time in the (re)(re)(re)drafting process. I'm happy and relieved to see it finally published! 😂

I argue that if that second view is right, we ought to think of the job of semanticists in a particular way.

They can't be reverse-engineering some psychological system. Otherwise different language-users could have idiosyncratic sets of constraints, and we wouldn't be able to narrow down which contents they could be expressing with their words.

Instead, semanticists have to be uncovering norms that operate at the level of the linguistic community and govern language-users from outside their heads.