Aaron Steven White

Associate professor of linguistics and computer science at the University of Rochester. Director of the FACTS.lab. Modular synths + DIY electronics. Cocktails.
Personal: http://aaronstevenwhite.io
Lab: http://factslab.io
Location: Rochester, NY
CONCLUSION:
AI-as-engineering has been trespassing into cogsci, confusing us with decoys. The time is ripe to reclaim the early conception of AI-as-theoretical-psychology. This means using AI as a theoretical tool, while taking care not to fall into the trap of makeism again. 17/n
We're hiring for a TT assistant professor in computational social science this year, and linguistics is one of the areas we're particularly interested in!

I'm looking to bring on ≥1 PhD student next year who's interested in working on computational models for inducing full-fledged logical forms from inference judgment datasets, with the aim of quantitatively comparing theories of natural language semantics in terms of their core representational assumptions.

If you are such a student, let's chat! (Scheduling link on my website: http://aaronstevenwhite.io/.) If you know such a student, send them my way!

New work with @aaron: https://ling.auf.net/lingbuzz/007450.

There's been debate recently about the discrete versus gradient nature of factive presuppositions ('Jo knows X' ~> X): specifically, whether factive predicates support inferences that are on a par with entailments, insofar as they are all-or-nothing, or whether they support something a bit weaker, boosting the likelihood that X is true without putting that inference on a par with other types of semantically licensed inference.

Factivity, presupposition projection, and the role of discrete knowledge in gradient inference judgments - lingbuzz/007450

We investigate whether the factive presuppositions associated with some clause-embedding predicates are fundamentally discrete in nature, as classically assumed, or fundamentally gradient, as recent work has proposed… - lingbuzz, the linguistics archive
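For intuition only, here's a minimal sketch of how the two hypotheses could cash out in simulated slider judgments: under the discrete account, gradience in responses comes only from noise around a categorical (ceiling) inference, while under the gradient account judgments genuinely center below ceiling. The distributions and parameters below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # simulated "how likely is X true?" slider judgments in [0, 1]

# Discrete hypothesis: 'Jo knows X' licenses an all-or-nothing inference
# to X, so observed gradience is just response noise around the ceiling.
# (The Gaussian noise model is an illustrative assumption.)
discrete = np.clip(rng.normal(loc=1.0, scale=0.08, size=N), 0, 1)

# Gradient hypothesis: the predicate merely boosts the likelihood of X,
# so judgments center somewhere below ceiling with genuine spread.
# (The Beta parameters are arbitrary assumptions, not fitted values.)
gradient = rng.beta(a=6, b=2, size=N)

for name, sims in [("discrete + noise", discrete), ("gradient", gradient)]:
    print(f"{name:>16}: mean = {sims.mean():.2f}, "
          f"share at ceiling (> 0.95) = {(sims > 0.95).mean():.2f}")
```

The empirical question is then which generative story better fits the shape of observed judgment distributions.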

Toddler (2;10) hasn’t gotten the memo on Condition A.
—
B: You can get them [his shoes] on yourself.
C: No! You get them on myself!
TFW you think you're going to have to prepare a course module from scratch but you discover that your past self produced far more extensive notes than you remembered.
One thing that makes reviewing for *ACL conferences in 2022 so tedious is having to say over and over: "Yeah. Cool idea. XYZ 1992 had it and then there were 20 years where people in the *ACL community built on it. What you're doing is still useful because it updates the idea for the LLM era; but you sure could have saved yourself a lot of mental labor if you had just read the literature."
@trochee This is not to say NLPers should work on these sorts of tasks. Just that when people are interested in them, there can be reasons other than “I think this benefits some downstream task”.
@trochee this is one of quite a few cases I’ve run into recently in talking with (psycho)linguist colleagues, where NLPers no longer see a point to a task (SRL, syntactic parsing, etc.) but where (psycho)linguists would greatly benefit from having the kind of high-accuracy system that can now be constructed for that task.
@trochee Item selection for an experiment where the sample needs to be stratified based on predicate sense and role configuration and where the corpora of interest are not currently propbank-tagged. (Sampling against senses and roles induced from the embeddings directly has turned out to be too noisy for current purposes.)
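To make the sampling problem concrete, here's a minimal sketch of stratified item selection over (predicate sense, role configuration) strata. The field names and hand-tagged items are hypothetical stand-ins for PropBank-style annotation, not an actual pipeline.

```python
import random
from collections import defaultdict

# Hypothetical tagged items: in practice these tags would come from
# PropBank-style annotation, which the corpora of interest lack.
items = [
    {"text": "...", "sense": "run.01", "roles": ("ARG0", "ARG1")},
    {"text": "...", "sense": "run.02", "roles": ("ARG1",)},
]

def stratified_sample(items, per_stratum, seed=0):
    """Sample `per_stratum` items from each (sense, role-config) stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for item in items:
        strata[(item["sense"], item["roles"])].append(item)
    sample = []
    for key, members in sorted(strata.items()):
        if len(members) < per_stratum:
            raise ValueError(f"stratum {key} has only {len(members)} items")
        sample.extend(rng.sample(members, per_stratum))
    return sample

picked = stratified_sample(items, per_stratum=1)
```

Checking stratum sizes up front makes undersampled (sense, role) combinations visible before the experiment is run, rather than after.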