| Personal | http://aaronstevenwhite.io |
| Lab | http://factslab.io |
| Location | Rochester, NY |
I'm looking to bring on ≥1 PhD student next year who's interested in working on computational models for inducing full-fledged logical forms from inference judgment datasets, with the aim of quantitatively comparing theories of natural language semantics in terms of their core representational assumptions.
If you are such a student, let's chat! (Scheduling link on my website: http://aaronstevenwhite.io/.) If you know such a student, send them my way!
New work with @aaron : https://ling.auf.net/lingbuzz/007450.
There's been recent debate about whether factive presuppositions ('Jo knows X' ~> X) are discrete or gradient in nature: specifically, whether factive predicates support inferences on a par with entailments, insofar as they are all-or-nothing - or whether they license something weaker, boosting the likelihood that X is true without putting that inference on a par with other types of semantically licensed inference.