Research
A paper about why fixing reference by maximizing knowledge (Williamson 2007) is ultimately unpromising.
A paper about why cases of common confusion are a challenge for certain influential convention-based theories of linguistic interpretation, namely Lewis's (1983) and Grice's (1989) accounts.
I illustrate what is puzzling about reference change independently of Kripke's (1980) well-known causal picture. The puzzle threatens about a dozen influential or contemporary accounts of reference fixing/change. I also suggest a way out of the puzzle.
Communities who only utter simple subject-predicate sentences express more than one proposition with their sentences. Sophisticated communities uttering complex sentences don't. The discussion is set against a Lewisian (1975) background of interpretation.
It's unclear whether LLMs know. It seems clear, however, that those reading the outputs of LLMs don't. This is especially worrying for strategies of interpreting LLMs by looking at the knowledge gained by readers (Cappelen and Dever 2021).
Interpreting AI by Trusting It: A Lewisian Approach (Complete)
I propose a novel account, inspired by Lewis's (1983) work on convention, of why the outputs of LLMs are meaningful. This account does not require AI to have any mental states.
A paper about how mental states (conceived as sentences of Mentalese) come to have the content they do. The strategy works by maximizing the number of Mentalese sentences that are safe. It gives the right predictions in cases involving hallucination and cases involving objects that are modally robust.
Is Full Belief Gradable and Sorites-Susceptible? (In Progress)
Yes.
Higher-Order Anti-Metaphysics (with Juhani Yli-Vakkuri) (In Progress)
A paper about why certain popular claims in contemporary metaphysics are either trivially true, ungrammatical, or equivocal.