Hello! I’m Robert Long. I work on issues at the intersection of philosophy of mind, cognitive science, and the ethics of AI. I’m currently a Philosophy Fellow at the Center for AI Safety in San Francisco, CA.

Before that I was a Research Fellow at the Future of Humanity Institute at Oxford University, and completed a PhD in philosophy at NYU, where my advisors were David Chalmers, Ned Block, and Michael Strevens.

You can find a collection of my papers and talks here. My substack is here.

Recently I’ve been working on issues related to AI sentience.

I’m also interested in the relationship between human intelligence and artificial intelligence more broadly. In ‘Nativism and Empiricism in Artificial Intelligence’, I explore how the classic debate between nativists and empiricists can inform, and be informed by, contemporary AI research.

I am also interested in ethical issues in artificial intelligence. In a recent paper, I critically review putative measures of fairness in machine learning, and argue that one prominent measure, equal false positive rates, is fundamentally misguided and fails to capture anything important about fairness.

I also have interests in epistemology and the philosophy of perception, and in the intersection between the two.

Before NYU, I did research in developmental psychology. While working at the Harvard Lab for Developmental Studies, I investigated the development of abstract relational thought in young children. I have a master’s in philosophy from Brandeis University (2015), and a bachelor’s degree in Social Studies from Harvard University (2011).