Hello! I’m Robert Long. I work on issues at the intersection of philosophy of mind, cognitive science, and ethics of AI. I’m currently a Research Associate at the Center for AI Safety in San Francisco, CA and doing research with the NYU Mind, Ethics, and Policy Program.

Before that, I was a Research Fellow at the Future of Humanity Institute at Oxford University. I completed my PhD in philosophy at NYU, where my advisors were David Chalmers, Ned Block, and Michael Strevens.

Recently I’ve been working on issues related to AI consciousness. Here’s a newly released report: Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. I also tweet and substack about these issues.

You can find a collection of my papers and talks here.

I’m also interested in the relationship between human intelligence and artificial intelligence more broadly. In ‘Nativism and Empiricism in Artificial Intelligence’, I explore how the classic debate between nativists and empiricists can inform, and be informed by, contemporary AI research.

I also have work on fairness and bias in artificial intelligence.

I'm also interested in epistemology and philosophy of perception, and in their intersection.

Before NYU, I did research in developmental psychology. While working at the Harvard Lab for Developmental Studies, I investigated the development of abstract relational thought in young children. I have a master’s in philosophy from Brandeis University, and a bachelor’s degree in Social Studies from Harvard University.