Student Research Award
The ValuesLab plans to send out a call for proposals in early 2025 to undergraduates and MA students in Philosophy, Computer Science, Data Science, and other relevant fields. Details TBD!
PHIL GR9180 Approaches to Applied Ethics: Philosophy of AI
Graduate seminar, Fall 2024. From the course description:
The philosophy of AI is an emerging field. Right now, AI research is importantly concerned with LLMs. It is also concerned with the relation between natural and artificial intelligence. Researchers and public discourse ask whether AI can be “aligned” with values. Accordingly, key questions in Philosophy of AI relate to language, thought, and values.
The seminar starts with the widely debated alignment problem (Part I: weeks 1-4). Independently of how alignment might work, it is by no means clear what the desired outcome is. People disagree about values. With which values should AI be aligned? At times the answer is: with “human values.” Are “human values,” in this context, the different and incompatible sets of values human beings have? If so, what about the values that we should have?
AI researchers often focus on fairness, typically understood as the elimination of bias (Part II: weeks 5-6). We examine relevant notions of fairness and ask how fairness relates to other values, including and especially accuracy.
Next, we ask whether it makes sense to ascribe beliefs and intentions to AIs (Part III: weeks 7-11). Can AIs engage in reasoning, lie, and be held responsible? We discuss “explainable AI,” asking whether AI outputs can be understood.
Finally, we examine questions about language as they apply to AIs (Part IV: weeks 12-14). How can LLMs cope with famously tricky components of language and thought, such as generics and implicature?
The seminar includes workshop sessions associated with the ValuesLab. Invited guest speakers come from a range of fields. This seminar aims to contribute to dialogue between philosophers and AI developers, and to a shared vocabulary.
COMS W2702 AI in Context
Team-taught, interdisciplinary class, Fall 2024. From the course description:
This team-taught, interdisciplinary class covers the history of AI, the development from Neural Networks (NNs) to Large Language Models (LLMs), philosophy of AI, as well as the role of AI in music and writing. Four sessions are devoted to foundational philosophical questions that bear on AI. Session 1: Can we ascribe beliefs and intentions to AI? Can LLMs speak? Can they lie? Session 2: Can AI be aligned with human values? What is explainable AI (XAI)? Session 3: What makes an AI “fair”? How does fairness relate to accuracy and other values? Session 4: How should LLMs deal with generics? What about social generics and bias?
Philosophy of AI Reading Group
Organizer: Syan Timothy Lopez
Faculty sponsor: Katja Vogt
From the description:
The Philosophy of AI reading group meets weekly to discuss contemporary papers in artificial intelligence and at the intersection of philosophy and AI. In Fall 2024, we will focus especially on issues of AI and fairness. The reading group is open to current and former graduate students, visiting scholars, lecturers, and faculty. We especially welcome people from disciplines outside philosophy, such as computer science, engineering, cognitive science, law, and business. If you are interested, please contact Syan Timothy Lopez (sfl2126@columbia.edu).
Undergraduate Research
Oscar Alexander Lloyd
This project examines whether and how normativity transfers between input data and output text in Large Language Models. It proposes a relational rather than semantic understanding of the ‘reasoning’ performed by these models, based on their inability to take account of both sense and reference. It follows that text generated by an LLM, which we might ordinarily treat as intuitively motivational, lacks normative power, and we should treat it accordingly.
Henry Michaelson
Through this paper, I hope to explore and establish that the conditions under which social contract theories were formulated in the pre-digital world no longer apply. Large technology companies have systematically attempted to erode and eradicate the moral and political institutions—first and foremost the state—that were theorized within the framework of a social contract. Rather than argue that the notion of a social contract no longer makes sense, I contend that the rise of Web3 technology marks a shift not back to the previous status quo, but toward a more practical and reinvigorated social contract, both politically and morally.