XIIID Strategy and Research: How philosophers will lead the world through the Age of AI (full article here, login required)
[…] When asked about the powers of machine learning, Dr. Kissinger expresses concern that in the realm of AI, “We have no great philosophers,” which he calls “an unprecedented challenge for humanity.” We are reminded that the most successful scientific revolutions and intellectual movements were guided by philosophical insight. Thus, philosophers will be best positioned to meet these new challenges.
[…] Ethical implications of AI, the strain on resources from increasing GAI workloads, and intellectual property rights are a few among many of the pressing issues surrounding AI today, often perceived as too vast to tackle effectively. Even more concerning, Vogt and Haas observed that computer scientists, engineers, and software developers tend to be less inclined than philosophers and artists to approach these questions. Enter the ValuesLab.
Vogt and Haas are co-founders of the ValuesLab—a project that originated from their reflections on gaps in AI discussions and research that philosophy could bridge. In late 2023, Haas launched the ValuesLab website to highlight the most important considerations for advancing the project. As the two analyzed ongoing AI projects and trends, they realized that “one thing in particular is missing: a place that puts philosophers in a position where they actively contribute to AI.” Haas states, “This is the ValuesLab’s mission. We want to help shape new projects, by collaborating with researchers who build AIs, inside and outside of academia. We also want to keep things focused and independent.”
[…] Most important to its mission is the ValuesLab’s use of the Socratic model to explore critical questions relevant to computational intelligence. Vogt explains that being “Socratic” involves being “driven by questions and a love of inquiry. It also means that some of the most important values relate to truth and understanding”—values she considers critical to the challenge of comprehending how explainable AI systems reach their conclusions.
Moreover, Socrates is famous for asking “What is X?” questions, where X is a certain unknown. Vogt and Haas agree that this question type is critical for understanding AI. Vogt asks, “What, for example, is AI? Some experts say it’s a probability calculus. Others see a future where we interact with machines that have minds.” Haas continues: “The Socratic method distinguishes between straightforward factual questions, where the answer is a piece of information, and questions that require inquiry. The ValuesLab is interested in the latter. In that context, the best answer can be a question, one that launches a new line of inquiry. For example, one new player who pursues this kind of idea in exciting ways is perplexity.ai.”
[…] If philosophy is crucial to AI, some might ask whether computer scientists could simply enroll in an applied ethics course. Vogt argues that this approach would be inadequate, stating that “AI needs philosophy, not merely applied ethics.” That is, developing AI necessitates drawing insights from philosophical inquiry into a range of subjects, including language, value, equity, bias, and autonomy—topics that extend far beyond what a single course can cover.
Columbia Data Scientists Discuss the Nature of Fairness at Data Science Day 2024 (full article here)
Katja Maria Vogt, a Professor of Philosophy and PI on the ValuesLab, who moderated the session, opened by asking not what fairness means but how fairness came to take the spotlight in cultural conversations about ethical AI. She posited that “fairness” is often used in place of “justice” as a way to ground discussion and sidestep big philosophical questions, a hypothesis drawn from a paper by John Rawls arguing that fairness is a more workable notion than justice.
Vogt suggested that there may be no way around the big-picture questions, and that interdisciplinary contexts like the Data Science Institute play a crucial role in helping society pose them.
The Art of AI (full article here)
“No matter what our students major in, they’ll need the analytical tools to ask what AI is and how we should relate to it,” says Katja Vogt, professor of philosophy at Columbia and co-founder of the ValuesLab. “These questions require collaboration between fields, including philosophy.”
“As a philosopher, I’m interested in values, language, and the mind. Applied to AI, this means I’m interested in what it would mean for AI to be aligned with values and whether this is possible,” she explains. “In class, we discuss this with regard to a range of values — some ethical or moral, some concerned with language and thought: fairness, truth, accuracy, understanding, interpretability, and more.”