Aaron Wolf

Senior Lecturer in University Studies and Research Affiliate in Philosophy

Department/Office Information

Philosophy
103 Hascall Hall

Education

  • PhD, Syracuse University
  • BA, Muhlenberg College

Research Interests

  • Ethics of Artificial Intelligence & Data Science
  • Algorithmic Fairness
  • Applied Ontology & Knowledge Representation

Publications

  • Algorithmic Fairness and Educational Justice (forthcoming), Educational Theory
  • - (2021) Synthese, vol. 199: 6785–6802
  • - (2020) Thought, vol. 9, no. 2: 84–93
  • - (2018) Journal of Value Inquiry, vol. 52, no. 2: 179–185
  • - (2015) Australasian Journal of Philosophy, vol. 93, no. 1: 109–125

Algorithmic Fairness and Educational Justice (forthcoming)

Philosophy papers about the fairness of AI-driven automated decision-making are a dime a dozen these days. Most are motivated by (in)famous cases from criminal justice, healthcare, and finance. Occasionally, authors mention education as another important context, but to my knowledge there has been no sustained philosophical treatment bringing together conceptions of fairness from philosophy of education with the broader algorithmic fairness literature. This paper remedies that. I give some reasons for thinking that pure statistical approaches to fairness, as well as virtue-epistemic approaches, are especially promising in educational contexts.
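
As a rough illustration of what a pure statistical approach involves, here is a minimal Python sketch; the toy data, the education framing, and the choice of demographic parity as the metric are illustrative assumptions, not material from the paper.

    import numpy as np

    def demographic_parity_difference(decisions, groups):
        """Gap in positive-decision rates between two groups.

        decisions: 0/1 outcomes of an automated decision (e.g., course placement)
        groups:    0/1 indicators of group membership
        """
        decisions = np.asarray(decisions)
        groups = np.asarray(groups)
        rate_a = decisions[groups == 0].mean()
        rate_b = decisions[groups == 1].mean()
        return abs(rate_a - rate_b)

    # Toy data: a system that is perfectly fair on this metric returns 0.0.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups    = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_difference(decisions, groups))  # 0.5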

 

Enhancing the Fairness Metrics Ontology

Researchers from RPI and IBM have recently developed a useful tool for taxonomizing the wide range of metrics for evaluating fairness in machine learning, which they call the Fairness Metrics Ontology (FMO). In this paper, I build on their work by addressing some shortcomings in how the ontology represents relations between entities and by bringing it into compliance with ISO/IEC 21838-2 (Basic Formal Ontology), as well as with the related Common Core Ontologies.
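
For readers unfamiliar with that stack, the sketch below (Python with rdflib) shows the general shape of placing a fairness-metric class under a BFO upper-level category; the IRIs, class names, and the particular BFO alignment are illustrative assumptions, not the paper's actual modelling choices.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    # Hypothetical namespaces; the published FMO and BFO IRIs may differ.
    FMO = Namespace("https://example.org/fmo#")
    BFO = Namespace("http://purl.obolibrary.org/obo/")

    g = Graph()
    g.bind("fmo", FMO)
    g.bind("bfo", BFO)

    # Place "fairness metric" under a BFO category (here, generically dependent
    # continuant, BFO_0000031) rather than leaving it outside the upper ontology.
    g.add((FMO.FairnessMetric, RDF.type, OWL.Class))
    g.add((FMO.FairnessMetric, RDFS.subClassOf, BFO.BFO_0000031))
    g.add((FMO.FairnessMetric, RDFS.label, Literal("fairness metric")))

    # A specific metric is then a subclass of the general class.
    g.add((FMO.DemographicParity, RDF.type, OWL.Class))
    g.add((FMO.DemographicParity, RDFS.subClassOf, FMO.FairnessMetric))
    g.add((FMO.DemographicParity, RDFS.label, Literal("demographic parity")))

    print(g.serialize(format="turtle"))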

 

Ruling Out: Making Sense of "No Ought From Is" without Sentence Categories

In the last 30 years, discussion of the Humean "no ought from is" thesis has coalesced around the idea of proving theorems to the effect that normative sentences can never be properly inferred from descriptive ones. Each existing theorem, however, comes with significant costs, and what they all have in common is a need to cleanly sort sentences into distinct categories. But what if sentence-level categories are the source of the problem? I explore the possibility that we can make sense of "no ought from is" just by talking about terms, while remaining agnostic about sentences.
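
For orientation, one common schematic rendering of such a barrier theorem is given below; this is a generic statement of the kind of result discussed, not the paper's own formulation, which avoids sentence-level categories.

    \text{If } \Gamma \text{ consists of purely descriptive sentences and } \varphi \text{ is normative, then}
    \quad \Gamma \vDash \varphi \;\Longrightarrow\; \varphi \text{ is a logical truth or } \Gamma \text{ is unsatisfiable.}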

Courses

  • Applied Ontology and the Problem of Unruly Data
  • Ethics, Algorithms, & Artificial Intelligence
  • Well-Being, Meaning, & Death
  • Environmental Ethics
  • Modern Philosophy
  • Contemporary Political Philosophy
  • Ethics
  • Introduction to Philosophical Problems
  • Challenges of Modernity