FAT* 2019: Atlanta, GA, USA
- danah boyd, Jamie H. Morgenstern:
Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* 2019, Atlanta, GA, USA, January 29-31, 2019. ACM 2019
- Smitha Milli, Ludwig Schmidt, Anca D. Dragan, Moritz Hardt:
Model Reconstruction from Model Explanations. 1-9
- Berk Ustun, Alexander Spangher, Yang Liu:
Actionable Recourse in Linear Classification. 10-19
- Chris Russell:
Efficient Search for Diverse Coherent Explanations. 20-28
- Vivian Lai, Chenhao Tan:
On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection. 29-38
- Samir Passi, Solon Barocas:
Problem Formulation and Fairness. 39-48
- Ben Hutchinson, Margaret Mitchell:
50 Years of Test (Un)fairness: Lessons for Machine Learning. 49-58
- Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, Janet Vertesi:
Fairness and Abstraction in Sociotechnical Systems. 59-68
- Severin Engelmann, Mo Chen, Felix Fischer, Ching-yu Kao, Jens Grossklags:
Clear Sanctions, Vague Rewards: How China's Social Credit System Currently Defines "Good" and "Bad" Behavior. 69-78
- Stevie Chancellor, Michael L. Birnbaum, Eric D. Caine, Vincent M. B. Silenzio, Munmun De Choudhury:
A Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media. 79-88
- Ziad Obermeyer, Sendhil Mullainathan:
Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People. 89
- Ben Green, Yiling Chen:
Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments. 90-99
- Michael J. Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu:
An Empirical Study of Rich Subgroup Fairness for Machine Learning. 100-109
- Jake Goldenfein:
The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism. 110-119
- Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, Jennifer T. Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Cem Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai:
Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. 120-128
- Abhijnan Chakraborty, Gourab K. Patro, Niloy Ganguly, Krishna P. Gummadi, Patrick Loiseau:
Equality of Voice: Towards Fair Representation in Crowdsourced Top-K Recommendations. 129-138
- Mahmoudreza Babaei, Abhijnan Chakraborty, Juhi Kulshrestha, Elissa M. Redmiles, Meeyoung Cha, Krishna P. Gummadi:
Analyzing Biases in Perception of Truth in News Stories and Their Implications for Fact Checking. 139
- Filipe Nunes Ribeiro, Koustuv Saha, Mahmoudreza Babaei, Lucas Henrique C. Lima, Johnnatan Messias, Fabrício Benevenuto, Oana Goga, Krishna P. Gummadi, Elissa M. Redmiles:
On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook. 140-149
- Dimitrios Bountouridis, Jaron Harambam, Mykola Makhortykh, Mónica Marrero, Nava Tintarev, Claudia Hauff:
SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments. 150-159
- L. Elisa Celis, Sayash Kapoor, Farnood Salehi, Nisheeth K. Vishnoi:
Controlling Polarization in Personalization: An Algorithmic Framework. 160-169
- Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Roth, Zachary Schutzman:
Fair Algorithms for Learning in Allocation Problems. 170-179
- Moshe Babaioff, Noam Nisan, Inbal Talgam-Cohen:
Fair Allocation through Competitive Equilibrium from Generic Incomes. 180
- Hoda Heidari, Michele Loi, Krishna P. Gummadi, Andreas Krause:
A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity. 181-190
- Meg Young, Luke Rodriguez, Emily Keller, Feiyang Sun, Boyang Sa, Jan Whittington, Bill Howe:
Beyond Open vs. Closed: Balancing Individual Privacy and Public Accountability in Data Sharing. 191-200
- Shan Jiang, John Martin, Christo Wilson:
Who's the Guinea Pig?: Investigating Online A/B/n Tests in-the-Wild. 201-210
- Aws Albarghouthi, Samuel Vinitsky:
Fairness-Aware Programming. 211-219
- Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru:
Model Cards for Model Reporting. 220-229
- Smitha Milli, John Miller, Anca D. Dragan, Moritz Hardt:
The Social Cost of Strategic Classification. 230-239
- Sampath Kannan, Aaron Roth, Juba Ziani:
Downstream Effects of Affirmative Action. 240-248
- Nicole Immorlica, Katrina Ligett, Juba Ziani:
Access to Population-Level Signaling as a Source of Inequality. 249-258
- Lily Hu, Nicole Immorlica, Jennifer Wortman Vaughan:
The Disparate Effects of Strategic Manipulation. 259-268
- Bruce Glymour, Jonathan Herington:
Measuring the Biases that Matter: The Ethical and Casual Foundations for Measures of Fairness in Algorithms. 269-278
- Brent D. Mittelstadt, Chris Russell, Sandra Wachter:
Explaining Explanations in AI. 279-288
- Sebastian Benthall, Bruce D. Haynes:
Racial categories in machine learning. 289-298
- Brenda Leong, Evan Selinger:
Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism. 299-308
- Ran Canetti, Aloni Cohen, Nishanth Dikkala, Govind Ramnarayan, Sarah Scheffler, Adam D. Smith:
From Soft Classifiers to Hard Decisions: How fair can we be? 309-318
- L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi:
Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees. 319-328
- Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, Derek Roth:
A comparative study of fairness-enhancing interventions in machine learning. 329-338
- Jiahao Chen, Nathan Kallus, Xiaojie Mao, Geoffry Svacha, Madeleine Udell:
Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved. 339-348
- David Madras, Elliot Creager, Toniann Pitassi, Richard S. Zemel:
Fairness through Causal Awareness: Learning Causal Latent-Variable Models for Biased Data. 349-358
- Hussein Mouzannar, Mesrob I. Ohannessian, Nathan Srebro:
From Fair Decision Making To Social Equality. 359-368
- Dallas Card, Michael Zhang, Noah A. Smith:
Deep Weighted Averaging Classifiers. 369-378