Aylin Caliskan
2020 – today
- 2024
  - [c36] Yiwei Yang, Anthony Z. Liu, Robert Wolfe, Aylin Caliskan, Bill Howe: Label-Efficient Group Robustness via Out-of-Distribution Concept Curation. CVPR 2024: 12426-12434
  - [c35] Chahat Raj, Anjishnu Mukherjee, Aylin Caliskan, Antonios Anastasopoulos, Ziwei Zhu: BiasDora: Exploring Hidden Biased Associations in Vision-Language Models. EMNLP (Findings) 2024: 10439-10455
  - [c34] Anjishnu Mukherjee, Aylin Caliskan, Ziwei Zhu, Antonios Anastasopoulos: Global Gallery: The Fine Art of Painting Culture Portraits through Multilingual Instruction Tuning. NAACL-HLT 2024: 6398-6415
  - [i32] Steven A. Lehr, Aylin Caliskan, Suneragiri Liyanage, Mahzarin R. Banaji: ChatGPT as Research Scientist: Probing GPT's Capabilities as a Research Librarian, Research Ethicist, Data Generator and Data Predictor. CoRR abs/2406.14765 (2024)
  - [i31] Chahat Raj, Anjishnu Mukherjee, Aylin Caliskan, Antonios Anastasopoulos, Ziwei Zhu: Breaking Bias, Building Bridges: Evaluation and Mitigation of Social Biases in LLMs via Contact Hypothesis. CoRR abs/2407.02030 (2024)
  - [i30] Chahat Raj, Anjishnu Mukherjee, Aylin Caliskan, Antonios Anastasopoulos, Ziwei Zhu: BiasDora: Exploring Hidden Biased Associations in Vision-Language Models. CoRR abs/2407.02066 (2024)
  - [i29] Sourojit Ghosh, Pranav Narayanan Venkit, Sanjana Gautam, Shomir Wilson, Aylin Caliskan: Do Generative AI Models Output Harm while Representing Non-Western Cultures: Evidence from A Community-Centered Approach. CoRR abs/2407.14779 (2024)
  - [i28] Kyra Wilson, Aylin Caliskan: Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval. CoRR abs/2407.20371 (2024)
  - [i27] Gandalf Nicolas, Aylin Caliskan: A Taxonomy of Stereotype Content in Large Language Models. CoRR abs/2408.00162 (2024)
  - [i26] Sourojit Ghosh, Nina Lutz, Aylin Caliskan: "I don't see myself represented here at all": User Experiences of Stable Diffusion Outputs Containing Representational Harms across Gender Identities and Nationalities. CoRR abs/2408.01594 (2024)
- 2023
  - [j7] Munsif Jan, Asifa Ashraf, Abdul Basit, Aylin Caliskan, Ertan Güdekli: Traversable Wormhole in f(Q) Gravity Using Conformal Symmetry. Symmetry 15(4): 859 (2023)
  - [c33] Shiva Omrani Sabbaghi, Robert Wolfe, Aylin Caliskan: Evaluating Biased Attitude Associations of Language Models in an Intersectional Context. AIES 2023: 542-553
  - [c32] Sourojit Ghosh, Aylin Caliskan: ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages. AIES 2023: 901-912
  - [c31] Sourojit Ghosh, Aylin Caliskan: 'Person' == Light-skinned, Western Man, and Sexualization of Women of Color: Stereotypes in Stable Diffusion. EMNLP (Findings) 2023: 6971-6985
  - [c30] Isaac Slaughter, Craig Greenberg, Reva Schwartz, Aylin Caliskan: Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition. EMNLP (Findings) 2023: 8967-8989
  - [c29] Robert Wolfe, Yiwei Yang, Bill Howe, Aylin Caliskan: Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias. FAccT 2023: 1174-1185
  - [c28] Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan: Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. FAccT 2023: 1493-1504
  - [c27] Katelyn Mei, Sonia Fereidooni, Aylin Caliskan: Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks. FAccT 2023: 1699-1710
  - [c26] Aylin Caliskan: Artificial Intelligence, Bias, and Ethics. IJCAI 2023: 7007-7013
  - [i25] Sourojit Ghosh, Aylin Caliskan: ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages. CoRR abs/2305.10510 (2023)
  - [i24] Katelyn X. Mei, Sonia Fereidooni, Aylin Caliskan: Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks. CoRR abs/2306.05550 (2023)
  - [i23] Shiva Omrani Sabbaghi, Robert Wolfe, Aylin Caliskan: Evaluating Biased Attitude Associations of Language Models in an Intersectional Context. CoRR abs/2307.03360 (2023)
  - [i22] Inyoung Cheong, Aylin Caliskan, Tadayoshi Kohno: Is the U.S. Legal System Ready for AI's Challenges to Human Values? CoRR abs/2308.15906 (2023)
  - [i21] Isaac Slaughter, Craig Greenberg, Reva Schwartz, Aylin Caliskan: Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition. CoRR abs/2310.18877 (2023)
  - [i20] Sourojit Ghosh, Aylin Caliskan: 'Person' == Light-skinned, Western Man, and Sexualization of Women of Color: Stereotypes in Stable Diffusion. CoRR abs/2310.19981 (2023)
- 2022
  - [j6] Ryan Wails, Andrew Stange, Eliana Troper, Aylin Caliskan, Roger Dingledine, Rob Jansen, Micah Sherr: Learning to Behave: Improving Covert Channel Security with Behavior-Based Designs. Proc. Priv. Enhancing Technol. 2022(3): 179-199 (2022)
  - [c25] Robert Wolfe, Aylin Caliskan: VAST: The Valence-Assessing Semantics Test for Contextualizing Language Models. AAAI 2022: 11477-11485
  - [c24] Robert Wolfe, Aylin Caliskan: Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations. ACL (1) 2022: 3050-3061
  - [c23] Aylin Caliskan, Pimparkar Parth Ajay, Tessa Charlesworth, Robert Wolfe, Mahzarin R. Banaji: Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics. AIES 2022: 156-170
  - [c22] Shiva Omrani Sabbaghi, Aylin Caliskan: Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals. AIES 2022: 518-531
  - [c21] Robert Wolfe, Aylin Caliskan: American == White in Multimodal Language-and-Image AI. AIES 2022: 800-812
  - [c20] Robert Wolfe, Aylin Caliskan: Markedness in Visual Semantic AI. FAccT 2022: 1269-1279
  - [c19] Robert Wolfe, Mahzarin R. Banaji, Aylin Caliskan: Evidence for Hypodescent in Visual Semantic AI. FAccT 2022: 1293-1304
  - [c18] Robert Wolfe, Aylin Caliskan: Detecting Emerging Associations and Behaviors With Regional and Diachronic Word Embeddings. ICSC 2022: 91-98
  - [i19] Robert Wolfe, Aylin Caliskan: VAST: The Valence-Assessing Semantics Test for Contextualizing Language Models. CoRR abs/2203.07504 (2022)
  - [i18] Robert Wolfe, Aylin Caliskan: Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations. CoRR abs/2203.07511 (2022)
  - [i17] Robert Wolfe, Mahzarin R. Banaji, Aylin Caliskan: Evidence for Hypodescent in Visual Semantic AI. CoRR abs/2205.10764 (2022)
  - [i16] Robert Wolfe, Aylin Caliskan: Markedness in Visual Semantic AI. CoRR abs/2205.11378 (2022)
  - [i15] Shiva Omrani Sabbaghi, Aylin Caliskan: Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals. CoRR abs/2206.01691 (2022)
  - [i14] Aylin Caliskan, Pimparkar Parth Ajay, Tessa Charlesworth, Robert Wolfe, Mahzarin R. Banaji: Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics. CoRR abs/2206.03390 (2022)
  - [i13] Robert Wolfe, Aylin Caliskan: American == White in Multimodal Language-and-Image AI. CoRR abs/2207.00691 (2022)
  - [i12] Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan: Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. CoRR abs/2211.03759 (2022)
  - [i11] Robert Wolfe, Yiwei Yang, Bill Howe, Aylin Caliskan: Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias. CoRR abs/2212.11261 (2022)
- 2021
  - [j5] Ryan Steed, Aylin Caliskan: A set of distinct facial traits learned by machines is not predictive of appearance bias in the wild. AI Ethics 1(3): 249-260 (2021)
  - [j4] Aylin Caliskan, Yesim Deniz Ozkan-Ozen, Yucel Ozturkoglu: Digital transformation of traditional marketing business model in new industry era. J. Enterp. Inf. Manag. 34(4): 1252-1273 (2021)
  - [c17] Wei Guo, Aylin Caliskan: Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases. AIES 2021: 122-133
  - [c16] Akshat Pandey, Aylin Caliskan: Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination Algorithms. AIES 2021: 822-833
  - [c15] Robert Wolfe, Aylin Caliskan: Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models. EMNLP (1) 2021: 518-532
  - [c14] Autumn Toney, Aylin Caliskan: ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries. EMNLP (1) 2021: 7203-7218
  - [c13] Ryan Steed, Aylin Caliskan: Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases. FAccT 2021: 701-713
  - [c12] Autumn Toney, Akshat Pandey, Wei Guo, David A. Broniatowski, Aylin Caliskan: Automatically Characterizing Targeted Information Operations Through Biases Present in Discourse on Twitter. ICSC 2021: 82-83
  - [i10] Robert Wolfe, Aylin Caliskan: Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models. CoRR abs/2110.00672 (2021)
- 2020
  - [i9] Ryan Steed, Aylin Caliskan: Machines Learn Appearance Bias in Face Recognition. CoRR abs/2002.05636 (2020)
  - [i8] Autumn Toney, Akshat Pandey, Wei Guo, David A. Broniatowski, Aylin Caliskan: Pro-Russian Biases in Anti-Chinese Tweets about the Novel Coronavirus. CoRR abs/2004.08726 (2020)
  - [i7] Autumn Toney, Aylin Caliskan: ValNorm: A New Word Embedding Intrinsic Evaluation Method Reveals Valence Biases are Consistent Across Languages and Over Decades. CoRR abs/2006.03950 (2020)
  - [i6] Wei Guo, Aylin Caliskan: Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases. CoRR abs/2006.03955 (2020)
  - [i5] Akshat Pandey, Aylin Caliskan: Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides. CoRR abs/2006.04599 (2020)
  - [i4] Ryan Steed, Aylin Caliskan: Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases. CoRR abs/2010.15052 (2020)
2010 – 2019
- 2019
  - [j3] Aylin Caliskan, Burcu Karaöz: Can market indicators forecast the port throughput? Int. J. Data Min. Model. Manag. 11(1): 45-63 (2019)
  - [j2] Edwin Dauber, Aylin Caliskan, Richard E. Harang, Gregory Shearer, Michael J. Weisman, Frederica Free-Nelson, Rachel Greenstadt: Git Blame Who?: Stylistic Authorship Attribution of Small, Incomplete Source Code Fragments. Proc. Priv. Enhancing Technol. 2019(3): 389-408 (2019)
- 2018
  - [c11] Edwin Dauber, Aylin Caliskan, Richard E. Harang, Rachel Greenstadt: Git blame who?: stylistic authorship attribution of small, incomplete source code fragments. ICSE (Companion Volume) 2018: 356-357
  - [c10] Aylin Caliskan, Fabian Yamaguchi, Edwin Dauber, Richard E. Harang, Konrad Rieck, Rachel Greenstadt, Arvind Narayanan: When Coding Style Survives Compilation: De-anonymizing Programmers from Executable Binaries. NDSS 2018
- 2017
  - [c9] Aylin Caliskan: Beyond Big Data: What Can We Learn from AI Models?: Invited Keynote. AISec@CCS 2017: 1
  - [i3] Edwin Dauber, Aylin Caliskan Islam, Richard E. Harang, Rachel Greenstadt: Git Blame Who?: Stylistic Authorship Attribution of Small, Incomplete Source Code Fragments. CoRR abs/1701.05681 (2017)
- 2016
  - [i2] Aylin Caliskan Islam, Joanna J. Bryson, Arvind Narayanan: Semantics derived automatically from language corpora necessarily contain human biases. CoRR abs/1608.07187 (2016)
- 2015
  - [j1] Aylin Caliskan Islam: How do we decide how much to reveal? SIGCAS Comput. Soc. 45(1): 14-15 (2015)
  - [c8] Aylin Caliskan Islam, Richard E. Harang, Andrew Liu, Arvind Narayanan, Clare R. Voss, Fabian Yamaguchi, Rachel Greenstadt: De-anonymizing Programmers via Code Stylometry. USENIX Security Symposium 2015: 255-270
  - [i1] Aylin Caliskan Islam, Fabian Yamaguchi, Edwin Dauber, Richard E. Harang, Konrad Rieck, Rachel Greenstadt, Arvind Narayanan: When Coding Style Survives Compilation: De-anonymizing Programmers from Executable Binaries. CoRR abs/1512.08546 (2015)
- 2014
  - [c7] Sadia Afroz, Aylin Caliskan Islam, Ariel Stolerman, Rachel Greenstadt, Damon McCoy: Doppelgänger Finder: Taking Stylometry to the Underground. IEEE Symposium on Security and Privacy 2014: 212-226
  - [c6] Aylin Caliskan Islam, Jonathan Walsh, Rachel Greenstadt: Privacy Detective: Detecting Private Information and Collective Privacy Behavior in a Large Social Network. WPES 2014: 35-46
- 2013
  - [c5] Alex Kantchelian, Sadia Afroz, Ling Huang, Aylin Caliskan Islam, Brad Miller, Michael Carl Tschantz, Rachel Greenstadt, Anthony D. Joseph, J. D. Tygar: Approaches to adversarial drift. AISec 2013: 99-110
  - [c4] Ariel Stolerman, Aylin Caliskan, Rachel Greenstadt: From Language to Family and Back: Native Language and Language Family Identification from English Text. HLT-NAACL 2013: 32-39
  - [c3] Sadia Afroz, Aylin Caliskan Islam, Jordan Santell, Aaron Chapin, Rachel Greenstadt: How Privacy Flaws Affect Consumer Perception. STAST 2013: 10-17
- 2012
  - [c2] Andrew W. E. McDonald, Sadia Afroz, Aylin Caliskan, Ariel Stolerman, Rachel Greenstadt: Use Fewer Instances of the Letter "i": Toward Writing Style Anonymization. Privacy Enhancing Technologies 2012: 299-318
  - [c1] Aylin Caliskan, Rachel Greenstadt: Translate Once, Translate Twice, Translate Thrice and Attribute: Identifying Authors and Machine Translation Tools in Translated Text. ICSC 2012: 121-125
last updated on 2024-11-15 19:29 CET by the dblp team
all metadata released as open data under CC0 1.0 license