{"id":787,"date":"2019-09-25T13:36:31","date_gmt":"2019-09-25T18:36:31","guid":{"rendered":"http:\/\/wp.comminfo.rutgers.edu\/vsingh\/?page_id=787"},"modified":"2024-04-23T16:35:25","modified_gmt":"2024-04-23T20:35:25","slug":"algorithmic-bias","status":"publish","type":"page","link":"https:\/\/sites.comminfo.rutgers.edu\/vsingh\/algorithmic-bias\/","title":{"rendered":"Algorithmic Bias"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;4.16&#8243; global_colors_info=&#8221;{}&#8221;][et_pb_row _builder_version=&#8221;4.16&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.16&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text admin_label=&#8221;Text&#8221; _builder_version=&#8221;4.18.0&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h2><strong>Algorithmic Fairness<\/strong><\/h2>\n<p>In multiple domains, ranging from automatic face detection to automated decisions on parole, machine learning algorithms have been found to be systematically biased, favoring one demographic group over another.<\/p>\n<p>Our work focuses on:<br \/>(1) <strong>Auditing multiple algorithms that affect human lives.\u00a0<\/strong>We have examined bias in multiple applications, such as visual gender bias in Wikipedia biographies [3], image search results for professional images [7], sounds used by household devices [5], face matching algorithms [11], pupil detection algorithms [4], toxicity\/cyberbullying detection algorithms [1, 8, 10], misinformation detection [2], and sentiment detection algorithms [9].<\/p>\n<p>(2) <strong>Designing newer algorithms that are less biased in 
measurable ways. <\/strong>We have been designing algorithms that reimagine how bias should be quantified and what corrective actions can be undertaken. This includes probabilistically fusing decisions from different modalities (e.g., text, images) or different black-box algorithms [8, 9] to ensure fairness and accuracy. We have also worked on identifying the right optimization parameters within an algorithm to ensure fairness and accuracy [10]. Active projects in this space aim to incorporate time [1] and network position into the definitions of fairness.<\/p>\n<p><strong>Related Publications<\/strong><\/p>\n<ol>\n<li>Almuzaini, A. A., Bhatt, C. A., Pennock, D. M., &amp; Singh, V. K. (2022, June).\u00a0<a href=\"http:\/\/sites.comminfo.rutgers.edu\/behavioralinformatics\/wp-content\/uploads\/sites\/36\/2022\/09\/ABCinML__Anticipatory_Bias_Correction_in_Machine_Learning_Applications__FAccT_pdfa.pdf\" rel=\"attachment wp-att-1072\">ABCinML: Anticipatory Bias Correction in Machine Learning Applications.\u00a0<\/a>In\u00a0<i>2022 ACM Conference on Fairness, Accountability, and Transparency<\/i>\u00a0(pp. 1552-1560).<\/li>\n<li>Park, J., Ellezhuthil, R., Arunachalam, R., Feldman, L., &amp; Singh, V. (2022).\u00a0<a href=\"http:\/\/sites.comminfo.rutgers.edu\/behavioralinformatics\/wp-content\/uploads\/sites\/36\/2022\/09\/Fairness-in-Misinformation-Detection-Algorithms.pdf\" rel=\"attachment wp-att-1070\">Fairness in Misinformation Detection Algorithms<\/a>. In\u00a0<i>Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media<\/i>. Retrieved from https:\/\/doi.org\/10.36190.<\/li>\n<li>Beytia, P., Agarwal, P., Redi, M., &amp; <strong>Singh, V.K.<\/strong> (2022). <a href=\"https:\/\/osf.io\/preprints\/socarxiv\/59rey\/\">\u201cVisual Gender Biases in Wikipedia: A Systematic Evaluation across the Ten Most Spoken Languages\u201d<\/a>. 
To be published in the <em>Proceedings of the ACM International Conference on Web and Social Media (ICWSM).<\/em><\/li>\n<li>Kulkarni, O. N., Patil, V., <strong>Singh, V. K.<\/strong>, &amp; Atrey, P. K. (2021). <a href=\"http:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/35\/2022\/01\/Accuracy_and_Fairness_in_Pupil_Detection_Algorithm.pdf\">Accuracy and Fairness in Pupil Detection Algorithm<\/a>.<em> In 2021 IEEE Seventh International Conference on Multimedia Big Data (BigMM) (pp. 17-24). IEEE.\u00a0<\/em><\/li>\n<li>Roy, J., Bhatt, C., Chayko, M., &amp; <strong>Singh, V. K.<\/strong> (2021). <a href=\"https:\/\/wp.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/110\/2022\/01\/2021-Roy-Gendered-Sounds-in-Household-Devices.pdf\">Gendered Sounds in Household Devices: Results from an Online Search Case Study.<\/a>\u00a0<em>Proceedings of the Association for Information Science and Technology<\/em>,\u00a0<em>58<\/em>(1), 824-826.<\/li>\n<li><strong style=\"font-size: 14px\">Singh, V. K.<\/strong><span style=\"font-size: 14px\">, Andr\u00e9, E., Boll, S., Hildebrandt, M., &amp; Shamma, D. A. (2020). <\/span><a style=\"font-size: 14px\" href=\"https:\/\/wp.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/110\/2022\/01\/Legal_and_Ethical_Challenges_in_Multimedia_Research.pdf\">Legal and ethical challenges in multimedia research<\/a><span style=\"font-size: 14px\">.\u00a0<\/span><em style=\"font-size: 14px\">IEEE MultiMedia<\/em><span style=\"font-size: 14px\">,\u00a0<\/span><em style=\"font-size: 14px\">27<\/em><span style=\"font-size: 14px\">(2), 46-54.<\/span><\/li>\n<li><strong style=\"font-size: 14px\">Singh, V.<\/strong><span style=\"font-size: 14px\"><strong>,<\/strong> Chayko, M., Inamdar, R., &amp; Floegel, D. 
(2020), <\/span><a style=\"font-size: 14px\" href=\"https:\/\/wp.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/110\/2019\/12\/JASIST_Accepted.pdf\">Female Librarians and Male Computer Programmers: Gender Bias in Occupational Images on Digital Media Platforms<\/a><span style=\"font-size: 14px\">. \u00a0<i>Journal of the Association for Information Science and Technology<\/i>,\u00a0<i>71<\/i>(11), 1281-1294.\u00a0[see <a href=\"https:\/\/wp.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/110\/2022\/01\/Singh-Chayko-Research-Poster.pdf\">Poster<\/a>]<\/span><\/li>\n<li>Alasadi, J., Ramanathan, A., Atrey, P., &amp; <strong>Singh, V. K.<\/strong> (2020). <a href=\"http:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/35\/2022\/01\/A_Fairness-Aware_Fusion_Framework_for_Multimodal_Cyberbullying_Detection.pdf\">A Fairness-Aware Fusion Framework for Multimodal Cyberbullying Detection.\u00a0<\/a><em>In Proceedings of the IEEE International Conference on Multimedia Big Data.\u00a0<\/em><\/li>\n<li>Almuzaini, A. A., &amp; <strong>Singh, V. K.<\/strong> (2020). <a href=\"https:\/\/wp.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/110\/2022\/01\/Almuzaini-Fairness-Blackbox.pdf\">Balancing Fairness and Accuracy in Sentiment Detection Using Multiple Black-box Models.<\/a>\u00a0<em>In\u00a0Proceedings of the 2nd ACM International Workshop on Fairness, Accountability, and Transparency, and Ethics in MultiMedia.<\/em><\/li>\n<li><strong>Singh, V.,<\/strong> &amp; Hofenbitzer, C. (2019). <a href=\"http:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/35\/2022\/01\/Fairness_across_Network_Positions_in_Cyberbullying_Detection_Algorithms.pdf\">Fairness across network positions in cyberbullying detection algorithms.<\/a> In\u00a0<em>2019 IEEE\/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)<\/em> (pp. 557-559). 
IEEE.<\/li>\n<li>Alasadi, J., Al Hilli, A., &amp; <strong>Singh, V.<\/strong> (2019). <a href=\"http:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/35\/2022\/01\/Alasadi-facematchin-fairness.pdf\">Toward Fairness in Face Matching Algorithms.<\/a>\u00a0In\u00a0<i>Proceedings of the 1st International Workshop on Fairness, Accountability, and Transparency in MultiMedia<\/i>\u00a0(pp. 19-25).<\/li>\n<\/ol>\n<p><strong>Funding and Support<\/strong><\/p>\n<p>We gratefully acknowledge the support from the National Science Foundation for this work.<\/p>\n<p>1. <a href=\"https:\/\/www.nsf.gov\/awardsearch\/showAward?AWD_ID=1915790&amp;HistoricalAwards=false\">EAGER: SaTC: Early-Stage Interdisciplinary Collaboration: Fair and Accurate Information Quality Assessment Algorithm<\/a><\/p>\n<p>2. <a href=\"https:\/\/www.nsf.gov\/awardsearch\/showAward?AWD_ID=2027784&amp;HistoricalAwards=false\">RAPID: Countering Language Biases in COVID-19 Search Auto-Completes<\/a><\/p>\n<div class=\"pageheadline\">\u00a0<\/div>\n<div class=\"pageheadline\"><strong style=\"font-size: 14px\">Coverage<\/strong><\/div>\n<p>Media coverage for gender bias in professional images <a href=\"https:\/\/wp.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/110\/2019\/12\/JASIST_Accepted.pdf\">paper<\/a>:\u00a0<a href=\"https:\/\/in.style.yahoo.com\/online-images-reinforce-occupational-gender-085017119.html\">Yahoo Lifestyle<\/a>,\u00a0<a href=\"https:\/\/americanlibrariesmagazine.org\/latest-links\/occupational-gender-bias-online-images\/\">American Libraries Magazine<\/a>,\u00a0<a href=\"https:\/\/technews.acm.org\/archives.cfm?fo=2020-02-feb\/feb-12-2020.html\">ACM Tech News<\/a>,\u00a0<a href=\"https:\/\/www.hindustantimes.com\/more-lifestyle\/online-images-reinforce-occupational-gender-stereotypes\/story-M7Hp0vLbLECIO1cMnFhQ7M.html\">Hindustan Times<\/a>,\u00a0<a 
href=\"https:\/\/www.dailytargum.com\/article\/2020\/02\/rutgers-research-shows-gender-bias-in-media-relating-to-different-occupations\">Daily Targum<\/a>. Rutgers Today (2020): <a style=\"font-size: 14px\" href=\"https:\/\/www.rutgers.edu\/news\/online-autocompletes-are-more-likely-yield-covid-19-misinformation-spanish-english\">Online Autocompletes Are More Likely to Yield COVID-19 Misinformation in Spanish than in English<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Algorithmic Fairness In multiple domains, ranging from automatic face detection to automated decisions on parole, machine learning algorithms have been found to be systematically biased and favoring one demographic group [&hellip;]<\/p>\n","protected":false},"author":2442,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"In multiple domains, ranging from automatic face detection to automated decisions on parole, machine learning algorithms have been found to be systematically biased and favoring one demographic group over another. Our work focuses on:\r\n(a) auditing multiple algorithms that affect human lives.\r\n(b) designing newer algorithms that are less biased in measurable ways.\r\n\r\nActive projects include:\r\n<strong>(1) <a href=\"\">Female Librarians and Male Computer Programmers? Gender Bias in Occupational Images on Digital Media Platforms<\/a><\/strong>\r\nMedia platforms, technological systems, and search engines act as conduits and gatekeepers for all kinds of information. They often influence, reflect, and reinforce gender stereotypes, including those that represent occupations. 
This study examines the prevalence of gender stereotypes on digital media platforms and considers how human efforts to create and curate messages directly may impact these stereotypes. While gender stereotyping in social media and algorithms has received some examination in recent literature, its prevalence in different types of platforms (e.g., wiki vs. news vs. social network) and under differing conditions (e.g., degrees of human and  machine led content creation and curation) has yet to be studied. This research explores the extent to which stereotypes of certain strongly gendered professions (librarian, nurse, computer programmer, civil engineer) persist and may vary across digital platforms (Twitter, the New York Times online, Wikipedia, and Shutterstock). The results suggest that gender stereotypes are most likely to be challenged when human beings act directly to create and curate content in digital platforms, and that highly algorithmic approaches for curation showed little inclination towards breaking stereotypes. Implications for the more inclusive design and use of digital media platforms, particularly with regard to mediated occupational messaging, are discussed.\r\n\r\n<strong>(2) <a href=\"https:\/\/arxiv.org\/abs\/1905.03403\">Fairness across Network Positions in Cyberbullying Detection Algorithms<\/a><\/strong>\r\nCyberbullying, which often has a deeply negative impact on the victim, has grown as a serious issue in online social networks. Recently, researchers have created automated machine learning algorithms to detect Cyberbullying using social and textual features. However, the very algorithms that are intended to fight off one threat (cyberbullying) may inadvertently be falling prey to another important threat (bias of the automatic detection algorithms). 
This is exacerbated by the fact that while the current literature on algorithmic fairness has multiple empirical results, metrics, and algorithms for countering bias across immediately observable demographic characteristics (e.g. age, race, gender), there have been no efforts at empirically quantifying the variation in algorithmic performance based on the network role or position of individuals.  We audit an existing cyberbullying algorithm using Twitter data for disparity in detection performance based on the network centrality of the potential victim and then demonstrate how this disparity can be countered using an Equalized Odds post-processing technique. The results pave the way for more accurate and fair cyberbullying detection algorithms.\r\n\r\n<strong>(3) <a href=\"https:\/\/wp.comminfo.rutgers.edu\/vsingh\/wp-content\/uploads\/sites\/110\/2019\/09\/Workshop_paper_CameraReady.pdf\">Fairness in Face Matching Algorithms<\/a><\/strong>\r\nAutomated face matching algorithms are used in a wide variety of societal applications ranging from access authentication to criminal identification, to application customization. Hence, it is important for such algorithms to be equitable in their performance for different demographic groups.  If the algorithms work well only for certain racial or gender identities, they would adversely affect others. Recent efforts in algorithmic fairness literature (typically not focused on multimedia or computer vision tasks such as face matching) have argued for designing algorithms and architectures to tackle such bias via trade-offs between accuracy and fairness. Here, we show that adopting an adversarial deep learning-based approach allows for the model to maintain the accuracy at face matching while also reducing demographic disparities compared to a baseline (non-adversarial deep learning) approach at face matching. 
The results motivate and pave way for more accurate and fair face matching algorithms.","_et_gb_content_width":"","footnotes":""},"class_list":["post-787","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-json\/wp\/v2\/pages\/787","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-json\/wp\/v2\/users\/2442"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-json\/wp\/v2\/comments?post=787"}],"version-history":[{"count":2,"href":"https:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-json\/wp\/v2\/pages\/787\/revisions"}],"predecessor-version":[{"id":1097,"href":"https:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-json\/wp\/v2\/pages\/787\/revisions\/1097"}],"wp:attachment":[{"href":"https:\/\/sites.comminfo.rutgers.edu\/vsingh\/wp-json\/wp\/v2\/media?parent=787"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}