{"id":1506,"date":"2024-10-10T14:59:35","date_gmt":"2024-10-10T12:59:35","guid":{"rendered":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/?p=1506"},"modified":"2024-10-25T10:39:13","modified_gmt":"2024-10-25T08:39:13","slug":"maintaining-fairness-for-decision-making-under-social-considerations","status":"publish","type":"post","link":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/maintaining-fairness-for-decision-making-under-social-considerations\/","title":{"rendered":"Maintaining fairness for decision-making under social considerations"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;3.22&#8243;][et_pb_row _builder_version=&#8221;3.22&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221;][et_pb_column type=&#8221;1_4&#8243; _builder_version=&#8221;3.0.47&#8243;][et_pb_text _builder_version=&#8221;3.22.1&#8243; background_color=&#8221;#072c72&#8243; border_color_all=&#8221;#3255c9&#8243; text_orientation=&#8221;right&#8221; background_layout=&#8221;dark&#8221; custom_padding=&#8221;20px|15px|15px|&#8221; z_index_tablet=&#8221;500&#8243;]<\/p>\n<p><em>2024 <\/em><\/p>\n<p><em>Strategic Projects<\/em><\/p>\n<p><span data-sheets-root=\"1\">@Computer Science<\/span><\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;3.22.1&#8243; text_orientation=&#8221;right&#8221; z_index_tablet=&#8221;500&#8243;]<\/p>\n<p>#Machine Learning<\/p>\n<p>#Data Drift<\/p>\n<p>#Fairness<\/p>\n<p>\u00a0<\/p>\n<p>[\/et_pb_text][\/et_pb_column][et_pb_column type=&#8221;3_4&#8243; _builder_version=&#8221;3.0.47&#8243;][et_pb_text _builder_version=&#8221;3.22.1&#8243; z_index_tablet=&#8221;500&#8243;]<\/p>\n<h3><strong>Project Summary<\/strong><\/h3>\n<p>Machine learning (ML) is increasingly used to make decisions that highly impact applications like hiring, medical diagnosis, education and criminal justice, etc. 
All of these applications directly affect people\u2019s lives, and can harm society if they are not designed and engineered with fairness and ethics in mind. Fairness issues in machine learning predictions can arise from biases in the data, which lead to unfair decisions. One source of such bias is data drift, i.e., changes over time in the distribution of features (e.g., gender, race, age). Drift can significantly degrade the performance of machine learning models, leading to inaccurate predictions and unfair decision-making.<\/p>\n<p>In this project, we focus on gender fairness issues caused by drift. Many existing works study the existence and effects of gender disparities in ML applications, and confirm that these disparities exist in society at various levels and are perpetuated in ML applications. For example, in hiring, studies show that job applications are shaped by gender stereotypes: men tend to apply for more technical and ambitious jobs than women do. According to UNESCO\u2019s Division for Gender Equality, ML systems used in recruitment software have been found to be biased against women. Hence, further work is still required to mitigate gender disparities and make ML applications fairer for all users.<\/p>\n<p>In social science, it is crucial that studies accurately reflect the experiences and perspectives of people of all genders. This helps ensure that the data used in ML applications are representative and that their outcomes are not harmful. Gender fairness is thus an ongoing effort that requires collaboration among data scientists, social scientists, and other diverse user groups. By treating gender fairness as a fundamental aspect of model performance, ML applications can better serve all users while avoiding reinforcing harmful disparities or stereotypes. 
For example, in recruitment applications that use demographic data to predict job suitability or promotions, it is important to ensure that the model provides equal opportunities across genders.<\/p>\n<p>Our objective is therefore to investigate the social science perspective on gender fairness in ML applications. Our contributions will reflect the collaborative efforts and shared interests of the computer science and social science fields. Setting up such interdisciplinary research is a challenge in itself, starting with finding a common terminology and definition of fairness, which is not trivial.<\/p>\n<p>&nbsp;<\/p>\n<h3><strong>Soror Sahri<br \/><\/strong>soror.sahri@parisdescartes.fr<\/h3>\n<ul>\n<li>Associate Professor, UPCit\u00e9<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>Philipp Brandt<br \/><\/strong>Assistant Professor of Sociology &#8211; SciencesPo<\/p>\n<p><strong>Marc Hulcelle<\/strong><br \/>Postdoc &#8211; diiP, UPCit\u00e9<\/p>\n<p><strong>Sijie Dong<\/strong><br \/>PhD student &#8211; LIPADE, UPCit\u00e9<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row custom_margin=&#8221;120px||&#8221; admin_label=&#8221;Row&#8221; _builder_version=&#8221;3.22.1&#8243; locked=&#8221;off&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.22.1&#8243;][et_pb_divider _builder_version=&#8221;3.22.1&#8243;][\/et_pb_divider][et_pb_text admin_label=&#8221;\u00c0 lire aussi&#8221; _builder_version=&#8221;3.22.1&#8243; z_index_tablet=&#8221;500&#8243; locked=&#8221;off&#8221;]<\/p>\n<h2><span class=\"st\">Projects in the same discipline<br \/><\/span><\/h2>\n<p>[\/et_pb_text][et_pb_blog posts_number=&#8221;4&#8243; include_categories=&#8221;31&#8243; show_author=&#8221;off&#8221; show_date=&#8221;off&#8221; show_pagination=&#8221;off&#8221; module_id=&#8221;page_type_blog&#8221; _builder_version=&#8221;3.22.1&#8243; header_level=&#8221;h4&#8243; border_width_bottom_fullwidth=&#8221;1px&#8221; 
border_color_bottom_fullwidth=&#8221;rgba(51,51,51,0.18)&#8221; custom_padding=&#8221;||50px|&#8221; z_index_tablet=&#8221;500&#8243; locked=&#8221;off&#8221;][\/et_pb_blog][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>2024 Strategic Projects@Computer Science #Machine Learning#Data Drift#Fairness\u00a0 Project Summary Machine learning (ML) is increasingly used to make decisions that highly impact applications like hiring, medical diagnosis, education and criminal justice, etc. All of these applications directly impact people\u2019s lives, and can harm our society if not designed and engineered with considerations to fairness and ethics.&hellip; <a class=\"continue\" href=\"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/maintaining-fairness-for-decision-making-under-social-considerations\/\">Lire la suite<span> Maintaining fairness for decision-making under social considerations<\/span><\/a><\/p>\n","protected":false},"author":560,"featured_media":2263,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[51,31,1,26],"tags":[],"class_list":["post-1506","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-51","category-computer-science","category-diip","category-strategic-projects"],"_links":{"self":[{"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/posts\/1506","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/users\/560"}],"replies":[{"embeddable":true,"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/di
ip\/wp-json\/wp\/v2\/comments?post=1506"}],"version-history":[{"count":5,"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/posts\/1506\/revisions"}],"predecessor-version":[{"id":3138,"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/posts\/1506\/revisions\/3138"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/media\/2263"}],"wp:attachment":[{"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/media?parent=1506"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/categories?post=1506"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wordpress-test.app.u-pariscite.fr\/diip\/wp-json\/wp\/v2\/tags?post=1506"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}