Fellowship on Anti-Racism Research

Fellow Carlos Isaac Espinosa Ramirez

You can read Carlos's report here

Individuals from underrepresented groups often find that STEM communities are not truly inclusive. This extends to online forums dedicated to sharing and discussing open source software tools, which often play a central role in science and engineering research projects. According to Espinosa, these forums are saturated with discriminatory and hostile comments made by members.

“An open source software community can make or break a project. Seeing toxic comments can discourage someone and set a negative tone for the project. It’s important to find ways to promote diversity and inclusion to retain a sense of belonging in the community, increase participation, and push a project towards success,” said Espinosa, whose summer project focused on creating more diverse and inclusive open source software communities.

With mentorship from Stephanie Lieggi, assistant director for the Center for Research in Open Source Software (CROSS), Espinosa spent the summer investigating different tools, methodologies, and policies that can be applied within open source software communities to deter discrimination and hostility and increase member participation and collaboration.

Espinosa noticed a reduction in discriminatory comments in open source software communities that introduced codes of conduct. He hopes to enact similar policies as part of his current project, the Open Source Autonomous Vehicle Controller (OSAVC), which aims to create an advanced computer board for the multipilot platform in an autonomous vehicle to aid in more intelligent decision making. If successful, he hopes other projects will follow his lead and adopt similar practices.

(Written by Melissa Weckerle)

Fellow Yatong Chen

You can read Yatong's report here

Today, many decision-making processes have been automated and rely on machine-learning models to inform — or even arrive at — a final decision. Examples include job and loan application systems, many of which are driven by artificial rather than human intelligence. However, the machine-learning models that underlie these decision-making tools are prone to being “gamed” when applicants respond dishonestly to obtain favorable outcomes. This can result in decisions that increase disparities among historically underrepresented populations, eliciting concerns about the fairness of using machine learning to make decisions that affect people’s lives.

To address this problem, Chen used her summer fellowship to explore ethics in machine learning. Under the guidance of Assistant Professor of Computer Science and Engineering Yang Liu, Chen researched the biases that currently plague these machine-learning models and used this information to develop algorithms that are fair, accountable, and transparent, and that equally incentivize improvement from individuals across different subpopulations.

“As a computer scientist who has experienced racism, I feel an obligation to use my expertise to fight discrimination,” said Chen.

After developing new, bias-free algorithms, Chen is interested in studying how fairness constraints in machine-learning models affect different racial groups over time. The data she gathers will help her refine the models and develop unbiased artificial-intelligence applications, which Chen hopes will be widely adopted.

(Written by Melissa Weckerle)