Problem
Real-world inequalities are reproduced within algorithms and flow back into the real world.
Machine learning algorithms are widely used across many applications and have a direct impact on our lives. When designed and engineered to serve humans, they contribute to significant advances in the economy and the public good. However, because these algorithms are created and designed by humans, they are prone to bias.
A new generation of researchers tasked with creating algorithmic systems has solid technical backgrounds but lacks substantial human rights knowledge, or frameworks for applying this technical knowledge as AI for Social Good. At university, new machine learning engineers and data scientists are taught that data is unique, both ground truth and objective, rather than relative and contextual, as we now widely understand it to be.
Universities have a critical role in teaching, sharing, and expanding discourse on human rights concepts in this burgeoning and increasingly fundamental field of study and practice. Integrating human rights values into computing technology can help educate the next generation of scientists and engineers to work for the public good, creating technology that is aware of and aligned with human values rather than unwittingly undermining them.
Why Human Rights?
Human rights are rights we have because we exist as human beings. These universal rights are inherent to us all, regardless of nationality, sex, national or ethnic origin, or any other status.
Fairness is a sociocultural concept, derived from ethics and political philosophy, that refers to plural conceptions of justice between individuals.
Human rights, on the other hand:
- are often better defined and measurable, with most defined under international or national law
- provide an ethical lens that transcends national and cultural borders
- convert voluntary promises of ethical behaviour into compulsory requirements for compliance with established legislation
- put people at the centre of decision-making, so that any unintentional harm can be assessed and addressed
<AI & Equality> methodology
Our methodology includes a workshop, consisting of a Human Rights module and accompanying code, together with an outreach and community plan that integrates human rights concepts with data science.
- Goal
Bring an international generation of university students to understand the scientist’s unique potential for social impact in the real world, bridging science and human rights policy to foster systemic resilience and more equal, just, and robust democracies.
- Team
A joint effort between EPFL (École Polytechnique Fédérale de Lausanne) and Women at the table, in collaboration with the Office of the United Nations High Commissioner for Human Rights.
- Audience
Computer/data science students and early-career data scientists.
Contact Us
Contact us as we build our interdisciplinary community!
Please leave us a message and we’ll get back to you as soon as possible.
sofia[at]womenatthetable.net