
Coded Fairness Project

Enabling a bias-sensitive development process for machine learning systems

Mike Lehmann

Marina Rost

Vera Schindler-Zins

The Coded Fairness Toolkit (digital and analogue).

Machine learning is being used in more and more industries to solve complex problems. And although machine learning often delivers real added value, its use can also have undesirable side effects. Discrimination by algorithms is already an everyday problem, and as a result, old prejudices deeply rooted in our society are transferred and scaled into a new medium. The reason for this lies in the biases embedded within the algorithms.

This problem is neither unknown nor unaddressed. There are many technical solutions that examine the properties of the training data or the outputs of a machine learning system for possible biases.
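
To illustrate what such a technical check can look like (a minimal sketch with made-up data, not a method from our toolkit), the following Python snippet computes per-group positive-prediction rates for a hypothetical binary classifier and the resulting demographic-parity gap:

```python
# Illustrative sketch of a technical bias check (not part of the
# Coded Fairness Toolkit): demographic parity on hypothetical data.
from collections import defaultdict

def positive_rates(groups, predictions):
    """Share of positive predictions (1) per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, pred in zip(groups, predictions):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical sensitive attribute and model outputs.
groups      = ["a", "a", "a", "b", "b", "b", "b"]
predictions = [1,   1,   0,   1,   0,   0,   0]

rates = positive_rates(groups, predictions)
gap = max(rates.values()) - min(rates.values())
print(rates)  # approx. {'a': 0.67, 'b': 0.25}
print(f"demographic parity gap: {gap:.2f}")
```

A gap close to zero means the classifier flags both groups at a similar rate. Checks like this operate on data and model outputs; our approach, in contrast, addresses the people behind them.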

In this master's thesis, we use our design skills to develop a solution that, unlike previous technical approaches, focuses on the people involved in the development process of a machine learning system. We believe this is a more sustainable way to achieve lasting improvements; the result of this approach is the Coded Fairness Project.

Video: Summary of the Coded Fairness Project

The Coded Fairness Project combines methods that promote the bias-sensitive development of machine learning systems with discursive approaches on how such efforts can be supported within a company. The methods can then be applied in the form of a workshop by the people involved in the system's development. The enclosed booklet provides background information, hints and instructions to support the implementation.

To orient the toolkit, we derived four basic principles from our research: awareness, responsibility, inclusion and testing. Based on these principles, we developed various methods, which we iterated in collaboration with experts from disciplines such as psychology and computer science, as well as through user testing.

Our finished set of methods is presented in the context of the fictitious organisation "Coded Fairness Project". Furthermore, with seals and employee certificates, we want to create a basis for discussion on how companies can be motivated not only to implement and work with our methods, but also to approach the general issue of harmful biases in machine learning.

The field of human-centered design encompasses many promising approaches and experiences that can also be applied to products developed in other disciplines. We want to bring these different perspectives together to add another piece to the puzzle surrounding biases in machine learning, because in our eyes this is an issue that needs to be addressed across the whole spectrum of disciplines.

Coded Fairness Project – Concept Overview
At the beginning of our thesis, we explored the machine learning process and the obstacles that lead to biases via Teachable Machine (https://teachablemachine.withgoogle.com).
Individual methods were user-tested remotely via Miro.
The "Your Persona" method in action.
Poster Packaging of the Coded Fairness Toolkit
The Coded Fairness Posters.
Coded Fairness Toolkit Poster Overview
The Coded Fairness Toolkit Booklet which explains the methods and provides additional background information. (Magazine PSD Mockup – www.mrmockup.com)
The website serves as a communication tool to show different examples of discrimination by algorithms as well as a product page. (Apple iPhone 11 & Macbook Pro Mockup (PSD) - www.unblast.com)
A certificate is intended to motivate companies to use our methods. (Flyer PSD created by CosmoStudio - www.freepik.com)