Framework for counteracting algorithmic bias
Information
Authors: Linnea Skärdin, Clara Engman
Expected completion: 2019-06
Supervisor: Victoria Jonsson
Supervisor's company/institution: Cybercom Group AB
Subject reviewer: Anders Arweström Jansson
Other: -
Presentations
Presentation by Linnea Skärdin
Presentation time: 2019-06-03 15:15
Presentation by Clara Engman
Presentation time: 2019-06-03 16:15
Opponents: Alexander Groth, Saranya Silawiang
Abstract
In the use of third-generation Artificial Intelligence for the development of products and services, there are many hidden risks that may be difficult to detect at an early stage. One of the risks of using Deep Learning algorithms is algorithmic bias, which, in simplified terms, means that implicit prejudices and values are embedded in the implementation of AI. A well-known case is Google Photos' image recognition, which identified black people as gorillas. The purpose of this master thesis is to create a framework that minimizes the risk of algorithmic bias in AI development projects. To accomplish this, the project has been divided into three parts. The first part is a literature study of the phenomenon of bias, both from a human perspective and from an algorithmic perspective. The second part is an investigation of existing frameworks and recommendations published by Facebook, Google and the EU. The third part consists of an empirical contribution in the form of a qualitative interview study, which has been used to create and adapt an initial general framework. Finally, the framework has been revised according to the interview results and applied to a specific AI project in the public sector.
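As a purely illustrative sketch of how algorithmic bias can be quantified in practice (this is not taken from the thesis framework itself), the Python snippet below computes a demographic parity difference, i.e. the gap in positive-prediction rates between two demographic groups. All function names and data are hypothetical assumptions made for the example.

# Illustrative sketch (not part of the thesis): the demographic parity
# difference is the gap in positive-prediction rates between two groups.
# Names and data below are hypothetical.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Return |P(pred=1 | group_a) - P(pred=1 | group_b)| for binary predictions."""
    rate = {}
    for g in (group_a, group_b):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(preds_g) / len(preds_g) if preds_g else 0.0
    return abs(rate[group_a] - rate[group_b])

# Hypothetical usage: binary model decisions (1 = approved) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups, "a", "b"))  # 0.5, a large gap

A value near zero would indicate that both groups receive positive predictions at similar rates; a large value is one possible symptom of the kind of bias the abstract describes.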