Bias in generative AI
Because machine learning models are trained on human-generated data, there is an inherent bias in the outcomes that the algorithms produce.
Human beings generate, curate, organise, and label the datasets. They also design the algorithms, decide which variables to weight more heavily, and choose which algorithms to apply to the problem. At each point in the training process, a human being is making a (sometimes biased) decision.
“So inclusivity matters—from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.” - Kate Crawford (2016)
The difference between human bias and machine-learning bias is that machine-learning bias can be measured and corrected. This is not easy, because of the black-box nature of machine learning algorithms, but it is possible.
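As a minimal sketch of what "measured" can mean in practice, the example below (toy data; the function names are my own, not from any particular library) computes one common bias metric, the demographic-parity gap: the difference in positive-outcome rates a model produces for different groups. A gap near zero suggests the model treats the groups similarly on this one measure; a large gap is a signal that something in the data or training pipeline deserves scrutiny.

```python
def selection_rate(predictions, groups, group):
    """Fraction of positive predictions (1s) given to one group."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates across all groups."""
    rates = {g: selection_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy predictions (1 = favourable outcome, 0 = unfavourable)
# and the group each individual belongs to.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # → demographic parity gap: 0.60
```

Demographic parity is only one of several fairness metrics, and the metrics can conflict with one another, so choosing which to optimise is itself one of the human decisions described above.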
Additional resources
- Hart, R.D. (2017). If you’re not a white male, artificial intelligence’s use in healthcare could be dangerous. Quartz. Retrieved 10/11/2022 from https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous
- Crawford, K. (2016). Artificial intelligence’s white guy problem. The New York Times. Retrieved 10/11/2022 from https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html