Bias

You are participating in an online survey where you have to choose which of three candidates is most suitable for a job as a firefighter. You only see their photos and names. 

The explanation you get after clicking on a photo reflects how AI systems work. Bias arises when an AI system is trained on data that contains skewed patterns or hidden assumptions. For example, if most of the training data consists of images of male doctors and female nurses, the system will tend to assume that doctors are usually men and nurses are usually women. But it goes beyond photos. Imagine an AI system is used to evaluate job applications. If the system has been trained on data from the past ten years in which mostly men were hired for a particular job, the AI might give women lower scores. This leads to discrimination, perhaps unintentionally, but the impact is real.
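
To make this concrete, here is a minimal sketch with entirely made-up numbers (the features, the skew, and the scikit-learn setup are illustrative assumptions, not a real hiring system). A model trained on skewed historical decisions ends up scoring two equally skilled candidates differently, purely because of the pattern in its training data.

```python
# Minimal sketch with synthetic data: a model trained on skewed historical
# hiring decisions reproduces that skew in its own scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

skill = rng.normal(size=n)            # candidate skill, same for both groups
gender = rng.integers(0, 2, size=n)   # 0 = man, 1 = woman (made-up labels)

# Historical decisions: skill mattered, but women were hired less often.
hired = (skill - 1.0 * gender + rng.normal(scale=0.5, size=n)) > 0

# The model sees gender as a feature and simply learns the old pattern.
model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two candidates with identical skill receive different scores.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])  # the woman scores lower
```

Nothing in this sketch is malicious: the model just optimises for agreement with the historical data, and the historical data carried the bias.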

A simpler example: have you noticed that on social media you keep seeing the same types of posts over and over again? That happens because the algorithm that builds your feed learns preferences from what you have clicked on in the past and keeps serving you more of the same. This is how bias in technology operates, gradually steering the choices we make.
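
The feedback loop behind this is easy to simulate. The toy example below is purely illustrative (the topics, weights, and click probability are invented): every click makes the clicked topic a little more likely to be shown, so whatever you engaged with early on gradually crowds out everything else.

```python
# Toy feedback loop: each click nudges the feed toward what was clicked,
# so early preferences get amplified over time.
import random

random.seed(1)
topics = ["sports", "politics", "cooking"]
weights = {t: 1.0 for t in topics}  # how often each topic is shown

for step in range(50):
    # The feed picks a topic in proportion to its current weight...
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # ...and a click on it reinforces that weight for next time.
    if random.random() < 0.8:
        weights[shown] += 0.5

print(weights)  # after a few dozen steps, one topic usually dominates
```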

Bias can pop up in unexpected ways. For instance, some AI tools that check for plagiarism or evaluate language use have been found to be biased against students who are not native English speakers. These systems are often trained on texts written by native speakers, so non-native sentence structures or word choices may be flagged as suspicious, even though they simply reflect a different language background. This kind of bias can lead to unfair evaluations and give a false impression of your writing skills.
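
As a rough illustration of how this can happen, here is a hypothetical "style checker" (not how any real tool works) that has only ever seen a handful of native-speaker sentences. Anything it has not encountered before counts as suspicious, even when the writing is perfectly understandable.

```python
# Hypothetical checker: it knows only the word pairs from a tiny set of
# native-speaker sentences and flags everything it has never seen.
native_training = [
    "i saw him yesterday",
    "we discussed the results in detail",
    "the experiment was finished last week",
]

seen_pairs = set()
for sentence in native_training:
    words = sentence.split()
    seen_pairs.update(zip(words, words[1:]))

def suspicion(sentence):
    """Fraction of word pairs the checker has never seen before."""
    words = sentence.lower().split()
    pairs = list(zip(words, words[1:]))
    return sum(p not in seen_pairs for p in pairs) / len(pairs)

print(suspicion("i saw him yesterday"))                # 0.0
print(suspicion("i have seen him yesterday already"))  # 0.8
```

The second sentence is flagged not because it is wrong, but because the checker's training data never contained that phrasing, which is exactly the trap non-native writers can fall into.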

It is important to understand how AI systems work and to stay alert for potential bias. Ask yourself if the data the system uses is representative and diverse, and whether the results might be biased against certain groups. By staying critical and asking these questions, you can help ensure technology is fairer and more responsible.