IF LAST YEAR HAD A TITLE, it might be ‘the year of artificial intelligence’ (AI), especially of advanced systems that use machine learning (ML). Universities around the world, including ours, asked more students to take tests in supervised classrooms to prevent them from using tools like ChatGPT to write answers for them. Meanwhile, notable computer scientists, including Geoffrey Hinton, widely considered the godfather of AI, wrote open letters sounding the alarm over “existential threats” from future versions of these technologies.
As much as we agree that it would be highly undesirable for computers to start wars or interfere in elections, we don’t need to imagine future technologies to see that ML tools are already reproducing social inequalities. To see how these tools reproduce inequalities, it helps to understand a bit about how they work.
ML is a subcategory of AI that uses algorithms to detect patterns in data, typically vast quantities of data, and then uses those patterns to make decisions. What distinguishes ML is that it aims to accomplish tacit-knowledge tasks: tasks that humans can perform but struggle to explain. For example, consider a chair with four legs, a back, a seat and armrests. Is it still a chair without armrests? What if it is shaped like an egg, or has rockers instead of legs? While it may be challenging to explain exactly what makes an object a chair, it is generally easy for humans to decide whether an object is a chair.
Traditional algorithms are rules-based, in that a programmer writes a set