As ChatGPT enthralls the public with promises of limitless potential, there is no better time to talk about what AI tools might also erase. Enter Joy Buolamwini, an MIT researcher, poet, and social activist who studies how artificial intelligence embeds bias in its results. Her research focuses on what she calls the “coded gaze.” For example, she found that commercial facial analysis systems from IBM, Microsoft, and Face++ did fairly well at classifying gender from a face, but were noticeably more accurate on images of lighter-skinned men than on those of darker-skinned women. That is certainly an equity problem, but it’s a business problem too, Buolamwini believes.
Read more about Buolamwini or watch her interview below.