
Bias in our Digital Democracy

Joy Buolamwini, Coded Bias

Democracy, as we’ve known it, is dead. And perhaps this is good news. Because if democracy is the will of the people, then widespread access to digital tools may ensure more of our voices are heard.

Of course, it’s not that simple. Democracy, like most social institutions, is a complex blend of ideas and practices. The truth is, when it comes to our representative democracy, we the people have more questions than answers. 

  • Do our votes really count?
  • Whose voices matter?
  • Who gets an opportunity to run for office, and which special interests have a financial stake in their success?
  • How can we remain well-informed citizens when the search for credible information has become a moving target?

We’ve asked these questions for hundreds of years. So, what makes today’s digital democracy so different?

“Two of the essential pillars of democracy are liberty and equality. AI erodes both these principles.”

Sukhayl Niyazov in The Future of Democracy in the AI Era

Two words: coded bias.

We may have the impression that the 1s and 0s of computer code are neutral. Factual. Devoid of bias. But, in our digital democracy, who creates the countless lines of code used for everything from political polling chatbots to social media feeds to biometric identity verification tools that may be used in voting machines? Human beings: people who carry subconscious ideas, preferences, and biases.

Woman with lines of code across her face.
Photo by cottonbro from Pexels

These coded biases have real-world, unpredictable consequences that give rise to brand new questions that reach well beyond politics:

  • What kind of information is gathered about us when we use technology? Who is gathering it? And, what are they doing with it?
  • Can we trust what we see or hear online? What is real and what is a digital creation?
  • If tech companies feed us only the predicted and curated information that fits our historical preferences, will we lose our ability for critical thinking and empathy?
  • Is Artificial Intelligence (AI) fallible? In other words, can AI predict, filter, rank, or match data incorrectly? If so, what are the consequences?

Joy Buolamwini, a Ph.D. student in the Massachusetts Institute of Technology (MIT) Media Lab and “poet of code,” faced many of these questions while creating a fun AI-based class project called the Aspire Mirror. The mirror she designed would project an inspiring image onto her face for a boost of confidence. This idea required facial detection software.

As it turned out, the software easily detected the faces of her lighter-skinned colleagues, but not her darker-skinned visage. To use the software, she had to cover her face with a cheap, white, craft-store mask. In her 2016 TED Talk, she coined the term “coded gaze” to describe the algorithmic bias she experienced. 

Joy Buolamwini with white mask, Courtesy of Coded Bias

Unfortunately, this wasn’t Buolamwini’s first experience with algorithmic bias.

The Fight for Algorithmic Justice

Buolamwini founded the Algorithmic Justice League, “an organization that combines art and research to illuminate the social implications and harms of artificial intelligence.” Additionally, Buolamwini’s story is featured in the documentary “Coded Bias,” directed and produced by Shalini Kantayya. The film shines a light on widespread algorithmic bias in law enforcement, citizen surveillance, housing, healthcare, employability, credit-worthiness, and more. “Coded Bias” premiered at the 2020 Sundance Film Festival and is now playing at more than 70 virtual cinemas.

Along with Buolamwini, other global researchers appeared in the film, including United States (U.S.)-based author and mathematician Dr. Cathy O’Neil, United Kingdom (U.K.)-based director and activist Silkie Carlo, and international data rights legal expert Ravi Naik. Each testified to their findings of algorithmic injustice.



Today, the list of organizations and people fighting algorithmic bias is growing steadily. At the same time, not everyone supports this work, especially those who profit from the unregulated use of AI.

For example, as of this writing, there is public controversy surrounding the allegedly forced resignation of Dr. Timnit Gebru, former Staff Research Scientist and Co-Lead of the Ethical Artificial Intelligence team at Google. Gebru’s supervisors prevented her from presenting a research paper that revealed potential problems with large AI language models. She believed these models actually encode bias, privileging the language patterns of wealthy nations.

Algorithmic Bias and Democracy

Algorithmic bias directly impacts many aspects of our society. It replicates years of historic bias that minoritized citizens continue to experience. As more people feel they are not represented fairly, we risk eroding the foundations of our representative democracy, especially in the digital age.

And, unfortunately, algorithmic bias is just one of the many factors threatening our democracy:

  • Synthetic media, such as deepfakes, present photorealistic fake videos of real people synced to fabricated audio.
  • Hyper-customized web search results feed us a distorted view of the world in which everyone seems to share our perspective (i.e., filter bubbles).
  • Election interference, where foreign and domestic extremists use social media as a vehicle to manipulate and deceive voters.

What does this mean for culturally fluid citizens?

Man with an image of the American flag projected onto his body.
Photo by cottonbro from Pexels

It seems the answers are just as complex and unpredictable as democracy itself. As tech companies build AI-based tools to collect and filter data to share with political candidates and elected officials, our unique, culturally fluid perspectives may be lost. Worse, we may experience algorithmic bias in our own lives.

The digital age is disrupting every aspect of our lives, extending down to the foundations of civil society as we know it. More than ever, we must find real-world, concrete ways to participate in our digital democracy – beyond likes and tweets.

Pick up the phone and call your elected official. Volunteer to get out the vote. Write letters. Join community organizations.

Don’t let them reduce your beautiful, unique perspective to ones and zeros.


8 comments

  1. Before reading this article, I definitely questioned the same things about democracy that were mentioned at the beginning regarding whether my vote counted and such; however, I had no idea about coded bias. I can’t believe I wasn’t aware of such a prominent thing in our digital day and age. I feel like we as a society tend to think of advancing technology as a positive contribution; however, that may not be the case, and it’s important that we recognize this and stay cautious before it’s too late.

    1. Christina, you are so right. I, too, was unaware of the problem of coded bias before researching this piece. It’s incredible what we don’t know. And, as we encounter more and more “deep fake” videos and content, citizens will need to be even more vigilant and educated about how tech can manipulate us. Very few people have an awareness of this, and the problem is already here. Thanks for your reflection.

  2. I knew about coded biases and filter bubbles, but I never thought about how they might affect culturally fluid people. The algorithms that cater to an individual’s personal views are usually too “black-and-white”: man or woman, Democrat or Republican, Black or white, etc. This obviously doesn’t account for multiculturalism and could make culturally fluid individuals feel even more out of place. As these kinds of technologies advance, we need more diverse and multicultural voices to be heard so they are not left behind.

    1. Yes, Melissa, you bring up a very good point. We often see more of what we choose to click on, rather than anything outside of that “bubble.” There is so much nuance that gets lost in the way AI categorizes everything and everyone into neat little boxes. I agree that we need more diverse and multicultural voices to be heard and, in my opinion, to be in the room of people who are building these tech tools. The Algorithmic Justice League, mentioned in this piece, is fighting for this – as are others. Thanks for reading and for your comment.

  3. Reading this article reminded me of many algorithmic processes that display overt and covert biases in the media, in society, and online. One in particular that crossed my mind while reading about the Aspire Mirror project was related to Twitter. There have been various experiments testing whether its image-cropping algorithm was biased, and it was. Four images were posted, each a column of three men: one with the same three images of a white man, one with three images of a Black man, and the other two mixing the two. When viewing the images prior to opening the post, only the white man would be shown in the thumbnail; the one with the Black man did not show his face, but rather his suit. This underscores the racial biases within algorithms that this article highlights. I enjoyed reading about the other examples of AI and algorithmic biases and dangers, including deepfakes and election interference.

    1. Yes, Angelica, it is quite stunning when you realize how much of what we see (or don’t see) has been engineered. It is so important that we continue to bring these examples to light as we continue through the 4th Industrial Revolution. There are many who have no idea this is happening. Thank you for your comment.
