Democracy, as we’ve known it, is dead. And perhaps that’s good news. Because if democracy is the will of the people, then widespread access to digital tools may ensure that more of our voices are heard.
Of course, it’s not that simple. Democracy, like most social institutions, is a complex blend of ideas and practices. The truth is, when it comes to our representative democracy, we the people have more questions than answers.
- Do our votes really count?
- Whose voices matter?
- Who gets an opportunity to run for office, and which special interests have a vested financial interest in their success?
- How can we remain well-informed citizens when the search for credible information has become a moving target?
We’ve asked these questions for hundreds of years. So, what makes today’s digital democracy so different?
“Two of the essential pillars of democracy are liberty and equality. AI erodes both these principles.” – Sukhayl Niyazov, in The Future of Democracy in the AI Era
Two words: coded bias.
We may have the impression that the 1s and 0s of computer code are neutral. Factual. Devoid of bias. But in our digital democracy, who creates the countless lines of code used for everything from political polling chatbots to social media feeds to biometric identity verification tools that may be used in voting machines? Human beings – people who carry subconscious ideas, preferences, and biases.
These coded biases have real-world, unpredictable consequences that give rise to brand new questions that reach well beyond politics:
- What kind of information is gathered about us when we use technology? Who is gathering it? And, what are they doing with it?
- Can we trust what we see or hear online? What is real and what is a digital creation?
- If tech companies feed us only the predicted, curated information that fits our historical preferences, will we lose our capacity for critical thinking and empathy?
- Is Artificial Intelligence (AI) fallible? In other words, can AI predict, filter, rank, or match data incorrectly? If so, what are the consequences?
Joy Buolamwini, a Ph.D. student at the Massachusetts Institute of Technology (MIT) Media Lab and “poet of code”, faced many of these questions while creating a fun AI-based class project called the Aspire Mirror. The mirror she designed would project an inspiring image onto her face for a boost of confidence. This idea required facial detection software.
As it turned out, the software easily detected the faces of her lighter-skinned colleagues, but not her darker-skinned visage. To use the software, she had to cover her face with a cheap, white, craft-store mask. In her 2016 TED Talk, she coined the term “coded gaze” to describe the algorithmic bias she experienced.
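This kind of failure can be sketched with a toy model. The snippet below is not real face-detection code; it is a hypothetical “detector” whose decision threshold is fit to a skewed training sample, so any input unlike the training data gets rejected:

```python
# Toy illustration (hypothetical): a "detector" whose decision
# threshold was learned from a skewed training set fails on
# underrepresented inputs. Not a real face-detection algorithm.

def fit_threshold(training_values):
    """Learn the decision boundary as the minimum value seen in training."""
    return min(training_values)

# Skewed training set: only high "brightness" values are represented.
training = [200, 210, 190, 205, 198, 201, 195, 208]
threshold = fit_threshold(training)

def detect(value):
    """Flag a 'face' only if it resembles what the training data covered."""
    return value >= threshold

print(detect(205))  # well-represented input -> True
print(detect(120))  # underrepresented input -> False (the "coded gaze")
```

The bias here was never written down explicitly; it was absorbed silently from an unrepresentative sample, which is what makes it so easy to ship unnoticed.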
Unfortunately, this wasn’t Buolamwini’s first experience with algorithmic bias.
The Fight for Algorithmic Justice
Buolamwini founded the Algorithmic Justice League, “an organization that combines art and research to illuminate the social implications and harms of artificial intelligence.” Additionally, Buolamwini’s story features in a documentary called “Coded Bias,” directed and produced by Shalini Kantayya. This film shines a light on the issue of widespread algorithmic bias in law enforcement, citizen surveillance, housing, healthcare, employability, credit-worthiness, and more. “Coded Bias” premiered at the 2020 Sundance Film Festival and is now playing at more than 70 virtual cinemas.
Along with Buolamwini, other global researchers appeared in the film, including U.S.-based author and mathematician Dr. Cathy O’Neil, U.K.-based director and activist Silkie Carlo, and international data-rights legal expert Ravi Naik. Each testified to their findings of algorithmic injustice.
Today, the list of organizations and people engaged in fighting against algorithmic bias is growing steadily. At the same time, not everyone supports this work — especially those who profit from the unregulated use of AI.
For example, as of the writing of this column, there is public controversy surrounding the allegedly forced resignation of Dr. Timnit Gebru, former Staff Research Scientist and Co-Lead of the Ethical Artificial Intelligence team at Google. Gebru’s supervisors prevented her from presenting a research paper that revealed potential problems with an AI language model. She believed this model actually encoded bias, privileging the language patterns of wealthy nations.
Algorithmic Bias and Democracy
Algorithmic bias directly impacts many aspects of our society, replicating years of historic bias that minoritized citizens continue to experience. As more people feel they are not represented fairly, we risk eroding the foundations of our representative democracy, especially in the digital age.
And, unfortunately, algorithmic bias is just one of the many factors threatening our democracy:
- Synthetic media, such as deepfakes, present photorealistic fake videos of real people synced to fabricated audio.
- Hyper-customized web search results feed us a distorted view of the world in which everyone seems to share our perspective (so-called filter bubbles).
- Election interference, where foreign and domestic extremists use social media as a vehicle to manipulate and deceive voters.
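The filter-bubble dynamic in the list above can be sketched as a toy recommender. Everything in this snippet (the catalog, the topic labels, and the `recommend` function) is hypothetical; it only illustrates how serving a user’s single most-clicked topic makes every other viewpoint vanish from the feed:

```python
# Toy sketch (hypothetical): a recommender that only ever serves
# items from the user's most-clicked topic, narrowing the feed.

from collections import Counter

catalog = {
    "politics-left": ["L1", "L2", "L3"],
    "politics-right": ["R1", "R2", "R3"],
    "science": ["S1", "S2"],
}

def recommend(click_history, n=3):
    """With no history, sample across topics; otherwise serve only
    items from the single most-clicked topic."""
    if not click_history:
        return [items[0] for items in catalog.values()][:n]
    top_topic = Counter(click_history).most_common(1)[0][0]
    return catalog[top_topic][:n]

history = ["politics-left", "science", "politics-left"]
print(recommend(history))  # ['L1', 'L2', 'L3'] -- other topics vanish
```

Each click the curated feed generates is itself fed back into the history, so the narrowing compounds over time.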
What does this mean for culturally fluid citizens?
It seems the answers are just as complex and unpredictable as democracy itself. As tech companies build AI-based tools to collect and filter data to share with political candidates and elected officials, our unique, culturally fluid perspectives may be lost. Worse, we may experience algorithmic bias in our own lives.
The digital age is disrupting every aspect of our lives, extending down to the foundations of civil society as we know it. More than ever, we must find real-world, concrete ways to participate in our digital democracy – beyond likes and tweets.
Pick up the phone and call your elected official. Volunteer to get out the vote. Write letters. Join community organizations.
Don’t let them reduce your beautiful, unique perspective to ones and zeros.