
Mitch McConnell or Barack Obama—which will the Twitter algorithm choose?

Oct. 27, 2020

Last month, I logged onto Twitter and saw a post on my feed showing two photos of a grinning white man. Curious, I clicked the photo on the left, expanding the full image to reveal a long vertical banner. At the top was an image of Senate Majority Leader Mitch McConnell, followed by some white space, then a photo of Barack Obama. That’s odd. I exited and expanded the photo on the right. This time the photo of Barack Obama was positioned at the top and Mitch McConnell was at the bottom. Both photos were professional headshots with identical lighting, identical image sizes, and similar facial expressions. But both times the Twitter algorithm cropped the preview to display only the senator. You can probably guess what the caption said. “Trying a horrible experiment… Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?”

Twitter uses a cropping algorithm to focus on the most important or interesting parts of an image. Not only does this let the image fit into a preview pane; it’s also designed to entice more clicks and engagement. In this post, the algorithm decided that Mitch McConnell—the right-wing Kentucky senator best known for obstructing Democratic legislation—was more interesting than the former two-term president of the United States. I think it’s important to out myself right now and stress that before this, I had never seen a photo of Mitch McConnell in my life. But Barack Obama? That’s a face that has almost certainly graced your screen before. Perhaps when he was serenading the country with a jazzy Blues Brothers rendition, or calling Kanye an ass, or maybe even professing his love to Michelle on Instagram. My point being, you hardly need any political wit to understand the virality of the former president.
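To make “the most important or interesting parts” concrete, here is a minimal sketch of saliency-based cropping. Twitter has described its cropping system as a saliency model, but everything below (the precomputed saliency scores, the function name, the 8-pixel stride) is an illustrative assumption, not Twitter’s actual code.

```python
# Minimal sketch of saliency-based cropping (illustrative only).
# Assumes `saliency` is a per-pixel "interestingness" score produced by some
# trained model; the real system's model and search strategy are not shown here.
import numpy as np

def crop_to_most_salient(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
    """Slide a crop window over the saliency map and keep the window
    whose total saliency score is highest."""
    h, w = saliency.shape
    best_score, best_yx = -np.inf, (0, 0)
    for y in range(0, h - crop_h + 1, 8):        # coarse 8-pixel stride
        for x in range(0, w - crop_w + 1, 8):
            score = saliency[y:y + crop_h, x:x + crop_w].sum()
            if score > best_score:
                best_score, best_yx = score, (y, x)
    y, x = best_yx
    return image[y:y + crop_h, x:x + crop_w]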

Why, then, did the Twitter algorithm decide the image of Mitch McConnell was more interesting than that of Barack Obama? Skeptics tried tweaking the photos. Perhaps the color of the ties was causing the algorithm to prefer the senator? Maybe if the colors were inverted? Users tried two copies of Obama’s photo, turning up the contrast on one to see which the algorithm would choose. Each tweak arrived at the same disappointing result. I understood the skepticism. I, too, wanted to give the app the benefit of the doubt, believing that there was a trick to it. That there was no way the algorithm could be so blatantly prejudiced. But the truth is, something doesn’t have to be designed with racist intention to be racist.

The first thing to note is that the algorithm has no idea who those faces are. It doesn’t have a clue who Mitch McConnell is. All it “sees” is what someone else programmed it to see. When we think of machine learning, we might imagine a supercomputer slowly gaining agency and sentience, learning our human mannerisms, until eventually it’s so well-versed in our idiosyncrasies that we can no longer differentiate between human and machine. If we peek behind the curtain, a less sexy, less Ex Machina version is revealed. It’s still a (probably white) man operating behind the curtain, but now the curtain is a screen. Behind every algorithm is a person with a set of personal beliefs and biases that no code can completely eradicate. Models are trained on the data sets we feed them. Over time they learn to recognize patterns. If those data sets aren’t diverse, then anything that deviates too far from the norm becomes harder to detect.
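Here is a toy illustration of that last point, entirely my own construction rather than any real production system: a “face detector” trained on data dominated by one group tends to miss the underrepresented group, even though nobody wrote a biased rule anywhere. The synthetic feature vectors, group centers, and sample counts are all invented for the demonstration.

```python
# Toy demonstration: skewed training data -> uneven detection rates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_group(n, center):
    """Synthetic 'face' and 'non-face' feature vectors for one group."""
    faces = rng.normal(center, 1.0, size=(n, 8))
    non_faces = rng.normal(0.0, 1.0, size=(n, 8))
    return np.vstack([faces, non_faces]), np.array([1] * n + [0] * n)

# Group A dominates the training set; group B barely appears in it.
Xa, ya = make_group(5000, center=3.0)
Xb, yb = make_group(50, center=1.2)
clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Detection rate (recall) on fresh samples from each group.
for name, center in [("group A", 3.0), ("group B", 1.2)]:
    X_test, y_test = make_group(2000, center)
    print(name, "detection rate:", recall_score(y_test, clf.predict(X_test)))
```

Because group B makes up a tiny fraction of the training data, the model’s decision boundary is tuned almost entirely to group A, and group B’s detection rate falls off. No one had to intend any of this; the skew in the data is enough.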

We’re primed to believe that technology and data are neutral. That misconception becomes dangerous as we rely more heavily on technology. Concerns about algorithmic bias aren’t new, but they have been met with little to no attention. On January 2, 2019, a user pointed out that the same algorithm cropped women’s heads out of the preview, zeroing in on their bodies. Men in the field accused her of making false claims to gain attention. Zoom’s flawed face-detection algorithm was recently brought to light after it failed to detect a Black faculty member’s head while he was using a virtual background. This stirred massive attention after this summer’s reckoning with racism and the police state.

We increasingly delegate responsibility to algorithms, entrusting them with the authority to fight crime and identify suspects—yet we fail to acknowledge the implicit bias in our own thinking. In early January of this year, Robert Julian-Borchak Williams was arrested and detained after a faulty algorithm misidentified him as the suspect in a larceny case. This was the first known case of an American wrongfully arrested on account of a flawed facial-recognition algorithm. During the interrogation, a police officer flipped over a photo from the surveillance footage to show a heavy-set Black man and asked if it was Williams. In disbelief, Williams held the photo up to his face and told the officer, “No, this is not me. You think all Black men look alike?”

In countries where the population is racially homogeneous, facial recognition is pretty accurate. The problem arises when these companies export their facial-recognition systems to places with more diverse populations. Accuracy drops significantly when the systems try to match women and people of color: white women are falsely matched in roughly one out of every 10,000 tests, and Black women are five times more likely to be misidentified. Defenders of the technology grasp at straws, offering glib excuses and performing the same mental calisthenics as the skeptics of the Twitter post above. They claim that it’s “harder to take a good picture of a person with dark skin than it is for a white person.” That “different demographic groups have differences in the phenotypic expression of genes.” Technology is the golden child, and we refuse to believe that it can do wrong.
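A bit of back-of-the-envelope arithmetic shows what those rates mean at scale. The false-match rate and the fivefold multiplier are the figures cited above; the number of searches is a hypothetical round number chosen only for illustration.

```python
# Arithmetic on the rates cited above; the search volume is hypothetical.
white_women_fmr = 1 / 10_000           # roughly one false match per 10,000 tests
black_women_fmr = 5 * white_women_fmr  # "five times more likely", i.e. 1 in 2,000

searches = 1_000_000                   # hypothetical volume of searches
print("Expected false matches, white women:", searches * white_women_fmr)  # ~100
print("Expected false matches, Black women:", searches * black_women_fmr)  # ~500
```

A gap that reads as a rounding error on paper becomes hundreds of extra wrongful matches once a system runs at scale.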

At the root of this insidious culture is a colonialist ideology that gets passed down to our mechanical progeny. The structure of the internet, algorithmic culture, and technology mirrors our hegemonic and imperialist social order. Those with the power to design systems wield the power to uphold hierarchical structures that privilege certain kinds of information over others. We see this in every crevice of the digital space—in the way marginalized bodies are flattened, ignored, or entirely erased. This lack of representation is directly related to the lack of intersectional analysis in digital spaces. As computing power grows at an unprecedented rate, so does the need to create systems that are more inclusive: systems where diverse teams can check for blind spots, where ethical code is at the forefront of design, and where social change is a priority.