November 2021

This blog post is the third in a series about facial recognition software in various forms of public and private means of transportation, as well as the greater public policy concerns of facial recognition tools. More posts about the relationship between transportation technology, FRS, and modern slavery will follow.

Racism has been ingrained in America for centuries. Though we have come a long way, that history is still very much present in the inner workings of our society, especially when it comes to technology and transportation. So how has the history of our country encouraged tech racism in transportation? In her book Dark Matters: On the Surveillance of Blackness, Professor Simone Browne demonstrates that we can trace the emergence of surveillance technologies and practices back to the trans-Atlantic slave trade.

Early surveillance in this country began in the 18th century with the “lantern laws.” Simply put, Black, mixed-race, and Indigenous people were required to carry candle lanterns while walking in the streets after dark when not in the company of a white person, so that enslaved people could be easily identified. The “lantern laws” were a prime example of early supervisory technology, and breaking them carried punishments. Not only was this a form of early surveillance, it was a form of control: the “lantern laws” made it possible for the Black body to be controlled and helped to maintain racial boundaries.

In the 1950s and 60s, government surveillance programs like the FBI’s “COINTELPRO” targeted Black people in a systematic attempt to spy on and disrupt activists in the name of “national security.” More recently, we have learned that FBI surveillance programs target so-called “Black Identity Extremists.” Put simply, race plays a major role in a policy term like “Black Identity Extremist” because the FBI is attempting to define a movement where none exists. Essentially, a group of Black individuals connecting ideologically is considered a threat because they are Black.

We can see how past laws and practices connect to the present. Today, police surveillance cameras are disproportionately installed in Black and Brown neighborhoods to keep a constant watch. Beyond the disproportionate rate at which Black and Brown communities are watched, the ACLU warns of additional ways the government could misuse cameras: for voyeurism, which has targeted women; to spy on and harass political activists; and even for outright criminal purposes. Governmental surveillance programs are only the most recent flashpoint in a string of periodic public debates around domestic spying.

Racial bias is a significant factor in facial recognition technology in transportation, especially when it is used by law enforcement agencies. Black people are incarcerated at more than five times the rate of white people. Black people receive harsher prison sentences, are more likely to be held on bail during pretrial proceedings, and are dying disproportionately at the hands of the police. Those same racial biases are very much present in the technology that law enforcement agencies use to aid in arrests.

To start, technology itself can be racially biased. In their 2018 study, Joy Buolamwini and Timnit Gebru brought to the forefront how algorithms can be racist. For example, law enforcement uses digital technology for surveillance and for predicting crime on the theory that it will make policing more accurate, efficient, and effective. But digital technology such as facial recognition can instead function as a tool for racial bias, not effective policing.

This technology can be beneficial in theory, but when people of color are misidentified at disproportionate rates, we must reconsider the algorithms and the purpose behind facial recognition. People of color are misclassified over a third of the time, while white people rarely suffer from these mistakes. Buolamwini and Gebru’s 2018 study found that the datasets used to identify people were overwhelmingly composed of lighter-skinned people: Black women were misidentified approximately 35% of the time, versus 0.8% for white men. Additionally, in 2019, a national study of over 100 facial recognition algorithms found that they did not work well on Black and Asian faces.
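To make the idea of disaggregated error rates concrete, here is a minimal sketch of the kind of per-group breakdown such audits rely on. The records and group labels below are hypothetical placeholders, not data from Buolamwini and Gebru’s study:

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, was the face
# misidentified?). A real audit would generate these by running a
# facial recognition system against a labeled benchmark.
results = [
    ("darker-skinned women", True),
    ("darker-skinned women", True),
    ("darker-skinned women", False),
    ("lighter-skinned men", False),
    ("lighter-skinned men", False),
    ("lighter-skinned men", False),
]

def error_rates_by_group(records):
    """Compute the misidentification rate separately for each group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, misidentified in records:
        totals[group] += 1
        errors[group] += int(misidentified)
    return {group: errors[group] / totals[group] for group in totals}

for group, rate in sorted(error_rates_by_group(results).items()):
    print(f"{group}: {rate:.1%} misidentified")
```

A single overall accuracy figure would average these groups together and hide the gap; reporting the rate per subgroup, as the sketch does, is exactly what exposes a 35% versus 0.8% disparity.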

With many software business models increasingly relying on facial recognition tech, these error-prone algorithms exacerbate already-pervasive racial biases against people of color. Moreover, false matches feed bigger problems, such as mass incarceration. All it takes is one false match to trigger lengthy interrogations, placement on a police watch list, dangerous police encounters, false arrest, or, worse, wrongful conviction. A false match can come from nearly anything. In New Jersey, for example, Nijeer Parks was arrested for a crime he did not commit based on a bad face recognition match, which came from police comparing Mr. Parks’s New Jersey state ID with a fake Tennessee driver’s license left behind by the perpetrator.

The risk is greater for people like Mr. Parks, who have prior criminal records, because facial recognition software is often tied into mugshot databases. This amplifies the racism further: when a person is arrested and law enforcement takes their mugshot, it is saved in the database. Since people of color are arrested at a higher rate for minor crimes, their faces are more likely to be stored in those databases, which increases the odds of misidentification and other errors.
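A back-of-the-envelope calculation shows why being enrolled in such a database is itself a risk. The numbers below are illustrative assumptions, not measured rates for any real system:

```python
# Assumed chance that any single 1:N search falsely matches a given
# enrolled person, and an assumed volume of searches run against the
# database. Both figures are hypothetical.
false_match_rate = 0.001
searches_per_year = 5_000

# If each search is treated as independent, the probability of being
# falsely matched at least once compounds across searches.
p_at_least_one = 1 - (1 - false_match_rate) ** searches_per_year
print(f"Chance of at least one false match in a year: {p_at_least_one:.0%}")
# Under these assumptions: roughly 99%.
```

The structural point survives any particular choice of numbers: every photo added to the database adds exposure, so over-enrollment of Black faces skews that exposure racially before algorithmic bias even enters the picture.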

Law enforcement agencies and the justice system across the board need to consider that machines can be wrong. Just like humans, algorithms are fallible. Recent studies have documented subjective flaws in eyewitness identification of suspects, and those same weaknesses in human judgment can affect the use of facial recognition technologies. Both human and algorithmic error exist, and the error rates slip in during the design and “training” process, before algorithms are ever deployed. Simply put, tests by the National Institute of Standards and Technology (NIST) for differential error rates across different parts of the population show substantial error-rate variations for certain races. As mentioned before, if there are millions of examples of white men in a database and only two Black women, the algorithms will have difficulty distinguishing the faces of Black women. The problem is not merely a lack of training data: the software itself becomes less able to identify features on certain kinds of faces.
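The effect of unbalanced training data can be illustrated with a toy simulation. Everything below is synthetic: random vectors stand in for face embeddings, and the higher noise level assigned to the underrepresented group is an assumption encoding features the model learned poorly from scarce examples. This is a sketch of the mechanism, not a model of any real system:

```python
import random

random.seed(0)

def make_identity(center_spread, noise, dim=8):
    """One synthetic 'identity': a cluster center plus a noisy sampler."""
    center = [random.gauss(0, center_spread) for _ in range(dim)]
    def sample():
        return [c + random.gauss(0, noise) for c in center]
    return center, sample

def nearest(probe, gallery):
    """Index of the gallery vector closest to the probe (a 1:N search)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(gallery)), key=lambda i: dist(probe, gallery[i]))

def group_error_rate(n_ids, noise, trials=500):
    """Fraction of probes matched to the wrong identity."""
    ids = [make_identity(1.0, noise) for _ in range(n_ids)]
    gallery = [center for center, _ in ids]
    wrong = 0
    for _ in range(trials):
        i = random.randrange(n_ids)
        probe = ids[i][1]()  # a noisy new "photo" of identity i
        wrong += nearest(probe, gallery) != i
    return wrong / trials

# Tight clusters stand in for faces whose features the model learned
# well; noisy clusters stand in for faces learned from scarce data.
print("well-represented group :", group_error_rate(50, noise=0.2))
print("underrepresented group :", group_error_rate(50, noise=0.8))
```

Even this crude setup shows the pattern: identical search logic, sharply different error rates, purely because one group’s features are encoded less faithfully.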

Although groups are trying to make surveillance technology better for people of color, we must look at our history as a country, especially regarding tech racism in transportation. From the “lantern laws” to facial recognition today, government actors such as police departments and the FBI have been allowed to deploy invasive face surveillance technologies against Black and Brown communities merely for existing. Additionally, racial bias within law enforcement agencies can inform emerging technologies and carry over into the transportation sector. This intersection may be most obvious when we think of interactions such as traffic stops.

There are also less obvious connections between systemic racism and FRS in transportation, including access to transportation and failures to recognize pedestrians or riders. Racial disparities within FRS used in personal vehicles, rideshares, buses, or trains are not only unfair and unequal; they are unsafe. Tech racism could mean that nonwhite people (namely Black people) are locked out of their vehicles, unable to start their vehicles, hit by buses, unidentified by automatic train doors, or unnoticed by safety features such as fatigue prevention at higher rates than white people.

The Transportation Security Administration has been testing facial recognition technology at airports across the country and expects it to become a preferred method of verifying a passenger’s identity. However, according to NIST, facial recognition software shows a higher rate of incorrect matches for Asian and Black people than for white people, including in airport surveillance settings. The research clearly shows that technology in transportation has had its most significant impacts on people of color, who are already dealing with transportation disadvantages. If the technology used in transportation continues to reinforce human biases, it will perpetuate inequality.

Facial recognition is a powerful technology. It can have significant implications in criminal justice and everyday life, but we must build a more equitable face recognition landscape. The inequities are being addressed: algorithms can be trained on diverse and representative datasets, the photos within databases can be made more equitable, and regular, ethical auditing is possible, especially with respect to skin tone. Yet even as racial bias in facial recognition technology is being addressed, the question remains: should facial recognition technology be banned? There is historical precedent for technology being used to surveil the movements of the Black population, and facial recognition relies on data fed to it by developers who are disproportionately white.

This blog post is the second in a series about facial recognition software in various forms of public and private means of transportation, as well as the greater public policy concerns of facial recognition tools. More posts about the relationship between transportation technology, FRS, and modern slavery will follow.

Volume II: Beginning to Think about Modern Slavery and Human Trafficking

This blog post is the next in our series about facial recognition software (FRS) in transportation technology. This time, we will begin considering whether facial recognition software can be a meaningful tool for combatting modern slavery and human trafficking. The two most pressing questions regarding this topic are: first, is FRS an impactful tool in combatting slavery and trafficking; and second, what are the relevant slavery risks?

Before we begin to think about either of these questions, let’s consider what slavery and trafficking mean in the context of transportation technology. It is important to understand that experts disagree about the best working definitions of slavery and trafficking, so the definitions we use here are not absolute. When we use the term “modern slavery,” we are generally talking about the exploitation of humans for personal or financial gain through deceit, force, or abuse of a person’s vulnerability. Modern slavery could appear in the production of our clothing, the manufacturing of our cars, the harvesting of our food, or in forced sex work, for example. Human trafficking, though often conflated with modern slavery, is defined by the United Nations as the “transportation, transfer, harbouring or receipt of people through force, fraud or deception, with the aim of exploiting them for profit”.

Turning to the effectiveness of FRS, the short answer is that FRS has helped identify and rescue survivors of slavery and trafficking. Tools such as Spotlight use image processing to scan missing persons’ photographs and search through databases of online sex ads. Spotlight is owned and operated by Thorn, an organization that seeks to use technology to promote safety and focuses primarily on combatting child sex trafficking and child pornography.

Spotlight has helped identify victims 60% faster than searches that do not include FRS and has identified over 17,000 missing children since 2016. The idea is that sex work is often advertised online, and Spotlight can match photographs in those ads with photos in databases of missing persons to identify which ads include children. In the case of adults, such FRS services could identify people who may be participating against their will, based on who has been reported missing.
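In broad strokes, the matching step in such a tool reduces to comparing numerical face embeddings from ad photos against embeddings of missing-persons photos. The sketch below is a generic illustration, not Spotlight’s or Rekognition’s actual pipeline, and the embeddings and threshold are hypothetical stand-ins for what a real face-embedding model would produce:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def find_candidates(ad_embedding, missing_db, threshold=0.85):
    """Return missing-person records whose face embedding is close to the
    face found in an online ad. The threshold is an assumed value; real
    systems tune it to trade false matches against missed victims."""
    matches = []
    for record in missing_db:
        score = cosine_similarity(ad_embedding, record["embedding"])
        if score >= threshold:
            matches.append((record, score))
    return matches

# Hypothetical usage: embeddings would come from a face-embedding model.
missing_db = [
    {"case": "case-1042", "embedding": [0.1, 0.9, 0.3]},
    {"case": "case-2210", "embedding": [0.8, 0.1, 0.5]},
]
ad_embedding = [0.12, 0.88, 0.31]
for record, score in find_candidates(ad_embedding, missing_db):
    print(record["case"], f"similarity={score:.2f}")
```

The threshold is the policy lever here: lowering it surfaces more candidates but also more false matches, which is exactly where the accuracy concerns from the first half of this series re-enter.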

Hypothetically, using FRS in different means of transportation could help identify missing persons more quickly, because recognition could occur during transit rather than after a person has already been forced to perform labor or has been advertised for sex online. Additionally, as society moves toward automation, victims of modern slavery and human trafficking will have less and less contact during transportation with other humans who might otherwise have identified a problem. FRS could not only replace that safety net but do so more quickly and accurately. This hypothesis, however, assumes many things about the technology: which database the FRS uses, how the images in that database are collected, the accuracy of the analysis, and the ability to intervene.

Thorn has partnered with Amazon, whose Rekognition service powers Spotlight as it has become a tool for law enforcement agencies. This partnership brings us to our second question. Because the implementation of FRS in the transportation sector for this purpose is still speculative, it is important to consider the relevant risks to vulnerable communities.

This is where things get complicated. Amazon has a long history of serious allegations of abusing employees, ranging from labor law violations at best to modern slavery at worst. These claims include factory and warehouse employees urinating in bottles during their shifts because they were discouraged from using the restroom, or not permitted to at all. Reports also note at least one person dying of heat exhaustion and dehydration.

Amazon’s alleged treatment of its employees is troubling because it creates a difficult dynamic: actors who have contributed positively to the fight against slavery and trafficking may also be participants in these egregious practices. Modern slavery and human trafficking are so prevalent that an estimated 40 million people are currently enslaved, and almost every individual consumes goods or services produced by slave labor. Each of us has a slavery footprint because many companies on which we rely have slavery somewhere in their supply chain. But what does all of this mean for the impact and risks associated with facial recognition in vehicles?

Truthfully, upticks in slavery have been closely correlated with every major transportation innovation, from roads to ships to rubber tires. Volkswagen, Porsche, General Motors, Mercedes-Benz, and BMW each capitalized on the Holocaust by producing wartime materials for Nazi Germany or utilizing forced labor from concentration camps. These are concepts that will be explored further in later blogs. For now, the important question is: what does the dark history of these companies have to do with using FRS in different modes of transportation?

Well, put simply, FRS in various modes of transportation could be a great tool to combat modern slavery and human trafficking, but there will be obstacles related to public opinion, the right to privacy, technology racism, agency rulemaking, and legislative drafting. Relatedly, the issue of unethical labor practices by technology and car companies may leave a bad taste in the mouths of consumers, who could be left wondering whether investing in FRS is simply a public relations stunt when thousands of workers have historically been exploited at the hands of the very same companies.

For this reason, transparency will be important from private companies as well as from municipalities interested in implementing FRS in public transportation. It is also worth noting that most of the aforementioned companies have not only become more transparent about their roles in the Holocaust but have also paid reparations (see this Volkswagen example).

This point is important: by calling out the complicated dynamic of entities that have facilitated slavery and trafficking and now want to combat them, we are not embarking on a witch hunt for hypocrites, but shedding light on the very complicated web of modern slavery, which has touched nearly every facet of society. Recognizing this relationship does not mean those entities cannot be part of the solution, but it does mean we must anticipate some standoffishness from the public, particularly from affected communities, and it should serve as something of a lesson about the potential role of the private sector in fostering a solution.

We will address potential legal avenues throughout this series, but one option is federal legislation. Federal human trafficking laws and regulations are nothing new in the United States; they date back to the Mann Act of 1910 (which was problematic in its own right, but the beginning of this story nonetheless). The Victims of Trafficking and Violence Protection Act (TVPA) has been reauthorized three times since it was originally passed in 2000 and was expanded in 2005 and 2008 in light of newly available research, including technological advancements and the power of the internet era.

State and federal agencies could also regulate FRS as a tool to combat slavery and trafficking. Many federal agencies have invested in FRS studies for various purposes, including a Department of Transportation study on eye tracking to gauge the safety of commercial drivers, train conductors, and air traffic controllers. Interestingly, the Department of Transportation has also proudly led the Transportation Leaders Against Trafficking initiative for nearly a decade. The initiative connects transportation and travel industry leaders to maximize the industry’s impact on trafficking through training, public outreach, funding, and pledges. The relationship between executive agencies, legislatures, and industry is therefore clear: all three are working to develop FRS, combat slavery and trafficking, and implement transportation technology, and it is time for these seemingly unrelated initiatives to overlap.

This blog is merely here to introduce the complicated intersection between transportation technology, FRS, and modern slavery. Various perspectives and further analyses of the legal history and potential legal solutions will follow.