TECHNOLOGY & CRIMINALIZATION

The carceral state deploys technologies to control, police, surveil, and limit the flow of money and power to communities. The dovetailing of technology and criminalization is not new—it is something communities of color have experienced since the founding of the US. This has included methods from the lantern surveillance laws of the 18th century to COINTELPRO in the 1960s, from the Federal Bureau of Investigation’s ongoing “Black Identity Extremist” designation and its past surveillance of protests at Standing Rock to its use of facial recognition software to identify activists and protestors. Communities of color are increasingly hyper-policed with these technologies and entangled within the criminal legal system, perpetuating the racist history of policing and prisons in the United States. These systems of surveillance range from government databases, police body cameras, private security cameras, social media targeted ads, and consumer profiles to predictive policing and beyond. Most are deployed without community knowledge or consent, and without an accessible understanding of the interactions between the state and private companies. Worse, state narratives often conflate safety with security to justify the use of these surveillance and other carceral technologies on the public.

Government agencies and private tech companies invest significant resources into developing surveillance technology to broaden the web of criminalization, while organizers use what digital tools and other technology they have at their disposal to combat this violence in movement-building spaces. As a result of these investments, the state and its co-conspirators simultaneously erode communities’ access to safer working tools (i.e., through legislation like FOSTA and EARN IT, and through controversies such as USAGM’s defunding of the Open Technology Fund) and make them more vulnerable to often lethal policing and surveillance.


We’re seeing this conflation of safety and security that has caused a great deal of harm. Law enforcement and city government, they tout increasing safety for communities and almost always they use the security mindset to do that. We’re trying to drive home the narrative that surveillance is not safety. Safety is knowing who your neighbors are. Safety is a resourced community center. Safety is thriving public education. Safety is making sure that your neighbors have water and food. Those are things that are safe.

—organizer & researcher

Dominant, pro-surveillance narratives peddle surveillance technology as a “smarter” method of social control to “protect” public safety and national security. Under this pretext, the state has historically shaped the discourse around what criminal behavior looks like and thus justified the need for heightened surveillance and other discriminatory law enforcement practices. Policies such as Jim Crow, Broken Windows policing, and California’s Three Strikes Law serve as examples of how a population is vilified to justify extreme levels of surveillance and policing. As the state increasingly uses technological means to expand the scale and scope of its carceral apparatus, we have witnessed a shift in the public narrative to justify government agencies’ use of technology to criminalize communities of color and other populations cast as inferior in service of white, capitalist, patriarchal, and imperial conquest. By stoking fear of certain populations in primarily white, suburban, upper-class communities, the state has been able to gain approval for pro-surveillance interventions that impact everyone. Rhetorical arguments calling for “law and order” have underscored this justification. It is vitally important that QT2SBIPOC communities have the necessary resources to counter narratives of militaristic security. A narrative shift is required, a shift that centers communities’ definitions of safety and supports their fight against violent surveillance and criminalization practices deployed in the name of safety.

AI and decision-making algorithms are becoming a common feature of tech-driven policing and detention systems. AI technologies such as facial recognition software, predictive policing, and pre-trial risk assessment algorithms perpetuate criminalization through racial and gender bias and have become what Joy Buolamwini of the Algorithmic Justice League calls the “coded gaze.” These algorithms are biased because such automated systems are designed within an already discriminatory system shaped by the white gaze, as programmers and designers encode their judgments into technical systems. What many call algorithmic bias may be more appropriately described as algorithmic violence because of how it brutally targets Black and brown communities.


You think about how facial recognition software is biased in so many ways. It misgenders Black women in a way that’s very much connected to the masculinization of Black women in this country, for generations. Thinking about what that means for queer, non-binary, and trans people. Do the builders of AI value queerness? For me, queerness is antithetical to AI because it falls outside of any data sets that try to define how you are going to move in this world. The makers of these technologies are getting more and more support. What does this mean for LGBTQI People of Color and how these technologies are used against us? It worries me a lot.

—developer & digital security educator

One of the most dangerous forms of community-level algorithmic violence is “predictive policing,” a range of data-driven surveillance practices that turn entire neighborhoods into vectors of criminal probability. This technology involves the use of software to determine who is considered “criminal” and where crime is “likely” to happen. According to a study of 13 jurisdictions that currently use predictive policing systems, the data on which these systems are built is deeply flawed due to “systemic data manipulation, falsifying police reports, unlawful use of force, planted evidence, and unconstitutional searches.” However, the Stop LAPD Spying Coalition says the problem is not simply dirty data—it is the fact that predictive policing AI is fundamentally designed to police Black and brown bodies, communities, and land.

The coded gaze of predictive policing is an extension of historical systems of racist policing under white supremacy and settler colonialism. This system is not broken; it is working exactly as intended, now with modern technologies to facilitate and automate these processes. Over the last decade in Los Angeles, the LAPD hired PredPol, a predictive policing vendor, to create a statistical model for predicting crime in geographic zones with low-income communities of color, which it labeled “hot spots.” The LAPD combined this with a mapping program, Operation LASER, to create Chronic Offender Bulletins (COBs), which identify people for targeted surveillance. COBs are then analyzed by Palantir, a data-mining search platform that cross-references information from multiple databases and automated license plate readers (ALPRs). Palantir assigns a “score” to persons on the COB list according to gang affiliation, parole or probation status, arrests, and other so-called “quality” police contact. Both PredPol and Palantir operate in other cities where police departments have significant records of racist brutality and misconduct. Contrary to promoting community safety, the Stop LAPD Spying Coalition argues, “a feedback loop is created where an increasingly disproportionate amount of police resources are allocated to historically hyper-policed communities.”
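The feedback loop the Coalition describes can be made concrete with a toy simulation. The sketch below is illustrative Python with made-up parameters, not a reconstruction of PredPol’s actual model: two neighborhoods have identical underlying incident rates, but because one starts with a higher recorded-crime count, patrols keep concentrating there, and the resulting records keep “confirming” the allocation.

```python
import random

# Toy model: two neighborhoods with IDENTICAL true incident rates.
# The only difference is a biased starting record (B was over-policed).
TRUE_RATE = 0.1                  # chance one patrol observes an incident
recorded = {"A": 10, "B": 30}    # historical recorded-crime counts
PATROLS_PER_DAY = 20

random.seed(1)
for day in range(365):
    total = sum(recorded.values())
    for hood in recorded:
        # "Predictive" allocation: patrols proportional to past records.
        patrols = round(PATROLS_PER_DAY * recorded[hood] / total)
        # More patrols mean more incidents observed and recorded, even
        # though the true rate is the same in both neighborhoods.
        recorded[hood] += sum(random.random() < TRUE_RATE
                              for _ in range(patrols))

print(recorded)  # B's share keeps growing: the data "confirms" the bias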


I view the march of technology rather than policies being a threat. Pretrial risk assessment instruments are a particularly stark example [of] the threat of technology on marginalized communities. There's been a huge movement to end cash bail around the country...but is being substituted by these automated instruments that will gauge people’s risk, whether they’re a flight risk or a public safety risk. In California, SB10 recently passed, which would end cash bail and also bring in this new era of risk assessment. We believe this has the very real potential of hardening a lot of the racial disparities that we’re seeing. It won’t actually lead to decarceration. It might actually do the opposite.

—technologist, researcher, & policy analyst

The criminal legal system’s growing reliance on pre-trial risk assessment algorithms deepens carceral control and racist biases over people’s lives. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), an algorithmic system for predicting recidivism rates among pre-trial defendants, has disproportionately and erroneously predicted “high risk” for Black defendants and “low risk” for white defendants. Much like electronic monitoring (discussed below), pre-trial risk assessment algorithms are presented as an alternative to traditional detention policies like cash bail. Yet they threaten to further entrench people’s entanglement in the criminal legal system due to the racial biases and decades of racist criminal history data embedded within these tools. Given the continuous development of surveillance technologies and the expansive, often unmitigated carceral reach of the state, it is paramount to connect historical legacies of criminalization with contemporary inequalities regarding mass incarceration.
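ProPublica’s 2016 analysis of COMPAS made this disparity measurable by comparing error rates across groups: among defendants who did not go on to reoffend, Black defendants were labeled high-risk at roughly twice the rate of white defendants. The sketch below shows that calculation with hypothetical counts loosely patterned on those findings (the numbers are illustrative, not ProPublica’s data).

```python
# Hypothetical counts for defendants who did NOT reoffend, loosely
# patterned on ProPublica's COMPAS findings (illustrative only).
no_reoffend = {
    "Black defendants": {"labeled_high": 450, "labeled_low": 550},
    "white defendants": {"labeled_high": 230, "labeled_low": 770},
}

for group, c in no_reoffend.items():
    # False positive rate: non-reoffenders wrongly labeled high-risk.
    fpr = c["labeled_high"] / (c["labeled_high"] + c["labeled_low"])
    print(f"{group}: {fpr:.0%} wrongly labeled high-risk")

# A tool can look "equally accurate" overall while distributing its
# errors unequally; that unequal error burden is the racist harm.
```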

Surveillance technology is extending the reach of the carceral state beyond physical prisons by normalizing what MediaJustice (MJ) calls “e-carceration”—the use of wearable electronic monitors, such as ankle monitors, that restrict the freedom of movement and agency of individuals on parole and probation while conditioning the public to accept government tracking of people accused or convicted of crimes. The sharp rise in the number of people on parole and probation has accompanied mass incarceration; approximately 4.5 million people in the US are under some form of correctional supervision outside of formal jails and prisons. More and more systems, including immigration and juvenile detention, are using electronic monitoring devices.

GPS tracking data from electronic monitors can be incorporated into other agencies’ databases, casting an even wider net of surveillance over a person’s every move. Businesses are complicit in the explosive development of location surveillance phone apps that may ultimately replace ankle monitors. For example, over the last year, BI, the world’s largest electronic monitoring company, has doubled the number of people under ICE supervision with SmartLINK, a smartphone-based electronic monitoring app. Governmental agencies and the industries producing electronic monitoring devices have responded to critics of ankle monitoring by going mobile.


This technology is being put out there as the tool of the future to restructure the criminal legal system. We’re going to have a lot more people under a whole range of technological surveillance and carceral technological control. It’s not so widespread in usage right now that it couldn’t be stopped, but if we don’t do something about it, we risk setting up another system of incarceration.

—researcher & educator

While reform advocates tout electronic monitors as an alternative to incarceration, carceral technologies actually widen state surveillance and punitive control over formerly incarcerated people’s lives. They effectively create “digital prisons” that further mass criminalization. The majority of people under e-carceration are confined to their homes and are not allowed to leave without a court order or permission from a parole officer. This type of virtual, often solitary confinement greatly affects the mental and physical health of QT2SBIPOC people. A Black trans woman living in Chicago shared her story of e-carceration with MediaJustice, reporting that while she was under electronic supervision, she was denied permission to leave her house to buy food and fill prescriptions for the HIV medications she requires daily. In emergency situations, people are forced to choose between risking a return to prison for unauthorized movement or, for example, taking a sick child to the hospital.

E-carceration also creates barriers to employment because the restriction on movement prevents people from going to job interviews or working in environments, like concrete buildings, that may interfere with the monitor’s signal. Electronic monitoring further harms communities already vulnerable to mass incarceration, leaving those on parole and probation financially and materially dependent on their families and social networks.


There’s a surveillance ecosystem that is emerging that we need to be very, very mindful of in terms of the ways in which some of these companies are capitalizing off of our movements to end bail. As a trade off, people are agreeing to much higher surveillance with digital cages that confine people to a particular neighborhood, that confine them in terms of what times of day they can be out, etc. That would become more of the norm rather than the exception. When you think about people who go in for a ticket or who didn’t pay a fine...where the previous answer was three days in jail, it is now perhaps a month with an ankle monitor and consistent surveillance. This is a huge trade off...it’s the new redlining, frankly.

—technologist & digital educator

Electronic monitors spread carceral logics into other spaces of society, including the surveillance of workers in factories and of public areas. This expansion of surveillance extends mass criminalization’s reach to low-wage workers. Technology companies that develop surveillance technology already deploy these products on their own workforces. Major companies—like Amazon and Fitbit—have begun using surveillance technology that tracks the motions of warehouse workers, who are disproportionately Black, migrant, and People of Color in low-wage positions. In outdoor spaces, monitoring devices facilitate “e-gentrification” and create de facto segregation by restricting those who are forced to wear them from entering certain areas. As a researcher and educator interviewed noted, these exclusion zones apply to people who face criminalization in particular ways, such as those on the sex offender registry, who are banned from going within a certain perimeter of public parks. That electronic monitors come programmed with exclusion zones sets a dangerous precedent for other devices that people use daily. This is becoming more apparent with fears of stingray cell-site simulators and COVID-19 contact tracing being used to track down organizers at the 2020 mobilizations in support of Black Lives.
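The exclusion-zone mechanics described above are technically trivial, which is part of why they generalize so easily from ankle monitors to any GPS-enabled device. A minimal sketch of the core check (illustrative Python; the coordinates, radius, and function names are hypothetical, not drawn from any vendor’s product):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

# Hypothetical exclusion zones: (center_lat, center_lon, radius_m)
EXCLUSION_ZONES = [
    (34.0522, -118.2437, 500),   # e.g., a public park perimeter
]

def check_position(lat, lon):
    """Flag a violation whenever the wearer enters any exclusion zone."""
    for zlat, zlon, radius in EXCLUSION_ZONES:
        if haversine_m(lat, lon, zlat, zlon) <= radius:
            return "VIOLATION: alert supervising agency"
    return "ok"

print(check_position(34.0524, -118.2440))  # inside the zone -> violation
```

Pointed at a phone’s location API instead of an ankle monitor, these same few lines become the “digital cage” organizers warn about.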


Since most people have some kind of GPS device anyway, it doesn’t seem like a huge leap to put some kind of controlling technology in those devices where there are exclusion and inclusion zones. You can only go where the technology permits you to go during certain hours of the day. That’s one of the fears I have about how this technology can control people’s movements in a much more systematic way than what we see at the moment.

—campaign organizer


Image credit: Stop LAPD Spying Coalition

Understanding the hidden ways in which data flows from our personal and private devices to state-based agencies helps us understand the extent to which our communities are being policed and surveilled. The Stop LAPD Spying Coalition defines the “stalker state” as a network of overlapping data-sharing systems between social media corporations, private security firms, public service institutions, military departments, federal agencies, and local law enforcement. It enables data-sharing among state and local police, intelligence agencies, and private companies while framing profit-driven tech development as positive progress. A prime example of the stalker state’s reach and impact on public narratives about safety is the political debate surrounding the militarization of the US border: so-called progressive politicians espouse a “smart border”—a surveillance network of AI, drones, cameras, and infrared sensors—as a more humane alternative to the Trump administration’s draconian border wall project.

Tech companies are complicit in developing facial recognition technologies. For example, companies like Thorn and Marinus Analytics have programs that scrape data from escort ads, without consent, to use in the design of new facial recognition technologies targeting sex workers. Under the pretext of “anti-human trafficking” initiatives, companies are enlisting lower-wage workers to surveil sex workers and share data with law enforcement. As one interviewee shared, “this suite of AI tools as well as anti-human trafficking trainings at Uber and Marriott on how to ‘spot’ human trafficking survivors encourage some of the lowest-paid employees of these companies to snitch on sex workers, creating a dynamic where workers are snitching on workers. Sometimes these reports at Marriott and Uber lead to calling of ICE. Once you’re in this system, the state can surveil you using these technologies, harming workers all around.”

US government agencies and local law enforcement are capturing and weaponizing personal and community-level data to increase surveillance and repression of movements. As more data is captured and shared, it is becoming a potent weapon the state wields to disrupt movements. This is happening within the larger international context of US military wars and inter-governmental and inter-agency data sharing in militarized zones such as the US-Mexico border. These practices are then applied by police at the local level in cities across the US. During protests and direct actions, police use surveillance devices like “stingrays” to disrupt activist communications, steal people’s data from their cell phones, and track their physical locations. At least 75 police agencies in 27 states use these devices to capture personally identifying information, which is then shared with other agencies to profile and monitor organizers. These technologies have especially targeted the Black Lives Matter (BLM) movement. During the Eric Garner protests in New York, the NYPD infiltrated BLM and gained access to organizers’ text messages. In 2017, an FBI intelligence assessment introduced a new security threat designation, the so-called “Black Identity Extremist” (BIE). In 2020, police used facial recognition technology and social media surveillance to identify and arrest activists at protests in support of Black Lives; in New York City and Miami, for example, police used Clearview AI to arrest activists after they participated in the summer protests.


We really have to understand that some of these technologies and data practices are basically created for war zones or for imperialist intervention models. They are brought to a militarized border and start to seep through into the rest of the US and police overall.

— campaign organizer

The battle for privacy-first policies and technologies includes protecting access to tools like end-to-end encryption and protection from digital attacks. Encryption technology is an important safeguard against state surveillance, and it is under threat. Recent US legislative efforts (i.e., the EARN IT Act, the LAED Act, and the PACT Act), coupled with the defunding of open-source encrypted technologies, show the US government’s transparent desire to access communities’ private communications and do away with privacy tools like end-to-end encryption (E2E). If signed into law, these bills would have a global impact on civilian access to encrypted technologies. The loss of this protection would mean that the state could monitor all communications from any device, anywhere, imperiling the digital security not only of social movements but of everyone. The end of E2E would create a backdoor for law enforcement and also make it easier for civilians with malicious intent (such as abusers and people spreading revenge porn) to access previously secure communications. The government is also moving toward targeted malware attacks, known as the Network Investigative Technique (NIT), and dragnet malware to capture data from large groups via a single warrant. Additionally, the government is funding tech companies to design malware products that destroy built-in security mechanisms, which millions of people—including organizers—depend upon. QT2SBIPOC organizers report that the state is using these methods to break into their cell phones and personal devices to try to gain access to their data.
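What is technically at stake in these fights: with end-to-end encryption, only the communicating endpoints hold the decryption keys, so neither the platform relaying a message nor the state can read it in transit, and any mandated “lawful access” mechanism necessarily breaks that property for everyone. A minimal sketch of the E2E pattern using the PyNaCl library (an illustration of public-key authenticated encryption, not any particular messenger’s protocol):

```python
# pip install pynacl  (Python bindings to libsodium)
from nacl.public import PrivateKey, Box

# Each party generates a keypair; private keys never leave their device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob's PUBLIC key; only Bob's private key can decrypt.
alice_box = Box(alice_sk, bob_sk.public_key)
ciphertext = alice_box.encrypt(b"meet at the community center at 7")

# A server relaying `ciphertext` sees only random-looking bytes.
bob_box = Box(bob_sk, alice_sk.public_key)
print(bob_box.decrypt(ciphertext))  # b'meet at the community center at 7'
```

A backdoor would have to weaken exactly this step, for example by escrowing private keys or silently adding a third recipient, which is why technologists argue there is no backdoor that only law enforcement can use.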

Social media intelligence (SOCMINT) companies have found a lucrative niche in contracting with federal and state agencies to spy on organizers. The private firm ZeroFOX monitored the social media accounts and geolocation of BLM organizers during the Freddie Gray protests in Baltimore, referring to organizers as “threat actors” in intelligence reports to law enforcement. Similarly, Geofeedia obtained user data from Facebook, Instagram, and Twitter to monitor activists during the Michael Brown protests in Ferguson. Movement organizations often rely on social media to communicate with members and promote their campaigns and events, which makes them vulnerable to surveillance, infiltration, and data poaching. They acknowledge that a major contradiction they face in using mainstream platforms is what some call the “network effect”: as organizers reach more and more people on these platforms, they also become more dependent on corporate infrastructure for their organizing work. This, in turn, reinforces the status of mainstream platforms as the “default” venue for all communications. Organizers interviewed for this research even report that police and federal agents have used fake social media accounts to pose as activists and try to gain access to personal and organizational information. The ways that private companies collaborate with state actors are often opaque, but the consequences of these collaborations are very real.

Customs and Border Protection’s (CBP) deployment of surveillance technologies—which it uses to monitor and detain migrants—impacts migrants and Native communities whose land is divided by the border. The US-Mexico border crisis spotlights the interlinked struggles between Native land sovereignty and migrant justice. Border surveillance perpetuates colonial dynamics, offering modern means to maintain historical oppression. The Tohono O’odham Nation, located in southern Arizona/northern Sonora, has been under “wide-area persistent surveillance” since 2006, when CBP began building surveillance towers, flying drones, and using cameras and motion sensors within Tohono O’odham territories. Tribal members say this impedes their relationship with their land and sacred sites. Meanwhile, these technologies further harm the lives of people migrating across the border and erode civil rights protections within the 100-mile inland border zone, where the majority of the US population lives.


It’s been really, really dangerous. Because ICE agents now have more resources from the federal government for surveillance, they have more time on their hands to be able to do more things. You have agents doing the in-person surveillance, so following people, stalking people’s homes, threatening people’s family members...but then you also have the digital surveillance where they’re able to map out family trees and find addresses to conduct their raids based on information from private data brokers. We’re really seeing it on all levels.

—campaign organizer

Third-party agreements between government agencies and data-brokering companies facilitate the interlinking of digital and physical threats. Border agents now commonly ask people for their social media accounts upon entering the US, and there are reports of sex workers being denied entry at border crossings because of social media and escort ads. Migrant justice advocates warn that immigration authorities monitor queer and trans asylum seekers’ posts on platforms like Facebook and use their social media content to argue against their asylum cases. The linkage of digital and physical surveillance through third-party data sharing has enabled ICE to conduct targeted raids based on people’s addresses, social media posts, and location data. In response to concerns about ICE raids in Puerto Rico, some organizers reported having to close their social media accounts, avoid posting about their activism, or cover their faces during protests to avoid being deported.

The increasing collaboration between the US government and private tech companies is fueling a system where corporations are profiting from surveillance products. This is a clear example of how “surveillance capitalism” works in the service of criminalization. The Department of Homeland Security (DHS), CBP, and ICE are spending billions of taxpayer dollars annually on contracts with tech companies to target immigrants of color. Nowhere is this more apparent than with the escalating arrests, detentions, and deportations that comprise the US administration’s ongoing war against migrant communities. This war, which also includes the geographical and biometric surveillance of immigrants and their communities, represents what has been called crimmigration: the intersections of criminalization and immigration. As with other forms of criminalization, the use of technology to consolidate power and capital is well documented within the policing and militarization of the US borders and immigration system.

The growing technical interconnectedness between federal agencies and local police departments is eroding protections granted by “sanctuary cities,” which have historically limited the use of federal immigration detainers for people of undocumented status in custody. Due to inter-agency data sharing and police use of social media, ICE agents are surveilling and arresting migrants in churches, courtrooms, hospitals, and other public spaces with greater frequency. This restricts the movement of people of undocumented status and prevents them from accessing much-needed health, legal, and social services for fear of being detained and deported.

Surveillance Economies and the Criminalization of Migration

  • Palantir, for example, produces software for ICE agents to profile and arrest families of undocumented status, playing a direct role in taking migrant children away from their parents and caging them in privately run detention facilities.
  • Tech companies are ramping up surveillance along the border through an initiative known as the “smart border,” for which Congress approved a $100 million budget in 2019. For example, Elbit Systems of America, a subsidiary of Israel’s largest military company, signed a $26 million contract with CBP to build a network of massive surveillance towers with night vision cameras, thermal sensors, and ground-sweeping radar. There are currently more than 400 such towers along the US-Mexico border. Anduril Industries is working with US border agents to test a new surveillance system called Lattice, which combines AI, cameras, drones, and LIDAR, and operates miles beyond the border.
  • By networking databases and search tools across CBP, ICE, and US Citizenship and Immigration Services (USCIS), the Continuous Immigration Vetting (CIV) program, which collates information from immigration benefit applications throughout the entire application period, casts a wider net over those without citizenship status (see the sketch after this list for how such cross-agency record linkage works).
  • Amazon Web Services (AWS) provides cloud hosting to the federal agencies and local police departments who share information with DHS. DHS stores data from its Automated Biometric Identification System (IDENT), a repository of 230 million unique identities based on fingerprint, iris, and facial records, on the AWS cloud.
  • Cloud hosting also supports ICE’s Integrated Case Management (ICM) system created by Palantir. Not only does ICM collect, store, and analyze massive volumes of personally identifiable information, it also creates the ability to share data across systems at all levels of government.
  • Over 9,000 ICE agents have access to an automated license plate reader (ALPR) database run by Vigilant Solutions, a company with which ICE has a $6.1 million contract. The ALPR database allows ICE to follow migrants across 5 billion points of location data.
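As referenced above, the mechanics behind this wider net are mundane: once agencies share a common identifier, joining their records is a basic database operation. A schematic sketch (hypothetical, simplified records and field names, not any agency’s actual schema):

```python
# Hypothetical, simplified records from two separately collected systems.
alpr_hits = [  # automated license plate reader sightings
    {"plate": "7ABC123", "lat": 33.77, "lon": -118.19,
     "time": "2020-06-02T21:14"},
]
dmv_records = [
    {"plate": "7ABC123", "name": "J. Doe", "home_address": "123 Main St"},
]

# "Networking databases" amounts to joining on the shared key.
by_plate = {r["plate"]: r for r in dmv_records}
for hit in alpr_hits:
    person = by_plate.get(hit["plate"])
    if person:
        # A camera sighting becomes a named person at a known address,
        # available to any agency with access to both systems.
        print(person["name"], person["home_address"], hit["time"])
```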

Social media companies are letting police, federal agencies, and third-party tech companies surveil QT2SBIPOC communities and organizers via their platforms. Access to mainstream social media platforms is often severely limited for Black, trans, and sex worker communities because of the ways in which these identities are policed and restricted by algorithms that are inherently racist, transphobic, and whorephobic. For example, as a result of content moderation practices, sex workers report difficulty finding each other on social media; Black femmes and people who are coded as sex workers are banned from Instagram at higher rates; and real-name policies prevent sex workers and trans folks from using platforms at all. Community members are forced to decide whether they will risk use of and reliance on specific platforms, which includes facing the risk of removal by platform moderators who may delete accounts without reason. These removals not only disrupt economic opportunities and community connection, they also make community building, organizing, and mutual aid more difficult.


We see corporations like Facebook acting as arms of surveillance and providing all kinds of data or opportunities for law enforcement and corporations to capture data and use it in punitive ways. The way in which surveillance impacts people whom I call the criminalized population...Black, brown, LGBTQIA, native, people of color broadly...for those people, it’s not about somebody snooping in your email or eavesdropping on your phone calls. It’s really about blocking you from employment opportunities, blocking you from education, blocking you from housing, blocking you from travel. It’s a whole range of ways in which it directly impacts your life when all this data is weaponized to be used against you.

—researcher & educator

Platform moderation, or the policing of a platform’s content, is a critical site where the criminalization of sex work intersects with threats to internet autonomy. In 2018, two congressional bills were signed into law: the Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA). Together known as FOSTA-SESTA, they make platforms liable for sex work-related content and further criminalize sex workers. FOSTA-SESTA was the first substantive amendment to Section 230 of the 1996 Communications Decency Act, which protected internet platforms from liability for the content users produce and post. Many technology experts argue that Section 230 allowed for the growth of the free and open web that we use today. But FOSTA-SESTA overrides Section 230’s “safe harbor” clause by increasing platform liability, in effect imposing broad internet censorship and chilling online speech.


The passage of SESTA and FOSTA has shut down spaces for people to do work digitally. It’s impacted people very negatively, here in DC specifically. The people who’ve been most impacted are trans women of color, Black trans women, because now more and more people have to do street work, which is more dangerous in a lot of ways, including because police are out here. It’s easier for people to be arrested and go into the criminal legal system.

— developer & digital security educator

FOSTA-SESTA further polices sex work online and exacerbates existing platform policies and practices that censor online sex work and suppress digital organizing efforts, such as shadowbanning, content moderation, and deplatforming. This means that sex workers do not have the same access to the tools non-sex working folks use to build business and to organize. Like many entrepreneurial businesses, many sex workers rely on an online presence, marketing, and creating their own online content to conduct business, and FOSTA-SESTA threatens to eliminate this capacity. FOSTA-SESTA also harms the freedom of movement and economic opportunity of migrant sex workers. With the Department of Homeland Security and the FBI raiding adult services’ ad platforms and seizing servers containing user IDs and personal data, migrant sex workers, especially those who are of undocumented status, are unable to find work and live in fear that the government will use this data to track, arrest, and deport them.


I was very shadow banned, so I wasn’t showing up in searches. We did a lot of organizing around this hashtag, #LetUsSurvive. Looking at the statistics for this hashtag, though the numbers are there, it does not show up as trending.

— sex worker rights activist

FOSTA-SESTA limits the resources that movement organizers need to combat harmful legislation, as many organizers fund their unpaid labor with money earned from sex work. Hacking//Hustling’s new report on content moderation in sex worker and activist communities in the wake of the 2020 mobilizations in support of Black Lives found that individuals who engaged in both sex work and activism experienced significantly more negative effects of platform policing than individuals who did either sex work or activism alone. This suggests a compounding effect whereby platforms more harshly police, censor, and deplatform activists who support their organizing work through sex work.

