
Chapter Three: “What Gets Counted Counts”

Feminists have spent a lot of time thinking about categories, since “male” and “female” are binary categories, and limited categories too. How we count matters as much as what we count. But we don't always count-- or account for-- what is most important to the questions at hand.

Published on Nov 05, 2018

This chapter is a draft. The final version of Data Feminism will be published by the MIT Press in 2019. Please email Catherine and/or Lauren for permission to cite this manuscript draft.

“Sign in or create an account to continue.” These may be the most unwelcome words on the internet. For most who encounter them, they elicit a groan-- and the dread of yet another password that will soon be forgotten. But for people like Maria Munir, the British college student who famously came out as non-binary to then-President Barack Obama on live TV, the prospect of creating a new user account is more than an annoyance. “I wince as I'm forced to choose female over male every single time, because that's what my passport says, and... being non-binary is still not legally recognised in the UK,” Munir explains.

For the estimated 9 to 12 million non-binary people in the world-- that is, people who identify as neither male nor female-- the seemingly simple request to “select gender” can be difficult to answer, if it can be answered at all. Yet when creating an online user account, not to mention applying for a passport, “male” and “female” are almost always the only two options on offer. These options (or the lack thereof) have consequences, as Munir clearly states: “If you refuse to register non-binary people like me with birth certificates, and exclude us in everything from creating bank accounts to signing up for mailing lists, you do not have the right to turn around and say that there are not enough of us to warrant change.”

“What gets counted counts,” as feminist geographer Joni Seager has asserted, and Munir is one person who understands that. Without the right categories, the right data can’t be collected. And increasingly, without the right data, there can be no social change. We live in a world in which “data-driven” decisions are prioritized over anecdotal ones, and “evidence”--Fox News notwithstanding--is taken to mean “backed up by numbers and facts.” Now, any self-respecting feminist would be the first to tell you that personal accounts should matter as much as any meta-study, and “evidence” can take a range of qualitative and quantitative forms. To disagree with those statements would undo the work of the many feminist activists and scholars of the 1980s and early 1990s who struggled to get qualitative methods, such as interviews and participant observations, accepted as legitimate evidence in the first place. But there is, undeniably, what feminist demographers Christina Hughes and Rachel Lara Cohen call a “pragmatic politics” of using quantitative methods for feminist aims. If the goal is to work towards justice, then by all means use whatever form of evidence is most convincing. It would be an injustice not to!

That being said, there is a second argument in favor of quantitative methods that has less to do with pragmatism, and more to do with the nature of the problem at hand. So many issues of structural inequality are problems of scale, and can seem anecdotal until they are seen as a whole. For instance, when Natalie Wreyford and Shelley Cobb set out to count the women involved in the film industry in the UK, they encountered a female screenwriter who had never considered the fact that, in the UK, male screenwriters outnumber female ones four to one. “Isn’t that a funny old thing?” she said. “I didn’t even know that because screenwriters never get to meet each other.”

But it’s far less funny when the subject is a matter of life or death, as in ProPublica’s reporting on the racial divide in maternal mortality in the United States, which we discuss in Bring Back the Bodies. When ProPublica’s reporters interviewed the families of Black women who had died while giving birth, they found that few of those families were aware that the phenomenon extended beyond their own experience. But the racial disparity in maternal health outcomes is indeed a structural problem, and it’s why feminist sociologists like Ann Oakley have long advocated for the use of quantitative methods alongside qualitative ones. Without big data, Oakley explains--although she used the term “quantitative research,” since she was writing in 1999--“it is difficult to distinguish between personal experience and collective oppression.”

But before issues like the racial divide in maternal mortality, or the structural racism that underlies it, can be identified through large-scale analyses like the one that ProPublica conducted, the data must exist in the first place. Which brings us back to Maria Munir and the importance of collecting data that reflects the population it claims to represent. On this issue, Facebook of all companies was ahead of the curve when, in 2014, it expanded its gender options from the standard two to over fifty choices, ranging from “genderqueer” to “neither”--a move that was widely praised by groups that advocate for gender non-conforming people. One year later, when the company abandoned its select-from-options model altogether, replacing the “Gender” dropdown menu with a blank text field, the decision was touted as even more progressive. Because Facebook users could input any word or phrase in order to indicate their gender, they were at last unconstrained by the assumptions imposed by any preset choice.

Facebook’s initial attempt to allow users to indicate additional genders, ca. 2014.

Credit: Slate

Source: http://www.slate.com/blogs/future_tense/2014/02/13/facebook_custom_gender_options_here_are_all_56_custom_options.html

Facebook’s updated gender field, ca. 2018.

Credit: Facebook. Screenshot by Lauren F. Klein

Source: http://www.facebook.com

But research by Rena Bivens, a scholar of social media, has revealed that, below the surface, Facebook continues to resolve users’ genders into either male or female. Evidently, this decision was made so that Facebook could allow its primary clients-- advertisers-- to more easily market to one gender or the other. Put another way, even if you can choose the gender that you show to your Facebook friends, you can’t change the gender that Facebook’s advertisers ultimately see. And this discrepancy leads right back to the body issues we discussed in Chapter One: it’s corporations like Facebook, and not individuals like Maria Munir, that control the terms of data collection--even though it’s people like Munir, who have personally (and often painfully) run up against the limits of our current classification systems, who are best positioned to improve them.

Detail of the Facebook new account creation page, ca. 2018.

Credit: Facebook. Screenshot by Lauren Klein.

Source: http://www.facebook.com/

Feminists have also spent a lot of time thinking about classification systems, as it turns out, since the set of criteria by which people are divided into the categories of “male” and “female” is exactly that: a classification system. And while the gender binary is one of the most universal classification systems in the world today, it is no less constructed than the Facebook advertising platform or, say, the Golden Gate Bridge. The Golden Gate Bridge is a physical structure; Facebook Ads is a virtual structure; and the gender binary is a conceptual one. But each of these structures was created by people: people living in a particular place, at a particular time, who were influenced--as we all are--by the world around them.

So this starts to get at the meaning behind the phrase, “gender is a social construct.” Our current ideas about the gender binary can be traced to a place (Europe) and a time (the Enlightenment) when new theories about democracy and what philosophers called “natural rights” began to emerge. Before then, there was definitely a gender hierarchy, with men on the top and women on the bottom. (Thanks, Aristotle!) But there wasn’t a binary distinction between those two genders. In fact, according to the historian of gender Thomas Laqueur, most people believed that women were just inferior men, with penises located inside instead of outside of their bodies-- penises that-- for reals!-- could descend at any time in life.

For the gender binary to emerge, it would take figures like Thomas Jefferson declaring that all men were created equal, and entire countries (like the U.S.) founded on that principle, before those same figures began to worry about what, exactly, they had declared-- and, even more worrisome, to whom it actually applied. All sorts of systems for classifying people date to that era-- not only gender but also, crucially, race. Before the eighteenth century, Western societies understood “race” as a concept tied to religious affiliation, geographic origin, or some combination of both. Although it’s hard to believe, race had nothing to do with skin color until the rise of the transatlantic slave trade in the seventeenth century. Even then, race was still a hazy concept. It would take the so-called “scientific racism” of the mid-eighteenth century for race to begin to be defined in terms of black and white.

Ever heard of Carl Linnaeus? Think back to middle school, when you likely learned about the binomial classification system that he is credited with creating. Well, Linnaeus’s revolutionary system didn’t just include the category of homo sapiens; it also, lamentably--but as historians would tell you, unsurprisingly--included five subcategories of humans separated by race. (One of these five was set aside for mythological humans who didn’t exist in real life, in case you’re still ready to get behind his science). But Linnaeus’s classification system wasn’t even the worst of the lot. Over the course of the eighteenth century, increasingly racist systems of classification began to emerge. These were systems designed to exclude, and we can detect their effects every day, in instances as far-ranging as the maternal health outcomes we’ve already discussed and the divergent Google search results for “black girls” vs. “white girls” that information studies scholar Safiya Umoja Noble has documented.

A simple solution would be to say, “Fine, then. Let’s just not classify anything, or certainly anyone!” But the flaw in that plan is that data must be classified in some way in order to be put to use. Data, after all, is information made tractable, to borrow a term from computer science (and from another essay that Lauren wrote with a colleague in information studies, Miriam Posner). “What distinguishes data from other forms of information is that it can be processed by a computer, or by computer-like operations,” as Lauren and Miriam write there. And in order to enable those operations, which range from counting to sorting, and from modeling to visualizing, the data must be placed into some kind of category--if not always into a conceptual category like “gender,” then at the least into a computational category like “integer” (a type of number) or “string” (a sequence of letters or words).
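To make the point concrete, consider a minimal sketch in Python. The column names and values below are hypothetical, not drawn from any real dataset; what the sketch shows is that the data can’t even be counted until each field is assigned a computational category, and that the moment the conceptual category of “gender” is encoded as a fixed set of values, anything outside that set disappears:

```python
import pandas as pd

# A hypothetical survey extract. Every value already occupies a
# computational category: "age" is an integer, "gender" a string.
responses = pd.DataFrame({
    "age": [34, 28, 45],
    "gender": ["female", "non-binary", "male"],
})

# To count, sort, or model the data, we must commit to categories.
# Casting "gender" to a categorical type hard-codes the set of
# allowable values; anything outside it becomes NaN ("missing").
responses["gender"] = responses["gender"].astype(
    pd.CategoricalDtype(categories=["male", "female"])
)

print(responses["gender"].value_counts(dropna=False))
# The non-binary respondent has silently become "missing" data.
```

The computation works perfectly; it’s the classification decision, made one line earlier, that does the erasing.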

It’s been argued that classification systems are essential to any working infrastructure-- and not only to computational infrastructures or even conceptual ones, but also to physical infrastructures like the checkout line at the grocery store. Think about how angry you get when you’re stuck in the express line behind someone with more than fifteen items. Or, if that’s not something that gets you going, just think of the system you use to sort your clothes for the wash. It’s not that we should reject these classification systems out of hand, or even that we could if we wanted to. (We’re pretty sure that no one wants all of their socks to turn pink). It’s just that we rarely question how classification systems are constructed, or ask why they might have been thought up in the first place. In fact-- and this is a point also made by the influential information theorists Geoffrey Bowker and Susan Leigh Star-- we tend not to even think to ask these questions until our systems break.

Classification systems can break for any number of reasons. They can break when an object-- or, more profoundly, a person-- can’t be placed in the appropriate category. They can break when that object or person doesn’t want to be placed in an appropriate category. And they can break when that object or person shouldn’t even be placed in a category to begin with. In each of these cases, it’s important to ask whether it’s the categories that are broken, or whether-- and this is a key feminist move-- it’s the system of classification itself. Whether it’s the gender binary, or the patriarchy, or-- to get a little heady-- the distinction between nature and culture, or reason and emotion, or public and private, or body and world, decades of feminist thinking would tell us to question why these distinctions might have come about; what social, cultural, or political values they reflect; and, crucially, whether they should exist in the first place.   

But let’s spend some time with an actual person who has done this kind of thinking: one Michael Hicks, an eight-year-old Cub Scout from New Jersey. Why has this kid started to question the broken systems of classification that surround him? Well, Mikey, as he’s more commonly known, shares his first and last name with someone who has been placed on a terrorist watch list by the U.S. federal government. As a result, Mikey is subjected to the highest level of airport security screening each time that he travels. “A terrorist can blow his underwear up and they don’t catch him. But my 8-year-old can’t walk through security without being frisked,” his mother lamented to Lizette Alvarez, a reporter for The New York Times, who covered the issue in 2010.

Of course in some ways, Mikey is lucky. He is white, so he does not run the risk of racial profiling—unlike, for example, the many Black women who receive TSA pat-downs due to the perceived threat of their natural hair. Moreover, Mikey’s name is not Muslim-sounding, so he does not need to worry about religious or ethnic profiling either--unlike, for another example, people named Muhammad who are pulled over by the police due to the perceived threat of a Muslim first name. But Mikey the Cub Scout still helps to expose the brokenness of the categories that structure the TSA’s terrorist classification system; the combination of first and last name is simply insufficient to classify someone as a terrorist or not.

Or, consider another person with a history of bad experiences at the (literal) hands of the TSA: Sasha Costanza-Chock. Costanza-Chock is non-binary, like Maria Munir. They are also a design professor at MIT, so they have a lot of experience not only living with, but also thinking through broken classification systems. In a recent essay, they describe how the seemingly simple system employed by the operators of those hand-in-the-air millimeter-wave-scanning machines is in fact quite complex-- and also fundamentally flawed.

No one but a gender non-conforming person would know that, before you step into a scanning machine, the TSA agent operating the machine looks you up and down, decides whether you are male or female, and then pushes a button to select the appropriate gender on the scanner’s touch-screen interface. That decision loads the algorithmic profile for either male bodies or female ones, against which your measurements are compared. If your body’s measurements diverge from the statistical norm of that gender’s body-- whether the discrepancy is because you’re concealing a deadly weapon, or because the TSA agent just made the wrong choice-- you trigger a “risk alert,” and are subjected to the same full-body pat-down as a potential terrorist. So here it’s not that the scanning machines rely upon an insufficient number of categories, as in the case of Mikey the Cub Scout; or even that they employ the wrong ones, as Mikey’s mom would likely say. It’s that the TSA scanners shouldn’t rely on the category of gender to classify air travelers to begin with.
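A deliberately simplified sketch of that decision logic makes the flaw easy to see. This is our reconstruction of the process Costanza-Chock describes, not the scanner’s actual code, and the profiles, measurements, and tolerance below are invented for illustration:

```python
# Invented "statistical norm" profiles, one per button on the
# scanner's touch-screen interface.
GENDER_PROFILES = {
    "male":   {"chest_cm": 100, "hip_cm": 97},
    "female": {"chest_cm": 92,  "hip_cm": 101},
}

def risk_alert(measurements, agent_button, tolerance_cm=6.0):
    """Flag any body that diverges from the norm for the gender the
    agent selected--whatever the reason for the divergence."""
    profile = GENDER_PROFILES[agent_button]
    return any(
        abs(measurements[part] - profile[part]) > tolerance_cm
        for part in profile
    )

# The same body, two different button presses, two different outcomes.
body = {"chest_cm": 95, "hip_cm": 104}
print(risk_alert(body, "male"))    # True: flagged for a pat-down
print(risk_alert(body, "female"))  # False: waved through
```

The body never changes; only the agent’s snap classification does. That single button press is doing all of the algorithmic work.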

So when we say that what gets counted counts, it’s folks like Costanza-Chock, or Mikey, or Maria Munir, that we’re thinking about. Because broken classification systems like the one that underlies the airport scanner’s risk detection algorithm, or the one that determines which names end up on terrorist watch lists, or simply (simply!) the gender binary, are often the result of larger systems that are themselves broken, but that most people don’t often have the opportunity to see. These invisible systems are what philosopher Michel Foucault would call systems of power. Systems of power don’t simply determine the categories into which individual objects or people are sorted; they over-determine how those groups of objects or people experience the world.

What does it mean for a system to over-determine how people experience the world? Many feminists would point to the example of the patriarchy--a word that describes the combination of legal frameworks, social structures, and cultural values that contribute to the continued male domination of society. But for a more concrete example, we could return to Facebook. It’s not only that anyone who types in a gender that is not “male” or “female” is reduced, in the eyes of advertisers, to the single category of “unknown.” It’s also that, at the level of code, these three categories--male, female, and unknown--are further reduced to numerical values: 1, 2, and 3, respectively. So when an app developer requests a list of users sorted by gender for any reason-- whether it’s to sell them useless diet pills, 50% off retail (which no one ever wants); or to offer them a free financial consultation, first come, first served (which many people do)-- they receive a list in which male Facebook users are hard-coded to be always first in line.
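To see how an innocuous-looking encoding becomes a ranking, here is a sketch of that request. The user records are hypothetical; the 1/2/3 scheme is the one described above:

```python
# Hypothetical user records, with gender stored the way the chapter
# describes: male = 1, female = 2, "unknown" = 3.
users = [
    {"name": "Dana",  "gender_code": 3},
    {"name": "Maria", "gender_code": 2},
    {"name": "James", "gender_code": 1},
]

# "Sort the users by gender" sounds neutral, but the numeric
# encoding quietly answers it with men first, women second,
# and everyone else last.
for user in sorted(users, key=lambda u: u["gender_code"]):
    print(user["name"], user["gender_code"])
# James 1
# Maria 2
# Dana 3
```

No engineer ever typed “men first”; the ordering simply falls out of a storage decision.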

Now, the software engineers who wrote the word-to-number code were almost certainly not intending to discriminate. They were probably only thinking, “How can we make our gender data easier to sort and manage?” And when it comes to computational data, it’s almost always easier and more efficient to deal with numbers than it is to deal with words. But it’s also not a surprise that, in a group of engineers that is reportedly 87% male, no one thought to point out (or, perhaps, no one felt comfortable saying out loud) that a data classification system in which men are always ranked first might lead to problems for those ranked second or third-- not to mention those excluded from the list altogether. In fact, if you were to ask a feminist theorist like Judith Butler to weigh in, she’d tell you that the inadvertent and invisible way in which systems of power reproduce themselves is exactly how the gender binary consolidates its force.

It’s not only Facebook that’s to blame. Gender data is almost always collected in the binary categories of male and female, and visually represented by some form of binary as well. This remains true even as a recent Stanford study found that, when given the choice among seven points on a gender spectrum, more than two-thirds of the subjects polled placed themselves somewhere in the middle. It’s also important to remember that there have always been more variations in gender identity than Anglo-Western societies have cared to outwardly acknowledge or collectively remember. These third, fourth, and nth genders go by different names in the different historical and cultural circumstances in which they originate, including female husbands, indigenous berdaches, Hijras, two-spirits, pansy performers, and sworn virgins, along with the category of transgender that we most commonly use today.

Now, as data analysts and visualization designers, we can’t always control the collection process for the data we use in our research. Like the Facebook engineers, we’re often working with data that we’ve obtained from someplace else. But even in those cases--and, arguably, especially in those cases--it’s important to ask how and why the categories of the dataset we’re using were constructed, and what systems of power they might represent. Because when it comes to classification systems, there’s power up and down, side to side, and everywhere in between. And it’s on us, as data feminists, to ensure that any differentials of power that are encoded in our datasets don’t continue to spread.

Whether we like it or not, we’re all already swayed by these systems of power, as well as by the heuristic techniques that reinforce them. Before you say, “Wait! No one taught me those techniques!” consider that “heuristic techniques” is just a fancy term for the use of mental shortcuts to make judgments--in other words, common sense. The tendency of people to adhere to common sense offers a great evolutionary advantage, in that it’s enabled humanity to survive over many millennia. (What tells you to steer clear of a bear? Common sense! What tells you not to eat rancid meat? Also common sense (and your gag reflex)). But as the renowned work of cognitive psychologists Daniel Kahneman and Amos Tversky has shown, this reliance on heuristics eventually leads to an accumulation of cognitive biases--what might otherwise be understood as a snowball of mistaken assumptions that, in a world more challenged by structural inequalities than by grizzly bears, leads to profoundly flawed decision-making, and equally profoundly flawed results.

The Cognitive Bias Codex groups known cognitive biases into four different categories.

Credit: Design by John Manoogian III based on grouping by Buster Benson.

Source: https://commons.wikimedia.org/wiki/File:The_Cognitive_Bias_Codex_-_180%2B_biases,_designed_by_John_Manoogian_III_(jm3).png

Buster Benson, a product manager at the crowd-funding platform Patreon, has made a hobby of classifying these cognitive biases, and, with John Manoogian, has visualized them in the chart you see above. If you look at the lower half of the image, you can see the two quadrants-- “Need to Act Fast” and “Not Enough Meaning”-- that include some of the key cognitive biases that come into play when collecting and classifying data.

Now imagine, for a moment, that you are designing a new survey for an analysis of gender and cell phone usage, but you have not yet finished reading this book. Gender is something you are pretty familiar with, you might say to yourself, since you have a gender, and everyone else you know has a gender too. But this is called the overconfidence effect, found on the lower left of the chart in lime green. Still, you go on: in your experience there are two genders, male and female, and everyone else you know would say so, too. (This is called the false-consensus effect, also on the lower left). Men and women should clearly be placed in separate categories, since they are different kinds of people. (This is called essentialism; file under “Not Enough Meaning”). Also, everyone knows that women like talking—stereotyping alert!—so in addition to gender data, how about collecting cell phone minutes data too. (You’ve just committed a fundamental attribution error, in blue on the right).  

Fast forward past the data collection phase to the analysis portion of the project. You note that you were right in your initial assessment of the situation: women did talk on their cell phones more than men. This forms the basis of your subsequent analysis. (This is called confirmation bias). In addition, in your zeal to confirm your essentialist beliefs, you entirely missed an important phenomenon: millennial-aged people of all genders have extremely large social networks. Your expectation bias prevented you from discovering some important insights that might have informed the design of a new product. You receive a negative performance review, and you are fired.

What interrupts this series of bad decisions? Recognizing that common sense is often sexist, racist, and harmful for entire groups of people--especially those groups, like women, who find themselves at the bottom end of a hierarchical classification system; or like non-binary folks, who are excluded from the system altogether.

As should now be clear, a feminist critique of classification systems is not limited to data about women, or to the category of gender alone. This point can’t be overstated, as it forms the basis for the theories of intersectional feminism that inspire this book. Feminist scholars Brittney Cooper and Margaret Rhee address this issue directly in their call to use feminist thinking to “hack” the binary logic that simultaneously underlies the racism experienced by Black people in the United States, and erases the other forms of racism experienced by Latinx, Asian American, and Indigenous groups. “Binary racial discourses elide our struggles for justice,” they state clearly, and we agree. By hacking the binary distinctions that erase the experiences of certain groups, as well as the systems of power that position those groups against each other, we can work towards a more just and equitable future.

Even though the stakes of this project are high, it’s possible for anyone, including you, our readers, to contribute. One of the best visualizations of the concept of intersectionality that we’ve found, for instance, comes from a series of posts on an anonymously authored WordPress blog. “Intersectionality, Illustrated” offers a series of visualizations that employ color gradients to represent the multiple axes of privilege (or the lack thereof) that a person might encounter in the world. At the center of each visualization is a solid circle, which represents that person’s goals and dreams for their life. Colorful lenses spiral out from the center, each representing an aspect of that person’s identity: ethnicity, age, sexual orientation, and so on. In each visualization, opacity is employed to show whether a particular identity trait contributes to an enhanced capacity to achieve one’s personal goals, or a diminished one. A directional gradient underscores how that trait alternately supports the person’s goals, or distances the person from them. In this way, the viewer begins to literally see how an intersection of privileged positions-- a term used to describe the advantages offered only to particular groups, such as those that come along with being white, male, able-bodied, or college-educated-- can lead to an array of colorful options for the future. An intersection of disadvantaged positions, on the other hand, such as being gay, or transgender, or disabled, or poor, reduces-- and, at times, eliminates altogether-- that person’s ability to pursue a particular life path. It’s a simple visualization, which relies only upon the creative use of color, opacity, gradient, and form, and yet it illustrates a powerful point: that one’s identity, and therefore one’s privilege, is determined by multiple factors that all intersect.

Left: “Four intersections, with four intersectional privileges” | Right: “Four intersections, with one intersectional barrier.”

Credit: Ententa’s Magic

Source: https://ententasmagic.wordpress.com/2013/04/14/intersectionality-illustrated/

In addition to the intersection of the various aspects of a person’s identity, each individual aspect can be quite complex. Again, an anonymous person on the internet offers one of the most inspiring examples of how we might visualize gender, for instance, if we weren’t limited to the male/female split. The creator of the Non-Binary Safe Space Tumblr shows how gender might be visualized as a spectrum, or as a branching tree. They sketch out how non-binary genders might be placed around a circle, in order to emphasize shared sensibilities rather than differences; or plotted on a Cartesian plane, in which “male” and “female” serve as the axes, with infinite points in between. They even wonder about designing a series of interactive sliders, with “female” and “not female,” “male” and “not male,” and “other” and “not other,” serving as the respective poles; or even a 3D cube, with a vector charting a person’s changing course through their evolving sense of self. These are designs that, like “Intersectionality, Illustrated,” come from personal experience, and they offer a powerful point of departure for thinking through new classification systems and visualization schemes.

Caption: Four different ways to visualize gender: spectrum, spectrums, donut, cube.

Credit: Non Binary Safe Space Tumblr

Source: https://nonbinarysafespace.tumblr.com/
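One of these designs-- the Cartesian plane-- is simple enough to prototype in a few lines of code. The sketch below is our own hypothetical rendering, not the Tumblr creator’s: the identities plotted, and where they sit, are illustrative placeholders, and any such placement is necessarily provisional.

```python
import matplotlib.pyplot as plt

# "Male" and "female" as axes rather than poles: every point in the
# plane is a possible gender. Labels and positions are placeholders.
points = {
    "man":       (1.0, 0.1),
    "woman":     (0.1, 1.0),
    "androgyne": (0.7, 0.7),
    "agender":   (0.1, 0.1),
}

fig, ax = plt.subplots()
for label, (male, female) in points.items():
    ax.scatter(male, female)
    ax.annotate(label, (male, female),
                textcoords="offset points", xytext=(5, 5))
ax.set_xlabel("male")
ax.set_ylabel("female")
ax.set_title("Gender as a plane, not a line (illustrative sketch)")
plt.show()
```

Even a toy like this makes the conceptual shift visible: “male” and “female” become axes rather than poles, and every point on the plane is a possible gender.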

 When we went to track down the permissions for the Non Binary Safe Space Tumblr, we discovered that the site had been taken over by spammers. But maybe it’s a sign of the times (along with the inevitable descent into spam) that some of these ideas have already begun to enter major publications. For example, when Amanda Montañez, a designer for Scientific American, was tasked with creating an infographic to accompany an article on the evolving science of sex and gender, she envisioned a spectrum not unlike the one pictured above. But she soon found confirmation of what feminist theorists have been saying for decades (and what we’ve been saying so far in this book): that sex and gender are not exactly the same thing. More than that, what we might think of as the easier concept to explain--the biological category of sex--is just as fluid and complicated as the social category of gender.  

Visualizing Sex as a Spectrum

Credit: Pitch Interactive and Amanda Montañez; Source: Research by Amanda Hobbs; Expert review by Amy Wisniewski, University of Oklahoma Health Sciences Center

Source: https://blogs.scientificamerican.com/sa-visual/visualizing-sex-as-a-spectrum/

Permissions: PENDING

The result, “Beyond XX and XY,” a collaboration between Montañez and the design firm Pitch Interactive, is a complex diagram, which employs a color spectrum to represent the sex spectrum, a vertical axis to represent change over time, and branching arrows to connect to text blocks that provide additional information. Montañez hopes that the visualization, with its careful adherence to terminology and its inclusion of only properly categorized data, will help “raise public awareness” about intersex as well as transgender and non-binary people, and “help align policies more closely with scientific reality, and by extension, social justice.” In other words, Montañez made what was already counted count.

Even when working with binary gender data, designers can still make those limited categories count. For example, in March 2018, when the reporters on the Lifestyle Desk of The Telegraph, a British newspaper, were considering how to honor International Women’s Day, they were struck by the significant gender gap in the UK in education, politics, business, and culture. They didn’t have the time or the expertise to collect their own data, and even if they had, there’s no telling whether they would have collected non-binary gender data. But they wanted to ensure that they didn’t further reinforce any gender stereotypes. They paid particular attention to color, with the awareness that even as many designers are moving away from using pink for girls and blue for boys, most still adhere to the logic that associates warm colors with women and girls, and cool colors with men and boys. Because the stereotype that women are warmer and more caring, while men are cooler and more aloof, is still firmly entrenched in many cultures, the associated colors are easier to interpret-- or so the argument goes.

This stereotype is, of course, another hierarchy, and because the goal of the Telegraph team was to mitigate inequality, not reinforce it, they took a different source of inspiration: the “Votes for Women” campaign of early twentieth-century England, in which purple was employed to represent freedom and dignity, and green to represent hope. When thinking about which of these colors to assign to each gender, they took a design principle as their guide: “Against white, purple registers with far greater contrast and so should attract more attention when putting alongside the green, not by much but just enough to tip the scales. In a lot of the visualizations men largely outnumber women, so it was a fairly simple method of bringing them back into focus,” Fraser Lyness, the Telegraph’s Director of Graphic Journalism, told Lisa Charlotte Rost, herself a visualization designer, who interviewed Lyness for her blog. Here, one hierarchy-- the hierarchy in which colors are perceived by the eye-- was employed to challenge another-- the hierarchy of gender. Lyness was right. It was a “fairly simple method” to employ. But when put into practice, it had profound results.
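Lyness’s intuition can even be checked with a few lines of code. The sketch below computes WCAG-style contrast ratios against a white background; the hex values are our stand-ins for suffrage purple and green, not the Telegraph’s exact palette:

```python
def relative_luminance(hex_color):
    """WCAG 2.0 relative luminance of an sRGB color like '#800080'."""
    channels = [int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5)]
    linear = [
        c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        for c in channels
    ]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_vs_white(hex_color):
    """Contrast ratio of a color against a white background."""
    return (1.0 + 0.05) / (relative_luminance(hex_color) + 0.05)

# Stand-in suffrage colors; the Telegraph's palette may differ.
print(round(contrast_vs_white("#800080"), 1))  # purple: 9.4
print(round(contrast_vs_white("#008000"), 1))  # green:  5.1
```

With these stand-in values, the purple registers at roughly 9.4:1 against the green’s 5.1:1. The Telegraph’s actual shades were presumably closer-- “not by much but just enough,” as Lyness put it-- but the principle is checkable either way.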

There are all sorts of instances of designers, as well as journalists, artists, activists, and scholars, using data and design to bring issues of gender into view. P. Gabrielle Foreman and her team at the University of Delaware are creating a historical dataset of women who would otherwise go uncounted, and therefore unrecognized for their work. The team’s focus is on the women who attended, but were not named as participants in, the nineteenth-century Colored Conventions: organizing meetings in which Black Americans, fugitive and free, met to strategize about how to achieve educational, economic, and legal justice. Because these women often worked behind the scenes-- packing the lunches and watching the children so that their husbands could attend; running the boarding-houses where out-of-town delegates stayed during the conventions; or even, as research has shown, standing in the back of the meeting hall in order to make their presence known-- their contributions were not counted as participation in the events. But as continues to be true today--think back to the issue of maternal mortality mentioned at the beginning of Chapter One, or to the issue of sexual assault, as we discuss more in The Numbers Don’t Speak for Themselves--the systems of power that place women below men in patriarchal societies such as ours are the same systems that ensure that the types of contributions women make to those societies are valued less, and are therefore less likely to be counted.

But counting is not always an unmitigated good. Sometimes counting can have unintended consequences-- really bad ones-- especially for marginalized groups. Some transgender people, for example, prefer not to disclose the sex they were assigned at birth, keeping their identity as a trans person private. Even for those who generally choose to make their trans identity public, being visibly identified as trans on a map, or in a database, for example, could expose them to violence. Even in a big dataset, there is no additional strength in numbers. Compared to cisgender people (folks whose genders match the sex they were assigned at birth), trans people are a small enough group that they are more exposed, and therefore more vulnerable.

A similar paradox of exposure is evident among undocumented immigrants: visualizing the precise locations of undocumented immigrants may, on the one hand, help make an argument for directing additional resources to a particular area; but on the other, it may alert ICE officials to the locations of their homes or schools, making the threat of deportation more likely. In cases where lives are at stake, and the security of the data can’t be guaranteed, not collecting statistical outliers can be the best way to go, as Catherine has argued in some of her other work. In other cases, however, the decision to exclude outliers can be viewed as “demographic malpractice,” since it completely erases the record of those whose experiences are already marginalized in their everyday lives, and forecloses any future analysis for good or ill.

Is there any way out of this paradox? Feminist geographer Joni Seager has studied this issue for decades, and in 2004, experienced its effects firsthand when she began what she thought would be an easy project: making a map of women doctors for her monumental Atlas of Women in the World. But she hit a wall when she discovered that the World Health Organization data on medical professionals did not include a field for gender. Seager had to abandon the map, and as a result, she could not include any information about women doctors in her Atlas. Ever since, her approach has been to always collect gender data according to the most precise possible categories, and also to always ask-- before the analysis phase-- whether the data should be aggregated or otherwise anonymized in order to mask any potential adverse effects.  

[ IMAGE FROM SEAGER ATLAS TK ]
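What might Seager’s two-part practice look like in code? Below is one minimal sketch; the survey records and the suppression threshold are invented, and real disclosure-control rules vary by context. Gender is collected in precise categories, and then, before analysis, any count small enough to expose an individual is suppressed:

```python
import pandas as pd

# Invented survey records, with gender collected in precise
# categories rather than forced into a binary.
survey = pd.DataFrame({
    "region": ["North", "North", "South", "South", "South", "South"],
    "gender": ["female", "non-binary", "female", "male", "male", "male"],
})

# Before analysis: aggregate, then suppress any cell whose count
# is small enough to identify an individual. The threshold of 3
# is a stand-in; real disclosure rules vary.
THRESHOLD = 3
counts = survey.groupby(["region", "gender"]).size().reset_index(name="n")
counts["n"] = counts["n"].astype("Int64")          # nullable integers
counts.loc[counts["n"] < THRESHOLD, "n"] = pd.NA   # suppressed cells

print(counts)
# Only the South/male cell (n = 3) survives; the smaller cells
# are masked before the data goes anywhere.
```

The point is the order of operations: collect precisely first, then decide-- deliberately, and with the affected groups in mind-- what can safely be shown.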

Seager’s research is focused on the collection practices associated with global and nation-wide data, where she has found that gender data is often collected, but rarely made available or analyzed in disaggregated form. For example, in 2015, the Pew Research Center published a report about cell phone use in Africa. “Cell Phone Ownership Surges in Africa” was the title of the report, and the first chart showed the growth in cell phone ownership in the United States compared with several African countries. But buried in the text of the report was a surprising finding: “Men are more likely than women to own a cell phone in six of the seven countries surveyed.” Now, this would seem like an important distinction-- and perhaps one tied to other inequities-- but because gender was not treated as a primary category of analysis, those who didn’t read the fine print might not come away with one of the report’s most important findings. In the case of this study, it wasn’t a question of what got counted that turned out to matter, but of how that counting was put to use.

Sometimes, however, questions about counting shouldn’t be answered by the survey designer, or by the data analyst, or even by the most careful reader of this book. As a final example helps to show, questions about counting often go hand-in-hand with questions of consent. Flash back to another era-- 2006-- when another debate about a border wall was underway. Its source was the Secure Fence Act, a bill signed into law by then-President George W. Bush, which authorized the construction of a 700-mile fence along the US-Mexico border. But for the fence to be completed, it would have to pass through the Tohono O'odham Nation, which straddles both countries. Recognizing that they would have to build around several sacred burial sites, the U.S. government requested that the O’odham Nation provide it with the locations of those sites.

Ofelia Rivas is a Tohono O'odham elder who fought against the US government erecting a fence that cut her nation in half.

Credit: Catherine D’Ignazio

Source: Catherine D’Ignazio

In O'odham tradition, however, the locations of burial sites constitute sacred knowledge, and cannot be shared with outsiders under any circumstances. The O'odham Nation refused to violate its own laws by divulging information about its burial sites to the U.S. government, but it could not oppose the legal or political power of the United States. The United States built the fence, unearthing many O’odham remains in the process, and the tribe spent months attempting to get the US to return them.

But why should it be assumed that the O'odham Nation, which has existed for thousands of years, would weigh its own laws less heavily than those of the United States, which-- after all-- has existed for less than two hundred fifty? Who has the right to demand that information be made public, and who has the right to protect it? And what are the cultural assumptions-- and not just the logistical considerations-- that go along with making knowledge visible and information known?

We’ve all heard the phrase “knowledge is power,” and the example of the border wall shows how this is undeniably true. But the range of examples in this chapter, we hope, also helps to show how knowledge can be used to contest power, and to begin to transform it. By paying attention to the politics of data collection, and to the systems of power that influence how that data is collected, we can work to rebalance some of the relationships that would otherwise reinforce those systems’ force. We might look to large institutions like the National Library of New Zealand, which convened the Ngā Upoko Tukutuku Reo Māori Working Group to develop new subject headings for the Māori materials in its collections, ensuring that those materials would be classified in terms of subjects that make sense within a Māori worldview. We might look to small research groups like Mobilized Humanities, which aggregated and visualized dozens of public datasets relating to the U.S. government’s “Zero Tolerance” policy, in order to call attention to the humanitarian crisis that unfolded along the US-Mexico border in the summer of 2018. We might look to individual artists like Caroline Sinders, who is developing a dataset of intersectional feminist content that can be used to train the next generation of feminist AI. Or we might look to distributed movements like #SayHerName, which employed that Twitter hashtag to create a digital record of the police violence against Black women that would otherwise go unrecorded.

Each of these projects recognizes that what gets counted counts, and that how we count-- and how we decide to show our results-- profoundly influences the ideas we’re able to take away. An intersectional feminist approach to counting, like the one we’ve demonstrated here, insists that you always ask questions about the categories that structure your data, and about the systems of power that might, in turn, have structured them.

Comments
88
?
Os Keyes:

Thinking about this more: the issue with this section is not just the classification but also that it’s eliding the fact that these are fundamentally different models and conceptions of gender - not just in terms of “how many genders there are” but also in terms of the fluidity of gender, the components that go into making gender. What if it was restructured to talk more about models than categories themselves?

?
Os Keyes:

I would probably be more specific than attributing gender to Jefferson?

?
Os Keyes:

Laqueur’s work is “Making Sex” for a reason - this again reduced gender to embodiment.

?
Os Keyes:

This isn’t it; there’s no additional strength in numbers because violence against us is acceptable. It’s not about how many of us there are.

?
Os Keyes:

I might reframe this to not make us sound, er, teeny. There are a lot of us.

?
Nikki Stevens:

this phrasing implies that intersex, trans, and enby are distinct categories, which they are not.

?
Nikki Stevens:

gender identity or expression? important to be specific here.

?
Nikki Stevens:

the “or” here is problematic. I’d love to see a rephrase that reminds the user that one can be gay, trans* and disabled.

?
Nikki Stevens:

here again - is there an implied white? Gender is racialized and white women find themselves near the top of many hierarchies when we also include BIWOC

?
Nikki Stevens:

what counts as rancid is not universal but culturally constructed.

?
Nikki Stevens:

This phrasing reproduces the 1,2,3 above that was the object of critique.

?
Nikki Stevens:

I’m thinking here of Cheney- Lippold’s use of “male” (and measurable types) when describing the way that we are known by the data. it sounds like you could be making a case that male Facebook users are actually “male” because we don’t know their actual gender.

?
Nikki Stevens:

some folks do want this because they can’t afford otherwise.

?
Nikki Stevens:

There are a lot of assumptions here - on class, on preference. It feels out of keeping for a book positioned as power and class aware.

?
Os Keyes:

Yeah, who is “we” here? I think about classification systems all the time - because I *have* to. Because my existence is strongly coerced by gender classification. Am I not ‘we’?

?
Nikki Stevens:

“the other” phrasing implies that there are only 2 genders.

?
Nikki Stevens:

The groups themselves are not gender non-conforming. Do you mean “groups who are activists for rights/representation of gnc folks”

?
Os Keyes:

The treatment of sex as biological/”natural” is one of the big sources of legitimisation for the oppression of intersex people. It is just as much a construct as gender. The treatment of gender and sex as distinct further reinforces and confuses this debate, and does not match sociological models of gender.

Lauren Klein:

You’re right. We need more nuance here.

?
Os Keyes:

+100

?
Os Keyes:

This feels like an oversimplification; yes, visualisation needs to include gender in a more nuanced way, but that has to include operationalising gender in a nuanced way. I do not so much care if you have a spectrum-based graph for assessing gender identity if the thing you’re measuring is actually about how gender is managed or perceived, for example. In terms of identity, the problem is not any classification system - the problem is classification.

?
Os Keyes:

This seems extremely gross. The chapter suddenly transitions from “we should do this because we’re feminists!” to “we should do this because capitalism demands it!” - from principle to self-interest - and in doing so reinforces the legitimacy of self-interest and elides the fact that I have never been in a company where management gave a fig for, for example, non-binary folk. I would really recommend pointing back to principle. Why not point to user harm rather than company harm? A good example could be Bivens’ Bumble paper.

Lauren Klein:

Thanks for this important feedback.

?
Os Keyes:

By “we most commonly” this presumably means white, western people? The sentence structure implies that the previous groups have been extinguished, or exist neatly under “transgender”: I know some Hijra people who would disagree.

?
Os Keyes:

You wouldn’t describe someone as coming out as “gender man” or “gender woman”. To couch non-binary people like this implies some distinction - some invalidity or less-than status of non-binary existences. I would strongly suggest doing something else here.

Lauren Klein:

Thank you for this comment. It was certainly not our intention to imply any invalidity, and I have corrected this phrasing here and throughout the manuscript.

I’m leaving your comment here as it was instructive to me, and it may be to others.

Elizabeth Losh:

Do you want to talk about differences between big data and ethnography? When it comes to cell phone adoption in Africa, the Institute for Money, Technology, and Financial Inclusion has interesting feminist research.

Elizabeth Losh:

The work of Seda Gürses on tracking female scientists is also interesting, which is described at https://clalliance.org/blog/advocating-online-privacy/

Elizabeth Losh:

There seem to be a lot of ejaculatory exclamations in this chapter, and I am not sure how effective the second-person address is at persuading its imagined audience.

Lauren Klein:

Noted. Thanks for the feedback.

Elizabeth Losh:

Acknowledging the potential counterargument here of abandoning classification seems very important in light of the book’s project in defending feminist quantitative as well as qualitative research. In transitioning to the next paragraph, perhaps more could be said about the relationship between platforms (that may require sorting systems in basic design) and infrastructures. Obviously a figure like Leigh Star is important in making this connection, but perhaps more could be said to unpack the legacies of the Star and Bowker work.

Momin M. Malik:

Bowker & Star have a fantastic chapter about people who didn’t fit neatly into apartheid categories.

Momin M. Malik:

Bowker & Star!!!!!

Also, see Arthur Stinchcombe, When Formality Works. The first two chapters are quite useful, although not philosophically precise.

?
Os Keyes:

+100; Star and Bowker seem key here since their focus is on the guaranteed imperfection of classification systems.

Momin M. Malik:

See Deborah Hellman, When is Discrimination Wrong?. I saw her present around fairness in ML and was incredible impressed. She’s co-authoring a paper with Hannah Wallach that I’m excited to see.

Momin M. Malik:

This could be a place to cite Bowker & Star, Sorting Things Out: Classification and its Consequences.

Lauren Klein:

We cite Bowker and Star later in this chapter, but it sounds like folks want to see them referenced earlier. It’s something we can do, for sure, although it’s important to me to emphasize that feminist theorists were rethinking classification systems for decades before STO.

+ 1 more...
?
Yanni Loukissas:

Can’t this be an endnote?

Lauren Klein:

Yes. Initially, we were operating under the assumption that the book would not have endnotes, and that all references needed to be inline.

?
Yanni Loukissas:

Yes!

?
Firaz Peer:

a quick recap of the examples might be helpful. Or maybe including a paragraph to summarize this chapter.

?
Nicole S.:

I’m not entirely clear on who “we” is in this sentence. Is it Lauren and Miriam? Or Lauren and Catherine?

?
Rena Bivens:

Perhaps this happens somewhere earlier on in this book, but it could be worthwhile to explain exactly what this means and give links or other ways for people to learn more. I also thought about this at the beginning of this chapter when the word ‘wince’ was used. This experience would be recognizable to many but for those who are unlearning the gender binary for the first time, they may need qualifiers or examples to get what underlies the wince. Of course this is a question of audience(s), which others have pointed to in various ways and I’m sure you both are thinking about a lot.

Lauren Klein:

Thanks, Rena. We appreciate this comment.

?
Rena Bivens:

here it is :)

?
Rena Bivens:

Facebook relies only on the (always public) pronoun selection, not on the gender selection, to determine how gender is stored in the database (which is what is accessed by advertisers and other third party clients).

Lauren Klein:

Good to know. Thanks for the clarification.

?
Rena Bivens:

You could consider noting somewhere in this paragraph that ideas beyond the gender binary pre-existed this time. For example, two-spirit identities amongst Indigenous peoples (which are varied).

Lauren Klein:

Good point. We can bring up this discussion from later on in the chapter, where it currently sits.

?
Heather Krause:

Running away from a bear is often the worst thing you can do if you want to survive

Lauren Klein:

Noted!

?
Heather Krause:

Is this true globally or only in the US or Western world? I work with a lot of international data that is not gender binary.

Lauren Klein:

I know some of the countries that allow people to indicate non-binary gender on passports (Australia and India come to mind), but would love more specific references if you could provide them.

?
Heather Krause:

I was under the impression that FB’s third category was other/unknown not just unknown. Not that this improves the situation - but it might need a fact check.

?
Heather Krause:

Totally agree with the valid point. However, are you certain that this is how FB maps gender for advertisers? I didn’t see this in Biven’s work - or other pieces on gender and FB - but I might have missed it.

?
Rena Bivens:

Just a mention that this continues to be restricted based on language choice of the user and for some languages only binary gender is an option.

Lauren Klein:

Oh, interesting! Noted!

?
Os Keyes:

I mean, you could look at race, or disability, or..

?
Rena Bivens:

Given that the chapter has dealt with problems associated with binary gender categories so far, this example feels a bit out of place given that it only deals with gender in a binary way.

+ 2 more...
Marian Dörk:

spell out for non-US readers

Marian Dörk:

until here one might get the impression that categories and classifications are inherently bad. you do allude to the necessity of categories at the start of the chapter, but i think it would be helpful to elaborate again the potential and utility of classification systems. maybe a plea for action would be to work towards inclusive or emancipatory classifications that work for all and also allow us to address the injustices and exclusions that are still present.

maybe you also find this research on database criticism useful in this regard:

Feinberg, M., Carter, D., and Bullard, J. (2014). Always somewhere, never there: Using critical design to understand database interactions. In CHI ’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1941–1950.

Marian Dörk:

this is a strong plea, but i am unsure whether that we can really _ensure_ the power of such categories to spread.

isn’t this the crux here?: how can we fight categorical oppression without reproducing the categories?

Marian Dörk:

the facebook signup screenshot shows only male and female - maybe acknowledge in caption that the freeform text field appears in the settings once signed up…

?
Rena Bivens:

Agreed. It continues to be a problem that the sign-up form remains binary, but is also demonstrative of the importance of gender (understood as binary) for Facebook.

+ 2 more...
Marian Dörk:

broken for whom and what purpose? maybe too obvious to state, but it requires considerable awareness, empathy, and practical solidarity by gender-binary folks to consider these categories to be broken.

?
Nikki Stevens:

+1 here.

?
Annette Vee:

I don’t think this is exactly synonymous, right?

Lauren Klein:

We’re trying to use the best possible definitions we can find, so please correct us if we’ve gotten this wrong.

+ 1 more...
?
Annette Vee:

A few people have mentioned this above, but the persistent “you” assumptions here don’t sit well with me as a reader.

?
Os Keyes:

+1

?
Annette Vee:

Check out Stacey Waite’s poem on this exact topic. https://www.youtube.com/watch?v=OAIfgbMmBIA

Catherine D'Ignazio:

Thank you - this is a great poem/performance!

?
Maya Wagoner:

This reads oddly, like the reader should start asking Black women around them about their embarrassing hair/security state experiences, or as if hair can be isolated from racial imaginaries in general (it’s not actually just about hair). Maybe rephrase as “He is white, so he does not run the risk of racial profiling, unlike, for example, the many Black women each day who are patted down ostensibly because of the perceived threat of their natural hair.”

Lauren Klein:

Great suggestion. A much better way to convey the point.

Leaving your comment so others can advise as to whether this rewrite make the point more clearly.

Oliver Haimson:

Overall I think this chapter is great. One thing you might consider is writing a bit more not just about people who don’t fit within categories, but about people who shift from one category to another or who are fluid between categories. For example, in the gender context, most of the examples you discuss are about non-binary experiences of not fitting within existing gender categories of male or female. These are important. But other transgender experiences are about shifting from one binary category to another, which can be similarly contentious when it comes to data, analysis, visualization, etc. Gender fluidity is another example of shifting between and among categories. All of these are important to consider. Thanks for writing this and doing this important work!

Lauren Klein:

We really appreciate this comment, and will try to account more for gender fluidity in our revision.

Oliver Haimson:

I agree with this argument. In some cases, it may make sense to collect data (if safe) and report percentages of marginalized groups in aggregate but not identify particular people. I think you get to this in the next example, but might be good to put in a sentence here as well.

Oliver Haimson:

It might be good to state that this is not a definitive way to visualize gender, but just an example of potentials (for example, the positions of non-binary, GNC, and genderqueer on the spectrum seem arbitrary).

Lauren Klein:

Good point.

Oliver Haimson:

This diagram is a bit hard to read, at least for me - maybe describe what particular identities the figure on the left and the right may represent?

Oliver Haimson:

powerful sentence! :)

Oliver Haimson:

I don’t particularly follow the argument that assigning the male category a 1 value means that men are ranked first, because I don’t think there is a list that gets sorted in this way.

?
Rena Bivens:

Agreed. Perhaps we are missing something about where this argument is coming from?

Oliver Haimson:

Great example. Note that trans people (as well as non-binary or gender non-conforming people) also face substantial harassment and discrimination - what Dean Spade would call “administrative violence” - at the hands of TSA.

Oliver Haimson:

Great paragraph!

Oliver Haimson:

I like the Golden Gate Bridge metaphor. You could take it further: the bridge requires constant maintenance, otherwise it starts to break down. The gender binary, since it applies to people, has started to break down despite the constant maintenance that we impose on each person (or something better thought out along these lines).

Lauren Klein:

I like this idea!

Oliver Haimson:

You might consider citing Bivens and Haimson’s study about gender options in social media sign-up pages - we found that most sites that required gender on sign-up only allowed binary options. https://journals.sagepub.com/doi/abs/10.1177/2056305116672486

Meredith Kelling:

“goes.”

Meredith Kelling:

Potentially you’ll want to use her full name as she does when she publishes: “Safiya Umoja Noble.”

Christopher Linzy:

For flow/clarity I suggest a rewording similar to the following: “These were systems that were designed to exclude, and in instances ranging from the maternal health outcomes we’ve already discussed to the troublingly divergent Google search results for “black girls” vs. “white girls” shown by information studies scholar Safiya Noble, we can detect the effects of those racist systems every day.”

Christopher Linzy:

For clarity and emphasis I recommend a rewrite similar to the following: “And this discrepancy leads right back to the body issues we discussed in Chapter One: power lies with corporations like Facebook who control the terms of data collection, but individuals like Maria Munir, who have personally (and often painfully) run up against the limits of our current classification systems, are often those best informed on their shortcomings and how to improve them.”

Hannah House:

Will there be a section with specific info on references?

Lauren Klein:

We came to an agreement with the publisher to use endnotes/works cited only after this draft was started. We’ll have a lot more in the final version.

Kecia Ali:

There’s more at stake here than the names by which we refer to these categories. Sometimes, yes, they are best understood as genders - but also, sometimes it’s more an issue of sexuality, or spiritual role, or something else. Which, of course, points to the fact that “gender” itself is a category that doesn’t always separate easily from other sorts of designations.

Lauren Klein:

Thanks for this comment. We’ll be sure to treat these terms with more nuance in the final draft.

Ksenia Gueletina:

A simple definition would probably work better; the tone is a little strange.

Lauren Klein:

Point taken, and good idea!

Shannon Mattern:

A really powerful paragraph.

Anne Pollock:

It’s not just about group size, of course. Some small groups are less vulnerable than the majority: the 1%, for example.

Lauren Klein:

Had not thought about this. Good point!

Shannon Mattern:

There’s a call to action, but then no recommendations for concrete steps. I’d follow this up by saying *how*, or by telling us that you’ll recommend some concrete actions later in the chapter, or in the last chapter (or wherever).

Lauren Klein:

Good point. We are very invested in providing some of these steps in each chapter, so we’ll think about how we can make some of the possible steps clearer and more concrete.

Anne Pollock:

Assumes too much about the reader.

Anne Pollock:

Struck by the absence of intersex in this chapter.

Rena Bivens:

I was thinking that too, in relation to the earlier TSA discussion.

Anne Pollock:

Ah, this addresses my earlier question about in what sense advertisers see users’ genders. What is the basis for assuming that this is the way it happens? I’m very skeptical that actual lists would be provided from Facebook to an advertiser. I would assume that Facebook would keep the list for itself, and use its list to distribute the advertiser’s ad.
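
(A toy sketch in Python of the information flow I have in mind; every name here is hypothetical, and this is emphatically not Facebook’s actual API. The advertiser submits only targeting criteria and a creative; the platform matches them against data that never leaves its servers.)

```python
# All names hypothetical: a sketch of audience-based ad delivery,
# not any platform's real API.

# Private platform data; never shared with advertisers.
users = [
    {"id": 1, "inferred_gender": "female"},
    {"id": 2, "inferred_gender": "male"},
    {"id": 3, "inferred_gender": "female"},
]

def show(user, ad_creative):
    """Stub for the platform's internal ad-delivery step."""
    print(f"showing {ad_creative!r} to user {user['id']}")

def deliver_ad(ad_creative, criteria):
    """The platform, not the advertiser, matches criteria to users."""
    audience = [u for u in users
                if u["inferred_gender"] == criteria["gender"]]
    for user in audience:
        show(user, ad_creative)

# The advertiser's entire view of the transaction: criteria in,
# delivery out. The matched list of users stays internal.
deliver_ad("Ad #123", {"gender": "female"})
```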

Ksenia Gueletina:

I agree; this criticism of categorisation doesn’t quite hold up. I think there’s a slight conflation of categorical and ordinal here: categorisation can inflict harm either by forcing a binary choice when the truth is non-binary, or by splitting a fluid spectrum into categories (as in the diagnostic category debates around mental health). I don’t think that the ordering section contributes to your argument; it would be rather like implying that people whose last names start with Z are worse off than those whose names start with A.
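
(To make the categorical/ordinal distinction concrete, a small Python illustration with hypothetical data, using pandas’ Categorical type: a nominal categorical carries no order at all, and ranking only becomes meaningful once an order is explicitly declared.)

```python
import pandas as pd

# Nominal: categories with no inherent order (like surnames A vs. Z).
nominal = pd.Series(["male", "female", "non-binary"], dtype="category")
# A ranking comparison such as `nominal < "male"` raises a TypeError:
# unordered categoricals support equality checks only, never ranking.

# Ordinal: an order exists only because we declare one explicitly.
ordinal = pd.Series(pd.Categorical(
    ["low", "high", "medium"],
    categories=["low", "medium", "high"],
    ordered=True,
))
print(ordinal.min(), ordinal.max())  # -> low high
```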

Anne Pollock:

Weird choice of example. Many people wouldn’t want this — people without much money, people concerned about privacy… Maybe pick something more widely desired?

Lauren Klein:

I take your point, but do you have any thoughts as to a good example? I was trying to think of an example I’d seen, which also might lead to additional positive developments down the road. (So, not just a free pizza, for instance).

Anne Pollock:

Again, no need to assume an ignorant reader. A relatively small but certainly nonzero number of cisgender people know this, because they have read or heard about it before.

Lauren Klein:

True. I’ll work on a rephrasing.

Anne Pollock:

This kind of use of the second person comes across as condescending. It’s just as easy to say something like “Middle school students routinely learn about…” The text shouldn’t assume that readers know anything about the history of science, but it shouldn’t assume that they are completely ignorant of it, either.

Lauren Klein:

Thanks for pointing this out. We’ll need to think through our use of the second person throughout, as I think I mentioned to you in a comment on an earlier chapter.

Oliver Haimson:

I’d recommend changing “see” to “target.”

Anne Pollock:

In what sense do the advertisers see users’ genders? Aren’t the ads just shown selectively to the gender-marked audiences?

Anne Pollock:

Seems like an undercount — where does this number come from?

Oliver Haimson:

I agree that this non-binary statistic needs a citation - since (as you note) counting non-binary people is a contentious practice itself and varies widely depending on the measure.

Anne Pollock:

The point is good but obviously not all user accounts require selecting gender (or any other characteristic). The overstatement is distracting.

Anne Pollock:

Oh, come now: with so much harassment and abuse on the internet, there are surely other words more unwelcome.

Jaron Heard:

Rhetorically effective, but do you want your readers to actually do this? It might be fine, but worth consideration.

Anne Pollock:

I share Jaron Heard’s concern about this phrasing, but for a different reason: it operates on the assumption that the reader is neither a Black woman nor a person named Muhammad.

Jonas Parnow:

I think it is important to mention one more type: Boolean.

It is not just technically the basis of the other types; especially in the context of this book, it is the manifestation of the whole binary mode of thinking. (This then leads to interesting general questions about the totality of the discriminating nature of the digital.)

Oliver Haimson:

I agree - Boolean is a much more appropriate example of a computational category. You could also consider mentioning statistical binary data types such as “dummy” variables.
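
(A minimal sketch in Python, with hypothetical field names, of how a Boolean bakes the binary into a schema before any data is ever collected, while a string or user-extensible field does not.)

```python
from dataclasses import dataclass

# A Boolean field admits exactly two states: the binary lives in
# the type itself, prior to any individual data entry.
@dataclass
class BinaryProfile:
    is_female: bool  # True/False; no third value is representable

# A free-text (or user-extensible enumerated) field pushes the
# decision out of the type system and back to the person described.
@dataclass
class OpenProfile:
    gender: str  # e.g. "female", "male", "non-binary", "agender"

profile = OpenProfile(gender="non-binary")  # representable here,
# but impossible to express in BinaryProfile at all
```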

Jonas Parnow:

One similar example from Germany would be the discussion around #brauneKarte (“brown map”). Some people created a map on Google Maps called »No refugee centers in my neighborhood« with the locations of all refugee centers in Germany. After many protests from people who feared attacks, Google took down the map. Unfortunately, I can’t find any English articles about it.

Shannon Mattern:

Perhaps you could also briefly mention the debate over including a citizenship question on the US Census?

Sarah Yerima:

I’m not familiar with the neurology literature, but as an undergrad, I vaguely remember a professor mentioning that human brains are wired (for lack of a better term) to process information via categorization. As such, we simply cannot avoid classifications and categorizations of different types! As you say, the way the systems of classification are constructed, and the potential for a vertical or hierarchical distribution of categories, is and should be our concern (i.e., being attentive to difference based on race, gender, sexuality, etc. isn’t a problem in and of itself, but racism, sexism, and queer antagonism are problems).

Nick Lally:

This section makes me think of these two excellent pieces: “Securitizing Gender: Identity, Biometrics, and Transgender Bodies at the Airport” by Currah & Mulqueen, and “What Did TSA Find in Solange’s Fro?: Security Theater at the Airport” in Simone Browne’s Dark Matters.

Anne Pollock:

Yes to those from Nick Lally. Also Shoshana Magnet and Tara Rodgers’ “Stripping for the state.”