
How much should we worry about sexism in tech?


While browsing for distractions on my way to the airport, I stumbled upon Kat Hagan’s post “Ways men in tech are unintentionally sexist”, hosted on Anjani Ramachandran’s One Size Fits One. The post is part of a larger debate about women’s presence and recognition in tech, a debate that at its worst captures the gap between how socially relevant issues could be discussed and how they actually are: we could make smart use of the wealth of bright minds and insightful data made available by the digital age; instead we chase click-bait headlines and artificially inflated scandals.

To her credit, Kat Hagan takes a much more thorough and considered approach, referencing scientific theories and academic papers to illustrate how men can be unintentionally sexist when approaching, designing, or managing technology and its development. We need more of that.

 

She then lists behaviours that should be avoided, some of which are very reasonable and uncontroversial, such as using “guys” to address a group of mixed genders, or ignoring women’s needs (the lack of period tracking in Apple’s new Health app is a particularly spot-on example).

Other recommendations, though, may sound entirely sensible at first (as confirmed by readers’ comments), yet hide a logical flaw that often recurs in discussions around sexism and other forms of discrimination:

you can’t scale linearly from individual to mass.

While there are great variations between individuals, as you get to big numbers you see statistically significant similarities between people of the same gender. We all agree that we should treat each individual on their own merit, but should we extend that to millions, or hundreds of millions, of people in the face of these similarities? Should we ignore them? Or worse, deny them?

 

I’m going to give some increasingly uncomfortable examples to show that things are more complicated than the sexism debate seems to account for, and that there are difficult questions worth at least asking ourselves.

 


1. Assuming gender identity

Kat argues that using avatars that are male by default is a form of sexism that should be avoided, but the underlying issue is whether we should allow ourselves to assume that a user is of a certain gender, and at what cost. The avatar example is an easy way out of the problem, because you can always go for neutral (although when I registered on Pinterest, with its overwhelmingly female membership, I’d have had no issue being presented with a female icon). Things get trickier with design choices that don’t always have an optimal neutral solution: colour palette; sizes; font; images of a user, such as a face or a hand. If the numbers proved that there are significant differences in preference between the genders, and our platform skewed male or female, should we ignore that? Should we opt for a neutral solution even if it doesn’t please anyone, as long as it doesn’t displease one gender or the other?


2. Assuming gender differences

Kat’s point no. 8 is “Stop denigrating things by comparing them to women or femininity”, like saying “you fight like a girl” or “you like chick flicks”.

This is the thinking behind the “Like a Girl” campaign by Always. Who couldn’t like it? Who couldn’t agree with it?

Unfortunately it’s hypocritical, because it hides an uncomfortable empirical truth. In our experience (and there may be times and places where things are different), most girls do fight “like girls”; most “chick flicks” are watched and liked by girls; just as most “jerk” acts and comments come from stupid males, and most horrible sexual comments are mouthed by male “pigs”. Is it true that “like a girl” tends to be an insult whereas “like a man” is celebratory? Yes. But we have other derogatory terms for men: jerk, pig, a**-hole, wanker… all unequivocally male.

Should we replace “fight like a girl” with “fight like a bitch”? Is this what we’re talking about?

On the other hand, we can decide that we’re better off as a society being hypocritical and treating these uncomfortable empirical truths as if they didn’t exist. But facts tend to be stubborn things, and in the long run hypocritical conventions end up damaging the broader cause they’re supposed to protect, because they make it come across as artificial and false.

 

3. Assuming gender interests

Kat argues that “assuming the women they meet are in non-technical roles” is a form of sexism. This is certainly true if you meet them at a tech conference, the (once again too easy) example she chose to illustrate her point; it’s a lot less true if you’re introduced to a new team of mixed roles, or if you’re meeting students at a grad fair. You can legitimately assume that someone interested in Computer Science is more likely to be male, because the numbers prove you right: if, hypothetically, you only had time to speak with one applicant and knew nothing of their background, picking a man would not be a form of sexism, it would be playing the odds.

Of course that doesn’t mean that you should rule female applicants out:

it’s ok to prepare for the usual, as long as you welcome the unusual with open eyes and mind.

But this is an easy principle to agree on, so let’s move on to more troubling questions: if you’re the parent of a young girl, and you have to enrol her in an extra class in either literature or coding, knowing that right now she’s interested in both (or neither), what should you do? And if you were to build a new dorm for your future Computer Science students in a country where men and women can’t share facilities, would you split the space half and half?

 

4. Assuming gender capability

Kat contrasts the prejudicial view that “Women just aren’t interested in programming/math/logic” with evidence that “the variation between individuals dwarfs any biological difference”. Counterintuitive as it may seem, both statements are true: there are massive variations between the capabilities of any two random individuals, which is why we should always be judged on our own merit; but at the same time, at large numbers, men perform marginally better in, and are significantly more interested in, mathematical and technical disciplines.
We design technology for millions, sometimes billions of users, and even a marginal difference in response can amount to a dramatic increase in adoption, revenues, and success. Should we ignore that for the sake of equality? Should we do more than that?

 

A famous experiment from a few years back showed that what we consider an absolute (e.g. how good someone is at something) is anything but: female Korean-American students were given a math assignment after going through a process that would remind them either of their gender or of their heritage. Participants who were primed on their Asian roots (positively associated with math skills) performed statistically better than equivalent students who were primed on their female gender (often associated with being bad with numbers).

If we’re pursuing equality, should we actively design technology requiring quantitative skills in a way that makes women forget that they’re women? Are women actually better off in a “sexist” office that calls everyone “guys”?

 

I’m not suggesting an answer to any of these questions, but I think it’s worth asking them. Human behaviour is counterintuitive and complicated: individually, we’re very different; in groups, we influence one another and form clusters; when you have to design for large groups, you inevitably sacrifice the uniqueness of each individual.

 


 

The point is not how to avoid discrimination.

We always discriminate: when a newspaper publishes an article in a certain font size; when a supermarket places one product on an eye-level shelf and another high up; when we decided which colours to use for traffic lights.

We always discriminate in technology, too: when we decide what operating system we develop apps for; what apps we preload onto a device; what features we include in those apps.

The point is how to discriminate well.

If we look at the world we live in, we follow a few principles:

  1. Discrimination must have a purpose: newspapers were printed in a single font size because, before digital came along, it would have been economically inefficient to do otherwise.
  2. It should be optimal for a sufficient majority: traffic lights are a bad solution for the blind and the colour-blind, but because most people don’t have those problems, they’re the solution we chose.
  3. It should not make things too hard for the minority: if you’re too short to reach a product on the top shelf of a supermarket, you can ask someone to help you.
  4. Sometimes, it requires people to adapt: if you move abroad you can’t expect people to learn your language; you have to learn theirs. It’s discrimination against new immigrants, but the alternative would be so inconvenient that they just have to comply.

 

When it comes to technology, we need to be aware that these trade-offs are an inevitable part of the job regardless of how uncomfortable they are, and so are the questions they bring along.

If Apple left period tracking out of their Health app because including it would have come at the expense of another feature, or of faster performance that would have made the product better for most of their users, would it still be wrong? And would that be an ethical question or a commercial question?

Would the answer change if there were fewer alternative health apps on the market?

 

How much worrying about sexism is too much?

And if we say it’s never too much, let’s rephrase that: how much disregard for statistically different behaviours between genders is too much?

How much gender-neutrality can we pursue, without being counterproductive to the success of what we do?

How much equality can we enforce without being patronising?
