Category Archives: technology

How much should we worry about sexism in tech?


While browsing for distractions on my way to the airport, I stumbled upon Kat Hagan’s post “Ways men in tech are unintentionally sexist”, hosted on Anjani Ramachandran’s One Size Fits One. The post is part of a larger debate about women’s presence and recognition in tech, a debate that at its worst epitomizes the gap between how socially relevant issues could be discussed and how they actually are: we could make smart use of the wealth of bright minds and insightful data made available by the digital age; instead we chase click-bait headlines and artificially inflated scandals.

To her credit, Kat Hagan has a much more thorough and thought-through approach, referencing scientific theories and academic papers, to illustrate how men can be unintentionally sexist when approaching/designing/managing technology and its development. We need more of that.

 

She then makes a list of behaviours that should be avoided, some of which are entirely reasonable and uncontroversial: addressing a mixed-gender group as “guys”, or ignoring women’s needs (the example of the missing period-tracking functionality in Apple’s new Health app is particularly spot on).

Other recommendations, though, may sound entirely sensible at first (as confirmed by readers’ comments), yet hide a logical flaw that often recurs in discussions around sexism and other forms of discrimination:

you can’t scale linearly from individual to mass.

While there are great variations between individuals, as you get to big numbers you see statistically significant similarities between people of the same gender. We all agree that we should treat each individual on their own merit, but should we extend that to millions, or hundreds of millions, of people in the face of these similarities? Should we ignore them? Or worse, deny them?

 

I’m going to give some increasingly uncomfortable examples to show that things are more complicated than the sexism debate seems to account for, and that there are difficult questions worth at least asking ourselves.

 


1. Assuming gender identity

Kat argues that using avatars that are male by default is a form of sexism that should be avoided, but the underlying issue is whether we should allow ourselves to assume that a user is of a certain gender, and at what cost. The avatar example is an easy way out of the problem, because you can always go for neutral (although when I signed up for Pinterest, with its overwhelmingly female membership, I’d have had no issue being presented with a female icon). Things get trickier when it comes to design choices that don’t always have an optimal neutral solution: colour palette; sizes; font; images of a user, such as a face or a hand. If the numbers proved that there are significant differences in preference between the genders, and our platform were skewed male or female, should we ignore it? Should we opt for a neutral solution even if it doesn’t please anyone, as long as it doesn’t displease one side or the other?
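As a purely illustrative sketch (the function and asset names below are hypothetical, not taken from any real platform), the “easy way out” for avatars is simply to make the neutral artwork the default and use gendered artwork only when a user has explicitly chosen it:

```python
# Hypothetical sketch: default to a neutral avatar, and only use gendered
# artwork when the user has explicitly said how they want to be represented.

NEUTRAL_AVATAR = "avatars/neutral.png"
GENDERED_AVATARS = {
    "female": "avatars/female.png",
    "male": "avatars/male.png",
}

def default_avatar(stated_gender=None):
    """Return a neutral avatar unless the user explicitly stated a gender."""
    if stated_gender is None:
        return NEUTRAL_AVATAR
    return GENDERED_AVATARS.get(stated_gender.lower(), NEUTRAL_AVATAR)
```

The harder question raised above starts where there is no equivalent of “neutral.png”: a colour palette or a font has to be something, and a sketch like this offers no answer for that.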

 

 

2. Assuming gender differences

Kat’s point no.8 is “Stop denigrating things by comparing them to women or femininity”, like saying “you fight like a girl” or “you like chick flicks”.

There is an entire campaign by Always built around exactly this, “Like a Girl”. Who couldn’t like it? Who couldn’t agree with it?

Unfortunately it’s hypocritical, because it hides an uncomfortable empirical truth. In our experience (and there may be times and places where things are different), most girls do fight “like girls”; most “chick flicks” are watched and liked by girls; just as most “jerk” acts and comments come from stupid males, and most horrible sexual comments are mouthed by male “pigs”. Is it true that “like a girl” tends to be an insult whereas “like a man” is celebratory? Yes. But we have other derogatory terms for men: jerk, pig, a**-hole, wanker… They’re all unequivocally male.

Should we replace “fight like a girl” with “fight like a bitch”? Is this what we’re talking about?

On the other hand, we can decide that we’re better off as a society being hypocritical and treating these uncomfortable empirical truths as if they didn’t exist. But facts tend to be stubborn things, and in the long run hypocritical conventions end up damaging the broader cause they’re supposed to protect, because they make it come across as artificial and false.

 

3. Assuming gender interests

Kat argues that “assuming the women they meet are in non-technical roles” is a form of sexism. This is certainly true if you meet them at a tech conference, the (once again too easy) example she chose to illustrate her point; it’s a lot less true if you’re introduced to a new team with mixed roles, or if you’re meeting students at a grad fair. You can legitimately assume that someone interested in Computer Science is more likely to be male, because the numbers prove you right: if, hypothetically, you only had time to speak with one applicant and knew nothing of their background, picking a man would not be a form of sexism, it would be weighing your odds.

Of course that doesn’t mean that you should rule female applicants out:

it’s ok to prepare for the usual, as long as you welcome the unusual with open eyes and mind.

But this is an easy principle to agree with, so let’s move on to more troubling questions: if you’re the parent of a young girl and have to enrol her in an extra class in either literature or coding, knowing that right now she’s interested in both (or neither), what should you do? And if you were to build a new dorm for your future Computer Science students in a country where men and women can’t share facilities, would you split the space half and half?

 

4. Assuming gender capability

Kat contrasts the prejudicial view that “Women just aren’t interested in programming/math/logic” with evidence that “the variation between individuals dwarfs any biological difference”. Although counterintuitive, both statements are true: there are massive variations between the capabilities of any two random individuals, which is why we should always be judged on our own merit; but at the same time, across large numbers, men perform marginally better in, and are significantly more interested in, mathematical and technical disciplines.
We design technology for millions, sometimes billions of users, and even a marginal difference in response can amount to a dramatic increase in adoption, revenues, and success. Should we ignore that for the sake of equality? Should we do more than that?
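To make that concrete with deliberately invented numbers (nothing below comes from real product data), even a half-point difference in adoption becomes very large in absolute terms once it is applied to a few hundred million users:

```python
# Back-of-the-envelope illustration with made-up numbers.
user_base = 300_000_000        # hypothetical audience size
neutral_adoption = 0.040       # 4.0% adopt with a strictly neutral design
skewed_adoption = 0.045        # 4.5% adopt with a design tuned to the majority

extra_users = user_base * (skewed_adoption - neutral_adoption)
print(f"Extra adopters from a 0.5-point lift: {extra_users:,.0f}")
# -> Extra adopters from a 0.5-point lift: 1,500,000
```

Whether that number justifies the design choice is precisely the ethical question, not an answer to it.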

 

A famous experiment from a few years back showed that what we consider an absolute (e.g. how good someone is at something) is anything but: female Korean-American students were given a math assignment after going through a process that would remind them either of their gender or of their heritage. Participants who were primed on their Asian roots (positively associated with math skills) performed statistically better than equivalent students who were primed on their female gender (often associated with being bad with numbers).

If we’re pursuing equality, should we actively design technology requiring quantitative skills in a way that makes women forget that they’re women? Are women actually better off in a “sexist” office that calls everyone “guys”?

 

I’m not suggesting an answer to any of these questions, but I think it’s worth asking them. Human behaviour is counterintuitive and complicated: individually, we’re very different; in groups, we influence one another and form clusters; when you have to design for large groups, you inevitably sacrifice the uniqueness of each individual.

 


 

The point is not how to avoid discrimination.

We always discriminate: when a newspaper publishes an article in a certain font size; when a supermarket places one product on an eye-level shelf and another one high up; when we decide which colours traffic lights should use.

We always discriminate in technology, too: when we decide what operating system we develop apps for; what apps we preload onto a device; what features we include in those apps.

The point is how to discriminate well.

If we look at the world we live in, we follow a few principles:

  1. Discrimination must have a purpose: newspapers were printed in only one font size because, before digital came along, it would have been economically inefficient to do otherwise.
  2. It should be optimal for a sufficient majority: traffic lights are a bad solution for the blind and the colour-blind, but because most people don’t have those problems, it is the solution we chose.
  3. It should not make things too hard for the minority: if you’re too short to reach a product on the top shelf of a supermarket, you can ask someone to help you.
  4. Sometimes, it requires people to adapt: if you move abroad you can’t expect people to learn your language, you have to learn theirs. It’s discrimination against new immigrants, but the alternative would be so inconvenient that they just have to comply.

 

When it comes to technology, we need to be aware that these trade-offs are an inevitable part of the job regardless of how uncomfortable they are, and so are the questions they bring along.

If Apple didn’t include period tracking in their Health app because it would have come at the expense of another feature, or of performance improvements that would have made the product better for most of their users, would it still be wrong? And would it be an ethical question or a commercial one?

Would the answer change if there were fewer alternative health apps on the market?

 

How much worrying about sexism is too much?

And if we say it’s never too much, let’s rephrase that: how much disregarding of statistically different behaviours among genders is too much?

How much gender-neutrality can we pursue, without being counterproductive to the success of what we do?

How much equality can we enforce without being patronising?


Julian Assange doesn’t matter

About 10 years ago, shortly after the birth of Napster, videos documenting police violence at the G8 in Genoa started to circulate on the p2p networks. Those who witnessed and filmed the events knew that their videos couldn’t have made their way into mainstream media, because that would have cast the government in too bad a light, so they uploaded them to the grassroots network they were already using to share music.

Nobody noticed back then because it was a sporadic event, and when that same technology started crashing the entertainment industry, that’s what grabbed the headlines.

If anything, it’s surprising that it took 10 years for the unauthorized leak of sensitive information to go mainstream, and for Wikileaks to establish itself.

What is not surprising is the hysterical reaction of established institutions, which are behaving just like the music industry did, quite possibly with the same results.

Prosecuting Shawn Fanning didn’t stop peer-to-peer, and did the music industry no good. Prosecuting Julian Assange isn’t going to stop Wikileaks. Even if Assange were taken out of the equation, someone else would set up a digital presence allowing people to anonymously upload any information they deem should go public.

There is significant demand, there is a digital infrastructure, there are very low barriers. No effort from any governmental institution can counterbalance the opportunities for atomized publication and sharing that digital has brought along. And just saying that something is different from how things should be done, or even dangerous, is not going to stop it. Anyone who doesn’t understand this doesn’t understand the 21st century.

Technology is not a question that legislators can answer “yes” or “no” to. Once adopted by a significant number of people, technology is a fact.

(Update: Dave Winer makes the same point here.)


Why Google can still rock the world

So, I indulged in a bit of Google slapping in some of my last posts, but since I’m very much aware that this seems to be one of Blog-land’s favourite activities for 2010, I thought I’d be true to the spirit of this blog and complicate things a bit. Here are some unsolicited considerations on why and how Google can still get bigger and better:

Search is a by-product

This is a fundamental point for everything that follows. Search is not a product. Not because it’s a feature: of course it is, but that’s not the point. In people’s eyes, the difference between product and feature is meaningless. Search is not a product because it carries no value in itself. It’s a by-product: it is generated by, and only makes sense with, a worthy product.

When Google launched, the web was one mammoth product that needed to be sorted out. You don’t need to agree with the theory that the web is dead (it’s not; the logic and math behind that theory are flawed) to acknowledge that we now have many products within the web: Facebook is one, but there are many others, such as Amazon, Groupon, Yelp and Wikipedia (i.e. the verticals). Facebook is not trying to replicate the internet within itself because Zuckerberg is ambitious: every one of the above sites has a natural drive to do it; Facebook is just getting there sooner and better. And that’s not a problem. However, all those sites have generated their own search as a by-product of their increasing complexity. That’s the problem.

Products create scale, by-products create profits

Up until the mid-2000s, things were pretty simple. Someone else created the product (the web), and Google profited from the by-product (search). This was only possible because in an atomized web no player was big enough to generate the user value needed to create a self-sufficient ecosystem. Even Amazon struggled to achieve the scale needed to generate profitability.

When the internet finally went mainstream, things changed. Facebook, Amazon, eBay and Skype were amazing products capable of drawing millions of users, effectively creating massive ecosystems. The scale of these ecosystems simultaneously generated the demand for certain by-products (ads, search, payments) and the business model that would support them. In a line, the lesson was: offer an amazing product, build a massive community, and the money will come. (Ironically, that’s exactly what caused the dot-com bust, but the problem back then was that companies went public before they went popular.)

Great products deserve a brand

The recommendation for Mountain View is fairly simple: build amazing products. Oh, and please don’t call them Google-something.

First, after Google Wave and Google Buzz, your next Google-branded product will suffer an unnecessary PR handicap. Second, it confuses users: Google stands for search, and that’s what they know and use already. If you build a product, you shouldn’t name it after the by-product.

If you look at what’s working, you’re doing it already: your two most successful recent products are not called Google Browser and Google Mobile, they’re called Chrome and Android. And they’re utterly brilliant!

And by the way, this is where you have an advantage over Facebook: because of its strategy, Facebook needs to bring everything under its roof and its name, and that limits what it can do and stand for. (The only exception is the “Like” button, which has a very different nature from Facebook: it’s no coincidence that it’s not called “Share”. It’s a bookmark, a favourite, a Digg with the scale that Digg never had: half of its value is one-to-self, to keep track of what you like; the other half is one-to-everyone, to broadcast your preference to strangers. Neither of these propositions is consistent with Facebook. That’s why it has massive potential.)

Start from Android

It’s a great product, a great brand, and it’s doing brilliantly. The only thing that could jeopardize its success is a lack of leadership, and this is exactly what’s happening. We all like the idea of an open environment, but marketing is like physics: there is no such thing as a void; when one substance leaves a space, something else fills it. And in this case it’s the worst substance of all: operators. They’re taking advantage of Android’s openness to pre-install junkware, restrict access to applications that are bad for them but good for users (such as Skype), and impose arbitrary limitations. As of now, they are the single greatest danger to Android.

Just like democracy, technology needs leadership: you need to inspire, educate and, yes, regulate. And then let people vote with their fingers. Hopefully not the middle one.
