
How much should we worry about sexism in tech?


While browsing for distractions on my way to the airport, I stumbled upon Kat Hagan’s post “Ways men in tech are unintentionally sexist”, hosted on Anjani Ramachandran’s One Size Fits One. The post is part of a larger debate about women’s presence and recognition in tech, a debate that at its worst exemplifies the gap between how socially relevant issues could be discussed and how they actually are: we could make smart use of the wealth of bright minds and insightful data made available by the digital age, yet instead we pursue click-bait headlines and artificially inflated scandals.

To her credit, Kat Hagan has a much more thorough and thought-through approach, referencing scientific theories and academic papers, to illustrate how men can be unintentionally sexist when approaching/designing/managing technology and its development. We need more of that.

 

She then makes a list of behaviours to avoid, some of which are very reasonable and uncontroversial, such as using “guys” to address a group of mixed genders, or ignoring women’s needs (the example of the lack of period-tracking functionality in Apple’s new Health app is particularly spot on).

Other recommendations, though, may sound entirely sensible at first (as confirmed by readers’ comments), yet hide a logical flaw that often recurs in discussions around sexism and other forms of discrimination:

you can’t scale linearly from individual to mass.

While there are great variations between individuals, as you get to big numbers you see statistically significant similarities between people of the same gender. We all agree that we should treat each individual on their own merit, but should we extend that to millions, or hundreds of millions, of people in the face of these similarities? Should we ignore them? Or worse, deny them?

 

I’m going to give some increasingly uncomfortable examples to show that things are more complicated than the sexism debate seems to account for, and that there are difficult questions worth at least asking ourselves.

 


1. Assuming gender identity

Kat argues that using avatars that are male by default is a form of sexism that should be avoided, but the underlying issue is whether we should allow ourselves to assume that a user is of a certain gender, and at what cost. The avatar example is an easy way out of the problem, because you can always go for neutral (although when I registered to Pinterest, with its overwhelmingly female membership, I’d have had no issue being presented with a female icon). Things get trickier when it comes to design choices that don’t always have an optimal neutral solution: colour palette; sizes; font; images of a user, such as a face, or a hand. If the numbers proved that there are significant differences in preference among the genders, and our platform were skewed male or female, should we ignore it? Should we opt for a neutral solution even if it doesn’t please anyone, as long as it doesn’t displease one or the other?

 

 

2. Assuming gender differences

Kat’s point no.8 is “Stop denigrating things by comparing them to women or femininity”, like saying “you fight like a girl” or “you like chick flicks”.

This is a campaign by Always. Who couldn’t like it? Who couldn’t agree with it?

Unfortunately it’s hypocritical, because it hides an uncomfortable empirical truth. In our experience (and there may be times and places where things are different) most girls fight “like girls”; most “chick flicks” are viewed and liked by girls; just like most “jerk” acts and comments are made by stupid males, and most horrible sex comments are mouthed by male “pigs”. Is it true that “like a girl” tends to be an insult whereas “like a man” is celebratory? Yes. But we have other derogatory terms for men: jerk; pigs; a**-hole; wanker… They’re all unequivocally male.

Should we replace “fight like a girl” with “fight like a bitch”? Is this what we’re talking about?

On the other hand, we can decide that we’re better off as a society by being hypocritical and treating these uncomfortable empirical truths as if they didn’t exist, but facts tend to be stubborn things, and in the long run hypocritical conventions end up damaging the broader issue they’re supposed to protect because they make it come across as artificial and false.

 

3. Assuming gender interests

Kat argues that “assuming the women they meet are in non-technical roles” is a form of sexism: this is certainly true if you meet them at a tech conference, the (once again too easy) example she chose to illustrate her point; it’s a lot less true if you’re introduced to a new team with mixed roles, or if you’re meeting students at a grad fair. You can legitimately assume that someone interested in Computer Science is more likely to be male because the numbers prove you right, so if, hypothetically, you only had time to speak with one applicant and had no knowledge of their background, picking a man would not be a form of sexism; it’d be playing the odds.

Of course that doesn’t mean that you should rule female applicants out:

it’s ok to prepare for the usual, as long as you welcome the unusual with open eyes and mind.

But this is an easy-to-agree principle, so let’s move on to more troubling questions: if you’re a parent of a young girl, and you have to enrol her in an extra class of either literature or coding, knowing that right now she’s interested in both (or neither), what should you do? And if you were to build a new dorm for your future Computer Science students in a country where men and women can’t share facilities, would you split the space half and half?

 

4. Assuming gender capability

Kat contrasts the prejudicial view that “Women just aren’t interested in programming/math/logic” with evidence that “the variation between individuals dwarfs any biological difference”. Although counterintuitive, both statements are true: there are massive variations between the capabilities of any two random individuals, and that’s why we should always be judged on our own merit; but at the same time when it comes to large numbers, men are marginally better performing and significantly more interested in mathematical and technical disciplines.
We design technology for millions, sometimes billions of users, and even a marginal difference in response can amount to a dramatic increase in adoption, revenues, and success. Should we ignore that for the sake of equality? Should we do more than that?

 

A famous experiment from a few years back showed that what we consider an absolute (e.g. how good someone is at something) is anything but: female Korean-American students were given a math assignment after going through a process that would remind them either of their gender or of their heritage. Participants who were primed on their Asian roots (positively associated with math skills) performed statistically better than equivalent students who were primed on their female gender (often associated with being bad with numbers).

If we’re pursuing equality, should we actively design technology requiring quantitative skills in a way that makes women forget that they’re women? Are women actually better off in a “sexist” office that calls everyone “guys”?

 

I’m not suggesting an answer to any of these questions, but I think it’s worth asking them. Human behaviour is counterintuitive and complicated: individually, we’re very different; in groups, we influence one another and form clusters; when you have to design for large groups, you inevitably sacrifice the uniqueness of each individual.

 


 

The point is not how to avoid discrimination.

We always discriminate: when a newspaper publishes an article with a certain font size; when a supermarket places one product on an eye-level shelf and another high up; when certain colours were chosen for traffic lights.

We always discriminate in technology, too: when we decide what operating system we develop apps for; what apps we preload onto a device; what features we include in those apps.

The point is how to discriminate well.

If we look at the world we live in, we follow a few principles:

  1. Discrimination must have a purpose: newspapers were printed in only one font size because, before digital came along, it would have been economically inefficient to do otherwise
  2. It should be optimal for a sufficient majority: traffic lights are a bad solution for the blind and the colour-blind, but because most people don’t have such problems, it is the solution we chose
  3. It should not make things too hard for the minority: if you’re too short to reach a product on the top shelf of a supermarket, you can ask someone to help you
  4. Sometimes, it requires people to adapt: if you move abroad you can’t expect people to learn your language, you have to learn theirs. It’s a discrimination against new immigrants, but the alternative would be so inconvenient that they just have to comply.

 

When it comes to technology, we need to be aware that these trade-offs are an inevitable part of the job regardless of how uncomfortable they are, and so are the questions they bring along.

If Apple didn’t include period tracking in their Health app because it would have come at the expense of another feature or of a faster performance that would have made the product better for most of their users, would it still be wrong? And would it be an ethical question or a commercial question?

Would the answer change if there were fewer alternative health apps on the market?

 

How much worrying about sexism is too much?

And if we say it’s never too much, let’s rephrase that: how much disregarding of statistically different behaviours among genders is too much?

How much gender-neutrality can we pursue, without being counterproductive to the success of what we do?

How much equality can we enforce without being patronising?


Marketing and the sharing economy: get smart before someone else does.


(An edited version of this post first appeared on Campaign Asia)

Here’s a revealing exercise that we never do: the next time we go home, let’s take out pen and paper and start making an inventory of everything we own. How much of it do we use? How much do we need? How much do we want?

This is not a clichéd hunt for the pair of trousers we haven’t worn for the past 7 years, or for the picture frame that we never even unpacked. It’s something more fundamental than that.

Back when our personal understanding of the world was formed, we either had something, or we didn’t.

Of course there were services that we had access to and never really owned, such as public transport, schools and streets, but they were exceptions of such a large scale that we instinctively felt they belonged to a different category. When it came to goods we consumed, we either owned them and used them, or didn’t own them and didn’t use them, apart from occasionally borrowing them from a friend.

As a staple of my formative years goes, that’s how we ended up owning “a fucking big television, cars, fixed interest mortgage repayments, leisurewear and matching luggage, DIY and wondering who the fuck you are on a Sunday morning”.

Some of it makes perfect sense, but do we really need to own a drill that we use once every two years, and generally with embarrassing results? How about a lawnmower?

It’s not like we thought it was a good idea at the time:

we knew it wasn’t, but it was the only one we had.

The emergence of the sharing economy over the past decade was built on the hypothesis that the ownership model was not the commercial equivalent of the end of history, but rather an incidental situation dictated as much by the alternative opportunities we were missing as by the wealth we had acquired: the advent of a networking technology and culture is providing the platform to test this hypothesis and investigate what goods we’re willing to part with, and what instead we would still like to call our own.

While this process is still in its infancy, as changes in human behaviour are much slower than the marketing news cycle, we can already identify some driving forces.

It’s natural to desire

Croesus and Solon — 1624; Gerard van Honthorst; Kunsthalle Museum — Hamburg

 

Let’s start by dispelling a common myth: the literature blaming advertising for making us “buy things we don’t need with money we don’t have to impress people we don’t like” is as large as it is superficial.

The truth is that, despite what we may think of ourselves, advertising is not that all-powerful, and it was never so much about “inventing desire” as about inventing responses to desires people already had. The fundamental human motivations are always the same, and they’re not going to go away: the network economy offers the opportunity to design new solutions to fulfil old desires. A service like “Bag, Borrow or Steal”, for instance, still gives you access to high-end designer handbags that make you stand out, but lets you borrow them on a monthly basis instead of buying them.

Of course, there’s a reason why, although we could have done this 10 years ago, we’re only talking about it now.

Events change minds


Environmentally-conscious activists spent the best part of the last decade trying to persuade everyone who’d bother to listen that we should buy less, eat less, consume less: they failed.

Then the financial crisis hit, and the middle- and lower-class in the West found itself forced to downgrade and downscale. When that happened, our brains played a trick on us:

Behaviour often shapes attitude more than the other way around, so finding ourselves unable to own more due to financial circumstances made us post-rationalize it into a better option in the first place.

This accelerated our critical view on consumerism, to the point that now, according to “The New Consumer and the Sharing Economy”, a global survey by advertising agency Havas Worldwide, 46% of people in 29 countries ranging from Argentina to Vietnam prefer to “share things rather than own them” and 56% resell or donate old goods rather than throwing them away.

While we’ve been forced into this disposition by events beyond our control, it’s entirely possible that it will leave roots deep in our minds, and we won’t necessarily revert to the same old habits even once we have the means to do so.

After all, just like the financial crisis gave millions of people the motivation to experiment with a new behaviour, that same new behaviour is in turn giving thousands of marketers the right motivation to experiment with new go-to-market strategies.

Flightcar.com gives you free airport parking by letting you rent your car while you’re away.

IKEA ran a two-week promotion turning its Facebook page into a digital flea market where people could buy and sell used furniture.

UK’s DIY leader B&Q created “Streetclubs”, a service that helps neighbours come together and share tools and other household items.

While these three examples are all enabled by digital technology, it took a double shift in mindset to make them happen: without a crisis that generated talk of a “new normal”, ideas like these might still sit on the fringe of what’s acceptable by mainstream consumers; and in turn, a decrease in traditional spending paired with an openness towards new models gave the most innovative marketers a licence to pursue innovation more radically than they would allow themselves to when the economy was growing.

If anything, what’s holding back more of such experiments on a larger scale is a conservative corporate culture that is fixated on selling the same products rather than fulfilling the same needs, and that underestimates how radically different alternatives can reshape whole industries and leave consumers better off in the process.

A call for “smarter marketing”

This is our brain when we hear the word “New”

This is why the popular call for “smarter consumption” is somewhat misplaced. Consumers respond to the environment they’re provided with, and while they now have a greater power to affect it than ever before, it’s at the same time irresponsible and dangerous for marketers to wash their hands of the problem.

As we said, people’s desires don’t change, and if we don’t find new ways to fulfil them, they’ll stick with the old ones. In particular, as we humans constantly long for all things “new,” fans of sustainability should not delude themselves into thinking that consumers can be convinced to keep what they have until it breaks.

They don’t replace the old with the new because we manipulate them into doing it against their instinct; they do it because it makes them feel good.

We should find ways to generate that same feeling without turning Earth into a waste bin, or we’ll be responsible for it because this is our job, not theirs.

Nobody needs a new tablet every three months, so how do we make old tablets feel new? How do we make a new use of tablets without making new tablets?

And since nobody needs 100 different tablet models, how do we produce just enough to keep people happy and the market innovative, and make a better use of the time and resources we liberate?

These are marketing questions for marketing professionals, and eventually someone will answer them: that’s why “smarter marketing” is not just a moral call, it’s a competitive requirement.

While hotel groups were busy building more hotels because that’s the business they saw themselves in, Airbnb created millions of accommodations without laying a single brick.

H&M increased their inventory without a stitch being sewn, collecting 7.7 million pounds of used clothes to be resold or converted into other products.

The Walgreen drugstore chain partnered with Taskrabbit, an online small-jobs marketplace, to deliver over-the-counter cold and flu medicines to customers unable to make it to the store, effectively growing a ubiquitous sales force without hiring a single new employee.

Zopa, the UK’s leading peer-to-peer lending service, has issued loans in the amount of 500 million pounds without branches or upfront capital.

These examples are not about the clichéd “doing more with less”,

they’re really about “doing better”.

An old marketing quote states that “people don’t buy quarter-inch drills, they buy quarter-inch holes”. There are now more potential alternatives to drills than ever, and people don’t even need to buy them. So what’s the smarter way of giving them that hole?

Is Salient the new Viral?

Do you know someone who seems to regularly say exactly what you’re thinking, but using better words? To me that’s (Jed Bartlet and) Martin Weigel. Good thing he has the good taste to voice his opinions before I do, so I can at least avoid the embarrassment. (Although that also implies that he’s either way more efficient than I am to find time for it while producing brilliant work, or he’s just as lazy but gets to those ideas faster than I do. Both scenarios are rather discouraging.)

Case in point, his list of “Words I hate” that I would sign with my own blood (except for strategists: I don’t like “planner” because it leads to an abundance of plans and a shortage of ideas), and his general disdain for Adland rhetoric.

That’s why I’ve been scratching my head over his two long posts about Differentiation v Saliency. He does a great job of combining an extensive range of sources to make the argument that:

  1. Consumers are just not that into brands. Virtually any attempt to engage them in a relationship, join a conversation or expect them to respond to the intricacies of your brand are futile, delusional and egotistic. (Spot on!)
  2. Shopping metrics show that consumers are highly disloyal, purchasing from a basket of brands in each category and disproportionately rewarding the market leader. Consumer segmentation models that distinguish between the Brand-X woman and the Brand-Y woman are a work of fiction. (Can’t argue with that…)
  3. This is backed by research showing that consumers can’t differentiate between brands, across almost all brands and categories. Differences in brand attributes are overwhelmingly explained by scale. (Hmm….)
  4. Consequently, our efforts towards differentiation have been misplaced. If consumers don’t spend enough time in their purchase decisions, then there is no point explaining the differences between products. We should get out of the persuasion business. (Hmm hmm…)
  5. We should instead find creative ways to turn our generic, un-ownable products into something exciting and worth remembering. This is what it really takes to trigger a purchase. (Ouch…)

When I first read the articles I couldn’t reconcile how much I agreed with their initial points and how unconvinced I was by their conclusions. I thought it boiled down to a contradiction (Did we fail to create brand differentiation or did we succeed but it was proven worthless? You can’t have it both ways…), but there is more to it. So let’s complicate this:

Brands are not people, my friend

Let’s get the first two points out of the way: most normal people want to engage with other human beings, not with commercial abstractions.  They don’t want to own your brand, nor are they keen to join any conversation with it. Virtually all segmentation models produced by the corporate world are bull-s**t. End of story. I know it, you know it. Let’s move on.

Spot the difference

There’s a difference between saying that brands are undifferentiated and that most brands are undifferentiated. While it’s true that we have plenty of examples of interchangeable brands, we also know some that are widely recognized as different, with research to back it up: Volkswagen v Chevrolet, Barclays v The Cooperative Bank, Innocent v Minute Maid, Jil Sander v D&G, Singapore Airlines v American Airlines…

There’s more: there’s a difference between saying that “consumers don’t differentiate between brands” and that “according to research, consumers don’t differentiate between brands”. The output of a piece of research is only as good as its input. Most brand equity researchers test fundamental category attributes with very traditional questions, and what you get out of it is not very insightful. Take sportswear: if you run a traditional test on items such as “modern”, “athletic”, “successful”, you’ll probably get very similar results for Nike and Adidas, with differences explained by the relative size of the user base. But if you instead ask who would win in a street fight, you get much more revealing results. I know because I asked.

Let’s face it: we’re really not that good

This is a point I feel very strongly about. Martin looks at how central “differentiation” is in the marketing textbooks, and concludes that if we failed despite all our efforts, then it must be unattainable. I have a very different point of view: we’ve been rubbish. You only need to walk into virtually any meeting room of virtually any company in the past 40 years to see the same words written on virtually any brand identity model: how many banks are about “fulfilling dreams” and being “by your side”? How many mobile operators about “being better together”? How many posters have we seen with headlines such as “Capture life”? Or “Never miss [X]”? And how many “Inter-racial-urban-young-adults-raising-their-hands-at-a-gig”?

We should take a good look at ourselves as an industry and admit it: garbage in, garbage out.

Of course, some brands make the opposite mistake: in an effort for textbook hyper-differentiation, they look for the tiniest granular ownable property (2% more whatever-unpronounceable-ingredient) and expect that people will care. This is true, but we shouldn’t benchmark our strategies on this kind of rubbish. The quest for ultimate ownability should have been pronounced dead ever since the question “But can’t our competitors also claim X?” first received the answer: “Yes, but they’re not.” Let’s move on.

Let me entertain you (?)

The traditional Christmas cake in Italy is called “Panettone”. It’s a very simple product: a sweetbread filled with raisins and candied fruit that is mostly produced industrially and, to be perfectly honest, is not what you would call an unforgettable culinary experience. It’s the kind of product you only think about once a year: every Italian family buys one for Christmas lunch or dinner, with an attitude that is more about ticking a box than anticipating a festive delight.

You can now understand the challenge that a friend of mine was faced with a few years ago, while working on a brief for a brand of Panettone that was going to spend the same budget as its 4-5 major competitors, who were targeting the same consumers with the same message (i.e. “Yummy!”). The fans of “saliency” would advocate saying pretty much whatever you want as long as it’s not repulsive (“we’re not in the message business”), but doing so in a compelling, exciting, memorable way. My friend did something different and, well, complicated things a bit. He bet on the hypothesis that even though Panettone is a box-ticking purchase, it can be about more than taste: while everyone else claimed yummy, he put all his chips on “soft”. He believed that the weekend before Christmas shoppers would flock to supermarkets and, faced with a half dozen equally legitimate brands and similar packages that all claimed to taste good (who wouldn’t? and how could you believe it anyway?), they wouldn’t know where to turn. He knew they’d want to buy something that their children wouldn’t complain about, and there was his answer: “soft.” Children like softer cakes more than harder ones. And not just that: old Panettone gets hard, so you can deduce that fresh Panettone is soft; as for another non-negative, soft also makes it seem less likely to be dry.

Did he convey that in a memorable, compelling ad like the Cadbury Gorilla? Not really, as you can see below. But it was enough for Panettone Motta to achieve record sales that year. And the following. And the one after that.

What’s the big deal?

So why am I writing a ridiculously long post about something that was written months ago by a guy whose other opinions I agreed with before and since? Because I see a risk hidden behind that argument, the same risk I see in Dave Trott’s words advocating that being interesting is more important than being relevant. It’s not just that there is no silver bullet (though that’s always worth repeating); it’s also that we fail to grasp the complexity of our job.

I believe that the single most important contribution a creative agency can make to a brand is making it distinctive. Not just distinctive among all the other distractions we’re exposed to today: I agree with that, but it’s not enough. We must also make it distinctive among the competing options that shoppers are forced to consider, especially when they’re frustrated about it.

No one is happy about how electronics retailers display dozens of TVs forming an endless black wall. But this is how things are, and we can’t pretend otherwise. We also can’t pretend that shoppers will walk into an electronics shop and not be shaken by such a wide choice, no matter how preeminent brand X was in their head before they walked in. “Sony Balls” was a great ad not just because it was memorable, but also because it gave shoppers a cognitive shortcut to navigate that choice: “Colour”.

Martin Weigel recognizes this when he quotes Romaniuk and Sharp (Conceptualizing and measuring brand salience, 2004) and their recommendation to consider a range of attributes associated with the brand in any measure of salience, but we should also be aware that this is not very different from what we’ve been trying to do for the past few decades. We simply haven’t done it very well, for many reasons.

If we instead celebrate “saliency” as a Copernican Revolution, the process of dumbing everything down that has been dooming our industry will more than likely turn it into a new buzzword, like it did with “viral”, and we’ll soon hear clients asking us to give them something “salient” like they used to ask us for a “viral”: this terrifies me, because the quest for the “new exciting wonder”, coupled with the unlimited creative possibilities of the digital age, is more likely to produce the most amazing collective waste of resources that Adland has ever seen than anything really valuable.

I’d rather do what we should have been doing, and do it well: investigate our product; explore what makes people tick; see if there’s a connection between the two; make it easy for them to find it; get them excited in the process, but not more than they’re willing to be.

If we do all this, and we do it well, we’ll make our brands salient. Chances are, we’ll make them viral, too.


Why the Dark Knight doesn’t rise high enough

Let’s get one thing out of the way: The Dark Knight Rises is one of the best films of the year, and one of the best superhero movies ever. However, it fails to live up to the expectations generated by the first two films, and that says a lot about the ambition of a franchise that has deconstructed super-heroism and addressed complex philosophical and sociological themes, while at the same time making more than 1 billion dollars at the box office. (“Lost” is the only other example of such an intellectually ambitious yet commercially successful project I can think of in recent years…)

The most common criticism of TDKR you can find around the internet boils down to one word: the film is “bloated”. Christopher Nolan is accused of having tried to pack too much into a story that ended up running at 165 minutes.

That’s true, but only at a superficial level. I believe Nolan should have added something more to the film, something of critical importance, but talking about it requires a great deal of spoilers.

[SPOILER ALERT]

[ No, seriously. If you haven’t yet seen the film, go watch The Daily Show or something…]

What Christopher Nolan didn’t include in TDKR, the one thing that would have tied up all loose ends, is Bane’s motivation, and the story behind it.

Is Bane a populist or a terrorist?

In the film he’s both at the same time, and that not only makes him a superficial and foggy character, it also makes his job harder: “I’m here to free you, and I have a nuclear bomb!” is not exactly the most compelling rallying cry.

Had Bane been given a clear motivation, he’d have come across as a stronger, driven figure, and this would also have made his interactions with the other characters more natural, helping them develop their own narrative arc.

So, what could Bane have been?

A terrorist

Bane as a pure terrorist pursuing the destruction of Gotham City would have borrowed from the two previous movies: here you’d have a character who wants to see the world burn like the Joker, but does so out of a moral imperative like Ra’s al Ghul, rather than for pure madness.

While not very original, this take would have still raised interesting questions: Does Batman’s choice to spare his enemies result in them coming back with new, stronger faces? How will Gotham react to widespread, asymmetrical war after 8 years of peace? Will the already tough Dent Act be made tougher, and how will people react? Will common citizens invoke the return of the Dark Knight, so that the devil they know can save them from the devil they don’t? Will Bane and Miranda Tate destroy Bruce Wayne’s reputation framing him as the reclusive, paranoid mastermind behind the nuclear threat?

A populist

This is when things get more interesting. As a populist intent on bringing down the old order, Bane can be the perfect Batman mirror: a charismatic figure that challenges Gotham’s established powers and inspires people to follow his example and rebel. This take would also make many other characters more credible: it would give Selina Kyle a compelling reason to help him and bring Batman to him (in the film she seems to do it just so that the plot can move on); it can offer the citizens of Gotham an interesting role, instead of just fading in the background; it would cause an underprivileged Blake to struggle with what side of the law to stay on; it would also make Batman even more of  a troubled anti-hero, as for once he could be seen as an instrument of Bruce Wayne’s wealth, instead of the other way around.

However, doing this would require setting up an underlying social struggle between a wealthy class that has been taking advantage of the economic growth we can expect after 8 years with no organized crime, and a middle and lower class that may not have seen its quality of life improve since the days of Falcone and the Joker. Building this setup within the context of a super-hero movie and the limits of an already stretched script would surely have been a challenge, but it's one that I'd have loved to see Christopher Nolan tackle, as it would have brought out the best of his talent as a storyteller and a visionary director. Given how much the trailer hinted at this ("A storm is coming…"), and the narrative and genre-subverting potential of such a framework, I believe that this is THE missed opportunity of TDKR.

A populist that is later revealed to be a terrorist

This might have been the most coherent solution, in light of the previous films and the characters' development. Bane could have introduced himself to the people of Gotham as a populist leader, offering them freedom and inviting them to overturn the establishment; with violence spreading, he'd prove to Batman that Gotham is beyond saving, as ordinary citizens are turned into vandals, robbers and killers; at this point he'd be ready to unveil the bomb, having broken Batman's faith in his city and given Gotham's citizens a glimpse of hope before the despair.

This character evolution would also give Miranda Tate more time to develop a proper relationship with Bruce Wayne (as opposed to rain/kiss/sex), introducing her as a fellow member of the establishment at the mercy of an angry mob, before revealing her to be Talia al Ghul when the bomb is announced. It would also make Selina Kyle's arc more credible, going from sympathy towards a populist to fear of a terrorist, with no need for an unlikely MacGuffin such as the "clean slate".

While this seems a more complex arc, it would actually have resulted in a more linear and credible plot, one that would have laid the groundwork for stronger character interactions and compelling moral dilemmas: Is inequality a moral issue or a security issue? What are regular people willing to do when there is no law? Who are we willing to believe in?

At the end of the day, the catalyst for action in super-hero stories is the super-villain. If that character is not perfectly crafted, everything else will tend to fall apart, and it’s a testament to how good a director Christopher Nolan is that he still manages to make The Dark Knight Rises a really good film.


Unlike-minded slides

A rallying cry for the unlike-minded. With fewer words and more pretty pictures.


No more like-minded people, please.

If you work anywhere near journalism, the question keeping you up at night is probably "how do I get paid?". But if you've sorted that out, or if you've given up on it altogether, chances are it's been replaced by "what's the role of journalism in the XXI century?" Specifically: can journalism still play an educational role? In an age where every media organization is competing with more and more news outlets for a shorter and shorter share of attention, can we afford to feed people the stories they need to know? Or should we just serve them the stories they want, in the quality and size they want?

This conundrum is perfectly captured in online news sites that consistently favor popular headlines that people are most likely to click on over more nuanced or instructive stories that their readers are unfamiliar with. This has generated new metrics that journalism is being held accountable for, as demonstrated by a leaked document that outlines strategic objectives behind the AOL-Huffington Post deal:

“[the route towards survival is] to drive the average cost per unit of content down to $84 (from the current $99) and use “search engine optimization” and other techniques to attract an average of 7,000 page views per item, up from the current 1,500.” James Fallows, “Learning to love the (shallow, divisive, unreliable) new media”

The business plan behind this is quite simple: if you want enough eyeballs to keep your website (marginally) profitable, you have to give people what they want. It's nothing new, as it's the same business model that made television thrive in the XX century, but applied to the news it raises a number of concerns, most of which are explained by Ted Koppel in "The case against news we can choose". To sum up his argument, here is the most significant paragraph:

“Beginning, perhaps, from the reasonable perspective that absolute objectivity is unattainable, Fox News and MSNBC no longer even attempt it. They show us the world not as it is, but as partisans (and loyal viewers) at either end of the political spectrum would like it to be. This is to journalism what Bernie Madoff was to investment: He told his customers what they wanted to hear, and by the time they learned the truth, their money was gone.”

This is dangerous on a number of levels, and it’s a debate that has been going on for a while, but I think it’s worth taking this conversation outside of the media environment. To me, for instance, it’s about eggs.

***

Waitrose is an upscale British supermarket chain that, like all British supermarkets, is going above and beyond to keep up with the latest food culture: meat comes from local farmers; tuna are best friends with dolphins; whatever you can think of, you can have it organic. And eggs are not only free-range, they are "reared with care by farmers who share our values".

Of course I understand what lies behind it (sustainability, care, reassurance), but there's a point where rhetoric goes too far. And when I find myself pledging an ideological affinity just by buying half a dozen eggs, I tend to think we've crossed that line. We've always made fun of the most radical ideological consumers who let their beliefs dictate their every purchase, whether they were extreme fashionistas or no-logo activists; the difference is that what was once a fringe behaviour is showing the first signs of stepping into the mainstream.

Over the past few years businesses have borrowed the rhetoric of the internet ideologues (transparency, collaboration, generosity…) and supercharged their brands with Values. There's nothing wrong with adopting those values and a lot of good can actually come from it. It's when you use those values to create belonging to a closed, like-minded community that things get dangerous, because communities based on belonging inevitably end up building fences around themselves.

Now, no-one will grow intolerant of those who buy eggs from farmers who do not share their values. The problem with rhetoric arises when it builds up into a consistent message that we find everywhere we turn, and starts affecting our view of the world. Over the past 10 years we've been exposed to more and more of the same message: follow the news you like, buy products that your friends have bought, read books recommended by people who share your taste… (It was inevitable that we'd end up with Cupidtino: a dating site for Apple fanboys and fangirls.)

The internet has provided the most efficient platform to enable this, but it’s not a technological trend; it’s a cultural trend.

We want more of the same. And this is understandable, at the very least for two reasons: the world is a scary place and we’re afraid of the unknown; and the world is a messy place and sorting through that mess to find new, hidden gems requires time and effort that we’d rather spend somewhere else.

So what’s the problem?

***

The problem is that self-segregation has never been a good idea at any point in history, and there are more reasons now than ever to say that we can't afford it.

Like it or not, we’re an increasingly diverse society, and we need to make common choices. We won’t be able to make those choices if we don’t even speak the same language or agree on facts, let alone find a shared interest or define the common good.

And like it or not, the greatest challenges that we’re facing in the XXI century, from energy dependence to welfare reform to international cooperation, require radical new thinking, and innovation doesn’t come from hanging about with like-minded people, nor from experiencing more of the same.

That’s true for individuals, that’s true for governments, that’s true for businesses and brands.

We have a shared responsibility to get out of our comfort zone and pursue the unexpected, day in day out. As for me, I’ll start looking for a farmer who doesn’t share my values but can still sell decent eggs.


Run before you walk

Walk before you run: the adage all of us must have heard over and over, when assessing a new project, investment, process… And who wouldn’t agree with it? It’s plain common sense. After all, that’s how human beings work, so it should be the same for organizations, shouldn’t it?

It sounds like the reasonable thing to do, but it’s not. We can’t afford it.

The thing is, as human beings we are wired to grow into running, and to do it by a certain stage. We may start with walking, but we soon start moving our feet quicker and quicker. And then we trip. No big deal. We cry a little, pick ourselves up, start walking faster. And trip again. And pick ourselves up again. Until we look around, and we’re running. (With that realization we usually trip again.)

If human beings evolved into running the way organizations do, we'd have a committee forecasting the number of attempts and the energy required by running against its supposed benefit (assuming they could see one, as you could reach the very same destination just by, well, walking); then they would invest in a simulation to predict the optimal body balance and step sequence to achieve the ideal run; there would be contingency plans in case of trips, an event whose chances would have to be minimized anyway so as not to upset stakeholders; finally, they'd impose a deadline by which return on investment would have to be proven, or they'd pull the plug on this new "running" scenario, leaving someone else to experiment with it.

If we evolved in our lives the way we evolve in our work, we’d still be researching “Global benchmarks in running” by the age of 8.

Instead, we inevitably try running way before we’re ready for it. The urgency of the inevitable. That’s how things get done.


It’s not about the network, it’s about you.

The first premise to this post is that I’m allergic to debates about the internet killing this or that, of the kind that seem to regularly get online pundits excited, but fail to produce sound thinking.

Consequently, the second premise is that I’m not going to ask “Is the internet killing strategy?”, and I really didn’t think I’d have to write a post such as this: I’d expect that we’d all get very excited with the new toy, and at the same time we’d keep in mind that while marketing is fundamentally changing, the fundamentals of marketing are not.

But then I read "From consumer insights to network insights" (via @BBHLabs) by Patricia McDonald (@PatsMc) and I start feeling uneasy: her core argument is that, because we live in a society that is more connected than ever, we should worry less about "consumer insights" and more about network insights, i.e. who is sharing what, why and how.

I think this argument is partially right, and substantially wrong. Consumer insights and network insights are both valuable and they serve two different purposes at different stages. Seeing them as conflicting fails to acknowledge how three pillars of marketing have not changed:

  1. Competitive strategies are more important than ever. With the explosion of digital, it seems like brand and creative agencies took some time off strategizing: we've been experimenting with the new challenges and opportunities set forth by digital, and that was enough to keep us busy. Moreover, doing the right thing for the new digital age was enough to be distinctive and credible: IdeaStorm made Dell stand out as a different brand because it was the first one to start a frank dialogue with its customers. That's now one of the rules of the game, like being open, participative, useful, innovative, reactive… We can argue whether these are necessary conditions for a brand to thrive in the XXI century (I don't think they always are), but they're definitely not sufficient. It still boils down to how your participative brand is different from all the other participative brands out there. We realized a long time ago that a market where brands try to come up with ever more ingenious ways of telling the same story isn't as rewarding (and profitable) as one where different brands tell different stories. That is still true today.
  2. A strategy is about what you stand for. Whatever it is (a message, a vision, a product…), a strategy is first and foremost about you. Because that's what people are interested in, or not. We can fool them by hanging out in the cool places where they hang out, and using the cool language that they use, but ultimately their interest in us depends on the story we bring to the table: that's why we have to start from there. The more distinctive, credible and relevant our story is, the more compelling it is. That's when consumer insights (human insights, really) are precious.
  3. Your strategy precedes and informs your network. Advocates of propagation planning are putting great effort into crafting a framework that helps brands engage with networks (view this to find out more) and that's making a positive contribution to campaign development. However, the celebration of networks can go too far, leading some to the rejection of the concept of media-neutral planning. While we can all agree that the deployment of a strategy should be devised in a way that best leverages media, I believe that the origination of the strategy itself should not take that into account, for a very simple reason: if you base a strategy on the network, you end up with a strategy about the network. And that's not what you want unless, well, your brand is the network. We have way too many examples of brands offering what proves to be popular social currency online, but failing to produce any value whatsoever for the business, because that social currency is not rooted in a competitive advantage. Let's be completely honest: whatever brands do must be beneficial to the business, and the underlying assumption that successful businesses offer products and services that benefit consumers is there to ensure that at the end of the day everyone's happy. Our job is to use the network to benefit the brand, not to use the brand to benefit the network. If our competitive business strategy is relevant, we'll make people happy in the process.

 

***

 

A framework of reference

U.S. Presidential Elections are possibly the most complex and advanced form of creative campaigns, as they deal with many of the new and old challenges that we have to face every day, but in a condensed and frenetic fashion: engage different, sometimes conflicting, demographics; steer influence; orchestrate a multi-channel plan; respond to unforeseeable events in real time; change people's minds and lead them to action…

The team tasked with making sense of this complexity is essentially headed by two figures: a strategist and a campaign manager.

The strategist is in charge of the message, ranging from the overarching campaign theme (what we’d call a brand idea) to the candidate’s stand on policy themes.

The campaign manager is in charge of the plan: what States are required to reach 270 electoral votes, what resources are allocated to win those States, what team is required to manage those resources.

The two clearly work together, but it's the message that frames the campaign, and for a very good reason: if you let the message lead, you set the agenda; if you let the network (e.g. the media, the activists…) lead, you're at the mercy of their agenda and end up being defensive and reactive. This is neatly summed up in a quote from The West Wing (s07e02): "People think the campaign's about two competing answers to the same question. It's not. It's a fight over the question itself."

That’s why you need to lead with a message that is based on your competitive strategy: because the question that you want people to ask each other should be the one that is best answered by your product or service.

Replace “strategist” with “brand/account/strategic planner”, “campaign manager” with “media/comm/propagation planner” and “agenda” with “brand strategy” and you have a model that creative agencies can easily relate to.

 

***

 

I’ve never met McDonald, but based on her stellar credentials I’m pretty sure she knows all this. I also believe that she’d agree with the very uncontroversial points I made in a post that I wasn’t even thinking would be worth writing.

If I did, it’s because something has been bugging me for a while: if we spend our time talking mostly about a side of our work, we can end up fostering a culture that believes that that side is what matters most; if we make our case with generalizations, we can end up fostering a culture that is generic.

That puts everyone who writes about, well, anything in a very difficult position, because we don't want details and exceptions and clarifications to clip the wings of our insightful and inspiring words. At the same time, we have to keep an eye on what the industry is talking about, and raise a warning when we think we're not doing justice to the complexity of an argument.

Every single thing we write online is a new chapter in the planning handbook that people open every day to find some guidance in the choices they have to make. I’d rather put too much into that handbook than leave something valuable out.


Julian Assange doesn’t matter

About 10 years ago, shortly after the birth of Napster, videos documenting police violence at the G8 in Genoa started to circulate on the p2p networks. Those who witnessed and filmed the events knew that their videos couldn't have made their way into mainstream media, because that would have cast the government in too bad a light, so they uploaded them to the grassroots network they were already using to share music.

Nobody noticed back then because it was a sporadic event, and when that same technology started crashing the entertainment industry, that’s what grabbed the headlines.

If anything, it’s surprising that it took 10 years for the unauthorized leak of sensitive information to go mainstream, and for Wikileaks to establish itself.

What is not surprising is the hysterical reaction of established institutions, which are behaving just like the music industry did, quite possibly with the same results.

Prosecuting Shawn Fanning didn't stop peer-to-peer, and did no good to the music industry. Prosecuting Julian Assange isn't going to stop Wikileaks. Even if Assange were taken out of the equation, someone else would set up a digital presence allowing people to anonymously upload any information they deem should go public.

There is significant demand, there is a digital infrastructure, there are very low barriers. No effort from any governmental institution can counter the opportunities for atomized publication and sharing that digital has brought along. And just saying that something is different from how things should be done, or even dangerous, is not going to stop it. Anyone who doesn't understand this doesn't understand the XXI century.

Technology is not a question that legislators can answer "yes" or "no" to. Once adopted by a significant number of people, technology is a fact.

(Update: Dave Winer makes the same point here.)


Complicating marketing: increasing sales

If you work anywhere near marketing, chances are you've been faced with the above objective, over and over, whether it was coming from your boss or your client. And more often than not, it didn't go beyond those two words: "increase sales". So let's take a look at what that means.

Economics provides a clear textbook response to that: if you want to increase sales, decrease the price. Neat, simple, and unacceptable to your average marketer.
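To put rough numbers on that textbook response (all figures here are invented for illustration): with a price elasticity of demand of -2, the kind of elastic demand many consumer goods face, a 10% price cut lifts unit sales by roughly 20%, and since demand is elastic, revenue rises too. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope elasticity arithmetic (hypothetical figures).
# Uses the simple linear approximation: %Δquantity ≈ elasticity × %Δprice.

def new_quantity(q0, elasticity, price_change_pct):
    """Approximate unit sales after a price change."""
    return q0 * (1 + elasticity * price_change_pct)

q0 = 1000    # current units sold (hypothetical)
e = -2.0     # price elasticity of demand (assumed, elastic)
cut = -0.10  # a 10% price cut

q1 = new_quantity(q0, e, cut)
print(q1)                 # 1200.0 units: sales up ~20%
print(0.90 * q1)          # revenue index: 1080.0 vs 1000 before the cut
```

Whether revenue rises or falls hinges on whether demand is elastic (|e| > 1) or inelastic; but for the average marketer the point stands either way: the price lever works, it's just off the table.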

So, here comes textbook strategy #2: advertise your product, and you will get more consumers, and that’ll increase sales.

However, we know that advertising is not always an option, or not necessarily the best one, so let's complicate this and look at some real-world strategies.

12 basic diagonal ways to increase sales:

  1. increase the dose of product per consumption (loyalty): people rarely pay attention to the amount of product they really use, and tend to go with a suggested or simple dosing; e.g., pretty much any laundry detergent
  2. increase the dose of product per pack (loyalty): while this tactic effectively reduces the number of sales, it increases current revenues at the expense of potential future purchases that might benefit a competitor, so it's worth including; e.g., family-size packs
  3. increase your product share per user (loyalty): there are cases where consumers use a number of products of the same category, and increasing your share within that range can lead to dramatic increases in sales; e.g., sodas
  4. increase the number of usage occasions (mostly loyalty): this is a common strategy for complementary or ingredient products; e.g., anybody Got Milk?
  5. increase the frequency of usage occasions (loyalty): an apple a day keeps the doctor away; e.g., Calgon "goes in every wash"
  6. get your current users to upgrade to a new product (loyalty): tech companies have learned how to make their perfectly functioning products feel obsolete, so that the most passionate users will sacrifice them in favour of the latest release; e.g., do I even need to say that?
  7. increase the number of users within your current households (acquisition and loyalty): your current users may be your best advocates, and they might not need to leave the house or get online to do it; e.g., Wii Fit
  8. get former consumers to come back to your product (acquisition and loyalty): it may be about reminding them of your existence, or getting them to give you a second chance, but it always boils down to trying to reignite the love; e.g., 122 years of Hovis
  9. get consumers to change the place where they buy a product (acquisition and loyalty): if you're competitively stronger in one channel than in another, get consumers to visit/buy more often in that channel; e.g., Polident in supermarkets v pharmacies
  10. get consumers to use your product in conjunction with something they're already using (acquisition): what is your product complementary to?; e.g., how many jars of Nutella are used at crepe stands all over the world?
  11. get consumers to quit a product from a different category and replace it with yours (acquisition): redefine what your competitive landscape is; e.g., chocolate for Valentine's Day
  12. create an entirely new market that requires your product (acquisition): create something valuable and give it away for free, but make sure it can't be enjoyed without your product; e.g., Sudoku and pencils

Can you come up with more?
