Google is your butler: the tension between utility and privacy

I’ve often defended Google’s thirst to know things about people with a butler analogy. Good software should, like a butler, try hard to understand your preferences and act on them for you without your even having to think about it. That means learning and remembering things you’ve done in the past, and using that history as the basis for recommendations. When you tell your butler ‘bring me dessert, please’, he should remember that you usually like chocolate, and that all this week you’ve been experimenting with different cakes, and therefore bring you another variant on chocolate cake. If he suddenly forgot that you like chocolate and that you’ve been having cake all week, you’d be irritated when he asked about those things again, or if he just brought you a cannoli out of the blue.

Ideally you want your butler to know at least something about what your friends and co-workers are doing too: if I say ‘bring me a shirt’, and the butler knows I’m going out with the cool kids tonight, then I want my trendiest shirt, based on what my friends think is trendy. But if I am going to the office and say ‘bring me a shirt’, I want the butler to know that my workplace is casual, but not too casual, and so on. I could of course tell him all these things every time he brought me a shirt, but it is easier for everyone if he just remembers, and perhaps does some outside research on his own.

Like a butler, you want your tools to work intelligently based on context and history, and Google is without doubt one of those tools: for many of us, the most important single tool in our computing lives. The difference, of course, is that your butler has a lot of incentives to keep your private information private. Sure, a butler can be bribed, but that is why you pay him well and treat him like a human being: to avoid those sorts of problems. Google’s incentives run at least partially the other way. They have strong incentives to mine that data extensively, to share it with others, and to collect far more of it than most people might think is useful, all in the name of being the ultimate butler. And those incentives lead to risks: risks that your data will be shared with third parties you might not trust; risks that it might be subpoenaed; risks that it might leak to Google employees or even outside Google; risks that effective advertising might use such information to manipulate your political views. On balance, most of us are going to look at these issues and decide that we’re OK with Google knowing these things, because the risks are remote and the benefits tangible. So we acknowledge there is a tension between privacy and functionality, and move on.

I wish that at this point I could announce some deep new insight about the balance between these two competing forces. I can’t; most of what there is to be said has been said already. What prompts me to write about it now is, of course, Eric Schmidt’s recent comment. What bugs me about that comment is that he doesn’t seem to realize there is a tension at all. His words don’t speak of ‘we’re wrestling hard with this question every day’ (a reasonable compromise position) or ‘we’re doing everything we can to collect as little data as possible’ (the pragmatic civil libertarian perspective). They speak of a company (or at least a CEO) that doesn’t realize, or doesn’t care, that there are balances and compromises to be struck and continuously reconsidered. And that, to me, is very, very troubling; more troubling than any particular policy position could be.

So I’m experimenting this week with other search engines, and once I finish moving I’ll be looking again at other mail clients and RSS readers. I really don’t ask much of Google in return for trusting them. I’m not an absolutist; I just need to know that they are continuing to treat privacy as a difficult, multi-faceted issue that constantly has to be evaluated and reconsidered. And if Schmidt is any indication, that isn’t what Google is doing right now.