Free as in … ? My LibrePlanet 2016 talk

Below is the talk I gave at LibrePlanet 2016. The tl;dr version:

  • Learning how political philosophy has evolved since the 1670s shows that the FSF’s four freedoms are good, but not sufficient.
  • In particular, the “capability approach” pioneered by Amartya Sen and Martha Nussbaum is applicable to software, and shows us how to think about improving the capability of people.
  • There are a bunch of ways that free software, as a movement, could refocus on liberating people, not code.

I did not talk about it in the talk (given the audience), but I think this approach is broadly applicable to every software developer who wants to make the world a better place (including usability-inclined developers, open web/standards folks, etc.), not just FSF members.

I was not able to use my speaker notes during the talk itself, so these may not match terribly well with what I actually said on Saturday – hopefully they’re a bit more coherent. Video will be posted here when I have it. [Update: video here.]

Most of you will recognize this phrase as borrowed from the Wikimedia Foundation. Think on it for a few seconds, and how it differs from the Four Freedoms.

I’d like to talk today about code freedom, and what it can learn from modern political philosophy.

Last time I was at Libre Planet, I was talking with someone in a hallway, and I mentioned that Libre Office had crashed several times while I was on the plane, losing some data and making me redo some slides. He insisted that it was better to have code freedom, even when things crashed in a program that I could not fix without reading C++ comments in German. I pointed out, somewhat successfully, that software that was actually reliable freed me to work on my actual slides.

We were both talking about “freedom” but we clearly had different meanings for the word. This was obviously unsatisfying for both of us – our common language/vocabulary failed us.

This is sadly not a rare thing: probably many of us have had the same conversation with parents, friends, co-workers, etc.

So today I wanted to dig into “freedom” – what does it mean and what frameworks do we hang around it.

So why do we need to talk about Freedom and what it means? Ultimately, freedom is confusing. When card-carrying FSF members use it, we mean a very specific thing – the four freedoms. When lots of other people use it, they mean… well, other things. We’ll get into it in more detail soon, but suffice to say that many people find Apple and Google freeing. And if that’s how they feel, then we’ve got a very big communication gap.

I’m not a political philosopher anymore; to the extent I ever was one, it ended when I graduated from my polisci program and… immediately went to work at Ximian, here in Boston.

My goal here today is to show you that when political philosophers talk about freedom, they also have some of the same challenges we do, stemming from some of the same historical reasons. They’ve also gotten, in recent years, to some decent solutions – and we’ll discuss how those might apply to us.

Apologies if any of you are actually political philosophers: in trying to cram this into 30 minutes, we’re going to take some very, very serious shortcuts!

Let’s start with a very brief introduction to political philosophy.

Philosophers of all stripes tend to end up arguing about what is “good”; political philosophers, in particular, tend to argue about what is “just”. It turns out that this is a very slippery concept that has evolved over time. I’ll use it somewhat interchangeably with “freedom” in this talk, which is not accurate, but will do for our purposes.

Ultimately, what makes a philosopher a political philosopher is that once they’ve figured out what justice might be, they then argue about what human systems are the best ways to get us to justice.

In some sense, this is very much an engineering problem: given the state of the world we’ve got, what does a better world look like, and how do we get there? Unlike our engineering problems, of course, it deals with the messy aspects of human nature: we have no compilers, no test-driven-development, etc.

So before Richard Stallman, who were the modern political philosophers?

Your basic “intro to political philosophy” class can have a few starting points. You can do Plato, or you can do Hobbes (the philosopher, not the tiger), but today we’ll start with John Locke. He worked in the late 1600s.

Locke is perhaps most famous in the US for having been gloriously plagiarized by Thomas Jefferson’s “life, liberty, and pursuit of happiness”. Before that, though, he argued that to understand what justice is, you have to look at what people are missing when they don’t have government. Borrowing from earlier British philosophers (mostly Hobbes), he said (in essence) that when people have no government, everyone steals from – and kills – everyone else. So what is justice? Well, it’s not stealing and killing!

This is not just a source for Jefferson to steal from; it is perhaps the first articulation of the idea that every human being (at least, every white man) is entitled to certain inalienable rights – what are often called the natural rights.

This introduces the idea that individual freedom (to live, to have health, etc.) is a key part of justice.

Locke was forward-thinking enough that he was exiled to the Netherlands at one point. But he was also a creature of his time, and concluded that monarchy could be part of a just system of government, as long as the people “consented” by, well, not emigrating.

This is in some sense pretty backwards, since in 1600s Europe, emigration isn’t exactly easy. But it is also pretty forward looking – his most immediate British predecessor, Hobbes, basically argued that Kings were great. So Locke is one of the first to argue that what the people want (another aspect of what we now think of as individual freedom) is important.

It is important to point out that Locke’s approach is what we’d now call a negative approach to rights: the system (the state, in this case) is obligated to protect you, but it isn’t obliged to give you anything.

Coming from the late 1600s, this is not a crazy perspective – most governments don’t even do these things. For Locke to say “the King should not take your stuff” is pretty radical; to have said “and it should also give you health care” would have also made him the inventor of science fiction. And the landed aristocracy are typically fans!

(Also, apologies to my typographically-sensitive friends; kerning of italicized fonts in Libre Office is poor and I got lazy around here about manually fixing it.)

But this is where Locke starts to fall down to modern ears: if you’re not one of the landed aristocracy; if you’ve got no stuff for the King to take, Locke isn’t doing much for you. And it turns out there are a whole lot of people in 1600s England without much stuff to take.
So let’s fast forward 150+ years.

You all know who Marx is; probably many of you have even been called Marxists at one point or another!

Marx is complicated, and his historical legacy even more so. Let’s put most of that aside for today, and focus on one particular idea we’ve inherited from Marx.

For our purposes, out of all of Marx, we can focus on the key insight that people other than the propertied class can have needs. (This is not really his insight, but he popularizes it.)

Having recognized that humans have needs, Marx then goes on to propose that, in a just society, the individual might not be the only one who has a responsibility to provide those needs – the state, at least when we reach a “higher phase” of economic and moral development, should also provide.

This sounds pretty great on paper, but it is important to grok that Marx argues that his perfect system will happen only when we’ve reached such a high level of economic development that no one will need to work, so everyone will work only on what they love. In other words, he ignores the scarcity we face in the real world. He also ignores inequality – since the revolution will have washed away all starting differences. Obviously, taken to this extreme, this has led to a lot of bad outcomes in the world – which is what gives “marxism” its bad name.

But it is also important to realize that this is better than Locke (who isn’t particularly concerned with inequality), and in practice the idea (properly moderated!) has led to the modern social welfare state. So it is a useful tool in the modern philosophical toolkit.

Fast forward again, another 100 years. Our scene moves down the street, to Harvard. Perhaps the two most important works of political philosophy of the 20th century are written and published within four years of each other, further up Mass Avenue from MIT.

John Rawls publishes his Theory of Justice in 1971; Robert Nozick follows up with his Anarchy, the State, and Utopia in 1974.

Rawls and Nozick, and their most famous books, differ radically in what they think of as justice, and what systems they think lead to the greatest justice. (Nozick is the libertarian’s libertarian; Rawls more of a welfare-state type.) Their systems, and the differences between them, are out of our scope today (though both are fascinating!).

However, both agree, in their ways, that any theory of a just world must grapple with the core fact that modern societies have a variety of different people, with different skills, interests, backgrounds, etc. (This shouldn’t be surprising, given that both were writing in the aftermath of the 60s, which had made so clear to many that our societies were pretty deeply unjust to a lot of people.)

This marks the beginning of the modern age of political philosophy: Locke didn’t care much about differences between people; Marx assumed them away. Nozick and Rawls can be said, effectively, to mark the point when political philosophy starts taking difference seriously.

But that was 40 years ago – what has happened since then?

So that brings us to the 1990s, and also to 2016. (If you haven’t already figured it out, political philosophy tends to move pretty slowly.)

The new-ish hotness in political philosophy is something called capability theory. It was first put forward by Amartya Sen, an Indian economist working with (among others) the United Nations on how to focus their development work. Martha Nussbaum then picked up the ball, putting in a great deal of work to systematize it.

When Sen starts working on what became capability theory, he’s a development economist trying to understand how to improve the lives of his fellow Indian citizens. And he’s worried that a huge focus on GDP is not leading to very good outcomes. He turns to political theory, and it doesn’t help him: it is focused on very abstract systems. John Locke saying “life, liberty, property” and “sometimes monarchs are OK” doesn’t help him target the UN’s investment dollars.

So his question becomes: how do I create a theory of What is Just that actually helps guide decisions in the real world? Capability theory, in other words, is ultimately pragmatic.

To put it another way, you can think of the capability approach as an attempt to figure out what effective freedom is: how do we take freedom out of textbooks and into something that really empowers people?

One of the key flaws, for Sen, of existing theories was that they talked about giving people, at worst, negative rights (protecting their right to retain property they didn’t have) and, at best, resources (things or training they couldn’t take advantage of). He found this unconvincing because, in his experience, India’s constitution gave all citizens those formal rights, but often denied them those rights in practice, through poverty, gender discrimination, caste discrimination, etc.

And so from this observation we have the name of the approach: it focuses on what, pragmatically, people need to be capable of acting freely.

Some examples may be helpful here to explain what Sen and Nussbaum are getting at.

For example, if all men and women have the same formal access to education, but women get fewer job callbacks after college than men with identical resumes, or men refuse to care for children and aging parents, then it seems unlikely that we can really claim to have a just society.

Somalia, circa 1995-2000, was, on the face of it, a libertarian paradise: it gave you a lot of freedom to start businesses! No minimum wage, no EPA.

But it turns out you need more than “freedom from government interference” to run a business: you have to have a lot of other infrastructure as well. (Remember, here, Locke’s “negative” rights: government not stopping you, v. government supporting you.)

These examples suggest that answering political philosopher question #1 (“what is justice?”) requires more than just measuring access to resources. To understand whether a system is just, you have to measure whether all people have the opportunity to reach the important goals.

In other words, do they have the capability to act?

This is the core insight that the capabilities approach is grounded in: it is helpful, but not enough, to say “someone has the natural rights” (Locke) or “some time in the future everyone will have the same opportunity” (Marx).

(Is any of this starting to ring a bell?)

Capability approach is, again, very pragmatic, and comes from a background of trying to allocate scarce development resources in the real world, rather than a philosopher’s cozy university office. So if you’re trying to answer the political philosopher’s question (“what system”), you need to pick and choose a few capabilities to focus on, and figure out what system will support those capabilities.

Again, an example might be helpful here to show how picking the right things to focus on can be important when you’re aiming to build a system that supports human capability.

If you focus on only one dimension, you’re going to get things confused. When Sen was beginning his work, the development community tended to focus exclusively on GDP. Comparing the Philippines and South Africa by this number would have told you to focus your efforts on the Philippines.

But one of the most basic requirements of effective freedom – of supporting people’s capability to act – is being alive! When we look at it through that lens, we pretty quickly see that South Africa is worth more energy. It’s critical to look through that broader lens to figure out whether your work is actually building human freedom.

This is, perhaps, the most contentious area of capability theory – it’s where writing is being done across a variety of disciplines, including economics, political philosophy, sociology, and development. This writing has split into two main areas: the pragmatists, who just want to figure out useful tools that help them improve the world, and the theorists, who want to ground the theory in philosophy (sometimes as far back as Aristotle).

This is a great place to raise Martha Nussbaum again: she’s done the most to bring theoretical rigor to the capability approach. (Some people call Sen’s work the “capability approach”, to show that it is just a way of thinking about the problem; and Nussbaum’s work “capability theory”, to show that it is a more rigorous approach.)

I have bad news: there is no one way of doing this. Some approaches can include:

  • Local nuance: What is valued and important in one culture may not be in another; or different obstacles may exist in different places and times. Nussbaum’s work particularly focuses on this, interviewing people both to find criteria that are particularly relevant to them, but also to attempt to identify global values.
  • Democracy: Some of Sen’s early research showed that democracies were better at getting people food than non-democracies of similar levels of economic development, leading to avoidance of famines. So “what people prioritize based on their votes” is a legitimate way to understand the right capabilities to focus on.
  • Data: you’ll almost never see a table like the one I just showed you in most political philosophy! The capability approach embraces the use of data to supplement our intuitions and research.
  • Old-fashioned philosophizing: it can be perfectly appropriate to sit down, as Richard did, and noodle over our problems. I tend to think that this is particularly important when we’re identifying future capabilities – which is of course our focus here.

Each of these can be seen as overlapping ways of identifying the best capabilities to focus on – all of them will be useful and valid in different domains.

Shared theme of that last slide? Thinking primarily about people. Things are always a means to an end in the capability approach – you might still want to measure them as an important stepping stone to helping people (like GDP!) but they’re never why you do something.

There is no one right way to pick which capabilities to focus on, which drives lots of philosophers mad. We’ll get into this in more detail soon – when I talk about applying this to software.

Probably the bottom line: if you want to know how to get to a more just system, you want to ask about the capabilities of the humans who are participating in that system. Freedom is likely to be one of the top things people want – but it’s a means, not the end.

So now we’ve come to the end of the philosophy lecture. What does this mean for those of us who care about software?

So, again, what do political philosophers care about?

The FSF’s four freedoms try to do the right thing and help build a more just world.

If you don’t have some combination of time, money, or programming skills, it isn’t entirely clear the four freedoms do a lot for you.

The four freedoms are negative rights: things no one can take away from you. And that has been terrific for our elites: Locke’s landed aristocracy is our Software as a Service provider, glad the King can’t take away his right to run MySQL. But maybe not so much for most human beings.
This brings us to our second question – what system?

Inspired by the capability approach, I would argue that what we need is a focus on effective freedom. And that will require not just a change to our focus, but to our systems as well – we need to be pragmatic and inclusive.

So let me offer four suggestions for free software inspired by the capability approach.

We need to start by having empathy for all our users, since our goal should be software that liberates all people.

Like the bureaucrat who increases GDP while his people die young, if we write billions of lines of code, but people are not empowered, we’ve failed. Empathy for others will help us remember that.

Sen, Nussbaum, and the capability approach also remind us that to effectively provide freedom to people we need to draw opinions and information from the broadest possible number of people. That can simply take the form of going and listening regularly to why your friends like the proprietary software they use, or ideally listening to people who aren’t like you about why they don’t use free software. Or it can take the form of surveys or even data-driven research. But it must start with listening to others. Scratching our own itch is not enough if we want to claim we’re providing freedom.

Or to put it another way: our communities need to be as empowering as our licenses. There are lots of great talks this weekend on how to do that – you should go to them, and we should treat them as philosophically important as our licenses.

I think it is important to point out that I think the FSF is doing a lot of great work in this area – this is the most diversity I’ve seen at Libre Planet, and the new priorities list covers a lot of great ground here.

But it is also a bad sign that at the new “Open Source and Feelings” conference, which is specifically aimed at building a more diverse FOSS movement, they chose to use the apolitical “open” rather than “free”. That suggests the FSF and free software more generally still have a lot of work to do to shed their reputation as being dogmatic and unwelcoming.

Which brings me to #2: just as we have to listen to others, we have to be self-critical about our own shortcomings, in order to grapple with the broad range of interests those users might have.

At the beginning of this talk, I talked about my last visit to Libre Planet, and how hard it was to have a conversation about the disempowerment I felt when Libre Office crashed. The assumption of the very well-intentioned young man I was talking to was that of course I was more free when I had access to code. And in a very real way, that wasn’t actually true – proprietary software that didn’t crash was actually more empowering to me than libre software that did crash. And this isn’t just about crashing/not-crashing.

Ed Snowden reminded us this morning that Android is freely licensed, but that doesn’t mean it gives its users the capability to live a secure life.

Again, here, FSF has always done some of the right thing! You all recognize this quote: it’s from freedom zero. We often take pride in this, and we should!

But we also often say “we care about users” while testing only the license. I’ve never seen someone say “this is not free, because it is impossible to use” – it is too easy, and too frequent, to say “well, the license says you can run the program as you wish, so it passes freedom zero”. We should treat that as a failure to be humble about.

Humility means admitting our current, unidimensional systems aren’t great at empowering people. The sooner we admit that freedom is complex, and goes beyond licensing, the quicker we can build better systems.

The third theme of advice I’d give is to think about impact. Again, this stems from the fundamental pragmatism of the capability approach. A philosophy that is internally consistent, but doesn’t make a difference for people, is not a useful philosophy. We need to take that message to heart.

Mako Hill’s quantitative research has shown us that libre code doesn’t necessarily mean quality code, or successful projects. If we want to impact users, we have to understand why our core development tools are no longer best-in-class, and fix them, or develop new models to replace them.

We built CVS, SVN, and git, and we used those tools to build some of the most widely-used pieces of software on earth. But it took the ease of use of github to make this accessible to millions of developers.

Netsplit.de is a search engine for IRC services. Even if both of these numbers are off by a factor of two (say, because private networks are missing from the IRC count, and Slack is inflating its user counts), it still suggests Slack will have more users than IRC this year. We need to think about why that is, and why free software like IRC hasn’t had the impact we’d like it to.

If we’re serious about spreading freedom, this sort of “post-mortem” of our successes and failures is not optional – it is a mandatory part of our commitment to freedom.

I’ve mentioned that democracy is one way of choosing what capabilities to focus on, and is typically presumed in serious analyses of the capability approach – the mix of human empowerment and (in Sen’s analysis) better pragmatic impact make it a no-brainer.

A free software focused on impact could make free licensing a similar no-brainer in the software world.

Dan Gillmor told us this morning that “I came for the technical excellence and stayed for the freedom”: as both he and Edward Snowden said this morning, we have to broaden our definition of technical excellence to include usability and pragmatic empowerment. When we do that, our system – the underlying technology of freedom – can lead to real change.

This is the last, and hardest, takeaway I’ll have for the day.

We’ve learned from the capability approach that freedom is nuanced, complex, and human-focused. The four freedoms are brief, straightforward, and easy to apply – but those may not be virtues if our goal is to increase user freedom.

As I’ve said a few times, the four freedoms are like telling you the king can’t take your property: it’s not a bad thing, but it also isn’t very helpful if you don’t have any property.

We need to re-interpret “run the program as you wish” in a more positive light, expanding our definitions to speak to the concerns about usability and security that users have.

The capability approach provides us with questions – where do we focus? – but not answers. So it suggests we need to go past licensing, but doesn’t say where those other areas of focus might be. Here are some suggestions for what directions we might evolve free software in.

Learning from Martha Nussbaum and usability researchers, we could work with the next generation of software users to understand what they want, need, and deserve from effective software freedom.

We could learn from other organizations, like UNICEF, who have built design and development principles. The graphic here is from UNICEF’s design principles, where they talk about how they will build software that improves freedom for their audience.

It includes talk about source code – as part of a coherent whole of ten principles, not an end in and of itself.

Many parts of our community (including FSF!) have adopted codes of conduct or similar policies. We could draw on the consistent themes in these documents to identify key values that should take their place alongside the four freedoms.

Finally, we can vote with our code: we should be contributing where we feel we can have the most impact on user freedom, not just code freedom. That is a way of maximizing our impact: giving our time only to projects that empower all users. In my ideal world, you come away determined to focus on projects that empower all people, not just programmers.

Ultimately, this is my vision, and why I remain involved in free software – I want to see people who are liberated. I hope after this talk you all understand why, and are motivated to help it happen.
Thanks for listening.

Further reading:

Image sources and licenses (deck itself is CC BY-SA 4.0):

a rumbling about X QA

As I rebooted this morning as a result of RH bug 473347[1], two serious questions popped into my head:

  1. do any of the major core X contributors[2] employ a full-time X QA person? As far as I know the answer is ‘no’ but I’d love to be wrong.
  2. would a full-time X QA person funded fractionally by the major X contributors, reporting to the development managers for each of those contributors, but formally employed by freedesktop.org, make even more sense?

My sense is that this is the kind of position that may be hard for any one contributor to justify, but that it is the kind of thing that is probably necessary for a complex piece of software to succeed, so a position with costs shared across the various contributors might make sense.

(This is only partially inspired by Owen’s recent call on behalf of Friends of GNOME and the sysadmin team, but I’ve always thought a full-time GNOME QA manager would make sense- it really is vastly more efficient for everyone involved if much of this sort of stuff is done upstream. And it just struck me today that probably the same is true for X.)

  1. this was today’s first reboot, but recent experience suggests I’ll reboot at least one more time and probably at least twice more today
  2. RH, Intel, Novell, as far as I know?

Bzzzzt.

Wrong answer.

giant by brom.

The six month release cycle is not an all-controlling god, and bugs in one known, specific subsystem are not undebuggable without wide release (which was KDE’s most valid excuse). If it isn’t ready for wide use, it isn’t and shouldn’t be a GNOME .0. It isn’t ‘too late’ to decide that; you’re the QA team, dammit- it is your job to say no right up to the very last minute, and demand extra time to protect the users.

Distros follow our schedule because we promise high-quality .0s, so they should be thrilled we’ve admitted this won’t be high-quality, and they should either happily take a pass, or if they have a serious problem with it, they should provide resources to make it high-quality on time. (If they have a problem with it, their users should probably doubt their commitment to shipping quality software.)

(cc’d to bugsquad shortly for real discussion.)

[on second thought: closing comments here because discussion on a topic of this sort should be on d-d-l or bugsquad, not on a blog.]

Infotopia, information-gathering, and software QA

A couple of weeks ago I finished reading Cass Sunstein’s Infotopia. While certainly not a perfect book by any stretch, it gives a stimulating overview of a central problem for any society- how it collects and filters information so that it can make decisions. Being a good U of C guy, he starts with Hayek’s notion that the price mechanism is an elaborate mechanism for ‘sharing and synchronizing local and personal knowledge‘ (to quote Wikipedia), and then goes on to discuss other mechanisms for getting information out of the heads which contain it- wikis, open source, democracy, polling, deliberation, prediction markets, etc. An interesting read to frame a lot of discussions around.

One of those discussions came up today. Quite simply, the big problem in QA is getting information about the state of the software out of the software and into the hands of developers as efficiently as possible.

This has three aspects: creating the information, getting it in the hands of the QA teams, and then filtering it into a form that is useful for developers to work on. Traditional QA has a very hard time getting the information- there are a lot of lines of code to be exercised, and very few people exercising the code (relatively speaking.) It is like squeezing water out of a stone, so they have to do a lot of things (like extensive automated testing) to get that information. The output is a relatively small amount of very regularized data, which is easy to present (though hard to weight efficiently and accurately.)

In contrast, open source QA has a whole ocean of information from the legions of volunteers willing to run pre-release code; the trick is to tap into that water without drowning in it.  It isn’t regularized, but given a large enough body of users over time, you can be fairly certain that the bug reports will represent an accurate cross section of your problems, and the interaction with real users (instead of interaction with automated test tools or third-hand via the sales/customer relationship) can give you a fairly good idea of what bugs are actually important to real people.

If you’ve got one person to work on QA, I’d say you always want to swim in the ocean instead of doing any amount of automated squeezing information from the stone. This is not to say automated testing doesn’t have its place- in particular, good unit testing captures information at a very high-efficiency junction (when the original author is writing code) and then gives it back in a very compressed, efficient form that the developer should know immediately how to prioritize and deal with. Similarly, automated tests that attempt to capture regressions once a bug is fixed are also fairly efficient- they capture information which real humans in the field have identified as an important problem, and they again report simple, clear, efficient information- this bug # and commit # which were fixed are now not fixed. But generic ‘well, we’re going to write tests now because that is how we did it when we had no users willing to help us test’ testing is a very inefficient use of manpower- it is trying to dig a deep well to get information when you live next to a deep, clear, safe mountain lake.
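To make the regression-test pattern above concrete, here is a minimal sketch in Python’s unittest style. The function and the bug number are hypothetical, standing in for whatever code a field report was actually filed against – the point is that the test captures, in compressed form, a problem a real human already identified.

```python
import unittest


def normalize_path(path):
    """Collapse duplicate slashes in a filesystem path.

    Hypothetical function, standing in for the code the
    original bug report was filed against.
    """
    while "//" in path:
        path = path.replace("//", "/")
    return path


class TestBug1234Regression(unittest.TestCase):
    """Pins the fix for (hypothetical) bug #1234: double slashes
    in user-supplied paths broke the file chooser."""

    def test_double_slash_collapsed(self):
        self.assertEqual(normalize_path("/home//user"), "/home/user")

    def test_triple_slash_collapsed(self):
        self.assertEqual(normalize_path("a///b"), "a/b")

    def test_clean_path_untouched(self):
        self.assertEqual(normalize_path("/home/user"), "/home/user")

# Run with: python -m unittest <this_module>
```

If this test ever fails again, the report back to the developer is exactly the simple, clear, efficient information described above: this bug, previously fixed at this commit, is now not fixed.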

So there you have it- proprietary QA is trying to squeeze information-water from a stone; open source QA is trying to learn how to swim in a sea of information. I know which problem I’d rather have.

productive testing tips

I was cited in an article on testing tips this past week. Here is some cut and paste from the email I sent to the author (Joe Barr) when he asked me for tips; my email goes into more detail than he was able to put in his article so I thought it might be worth posting.

  1. use a bugtracking system of some sort, and use as much metadata (including good titles) in that bugtracking system as you can. At first it is time-consuming, but over the long run, it’ll save you time by helping you avoid testing things or filing bugs twice. Relatedly, if we’re talking about things that can be done as a group, and not just as an individual, have a bugmaster- someone charged with organizing, sorting, and generally knowing what is going on with the bug tracker. That one person will help every other person who uses the bug tracker (both testers and developers) be more productive, which is invaluable.
  2. Where possible, use the very latest code. Don’t be afraid to rebuild things from CVS every night or every morning. The more up-to-date your code is, the quicker you’ll catch things (again, helping everyone’s productivity) and the less time you’ll spend going back and forth with ‘is this in the latest version?’ Again, when speaking of a team, if someone can be charged with making this process as easy as possible, that one person is a productivity multiplier for the entire team.
  3. If at all possible, write automated unit tests. The best way to be productive is to have the computer do the work for you. Again, this has up-front costs, but over the long run is a *huge* win. (If we’re talking about GUI software, try to focus on non-GUI code first when writing tests- GUI tests tend to be fragile, and you’re best off writing tests that will fail when things go badly wrong, not when a pixel shifted here or there.)
  4. dogfood, dogfood, dogfood. If at all possible, use the code you’re testing under real-life conditions to do everyday work. Real-life testing is always going to be more efficient than fake ‘do this, then do this, then do this’ checklist testing- that doesn’t catch edge cases, and it feels like work. If you’re using code for real work, you catch the edge cases that real users find, and you do it in the course of doing something else- it doesn’t feel like work then. (Relatedly: if you dogfood and find a bug, make sure to (1) file it immediately, and (2) write an automated test to duplicate the bug ASAP.)
  5. use automated crash reporting tools: every major OS now has ways to catch stack traces of crashes and send them back to a bug database. Use those, and ship those- help the users help you.
  6. if you have an active volunteer community who are using nightly builds to do daily work, don’t spend your time testing things that they will inevitably test. For example, I’ve seen test plans that say things like ‘check to make sure it launches’. If it doesn’t run from the command line, a good community will let you know about it ASAP. If it crashes when you open the print dialog with 10,000 printers on your local network, well, most communities don’t run into that sort of thing- they have one printer. So spend your precious test cycles testing *that* kind of scenario, instead of testing basic stuff your community will catch, like ‘does it start? does the file open dialog work?’
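To make tip 5 concrete: in Python, a minimal crash catcher is just an exception hook that formats the traceback and stashes it somewhere a reporting tool could pick it up. This is a sketch only- the file name is a stand-in for whatever upload step a real crash reporter would perform:

```python
import sys
import traceback

def crash_hook(exc_type, exc, tb):
    # Format the uncaught exception's stack trace and save it where a
    # reporting tool could ship it off to the bug database
    # ("crash-report.txt" here is just a placeholder for that step).
    report = "".join(traceback.format_exception(exc_type, exc, tb))
    with open("crash-report.txt", "w") as f:
        f.write(report)
    # Still print the traceback the way Python normally would.
    sys.__excepthook__(exc_type, exc, tb)

# From here on, any uncaught exception also produces a saved report.
sys.excepthook = crash_hook
```

The same ‘catch the trace, send it home’ shape applies whatever the platform-specific mechanism is- the point is that the user’s crash does the reporting work for you.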

misc. post-weekend bits

  • Diebold is considering selling their voting machine unit. RH, this is your chance.
  • Actually used a Wii for the first time this weekend. What fun. This is real thinking outside the box. (I’ve never owned a game machine, and never previously been particularly tempted. But I am now, esp. since I didn’t buy myself anything for my birthday last week. :)
  • Saw the first iPhone ad (unfree media, but at least unfree with an official player on Linux) on TV yesterday. Hope that Nokia will take the hint from Apple and put a cell chip in the N800’s successor- this is a market that Nokia should have owned a year ago, had they been willing to think a bit more outside their own boxes. I’m still glad they are doing all they are doing, and it is a unique and interesting little box, but it is frustrating to see that all the pieces are in place- except one- for the iPhone to have been a year earlier, and Free.
  • Hopefully I’ll score an openmoko at the end of the month so I can replace the one proprietary component in my personal digital stack, even if it does look to be clearly suboptimal to an N800 with a cell chip. Openmoko folks, if you see this, you really, really need some free QA experts testing and helping you refine this thing :)
  • Three bits of awesome news for GNOME distribution and testing- rpath-based images, a buildbot with integrated testing, and wider distribution of unstable builds.
  • I miss Miami.
  • I’m glad I’m not the only one who thinks that wordpress upgrades mar an otherwise excellent experience. Comments indicate fixes may be in the pipeline, yay.
  • Interesting post on property rights and how they explode over at the Volokh Conspiracy. Look forward to the rest of the posts in the series, though I wish the citations provided links, or at least enough information to be googleable. (I’d link to SSRN, but SSRN is considered harmful.)
  • Speaking of citations, am about to launch into a pile of bluebooking. Expect fun, law-school/data-organization related ranting to follow!

More on QA, Ubuntu, trust, etc.

Heard from Ubuntu this morning (in comments and via email, neither official) so I figured I owed an update, having slammed them fairly thoroughly here :) So some notes from email and comments:

  • I didn’t see an official Ubuntu announce about this because I didn’t look in the most obvious place of all- ubuntu.com. Looking there points you to this message, with more details in a subsequent linked page. Good on Ubuntu for discussing the issue in the most highly visible place they can, and promising (albeit all the way at the bottom of the second page) that they are investigating the problem. Given that one of the most valuable things any distro has (especially Ubuntu) is the trust of its users, I would probably have given the ‘we are researching the problem’ statement much more prominence, but it is there, at least.
  • To be very clear: I don’t expect Ubuntu to have researched the cause of the procedural problem and fixed it in 48 hours. That would be nice but unreasonable. I just expect them to very publicly say what they are doing about the problem, in terms of research, etc.
  • To also be clear: I’m surprised I’ve seen nothing on planet ubuntu (not planet gnome), because I assume that at least some developers blog about what they are thinking about/working on, and if no developer blogs about this Very Big Fuckup, then… that ain’t good :)
  • The negatives: apparently the problem was there for 17 hours. Not a good sign, but again, that is partially because I have high standards for Ubuntu.
  • Apparently the reason I didn’t know about dapper-proposed is that it isn’t fully deployed yet. That is mixed news, I guess- good that there is a reason I didn’t know about it; bad that something like dapper-proposed was not fully tested and in place before the LTS release. (Note here that again I’m holding Ubuntu to a very high standard; as far as I know no other distro has such a queue for their long-term distros yet either. Of course, every distro should. If I’m wrong, and other distros do have it, I’d love to know- please let me know in comments.)
  • James: I’ve not considered an LWN article on distro QA because for quite some time (really since around when I left Novell) I’ve been pondering writing the definitive serious white paper on the subject. As dobey is about to find out, writing anything of that length is hard :) We’ll see if these blog posts coalesce my thinking enough to get something LWN-length out, though.
  • error27, others who have discussed enterprise distros: Enterprise distros have substantial resources directed at identifying stable upstream versions, and stabilizing them even more. So of course we should expect that at this point enterprise distros are very stable; more so than their more bleeding-edge community counterparts. However, traditional enterprise distros can only be resourced from within the company that produces them, and their users are explicitly paying not to worry about it- the payment is mostly in lieu of other forms of contribution. In contrast, a community distro like Fedora or Ubuntu should be virtually unlimited in terms of the amount of testing, feedback, triage, etc., that it receives from community members. Given that, if coordination and communication problems are solved, community distros should be of at least equal quality to enterprise distros. (It should be of no surprise, given that coordination/communication problems are perhaps the biggest stumbling block to this, that I think everyone needs a bugmaster.)
  • Go read the comments in last night’s post for more on the Edgy/Unstable differences. They are all dead on; no need for me to repeat them, except to say that obviously there are many layers to the disparity. Still, the basic question stands: how do you get more people onto unstable, and get them contributing?

I swear I’ll write something about law school soon :)

Posted in QA

Notes about distros, QA, etc.

Yesterday I flamed Ubuntu, I think with cause, for breaking X. Some followups, drawn from the comments in that thread:

  • Fedora now believes that they are going to be able to support (apparently already have supported) distro->distro upgrades, like Debian has done for years and Ubuntu has done since day 1. This is very big for Fedora. Along with a growing selection of packages in Extras, two of Debian/Ubuntu’s biggest selling points to the technical community are under siege. Yay for competition :)
  • One of the Fedora dudes clarified my understanding of the FC5/X7.1 situation- it was not nearly as broken as I’d thought. That said, part of what Fedora should be trying to do is build a culture around QA and quality- which means clear messaging about these sorts of things. So I’m glad that they were doing the right things for roughly the right reasons, but they need to get better about communicating those so that their culture grows up with them.
  • rpath points out that the obvious solution to problems like the one I had yesterday is to be able to rollback packages, and that conary (rpath’s system management tool) can do that. I continue to think that rpath is doing really interesting stuff; this would be one good demo of why. (Yes, I know red carpet and other tools have done rollback for a while, but conary’s implementation, from what I can grok of it, is nice and well-integrated.)
  • Lucas did a really interesting (and totally unscientific) ‘survey’ of Ubuntu and Debian users by way of CTCP in IRC, and discovered that something like 4% of Ubuntu users were using edgy, while 76% of Debian users were using unstable. My hunch is that this says more about Ubuntu’s incredible success in getting newbies into #ubuntu than it does anything else, but the core question (‘what percentage of our users are using and testing our development branch? what steps are we taking to raise that percentage?’) is a really interesting one which every free software project should ask itself. (NB that GNOME is failing here, and has been since Ximian stopped funding packaging of unstable builds years ago. Ubuntu’s unstable builds have been a huge pickup in that respect.)
  • Shockingly, no real response from Ubuntu that I can see anywhere (planet, bugsquad list, forums), other than the fast fix. Remember that much of this is about expectations- I expect a lot from Ubuntu, so when they fuck up, (1) I get very very pissed, because I trusted them and (2) I expect openness about why it happened this time and how they are going to prevent it from happening next time, so that I can again trust them.

Some things I personally should have explained better:

  • I am not switching distros. All things considered, at this time Ubuntu still offers the best mix of maintainability and support for my needs, especially now that I know not to actually trust their support packages. Silly and naive of me to have trusted them earlier, though, and that trust led to (frankly) great anger.
  • QA for this sort of thing (big, big bug in package everyone uses) is not really very hard to do. Ubuntu has been leading the way for quite some time in open source distro QA implementation, pushing packages early and quickly in their unstable branch so that their stable release is both well tested and fairly up-to-date- exactly what every other distro should be doing, and some do to various degrees. But Ubuntu have not quite pushed hard on the last mile for stable- getting users to test proposed updates before they ship. I only discovered yesterday that they have a ‘proposed updates’ channel for apt. Quite simply, if I (who am completely obsessive about open source QA) don’t know about your proposed updates channel, you haven’t pimped it enough. Every distro should have a proposed updates channel, like Ubuntu does, and pimp it heavily to their skilled users, which AFAICT Ubuntu does not.  Skilled users who are not running unstable should, in response, consider it nearly a moral obligation to use the proposed updates channel on any non-mission-critical boxes. That combo, used effectively, should have caught this before it went out. If Ubuntu is post-morteming this (which they should be) that would be the big question to ask- why did the community not catch this for us?
  • It is worth remembering that every significant community linux distro has a community of thousands who will gladly test anything you throw at them, so distros must actively encourage and take advantage of that. Any distro which doesn’t (and many don’t) is throwing away free time and free money. (Relatedly, I firmly believe that as a result of the opportunity for free QA, most open distros should in practice be more stable than their ‘enterprise’ alternatives, which have smaller user bases who would rather pay for someone else to do the work. That in practice enterprise distros tend to be more stable points to inefficiencies in how open distro QA is done, IMHO, not just the obvious points about business models.)

!@#@!#@!- still learning what ‘long term support’ means

Things that are not good:

  • put all your class notes in something X-based
  • see an X update from Ubuntu before you go to class
  • decide not to install the X update, because, hey, you wouldn’t want a broken X right before class
  • read some email, have breakfast
  • remember you’re running not just any distro, but hey, the ‘Long Term Support’ distro- the one that presumably has, you know, a QA process. And no one would put out a package that breaks X in their Long Term Support, enterprise-ready distro, right?
  • install the upgrade
  • turn off the computer
  • go to class
  • turn on the computer
  • discover that you have no X, and class started two minutes ago.

Furious would not begin to describe how I felt. The internal dialog in my head was ‘!@!@#@!#. What actually well-supported distro can I switch to?’ because let’s be clear- I’m running a stable distro for the first time in ages specifically to avoid shit like this. If the ‘stable’ distro still breaks my fucking X, it isn’t stable. Period. End of discussion. So I need another distro.

To ubuntu’s credit, there was an update in apt within a few minutes of when I got to class, so I was able to fix it by apt-get’ing again. But if your QA process for the Long Term Support distro let through an X update that broke X, well, your QA process still needs some work. (I understand that given the vast diversity of hardware X runs on, it isn’t possible to do perfect QA, but if it breaks a lot of machines, which it did, something went deeply wrong in your process.)

Side note: Abi’s XML is pretty noisy when you’re using outline mode. Turns out emacs + abi xml was not quite the savior I would have hoped it would be in my initial, panicked moments.