Governing Values-Centered Tech Non-Profits; or, The Route Not Taken by FSF

A few weeks ago, I interviewed my friend Katherine Maher about leading a non-profit through some of the biggest challenges an org can face: accusations of assault by leadership, and a growing gap between mission and reality on the ground.

We did the interview at the Free Software Foundation’s Libre Planet conference. We chose that forum because I was hopeful that the FSF’s staff, board, and membership might want to learn about how other orgs had risen to challenges like those faced by FSF after Richard Stallman’s departure in 2019. I, like many others in this space, have a soft spot for the FSF and want it to succeed. And the fact that my talk was accepted gave me further hope.

Unfortunately, the next day it was announced at the same conference that Stallman would rejoin the FSF board. This made clear that the existing board tolerated Stallman’s terrible behavior towards others, and endorsed his failed leadership—a classic case of non-profit founder syndrome.

While the board’s action made the talk less timely, much of the talk is still, hopefully, relevant to any values-centered tech non-profit that is grappling with executive misbehavior and/or simply keeping up with a changing tech world. As a result, I’ve decided to present here some excerpts from our interview. They have been lightly edited, emphasized, and contextualized. The full transcript is here.

Sunlight Foundation: harassment, culture, and leadership

In the first part of our conversation, we spoke about Katherine’s tenure on the board of the Sunlight Foundation. Shortly after she joined, Huffington Post reported on bullying, harassment, and rape accusations against a key member of Sunlight’s leadership team.

[I had] worked for a long time with the Sunlight Foundation and very much valued what they’d given to the transparency and open data open government world. I … ended up on a board that was meant to help the organization reinvent what its future would be.

I think I was on the board for probably no more than three months, when an article landed in the Huffington Post that went back 10 years looking at … a culture of exclusion and harassment, but also … credible [accusations] of sexual assault.

And so as a board … we realized very quickly that there was no possible path forward without really looking at our past, where we had come from, what that had done in terms of the culture of the institution, but also the culture of the broader open government space.

Katherine

Practical impacts of harassment

Sunlight’s board saw immediately that an org cannot effectively grapple with a global, ethical technological future if the org’s leadership cannot grapple with its own culture of harassment. Some of the pragmatic reasons for this included:

The [Huffington Post] article detailed a culture of heavy drinking and harassment, intimidation.

What does that mean for an organization that is attempting to do work in sort of a progressive space of open government and transparency? How do you square those values from an institutional mission standpoint? That’s one [pragmatic] question.

Another question is, as an organization that’s trying to hire, what does this mean for your employer brand? How can you even be an organization that’s competitive [for hiring] if you’ve got this culture out there on the books?

And then the third pragmatic question is … [w]hat does this mean for like our funding, our funders, and the relationships that we have with other partner institutions who may want to use the tools?

Katherine

FSF suffers from similar pragmatic problems—problems that absolutely can’t be separated from the founder’s inability to treat all people as full human beings worthy of his respect. (Both of the tweets below lead to detailed threads from former FSF employees.)

Since the announcement of Stallman’s return, all top leadership of the organization have resigned, and former employees have detailed how the FSF staff has (for over a decade) had to deal with Richard’s unpleasant behavior, leading to morale problems, turnover, and even unionization explicitly in response to RMS.

And as for funding, compare the 2018 sponsor list with the current, much shorter sponsor list.

So it seems undeniable: building a horrible culture has pragmatic impacts on an org’s ability to espouse its values.

Values and harassment

Of course, a values-centered organization should be willing to anger sponsors when its values require it. But at Sunlight, it was also clear that dealing with the culture of harassment was directly relevant to its values, and the new board had to ask hard questions about that:

The values questions, which … are just as important, were… what does this mean to be an organization that focuses on transparency in an environment in which we’ve not been transparent about our past?

What does it mean to be an institution that [has] progressive values in the sense of inclusion, a recognition that participation is critically important? … Is everyone able to participate? How can we square that with the institution that [we] are meant to be?

And what do we do to think about justice and redress for (primarily the women) who are subjected to this culture[?]

Katherine

Unlike Sunlight, FSF is not about transparency, per se, but RMS at his best has always been very strong about how freedom had to be for everyone. FSF is an inherently political project! One can’t advocate for the rights of everyone if, simultaneously, one treats staff as disposable and women as objects to be licked without their consent, and half the population (women) responds by actively avoiding the leadership of the “movement”.

So, in this situation, what is a board to do? In Sunlight’s case:

[Myself and fellow board member Zoe Reiter] decided that this was a no brainer, we had to do an external investigation.

The challenges of doing this… were pretty tough. [W]e reached out to everyone who’d been involved with the organization, not just as employees, but also trying to find people who’d been involved in transparency camps and other sorts of initiatives that Sunlight had run.

We put out calls for participation on our blog; we hired a third party legal firm to do investigation and interviews with people who had been affected.

We were very open in the way that we thought about who should be included in that—not just employees, but anyone who had something that they wanted to raise. That produced a report that we then published to the general public, really trying to account for some of the things that have been found.

Katherine

The report Katherine mentions is available in two parts (results, recommendations) and is quite short (nine pages total).

While most of the report is quite specific to the Sunlight Foundation’s situation, the FSF board should particularly have read page 3 of the recommendations: “Instituting Board Governance Best Practices”. Among other recommendations relevant to many tech non-profits (not just FSF!), the report says Sunlight should “institute term limits” and “commit to a concerted effort to recruit new members to grow the Board and its capacity”.

Who can investigate a culture? When?

Katherine noted that self-scrutiny is not just something for large orgs:

[W]hen we published this report, part of what we were hoping for was that … we wanted other organizations [facing] similar challenges to be able to approach them with a little bit of a blueprint for how one might do it. Particularly small orgs.

There were four of us on the board. Sunlight is a small organization—15 people. The idea that even smaller organizations don’t have the resources to do it was something that we wanted to stand against and say, actually, this is something that every and all organizations should be able to take on regardless of the resources available to them.

Katherine

It’s also important to note that the need for critical self-scrutiny is not something that “expires” if not undertaken immediately—communities are larger, and longer-lived, than the relevant staff or boards, so even if the moment seems to be in the relatively distant past, an investigation can still be valuable for rebuilding organizational trust and effectiveness.

[D]espite the fact that this was 10 years ago, and none of us were on the board at this particular time, there is an accounting that we owe to the people who are part of this community, to the people who are our stakeholders in this work, to the people who use our tools, to the people who advocated, who donated, who went on to have careers who were shaped by this experience.

And I don’t just mean, folks who were in the space still—I mean, folks who were driven out of the space because of the experiences they had. There was an accountability that we owed. And I think it is important that we grappled with that, even if it was sort of an imperfect outcome.

Katherine

Winding down Sunlight

As part of the conclusion of the report on culture and harassment, it was recommended that the Sunlight board “chart a new course forward” by developing a “comprehensive strategic plan”. As part of that effort, the board eventually decided to shut the organization down—not because of harassment, but because in many ways the organization had been so successful that it had outlived its purpose.

In Katherine’s words:

[T]he lesson isn’t that we shut down because there was a sexual assault allegation, and we investigated it. Absolutely not!

The lesson is that we shut down because as we went through this process of interrogating where we were, as an organization, and the culture that was part of the organization, there was a question of what would be required for us to shift the organization into a more inclusive space? And the answer is a lot of that work had already been done by the staff that were there…

But the other piece of it was, does it work? Does the world need a Sunlight right now? And the answer, I think, in large part was not to do the same things that Sunlight had been doing. …

The organization spawned an entire community of practitioners that have gone on to do really great work in other spaces. And we felt as though that sort of national-level governmental transparency through tech wasn’t necessarily needed in the same way as it had been 15 years prior. And that’s okay, that’s a good thing.

Katherine

I was careful to say at Libre Planet that I don’t think FSF needs to shut down because of RMS’s terrible behavior. But the reaction of many, many people to “RMS is back on the FSF board” is “who cares, FSF has been irrelevant for decades”.

That should be of great concern to the board. As I sometimes put it—free licenses have taken over the world, and despite that the overwhelming consensus is that open won and (as RMS himself would say) free lost. This undeniable fact reflects very badly on the organization whose nominal job it is to promote freedom. So it’s absolutely the case that shutting down FSF, and finding homes for its most important projects in organizations that do not suffer from deep governance issues, should be an option the current board and membership consider.

Which brings us to the second, more optimistic topic: how did Wikimedia react to a changing world? It wasn’t by shutting down! Instead, it was by building on what was already successful to make sure they were meeting their values—an option that is also still very much available to FSF.

Wikimedia: rethinking mission in a changing world

Wikimedia’s vision is simple: “A world in which every single human can freely share in the sum of all knowledge.” And yet, in Katherine’s telling, it was obvious that there was still a gap between the vision, the state of the world, and how the movement was executing.

We turned 15 in 2016 … and I was struck by the fact that when I joined the Wikimedia Foundation, in 2014, we had been building from a point of our founding, but we were not building toward something.

So we were building away from an established sort of identity … a free encyclopedia that anyone can edit; a grounding in what it means to be a part of open culture and free and libre software culture; an understanding that … But I didn’t know where we were going.

We had gotten really good at building an encyclopedia—imperfect! there’s much more to do!—but we knew that we were building an encyclopedia, and yet … to what end?

Because “a free world in which every single human being can share in the sum of all knowledge”—there’s a lot more than an encyclopedia there. And there’s all sorts of questions:

About what does “share” mean?

And what does the distribution of knowledge mean?

And what does “all knowledge” mean?

And who are all these people—“every single human being”? Because we’ve got like a billion and a half devices visiting our sites every month. But even if we’re generous, and say, that’s a billion people, that is not the entirety of the world’s population.

Katherine

As we discussed during parts of the talk not excerpted here, usage by a billion people is not failure! And yet, it is not “every single human being”, and so WMF’s leadership decided to think strategically about that gap.

FSF’s leadership could be doing something similar—celebrating that the GPL is one of the most widely-used legal documents in human history, while grappling with the reality that the preamble to the GPL is widely unheeded; celebrating that essentially every human with an internet connection interacts with GPL-licensed software (Linux) every day, while wrestling deeply with the fact that those users are not free in the way the organization hopes.

Some of the blame for that does in fact lie with capitalism and particular capitalists, but the leadership of the FSF must also reflect on their own role in those failures if the organization is to effectively advance its mission in the 2020s and beyond.

Self-awareness for a successful, but incomplete, movement

With these big questions in mind, WMF embarked on a large project to create a roadmap, called the 2030 Strategy. (We talked extensively about “why 2030”, which I thought was interesting, but won’t quote here.)

WMF could have talked only to existing Wikimedians about this, but instead (consistent with their values) reached out more broadly, working along four different tracks. Katherine talked about the tracks in this part of our conversation:

We ran one that was a research track that was looking at where babies are born—demographics I mentioned earlier [e.g., expected massive population growth in Africa—omitted from this blog post but talked about in the full transcript.]

[Another] was who are our most experienced contributors, and what did they have to say about our projects? What do they know? What’s the historic understanding of our intention, our values, the core of who we are, what is it that motivates people to join this project, what makes our culture essential and important in the world?

Then, who are the people who are our external stakeholders, who maybe are not contributors in the sense of contributors to the code or contributors to the projects of content, but are the folks in the broader open tech world? Who are folks in the broad open culture world? Who are people who are in the education space? You know, stakeholders like that? “What’s the future of free knowledge” is what we basically asked them.

And then we went to folks that we had never met before. And we said, “Why don’t you use Wikipedia? What do you think of it? Why would it be valuable to you? Oh, you’ve never even heard of it. That’s so interesting. Tell us more about what you think of when you think of knowledge.” And we spent a lot of time thinking about what these… new readers need out of a project like Wikipedia. If you have no sort of structural construct for an encyclopedia, maybe there’s something entirely different that you need out of a project for free knowledge that has nothing to do with a reference—an archaic reference—to bound books on a bookshelf.

Katherine

This approach, which focused not just on the existing community but on data, partners, and non-participants, has been extensively documented at 2030.wikimedia.org, and can serve as a model for any organization seeking to re-orient itself during a period of change—even if you don’t have the same resources as Wikimedia does.

Unfortunately, this is almost exactly the opposite of the approach FSF has taken. FSF has become almost infamously insulated from the broader tech community, in large part because of RMS’s terrible behavior towards others. (The list of conference organizers who regret allowing him to attend their events is very long.) Nevertheless, given its important role in the overall movement’s history, I suspect that good faith efforts to do this sort of multi-faceted outreach and research could work—if done after RMS is genuinely at arm’s length.

Updating values, while staying true to the original mission

The Wikimedia strategy process led to a vision that extended and updated, rather than radically changed, Wikimedia’s strategic direction:

By 2030, Wikimedia will become the essential infrastructure of the ecosystem of free knowledge, and anyone who shares our vision will be able to join us.

Wikimedia

In particular, the focus was around two pillars, which were explicitly additive to the traditional “encyclopedic” activities:

Knowledge equity, which is really around thinking about who’s been excluded and how we bring them in, and what are the structural barriers that enable that exclusion or created that exclusion, rather than just saying “we’re open and everyone can join us”. And how do we break down those barriers?

And knowledge as a service, which is (setting aside, yes, the technical components of what a service-oriented architecture is) [about] how do we make knowledge useful beyond just being a website?

Katherine

I specifically asked Katherine about how Wikimedia was adding to the original vision and mission because I think it’s important to understand that a healthy community can build on its past successes without obliterating or ignoring what has come before. Many in the GNU and FSF communities seem to worry that moving past RMS somehow means abandoning software freedom, which should not be the case. If anything, this should be an opportunity to re-commit to software freedom—in a way that is relevant and actionable given the state of the software industry in 2021.

A healthy community should be able to handle that discussion! And if the GNU and FSF communities cannot, it’s important for the FSF board to investigate why that is the case.

Checklists for values-centered tech boards

Finally, at two points in the conversation, we went into the questions an organization might ask itself, questions that I think are deeply pertinent not just for the FSF but for virtually any non-profit, tech or otherwise. I loved this part of the discussion because one could almost split it out into a checklist that any board member could use.

The first set of questions came in response to a question I asked about Wikidata, which did not exist 10 years ago but is now central to the strategic vision of knowledge infrastructure. I asked if Wikidata had almost been “forced on” the movement by changes in the outside world, to which Katherine said:

Wikipedia … is a constant work in progress. And so our mission should be a constant work in progress too.

How do we align against a north star of our values—of what change we’re trying to effect in the world—while adapting our tactics, our structures, our governance, to the changing realities of the world?

And also continuously auditing ourselves to say, when we started, you know, was this serving a certain cohort? Does the model of serving that cohort still help us advance our vision today?

Do we need to structurally change ourselves in order to think about what comes next for our future? That’s an incredibly important thing, and also saying, maybe that thing that we started out doing, maybe there’s innovation out there in the world, maybe there are new opportunities that we can embrace, that will enable us to expand the impact that we have on the world, while also being able to stay true to our mission and ourselves.

Katherine

And to close the conversation, I asked how a non-profit aligns its pragmatic needs and its organizational values. Katherine responded that governance was central, with again a great set of questions all board members should ask themselves:

[Y]ou have to ask yourself, like, where does power sit on your board? Do you have a regenerative board that turns over so that you don’t have the same people there for decades?

Do you ensure that funders don’t have outsize weight on your board? I really dislike the practice of having funders on the board, I think it can be incredibly harmful, because it tends to perpetuate funder incentives, rather than, you know, mission incentives.

Do you think thoughtfully about the balance of power within those boards? And are there … clear bylaws and practices that enable healthy transitions, both in terms of sustaining institutional knowledge—so you want people who are around for a certain period of time, balanced against fresh perspective.

[W]hat are the structural safeguards you put in place to ensure that your board is both representative of your core community, but also the communities you seek to serve?

And then how do you interrogate [yourselves] on, I think, a three year cycle? … So every three years we … are meant to go through a process of saying “what have we done in the past three, does this align?” and then on an annual basis, saying “how did we do against that three year plan?” So if I know in 15 years, we’re meant to be the essential infrastructure [of] free knowledge, well what do we need to clean up in our house today to make sure we can actually get there?

And some of that stuff can be really basic. Like, do you have a functioning HR system? Do you have employee handbooks that protect your people? … Do you have a way of auditing your performance with your core audience or core stakeholders so that you know that the work of your institution is actually serving the mission?

And when you do that on an annual basis, you’re checking in with yourself on a three year basis, you’re saying this is like the next set of priorities. And it’s always in relation to that higher vision. So I think every nonprofit can do that. Every size. Every scale.

Katherine

The hard path ahead

The values that the FSF espouses are important and world-changing. And with the success of the GPL in the late 1990s, the FSF had a window of opportunity to become an ACLU of the internet, defending human rights in all their forms. Instead, under Stallman’s leadership, the organization has become estranged and isolated from the rest of the (flourishing!) digital liberties movement, and even from the rest of the software movement it was critical in creating.

This is not the way it had to be, nor the way it must be in the future. I hope our talk, and the resources I link to here, can help FSF and other values-centered tech non-profits grow and succeed in a world that badly needs them.

Reinventing FOSS user experiences: a bibliography

There is a small genre of posts around re-inventing the interfaces of popular open source software; I thought I’d collect some of them for future reference:

Recent:

Older:

The first two (Drupal, WordPress) are particularly strong examples of the genre because they directly grapple with the difficulty of change for open source projects. I’m sure that early Firefox and VE discussions also did that, but I can’t find them easily – pointers welcome.

Other suggestions welcome in comments.

What tools are changing our world next?

Quick brain dump after a bike ride home: free software took a huge leap in the late 90s and early 00s in large part because of non-ideological advantages that the rest of the world is now competing with or surpassing:

HDR automatically created by Google Photos from my old pictures of Muir Woods. Not perfect, but better than I ever bothered to do!

  • Collaboration tools: Because we got to the ‘net first, our tools for collaborating with each other were simply better than what proprietary developers were doing: cvs, mailman, wiki, etc., were all better than the silo’d old-school tools. Modern best-of-breed collaboration tools have all learned from what we did and added proprietary sauce on top: github, slack, Google Docs, etc. So our tools are now (at best) as productive as our proprietary counterparts’, and sometimes less productive but ideologically agreeable.
  • Release processes: “Release early/release often” made us better partners for our users. We’re now actively behind here: compare how often a mobile app or web user gets updates, exactly as the author intended, relative to a user of a modern Linux distro.
  • Zero cost: We did things for no (direct) cost by subsidizing our work through college, startups, or consulting gigs; now everyone has a subsidize-by-selling-something-else model (usually advertising, though sometimes freemium). Again, advantage (mostly?) lost.
  • Knowing our users: We knew a lot about our users, because we were our biggest users, and we talked to other users a lot; this was more effective than what passed for software design in the late 90s. This has been eclipsed by extensive a/b testing throughout the industry, and (to a lesser extent) by more extensive usage of direct user testing and design-thinking.

None of these are terribly original observations – all of these have been remarked on before. But after playing some with Google Photos this weekend, I’m ready to add another one to the list:

  • Machine learning: Google Photos automatically turning my old Muir Woods pictures into something better than I ever bothered to make is a reminder that large-scale machine learning is becoming part of the basic toolkit of modern software, and that advantage accrues to whoever has the most data and compute, which today is not free software.

Worth asking what your project is doing that could be radically changed if your competitors get access to new technology. For example, for Wikipedia:

  • Collaborating: Wiki was best-of-breed (or close); it isn’t anymore. Visual Editor helps get editing back to par, but the social aspect of collaboration is still lacking relative to the expectations of many users.
  • Knowledge creation: big groups of humans, working together wiki-style, are the state of the art for creating useful, non-BS knowledge at scale. With the aforementioned machine learning, I suspect this will no longer be the case in a (growing) number of domains.

I’m sure there are others…

Come work with me – developer edition!

It has been a long time since I was able to say to developer friends “come work with me” in anything but the most abstract “come work under the same roof” kind of sense. But today I can say to developers “come work with me” and really mean it. Which is fun :)

By Supercarwaar, CC BY-SA 3.0
Details: Wikimedia’s new community tech team is hiring for a community tech developer and a team lead. This will be extremely community-intensive work, so if you enjoy and get energy from working with a community and helping them achieve their goals, this could be a great role for you. This team will work intensely with my department to ensure that we’re correctly identifying and prioritizing the needs of our most active editors. If that sounds like fun, get in touch :)

[And I realize that I’ve been bad and not posted here, so here’s my new job announcement: “my department” is the Foundation’s new Community Engagement department, where we work to support healthy contributor communities and help WMF-community collaboration. It is a detour from law, but I’ve always said law was just a way to help people do their thing — so in that sense it is the same thing I’ve always been doing. It has been an intense roller coaster of a first two months, and I look forward to much more of the same.]

Democracy and Software Freedom

As part of a broader discussion of democracy as the basis for a just socio-economic system, Séverine Deneulin summarizes Robert Dahl’s Democracy, which says democracy requires five qualities:

First, democracy requires effective participation. Before a policy is adopted, all members must have equal and effective opportunities for making their views known to others as to what the policy should be.

Second, it is based on voting equality. When the moment arrives for the final policy decision to be made, every member should have an equal and effective opportunity to vote, and all votes should be counted as equal.

Third, it rests on ‘enlightened understanding’. Within reasonable limits, each member should have equal and effective opportunities for learning about alternative policies and their likely consequences.

Fourth, each member should have control of the agenda, that is, members should have the exclusive opportunity to decide upon the agenda and change it.

Fifth, democratic decision-making should include all adults. All (or at least most) adult permanent residents should have the full rights of citizens that are implied by the first four criteria.

From “An Introduction to the Human Development and Capability Approach”, Ch. 8 – “Democracy and Political Participation”.

“Poll worker explains voting process in southern Sudan referendum” by USAID Africa Bureau via Wikimedia Commons.

It is striking that, despite free software communities talking a lot about freedom, and often being interested in the question of who controls power, these five criteria might as well be (Athenian) Greek to most communities and participants – for them, the question of liberty begins and ends with source code, and has nothing to say about organizational structure and decision-making – critical questions that serious philosophers always address.

Our licensing, of course, means that in theory points #4 and #5 are satisfied, but saying “you can submit a patch” is, for most people, roughly as satisfying as saying “you could buy a TV ad” to an American voter concerned about the impact of wealth on our elections. Yes, we all have the theoretical option to buy a TV ad/edit our code, but for most voters/users of software that option will always remain theoretical. We’re probably even further from satisfying #1, #2, and #3 in most projects, though one could see the Ada Initiative and GNOME OPW as attempts to deal with some aspects of #1, #3, and #4.

This is not to say that voting is the right way to make decisions about software development, but simply to ask: if we don’t have these checks in place, what are we doing instead? And are those alternatives good enough for us to have certainty that we’re actually enhancing freedom?

I am the CADT; and advice on NEEDINFOing old bugs en masse

[Attention conservation notice: probably not of interest to lawyers; this is about my previous life in software development.]

Bugsquad barnstar, under MPL 1.1

Someone recently mentioned JWZ’s old post on the CADT (Cascade of Attention-Deficit Teenagers) development model, and that finally has pushed me to say:

I am the CADT.

I did the bug closure that triggered Jamie’s rant, and I wrote the text he quotes in his blog post.[1]

Jamie got some things right, and some things wrong. The main thing he got right is that it is entirely possible to get into a cycle where instead of seriously trying to fix bugs, you just do a rewrite and cross your fingers that it fixes old bugs. And yes, this can particularly happen when you’re young and writing code for fun, where the joy of a from-scratch rewrite can overwhelm some of your other good senses. Jamie also got right that I communicated the issue pretty poorly. Consider this post a belated explanation (as well as a reference for the next time I see someone refer to CADT).

But that wasn’t what GNOME was doing when Jamie complained about it, and I doubt it is actually something that happens very often in any project large enough to have a large bug tracking system (BTS). So what were we doing?

First, as Brendan Eich has pointed out, sometimes a rewrite really is a good idea. GNOME 2 was such a rewrite – not only was a lot of the old code a hairy mess, but we also decided (correctly) to radically revise the old UI. So in that sense, the rewrite was not a “CADT” decision – the core bugs being fixed were the kinds of bugs that could only be fixed with massive, non-incremental change, rather than “hey, we got bored with the old code”. (Immediately afterwards, GNOME switched to time-based releases, and stuck to that schedule for the better part of a decade, which should be further proof we weren’t cascading.)

This meant there were several thousand old bugs that had been filed against UIs that no longer existed, and often against code that no longer existed or had been radically rewritten. So you’ve got new code and old bugs. What do you do with the old bugs?

It is important to know that open bugs in a BTS are not free. Old bugs impose a cost on developers, because when they are trying to search relevant bugs, old bugs can make it harder to find the things they really should be working on. In the best case, this slows them down; in the worst case, it drives them to use other tools to track the work they want to do – making the BTS next to useless. This violates rule #1 of a BTS: it must be useful for developers, or else it all falls apart.

So why did we choose to reduce these costs by closing bugs filed against the old codebase as NEEDINFO (and asking people to reopen them if they were still relevant), instead of re-testing and re-triaging them one-by-one, as Jamie would have suggested? A few reasons (and, for the curious, a sketch of what such a mass update might look like follows the list):

  • number of triagers v. number of bugs: there were, at the time, around a half-dozen active bug volunteers, and thousands of pre-GNOME 2 bugs. It was simply unlikely that we’d ever be able to review all the old bugs even if we did nothing else.
  • focus on new bugs: new bugs are where triagers and developers are much more likely to be relevant – those bugs are against fresh code; the original filer is much more likely to respond to clarifying questions; etc. So all else being equal, time spent on new bugs was going to be much better for the software than time spent on old bugs.
  • steady flow of new bugs: if you’ve got a small number of new bugs coming in, perhaps you split your time – but we had no shortage of new bugs, nor of motivated bug reporters. So we may have paid some cost (by demotivating some reporters) but our scarce resource (developers) greatly appreciated it.
  • relative burden: with thousands of open bugs from thousands of reporters, it made sense to ask them to re-test their old bugs against the new code. Reviewing their old bugs was a small burden for each of them, once we distributed it.
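
For the curious, here is roughly what such a mass “please re-test” pass could look like if you scripted it today. This is a minimal sketch, not what we actually ran at GNOME (which long predated Bugzilla’s REST API): the instance URL, product name, NEEDINFO status, cutoff date, and comment text are all assumptions about a hypothetical tracker’s configuration, and modern Bugzillas often model “needs info” as a flag rather than a status.

```typescript
// Sketch: ask reporters of long-untouched bugs to re-test against the new codebase,
// via the Bugzilla REST API (Bugzilla 5.0+). All concrete values are placeholders.
const BUGZILLA = "https://bugzilla.example.org/rest"; // hypothetical instance
const API_KEY = "YOUR_API_KEY";
const CUTOFF = "2002-06-01"; // e.g., the date of the big rewrite

interface Bug {
  id: number;
  summary: string;
  last_change_time: string; // ISO timestamp, so string comparison works for a cutoff
}

async function findStaleBugs(product: string): Promise<Bug[]> {
  // Search still-open bugs in the old product; filter client-side for "untouched since cutoff".
  const params = new URLSearchParams({
    product,
    status: "NEW",
    include_fields: "id,summary,last_change_time",
    api_key: API_KEY,
  });
  const resp = await fetch(`${BUGZILLA}/bug?${params}`);
  const data = (await resp.json()) as { bugs: Bug[] };
  return data.bugs.filter((bug) => bug.last_change_time < CUTOFF);
}

async function requestRetest(bug: Bug): Promise<void> {
  // GNOME's Bugzilla had a NEEDINFO status; your tracker may use a flag or keyword instead.
  await fetch(`${BUGZILLA}/bug/${bug.id}?api_key=${API_KEY}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      status: "NEEDINFO",
      comment: {
        body:
          "This bug was filed against a UI and codebase that have since been rewritten. " +
          "If you can still reproduce it with the current release, please reopen it with " +
          "updated details. Thanks for your report!",
      },
    }),
  });
}

async function main() {
  const stale = await findStaleBugs("gnome-core"); // hypothetical product name
  for (const bug of stale) {
    await requestRetest(bug);
  }
  console.log(`Requested re-testing on ${stale.length} stale bugs`);
}

main().catch(console.error);
```

The mechanics are trivial; as the list above suggests, the real work is deciding whether your project is in a situation where a pass like this helps more than it hurts.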

So when isn’t it a good idea to close, or ask for more information about, old bugs en masse?

  • Great at keeping old bugs triaged/relevant: If you only have a very small number of old bugs that haven’t been touched in a long time, then they aren’t putting much burden on developers, and a mass closure isn’t worth the cost.
  • Slow code turnover: If your development process is such that it is highly likely that old bugs are still relevant (e.g., core has remained mostly untouched for many years, or effective use of TDD has kept the number of accidental new bugs low) this might not be a good idea.
  • No triggering event: In GNOME, there was a big event, plus a new influx of triagers, that made radical change make sense. I wouldn’t recommend this “just because” – it should go hand-in-hand with other large changes, like a major release or important policy changes that will make future triaging more effective.

Relatedly, the team practices mailing list has been discussing good practices for migrating bug tracking systems in the past few days, which has been interesting to follow. I don’t take a strong position on where Wikimedia’s Bugzilla falls on this point – MediaWiki has a fairly stable core, and the volume of incoming bugs may make triage of old bugs more plausible. But everyone running a very large Bugzilla for an active project should remember that this is a part of their toolkit.

  1. Both had help from others, but it was eventually my decision.

Why feed reading is an open web problem, and what browsers could do about it

I’ve long privately thought that Firefox should treat feed reading as a first-class citizen of the open web, and integrate feed subscribing and reading more deeply into the browser (rather than the lame, useless live bookmarks). The impending demise of Google Reader has finally forced me to spit out my thoughts on the issue. They’re less polished than I like when I blog these days, but here you go – may they inspire someone to resuscitate this important part of the open web.

What? Why is this an open web problem?

When I mentioned this on Twitter, an ex-Mozillian asked me why I think this is the browser’s responsibility, and particularly Mozilla’s. In other words – why is RSS an open web problem? Why is it different from, say, email? It’s a fair question, with two main parts.

First, despite what some perceive as the “failure” of RSS, there is obviously a demand by readers to consume web content as an automatically updated stream, rather than as traditional pages.[1] Google Reader users are extreme examples of this, but Facebook users are examples too: they’re no longer just following friends, but companies, celebrities, etc. In other words, once people have identified a news source they are interested in, we know many of them like doing something to “follow” that source, and get updated in some sort of stream of updates. And we know they’re doing this en masse! They’re just not doing it in RSS – they’re doing it in Twitter and Facebook. The fact that people like the reading model pioneered by RSS – of following a company/news source, rather than repeatedly visiting their web site – suggests to me that the widely perceived failure of RSS is not really a failure of RSS, but rather a failure of the user experience of discovering and subscribing to RSS.

Of course, lots of things are broadly felt desires, and aren’t integrated into browsers – take email for example. So why are feeds different? Why should browsers treat RSS as a first-class web citizen in a way they don’t treat other things? I think that the difference is that if closed platforms (not just web sites, but platforms) become the only (or even best) way to experience “reading streams of web content”, that is a problem for the web. If my browser doesn’t tightly integrate email, the open web doesn’t suffer. If my browser doesn’t tightly integrate feed discovery and subscription, well, we get exactly what is happening: a mass migration away from consuming (and publishing!) news through the open web, and instead it being channeled into closed, integrated publishing and subscribing stacks like FB and Twitter that give users a good subscribing and reading experience.

To put it another way: Tantek’s definition of the open web (if I may grotesquely simplify it) is a web where publishing content, implementing software that consumes that content, and accessing the content are all open/decentralized. RSS[2] is the only existing way to do stream-based reading that meets these requirements. So if you believe (as I do) that reading content delivered in a stream is a central part of the modern web experience, then defending RSS is an important part of defending the open web.

So that’s, roughly, my why. Here’s a bunch of random thoughts on what the how might look like:

Discovery

When you go to CNN on Facebook, “like” – in plain English, with a nice icon – is right up there, front and center. RSS? Not so much. You have to know what the orange icon means (good luck with that!) and find it (either in the website or, back in the day, in the browser toolbar). No wonder no one uses it, when there is no good way to figure out what it means. Again, the failure is not the idea of feeds – the failure is in the way it was presented to users. A browser could do this the brute-force way (is there an RSS feed? show a notice bar offering to subscribe) but that would probably get irritating fast. It would be better to be smart about it. Have I visited nytimes.com five times today? Or five days in a row? Then give me a notice bar: “hey, we’ve noticed you visit this site an awful lot. Would you like to get updates from it automatically?” (As a bonus, implementing this makes your browser the browser that encourages efficiency. ;)
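
To make the “be smart about it” option a bit more concrete, here is a minimal sketch of that heuristic, written as if it were a WebExtension background script; a real implementation would live in the browser itself. The visit threshold, the message sent by a feed-detecting content script, and the offerSubscriptionBar() hook are all assumptions for illustration, not any browser’s actual behavior.

```typescript
// Sketch: only offer a feed subscription once the user demonstrably keeps returning
// to a site that advertises a feed. Thresholds and message shapes are placeholders.
declare const browser: any; // WebExtension API global (e.g., via webextension-polyfill)

const VISIT_THRESHOLD = 5; // "visited five times today"
const DAY_MS = 24 * 60 * 60 * 1000;

// A content script (not shown) would look for <link rel="alternate"
// type="application/rss+xml"> (or atom+xml) tags in the page and send
// { type: "feed-detected", feedUrl, pageUrl } to this listener.
browser.runtime.onMessage.addListener(async (msg: any, sender: any) => {
  if (msg.type !== "feed-detected" || !sender.tab) return;

  const host = new URL(msg.pageUrl).hostname;

  // How many history entries for this host fall within the last day?
  const recent = await browser.history.search({
    text: host,
    startTime: Date.now() - DAY_MS,
    maxResults: 1000,
  });
  const visitsToday = recent.filter(
    (item: any) => item.url && new URL(item.url).hostname === host
  ).length;

  if (visitsToday >= VISIT_THRESHOLD) {
    // "Hey, we've noticed you visit this site an awful lot. Would you like to
    // get updates from it automatically?"
    offerSubscriptionBar(sender.tab.id, host, msg.feedUrl);
  }
});

// Placeholder for the real UI: in an actual browser this would be a notice bar with a
// one-click "follow this site" action wired into a built-in reader.
function offerSubscriptionBar(tabId: number, host: string, feedUrl: string) {
  console.log(`Would offer tab ${tabId} a subscription to ${feedUrl} (frequent visits to ${host})`);
}
```

The specific numbers matter less than the shape: detection is cheap, the prompt is rare, and the browser – which already has the user’s history – is the only actor well positioned to make that call.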

Subscription

Once you’ve figured out you can subscribe, then what? As it currently stands, someone tells you to click on the orange icon, and you do, and you’re presented with the NASCAR problem, made worse because once you click, you have to create an account. Again, more fail; again, not a problem inherent in RSS, but a problem caused by the browser’s failure to provide an opinionated, useful default.

This is not an easy problem to solve, obviously. My hunch is that the right thing to do is provide a minimum viable product for light web users – possibly by supplementing the current “here are your favorite sites” links with a clean, light reader focused on only the current top headlines. Even without a syncing service behind it, that would still be helpful for those users, and would also encourage publishers to continue treating their feeds as first-class publishing formats (an important goal!).

Obviously solving the NASCAR problem is still hard (as is building a more serious built-in app), but perhaps the rise of browser “app stores” and web intents/web activities might ease it this time around.

Other aspects

There are other aspects to this – reading, social, and provision of reading as a service. I’m not going to get into them here, because, well, I’ve got a day job, and this post is a month late as-is ;) And because the point is primarily that (1) improving the RSS experience in the browser needs to be done and (2) some minimum-viable products would go a long way towards making that happen. Less-than-MVPs can be for another day :)

  1. By “RSS” and “feeds” in this post, I really mean the subscribing+reading experience; whether the underlying tech is RSS, Atom, Activity Streams, or whatever is really an implementation detail, as long as anyone can publish to, and read from them, in distributed fashion.
  2. Again, in the very broad sense of the word, including more modern open specifications that do basically the same thing.

One year on OSI’s board (aka one year in OSI’s licensing)

It has been roughly one year since Mozilla nominated me to sit on the OSI board, so I thought I’d recap what I’ve done over the course of the year. It hasn’t been a perfect year by any stretch, but I’m pretty happy with what we’ve done and I think we’re pointed in the right direction. Because my primary public responsibility on the board has been chairing the license committee, this can also sort of double as a review of the last year in license-discuss/license-review (though there is lots of stuff done by other members of the community that doesn’t show up here yet).

Outside of licensing, my work has consisted mostly of cheerleading the hard work of others on the board (like Deb’s hard work on our upcoming DC meeting and the work of many people on our membership initiative) – I haven’t listed each instance of that here.

“Wikimedia Deutschland offices in Berlin, during the tour at the Chapters Meeting 2011”, by Mike Peel, under CC-BY-SA 2.5. (Mind you, CC is not actually OSI-certified ;)

Some things that got done:

  • Drafted and published a beta Code of Conduct for license-discuss/license-review. This was drafted with the intent that it will eventually be a CoC for all of OSI, but we’re still formally beta-testing it in the license committee community.
  • Revised the opensource.org/licenses landing page to make it more useful to visitors who are not familiar with open source. Also poked and prodded others to do various improvements to the FAQ, which now has categories and a few improved questions.
  • Revised OSI’s history page. The main changes were to update it to reflect the past 5-6 years, but also to make it more readable and more positive.
  • Oversaw a number of license submissions. I can’t take much credit for these – the community does most of the heavy lifting. But the licenses submitted in the past year include AROS, MOSL, “No Nonsense”, and CeCILL. The new EUPL is in the pipeline as well.
  • Engaged Greenberg Traurig as outside counsel to OSI, and organized and hosted a board face-to-face meeting at Greenberg’s San Francisco office space.
  • Helped keep lines of communication open (and hopefully improving!) with SPDX and OKFN.

Some projects are important, but incomplete:

Some projects never really got off the ground:

  • I wanted to get GNOME to join OSI as an affiliate. This, very indirectly, spurred the history page revision mentioned above, but otherwise never really got anywhere.
  • I wanted to have OSI reach out to the authors of the CPOL and push them to improve it or adopt an existing license. That never happened.
  • I wanted to figure out how to encourage github to require a license for new projects, but got no traction.

I hope that this sounds like a pretty good year – it isn’t perfect but it felt like a good start to me, giving us some things we can build on for future years.

That said, it shouldn’t be up to just me – if you think this kind of thing sounds useful for the broader open source community, you can help :)

  • Join license-discuss, or, if you’re more sensitive to mail traffic, but still want to help with the committee’s most important work, join license-review, which focuses on approving/rejecting proposed new licenses.
  • Become a member! Easier than joining license-discuss ;) and provides both fiscal and moral support to the organization.

Showrunner and Show Bible? Or Cult?

I don’t currently do much heavily collaborative writing, but I’m still very interested in the process of creating very collaborative works. So one of the many stimulating discussions at Monktoberfest was a presentation by two awesome O’Reilly staffers about the future (and past) of authorship. Needless to say, collaborative authoring was a major theme. What particularly jumped out at me in the talk and the discussion afterwards was a nagging fear that any text authored by multiple people would necessarily lack the coherence and vision of the best single-author writing.

I’ve often been very sympathetic to this concern. Watching groups of people get together and try to collaboratively create work is often painful. Those groups that have done best, in my experience, are often those with some sort of objective standard for the work they’re creating. In software, that’s usually “it compiles,” followed (in the best case) by “it passes all the tests.” Where there aren’t objective standards all team members can work with – as is often the case with UI – the process tends to fall apart. Where there are really detailed objective standards that every contribution can be measured against – HTTP, HTML – open source is often not just competitive, but dominant.

On the flip side, you get no points for thinking of the canonical example of a single designer’s vision guiding the development of software. But Apple is an example that proves the rule – software UIs that are developed without reference to objective standards of good/bad are usually either bad, or run by a not-very-benevolent dictator who has spent decades refining his vision of authorship.

Wikipedia is another very large exception to the “many cooks” argument. It is an exception because most written projects can’t possibly have a rule of thumb so straightforward and yet effective as “neutral point of view,” because most written projects aren’t factual, dry or broken-up-into-small-chunks. In other words, most written projects aren’t encyclopedias and so can’t be written “by rule.”

Or at least that’s what I was thinking during the talk. In response to this, someone commented during the post-talk Q&A[1] that essentially all TV shows are collaboratively written, and yet manage to be coherent. In fact, in our new golden age of TV drama they’re often more than coherent – they’re quite good, despite extremely complex plots sprawling over several years of effort. This has stuck in my head ever since because it goes against all my hard-learned instincts.

I really don’t know what the trick is, since I’m not a TV writer. I suspect that in most cases the showrunner does it by (1) having a very clear vision of where the show is going (often not the case in software) and (2) clearly articulating and communicating that vision – i.e. having a good show bible and sticking to it.

If you’re not looking carefully, this looks a lot like what Aaron has rightly called a cult of personality. But I think, after being reminded about showrunners and show bibles, it is important to distinguish the two. It is a fine line, but there is a real difference between what Aaron is concerned about and skilled leadership. Maybe a good test is to ask that leader: where is your show bible? What can I read to understand the vision, and help flesh it out like the writer of an episode? If the answer is “follow whatever I’m thinking about this month,” or “I’m too busy leading to write it down”, then you’ve got problems. But if your leadership can explain, don’t throw the baby out with the bathwater – that’s a person who has thought seriously about what they’re doing and how you can help them build something bigger and better than you could each do alone, not a cult leader.

  1. If you’re this person, please drop me a note and I’ll credit you!

Thanking Contributors by Printing the MPL

As part of a general drive to get rid of stuff, I’ve recently become increasingly willing to part with my old books. This has been a painful process – books have many happy memories for me – but I think also a good and focusing one. As part of my emotional reaction to this, I’ve become increasingly interested in making beautiful, printed texts – things that stand up better to the test of time than the paperbacks I’ve been thinning out.

In 2010, as part of this process, I bought Typography for Lawyers, and incorporated some of what I learned from that into the HTML version of MPL 2.0. In 2011, as I was putting the finishing touches on the final draft of the MPL, I attended the holiday fair at the San Francisco Center for the Book (neat Flickr stream), and ran across some work from Painted Tongue Press – beautiful broadside printings of poetry and wedding vows.

This gave me the idea to thank the most involved contributors to the MPL with a hand-made, printed copy of the text of the license.

The wonderful Kim Vanderheiden, of Painted Tongue, worked with me over the course of several months to plan this process, and then she and her team put them together. First, we designed the layout, not just of the text, but of the relatively unusual accordion-fold binding, which allowed the final product to be displayed like an A-frame or hung, in its (very long!) entirety, from a wall. Then we picked paper for the text, and cloth and ribbon for the bindings (the ribbon symbolizing both the fact that these are gifts and the traditional ribbon bindings of legal documents). Kim’s team then hand-printed them on their presses, and Kim used watercolors to paint the colored highlights (including the yellow highlighting that replaces the ALL CAPS text). Finally, they were bound.

The end result has been fifteen copies of beautiful, tangible, printed words, which I am now in the slow process of distributing to various contributors. I hope that this token of the maintainers’ appreciation for their assistance (in a variety of ways) is well received.

The thanks and colophon is as follows:

Thank You!

This revision of the MPL would not have happened without your help. Please accept this hand-crafted printing of the license as a token of our appreciation, and a reflection of the effort and care you put into your contributions to the license.

The MPL Module Owners

Mitchell Baker
Harvey Anderson
Gervase Markham
Heather Meeker
Luis Villa

-o-

Colophon

The type was set in Equity by Matthew Butterick (typo.la/equity – used with permission of the typographer) and Droid Sans Mono by Google (droidfonts.com – used under the Apache 2.0 license). The book is printed on Somerset Velvet Radiant White and covered in Duo Cloth Birch.

Design, printing, binding, and painting were done with care by the excellent team at Painted Tongue Press, Oakland, California (paintedtonguepress.com).

This edition of MPL 2.0 was printed in August 2012 to celebrate the publication of, and thank contributors to, MPL 2.0. You are holding copy # __ of 15.