Reinventing FOSS user experiences: a bibliography

There is a small genre of posts around re-inventing the interfaces of popular open source software; I thought I’d collect some of them for future reference:

Recent:

Older:

The first two (Drupal, WordPress) are particularly strong examples of the genre because they directly grapple with the difficulty of change for open source projects. I’m sure that early Firefox and VE discussions also did that, but I can’t find them easily – pointers welcome.

Other suggestions welcome in comments.

What tools are changing our world next?

Quick brain dump after a bike ride home: free software took a huge leap in the late 90s and early 00s in large part because of non-ideological advantages that the rest of the world is now competing with or surpassing:

HDR automatically created by Google Photos from my old pictures of Muir Woods. Not perfect, but better than I ever bothered to do!
  • Collaboration tools: Because we got to the ’net first, our tools for collaborating with each other were simply better than what proprietary developers were doing: cvs, mailman, wiki, etc., were all better than the silo’d old-school tools. Modern best-of-breed collaboration tools have all learned from what we did and added proprietary sauce on top: GitHub, Slack, Google Docs, etc. So our tools are now (at best) as productive as our proprietary counterparts’, and sometimes less productive but ideologically agreeable.
  • Release processes: “Release early/release often” made us better partners for our users. We’re now actively behind here: compare how often a mobile app or web user gets updates, exactly as the author intended, relative to a user of a modern Linux distro.
  • Zero cost: We did things for no (direct) cost by subsidizing our work through college, startups, or consulting gigs; now everyone has a subsidize-by-selling-something-else model (usually advertising, though sometimes freemium). Again, advantage (mostly?) lost.
  • Knowing our users: We knew a lot about our users, because we were our biggest users, and we talked to other users a lot; this was more effective than what passed for software design in the late 90s. This has been eclipsed by extensive A/B testing throughout the industry, and (to a lesser extent) by more extensive use of direct user testing and design thinking.

None of these are terribly original observations – all of these have been remarked on before. But after playing some with Google Photos this weekend, I’m ready to add another one to the list: machine learning.

Worth asking what your project is doing that could be radically changed if your competitors get access to new technology. For example, for Wikipedia:

  • Collaborating: Wiki was best-of-breed (or close); it isn’t anymore. VisualEditor helps get editing back to par, but the social aspect of collaboration is still lacking relative to the expectations of many users.
  • Knowledge creation: big groups of humans, working together wiki-style, are the state of the art for creating useful, non-BS knowledge at scale. With the aforementioned machine learning, I suspect this will no longer be the case in a (growing) number of domains.

I’m sure there are others…

Come work with me – developer edition!

It has been a long time since I was able to say to developer friends “come work with me” in anything but the most abstract “come work under the same roof” kind of sense. But today I can say to developers “come work with me” and really mean it. Which is fun :)

By Supercarwaar, CC BY-SA 3.0
Details: Wikimedia’s new community tech team is hiring for a community tech developer and a team lead. This will be extremely community-intensive work, so if you enjoy and get energy from working with a community and helping them achieve their goals, this could be a great role for you. This team will work intensely with my department to ensure that we’re correctly identifying and prioritizing the needs of our most active editors. If that sounds like fun, get in touch :)

[And I realize that I’ve been bad and not posted here, so here’s my new job announcement: “my department” is the Foundation’s new Community Engagement department, where we work to support healthy contributor communities and to help improve WMF-community collaboration. It is a detour from law, but I’ve always said law was just a way to help people do their thing – so in that sense it is the same thing I’ve always been doing. It has been an intense roller coaster of a first two months, and I look forward to much more of the same.]

Democracy and Software Freedom

As part of a broader discussion of democracy as the basis for a just socio-economic system, Séverine Deneulin summarizes Robert Dahl’s On Democracy, which says democracy requires five qualities:

First, democracy requires effective participation. Before a policy is adopted, all members must have equal and effective opportunities for making their views known to others as to what the policy should be.

Second, it is based on voting equality. When the moment arrives for the final policy decision to be made, every member should have an equal and effective opportunity to vote, and all votes should be counted as equal.

Third, it rests on ‘enlightened understanding’. Within reasonable limits, each member should have equal and effective opportunities for learning about alternative policies and their likely consequences.

Fourth, each member should have control of the agenda, that is, members should have the exclusive opportunity to decide upon the agenda and change it.

Fifth, democratic decision-making should include all adults. All (or at least most) adult permanent residents should have the full rights of citizens that are implied by the first four criteria.

From “An Introduction to the Human Development and Capability Approach”, Ch. 8 – “Democracy and Political Participation”.

“Poll worker explains voting process in southern Sudan referendum” by USAID Africa Bureau, via Wikimedia Commons.

It is striking that, despite all our talk about freedom, and our frequent interest in the question of who controls power, these five criteria might as well be (Athenian) Greek to most free software communities and participants – for us, the question of liberty begins and ends with source code, and has nothing to say about organizational structure and decision-making, critical questions that serious philosophers always address.

Our licensing, of course, means that in theory points #4 and #5 are satisfied, but saying “you can submit a patch” is, for most people, roughly as satisfying as saying “you could buy a TV ad” to an American voter concerned about the impact of wealth on our elections. Yes, we all have the theoretical option to buy a TV ad/edit our code, but for most voters/users of software that option will always remain theoretical. We’re probably even further from satisfying #1, #2, and #3 in most projects, though one could see the Ada Initiative and GNOME OPW as attempts to deal with some aspects of #1, #3, and #4.

This is not to say that voting is the right way to make decisions about software development, but simply to ask: if we don’t have these checks in place, what are we doing instead? And are those alternatives good enough for us to have certainty that we’re actually enhancing freedom?

I am the CADT; and advice on NEEDINFOing old bugs en masse

[Attention conservation notice: probably not of interest to lawyers; this is about my previous life in software development.]

Bugsquad barnstar, under MPL 1.1

Someone recently mentioned JWZ’s old post on the CADT (Cascade of Attention-Deficit Teenagers) development model, and that finally has pushed me to say:

I am the CADT.

I did the bug closure that triggered Jamie’s rant, and I wrote the text he quotes in his blog post.1

Jamie got some things right, and some things wrong. The main thing he got right is that it is entirely possible to get into a cycle where instead of seriously trying to fix bugs, you just do a rewrite and cross your fingers that it fixes old bugs. And yes, this can particularly happen when you’re young and writing code for fun, where the joy of a from-scratch rewrite can overwhelm some of your other good senses. Jamie also got right that I communicated the issue pretty poorly. Consider this post a belated explanation (as well as a reference for the next time I see someone refer to CADT).

But that wasn’t what GNOME was doing when Jamie complained about it, and I doubt it is actually something that happens very often in any project large enough to have a large bug tracking system (BTS). So what were we doing?

First, as Brendan Eich has pointed out, sometimes a rewrite really is a good idea. GNOME 2 was such a rewrite – not only was a lot of the old code a hairy mess, we decided (correctly) to radically revise the old UI. So in that sense, the rewrite was not a “CADT” decision – the core bugs being fixed were the kinds of bugs that could only be fixed with massive, non-incremental change, rather than “hey, we got bored with the old code”. (Immediately afterwards, GNOME switched to time-based releases, and stuck to that schedule for the better part of a decade, which should be further proof we weren’t cascading.)

This meant there were several thousand old bugs that had been filed against UIs that no longer existed, and often against code that no longer existed or had been radically rewritten. So you’ve got new code and old bugs. What do you do with the old bugs?

It is important to know that open bugs in a BTS are not free. Old bugs impose a cost on developers, because when they are trying to search relevant bugs, old bugs can make it harder to find the things they really should be working on. In the best case, this slows them down; in the worst case, it drives them to use other tools to track the work they want to do – making the BTS next to useless. This violates rule #1 of a BTS: it must be useful for developers, or else it all falls apart.

So why did we choose to reduce these costs by closing bugs filed against the old codebase as NEEDINFO (and asking people to reopen if they were still relevant) instead of re-testing and re-triaging them one-by-one, as Jamie would have suggested? A few reasons:

  • number of triagers v. number of bugs: there were, at the time, around a half-dozen active bug volunteers, and thousands of pre-GNOME 2 bugs. It was simply unlikely that we’d ever be able to review all the old bugs even if we did nothing else.
  • focus on new bugs: new bugs are where triagers and developers are much more likely to be relevant – those bugs are against fresh code; the original filer is much more likely to respond to clarifying questions; etc. So all else being equal, time spent on new bugs was going to be much better for the software than time spent on old bugs.
  • steady flow of new bugs: if you’ve got a small number of new bugs coming in, perhaps you split your time – but we had no shortage of new bugs, nor of motivated bug reporters. So we may have paid some cost (by demotivating some reporters) but our scarce resource (developers) greatly appreciated it.
  • relative burden: with thousands of open bugs from thousands of reporters, it made sense to ask them to re-test their old bugs against the new code. Distributed across all those reporters, reviewing the old bugs was a small burden for each of them.
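(For the mechanically curious: the mass closure itself is easy to script. Below is a rough sketch against a modern Bugzilla REST API – the server URL, product name, API key, and cutoff are all placeholders, and note that current Bugzilla has no NEEDINFO status, so RESOLVED/INCOMPLETE plus a canned comment is the closest equivalent of what we did on GNOME’s much older Bugzilla.)

    import requests

    BUGZILLA = "https://bugzilla.example.org/rest"  # hypothetical instance
    API_KEY = "my-api-key"                          # placeholder
    STALE_DAYS = 2 * 365                            # arbitrary cutoff

    # Boolean-chart search: open bugs nobody has touched in STALE_DAYS days.
    bugs = requests.get(f"{BUGZILLA}/bug", params={
        "api_key": API_KEY,
        "product": "example-product",               # placeholder product
        "status": ["NEW", "ASSIGNED", "REOPENED"],
        "f1": "days_elapsed", "o1": "greaterthan", "v1": str(STALE_DAYS),
        "include_fields": "id",
    }).json()["bugs"]

    CANNED = ("This bug was filed against code that has since been rewritten. "
              "If you can still reproduce it in the current release, please "
              "reopen with updated details.")

    for bug in bugs:
        # No NEEDINFO status in modern Bugzilla; close as RESOLVED/INCOMPLETE
        # with a canned comment asking the reporter to re-test and reopen.
        requests.put(f"{BUGZILLA}/bug/{bug['id']}",
                     params={"api_key": API_KEY},
                     json={"status": "RESOLVED",
                           "resolution": "INCOMPLETE",
                           "comment": {"body": CANNED}})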

So when isn’t it a good idea to ask for more information about old bugs en masse?

  • Great at keeping old bugs triaged/relevant: If you have only a small number of old bugs that haven’t been touched in a long time, then they aren’t putting much burden on developers, and there is little to gain by closing them en masse.
  • Slow code turnover: If your development process is such that it is highly likely that old bugs are still relevant (e.g., core has remained mostly untouched for many years, or effective use of TDD has kept the number of accidental new bugs low) this might not be a good idea.
  • No triggering event: In GNOME, there was a big event, plus a new influx of triagers, that made it make sense to do radical change. I wouldn’t recommend this “just because” – it should go hand-in-hand with other large changes, like a major release or important policy changes that will make future triaging more effective.

Relatedly, the team practices mailing list has been discussing good practices for migrating bug tracking systems in the past few days, which has been interesting to follow. I don’t take a strong position on where Wikimedia’s Bugzilla falls on this point – MediaWiki has a fairly stable core, and the volume of incoming bugs may make triage of old bugs more plausible. But everyone running a very large Bugzilla for an active project should remember that this is a part of their toolkit.

  1. Both had help from others, but it was eventually my decision.

Why feed reading is an open web problem, and what browsers could do about it

I’ve long privately thought that Firefox should treat feed reading as a first-class citizen of the open web, and integrate feed subscribing and reading more deeply into the browser (rather than the lame, useless live bookmarks). The impending demise of Reader has finally forced me to spit out my thoughts on the issue. They’re less polished than I like when I blog these days, but here you go – may they inspire someone to resuscitate this important part of the open web.

What? Why is this an open web problem?

When I mentioned this on twitter, an ex-mozillian asked me why I think this is the browser’s responsibility, and particularly Mozilla’s. In other words – why is RSS an open web problem? Why is it different from, say, email? It’s a fair question, with two main parts.

First, despite what some perceive as the “failure” of RSS, there is obviously a demand by readers to consume web content as an automatically updated stream, rather than as traditional pages.1 Google Reader users are extreme examples of this, but Facebook users are examples too: they’re no longer just following friends, but companies, celebrities, etc. In other words, once people have identified a news source they are interested in, we know many of them like doing something to “follow” that source, and get updated in some sort of stream of updates. And we know they’re doing this en masse! They’re just not doing it in RSS – they’re doing it in Twitter and Facebook. The fact that people like the reading model pioneered by RSS – of following a company/news source, rather than repeatedly visiting their web site – suggests to me that the widely perceived failure of RSS is not really a failure of RSS, but rather a failure of the user experience of discovering and subscribing to RSS.

Of course, lots of things are broadly felt desires, and aren’t integrated into browsers – take email for example. So why are feeds different? Why should browsers treat RSS as a first-class web citizen in a way they don’t treat other things? I think that the difference is that if closed platforms (not just web sites, but platforms) become the only (or even best) way to experience “reading streams of web content”, that is a problem for the web. If my browser doesn’t tightly integrate email, the open web doesn’t suffer. If my browser doesn’t tightly integrate feed discovery and subscription, well, we get exactly what is happening: a mass migration away from consuming (and publishing!) news through the open web, and toward closed, integrated publishing and subscribing stacks like FB and Twitter that give users a good subscribing and reading experience.

To put it another way: Tantek’s definition of the open web (if I may grotesquely simplify it) is a web where publishing content, implementing software that consumes that content, and accessing the content is all open/decentralized. RSS2 is the only existing way to do stream-based reading that meets these requirements. So if you believe (as I do) that reading content delivered in a stream is a central part of the modern web experience, then defending RSS is an important part of defending the open web.

So that’s, roughly, my why. Here’s a bunch of random thoughts on what the how might look like:

Discovery

When you go to CNN on Facebook, “like” – in plain English, with a nice icon – is right up there, front and center. RSS? Not so much. You have to know what the orange icon means (good luck with that!) and find it (either in the website or, back in the day, in the browser toolbar). No wonder no one uses it, when there is no good way to figure out what it means. Again, the failure is not the idea of feeds – the failure is in the way it was presented to users. A browser could do this the brute-force way (is there an RSS feed? show a notice bar offering to subscribe) but that would probably get irritating fast. It would be better to be smart about it. Have I visited nytimes.com five times today? Or five days in a row? Then give me a notice bar: “hey, we’ve noticed you visit this site an awful lot. Would you like to get updates from it automatically?” (As a bonus, implementing this makes your browser the browser that encourages efficiency. ;)
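The discovery mechanics, for what it’s worth, are the easy part: sites advertise their feeds with <link rel="alternate"> tags in the page head, so a browser already has everything it needs to know that a subscription is possible. A minimal sketch of the idea in Python, using only the standard library (the site URL is just an example):

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    class FeedFinder(HTMLParser):
        """Collect feed URLs advertised via <link rel="alternate"> tags."""

        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.feeds = []

        def handle_starttag(self, tag, attrs):
            if tag != "link":
                return
            a = dict(attrs)
            if (a.get("rel") or "").lower() == "alternate" \
                    and a.get("type") in FEED_TYPES:
                # hrefs may be relative; resolve against the page URL
                self.feeds.append(urljoin(self.base_url, a.get("href") or ""))

    url = "https://www.nytimes.com/"  # example site
    finder = FeedFinder(url)
    finder.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    print(finder.feeds)  # candidate feeds to offer in a notice bar

The hard part, as above, is not finding the feed – it is deciding when and how to offer it to the user.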

Subscription

Once you’ve figured out you can subscribe, then what? As it currently stands, someone tells you to click on the orange icon, and you do, and you’re presented with the NASCAR problem – a bewildering wall of competing service logos to choose from – made worse because once you click, you have to create an account. Again, more fail; again, not a problem inherent in RSS, but a problem caused by the browser’s failure to provide an opinionated, useful default.

This is not an easy problem to solve, obviously. My hunch is that the right thing to do is provide a minimum viable product for light web users – possibly by supplementing the current “here are your favorite sites” links with a clean, light reader focused only on the current top headlines. Even without a syncing service behind it, that would still be helpful for those users, and would also encourage publishers to continue treating their feeds as first-class publishing formats (an important goal!).

Obviously solving the NASCAR problem is still hard (as is building a more serious built-in app), but perhaps the rise of browser “app stores” and web intents/web activities might ease it this time around.

Other aspects

There are other aspects to this – reading, social, and provision of reading as a service. I’m not going to get into them here, because, well, I’ve got a day job, and this post is a month late as-is ;) And because the point is primarily that (1) improving the RSS experience in the browser needs to be done and (2) some minimum viable products would go a long way towards making that happen. Less-than-MVPs can be for another day :)

  1. By “RSS” and “feeds” in this post, I really mean the subscribing+reading experience; whether the underlying tech is RSS, Atom, Activity Streams, or whatever is really an implementation detail, as long as anyone can publish to, and read from, them in distributed fashion.
  2. again, in the very broad sense of the word, including more modern open specifications that do basically the same thing

One year on OSI’s board (aka one year in OSI’s licensing)

Since it has been roughly one year since Mozilla nominated me to sit on the OSI board, I thought I’d recap what I’ve done over the course of the year. It hasn’t been a perfect year by any stretch, but I’m pretty happy with what we’ve done and I think we’re pointed in the right direction. Because my primary public responsibility on the board has been chairing the license committee, this can also sort of double as a review of the last year in license-discuss/license-review (though there is lots of stuff done by other members of the community that doesn’t show up here yet).

Outside of licensing, my work has consisted mostly of cheerleading the hard work of others on the board (like Deb’s hard work on our upcoming DC meeting and the work of many people on our membership initiative) – I haven’t listed each instance of that here.

“Wikimedia Deutschland offices in Berlin, during the tour at the Chapters Meeting 2011”, by Mike Peel, under CC BY-SA 2.5. (Mind you, CC is not actually OSI-certified ;)

Some things that got done:

  • Drafted and published a beta Code of Conduct for license-discuss/license-review. This was drafted with the intent that it will eventually be a CoC for all of OSI, but we’re still formally beta-testing it in the license committee community.
  • Revised the opensource.org/licenses landing page to make it more useful to visitors who are not familiar with open source. Also poked and prodded others to do various improvements to the FAQ, which now has categories and a few improved questions.
  • Revised OSI’s history page. The main changes were to update it to reflect the past  5-6 years, but also to make it more readable and more positive.
  • Oversaw a number of license submissions. I can’t take much credit for these- the community does most of the heavy lifting. But the group submitted in the past year include AROS, MOSL, “No Nonsense“, and CeCILL. The new EUPL is in the pipeline as well.
  • Engaged Greenberg Traurig as outside counsel to OSI, and organized and hosted a board face-to-face meeting at Greenberg’s San Francisco office space.
  • Helped keep lines of communication open (and hopefully improving!) with SPDX and OKFN.

Some projects are important, but incomplete:

Some projects never really got off the ground:

  • I wanted to get GNOME to join OSI as an affiliate. This, very indirectly, spurred the history page revision mentioned above, but otherwise never really got anywhere.
  • I wanted to have OSI reach out to the authors of the CPOL and push them to improve it or adopt an existing license. That never happened.
  • I wanted to figure out how to encourage GitHub to require a license for new projects, but got no traction.

I hope that this sounds like a pretty good year – it isn’t perfect, but it felt like a good start to me, giving us some things we can build on in future years.

That said, it shouldn’t be up to just me – if you think this kind of thing sounds useful for the broader open source community, you can help :)

  • Join license-discuss, or, if you’re more sensitive to mail traffic, but still want to help with the committee’s most important work, join license-review, which focuses on approving/rejecting proposed new licenses.
  • Become a member! Easier than joining license-discuss ;) and it provides both fiscal and moral support to the organization.

Showrunner and Show Bible? Or Cult?

I don’t currently do much heavily collaborative writing, but I’m still very interested in the process of creating very collaborative works. So one of the many stimulating discussions at Monktoberfest was a presentation by two awesome O’Reilly staffers about the future (and past) of authorship. Needless to say, collaborative authoring was a major theme. What particularly jumped out at me in the talk and the discussion afterwards was a nagging fear that any text authored by multiple people would necessarily lack the coherence and vision of the best single-author writing.

I’ve often been very sympathetic to this concern. Watching groups of people get together and try to collaboratively create work is often painful. Those groups that have done best, in my experience, are often those with some sort of objective standard for the work they’re creating. In software, that’s usually “it compiles,” followed (in the best case) by “it passes all the tests.” Where there aren’t objective standards all team members can work with – as is often the case with UI – the process tends to fall apart. Where there are really detailed objective standards that every contribution can be measured against – HTTP, HTML – open source is often not just competitive, but dominant.

On the flip side, you get no points for thinking of the canonical example of a single designer’s vision guiding the development of software. But Apple is the exception that proves the rule – software UIs that are developed without reference to objective standards of good/bad are usually either bad, or run by a not-very-benevolent dictator who has spent decades refining his vision of authorship.

Wikipedia is another very large exception to the “many cooks” argument. It is an exception because most written projects can’t possibly have a rule of thumb as straightforward and yet as effective as “neutral point of view,” because most written projects aren’t factual, dry, or broken-up-into-small-chunks. In other words, most written projects aren’t encyclopedias and so can’t be written “by rule.”

Or at least that’s what I was thinking during the talk. In response to this, someone commented during the post-talk Q&A1 that essentially all TV shows are collaboratively written, and yet manage to be coherent. In fact, in our new golden age of TV drama they’re often more than coherent – they’re quite good, despite extremely complex plots sprawling over several years of effort. This has stuck in my head ever since, because it goes against all my hard-learned instincts.

I really don’t know what the trick is, since I’m not a TV writer. I suspect that in most cases the showrunner does it by (1) having a very clear vision of where the show is going (often not the case in software) and (2) clearly articulating and communicating that vision – i.e. having a good show bible and sticking to it.

If you’re not looking carefully, this looks a lot like what Aaron has rightly called a cult of personality. But I think, after being reminded about showrunners and show bibles, it is important to distinguish the two. It is a fine line, but there is a real difference between what Aaron is concerned about and skilled leadership. Maybe a good test is to ask that leader: where is your show bible? What can I read to understand the vision, and help flesh it out like the writer of an episode? If the answer is “follow whatever I’m thinking about this month” or “I’m too busy leading to write it down”, then you’ve got problems. But if your leadership can explain, don’t throw the baby out with the bathwater – that’s a person who has thought seriously about what they’re doing and how you can help them build something bigger and better than you could each do alone, not a cult leader.

  1. if you’re this person, please drop me a note and I’ll credit you!

Thanking Contributors by Printing the MPL

As part of a general drive to get rid of stuff, I’ve recently become increasingly willing to part with my old books. This has been a painful process – books hold many happy memories for me – but I think also a good and focusing one. As part of my emotional reaction to this, I’ve become increasingly interested in making beautiful, printed texts – things that stand up better to the test of time than the paperbacks I’ve been thinning out.

In 2010, as part of this process, I bought Typography for Lawyers, and incorporated some of what I learned from it into the HTML version of MPL 2.0. In 2011, as I was putting the finishing touches on the final draft of the MPL, I attended the holiday fair at the San Francisco Center for the Book (neat Flickr stream), and ran across some work from Painted Tongue Press – beautiful broadside printings of poetry and wedding vows.

This gave me the idea to thank the most involved contributors to the MPL with a hand-made, printed copy of the text of the license.

The wonderful Kim Vanderheiden, of Painted Tongue, worked with me over the course of several months to plan this process, and then she and her team put the books together. First, we designed the layout – not just of the text, but of the relatively unusual accordion-fold binding, which allowed the final product to be displayed like an A-frame or hung, at its full (very long!) length, from a wall. Then we picked paper for the text, and cloth and ribbon for the bindings (the ribbon symbolizing both the fact that these are gifts and the traditional ribbon bindings of legal documents). Kim’s team then hand-printed them on their presses, and Kim used watercolors to paint the colored highlights (including the yellow highlighting that replaces the ALL CAPS text). Finally, they were bound.

The end result has been fifteen copies of beautiful, tangible, printed words, which I am now in the slow process of distributing to various contributors. I hope that this token of the maintainers’ gratitude for their assistance (in a variety of ways) is appreciated.

The thanks and colophon are as follows:

Thank You!

This revision of the MPL would not have happened without your help. Please accept this hand-crafted printing of the license as a token of our appreciation, and a reflection of the effort and care you put into your contributions to the license.

The MPL Module Owners

Mitchell Baker
Harvey Anderson
Gervase Markham
Heather Meeker
Luis Villa

-o-

Colophon

The type was set in Equity by Matthew Butterick (typo.la/equity – used with permission of the typographer) and Droid Sans Mono by Google (droidfonts.com – used under the Apache 2.0 license). The book is printed on Somerset Velvet Radiant White and covered in Duo Cloth Birch.

Design, printing, binding, and painting were done with care by the excellent team at Painted Tongue Press, Oakland, California (paintedtonguepress.com).

This edition of MPL 2.0 was printed in August 2012 to celebrate the publication of, and thank contributors to, MPL 2.0. You are holding copy # __ of 15.

Format(ting?) of Forever

Mark Pilgrim had a great post1 a little while ago where he talked about DocBook as ‘The Format of Forever’, but HTML as the ‘Format of Now.’ He also argued that (since technical books are constantly becoming outdated) generating technical books in the Format of Now instead of the Format of Forever made a lot of sense.

I’m working on a project that I’d like to see as a long-term, Format of (nearly) Forever kind of work. Specifically, it is my grandfather’s autobiography, which I’d like to see as a long-term enough work that I can give it to my own grandkids some day. As a result, I’ve been wrestling on and off with two questions: (1) what is the right ‘Format of Forever’ and (2) once you’ve chosen that source format, what is the best ‘Output Format of Now’? Thoughts welcome in comments; my own mumblings below.

Great-great-grandpa Lewis Hannum.

Grandpa, of course, wrote in the ultimate in formats of forever: typewriter. I scanned and OCRed it shortly after he passed away using the excellent gscan2pdf2, and have been slowly collecting other materials to use to supplement what he wrote – mostly pictures and scans of his Apollo memorabilia, but also family photos, like Grandpa’s Grandpa, Lewis Hannum, pictured above.

I’ve converted that to what I think may be the right ‘Format of Forever’: pandoc markdown, plus printed, easily re-scannable hard-copy. I’m thinking that markdown is the right source for a couple of reasons. Primarily: plain, simple ASCII text is hard to beat for future-proofing. Markdown is also easier to edit than HTML3.

The downside is that, while markdown is terrific for a very simple document (like grandpa’s writing), I’d like to experiment with some slightly non-traditional media inclusion. For example, it would be nice to include an audio recording of my brother at the 1982 Columbia Shuttle launch, or a scan of Grandpa’s patent. Markdown has some facilities for including other files, but they appear to be non-standard (i.e., each post-processor handles them differently). Even image inclusion and basic formatting often feel wonky. HTML would make me happier in that direction, I suspect. And of course styling the output is a pain, though I think I have various ideas on how to do that.
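The conversion step itself, at least, is trivial, which is part of markdown’s appeal; and since pandoc passes raw HTML through when the output is HTML, an inline <audio> tag sitting directly in the markdown source is one stopgap for the media problem. A sketch of the kind of pipeline I mean (file names are placeholders):

    import subprocess

    # Render the pandoc-markdown source into a standalone HTML5 document –
    # the current "Output Format of Now". File names are placeholders.
    subprocess.run([
        "pandoc", "autobiography.md",
        "--standalone",           # emit a complete document, not a fragment
        "--to", "html5",
        "--css", "style.css",     # keep styling out of the source text
        "--output", "autobiography.html",
    ], check=True)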

Thoughts? Tips?

  1. vanished since I originally drafted this, but link kept for reference
  2. Which, for the record, was roughly 1,000 times better than Canon’s bundled scanning crapware.
  3. which is sort of pathetic; how come we still don’t have a decent simple HTML editor?