Democracy and Software Freedom

As part of a broader discussion of democracy as the basis for a just socio-economic system, Séverine Deneulin summarizes Robert Dahl’s account of democracy, which requires five qualities:

First, democracy requires effective participation. Before a policy is adopted, all members must have equal and effective opportunities for making their views known to others as to what the policy should be.

Second, it is based on voting equality. When the moment arrives for the final policy decision to be made, every member should have an equal and effective opportunity to vote, and all votes should be counted as equal.

Third, it rests on ‘enlightened understanding’. Within reasonable limits, each member should have equal and effective opportunities for learning about alternative policies and their likely consequences.

Fourth, each member should have control of the agenda, that is, members should have the exclusive opportunity to decide upon the agenda and change it.

Fifth, democratic decision-making should include all adults. All (or at least most) adult permanent residents should have the full rights of citizens that are implied by the first four criteria.

From “An Introduction to the Human Development and Capability Approach”, Ch. 8, “Democracy and Political Participation”.

“Poll worker explains voting process in southern Sudan referendum” by USAID Africa Bureau, via Wikimedia Commons.

It is striking that, despite talking a lot about freedom, and often being interested in the question of who controls power, these five criteria might as well be (Athenian) Greek to most free software communities and participants: the question of liberty begins and ends with source code, and has nothing to say about organizational structure and decision-making, critical questions that serious philosophers always address.

Our licensing, of course, means that in theory points #4 and #5 are satisfied, but saying “you can submit a patch” is, for most people, roughly as satisfying as saying “you could buy a TV ad” to an American voter concerned about the impact of wealth on our elections. Yes, we all have the theoretical option to buy a TV ad/edit our code, but for most voters/users of software that option will always remain theoretical. We’re probably even further from satisfying #1, #2, and #3 in most projects, though one could see the Ada Initiative and GNOME OPW as attempts to deal with some aspects of #1, #3, and #4.

This is not to say that voting is the right way to make decisions about software development, but simply to ask: if we don’t have these checks in place, what are we doing instead? And are those alternatives good enough for us to have certainty that we’re actually enhancing freedom?

I am the CADT; and advice on NEEDINFOing old bugs en masse

[Attention conservation notice: probably not of interest to lawyers; this is about my previous life in software development.]

Bugsquad barnstar, under MPL 1.1

Someone recently mentioned JWZ’s old post on the CADT (Cascade of Attention Deficit Teenagers) development model, and that has finally pushed me to say:

I am the CADT.

I did the bug closure that triggered Jamie’s rant, and I wrote the text he quotes in his blog post.1

Jamie got some things right, and some things wrong. The main thing he got right is that it is entirely possible to get into a cycle where instead of seriously trying to fix bugs, you just do a rewrite and cross your fingers that it fixes old bugs. And yes, this can particularly happen when you’re young and writing code for fun, where the joy of a from-scratch rewrite can overwhelm some of your other good senses. Jamie also got right that I communicated the issue pretty poorly. Consider this post a belated explanation (as well as a reference for the next time I see someone refer to CADT).

But that wasn’t what GNOME was doing when Jamie complained about it, and I doubt it is actually something that happens very often in any project large enough to have a large bug tracking system (BTS). So what were we doing?

First, as Brendan Eich has pointed out, sometimes a rewrite really is a good idea. GNOME 2 was such a rewrite – not only was a lot of the old code a hairy mess, we decided (correctly) to radically revise the old UI. So in that sense, the rewrite was not a “CADT” decision – the core bugs being fixed were the kinds of bugs that could only be fixed with massive, non-incremental change, rather than “hey, we got bored with the old code”. (Immediately afterwards, GNOME switched to time-based releases, and stuck to that schedule for the better part of a decade, which should be further proof we weren’t cascading.)

This meant there were several thousand old bugs that had been filed against UIs that no longer existed, and often against code that no longer existed or had been radically rewritten. So you’ve got new code and old bugs. What do you do with the old bugs?

It is important to know that open bugs in a BTS are not free. Old bugs impose a cost on developers, because when they are trying to search relevant bugs, old bugs can make it harder to find the things they really should be working on. In the best case, this slows them down; in the worst case, it drives them to use other tools to track the work they want to do – making the BTS next to useless. This violates rule #1 of a BTS: it must be useful for developers, or else it all falls apart.

So why did we choose to reduce these costs by closing bugs filed against the old codebase as NEEDINFO (and asking people to reopen if they were still relevant), instead of re-testing and re-triaging them one-by-one, as Jamie would have suggested? A few reasons (for the mechanics, see the sketch after this list):

  • number of triagers v. number of bugs: there were, at the time, around a half-dozen active bug volunteers, and thousands of pre-GNOME 2 bugs. It was simply unlikely that we’d ever be able to review all the old bugs even if we did nothing else.
  • focus on new bugs: triagers and developers are much more effective on new bugs, since those bugs are against fresh code and the original filer is much more likely to respond to clarifying questions. So, all else being equal, time spent on new bugs was going to be much better for the software than time spent on old bugs.
  • steady flow of new bugs: if you’ve got a small number of new bugs coming in, perhaps you split your time – but we had no shortage of new bugs, nor of motivated bug reporters. So we may have paid some cost (by demotivating some reporters) but our scarce resource (developers) greatly appreciated it.
  • relative burden: with thousands of open bugs from thousands of reporters, it made sense to ask them to re-test their old bugs against the new code. Re-testing was a small burden for each reporter, once we distributed it.
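For the curious, here is roughly what the mechanical side of such a pass looks like with today’s tooling. This is a sketch, not what we actually ran in 2002 (which predates this API): it assumes the python-bugzilla library and a Bugzilla instance that still has a NEEDINFO status, and the URL, product, API key, and cutoff date are all hypothetical.

```python
# Rough sketch of a mass-NEEDINFO pass with today's tooling: python-bugzilla
# against a Bugzilla instance that still has a NEEDINFO status. The URL,
# product, API key, and cutoff date are all hypothetical.
import bugzilla

bz = bugzilla.Bugzilla("bugzilla.example.org", api_key="...")

# Find every still-open bug in the product...
query = bz.build_query(product="example-product", status=["NEW", "ASSIGNED"])
# ...and keep only the ones untouched since before the rewrite shipped.
# last_change_time stringifies like "20020601T00:00:00", so a plain string
# comparison against a date prefix is good enough for a sketch.
stale = [b for b in bz.query(query) if str(b.last_change_time) < "20020601"]

update = bz.build_update(
    status="NEEDINFO",
    comment=(
        "This bug was filed against a UI and codebase that have since been "
        "rewritten. If you can still reproduce it in the current release, "
        "please reopen with updated details."
    ),
)
# One batched update, so every stale bug gets the same explanation.
bz.update_bugs([b.id for b in stale], update)
```

The batching is the point: every stale bug gets the same carefully worded explanation, which is why it’s worth spending time on that boilerplate text.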

So when isn’t it a good idea to ask for more information about old bugs en masse?

  • Great at keeping old bugs triaged/relevant: If you have a very small number of old bugs that haven’t been touched in a long time, then they aren’t putting much burden on developers.
  • Slow code turnover: If your development process is such that it is highly likely that old bugs are still relevant (e.g., core has remained mostly untouched for many years, or effective use of TDD has kept the number of accidental new bugs low) this might not be a good idea.
  • No triggering event: In GNOME, there was a big event, plus a new influx of triagers, that made radical change sensible. I wouldn’t recommend this “just because”; it should go hand-in-hand with other large changes, like a major release or important policy changes that will make future triaging more effective.

Relatedly, the team practices mailing list has been discussing good practices for migrating bug tracking systems in the past few days, which has been interesting to follow. I don’t take a strong position on where Wikimedia’s Bugzilla falls on this point – MediaWiki has a fairly stable core, and the volume of incoming bugs may make triage of old bugs more plausible. But everyone running a very large Bugzilla for an active project should remember that this is a part of their toolkit.

  1. Both had help from others, but it was eventually my decision.

Some quick notes on the occasion of my fourth OSI board face-to-face

The past two days were my fourth OSI face-to-face board meeting, this time about a block from my office in San Francisco.

Image by opensource.com under a CC BY-SA license.

 

Some brief thoughts:

  • I’m excited about the hire of our new General Manager (and first paid employee), Patrick Masson. One of the hardest things for an all-volunteer organization is execution: finishing the last 10% of tasks; reliably repeating necessary things; generally taking ideas from conception to reality. I expect in the short term that hiring Patrick will really help with those first two, and in the longer term, as he gets his feet under him, he’ll be a great help for the last one. Potentially a very exciting time for OSI.
  • Relatedly, we have a lot of proposals in the air, particularly with regards to membership and education/outreach. I hope in the next six months we’ll see a lot of concrete results from them.
  • This meeting represented a bit of a changing of the guard: it was the first board meeting for Richard Fontana and the last for a long-time member of our community who will be term-limited out at the end of this term. I always enjoy working with Richard, and fresh blood is great, but it was a good reminder that time continues to pass and that I should focus on, you know, getting things done :)
  • This is a small thing, but the OSI website now has a brief summary of open source on the front page. We also recently helped a high-profile website update and correct their definition of open source – the kind of small thing that isn’t very visible but will be influential in the long run, as we educate the next wave of open source devs. Hoorah for small victories!
  • You can help (I): We continue to look for help to update the OSI website, both by improving our license information and by updating or writing other pages on the website.
  • You can help (II): Richard’s addition to the board is significant, because just like I was part of the first class of board members nominated by affiliates, he is the first board member elected by the membership. That is an important milestone for us, as we move towards being more accountable to, and governed by, the open source community. You can help by joining as an individual member, or by encouraging your organizations to join as an affiliate.
  • I continue to be grateful to the Wikimedia Foundation (and my boss!) in supporting me in doing this. There is never a good time to take two days mostly off.
  • The relationship between OSI and other groups promoting open definitions was a continued source of discussion at the meeting, and I imagine will continue to be a repeating theme. I hope I can continue to act as a bridge with organizations in open data and open culture, and I expect Patrick will be great in helping us bridge to open education.

I’m Donating to the Ada Initiative, and You Should Too

I was going to write a long, involved post about why I donated again to the Ada Initiative, and why you should too, especially in the concluding days of this year’s fundraising drive (which ends Friday).

Lady Ada Lovelace, by Alfred Edward Chalon [Public domain], via Wikimedia Commons
But instead Jacob Kaplan-Moss said it better than I can. Some key bits:

I’ve been working with (and on) open source software for over half my life, and open source has been incredibly good for me. The best things in my life — a career I love, the ability to live how and where I want, opportunities to travel around the world — they’ve all been a direct result of the open source communities I’ve become involved in.

I’m male, so I get to take advantage of the assumed competency our industry heaps on men. … I’ve never had my ideas poached by other men, something that happens to women all the time. … I’ve never been refused a job out of fears that I might get pregnant. I can go to conferences without worrying I might be harassed or raped.

So, I’ve been incredibly successful making a life out of open source, but I’m playing on the lowest difficulty setting there is.

This needs to change.

Amen to all that. The Ada Initiative is not enough – each of us needs to dig into the problem ourselves, not just delegate to others. But Ada is an important tool we have to attack the problem, doing great work to discuss, evangelize, and provide support. I hope you’ll join me (and Jacob, and many other people) in doing our part to support it in turn.

Forking and Standards: Why The Right to Fork Can Be Pro-Social

[I originally sent a version of this to the W3C’s Patents and Standards Interest Group, where a fellow member asked me to post it more publicly. Since then, others have also blogged in related veins.]

Blue Plastic Fork, by David Benbennick, used under CC-BY-SA 3.0.

It is often said that open source and open standards are different, because in open source, a diversity of forks is accepted/encouraged, while “forked” standards are confusing/inefficient since they don’t provide “stable reference points” for the people who have to implement the spec.

Here is the nutshell version of a critical way in which open source and open standards are in fact similar, and why licensing that allows forking therefore matters.

Open source and open standards are similar because both are generally created by communities of collaborators who must trust each other across organizational boundaries. This is relevant to the right to fork, because the “right to fork” is an important mechanism for helping those communities trust each other. That may seem surprising: the right to fork is often viewed as an anti-social right, one that allows someone to stomp their feet and walk away. But it is also pro-social, because it makes it impossible for any one person or organization to become a single point of failure. This is particularly important where the community is in large part a volunteer one, and where a single point of failure is widely perceived to have actually failed in the past.

Not coincidentally, “largely volunteer” and “failed single point of failure” describe the HTML working group (HTML WG) pretty closely. W3C was a single point of failure for HTML, and most HTML WG participants believe W3C’s failures stalled the development of HTML from 1999 to 2006. For some of the history, see David Baron’s recent post; for a more detailed history by the author of HTML5, you can look in the spec itself.

Because of this history, the HTML WG participants voted for permissive licensing of their specification, even though many of them have the most to gain from “stable reference points”, since they are often the ones who (when not writing the standards) are paid to implement them!

An alternate way to think about this is to think of the forkable license as a commitment mechanism for W3C: by committing to open licensing for the specification, W3C is saying to the HTML WG community “we will be good stewards – because we know otherwise you’ll leave”. (Alternate commitment mechanisms are of course a possibility, and I believe some have been discussed – but they’d need careful institutional design, and would all result in some variety of instability.)

So, yes: standards should be stable. But the options for HTML may not be “stable specification” or “unstable specification”. The options, based on the community’s vote and discussions, may well be “unstable specification” or “no (W3C) specification at all”, because many of the key engineers involved don’t appear to trust W3C much further than they can throw it. The forkable license is a popular potential mechanism to reduce that trust gap and allow everyone to move forward, knowing that their investments are not squandered should the W3C screw up again in the future.

Information diet weekend

As a slight sequel to my “feed reading is an open web problem” post, so far this weekend I have taken the following information diet steps:

RSS feeds: 610→339 (and counting).

Based on Google’s stats, I’d probably read about a million feed items in Reader. This is just too much. The complaints about attention span in this piece and in The Information Diet1 rang very true. Reader is a huge part of that problem for me. (Close friends will also note that I’ve been mostly off gchat and twitter during the work day since I started the new job, and that’s been great.) So I’ve spent time, and will spend more time soon, pruning this list.
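(If you want to do the same, Reader’s OPML export makes the pruning scriptable. Here’s a rough sketch in Python, using only the standard library; the file names and keywords are hypothetical examples, not my actual list.)

```python
# Rough sketch: prune a Google Reader OPML export by dropping feeds whose
# titles match keywords you no longer want to follow. File names and
# keywords are hypothetical.
import xml.etree.ElementTree as ET

DROP_KEYWORDS = {"politics", "gadgets", "celebrity"}

tree = ET.parse("google-reader-subscriptions.xml")
# Snapshot the parents first, since we remove children as we go.
for parent in list(tree.getroot().iter()):
    for outline in list(parent.findall("outline")):
        title = (outline.get("title") or outline.get("text") or "").lower()
        # Feed entries (as opposed to folders) carry an xmlUrl attribute.
        if outline.get("xmlUrl") and any(k in title for k in DROP_KEYWORDS):
            parent.remove(outline)

tree.write("pruned-subscriptions.xml", encoding="utf-8", xml_declaration=True)
```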

National news feeds: lots→~0; weekly paper news magazines: 0→2; local news feeds: small #→large #?

My friend Ed put in my head a long time ago that national news is not very useful. It riles the passions, but otherwise isn’t helpful: you’re not making the world a better place as a result of knowing more, and you’re not making yourself happier either.2  So you’re better off reading much less national political news, and much less frequently: hence the two new on-paper subscriptions to weekly news magazines.

Besides allowing you to get off the computer(!), the time saved can also be used to focus on things that either make your life better (e.g., happier) or that give you actionable information to resolve problems. To tackle both of those needs, I’d like to curate a set of local news feeds. I’ll be blogging more about this later (including what I’m already reading), but suggestions are welcome. I suspect that will make me much happier (or at least less angry), and present opportunities to actually do things, in ways that the national news obviously never can.

Moved from Reader→feedly.

The impending shutdown of Reader was obviously the catalyst for all this change; feedly seems not perfect, but pretty solid. I continue to keep an eye on newsblur (still a variety of issues) and feedbin.me (no mature Android client yet), since feedly is still (1) closed source and (2) without a visible business model – leaving it susceptible to the same shutdown problem as Reader.

“Two young children picket for the ILGWU carrying placards including ‘I Need a Healthy Diet!’ outside the Kolodney and Myers Employment Office” by the Kheel Center at Cornell University, used under CC-BY 2.0.

Steps still to come:

Separate the necessary from the entertaining

Joe pointed out to me that not all news sources are equal. There are feeds you must read in a timely manner (e.g., for me right now, changes in work-critical Wikipedia talk pages), and feeds that can be sampled instead. The traditional solution to this is folders or categories within the same app. But we’re starting to see apps that are optimized for the not-mission-critical entertainment feed stream (Joe specifically recommended Currents). I’d like to play with those apps, and use one of them to further prune my “serious feeds” list. Recommendations happily accepted.

Improve publication

I do want to participate, in some small way, in the news stream, by creating a stream of outbound articles and commentary on them. I never used Reader’s features for this, because of the walled-garden aspect. Many of our tools now make it easy to share out to places like Twitter and Facebook, but that means I’m contributing to the problem for my friends, not helping solve it. I’d like my outbound info to be less McDonald’s and more Chez Panisse :) The tools for that aren’t quite there, but this morning I stumbled across readlists, which looks like about 90% of something I’ve been looking for forever. I’ll keep an eye out, so again: good suggestions for outbound curation tools happily accepted.

What else?

I hate the weaselly “ask your audience” blog-post ending as much as anyone, but here I have a genuine curiosity: what else are friends doing for their info diets? I want to eventually get to the “digital sabbath”, but I’m not there yet; other tips/suggestions?

  1. capsule book review: great diagnosis of the problem, pretty poor recommendations for solutions
  2. It’s pretty much a myth that reading the news makes you a better voter: research shows even supposedly high-information voters have already decided well before they read any news, and if for some reason you’re genuinely undecided, you’re better off reading something like Ballotpedia than a stream of horse-race coverage.

Why feed reading is an open web problem, and what browsers could do about it

I’ve long privately thought that Firefox should treat feed reading as a first-class citizen of the open web, and integrate feed subscribing and reading more deeply into the browser (rather than the lame, useless live bookmarks). The impending demise of Reader has finally forced me to spit out my thoughts on the issue. They’re less polished than I like when I blog these days, but here you go – may they inspire someone to resuscitate this important part of the open web.

What? Why is this an open web problem?

When I mentioned this on twitter, an ex-mozillian asked me why I think this is the browser’s responsibility, and particularly Mozilla’s. In other words: why is RSS an open web problem? Why is it different from, say, email? It’s a fair question, and my answer has two main parts.

First, despite what some perceive as the “failure” of RSS, there is obviously a demand by readers to consume web content as an automatically updated stream, rather than as traditional pages.1 Google Reader users are extreme examples of this, but Facebook users are examples too: they’re no longer just following friends, but companies, celebrities, etc. In other words, once people have identified a news source they are interested in, we know many of them like doing something to “follow” that source and get updates in some sort of stream. And we know they’re doing this en masse! They’re just not doing it in RSS – they’re doing it in Twitter and Facebook. The fact that people like the reading model pioneered by RSS – following a company or news source rather than repeatedly visiting its web site – suggests to me that the widely perceived failure of RSS is not really a failure of RSS, but rather a failure of the user experience of discovering and subscribing to RSS.

Of course, lots of things are broadly felt desires that aren’t integrated into browsers – take email, for example. So why are feeds different? Why should browsers treat RSS as a first-class web citizen in a way they don’t treat other things? The difference is that if closed platforms (not just web sites, but platforms) become the only (or even best) way to experience “reading streams of web content”, that is a problem for the web. If my browser doesn’t tightly integrate email, the open web doesn’t suffer. If my browser doesn’t tightly integrate feed discovery and subscription, we get exactly what is happening: a mass migration away from consuming (and publishing!) news through the open web, with that activity channeled instead into closed, integrated publishing and subscribing stacks like Facebook and Twitter that give users a good subscribing and reading experience.

To put it another way: Tantek’s definition of the open web (if I may grotesquely simplify it) is a web where publishing content, implementing software that consumes that content, and accessing the content are all open/decentralized. RSS2 is the only existing way to do stream-based reading that meets these requirements. So if you believe (as I do) that reading content delivered in a stream is a central part of the modern web experience, then defending RSS is an important part of defending the open web.

So that’s, roughly, my why. Here’s a bunch of random thoughts on what the how might look like:

Discovery

When you go to CNN on Facebook, “like” – in plain English, with a nice icon – is right up there, front and center. RSS? Not so much. You have to know what the orange icon means (good luck with that!) and find it (either in the website or, back in the day, in the browser toolbar). No wonder no one uses it, when there is no good way to figure out what it means. Again, the failure is not the idea of feeds; the failure is in the way they were presented to users. A browser could do this the brute-force way (is there an RSS feed? show a notice bar offering to subscribe), but that would probably get irritating fast. It would be better to be smart about it. Have I visited nytimes.com five times today? Or five days in a row? Then give me a notice bar: “hey, we’ve noticed you visit this site an awful lot. Would you like to get updates from it automatically?” (As a bonus, implementing this makes your browser the browser that encourages efficiency. ;)
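For what it’s worth, the machinery for the brute-force half has existed for years: feed autodiscovery is just a <link rel="alternate"> tag in the page’s head. A minimal sketch of it in Python, assuming the requests and beautifulsoup4 libraries (the nytimes.com URL is just an example):

```python
# Minimal feed-autodiscovery sketch: find the feeds a page advertises in its
# <head> via <link rel="alternate"> tags.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

def discover_feeds(url):
    """Return the feed URLs a page advertises, resolved to absolute URLs."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    feeds = []
    for link in soup.find_all("link"):
        rels = [r.lower() for r in link.get("rel", [])]
        if ("alternate" in rels
                and link.get("type", "").lower() in FEED_TYPES
                and link.get("href")):
            # hrefs are often relative; resolve against the page URL.
            feeds.append(urljoin(url, link["href"]))
    return feeds

print(discover_feeds("https://www.nytimes.com/"))
```

The hard part was never finding the feed; it’s everything the browser does (or doesn’t do) after that.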

Subscription

Once you’ve figured out you can subscribe, then what? As it currently stands, someone tells you to click on the orange icon, and you do, and you’re presented with the NASCAR problem – a bewildering array of reader-service logos to pick from, like the sponsor decals on a race car – made worse because once you click, you have to create an account. Again, more fail; again, not a problem inherent in RSS, but a problem caused by the browser’s failure to provide an opinionated, useful default.

This is not an easy problem to solve, obviously. My hunch is that the right thing to do is provide a minimum viable product for light web users – possibly by supplementing the current “here are your favorite sites” links with a clean, light reader focused on only the current top headlines. Even without a syncing service behind it, that would still be helpful for those users, and would also encourage publishers to continue treating their feeds as first-class publishing formats (an important goal!).
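To make “clean, light reader” concrete, here is a toy sketch of that top-headlines view, assuming the feedparser library; the feed URLs are hypothetical placeholders, and a real built-in reader would obviously fetch these asynchronously and cache them.

```python
# Toy sketch of the "current top headlines" reader: show just the newest
# entry from each subscription. The feed URLs are hypothetical.
import feedparser

SUBSCRIPTIONS = [
    "https://example.com/news/rss.xml",
    "https://example.org/blog/atom.xml",
]

for url in SUBSCRIPTIONS:
    feed = feedparser.parse(url)
    source = feed.feed.get("title", url)
    if feed.entries:
        top = feed.entries[0]  # one headline per source keeps it "light"
        print(f"{source}: {top.get('title', '(untitled)')}")
        print(f"  {top.get('link', '')}")
```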

Obviously solving the NASCAR problem is still hard (as is building a more serious built-in app), but perhaps the rise of browser “app stores” and web intents/web activities might ease it this time around.

Other aspects

There are other aspects to this – reading, social, and provision of reading as a service. I’m not going to get into them here, because, well, I’ve got a day job, and this post is a month late as-is ;) And because the point is primarily (1) improving the RSS experience in the browser needs to be done and (2) some minimum-viable products would go a long way towards making that happen. Less-than-MVPs can be for another day :)

  1. By “RSS” and “feeds” in this post, I really mean the subscribing+reading experience; whether the underlying tech is RSS, Atom, Activity Streams, or whatever is really an implementation detail, as long as anyone can publish to, and read from, them in distributed fashion.
  2. again, in the very broad sense of the word, including more modern open specifications that do basically the same thing

Licensing confusion is great! (for lawyers)

I want to heartily unendorse Simon Phipps’ InfoWorld article about GitHub and licensing. Simon’s article makes it sound like no one benefits from sloppy licensing practices, and that is simply not true. Specifically, lawyers benefit! I regularly get calls from clients saying “I have no idea if I’m allowed to use <project X>, because it is on GitHub but doesn’t have a license.” When that happens, instead of money going to developers, where it could actually build something productive, I get to spend my time and the client’s money fixing a problem that the original author could easily have avoided by slapping an Apache license on the thing in the first place – or that GitHub could have avoided by adding default terms.

So, support your local open source lawyer today – publish source code without a license!1

  1. Tongue firmly in cheek, in case that isn’t obvious. Seriously, lawyers are the only ones who benefit from this situation, and all it saves you is the handful of seconds it takes to “git add LICENSE”. Always license your code, kids!

Showrunner and Show Bible? Or Cult?

I don’t currently do much heavily collaborative writing, but I’m still very interested in the process of creating very collaborative works. So one of the many stimulating discussions at Monktoberfest was a presentation by two awesome O’Reilly staffers about the future (and past) of authorship. Needless to say, collaborative authoring was a major theme. What particularly jumped out at me in the talk and the discussion afterwards was a nagging fear that any text authored by multiple people would necessarily lack the coherence and vision of the best single-author writing.

I’ve often been very sympathetic to this concern. Watching groups of people get together and try to collaboratively create work is often painful. Those groups that have done best, in my experience, are often those with some sort of objective standard for the work they’re creating. In software, that’s usually “it compiles,” followed (in the best case) by “it passes all the tests.” Where there aren’t objective standards all team members can work with – as is often the case with UI – the process tends to fall apart. Where there are really detailed objective standards that every contribution can be measured against – HTTP, HTML – open source is often not just competitive, but dominant.

On the flip side, you get no points for thinking of the canonical example of a single designer’s vision guiding the development of software. But Apple is an example that proves the rule – software UIs that are developed without reference to objective standards of good/bad are usually either bad, or run by a not-very-benevolent dictator who has spent decades refining his vision of authorship.

Wikipedia is another very large exception to the “many cooks” argument. It is an exception because most written projects can’t possibly have a rule of thumb so straightforward and yet effective as “neutral point of view,” because most written projects aren’t factual, dry or broken-up-into-small-chunks. In other words, most written projects aren’t encyclopedias and so can’t be written “by rule.”

Or at least that’s what I was thinking during the talk. In response to this, someone commented during the post-talk Q&A1 that essentially all TV shows are collaboratively written, and yet manage to be coherent. In fact, in our new golden age of TV drama they’re often more than coherent: they’re quite good, despite extremely complex plots sprawling over several years of effort. This has stuck in my head ever since, because it goes against all my hard-learned instincts.

I really don’t know what the trick is, since I’m not a TV writer. I suspect that in most cases the showrunner does it by (1) having a very clear vision of where the show is going (often not the case in software) and (2) clearly articulating and communicating that vision – i.e. having a good show bible and sticking to it.

If you’re not looking carefully, this looks a lot like what Aaron has rightly called a cult of personality. But I think, after being reminded about showrunners and show bibles, it is important to distinguish the two. It is a fine line, but there is a real difference between what Aaron is concerned about and skilled leadership. Maybe a good test is to ask that leader: where is your show bible? What can I read to understand the vision, and help flesh it out like the writer of an episode? If the answer is “follow whatever I’m thinking about this month” or “I’m too busy leading to write it down”, then you’ve got problems. But if your leadership can explain, don’t throw the baby out with the bathwater: that’s a person who has thought seriously about what they’re doing and how you can help them build something bigger and better than you could each do alone, not a cult leader.

  1. if you’re this person, please drop me a note and I’ll credit you!

Format(ting?) of Forever

Mark Pilgrim had a great post1 a little while ago where he talked about DocBook as the “Format of Forever”, and HTML as the “Format of Now”. He also argued that (since technical books are constantly outdated) generating technical books in the Format of Now instead of the Format of Forever made a lot of sense.

I’m working on a project that I’d like to see as a long-term, Format of (nearly) Forever kind of work. Specifically, it is my grandfather’s autobiography, which I’d like to see as a long-term enough work that I can give it to my own grandkids some day. As a result, I’ve been wrestling on and off with two questions: (1) what is the right ‘Format of Forever’ and (2) once you’ve chosen that source format, what is the best ‘Output Format of Now’? Thoughts welcome in comments; my own mumblings below.

Great-great-grandpa Lewis Hannum.

Grandpa, of course, wrote in the ultimate in formats of forever: typewriter. I scanned and OCRed it shortly after he passed away using the excellent gscan2pdf2, and have been slowly collecting other materials to use to supplement what he wrote – mostly pictures and scans of his Apollo memorabilia, but also family photos, like Grandpa’s Grandpa, Lewis Hannum, pictured above.

I’ve converted that to what I think may be the right ‘Format of Forever’: pandoc markdown, plus printed, easily re-scannable hard-copy. I’m thinking that markdown is the right source for a couple of reasons. Primarily: plain, simple ASCII text is hard to beat for future-proofing. Markdown is also easier to edit than HTML3.

The downside is that, while markdown is terrific for a very simple document (like grandpa’s writing), I’d like to experiment with some slightly non-traditional media inclusion. For example, it would be nice to include an audio recording of my brother at the 1982 Columbia Shuttle launch, or a scan of Grandpa’s patent. Markdown has some facilities for including other files, but they appear to be non-standard (i.e., each post-processor handles them differently). Even image inclusion and basic formatting often feel wonky. HTML would make me happier in that direction, I suspect. And of course styling the output is a pain, though I have various ideas on how to do that.
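For the conversion step itself, my current plan is roughly this (a sketch, assuming pandoc is installed; the file names stand in for my actual ones):

```python
# Sketch of the "Output Format of Now" build step: one markdown source,
# HTML and EPUB outputs via pandoc. Assumes pandoc is on the PATH; the
# file names are placeholders.
import subprocess

SOURCE = "autobiography.md"

# Standalone HTML with a table of contents. Raw HTML embedded in the
# markdown (say, an <audio> tag for the 1982 launch recording) passes
# through untouched on this path.
subprocess.run(
    ["pandoc", SOURCE, "--standalone", "--toc", "-o", "autobiography.html"],
    check=True,
)

# EPUB for e-readers; embedded media is shakier here, which is part of
# why the source-format question stays open.
subprocess.run(["pandoc", SOURCE, "-o", "autobiography.epub"], check=True)
```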

Thoughts? Tips?

  1. vanished since I originally drafted this, but link kept for reference
  2. Which, for the record, was roughly 1,000 times better than Canon’s bundled scanning crapware.
  3. which is sort of pathetic; how come we still don’t have a decent simple HTML editor?