Slide embedding from Commons

A friend of a friend asked this morning where she should post her slides online.

I suggested Wikimedia Commons, but it turns out she wanted something like Slideshare’s embedding. So here’s a test of how that works (timely, since soon Wikimanians will be uploading dozens of slide decks!)

This is what happens when you use the default Commons “Use this file on the web -> HTML/BBCode” option on a slide deck pdf:
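For reference, the embed code Commons gives you is roughly the following – a static thumbnail of the deck's first page, wrapped in a link to the file description page (the file name and thumbnail path here are hypothetical):

    <!-- Roughly what "Use this file on the web -> HTML/BBCode" produces
         for a PDF; the file name and hash path are hypothetical. The
         thumbnail is a static image of page one, linking back to the
         file description page on Commons. -->
    <a href="https://commons.wikimedia.org/wiki/File:Example_slides.pdf">
      <img alt="Example slides"
           src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Example_slides.pdf/page1-512px-Example_slides.pdf.jpg"
           width="512" />
    </a>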

Not the worst outcome – clicking gets you to a clickable deck. No controls inline in the embed, though. And importantly nothing to show that it is clickable :/

Compare with the same deck, uploaded to Slideshare:

Some work to be done if we want to encourage people to upload to Commons and share later.

Update: a commenter points me at viewer.js, which conveniently includes a wordpress plugin! The plugin is slightly busted (I had to move some files around to get it to work in my install) but here’s a demo:
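If you'd rather skip the plugin, ViewerJS's documented embedding pattern is just an iframe pointed at wherever the viewer is installed, something like this (the /ViewerJS/ install path and the PDF path are hypothetical):

    <!-- A minimal ViewerJS embed, per its documented iframe pattern;
         both paths are hypothetical. Unlike the Commons snippet above,
         the viewer renders the PDF inline, with paging controls. -->
    <iframe src="/ViewerJS/#../slides/example-deck.pdf"
            width="600" height="450"
            allowfullscreen webkitallowfullscreen>
    </iframe>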

Update2: bugs are fixed upstream and in an upcoming 0.5.2 release of the plugin. Hooray!

Designers and Creative Commons: Learning Through Wikipedia Redesigns

tl;dr: Wikipedia redesigns mostly ignore attribution of Wikipedia authors, and none approach the problem creatively. This probably says as much or more about Creative Commons as it does about the designers.

disclaimer-y thing: so far, this is for fun, not work; haven’t discussed it at the office and have no particular plans to. Yes, I have a weird idea of fun.

A mild refresh from interfacesketch.com.

It is no longer surprising when a new day brings a new redesign of Wikipedia. After seeing one this weekend with no licensing information, I started going back through seventeen of them (most of the ones listed on-wiki) to see how (if at all) they dealt with licensing, attribution, and history. Here’s a summary of what I found.

Completely missing

Perhaps not surprisingly, many designers completely remove attribution (i.e., history) and licensing information in their designs. Seven of the seventeen redesigns I surveyed were in this camp. Some of them were in response to a particular, non-licensing-related challenge, so it may not be fair to lump them into this camp, but good designers still deal with real design constraints, and licensing is one of them.

History survives – sometimes

The history link is important, because it is how we honor the people who wrote the article, and comply with our attribution obligations. Five of the seventeen redesigns lacked any licensing information, but at least kept a history link.

Several of this group included some legal information, such as links to the privacy policy or, in one case, to the Wikimedia Foundation trademark page. This suggests that our licensing information may be presented even worse than our other legal information, since it gets cut even by designers who tolerate some of our other legalese.

Same old, same old

Four of the seventeen designs keep the same old legalese, though one fails to comply by making it impossible to get to the attribution (history) page. Nothing wrong with keeping the existing language, but it could reflect a sad conclusion: that licensing information isn’t worth the attention of designers; or (more generously) that they don’t understand the meaning/utility of the language, so it just gets cargo-culted around. (Credit to Hamza Erdoglu, who was the only designer who specifically went out of his way to show the page footer in one of his mockups.)

A winner, sort of!

Of the seventeen sites I looked at, exactly one did something different: Wikiwand. It is pretty minimal, but it is something. The one thing: as part of the redesign, it adds a big header/splash image to the page, and then adds a new credit specifically for the author of the header/splash image down at the bottom of the page with the standard licensing information. Arguably it isn’t that creative, just complying with their obligations from adding a new image, but it’s at least a sign that not everyone is asleep at the wheel.

Observations

This is surely not a large or representative sample, so all my observations from this exercise should be taken with a grain of salt. (They’re also speculative since I haven’t talked to the designers.) That said, some thoughts besides the ones above:

  • Virtually all of the designers who wrote about why they did the redesign mentioned our public-edit-nature as one of their motivators. Given that, I expected history to be more frequently/consistently addressed. Not clear whether this should be chalked up to designers not caring about attribution, or the attribution role of history being very unclear to anyone who isn’t an expert. I suspect the latter.
  • It was evident that some of these designers had spent a great deal of time thinking about the site, and yet were unaware of licensing/attribution. This suggests that people who spend less time with the site (i.e., 99.9% of readers) are going to be even more ignorant.
  • None of the designers felt attribution and licensing were even important enough to experiment on or mention in their writeups. As I said above, this is understandable but sort of sad, and I wonder how to change it.

Postscript, added next morning:

I think it’s important to stress that I didn’t link to the individual sites here, because I don’t want to call out particular designers or focus on their failures/oversights. The important (and as I said, sad) thing to me is that designers are, historically, a culture concerned with licensing and attribution. If we can’t interest them in applying their design talents to our problem, in the context of the world’s most famously collaborative project, we (lawyers and other Commoners) need to look hard at what we’re doing, and how we can educate and engage designers to be on our side.

I should also add that the WMF design team has been a real pleasure to work with on this problem, and I look forward to doing more of it. Some stuff still hasn’t made it off the drawing board, but they’re engaged and interested in this challenge. Here is one example.

I am the CADT; and advice on NEEDINFOing old bugs en masse

[Attention conservation notice: probably not of interest to lawyers; this is about my previous life in software development.]

Bugsquad barnstar, under MPL 1.1

Someone recently mentioned JWZ’s old post on the CADT (Cascade of Attention Deficit Teenagers) development model, and that finally has pushed me to say:

I am the CADT.

I did the bug closure that triggered Jamie’s rant, and I wrote the text he quotes in his blog post.1

Jamie got some things right, and some things wrong. The main thing he got right is that it is entirely possible to get into a cycle where instead of seriously trying to fix bugs, you just do a rewrite and cross your fingers that it fixes old bugs. And yes, this can particularly happen when you’re young and writing code for fun, where the joy of a from-scratch rewrite can overwhelm some of your other good senses. Jamie also got right that I communicated the issue pretty poorly. Consider this post a belated explanation (as well as a reference for the next time I see someone refer to CADT).

But that wasn’t what GNOME was doing when Jamie complained about it, and I doubt it is actually something that happens very often in any project large enough to have a large bug tracking system (BTS). So what were we doing?

First, as Brendan Eich has pointed out, sometimes a rewrite really is a good idea. GNOME 2 was such a rewrite – not only was a lot of the old code a hairy mess, we decided (correctly) to radically revise the old UI. So in that sense, the rewrite was not a “CADT” decision – the core bugs being fixed were the kinds of bugs that could only be fixed with massive, non-incremental change, rather than “hey, we got bored with the old code”. (Immediately afterwards, GNOME switched to time-based releases, and stuck to that schedule for the better part of a decade, which should be further proof we weren’t cascading.)

This meant there were several thousand old bugs that had been filed against UIs that no longer existed, and often against code that no longer existed or had been radically rewritten. So you’ve got new code and old bugs. What do you do with the old bugs?

It is important to know that open bugs in a BTS are not free. Old bugs impose a cost on developers, because when they are trying to search for relevant bugs, old bugs can make it harder to find the things they really should be working on. In the best case, this slows them down; in the worst case, it drives them to use other tools to track the work they want to do – making the BTS next to useless. This violates rule #1 of a BTS: it must be useful for developers, or else it all falls apart.

So why did we choose to reduce these costs by closing bugs filed against the old codebase as NEEDINFO (and asking people to reopen if they were still relevant) instead of re-testing and re-triaging them one-by-one, as Jamie would have suggested? A few reasons:

  • number of triagers v. number of bugs: there were, at the time, around a half-dozen active bug volunteers, and thousands of pre-GNOME 2 bugs. It was simply unlikely that we’d ever be able to review all the old bugs even if we did nothing else.
  • focus on new bugs: new bugs are where triagers and developers are much more likely to be useful – those bugs are against fresh code; the original filer is much more likely to respond to clarifying questions; etc. So all else being equal, time spent on new bugs was going to be much better for the software than time spent on old bugs.
  • steady flow of new bugs: if you’ve got a small number of new bugs coming in, perhaps you split your time – but we had no shortage of new bugs, nor of motivated bug reporters. So we may have paid some cost (by demotivating some reporters) but our scarce resource (developers) greatly appreciated it.
  • relative burden: with thousands of open bugs from thousands of reporters, it made sense to ask them to re-test their old bugs against the new code. Reviewing their old bugs was a small burden for each of them, once we distributed it. (A sketch of the mechanics follows this list.)
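For the curious, the mechanics of such a mass change are the easy part. Here is a rough sketch of how it might look against a modern Bugzilla REST API – purely illustrative, since our change long predated that API, and NEEDINFO is only a valid status on instances (like GNOME’s old Bugzilla) configured with one; the instance URL, product, and version numbers are all made up:

    import requests

    BUGZILLA = "https://bugzilla.example.org/rest"  # hypothetical instance
    API_KEY = "..."  # a real run needs a key with editbugs-level rights

    COMMENT = (
        "Closing as NEEDINFO: this bug was filed against the pre-GNOME-2 "
        "codebase, which has since been rewritten. If you can reproduce "
        "the problem with current code, please reopen with details."
    )

    def mass_needinfo(product, old_versions):
        # Find still-open bugs filed against versions of the old codebase.
        resp = requests.get(BUGZILLA + "/bug", params={
            "product": product,
            "version": old_versions,  # repeated params act as OR
            "status": ["NEW", "ASSIGNED", "REOPENED"],
            "limit": 0,  # 0 = unlimited on many Bugzilla installs
        })
        resp.raise_for_status()
        for bug in resp.json()["bugs"]:
            # Move each bug to NEEDINFO with an explanation, so the
            # original reporter can reopen it if it is still relevant.
            requests.put(
                BUGZILLA + "/bug/%d" % bug["id"],
                params={"api_key": API_KEY},
                json={"status": "NEEDINFO", "comment": {"body": COMMENT}},
            ).raise_for_status()

    mass_needinfo("gnome-core", ["1.0", "1.2", "1.4"])

The script is trivial; the policy questions around it were the hard part.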

So when isn’t it a good idea to close old bugs and ask for more information?

  • Great at keeping old bugs triaged/relevant: If you have a very small number of old bugs that haven’t been touched in a long time, then they aren’t putting much burden on developers.
  • Slow code turnover: If your development process is such that it is highly likely that old bugs are still relevant (e.g., core has remained mostly untouched for many years, or effective use of TDD has kept the number of accidental new bugs low) this might not be a good idea.
  • No triggering event: In GNOME, there was a big event, plus a new influx of triagers, that made it make sense to do radical change. I wouldn’t recommend this “just because” – it should go hand-in-hand with other large changes, like a major release or important policy changes that will make future triaging more effective.

Relatedly, the team practices mailing list has been discussing good practices for migrating bug tracking systems in the past few days, which has been interesting to follow. I don’t take a strong position on where Wikimedia’s Bugzilla falls on this point – MediaWiki has a fairly stable core, and the volume of incoming bugs may make triage of old bugs more plausible. But everyone running a very large Bugzilla for an active project should remember that this is a part of their toolkit.

  1. Both had help from others, but it was ultimately my decision.

Summarizing “hacker legal education” crisply and cleanly

James Grimmelmann is a better writer than I am. I already knew this, but in this commentary on Biella Coleman’s (excellent) Coding Freedom, he captures something I have struggled to express for years in two crisp, clean sentences:

Hacker legal education, with its roots in programming, is strong on formal precision and textual exegesis. But it is notably light on legal realism: coping with the open texture of the law and sorting persuasive from ineffective arguments.

This distinction is worth keeping in mind, for both sides of the professional/amateur legal discussion, to understand the relative strengths and weaknesses of their training and experience.

(Note that James says this, and I quote it, with all due love and respect, since we were both programmers before we were lawyers.)

Reviewing the Manual of Style for Contract Drafting by Editing Twitter’s Patent Agreement


Synopsis for lawyers

You should really buy the Manual of Style for Contract Drafting – it’ll make you a better drafter and editor. This post applies the book’s rules and guidelines to a publicly-available legal agreement (Twitter’s Innovator’s Patent Agreement) to explain what the book is and why it is valuable.

tl;dr, for programmers

Contract writers have no equivalent of RFC 2119, mostly because contract drafting is hard. MSCD is a good try – defining terms and demanding consistency, serving as the compiler that lawyers lack. This post is a rewrite and fleshing-out of the GitHub edit history.

    <dl id="attachment_2631" class="wp-caption aligncenter" style="max-width:508px">
        <dt><a href="http://i0.wp.com/commons.wikimedia.org/wiki/File:Sales_contract_Shuruppak_Louvre_AO3760.jpg?ssl=1"><img src="http://i2.wp.com/tieguy.org/blog/wp-content/uploads/2013/10/Sales_contract_Shuruppak_Louvre.jpg?resize=508%2C480" alt="A contract for the selling of a field and a house." class="size-full wp-image-2631" /></a></dt>
        <dd>A contract for the selling of a field and a house, from <a href="https://en.wikipedia.org/wiki/Shuruppak">the Sumerian city of Shurrupak</a>, now in the Louvre.</dd>
    </dl><br />


Thoughts on the CC Summit

I was lucky enough to attend the Creative Commons Global Summit in Buenos Aires last week, including the pre-conference session on copyright reform.

Tattoo (cropped), by Os Keyes, used under CC BY-SA

Like Wikimania, there is simply too much here to summarize in coherent chunks, so here are my notes and thoughts from my return flight:

  • Maira Sutton of EFF summed up my strongest feeling about the event (and Wikimania, and many others) quite perfectly: “Getting a chance to finally meet those people you’ve admired from the Internet… Yea I hope that never gets old.” I hope I always remember that we are parts of a movement that draws much of its strength from being human – from being, simply, good to each other, and enjoying that. I realize sometimes being a lawyer gets in the way of that, but hopefully not too often ;)
  • At the copyright reform mini-conference, it was super-interesting to see the mix of countries playing offense and defense on copyright reform. Reform efforts discussed appeared to be patchwork; i.e., folks asking for one thing in one country, another in others, varying a great deal based on local circumstances. (The one “global” proposed solution was from American University/InfoJustice, who have worked with a team of lawyers from around the world to create a sort of global fair use/fair dealing exception called flexible use. An interesting idea.) Judging from my conversations at Wikimania and with Wikipedians at CC Summit, this is an area of great interest to Wikipedians, and possibly one where we could have a great impact as an example of the benefits of peer production.
  • Conversation around the revised CC 4.0 license drafts was mostly quite positive. The primary expressed concerns were about fragmentation and cross-jurisdictional compatibility. I understand these concerns better now, having engaged in several good discussions about them with folks at the conference. That said, I came away only confirmed on my core position on CC’s license drafting: when in doubt, CC should always err on the side of creating a global license and enabling low-complexity sharing.
  • This is not to say CC should rush things for 4.0, or be legally imprecise – just that they must be careful not to accidentally overlook the negative costs of overlawyering. Unfortunately, creating something knowingly imperfect is a profoundly difficult position for a lawyer to be in; something we’re trained to avoid at almost all costs. It is easiest to be in this position when there is an active negotiator on the other side, since they can actively persuade you about the compromise – instead of arguing against yourself. Public license drafting is perhaps unusually susceptible to causing this problem in lawyers; I do not envy the 4.0 drafters their difficult task.
  • There was a fair bit of (correct) complaining about the definition of Effective Technological Measures in the license – the most lawyerly piece of writing in 3.0 and the current drafts. Unfortunately, this is inevitable – to create a new, independent definition, instead of referring to the statute, is to risk protecting too much or too little, neither of which would be the correct outcome for CC. It would also make the license much longer than it currently is. I believe that the right solution is to drop the definition, and instead have a parallel distribution clause, where the important definition is easy: the recipient must be able to obtain at least one copy in which they are not prohibited from exercising the rights already defined. ETM then becomes much less important to define precisely.
  • Interesting to see that the distribution of licenses is mostly getting more free over time. After seeing the focuses of the various Creative Commons affiliates, I think this is probably not coincidence – they all seem quite dedicated to educating governments, OERs, and others about transaction costs associated with less free licenses, and many report good results.
  • That said, licensing data, even under free licenses, is going to be tricky – the trend there appears to be (at least) attribution, not disclaimer of rights. Attribution will be complicated for database integration, from both an engineering and a legal perspective.
  • Combined with the push towards government/institutional publication of data, there were a lot of talks and discussions about what to do with information that is difficult or inappropriate to edit, like scientific articles or historical documents. Lots of people think there is a lot of value to be added by tools that allow collaborative annotation and discussion, even on documents that can’t/shouldn’t be collaboratively edited. I think this could be a Wiki strength, if we built (or borrowed) the right tools, and I really hope we start on that soon.
  • Great energy in general from the affiliates around two areas: copyright reform, and encouragement of government and institutions to use CC licenses. I think these issues, and not the licenses themselves, will really be what drives the affiliates in the next 3-5 years. Remains to be seen where exactly CC HQ will fit into these issues – they are building a great team around OER, and announced support for copyright reform, but these are hard issues to lead from the center on, because they often need such specific, local knowledge.
  • Met lots of great people; too many to list here, but particularly great conversations with Prodi, Rafael, and folks from PLOS (who I think Wiki should partner with more). And of course catching up with a lot of old friends as well. In particular, perhaps my conversation with Kragen will spur me to finish my long-incomplete essay on Sen and Stallman.
  • I also had a particularly great conversation with my oldest friend, Dan, about what a modern-day attribution looks like. Now that we’re no longer limited to static textual lists of authors, as we have been since the dawn of the book, what can we do? How do we scale to mega-collaborative documents (like the Harry Potter page) that have hundreds or thousands of authors? How do we make it more two-way, so that there is not just formal attribution but genuine appreciation flowing both ways (without, of course, creating new frictions)? The “thanks” feature we’ve added to Wikipedia seems one small way to do this; Dan spoke also of how retweets simultaneously attribute and thank. But both of those are in walled silos – how can we take them outside of that?
  • Saw a great talk on “Copyright Exceptions in the Arab World”, a pan-Arab survey; it really drove home how fragmented copyright statutes can be globally. (Translation, in particular, seemed an important and powerful exception, though my favorite exception was for military bands.) Of course, the practical impact of this is nearly nil – many of the organizations that are in charge of administering these exceptions literally don’t know they exist, and of course most of the people using the copyrights in the culture not only don’t know, they don’t care.
  • Beatriz Busaniche gave a nice talk; perhaps the most important single thing to me: a reminder that we should remember that even today most cultural communication takes place outside of (intentional) copyright.
  • Lessig is still Lessig; a powerful, clear, lucid speaker. We need more like him. In that vein, and after a late-night discussion about this exact topic, I remind speakers that before their next conference they should read Presentation Zen and Slideology.
  • Database rights session was interesting and informative, but perhaps did not ultimately move the ball forward very much. I fear that the situation is too complex, and the underlying legal concepts still too immature, for the big “add database to share-alike” step that CC is now committed to taking with 4.0. My initial impression (still subject to more research) is that Wikipedia’s factual and jurisdictional situation will avoid problems for us, but it may be worse for others.
  • After seeing all the energy from affiliates, as well as seeing it in Wikimedia’s community, I’m really curious about how innovation tends to happen in global NGOs like Red Cross or Greenpeace. Do national-level organizations discover issues and bring them to the center? Or is it primarily the center spotting issues (and solutions) and spurring the affiliates onward? Some mix? Obviously early CC was the former (Lessig personifies leadership from a center outwards) but the current CC seems to lean towards the latter. (This isn’t necessarily a bad place to be – it can simply reflect, as I think it does here, that the local affiliates are more optimistic and creative because they are closer to conditions on the ground.)
  • Watched two Baz Luhrmann films on my flight back, a fun reminder of the power of remix. I know most of my film friends think he’s awful, and admittedly for the first time I realized that Claire Danes is … not very good … in Romeo and Juliet. But in Luhrmann there is a zest, a gleeful chopping, mixing, and recreating of our culture. And I love that; I hope CC can help enable that for the next generation of Luhrmanns.

I’m Donating to the Ada Initiative, and You Should Too

I was going to write a long, involved post about why I donated again to the Ada Initiative, and why you should too, especially in the concluding days of this year’s fundraising drive (which ends Friday).

Lady Ada Lovelace, by Alfred Edward Chalon [Public domain], via Wikimedia Commons
But instead Jacob Kaplan-Moss said it better than I can. Some key bits:

I’ve been working with (and on) open source software for over half my life, and open source has been incredibly good for me. The best things in my life — a career I love, the ability to live how and where I want, opportunities to travel around the world — they’ve all been a direct result of the open source communities I’ve become involved in.

I’m male, so I get to take advantage of the assumed competency our industry heaps on men. … I’ve never had my ideas poached by other men, something that happens to women all the time. … I’ve never been refused a job out of fears that I might get pregnant. I can go to conferences without worrying I might be harassed or raped.

So, I’ve been incredibly successful making a life out of open source, but I’m playing on the lowest difficulty setting there is.

This needs to change.

Amen to all that. The Ada Initiative is not enough – each of us needs to dig into the problem ourselves, not just delegate to others. But Ada is an important tool we have to attack the problem, doing great work to discuss, evangelize, and provide support. I hope you’ll join me (and Jacob, and many other people) in doing our part to support it in turn.

Forking and Standards: Why The Right to Fork Can Be Pro-Social

[I originally sent a version of this to the W3C’s Patents and Standards Interest Group, where a fellow member asked me to post it more publicly. Since then, others have also blogged in related veins.]

Blue Plastic Fork, by David Benbennick, used under CC-BY-SA 3.0.

It is often said that open source and open standards are different, because in open source, a diversity of forks is accepted/encouraged, while “forked” standards are confusing/inefficient since they don’t provide “stable reference points” for the people who have to implement the spec.

Here is the nutshell version of a critical way in which open source and open standards are in fact similar, and why licensing that allows forking therefore matters.

Open source and open standards are similar because both are generally created by communities of collaborators who must trust each other across organizational boundaries. This is relevant to the right to fork, because the “right to fork” is an important mechanism to help those communities trust each other. This is surprising – the right to fork is often viewed as an anti-social right, which allows someone to stomp their feet and walk away. However, it is also pro-social, because it makes it impossible for any one person or organization to become a single point of failure. This is particularly important where the community is in large part a volunteer one, and where a single point of failure is widely perceived to have actually failed in the past.

Not coincidentally, “largely volunteer” and “failed single point of failure” describe the HTML working group (HTML WG) pretty closely. W3C was a single point of failure for HTML, and most HTML WG participants believe W3C’s failures stalled the development of HTML from 1999 to 2006. For some of the history, see David Baron’s recent post; for a more detailed history by the author of HTML5, you can look in the spec itself.

Because of this history, the HTML WG participants voted for permissive licenses for their specification. They voted for permissive licenses, even though many of them have the most to gain from “stable reference points”, since they are often the ones who (when not writing the standards) are paid to implement the standards!

An alternate way to think about this is to think of the forkable license as a commitment mechanism for W3C: by committing to open licensing for the specification, W3C is saying to the HTML WG community “we will be good stewards – because we know otherwise you’ll leave”. (Alternate commitment mechanisms are of course a possibility, and I believe some have been discussed – but they’d need careful institutional design, and would all result in some variety of instability.)

So, yes: standards should be stable. But the options for HTML may not be “stable specification” or “unstable specification”. The options, based on the community’s vote and discussions, may well be “unstable specification” or “no (W3C) specification at all”, because many of the key engineers involved don’t appear to trust W3C much further than they can throw it. The forkable license is a popular potential mechanism to reduce that trust gap and allow everyone to move forward, knowing that their investments are not squandered should the W3C screw up again in the future.

San Francisco News

When I wrote about cutting back on national news, and trying to get more serious about local news in SF, a few people asked that I share my sources of SF news. Here’s a first cut, in alphabetical order:

  • Bay Nature – Bay-area nature-related news and events
  • Burrito Justice – hard to summarize, but city history, neighborhood-related humor, and an obsession with the Sutro Tower
  • CitiReport – city and state politics; unfortunately not full-feed
  • Curbed SF – great mix of real estate, history, and other related topics; more city politics/less house-by-house coverage than SocketSite
  • Fog City Journal – cranky, but very insider-y, city politics coverage
  • LiveSOMA – coverage of my neighborhood, though sadly on life support
  • SF Public Press – think of them as an on-paper NPR. I donate, and you should too!
  • SFist – quirky-ish local news
  • SocketSite – real estate news
  • Spots Unknown – little bits of SF stories
  • SPUR Blog – urban development policy! what could be more fun!
  • StreetsBlog SF – great coverage of transit and biking in the city; must-read if you’re a biker in the city
  • The Bay Citizen – had great non-profit coverage of the city; not clear what happens with that now that they’ve merged with CIR
  • The Snitch – valuable mostly for its pointers to lots of other city news sources
  • ThinkWalks – actual real-world walks in the city
  • Whilst in SF – because sometimes you have to laugh

I’m very open to more suggestions, particularly more “mainstream” suggestions. The traditional media frankly often pisses me off, but if I want to understand San Francisco, I should probably also take in some of the same media stream as most of my neighbors.

At the Wikimedia Foundation (for, um, three months now)

Since it was founded 12 years ago this week, Wikipedia has become an indispensable part of the world’s information infrastructure. It’s a kind of public utility: You turn on the faucet and water comes out; you do an Internet search and Wikipedia answers your question. People don’t think much about who creates it, but you should. We do it for you, with love.

Wikimedia Foundation Executive Director Sue Gardner, from http://blog.wikimedia.org/2013/01/14/wikipedia-the-peoples-encyclopedia/

As Sue says, the people who create Wikipedia are terrific. I’m lucky enough to say that I’ve just wrapped up my first three months as their lawyer – as Deputy General Counsel at the Wikimedia Foundation. Consider this the personal announcement I should have made three months ago :)

Wikimania 2012 Group Photograph, by Helpameout, under CC-BY-SA 3.0.

Greenberg Traurig was terrific for me: Heather has a wealth of knowledge and experience about how to do deals (both open source and otherwise), and through her, I did a lot of interesting work for interesting clients. Giving up that diversity and experience was the hardest part of leaving private practice.

Based on the evidence of the first three months, though, I made a great choice – I’ve replaced diversity of clients with a vast diversity of work; replaced one experienced, thoughtful boss with one of equal skill but different background (so I’m learning new things); and replaced the resources (and distance) of a vast firm with a small but tight and energized team. All of these have been wins. And of course working on behalf of this movement is a great privilege, and (so far) a pleasure. (With no offense to GT, pleasure is rarely part of the package at a large firm.)

The new scope of the work is perhaps the biggest change. Where I previously focused primarily on technology licensing, I’m now an “internet lawyer” in the broadest sense of the word: I, my (great) team, and our various strong outside counsel work on topics from employment contracts, to privacy policies, to headline-grabbing speech issues, to patent/trademark/copyright questions – it is all over the place. This is both challenging, and great fun – I couldn’t ask for a better place to be at this point in my life. (And of course, being always on the side of the community is great too – though I did more of that at Greenberg than many people would assume.)

I don’t expect that this move will have a negative impact on my other work in the broader open source community. If anything, not focusing on licensing all day at work has given me more energy to work on OSI-related things when I get home, and I have more flexibility to travel and speak with and for various communities too. (I’m having great fun being on the mailing lists of literally every known open source license revision community, for example. :)

If you’d like to join us (as we work to get the next 1/2 billion users a month), there are a lot of opportunities open right now, including one working for me on my team, and some doing interesting work at the overlap between community, tech, and product management. Come on over – you won’t regret it :)