Quick brain dump after a bike ride home: free software took a huge leap in the late 90s and early 00s in large part because of non-ideological advantages that the rest of the world is now competing with or surpassing:

- Collaboration tools: Because we got to the ‘net first, our tools for collaborating with each other were simply better than what proprietary developers were using: cvs, mailman, wiki, etc., were all better than the siloed old-school tools. Modern best-of-breed collaboration tools have all learned from what we did and added proprietary sauce on top: github, slack, Google Docs, etc. So our tools are now (at best) as productive as our proprietary counterparts’, and sometimes less productive but ideologically agreeable.
- Release processes: “Release early/release often” made us better partners for our users. We’re now actively behind here: compare how often a mobile app or web user gets updates, exactly as the author intended, relative to a user of a modern Linux distro.
- Zero cost: We did things for no (direct) cost by subsidizing our work through college, startups, or consulting gigs; now everyone has a subsidize-by-selling-something-else model (usually advertising, though sometimes freemium). Again, advantage (mostly?) lost.
- Knowing our users: We knew a lot about our users, because we were our biggest users, and we talked to other users a lot; this was more effective than what passed for software design in the late 90s. This has been eclipsed by extensive A/B testing throughout the industry, and (to a lesser extent) by more extensive use of direct user testing and design thinking.
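To make concrete what “extensive A/B testing” replaces gut feel with, here is a minimal two-proportion z-test on a hypothetical experiment (the function name and all numbers are invented for illustration; real pipelines use far larger samples and proper statistics libraries):

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors apart the two
    conversion rates are, using the pooled rate for the standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant B converts 5.5% vs. A's 5.0%,
# with 20,000 users in each arm.
z = ab_z_score(1000, 20000, 1100, 20000)
print(round(z, 2))  # ≈ 2.24, past the usual 1.96 threshold for 95% confidence
```

The point is less the arithmetic than the workflow: instead of developers guessing what users want, every change can be measured against a control group.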
None of these are terribly original observations – all of these have been remarked on before. But after playing some with Google Photos this weekend, I’m ready to add another one to the list:
- “big data”: It didn’t matter that neither libre nor proprietary developers had their users’ data, because no one could do much that was useful with it. Now, Gmail’s spam filtering is better than SpamAssassin precisely because of all the data they have. Google Photos is nearly magical, again because they’ve processed literally billions of photos. That’s going to be difficult (impossible?) for pure peer production models to match. (I didn’t list machine learning because the techniques there can literally be done by summer interns – the blocker is the training data, not the processes.)
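The “techniques are simple, data is the blocker” point can be illustrated with a toy naive Bayes spam classifier. This is a hypothetical sketch, not SpamAssassin’s or Gmail’s actual implementation; the whole algorithm fits in a few lines, and everything interesting would come from the volume and quality of the training messages:

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        totals[label] += 1
        counts[label].update(text.lower().split())
    return counts, totals

def classify(text, counts, totals):
    """Pick the label maximizing a naive Bayes log-score,
    with add-one smoothing for unseen words."""
    words = text.lower().split()
    best, best_score = None, float("-inf")
    for label in counts:
        vocab = sum(counts[label].values()) + len(counts[label]) + 1
        score = math.log(totals[label] / sum(totals.values()))
        for w in words:
            score += math.log((counts[label][w] + 1) / vocab)
        if score > best_score:
            best, best_score = label, score
    return best

# Four training messages; a production filter would train on millions.
data = [("win money now", "spam"), ("cheap money offer", "spam"),
        ("lunch meeting tomorrow", "ham"), ("see you at lunch", "ham")]
counts, totals = train(data)
print(classify("win cheap money", counts, totals))  # prints "spam"
```

With four messages this toy barely works; with billions it becomes Gmail. That asymmetry is exactly why holding the data, not the algorithm, is the moat.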
Worth asking what your project is doing that could be radically changed if your competitors get access to new technology. For example, for Wikipedia:
- Collaborating: Wiki was best-of-breed (or close); it isn’t anymore. Visual Editor helps get editing back to par, but the social aspect of collaboration is still lacking relative to the expectations of many users.
- Knowledge creation: big groups of humans working together wiki-style are the state of the art for creating useful, non-BS knowledge at scale. With the aforementioned machine learning, I suspect this will no longer be the case in a (growing) number of domains.
I’m sure there are others…
I would not consider big data to be so out of reach for free software. After all, it was not Google who created the data, but its users. Building a large userbase is of course a challenge, but not one unique to free software.
We do have the extra challenge that we need to use (and create) freedom-respecting ways to aggregate knowledge from the data.
Syndicated 2015-06-06 15:00:06 from Luis Villa » Blog