Death and rebirth of the science journal, journalism, and what the hell, maybe the law journal too

Interesting melange of articles, not quite sure what they point to for me in the future:

  • Two bioinformatics guys write about ‘the death of the paper’. Their thesis is that scientific journal articles are going to need to be reinvented as massive data sets become more common, and more important than the actual paper/writing itself. Specifically, they want to publish the actual searchable databases, not just a handful of graphs, and they want to get academic ‘credit’ for something other than the article itself; primarily, the db.
  • Jurisdynamics* extrapolates the previous link into the legal realm. As I mention in the comments, legal articles don’t typically have databases, so I think if the legal article matures technologically, it’ll look more like the next two links…
  • … in which Stephen O’Grady writes about microformats replacing boxscores and the future of journalism, and Adrian Holovaty writes similarly and in more depth (brilliant minds thinking alike, yadda, yadda.) This is in some ways the exact opposite of the ‘death of the paper’ article, since microformats are data in micro-sized chunks, and bioinformatics databases are anything but micro. But obviously the core is the same- traditional reporting and traditional scientific papers could potentially become vastly more valuable if they contained machine-parseable data.
  • Of course, the microformats community has given some thought to some of these things. Of particular interest to me right now: they have been thinking about legal citations. So a microformat-driven journal- and the tools to consume that data- is perhaps not far off. (Of course, the utility of all this in the legal journal is constrained by the two proprietary databases that drive all legal research, but that is a rant for another time.)
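
To make the microformats idea concrete, here is a minimal sketch of the consumption side- a parser pulling machine-readable citation data out of HTML marked up with microformat-style class names. It uses only the Python standard library, and the class names (`legal-citation`, `case-name`, `reporter`, and so on) are invented for illustration, not part of any published microformat spec:

```python
from html.parser import HTMLParser

# Hypothetical microformat-style markup for a legal citation.
# The class names here are illustrative, not a real spec.
SAMPLE = """
<p class="legal-citation">
  <span class="case-name">Marbury v. Madison</span>,
  <span class="reporter">5 U.S.</span>
  <span class="page">137</span>
  (<span class="year">1803</span>)
</p>
"""

class CitationParser(HTMLParser):
    """Collects the text of any tag whose class is a known citation field."""
    FIELDS = {"case-name", "reporter", "page", "year"}

    def __init__(self):
        super().__init__()
        self.citation = {}
        self._field = None  # field whose text we are currently inside

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in self.FIELDS:
            self._field = cls

    def handle_data(self, data):
        if self._field:
            self.citation[self._field] = data.strip()
            self._field = None

parser = CitationParser()
parser.feed(SAMPLE)
print(parser.citation)
# e.g. {'case-name': 'Marbury v. Madison', 'reporter': '5 U.S.', ...}
```

The same trick generalizes: any tool that knows the class vocabulary can turn a human-readable article into queryable data, which is the whole appeal over scraping free-form prose.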

So what does this mean, exactly? I’m honestly not sure, but the obvious conclusion to draw is that everyone on the publishing side of things (CMSs, word processors, and of course authors themselves) needs to be thinking very hard about how to publish all the relevant structured data that they can, and doing it soon, because the demand from content creators is finally in the pipeline. On the consumption side (web browsers, word processors, etc.) people should start thinking about how they can make the coming flood of parseable data useful to their users. That might look like whatever the hell Mark Pilgrim is working on, or it might look like dabbledb, or more likely like something not yet invented, but people should probably start thinking about it now, because time may be running out if you want to get in on the ground floor :)
* conflict of interest note: I might start writing for one of their side projects soon.