There are lots of sources of links these days: delicious, twitter, and blogs. Many of these are interesting, but not so interesting that I want to read them all the time. Currently I have to decide either to read these people or not to read them at all.
I’d like to add a third option: to have a ‘middle’ pool of sources who I don’t read directly, but who are monitored and serve as pointers to other, interesting things. I think having such a third option would let me read less (because I’d stop skimming these intermediate sources), but still also give me fairly good confidence that I’m not missing important things that I should read.
The outline of the software in my head goes something like this:
Step 1: User provides a list of RSS feeds (a mix of blogs, twitter/identica, and delicious feeds).
Step 2: A harvester collects the contents of said RSS feeds.[1]
Step 3: Parse the content of those feeds for URLs and dump them in a db.[2]
Step 4: Unshorten the URLs if necessary.[3]
Step 5: When a particular URL has been mentioned X times in the past Y days,[4] fetch the URL,[5] find the content within it,[6] and jam it in an RSS feed for consumption along with the rest of my top-level RSS feeds.
Bonus step: mash up snippets from the posts/twitters/delicious feeds to provide context for the URL’s content, similar to what Google Reader does when friends comment on a feed item.
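The core of steps 3 and 5 could be sketched in a few lines; here's a rough, hypothetical version using sqlite3 as the db. The table layout, the regex, and the function names are my own guesses at one way to do it, not a spec. (Step 2's harvesting and step 5's content extraction are elided; see the footnotes for existing pieces.)

```python
import re
import sqlite3
import time

# Crude URL matcher; a real harvester would parse feed entries properly.
URL_RE = re.compile(r"https?://\S+")

def record_mentions(db, feed_text, source, now=None):
    """Step 3: pull URLs out of a feed's text and log each mention."""
    now = time.time() if now is None else now
    for url in URL_RE.findall(feed_text):
        db.execute(
            "INSERT INTO mentions (url, source, seen_at) VALUES (?, ?, ?)",
            (url, source, now),
        )

def hot_urls(db, min_mentions=3, days=7, now=None):
    """Step 5: URLs mentioned at least X times in the past Y days."""
    now = time.time() if now is None else now
    cutoff = now - days * 86400
    rows = db.execute(
        "SELECT url FROM mentions WHERE seen_at >= ? "
        "GROUP BY url HAVING COUNT(*) >= ?",
        (cutoff, min_mentions),
    )
    return [r[0] for r in rows]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mentions (url TEXT, source TEXT, seen_at REAL)")
record_mentions(db, "check out http://example.com/a", "alice")
record_mentions(db, "seen http://example.com/a and http://example.com/b", "bob")
record_mentions(db, "re: http://example.com/a", "carol")
print(hot_urls(db))  # only the URL that cleared the threshold
```

The nice thing about keeping raw mentions (rather than counters) is that the bonus step falls out for free: the rows for a hot URL are exactly the snippets you'd want to mash up as context.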
I feel like someone must have done this already. If not, the pieces are available (see the footnotes for details on many of them); I sure wish I had the time/skill to put them together myself. :/ This project is one of the things I wish we had more reliable bounty infrastructure for: I'd actually put money up for it if I thought there were a reliable way to get some matching funds and find good developers for it.
Ideas, either about the rough feature sketch, existing software that fits this need, or about methods to make it happen, are all welcome.
1. Planet is an example of infrastructure that does this.
2. Planet's meme plugin can do this.
3. There are scripts and web services available for this; the basics aren't that complicated.
4. Again, the meme plugin has this concept already implemented.
5. Not in the meme plugin; it only provides links.
6. Not trivial, but source is available that does this via Readability.
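On the unshortening step, the basics really aren't complicated: check whether a URL's host is a known shortener, and if so, follow the redirects. A minimal sketch, where the shortener list and function names are illustrative assumptions:

```python
import urllib.request
from urllib.parse import urlparse

# A few well-known shortener hosts; a real deployment would want a
# much longer, maintained list (or would just resolve everything).
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "is.gd"}

def needs_unshortening(url):
    """True if the URL's host is on the shortener list."""
    return urlparse(url).netloc.lower() in SHORTENERS

def unshorten(url):
    """Follow redirects (a network call) and return the final URL."""
    if not needs_unshortening(url):
        return url
    with urllib.request.urlopen(url) as resp:
        return resp.geturl()  # urlopen follows redirects for us
```

Canonicalizing before counting matters, since the whole point of step 5 is noticing when three different people posted the same link through three different shorteners.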
12 thoughts on “building software to let me read more and less at the same time?”
I want this too. The Ultra Gleeper promised that, but I could never get it to work and the author stopped working on it.
We try to do something of that sort with the Social News system originally built for maemo.org – we look at all feeds subscribed to Planet Maemo, see what items have been bookmarked or commented most, and lift those to the “current important stories” feed.
Though yes, this was built before Twitter became hugely popular, so we don’t do anything about short URLs yet.
Google Reader has a feature to sort feeds by “magic.” I put a bunch of the feeds in there that I don’t follow as closely, and from time to time when I click on it, I’m rewarded with the magic of good posts at the top.
Mike Melanson was working on something like that:
you might want to take a look at PostRank or Fever. They seem to be doing exactly what you are looking for.
I hope that helps.
The way I understand it, the Raindrop project from Mozilla is intended as a server that both aggregates various media sources like e-mail and microblogs and prioritises it for you based on its contents and your preferences. I got the impression that it’s supposed to work both as a web service and eventually with pretty much any IMAP client, particularly Thunderbird. However, it still looks pretty experimental.
Thub: raindrop is a long way from production, and I think they are mostly focusing on twitter/email at this time, though I agree it would be very cool if they did RSS eventually.
Michael: feedafever looks like *exactly* what the doctor ordered. I wish I could get over the licensing, but I guess better to have it locally hosted and proprietary than my current solution (which is google reader, and hence I have no control over it whatsoever.)
Luis et al.,
I learned about this post through John Fleck. We seem to have the software that you want. It is called Smart Website and has been running for about a year now.
I like your steps. A couple quick comments.
Step 1 can be improved.
Step 3 can be greatly improved.
We have developed CAL(tm), which outperforms available DBs. Instead of being a DB, it is software that ‘watches’ a user and learns about the user. It then adapts so that the user sees a web site or an RSS feed in the way that pleases the user. CAL filters and rearranges the large feed into a smaller, more organized feed based on the user’s current interests. The learning, the filtering, and even the DB-like function are automatic and don’t require the standard annoying user steps.
(Yes, there is a lot of novel IP in CAL)
Anyone who wants to know more can contact me.
Thanks for the post.
Comments are closed.