Recently, my good friend and funk-soul brother Mark Oehlert (@moehlert) and I squared off on Twitter about the value of curated content. It might not be clear to casual observers why I'd take a stand on this topic (especially in what turned out to be a non-argument). For one, in my short experience as a K-8 librarian, the notion of "weeding the collection" became embedded in my psyche, and I have spent the last ten years, personally and professionally, continuing to clean my proverbial shelves of items that are outdated or irrelevant. In other words, I've been curating for a long time. Second, search and discovery of learning content is an area that I and many colleagues have been thinking about; given my experiences with social networking and a familiarity with the tools and constructs, I've had some thoughts on how to work on this issue, and on how not to.
So, about two months ago, I set up a new blog for notes associated with my presentations, as well as my reading notes and highlights. I did this in conjunction with the conferences I was participating in, but I also wanted to perform an (albeit crude) experiment on how to increase traffic to dark areas of the web, especially repositories of content. Any blog is, at a very high level, a repository of content, and though I had an opinion on whether one could build traffic inorganically, I wanted to try to test a negative hypothesis.
I set up @aaronesilvers as a Twitter account that is completely automated. I wanted to see what kind of traffic automation could generate versus what organic sharing would drive; in other words, could I automate my way to building traffic to the site?
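For readers curious what "completely automated" might look like under the hood, here is a minimal sketch of the general technique: reading a blog's RSS feed and turning each item into a ready-to-post tweet. The function name, the sample feed, and the URL are all illustrative, not my actual setup, and the final posting step (a call to Twitter's API) is deliberately omitted.

```python
# Sketch: turn a blog's RSS feed into tweet-sized strings.
# feed_to_tweets and the sample feed below are hypothetical examples,
# not the actual automation behind @aaronesilvers.
import xml.etree.ElementTree as ET

def feed_to_tweets(rss_xml, limit=140):
    """Build one tweet per <item>: title plus permalink, trimmed to fit."""
    tweets = []
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title", default="").strip()
        link = item.findtext("link", default="").strip()
        room = limit - len(link) - 1          # one space between title and link
        if len(title) > room:
            title = title[:room - 1] + "…"    # truncate long titles
        tweets.append(f"{title} {link}")
    return tweets

sample = """<rss><channel>
  <item><title>Notes on metaparadata</title>
    <link>http://example.com/archives/113</link></item>
</channel></rss>"""

print(feed_to_tweets(sample))
# → ['Notes on metaparadata http://example.com/archives/113']
```

A scheduler (cron, or a service like Twitterfeed, which existed at the time) would run something like this on an interval and push each string out through the Twitter API, with no human in the loop.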
I activated ChartBeat, a real-time service that alerts me when the site is down and shows me traffic as it happens. I also activated Feedburner, Google Sitemaps, and Google Analytics to ensure that my RSS feeds were consumable and that the site itself was discoverable and optimized for Google's crawlers.
It turns out that my presentation allowed for an interesting micro-experiment. If you google "metaparadata," my site, and citations of my presentation, seem to dominate the results. That, by itself, is interesting. But it's not the whole story.
That first blip in traffic, on June 13, was the result of an RT of my automated @aaronesilvers tweet, here:
If you look at the top content in the same time period from May 23-July 23…
…the /archives/113 link with 19 page views? That’s the link in the RT.
Now back to the Google search for "metaparadata": it certainly isn't a driver of traffic to the site, even though my site is the first result, whether on Google or Bing. The top searches that bring traffic to The Beard's Notebook are (and I have to chuckle a little bit at #2):
I will readily concede that this isn't a great experiment; some readers will likely think the conclusion is a no-brainer. I am not an expert on Search Engine Optimization, and my example was overly simplistic, with content plainly published on a blog that is of marginal (at best) interest to even as specific an audience as, say, #lrnchat participants and conference-goers.
My point in sharing this is that many parts of the web seem to share a common problem: how do we improve the discoverability of learning content? There are many people who believe that discoverability can be automated. My opinion before I started was simply that I disagreed that this could all be automated, but I wanted to test my assertion in a way I could somewhat control. My conclusion from this (possibly first) experiment is that one cannot simply automate discoverability. There are many other factors at play, and my hypothesis remains that if you're going to leverage a social platform like Twitter, only people who link to your content will make it more discoverable through search engines like Google or Bing.