So on Thursday I listened in on a webchat that a friend was helping to organize about how users might benefit from some ‘curation’ of the blurtstream from new-web sources, especially Twitter but also Facebook and wherever else. Considering the shelf-life of information in that little world, I’m sure my musings are woefully out of date and irrelevant 48 hours later. Nevertheless, this stuff is gnawing on my mind, and I’m going to have to blurt about it just to quiet the internal noise.
The two lead panelists, Erick Schonfeld and Andrew Keen, squabbled about the need for some way to filter the incessant yammering over various web channels, as people abandon the old, static web of permanently posted content, accessed as needed, in favor of the continuous buzzing of so many little iBees. Keen openly disdains the blurtstream as a contentless distraction that actually degrades culture and conversation by overshadowing more substantial resources; he demands that human intelligence be involved in selecting and organizing information if it is to have value. Schonfeld happily swims in that same stream of stuff, needing only to find the right people to follow so he’ll get to consume* the best nuggets as they float by. Beneath the emotional veneer of their exchange, I wonder why these two guys think they disagree. Both are entirely correct about the parts of the picture they choose to see and care about, and both are completely wrong to ignore the rest.
Schonfeld makes a valid point that tweets very often call up and link back to valuable static content that’s been created, cataloged, and posted by humans. He notes that the tweets act as pointers, and often as annotations, adding context to the previously published material. Beyond that, I’d say that Schonfeld makes part of Keen’s point for him, in that the fundamental value of the pointer comes from the truly important information it calls up. And anyway, many of those tiny notes refer only to other notes, and the annotated annotations accumulate in a not-very-useful fluffy mess.
So how can an ordinary human invest a finite attention span in selecting the bits that matter? Keen wants humans to ‘curate’ the stream and use their informed judgment to pull out the good parts. That’s basically the editor’s time-honored role in journalism — a much closer metaphor than the curator of some collection of curiosities. But it’s clearly too slow a process for the so-called real-time web, and if he wants the traffic to slow down to his chosen pace, he’s welcome to sit down and wait for that to happen. Schonfeld wants to pick a few experts whose streams will include the good stuff and ignore the rest. Again, he makes Keen’s point for him, because he’s relying on those specific people’s judgment about what to read and dropping the rest of the all-too-human stream from his list. He’s bound to be making significant type I errors (false positives) when he wastes attention on nongermane (to him) posts by those he follows. Much more significant is the massive type II error (false negatives) he’s making in ignoring germane posts by people he’s chosen not to follow or has never heard of. He acknowledges that it’s a lot of work to develop the right follow list, finding the good ones and trimming the bad ones. Is this quicker or more efficient than Keen’s process of human filtering? I have my doubts.
The fallacy on both sides of this squabble is that anyone ever has to choose one side or the other. Maybe it’s just me, but my solution is going to involve paying attention to those who watch a subject I care about and show good judgment in pointing out useful information about it. But I’ll still want to hear about relevant stuff from other sources, maybe found by automated search tools configured to select resources based on semantic cues I specify. And I want help from my librarians in creating and deploying these tools, since they’re building on generations of experience in exactly that task, applying techniques of which the spiffy shiny two-dot-oh techkiddies seem to have only a vague awareness. This kind of tool is going to require careful structuring of information to expose the content in a way that’s conducive to scanning and cataloging by computers — it’ll take semantic tools a lot more powerful than near-random hashtags, for example.
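To make the idea concrete, here’s a toy sketch of the kind of cue-driven filter I have in mind — all the cues, weights, and sample posts are invented for illustration, and a real semantic tool would lean on structured metadata rather than bare keyword matching:

```python
# A crude illustration of filtering a stream by user-specified semantic cues.
# The cue words and their weights are invented for this example.

CUES = {"curation": 2.0, "metadata": 1.5, "catalog": 1.5, "archives": 1.0}

def score(post: str) -> float:
    """Sum the weights of every cue that appears in the post."""
    text = post.lower()
    return sum(weight for cue, weight in CUES.items() if cue in text)

def select(stream, threshold=1.5):
    """Keep posts whose cue score meets the threshold, best first."""
    scored = [(score(p), p) for p in stream]
    return [p for s, p in sorted(scored, reverse=True) if s >= threshold]

posts = [
    "lunch was great, thanks all",
    "new essay on curation and metadata standards",
    "check out our library catalog overhaul",
]
print(select(posts))
```

Even this trivial version shows why the cues matter more than the follow list: the filter never asks who wrote the post, only whether its content matches what I said I care about.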
And I’ll still want to refer back to trusted human-edited sources that have established expertise over time in my subject of interest. Institutions like universities, journals, professional associations, and newspapers will continue to play vital roles, perhaps including a new one: providing a solid foundation for the frantically reconstructed house of cards that is the real-time blurtstream. At the same time, they’d better be finding ways to cull the bits useful to them, because sometimes, as happened recently with the flood of 140-character reporting from the streets of Tehran, or whenever the usual channels go down, it’s going to provide information that’s just not available otherwise. The project isn’t to choose one source and dump the rest; it’s to assemble collections of information from all kinds of sources. That project will succeed only to the extent that people quit hiding in the safety of their silos while lobbing bombs at all the others, and start building easy, computer-navigable bridges between all the silos.
*And btw, may I say that I hate and abhor the misuse of the word consume in this context, as though people were dining on the data, taking nourishment from it, and excreting the indigestible remainder. It comes from programmerspeak references to computer code consuming some variable or other data and somehow processing it. That’s bad enough, but extending that metaphor to people’s engagement with media completely distorts the process by which they gather and understand information.
Originally published on arttartare.net.