Ascendance of the factoid

Filter bubbles!
Filter bubbles?

Google has more to say about the Knowledge Graph and how it will finally fulfill the Star Trek promise of computers that will intently listen, perfectly understand, and obediently retrieve exactly what we want. Hell, the graph may even read our minds and give us what we want before we want it.

I don’t see much in that announcement, or in the happy video, about the problem of the computer telling us what we are allowed to want. Well, strictly, it tells us what we can have, but that sets boundaries on what we can decide to want, because what we’re shown is all we know about. I wonder whether that bouncy graphic at the top of the Google Blog is actually intended to represent filter bubbles, or whether that’s just a happy coincidence.

I don’t mean to make fun (much) of this rather startling and inspiring achievement. It seems built around old-school blunt-force computing power, as I’ve mentioned, but it looks like a fine application of focused processing over absolutely massive data in service to users’ preferences, as best those can be determined in bulk and with minimal context. The graph crunches text strings in the context of user link behavior to discover entities, and the relationships between them, that users might want to understand. It assembles factoids into Tinkertoy representations of users’ most likely mental structures in the moment. More than that, it audaciously tries to model the entire mess of knowledge in a structured way and make it all visible from a single point in the graph. Actually, more even than that, it tries for visibility from a single point into all the possible graphs that might intersect there.
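To make the single-point-in-the-graph idea concrete, here’s a minimal sketch of how factoids can be stored as subject-relation-object triples and queried from one entity node. Everything here, the class, the entity names, the relations, is invented for illustration; it’s a toy, not a claim about Google’s actual implementation.

```python
# A toy triple store: factoids as (subject, relation, object) edges,
# queried from a single point in the graph. Names are illustrative only.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # entity -> list of (relation, other_entity) edges
        self.edges = defaultdict(list)

    def add_fact(self, subject, relation, obj):
        """Record one factoid, indexed in both directions."""
        self.edges[subject].append((relation, obj))
        self.edges[obj].append((f"inverse:{relation}", subject))

    def neighborhood(self, entity):
        """Everything visible from a single point in the graph."""
        return self.edges.get(entity, [])

kg = KnowledgeGraph()
kg.add_fact("Marie Curie", "born_in", "Warsaw")
kg.add_fact("Marie Curie", "field", "physics")
kg.add_fact("Marie Curie", "spouse", "Pierre Curie")

# One lookup pulls in every relationship that intersects at the entity,
# roughly the raw material for an on-the-fly mini-summary.
for relation, other in kg.neighborhood("Marie Curie"):
    print(f"Marie Curie --{relation}--> {other}")
```

The interesting part is the last step: once every edge touching an entity is visible from that node, several different graphs (biography, science, geography) intersect there, which is the “visibility from a single point” the paragraph above describes.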

That sort of process will have to make some presumptions about user intentions. The new mini-summaries of likely user topics seem intended to pop the filter bubbles that result and to enable nonlinear exploration; maybe they’ll help. They amount to small Wikipedia pages, assembled on the fly, presenting bits of fact in useful context that can lead people along the paths earlier explorers have blazed through the same parts of the forest. I still see the risk of disintermediated sources that I called out in the March post, but I’m now thinking it’s likely to affect other summarizers like Wikipedia more than solid, original sources, and I feel better about that outcome. If Google captures bigger chunks of the web’s ad revenue by facilitating access, more power to ’em. Are they smart enough to keep linking out to those sources and feeding traffic to the geese laying their golden eggs? I sure hope so.

One risk, though, is the loss of complexity. As the graph summarizes the summaries so strongly favored by Google’s relevance rankings, it omits detail from sources that had already omitted a lot of detail. Now maybe users will find their way to detail and complex interpretation via those exploration paths Google is so earnestly trying to build. To the extent that the paths lead to true source materials, that seems likely; to the extent that they lead only to slightly less dumbed-down summaries, the risk rises that users’ finite determination and attention will be exhausted before they get someplace useful.

The Knowledge Graph is a beautiful vision of facts assembled in user-specific context. But it still needs to look toward assembling the structures users ought to want, or would want if they suspected those structures existed, in addition to the ones they’ve built with their click trails. Some queries really do need to be informed by topical expertise, so people can move beyond the crowd-sourced zeitgeist, which can’t do much more than refresh its own image as presented by the carnival mirror that technology provides.