Why should online be any different?

Tim O’Reilly comments on how easy it is for online data to go bad, and how difficult it is to correct it. He even uses the whack-a-mole analogy:

We face this problem all the time with book metadata in our publishing business. Retailers demand notification of upcoming books as much as six months before the book is published. As you can imagine, titles change, page counts change, prices change — sometimes books are cancelled — and corralling the old data becomes a game of whack-a-mole. You’d think that the publisher would be an authoritative source of correct data, but this turns out not to be the case, as some wholesalers and retailers have difficulty updating their records, or worse, retailers sometimes overwrite newer, correct data with older bad data from one of the wholesalers who also supply them.

But why should online be any different than anywhere else? After all, this isn’t exactly a new problem, as any victim of identity theft, or anyone who has ever tried to correct an erroneous credit report, can attest.

The problem is that in both the real and online worlds there is rarely, if ever, an authoritative source of correct data, because we aggregate data from so many different sources. Tim says this is a Web 2.0 problem:

As Lou said in one email, "the whole story seems to be such a strong illustration of the downsides of connected and linked databases (and therefore very much a Web 2.0 lesson)."

But this isn’t just a Web 2.0 problem. It’s a regular old database problem.
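The overwrite problem Tim describes is easy to reproduce in a few lines. Here's a minimal sketch (all record names and values are hypothetical, not from O'Reilly's actual catalog): a naive last-write-wins merge lets a wholesaler's stale feed clobber fresher publisher data, while a merge that checks timestamps keeps the newer record.

```python
from datetime import date

# Hypothetical catalog record held by a retailer, plus a stale feed
# from a wholesaler whose copy predates the publisher's corrections.
retailer = {"title": "Example Book", "price": 29.99,
            "updated": date(2006, 5, 1)}

stale_feed = {"title": "Example Book (draft)", "price": 24.99,
              "updated": date(2006, 1, 15)}

def naive_merge(record, feed):
    """Last write wins: whichever feed arrives last overwrites the record."""
    merged = dict(record)
    merged.update(feed)
    return merged

def guarded_merge(record, feed):
    """Accept the feed only if it is newer than what we already hold."""
    if feed["updated"] > record["updated"]:
        merged = dict(record)
        merged.update(feed)
        return merged
    return dict(record)

print(naive_merge(retailer, stale_feed)["price"])    # clobbered: 24.99
print(guarded_merge(retailer, stale_feed)["price"])  # kept: 29.99
```

Nothing Web 2.0 about it: versioning or timestamping records before merging is a decades-old database discipline, and the whack-a-mole only starts when intermediaries skip it.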
