The problem is not merely personal. Lovelace, misled by the amusing Robokitten picture, assumed April’s “artificial brain” story to be a seasonal hoax. Not so: a Web search revealed the artificial intelligence researcher Dr Hugo de Garis to be very much real. Proof, however, was not straightforward. Unfamiliar with AI hardware, Lovelace was unable to judge the technical credentials on his Japanese home page. Ultimately, she sought evidence of peer review, and took the presence of his name on prestigious AI conference lists as conclusive.
Lovelace found the experience depressing. On venturing beyond her own specialism, she had immediately to fall back upon the scientific world’s conventional reliance on restricted social interaction. This system has always seemed to Lovelace inefficient, nepotistic and inherently prone to ossification. Nevertheless, it undeniably worked on its own terms; similar monopolies on communication have upheld socio-political hierarchies throughout history.
Judging real science is hardly clear-cut. Babbage, after all, took nearly two centuries to be vindicated; but his friend Mr Andrew Crosse, who once dined in Royal Society circles, is now thought a crank. Most checks are provisional. We have to assume that self-published pages may be promotional. We can watch the typography for the electronic equivalent of the rant in green ink, and other critical routines exist. See, for example, Mr Russell Turpin of the University of Texas on characterising quack theories (http://public.logica.com/~stepneys/sf/quack.htm).
While such rules of thumb may only echo Lovelace’s prejudices, they can be checked against the reader’s common sense. But with medium and message advancing almost daily into the unknown, we cannot always rely on experience. It seems we must evolve novel procedures for validating identity and information, or the Web’s communicative power will remain crippled by our trusting only people we already know.