This false Babbage was a prototype ‘human-form software agent’ she saw at a small R&D group concerned with applying Analytical Engines to means of communication with expert systems and databases. The explanation seemed prosaic enough: Babbage’s screen image had been analysed frame by frame from hours of video-conference recordings. His mannerisms had been identified and related to causal events, so that the Engine could call upon them, with random and subliminal variations, to create a lifelike ‘avatar’ that answered Lovelace’s typed queries. Nevertheless, these facts failed fully to dispel her unease.
Analytical Engines have problems understanding faces: despite the existence of a handful of commercial applications, an Alta Vista search on “face recognition” reveals this as a field still very much at the research stage. But to humans, the face is a ready-made information-rich communication device, which our eyes and brains are already tuned to read. From simple beginnings in representing multidimensional data as cartoon ‘Chernoff faces’, such face representations could provide a powerful and intuitive means of communication between scientists and software.
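The Chernoff-face idea can be sketched in a few lines: each dimension of a data record is normalised and assigned to one facial feature, which a drawing routine would then render. The feature names and ranges below are illustrative assumptions for this sketch, not Chernoff's original encoding.

```python
def chernoff_features(record, lo, hi):
    """Map one multidimensional data record to facial-feature parameters.

    Each dimension is rescaled to [0, 1] using per-dimension bounds
    (lo, hi) and assigned, in order, to a hypothetical facial feature.
    A renderer would then draw a cartoon face from these parameters.
    """
    names = ["face_width", "eye_size", "brow_slant", "mouth_curve"]
    feats = {}
    for name, x, a, b in zip(names, record, lo, hi):
        t = (x - a) / (b - a) if b > a else 0.5  # rescale to [0, 1]
        feats[name] = min(1.0, max(0.0, t))      # clamp out-of-range values
    return feats


# Example: a four-dimensional record becomes four feature settings,
# so differences between records appear as differences between faces.
face = chernoff_features(record=[5.0, 0.0, 10.0, 2.0],
                         lo=[0, 0, 0, 0],
                         hi=[10, 1, 10, 4])
```

The appeal of the encoding is that a glance compares whole records at once: humans spot an odd face far faster than an odd row of numbers.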
At present, the concept is in its infancy; the false Babbage would not have convinced Lovelace for long. But how long before Engines can completely mimic a person's presence, with any required informative or emotive effect? A friendly face to data is one thing; but might we not also be duped by cybernetic sales staff, engineered to present every visual cue we interpret as trustworthiness? Or be tormented by irate cyber-watchdogs warning us of poor performance? Lovelace is left with a sense of confusion, like the Velveteen Rabbit in Mrs Margery Williams’ classic children’s story, whose first question was “What is real?”