Analog Science Fiction & Fact Magazine
"The Alternate View" columns of John G. Cramer 

Telepresence - Reach Out and Grab Someone

by John G. Cramer

Alternate View Column AV-40
Keywords: telepresence, robotics, bandwidth, virtual reality, tele-operation
Published in the July-1990 issue of Analog Science Fiction & Fact Magazine;
This column was written and submitted 12/17/89 and is copyrighted ©1989, John G. Cramer. All rights reserved.
No part may be reproduced in any form without the explicit permission of the author.

    Does science fiction really anticipate the future? Does it point unerringly in the directions that technology is likely to take us? Or does it fall victim to simplistic analogies that mistake the difficult for the easy, the unlikely for the likely? Consider the robot: humanoid robots have been stock figures in science fiction from Karel Capek's R.U.R. through Jack Williamson's Humanoids and Isaac Asimov's famous robot series, which has proceeded from the I, Robot stories to the robot novels and the Good Doctor's recent "sharecropper" robot anthologies.

    Superficially, the robots of early SF appear to be a prediction that is on target. In factories all over the world "robots" are replacing human workers at production lines that assemble everything from Toyotas to Macintoshes. Surely it's only a matter of time before these robots walk out of the factories and into shops and business offices, replacing human workers everywhere with more efficient mechanical substitutes. Indeed, many SF writers have based stories on just such premises.

    Well, you needn't feel threatened. It isn't going to happen, at least not anytime soon. To understand why such a robot takeover isn't likely, one has only to take a good hard look at these manufacturing "robots". These robots are not general purpose metal humanoids that clank and scurry around the factory floor, putting devices together, perhaps constructing more of their own kind. They are huge special-purpose machines that can function only when provided with a regular schedule of maintenance and recalibration. The robot programming bears little resemblance to Asimov's "robopsychology". It is a very simple and specific set of instructions, and is often generated by tracking a human expert as he performs the operations that the production robot will then mindlessly follow.
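The "teach and replay" programming just described can be sketched in a few lines of modern Python. This is a minimal illustration of the idea, not any particular vendor's system; all class and method names here are hypothetical.

```python
# Minimal sketch of "teach and replay" robot programming: a human expert
# guides the arm through its motions once, the controller records joint
# positions at a fixed sample rate, and the robot then replays them
# mindlessly.  All names are illustrative, not a real robot API.

class TeachAndReplayController:
    def __init__(self):
        self.waypoints = []  # recorded joint-angle snapshots

    def record(self, joint_angles):
        # Called repeatedly while the human expert guides the arm.
        self.waypoints.append(tuple(joint_angles))

    def replay(self, move_to):
        # Blindly drive through the recorded path: no perception, no
        # adaptation -- which is why any shift in the baseplate or wear
        # in the actuators ruins the calibration.
        for target in self.waypoints:
            move_to(target)

# Usage: record a two-joint demonstration, then replay it.
controller = TeachAndReplayController()
for sample in [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)]:
    controller.record(sample)

executed = []
controller.replay(executed.append)
print(executed)  # the robot retraces exactly the taught path
```

The point of the sketch is what is absent: there is no sensing and no decision-making, only rote playback of a stored trajectory.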

    Further, these production robots are not mobile; they are bolted firmly to the floor, and for good reasons. Any small shift in their locating baseplates, any wear in their actuator mechanisms, destroys their effectiveness until such time as they can be recalibrated and realigned by human beings. The real robots of the late 20th century have little in common with Asimov's humanoid robots. Why not?

    As we learn more about the engineering field of robotics, as researchers try to construct machines that can duplicate the dexterity of humans in manipulating objects, an important message emerges. It ain't so easy. As our technology reaches the plateau where it should be easy to duplicate human dexterity, the Asimovian goal of the humanoid robot has receded behind nested walls of problems that appear faster than they can be solved:

How can the robot recognize the target object when it may be at any arbitrary position and orientation?
How can the robot determine the spatial locations of its own manipulators?
How can the robot predict all the effects of a given mechanical action?
And so on ...

    It seems so obvious to us that it's easy to run and catch a baseball or to select a needed tool from a drawer, while it's difficult to add up a long column of figures or to memorize the script of a play. Yet for digital-computer-based machines the latter actions are trivial, while the former are such daunting problems that they have not yet been solved by our top robotics experts.

    Human dexterity emerges not as an easily duplicated mechanical action but rather as a dimly understood miracle, the gift of a billion years of evolution. We are so good at manipulating objects because in order to survive, we have needed to be. For our survival, nature has provided us with an elaborate system of neural programming. It is this that allows us to easily reach out and pick up an object from a desk, to swat a fly, to pour a glass of water, actions that our best robot manipulators backed by our best supercomputers and computer vision systems can only poorly duplicate. Our technology is simply not yet ready to provide us with the mobile humanoid robots predicted by SF and seems very far from doing so.

    Or consider another familiar SF prediction: the neural interface, the capability of linking human mind to computer, of "jacking into the Net" as described in Vernor Vinge's True Names or William Gibson's Neuromancer. The analogy of nerves to wires, of the learned patterns and memories of the human nervous system to the programs and stored information of digital computers has led these writers to the expectation that some direct interface between brain and computer should be achievable in the near future.

    While a two-way neural interface is not impossible, from the perspective of present-day technology even a brain-to-machine neural link for general use looks remote. This conclusion is based on attempts by medical technology to control prosthetic limbs by detecting nerve impulses, e.g., controlling an artificial hand from the nerve endings in the stump. This has actually succeeded to a limited degree, but requires enormous effort to establish even the simplest neural "connections". These connections differ enormously from one individual to another, so that each case is unique. It appears that there is no single "data bus language" intrinsic to the human nervous system that would allow the straightforward installation of some general-purpose one- or two-way interface to a computer. This conjecture is supported by PET scans of human brain activity during a variety of mental activities. These indicate that the basic organization of the brain can vary dramatically between individuals. It appears unlikely, for example, that an improved technology for detecting and inducing brain waves could soon lead to a good brain-to-machine link.

    But let's not abandon all hope. There's more than one kind of machine-human interface. And one line of development close on the technological horizon promises a kind of brain-computer link and a kind of mobile humanoid robot. It's called telepresence, the linking of human to machine electronically using only the available senses (sight, sound, and possibly touch), but done so well as to give the illusion of true physical presence on one or both ends of the link.

    Let's start with a simple example of telepresence. In his book The Tomorrow Makers Grant Fjermedal describes a visit to a telepresence research laboratory located in Tsukuba, Japan. There the researchers lowered a black-velvet lined box over Fjermedal's head and strapped it in place. Within the box were earphones and two small color video screens, each optically matched to one eye. Grant's head was free to move within the box, but every movement of his head was measured electronically. Wires from the box led across the room to a robot-like machine with an articulated "head" on which were mounted twin television cameras, their lenses separated like human eyes. The robot head was also fitted with microphone "ears", and the whole head assembly moved to replicate any movement of Grant's head, as monitored by the box across the room.

    Fjermedal's experience was not of having his head in a dark box, or of watching video screens. The depth and scope of human vision was so completely reproduced, the colors so clear and vivid, that his first reaction was the disorientation of not being where he had expected to be. His second reaction was one of wild visual delight. Wherever he turned his head, the images received by the video cameras and transmitted to his retinas faithfully reproduced what he would have seen directly. He studied the computer panels, the work tables, the instruments that filled the laboratory. Then, with the help of one of the researchers, he turned and looked across the room. Over there he could see a familiar figure, his head strapped in a black-velvet box. It was Grant himself. The researchers laughed. They knew well the epistemological paradox Grant had just encountered. They had all gone through it themselves during their own electronically-induced out-of-body experiences. "Are you here," the group leader asked him, "or are you there? Which is your body?" Telepresence may soon confront us all with this paradox.

    The link between Grant and the telepresence head spanned only a few yards, but it could just as well have been the width of a continent or an ocean. As the world is criss-crossed and knitted together by fiber-optic communications links, the price of communications bandwidth is one of the few commodities falling faster than the cost of microelectronics itself. In the near future it will be feasible at low cost to link some improved version of the Tsukuba prototype "black box" to a remote telepresence unit anywhere in the world. The remote units can be given mobility, better sensors, and faster response. The user's headset is already being re-engineered to reduce it from a large box to a helmet or even a pair of "magic glasses". The addition of eye-position and eye-focus monitoring to the headset is currently being investigated as a way of improving performance.

    Imagine now that you are a user of some advanced telepresence system. You could, without leaving home, climb a peak in the Himalayas, explore the Amazon rain forest, or investigate the bizarre life forms near a volcanic vent on the bottom of the Pacific. Business people could bypass much of the expense and effort tied up in travel by using telepresence for conferences and meetings. The "location" visited by the telepresence user need not even be real. It might be a computer generated "virtual reality". But we will save discussion of virtual reality for another column, and focus here on interactions with the real world.

    Must the telepresence user be only a passive observer? Let's recall another famous SF prediction: Robert Heinlein's "waldo", a remote robotic hand electronically operated by a human. As SF writers sometimes do, Heinlein oversimplified the technical problems of the waldo. Up-scaling and down-scaling remote manipulators is a far more difficult undertaking than he imagined, because the time constants and material strength parameters change at each size scale. Nevertheless, the whole area of remote manipulation, from hot-cell chemistry and machining with radioactive materials to underwater remote handling, owes Heinlein a deep debt.

    The waldo concept grows considerably more powerful when combined with high-quality visual and auditory telepresence and with new manual interface devices like the "data glove", which senses hand position and movement. Reproducing human dexterity and pattern-recognition skills, a presently intractable problem for true robotics, is easy with telepresence because the neural programming of the operator effortlessly supplies these functions. A computer between the operator and his waldo manipulators can further enhance the illusion of presence and direct manipulation by supplying the visual, auditory, and tactile cues that the human operator expects, including force-feedback that restricts the motion of the glove when a solid object is encountered.
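The force-feedback idea can be illustrated with a toy one-dimensional model. This is a minimal sketch under assumed simplifications (motion along a single axis toward a solid wall, a made-up spring stiffness); the function name and numbers are hypothetical.

```python
# Toy model of force-feedback between glove and waldo: the intervening
# computer clamps commanded motion at the surface of a solid object, so
# the operator feels a "wall" instead of pushing the manipulator through it.

def clamp_to_surface(commanded_x, wall_x):
    """Restrict motion along one axis so the manipulator cannot
    penetrate a solid surface located at wall_x."""
    allowed_x = min(commanded_x, wall_x)
    # The resistance fed back to the glove grows with how far the
    # operator tried to push past the surface (a simple spring model).
    penetration = max(0.0, commanded_x - wall_x)
    feedback_force = 50.0 * penetration  # stiffness in arbitrary units
    return allowed_x, feedback_force

print(clamp_to_surface(0.8, 1.0))  # free motion, no resistance
print(clamp_to_surface(1.2, 1.0))  # motion stopped at the wall, force felt
```

A real system would run such a loop hundreds of times per second against a full three-dimensional model of the remote scene, but the principle is the same: the computer supplies the tactile cue the operator's nervous system expects.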

    Response time, of course, is an important consideration for telepresence, especially when remote manipulation is involved. Signals between object and eye, between glove and manipulator, cannot be delayed by more than a few milliseconds if the illusion of true presence is to be preserved. This means that links involving geosynchronous satellites must be avoided; low-delay paths, ideally fiber-optic land-lines, are preferred. However, telepresence even to a space station in low earth orbit is not ruled out, provided an interrupt-free, low-delay communications link to the station can be maintained. This means that, at least in principle, all of us could visit space stations in low earth orbit using a telepresence link to a remote unit on the station. We could perform micro-gravity experiments or do maintenance work on the station. We could go for a space-walk, or look down on the Earth as it passes underneath, or participate in zero-gee sports. Similarly, we could go for a walk along the Pacific Rift, collecting samples as we went. The remote machinery for doing these things, once developed, is likely to be simpler than the space suits and deep-diving equipment needed to put humans in the same locations, since the complications of life support, of transporting an earth-normal environment into space or to ocean depths, would be eliminated.
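The arithmetic behind the geosynchronous-satellite prohibition is easy to check: round-trip delay is bounded below by the speed of light. A back-of-the-envelope calculation (the altitudes are standard textbook figures, supplied here for illustration):

```python
# Light-speed round-trip delay sets a floor on telepresence response time.

C_KM_PER_S = 299_792.458      # speed of light in vacuum
GEO_ALT_KM = 35_786           # geosynchronous orbit altitude
LEO_ALT_KM = 400              # typical low-earth-orbit altitude

def round_trip_ms(distance_km):
    # Out and back: the command goes up, the sensation comes down.
    return 2 * distance_km / C_KM_PER_S * 1000

print(f"GEO: {round_trip_ms(GEO_ALT_KM):.0f} ms")   # roughly a quarter second
print(f"LEO: {round_trip_ms(LEO_ALT_KM):.1f} ms")   # a few milliseconds
```

A quarter-second lag destroys the illusion of presence (and makes delicate manipulation nearly impossible), while the few-millisecond delay to low earth orbit sits comfortably within the tolerance quoted above.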

    As a practicing physicist, I have a considerable interest in the development of telepresence with good manipulation capabilities. A physicist who, like me, works at a university has teaching commitments and other responsibilities requiring his presence on the university campus, even though his research in nuclear or particle physics often requires his presence at a large accelerator facility (e.g., Fermilab or Brookhaven) that may be on the other side of the continent. It's usually necessary to make frequent trips to such facilities for experiment runs, group meetings, planning sessions, seminars, and presentations of proposals for new experiments. In the process, great quantities of time, effort, and research funds are consumed in the intrinsically unproductive activity of moving one human being to some distant location and back.

    With telepresence, as I expect it to be developed in the next decade or so, this will change. I'll be able to sit in my office at the University of Washington and at the same time work in a high radiation field to test the operation of an ongoing experiment at the Superconducting Super Collider in Waxahachie, Texas. Or, with a telepresence proxy wearing my face on its video screen, I can be attending a physics conference in Bombay. Or I can be changing the parameters of a micro-gravity crystal growth experiment on the space station. Or I can be hauling a buoyed column of photomultiplier light-sensors across the floor of the Pacific to enlarge an ultra-high-energy neutrino detector array. All this without leaving the campus. I can hardly wait!

John G. Cramer's 2016 nonfiction book describing his transactional interpretation of quantum mechanics, The Quantum Handshake - Entanglement, Nonlocality, and Transactions (Springer, January 2016), is available online as a hardcover or eBook.

SF Novels by John Cramer: Printed editions of John's hard SF novels Twistor and Einstein's Bridge are available from Amazon. His new novel, Fermi's Question, may be coming soon.

Alternate View Columns Online: Electronic reprints of 212 or more "The Alternate View" columns by John G. Cramer published in Analog between 1984 and the present are currently available online.


Mind Children, Hans Moravec, Harvard University Press, Cambridge, MA (1988);

The Tomorrow Makers, Grant Fjermedal, Chapter 16, Macmillan Publishing Co., New York (1986);

Data Glove:
"Interfaces for Advanced Computing," James D. Foley, Scientific American 257, #4, pp. 126-135, (Oct. - 1987).



 This page was created by John G. Cramer on 11/08/2014.