A close look at the Apollo 11 EVA footage shows ghostly astronauts, which of course has launched speculation that the footage was faked. If NASA could get to the Moon, why couldn't it capture good video?! The footage wasn't faked. The poor quality and ghostly look are artifacts of the odd way NASA had to convert the lunar footage to a format that could be broadcast. To understand this, we have to unpack exactly how TVs worked in the mid 20th century.
In the United States at least, the cathode ray tube technology that brought television into homes in the 1950s remained pretty much unchanged until the 2000s. Black and white TVs gave way to colour without too much change, but then flat screen technology took over and our living rooms became less cluttered. For the moment, though, we're interested in black and white cathode ray tube televisions, and it all starts with the camera, so let's start there.
Inside an analogue video camera, the scene being filmed is focussed through a lens onto a photosensitive plate. That plate is scanned by an electron beam. Two coiled wires around the camera tube deflect the beam so it scans in lines, left to right, top to bottom, covering the whole plate. It does this 30 times every second, which, incidentally, is where we get the standard frame rate of 30 frames a second. As it scans, the image is encoded as a varying voltage: the brighter the point in the frame, the higher the signal.
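The scan pattern above can be sketched in a few lines of code. This is only an illustration of the raster order, not real NTSC signal levels: the tiny 2×2 frame and the 0-to-1 brightness scale are my own stand-ins.

```python
def raster_scan(frame):
    """Scan a frame (rows of brightness values) left to right, top to
    bottom, emitting one signal amplitude per point: the brighter the
    point, the higher the amplitude."""
    signal = []
    for row in frame:           # top to bottom
        for brightness in row:  # left to right
            signal.append(brightness)
    return signal

# An illustrative 2x2 "image": dark, mid-grey, white, dim.
frame = [
    [0.0, 0.5],
    [1.0, 0.2],
]
print(raster_scan(frame))  # [0.0, 0.5, 1.0, 0.2]
```

The whole 2-D picture comes out as one 1-D stream of amplitudes, which is exactly why the monitor at the other end can rebuild the image just by scanning in the same order.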
That signal is then sent to a monitor. The monitor has its own electron beam, whose intensity is controlled by the voltage coming from the camera: the higher the voltage, the stronger the beam, corresponding to a brighter point. That electron beam is fired at the front of the monitor to strike a screen coated with phosphor. Every electron strike yields a point of visible light, and the higher the voltage the brighter the point. The beam scans the monitor's screen the same way the camera scanned the image on the plate, line by line, 30 frames each second, leaving behind points of light mirroring the original filmed scene. Our brains put those dots together to form an image.
There are other signals encoded alongside the picture that help create the video. Synchronizing pulses tell the beam to go dark at the end of each line, when to return to the top of the screen, and keep the lines aligned properly to avoid a wavy image.
But there's another element to the video image, a slight complication born of the technological limits of the era. The image we see on a screen is the glow of the electron beam hitting the phosphor on the faceplate. Each point glows then fades. If the electron beam simply scanned the screen top to bottom, the image at the top would be faded by the time the image was drawn at the bottom. The solution was to break up each frame. A standard broadcast frame has 525 lines, so each frame is broken into two fields of 262.5 lines each, which also means the 30 frames a second become 60 fields a second. The beam fills in all the odd numbered lines first and then returns to the top to fill in the even numbered lines. It's a process called interlacing: two fields of video are put together to create one frame. Another electrical pulse in the signal ensures the two fields are properly interlaced, with one field coming in a half line after the other, and it all happens so fast our brains just see a clean image.
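The interlaced scan order can be sketched like this. I'm using a 9-line frame rather than the broadcast 525 just to keep the output readable; the principle is identical.

```python
def interlaced_order(total_lines):
    """Return the order in which lines are drawn in an interlaced scan:
    first the field of odd-numbered lines, then the field of
    even-numbered lines."""
    odd_field = list(range(1, total_lines + 1, 2))
    even_field = list(range(2, total_lines + 1, 2))
    return odd_field + even_field

print(interlaced_order(9))
# [1, 3, 5, 7, 9, 2, 4, 6, 8]
```

With 525 lines the odd field carries 263 lines and the even field 262, which is where the average of 262.5 lines per field comes from.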
As the name cathode ray tube suggests, this technology relied on tubes: vacuum vessels in which hot wires supplied the electrons and coiled wires deflected the beam so it could scan the plate or screen. It was a hot, heavy system that drew a lot of power. And none of those are things you want when you're working on the Moon. So NASA used a simpler camera for Apollo 11's moonwalk.
For simplicity's sake, Apollo 11 used a black and white camera, which had the added benefit of using less bandwidth when the signal was sent from the Moon to the Earth. The camera had one imaging tube that scanned at just 10 frames a second with 320 lines per frame. There was no interlacing. The bandwidth was also low: 0.4 MHz versus 5 MHz for what was then standard broadcast. Adding to the low quality video, the camera's vidicon imaging tube lagged, adding a bit of smeariness to the image.
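A quick back-of-the-envelope comparison shows just how far apart the two formats were. The numbers come from the figures above; lines-per-second is my own crude proxy for how much picture information each format carries, not an official broadcast metric.

```python
# Scan parameters from the article.
broadcast = {"frames_per_s": 30, "lines_per_frame": 525}
lunar     = {"frames_per_s": 10, "lines_per_frame": 320}

def lines_per_second(fmt):
    """Lines scanned per second: a rough proxy for picture information."""
    return fmt["frames_per_s"] * fmt["lines_per_frame"]

print(lines_per_second(broadcast))  # 15750
print(lines_per_second(lunar))      # 3200
```

The lunar camera was pushing roughly a fifth as many scan lines per second as the broadcast standard, which lines up with its much narrower 0.4 MHz bandwidth.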
Bandwidth and smeared image aside, the 10 frames per second, 320 lines, and lack of interlacing made a wholly incompatible type of image that couldn't be seen on pretty much any TV system in the world when it came back from the Moon. Before it could be broadcast, it had to be converted by systems installed at certain ground stations, generating the right kind of broadcast signal.
This was a two-stage process. First, another vidicon camera was set up facing a TV screen showing the lunar footage. This camera recorded the video at a rate of 60 fields per second, but only when there was a full image on the screen. This meant the converted video had a fresh image only every tenth of a second: just one out of every six fields contained a new picture. So the next step was replacing the missing five fields. To do this, the good field was recorded onto a magnetic disk and then replayed five times. This yielded the necessary 60 fields per second of 262.5 lines each, the equivalent of 30 full 525-line frames per second. The signal was then ready for broadcast around the world via radio dishes, the same way TV was always broadcast at the time. The repetition of fields, however, gives the footage that super low quality, ghostly look.
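The field-repetition step above can be sketched as a tiny function. The string labels standing in for fields are my own illustration; the key numbers (one fresh image every tenth of a second, six broadcast fields per image) come from the conversion process described above.

```python
def expand_fields(source_fields, repeats=6):
    """Repeat each captured field so a 10-image-per-second source fills
    the 60 fields per second needed for broadcast: each fresh field is
    shown once and then replayed five more times from the disk."""
    output = []
    for field in source_fields:
        output.extend([field] * repeats)
    return output

# Two fresh images in a row become twelve broadcast fields.
print(expand_fields(["A", "B"]))
# ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B']
```

Ten fresh images per second times six fields each gives exactly the 60 fields per second a standard set expected, and that five-fold repetition is the direct source of the footage's ghostly look.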
Even if it's not great, we're lucky to have had a live broadcast of Apollo 11's landing at all. The mission was about politics and technology, not about television. Wally Schirra somewhat infamously resisted live TV broadcasts from Apollo 7, arguing that they would interfere with the primary goal of the mission in supporting the eventual landing on the Moon. But when Apollo 8 broadcast a live image of Earth from the Moon around Christmas in 1968, audiences were glued to their televisions, and NASA realized that sharing the landing live would not only move everyone watching, it would shape how the world remembered the program. The original plan to send Apollo 11 with only a 16mm movie camera, without enough film to record the whole moonwalk, was scrapped in favour of this somewhat awkward system for bringing live images from the Moon to the world.
Curious how we got that first shot of Armstrong walking down the LM ladder if no one was on the surface? I’ve got a video about that right here:
Sources: How Apollo Flew to the Moon by David Woods; Basic TV Technology: Digital and Analog by Robert L. Hartwig