NASA still trying to identify what took Hubble offline
On June 13, the Hubble Space Telescope took itself offline due to a fault in its payload computer, which manages the telescope’s scientific instruments. Since then, NASA has been doing the sort of troubleshooting that’s familiar to many of us—with the added pressure of the hardware being irreplaceable, in space, and about the same vintage as a Commodore 64.
So far, controllers have managed to rule out several components based on attempted fixes that haven’t worked. They have narrowed the problem down, but they haven’t pinpointed it. And at this point, the next steps will depend on the precise nature of the problem, so getting a diagnosis is the top priority.
If at first you don’t succeed…
The hardware at issue is part of the payload computer system, which contains a control processor, a communications bus, a memory module, and a processor that formats data and commands so that the controller can “speak” to all the individual science instruments (the system also converts the data that the instruments produce into a standard format for transmission to Earth). There’s also a power supply that is supposed to keep everything operating at the proper voltage.
Being cautious sorts, the people who designed Hubble provided a backup controller and three backup memory modules.
Initial indications showed a potential problem with the memory module, so the first attempt to restore Hubble involved switching to one of the backups. That fix failed, suggesting that the odd memory behavior was just a symptom of a problem elsewhere. Switching to the backup controller also failed to fix the problem; no matter which combination of controller and memory module was used, Hubble could not read from or write to the memory.
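The elimination logic behind that result can be sketched in a few lines of Python (all component names here are hypothetical, purely for illustration): if every pairing of controller and memory module fails the same way, the swapped parts are effectively exonerated, and suspicion shifts to hardware shared by all pairings.

```python
from itertools import product

def localize_fault(controllers, memory_modules, works):
    """Swap-test every controller/memory pairing.

    `works(ctrl, mem)` reports whether a given pairing can read/write
    memory. If any pairing works, the fault lies in a controller or
    memory module; if none does, a shared component is suspected.
    """
    if any(works(c, m) for c, m in product(controllers, memory_modules)):
        return "fault isolated to a controller or memory module"
    return "shared component suspected: power supply, bus, or formatter"

# Simulate Hubble's situation: a failure in a shared component makes
# every pairing fail, regardless of which spares are selected.
result = localize_fault(
    ["primary_ctrl", "backup_ctrl"],
    ["mem_1", "mem_2", "mem_3", "mem_4"],
    lambda c, m: False,  # every combination fails to access memory
)
print(result)
```

Running the sketch prints the shared-component verdict, mirroring how NASA’s controllers shifted their attention to the power supply, data bus, and formatter.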
Given that information, the controllers have turned their attention elsewhere. Prime candidates are now the power supply, the data bus, and the data formatting processor. It’s still possible to switch to the backup controller and memory, but the sequence of the procedure will differ based on exactly what is at fault. In a press release, NASA referred to this process as “more complex and riskier.”
But we also have reason for optimism: a data formatter failed in 2008, and NASA successfully switched to backups, which operated until a servicing mission replaced the failed hardware.
Given that NASA no longer has access to a vehicle designed for those sorts of servicing missions, getting a functional backup in place will be essential if we want to squeeze more years out of this one-of-a-kind observatory.
https://arstechnica.com/?p=1777677