Project Suncatcher and the case for data centers in orbit
Google’s Project Suncatcher is a moonshot to run AI hardware in space, and it’s already moving from slides to hardware. The plan is to launch two prototype satellites into low Earth orbit, some 400 miles above the Earth, in early 2027 to test whether TPU chips can survive the radiation and thermal extremes there. That puts a deadline and measurable goals behind a concept that used to live mostly in white papers.
Project Suncatcher aims to fly solar-powered satellites running TPU chips and to link them with lasers so they can pass data between one another. Google’s TPUs, or tensor processing units, already power its latest model, Gemini 3. The idea is to adapt familiar hardware to operate reliably in an environment where radiation is intense and temperatures swing wildly.
Other players are thinking along the same lines. Elon Musk has said SpaceX “will be doing data centers in space,” and some startups plan to put GPU-equipped satellites into orbit. That competition matters because it pushes the engineering and the economics in parallel.
One of the main attractions is power and cooling. In the right orbit, a satellite can harvest sunlight almost continuously, free of clouds, night-time gaps, and most atmospheric losses. That makes a solar array in space considerably more productive than the same panels in many ground installations.
Google plans to place the test satellites in a dawn-dusk sun-synchronous orbit, which follows the day-night terminator so the panels stay in near-constant sunlight. That orbital choice aims to maximize energy capture without depending on conditions on the ground, and it shapes the thermal and communications design in specific ways.
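For a rough sense of scale, the sketch below compares the annual energy falling on a square meter of panel in a continuously sunlit orbit with a typical ground installation. The sunlit fraction, ground capacity factor, and other figures are illustrative assumptions, not numbers from Google.

```python
# Back-of-envelope comparison of solar energy available to a panel in a
# dawn-dusk sun-synchronous orbit versus a typical ground installation.
# The sunlit fraction and ground capacity factor are illustrative assumptions.

SOLAR_CONSTANT = 1361.0   # W/m^2, mean solar irradiance above the atmosphere
HOURS_PER_YEAR = 8766.0

# Orbit: a dawn-dusk sun-synchronous orbit rides the terminator, so the
# satellite sits in sunlight for nearly the whole year (assumed ~99% here).
orbit_sunlit_fraction = 0.99
orbit_kwh_per_m2 = SOLAR_CONSTANT * orbit_sunlit_fraction * HOURS_PER_YEAR / 1000.0

# Ground: peak irradiance is ~1000 W/m^2, but night, weather, and panel angle
# cut the annual average to a capacity factor of roughly 15-25%.
ground_peak_w_per_m2 = 1000.0
ground_capacity_factor = 0.20
ground_kwh_per_m2 = ground_peak_w_per_m2 * ground_capacity_factor * HOURS_PER_YEAR / 1000.0

print(f"Orbit:  ~{orbit_kwh_per_m2:,.0f} kWh/m^2 per year")
print(f"Ground: ~{ground_kwh_per_m2:,.0f} kWh/m^2 per year")
print(f"Ratio:  ~{orbit_kwh_per_m2 / ground_kwh_per_m2:.1f}x")
```

Under those assumptions the orbital panel collects several times more energy per square meter per year than the ground panel, which is the core of the power argument.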
Early lab tests give reason for cautious optimism. The company exposed the chips to a proton beam and found they tolerated almost three times the radiation dose they are expected to receive in space. That doesn’t solve long-term reliability under solar storms, debris strikes, or years of thermal cycling, but it’s a promising start.
Thermal control is one of the toughest engineering problems for space servers. On Earth, air or liquid cooling is routine, but in vacuum all heat must escape through radiators. NASA studies show radiators can account for more than 40 percent of total power system mass at high power levels, which drives both design and launch cost decisions.
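A back-of-envelope Stefan-Boltzmann estimate shows why radiators loom so large. The sketch below assumes a hypothetical 100 kW compute payload, a 300 K radiator with emissivity 0.9, and roughly 5 kg per square meter of deployable radiator; none of these figures come from Google or the NASA studies.

```python
# In vacuum, waste heat can only leave by thermal radiation, governed by the
# Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
# The heat load, temperature, and areal mass below are illustrative assumptions.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_watts: float, temp_kelvin: float, emissivity: float = 0.9) -> float:
    """Idealized radiator area needed to reject a heat load, ignoring sunlight
    absorbed by the radiator and heat radiated back from Earth."""
    return heat_watts / (emissivity * SIGMA * temp_kelvin ** 4)

heat_load_w = 100_000.0    # assume a 100 kW compute payload, all dissipated as heat
radiator_temp_k = 300.0    # assume radiator surfaces held near room temperature

area = radiator_area_m2(heat_load_w, radiator_temp_k)
mass_kg = area * 5.0       # assume ~5 kg per m^2 of deployable radiator

print(f"Radiator area needed: ~{area:.0f} m^2")
print(f"Radiator mass at 5 kg/m^2: ~{mass_kg / 1000:.1f} tonnes")
```

Even this idealized case calls for a radiator the size of a tennis court and more than a tonne of hardware for a single 100 kW payload, which is why radiator mass dominates the design trade-offs.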
Communications is the other big hurdle. Optical networking via lasers could deliver multi-terabit links between fast-moving satellites, but keeping those links aligned and compensating for orbital drift is hard. The system will also need robust links to ground stations and strategies for coping with weather and atmospheric disruption.
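A quick diffraction calculation illustrates the pointing problem. The wavelength, telescope aperture, and link distance below are illustrative assumptions rather than Project Suncatcher specifications.

```python
# Why laser links between moving satellites are hard: a diffraction-limited
# beam is extremely narrow, so tiny pointing errors miss the receiver entirely.
# Wavelength, aperture, and link distance are illustrative assumptions.

wavelength_m = 1550e-9    # common telecom laser wavelength
aperture_m = 0.1          # assume a 10 cm transmit telescope
link_distance_m = 100e3   # assume satellites flying ~100 km apart

# Diffraction-limited full divergence angle (Airy first-null approximation).
divergence_rad = 2.44 * wavelength_m / aperture_m
spot_diameter_m = divergence_rad * link_distance_m

print(f"Beam divergence: ~{divergence_rad * 1e6:.0f} microradians")
print(f"Spot diameter at {link_distance_m / 1e3:.0f} km: ~{spot_diameter_m:.1f} m")
# Keeping the partner satellite inside a spot a few meters wide, while both
# craft move at roughly 7.5 km/s, demands microradian-class pointing and
# continuous active tracking.
```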
Maintenance and upgrades are a practical headache most people overlook. Terrestrial data centers rely on regular servicing and hardware swaps; in orbit you need robotics or extra missions to repair or upgrade gear. Those logistics add risk and cost, and they change how you design redundancy.
The economics will ultimately decide whether this scales beyond prototypes. Project Suncatcher’s analysis suggests launch costs could fall below $200 (£151) per kilogram by the mid-2030s, roughly a seventh or an eighth of today’s prices. At that price, building in orbit could rival some terrestrial builds, but only if satellite lifetimes hold up and replacements aren’t needed too often.
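The arithmetic behind that comparison is simple; in the sketch below, today’s price per kilogram and the satellite mass are illustrative assumptions rather than figures from Google’s analysis.

```python
# Rough launch-cost arithmetic behind the economic argument. Today's price per
# kilogram and the satellite mass are illustrative assumptions.

target_cost_per_kg = 200.0    # the mid-2030s figure cited in the analysis, USD
today_cost_per_kg = 1500.0    # assumed rough current price to low Earth orbit, USD
satellite_mass_kg = 1000.0    # assume a one-tonne compute satellite

print(f"Reduction factor: ~{today_cost_per_kg / target_cost_per_kg:.1f}x")
print(f"Launch cost today:      ~${today_cost_per_kg * satellite_mass_kg:,.0f}")
print(f"Launch cost at $200/kg: ~${target_cost_per_kg * satellite_mass_kg:,.0f}")
```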
A successful two-satellite test in 2027 would validate key assumptions: TPUs surviving radiation, solar power stability, and laser comms performance. But even a clean demonstration won’t prove that full-scale orbital data centers are practical or cost-effective. Large-scale adoption would mean solving thermal radiators, long-term radiation effects, serviceability, and network fabric challenges over many years.
Space-based computing remains a bold bet: technically demanding, expensive to get right, and slow to scale. If it works, it could change how we host AI workloads and how we think about infrastructure, but the road from a two-satellite demo to fleets of orbital data centers is long and full of engineering trade-offs.

