Sensor Uses AI to Obtain Real-Time Images from Satellite Orbit

TOKYO, Jan. 22, 2019 — Researchers at Tokyo Institute of Technology (Tokyo Tech) have developed a low-cost star tracker and Earth sensor. The star tracker will be used with microsatellites to handle calibration observations, operation verification tests, and long-term performance monitoring in orbit. The Earth camera, with a design inspired by “edge computing,” will perform image recognition in orbit, using an artificial intelligence (AI) system to identify land use and vegetation distribution. The star tracker and Earth sensor were installed on the Japan Aerospace Exploration Agency’s (JAXA) Epsilon-4 rocket, which was launched Jan. 17, 2019.

The tracker and sensor use deep learning to determine orientation (attitude) in space. With no ground reference to indicate direction, the tracking device continuously tracks multiple fixed stars to achieve accuracy, while the sensor estimates attitude from images of Earth. The plan is for this Deep Learning Attitude Sensor (DLAS) to capture images of stars in orbit under various conditions, calibrate the sensor system, determine attitude using its algorithms, and demonstrate long-term operation over a one-year test period.
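For readers unfamiliar with star-tracker attitude determination, the classical step after star identification can be sketched as solving Wahba's problem: find the rotation that best maps catalog star directions onto the directions the camera measures. The sketch below is illustrative only, not the DLAS algorithm; the SVD-based solver is a standard textbook method, and all star data are invented.

```python
# Illustrative sketch (not the DLAS algorithm): given unit vectors to
# stars in the camera frame and the same stars' catalog directions in
# the inertial frame, solve Wahba's problem for the attitude matrix
# using the standard SVD solution.
import numpy as np

def solve_attitude(body_vecs, inertial_vecs):
    """Return rotation R such that body ~= R @ inertial (least squares)."""
    B = body_vecs.T @ inertial_vecs          # 3x3 attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt   # proper rotation, det = +1

rng = np.random.default_rng(0)
# Invented catalog directions for three stars (unit vectors).
stars = rng.normal(size=(3, 3))
stars /= np.linalg.norm(stars, axis=1, keepdims=True)
# A known "true" attitude: 30-degree rotation about the z-axis.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
observed = stars @ R_true.T                  # what the camera would see
R_est = solve_attitude(observed, stars)
print(np.allclose(R_est, R_true))            # True: attitude recovered
```

With noiseless measurements of three non-coplanar stars the rotation is recovered exactly; a flight system tracks many stars so that noise averages out.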

Deep Learning Attitude Sensor for space, Tokyo Institute of Technology.

This is the Deep Learning Attitude Sensor control box and camera unit (left) and the camera unit undergoing vibration testing (right). Courtesy of Yoichi Yatsu of Tokyo Institute of Technology.

The DLAS was developed with three goals in mind, said the team, led by professor Yoichi Yatsu. The first is to demonstrate that a star tracker built from inexpensive, high-performance, commercially available components can operate effectively in space. The second is to conduct orbital testing of real-time image recognition using deep learning. The third is to apply this image identification technology and to evaluate techniques for estimating three-axis attitude by comparing observed land features, even when partly obscured by clouds, with map data prerecorded in the onboard computer.

This is an example of vegetation/land-use identification using an Earth image from the ISS. Courtesy of Yoichi Yatsu.

To photograph Earth, the DLAS will use two compact visible-light cameras incorporated in the baffle of the star tracker. Each 8-megapixel image will be processed in about 4 seconds using a custom high-speed, lightweight image identification algorithm, with vegetation and land use recognized across nine categories. According to the researchers, this will be the first demonstration of real-time image recognition in space using deep learning. In orbit, more than 1,000 images will be captured as training data and transferred to the ground for use in satellite image application tests.
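The article does not describe the onboard algorithm itself; as a rough illustration of tile-wise land-use labeling over nine categories, the sketch below splits a synthetic 8-megapixel frame into tiles and assigns each tile a class. The class names and the stand-in classifier are invented placeholders for a trained network.

```python
# Hypothetical sketch of tile-wise land-use labeling: an ~8-megapixel
# frame is split into tiles and each tile receives one of nine class
# labels. A real system would run a trained CNN per tile; a stand-in
# classifier keeps this example self-contained.
import numpy as np

# Invented labels; the article only says "nine categories".
CLASSES = ["water", "forest", "cropland", "grassland", "urban",
           "bare land", "snow/ice", "cloud", "other"]

def classify_tile(tile):
    # Stand-in for a CNN forward pass: derive a class from brightness.
    return int(tile.mean() * len(CLASSES)) % len(CLASSES)

def label_map(image, tile=64):
    """Classify every tile and return a grid of class indices."""
    h, w = image.shape[:2]
    rows, cols = h // tile, w // tile
    labels = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            labels[r, c] = classify_tile(
                image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile])
    return labels

rng = np.random.default_rng(1)
frame = rng.random((3328, 2496))   # ~8.3-megapixel synthetic frame
labels = label_map(frame)
print(labels.shape)                # (52, 39) tile grid
```

The per-tile label grid is far smaller than the raw frame, which is one reason onboard recognition within a few seconds per image is plausible on modest hardware.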

The researchers also plan to monitor a superwide field of 100 square degrees in UV light, with the aim of discovering the early activity of short-lived astrophysical phenomena such as gravitational-wave sources and other unknown events. Satellites are needed for this endeavor, said the team, because most UV light is blocked by the atmosphere, and obtaining sharp photographs of faint stars requires high attitude stability. In cooperation with the NASA Jet Propulsion Laboratory, the Tokyo Tech team will employ an ultrasensitive backside-illuminated CMOS imager optimized for the UV band to achieve the required sensitivity.


This is attitude determination using a star tracker (left) and Earth sensor (right). Courtesy of Yoichi Yatsu of Tokyo Institute of Technology.

Also, to enable detailed follow-up observation with ground-based telescopes, image analysis will be performed on the satellite. Real-time image recognition on orbiting satellites, at the computing “edge,” has the potential to enhance the value and operation of the nanosatellite platform, said the team. Information for use in defense, disaster monitoring, or debris capture, for example, can quickly lose its value if it is not communicated in a timely way. The Tokyo Tech researchers ultimately want satellites to detect such information autonomously, rather than relying on human eyes. To accomplish such an advanced observation mission, they believe it is necessary to develop a highly accurate star tracker, an advanced onboard computer that can be mounted on a satellite, and an automatic image analysis technique.
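The edge-computing argument above, that time-critical detections lose value if not relayed quickly, can be illustrated with a toy onboard triage step. The labels, freshness window, and `triage` function are hypothetical, not part of the DLAS design.

```python
# Toy illustration of edge-style triage: the satellite classifies each
# frame onboard and queues only fresh, time-critical detections for
# downlink, instead of transmitting everything for later human review.
from dataclasses import dataclass

@dataclass
class Frame:
    t: float      # capture time in seconds (invented)
    label: str    # onboard classifier output (invented labels)

INTERESTING = {"wildfire", "flood", "debris"}  # hypothetical priorities

def triage(frames, now, max_age=600.0):
    """Keep frames whose detection is interesting and still fresh."""
    return [f for f in frames
            if f.label in INTERESTING and now - f.t <= max_age]

frames = [Frame(0.0, "cropland"), Frame(100.0, "flood"),
          Frame(200.0, "wildfire"), Frame(300.0, "forest")]
queued = triage(frames, now=500.0)
print([f.label for f in queued])   # ['flood', 'wildfire']
```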


Published: January 2019
