STARCOP 2.0: Detecting methane onboard satellites
Methane is more than 80 times more potent than CO₂ over 20 years. Yet detecting methane leaks from space remains slow and computationally heavy because most methods require orthorectification, a correction step often performed on the ground, delaying crucial action.
At the Earth Systems Lab this year, researchers tackled a major bottleneck: can we detect methane accurately even before images are fully processed?
Read the full blog below:
Revealing the invisible from space – methane detection with ML onboard satellites
Over 80 times more potent than carbon dioxide on a 20-year time scale, methane calls for urgent action to curb its emissions, particularly from the oil and gas industries. Because methane is invisible to the naked eye, detecting point-source emissions and leaks remains challenging. Remote sensors onboard satellites provide a global view and, through hyperspectral imaging, widening coverage of the light spectrum. Together with ML, they have the potential to detect methane emissions with speed and accuracy directly from satellites, enabling a rapid response system. Imagine a world where a constellation of satellites is capable of safeguarding our planet by holding super-emitters accountable!
From this vision emerged our project STARCOP as part of FDL’s Earth Systems Lab (ESL), a programme in partnership with ESA that pushes the boundaries of applied AI in solving some of the most pressing challenges of our planet. In its first iteration, researchers were able to demonstrate impressive gains from applying ML on hyperspectral images. In its second run this year, we took a major step towards the real-world implementation of ML in space by tackling a major bottleneck – orthorectification.
Orthorectification is an image processing procedure that removes distortion effects caused by the satellite observation angle, ground elevation and the Earth’s surface curvature. It is crucial for the extraction of atmospheric features, and most state-of-the-art methane detection methods, both ML and non-ML, rely on it. However, it is computationally intensive and therefore often performed on the ground after the images are sent back to Earth. The extra downlink time and cost this incurs delay methane mitigation efforts. We therefore asked the question – can we develop an ML algorithm that detects methane with equal accuracy on unorthorectified images?
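To give a sense of why this step is heavy, here is a minimal toy sketch of nearest-neighbour orthorectification: every raw sensor-geometry pixel is resampled onto a regular lat/lon map grid. All names and the simple lookup are illustrative assumptions; real pipelines also use a digital elevation model and handle every spectral band of the hyperspectral cube.

```python
import numpy as np

def orthorectify_nearest(raw, lats, lons, grid_res=0.001):
    """Toy orthorectification of a single-band image `raw`, given
    per-pixel geolocation arrays `lats`/`lons` (hypothetical inputs)."""
    # Build a regular map grid spanning the scene footprint.
    lat_axis = np.arange(lats.min(), lats.max(), grid_res)
    lon_axis = np.arange(lons.min(), lons.max(), grid_res)
    ortho = np.full((lat_axis.size, lon_axis.size), np.nan)
    # Map every raw pixel to its nearest map cell. This per-pixel
    # resampling over the whole image -- repeated for each spectral band
    # in a real hyperspectral cube -- is what makes the step costly.
    rows = np.clip(np.rint((lats - lats.min()) / grid_res).astype(int),
                   0, lat_axis.size - 1)
    cols = np.clip(np.rint((lons - lons.min()) / grid_res).astype(int),
                   0, lon_axis.size - 1)
    ortho[rows, cols] = raw
    return ortho
```

Skipping this resampling onboard, and working in raw sensor geometry instead, is exactly what motivates the question above.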
While exploring supervised learning approaches, we identified a major challenge: annotated data in the unorthorectified format is extremely limited, as all existing methane plume labels originate from images that have already been orthorectified.
To tackle this, we developed a novel method to reverse the orthorectification in existing annotations, creating the first ML-ready unorthorectified dataset for methane detection. We also addressed the severe imbalance between image tiles with and without methane plumes using a dedicated image jittering scheme. The dataset is publicly released on huggingface.co to accelerate the development of ML methods better suited to onboard applications. Using computer vision models, including vision transformers, we demonstrated comparable performance in methane plume classification and segmentation on both the orthorectified and unorthorectified datasets. They also outperform the state-of-the-art non-ML method by a mile!
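The two ideas above can be sketched in a few lines. This is a guess at the mechanics, not the team's exact method: the label reversal is shown as a nearest-cell lookup from raw sensor geometry into the orthorectified mask, and the imbalance fix as oversampling tiles jittered around plume pixels. All function names and parameters are hypothetical.

```python
import numpy as np

def unorthorectify_mask(ortho_mask, lats, lons, lat0, lon0, grid_res):
    # Each raw-geometry pixel looks up the ortho grid cell it maps to,
    # pulling the plume label back into unorthorectified sensor geometry.
    rows = np.clip(np.rint((lats - lat0) / grid_res).astype(int),
                   0, ortho_mask.shape[0] - 1)
    cols = np.clip(np.rint((lons - lon0) / grid_res).astype(int),
                   0, ortho_mask.shape[1] - 1)
    return ortho_mask[rows, cols]

def jittered_positive_tiles(mask, tile=32, n=8, rng=None):
    # Oversample tiles whose top-left corner is jittered around randomly
    # chosen plume pixels, so plume-containing tiles are no longer rare.
    rng = np.random.default_rng(rng)
    ys, xs = np.nonzero(mask)
    corners = []
    for _ in range(n):
        i = rng.integers(len(ys))
        dy, dx = rng.integers(-tile // 2, tile // 2 + 1, size=2)
        y = int(np.clip(ys[i] + dy - tile // 2, 0, mask.shape[0] - tile))
        x = int(np.clip(xs[i] + dx - tile // 2, 0, mask.shape[1] - tile))
        corners.append((y, x))
    return corners
```

The key property of the reversal is that no resampling of the imagery itself is needed: only the lightweight label mask is warped, leaving the raw sensor data untouched.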
Our work provided a first peek at what ML onboard satellites can achieve even with minimally processed images. Looking ahead, with the launch of ESA’s Copernicus Hyperspectral Imaging Mission (CHIME) on the horizon, we hope this work sets a strong foundation for onboard methane detection – a step closer to making our vision a reality.
The STARCOP 2.0 team consists of Maggie Chen, Luca Marini, Hala Lamdouar, Laura Martínez-Ferrer, Giacomo Acciarini and Chris Bridges. Our work will be presented at the NeurIPS 2025 Machine Learning and Physical Sciences Workshop.
- Written by Maggie Chen, FDL Earth Systems Lab researcher
The team have:
Designed and validated a "Tip and Cue" satellite system that uses onboard AI to perform end-to-end methane plume detection and analysis, dramatically cutting the delay from detection to action.
Optimised a Vision Transformer model that, tested on hardware simulating spacecraft constraints, achieved ultra-fast plume detection in just 5-10 milliseconds, proving the feasibility of real-time, in-orbit processing.
Developed and released two new, machine-learning-ready datasets, including one specifically optimised for the challenges of onboard deployment.
Built a framework that enables a direct pipeline from detection to stakeholders, transforming raw satellite imagery into actionable intelligence (plume location, size, and concentration).
Showed that models trained on these datasets perform as well as, and in some cases far better than, state-of-the-art approaches, even without orthorectified images. This opens the door to methane detection directly onboard satellites, reducing latency and enabling rapid responses to super-emitters.
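Per-tile latency figures like the 5-10 milliseconds quoted above are typically measured with a simple timing harness. The sketch below shows one way to do it; the "model" here is a deliberately trivial stand-in, not the team's optimised Vision Transformer, and all names are illustrative.

```python
import time
import numpy as np

def measure_latency_ms(model_fn, tile, n_warmup=3, n_runs=20):
    # Median wall-clock latency of one inference call, in milliseconds.
    # Warm-up calls are excluded so one-off setup cost doesn't skew it.
    for _ in range(n_warmup):
        model_fn(tile)
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        model_fn(tile)
        times.append((time.perf_counter() - t0) * 1e3)
    return float(np.median(times))

# Stand-in "model": one linear projection over a flattened 64x64 tile.
rng = np.random.default_rng(0)
w = rng.standard_normal((64 * 64, 2))
stub_model = lambda x: x.reshape(1, -1) @ w
tile = rng.standard_normal((64, 64))
print(f"median latency: {measure_latency_ms(stub_model, tile):.3f} ms")
```

Reporting the median over repeated runs, rather than a single call, gives a figure that is robust to scheduling noise on the test hardware.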
Faster detection. Faster mitigation. A clearer path toward reducing one of the planet’s most damaging greenhouse gases.