2019-06-24, 10:00–13:00, Room 1
Extracting valuable information from satellite imagery datasets is challenging, due both to the large amounts of data and to the lack of techniques able to automatically extract complex patterns from such spatio-temporal data. Join us to see how eo-learn can help you extract meaningful information from satellite data with just a few lines of code.
The availability of open Earth observation (EO) data through the Copernicus and Landsat programs represents an unprecedented resource for many EO applications, ranging from ocean and land use/land cover monitoring to disaster control, emergency services, and humanitarian relief. Such large amounts of spatio-temporal data call for tools that can automatically extract the complex patterns embedded within.
eo-learn is a collection of open-source Python packages developed to seamlessly access and process spatio-temporal satellite imagery in a timely and automatic manner. eo-learn makes extracting valuable information from satellite imagery as easy as defining a sequence of operations to be performed on it. It also encourages collaboration: tasks and workflows can be shared, allowing for community-driven ways to exploit EO data.
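To give a flavour of this task/workflow model, here is a minimal sketch built on the core building blocks from eolearn.core (EOTask, FeatureType, LinearWorkflow, as of the 2019-era releases). The custom task name NdviTask, the 'BANDS' feature name, and the band ordering are illustrative assumptions, not part of the library:

```python
import numpy as np
from eolearn.core import EOTask, FeatureType, LinearWorkflow

class NdviTask(EOTask):
    """Hypothetical task: derive an NDVI feature from a 'BANDS' data feature."""
    def execute(self, eopatch):
        bands = eopatch[FeatureType.DATA]['BANDS']  # shape: (time, height, width, channels)
        red, nir = bands[..., 2], bands[..., 3]     # assumed band order: B02, B03, B04, B08
        ndvi = (nir - red) / (nir + red + 1e-8)     # small epsilon avoids division by zero
        eopatch[FeatureType.DATA]['NDVI'] = ndvi[..., np.newaxis]
        return eopatch

# Tasks chain into a workflow; the same workflow can then be executed
# on any number of image patches.
workflow = LinearWorkflow(NdviTask())
```

Because a task is just a Python object with an execute method, sharing a workflow amounts to sharing the code that defines its tasks.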
The eo-learn library acts as a bridge between the EO/remote sensing (RS) field and the Python ecosystem for data science and machine learning. The library is written in Python and uses NumPy arrays to store and handle remote sensing (raster) data and GeoPandas dataframes for vector data. Its aim is to lower the entry barrier to the field of RS for non-experts while, at the same time, bringing the state-of-the-art tools for computer vision, machine learning, and deep learning that exist in the Python ecosystem to remote sensing experts.
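The central data container is the EOPatch, which holds raster features as NumPy arrays and vector features as GeoPandas dataframes. A minimal sketch, where the feature names 'BANDS' and 'LULC' and the toy geometry are made up for illustration:

```python
import numpy as np
import geopandas as gpd
from shapely.geometry import Polygon
from eolearn.core import EOPatch, FeatureType

patch = EOPatch()

# Raster feature: a NumPy array of shape (time, height, width, channels)
patch[FeatureType.DATA]['BANDS'] = np.zeros((5, 100, 100, 4), dtype=np.float32)

# Vector feature: a GeoPandas dataframe, e.g. land-use/land-cover polygons
patch[FeatureType.VECTOR_TIMELESS]['LULC'] = gpd.GeoDataFrame(
    {'lulc_id': [1]},
    geometry=[Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])]
)
```

Because the features are plain NumPy arrays and GeoPandas dataframes, they plug directly into scikit-learn, matplotlib, and the rest of the Python data science stack.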
During the workshop we will introduce the eo-learn framework, show examples of tasks for retrieving EO data (e.g. Sentinel-2, Sentinel-1, DEM), processing it, and adding non-EO data (e.g. reference labels) to the dataset, and finally build a complete pipeline that runs such a workflow over larger areas, preparing the data for ML algorithms (a rough sketch of the scaling step follows below).
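As that sketch, and assuming a workflow like the one above, a larger area of interest can be split into a grid of bounding boxes with sentinelhub's BBoxSplitter and processed in parallel with eolearn.core's EOExecutor. The area geometry, the grid size, and the DummyInputTask stand-in for a real data-retrieval task are all placeholder assumptions:

```python
from shapely.geometry import Polygon
from sentinelhub import BBoxSplitter, CRS
from eolearn.core import EOTask, EOPatch, EOExecutor, LinearWorkflow

class DummyInputTask(EOTask):
    """Stand-in for a real data-retrieval task (e.g. a Sentinel-2 input task)."""
    def execute(self, *, bbox):
        return EOPatch(bbox=bbox)  # a real task would download data for this bbox

# Placeholder area of interest in WGS84; a real AOI would come from a shapefile
aoi = Polygon([(14.0, 45.8), (14.6, 45.8), (14.6, 46.2), (14.0, 46.2)])

# Split the AOI into a 3x3 grid of bounding boxes
splitter = BBoxSplitter([aoi], CRS.WGS84, (3, 3))

input_task = DummyInputTask()
workflow = LinearWorkflow(input_task)

# One set of execution arguments per bounding box
execution_args = [{input_task: {'bbox': bbox}} for bbox in splitter.get_bbox_list()]

executor = EOExecutor(workflow, execution_args)
executor.run(workers=4)  # execute the workflow over all patches in parallel
```

The resulting EOPatches can then be saved to disk and stacked into feature matrices for ML training.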