0.13
gp2021
GeoPython 2021
2021-04-22
2021-04-23
2
00:05
https://submit.geopython.net/gp2021/schedule/
2021-04-22T08:30:00+02:00
08:30
00:30
Track 1
gp2021-258-registration-opens
https://submit.geopython.net/gp2021/talk/HNBSXV/
false
Registration Opens
Special Session
en
Streaming links can be found in chat rooms ("Track 1" and "Track 2")
(Check your email for your login to chat)
2021-04-22T09:00:00+02:00
09:00
00:15
Track 1
gp2021-259-opening-session-day-1
https://submit.geopython.net/gp2021/talk/YCHTDH/
false
Opening Session Day 1
Special Session
en
Conference Opening Day 1
2021-04-22T09:15:00+02:00
09:15
00:30
Track 1
gp2021-219-geopythonic-processing-of-massive-high-resolution-copernicus-sentinel-data-streams-on-cloud-infrastructure
https://submit.geopython.net/gp2021/talk/K7AJAX/
false
Geopythonic processing of massive high resolution Copernicus Sentinel data streams on cloud infrastructure
Talk (25min + 5min Q&A)
en
We demonstrate the use of geopython solutions to address Big Data Analytics requirements in cloud-based processing of massive high resolution Copernicus Sentinel data streams in a European agricultural use context.
The European Union's Copernicus Sentinel sensors produce large-volume Earth Observation data streams, which are available under a full, free and open license. The Copernicus program also supports the establishment of Data and Information Access Services (DIAS) cloud-based processing solutions, some of which are federated in the European Open Science Cloud (EOSC). DIAS platforms closely couple the provision of compute resources with access to very large S3 object storage for Sentinel data archives, which include high resolution Sentinel-1 and -2 sensor data (10 m resolution), with high revisit (5-6 days) and continental coverage.
We demonstrate how we use a combination of geopython modules (GDAL, rasterio, geopandas) with PostgreSQL/PostGIS spatial databases to manage the processing of deep time series data stacks with very large vector data sets that outline agricultural parcels in selected EU Member States. Accelerated processing is supported by integration of Numba and orchestration across multiple VMs on the cloud platform using customized Docker containers. Our client interfaces make use of Flask RESTful services and Jupyter Notebooks to support analytical tasks, which can include SciPy-based image analysis. Time series can also be integrated into machine learning frameworks like TensorFlow and PyTorch. We will demonstrate how our modular setup facilitates use in monitoring tasks that are required in the Common Agricultural Policy context.
In the course of our presentation, we'll outline specific processing needs and how we intend to integrate more advanced hardware solutions, such as GPU-based processing (in CuPy, Numba), which is still surprisingly sparsely used in the geospatial domain. The relevance of our initiative in the context of European programs such as Destination Earth and European Data Spaces will be briefly addressed as well. Finally, we'll introduce the public GitHub repository where we document our current and ongoing developments.
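To give a flavour of the per-parcel time-series computations such a pipeline runs, here is a minimal NDVI sketch. The reflectance values and dates are invented, and NDVI is offered here as a representative vegetation index; the real system reads Sentinel-2 bands with rasterio and aggregates over parcel geometries with geopandas.

```python
# NDVI = (NIR - red) / (NIR + red); rising values indicate crop green-up.
# Per-date mean reflectances for one hypothetical parcel:
observations = [
    ("2020-04-01", 0.12, 0.30),   # (date, mean red, mean NIR)
    ("2020-05-01", 0.08, 0.42),
    ("2020-06-01", 0.05, 0.55),
]

def ndvi_series(obs):
    """Compute one NDVI value per observation date."""
    return [(date, (nir - red) / (nir + red)) for date, red, nir in obs]
```

Stacking such series over thousands of parcels is what turns the raw Sentinel archive into inputs for the monitoring tasks described above.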
Guido Lemoine, Konstantinos Anastasakis
2021-04-22T09:45:00+02:00
09:45
00:30
Track 1
gp2021-212-interpolating-elevation-data-inside-tunnel-and-bridge-networks
https://submit.geopython.net/gp2021/talk/WDNMFG/
false
Interpolating Elevation Data inside Tunnel and Bridge Networks
Talk (25min + 5min Q&A)
en
We present a method to interpolate elevation data inside complex tunnel or bridge networks. Our work is based on Python libraries such as NumPy, NetworkX and Flask.
Elevation data is often not available for roads or tracks on bridges and inside tunnels.
We present a method to interpolate elevation data inside complex bridge and tunnel networks from known elevation values at the access points. For the case of two access points, a simple linear interpolation is sufficient. However, many realistic bridge or tunnel networks (e.g. subway tracks or multi-lane tunnels) contain intersections, railway switches or more than two access points.
For the general case, the task is formulated as an optimization problem which reduces to solving a system of linear equations for each network.
The method is implemented as a REST API using Python libraries such as NumPy, NetworkX and Flask.
The API is capable of matching input GeoJSON LineString features to tunnel and bridge data from OpenStreetMap and decorating them with interpolated elevation data (for example from SRTM).
As an example of use, we demonstrate how the API integrates with our routing services to yield elevation profiles for trips in public transport.
These elevation profiles could assist the planning of electromobility infrastructure such as the location of charging points.
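The linear system behind the multi-access-point case can be sketched in pure Python. Everything below (node names, edge lengths, elevations) is invented, and Jacobi relaxation stands in for a direct solver; the actual service builds the graph with NetworkX and solves with NumPy. Minimizing the squared elevation gradient along edges makes every unknown node the inverse-length-weighted average of its neighbours:

```python
# Toy bridge/tunnel network: A and E are access points with known elevation.
edges = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "D"): 1.0,
         ("D", "E"): 1.0, ("B", "D"): 2.0}   # edge -> length
fixed = {"A": 100.0, "E": 120.0}             # known elevations at access points

# Build a weighted adjacency list; shorter edges couple nodes more strongly.
adj = {}
for (u, v), length in edges.items():
    w = 1.0 / length
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

# Jacobi relaxation: each unknown node becomes the weighted mean of its neighbours.
h = {n: fixed.get(n, 0.0) for n in adj}
for _ in range(500):
    h = {n: fixed[n] if n in fixed
         else sum(w * h[m] for m, w in adj[n]) / sum(w for _, w in adj[n])
         for n in adj}
```

For this network, C lands exactly midway at 110 m, while B and D are pulled toward their nearer access points.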
Alexander Held
2021-04-22T10:15:00+02:00
10:15
00:30
Track 1
gp2021-217-how-to-use-spatial-data-to-identify-cpg-demand-hotspots
https://submit.geopython.net/gp2021/talk/SSWEKE/
false
How to Use Spatial Data to Identify CPG Demand Hotspots
Talk (25min + 5min Q&A)
en
Spatial models can provide a rich set of tools to analyze multivariate geolocated data, enabling data-driven decisions to understand consumer behavior in the CPG industry.
Spatial data from a variety of sources are increasingly used to target marketing campaigns and prioritize rollout to an optimal audience. In this talk, we will demonstrate how different data sources (e.g. geosocial segments, internet searches, credit card data, demographics and point of interest data) can be blended and how spatial models can be used to identify “demand hotspots” for Consumer Packaged Goods (CPG). First, I will walk through a methodology for selecting target audiences in New York and Philadelphia for organic/natural products, based on spatial analysis of factors from the different datasets. I will then show a statistical analysis of how feature elimination is performed and how a classifier is built to examine the impact of each factor on the selection of the “demand hotspots”. I will also present how the conclusions can be extrapolated to further locations, using a similarity score index based on probabilistic principal components analysis.
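As a simplified stand-in for the similarity-score idea, a plain cosine similarity between per-area feature vectors can rank candidate locations by how closely they resemble a known hotspot. The talk's actual index is based on probabilistic PCA, and the feature vectors below are invented:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical normalized features: (geosocial score, search interest, spend index)
known_hotspot = [0.8, 0.6, 0.9]
candidate_area = [0.7, 0.5, 0.8]
```

Areas scoring close to 1.0 against a validated hotspot would be the first candidates for extrapolating the campaign.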
Argyrios Kyrgiazos
2021-04-22T10:45:00+02:00
10:45
00:30
Track 1
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-22T11:15:00+02:00
11:15
00:30
Track 1
gp2021-218-mapquadlib-a-python-library-that-supports-multi-level-tiled-representations-of-the-map-of-the-earth-
https://submit.geopython.net/gp2021/talk/SKQ9NE/
false
Mapquadlib - A Python library that supports multi-level tiled representations of the map of the earth.
Talk (25min + 5min Q&A)
en
Mapquadlib is a zero-dependency OSS Python library that contains implementations for various tile schemas used at HERE.
A quadtree is a tree data structure in which each internal node has exactly four children. Quadtrees are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. The regions may be square or rectangular, or may have arbitrary shapes.
This data structure provides a very convenient way to represent an area on the map. A map is divided into a set of tiles. Each of these tiles can be further divided into four smaller tiles, and this process can continue recursively until the required precision is reached. A tile's zoom level is the length of the path from the tile to the root of the quadtree. Each tile can be addressed by its unique key.
Mapquadlib supports different tiling schemas and provides convenient methods to work with them:
- Earth Core (Native) Tiling schema
- Mercator (Bing) Tiling schema
- MOS Quad Block Tiling schema
- HERE Tiling schema
- NDS Quads
We will show some technicalities of different Quadtree schemas and how to work with them using Mapquadlib.
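As an illustration of how a quadtree tile key works, here is a sketch following the well-documented Bing/Mercator quadkey convention; it shows the schema, not Mapquadlib's actual API:

```python
import math

def latlon_to_tile(lat, lon, zoom):
    """WGS84 coordinate -> (x, y) tile address in Web Mercator at a zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def tile_to_quadkey(x, y, zoom):
    """Interleave the bits of x and y, most significant first, into a quadkey."""
    digits = []
    for i in range(zoom, 0, -1):
        mask = 1 << (i - 1)
        digits.append(str((1 if x & mask else 0) + (2 if y & mask else 0)))
    return "".join(digits)
```

For example, tile (3, 5) at zoom 3 has quadkey "213", and every tile's key is prefixed by its parent's key, which is the property that makes quadkeys such convenient addresses.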
Christian Stade-Schuldt
2021-04-22T11:45:00+02:00
11:45
00:30
Track 1
gp2021-225-30-maps-in-30-days-with-python
https://submit.geopython.net/gp2021/talk/W7ATST/
false
30 Maps in 30 days with Python
Talk (25min + 5min Q&A)
en
The *#30DayMapChallenge* is an increasingly popular phenomenon, started on Twitter just two years ago by Topi Tjukanov, who encourages fellow geo folks to make a map for a different theme each day during the month of November. In this talk we introduce the MapChallenge and describe how to solve all the challenges within the available Python/PyViz geospatial library ecosystem, including packages, themes, challenges, gotchas and revelations from making 30 maps in 30 days.
In 2019, Topi Tjukanov [@tjukanov](https://twitter.com/tjukanov) started the __#30DayMapChallenge__ Twitter phenomenon. It is a simple and fun challenge - for each day during the month of November everyone is invited to make a map with any tool they like on the topic of the day and post it on Twitter under the hashtag #30DayMapChallenge. Last year it became even more popular and widespread, and during November Twitter was flooded with maps from all over the world. I aimed to solve all the challenges with my favourite toolkit, the Python programming language and its geospatial libraries. It quickly became obvious that, besides crafting a map, coming up with a nice idea and acquiring the data take a lot of time. Consequently, being effective with the available toolkits becomes very important. From dealing with cartographic projections and various geodata formats with GDAL, GeoPandas and Rasterio to the peculiarities of styling and image composition in Matplotlib, GeoPlot and Datashader, the talk will present the overall map challenge concept and lots of maps, and then dive into gotchas and revelations of making 30 maps in 30 days using only Python.
Alexander Kmoch, Topi Tjukanov
2021-04-22T13:30:00+02:00
13:30
00:30
Track 1
gp2021-205-python-in-qgis
https://submit.geopython.net/gp2021/talk/M87FFJ/
false
Python in QGIS
Talk (25min + 5min Q&A)
en
Python can be used in many ways in QGIS. The presentation briefly introduces how and where Python can be used, from the QGIS Python console to standalone QGIS Python applications.
QGIS gives users and developers several opportunities to extend its capabilities using Python. During the presentation we walk through most of them. Besides pure Python, a basic knowledge of the QGIS Python API should be picked up. Fortunately there are many free tutorials and handbooks to help with the first steps. Tips on where to start are presented.
We start our journey with the QGIS Python console and script editor. It is an easy-to-use tool to execute direct Python commands or to write short scripts for yourself and your colleagues. The ScriptRunner plugin gives a more comfortable environment for your simple scripts. A GUI helps you to organize scripts and metadata, too.
You can write your own functions for the Field Calculator. It is the next level of Python intrusion, making the Field Calculator more powerful for solving your specific tasks.
Python actions are the first in our series which extend the user interface for non-programmers, too. There are different action types, but Python actions are the most environment-independent ones. They will work on all supported platforms.
While the previous Python intrusions are useful for creative users with some programming background, with the next three you can create easy-to-use modules and programs for any QGIS user. The easiest from the point of view of the programmer is the so-called Processing script, which can be used from the Processing framework. You do not need to deal with the GUI a lot; there is a simple environment to define GUI elements for input parameters of the script, which are managed by the Processing framework. If you would like to create an interactive tool in QGIS with its own GUI elements, you should create a plugin or a standalone application. For this you need some Qt practice, too.
Short, simple examples are presented for each of the above-mentioned opportunities.
Zoltan Siki
2021-04-22T14:00:00+02:00
14:00
00:30
Track 1
gp2021-227-predicting-traffic-accident-hotspots-with-spatial-data-science
https://submit.geopython.net/gp2021/talk/77DJHV/
false
Predicting Traffic Accident Hotspots with Spatial Data Science
Talk (25min + 5min Q&A)
en
Road traffic accidents are a major health and economic problem worldwide. Spatial Data combined with Data Science tools and models can help anticipate high-risk locations dynamically based on factors such as traffic, weather, and road signaling.
Road traffic injuries are among the ten leading causes of death worldwide and they have a significant effect on the world’s economy. Governments and the private sector are making big efforts to reduce these numbers and, as a result, today we can have real-time information on traffic and weather conditions, in addition to traffic statistics. However, this information is available either post-accident or it is static. Knowing where accidents happen and the conditions under which they happen is very powerful information that can be leveraged to identify hotspots dynamically and take action to anticipate accidents (e.g., city administrations can share this information with their citizens and organize their traffic police accordingly, and logistics companies can use this information to avoid specific routes).
In this talk, we will show how different spatial data sources (road traffic, weather, road signaling, human mobility, points of interest, and working population) affect traffic accidents and how they can be used to identify hotspots dynamically. First, I will walk you through the spatial support selection phase and the process of bringing all data sources to the same support. Once the data is ready to be consumed, I will walk you through a spatial and temporal analysis of accident data using different tools and techniques. This first analysis will already give us some hints on typical characteristics of traffic accident hotspots. I will then present a predictive model using Regression Kriging with Random Forest as regressor that will allow us to predict annual accidents. This predictive model will help us validate our hypothesis of changing conditions affecting traffic accidents and the potential of defining dynamic hotspots. The analysis focuses on the city of Barcelona (Spain), which has a rich Open Data catalog available.
Miguel Alvarez
2021-04-22T14:30:00+02:00
14:30
00:30
Track 1
gp2021-214-building-custom-web-administrators-for-geographic-data-driven-websites-with-django
https://submit.geopython.net/gp2021/talk/VU9D9F/
false
Building custom web administrators for geographic data driven websites with Django
Talk (25min + 5min Q&A)
en
In this talk I will show some of the Django admin core functionalities (routing, ORM, templating, i18n & l10n,...) that will allow us to set up a backend for our web map in just a few steps. I’ll show some of the customizations that we can do out-of-the-box, as well as some of the third-party modules that we can use to include additional functionalities to our backend, such as tabbed forms, REST API, menus, dashboards, adding field types and widgets (geom, rich text editor, color field, …).
In our line of work we are often asked to develop a simple website with a map and a backend administrator to maintain the map data. It often happens that the backend requires more functionality than the frontend, transforming what was meant to be a “simple map” into a rather bulky budget.
Django is a general purpose web framework for Python that comes with a powerful backend administrator ready to use. It can be easily adapted to fit about every need we may encounter and can be used against legacy databases.
We, at the GIS service of the University of Girona (SIGTE-UdG), often use this approach to build simple admin backends in just a few days, making our custom "simple map" projects really simple and much more affordable while keeping a fully functional backend.
In the talk I plan on sharing our experience with this framework, showing some of the most interesting functionalities and how to make the Django admin geographically aware to fit our purposes.
Marc Compte
2021-04-22T15:30:00+02:00
15:30
00:30
Track 1
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-22T16:00:00+02:00
16:00
01:30
Track 1
gp2021-199-interactive-mapping-and-analysis-of-geospatial-big-data-using-geemap-and-google-earth-engine
https://submit.geopython.net/gp2021/talk/ACNAEX/
false
Interactive mapping and analysis of geospatial big data using geemap and Google Earth Engine
Workshop/Tutorial (90 min)
en
This workshop introduces the [geemap](https://geemap.org) Python package and how it can be used for interactive mapping and analysis of large-scale geospatial datasets with Earth Engine in a Jupyter-based environment. We will also demonstrate how to produce publication-quality maps and build interactive web apps.
Google Earth Engine (GEE) is a cloud computing platform with a multi-petabyte catalog of satellite imagery and geospatial datasets. It enables scientists, researchers, and developers to analyze and visualize changes on the Earth’s surface. The geemap Python package provides GEE users with an intuitive interface to manipulate, analyze, and visualize geospatial big data interactively in a Jupyter-based environment. The topics covered in this workshop include: (1) introducing geemap and the Earth Engine Python API; (2) creating interactive maps; (3) searching the GEE data catalog; (4) displaying GEE datasets; (5) classifying images using machine learning algorithms; (6) computing statistics and exporting results; (7) producing publication-quality maps; (8) building and deploying interactive web apps, among others. This workshop is intended for scientific programmers, data scientists, geospatial analysts, and concerned citizens of Earth. The attendees are expected to have a basic understanding of Python and the Jupyter ecosystem. Familiarity with Earth science and geospatial datasets is useful but not required.
Qiusheng Wu, Kel Markert
2021-04-22T17:30:00+02:00
17:30
00:30
Track 1
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-22T18:00:00+02:00
18:00
00:30
Track 1
gp2021-220-how-i-used-python-and-big-data-to-measure-seismic-silences-during-the-covid19-lockdown-
https://submit.geopython.net/gp2021/talk/GXLFTM/
false
How I Used Python and Big Data to Measure Seismic Silences during the COVID19 Lockdown?
Talk (25min + 5min Q&A)
en
"Lockdown" was a key tool used by governments around the world to restrict the movement of people and check the spread of COVID19. I analyzed the impact of lockdown on human movements by writing Python algorithms to measure the reduction in seismic vibrations using data from seismic stations across Canada.
On 11 March 2020, the World Health Organization declared COVID19 a pandemic. Countries around the world rushed to declare various states of emergency and lockdowns. Canada also implemented emergency measures to restrict the movements of people, including the closure of borders, non-essential services, and schools and offices, to slow the spread of COVID19. I used this opportunity to measure changes in seismic vibrations registered in Canada before, during, and after the lockdown due to the slowdown in transportation, economic, and construction activities. I analyzed continuous seismic data for 6 Canadian cities: Calgary and Edmonton (Alberta), Montreal (Quebec), Ottawa and Toronto (Ontario), and Yellowknife (Northwest Territories). These cities represent the wide geographical spread of Canada. The source of data was seismic stations run by the Canadian National Seismograph Network (CNSN). Python and ObsPy libraries were used to convert raw data into probabilistic power spectral densities (PPSDs). The seismic vibrations in the PPSDs that fell between 4 Hz and 20 Hz were extracted and averaged for every two-week period to determine the trend of seismic vibrations. The lockdown had an impact on seismic vibrations in almost all the cities I analyzed. The seismic vibrations decreased by between 14% and 44%, with the biggest decrease in Yellowknife in the Northwest Territories. In the 3 densely populated cities with a population of over 1 million - Toronto, Montreal, and Calgary - the vibrations dropped by over 30%.
To enable other students to undertake similar projects for their cities, I created a comprehensive online training module using Jupyter notebooks, available on GitHub. Students can learn about seismic vibrations, how to obtain datasets, and how to analyze and interpret them using Python. They can share their findings with local policymakers so that they become aware of the effectiveness of the lockdown imposed and are better prepared for lockdowns in the future. When we make data and technology accessible, lockdowns because of pandemics can become an opportunity for students to take up practical geoscience projects from home or virtual classrooms.
The outputs of my research on COVID19 and Seismic Vibrations are accessible at www.MonitorMyLockdown.com
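The two-week averaging step from the methodology above can be sketched with synthetic numbers. The dB values below are invented; the spectral estimation itself is what ObsPy's PPSD machinery provides:

```python
from datetime import date, timedelta
from statistics import mean

# Synthetic daily band power (mean PSD in the 4-20 Hz band, in dB):
# steady before a lockdown date, then a drop afterwards.
readings = [(date(2020, 3, 1) + timedelta(days=i),
             -129.5 if i < 14 else -134.0)
            for i in range(28)]

def biweekly_averages(readings):
    """Average the daily band-power values over consecutive two-week bins."""
    start = min(day for day, _ in readings)
    bins = {}
    for day, value in readings:
        bins.setdefault((day - start).days // 14, []).append(value)
    return [mean(values) for _, values in sorted(bins.items())]
```

Here the second bin averages lower than the first - the kind of lockdown signature the talk quantifies per city.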
Artash Nath
2021-04-22T18:30:00+02:00
18:30
00:30
Track 1
gp2021-231-saferplaces-platform-a-geopython-based-climate-service-addressing-urban-flooding-hazard-and-risk-
https://submit.geopython.net/gp2021/talk/JYEKBQ/
false
SaferPLACES platform: a GeoPython-based climate service addressing urban flooding hazard and risk.
Talk (25min + 5min Q&A)
en
GeoPython libraries for mapping flood hazard and risk in urban areas
Floods are a global hazard that may have adverse impacts on a wide range of social, economic, and environmental processes. Nowadays our cities are flooding more frequently due to more severe weather events, but also due to anthropogenic pressures like soil sealing, urban growth and, in some areas, land subsidence. The frequency and intensity of extreme floods are expected to increase further in many places due to climate change.
The characterisation of flood events and of their multi-hazard nature is a fundamental step in order to maximise the resilience of cities to potential flood losses and damages.
SaferPLACES employs innovative climate, hydrological and raster-based flood hazard and economic modelling techniques to assess pluvial, fluvial and coastal flood hazards and risks in urban environments under current and future climate scenarios.
The SaferPLACES platform provides a cost-effective and user-friendly cloud-based solution for flood hazard and risk mapping. Moreover, SaferPLACES supports multiple stakeholders in designing and assessing mitigation measures such as flood barriers, water tanks, green-blue solutions and building-specific damage mitigation actions.
The intelligence behind the SaferPLACES platform integrates innovative fast DEM-based flood hazard assessment methods and Bayesian damage models, which are able to provide results in short computation times by exploiting the power of cloud computing.
A beta version of the platform is available at platform.saferplaces.co and active for four pilot cities: Rimini and Milan in Italy, Pamplona in Spain and Cologne in Germany.
SaferPLACES (saferplaces.co) is a research project funded by EIT Climate-KIC (www.climate-kic.org).
Stefano Bagli
2021-04-22T20:00:00+02:00
20:00
00:30
Track 1
gp2021-241-ml-enabler-enabling-rapid-machine-learning-inference-of-school-mapping-in-asia-africa-and-south-america
https://submit.geopython.net/gp2021/talk/CEGLX7/
false
ML-Enabler: Enabling Rapid Machine Learning Inference of School Mapping in Asia, Africa and South America
Talk (25min + 5min Q&A)
en
ML-Enabler is an open source model inferencing tool with a UI that acts as a GitHub for models, allowing users to run inference at scale, validate model predictions, and integrate with common OSM mapping tools like MapRoulette. We will discuss how Development Seed used ML-Enabler to facilitate model inference to detect previously unmapped schools across 71 million zoom-18 tiles over multiple countries in Africa, Asia, and South America as part of UNICEF’s Project Connect initiative.
UNICEF and Development Seed are working to leverage machine learning, high-resolution imagery, and inexpensive cloud computing to create a comprehensive map of schools at the global scale. Accurate data about school locations is critical to provide quality education and promote lifelong learning, UN Sustainable Development Goal 4 (SDG4), to ensure equal access to opportunity (SDG10) and, eventually, to reduce poverty (SDG1). However, in many countries educational facilities’ records are often inaccurate or incomplete. Understanding the location of schools can help governments and international organizations gain critical insights into the needs of vulnerable populations, and better prepare and respond to exogenous shocks such as disease outbreaks or natural disasters. Unfortunately, some national governments still don’t know where all the schools in their country are, or have out-of-date school maps.
Despite their varied structure, many schools have identifiable overhead signatures that make them possible to detect in high-resolution imagery with deep learning techniques. Approximately 18,000 previously unmapped schools across 5 African countries - Kenya, Rwanda, Sierra Leone, Ghana, and Niger - were found in satellite imagery with a deep learning classification model. These 18,000 schools were validated by expert human mapping analysts. In addition to finding previously unmapped schools, the models were able to identify already mapped schools with accuracy between 77% and 95% depending on the country. To facilitate running model inference across over 71 million zoom-18 tiles of imagery, Development Seed relied on our open source tool ML-Enabler.
ML-Enabler generates and visualizes predictions from models that are compatible with TensorFlow’s TF Serving. ML-Enabler makes managing the infrastructure for running inference at scale and visualizing predictions straightforward from a UI. ML-Enabler will spin up the required AWS resources and run inference to generate predictions. ML-Enabler helps harness the power of expert human mappers because model predictions can be validated within the UI, and validated predictions can be used to generate new training data and re-train the initial model.
Martha Morrissey
2021-04-22T20:30:00+02:00
20:30
00:30
Track 1
gp2021-232--talk-cancelled-estimating-the-economic-impact-of-covid-19-using-real-time-images-from-space
https://submit.geopython.net/gp2021/talk/7CG8GP/
false
[TALK CANCELLED] Estimating the economic impact of COVID-19 using real-time images from space
Talk (25min + 5min Q&A)
en
In this talk, we will first discuss the process of analysing GeoTIFF images of surface lights on Earth from space using multicore processing tools on AWS. Second, we will discuss how the data can then be used to predict GDP and other economic metrics, especially during supply-demand shocks like COVID-19.
How can we tell how COVID-19 has affected our economies? Can we rely on official estimates? Can we estimate the impact ourselves - with data from space - using satellite images available to anyone in real time, with just a computer and an internet connection?
With offices closed, data scarce and the reliability of published results in question, the usual tools for obtaining such information are limited. It is against this backdrop that we explore using images of Earth taken at night from space. These images are available as GeoTIFF files with precise lon/lat coordinates at every pixel. Each pixel in turn contains radiance values that can reflect the economic activity on the surface. Such datasets have been shown to have correlations as high as 99% with actual GDP. Combined with data from power grids, the predictive power can be unparalleled. In this talk, I'll walk through examples of how such GeoTIFF files can be processed at scale using standard tools like GNU Parallel and Python multiprocessing, and how the resulting data can then be analysed by associating radiance values with state/national boundaries and creating models that track changes over time.
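The aggregation step - assigning pixel radiances to areas and summing - can be sketched in plain Python. The bounding boxes and pixel values below are invented; in practice the boundaries are polygons, the rasters are read from GeoTIFF files, and the per-file work is farmed out via multiprocessing or GNU Parallel:

```python
# (min_lon, min_lat, max_lon, max_lat) bounding boxes for two hypothetical regions
regions = {
    "region_a": (0.0, 0.0, 10.0, 10.0),
    "region_b": (10.0, 0.0, 20.0, 10.0),
}

# (lon, lat, radiance) samples, as decoded from a night-lights GeoTIFF
pixels = [(2.5, 3.0, 40.0), (7.1, 8.2, 55.0), (12.0, 4.4, 30.0)]

def radiance_by_region(pixels, regions):
    """Sum pixel radiance per region - a proxy for economic activity per area."""
    totals = {name: 0.0 for name in regions}
    for lon, lat, radiance in pixels:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= lon < x1 and y0 <= lat < y1:
                totals[name] += radiance
                break
    return totals
```

Tracking these per-region totals over time is what lets the radiance series be regressed against GDP and other economic metrics.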
Nataraj Dasgupta
2021-04-22T10:45:00+02:00
10:45
00:30
Track 2
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-22T13:30:00+02:00
13:30
00:30
Track 2
gp2021-213-crop-yield-prognosis-using-ml-and-eo-data
https://submit.geopython.net/gp2021/talk/XDDLKG/
false
Crop yield prognosis using ML and EO data
Talk (25min + 5min Q&A)
en
SEGES, a Danish agricultural knowledge and innovation centre, developed and productionized a crop yield prognosis model in 2020. We present the utilized ML methods, EO data, de-facto Python GIS packages, experiment results, and DevOps solutions.
We present the within-field crop yield prognosis model developed by SEGES using machine learning (ML) methods and earth observation (EO) data. Our model and its prognoses were released to the Danish farmers in 2020 via our WebUI solution called [CropManager](https://cropmanager.dk/#/?currentLanguage=%22en%22), where the prognosis maps of a 10x10 meter resolution are visualized. This presentation will drill into the specifics of the utilized ML algorithm, ground truth yield data, and the other EO data sources used for creating the prognosis model. Additionally, we also present our use of DevOps solutions, like TeamCity, Octopus, Azure Machine Learning, and Kubernetes, to productionize the ML model.
SEGES is the leading agricultural knowledge and innovation centre in Denmark. We offer sustainable products and services for the agriculture and food sector, by collaborating with international customers, clients and farmers, to build a bridge between research and practical farming.
https://en.seges.dk/.
Peter Fogh
2021-04-22T14:00:00+02:00
14:00
00:30
Track 2
gp2021-236-mapping-monitoring-and-forecasting-groundwater-floods-in-ireland
https://submit.geopython.net/gp2021/talk/VSY3Q3/
false
Mapping, Monitoring and Forecasting Groundwater Floods in Ireland
Talk (25min + 5min Q&A)
en
An automated approach for characterizing groundwater floods in Ireland based on remote sensing data, GIS information and hydrological models to improve the reliability of adaptation planning and predictions in the groundwater sector.
In recent years Ireland has experienced significant and unprecedented flooding events, such as groundwater floods, that extended up to hundreds of hectares during the winter flood season, lasting for weeks to months, and affecting many rural communities in Ireland. This issue was highlighted following widespread and record-breaking flooding in Winter 2015/2016 when little or no hydrometric data of groundwater floods was recorded. Further disruptive groundwater floods in 2018, 2020 and 2021 outlined the need for a systematic and large-scale mapping technique.
In response to these flooding events, Geological Survey Ireland started the GWFlood project (2016 - 2019) and the GWClimate project (2020 – 2022) with the aim of establishing an automated approach for mapping, monitoring and forecasting groundwater floods at a national scale, and of quantifying the impact that climate change may have on groundwater systems. The use of remote sensing data, Sentinel-1 satellite imagery from the European Space Agency Copernicus program, was key to overcoming the practical limitations of establishing and maintaining a national field-based monitoring network. Remote sensing data was complemented with Geographic Information System (GIS) datasets to improve reliability in the final products, and with hydrological models to generate historical groundwater flood records based on meteorological data from Met Eireann and to provide forecasts for groundwater floods.
Key deliverables of the GWFlood project included: 1) a national historic groundwater flood map, 2) a methodology for hydrograph generation using satellite images, 3) predictive groundwater flood maps, and 4) a groundwater monitoring network to provide baseline data. The GWClimate project is enhancing the tools developed by GWFlood in order to deliver: 1) seasonal peak flood maps, 2) near-real-time satellite-based hydrographs, 3) groundwater flood forecasting tools, and 4) maps evaluating the impact of climate change on groundwater systems in Ireland.
Data and maps from GWClimate and GWFlood projects are available at:
1) https://gwlevel.ie, and
2) https://www.gsi.ie/en-ie/programmes-and-projects/groundwater/activities/groundwater-flooding/gwflood-project-2016-2019/Pages/default.aspx
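As an illustration of the satellite-based mapping idea, flooded areas are commonly delineated in Sentinel-1 SAR imagery by thresholding low-backscatter pixels, since open water reflects little radar energy back to the sensor. The following is a minimal numpy sketch with made-up backscatter values and a hypothetical threshold, not the project's actual processing chain:

```python
import numpy as np

# Hypothetical Sentinel-1 VV backscatter tile in dB (open water scatters
# little energy back, so flooded pixels show low backscatter values).
backscatter_db = np.array([
    [-8.0, -9.5, -19.2],
    [-7.1, -18.9, -20.4],
    [-6.5, -8.8, -17.7],
])

THRESHOLD_DB = -15.0  # hypothetical value; in practice tuned per scene

flood_mask = backscatter_db < THRESHOLD_DB
flooded_fraction = flood_mask.mean()
print(flood_mask)
print(f"flooded fraction: {flooded_fraction:.2f}")
```

In practice the threshold is tuned per scene and the resulting mask is cleaned using ancillary GIS datasets, as the project description above notes.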
Joan Campanyà i Llovet
2021-04-22T14:30:00+02:00
14:30
00:30
Track 2
gp2021-223-the-power-of-where-location-data-in-moovit
https://submit.geopython.net/gp2021/talk/ZB7FGR/
false
The power of "Where" - Location data in Moovit
Talk (25min + 5min Q&A)
en
Behind the scenes of geo-data challenges at Moovit.
Location data is everywhere, in multiple formats and different environments.
We use Python as a cross-platform programming language to work with our location data in many ways.
Python allows us, on the one hand, to read, edit, and analyze location data, and on the other hand to visualize it. Both the data processing and the visualization can take place in GIS software, in Jupyter notebooks, or in a standalone Python script.
In this talk, I will discuss some geodata challenges on the operations side of Moovit.
Moovit, an Intel company, is helping to create cleaner, safer cities by guiding people in getting around town using any mode of transport. Today, Moovit serves over 950 million users in 3,400 cities across 112 countries and is the creator of the #1 urban mobility app.
Our Transit Data Repository contains millions of data points, including Real-Time arrivals and Service Alerts from more than 7500 transit operators around the world.
The data comes from multiple sources and is stored in multiple environments and several formats. Python helps us create cross-platform processes to analyze and visualize the data. For geodata in particular, we use Python to integrate data into QGIS for GIS experts and to create flexible reports with Jupyter notebooks for non-GIS experts.
Yehuda Horn
2021-04-22T15:00:00+02:00
15:00
00:30
Track 2
gp2021-240-universal-geospatial-data-storage-with-tiledb-no-more-file-formats
https://submit.geopython.net/gp2021/talk/WDRPAR/
false
Universal geospatial data storage with TileDB: No more file formats
Talk (25min + 5min Q&A)
en
This talk will describe the open-source TileDB Embedded library and its integrations in the geospatial domain. We will give examples of its use for point clouds, SAR and weather with partners such as Capella Space and exactEarth, and emphasize the need to depart from file formats and focus on universal, end-to-end solutions instead.
TileDB Embedded is an open-source, universal storage engine with integrations into many tools that already exist within Python such as Dask, xarray and Pandas, as well as geospatial-specific frameworks such as Rasterio, Python-PDAL, GeoPandas and our own open-source library for netCDF and HDF-like data, TileDB-CF-Py.
TileDB Embedded is ideal for geospatial data as it is based on sparse and dense multi-dimensional arrays, implementing indexes such as R-trees and Hilbert curve orderings. It is cloud-native and can encompass multiple geospatial domains. I’ll make the case for universal geospatial data storage in the following parts:
#### Analysis-ready geospatial data
No more files. A universal format can cover all geospatial data types as sparse or dense arrays, allowing rapid slicing and arbitrary metadata, with versioning and time travel built in. Here, I'll examine the shared structure of geospatial data that makes it best suited for array-based storage.
#### Superior interoperability
The tools don't change. I'll look at how TileDB arrays work within the Python ecosystem, leveraging PDAL, GDAL and existing tools such as Dask, xarray and Pandas to perform geospatial analysis.
#### Solution focused
We focus on end-to-end solutions, not format standards. Despite defining a powerful open-spec data format for all geospatial data, our goal is to deliver unprecedented speed for analytics queries and integration with numerous computational tools, via the well-defined APIs of our TileDB Embedded storage engine.
#### Proven
TileDB Embedded is successfully used by high-profile users and customers to store SAR, hyperspectral, weather, seismic data as well as point cloud data from SONAR and LiDAR sensors, all within a universal data engine that can be used seamlessly from Python. The talk concludes with a co-presented example from TileDB user Capella Space.
Norman Barker
2021-04-22T15:30:00+02:00
15:30
00:30
Track 2
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-22T17:30:00+02:00
17:30
00:30
Track 2
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-22T18:00:00+02:00
18:00
00:30
Track 2
gp2021-242-3d-geological-modelling-using-gempy
https://submit.geopython.net/gp2021/talk/CFAPHQ/
false
3D Geological Modelling using GemPy
Talk (25min + 5min Q&A)
en
Several 3D modelling and visualization Python scripts have been combined into a seamless workflow and subsequently applied to a sparse dataset in a geologically complex area. Preliminary 3D-model results are encouraging and align with known and inferred regional geology.
The geological field experience is traditionally focused on raw data collection, with orientation measurements, observations and rock sample descriptions being classic examples. Creating geologic and contour maps by hand is also a prominent activity within the limited timeframe.
We aim to improve this strategy by introducing a seamless and iterative 3D-modelling workflow, in pursuit of shifting the focus back to the integration of geological ideas and concepts rather than data alone. Our proposed workflow is intended to work in parallel with field work, thereby bolstering the efficiency of allotted field time.
The workflow for 3D-modelling and visualization combines new and existing Python scripts with open-source tools to furnish users with a coherent approach for producing both maps and models. Our approach utilizes GemPy, a 3D geological structural modelling tool based on the Potential Field (PF) method. Using a sparse dataset, a regional 3D-model was generated and then easily re-generated upon the introduction of new data. Cross-section views of the 3D-model can also be obtained and 2D geological maps may be extracted.
With respect to data management, well-known tools such as rasterio, geopandas and numpy were used for data imports and processing. The digital elevation model (DEM), field data stored as shapefiles and other required data were organized into a single GeoPackage, which can be shared and updated as needed. Much effort has been placed on ease of use for data organization.
Our proposed approach may have strong impacts on field data collection and decisions, especially in regions with sparse geological knowledge. This notion is supported by promising initial results, well-aligned with inferred regional geology.
Kristiaan Joseph
2021-04-22T18:30:00+02:00
18:30
00:30
Track 2
gp2021-222-the-open-data-cube-odc-a-very-intuitive-tool-to-store-manage-and-analyse-satellite-images-data
https://submit.geopython.net/gp2021/talk/XWAW73/
false
The Open Data Cube (ODC): a very intuitive tool to store, manage and analyse satellite images data
Talk (25min + 5min Q&A)
en
In the era of Big Data, mechanisms to easily store, retrieve and analyze large amounts of earth observation data are needed. The Open Data Cube (ODC) proposes to minimize these complexities, with the use of open source tools (xarray, gdal, rasterio, dask, netcdf, geotiff, postgresql) composed in a single Python interface.
In this talk the attendee will be able to understand **what is the Open Data Cube (ODC)** , **why it is important** and some **use cases**. In addition we will demonstrate a typical satellite image processing workflow in order to show the benefits offered by the tool. In the demo:
1. We will deploy an Open Data Cube environment in Docker containers
2. Then, we will add a satellite image to the data cube index.
3. Finally, we will query the index and retrieve satellite image data to perform a basic NDVI analysis.
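The NDVI computed in step 3 is the normalized difference of the near-infrared and red bands. A minimal numpy sketch with hypothetical reflectance values, independent of the ODC itself (which serves the bands through xarray):

```python
import numpy as np

# Hypothetical red and near-infrared reflectance bands, e.g. as returned
# by the data cube query in step 3; shapes must match pixel-for-pixel.
red = np.array([[0.10, 0.20], [0.30, 0.05]])
nir = np.array([[0.50, 0.40], [0.30, 0.45]])

# NDVI = (NIR - Red) / (NIR + Red), ranging over [-1, 1];
# dense vegetation yields high values, bare soil values near zero.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```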
Aurelio Vivas
2021-04-22T20:00:00+02:00
20:00
00:30
Track 2
gp2021-228-eemont-a-python-package-that-extends-google-earth-engine
https://submit.geopython.net/gp2021/talk/3XU3BZ/
false
eemont: A Python Package that extends Google Earth Engine
Talk (25min + 5min Q&A)
en
eemont is a new Python package that extends Earth Engine classes with methods to pre-process (and process) the most widely used satellite imagery.
eemont was created to speed up the writing of Google Earth Engine Python scripts; it extends EE classes with new methods such as cloud/shadow masking, image scaling and spectral index computation.
Let's take a look at the simple usage of eemont:
```python
import ee, eemont
ee.Authenticate()
ee.Initialize()
point = ee.Geometry.Point([-76.21, 3.45])
S2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(point)
      .closest('2020-10-15')              # Extended (pre-processing)
      .maskClouds(prob = 70)              # Extended (pre-processing)
      .scale()                            # Extended (pre-processing)
      .index(['NDVI','NDWI','BAIS2']))    # Extended (processing)
```
And most of these methods are available for a bunch of platforms such as Sentinel-2 and -3, the Landsat series and MODIS products!
David Montero Loaiza
2021-04-22T20:30:00+02:00
20:30
00:30
Track 2
gp2021-252-deep-learning-based-remote-sensing-for-disaster-relief-with-python
https://submit.geopython.net/gp2021/talk/9JEQMC/
false
Deep learning-based remote sensing for disaster relief with Python
Talk (25min + 5min Q&A)
en
Attend this talk to learn about ongoing and future work using deep learning techniques to remotely sense and assess building damage post-natural disaster, using Python.
Artificial intelligence, including machine learning and deep learning, has been increasingly utilized for humanitarian applications, from combating climate change to assessing car accidents. Specifically in the domain of geoscientific analysis, deep learning-based remote sensing has yielded many promising humanitarian applications and results. Natural disasters are increasing in frequency and intensity due to climate change, and efficient and accurate computational methods of assessing the building damage they cause must be in place. This assessment aids in the allocation of resources and personnel. Using Python, we can develop convolutional neural networks and other deep learning architectures to detect and classify levels of infrastructure damage to inform disaster relief and recovery programs. A popular data source for doing so is real-time satellite imagery, which is much more easily gathered than data collected on the ground. Other data sources include social media posts.
Thomas Chen
2021-04-23T09:00:00+02:00
09:00
00:15
Track 1
gp2021-260-opening-session-day-2
https://submit.geopython.net/gp2021/talk/PRQML8/
false
Opening Session Day 2
Special Session
en
2021-04-23T09:15:00+02:00
09:15
01:30
Track 1
gp2021-237-geospatial-analysis-using-python-101
https://submit.geopython.net/gp2021/talk/RHEB9Q/
false
Geospatial analysis using python 101
Workshop/Tutorial (90 min)
en
This workshop is ideal for someone who has recently started using Python and is exploring its possibilities in the GIS industry. It is the beginning of complex spatial scripting.
Since almost all industries are more or less connected to location and mapping, it is important to spread awareness and help developers understand different aspects of the GIS (Geographic Information System) industry.
The first part of this series focuses on different GIS data types and how to read them. This includes understanding different data formats such as **Shapefiles, GeoJSON, WKT, CSV, TIFF, GeoTIFF, etc.** Users will actually read such files on their computers and become familiar with them.
The second part of this series focuses on geospatial analysis with python. Users will first practice working with some core GIS functionalities using `GDAL` and `OGR` on the terminal (and later in python). After this, users will be familiarised with the most widely used geospatial python libraries such as `pandas, geopandas, fiona, shapely, matplotlib, PySAL, rasterio`.
The complete series is divided into the following sub-topics:
1. Introduction and Installation of all Geospatial libraries in computer and in python environment
2. Working with GDAL and OGR capabilities
3. Spatial Operations and Relationships
4. Vector data analysis and visualization
5. Raster Data analysis and visualization
6. Working with Interactive Map in a python notebook
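As a small taste of the data formats covered in the first part, a GeoJSON feature can already be inspected with nothing but the Python standard library; the feature below is made up for illustration, and dedicated libraries such as fiona and geopandas come later in the series:

```python
import json

# A hypothetical GeoJSON feature (a small polygon) as it might sit on disk
feature_json = '''
{
  "type": "Feature",
  "properties": {"name": "demo parcel"},
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[7.58, 47.55], [7.60, 47.55], [7.60, 47.57], [7.58, 47.55]]]
  }
}
'''

feature = json.loads(feature_json)
ring = feature["geometry"]["coordinates"][0]  # exterior ring of the polygon

# Bounding box: (min x, min y, max x, max y) over the exterior ring
xs = [x for x, y in ring]
ys = [y for x, y in ring]
bbox = (min(xs), min(ys), max(xs), max(ys))
print(feature["properties"]["name"], bbox)
```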
Pre-requisites for this workshop:
1. Basic knowledge of python
2. Basic knowledge of GIS and GIS Data formats
Access more detailed information at:
https://github.com/krishnaglodha/GeoPython_2021#workshop-submission--geospatial-analysis-using-python-101
krishna lodha
2021-04-23T10:45:00+02:00
10:45
00:30
Track 1
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-23T11:15:00+02:00
11:15
00:30
Track 1
gp2021-206-predicting-dissolved-oxygen-in-a-lagoon-using-interpretable-machine-learning
https://submit.geopython.net/gp2021/talk/YAQVD8/
false
Predicting dissolved oxygen in a lagoon using interpretable machine learning
Talk (25min + 5min Q&A)
en
The goal of the study is to predict dissolved oxygen concentration in a lagoon with the XGBoost algorithm, based on a series of explanatory variables (e.g., water temperature, pH value, oxidation-reduction potential, air temperature, salinity). Special focus is given to interpreting the outcomes using the SHapley Additive exPlanations (SHAP) methodology, aiming to elucidate the environmental windows that cause low levels of dissolved oxygen (anoxic conditions), which may have a severe impact on the survival rate of aquatic organisms.
Dissolved oxygen is a key indicator in aquaculture reflecting changes in water quality. In this study, we used the Extreme Gradient Boosting (XGBoost) machine learning algorithm to predict dissolved oxygen in a lagoon (western Greece), considering ten physicochemical and meteorological explanatory variables: water temperature, pH value, oxidation-reduction potential, air temperature, salinity, chlorophyll, relative humidity, atmospheric pressure, wind speed and wind direction. XGBoost was trained and then evaluated, attaining a root mean square error of 0.66 mg/L in predictions. Results showed that pH, salinity and air temperature were the key factors driving dissolved oxygen variability. Special focus was given to interpreting the outcomes using the SHapley Additive exPlanations (SHAP) methodology, aiming to elucidate the environmental windows that cause low levels of dissolved oxygen (anoxic conditions), which may have a severe impact on the survival rate of aquatic organisms.
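The 0.66 mg/L figure above is a root mean square error, i.e. the square root of the mean squared difference between predicted and observed concentrations. A minimal numpy sketch with hypothetical values, not the study's actual data:

```python
import numpy as np

# Hypothetical observed vs. predicted dissolved oxygen values (mg/L)
observed  = np.array([6.1, 5.4, 7.0, 4.2, 6.8])
predicted = np.array([5.9, 5.9, 6.5, 4.6, 6.6])

# RMSE = sqrt(mean((predicted - observed)^2)); same unit as the data (mg/L)
rmse = np.sqrt(np.mean((predicted - observed) ** 2))
print(f"RMSE: {rmse:.2f} mg/L")
```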
Dimitris Politikos
2021-04-23T12:15:00+02:00
12:15
00:30
Track 1
gp2021-198-curie-point-depth-mapping-using-pycurious-from-aeromagnetic-data
https://submit.geopython.net/gp2021/talk/GAPTPN/
false
Curie Point Depth Mapping using PyCurious from Aeromagnetic Data
Talk (25min + 5min Q&A)
en
The Curie Point Depth (CPD) is the depth at which the Earth's crust loses its magnetic properties because the temperature rises above about 580 °C. An anomalous CPD (shallower than usual, at approximately 15 km) can be initial evidence of the heat source of a geothermal reservoir. The CPD of an area can be estimated by processing a set of magnetic anomaly data.
PyCurious is a Python package that is able to calculate the Curie Point Depth (CPD) from magnetic data using the Tanaka and Bouligand methods. In this talk, I will share my experience of working with this package to create a CPD map of Botswana for my MSc research, using the Bouligand method on an aeromagnetic dataset.
izzul
2021-04-23T13:30:00+02:00
13:30
00:30
Track 1
gp2021-244-understanding-qiskit-quantum-by-quantum
https://submit.geopython.net/gp2021/talk/MACWGG/
false
Understanding Qiskit: Quantum by Quantum
Talk (25min + 5min Q&A)
en
Although there have been some significant advancements in software technologies, there are still many problems which classical computing cannot solve. Quantum Computing has the potential to solve such problems and provide high-performance computing capabilities. This talk focuses on introducing the basic concepts of <b>Quantum Computing using Qiskit</b>.
<b>High-Performance Computing using Quantum Computing</b> is the future, and the <b>GIS community</b> should start learning about its concepts and implementation techniques in order to leverage Quantum Computing for solving complex spatial problems. This talk focuses on introducing the basic concepts of <b>Quantum Computing using Qiskit</b>.
<b>Qiskit</b> is a popular open-source, feature-rich, and modular <b>Quantum Computing Python SDK</b> by <b>IBM</b>. It enables working with Quantum Computers at the level of circuits, algorithms, and application modules.
<b>The following topics will be covered during the talk:</b>
- Basics of <b>Quantum Computation</b>
- <b>Qubits</b> and the <b>Quantum States</b>
- Working with <b>Multiple Qubits</b>
- <b>Qiskit</b> and <b>Qubit Gates</b> - Hands-on mode
- Building small <b>Quantum Circuits</b>
- Where to go from here - a <b>roadmap</b> for working with <b>Quantum Algorithms</b>
- <b>Applications</b> for the <b>GIS community (Spatial Algorithms and more)</b>
<b>Pre-requisites:</b>
- Basics of Linear Algebra
- Working knowledge of Python
- Basics of Quantum Physics (good to have; the session will cover some basics)
<b>Talk Level:</b> Intermediate
Anmol Krishan Sachdeva
2021-04-23T14:00:00+02:00
14:00
00:30
Track 1
gp2021-208-improved-crop-yield-prediction-through-spatio-temporal-analysis-of-agricultural-data
https://submit.geopython.net/gp2021/talk/QFEYM3/
false
Improved Crop Yield Prediction through Spatio-Temporal Analysis of Agricultural Data
Talk (25min + 5min Q&A)
en
Precision agriculture has seen remarkable progress over the last decade or so, with its primary goal being accurate prediction of crop yield. This talk provides an overview of efforts in the SFI-funded project CONSUS, whereby agricultural data obtained from a commercial agronomy service company is used to derive significant insights for crop yield prediction.
Over the past decade or so, agricultural technology has seen remarkable progress, giving birth to precision agriculture, whereby huge quantities of useful data can be extracted from the Internet of Things, sensors, satellites, weather stations, robots, farm equipment, agricultural laboratories, farmers, government agencies etc. In this talk I will highlight various aspects of putting this data to practical use for enhanced decision-making in crop management practices, leading towards crop yield optimization while also helping to preserve the environment. This is done by means of modern machine learning methods that treat the data as part of a complex system of interconnected variables and conditions, with the final output enabling predictions of crop yield. Special emphasis is given to the spatio-temporal aspects of agricultural data and how they are incorporated into our crop yield prediction algorithms.
Some analysis of various agricultural fields and corresponding spatio-temporal variables, such as weather, soil properties and topographical factors, is also performed in order to motivate the underlying theory for our proposed methodology.
Arjumand Younus
2021-04-23T14:30:00+02:00
14:30
00:30
Track 1
gp2021-245-the-bavarian-open-data-cube
https://submit.geopython.net/gp2021/talk/AVCYJL/
false
The Bavarian Open Data Cube
Talk (25min + 5min Q&A)
en
Earth Observation Data are an important source of information for tackling Global Change. Datacubes in cloud environments can help to organize the growing amount of data and make it easier for many experts to get started.
We are facing unprecedented Global Change. We seem to have a not inconsiderable influence on this state, and we already know what we can do to mitigate the possible consequences. Yet we are just beginning to understand if, how and how much these consequences are connected to Global Change. It is not possible to know all the details, and there may be a need to adapt to new situations at shorter time intervals.
Earth Observation Data, especially satellite data, have great potential to support us in this process. And we should be aware of what has happened since 1965, when the USGS wanted to start a program for systematic earth observation.
The first Landsat satellite, launched in 1972, started a new era. Now we are close to the 9th Landsat generation, and the whole archive is open to the public. ESA's Copernicus program is the next game changer: the Sentinel-2A and 2B twin satellites have been orbiting since 2015 and 2017 under open data policies, as have Sentinel-1A, 1B, 3A and 3B. Tons of data, and we need to get the information out of them in an efficient way.
Datacubes are common tools for storing information, including Earth Observation Data. The Open Data Cube (ODC) Python library is an impressive open-source tool for managing satellite data; it lowers the obstacles to bringing experts together, who can immediately use analysis-ready products or workflows from a large and still growing community, without incurring high costs. Our subnational ODC implementation tries to support decision making in the context of forest monitoring without using proprietary services, to lower barriers for state authorities with different data protection regulations.
Sebastian Foertsch, Steven Hill
2021-04-23T15:00:00+02:00
15:00
00:30
Track 1
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-23T16:30:00+02:00
16:30
00:30
Track 1
gp2021-230-trackintel-an-open-source-python-library-for-human-mobility-modeling-and-analysis
https://submit.geopython.net/gp2021/talk/SXMP9E/
false
Trackintel: An open-source python library for human mobility modeling and analysis
Talk (25min + 5min Q&A)
en
Focusing on human mobility data, the trackintel framework (https://github.com/mie-lab/trackintel) provides functionalities for mobility data modeling, quality enhancement, data integration, performing quantitative analysis and mining tasks, and visualizing the data and/or analyzing results.
The trackintel framework structures human movement into hierarchical units (i.e., positionfixes, triplegs/stages, trips and tours), and provides functionality to generate everything starting from the raw tracking data. It also provides functions for a complete mobility data processing pipeline:
- Preprocessing (filtering, outlier detection, imputation of missing values, quality assessment, data aggregation)
- Contextual Augmentation (map matching, trajectory algebra-based context addition)
- Analysis (extraction of mobility metrics, preferences, systematic mobility)
- Visualization and Communication (generation of maps, charts)
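To illustrate the kind of aggregation involved when generating higher units from raw positionfixes, here is a deliberately simplified staypoint heuristic in plain Python. The distance threshold, minimum run length and track coordinates are all made up, and this is not trackintel's actual algorithm or API:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def staypoints(fixes, dist_m=100, min_fixes=3):
    """Toy heuristic: a run of fixes all within dist_m of the run's first
    fix, at least min_fixes long, becomes one staypoint (its first fix)."""
    points, i = [], 0
    while i < len(fixes):
        j = i + 1
        while j < len(fixes) and haversine_m(fixes[i], fixes[j]) <= dist_m:
            j += 1
        if j - i >= min_fixes:
            points.append(fixes[i])
        i = j
    return points

track = [
    (47.550, 7.590), (47.5501, 7.5901), (47.5502, 7.5899),  # lingering
    (47.560, 7.600),                                         # moving
    (47.570, 7.610), (47.5701, 7.6101), (47.5700, 7.6099),  # lingering
]
print(staypoints(track))
```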
In this talk, we will introduce you to the most important functionalities of the trackintel framework. Moreover, using real-world tracking data, we will provide typical use-cases for different mobility data processing tasks, revealing the usefulness of the framework.
Ye Hong
2021-04-23T17:00:00+02:00
17:00
00:30
Track 1
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-23T17:30:00+02:00
17:30
00:30
Track 1
gp2021-249-on-the-role-of-packaging-in-gis-or-how-to-drive-computer-clusters-and-gpus-from-within-desktop-gui-applications-for-non-technical-users
https://submit.geopython.net/gp2021/talk/PGDPTX/
false
On the role of packaging in GIS, or: How to drive computer clusters and GPUs from within desktop GUI applications for non-technical users
Talk (25min + 5min Q&A)
en
There is a growing discrepancy between what FOSS GIS GUI applications are capable of and what the contemporary (geo-) Python stack can do. While the latter is clearly much more powerful, it requires extensive and constantly growing software development skills. This talk looks at ways to (re-)connect the two worlds, especially for non-technical users, through recent advances in software packaging.
Broad software development skills are becoming a hard requirement for potential (scientific) users of the Free and Open Source Software (FOSS) ecosystem around Geographic Information Systems (GIS). There is a clear boundary between what can be done with Graphical User Interface (GUI) applications such as QGIS alone and with contemporary software libraries – if one actually has the required skillset to use the latter. Far more than rudimentary programming skills are usually required, however, which are difficult to acquire and distract from the actual scientific work at hand. The installation and deployment of much-desired high-performance computing (HPC) software libraries, e.g. for general-purpose computing on graphics processing units (GPGPU) or for computations on clusters or cloud resources, are on their own very often show-stoppers. Recent advances in Python packaging and deployment systems ease the situation and enable a new kind of thinking. Desktop GUI applications can now be combined much more easily with the mentioned type of libraries, which drastically lowers the entry barrier to HPC applications and to handling large quantities of data. This talk aims at providing an overview of where we are at the moment and what is practically and theoretically possible down the road.
Sebastian M. Ernst
2021-04-23T18:00:00+02:00
18:00
00:30
Track 1
gp2021-200-pyinterpolate-python-package-for-spatial-interpolation-and-deconvolution-of-areal-data
https://submit.geopython.net/gp2021/talk/SFXVLB/
false
Pyinterpolate - Python package for spatial interpolation and deconvolution of areal data
Talk (25min + 5min Q&A)
en
**PyInterpolate** is designed as a Python library for geostatistics. Its role is to provide access to spatial statistics tools used in a wide range of studies. The main advantage of the package is its ability to transform areal aggregates into smaller blocks with the Area-to-Point Poisson Kriging technique.
**PyInterpolate** is designed as a Python library for geostatistics. Its role is to provide access to spatial statistics tools used in a wide range of studies.
If you're a:
- GIS expert,
- geologist,
- mining engineer,
- ecologist,
- public health specialist,
- data scientist.
Then this package may be useful for you. You could use it for:
- spatial interpolation and spatial prediction,
- alone or with machine learning libraries,
- for point and areal datasets.
Pyinterpolate allows you to perform:
1. **Ordinary Kriging** and **Simple Kriging** (spatial interpolation from points),
2. **Centroid-based Kriging** of Polygons (spatial interpolation from blocks and areas),
3. **Area-to-area and Area-to-point Poisson Kriging** of Polygons (spatial interpolation and data deconvolution from areas to points).
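The core of Ordinary Kriging (item 1) is solving a small linear system built from a variogram model: the resulting weights sum to one and reproduce the known values exactly at sample locations. A minimal numpy sketch with a hypothetical spherical variogram and made-up sample points, not Pyinterpolate's actual API:

```python
import numpy as np

def spherical_variogram(h, sill=1.0, rng=10.0):
    """Spherical variogram model (hypothetical sill and range parameters)."""
    h = np.minimum(h, rng)
    return sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

def ordinary_kriging(xy, values, target):
    """Predict the value at `target` as a weighted sum of known samples.
    Weights solve the bordered system [gamma, 1; 1, 0] w = [g, 1]."""
    n = len(values)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d)   # sample-to-sample semivariances
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_variogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)            # last entry is the Lagrange multiplier
    return float(w[:n] @ values)

xy = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(ordinary_kriging(xy, vals, np.array([2.0, 2.0])))  # centre of the square
```

By symmetry the centre of the square gets equal weights of 0.25 per sample, and querying a sample location returns that sample's value exactly.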
Szymon Moliński
2021-04-23T19:30:00+02:00
19:30
00:30
Track 1
gp2021-211-bins-an-easy-path-to-make-them-using-fast-api-postgis-and-javascript
https://submit.geopython.net/gp2021/talk/GRQZLC/
false
Bins! An easy path to make them using Fast API, PostGIS and JavaScript
Talk (25min + 5min Q&A)
en
To manage and analyze geospatial data, binning processes are used more and more frequently to achieve consistent results. Generically speaking, binning is the process of grouping (clustering) point data into defined geometric features like squares or hexagons.
For example, the process that uses hexagon polygons is called "hexagon binning". Nowadays, this process is used more widely because it provides a quick and efficient way to summarize (cluster) point data, giving a better overview, especially when the number of points is high.
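Square binning can be sketched in a few lines of plain Python: each point is mapped to the index of the grid cell containing it, and the per-cell counts are the bins. The cell size and points below are hypothetical:

```python
from collections import Counter

def square_bins(points, cell_size):
    """Map each (x, y) point to the index of the square cell containing it
    and count points per cell -- the essence of square binning."""
    return Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in points
    )

points = [(0.2, 0.3), (0.8, 0.9), (1.4, 0.1), (1.9, 0.8), (3.1, 3.3)]
bins = square_bins(points, cell_size=1.0)
print(bins)
```

Hexagon binning follows the same idea but maps points to hexagonal cell indices instead, which tools like Turf.js or PostGIS grid functions take care of.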
This talk will show and discuss the process of creating a WebApp using FastAPI (Python), PostGIS and JavaScript (Vue.js, OpenLayers and Turf.js), plus a widget to generate hexagon or square bins, including explanations of the architecture, some alternatives and other insights that improve real-time data analysis using the binning process.
It is widely known that on our path to finding solutions and solving problems, we often face situations that can be handled in many ways. At the same time, in our effort to be more productive, we are commonly pressured by time, project deliveries and many other factors, so we rarely have the time to ask ourselves: "In which ways can this problem be solved? Which is the best one?" Most of the time we just start and go.
Besides the technical explanations, the talk will also show that GIS development is not an easy task, as we have many technologies and alternatives for executing our tasks.
Taking this into consideration, we will discuss how the same app and tool can be developed in more than one way, specifically describing suggestions for the architecture, database, backend and frontend solutions for the binning process.
Vinícius Cruvinel Rêgo
2021-04-23T20:00:00+02:00
20:00
00:30
Track 1
gp2021-250-spatial-sql-can-you-say-that-in-python-please-
https://submit.geopython.net/gp2021/talk/DRDDJR/
false
Spatial SQL? ...can you say that in Python, please?
Talk (25min + 5min Q&A)
en
In this talk, we'll review and solve some spatial analysis exercises in two ways: using Spatial SQL in PostGIS, and using GeoPandas in Python.
This talk will attempt to show, in a didactic and straightforward manner, how you can perform spatial analysis with two tools (Spatial SQL and GeoPandas) to solve the same given exercise. Several exercises will be discussed in order of ascending complexity, from inside a Jupyter Notebook environment.
This could be of interest for any of the following:
* Python and SQL people interested in spatial analysis
* PostGIS people interested in the Python environment
* geospatial Python people interested in the database environment
César Ariel Pérez Mercado
2021-04-23T20:30:00+02:00
20:30
00:30
Track 1
gp2021-235-audio-signal-processing-for-feature-building-and-machine-learning
https://submit.geopython.net/gp2021/talk/MBZYWR/
false
Audio Signal Processing for Feature Building and Machine Learning
Talk (25min + 5min Q&A)
en
This talk will highlight audio signals, audio processing techniques, feature building and end-to-end machine learning examples, along with the open-source tools that can be leveraged in Python.
Unlike the types of data more commonly dealt with in industry these days, such as numerical, text or image data, audio signals need a different approach for extracting information and building machine learning models. This talk will highlight the challenges of audio classification problems, starting with what an audio signal is and what its numerical representation means, how it differs from other data types, what feature extraction from audio looks like, how to go about it, and the open-source Python tools that can be leveraged for it. Digital signal processing, which includes audio processing, is a whole separate field of study, and leveraging portions of it to build successful models on audio data is an interesting and challenging problem. In addition, Matlab is a popular language of choice with great tools for audio signal processing; Python being a popular language of choice for machine learning presents another set of challenges for building successful audio and speech classification solutions in Python alone. Focus will then be placed on how to build classification models from the features representing the unseen information in audio and speech signals, all with the different open-source tools available to Python users. This will be followed by a few examples of different audio classification and prediction problem statements and solutions for them in Python, using the feature formation techniques and tools discussed earlier in the talk.
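As a concrete example of the numerical representation and feature extraction mentioned above, a spectrogram slices the signal into short overlapping frames and takes the FFT magnitude of each. The sample rate, frame size, hop size and synthetic 440 Hz tone below are illustrative choices:

```python
import numpy as np

SR = 8_000                             # sample rate (Hz)
t = np.arange(SR) / SR                 # one second of time stamps
signal = np.sin(2 * np.pi * 440 * t)   # synthetic 440 Hz tone

FRAME, HOP = 512, 256                  # frame length and hop size in samples
frames = np.stack([
    signal[start:start + FRAME]
    for start in range(0, len(signal) - FRAME + 1, HOP)
])

# Magnitude spectrum per Hann-windowed frame:
# a (num_frames, FRAME // 2 + 1) spectrogram
spectrogram = np.abs(np.fft.rfft(frames * np.hanning(FRAME), axis=1))

# The dominant frequency bin should sit near 440 Hz in every frame
peak_hz = np.fft.rfftfreq(FRAME, d=1 / SR)[spectrogram.argmax(axis=1)]
print(spectrogram.shape, peak_hz[0])
```

Features such as MFCCs build further transforms on top of exactly this frame-and-transform representation.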
Jyotika Singh
2021-04-23T21:00:00+02:00
21:00
00:30
Track 1
gp2021-202-cal-toxtrack-a-web-gis-for-pollution-mapping-in-california
https://submit.geopython.net/gp2021/talk/38SD9X/
false
Cal ToxTrack: A Web GIS for Pollution Mapping in California
Talk (25min + 5min Q&A)
en
This project focused on the public’s right-to-know about toxic chemical releases in their community by developing a geospatial web application called Cal ToxTrack. Built from scratch using PostgreSQL as a database, GeoDjango as a Python development framework, and Leaflet as a JavaScript framework, it effectively visualizes chemical releases and provides interactive tools to help explore pollution data.
Past chemical emergencies in the United States prompted a variety of toxic substance and pollution control programs and regulations, including the Emergency Planning and Community Right-to-Know Act and the Clean Air Act. While these have produced decades' worth of valuable pollution datasets, the data are stored on a government website as a collection of CSV tables. This form of access is poorly suited to public analysis because of the static nature of tables: users must download them locally and query them appropriately to extract relevant data. For analysis, data is best visualized with dynamic tools in interactive environments.
This application is built entirely on open source software, from the backend database to the front end, relying on publicly available pollution datasets, PostGIS, GeoDjango, and LeafletJS. With Cal ToxTrack, users can use a map and spatiotemporal tools to visualize which chemicals have been released, in what quantities, and where, exercising their right to know.
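The data flow from the GeoDjango backend to the Leaflet front end can be sketched as plain GeoJSON serialization. The record layout below is hypothetical, not Cal ToxTrack's actual schema:

```python
import json

def to_geojson(records):
    """Convert (lon, lat, chemical, pounds) tuples into a GeoJSON FeatureCollection."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [lon, lat]},
                "properties": {"chemical": chem, "pounds": lbs},
            }
            for lon, lat, chem, lbs in records
        ],
    }

# A single made-up release record near Los Angeles
fc = to_geojson([(-118.24, 34.05, "Benzene", 1200.0)])
payload = json.dumps(fc)  # what a Leaflet L.geoJSON() layer would consume
```

In a GeoDjango view this serialization is typically handled by the framework's own serializers rather than by hand.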
Megan Luisa White
2021-04-23T21:30:00+02:00
21:30
00:30
Track 1
gp2021-263-closing-session
https://submit.geopython.net/gp2021/talk/JRMRHT/
false
Closing Session
Special Session
en
2021-04-23T10:45:00+02:00
10:45
00:30
Track 2
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-23T11:15:00+02:00
11:15
00:30
Track 2
gp2021-226-travel-time-prediction-for-urban-travel-using-uber-movement-and-openstreetmap
https://submit.geopython.net/gp2021/talk/GB7TFF/
false
Travel Time Prediction for Urban Travel using Uber Movement and OpenStreetMap
Talk (25min + 5min Q&A)
en
In this talk, we will demonstrate how two large open datasets - Uber Movement and OpenStreetMap (OSM) - can be used to develop a robust travel time predictor for urban travel. We use open-source routing libraries to build a machine learning model that can accurately predict travel time across many cities of the world.
The foundation of our model is the anonymized and aggregated taxi trip data shared by Uber through their Uber Movement platform. This is a large dataset with both spatial and temporal components. We combine it with OpenStreetMap road network and routing data to build a model of travel time that accounts for travel distance, hour-of-the-day and historic traffic. We will demonstrate the process and show a live demo of the model in action.
We will introduce modern geospatial libraries such as geopandas, shapely and folium, and services such as Open Source Routing Machine and OpenRouteService, that make working with large spatio-temporal datasets easy. Come learn how you can use these open datasets and incorporate spatial data into your data science workflows.
See the [code and demo](https://nbviewer.jupyter.org/github/spatialthoughts/spatial-data-science/blob/master/travel_time_prediction/uber_osm_model.ipynb) online!
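As a toy illustration of the travel-distance feature (the actual workflow derives routed distances via OSRM/OpenRouteService, not straight-line distance), here is a plain-Python haversine sketch:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km: a baseline 'travel distance' feature."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km
d = haversine_km(0.0, 0.0, 1.0, 0.0)
```

In the model itself, such a distance would be just one column next to hour-of-day and historic-traffic features.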
Ujaval Gandhi, Vishnu Prasad J S
2021-04-23T11:45:00+02:00
11:45
00:30
Track 2
gp2021-203-spatial-analysis-of-covid-19-relation-with-weather-parameters
https://submit.geopython.net/gp2021/talk/ESR3RZ/
false
Spatial analysis of Covid-19 relation with weather parameters
Talk (25min + 5min Q&A)
en
Finding the relation between weather parameters and Covid-19 spread using NetCDF data.
The purpose of this talk is to find the effect of different weather parameters on the spread of Covid-19. In this work, similar behaviors in three parameters (wind speed, temperature and air pressure) are clustered across 70 cities; by also clustering the daily changes in reported coronavirus cases, cities that fall into different clusters are identified and the impact of each of these parameters on the outbreak is determined.
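A toy version of the clustering idea, not the authors' method: cities whose daily series move together can be grouped directly from a correlation matrix. The data below are made up.

```python
import numpy as np

# Hypothetical daily temperature series for four cities (one row per city)
series = np.array([
    [10, 12, 14, 16],   # city A: warming trend
    [11, 13, 15, 17],   # city B: same shape as A
    [20, 18, 16, 14],   # city C: cooling trend
    [21, 19, 17, 15],   # city D: same shape as C
], dtype=float)

corr = np.corrcoef(series)            # city-by-city correlation matrix
labels = (corr[0] > 0.9).astype(int)  # crude two-way split relative to city A
```

The real analysis works on NetCDF fields and proper clustering algorithms, but the grouping principle is the same: cities with similar parameter behavior end up in the same cluster.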
Abouzar Ramezani
2021-04-23T15:00:00+02:00
15:00
00:30
Track 2
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-23T15:30:00+02:00
15:30
00:30
Track 2
gp2021-238-maps-with-django
https://submit.geopython.net/gp2021/talk/9BF87G/
false
Maps with Django
Talk (25min + 5min Q&A)
en
Keeping in mind the Pythonic principle that “simple is better than complex” we'll see how to create a web map with the Python based web framework Django using its GeoDjango module, storing geographic data in your local database on which to run geospatial queries.
A *map* in a website is the best way to make geographic data easily accessible to users because it represents, in a simple way, the information relating to a specific geographical area and is in fact used by many online services.
Implementing a web *map* can be complex and many adopt the strategy of using external services, but in most cases this strategy turns out to be a major data and cost management problem.
In this talk we'll see how to create a web *map* with the **Python** based web framework **Django** using its **GeoDjango** module, storing geographic data in your local database on which to run geospatial queries.
Through this talk you can learn how to add a *map* to your website, starting from a simple *map* based on **SpatiaLite/SQLite** up to a more complex and interactive *map* based on **PostGIS/PostgreSQL**.
Paolo Melchiorre
2021-04-23T16:00:00+02:00
16:00
00:30
Track 2
gp2021-264-creating-3d-terrain-models-of-switzerland-using-open-data
https://submit.geopython.net/gp2021/talk/WAWC9E/
false
Creating 3D Terrain Models of Switzerland using Open Data
Talk (25min + 5min Q&A)
en
Since March 1, 2021, the Federal Office of Topography swisstopo has made its official digital data and services available online free of charge as Open Government Data (OGD).
In this talk I show how the data-retrieval API works and present some examples, such as generating 3D models from high resolution aerial imagery and elevation data.
Martin Christen
2021-04-23T17:00:00+02:00
17:00
00:30
Track 2
gp2021-254-break
https://submit.geopython.net/gp2021/talk/9BMJ9T/
false
Break
Special Session
en
2021-04-23T17:30:00+02:00
17:30
00:30
Track 2
gp2021-243-exploratory-movement-data-analysis
https://submit.geopython.net/gp2021/talk/88FGPF/
false
Exploratory Movement Data Analysis
Talk (25min + 5min Q&A)
en
Recent developments in Python data visualization libraries enable data analysts and scientists to quickly and intuitively create interactive data visualizations. In this talk, we dive into examples of visualizing movement (GPS tracking) datasets using MovingPandas, GeoViews, and HoloViz in Jupyter notebooks.
This talk looks at the current status of data visualization capabilities from the perspective of analysts and scientists working with GPS tracking data. We explore interactive visualization possibilities provided by the Pandas ecosystem: starting from GeoPandas & [MovingPandas](http://movingpandas.org) and its visualization options and limitations, followed by a dive into [Pandas & Datashader for larger datasets](https://anitagraser.com/2020/12/06/plotting-large-point-csv-files-quickly-interactively/) and [creating linked plots](https://anitagraser.com/2020/12/13/spatial-data-exploration-with-linked-plots/).
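MovingPandas derives movement attributes such as speed from raw GPS fixes; the underlying idea can be sketched in plain pandas (the fixes below are hypothetical, with coordinates in projected metres):

```python
import pandas as pd

# Hypothetical GPS fixes: one row per timestamp, coordinates in projected metres
df = pd.DataFrame({
    "t": pd.to_datetime([
        "2021-04-23 10:00:00",
        "2021-04-23 10:00:10",
        "2021-04-23 10:00:20",
    ]),
    "x": [0.0, 50.0, 120.0],
    "y": [0.0, 0.0, 0.0],
})

dt = df["t"].diff().dt.total_seconds()                    # seconds between fixes
dist = (df["x"].diff() ** 2 + df["y"].diff() ** 2) ** 0.5  # metres between fixes
df["speed_m_s"] = dist / dt                                # first row stays NaN
```

In MovingPandas the same computation happens on a `Trajectory` object, with geographic coordinates handled for you, and the result feeds straight into the interactive plots shown in the talk.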
Anita Graser
2021-04-23T18:00:00+02:00
18:00
00:30
Track 2
gp2021-204-should-we-return-to-python-2-
https://submit.geopython.net/gp2021/talk/E8ZEN8/
false
Should We Return to Python 2?
Talk (25min + 5min Q&A)
en
Did you migrate all your projects to Python 3, or did you keep a backdoor open just in case?
The migration to Python 3 is over, but that's not the end of the journey. Although your code runs on the currently supported Python 3.6 to 3.9, there may be pieces of code that look obvious to you but surprise younger developers who have never seen Python 2 code.
At the end of 2020 I started looking for Python projects on GitHub and helped them get rid of those Python 2 relics. I'll show you a few recipes, beyond the automatic tools, for making your code modern and prepared for future updates.
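Typical relics of the kind the talk targets can be modernized like this (the class names are invented for illustration):

```python
# Python 2 relic                    ->  modern Python 3
# class Sensor(object):                 class Sensor:
# super(GpsSensor, self).__init__()     super().__init__()
# "%s@%sHz" % (name, rate)              f"{name}@{rate}Hz"

class Sensor:  # no need to inherit from object explicitly
    def __init__(self, name):
        self.name = name

class GpsSensor(Sensor):
    def __init__(self, name, rate_hz):
        super().__init__(name)  # argument-free super() since Python 3
        self.rate_hz = rate_hz

s = GpsSensor("gps0", rate_hz=1)
label = f"{s.name}@{s.rate_hz}Hz"  # f-string replaces %-formatting
```

Automatic tools such as pyupgrade catch many of these mechanically; the talk's recipes cover the cases that still need a human eye.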
And no, we should not return to Python 2. We should get rid of it completely.
Miroslav Šedivý