{"schedule": {"version": "0.98", "base_url": "https://submit.geopython.net/geopython2019/schedule/", "conference": {"acronym": "geopython2019", "title": "GeoPython2019", "start": "2019-06-24", "end": "2019-06-26", "daysCount": 3, "timeslot_duration": "00:05", "days": [{"index": 1, "date": "2019-06-24", "day_start": "2019-06-24T04:00:00+02:00", "day_end": "2019-06-25T03:59:00+02:00", "rooms": {"Auditorium / other": [{"id": 75, "guid": "c06e850a-82a9-5f82-9a4e-d12713ef53d1", "logo": "", "date": "2019-06-24T09:00:00+02:00", "start": "09:00", "duration": "01:00", "room": "Auditorium / other", "slug": "RSDXKN", "url": "https://submit.geopython.net/geopython2019/talk/RSDXKN/", "title": "Registration", "subtitle": "", "track": null, "type": "Coffee Break", "language": "en", "abstract": "-", "description": "-", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}, {"id": 63, "guid": "5e68090d-7fb1-58a9-877c-08e7a3e1470f", "logo": "", "date": "2019-06-24T10:30:00+02:00", "start": "10:30", "duration": "00:30", "room": "Auditorium / other", "slug": "9KFDY3", "url": "https://submit.geopython.net/geopython2019/talk/9KFDY3/", "title": "Coffee Break (Workshop Day)", "subtitle": "", "track": null, "type": "Coffee Break", "language": "en", "abstract": "-", "description": "-", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}, {"id": 72, "guid": "5e61e708-b37f-519b-a57e-0e8f6de7b348", "logo": "", "date": "2019-06-24T13:00:00+02:00", "start": "13:00", "duration": "01:00", "room": "Auditorium / other", "slug": "9BQAMG", "url": "https://submit.geopython.net/geopython2019/talk/9BQAMG/", "title": "Lunch", "subtitle": "", "track": null, "type": "Lunch Break", "language": "en", "abstract": "-", "description": "-", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}, {"id": 64, "guid": 
"2faa0074-2d13-5cf2-a0c1-8558ddcaa673", "logo": "", "date": "2019-06-24T16:00:00+02:00", "start": "16:00", "duration": "00:30", "room": "Auditorium / other", "slug": "FXQPYM", "url": "https://submit.geopython.net/geopython2019/talk/FXQPYM/", "title": "Coffee Break (Workshop Day)", "subtitle": "", "track": null, "type": "Coffee Break", "language": "en", "abstract": "-", "description": "-", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}, {"id": 68, "guid": "4fd27506-ca65-5775-a81e-7927a0d10ec3", "logo": "/media/geopython2019/images/79T8KJ/icebreaker.jpg", "date": "2019-06-24T18:30:00+02:00", "start": "18:30", "duration": "02:00", "room": "Auditorium / other", "slug": "79T8KJ", "url": "https://submit.geopython.net/geopython2019/talk/79T8KJ/", "title": "Ice Breaker Party (12th floor, or outside building if nice weather)", "subtitle": "", "track": null, "type": "Ice Breaker Party", "language": "en", "abstract": "Enjoy local beers and a small aperitif at the traditional GeoPython ice breaker party.", "description": "The ice-breaker Party takes place in the 12th floor (lounge) of the building and enjoy the view over Basel.", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}], "Room 1": [{"id": 29, "guid": "d3ba82bc-8aed-54a0-a9fe-ca72d4773b8c", "logo": "", "date": "2019-06-24T10:00:00+02:00", "start": "10:00", "duration": "03:00", "room": "Room 1", "slug": "TKSDGG", "url": "https://submit.geopython.net/geopython2019/talk/TKSDGG/", "title": "Bridging Earth Observation data and Machine Learning in Python", "subtitle": "", "track": null, "type": "Workshop (3 hours with 30 minutes break)", "language": "en", "abstract": "Extracting valuable information from satellite imagery datasets is challenging, both due to large amounts of data, as well as the lack of techniques able to automatically extract complex patterns in such spatio-temporal data. 
Join us to see how [`eo-learn`](https://eo-learn.readthedocs.io/en/latest/) can help you extract meaningful information from satellite data with just a few lines of code.", "description": "The availability of open Earth observation (EO) data through the Copernicus and Landsat programs represents an unprecedented resource for many EO applications, ranging from ocean and land use/land cover monitoring to disaster control, emergency services and humanitarian relief. Large amounts of such spatiotemporal data call for tools that are able to automatically extract the complex patterns embedded within them.\r\n\r\n`eo-learn` is a collection of open source Python packages that have been developed to seamlessly access and process spatio-temporal satellite imagery in a timely and automatic manner. `eo-learn` makes extraction of valuable information from satellite imagery as easy as defining a sequence of operations to be performed on satellite imagery. It also encourages collaboration --- the tasks and workflows can be shared, thus allowing for community-driven ways to exploit EO data.\r\n\r\nThe `eo-learn` library acts as a bridge between the Earth Observation (EO)/Remote Sensing (RS) field and the Python ecosystem for data science and machine learning. The library is written in Python and uses NumPy arrays to store and handle remote sensing (raster) data and GeoPandas data-frames for vector data. Its aim is to ease entry into the field of RS for non-experts and simultaneously bring the state-of-the-art tools for computer vision, machine learning, and deep learning from the Python ecosystem to remote sensing experts.\r\n\r\nDuring the workshop we will introduce the `eo-learn` framework, show examples of tasks for retrieving EO data (e.g. Sentinel-2, Sentinel-1, DEM), processing it, adding non-EO data (e.g. labels) to the dataset, etc., 
and finally build the whole pipeline to run such workflow for larger areas, thus preparing the data for ML algorithms.", "recording_license": "", "do_not_record": false, "persons": [{"id": 82, "code": "X8A9RH", "public_name": "Matej Aleksandrov", "biography": null, "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 81, "guid": "1eef451d-d3ef-55c9-9b06-c63ea2fa5fe0", "logo": "", "date": "2019-06-24T14:00:00+02:00", "start": "14:00", "duration": "02:00", "room": "Room 1", "slug": "98YU9E", "url": "https://submit.geopython.net/geopython2019/talk/98YU9E/", "title": "Deep Learning using Airborne Imagery", "subtitle": "", "track": null, "type": "Workshop (2 hours)", "language": "en", "abstract": "An introduction to Deep Learning for Airborne Imagery.", "description": "The workshop will focus mainly on implementing Convolutional Neural networks on airborne imagery.\r\n\r\n* 15': Introduction to Deep Learning (Denis Jordan)\r\n* 45': Example 1: Land Use Classification (Daniel Rettenmund)\r\n* 45': Example 2: Faster RCNN Bounding Box Object Detection (Adrian Meyer)\r\n* 15': Q & A", "recording_license": "", "do_not_record": false, "persons": [{"id": 19, "code": "ZDM8RC", "public_name": "Adrian Meyer", "biography": "Data Scientist for Machine Learning and Remote Sensing", "answers": []}, {"id": 77, "code": "CVKCME", "public_name": "Daniel Rettenmund", "biography": null, "answers": []}, {"id": 76, "code": "WBMUAN", "public_name": "Denis Jordan", "biography": "Denis Jordan is Professor of Mathematics and Statistics at the University of Applied Sciences and Arts Northwestern Switzerland (FHNW). He grew up in Berne where he graduated in Mathematics and Theoretical Physics before he received a doctoral and postdoctoral qualification at the Department of Anesthesiology of the Technical University of Munich (TUM). 
His primary research interests include machine learning in remote sensing paradigms (FHNW) and neural correlates of the conscious and unconscious brain in multimodal recordings of electroencephalography, functional magnetic resonance tomography and positron emission tomography (TUM).", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 13, "guid": "5a41bd9a-6bc1-5c7b-873e-bb2857098599", "logo": "", "date": "2019-06-24T16:30:00+02:00", "start": "16:30", "duration": "02:00", "room": "Room 1", "slug": "XMKH3J", "url": "https://submit.geopython.net/geopython2019/talk/XMKH3J/", "title": "Wikidata - a new source for geospatial data", "subtitle": "", "track": null, "type": "Workshop (2 hours)", "language": "en", "abstract": "Wikidata gives us a new way to query the world's knowledge (e.g. \"give me a map of cities with a female mayor, ordered by size\"). This workshop will give an introduction to the project and show how it can be interfaced with Python.", "description": "Wikidata is a project by the Wikimedia foundation and the Wikidata community. Its goal is to be a machine-readable collection of all human knowledge. Wikidata does this by forming a knowledge graph that can be extended and queried by any user, anywhere. Much of the data that is stored in Wikidata of course has a geographical reference or relates to entities that have a geographical reference. Wikidata can be a powerful helper when building geographical applications. It can be used to query geographical data directly but it can also become especially helpful as a way to enhance existing geodata.\r\n\r\nIn this workshop, I will introduce the Wikidata project and show examples that have been built with it. I will also show how it can be interfaced with Python, both to query data and to import more data into the project. 
Furthermore, integration with other existing tools like OpenStreetMap will be shown.", "recording_license": "", "do_not_record": false, "persons": [{"id": 16, "code": "XNNY7P", "public_name": "Knut H\u00fchne", "biography": "I am a Berlin based software developer and love all things open. As one of the organisers of the Berlin Open Knowledge Lab, I got to know the Wikidata project and continue to find more and more use-cases for it. I love sharing my knowledge and think that Wikidata and Open Data in general can be a very empowering tool for everyone.", "answers": []}], "links": [], "attachments": [], "answers": []}], "Room 2": [{"id": 40, "guid": "3d81c475-3621-52e6-90e9-b5253accc619", "logo": "", "date": "2019-06-24T10:00:00+02:00", "start": "10:00", "duration": "03:00", "room": "Room 2", "slug": "QEU8RY", "url": "https://submit.geopython.net/geopython2019/talk/QEU8RY/", "title": "Python from \u201cHello World\u201d to \u201cFit for GeoPython\u201d in 180 Minutes", "subtitle": "", "track": null, "type": "Workshop (3 hours with 30 minutes break)", "language": "en", "abstract": "Enough Python to get you started for all the wonderful GeoPython talks and workshops.", "description": "A quick introduction to Python 3.7 and its standard library. We'll write together a very simple geographical program. At the end you'll be able to follow and understand the syntax of the Python code snippets presented in other talks and workshops. Some previous programming experience in any other language is an advantage. 
\r\n\r\n- installation: you can already install Python 3.7 (or 3.6) before the workshop\r\n- \u201chello world\u201d, running scripts\r\n- interactive mode with IPython and Jupyter Notebook (you may install these)\r\n- variables: bool, int, float, str, tuple, list, set, dict\r\n- string formatting\r\n- conditions, loops\r\n- user interaction\r\n- reading and writing files\r\n- functions\r\n- modules\r\n- standard library: datetime, math, random, pathlib, pickle, gzip, csv, json, argparse, logging, subprocess, \u2026\r\n- testing (with pytest: you may install it)\r\n\r\nEncore (names of the third party libraries you could install):\r\n- pandas: a better spreadsheet at your fingertips\r\n- pytest: automatic testing of your code\r\n- pytz: time zone definitions", "recording_license": "", "do_not_record": false, "persons": [{"id": 32, "code": "ESU9JG", "public_name": "Miroslav \u0160ediv\u00fd", "biography": "Senior Software Developer at [solute GmbH](https://www.solute.de/). Using Python to get you the best prices online. Using geography to find my way home. Born in Czechoslovakia, studied in France, living in Germany. Addicted to foreign languages and the \u201chuman\u201d face of computing, such as writing systems, calendar and time zones, and teaching computers to work on the boring tasks. Twitter: [@eumiro](https://twitter.com/eumiro)", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 80, "guid": "f83b14b6-ed96-5dcb-951d-39aa508c8390", "logo": "", "date": "2019-06-24T14:00:00+02:00", "start": "14:00", "duration": "02:00", "room": "Room 2", "slug": "QA78H3", "url": "https://submit.geopython.net/geopython2019/talk/QA78H3/", "title": "Introduction to geospatial data analysis with GeoPandas and the PyData stack", "subtitle": "", "track": null, "type": "Workshop (2 hours)", "language": "en", "abstract": "This tutorial is an introduction to geospatial data analysis, with a focus on tabular vector data using GeoPandas. 
It will show how GeoPandas and related libraries can improve your workflow (importing GIS data, visualizing, joining and preparing for analysis, exploring spatial relationships, ...) and fit nicely in the traditional PyData stack.", "description": "This tutorial is an introduction to geospatial data analysis in Python, with a focus on tabular vector data using GeoPandas. The content focuses on introducing the participants to the different libraries to work with geospatial data and will cover munging geo-data and exploring relations over space. This includes importing data in different formats (e.g. shapefile, GeoJSON), visualizing, combining and tidying them up for analysis, and will use libraries such as pandas, geopandas, shapely, pyproj, matplotlib, cartopy, ... The tutorial will cover the following topics, each of them using Jupyter notebooks and hands-on exercises with real-world data:\r\n\r\n 1. Introduction to vector data and GeoPandas\r\n 2. Visualizing geospatial data\r\n 3. Spatial relationships and operations\r\n 4. Spatial joins and overlays\r\n\r\nMaterials of previous versions of this tutorial: https://github.com/jorisvandenbossche/geopandas-tutorial", "recording_license": "", "do_not_record": false, "persons": [{"id": 72, "code": "VVHLEB", "public_name": "Joris Van den Bossche", "biography": "Joris is a core contributor to Pandas and maintainer of GeoPandas. He has given several tutorials at international conferences and a course on Python for data analysis for PhD students at Ghent University. 
He did a PhD at Ghent University and VITO in air quality research, worked at the Paris-Saclay Center for Data Science, and is currently a freelance software developer and teacher.", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 10, "guid": "acb7ceac-bcde-5cae-a320-cff18f7a85ac", "logo": "/media/geopython2019/images/JB3B8C/2019-02-07_13_28_46-10_Jim_O_Leary_Python_Productivity_with_FME.pdf.png", "date": "2019-06-24T16:30:00+02:00", "start": "16:30", "duration": "02:00", "room": "Room 2", "slug": "JB3B8C", "url": "https://submit.geopython.net/geopython2019/talk/JB3B8C/", "title": "Introduction to Spatial Data Processing using FME and Python", "subtitle": "", "track": null, "type": "Workshop (2 hours)", "language": "en", "abstract": "FME is a great and affordable tool to process data. It has a full list of built-in connectors and tools that make spatial processing easy. This workshop will show how Python can enhance all these functionalities.", "description": "FME is a widely used spatial ETL tool in the geospatial field. It is considered to be the Swiss army knife for your data. It helps you get spatial data into the exact format and structure you need, using a fast, simple and straightforward process. Python can be used within FME to accomplish tasks either before or after FME runs, or to perform tasks within FME which are not possible with standard FME tools and transformers.\r\n\r\nIn this workshop, I will show where Python scripts can be used within a workspace and how they can interact with features. 
I will illustrate the Python FME API with concrete examples and exercises.\r\n\r\nThis workshop is designed for people willing to learn how to use Python with FME; no initial knowledge of FME is required.", "recording_license": "", "do_not_record": false, "persons": [{"id": 18, "code": "YA9YYE", "public_name": "R\u00e9gis Longchamp", "biography": "With many years of experience in a land surveying firm, I\u2019m currently working for INSER SA as a GIS Analyst. I hold an MSc in Environmental Sciences and Engineering from EPFL and I am an FME Certified Professional & Trainer.", "answers": []}], "links": [], "attachments": [], "answers": []}]}}, {"index": 2, "date": "2019-06-25", "day_start": "2019-06-25T04:00:00+02:00", "day_end": "2019-06-26T03:59:00+02:00", "rooms": {"Auditorium / other": [{"id": 76, "guid": "923058f8-b067-57c4-a607-a10ea98e4e28", "logo": "", "date": "2019-06-25T09:00:00+02:00", "start": "09:00", "duration": "00:15", "room": "Auditorium / other", "slug": "3N9GAU", "url": "https://submit.geopython.net/geopython2019/talk/3N9GAU/", "title": "Opening / Announcements", "subtitle": "", "track": null, "type": "Short Talk", "language": "en", "abstract": "Opening of GeoPython 2019", "description": "-", "recording_license": "", "do_not_record": false, "persons": [{"id": 1, "code": "YH9TLU", "public_name": "Martin Christen", "biography": "Martin Christen is a professor of Geoinformatics and Computer Graphics at the Institute of Geomatics at the University of Applied Sciences Northwestern Switzerland (FHNW). His main research interests are geospatial Virtual and Augmented Reality, 3D geoinformation, and interactive 3D maps. \r\nMartin Christen is very active in the Python community. He teaches various Python-related courses and uses Python in most research projects. He organizes the PyBasel meetup, the local Python User Group of Northwestern Switzerland. He also organizes the yearly GeoPython conference. 
He is a board member of the Python Software Verband e.V.", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 67, "guid": "6ea46c34-daf1-5655-af76-2911bed85976", "logo": "", "date": "2019-06-25T10:45:00+02:00", "start": "10:45", "duration": "00:30", "room": "Auditorium / other", "slug": "RAPMXJ", "url": "https://submit.geopython.net/geopython2019/talk/RAPMXJ/", "title": "Coffee Break", "subtitle": "", "track": null, "type": "Coffee Break", "language": "en", "abstract": "-", "description": "-", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}, {"id": 71, "guid": "1ff79578-4288-5d92-be56-6cb534e72f91", "logo": "", "date": "2019-06-25T13:00:00+02:00", "start": "13:00", "duration": "01:00", "room": "Auditorium / other", "slug": "DAZRS9", "url": "https://submit.geopython.net/geopython2019/talk/DAZRS9/", "title": "Lunch", "subtitle": "", "track": null, "type": "Lunch Break", "language": "en", "abstract": "-", "description": "-", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}, {"id": 66, "guid": "50dfd294-24b0-55f4-8cf6-cda72e79081d", "logo": "", "date": "2019-06-25T15:30:00+02:00", "start": "15:30", "duration": "00:30", "room": "Auditorium / other", "slug": "Q7HXSV", "url": "https://submit.geopython.net/geopython2019/talk/Q7HXSV/", "title": "Coffee Break Afternoon", "subtitle": "", "track": null, "type": "Coffee Break", "language": "en", "abstract": "-", "description": "-", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}, {"id": 77, "guid": "5ecd2b15-e9e0-5e36-a6d6-2bdb25de51c3", "logo": "", "date": "2019-06-25T17:45:00+02:00", "start": "17:45", "duration": "00:45", "room": "Auditorium / other", "slug": "XWMNUR", "url": "https://submit.geopython.net/geopython2019/talk/XWMNUR/", "title": "Lightning Talks", "subtitle": "", "track": null, "type": "Talk", "language": 
"en", "abstract": "Lightning Talks", "description": "Lightning talks are registered directly at the conference on a first come first serve basis. More info will be provided at the opening session\r\n\r\nA lightning talk is a very short talk where you share an idea, concept, or a bit of information you find interesting. They\u2019re quick, easy, and a great way to practice. \r\n\r\nA lightning talk should be about five minutes long, just long enough to give an overview and make people curious about your topic. You can talk about anything that\u2019s related to the event\u2019s general theme (in the case of Write the Docs, anything even remotely related to documentation). \r\n\r\nFirst, you need a topic. Your topic might be:\r\nA concept, process, or tool that you learned recently or are still learning\r\nAn idea for a website or product that would solve a problem you have\r\nA retrospective, or what went right/wrong during a project you did or are doing\r\nAnything relevant that the audience might be interested in knowing more about\r\nNext, you need an outline for the content. Think about the audience, and the goal of your talk. Choose points to make that will be understandable by the audience and achieve your presentation goal. Remember how quickly five minutes goes by when choosing what to include!\r\n\r\nPotential points of interest might be: \r\n\r\n* What could you use this for or when could you use it? Have you already used it? How?\r\n* When wouldn\u2019t it not be as useful? What are some contraindications to using it?\r\n* Resources related to the subject, including books, documentation, and URLs.\r\n* Are there any projects or companies that are using what you\u2019re sharing?\r\n* Is this something you\u2019d like to collaborate with others on? Feel free to ASK!\r\n* What are some of the challenges related to using, building, or configuring what you\u2019re showing?\r\n\r\nYou absolutely don\u2019t need slides. 
However, if you\u2019d like to make slides, use anything that you are comfortable with. Don\u2019t worry if it doesn\u2019t look polished, lightning talks don\u2019t need to be! You might use Microsoft Word, Keynote, a PDF, or a web site. Even a simple terminal or console window where you enter commands can work well for presenting your ideas. \r\n\r\nKeep in mind that the projector will be lower resolution, typically 1024x768, and that low-contrast slides don\u2019t present well. You\u2019ll also need to make your terminal or console font very large so that everyone can see what you\u2019re typing. If you\u2019re running code examples, have them written, debugged, and ready to go. Watching someone write code as they go can be great in a longer deep-dive type of talk, but it\u2019s not very well-suited to a lightning talk. \r\n\r\nYou may have the urge to do a live demonstration of the thing you\u2019re talking about. It seems like an easy way to help the audience see your vision, and it is\u2026 if it works! Following Murphy\u2019s Law, however, we can deduce that your live demo will go horribly wrong. A failed demo can derail all but the most skilled presenters, but if you choose to do a demo and it goes wrong don\u2019t worry! Have a backup story to tell that explains what the demo would have shown and revert to it if necessary. \r\n\r\nTake a deep breath and go for it. You are among friends, and nobody will mind if you make mistakes. Almost everyone starts out their public speaking career in the tech industry by giving lightning talks, so you can assume your audience has been in your shoes before. Throw caution to the wind and embrace your five minutes! :) \r\n\r\nBe sure to bring everything you need to do your presentation. It\u2019s wise to assume that the internet access will fail precisely when you need it. Load web pages you need into your browser beforehand. 
Bring the adapters you\u2019d normally need to connect your laptop to a monitor or projector, and keep a backup copy of your presentation on a USB memory stick \u2013 laptops can and do fail, and this will allow you to use someone else\u2019s laptop if the need arises. \r\n\r\nIf you have your slide presentation or example code available online, you can let the group know where to find it if you want to share it. Curious people may follow up with you if they\u2019d like to collaborate or have feedback about your presentation. \r\n\r\nThanks to the lovely Portland Python Users Group for use of this content. \r\n\r\nLightning Talks: A Guide for Beginners by Michelle Rowley of PDX Python is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}, {"id": 69, "guid": "f9817bc6-f15f-5d11-af9e-67ada1b8f48b", "logo": "/media/geopython2019/images/9RCSTJ/rebhausbasel.jpeg", "date": "2019-06-25T19:00:00+02:00", "start": "19:00", "duration": "03:00", "room": "Auditorium / other", "slug": "9RCSTJ", "url": "https://submit.geopython.net/geopython2019/talk/9RCSTJ/", "title": "Conference Dinner (in Basel \"Restaurant Rebhaus\")", "subtitle": "", "track": null, "type": "Conference Dinner", "language": "en", "abstract": "The Rebhaus was first mentioned in the year 1349. The Honor Society of Rebhaus has been documented as the owner of the Rebhaus since 1397. Today it is still a famous place in Basel, still belonging to the Rebhaus guild.", "description": "The Rebhaus is located in the center of Basel. \r\n\r\n**From the conference venue:**\r\n\r\n* Walk to Muttenz train station (~4-5 minutes)\r\n* Take urban train line S1 or S3 to Basel SBB (1 stop, 7-minute ride). 
[Recommended: S1 at 18:34]\r\n* Then change to Tramway Line 2, direction \"Riehen Grenze\" (4 stops to \"Wettsteinplatz\", first stop after crossing the river Rhine, 6 minutes)\r\n* Walk to Rebhaus (3 minutes). You will see the \"lion fountain\" outside.\r\n\r\n(If you have a hotel in Basel city you will get the \"Baselcard\" and all rides are free.)\r\n\r\n The address is: **Riehentorstrasse 11, CH-4058 Basel**", "recording_license": "", "do_not_record": false, "persons": [], "links": [], "attachments": [], "answers": []}], "Room 1": [{"id": 56, "guid": "32983661-cbc5-5e00-ae01-13ee8c7aafa4", "logo": "/media/geopython2019/images/RQB7PX/WhatsApp_Image_2019-03-05_at_11.53.42_1.jpeg", "date": "2019-06-25T09:15:00+02:00", "start": "09:15", "duration": "00:30", "room": "Room 1", "slug": "RQB7PX", "url": "https://submit.geopython.net/geopython2019/talk/RQB7PX/", "title": "Machine Learning for Land Use / Land Cover Statistics of Switzerland", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "This project demonstrates a powerful prototype to classify land use / land cover statistics for an entire country using a Deep Learning approach for aerial imagery processing and a Random Forest architecture for data fusion with time series and other auxiliary datasets.", "description": "The Swiss land use / land cover (LULC) statistics (\"Arealstatistik\") are produced by the Swiss Federal Statistical Office and are an important instrument for long-term spatial observation. The statistics have been collected periodically since the 1980s and have been based on the same survey method: aerial photographs of Switzerland are overlaid with a regular grid of 100x100 meters and a team of trained interpreters determines the land cover and land use classes at each grid intersection. The final product contains more than 4 million sample points. \r\n\r\nThe area statistics are produced with great personnel effort at periodic intervals of 9 years. 
The goal of the project \u2018AI pilot for area statistics\u2019 is to partially automate the interpretation task using ML methods. The developed prototype employs CNN-based supervised learning for land use/cover classification of aerial images. The CNN output is fused with various auxiliary data (cadastral survey information, altitude, satellite-derived time series etc.) and classified using the Random Forest method. The obtained results show the great potential of the proposed approach for partially automating data interpretation.", "recording_license": "", "do_not_record": false, "persons": [{"id": 19, "code": "ZDM8RC", "public_name": "Adrian Meyer", "biography": "Data Scientist for Machine Learning and Remote Sensing", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 30, "guid": "fd566aaf-a352-55f7-93f1-0ccf4caf8c5e", "logo": "", "date": "2019-06-25T09:45:00+02:00", "start": "09:45", "duration": "00:30", "room": "Room 1", "slug": "9XUTYZ", "url": "https://submit.geopython.net/geopython2019/talk/9XUTYZ/", "title": "How to structure EO data for ML workflows", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "The availability of open Earth observation (EO) data represents an unprecedented resource for many EO applications. The value hidden within open-access satellite imagery can be revealed not only by looking at the spatial context but also by taking into account the temporal evolution of a pixel or an area within an image. We found that the available data structures are not well suited to the automatic extraction of complex patterns in such spatio-temporal data. In this talk we will present lessons learned while dealing with optical imagery from the Sentinel-2 satellite with a five-day global revisit time. 
The value extraction pipelines relying on other external machine learning and deep learning frameworks are streamlined with the [`eo-learn`](https://eo-learn.readthedocs.io/en/latest/) library, in which an `EOPatch` plays a central role as a data container.", "description": "`eo-learn` is a collection of open source Python packages that makes extraction of valuable information from satellite imagery as easy as defining a sequence of operations to be performed on satellite imagery. It acts as a bridge between the EO and remote sensing (RS) fields and the Python ecosystem for data science and machine learning, easing entry into the field of RS for non-experts and simultaneously bringing the state-of-the-art tools for computer vision, machine learning, and deep learning from the Python ecosystem to RS experts.\r\n\r\nWe will present how we leverage NumPy arrays to store and handle RS data, and GeoPandas for vector and attribute data, in data containers called `EOPatch`es, as well as the benefits and the problems we ran into in our typical use cases. A comparison with formats optimised for cloud-based access (e.g. NetCDF, cloud-optimised GeoTIFF) will be presented. We will show how Land Cover prediction on a (small) country level can be implemented on your laptop and then scaled to run on a cluster, splitting the area into a grid of `EOPatch`es and using `EOExecutor` to handle execution and monitoring of the workflow. 
Future design and improvements of `eo-learn`, particularly regarding the `EOPatch` structure, will also be discussed.", "recording_license": "", "do_not_record": false, "persons": [{"id": 83, "code": "VHEY7N", "public_name": "Matic Lubej", "biography": null, "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 31, "guid": "6229ebe5-c2d1-51e7-81a3-56857a6eaf84", "logo": "/media/geopython2019/images/UV7APF/dolines-segmented_13ZTGQg.jpeg", "date": "2019-06-25T10:15:00+02:00", "start": "10:15", "duration": "00:30", "room": "Room 1", "slug": "UV7APF", "url": "https://submit.geopython.net/geopython2019/talk/UV7APF/", "title": "Terrain segmentation with label bootstrapping for lidar datasets, case of doline detection", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "An aerial lidar scan of Slovenia with a resolution of 1 m^2 was used to segment out a large number of dolines, specific relief depressions with a typical diameter of 42 m that are a diagnostic feature of karst landscapes. This talk will cover the data processing, label bootstrapping, TPU-based model training and inference that was done to deliver a catalog of 472k segmented objects.", "description": "The project goal was to create a catalog of dolines based on the publicly available lidar scan of Slovenia (20k km^2 at 1 m^2 resolution).\r\nWe estimated that there would be between 200k and 1M dolines on the state territory. Assuming 10 seconds of work to label each object, it would take between 23 and 115 human days to label the entire territory, so manual labeling was not feasible and we attempted a machine learning approach.\r\n\r\nWe used a manually created approximate segmentation dataset, produced with about 8 hours of manual work, to train an image segmentation algorithm. We used the trained model to segment previously unseen data and manually reviewed the segmentations it produced. 
We joined the original manual labels with the reviewed segmentations and trained the segmentation algorithm again on the union. We repeated this process until we had produced a diverse enough label dataset, which we used to train the final segmentation model. This final model was used to create a catalogue of all Slovenian dolines.\r\n\r\nThe segmentation algorithm used was a U-Net. Significant data augmentation was applied to extend the dataset. Finally, the labelled country-wide dataset was manually reviewed by a domain expert.\r\n\r\nDuring this project we also developed and [open sourced a Python library for semi-supervised label creation](https://github.com/rok/label-wrapper).", "recording_license": "", "do_not_record": false, "persons": [{"id": 39, "code": "SLRRXA", "public_name": "Rok Mihevc", "biography": "Physicist turned freelance data scientist. Open source contributor.\r\nInterested in data science tooling, building complete data pipelines, and shipping data products.\r\n\r\n[GitHub](https://github.com/rok)\r\n[Blog](http://mihevc.org)\r\n[LinkedIn](https://www.linkedin.com/in/rokmihevc/)", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 16, "guid": "2ddd7a2d-0a13-5002-a1f1-6c1d5812807d", "logo": "", "date": "2019-06-25T11:30:00+02:00", "start": "11:30", "duration": "00:30", "room": "Room 1", "slug": "UDMMGM", "url": "https://submit.geopython.net/geopython2019/talk/UDMMGM/", "title": "Detect and Remediate Bias in Machine Learning Datasets and Models", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "We will share lessons learnt while using AI Fairness 360 and show how to leverage it to detect bias and de-bias models during pre-processing, in-processing, and post-processing.", "description": "One of the most critical and controversial topics around artificial intelligence centers on bias. 
As more apps that rely on artificial intelligence come to market, software developers and data scientists can unwittingly inject their personal biases into these solutions. \r\n\r\nBecause flaws and biases may not be easy to detect without the right tool, we have launched AI Fairness 360, an open source library to detect and remove bias in models and data sets.\r\n\r\nThe AIF360 Python package includes a comprehensive set of metrics for data sets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias. In total, AIF360 has 30 fairness metrics and 10 bias mitigation algorithms. \r\n\r\nWe will share lessons learnt while using AI Fairness 360 and show how to leverage it to detect bias and de-bias models during pre-processing, in-processing, and post-processing.", "recording_license": "", "do_not_record": false, "persons": [{"id": 79, "code": "B9SU7V", "public_name": "Ilja Rasin", "biography": null, "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 33, "guid": "150642d7-44ed-592d-93e0-fc84a48c62d7", "logo": "", "date": "2019-06-25T12:00:00+02:00", "start": "12:00", "duration": "00:30", "room": "Room 1", "slug": "BMLGLC", "url": "https://submit.geopython.net/geopython2019/talk/BMLGLC/", "title": "The Mission Support System", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "Examining the atmosphere by research aircraft is a costly and highly collaborative effort involving a large community of scientists, their one-of-a-kind measurement instruments and a very limited amount of available flight-hours.\r\nThe Mission Support System (MSS) enables the planning of optimal flight paths by visualising the results of model simulations in combination with the chosen flight path and allowing for a simple iterative and collaborative improvement process, making the best measurement flights possible.", "description": "The MSS software is used to plan research aircraft missions. 
Such missions involve the measurement of interesting atmospheric situations by research aircraft. These missions typically involve a wide range of one-of-a-kind instruments designed and operated by different scientific institutions, with different requirements and operational conditions.\r\n\r\nTo measure in the scientifically most interesting locations, it is necessary to have model forecasts of relevant quantities such as meteorological parameters, chemical composition or particle information to guide the aircraft to the location of interest. \r\n\r\nThe MSS software consists of two major components. One is formed by an extended OGC web map server capable of visualising the big data generated by complex 3-D atmospheric simulations in a highly configurable manner and delivering the resulting small PNG images over the internet. The second is a flight path editor that allows the figures produced by our server (or other OWS-compliant services) to be overlaid with the flight path, to identify regions of interest and change the plan accordingly. The split is necessary as the data is typically located in super-computing centers, while the scientists are often based in remote locations with poor internet connections. Several special features are implemented in addition to this basic functionality, e.g. the ability to provide vertical cross-sections along the flight path, which are ideal for assessing the measurements that, e.g. 
a lidar would take.\r\n\r\n\r\n### Table of Contents\r\n\r\nShort introduction on the scope of atmospheric research\r\n- sketch processes in the atmosphere\r\n- why, how, aim\r\n- collaboration\r\n\r\nPlanning Phase\r\n- preparing OGC Web Map Services\r\n- model calculations\r\n\r\nSoftware Description\r\n- architecture\r\n- Client/Server Model\r\n- advanced storage concept\r\n- poor internet optimisations\r\n- usage\r\n\r\nOutlook\r\n- new features", "recording_license": "", "do_not_record": false, "persons": [{"id": 41, "code": "G97QXK", "public_name": "Reimar Bauer", "biography": "I am a programmer from J\u00fclich, Germany. That\u2019s a small town between Aachen and Cologne.\r\n\r\nI work at the Forschungszentrum J\u00fclich GmbH, whose employees conduct research in the fields of energy and the environment, information, and brain research, with the aim of providing society with options for action facilitating sustainable development.\r\n\r\nMy work is related to atmospheric science.\r\n\r\nI have been a fellow of the Python Software Foundation since 2013.\r\n\r\nA more detailed interview at [blog.pythonlibrary.org](https://www.blog.pythonlibrary.org/2018/11/26/pydev-of-the-week-reimar-bauer/).", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 5, "guid": "8c00f002-beeb-5dc1-a47a-aa6dc768a8d2", "logo": "", "date": "2019-06-25T12:30:00+02:00", "start": "12:30", "duration": "00:30", "room": "Room 1", "slug": "JZFKYV", "url": "https://submit.geopython.net/geopython2019/talk/JZFKYV/", "title": "The Pony Express and How Technology Moves Fast", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "It's been said many times - technology moves fast. As do the philosophies behind it. This isn't new - the Pony Express lasted only 18 months before being replaced by the telegraph. Why is turnover so fast? What are the benefits and pitfalls of this turnover? 
Where is the value in what we do?", "description": "This talk will take a look at certain aspects of tech to see where we find value, where things have been built on shaky ground, and where strong foundations will move the industry forward. We'll examine:\r\n- Philosophies and Fads (Agile, DevOps, BDD/TDD/Shame Driven Development)\r\n- Programming Languages: Emerging vs Established\r\n- Hardware, the Cloud, Containers, and Serverless\r\n\r\nEach step will examine what we have learned from our past, what it means now, and where it may lead moving forward.", "recording_license": "", "do_not_record": false, "persons": [{"id": 5, "code": "XNLZN3", "public_name": "PJ Hagerty", "biography": "Developer, writer, speaker, musician, and Community Advocate, PJ is the founder of DevRelate.io. He is known to travel the world speaking about programming and the way people think and interact. He is also known for wearing hats.", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 19, "guid": "31564d39-6cd3-5b1e-853e-2da64af14283", "logo": "/media/geopython2019/images/RCCNTG/sharks.jpg", "date": "2019-06-25T14:00:00+02:00", "start": "14:00", "duration": "00:30", "room": "Room 1", "slug": "RCCNTG", "url": "https://submit.geopython.net/geopython2019/talk/RCCNTG/", "title": "Spotting Sharks with the TensorFlow Object Detection API", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "Object detection is about locating and classifying the objects in an image. In this worked example, we\u2019ll use TensorFlow to build an application that can tell the difference between a sneaky shark and a sunburnt surfer. We\u2019ll demystify the jargon, and learn about R-CNN, Faster R-CNN, YOLO and SSD.", "description": "Computer vision technology is rapidly improving and developers now have access to many state-of-the-art computer vision libraries. 
One of these libraries is the [TensorFlow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection#tensorflow-object-detection-api), which makes it easy to train, test and evaluate object detection models.\r\n\r\nThis talk will start with a jargon-busting introduction to the problem of object detection, explaining the different approaches that are available and the benefits and trade-offs that they introduce.\r\n\r\nThe talk will finish with a worked example of an object detection application that looks to identify marine life in aerial footage, with potential safety and scientific applications.", "recording_license": "", "do_not_record": false, "persons": [{"id": 28, "code": "PFRWRM", "public_name": "Andrew Carter", "biography": "Andrew is a young developer of open source and commercial software. He graduated from the University of Warwick in 2014 with a first-class master's degree in Physics. At Warwick he focused his studies on scientific computer simulations and developing software to run on high-performance clusters.\r\n\r\nSince graduating, Andrew has worked on embedded software and web development. 
He has also spoken at many user groups and conferences and has had technical articles published by SitePoint magazine.", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 15, "guid": "c0c3e91f-bd74-599d-894f-0384b7e94490", "logo": "", "date": "2019-06-25T14:30:00+02:00", "start": "14:30", "duration": "00:30", "room": "Room 1", "slug": "JJ3HE7", "url": "https://submit.geopython.net/geopython2019/talk/JJ3HE7/", "title": "Building a Secure and Transparent ML Pipeline Using Open Source Technologies", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "Learn about open-source tools for creating scalable, end-to-end ML pipelines that are open, transparent and fair.", "description": "The application of AI algorithms in domains such as criminal justice, credit scoring, and hiring holds unlimited promise. At the same time, it raises legitimate concerns about algorithmic fairness. There is a growing demand for fairness, accountability, and transparency from machine learning (ML) systems. And we need to remember that training data isn\u2019t the only source of possible bias and adversarial contamination. It can also be introduced through inappropriate data handling, inappropriate model selection, or incorrect algorithm design.\r\n\r\nWhat we need is a pipeline that is open, transparent, secure and fair, and that fully integrates into the AI lifecycle. Such a pipeline requires a robust set of bias and adversarial checkers, \u201cde-biasing\u201d and \u201cdefense\u201d algorithms, and explanations. 
In this talk we are going to discuss how to build such a pipeline leveraging open source projects such as AI Fairness 360 (AIF360), the Adversarial Robustness Toolbox (ART), Fabric for Deep Learning (FfDL), the Model Asset eXchange (MAX), and Seldon Core.", "recording_license": "", "do_not_record": false, "persons": [{"id": 79, "code": "B9SU7V", "public_name": "Ilja Rasin", "biography": null, "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 25, "guid": "299eb25a-8c1e-50dc-aeb5-8e5b1a0828c4", "logo": "", "date": "2019-06-25T15:00:00+02:00", "start": "15:00", "duration": "00:30", "room": "Room 1", "slug": "SZ3JGZ", "url": "https://submit.geopython.net/geopython2019/talk/SZ3JGZ/", "title": "Bayesian modeling with spatial data using PyMC3", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "This talk will be a dive into the field of spatial statistical modeling using Bayesian models. We'll learn how to define a Bayesian model, how to sample from a posterior distribution and then evaluate our results using an ecological application.", "description": "In this talk we will be learning how to define a Bayesian model for spatial data for a simple ecological application using PyMC3. We'll also be going over some diagnostics to check our model.\r\n\r\nMarkov chain Monte Carlo (MCMC) methods are used to sample from various complex probability distributions. In this talk, we'll primarily go over two MCMC techniques - the Gibbs sampler and a random-walk Metropolis-Hastings sampler.\r\n\r\nHierarchical Bayesian models split a complicated model into three basic components. The spatial data model occupies one level of the hierarchy, while the process model resides below it. Typically, a third hierarchical level contains statistical models, also called priors, for unknown parameters that include additional physical information. 
\r\n\r\nAll the statistical jargon aside, all we're doing is building a model by assuming certain priors and then making some further assumptions to explain the spatial data we see - it could be population, the probability of a disease, or census data. MCMC sampling techniques help us approximate certain posterior distributions, and we'll use the PyMC3 library for this. PyMC3 is a highly popular library for probabilistic programming.\r\n\r\nBy the end of this talk, the audience will have:\r\n1. Learnt how to define a Bayesian model for spatial data in Python\r\n2. Learnt the basics of using two MCMC sampling techniques in PyMC3 - Gibbs and Metropolis-Hastings\r\n3. Learnt how to conduct a proper diagnosis of the model using metrics like autocorrelation plots, standard error and histogram plots\r\n\r\n*Audience level:*\r\nPython: Beginner\r\nComputational skills: Intermediate\r\n\r\n*Outline*\r\n1. Introduction (10 mins)\r\n* Bayesian models - priors, conjugate posteriors (5 mins)\r\n* MCMC sampling techniques in PyMC3 (5 mins)\r\n\r\n2. Building the model (10 mins)\r\n* Defining the model for our ecological application (5 mins)\r\n* Model hyperparameters - initial values and priors (5 mins)\r\n\r\n3. Results and Diagnostics (10 mins)\r\n* Diagnostic check of the model using the metrics mentioned above (5 mins)\r\n* Comparing the sampled probability distribution with the true distribution (5 mins)", "recording_license": "", "do_not_record": false, "persons": [{"id": 14, "code": "JJPRJM", "public_name": "Shreya Khurana", "biography": "A Statistics grad at the University of Illinois, I like to play with numbers and code. Currently I'm working on a Bayesian hierarchical model for an ecological application. But I've also worked in deep learning and text mining. 
My current favorite area of interest is NLP.", "answers": []}], "links": [], "attachments": [], "answers": []}, {"id": 44, "guid": "cc027c37-6893-566a-8e2f-6c40c71b26fc", "logo": "/media/geopython2019/images/BQYNUY/geop-2019.png", "date": "2019-06-25T16:00:00+02:00", "start": "16:00", "duration": "00:30", "room": "Room 1", "slug": "BQYNUY", "url": "https://submit.geopython.net/geopython2019/talk/BQYNUY/", "title": "Understanding and Implementing Generative Adversarial Networks (GANs): One of the BIGGEST Breakthroughs in the Deep Learning Revolution", "subtitle": "", "track": null, "type": "Talk", "language": "en", "abstract": "With computational resources becoming more powerful over time, tremendous advancements are being made in the field of Deep Learning. Generative Adversarial Networks (GANs) are one such advancement. Interested in knowing how to **"generate"** content (images, music, speech, prose, and much more) instead of **"classifying"** it into categories? Let's dive into the granularities of **Generative Adversarial Networks (GANs): One of the BIGGEST Breakthroughs in the Deep Learning Revolution**.", "description": "Advancements in the field of Deep Learning are happening at breakneck speed. Recent years have witnessed enormous research activity in Deep Learning, and Generative Adversarial Networks (GANs) are one such development. GANs are among the most intriguing deep nets ever built. They belong to a class of algorithms called generative algorithms, which help in predicting features given a certain label. This has led to the generation of artificial content (like images, music, speech, prose, and much more). Generative Adversarial Networks have a wide array of applications in the real world (including super-resolution imaging). 
This talk aims to discuss how GANs work and their applications in the real world (including geo-imagery), and to demonstrate a quick hands-on code implementation using Python.\r\n\r\n**The flow of the talk will be as follows:**\r\n