Partner Perspective

Where ethics and geospatial AI meet

A joint AI for Good panel discussion by WGIC and the ITU (in partnership with Politecnico di Milano) explored how the geospatial industry could become a leading force in the discussions on ethics and AI.

Remco Takken March 1, 2022

These discussions took place online on February 22nd, 2022, and drew in part on WGIC’s Policy and Trends report on GeoAI, in which many GeoAI experts presented their views on the need for ethical governance of geospatial data and for building norms for the ethical use of AI. Customers, policymakers and citizens at large should be able to have confidence and trust in the geospatial industry.

Implementation challenges: bias and local context

Left, in white: building outlines in an African city, as digitized by the community. Right, in yellow: the results of an algorithm.

Geospatial scientist Caroline Gevaert notes that there is a lot of general interest in AI ethics, and the geospatial realm is no different. She gives examples of how difficult it is to move from established guidelines to actual implementation. We have all heard about algorithms that are biased with respect to gender, ethnicity and age, but what about geospatial bias? “The AI algorithm might be biased against the areas that are more informal and poorer than other areas of the city.” Her example (see picture) shows how difficult it is to establish that a particular misinterpretation is biased. “We need to have a better understanding of the local context.”
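One way to move from principle to practice here is to measure detection quality separately per neighbourhood type, comparing an algorithm’s output against community-digitized footprints like those in the picture. The sketch below is a minimal, hypothetical illustration of such a bias audit (the toy unit-square footprints and the standard IoU matching rule are assumptions, not data or methods from Gevaert’s study):

```python
# Minimal bias-audit sketch: compare per-area recall of a building detector
# against community-digitized footprints. All geometries here are toy data.
from shapely.geometry import box

def recall(truth, detections, iou_thr=0.5):
    """Share of ground-truth footprints matched by a detection (IoU >= threshold)."""
    hits = 0
    for t in truth:
        if any(t.intersection(d).area / t.union(d).area >= iou_thr
               for d in detections):
            hits += 1
    return hits / len(truth) if truth else float("nan")

# Hypothetical footprints: the detector finds both "formal" buildings
# but only one of the three "informal" ones.
truth = {
    "formal":   [box(0, 0, 1, 1), box(2, 0, 3, 1)],
    "informal": [box(0, 2, 1, 3), box(2, 2, 3, 3), box(4, 2, 5, 3)],
}
detections = {
    "formal":   [box(0.05, 0, 1.05, 1), box(2, 0.05, 3, 1.05)],
    "informal": [box(0, 2.1, 1, 3.1)],
}

for area, ground_truth in truth.items():
    print(area, round(recall(ground_truth, detections[area]), 2))
# formal 1.0 / informal 0.33
```

A systematic gap in recall between area types, as in this toy output, is exactly the kind of signal that would prompt a closer look at local context before trusting the model.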

Fairness and inclusivity

Visible objects considered privacy-sensitive in Rwanda and Tanzania.

AI ethics guidelines often mention fairness and inclusivity. “Cultural diversity in particular is quite a tricky issue,” says Gevaert. “Even though we may agree at a global level on at least a certain set of guidelines and values, implementation at the local level may cause differences.” Gevaert showed an African example: a survey conducted in Tanzania and Rwanda produced wildly different answers on what respondents considered privacy-sensitive.

TomTom’s data and services

WGIC member TomTom is one of the leading location technology specialists. The company collects data to produce highly accurate maps for its car navigation software, and it also provides real-time traffic data and services, some of which are derived from aggregated mobile phone data. TomTom is a tech company, and at its heart is data. That is why “data privacy is not optional for us,” says Head of Data Stephan Galsworthy. “It’s even a strategic differentiator for TomTom.” Galsworthy dives into a use case to show how that relates to ethics: “What I typically call ‘responsible AI’: how can you deploy AI in a way that meets all those ethical and privacy requirements?”
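What privacy-aware processing of aggregated mobile phone data can look like is sketched below. This is a generic, minimal illustration, not TomTom’s actual pipeline; the suppression threshold and field names are assumptions. The idea is to publish a road segment’s average speed only when enough distinct devices contributed, so no individual journey can be singled out:

```python
# Minimal sketch of privacy-aware traffic aggregation (not TomTom's pipeline):
# publish an average speed per road segment only when at least K distinct
# devices contributed; segments with too few contributors are suppressed.
from collections import defaultdict

K_MIN_DEVICES = 5  # assumed suppression threshold

def aggregate_speeds(probe_points):
    """probe_points: iterable of (segment_id, device_id, speed_kmh)."""
    speeds = defaultdict(list)
    devices = defaultdict(set)
    for segment, device, speed in probe_points:
        speeds[segment].append(speed)
        devices[segment].add(device)
    return {
        segment: sum(values) / len(values)
        for segment, values in speeds.items()
        if len(devices[segment]) >= K_MIN_DEVICES  # drop low-count segments
    }

probes = [("A1", f"dev{i}", 95 + i) for i in range(6)] + [("B2", "dev0", 30)]
print(aggregate_speeds(probes))  # {'A1': 97.5}; 'B2' is suppressed
```

The trade-off Galsworthy mentions is visible even in this toy: a higher threshold means stronger privacy but fewer segments with published data, i.e. lower quality of service.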

Six pillars for Responsible Artificial Intelligence

According to Stephan Galsworthy there are six main pillars of responsible AI: Fairness, Inclusiveness, Accountability, Reliability & Safety, Privacy & Security, and Transparency & Explainability. “If you take the example of traffic and predicting travel times, privacy and security of the data is really high. Reliability and safety is top of mind.” What does this mean for developers who are building these kinds of services? Galsworthy: “When you’re trying to solve a problem, you should look at the challenge you’re solving and make sure that you’re assessing it in relation to these pillars of responsible AI. But make sure that the trade-offs you make are the right ones, to ensure that you reach the right quality of service for your end users.”

“Datasets that are not geo-explicit”

Amina Al Sherif is not only Innovation Lead at Anno.ai, she is also its Chief Data Ethics Officer, and she leads the Data Ethics Consortium for Security (DECS). Anno.ai provides an annotation platform for overhead imagery, such as (tactical) UAV footage, for the Joint Intelligence Center in Northern Virginia. “We’ve started to see in our own hands-on work developing models to do auto-detection that there are definitely some very significant issues around overhead imagery and a geo-bias.” She shows some commonly used satellite datasets used for machine learning training, in which large parts of South America, Africa, Asia and Australia show gaps. “Those areas of the world continue to have no machine learning models that have been trained to accommodate for those specific areas.” Al Sherif explicitly mentions the large deserts of the Sahara region. “Think of all the open source datasets that we know of that are not geo-explicit!”
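The gaps Al Sherif describes can be surfaced with a simple coverage audit: bin the footprint centroids of a training dataset into a coarse latitude/longitude grid and count the empty cells. The sketch below is a hypothetical illustration with invented coordinates, not an analysis of any of the datasets she showed:

```python
# Minimal geographic-coverage audit for a training dataset: bin image
# centroids into a coarse lat/lon grid and report how much of the world
# is represented. Coordinates below are invented.
def coverage_grid(centroids, cell_deg=30):
    """Return (occupied cells, total cells) for a cell_deg x cell_deg grid."""
    cells = {(int(lat // cell_deg), int(lon // cell_deg))
             for lat, lon in centroids}
    total = (180 // cell_deg) * (360 // cell_deg)
    return len(cells), total

# Hypothetical centroids clustered over North America and Europe.
centroids = [(40, -100), (45, -90), (50, 5), (48, 10), (52, 0)]
covered, total = coverage_grid(centroids)
print(f"{covered}/{total} grid cells contain training imagery")  # 3/72
```

Cells with no imagery at all, such as those over the Sahara in Al Sherif’s example, are regions where any model trained on the dataset is effectively untested.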

Locus Charter

The Locus Charter was initiated by an international team that focuses on ethics. Organizations that support the Locus Charter include EthicalGeo (powered by the American Geographical Society) and the UK-based Benchmark Initiative, Raising Standards for Location Integrity, both funded by the philanthropic organization Omidyar Network.

The five P’s

Chris Tucker, Chairman of the American Geographical Society, introduces the ‘five P’s’. He presents them as challenges: “Not only Privacy, but also distorted incentives in Political Processes. And it deals with People: undermining of, or bias against, vulnerable Populations. Skewing of how we understand our Planet and the ability to support our Planet. Lastly, there’s Property. There’s so much bias built into the system. Geospatial has empowered some of the wealthier and more organized, to the disadvantage of vulnerable populations. In the Geo4Good movement, if you will, luckily we’ve had a lot of empowerment through geo that has allowed vulnerable populations to try to encode their property.”

Different values at play

‘Locationization’ is probably the Word of the Day during GeoAI for Good on Ethics and AI, according to WGIC Director Barbara Ryan.

This complex landscape raised awareness of the fact that there was no international charter for the responsible use of geospatial technology. This rings true for regular GIS and remote sensing as much as for the more advanced GeoAI capabilities that seem to be rolling out every single day across the public sector, the private sector and the academic world. Tucker remarks: “There’s many different values at play. There’s many different cultural contexts that need to be appreciated. The legal frameworks are in different stages of evolution in different countries and regions around the world. And even the different points in economic development mean that there’s different kinds of technologies at play. How communities weigh the costs and benefits of new technologies will differ, depending on where they are coming from.”

‘How do we do it?’

The panel discussion summarized the main challenge for GeoAI in the word ‘how?’. Caroline Gevaert: “We’re agreeing on the principles. Everybody here wants to do a better job: the industry is interested, as are academia and policymakers. Nobody wants to be unintentionally biased and non-inclusive. The question is: how are we going to do it?” Chris Tucker feels that AI specialists aren’t necessarily well versed in geospatial. Towards the end of the panel discussion, all speakers shared their views on the perceived anonymity of data. While the panel asserted that ‘spillage, leakage of information’ occurs and that ‘anonymity is way more complex than you would think at first glance’, the comment section filled with use cases sent in by the audience. A fitting response to a discussion that remains open-ended and will definitely be continued in the next joint AI4Good session by ITU and WGIC.
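The panel’s point that anonymity is more fragile than it looks can be made concrete with a uniqueness test in the spirit of de Montjoye et al. (2013), who found that four spatio-temporal points suffice to uniquely identify about 95% of people in a mobility dataset. The sketch below, with invented traces, counts how many pseudonymous users are pinned down by just two (place, hour) points:

```python
# Minimal uniqueness sketch: how many pseudonymous location traces are
# uniquely identified by a small subset of their points? Data is invented.
from itertools import combinations

# Hypothetical traces: user -> set of visited (cell, hour) points.
traces = {
    "u1": {("home_a", 8), ("office_x", 9), ("gym_y", 18)},
    "u2": {("home_b", 8), ("office_x", 9), ("bar_z", 22)},
    "u3": {("home_a", 8), ("office_x", 9), ("bar_z", 22)},
}

def share_unique(traces, k=2):
    """Fraction of traces uniquely identified by some subset of k points."""
    unique = 0
    for points in traces.values():
        for sub in combinations(points, k):
            sub = set(sub)
            # A trace is exposed if only one user's trace contains the subset.
            if sum(sub <= other for other in traces.values()) == 1:
                unique += 1
                break
    return unique / len(traces)

print(f"{share_unique(traces, k=2):.0%} of traces re-identifiable from 2 points")
```

Even in this three-user toy, every trace is pinned down by two points, which is why simply stripping names from location data falls well short of real anonymity.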

aiforgood.itu.int/geoai-challenge