Artificial intelligence is changing the world as we know it. Check out nine technology advancements in GeoAI that will continue to push the field forward. This blog is part of the GeoAI Series, taken from the WGIC GeoAI Report.
To develop the report “Geospatial AI/ML Applications and Policies: A Global Perspective,” WGIC conducted one-on-one interviews with more than thirty AI/ML experts in the geospatial industry, including WGIC Members.
Here are nine advancements in AI/ML, geospatial, and high-performance computing technologies that will continue to push the GeoAI field forward. According to the WGIC AI Study participants, these advancements will lead to breakthrough applications in 2030 and beyond:
1. Ensuring Data Veracity: Data veracity, the degree to which data is accurate and trustworthy, is critical for training AI algorithms to be effective and unbiased. A modified or tampered copy of data can lead to adverse outcomes. Advances in making data more trusted and secure when it is shared with other applications will result in fewer data silos (a minimal checksum sketch appears after this list).
2. Increasing Sample Efficiency: Current AI models require increasingly large amounts of training data to improve their results. Humans, by contrast, learn from far smaller amounts of data and are much more sample efficient. The computational cost of training such algorithms is very high and is becoming unaffordable for small organizations. Research breakthroughs that increase the sample efficiency of algorithms would therefore be of great value to smaller organizations and would further democratize AI (a transfer-learning sketch, one common route to sample efficiency today, appears after this list).
3. Custom GPUs or Application-specific integrated circuit (ASIC) chips: GPUs have long been the chip of choice for AI/ML workloads. Advancements in general-purpose GPUs (GPGPUs), AI-specific GPUs, and ASICs are expected to accelerate the adoption of resource-intensive deep learning techniques. Processors and chips optimized for efficient geospatial data analysis are expected to reach the market within 2-5 years.
4. Digital twins: Digital twin technology will allow practitioners to build digital representations of physical objects such as vehicles, equipment, buildings, factories, and even entire cities or regions. By recreating our world in a computer system, we can apply geospatial AI/ML techniques to predict outcomes under varying conditions and uncover efficiencies and other benefits.
5. Hyperspectral Satellite Imagery: The deployment of low-cost satellite constellations capable of capturing hyperspectral imagery at high ground resolutions will lead to a revolution in high-frequency Earth observation and predictive AI/ML applications across industries.
6. Edge Computing: Edge computing promises to make near-real-time geospatial AI/ML applications possible by moving ML workloads to field equipment and servers closer to the satellites and sensors that generate the geospatial and remote sensing data (an export-to-the-edge sketch appears after this list).
7. LiDAR-capable consumer devices: Wider adoption of Light Detection and Ranging (LiDAR) sensors in consumer devices such as smartphones and tablets is expected to increase the availability of massive, user-generated, real-time geospatial data and lead to unique geospatial AI/ML applications. Apple already includes LiDAR sensors in its latest iPhones and iPads, and others are expected to follow soon.
8. Knowledge Guided Machine Learning: Current AI/ML approaches build an understanding of observed phenomena from scratch, relying purely on patterns in data. This new paradigm explores how to exploit the wealth of accumulated scientific knowledge, such as known physical relationships, to improve a machine learning method’s performance (a minimal knowledge-guided loss sketch appears after this list).
9. Quantum Computing: Quantum computers are, in theory, able to solve certain classes of problems exponentially faster than classical computers. Quantum computing will facilitate geospatial AI/ML applications that depend on massive, multi-dimensional data sets and complex models.
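To make a few of these trends more concrete, the short Python sketches below illustrate them; every file name, parameter, and model in them is an illustrative assumption, not a prescription from the WGIC report. For data veracity (item 1), one simple building block is publishing a cryptographic checksum alongside a shared dataset so that consumers can confirm their copy has not been modified. The sketch uses Python’s standard hashlib; the file name and expected digest are placeholders.

```python
# Sketch: verifying that a shared dataset has not been tampered with by
# comparing its SHA-256 digest against a value published by the provider.
import hashlib

EXPECTED_SHA256 = "..."  # placeholder: digest published alongside the dataset


def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


if file_sha256("landcover_tiles.zip") != EXPECTED_SHA256:  # placeholder file name
    raise ValueError("Dataset does not match the published checksum; do not use it.")
```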
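For sample efficiency (item 2), a common practical route today is transfer learning: a backbone pretrained on a large generic dataset is frozen and only a small task-specific head is trained, so far fewer labeled samples are needed. This sketch assumes PyTorch and torchvision; the class count and dummy batch are placeholders for a real labeled set.

```python
# Sketch: transfer learning to reduce the amount of labeled data needed.
# A pretrained ImageNet backbone is frozen; only the small head is trained.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # placeholder: e.g. land-cover classes in a small labeled set

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False  # freeze the pretrained features

# Replace the final layer with a task-specific head; only this part is trained.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (replace with a real DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```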
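For edge computing (item 6), a typical workflow is to train a model centrally and export it to a portable format such as ONNX so it can run in an edge runtime on hardware near the sensor. The model and tile size below are illustrative placeholders.

```python
# Sketch: exporting a trained model to ONNX for deployment on edge hardware.
import torch
import torch.nn as nn

# Stand-in for a trained geospatial model, e.g. a small segmentation network.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=1),  # two output classes
)
model.eval()

# Example input matching the sensor's tile size (placeholder: 256x256 RGB).
dummy_input = torch.randn(1, 3, 256, 256)

# The exported file can then be served by an edge runtime (e.g. ONNX Runtime)
# on the device closest to the satellite downlink or field sensor.
torch.onnx.export(model, dummy_input, "edge_model.onnx",
                  input_names=["image"], output_names=["mask"])
```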
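For knowledge-guided machine learning (item 8), one widely used pattern is to add a penalty to the training loss whenever predictions violate a known scientific relationship, so accumulated domain knowledge steers the data-driven model. The monotonicity constraint below is a hypothetical stand-in for a real physical law.

```python
# Sketch: knowledge-guided loss = data-fit loss + penalty for violating known science.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

x = torch.linspace(0, 1, 64).unsqueeze(1)      # inputs (e.g. a driver variable)
y = 2.0 * x + 0.1 * torch.randn_like(x)        # noisy observations

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
mse = nn.MSELoss()

for _ in range(200):
    pred = model(x)
    data_loss = mse(pred, y)
    # Knowledge term: penalize any decrease of the prediction as x increases,
    # standing in for a known monotonic physical relationship.
    knowledge_loss = torch.relu(pred[:-1] - pred[1:]).mean()
    loss = data_loss + 10.0 * knowledge_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```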