GISCafe Weekly Review February 9th, 2024

GISCafe Predictions for 2024 – EuroGeographics
February 8, 2024  by Sanjay Gangal

By Sallie Payne Snell, Secretary General & Executive Director, EuroGeographics

Sallie Payne Snell

Providing certainty in an uncertain world

Misinformation and disinformation are the biggest short-term risks facing society according to the World Economic Forum’s Global Risks Report 2024. In a data-driven world, maintaining public trust in official information will be more important than ever.

In uncertain times, authoritative map, cadastral and land registration information provides certainty to those responsible for making critical decisions about people and places. Trusted, transparent and interoperable public sector data based on fundamental rights and common values are key building blocks for a wide range of policies, including the EU’s Digital Decade and Green Deal, and the United Nations’ Decade of Action.

In 2024, NMCAs (national mapping, cadastral and land registration authorities) will play a key role in helping to realise the aspirations of people everywhere for a better future by providing the trustworthy data for a sustainable, safer and fairer society.

Read the full article
GISCafe Predictions 2024 – 1Spatial
February 7, 2024  by Sanjay Gangal

By Seb Lessware, CTO (Chief Technology Officer), 1Spatial

Seb Lessware

What lies ahead for 2024? It’s all about Geospatial AI: Navigating the Future of Automation, Drones, and Data Aggregation
I predict that all the other predictions will focus on AI (Artificial Intelligence), and it is hard not to with so much new hype last year. In fact, in my previous years’ predictions, I highlighted that some use-cases for AI in the industry would grow while others fell short, depending on the available data. What I didn’t predict was the explosion of interest in Large Language Models made accessible by OpenAI’s ChatGPT, and this will certainly help boost many tasks involving humans interfacing with machines – but it is still a ‘language model’ and not a spatial model. This means it can help empower users with tasks such as documentation, code and script writing, or with interacting with complex systems for analytics or schema matching, which are generic tasks and not unique to the geospatial industry.
Meanwhile, truly geospatial uses of AI will be in two principal areas:

  • Digitising unstructured data such as imagery, point clouds or PDFs into structured spatial content: This has been happening steadily for a long time, though it has never quite achieved the levels of accuracy and automation that were hoped for. It is used more for anomaly detection (e.g. does the video show a crack in this pipe? Do these trees overhang the railway?), but continued improvements may make largely automated data capture and (more importantly) data update and maintenance more achievable (a minimal sketch of this kind of pipeline follows this list).
  • Using structured spatial data for analytics and inference: This is an area of opportunity to automate more tasks that are currently manual. It requires good-quality structured spatial data as input, as well as many examples of ‘the right thing’ to train the models. We expect to do more of these types of projects this year, and maybe one day a tech giant will create a global ‘Large Spatial Model’, the equivalent of a Large Language Model, to represent the global natural and built environment – which would make these projects even easier.
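To make the first of these areas concrete, the sketch below shows one way a pipeline might turn a model-produced raster mask (for example, buildings detected in aerial imagery) into structured vector features. It is a minimal illustration, assuming rasterio and shapely are available; the mask, grid and class values are placeholders rather than output from any real model.

```python
# Minimal sketch, assuming rasterio and shapely are installed: vectorise a
# model-produced raster mask into structured spatial features. The mask,
# transform and class values are illustrative placeholders.
import numpy as np
import rasterio.features
from rasterio.transform import from_origin
from shapely.geometry import shape

# Hypothetical single-band mask from an AI model: 1 = feature detected.
detections = np.zeros((100, 100), dtype=np.uint8)
detections[20:40, 30:60] = 1

# Affine transform mapping pixel space to map coordinates (assumed 1 m grid).
transform = from_origin(500_000, 180_000, 1.0, 1.0)

# Vectorise contiguous detected regions into GeoJSON-like polygons.
features = [
    {"type": "Feature", "geometry": geom, "properties": {"class": int(value)}}
    for geom, value in rasterio.features.shapes(
        detections, mask=(detections == 1), transform=transform
    )
]

for feature in features:
    polygon = shape(feature["geometry"])
    print(f"Captured polygon of {polygon.area:.0f} m^2, class {feature['properties']['class']}")
```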

One method of capturing this unstructured or semi-structured data is using drones, which for several years have been the highlight of geospatial hardware shows. They are widely used for human inspection via cameras, or for point cloud capture on projects, but mostly just for human visualisation. If the AI techniques described above improve the automated management of structured spatial data, this would drive the use of drones not only for human interpretation but also for structured data capture, so one improvement would unlock the other.

In the meantime, there is still a disconnect between the data produced by the design and build phases of construction – held in CAD formats, drawings or point clouds – and the data needed by large-scale data management systems of record. The handover and adoption of this information has been a big driver for projects we have been involved in over the last few years. We are seeing a tipping point where automatic validation and integration of this data is now the norm, so more organisations will adopt this approach. Some projects, such as the National Underground Asset Register, are no longer asking ‘how do we ingest, integrate, maintain and share this data?’ but ‘what are the future use-cases for this hugely valuable and up-to-date structured data asset?’.

The growth of automation in data capture and data ingest projects also drives the need to measure and protect data quality, so that automation does not introduce quality problems that would otherwise have been spotted by the people capturing the data. Automating data quality alongside data capture means the data is then suitable for powerful use cases such as underpinning digital twins and smart cities. These large-scale data aggregation projects mean there will be a better data framework from which these smart uses can flourish, and we hope to see more of that in the coming year.
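As an illustration of what such an automated quality gate might involve, the hedged sketch below applies rule-based checks (valid geometry, mandatory attributes) to incoming features before they are accepted into a system of record. The features, attribute names and rules are made up for the example; a production rules engine of the kind described here would be far richer.

```python
# Minimal sketch, assuming shapely is installed: rule-based quality checks
# applied to incoming features before loading them into a system of record.
# The features, attribute names and rules are illustrative placeholders.
from shapely.geometry import shape
from shapely.validation import explain_validity

REQUIRED_ATTRIBUTES = {"asset_id", "owner", "install_date"}  # hypothetical schema

incoming_features = [
    {
        "geometry": {"type": "Polygon",
                     "coordinates": [[(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]]},
        "properties": {"asset_id": "A-001", "owner": "Utility Co"},  # install_date missing
    },
]

def quality_report(feature):
    """Return a list of rule violations for one feature (empty list = passes)."""
    issues = []
    geom = shape(feature["geometry"])
    if not geom.is_valid:
        issues.append(f"invalid geometry: {explain_validity(geom)}")
    missing = REQUIRED_ATTRIBUTES - feature["properties"].keys()
    if missing:
        issues.append(f"missing attributes: {sorted(missing)}")
    return issues

for feature in incoming_features:
    issues = quality_report(feature)
    print("ACCEPT" if not issues else "REJECT", issues)
```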

Data aggregation hubs might only be a stepping stone towards a federated data mesh approach. Aggregating data that is mastered in many different systems by physically mirroring it in an up-to-date data hub is a good way to get consistent data in a consistent structure, and it provides a single system in which to manage resilience, performance, security and role-based access. But there will always be a lag between what is stored in the hub and the latest version of the data, which might be updated on an hourly basis. A federated model, in which the data is pulled live from each data-mastering organisation’s system, would provide an even more up-to-date view of the data.

In the shorter term, this is usually achieved using metadata catalogues that can be searched to find and link to relevant data, which can then be streamed or downloaded. This catalogue approach allows the data to remain in the mastering systems, but the datasets are not usually made available in a consistent structure or format, so it is harder to aggregate them for use.
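As one illustration of the catalogue pattern, the hedged sketch below searches a STAC API (the SpatioTemporal Asset Catalog spec is one such searchable metadata catalogue standard) and lists the links it finds, while the data itself stays with the provider. The endpoint and collection name are assumptions for the example, not taken from the article.

```python
# Minimal sketch of the metadata-catalogue pattern, using the STAC API spec
# as one example of a searchable catalogue. The endpoint and collection id
# are assumptions for illustration only.
import requests

STAC_SEARCH_URL = "https://earth-search.aws.element84.com/v1/search"  # assumed public endpoint

params = {
    "collections": "sentinel-2-l2a",        # assumed collection id
    "bbox": "-0.5,51.3,0.3,51.7",           # rough London bounding box
    "datetime": "2024-01-01/2024-01-31",
    "limit": 3,
}

response = requests.get(STAC_SEARCH_URL, params=params, timeout=30)
response.raise_for_status()

# Each returned item carries metadata plus links to the actual assets,
# which remain hosted by the data provider.
for item in response.json().get("features", []):
    print(item["id"])
    for name, asset in item.get("assets", {}).items():
        print(f"  {name}: {asset['href']}")
```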

Data federation is harder, especially when an agreed structure is needed for virtual aggregation, because it requires agreement on the structure and encoding of the data as well as a high level of technical maturity at each data custodian to provide live services that are scalable and secure. While there are good standards for data sharing from organisations such as the OGC (Open Geospatial Consortium), and good examples of live data feeds being used in production, it will be interesting to see whether more widespread secure data federation progresses this year – possibly not yet.
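For the federated pattern itself, a hedged sketch using the OGC API – Features standard might look like the following: each custodian exposes a live items endpoint, and a thin client pulls and merges features on demand into one agreed structure. The custodian URLs and collection names are hypothetical, and error handling, paging and authentication are omitted.

```python
# Minimal sketch, using the OGC API - Features standard: pull live features
# from several data custodians and merge them into a single, consistently
# structured collection. Endpoints and collection ids are hypothetical.
import requests

# Hypothetical custodian services, each exposing OGC API - Features items.
CUSTODIANS = {
    "water_utility": "https://water.example.org/ogcapi/collections/pipes/items",
    "telecoms": "https://telco.example.org/ogcapi/collections/ducts/items",
}

def fetch_features(url, bbox, limit=100):
    """Request GeoJSON features for a bounding box from one custodian's live service."""
    params = {"bbox": ",".join(map(str, bbox)), "limit": limit}
    headers = {"Accept": "application/geo+json"}
    response = requests.get(url, params=params, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json().get("features", [])

def federated_query(bbox):
    """Virtual aggregation: combine live results into one agreed structure."""
    merged = {"type": "FeatureCollection", "features": []}
    for custodian, url in CUSTODIANS.items():
        for feature in fetch_features(url, bbox):
            # Tag each feature with its source so provenance survives aggregation.
            feature.setdefault("properties", {})["custodian"] = custodian
            merged["features"].append(feature)
    return merged

if __name__ == "__main__":
    print(len(federated_query((-0.2, 51.4, -0.1, 51.5))["features"]), "features aggregated")
```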

All these capabilities are underpinned by web connectivity and are therefore also at risk of hacking and disruption. The AI techniques described above, which can automate positive outcomes, can also be used to speed up and empower cyber criminals, terrorists and ‘state actors’ pursuing negative outcomes, so the ongoing security arms race will continue at full speed with continual upgrades, testing and best practices. Whether there will be any seismic changes in the security area we don’t know, but it is an ongoing discipline that must be kept up with to sustain and improve confidence in these systems and to ensure they can continue to be connected in a trusted and secure way.

In summary, many of these developments enable more automation, and automation drives efficiency and opens up new opportunities, so we should see various outcomes becoming real this year: automated AI data capture experiments will start to show whether they are viable; new data aggregation projects will start to automate ingestion by enforcing rigorous data checks; and existing aggregation projects will start to benefit from leveraging their data in new and innovative ways.

About the Author:

With a degree in Cybernetics and Computer Science, Seb joined Laser-Scan (which became 1Spatial) in 1997 as a Software Engineer. After working on many projects and a broad range of software as a Senior and then Principal Software Engineer, he moved into Consultancy and then Product Management, which provided insight into customer and industry needs and trends. After leading Product Management for a number of years, Seb is now Chief Technology Officer (CTO) at 1Spatial.

GISCafe Industry Predictions for 2024 – Timmons
February 7, 2024  by Sanjay Gangal

By Lowell Ballard, Director of Geospatial Solutions, Timmons Group

Lowell Ballard

The year 2024 promises to be a pivotal one for technology, marked by various advancements, several of which we are expecting and can monitor, as well as plenty that haven’t hit our radar yet.

Geospatial technology and GIS are the basis for applications, software, and digital processes that play a central role in connecting nearly 8 billion people with their surroundings and the global Internet every day. It’s only natural that we’d track how GIS can affect and support technological changes for the masses.

As we move into the future, three key elements with roots in AI (Artificial Intelligence) and ML (Machine Learning) stand out as driving forces that are reshaping geospatial technology: Machine Learning as a Service (MLaaS), Digital Twins, and Digital Delivery. Changes in these technologies are set to redefine the way we perceive and interact with geospatial data, offering unprecedented insights across diverse industries.

Read the full article



Copyright © 2024, Internet Business Systems, Inc. — 670 Aberdeen Way Milpitas, CA 95035 — +1 (408) 882-6554 — All rights reserved.