The future of spatial and life as we know it

Monday, 11 Nov, 2019   /   Zag

New Zealand and Australia are surprisingly “undiscovered” when it comes to GIS and spatial technology. When you compare our adoption of spatial software and technologies to other developed areas like Europe and North America, it doesn’t stack up.

So, what’s available in spatial technology now, or soon, that businesses and governments could be using to their advantage? And what are other countries and organisations doing that we can learn from?

The good news is, you still have time to beat your competitors to it.

Here’s the scoop.

The impact of a self-centric society

Everybody says it’s all about people and ‘humanising’ in today’s IT environment. We could take that a step further to say it’s all about individuality. The upcoming generation of buyers and users care about personalisation and convenience, and this has impacted the direction of spatial.

Current research and development tell us that the future of spatial is in:

  • Sensor & AI-driven moments – to provide information from all assets around us (big data) and make sense of it.
  • Personalised real-time analytics – based on your location (GPS) to provide constant feedback of offers, events, activities, possibilities and opportunities.
  • Augmented Humanity – to enable and enhance the human body with internal devices (inside our bodies, under the skin, on the skin, or as peripherals), wearables and personal devices to better engage with constant feedback.

Emerging spatial technologies for retail

What technology exists now? We’ll use some of your favourite sci-fi movies to help explain.

Do you remember that moment from Minority Report where Tom Cruise walks into a shopping mall and the ads change for him specifically? This technology is here and is used in things like iBeacons (or beacons from non-Apple vendors).

For example, retail chain Target uses beacons to show your live location on its mobile map app, rather than leaving you to navigate by corridor signs or a paper map.

Another retail chain, Nordstrom, uses beacons to notify you when you pass one of its outlets in a shopping mall, on the street, or even while driving, if it has an item from your mobile basket in stock.

Retail giants CVS Pharmacy and Walmart push notifications on offers to customers in real time, based on their shopping habits, prescription reminders and so on, while they walk the shop aisles.
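To make the beacon idea concrete, here is a minimal sketch of the proximity logic these apps rely on. The shop, thresholds and offer text are invented for illustration; real SDKs wrap this up for you, but the underlying idea is estimating distance from the beacon's advertised signal strength.

```python
def beacon_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Rough distance in metres from a received signal strength (dBm),
    using the standard log-distance path-loss model. tx_power is the
    beacon's calibrated strength at one metre."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

def offer_for(rssi, offer, radius_m=3.0):
    """Return the stored offer only when the shopper is close enough."""
    return offer if beacon_distance(rssi) <= radius_m else None

# A shopper about 2 m from a pharmacy-aisle beacon gets the offer;
# one on the far side of the store does not.
print(offer_for(-65, "20% off prescriptions today"))
print(offer_for(-90, "20% off prescriptions today"))
```

The thresholds here are purely illustrative; production systems smooth RSSI over time because raw readings are noisy.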

Emerging spatial technologies for transport and healthcare

How about that moment from Total Recall (the remake) when Colin Farrell uses the augmented hand-phone for the first time, making a call with it and pressing it against sensor glass to get the full video view? Well, that technology is here already too: RFID tags are being embedded under people’s skin.

In Sweden, thousands of people have had RFID tags implanted under their skin and can use them for public metro travel instead of tickets or key cards.

Almost here are augmented vision and smart drugs. Then there are more bizarre augmented-humanity experiments, like the third thumb project.

Augmented humanity accessories and smart drugs send a whole lot of information to the cloud already (including a GPS location) and are going to be the basis for future microtransactions and microservices.

Consider a full workflow supported by a combination of analytics, AI, automation, sensors and smart devices, clothing and medications, to seriously boost emergency response and reduce the time agencies need to attend a crash site or other emergency.

See the following series of moments in a traffic crash incident (sequence: left-to-right, down one then left-to-right again):

This is a possible example of a traffic accident showing how automated devices and AI scripts manage the process of efficiently organising everything around an incident.

What’s happening at this exact moment? The person in the accident is part of a connected network that includes their vehicle, devices and everything else around them. All these objects work as sensors that capture real-time information about the person and their surroundings, so sensors can translate pretty accurately what has happened to them. Even though sensors cannot tell that the person was distracted while driving (they cannot read minds), the car can tell how the person was driving and whether they, or somebody else, made a mistake.

Augmentation and smart meds inside a person’s body can tell if their heart rate changed or if their brain was stimulated by something other than expected while driving. This information is communicated to wider AI entities to organise an ambulance, police, insurance to cover the costs, and even social media to inform friends and family.
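A hypothetical sketch of that automated workflow: fuse a handful of readings from vehicle and body sensors and let a simple rules engine decide which services to notify. The field names and thresholds are invented; a real system would use learned models and far richer telemetry.

```python
def triage(reading):
    """reading: dict of fused sensor values -- g-force and airbag state
    from the car, heart rate from a wearable or smart meds."""
    actions = []
    if reading["g_force"] > 4.0 or reading["airbag_deployed"]:
        actions.append("ambulance")        # impact suggests likely injury
        actions.append("police")           # traffic management and report
    if reading["heart_rate"] > 140 or reading["heart_rate"] < 40:
        actions.append("medical_alert")    # abnormal vitals
    if actions:
        actions.append("insurer")          # open a claim automatically
        actions.append("notify_contacts")  # inform friends and family
    return actions

crash = {"g_force": 6.2, "airbag_deployed": True, "heart_rate": 150}
print(triage(crash))
```

The point is not the rules themselves but that every input is location-stamped, so the same event can dispatch the nearest ambulance, not just any ambulance.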

Emerging spatial technologies for asset management

Slope and aspect, contours and 3D cadastre are becoming important for asset management, so much so that soon you won’t be able to get the data in 2D anymore.

Spatial has matured enough for models from the CAD world to be natively accessed in GIS tools like ArcGIS Pro and transformed for use with Augmented, Virtual and Mixed Reality (AR, VR, MR).

Other dimensions are also emerging – for example, 4D is all about time series, showing change in assets over time.

Historical data can be applied to 2D entities and maps too. A good example is the picture to the left, which uses satellite imagery of various ages, viewed through a lens on top of a standard vector base map.

Note that slope and contour are actually provided through something called 2.5D: an automated build of a 3D environment using 2D data. For example, wrapping aerial imagery around 3D slope/contour, and over large buildings rendered on top of aerial imagery, to reproduce the actual look of the land and buildings. This is how Google Maps does its 3D map rendering, which is perfectly adequate for most people but not good enough for surveying and construction.
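The slope and aspect mentioned above are classic 2.5D products: both fall straight out of a 2D elevation grid. A minimal sketch with NumPy, assuming a regular grid whose first row is the northern edge:

```python
import numpy as np

def slope_aspect(dem, cell_size=1.0):
    """Derive slope (degrees) and aspect (compass degrees, 0 = north,
    clockwise) from a 2D elevation grid. Row 0 is assumed to be the
    northern edge, so the row-axis gradient points south."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)   # per-axis elevation change
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Aspect = compass bearing of steepest descent.
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360
    return slope, aspect

# A tilted plane rising 1 m per cell toward the east: slope is 45
# degrees everywhere, and the surface faces west (aspect 270).
dem = np.tile(np.arange(5, dtype=float), (5, 1))
slope, aspect = slope_aspect(dem)
```

Real terrain tools (Esri’s Slope/Aspect, GDAL’s `gdaldem`) use the same idea with more careful edge and projection handling.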

spatial-1

One of the neat new ways of using 3D is indoor building viewing, which allows assets to be inspected by slicing through building walls and floors.

For example, the view on the left is from Esri’s Web AppBuilder, using web browser interfaces to allow users to slice through airport facilities.

Going beyond these ways of viewing and manipulating maps, the next wave is already happening: the automated extraction of vector data (what’s changed, for example) from satellite/aerial imagery (2D) and LiDAR (3D).

Alternative ways of viewing reality are everywhere. The easiest way to do it is via your mobile, but more and more tools are available, and human augmentation will include some of this too; think of images projected directly onto your retina and/or audio delivered directly to your ear canal.

Spatial realms and realities

To understand the different ‘realities’, consider the following simple formula:

Mixed Reality (MR) = Virtual Reality (VR) + Augmented Reality (AR)

All three “realities” are often used in sequence: engineers model the real world and assets using Virtual Reality, field testers check changes and/or new entities using Mixed Reality, and eventually all of this is enabled for customers using Augmented Reality.

1. Virtual Reality

VR today is mostly about CAD designers/engineers using virtual modelling devices like HoloLens, with CAD tools specifically designed for non-projected CAD drawings (so you can place the 3D drawing anywhere). These can be blown up to real scale, though not usually positioned where they will actually be built.

2. Mixed Reality

MR takes the VR model and attaches actual ground coordinates to it, allowing field testing on the ground.

This is so we can see that the design works as planned on top of real-world assets and entities, giving us a view of how it would look once built and what users of augmented reality would see.
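“Attaching actual ground coordinates” boils down to a transform from the model’s local coordinates into a projected map system. A hypothetical sketch, with surveyed origin and heading values invented for illustration:

```python
import math

def georeference(vertices, origin_e, origin_n, heading_deg):
    """Rotate local (x, y) model vertices (metres) by the site heading,
    then translate them to the surveyed (easting, northing) origin."""
    h = math.radians(heading_deg)
    placed = []
    for x, y in vertices:
        e = origin_e + x * math.cos(h) - y * math.sin(h)
        n = origin_n + x * math.sin(h) + y * math.cos(h)
        placed.append((e, n))
    return placed

# A 10 m x 10 m slab from the VR model, dropped onto an (invented)
# surveyed origin with no rotation:
slab = [(0, 0), (10, 0), (10, 10), (0, 10)]
placed = georeference(slab, 1_750_000.0, 5_920_000.0, 0.0)
```

Real MR pipelines also handle elevation and projection distortion, but the local-to-ground transform is the core of the step.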

3. Augmented Reality

AR views the world through a lens, like a smartphone or a tablet, and shows additional information on top of real-world assets.

It can also show simple virtual symbols, icons or polygons. For example, it enables field workers to capture hazards like asbestos in ceilings, or lets the training industry highlight 3D machinery with sequential written instructions on how to fix or operate it.

Connected everything and everything as a service

We are starting to realise the value of connected networks (Connected Everything and Everything as a Service) and how much more efficiently we can manage our cities when we use technologies like sensors, Internet of Things, Digital Twins and GeoAI.

Sensors and Internet of Things (IoT) devices in urban and rural environments, both indoor and outdoor, are location-aware. Take Singapore, the world’s first real Smart City, with sensors on trees, buildings, city assets, roads, traffic signs and more, measuring everything: humidity, temperature, CO2 and SO2 levels, noise, video capture across the whole city, image recognition in metros and on motorways, and so on.

Digital Twins are created for entities like buildings, city assets and even workflows like traffic, and most of them are spatial.

A twin is usually of a single asset but can cover a whole city; SAP has built something called SmartCitiesWorld, a platform for Smart Cities (a kind of Digital Twin of a city) based on HANA, Spatial Services and IoT services. This has been implemented in several cities, including Nanjing in China and San Diego in the US. All the spatial components in it are enabled using Esri and HANA.

Digital Twins together with advanced geospatially enabled Artificial Intelligence (GeoAI) can be a great help for queries, reporting and monitoring IoT devices (vehicles, drones, asset sensors etc.) to track:

  • average commuting time
  • average speed in peak hours
  • public transit’s share of all travel
  • on-time performance percentage
  • public transit stop coverage (500 m) percentage
  • total length of public transit lines across the city
  • line length percentage
  • bus lane ratio percentage
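Two of the metrics above can be sketched directly from raw feeds. The record layouts below are invented for illustration, and the coordinates are planar metres to keep the distance check simple:

```python
# On-time performance: share of arrivals within a tolerance of schedule.
arrivals = [  # (scheduled_minute, actual_minute) for one bus line
    (0, 1), (10, 10), (20, 24), (30, 31), (40, 40),
]
on_time = sum(1 for sched, actual in arrivals if abs(actual - sched) <= 2)
on_time_pct = 100 * on_time / len(arrivals)
print(f"on-time performance: {on_time_pct:.0f}%")

def covered(homes, stops, radius_m=500):
    """Stop coverage: share of homes within radius_m of any transit
    stop, using planar (x, y) coordinates in metres."""
    def near(home):
        return any((home[0] - s[0]) ** 2 + (home[1] - s[1]) ** 2
                   <= radius_m ** 2 for s in stops)
    return 100 * sum(near(h) for h in homes) / len(homes)

homes = [(0, 0), (300, 400), (2000, 0)]
stops = [(0, 0)]
print(f"stop coverage (500m): {covered(homes, stops):.0f}%")
```

A real pipeline would use geographic coordinates and a spatial index, but the metric definitions are exactly this simple.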

These monitored attributes are used on various dashboards to provide a constant real-time feed of information, alerts and notifications:

Colour coding is used to show where improvements are needed, with historical figures alongside to show whether trends are positive or negative. Anything can be drilled into for more detail on how the city is doing.

It’s key to understand that GeoAI is much more than just image processing or bringing some sensor data into a map. It’s algorithms predicting where crime or accidents will happen, pulling Big Data from other sources like social media into the mix, and even building learning algorithms that use connected data to figure out what works and what doesn’t.
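To give the prediction idea some shape, here is GeoAI in its simplest possible form, a hypothetical sketch: bin past incident locations into a grid and flag the densest cells as likely hotspots. Real systems layer learned models, time windows and covariates on top of this, but the spatial binning is the common starting point.

```python
from collections import Counter

def hotspots(incidents, cell_size=100, top_n=2):
    """incidents: (x, y) coordinates of past events in metres.
    Returns the top_n busiest grid cells as (col, row) indices."""
    cells = Counter((int(x // cell_size), int(y // cell_size))
                    for x, y in incidents)
    return [cell for cell, _ in cells.most_common(top_n)]

# Six past incidents: three cluster near the origin, two in another
# cell, one is isolated. The two densest 100 m cells come back first.
past = [(10, 10), (20, 30), (40, 90), (250, 250), (260, 240), (900, 900)]
print(hotspots(past))
```

Swapping the counter for a kernel density estimate or a trained classifier is where the “AI” in GeoAI comes in; the grid-and-count baseline is what those models are measured against.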

Click here to learn more and contact us to book a free one-hour workshop.
