Over the last few years, altered-reality technology has exploded, and now, with the introduction of Apple's ARKit and Google's ARCore, AR, MR, and VR are well within the reach of the non-developer. I can confirm this, as I've created a few AR and MR apps myself over the last few months with no prior experience of the technology and almost no scripting knowledge.
In the last few days, Esri, the world's largest GIS supplier, has been demonstrating its VR export capability in CityEngine and the AUGeo app, which converts your ArcGIS Online data into AR on your mobile. There is even some AR and VR capability in its Runtime SDK now.
There have even been a few QGIS integrations that export AR outputs … the GIS industry is slowly coming around to the fact that this technology is the future. We must remember how people scoffed at 3D geospatial capability, and now it is supported in almost every GIS on the market.
Experience over the last few months has made me question whether we are being too eager in our quest to be first, and it is worth pointing out why we are still a fair way off the fully immersed lives we are being promised.
To develop an AR, MR, or VR app you will need a game engine. The most popular for this kind of development is Unity, though you can also do it with Unreal Engine or Blender, and directly for Windows UWP, iOS, Android, WebGL, and other platforms.
So, here is the thing. To get information into these systems, do you use a geospatial data format? No: the most efficient formats are the gaming 3D model formats, like FBX and COLLADA, which require the removal of all geographic information. That isn't to say you can't use geographic data in these systems, but rather that they don't use geographic data natively; therefore, creating true geospatial representations isn't quite as easy as you would think it should be.
Let me be a little more focused in this discussion and just talk about the most popular option, Unity. With Unity, you may specify the units to be metres, and you have a plethora of options for adjusting scale and x, y, z orientation, though predominantly everything is placed relative to a single origin coordinate, the same way CAD works.
This makes for a more complex workflow for the geospatial user: coordinates and elevation need to be used with caution, as do the coordinate system and datum when working with data extending over more than a kilometre.
Of course, this can all be worked around as long as you are aware of the shortcomings. If you are using AR or MR, are you likely to be creating an environment over a few hundred metres? Probably not, though it is these things you’re not told when you see these super shiny interactive demonstrations. I long to see an amazing city demonstration just so I can ask the creator what datum they used or how accurate the model is.
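The usual workaround deserves spelling out. Since Unity has no notion of a coordinate reference system, a common trick is to pick a local origin near your site in projected coordinates and re-base every vertex against it; Unity is also Y-up and left-handed, so northing maps to Z and elevation to Y. A minimal sketch of that re-basing, with made-up example coordinates (the function name and values are illustrative, not any vendor's API):

```python
# Sketch: re-basing projected GIS coordinates (easting, northing, elevation,
# all in metres) into a Unity-style local frame by subtracting a chosen
# site origin. Keeping coordinates small also preserves float precision,
# which matters because game engines use 32-bit floats for positions.

def to_unity_local(easting, northing, elevation, origin):
    """Convert projected metres to a local (x, y, z) tuple for Unity.

    origin: (easting0, northing0, elevation0), chosen near the project site.
    """
    e0, n0, h0 = origin
    x = easting - e0       # east  -> Unity +X
    y = elevation - h0     # up    -> Unity +Y
    z = northing - n0      # north -> Unity +Z (left-handed frame)
    return (x, y, z)

# Example: a feature at E 512,340 m, N 6,250,120 m, 85 m elevation,
# with the site origin at E 512,000, N 6,250,000, 80 m.
print(to_unity_local(512_340, 6_250_120, 85, (512_000, 6_250_000, 80)))
# (340, 5, 120)
```

The origin itself (and the CRS it lives in) then has to be recorded somewhere outside the engine, because once the data is inside Unity that information is gone.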
GIS and Unity
So enough of the issues with using geospatial data; there are other problems that are more pressing that the casual GIS user needs to know.
The GUI is NOT intuitive. If you were one of those people who winced at the number of toolbars in ArcGIS or held your head in your hands looking at QGIS, Unity is not for you. Don’t get me wrong, it makes sense, perfect sense, but not so readily to a GIS user!
You see, there are windows for what the camera is seeing, what assets you have, what the settings for those assets are, what is in the game environment … and if I lost you at “what the camera is seeing” you aren’t alone! The first time I opened Unity I spent 10 minutes just figuring out how to add a simple dataset (or model, as it is called in Unity).
Here’s a rundown of issues for a GIS person:
- Geospatial models have to become plain models to work.
- Coordinate systems and datums aren't supported.
- Models are placed at 0,0,0 on import.
- Non-intuitive GUI (graphical user interface).
- New terminology to learn.
It’s not all doom and gloom
Okay, so I’ve made my point: GIS and the various realities are not that compatible, and that is a concern. I’m sure it won’t be long before someone comes along and creates a plugin that turns Unity into an “ArcAR” or “QGISAR,” because the technology is there; it just isn’t “GIS friendly.”
So, rather than wrap up there, I’m going to discuss what good there is about using GIS for AR, MR, & VR in the game software.
First of all, about that interface: yes, it is quite random and, to a GIS person, doesn’t make any sense at first, but it is typical of a high-end visualization tool. If you have ever used Blender or CityEngine, the terminology, interfaces, and controls are similar. In fact, if you use Esri’s CityEngine (if you haven’t heard of it, it is procedural modelling software), you can pretty much create environments and export them straight into Unity, all using Esri – yes, Esri – software.
The terminology has come from a gaming environment, and many of us have already had mild exposure to this when using good old SketchUp (formerly Google SketchUp, now Trimble SketchUp). Creating components, importing models, adding assets: it is all vaguely similar in Unity. What’s new is the sheer degree of control you have over the information. You can’t just pull a model in and turn it into the next “Minecraft.”
You have to add lighting; add materials to tell the software whether the model is made of brick, denim, or sandpaper (different reflection properties); and not forget textures: is it marbled, stonewashed, or just plain eggshell white?
These are decisions we never have to make in GIS … at the moment. And I say “at the moment” due to the advancements in 3D GIS. How long will it be until these things cross over?
Although Unity doesn’t support coordinate systems, it can be made to use a mobile phone’s GPS. To explain: Unity can “build” the game for many different platforms, like Xbox, PlayStation, Windows, iOS, Steam, or Android. It is so flexible that you can build your game or environment and then simply hit the “build” button for each of the different targets.
Okay, you have to make a few tweaks in the build settings, but not to the game itself, which makes it very powerful when you have spent so much time building something. So when you build for an Android phone or an Apple phone (or any device with a GPS), you can access its gyroscope and GPS, thus allowing you to place things in the real world to interact with.
Let’s go back a few months and look at that game that had people walking around staring at their phones, trying to catch imaginary creatures: Pokémon GO! It works on the principle mentioned above, whereby the “Pokémon” are just targets entered into Unity as coordinates; as you walk towards a location, the phone compares its GPS position with the target’s, and the creature starts to appear on screen.
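That proximity check is simple to sketch. Here is a minimal illustration of the pattern (my own sketch, not Niantic's or Unity's code): a virtual object is pinned to a fixed latitude/longitude, and the phone's GPS fix is tested against it using the haversine great-circle distance. The target location and trigger radius are made-up values.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def target_visible(phone, target, trigger_radius_m=50):
    """Show the AR object once the phone is within the trigger radius."""
    return haversine_m(*phone, *target) <= trigger_radius_m

target = (51.5007, -0.1246)  # a hypothetical target location

print(target_visible((51.5009, -0.1244), target))  # a few tens of metres away -> True
print(target_visible((51.5100, -0.1240), target))  # roughly a kilometre north -> False
```

Note the check is purely metric: the coordinates only exist to be turned into a distance, which is exactly the "no native geospatial support" point made earlier.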
Now, this is where the purists cry out “GPS isn’t accurate on mobiles,” and they would be justified in their argument, but with mobiles now using assisted GPS and better chips, accuracy is down to well below 5 m, which is good enough for some high-level interaction.
As AR and MR have developed, so has GPS; there are already new GPS chipsets ready to go into the next generation of mobile phones that have less than 1m accuracy. This means in the next year we will be seeing a lot more interaction using this technology, and it will be a lot more convincing.
I imagine a lot of this will be for archaeological purposes and utilities management, whereby information can be overlaid onto the real world so that important decisions and work can be made on site without error, where previously this would have required reference to different sources and weeks of planning.
The way I have currently been using AR and MR is with “targets”—images or objects that tell the camera/phone/computer where to overlay the model/environment I have made. A good example of this is a mixed-reality application I made for wind farm planning: you simply drop a map on the table, and then you have paper markers for the wind turbines, construction pads, control buildings, and so on.
When you put these pieces of paper on the map, the 3D objects sit on the targets, so you can move them around to get everything in the right place. The wind turbines even have markers around their bases to show the topple-over areas and the 250 m buffers, to ensure there is no encroachment on the nearby buildings.
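Behind the visual markers, that 250 m check reduces to simple distance tests in a local metric frame (coordinates already in metres, as they are once re-based for Unity). A minimal sketch of the encroachment test, with made-up turbine and building positions:

```python
import math

BUFFER_M = 250  # the 250 m no-encroachment buffer from the wind-farm example

def encroachments(turbines, buildings, buffer_m=BUFFER_M):
    """Return (turbine, building) name pairs closer than the buffer distance.

    turbines, buildings: dicts mapping a name to an (x, y) position in metres.
    """
    hits = []
    for t_name, (tx, ty) in turbines.items():
        for b_name, (bx, by) in buildings.items():
            if math.hypot(tx - bx, ty - by) < buffer_m:
                hits.append((t_name, b_name))
    return hits

turbines = {"T1": (0, 0), "T2": (600, 0)}
buildings = {"farmhouse": (200, 100), "barn": (900, 50)}

print(encroachments(turbines, buildings))
# [('T1', 'farmhouse')]
```

In the mixed-reality app the same test runs every time a marker moves, so the buffer rings update live as the client shuffles turbines around the table.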
Of course, this would be more accurately served in a GIS, but if I were to run a workshop with the client to work out where to put everything, which would be easier: the 2D mapping, or the mixed reality where the client can see the wind farm in full 3D in front of their eyes and move things around to see how it would all look?
As with all things at the moment, we are close to greatness, on the verge of really useful and powerful augmented and mixed-reality GIS. Though, right now, in this moment, we are waiting for the technology to get just a tiny bit better and for the interface to become a little more GIS-savvy.
Am I going to carry on with mixed reality and augmented reality? Let me put this to you: if companies like Esri, Ordnance Survey, Hexagon, Trimble, Bentley, Topcon, and Mapbox are investing heavily in it, then it must be worth knowing about. For now I will suffer the pain for you so that I can pass on what I learn.
Photo credit: Shannon Duggan