Layla Gordon, Ordnance Survey Technology Laboratory Engineer

What is the Ordnance Survey Technology Laboratory?

I managed to get an interview with one of the UK’s national mapping agency’s engineers, Layla Gordon, who has been working on some interesting geospatial tech.


ND: How did your education lead you to OS (Ordnance Survey) and geospatial?

LG: Back in 1994 I was doing a degree in computer engineering in Iran and was very interested in medical IT. One summer, whilst visiting my sister in the UK, I paid a visit to the open day at Southampton University and realized I could do AI [Artificial Intelligence] as part of the undergraduate computer science course. This was the start of it all!

I did AI as part of my degree and became fascinated with computer vision and biometrics. My third-year thesis was on automatic MRI brain-region classification to aid surgeons in finding small tumors, which I got a first for! I then worked at a company specializing in biometrics for two years and developed facial and vein pattern-recognition software for identification purposes.

After that I was employed by the university on a DARPA human ID project to look at gait recognition. Due to my expert knowledge in computer vision and biometrics, they enrolled me to do a PhD as part of this, and I was a research assistant during that time. The project ended due to some unforeseen circumstances, and I had to submit my MPhil on the topic.

I then found a job advert for a research scientist in an Ordnance Survey remote-sensing team to perform change intelligence. Since my background was in computer vision, this seemed like a good fit, but somehow aerial imagery ended up being more complex than the human brain!

Two years ago I stopped working in the research department and moved to the tech labs team to develop prototypes for current and future technologies. For a brief period, I worked in the mobile team within OS and developed the award-winning OSLocate App.

ND: Can you tell me what an OS tech labs engineer does?

LG: Each member of the labs is skilled in some specific area. My expertise is mobile development, positioning, computer vision, and machine learning, plus AI. We are either approached by other teams within OS to build a POC [proof of concept] demonstrator for a specific idea, or we find a concept ourselves to develop as part of technology tracking and its relevance to OS.

As you may know, OS is going through a big transformation, and we have a strong vision for the future, starting with one for 2020. As part of this, looking ahead and finding concepts to work on now, [keeping] future technology in mind, is very important. Labs is very keen to help shape what the future OS will look like and to create an inspiring vision matched to our strategy.

ND: I was fascinated to hear about the geotech meetup event you led here. How complex was it to create the indoor mapping/navigation and AR Mars demos?

LG: It ranged from easy to relatively complex on both projects. The indoor project started because OS was the platinum sponsor of Digital Shoreditch. We looked into developing an app for visitors to navigate around the venue, as in previous years it had proved challenging for attendees to find the exhibits they wanted to visit, given the labyrinthine nature of the Victorian basement of Shoreditch Town Hall, the venue!

The challenges were the lack of GPS and the complex nature of the building. I explored the relatively new iBeacon technology to help with positioning indoors. I worked with the Estimote team to improve the accuracy from 8 meters to about 3 meters indoors.
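For readers curious what iBeacon-based positioning looks like in code, here is a minimal sketch using Apple's Core Location framework. It is not the Estimote-tuned approach Layla describes; the beacon UUID, the (major, minor) IDs, and the floor-plan coordinates are all made up for illustration, and the position estimate is a crude inverse-distance weighting rather than a production-grade algorithm.

```swift
import CoreLocation

/// Minimal sketch of iBeacon ranging with Core Location as one way to
/// approximate indoor position. All identifiers and coordinates below
/// are hypothetical.
final class BeaconPositioner: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Hypothetical mapping from beacon "major-minor" IDs to known
    // floor-plan coordinates, in metres.
    private let beaconPositions: [String: (x: Double, y: Double)] = [
        "1-1": (2.0, 5.0), "1-2": (12.0, 5.0), "1-3": (7.0, 14.0)
    ]

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        // Example UUID only; a real deployment uses the venue's own UUID.
        let uuid = UUID(uuidString: "B9407F30-F5F8-466E-AFF9-25556B57FE6D")!
        let region = CLBeaconRegion(proximityUUID: uuid, identifier: "venue")
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        // Crude estimate: weight each beacon's known position by the
        // inverse of its reported distance (beacon.accuracy, in metres).
        let usable = beacons.filter { $0.accuracy > 0 }
        guard !usable.isEmpty else { return }
        var x = 0.0, y = 0.0, weightSum = 0.0
        for b in usable {
            let key = "\(b.major)-\(b.minor)"
            guard let pos = beaconPositions[key] else { continue }
            let w = 1.0 / b.accuracy
            x += pos.x * w; y += pos.y * w; weightSum += w
        }
        guard weightSum > 0 else { return }
        print("Estimated position: (\(x / weightSum), \(y / weightSum)) m")
    }
}
```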

We originally wanted to do a VR version of the building in the app view and let the user navigate before or whilst at the venue, but because the positional accuracy was not ideal, we switched to creating a routing app that helps the user find an exhibit and shows a route to get to it.
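The routing side of such an app can be quite simple once the venue is modelled as a graph of connected spaces. Below is a hedged sketch of that idea: a breadth-first search over a hand-built adjacency list of rooms and corridors. The node names and floor plan are invented, not the actual Shoreditch Town Hall data.

```swift
/// Minimal sketch of the kind of route lookup an indoor navigation app
/// might use: breadth-first search over a graph of rooms and corridors.
func shortestRoute(from start: String, to goal: String,
                   in graph: [String: [String]]) -> [String]? {
    var queue = [[start]]
    var visited: Set<String> = [start]
    while !queue.isEmpty {
        let path = queue.removeFirst()
        guard let node = path.last else { continue }
        if node == goal { return path }
        for next in graph[node, default: []] where !visited.contains(next) {
            visited.insert(next)
            queue.append(path + [next])
        }
    }
    return nil
}

// Example: tiny, made-up adjacency list standing in for a basement floor plan.
let floorPlan = [
    "Entrance": ["Corridor A"],
    "Corridor A": ["Entrance", "Hall 1", "Corridor B"],
    "Corridor B": ["Corridor A", "Hall 2"],
    "Hall 1": ["Corridor A"],
    "Hall 2": ["Corridor B"]
]
print(shortestRoute(from: "Entrance", to: "Hall 2", in: floorPlan) ?? "No route")
```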

We are now looking at other candidates for this type of concept in other places such as hospitals, perhaps with the addition of augmented reality in a turn-by-turn navigation scenario.

For the Mars AR I used a package called Vuforia. The first version was created as a native iOS app with Vuforia's native iOS SDK. I used their cloud image target recognition. I used NASA and UK Space Agency height maps to create a 3D Blender model of the Mars surface and used Vuforia for image recognition. I also used OpenGL within native Objective-C to augment the image target with the 3D Blender model.
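The step of turning elevation rasters into terrain geometry is worth unpacking. The sketch below shows the general idea of converting a gridded height map into 3D vertex data; the actual Mars model was built in Blender, and the grid values, cell size, and vertical exaggeration here are made up for illustration.

```swift
/// Minimal sketch of converting a gridded height map into 3D vertices,
/// the general idea behind building a terrain model from elevation data.
struct Vertex { let x: Float; let y: Float; let z: Float }

func terrainVertices(heights: [[Float]],
                     cellSize: Float,
                     verticalScale: Float) -> [Vertex] {
    var vertices: [Vertex] = []
    for (row, line) in heights.enumerated() {
        for (col, h) in line.enumerated() {
            // x/z lie on the ground plane; y is the (exaggerated) elevation.
            vertices.append(Vertex(x: Float(col) * cellSize,
                                   y: h * verticalScale,
                                   z: Float(row) * cellSize))
        }
    }
    return vertices
}

// Toy 3x3 height grid standing in for a real elevation raster.
let demoHeights: [[Float]] = [[0, 1, 0], [1, 3, 1], [0, 1, 0]]
let mesh = terrainVertices(heights: demoHeights, cellSize: 10, verticalScale: 2)
print("Generated \(mesh.count) vertices")
```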

I have since switched to using Vuforia within Unity, which has a few advantages. The code produced is cross-platform; therefore, you can build it for iOS as well as Android and desktop. The terrain is now created within Unity, which is more optimized for rendering. Aligning the target and the augmentation is also easier.

ND: With the above in mind, can you see this technology being used for survey or field work?

LG: Yes, to an extent. I can see companies that have underground assets using it for asset management. If there is planned work to be performed, [you can] learn where to dig before starting by visualizing the asset that's buried underground.
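One small building block such a buried-asset viewer needs is converting an asset's geographic coordinate into a local offset from the surveyor's position so it can be placed in an AR scene. The sketch below uses a simple flat-earth approximation, which is reasonable over the few tens of metres such a visualisation covers; the coordinates and depth are hypothetical and this is not how any specific OS prototype works.

```swift
import CoreLocation

/// Converts an asset's geographic coordinate and burial depth into an
/// east/north/down offset (in metres) from the user's position, using a
/// flat-earth approximation suitable only for short distances.
func localOffset(of asset: CLLocationCoordinate2D, depth: Double,
                 from user: CLLocationCoordinate2D) -> (east: Double, north: Double, down: Double) {
    let metresPerDegreeLat = 111_320.0
    let metresPerDegreeLon = 111_320.0 * cos(user.latitude * .pi / 180)
    let east = (asset.longitude - user.longitude) * metresPerDegreeLon
    let north = (asset.latitude - user.latitude) * metresPerDegreeLat
    return (east, north, depth)
}

// Example: a hypothetical pipe junction a short walk away, 1.5 m below ground.
let user = CLLocationCoordinate2D(latitude: 50.9380, longitude: -1.4700)
let pipe = CLLocationCoordinate2D(latitude: 50.9381, longitude: -1.4698)
let offset = localOffset(of: pipe, depth: 1.5, from: user)
print("Place asset at east \(offset.east) m, north \(offset.north) m, \(offset.down) m below")
```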

ND: What do you think the future is going to bring for survey, mapping, and GIS?

LG: Short term: Lots of sensor data visualization and custom data on GIS systems. More use of point cloud data for data capture and visualization.

Long term: AR/VR as a method of visualization of real-time sensor data and other location-based content. Combination of AR image processing and positioning, delivering real-time custom data on smart glasses for consumers and surveyors.
