The Cloud, The Edge, and the Beast

How are various platforms for processing and managing reality capture data best leveraged? The answer is a moving target, constantly evolving… for the better.

Reality capture and geospatial datasets will only continue to grow, and rapidly. Is it realistic to still expect to process and manage reality capture data on, say, your really “powerful beast” of a laptop or workstation alone? Combining the power of local, edge, and cloud computing provides a way forward. – Source: Hexagon

There are sometimes strong views on reality capture (RC) processing approaches, particularly between cloud-based and local approaches. Recently, an acquaintance, a respected practitioner of high-profile RC projects, told me that he hears a din of “The cloud will save us all,” but warned, “No, it will not!” Perhaps with yesterday’s cloud, but now it is a different matter.

Expectations have changed. In the early cloud-hype cycle, processing resources were limited, yet some individuals hyped the idea for applications such as surveying, design, and construction, suggesting that there could be a single, big, definitive model in the cloud —the one model to rule them all. Certainly, the cloud is not that envisioned panacea, but neither is relying solely on a huge beast of a local workstation. Cases can be made for each approach, and for adding edge computing to the mix. That’s the key: a mix. It does not have to be a matter of “or”. The optimal processing profile should be more of an “and” proposition.

Not every project needs to be rapidly processed on site; some can afford extra time to fully process, classify, extract, analyse, etc., while other applications can greatly benefit from near-real-time results. However, at every step in the workflow, from capture to deliverables, improved processing, especially where machine learning/AI is applied, is upping the ante. Utilizing various processing resources along the way can boost efficiency, leverage the latest solutions, and facilitate scalability.

Before we enter the cloud vs. local debate, let’s look at the rise of “edge computing” for RC applications. While not a new approach, what we’re seeing lately is the addition of edge computing to even small-form-factor RC hardware, such as drone payloads.

Rob Klau, founder and director of Klau Geomatics, mounting their Brumby LiDAR edge computing unit for mobile mapping. At under 2 kg, the Brumby has been deployed on drones as well. – Source: Klau Geomatics

Getting Edgy

A common implementation of edge computing in RC is in mobile mapping systems. The sensors are mounted on a car or van that has room inside for an auxiliary box full of processors. An example is the Leica Pegasus TRK; the edge computing box in the vehicle not only provides a head start on processing but can also do on-the-fly anonymization (e.g., blurring of faces and license plates; mandatory in many countries).
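On-the-fly anonymization of this kind boils down to blurring detected regions before the frames ever leave the vehicle. As a minimal illustrative sketch, and not the Pegasus TRK implementation, assuming an upstream detector has already supplied bounding boxes for faces and plates, the blurring step might look like this:

```python
import numpy as np

def anonymize_regions(frame: np.ndarray, boxes, k: int = 5) -> np.ndarray:
    """Box-blur each detected region (e.g., a face or license plate).

    frame : H x W x 3 image array
    boxes : list of (x, y, w, h) detections from an upstream detector
    k     : blur kernel size in pixels (larger = stronger blur)
    """
    out = frame.copy()
    for x, y, w, h in boxes:
        roi = out[y:y + h, x:x + w].astype(float)
        pad = k // 2
        # Pad with edge values so the blur is defined at region borders
        padded = np.pad(roi, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        blurred = np.zeros_like(roi)
        # Average each k x k neighborhood (a simple box filter)
        for dy in range(k):
            for dx in range(k):
                blurred += padded[dy:dy + roi.shape[0], dx:dx + roi.shape[1]]
        out[y:y + h, x:x + w] = (blurred / (k * k)).astype(out.dtype)
    return out
```

A production system would use a GPU-accelerated detector and filter, but the principle is the same: only blurred pixels are written to storage.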

“The value of the edge computing that we do is for a growing tier of LiDAR (Light Detection and Ranging) users,” said Rob Klau, founder and director of Klau Geomatics. “The ones that don’t necessarily want to be LiDAR users, they don’t want to have to become LiDAR specialists. They want the results that LiDAR provides. They want volumes, they want to know how many potholes there are, and where they are. They want change detection, for example, on a construction site or for a mine. What happened yesterday or today is very important.”

Certainly, there are applications where a full post-processing regime is warranted. However, as Klau pointed out, there’s a growing recognition that many more applications can benefit from rapid results, rather than long waits for super-high precision (though the time vs. precision gap has been closing). Nearly all RC systems do some level of pre-processing, which can show the progress of the capture. For example, with SLAM (simultaneous localization and mapping) LiDAR systems, you look in the software on the tablet or controller and see a rudimentary cloud of the device “painting” the surroundings. This is valuable for checking to see if you missed anything. It is rudimentary edge computing, but it does not directly contribute to deliverables. And still, there are some RC systems where you do not know how complete your capture has been until fully processed later in the office. 

Klau’s solution for such applications is the Brumby, a lidar/imaging system for mobile, airborne, and drone platforms. Klau Geomatics, based in Australia, has long been an innovator in the aerial mapping space. Their Klau PPK (post-processed kinematic) GNSS+IMU (inertial measurement unit) solution is popular around the world. Experience integrating with various RC sensor stacks for a wide variety of RC customers and applications led Klau to their recent foray into edge computing.

“We got into LiDAR four or five years ago now, and coming from a geospatial core, everything we do is about accuracy and geo-referencing, GNSS, inertial, and so on,” said Klau. “What we saw in LiDAR, and it’s still the case, is that they’re essentially a logging device where you capture high volumes of data, and then you’ve got this big, massive log jam when you get back to the office.”

“Meanwhile, the next day, they’ve captured another day’s data, and you only just started into your log jam–it just grows and grows,” said Klau. “You’ve got this exponential problem. Then you’ve got to hire more people, get more processing power, software, and you’ve got to continually build on the back end. The other thing is that data can be less valuable and less usable for some applications, the older it gets.”

Processing in the cloud might be out of the question while still on-site due to connectivity limitations (or no net at all). You could pack a powerful beast of a workstation out to the field, though that’s not always practical. What if you could put enough processing power onboard the RC system to get a head start on processing? While not alone in developing such systems, Klau Geomatics’ foundational expertise in the precise positioning aspects of mobile/airborne RC, with edge computing added, makes for a compelling approach.

“We took a very different approach to this kind of forward engineering,” said Klau. “We’re starting from an accurate trajectory with position and orientation, and then taking the LiDAR, raw data, .pcap file, and being able to process it on the fly is a real challenge. We just happen to be lucky with some of the right kind of software engineers who like working in bare metal code.”

“You haven’t got time for APIs, SDKs, libraries, etc. It needs real bare metal gaming code to make it work, and make it keep up,” said Klau. “We managed to do it, and we can produce an accurate point cloud, with a geoid interpolation as well on the trajectory. Real-world coordinates and heights.” While post-processing of the positioning aspect (i.e., PPK) can in some situations be advantageous over RTK (real-time kinematic) positioning (GNSS), post-processing would defeat the purpose of edge computing for this application. For the Brumby, Klau Geomatics is using RTK, or Terrastar-C Pro PPP (precise point positioning), which can, in many instances, yield RTK accuracy (horizontal, often less accurate in vertical), globally, without base station data. PPP options are quite attractive for remote projects.
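The geoid interpolation Klau mentions, applied along the trajectory, amounts to converting GNSS ellipsoidal heights to real-world orthometric heights: H = h − N, where N is the geoid undulation interpolated from a model grid. A minimal sketch of the idea, with a hypothetical function name and grid layout (not Klau Geomatics’ code):

```python
import numpy as np

def orthometric_height(h_ellip, lat, lon, geoid_grid, lat0, lon0, step):
    """Convert ellipsoidal height to orthometric height: H = h - N.

    N is bilinearly interpolated from a regular geoid-undulation grid
    (meters), whose lower-left corner is (lat0, lon0) with spacing `step`
    in degrees. Assumes the query point lies inside the grid.
    """
    # Fractional grid coordinates of the query point
    r = (lat - lat0) / step
    c = (lon - lon0) / step
    r0, c0 = int(r), int(c)
    fr, fc = r - r0, c - c0
    # Bilinear interpolation of the undulation N from the four corners
    N = ((1 - fr) * (1 - fc) * geoid_grid[r0, c0]
         + (1 - fr) * fc * geoid_grid[r0, c0 + 1]
         + fr * (1 - fc) * geoid_grid[r0 + 1, c0]
         + fr * fc * geoid_grid[r0 + 1, c0 + 1])
    return h_ellip - N
```

Doing this live on the trajectory is what lets the onboard point cloud come out in real-world coordinates and heights rather than raw ellipsoidal values.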

The Brumby, with onboard edge computing, deployed as a drone payload. Note that Klau Geomatics has added the ability to use standard power tool batteries. – Source: Klau Geomatics

“Basically, we’re looking for every input we can find,” said Klau. “With two Novatel boards, we get the best out of Terrastar, their ALIGN algorithm and SPAN.” Klau Geomatics has had a long relationship with NovAtel, a prominent advanced positioning solutions provider, which was acquired by Hexagon in 2007.

“There’s more that we can do because we’re forward engineering, and we come up with that point cloud live,” said Klau. “You get back to the office, you pull the drive out, you stick it in your computer, and do your analytics, so you’re already a day, or [several days], ahead.” And there’s more that could be done onboard: “We are planning more analytics on the fly, having multiple processors to do multiple things. Like generate a DTM live. That’s a smaller product that could then be transmitted down to the survey, construction, or mining office. Why not compare it to yesterday’s DTM, showing current changes or volumes?”
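The live DTM comparison Klau describes is conceptually simple once two co-registered elevation grids exist: difference them cell by cell and sum cut and fill volumes. A hedged sketch of that arithmetic (names are illustrative, not the Brumby’s onboard code):

```python
import numpy as np

def dtm_change(today: np.ndarray, yesterday: np.ndarray, cell: float):
    """Compare two co-registered DTM grids and report cut/fill volumes.

    today, yesterday : elevation grids (meters), same shape and extent
    cell             : grid cell size (meters); each cell covers cell**2 m^2
    """
    dz = today - yesterday                 # per-cell elevation change (m)
    area = cell * cell
    fill = dz[dz > 0].sum() * area         # material added (m^3)
    cut = -dz[dz < 0].sum() * area         # material removed (m^3)
    return dz, cut, fill
```

The change grid `dz` is the “smaller product” that could be transmitted to the office, far lighter than the raw point cloud it was derived from.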

The Brumby has been designed in a modular manner: the processing and positioning controller, various LiDAR heads, and expansion layers such as cameras. Components can be added like “slices”. Plus, there are standard options for Wi-Fi, 4G, and satcom for remote operation. It is IP67 rated and comes in at under 2 kg. While the initial focus has been on mobile mapping, it can also be deployed on aircraft, helicopters, and drones.

An example project was the mapping of steep, thickly vegetated canyons in the Solomon Islands. The approach to dealing with the terrain and vegetation, to maximize ground points captured, was to fly in bands of elevation, like contours. They flew the creek path low, then worked their way up. The on-site results, which were processed on board, showed how well the ground was being captured.

Other examples of edge computing for surveying and geospatial applications include structural monitoring. There’s a new wave of IoT sensors for monitoring, and installations often have small edge computing hubs onsite. These are important for, say, dam monitoring, where local processing is key. When remote communications are interrupted, the local monitoring and processing continue unabated.

The Cloud and the Geospatial Cloud

Generically, “the cloud” can mean a cloud service, like those your information technology team has likely been migrating services to, instead of on-premises server clusters. There are commercial cloud services, like AWS and Azure, where you can load up your software and data, and process everything remotely. But let’s focus on cloud services specifically tailored for geospatial applications. Most geospatial solutions providers have branded cloud services, not just to do the heavy lifting of data processing, but also as collaborative environments for all phases of infrastructure lifecycles, from planning to RC, design, construction, and operations. These are designed with specific functions and brand applications built in, and they are constantly adding more. Examples include Bentley Infrastructure Cloud, various Esri online and cloud management services, Autodesk Construction Cloud, Hexagon HxDR, and more.

“Consider point cloud classification,” said Eric McSherry, VP of Platforms and Software Solutions for Hexagon. “Most people don’t have, for instance, the highest-end Nvidia GPU (graphics processing unit) in their laptop, and they don’t have 128GB of memory, and so on. With a more average surveyor’s laptop or computer, it could take a day to run these classifications. That’s why putting it into the cloud was really key. You upload the data to HxDR Reality Cloud Studio (RCS), or you may already have data uploaded and being used for collaboration and click on the button to run the classification process in the cloud.”

HxDR Reality Cloud Studio is an example of how geospatial solutions providers are offering cloud processing and collaboration environments, where seemingly limitless processing capacity enables rapid processing that would otherwise slow down workflows if only using local resources. Many of the same functions in desktop reality capture software suites are being added to these cloud environments. – Source: Hexagon    

Another aspect to processing heavy applications like point cloud classification (PCC), design/built difference comparisons, and automated feature extraction is that the underlying AI has evolved. AI-driven PCC has been around for many years, for example, in Leica Geosystems scanning workflows, but through machine learning rather than neural networks. “A few years ago, our engineers developed an updated [PCC] using neural network-style AI and released this in products like Leica Cyclone REGISTER 360 PLUS and Leica Cyclone 3DR. Now it can also be done in Reality Cloud Studio.”

Eric McSherry, VP of Platforms and Software Solutions for Hexagon. – Source: Hexagon

Reality Cloud Studio is one aspect of Hexagon HxDR (Hexagon digital reality). HxDR is a broad umbrella of cloud-based solutions for sharing, visualizing, processing, and analyzing imaging, survey, and lidar data; building digital twins; and more (refer to their website).

“We’re using gigantic GPUs that most people would never have on the desktop because they cost tens of thousands of dollars, with more memory than you’d ever thought imaginable. That’s the key; we can run them a lot faster, and then we can deliver the classification results through the cloud, back into Cyclone 3DR or the suite of CloudWorx CAD plugins for more modelling and analysis, though we are continually adding more of those features to the cloud as well.”

To work collaboratively in infrastructure design, construction, and operations, there has been a persistent challenge. “We’ve got the capture side, registration, the basic processing side, but it was a huge pain getting all that data out to anyone who might use it,” said McSherry. “Many of the folks involved need a simple solution to open something up, take a quick dimension, and look around. Imagine the current workflow a lot of projects or enterprises have: folks out in the field do the scan; often somebody else registers it, then gets the data back to the office by putting it on an external hard drive in a FedEx pouch. Then maybe that person uploads it to a local file server, or SharePoint, or something in-house. But then, if you’re an average person on the project who wants to look at something, are you really going to download a 100GB file locally and open it with some desktop software?

“When we put data into the cloud, and you view it in a browser, it’s not actually the native file, because the native file is not in a really good format to stream,” said McSherry. “What we do is take the native data, process it in the cloud, and break it up into three-dimensional cubes, like 3D tiling.” Spatial data tiling has revolutionized GIS, for instance, enabling the type of instantaneous zooming and panning in what would otherwise be impractically large image or lidar sets. McSherry gives an example: “Say a customer puts half a terabyte of LGSx (a Leica native format) files into the cloud, and through this 3D tiling, you can stream it to view and navigate on an average laptop.”
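The 3D tiling McSherry describes can be pictured as binning points into fixed-size cubes indexed by integer coordinates, so a viewer streams only the cubes that intersect the current view. A toy sketch of the indexing idea (not the actual LGSx/HxDR tiling scheme, which also builds levels of detail and serialized tile formats):

```python
from collections import defaultdict

import numpy as np

def tile_points(points: np.ndarray, cube_size: float):
    """Bin an N x 3 point cloud into fixed-size 3D cubes ('tiles').

    Returns a dict mapping integer cube keys (i, j, k) to the points
    that fall inside that cube; a streaming viewer would fetch only
    the keys near the camera instead of the whole cloud.
    """
    keys = np.floor(points / cube_size).astype(int)
    tiles = defaultdict(list)
    for pt, key in zip(points, map(tuple, keys)):
        tiles[key].append(pt)
    return {k: np.array(v) for k, v in tiles.items()}
```

Real schemes add hierarchy (octrees, levels of detail) so that distant tiles stream as thinned versions, which is what makes half a terabyte navigable on an average laptop.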

This is where they are expanding what can be done in the relatively new HxDR Reality Cloud Studio, adding functions for which you might otherwise have had to go to different software packages. For example, there’s an auto-register function, an inspection engine to detect deviations between design and capture (i.e., “scan-vs-BIM”), point-to-point, planar, and area measurements, panoramic images, creating annotated asset lists (called GeoTags), and inviting other collaborators; there’s more all the time. There are similar developments in other geo-cloud solutions, though this one is evolving particularly rapidly.
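At its core, a scan-vs-BIM deviation check like the inspection engine mentioned above measures signed distances from captured points to design surfaces and flags points outside tolerance. For the simplest case of a single design plane, a hypothetical sketch (not Hexagon’s deviation engine):

```python
import numpy as np

def plane_deviation(points, plane_point, plane_normal, tol=0.01):
    """Signed distance from captured points to a design plane.

    points       : N x 3 array of scanned coordinates (meters)
    plane_point  : any point on the design plane
    plane_normal : plane normal vector (need not be unit length)
    tol          : allowable deviation (meters)

    Returns per-point signed deviations and a boolean out-of-tolerance mask.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n      # signed deviation along the normal
    return d, np.abs(d) > tol
```

A full engine does this against meshed BIM elements rather than single planes, then colorizes the cloud by deviation, but the per-point arithmetic is the same.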

“We took a lot of the knowledge and code we already had in Cyclone 3DR to evolve HxDR Reality Cloud Studio,” said McSherry. “That’s a nice thing about being in Hexagon, we can take advantage of code developed by other teams. Like the AI classification engine, the Hexagon AI team and Cyclone 3DR team already built that, so we just took it and put it in the cloud. And the same with the deviation engine; they built a really nice SDK around it, so we took that as well. When we do that, we don’t have to spend a lot of development resources and time to make it ourselves. We just have to build a user interface, basically a browser UI.” 

Other downstream functions, like automated feature recognition by firms such as Mach 9 (also acquired by Hexagon), use unrelated (at this time) cloud environments for the collaborative and processing elements of their feature recognition service. 

One unrelated but intriguing geospatial cloud service is from Looq.ai, which has developed a close-range photogrammetry-based handheld reality capture device. You simply “paint” the area you want to model, and the tablet or phone shows the progress. You then upload to the Looq cloud, where the PPK and point cloud processing are done (typically overnight). You access your data through a web UI to do classification and measurements. The whole stack is a subscription, including the capture device rental.

This hand-held, photogrammetry-based mapping system is an example of where all processing is handled by their cloud service (though there are options for local/enterprise processing). The cost of the handheld can be included in the processing subscription. The PPK positioning is processed in the cloud, along with point cloud creation and classification. – Source: Looq.ai

The reality (no pun intended) of the future of RC, digital twins, and downstream phases of infrastructure lifecycles is that there will continue to be profound gains from further process automation and the leveraging of AI. This will put a premium on processing, one that local or onsite resources might struggle to keep up with. The cloud can do the heavy lifting when needed. You could keep it all in-house and on your desk if you’d prefer, but is that wise in this dynamic, increasingly digital infrastructure world?

Desk Beasts

There is something satisfying about having a very powerful workstation and owning all of the software on it… for a while at least. Many folks balk at subscriptions, especially some of us old timers who remember the standalone CAD and civil/survey software of the pre-RC days… got a few of those old dongles in a dusty drawer? Nice, but can I meet the needs of clients in this world of digital construction, VDC, BIM, digital twins, and more, with hardware and software that is years or decades out of date? Subscription services are not to everyone’s liking, and while there are instances where they might not make sense, there are even more where they might be the only practical choice, and their popularity continues to rise. But what if I do want to work standalone? What might it take to handle, process, analyse, and model data captured at millions of points a second?

As an exercise, I asked some folks who process a lot of LiDAR and imaging data what a real beast of an RC workstation would look like. Without doing a tech magazine deep dive into comparisons or a “best of” listicle, here are a few of their recommendations. CPU: AMD Ryzen Threadripper PRO 9995WX (96 cores) or Intel Core i9-14900K. GPU: NVIDIA RTX 5090, RTX 6000 Ada, RTX 4090, or A6000. RAM: 128GB to 256GB or more of DDR5 ECC RAM; you could even go as high as 2TB! Storage: fast, e.g., multiple high-speed PCIe 5.0 or 4.0 NVMe SSDs, like a 2TB drive for the OS and applications, plus a larger drive for project data and cache. I am not endorsing any of the above; these are just examples, and the experts can duke it out. No matter what “the best choices” are, they all represent a costly proposition. Then add on software licenses/subscriptions.

Chief among the pluses of processing locally is that you can work standalone (if need be). You can keep working even if network/internet connectivity is lost, or if you are working in a remote area without connections (provided you don’t mind lugging that beast out there and providing power). You also have direct control of your data and how you wish to process it. You have control over QA/QC (or at least the parts that have not been buried in the software). You can expand, scale up, and add various software as needed… but at what point does keeping up become a Sisyphean toil?

“And”, Not “Or”

Here’s a proposition for a reality capture environment. Jump on your client’s collaborative cloud (if they are using one) to help coordinate, store deliverables, receive feedback, etc. Get as much of a head start as you can for your RC by processing, in the field, leveraging edge computing. Load up to the cloud and do the classification and more. Many folks are already doing this, especially if directed to do so by their large project clients. One size does not fit all, but there are so many more options now to consider. Those old CAD dongles make nice vintage keepsakes, though. Beep on!