Beyond the Science Project: Productizing the Future of Sensor Fusion

This entry is part 6 of 7 in the series December 2025

XYHT – Q&A with Peter Mardaleichvili and Lukas Meier, Fixposition

When Fixposition showed its latest xFusion™ solutions at Equip Expo, Agritechnica, and INTERGEO, one theme came through clearly from OEMs and integrators: they are done with “science project” stacks. They want fusion that is robust, repeatable, and ready to ship.

Fixposition says that is exactly what they are building: a productized sensor fusion platform that extends RTK-level performance into tree canopy, near buildings, and through indoor–outdoor transitions, while simplifying integration for autonomous mowers, agricultural and industrial robots, and other field machines.

XYHT spoke with Peter Mardaleichvili and Lukas Meier about what they heard on the show floor, why productization is a differentiator, and how they see themselves—not as a GNSS company, not as a vision company, but as a fusion company.

Image: Fixposition

XYHT: When you talk about xFusion™, what is the real differentiator compared to other GNSS+vision systems?

Mardaleichvili: I think there are a few things we do differently. One is very simple to say but hard to do: we’ve productized something.

We’ve created a system that is robust in a variety of situations where we’re able to get the most out of the data. It sounds soft, but what we’re really talking about is a truly available and reliable solution.

You might ask, “How is that different from another GNSS/INS system, or a GNSS/INS plus camera system?” The reality is our algorithms are much more sophisticated. We’ve spent years developing them. We use very advanced optimization techniques. Our factors have been refined through huge amounts of data collection.

And our team is made up of experts from each domain—GNSS experts, vision experts, fusion experts. We’re not just randomly taking data and pushing it into one algorithm and hoping for the best. We understand how to build functions that properly represent each data source, take the best of that data, and use it together to make something robust and reliable on different platforms, for different use cases.

That means we’ve created something that can really be used in the real world. It expands operational capabilities and allows customers to deploy use cases that are product-worthy, not something that works once for a paper and then fails when you try to turn it into a product.

XYHT: At Equip Expo and Agritechnica, you showed a very compact module. How does that hardware reflect this productization message?

Meier: I’m pretty sure you saw our system-on-module, right? It’s about 5 by 5 centimeters.

At trade shows, people look at it and say, “Oh, that’s a pretty big GNSS module.” Then we tell them, “That’s not a GNSS module.” You have an integrated IMU. You can connect up to four cameras. We do real-time processing of all the images on that module.

On top of that, we make it possible to run neural processing on the same board—to do object detection and segmentation, to know where the street is, where the crop is. That’s the point where their eyes fall out of their head. They realize it’s so much more than “just a GNSS receiver.”

And we don’t stop at the bare board. We also have a boxed version. You put antennas and cameras on it, and the rest can be a single computer handling your application. That consolidates systems that today have separate GNSS, separate IMU, separate cameras, all cabled together.

Image: Fixposition

XYHT: During your conversations at Equip Expo and Agritechnica, how did OEMs describe the machines they’re building today? Are they deploying integrated navigation solutions—or are many still wrestling with stitched-together sensor assemblies?

Meier: A lot of the robots we see today have a GNSS receiver here, a camera there, an IMU somewhere else, and another box trying to fuse it.

We realized many OEMs would be very happy if somebody packaged that and made it less complicated. That’s a big part of what we do. We have the fusion software. We have the module. We have the boxed version. It’s all in one package. 

Now that the industry is rapidly maturing, this is needed. They can’t have tons of loosely integrated parts forever.

Mardaleichvili: People say the industry is maturing, but it can’t really mature until somebody helps them mature. You have to put these capabilities into a box. Productization sounds easy—even trivial—but when you think it through, it’s the opposite. Productization means an OEM can take our module and integrate it in weeks instead of years, and trust that it will behave consistently across environments and production batches.

This is where we live, as an enabler for integrators, developers, and manufacturers. We clean up the mess of sensor fusion so that they don’t have to.

XYHT: How do you express the benefit of your fusion engine to OEMs who are used to RTK-centric thinking?

Mardaleichvili: We tend to split it into two sides: the technical capability side and the commercial side.

On the technical side, it’s direct demonstration. We show that we can extend RTK-level precision into situations where, typically, you cannot rely on anything that is not a map-based solution: under tree canopies, in orchards, near buildings, through hallways, mower garages, and so on.

We also provide integrity metrics. If you want a certifiable autonomy system, that’s a must. Our solution gives you position and orientation, but also confidence levels. We have lots of data showing our position error stays within those confidence intervals.

We’ve benchmarked that against competitors. Some will say, “Right now your position went from one centimeter to one meter, and I’m 99% confident you’re within one meter,” but in reality they are already four meters off. That kind of data cannot be used for applications like autonomous shuttles; small/medium-sized robots used for delivery, patrolling, rescue, and cleaning; robot lawnmowers; and agricultural robots.
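The integrity claim above can be made concrete with a toy check. This is an illustrative sketch, not Fixposition’s actual algorithm or interface: it simply measures, over a series of epochs, how often the true position error exceeded the confidence bound the system reported.

```python
def bound_violation_rate(reported_bounds_m, true_errors_m):
    """Fraction of epochs where the true error exceeded the reported bound.

    A trustworthy integrity report keeps this rate at or below the
    stated risk (e.g. about 1% for a "99% confident" bound).
    """
    pairs = list(zip(reported_bounds_m, true_errors_m))
    violations = sum(err > bound for bound, err in pairs)
    return violations / len(pairs)

# System A: true errors stay inside the reported 1 m bounds.
ok = bound_violation_rate([1.0, 1.0, 1.0, 1.0], [0.02, 0.3, 0.5, 0.9])

# System B: claims 1 m bounds while actually meters off -> unusable
# for the autonomy applications described above.
bad = bound_violation_rate([1.0, 1.0, 1.0, 1.0], [0.5, 2.0, 4.0, 3.5])
```

System A yields a violation rate of zero; System B violates its own bound on three of four epochs, which is exactly the failure mode described in the benchmark anecdote.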

On the commercial side, we have a finished product. You don’t need to waste resources developing your own fusion stack that may or may not work. You shorten time to market, and you focus engineering effort on your own core value.

XYHT: You both stress “multiple modalities.” What does that buy you, beyond accuracy?

Meier: The fact that we have multiple sensor inputs is key. It’s not just GNSS. It’s GNSS, IMU, vision, and more.

If our GNSS signal gets lost, maybe because of the environment or because one of the two GNSS receivers fails, we can continue performing. If a camera has an outage—say a leaf lies in front of the lens—we can often handle that for a certain amount of time.

And when we know something went wrong, that gives the user time to react. For any solution that aims at autonomy, that is crucial.

If you drive your tractor autonomously and something happens, but you have time to react, that helps. Having all these modalities combined in our fusion engine adds redundancy.

We are careful today not to claim “safety,” because we don’t have safety certification yet. But looking forward, that’s where we need to go—and this architecture is designed with that in mind.

Mardaleichvili: We also see our Vision-RTK 2 as the minimum viable product in terms of sensor set. It’s a simple GNSS receiver, a low-grade IMU, and a single camera. Our technology means that is really just the baseline.

From there we can add two or three cameras, better IMUs, different receivers, even radar. We have flexibility to tailor the sensor suite to more challenging requirements, with our algorithms already tested for different modalities.

Image: Fixposition

XYHT: Indoor–outdoor transitions are where deployment risk and integration cost tend to spike. How does your system keep error growth predictable across those transitions—and what does that mean in practice for calibration cadence, testing time, and maintenance?

Mardaleichvili: There are two sides: what we can do today, and what we will extend toward in the future.

First, we’re very clear: we cannot eliminate drift with the solution we have today. We do sensor fusion with one absolute measurement—GNSS—and every other measurement is relative. It only tells us how much we’ve changed with respect to the previous state. 

Our goal is not to “prevent” drift, but to minimize it as much as possible. We already have post-processing software that achieves impressively low drift over long GNSS outages, and we are converting that capability into a real-time feature.

That comes back to data quality and algorithms. Better sensors give you cleaner data. Higher resolution means errors accumulate more slowly. The same is true for cameras and IMUs. 

On top of that, your ability to cleanly interpret that data matters. For the same sensors and the same compute, our algorithms will outperform others because this is where our expertise lies.

If I have a wheel encoder, a camera, and an IMU, and I’m indoors with no GPS, I’m trying to understand how my position and orientation have changed. If two sensors agree I moved one meter forward and one says 0.8 meters, that outlier carries less weight. The more data we have, and the better our processing, the more we can reduce drift.
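The weighting idea described here can be sketched as an inverse-variance weighted average. This is a minimal illustration under assumed noise values, not Fixposition’s actual fusion algorithm, which the interview describes as far more sophisticated factor-based optimization.

```python
def fuse_displacements(measurements):
    """Inverse-variance weighted mean of relative displacement estimates.

    measurements: list of (displacement_m, variance_m2) tuples, one per
    sensor (e.g. wheel encoder, camera odometry, IMU). A noisier sensor
    gets a smaller weight, so its disagreement matters less.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    return sum(w * d for w, (d, _) in zip(weights, measurements)) / total

# Two sensors agree on ~1.0 m forward; a noisier one reports 0.8 m.
fused = fuse_displacements([(1.0, 0.01), (1.0, 0.01), (0.8, 0.09)])
# The outlier's influence is limited by its larger variance: fused is ~0.99 m.
```

Real fusion engines estimate full pose with cross-correlated uncertainties, but the principle is the same: each measurement's influence scales with its confidence.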

Meier: And it’s important to understand how our drift behaves.

In many cases, we are not competing with other vision systems. We’re competing with high-grade GNSS/INS. Those systems drift over time. The moment GNSS is bad, you are on the clock, and it just gets worse and worse.

With our system, that “clock” is over distance, not time. If we stop, the clock stops. For many indoor or near-indoor vehicles that move slowly, that is a big difference.

Imagine a delivery robot approaching a house, stopping in front of the door, and waiting for pickup. If GNSS is not good enough, a pure GNSS-INS system struggles. With our system, you can sit there for hours—it doesn’t matter. When the task is done and you continue, you are still good to go.
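The contrast Meier draws can be illustrated with two toy error models. The drift rates below are assumptions for illustration only (the 0.5%-of-distance figure is discussed later in the interview; the time-based rate is invented), not measured specifications of any product.

```python
def time_based_drift(rate_m_per_s, elapsed_s):
    """Clock-driven (INS-style) error model: grows even while stationary."""
    return rate_m_per_s * elapsed_s

def distance_based_drift(rate_fraction, distance_m):
    """Error model tied to distance traveled: frozen while the robot waits."""
    return rate_fraction * distance_m

# Delivery robot: travels 50 m without GNSS, then waits an hour at the door.
moving  = distance_based_drift(0.005, 50)   # 0.5% of distance -> 0.25 m
waiting = distance_based_drift(0.005, 0)    # no motion -> no added drift
# A time-driven model with an assumed 1 cm/s growth keeps accumulating
# for the full 3600 s wait, regardless of whether the robot moves.
ins_wait = time_based_drift(0.01, 3600)
```

The point is qualitative: with distance-based drift, stopping stops the clock, which is why a robot can wait at the door for hours and still resume with a usable position.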

XYHT: Let’s talk about pass-to-pass accuracy for agriculture and autonomous mowers. How do you perform there, especially when corrections drop out?

Meier: We have to separate environments a bit. When we talk about 0.5% drift, that’s if we have a full GNSS dropout—basically indoors. As long as we are outdoors and there is some sort of GNSS signal, that doesn’t apply in the same way.

For pass-to-pass in fields or mowing, if we have good connectivity to corrections and good GNSS reception, we are centimeter precise. There is essentially zero drift. That is also where many other systems perform well.

Where we start differentiating is when GNSS drops out, or corrections drop out. If your corrections go away, a cheap GNSS solution falls back to sub-meter or meter-level accuracy.

With our solution, we added a feature this summer where we can keep pass-to-pass error below about 15 centimeters after 15 minutes of no corrections. No corrections can mean a connectivity dropout or the corrections themselves disappearing.

That is what the agricultural market needs. That level of performance is basically not available on systems of similar size and cost.

XYHT: At Agritechnica, what pain points did you hear from ag OEMs that align with this?

Mardaleichvili: Anecdotally, my understanding is that for truly open fields with big tractors, current high-end GNSS solutions work reasonably well. That’s not where we heard most of the pain.

The pain shows up near tree lines, in orchards, in greenhouses, and with smaller robots moving under crops—corn, for example, where leaves block the sky. In those settings, GPS “doesn’t really work,” and people are looking for improved technologies.

We were able to demonstrate things like recognizing crop rows and navigating relative to them. The standard giant fields with giant machines are not where the biggest complaints are. The real interest is in next-generation machines that are smaller, gentler on the soil, and focused on high-value crops.

Labor is not getting cheaper or more available, and the current tech doesn’t fully address those cases.

XYHT: What about feedback from the autonomous mower community at Equip Expo?

Mardaleichvili: I can give three quick points.

First, many makers of consumer robotic lawn mowers (RLMs) have priced themselves into a corner by trying to out-compete each other on being cheap. Professional and commercial mowers are more interesting for us.

Second, safety is becoming a bigger topic. They want to do more with vision—recognizing people, animals, bottles, obstacles—and learning how to operate in that environment.

Third, they want to expand operational capabilities. For very simple, flat, open-sky lawns, they are more or less happy. But that is a surprisingly small part of the mowing market.

Most high-value work involves trees, parks, buildings, slopes. Many existing solutions were built for 2D, flat environments. As soon as you mow on banks around rivers or roads, 3D positioning becomes important. That is where we see a lot of need.

Meier: We also heard that people are not happy about the stability of many solutions. If something doesn’t always work and isn’t robust, it creates a pain point.

At the same time, cost is still a topic. These machines are expensive. That is a limiting factor for growth.

Our view is that a better fusion system can actually reduce overall system cost. Today, some machines have three lidars and several radars. With our solution, at a fraction of the price of that stack, you can remove a big part of those sensors and still achieve what you need.

XYHT: You also exhibited at INTERGEO, in front of more traditional GNSS and geospatial audiences. How are you perceived there—and how do you want to be perceived?

Mardaleichvili: At INTERGEO, by the GNSS vendors, we are often seen as “the vision guys.” By the vision companies, we are seen as “the GNSS guys.” It depends on the perspective.

What we want to be seen as is the sensor fusion people—the ones with a really stable solution that handles all the messy parts: multiple sensors, time sync, different noise models.

We also want to be very clear: we don’t want to be put in the GNSS bucket. That is harmful to us, because people start comparing our price and value to an RTK module. We offer much more than that, and we enable much more.

If someone is not familiar and just sees “RTK GNSS” in our marketing, they might think, “Why should I pay this much?” That misses the point. We are not GNSS-centric. We are not camera-centric. We are fusion-centric.

Fusion is messy. That is where we have spent our time. We’ve cleaned up that mess, so our customers don’t have to. They can get back to the business they are in, and we are an enabling layer.

XYHT: Final question. If you had to summarize Fixposition’s role in the coming wave of smaller autonomous machines, how would you put it?

Meier: We want to enable autonomy at scale, starting from positioning. That means reliable global positioning, but also understanding where you are with respect to your environment.

We don’t see ourselves as a GNSS company or a vision company. We see ourselves as a fusion company. That’s where the real value is for these newer, smaller, more specialized machines.

Mardaleichvili: Exactly. The next wave of autonomy is not just about big tractors and big robots. It’s about compact machines doing very specific tasks in very difficult environments.

For those machines, GNSS alone isn’t enough. Vision alone isn’t enough. Fusion is the winning approach—and the fact that we’ve productized that fusion is what sets us apart.
