by Leon Rosenshein

GIGO

Even the greatest algorithm can't correct for bad data. Ever hear of photogrammetry? Probably. It's using images to understand the physical world. We use it to map the world. Using stereoscopic techniques and two (or more) pictures of a scene taken from known positions, you can extract 3D information. Roughly speaking, you find points in each image that correspond to the same physical thing, then, correcting for all sorts of distortions, use the difference in camera locations and the direction to the point from each camera to triangulate that point's position relative to the cameras. Do that for enough points and you get a depth map. One way to find those points is the SIFT algorithm. It's really nice because it handles differences in scale and orientation. And with our SDVs the images are taken at the same time, so the world hasn't changed between them.
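
Here's a minimal sketch of that pipeline, assuming OpenCV. The file names, the identity intrinsics, and the half-meter baseline are all placeholders, not a real setup; with an actual rig you'd plug in the calibrated camera matrices and poses.

```python
import cv2
import numpy as np

# Hypothetical stereo pair; any two overlapping images of a scene will do.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# SIFT finds keypoints and descriptors that tolerate differences
# in scale and orientation between the two views.
sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

# Match descriptors, keeping only matches clearly better than the
# runner-up (Lowe's ratio test) to cut down on false pairings.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des_l, des_r, k=2)
        if m.distance < 0.75 * n.distance]

# Placeholder 3x4 projection matrices: identity intrinsics and a
# 0.5 m baseline stand in for the real calibration and camera poses.
P_l = np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Triangulate each matched pair into a 3D position relative to the
# cameras; enough of these points gives you the depth map.
pts_l = np.float32([kp_l[m.queryIdx].pt for m in good]).T
pts_r = np.float32([kp_r[m.trainIdx].pt for m in good]).T
hom = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)
points_3d = (hom[:3] / hom[3]).T
```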

For aerial photography that isn't the case. Typically there's one airplane with one camera, flying over the area, taking one picture at a time, then looping around and flying a parallel track slightly offset. Repeat that pattern all day. To make the needed stereo pairs, images are taken with lots of overlap: typically 80+% in the direction of flight and 20+% between image strips. Using differential GPS, some Kalman filters, and lots of math, you can get pretty good location info for where the camera was when each image was taken, so that part is covered.
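
To make those overlap numbers concrete, here's the back-of-the-envelope math. The frame size is a made-up stand-in for the survey camera; the 30 cm ground sample distance comes up again below.

```python
# Illustrative numbers only: the frame size is an assumption,
# not the actual survey camera's spec.
pixels_along_track = 10_000                       # hypothetical frame size
gsd_m = 0.30                                      # 30 cm per pixel
footprint_m = pixels_along_track * gsd_m          # 3,000 m on a side

photo_spacing_m = footprint_m * (1 - 0.80)        # 80% forward overlap -> 600 m
strip_spacing_m = footprint_m * (1 - 0.20)        # 20% side overlap -> 2,400 m
```

At 80% forward overlap every patch of ground shows up in about five consecutive frames along the track, so there are plenty of stereo pairs to choose from.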

What isn't covered is that the world changes. Trees blow in the wind. Cars move. Waves wash up on the shore. Cows walk.

As part of the Global Ortho project we mapped the continental US and Western Europe with 30 cm imagery and generated a 2.5D surface map with about 4 meter resolution. We did this by splitting the target areas into 1° cells and collecting and processing data in those chunks. Turns out that flying a track, turning around, and flying back takes a few minutes. That means that pictures taken at the beginning of one strip and the end of the next can be 3-5 minutes apart.
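
Where do those minutes come from? A spot photographed at the start of one strip shows up again near the end of the next: the plane flies out, turns, and flies all the way back. The strip length, ground speed, and turn time below are all illustrative guesses:

```python
strip_length_m = 10_000   # hypothetical strip across part of a 1-degree cell
speed_m_s = 90            # rough survey-aircraft ground speed
turn_time_s = 60          # looping around onto the adjacent track

# Out along strip n, turn, then back along strip n+1.
dt_s = 2 * strip_length_m / speed_m_s + turn_time_s
print(f"{dt_s / 60:.1f} minutes between the two views")  # ~4.7
```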

And lots can happen in that time. Fast things, like planes, trains, and automobiles, have moved far enough that the SIFT algorithm doesn't try to match them across images. Things that don't move far, like treetops blowing in the wind, get lost in the image resolution. But things that move slowly and keep going have a wonderful effect. Remember that cow that was walking? It probably produces the same SIFT descriptor, since it's a 3x5 black spot against a green pasture. And it didn't move that far, so it gets matched with the one from 3 minutes ago. The same thing happens with whitecaps on open water. Then we triangulate. And depending on which way the cow moved, you either get a spike or a well in the surface model. All because the cows don't stand still.
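
You can put a number on that spike. The two rays that should intersect at the cow instead cross above or below the ground, and for a small displacement the height error is roughly flying height times displacement over baseline. Every number below is an illustrative assumption:

```python
H = 3_000.0   # assumed flying height above ground, meters
B = 2_400.0   # assumed baseline between the two camera positions, meters
d = 3.0       # assumed distance the cow walked between exposures, meters

# Intersect the two camera->cow rays: the crossing point sits off the
# ground by about H * d / B when d is small compared to B.
height_error_m = H * d / (B - d)
print(f"{height_error_m:.1f} m spike or well, depending on direction")  # ~3.8
```

A stroll of a few meters becomes a terrain feature on the same order as the surface map's 4 meter resolution, which is exactly why it shows up.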

And those spikes kept lots of folks employed. Their job was to look at the model, find anomalies, then go into a 3D modeling program and pound them flat. Yes, we gave them tools to find the issues, and we did automatic fixup where we could, but we still needed eyes on all of the data to make sure it was good. All because a cow thought that patch of grass over there looked better. Which meant our data was a little messy. And the automation didn't understand messy data.
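
One way to automate some of that triage, assuming the surface comes as a regular grid of heights. The window size and threshold here are invented for illustration, not the production values:

```python
import numpy as np
from scipy.ndimage import median_filter

def find_spikes(dem, window=5, threshold_m=3.0):
    """Return a mask of cells that stick out from their neighborhood,
    plus the smoothed background used to judge them."""
    background = median_filter(dem, size=window)
    return np.abs(dem - background) > threshold_m, background

# Hypothetical usage: plant a cow-sized spike in synthetic terrain,
# find it, and pound it flat with the neighborhood median.
dem = np.random.default_rng(0).normal(100.0, 0.2, size=(512, 512))
dem[200, 300] += 4.0
mask, background = find_spikes(dem)
dem[mask] = background[mask]
```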

So keep your data clean. The earlier you identify, fix, or remove bad data, the better your results, the less manual correction and explaining you have to do, and the more your results will be trusted.