Monday, February 16, 2009

Stereo Imagery as a Data Product

Considering the number of geospatial data products that can be derived from stereo imagery, it can seem surprising that stereo images are not a widespread data product in their own right. Currently the main vendors of stereo data products are satellite operators such as DigitalGlobe (see here for their basic stereo pair product). Data derived from stereo imagery includes terrain, 3D vector data, and digital orthophotos: three of the most common products in the industry. Also consider that there are plenty of software packages on the market for creating orthophotos, generating and editing terrain, and working with vector data.


So why is it rare for stereo imagery to be sold as a "product"?

Personally, I think there are a few reasons for this:

  • There's no practical standard for storing photogrammetry metadata (e.g. image exterior orientation parameters). Although there are efforts underway, photogrammetric metadata is typically stored in proprietary formats, which inhibits the ability of the data to move from system to system. For example, if the imagery was triangulated in System X and then provided to a person using System Y, they invariably have to go through some pain and suffering to ingest the data (running an import job at a minimum). See the sketch after this list for the kind of metadata involved.
  • Airborne flight operations are expensive. Data providers are typically contracted to fly specific jobs for clients such as regional authorities and so forth (here is an example). Times may be changing, but I haven't seen too many companies out there flying stock imagery and then reselling it. This is partly tied to the point above - without a common system for storing photogrammetric metadata, it is difficult for data vendors to deliver a one-size-fits-all solution.
  • The workflow is perceived as being difficult. For example, you need specialized and expensive stereo viewing hardware, domain knowledge in photogrammetry, etc. It all depends on the application, but if you're not performing stereo work all day long as your primary job, it is still possible to get high-accuracy results working in split-screen or anaglyph mode without the specialized hardware. As for the domain knowledge, once the imagery is triangulated, derived products such as the ones above are very easy to create - although they can be time-consuming depending on the project size.
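
The interchange problem in the first bullet is easy to make concrete. Below is a minimal sketch, in Python, of the per-image metadata a stereo product would need to carry with it. The field names and JSON layout are purely hypothetical (no vendor or standard defines them); the point is that a small, well-defined set of values - the camera model plus the exterior orientation - is all that separates a plain image pair from a usable stereo product.

    import json

    # Hypothetical interchange record for one triangulated image.
    # Field names are illustrative only; no standard currently defines them.
    image_record = {
        "image_id": "strip_04_frame_0123",
        "camera": {
            "focal_length_mm": 120.0,              # calibrated focal length
            "principal_point_mm": [0.004, -0.011],
            "pixel_size_um": 9.0,
        },
        "exterior_orientation": {
            # Perspective center in a stated ground coordinate system
            "X": 512345.67, "Y": 4178901.23, "Z": 2450.1,
            "crs": "EPSG:32611",
            # Rotation angles (omega, phi, kappa) in decimal degrees
            "omega": 0.512, "phi": -1.204, "kappa": 89.977,
        },
        "triangulation": {
            "rmse_pixels": 0.31,                   # a posteriori accuracy estimate
            "adjusted": True,
        },
    }

    # Writing this out as plain JSON would let System Y ingest what System X
    # produced without a proprietary import step.
    print(json.dumps(image_record, indent=2))

If something this simple were agreed upon, the "import job" step would mostly disappear.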
It will be interesting to see if stereo imagery develops as a data product or if there will be a migration to photogrammetry-oriented web services that essentially hide the process from users. For example, a server could perform orthorectification with a catalogued RPC image and a terrain model and then deliver the orthorectified image via WMS or WCS. This kind of functionality is on the market now, and I think it could be poised for growth...
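
As a rough illustration of that web-service route, here is what a request against such a service might look like. The endpoint and layer name below are placeholders I've made up; the GetMap parameters themselves are standard WMS 1.1.1, and the server is assumed to orthorectify the catalogued RPC scene on the fly before returning the image.

    from urllib.parse import urlencode

    # Hypothetical endpoint and layer; the GetMap parameters are standard WMS 1.1.1.
    WMS_ENDPOINT = "https://example.com/ortho/wms"

    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": "catalog:scene_rpc_0123",     # scene the server orthorectifies on the fly
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": "-117.25,34.05,-117.15,34.15",  # minx,miny,maxx,maxy in degrees
        "WIDTH": "1024",
        "HEIGHT": "1024",
        "FORMAT": "image/tiff",
    }

    print(WMS_ENDPOINT + "?" + urlencode(params))

    # The client never sees the RPCs or the terrain model; it simply gets back
    # an orthorectified image clipped to the requested bounding box.

The appeal is that all of the photogrammetric machinery stays server-side; from the user's perspective it is just another map layer.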

2 comments:

Anonymous said...

I agree that it's frustrating that there are no standard interchanges for stereo pairs or for single images, for that matter. SocetSet .sup files are as close as there is.

I've looked a little bit at SensorML
http://vast.uah.edu/index.php?option=com_content&view=article&id=14&Itemid=52/
but its emphasis is on real-time sensors, and its complete generality adds a lot of overhead.

Given the number of 3D displays that are coming on the market as part of video games, it would seem to be the time for stereo image distribution. I've been saying that the next feature extraction workstation should be a game console: how about an Erdas port to PS3?

Ryan Strynatka said...

Sup files are indeed open, but the problem is that they aren't in a documented, standard format. For example, we support sups in LPS, but it can be a challenge anytime a parameter gets added/removed/changed in a SS release (e.g. the changes aren't publicized or publicly documented). However, I do agree that they, along with PATB, are about as close as one can get at the moment. Ideally a photogrammetric data model should encompass project-oriented info as well, such as image point measurements, capture time, lineage info (e.g. what terrain model was used to create a particular orthophoto, what its accuracy was, etc), and so forth.
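
As a rough sketch (the names here are invented purely for illustration, not an existing schema), that kind of project-level model might look something like this:

    from dataclasses import dataclass, field
    from typing import Dict, List

    # Invented, illustrative schema; no such standard exists today.
    @dataclass
    class ImagePointMeasurement:
        point_id: str
        image_id: str
        line: float      # image row, in pixels
        sample: float    # image column, in pixels

    @dataclass
    class LineageRecord:
        product_id: str            # e.g. a particular orthophoto
        source_terrain_model: str  # which terrain model was used
        terrain_accuracy_m: float  # its stated vertical accuracy

    @dataclass
    class PhotogrammetricProject:
        images: List[str] = field(default_factory=list)              # image ids, each with EO/IO attached
        capture_times: Dict[str, str] = field(default_factory=dict)  # image_id -> ISO 8601 timestamp
        measurements: List[ImagePointMeasurement] = field(default_factory=list)
        lineage: List[LineageRecord] = field(default_factory=list)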

We've been taking a look at SensorML too, as well as a couple other initiatives - more about that later this week.

PS3 or Xbox?