Sunday, April 6, 2008

Sensor to GIS: An Example Workflow (Part 1)

I mentioned in the previous post that photogrammetry is often the relatively unknown link between sensors and GIS. Today I will walk through prepping sensor data, creating data products, and then using them in a GIS.

First of all, a list of the ingredients:

  • A softcopy photogrammetry system: in this case it is LPS running on my laptop. The various modules required are Core, ATE (automatic terrain extraction), the Terrain Editor, and Stereo Analyst for IMAGINE. I'll describe what each of these are used for as I go through the workflow.
  • A GIS: I thought I would give Quantum GIS a whirl. I've been playing around with it lately and it has some interesting functionality. In particular, it is open source, it uses GDAL for raster support, and I like that its GRASS plugin provides GRASS access within the QGIS environment. This is handy for 2D vector feature extraction and editing, as well as for running analysis functions with the GRASS geoprocessing modules. (The GDAL underpinnings also make scripting straightforward; see the short sketch after this list.)
  • Input data: for this example I'm going to use some aerial digital frame photography over Los Angeles, coupled with GPS/IMU data. More details on the source data in a future post...
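A quick aside on the GDAL point above: because QGIS leans on GDAL for raster access, the data products from this workflow are easy to script against outside of QGIS too. Here is a minimal sketch using the GDAL Python bindings; the filename ortho_mosaic.tif is hypothetical:

```python
# Minimal sketch: inspect an orthophoto with the GDAL Python bindings.
# "ortho_mosaic.tif" is a hypothetical filename used for illustration.
from osgeo import gdal

gdal.UseExceptions()
ds = gdal.Open("ortho_mosaic.tif")

print("Size: %d x %d pixels, %d band(s)" % (ds.RasterXSize, ds.RasterYSize, ds.RasterCount))
print("Projection (WKT):", ds.GetProjection())

# The geotransform maps pixel/line coordinates to georeferenced X/Y.
gt = ds.GetGeoTransform()
print("Upper-left X/Y: %.2f / %.2f" % (gt[0], gt[3]))
print("Pixel size: %.3f x %.3f" % (gt[1], abs(gt[5])))
```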
Now for the workflow. The basic flow is to process the raw image data through to an orthorectified product, and to create ancillary GIS input data from the triangulated results. These ancillary datasets are, in this case, terrain and 3D feature data. I'm not going to outline every button click and every minor step, but I will cover all the major ones. Here we go:
  1. Set up the "block" in LPS. This is basically where you define the project. The most important part here is to get the projection system correct; it isn't fun to fix if you find out it is wrong later on...
  2. Set up the camera file. This provides information about the camera: its focal length, principal point, radial lens distortion parameters, and more. (A sketch of how radial distortion parameters are typically applied follows this list.)
  3. Add the images. At this point the images are basically raw and have no orientation or georeferencing information associated with them.
  4. Import the orientation data. Since the sensor was flown with a GPS/IMU system on board, I needed to import the orientation parameters. These provide X, Y, Z, Omega, Phi, Kappa orientation parameters for each image: initial orientation data that we can refine further. (What these six numbers mean geometrically is sketched after this list.) You can see the layout of the block below.
  5. Run Automatic Point Measurement. This generates "tie points" via image matching technology that can be used to refine the initial orientation parameters in the next step. Tie points are precise locations that can be identified in two or more overlapping images (the more the better). Since I started with good orientation data, measuring true ground control points isn't strictly required, provided you have high-grade GPS/IMU hardware and didn't run into any problems (e.g. while computing IMU misalignment angles). I don't actually have ground control for this dataset, but if I did, I would measure the GCPs prior to running APM. (A toy sketch of the correlation scoring behind image matching appears after this list.) The screenshot below shows the dense array of tie points.
  6. Run the Bundle Adjustment. This reconstructs the geometry of the block and provides XYZ locations for the points measured in the previous step. Blunders (mis-measured points) may have to be removed to achieve a good result. After the adjustment succeeds with a good RMSE, the results can be checked via a report file and the stereo pairs inspected visually; Y-parallax in stereo would indicate a problem with the adjustment. The screenshot below shows the Staples Center in anaglyph (no stereo on my laptop), where the 3D cursor is positioned in XYZ right on top of the building. Since the top of the building is 150 feet off the ground, you can see the "anaglyph effects" on the ground around it. At this point I have triangulated images and can start creating data products: terrain, orthophotos, and 3D features. (A sketch of the collinearity residual at the heart of the adjustment follows this list.)
  7. Generate Terrain. For orthophoto generation, I need a "bare-earth" terrain model. Normally terrain would be generated by running Automatic Terrain Extraction, which uses an autocorrelation algorithm to generate terrain points. However, there are so many skyscrapers in this block that it made more sense for this example to simply use the APM points and filter out the tie points that had correlated on buildings. Normally this wouldn't cut it and I would need LIDAR, autocorrelated, or compiled terrain, but for the sake of expediency the APM-derived surface model should be fine. (A crude version of this rooftop filtering is sketched after this list.)
  8. Generate Orthophotos. Orthophotos are the #1 data product of photogrammetric processing, and most commercial applications have orthorectification capability. The user interface for LPS' orthorectification tool is shown below; after orthorectification there may be a need to produce a final mosaic or a tiled ortho output. And voila: here is an ortho, in this case a few of the images mosaicked together. (A conceptual sketch of how orthorectification samples the source image appears after this list.)
  9. 3D Feature Extraction. For the example workflow I extracted a few 3D buildings in Stereo Analyst for IMAGINE, which is an ERDAS IMAGINE and LPS add-on module. That said, a number of commercial software packages accomplish a similar function. The nice thing about Stereo Analyst is that it doesn't require a third-party CAD/GIS package to run on top of, and you can extract 3D shapefiles, capture texture, and export to KML. (A bare-bones example of the extruded-building KML involved follows this list.) Here's a screen capture of one of the (very few) buildings I extracted; again I'm working in anaglyph since I'm on my laptop.
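A few sketches to back up the steps above. First, the radial lens distortion parameters from step 2: they describe how the lens bends rays away from a perfect pinhole model. Below is one common convention (an odd-power polynomial with k1 and k2 coefficients); I'm not claiming this is the exact model LPS uses internally:

```python
import math

def correct_radial_distortion(x, y, k1, k2, x0=0.0, y0=0.0):
    """Remove radial lens distortion from an image measurement.

    (x0, y0) is the principal point and k1, k2 are radial distortion
    coefficients; dr = k1*r^3 + k2*r^5 is a common photogrammetric
    convention, not necessarily the one a given package uses.
    """
    dx, dy = x - x0, y - y0
    r = math.hypot(dx, dy)           # radial distance from principal point
    if r == 0.0:
        return x, y
    dr = k1 * r**3 + k2 * r**5       # radial displacement at distance r
    return x - dx * (dr / r), y - dy * (dr / r)
```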
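Next, the exterior orientation from step 4. X, Y, Z locate the camera's perspective center, while Omega, Phi, Kappa define a rotation between ground and image space. A sketch under one common sequence of axis rotations; order and sign conventions vary between systems:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix from omega, phi, kappa (radians), using the
    common photogrammetric sequence R = Rz(kappa) @ Ry(phi) @ Rx(omega).
    """
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(omega), -np.sin(omega)],
                   [0.0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0.0, np.sin(phi)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(phi), 0.0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                   [np.sin(kappa),  np.cos(kappa), 0.0],
                   [0.0, 0.0, 1.0]])
    return Rz @ Ry @ Rx
```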
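The image matching behind APM in step 5 ultimately comes down to scoring how well small image windows agree. A toy normalized cross-correlation score over two NumPy patches; real matchers add image pyramids, interest operators, and sub-pixel refinement:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized image windows.

    Returns a score in [-1, 1]; values near 1 suggest both windows show
    the same ground feature, which is the essence of tie point matching.
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)
```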
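The bundle adjustment in step 6 is a least-squares solution of the collinearity equations over all images and points at once. Here is a sketch of the residual for one point in one image, reusing rotation_matrix from above; a production adjustment also weights observations and screens blunders statistically:

```python
import numpy as np

def collinearity_residual(ground_xyz, cam_xyz, M, focal, measured_xy):
    """Difference between a measured image point and where the
    collinearity equations predict it.

    M is the ground-to-image rotation matrix (e.g. rotation_matrix(...).T)
    and focal is the focal length; all units must be consistent.
    """
    u, v, w = M @ (np.asarray(ground_xyz) - np.asarray(cam_xyz))
    predicted = np.array([-focal * u / w, -focal * v / w])
    return np.asarray(measured_xy) - predicted

# A bundle adjustment minimizes the squared residuals of this kind over
# every measurement, e.g. with scipy.optimize.least_squares; blunders
# show up as stubbornly large residuals.
```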
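The rooftop filtering described in step 7 can be approximated with a crude local-minimum test. The cell size and threshold here are hypothetical, and real bare-earth filters are far more robust:

```python
import numpy as np

def filter_building_points(points, cell=100.0, threshold=10.0):
    """Crudely drop points that sit well above their local neighbors.

    points: N x 3 array of X, Y, Z. Points more than `threshold` units
    above the lowest Z in their grid cell are assumed to be rooftop
    hits and removed.
    """
    points = np.asarray(points, dtype=float)
    keys = np.floor(points[:, :2] / cell).astype(int)
    keep = np.ones(len(points), dtype=bool)
    for key in {tuple(k) for k in keys}:
        in_cell = np.all(keys == key, axis=1)
        zmin = points[in_cell, 2].min()
        keep[in_cell] &= points[in_cell, 2] - zmin <= threshold
    return points[keep]
```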
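Conceptually, the orthorectification in step 8 walks the terrain model and asks, for each output ground cell, which source pixel the collinearity equations point at. A heavily simplified nearest-neighbor sketch; real tools resample bilinearly or cubically and handle occlusion:

```python
import numpy as np

def ortho_sample(image, dem_xyz, cam_xyz, M, focal, pixel_size, pp_px):
    """Toy orthorectification by backward projection.

    dem_xyz: rows x cols x 3 grid of ground X, Y, Z; M is the
    ground-to-image rotation matrix; pixel_size is the source pixel
    size in the same units as focal; pp_px is the principal point in
    pixel coordinates. Nearest-neighbor sampling only.
    """
    rows, cols, _ = dem_xyz.shape
    ortho = np.zeros((rows, cols), dtype=image.dtype)
    for i in range(rows):
        for j in range(cols):
            u, v, w = M @ (dem_xyz[i, j] - cam_xyz)
            x, y = -focal * u / w, -focal * v / w      # image-space coords
            c = int(round(pp_px[0] + x / pixel_size))  # source column
            r = int(round(pp_px[1] - y / pixel_size))  # source row (y flips)
            if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
                ortho[i, j] = image[r, c]
    return ortho
```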
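Finally, the KML export mentioned in step 9 boils down to writing extruded polygons. A bare-bones example using only plain Python string handling; the footprint coordinates and the ~46 m (150 ft) roof height are hypothetical values for illustration:

```python
def building_to_kml(name, ring_lon_lat_alt):
    """Render one extruded building footprint as a minimal KML string.

    ring_lon_lat_alt: closed list of (lon, lat, roof_height_m) tuples;
    altitudes are interpreted relative to the ground.
    """
    coords = " ".join("%f,%f,%f" % pt for pt in ring_lon_lat_alt)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Placemark>\n'
        "<name>%s</name>\n"
        "<Polygon><extrude>1</extrude>\n"
        "<altitudeMode>relativeToGround</altitudeMode>\n"
        "<outerBoundaryIs><LinearRing><coordinates>%s</coordinates>"
        "</LinearRing></outerBoundaryIs>\n"
        "</Polygon></Placemark></kml>\n" % (name, coords)
    )

# Hypothetical footprint near the Staples Center, roof ~46 m up:
ring = [(-118.2673, 34.0430, 46.0), (-118.2669, 34.0430, 46.0),
        (-118.2669, 34.0427, 46.0), (-118.2673, 34.0427, 46.0),
        (-118.2673, 34.0430, 46.0)]
print(building_to_kml("Building 1", ring))
```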
While we only touched briefly on the major steps, at this point all the data is prepped and ready to go into QGIS. That will be the topic of the next post!
