Thursday, May 21, 2009

Exploring Stereo 3D Feature Extraction (Part 2)

While the previous post introduced the concept of 3D stereo feature extraction, this post will focus more on specific tools for 3D vector capture.

For urban 3D modeling, manual stereo feature extraction typically involves collecting rooftops (and sometimes even more detailed structures, such as rooftop mechanical features) as 3D polygon features. To create solid 3D models, an extra step is taken to extrude each feature down to the ground. There are several methods for doing this. In PRO600, an ERDAS application that runs in both the Bentley MicroStation and PowerMap environments, there are three options for building extrusion via the “Create Building” operation:

  • Manually measure the ground height.
  • Specify a fixed object height.
  • Extend the feature down to the surface of a digital elevation model.
If you have a DEM available, the third option is clearly the least time-consuming, because you can extrude a large number of buildings all at the same time. There are also a number of options to consider when performing the extrusion; see below for the Object Creation options:

Create Building and Object Creation Preferences in PRO600
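The DEM-based extrusion described above can be sketched in a few lines. This is an illustrative example only (not PRO600's actual implementation): each rooftop vertex is dropped straight down to the terrain height, and the roof and base vertex pairs form the wall faces of the solid model. The function and variable names are my own.

```python
# Illustrative sketch of extruding a rooftop polygon down to a DEM
# to form a solid building model (not PRO600's actual code).

def extrude_to_dem(roof_vertices, dem_height_at):
    """Drop each rooftop vertex to the DEM and build wall faces.

    roof_vertices: list of (x, y, z) tuples for the rooftop polygon.
    dem_height_at: function (x, y) -> ground elevation.
    Returns the roof polygon plus one quadrilateral wall per edge.
    """
    base = [(x, y, dem_height_at(x, y)) for (x, y, z) in roof_vertices]
    walls = []
    n = len(roof_vertices)
    for i in range(n):
        j = (i + 1) % n  # next vertex, wrapping around the polygon
        # Each wall is a quad: two roof vertices and the two base
        # vertices directly beneath them.
        walls.append([roof_vertices[i], roof_vertices[j], base[j], base[i]])
    return {"roof": list(roof_vertices), "walls": walls}

# Flat DEM at 100 m elevation, and a simple 10 m-tall square rooftop
flat_dem = lambda x, y: 100.0
roof = [(0, 0, 110.0), (10, 0, 110.0), (10, 10, 110.0), (0, 10, 110.0)]
model = extrude_to_dem(roof, flat_dem)
```

With a real DEM the base vertices would each take a different elevation, which is why this option works for a large number of buildings at once.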

To demonstrate extrusion and object creation, here is a screen capture of a set of extracted building features. These have all been extruded with the aforementioned “Create Building” operation.

Extracted 3D Building Vectors Displayed in Bentley PowerMap

Here is the same feature dataset with an orthophoto backdrop. Notice how the building polygons fall around the bases of the buildings (the true locations) and not the tops. This is one of the advantages over heads-up digitizing directly from orthophotos: if the building has any lean at all, then tracing rooftop vectors off the ortho will produce building polygons in the wrong location.

Building Vectors and ECW Orthophoto Displayed in Bentley PowerMap
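The "building lean" error mentioned above can be quantified with the standard textbook relief-displacement formula (not specific to any software); the example numbers below are assumed for illustration.

```python
# Relief displacement (textbook formula): the top of a vertical feature
# is displaced radially outward on a vertical photo by d = r * h / H,
# where r is the radial distance of the feature's top from the nadir
# point (in photo units), h is the feature height, and H is the flying
# height above the feature's base.

def relief_displacement(r, h, H):
    """Radial image displacement of a feature's top relative to its base."""
    return r * h / H

# Assumed example: a 30 m building imaged 60 mm from nadir,
# flown 1500 m above ground.
d = relief_displacement(r=60.0, h=30.0, H=1500.0)  # displacement in mm
```

This is the offset you would silently inherit by tracing the rooftop off an ortho instead of measuring the footprint in stereo.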

Switching to an isometric view and altering the display mode, you can see the buildings rendered from a perspective view.

Perspective View of 3D Building Vectors in Bentley PowerMap

While extruding features is one thing, capturing them is another. PRO600 features an extensible library catalog containing feature definitions. A default catalog ships with the software, but it can be completely modified or replaced if necessary. Double-clicking a feature in the catalog shows the feature attributes (color, description, style, etc.), which can then be edited and saved. Collected features can also be selected and modified on an individual basis. For example, you may have a selection set of buildings that you would like to highlight with a different line thickness and color to make them stand out from the rest of the building vectors in the dataset.

PRO600 Library Catalog and Shape Definitions

While the "Building" feature above triggers the "Collect Squared" MicroStation/PowerMap command, it is also possible to switch into arc-collection mode mid-feature to model any curving or circular lines. An increasing number of building structures feature curved corners instead of sharp edges, making tools like this a necessity.

Modeling complex features can take time, so snapping is also important. PRO600 has a few different snapping options, including 2D, 3D, and a 2D/3D hybrid, where the tool switches between 2D and 3D based on a user-defined tolerance. Here is a look at the snapping preferences, which also show optional modifications such as the ability to cut a vertex or add a new vertex to the feature being snapped to.

PRO600 Snapping Preferences
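One plausible way a tolerance-driven 2D/3D hybrid snap could work is sketched below. This is an assumption about the general idea, not PRO600's actual logic; the function and tolerance names are my own.

```python
# Sketch of a 2D/3D hybrid snap test (illustrative only): snap in full
# 3D when the candidate vertex is within a 3D tolerance, otherwise fall
# back to a 2D (X, Y) check that preserves the cursor's Z value.

import math

def hybrid_snap(cursor, vertex, tol_3d, tol_2d):
    """Return the snapped (x, y, z) point, or None if no snap applies."""
    dx, dy, dz = (vertex[i] - cursor[i] for i in range(3))
    if math.sqrt(dx*dx + dy*dy + dz*dz) <= tol_3d:
        return vertex                                 # 3D snap: adopt X, Y, and Z
    if math.hypot(dx, dy) <= tol_2d:
        return (vertex[0], vertex[1], cursor[2])      # 2D snap: keep cursor Z
    return None

# The cursor is horizontally near the vertex but 5 m above it, so the
# 3D test fails and the 2D test snaps X and Y while keeping the Z.
snapped = hybrid_snap(cursor=(0.0, 0.0, 5.0),
                      vertex=(0.1, 0.1, 0.0),
                      tol_3d=0.5, tol_2d=0.5)
```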

Aside from building polygon features, there are also specific tools available for mapping linear features such as roads. For example, a parallel line tool is available to reduce the time required to digitize both sides of a road or freeway feature. Since photogrammetric mapping has been widely used in the engineering community (e.g. State-level Departments of Transportation in the USA), a number of tools have emerged that are particular to high-accuracy, high-throughput mapping applications. Fortunately, these tools also lend themselves to a broad number of other applications, including visualization, 3D city modeling, urban/environmental planning, and much more.
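The geometric idea behind a parallel-line tool is a perpendicular offset of the digitized centerline or edge. The sketch below shows the core computation for a single 2D segment (illustrative, not the PRO600 implementation).

```python
# Offset a 2D segment by a fixed perpendicular distance using the
# left-hand unit normal -- the basic operation behind generating the
# second edge of a road from a single digitized line.

import math

def offset_segment(p1, p2, dist):
    """Offset the segment p1->p2 to its left by dist."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length  # left-hand unit normal
    return ((p1[0] + nx * dist, p1[1] + ny * dist),
            (p2[0] + nx * dist, p2[1] + ny * dist))

# A straight east-west road edge offset 7.5 m to the left (north)
edge = offset_segment((0.0, 0.0), (100.0, 0.0), 7.5)
```

A production tool would also handle joining consecutive offset segments at corners (miter or arc joins), which is where most of the real implementation effort lies.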

Wednesday, May 20, 2009

Exploring Stereo 3D Feature Extraction (Part 1)

Manual, operator-driven 2D and 3D feature extraction techniques have been in use for many years. One of the early methods of getting data into a GIS was 2D digitization. In addition to tablet digitizing systems, softcopy photogrammetry approaches to vector data capture have also been available for many years.

Remember this?

The photogrammetric approach, also known as stereoscopic feature extraction, is one step closer to the source of the data than GIS-based digitizing. Instead of compiling feature data from an orthophoto or another source of information, stereo extraction uses the original images along with the triangulation metadata. The triangulation metadata consists of exterior orientation parameters that are required for stereo pair generation. Exterior orientation data could also come from airborne GPS/IMU data – which may be necessary for mapping remote areas where collecting ground control points is not an option.
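The exterior orientation parameters mentioned above (the camera station coordinates and a rotation built from the angles omega, phi, kappa) tie each ground point to its image coordinates through the standard collinearity equations, shown here in their common textbook form for reference:

```latex
x = x_0 - f \,\frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}
                   {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}

y = y_0 - f \,\frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}
                   {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}
```

Here $(X_L, Y_L, Z_L)$ is the camera position, $m_{ij}$ are elements of the rotation matrix, $f$ is the focal length, and $(x_0, y_0)$ is the principal point. Solving these for two images simultaneously is what allows 3D measurement from a stereo pair.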

Stereo feature extraction is inherently 3D. The vector data are measured in X, Y, and Z by manipulating a floating cursor while viewing a stereo pair of left and right images. As a result, the point, line, and polygon feature data all have Z values associated with each vertex. Because GIS developed in a 2D paradigm, many early stereo feature extraction packages were coupled with CAD environments as a platform: a stereo viewer would be added for stereo image viewing and 3D measurement, and the feature data would render in the application's native viewer. More recently, applications have been developed for GIS packages in addition to CAD environments, as well as a number of standalone applications that do not require platform technology.

Stereo Feature Extraction in PRO600
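The Z measurement described above ultimately rests on parallax. For the idealized normal case of stereo photography (parallel optical axes, vertical photos), the textbook relation is Z = B f / p; the sketch below is illustrative, with assumed example values, and is not tied to any particular package.

```python
# Normal-case stereo depth from x-parallax (textbook relation):
# Z = B * f / p, where B is the air base (distance between the two
# camera stations), f is the focal length, and p is the measured
# x-parallax of the point.

def depth_from_parallax(B, f, p):
    """Distance from the camera baseline down to the measured point."""
    return B * f / p

# Assumed example: 600 m air base, 153 mm focal length,
# 61.2 mm measured parallax.
Z = depth_from_parallax(B=600.0, f=0.153, p=0.0612)  # meters
```

Moving the floating cursor in depth is effectively adjusting p until the cursor appears to rest on the feature; the corresponding Z is then attached to the digitized vertex.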

So why is this important? Data used for GIS analysis needs to come from somewhere. While there are a number of methods used for generating vector data, the demand for 3D vector data appears to be on the rise. One area of current discussion is the format and environment for generating and persisting 3D vectors. Currently the end application tends to determine the format, but this can be challenging for people trying to create one-size-fits-all data products (e.g. if I want to create a textured 3D city model, what format should I use without being locked into one system?). In this respect the development of CityGML is intriguing, as the geometric properties are but one aspect of urban information modeling and are viewed as an inter-related component of a much broader system. But persistence models aside, tools are still required for the original content generation.

More on specific 3D feature extraction tools tomorrow...

Monday, May 18, 2009

ISPRS Hannover Workshop 2009

The ISPRS Hannover Workshop is coming up at the start of June covering "High Resolution Earth Imaging for Geospatial Information." The topic areas look good, covering:

  • Digital aerial cameras
  • Handling of high resolution space images
  • Impact of web-based access to, and use of, remote sensing images (Google Earth, Microsoft Virtual Earth)
  • Potential of small satellites for topographic mapping
  • Airborne laser scanning
  • Synthetic aperture radar (SAR) and interferometric SAR
  • TerraSAR-X
  • Sensor and system calibration and integration
  • Direct georeferencing
  • Sensors and methods of DEM generation
  • Aerial and satellite image analysis
  • Rapid change detection and update for environmental applications
  • GIS-driven updating and refinement of geospatial databases
  • From experimental systems for object acquisition and updating to commercial solutions
We'll be at the Tuesday "DTM" session to present research we've been conducting at ERDAS pertaining to terrain point cloud generation from high resolution optical images (see here for some initial concepts; more details coming later).

Friday, May 8, 2009

The Google Book Scanner and Mapping

In recent days there has been a lot of media coverage of Google's system for scanning books for its online books database. After taking a peek at the patent, it looks like there are a lot of similarities between the system devised by Google and what we do in the mapping business.

Mapping Problem: The surface of the earth is not flat, so when you take a picture of the earth from an airborne (or satellite) sensor, the images will contain geometric distortion. This effect is particularly acute in areas of high relief. Because of the distortion, accurate measurements cannot be made from the images, which means they are not typically suitable for a GIS.

Mapping Solution: Aerial photography is captured in stereo.  After photogrammetric processing (triangulation), the images can be viewed in stereo and 3D measurements can be created - including a 3D model (digital elevation model) of the terrain surface.  The terrain surface is then applied to the original aerial photographs to create digital orthophotos.  If processed correctly, orthophotos will be relatively free of geometric distortion.  This is important because accurate measurements (e.g. distance between two locations) can then be made from them.
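The orthophoto step just described can be sketched as a simple loop: for each cell in the output ground grid, look up the terrain height in the DEM, project that 3D ground point back into the source photo through the sensor model, and resample the pixel. This is an illustrative skeleton only; the callables passed in (DEM lookup, sensor projection, image sampling) are hypothetical stand-ins.

```python
# Sketch of the orthorectification loop (illustrative, hypothetical
# sensor model): ground grid -> DEM height -> back-project into the
# source photo -> resample pixel.

def orthorectify(grid_xy, dem_height_at, project_to_image, sample_image):
    """Build an orthophoto as a dict {(x, y): pixel_value}.

    grid_xy:          iterable of (x, y) ground coordinates.
    dem_height_at:    (x, y) -> terrain elevation.
    project_to_image: (x, y, z) -> (col, row) in the source photo.
    sample_image:     (col, row) -> pixel value.
    """
    ortho = {}
    for (x, y) in grid_xy:
        z = dem_height_at(x, y)
        col, row = project_to_image(x, y, z)
        ortho[(x, y)] = sample_image(col, row)
    return ortho

# Toy stand-ins: flat terrain, identity projection, checkerboard image
ortho = orthorectify(
    grid_xy=[(0, 0), (1, 0)],
    dem_height_at=lambda x, y: 0.0,
    project_to_image=lambda x, y, z: (x, y),
    sample_image=lambda c, r: (c + r) % 2,
)
```

In practice the projection step is the collinearity model from triangulation, and the resampling uses bilinear or cubic interpolation rather than a direct pixel lookup.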

Well, it looks like Google ran into the same problem with regards to scanning books....

Book Scanning Problem: Because of the way books are bound, the surface of a book is not flat when you open it. Rather, it curves out from the spine. This presents a problem for optical character recognition systems, which typically require the page being scanned to lie on a flat surface (e.g. removing the spine of the book and running the pages through a flat-bed scanner).

Book Scanning Solution: According to the patent, Google uses a system of two cameras and infrared light to capture a stereo image pair of the pages. A "three-dimensional image" of the pages is then created, which is used to geometrically correct the original images. The "three-dimensional surface" seems to be what we would call a digital elevation model. The final product is a geometrically corrected image that is suitable for optical character recognition.

Sounds very similar to the orthorectification process, doesn't it? Instead of an infrared light source we have the sun, and instead of curving pages we have the terrain of the earth. The underlying problem in both cases is similar. The main difference is that instead of using two cameras, in photogrammetry the norm would involve a single camera on a moving platform (aircraft or satellite). But it is the same idea...

Here is a graphic of the Google scanning system: