Showing posts with label orthophoto. Show all posts

Thursday, June 11, 2009

2009 NAIP Status Update

As mentioned on the NSGIC News Blog, the National Agriculture Imagery Program (NAIP) for 2009 will cover roughly two-thirds of the USA. This is quite an achievement for the program and will result in an excellent resource for geospatial practitioners as well as the broader public.

A status update from yesterday indicates that flight operations are already well underway.

The 2009 program contractors include a group of commercial mapping firms that are all well-known in the North American mapping industry: 3001, Aerial Services, the North West Group, Photo Science, Sanborn and Surdex. It is interesting to note that the cameras used will be a mix of large-format frame and pushbroom sensors. 3001 has both a Leica Geosystems ADS40 (pushbroom) as well as an Intergraph DMC (frame), and I'm not sure which will be used for NAIP acquisition. Aerial Services and the North West Group operate Leica ADS sensors. Photo Science and Surdex operate DMCs while Sanborn operates a Microsoft Ultra Cam (frame). The photogrammetric processing workflows for frame and pushbroom sensors are quite different, with pushbroom sensors capturing long strips of imagery in a "pixel carpet" versus the traditional frame approach. However, it is good to see a mix of technology in use.

Here is a map of the contractor areas:

Note that further maps and status updates are available from the APFO (Aerial Photography Field Office) home page.

Wednesday, April 29, 2009

The Role of Seams in High Resolution Image Mosaics

When discussing true orthophoto generation I made reference to the image mosaicking process. I thought I would touch on that more today, with an emphasis on high resolution imagery. One of the main challenges in mosaic production is ensuring the mosaic is seamless. That is, one cannot easily discern where the edges of the input images are. This can be challenging for a number of reasons. One of the most difficult aspects involves the input image geometry. Because the input images have different perspective centers, the geometry of surface objects will vary between images. For example, a tall building in the center of a frame image may not exhibit any building lean, but the same building in the next image will show noticeable lean. So the big challenge in the mosaic process is ensuring the seams between input images conceal any mismatches. While seams between the images can be automatically generated, a quality control check must be performed to ensure there are no issues. Seams usually need to be manually edited if there are any problems.
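To make the routing idea concrete, here is a toy sketch (with made-up pixel values, not production code) of how an automatic seam generator might place a seam in the overlap between two images: for each row, favor the column where the two inputs agree most closely, which steers the seam away from mismatched features like leaning buildings. Real mosaicking software uses far richer cost functions; this only illustrates the principle.

```python
def best_seam(left, right):
    """For each row of the overlap, return the column where |left - right| is smallest."""
    seam = []
    for row_l, row_r in zip(left, right):
        diffs = [abs(a - b) for a, b in zip(row_l, row_r)]
        seam.append(diffs.index(min(diffs)))
    return seam

# Two tiny 3x5 "overlap" patches: they agree everywhere except where a
# leaning building (DN 90 in one image, DN 30 in the other) makes them disagree.
left  = [[10, 10, 90, 10, 10],
         [10, 90, 90, 10, 10],
         [10, 10, 10, 10, 10]]
right = [[10, 10, 30, 10, 10],
         [10, 30, 30, 10, 10],
         [10, 10, 10, 10, 10]]

print(best_seam(left, right))  # seam stays clear of the mismatched building columns
```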


Here's an example: the image below shows a multi-story building with a seam cutting right through it. Because the angle of building lean is different in each image, it looks like a mess.

In this example, the seam has been edited so that it shifts north of the building. This is ok, but it also reveals that there is a lot of building lean in the image portions chosen for inclusion in the mosaic.

This example shows the same building with the seam edited the other way. Instead of diverting north of the building, I've moved it to the other side. You can see that this provides a better top-down perspective on the building, which will generally make for a better output mosaic.

Another typical problem area: bridge decks. The image below shows a seam cutting through a freeway overpass. It is easy to see that the edges are misaligned.

Below, the seam is edited to run parallel with the bridge instead of cutting through it.

Note that accuracy issues in the terrain model used to create the input orthophotos can also introduce "mismatches" between images. In the bridge example above I used a DEM where I modeled the terrain with reasonable accuracy (performing terrain editing in stereo), but I stopped short of modeling the bridge decks. This means the ground areas of the image are accurate, but there is some offset for the bridge. While the seam editing technique hides the error and creates a visually appealing result, the fact remains that the ortho may be considered flawed: the seams in the mosaic simply mask the error - they do not eliminate it.

Friday, April 17, 2009

More On True Orthos

The previous post covered an explanation of true orthos, and in this post I wanted to outline a few notions on true ortho creation. As discussed previously, true orthos can be very expensive. This is because of the extra effort that goes into production. However, there are a couple of different ways to produce true orthos. These include:

  1. True Ortho Processing

  2. Flight Planning and “Managed Mosaicking”

Both methods have pros and cons, and each is more applicable in some circumstances than others.

The first method refers to special techniques used to create true orthos. The ortho photography page at Eastern Topographics outlines the pre- and post-processing results quite well. The technique that is typically applied involves the use of input images and a bare-earth terrain model (just like regular ortho processing), but with an additional component of 3D building features that need to be captured in stereo via 3D feature extraction software. So the raw ingredients to the process would be (a) input imagery, (b) bare earth terrain, and (c) 3D feature data. It is the latter component that typically drives up the cost of production. Because the buildings are typically extracted manually, the human cost of collection gets bundled into the true ortho pricing. As for what actually goes on in the processing, a comment on the previous post provided an excellent link explaining the details of automated true ortho processing. It also outlines the importance of color matching, which is key to achieving acceptable results. The other thing to keep in mind is to ensure there is enough valid pixel data; otherwise occluded areas (i.e. the areas of the image obscured by building lean) in the input imagery may be filled with black void pixels. This can be alleviated by ensuring the imagery was collected with a high enough overlap percentage.

The other method that is often employed is simply to fly the project area with a very high degree of overlap (e.g. 80/80 forward/sidelap versus the usual 60/40). Orthos are then produced for all the frames via the usual approach with bare earth terrain. During the mosaicking process, the operator can interactively select the image portions to use (the center area of each frame) via seam editing, mosaic the images, and then tile them back out into whatever the specification requires. This approach may not be applicable for dense urban environments (e.g. Manhattan) but can work well for suburban and low-rise areas with a few high-rises here and there. While fuel costs are going to be higher because of the increased overlap, the processing costs should remain low.
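To get a feel for the cost trade-off, here is a back-of-the-envelope exposure count comparing 60/40 with 80/80 overlap. The block dimensions and frame footprint below are hypothetical, and this ignores real flight-planning details (crab, terrain variation, turn allowances):

```python
import math

def exposures_needed(area_w_m, area_l_m, footprint_m, forward, side):
    """Rough exposure count for a rectangular block (illustrative numbers only).

    footprint_m: ground footprint of one square frame, in metres.
    forward, side: forward overlap and sidelap as fractions (e.g. 0.6, 0.4).
    """
    base = footprint_m * (1 - forward)        # ground distance between exposures
    line_spacing = footprint_m * (1 - side)   # ground distance between flight lines
    photos_per_line = math.ceil(area_l_m / base) + 1
    lines = math.ceil(area_w_m / line_spacing) + 1
    return photos_per_line * lines

# A hypothetical 10 km x 10 km block with a 1.5 km frame footprint:
standard = exposures_needed(10_000, 10_000, 1_500, 0.60, 0.40)
high     = exposures_needed(10_000, 10_000, 1_500, 0.80, 0.80)
print(standard, high)  # 234 vs 1225 exposures
```

In this example the exposure count grows roughly five-fold, which is exactly why the extra flying is the main cost of this approach while the processing stays conventional.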

Note that pushbroom sensors such as the ADS80 can be ideal for the latter approach. This is because they can capture imagery at nadir in long strips, which dramatically reduces the number of input images into the mosaic process. Here's a screen capture of ADS80 imagery taken from the middle of the strip. While the multi-story buildings along the edges of the strip have discernible building lean, the imagery at the center doesn't show any. Flying with high sidelap would allow for the inclusion of just the central areas for the mosaic processing. It may not be perfect 100% of the time, but I would argue that it is good enough for many applications, without requiring the high cost of collecting all the building features.

Wednesday, April 15, 2009

True Orthophotos and Regular Orthophotos: What's the Difference?

Digital orthophotos have become a premier geospatial data product in recent years. Although they are often used as background context for the display of vector data, there is quite a bit of complexity that can go into creating them. If you've ever looked into purchasing orthos, you may have been given the option of buying "true orthos". This was the case with yours truly several years ago at my first job out of university, when I was tasked with purchasing orthophotos for several metro areas in the USA. Compared with "regular orthos", true orthophotos seemed outrageously expensive...

So what are True Orthos?

If orthophotos can be characterized as images that are geometrically corrected for relief variation, true orthophotos add the dimension of correcting for the distortion of buildings. Or, simply stated, true orthophotos do not show building lean. This is important for mapping applications such as digitizing street centerlines in Lower Manhattan: "normal" orthophotos will show displacement of skyscrapers, and many of the streets will be obscured. Building lean isn't a major issue in suburban or rural environments, but true orthos may be necessary for urban environments such as Hong Kong, NYC, Seoul, and other metro areas with large numbers of skyscrapers. True orthos can also be important for transportation planning projects, such as accurately mapping bridges.
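The magnitude of building lean follows the classic relief displacement relation for vertical photography, d ≈ h·r/H (object height h, radial distance from nadir r, flying height above ground H). A quick sketch with illustrative numbers:

```python
def building_lean_m(height_m, flying_height_m, radial_dist_m):
    """Approximate ground displacement of a building top due to relief displacement.

    Classic vertical-photo approximation d = h * r / H; illustrative numbers only.
    """
    return height_m * radial_dist_m / flying_height_m

# A 200 m skyscraper, 1 km from nadir, flown at 3000 m above ground,
# leans by roughly 67 m on the ground:
print(round(building_lean_m(200, 3000, 1000), 1))
```

The formula also shows why the image center is lean-free: at nadir, r = 0 and the displacement vanishes, which is the whole basis of the high-overlap approach discussed in the next post's archive entries.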

Here is an example: the ortho of central Los Angeles below shows building lean that is common in many urban environments. The facades of the skyscrapers are clearly visible, and the surrounding areas on the ground are obscured. Yahoo Maps for the same area shows a similar effect.


This next image shows what a "true ortho" would look like (a different part of central LA). The only way to tell the buildings are highrises, other than the giveaway helicopter landing pads, is the long shadows cast by the two buildings in the center.

Here is another example featuring a couple of buildings in Lucerne, Switzerland. Although these buildings are not skyscrapers, the effect is clearly visible. The building facade is readily displayed in the image below.
The next image, of the exact same building, shows a top down view that does not display building sides.
Nadir view without building lean

Next Post: why true orthos are expensive and some notes on true ortho production.

Friday, January 30, 2009

Value-Added GeoPDFs Part 2

In case you want to check out the example GeoPDF from the previous post, I have now uploaded a couple versions of it to the Adobe Acrobat.com site. The file size is fairly big, so I created two versions:

Higher quality (compression factor of 60) at 119MB.

Lower quality (compression factor of 15) at 55MB.

In both cases the imagery looks decent, but the shaded relief in the 55MB version shows the effects of over-compression. If you haven't had a look at it before, check out Acrobat.com as well - 5GB of online file storage is available for free!

To adjust the layers, open the files in Adobe Reader and click on the "Layers" icon on the left side. The bands are as follows:

Bands 1/2/3 = RGB orthomosaic
Bands 4/5/6 = Shaded Relief
Band 7 = DSM

In the example below the Shaded Relief bands are loaded, and as you can see they can be adjusted in a number of combinations.

Thursday, January 29, 2009

Data Presentation: Value-Added GeoPDFs

When I initially heard about GeoPDF I thought it sounded like a good enabling technology for people without advanced geospatial expertise or access to the appropriate software. After all, anyone can view a PDF. Typically I thought of it in the context of viewing RGB orthophotos... Fast-forward to the present: lately I've been thinking about ways of presenting fused data, and this is where GeoPDFs came to mind. In particular I wanted to present terrain and image data.

For input data I started with some edge-matched orthophotos of Sligo, Ireland and processed XYZ LIDAR data of the same area (data courtesy of OSi). I processed the data in LPS and IMAGINE, which basically involved importing the LIDAR data and converting it to the IMAGINE IMG format, and then creating a shaded relief map out of it. For the imagery I mosaicked several tiled input images into a single orthomosaic. The processing resulted in three products: an RGB image file, a shaded relief image, and a DEM. To put them all together I used the "Layer Stack" function in IMAGINE, which resulted in a 7-layer 8-bit IMG file.
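One detail worth noting: fitting floating-point elevations into an 8-bit layer stack means rescaling the DEM to the 0-255 range. However your tool of choice performs it, the underlying math is a simple linear stretch, sketched here with made-up elevations:

```python
def scale_to_8bit(values, lo=None, hi=None):
    """Linearly rescale elevation values to 0-255 for an 8-bit layer stack."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    span = (hi - lo) or 1.0  # guard against a perfectly flat DEM
    return [round(255 * (v - lo) / span) for v in values]

# Illustrative elevations in metres:
elev = [12.0, 55.5, 99.0, 142.5, 186.0]
print(scale_to_8bit(elev))
```

The trade-off is obvious but easy to forget: the stacked DEM band is good for visualization, not for precise elevation queries, since the quantization step here is (186 - 12)/255 ≈ 0.7 m.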


Next, I converted the IMG to a GeoPDF. This was a painless process and produced a 7-layer PDF file. My input file covers a fairly broad area and was a few gigs (2.7GB to be precise) in size so I used a JPEG compression quality factor of 60, which for visual analysis creates a fairly sharp-looking image in a PDF format with a much-reduced size of 122MB. This allowed me to open up the image in Adobe Reader and view the image.

Because the first three layers of my image are the RGB orthomosaic, this is what is displayed when opening the PDF in Adobe Reader. I also have the GeoPDF Toolbar (recently renamed to TerraGo Desktop) loaded for various measurement and manipulation functions:

(click any of the screen captures for larger views)


By clicking on the "Layers" button I could adjust the band combinations for RGB to display the shaded relief.


It was also possible to display various fused results. In this example I am displaying Red and Green from the orthomosaic and then loading the DEM into the Blue channel. This means the colors are skewed but the benefit is that you can see the image feature details and also get a notion of the high-relief areas. In this case high relief shows up as dark blue, and the absence of blue indicates low-relief (e.g. the upper left areas).
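For anyone wanting to reproduce this kind of fused composite outside a GeoPDF viewer, the per-pixel logic is straightforward: keep the orthomosaic's red and green values and substitute normalized elevation for the blue channel. A minimal sketch with made-up pixel and elevation values:

```python
def fuse_rgb(ortho_r, ortho_g, dem, dem_min, dem_max):
    """Build R/G/DEM false-colour pixels: high relief shows up as strong blue."""
    span = (dem_max - dem_min) or 1
    blue = [round(255 * (z - dem_min) / span) for z in dem]
    return list(zip(ortho_r, ortho_g, blue))

# Two pixels: the first sits on high ground (350 m), the second on low ground (40 m).
pixels = fuse_rgb([120, 80], [100, 90], [350.0, 40.0], 0.0, 400.0)
print(pixels)
```

The first pixel comes out with a strong blue component and the second with almost none, mirroring the dark-blue-equals-high-relief effect described above.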


Note that many kinds of band combinations/visualizations are possible with this fused product.

Many thanks to Adam Estrada for assisting with the GeoPDF part of this workflow. I also plan on making the dataset available soon (I'll hopefully get it uploaded tonight).

Friday, December 12, 2008

Defining Your Own Raster Basemap in ERDAS TITAN

Something I forgot to mention in yesterday's post is the ability to define your own raster basemap in TITAN. So what does this mean?

Most virtual worlds have pre-cooked imagery that they serve up as their "skin of the earth" basemap. But what if I want to define my own default imagery, just for my own personal use? Some applications allow loading imagery as layers, but one of the useful features of the TITAN Client is that you can use your own imagery as the default skin.

So how do you do this?

First, load your imagery in TITAN's Geospatial Instant Messenger. The screen capture below shows an ECW orthomosaic that I have loaded.
Next, right click on the layer and choose the "Copy WMS URL" option.

With the URL copied, navigate over to My Services, where you'll see a few different service options. For the next step, double-click on the WMS service.
This opens up the "Select Services" interface.
Here you can paste in the URL of your image that you copied earlier. Accept it and you'll see the WMS show up in the list of services, displayed below.
Right click on the newly-added WMS service (displayed above) and choose "Set as Default Basemap". You should see it turn red, indicating it is the default basemap.

Finally, fire up the TITAN Viewer. You can see the results of the example I ran through below. Note that my only layers are a terrain layer (see yesterday's post) and a KML file. I don't have any image layers, as my orthophoto is being used as the skin for the virtual world. It's a nifty workflow and can be useful if you operate in the photogrammetric world and create/display/use your own orthos or if you purchase orthos and just want to use them locally as a basemap, without having to deal with layer management.

Tuesday, April 8, 2008

Sensor to GIS: An Example Workflow (Part 2)

In the previous post I outlined the process for ingesting raw imagery into a photogrammetric system and creating GIS data products: orthophotos, terrain, and building feature data. While I skipped what could be considered the very start of the workflow (e.g. flight planning, data download, etc.) the idea was to demonstrate the major steps involved in creating GIS-ready vector and raster data.

In today's post I'll walk through loading and using the data in Quantum GIS. After downloading the software, installation was pretty straightforward. The only real software setup operation I did before getting started with the workflow below was to install the GRASS plugin. This was fairly easy: you just select the Plugin Manager from the Plugin drop-down menu, select the GRASS plugin, hit OK and the plugin gets installed right away.

Since GIS workflows aren't nearly as linear as the photogrammetric processing was, I'm just going to list bullets of operations I went through:

  • Load the terrain data. I added my terrain layer (which is a GDAL supported .img IMAGINE raster terrain file) with the Add Raster button. This loads the raster and displays the layer name in the legend on the left hand side of the application. By right-clicking on the layer, I could access the raster layer properties. From there it is possible to change things like the Symbology (check out the stylish "Freak Out" color map below), add scale dependent visibility, view metadata and a histogram.
  • Load the orthophoto. The methodology for adding the ortho was the exact same as the terrain since they are both raster layers. In the screen capture below I've resized the main viewer, changed the DEM color map to pseudo-color and adjusted the DEM transparency. This gives me a better idea of elevation over the various parts of the ortho: red is higher elevation while the yellow and greenish-blue parts in the bottom part of the ortho are lower elevation. If the layer is selected I can spot check specific elevation values with the "identify feature" tool in the default toolbar.
  • Load the building vectors. The button for adding a vector layer is right beside the button for adding rasters on the main icon panel. Since I had saved my buildings as a shapefile, the process for adding them into the project was simple. They are displayed in the screen capture below, after changing the polygon color. Yes, I was too lazy to extract more than four buildings....
  • Create a Roads layer. The next thing I did was to create a new shapefile and digitize a few roads. There are actually two ways to do this: you can use either the native QGIS vector creation and editing tools, or you can use the GRASS plugin. In the screen capture below I used the native QGIS tools, but after experimenting for a bit I like the GRASS editing tools better - for example, you just need to right click to finalize an edit procedure, whereas in QGIS you need to right click, hit an OK button on the attribution panel, and then wait a second or so for it to render. I also like how the GRASS editing tools are in a single editing panel. This makes it easy to switch between tools, perform attribution, and makes for easier editing (e.g. moving vertices) on existing features.
  • Buffering Roads. After digitizing the roads layer I opened up the Grass Tools, which are available from the GRASS toolset:
I selected the Vector Buffer operation (highlighted above), which produces a new vector layer based on a user-specified buffer distance. This is a simple operation, and I buffered the roads by 25 meters (see results below). While this is a very simple exercise, what I am trying to illustrate is that by the end of the workflow I was performing pure GIS functions based on vector data. These are "analysis" operations that can be performed independently of the sensor data that was used to create the base map data layers. At this point the imagery may not even be relevant in a real world project - even though sensor data processed in a softcopy photogrammetry environment provided the original source data.
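Under the hood, a buffer query like this boils down to point-to-geometry distance tests. Here is a minimal, self-contained sketch (not how GRASS actually implements buffering) of checking whether a point falls within 25 meters of a road segment:

```python
import math

def dist_point_segment(px, py, x1, y1, x2, y2):
    """Shortest distance from point (px, py) to the segment (x1,y1)-(x2,y2)."""
    dx, dy = x2 - x1, y2 - y1
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return math.hypot(px - x1, py - y1)
    # Parameter t of the projection onto the line, clamped to the segment
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg_len_sq))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def in_buffer(px, py, segment, dist_m=25.0):
    return dist_point_segment(px, py, *segment) <= dist_m

road = (0.0, 0.0, 100.0, 0.0)        # a 100 m road segment along the x-axis
print(in_buffer(50.0, 20.0, road))   # True: 20 m from the centreline
print(in_buffer(50.0, 30.0, road))   # False: 30 m away
```

A real buffer operation outputs polygon geometry rather than answering point queries, but the distance test above is the geometric core of it.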

The food for thought here is that knowledge of the base "input" data to a GIS is critical: decisions you make based on your GIS analyses depend on it.
Accuracy issues with the input data (sensor issues or photogrammetric processing errors) can influence the validity of the entire downstream project.

Saturday, April 5, 2008

Article Update: Photogrammetry Workflows, Present and Future

In this post I thought I would update an article I wrote last year that provides an intro to photogrammetric workflows and some thoughts on the latest technology. Originally published last May in GIS Development, this version has updated content and I've also added in links to further information on the various topics discussed throughout the article.

Hope you enjoy!


Introduction

The photogrammetric workflow has been relatively static since the advent of digital photogrammetry. Numerous application tools are dedicated to various parts of the workflow but the actual photogrammetric tasks have seen little change in recent years. However, we are beginning to see changes in the workflows. The growing proliferation of “new” technologies such as LIDAR, pushbroom, and satellite sensors has caused many commercial vendors to re-examine the application tools they offer. In addition, advances in information technology have opened up the possibility of processing increasingly large quantities of data. This, coupled with improved processing capabilities and network bandwidth, is also causing a change in traditional photogrammetric workflows.

Background

ERDAS has a long history in providing both analytical and digital photogrammetry solutions. As a Hexagon company, ERDAS’ mapping legacy dates back to the 1920s with the founding of Kern Aarau and Wild Heerbrugg. These companies were consolidated into Leica and over the years offered analogue, analytical, and digital photogrammetry and mapping solutions. LH Systems, ERDAS, and Azimuth Corp. were acquired by Leica Geosystems in 2001. These acquisitions allowed Leica to enter a number of spaces in the digital photogrammetry market and offer comprehensive photogrammetric solutions to the production photogrammetry, defense, and GIS markets.

ERDAS’ initial photogrammetric offerings, OrthoBase and Stereo Analyst for IMAGINE, were targeted at the GIS user community. As demand for 3D data grew in the GIS community, Leica Geosystems sought to provide easy to use tools for producing “oriented” images from airborne or satellite data and extracting 3D information such as building and road data. With the acquisition of LH Systems in 2001, Leica Geosystems inherited a staff and customer base skilled in production photogrammetry. This new customer base required engineering-level accuracy and primarily worked with large-scale airborne photography in the commercial arena and satellite imagery in the defense market. In early 2004 Leica Geosystems released the Leica Photogrammetry Suite (now LPS). This new product suite initially used updated components from OrthoBase and OrthoBase Pro, along with newly developed technology for stereo viewing and terrain editing. Shortly thereafter mature products such as PRO600 and ORIMA were integrated into the product suite and numerous update releases increased productivity. In April 2008, Leica Geosystems’ Geospatial Imaging division was re-branded as ERDAS.

Current Workflows

When asked about the “photogrammetric workflow” most industry professionals will refer to the analog frame camera (e.g. RC30) workflow. Analog frame cameras were prevalent during the transition to digital photogrammetry and still remain a common source of imagery. Numerous software tools have been developed to guide users through the traditional analog frame workflow. Popular vendors include BAE, INPHO (now owned by Trimble), Intergraph, and ERDAS. A brief outline of the mainstream analog frame workflow is provided below.

· Scanning process: Airborne camera film is scanned and converted into a digital file format. Some high performance scanners perform interior orientation (IO) as well.

· Image Dodging: Scanning may introduce radiometric problems such as hotspots (bright areas) and vignetting (dark corners). These can be minimized by applying a dodging algorithm. Dodging, in the digital photogrammetry sense of the word, generally calculates a set of input statistics describing the radiometry of a group of images. Then, based on user preferences, it generates target output values for every input pixel. Output pixels are then shifted from their current DN value toward their target DN, subject to several user parameters and constraints. Typically there are options for global statistics calculations across a group of images, which has the net effect of balancing out large radiometric differences between images. Overall this resolves the aforementioned problems and “evens out” the radiometry both within individual images and across groups of imagery.
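A drastically simplified sketch of the statistical idea behind dodging: linearly shift an image's DNs so that its mean and standard deviation match block-wide target values. Real implementations are considerably more elaborate (local statistics, per-band handling, user constraints); this toy version just shows the shift-toward-target mechanics on a handful of made-up pixel values.

```python
def dodge(image, target_mean, target_std):
    """Shift pixel DNs toward target statistics via a linear transform,
    clamping the result to the valid 8-bit range."""
    n = len(image)
    mean = sum(image) / n
    std = (sum((v - mean) ** 2 for v in image) / n) ** 0.5 or 1.0
    out = []
    for v in image:
        dn = target_mean + (v - mean) * (target_std / std)
        out.append(max(0, min(255, round(dn))))
    return out

# A dark image pulled toward hypothetical block-wide statistics (mean 128, std 40):
dark = [20, 30, 40, 50, 60]
print(dodge(dark, 128, 40))
```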

· Project setup: most photogrammetric packages have an initial step where the operator performs steps such as defining a coordinate system for the project, adding images to the project, and providing the photogrammetric system with general information regarding the project. Ancillary information may include data such as flying height, sensor type, the rotation system, and photo direction.

· Camera Information: the operator needs to provide information about the type of camera used in the project. Typically the camera information is stored in an external “camera file” and may be used many times after it is initially defined. It contains information such as focal length, principal point offset, fiducial mark information, and radial lens distortion. Camera file information is typically gathered from the camera calibration report associated with a specific camera.

· Interior Orientation (IO): The interior orientation process relates film coordinates to the image pixel coordinate system of the scanned image. IO can often be performed as an automatic process if it was not performed during the scanning process.

· Aerial Triangulation (AT): The AT process serves to orient images in the project to both one another and a ground coordinate system. The goal is to solve the orientation parameters (X, Y, Z, omega, phi, kappa) for each image. True ground coordinates for each measured point will also be established. The AT process can be the most time-consuming and critical component of the digital photogrammetry workflow. Sub-components of the AT process include:

o Measuring ground control points (typically surveyed points).

o Establishing an initial approximation of the orientation parameters (rough orientation).

o Measuring tie points. This is often an automatic procedure in digital photogrammetry systems.

o Performing the bundle adjustment.

o Refining the solution: this involves removing or re-measuring inaccurate points until the solution is within an acceptable error tolerance. Most commercial software packages contain an error reporting mechanism to assist in refining the solution.
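The "acceptable error tolerance" decision above typically comes down to an RMSE computed over control or check point residuals. A minimal sketch with hypothetical residuals and a hypothetical 0.15 m tolerance:

```python
import math

def rmse(residuals_m):
    """Root-mean-square error of check-point residuals, in metres."""
    return math.sqrt(sum(r * r for r in residuals_m) / len(residuals_m))

def solution_acceptable(residuals_m, tolerance_m):
    return rmse(residuals_m) <= tolerance_m

# Hypothetical X-residuals after a bundle adjustment:
res = [0.05, -0.12, 0.08, -0.03, 0.10]
print(round(rmse(res), 3), solution_acceptable(res, 0.15))
```

In practice the reporting mechanisms in commercial packages break this down per point and per axis so the operator can find and re-measure the offending observations rather than just pass/fail the whole block.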

  • Terrain Generation: Digital orthophotos are one of the primary end-products in the photogrammetric workflow. Accurate terrain models are an essential ingredient in the generation of digital orthophotos. They are also useful products in their own right, with uses in many vertical market applications (e.g. hydrology modeling, visual simulation applications, line-of-sight studies, etcetera). Terrain models can take the form of TINs (Triangulated Irregular Networks) or grids. Once AT is complete, terrain generation can typically be run as an automatic process in most photogrammetric packages. Automatic terrain generation algorithms typically match “terrain points” across two or more images (more images increase the reliability of a point). Seed data such as manually extracted vector files, control points, or other data can often be input to help guide the correlation process. There are usually filtering options to remove blunders, also referred to as “spikes” or “wells”, in the output terrain model. Filtering can also be used to assist in the removal of surface features such as buildings and trees. This can be of great assistance if the desired output is a “bare-earth” terrain model. It is important to note that terrain may also be acquired via manual compilation (in stereo), LIDAR, IFSAR (Interferometric Synthetic Aperture Radar), or publicly available datasets such as SRTM.
  • Terrain Editing: Digital terrain models (DTMs) that have been generated by autocorrelation procedures typically require some “cleanup” to model the terrain to the required level of accuracy. Most photogrammetric packages include some capability for editing terrain in stereo. It is important for operators to see the terrain graphics rendered over imagery in stereo so that they can determine whether automatically generated terrain posts are indeed “on the ground” - that is, whether the DTM is an accurate representation of the terrain, or is at least accurate enough for the specific project at hand. Terrain can usually be rendered using a mesh, contours, points, and breaklines. The operator usually has control over which rendering method is used (it could be a combination) as well as various graphic details such as contour spacing, color, line thickness and more. Terrain editing applications usually provide a number of tools for editing TIN and grid terrain models. In addition to individual post editing (e.g. add, delete, move for TIN posts, adjust Z for grid cells), area editing tools can be used for a number of operations. These may include smoothing, surface fitting operations, spike and well removal tools, and so on. Geomorphic tools can be used for editing linear features such as a row of trees or hedges. After a terrain edit has been performed, the system will update the display in the viewer so that the operator can assess the accuracy and validity of the edit. Once the editing process is complete the user may have to convert the DTM into a customer-specified output format (e.g. one TIN format to another, or TIN to grid). DTMs are increasingly a customer deliverable and product in their own right; as mentioned previously, they have many uses and are becoming quite widespread in various applications.
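The spike/well filtering mentioned in the two bullets above can be as simple as comparing each post to a neighborhood median. Commercial packages use much richer criteria, but a toy version (made-up posts and threshold) looks like this:

```python
def flag_spikes(posts, neighbours, threshold_m=5.0):
    """Flag terrain posts that deviate from their neighbourhood median.

    posts: list of post elevations; neighbours: per-post lists of
    neighbour elevations. Returns indices of suspected spikes/wells.
    """
    flagged = []
    for i, (z, nbrs) in enumerate(zip(posts, neighbours)):
        s = sorted(nbrs)
        median = s[len(s) // 2]
        if abs(z - median) > threshold_m:
            flagged.append(i)
    return flagged

posts = [100.2, 100.5, 131.0, 100.1]          # post 2 is a ~30 m "spike"
nbrs  = [[100.5, 100.1, 100.3]] * 4
print(flag_spikes(posts, nbrs))  # [2]
```

Automated flagging like this is only a first pass; as the Terrain Editing bullet notes, a stereo review is still needed to confirm which flagged posts are genuine blunders.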
  • Feature Extraction: Planimetric feature extraction is usually an optional step in the workflow, depending on the project specifications. Automatic 3D feature extraction algorithms are under development, but manual stereo extraction is still the predominant method. Feature extraction tools in digital photogrammetry packages typically allow users to collect, edit and attribute point, line, and polygonal features. Features can be products in themselves, feeding into a 3D GIS or CAD environment. Alternatively, building features may be used again in the photogrammetric processing chain in the production of “true orthos”, which take surface features into account to produce imagery with minimized building lean - which can be particularly beneficial in urban environments.
  • Orthophoto Generation and Mosaicing: Digital Orthophotos are usually the primary final product derived from the photogrammetric workflow. There are many different customer specifications for orthos, including accuracy, radiometric quality, GSD, output tile definitions, output projection, output file format and more. A mosaicing process is usually included in the ortho workflow to produce a smooth, seamless, and radiometrically appealing product for the entire project area. Mosaicing may be performed as part of the orthophoto process directly (ortho-mosaicking) or performed as post-process later on. Generally, orthophoto production follows these steps:
    • Input image selection: the operator chooses the images to be orthorectified.
    • Terrain source selection: the operator chooses the DTM to be used for orthorectification. This is a critical step, as the accuracy of the orthophoto will be determined by the accuracy of the terrain. A terrain model with gross errors (e.g. a hill not modeled correctly) will result in geometric errors in the resulting orthophoto.
    • Define orthophoto options: Operators typically select a number of options for the orthorectification process. These may include output GSD, the image resampling method, projection, output coordinates and more.
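The steps above can be sketched as an inverse-mapping loop: for each output cell, compute its ground coordinate, look up the terrain height from the DTM, project the ground point into the source image via the sensor model, and resample. In the sketch below, `ground_to_image` is a stand-in for a real sensor model (the collinearity equations or similar); all names and parameters are illustrative:

```python
import numpy as np

def orthorectify(image, dtm, ground_to_image, x0, y0, gsd, out_shape):
    """Inverse-mapping orthorectification sketch.

    The DTM is assumed co-registered with the output grid. Resampling
    is nearest-neighbor here; production systems also offer bilinear
    and cubic convolution.
    """
    rows, cols = out_shape
    ortho = np.zeros(out_shape, dtype=image.dtype)
    for r in range(rows):
        for c in range(cols):
            X = x0 + c * gsd          # easting of the output cell
            Y = y0 - r * gsd          # northing (row 0 is the north edge)
            Z = dtm[r, c]             # terrain height at this cell
            line, sample = ground_to_image(X, Y, Z)
            li, si = int(round(line)), int(round(sample))
            if 0 <= li < image.shape[0] and 0 <= si < image.shape[1]:
                ortho[r, c] = image[li, si]
    return ortho

# Toy check: a flat terrain and a "sensor model" that maps ground
# coordinates straight back to pixel coordinates should reproduce
# the source image exactly.
src = np.arange(16, dtype=float).reshape(4, 4)
flat = np.zeros((4, 4))
model = lambda X, Y, Z: ((3.0 - Y), X)
ortho = orthorectify(src, flat, model, x0=0.0, y0=3.0, gsd=1.0,
                     out_shape=(4, 4))
```

The key point the sketch makes concrete is why DTM accuracy matters: `Z` feeds directly into the projection, so a terrain error displaces the pixel lookup and produces a geometric error in the ortho.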
Aside from defining the various parameters, orthorectification is not usually an interactive process. The mosaicking process, however, usually does involve some degree of operator interaction. After images are chosen for the mosaic, there is usually some method of defining seams (polygons or lines used to determine which areas of the input images will be used in the output mosaic). While there are many automatic seam generation tools, there is almost always some element of user interaction to define, edit, or at least review the seams. Operators will typically edit the seams so that they run along radiometrically contiguous areas; that is, they do not cut through well-defined features such as buildings. This is because the ultimate goal of seam editing is to “hide” the seams so that they are not visible in the output mosaic. Once seams are defined, smoothing or feathering operations can usually be applied to them so that their appearance is minimized.
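Feathering is the easiest of these operations to show in miniature: instead of switching abruptly from one input to the other at the seam, weights ramp across an overlap zone. A minimal sketch for two side-by-side strips (illustrative function, not any vendor's API):

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two side-by-side image strips across `overlap` columns.

    Within the overlap zone, weights ramp linearly from the left image
    to the right one, hiding the seam instead of leaving a hard
    radiometric edge.
    """
    w = np.linspace(1.0, 0.0, overlap)                    # left weight
    blend = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])

# Two constant-brightness strips with a 4-pixel overlap: the mosaic
# transitions gradually from 100 to 200 instead of jumping at a seam.
a = np.full((2, 6), 100.0)
b = np.full((2, 6), 200.0)
mosaic = feather_blend(a, b, overlap=4)
```

Real seams are of course free-form polylines rather than straight column boundaries, but the weighted-average idea is the same.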

Another important aspect is radiometry. While some operators will tackle radiometry early on in the workflow (as previously discussed in the “Image Dodging” step), others will dodge or apply other radiometric algorithms during the orthomosaic production process. The goal is to make the output group of images radiometrically homogeneous. This will result in a visually appealing output mosaic that has consistent radiometric qualities across the group of images comprising the project area.
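One simple way to push a group of images toward radiometric homogeneity is to linearly rescale each input so its mean and standard deviation match a reference. Production dodging and balancing algorithms are far more sophisticated (spatially varying, per-band, often histogram-based), so treat this toy version, with illustrative names, as a sketch of the goal rather than the method:

```python
import numpy as np

def match_radiometry(img, reference):
    """Linearly rescale `img` so its mean and standard deviation match
    the reference image. A crude stand-in for production dodging and
    balancing, illustrating the goal: radiometrically homogeneous
    inputs for the mosaic."""
    scale = reference.std() / img.std()
    return (img - img.mean()) * scale + reference.mean()

# A dark input image balanced against a brighter reference:
dark = np.array([[10.0, 20.0], [30.0, 40.0]])
ref = np.array([[110.0, 120.0], [130.0, 140.0]])
balanced = match_radiometry(dark, ref)
```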

A project area may be several hundred square kilometers in size, so a single output mosaic file is not usually an option due to the sheer size. End customers cannot usually handle a single large file and would prefer to receive their digital orthomosaic in a series of tiles defined by their specification. Most photogrammetric systems have a method of defining a tiling system that can be ingested by the orthomosaicing application to produce a seamless tiled output product.
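Defining such a tiling scheme amounts to snapping the project bounding box to a regular grid and enumerating the tile extents. A minimal sketch (the function and its signature are illustrative):

```python
import math

def tile_scheme(xmin, ymin, xmax, ymax, tile_size):
    """Return the (xmin, ymin, xmax, ymax) extents of the tiles needed
    to cover a project bounding box with a regular grid, as a customer
    delivery specification might require."""
    tiles = []
    nx = math.ceil((xmax - xmin) / tile_size)   # tiles across
    ny = math.ceil((ymax - ymin) / tile_size)   # tiles up
    for j in range(ny):
        for i in range(nx):
            x0 = xmin + i * tile_size
            y0 = ymin + j * tile_size
            tiles.append((x0, y0, x0 + tile_size, y0 + tile_size))
    return tiles

# A 2500 m x 1000 m project cut into 1 km tiles yields a 3 x 1 grid
# (the rightmost tile extends past the project edge):
tiles = tile_scheme(0, 0, 2500, 1000, 1000)
```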

In recent years the introduction of high resolution satellite imagery and airborne pushbroom sensors such as the ADS40 has added new variations to the traditional workflow. Both types of sensors produce data that are digital from the point of capture, alleviating the need to scan film photography. Commercially available satellite imagery (e.g. CARTOSAT, ALOS, etc.) has been available at increasingly high levels of resolution (e.g. 80cm resolution for CARTOSAT-2). While this is sufficient for many mapping projects, some engineering-grade applications still require the resolution available from airborne sensors.

Pushbroom sensors such as the ADS40 can achieve a ground sample distance in the 5-10cm range. Modern digital airborne sensors are also usually mounted with a GPS/IMU system. GPS (Global Positioning System) technology assists mapping projects by combining base stations in the project area with a constellation of satellites providing positional information to the GPS receiver on board the aircraft. IMUs (Inertial Measurement Units) are increasingly used to establish precise orientation angles (pitch, yaw, and roll) for the sensor platform in relation to the ground coordinate system. GPS and IMU information can be extremely beneficial for mapping areas where limited ground control information is available (e.g. rugged terrain). They also assist the triangulation process by providing highly accurate initial orientation data, which is then further refined by the bundle adjustment procedure. GPS and IMU information can also be used for “direct georeferencing”, which bypasses the time-consuming AT process. However, direct georeferencing is not a universally accepted methodology within the mapping community. The caveat is that project accuracy may suffer; this may nonetheless be acceptable for rapid response mapping and other projects where lower accuracies are adequate for the end customer.
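The attitude angles delivered by the IMU enter the photogrammetric math as a rotation matrix relating the sensor frame to the ground coordinate system. A minimal sketch, using the common omega/phi/kappa sequential-rotation convention from photogrammetry (angle conventions and axis orderings vary by system and vendor, so treat this as illustrative):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Compose the three attitude rotations (angles in radians) into a
    single rotation matrix, omega about X, then phi about Y, then
    kappa about Z. Pitch/roll/yaw conventions differ mainly in axis
    ordering and sign."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz

# Zero attitude gives the identity; any attitude gives an orthonormal
# matrix, as a rotation must be.
R = rotation_matrix(0.0, 0.0, 0.0)
R2 = rotation_matrix(0.05, -0.02, 0.8)
```

In direct georeferencing, this matrix plus the GPS-derived perspective center position replaces the orientation that would otherwise come out of aerotriangulation.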

Thoughts on Current and Future Photogrammetric Workflows

We are beginning to see some shifts in the currents guiding photogrammetric workflows. These shifts are being driven by advances in computing hardware, new sensor technology, and enterprise solutions.

Data storage and dissemination is a dynamic area in the industry. While imagery was traditionally backed up on tape systems, the cost of storage has dramatically declined in recent years. As customer demand for high-resolution data increases, it is becoming less practical for users to store data directly on their workstations. Users are increasingly storing imagery on servers, employing different methods for accessing it. Demand appears to be on the increase for tools to manage and archive data. Organizations are also examining the possibility of sharing and publishing data. The data may be stored on servers and published via web services or made available for access, subscription, or purchase via a portal.

Sensor hardware is also rapidly changing the photogrammetric workflow. LIDAR has now been widely adopted and accepted, providing extremely high-density and high-accuracy terrain data. There is also a growing trend of integrating LIDAR with digital frame sensors, enabling the simultaneous collection of optical and terrain data and, in turn, rapid digital orthophoto processing. This is much more cost-effective than flying a project area with multiple sensors for image and terrain data. When coupled with airborne GPS and IMU technology, terrain and georeferenced imagery, the primary ingredients for orthos, can be available shortly after the data is downloaded from a flight. IFSAR mapping systems are also a growing source of terrain data.

Coupled with the explosion of imagery is the need to efficiently process it. One method that researchers and software vendors have begun exploring is distributed processing. Under this model a processing job is divided into portions which are then submitted to remote “processing nodes”, resulting in a significant improvement in overall throughput for large projects. Most commercial efforts, such as the ERDAS Ortho Accelerator, have focused on ortho processing. However, several other photogrammetric tasks also lend themselves to distributed processing solutions (e.g. terrain correlation, point matching, etc.).
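The job-splitting pattern itself is straightforward: partition the project into independent units (tiles are a natural choice for ortho processing) and dispatch them to workers. In this sketch, local threads stand in for remote processing nodes, and both function names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def rectify_tile(tile_id):
    # Stand-in for one orthorectification job; a real processing node
    # would load imagery, terrain, and orientation data and write out
    # the finished ortho tile.
    return f"tile_{tile_id}_done"

def process_project(tile_ids, nodes=4):
    # Farm tiles out to a pool of workers. In a distributed system the
    # pool would be a cluster of remote nodes rather than local
    # threads, but the fan-out/collect pattern is the same.
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return list(pool.map(rectify_tile, tile_ids))

results = process_project(range(8))
```

Because ortho tiles are independent of one another, throughput scales roughly with the number of nodes until I/O (reading source imagery and terrain) becomes the bottleneck.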

With data increasingly stored on network locations and the general adoption of database management systems, enterprise photogrammetric solutions will likely change the face of the classical photogrammetric workflow. With imagery and other geospatial data increasingly stored on servers, the processing framework is likely to change such that the operator interacts with a client application that kicks off photogrammetric and geospatial processing operations. Rather than running a heavy “digital photogrammetry workstation”, or DPW, the operator will work through a client view into the project. Geospatial servers will also enable organizations to store and reuse project and other data. For example, automatic correlation processes could automatically identify and utilize seed data stored in online databases, or terrain data stored from previous jobs. The notion of collecting data once and using it many times will be prevalent. Large quantities of data such as airborne and terrestrial LIDAR-derived point clouds will be able to be stored and have operations such as filtering, classification, and 3D feature extraction applied to them. With a shift to enterprise solutions, industry adoption of open standards (e.g. from the Open Geospatial Consortium) will be critical. Providing open and extensible systems will allow organizations to customize workflows to meet their specific needs, fully leveraging their investment in enterprise technology.

Conclusions

This is an exciting time for those of us in the photogrammetry group at ERDAS. Recent trends discussed above have opened up new avenues for changing, modernizing, and empowering what was until recently a relatively static workflow. Our customers drive us to deliver solutions that meet a variety of needs. While there is the constant need to pay attention to existing workflows, it is important to keep an eye on technology trends that will guide future workflow directions. Enterprise integration will likely change the face of classical photogrammetric workflows, making photogrammetry a ubiquitous component of modern geospatial business decision systems.

Wednesday, March 26, 2008

Welcome to the Fiducial Mark

Welcome to the Fiducial Mark!

Over the past several years the once clear lines between photogrammetry, remote sensing, and GIS have become increasingly blurred. Once rigid workflows have become fluid, meaning various tools, products, and methods can be used to achieve a variety of results. While the traditional photogrammetric product was an orthophoto, the various by-products used in final product generation, such as 3D terrain and feature data, have become important products in their own right, often outside the "traditional" photogrammetry domain.

I've started this blog to discuss geospatial and mapping technologies, product developments, workflows, and industry events. My main interests lie in the areas of digital photogrammetry, specifically terrain processing workflows, orthophoto production, mosaicking, and various airborne sensor technologies. Other interests include 3D city modeling and visualization, data sharing, and enterprise geospatial technologies.

Thanks for stopping by!