Photogrammetry
Photogrammetry is the first remote sensing technology ever developed, in which geometric properties about objects are determined from photographic images. Historically, photogrammetry is as old as modern photography itself and can be dated to the mid-nineteenth century.
In the simplest example, the distance between two points that lie on a plane parallel to the photographic image plane can be determined by measuring their distance on the image, if the scale s of the image is known. This is done by multiplying the measured distance by 1/s.
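As a minimal sketch of that scale relationship (the image distance and scale below are invented for illustration, not taken from any survey):

```python
# Distance measured between the two points on the image, in metres (2 cm on the print).
image_distance_m = 0.02

# Image scale s expressed as a fraction: a 1:5000 photograph (assumed for the example).
scale = 1 / 5000

# Ground distance = measured image distance multiplied by 1/s.
ground_distance_m = image_distance_m * (1 / scale)
print(ground_distance_m)  # 100.0 metres
```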
A more sophisticated technique, called stereophotogrammetry, makes it possible to estimate the three-dimensional coordinates of points on an object. These are determined by measurements made in two or more photographic images taken from different positions (see stereoscopy). Common points are identified on each image. A line of sight (or ray) can be constructed from the camera location to the point on the object. It is the intersection of these rays (triangulation) that determines the three-dimensional location of the point. More sophisticated algorithms can exploit other information about the scene that is known a priori, for example symmetries, in some cases allowing reconstructions of 3D coordinates from only one camera position.
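Because measurement noise means the two rays rarely intersect exactly, one common convention is to take the midpoint of the shortest segment joining them. The sketch below illustrates that idea only; it is not the method of any particular photogrammetry package, and the camera centres and viewing directions are invented for the example.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two lines of sight.

    c1, c2 -- camera centres (3-vectors)
    d1, d2 -- unit direction vectors of the rays
    """
    # Solve for the ray parameters t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|.
    a = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    p1 = c1 + t1 * d1          # closest point on ray 1
    p2 = c2 + t2 * d2          # closest point on ray 2
    return (p1 + p2) / 2       # estimated 3D location of the object point

# Two cameras one metre apart, both looking at a point roughly 5 m away.
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = np.array([0.0, 0.0, 1.0])
d2 = np.array([-1.0, 0.0, 5.0]) / np.linalg.norm([-1.0, 0.0, 5.0])
print(triangulate_midpoint(c1, d1, c2, d2))  # approximately [0, 0, 5]
```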
Photogrammetry is used in different fields, such as topographic mapping, architecture, engineering, manufacturing, quality control, police investigation, and geology, as well as by archaeologists to quickly produce plans of large or complex sites and by meteorologists as a way to determine the actual wind speed of a tornado where objective weather data cannot be obtained. It is also used to combine live action with computer-generated imagery in movie post-production; Fight Club is a good example of the use of photogrammetry in film (details are given in the DVD extras).
Algorithms for photogrammetry typically express the problem as that of minimizing the sum of the squares of a set of errors. This minimization is known as bundle adjustment and is often performed using the Levenberg-Marquardt algorithm.
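A compact way to see the idea (not a full bundle adjustment, which also refines the camera parameters and relies on sparse solvers) is to refine a single 3D point by minimising its squared reprojection errors with Levenberg-Marquardt, here via SciPy's generic least-squares routine. The camera matrices and observations below are made up for the sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project 3D point X with a 3x4 camera matrix P, returning image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def reprojection_residuals(X, cameras, observations):
    """Stack the 2D reprojection errors of point X over all images."""
    return np.concatenate([project(P, X) - obs
                           for P, obs in zip(cameras, observations)])

# Two toy cameras (identity intrinsics) and slightly noisy observations of one point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # shifted 1 m along x
true_point = np.array([0.2, 0.1, 5.0])
observations = [project(P1, true_point) + 0.001,
                project(P2, true_point) - 0.001]

# Levenberg-Marquardt minimisation of the sum of squared reprojection errors.
result = least_squares(reprojection_residuals, x0=np.array([0.0, 0.0, 4.0]),
                       args=([P1, P2], observations), method='lm')
print(result.x)  # close to the true point
```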
Photogrammetric methods
Photogrammetry uses methods from many disciplines, including optics and projective geometry. A simple data model shows what types of information can go into and come out of photogrammetric methods.
The 3D co-ordinates define the locations of object points in 3D space. The image co-ordinates define the locations of the object points' images on the film or an electronic imaging device. The exterior orientation of a camera defines its location in space and its view direction. The inner orientation defines the geometric parameters of the imaging process; this is primarily the focal length of the lens, but can also include a description of lens distortions. Additional observations also play an important role: scale bars (essentially a known distance between two points in space) or known fixed points establish the connection to the basic measuring units.
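The roles of the exterior and inner orientation can be illustrated with a basic pinhole projection: the exterior orientation (a rotation and a camera centre) moves an object point into the camera frame, and the inner orientation (here reduced to a focal length and principal point, with lens distortion ignored) maps it onto the image. The numbers below are placeholders, not calibration values.

```python
import numpy as np

def project_point(X, R, C, f, principal_point):
    """Pinhole projection of object point X (metres) to image coordinates (pixels).

    R, C            -- exterior orientation: rotation matrix and camera centre
    f               -- inner orientation: focal length, in pixels
    principal_point -- inner orientation: offset of the image centre, in pixels
    """
    X_cam = R @ (X - C)                      # object point in the camera frame
    x = f * X_cam[0] / X_cam[2]              # perspective division
    y = f * X_cam[1] / X_cam[2]
    return np.array([x, y]) + principal_point

# Placeholder orientation: camera at (0, 0, -10) looking along the +z axis.
R = np.eye(3)
C = np.array([0.0, 0.0, -10.0])
f = 3000.0                                   # focal length, pixels
pp = np.array([2000.0, 1500.0])              # principal point, pixels

print(project_point(np.array([1.0, 0.5, 0.0]), R, C, f, pp))  # [2300. 1650.]
```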
Each of the four main variables can be an input or an output of a photogrammetric method.
Photogrammetry has been defined by ASPRS[2] as the art, science, and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena.
Integration
Photogrammetric data and dense range data from scanners complement each other. Photogrammetry is more accurate in the x and y directions, while range data is generally more accurate in the z direction. This range data can be supplied by techniques like LiDAR, laser scanners (using time of flight, triangulation or interferometry), white-light digitizers and any other technique that scans an area and returns x, y, z coordinates for multiple discrete points (commonly called "point clouds"). Photos can clearly define the edges of buildings where the point cloud footprint cannot. It is beneficial to incorporate the advantages of both systems and integrate them to create a better product.
A 3D visualization can be created by georeferencing the aerial photos and LiDAR data in the same reference frame, orthorectifying the aerial photos, and then draping the orthorectified images on top of the LiDAR grid.[3]
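The draping step alone can be sketched as follows, assuming the orthophoto and the LiDAR elevation grid have already been georeferenced and resampled to the same grid (the array shapes and values here are invented): each grid cell's elevation is paired with the colour of the co-located orthophoto pixel.

```python
import numpy as np

# Invented, already co-registered inputs: a 100x100 elevation grid (metres)
# and an orthorectified RGB image covering the same extent at the same resolution.
lidar_dem = np.random.uniform(200.0, 250.0, size=(100, 100))
orthophoto = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)
cell_size = 1.0  # metres per grid cell

# Build x, y coordinates for each cell and drape the image over the surface:
# every output row is a coloured 3D point (x, y, z, r, g, b).
rows, cols = np.indices(lidar_dem.shape)
textured_points = np.column_stack([
    cols.ravel() * cell_size,        # x (easting)
    rows.ravel() * cell_size,        # y (northing)
    lidar_dem.ravel(),               # z from the LiDAR grid
    orthophoto.reshape(-1, 3),       # colour from the orthophoto
])
print(textured_points.shape)  # (10000, 6)
```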
Obsolete
In the 21st century, traditional photogrammetry has largely become obsolete: methods that produce higher-quality results more efficiently and effectively are available to both companies and the general public at low cost. Although it may no longer be as common a practice as it was in the last century, photogrammetry is still taught to students taking degrees in geomatics engineering, and it remains vital to understanding how the latest techniques in spatial analysis were developed and optimised.
See also
- Aerial survey
- Edouard Deville, inventor
- Surveying
- Photomapping
- Geomatics engineering
- Videogrammetry
- Leica Photogrammetry Suite
- SOCET SET
- RainStorm
- ERDAS IMAGINE
- Intergraph ImageStation