Photogrammetry
==General Objective==
The general objective is to capture the position of every point in the scene using at least two cameras positioned at different angles. Dynamic scenes such as people or animals in motion can be captured with a complex multi-camera setup. The resulting form of the subject is then calculated with specialized software such as [[Agisoft PhotoScan]], [[Autodesk 123D Catch]], or [[Autodesk Memento]]. Compared to 3D scanning, photogrammetry can capture all data at once, which makes the capture process more convenient. However, it requires a studio setup, and the results are still not as detailed as 3D scans.
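As an illustrative sketch of this two-camera principle, the following Python snippet triangulates the 3D position of a single point from its pixel coordinates in two views with known camera poses, using the OpenCV library. The intrinsics, poses, and pixel coordinates are placeholder values chosen for the example, not data from a real capture.

<syntaxhighlight lang="python">
import numpy as np
import cv2

# Shared intrinsics K (made-up focal length and principal point) and two
# projection matrices P = K [R | t]: the first camera at the origin, the
# second shifted sideways so the scene is seen from a different angle.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# The same scene point observed in each image (2xN pixel coordinates).
pts1 = np.array([[700.0], [400.0]])
pts2 = np.array([[600.0], [400.0]])

# Triangulate the point; OpenCV returns homogeneous 4xN coordinates.
point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
point_3d = (point_h[:3] / point_h[3]).ravel()
print(point_3d)  # roughly (0.6, 0.4, 10.0) in this contrived setup
</syntaxhighlight>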
==Types of Photogrammetry==
==How Photogrammetry Works==
The use of different angles allows similarity-based mathematical analysis to be performed to reconstruct real-life objects and structures as a faithfully reproduced 3D scene. Because each point has to be matched with the same point shot from a different angle, transparent and highly specular (reflective) surfaces cannot be reliably mapped using this method.
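A rough sketch of this point-matching step is shown below, assuming the OpenCV library and two overlapping photographs with placeholder file names; a real photogrammetry pipeline repeats this over many images and refines the result with bundle adjustment.

<syntaxhighlight lang="python">
import cv2

# Two overlapping photos of the subject (hypothetical file names).
img1 = cv2.imread("view_left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive feature points and compute descriptors for each image.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors between the two views and keep only unambiguous matches
# (Lowe's ratio test): each surviving pair is the same physical point seen
# from two different angles.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "reliable point correspondences")
</syntaxhighlight>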
This stitching process works like the generation of panoramic photographs from multiple images. Once aligned, the resulting point cloud is used to generate a mesh composed of millions of triangles, along with high-resolution textures that can be applied to the generated meshes. The final scene can be rendered in a common 3D engine such as [[Unity]]. Such a scene can be enhanced with dynamic lighting, audio effects, particle effects, or moving objects such as clouds in the background.
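The meshing step could look roughly like the following sketch, which assumes the open-source Open3D library and a point cloud already exported from the alignment step (the article itself does not prescribe a specific tool); the resulting OBJ file can then be imported into an engine such as Unity.

<syntaxhighlight lang="python">
import open3d as o3d

# Load an aligned point cloud (placeholder file name) and estimate normals,
# which the surface reconstruction needs.
pcd = o3d.io.read_point_cloud("scene.ply")
pcd.estimate_normals()

# Poisson surface reconstruction turns the point cloud into a triangle mesh;
# higher depth values give denser meshes at the cost of memory and time.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Export for use in a 3D engine.
o3d.io.write_triangle_mesh("scene_mesh.obj", mesh)
</syntaxhighlight>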
==Photogrammetric Processing Software==