==How SLAM Works==
SLAM systems typically involve several key components working together in a continuous feedback loop:
* '''[[Feature Detection|Feature Detection/Tracking]]:''' Identifying salient points or features (often called [[landmarks]]) in the sensor data (for example, corners in camera images detected with methods like the [[ORB feature detector]]). These features are tracked frame-to-frame as the device moves (see the matching sketch after this list).
* '''[[Mapping]]:''' Using the tracked features and the device's estimated movement (odometry) to build and update a representation (the map) of the environment. This map might consist of sparse feature points (common for localization-focused SLAM) or denser representations like [[point cloud]]s or [[mesh]]es (useful for environmental understanding). A triangulation sketch appears after this list.
* '''[[Localization]] (or Pose Estimation):''' Estimating the device's current position and orientation (pose) relative to the map it has built, often by observing how known landmarks appear from the current viewpoint (see the PnP sketch below).
* '''[[Loop Closure]]:''' Recognizing when the device has returned to a previously visited location by matching current sensor data to earlier map data (for example, using appearance-based methods like [[bag-of-words]]). This is crucial for correcting accumulated [[Drift (tracking)|drift]] (incremental errors) in the map and pose estimate, leading to a globally consistent map; a toy pose-graph sketch below shows the drift correction.
* '''[[Sensor Fusion]]:''' Often combining data from multiple sensors. [[Visual Inertial Odometry|Visual-Inertial Odometry (VIO)]] is extremely common in modern SLAM, fusing camera data with [[IMU]] data. The IMU provides high-frequency motion updates, improving robustness against fast motion, motion blur, or visually indistinct (textureless) surfaces where camera tracking alone might struggle (a minimal fusion sketch closes this section).
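To make the feature-tracking step concrete, here is a minimal sketch using OpenCV's ORB detector to match features between two frames. The frame file names are placeholders, and a real SLAM front end tracks features across many frames rather than matching one pair in isolation.

<syntaxhighlight lang="python">
import cv2

# Placeholder file names; any two consecutive video frames will do.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and binary descriptors in each frame.
orb = cv2.ORB_create(nfeatures=1000)
kp_prev, des_prev = orb.detectAndCompute(prev, None)
kp_curr, des_curr = orb.detectAndCompute(curr, None)

# Hamming-distance brute-force matching suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)

# Each match is one landmark observed in both frames; these 2D
# correspondences feed the mapping and pose-estimation steps.
for m in matches[:10]:
    print(kp_prev[m.queryIdx].pt, "->", kp_curr[m.trainIdx].pt)
</syntaxhighlight>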
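The mapping step can then turn such correspondences into 3D map points by triangulation. In this sketch the camera intrinsics, the 0.2 m baseline between the two poses, and the matched pixel coordinates are all hypothetical values chosen for illustration.

<syntaxhighlight lang="python">
import numpy as np
import cv2

# Hypothetical intrinsics and two camera poses 0.2 m apart.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[0.2], [0.0], [0.0]])])

# Hypothetical matched pixel coordinates of three tracked features,
# one column per feature, as cv2.triangulatePoints expects.
pts1 = np.array([[300.0, 340.0, 310.0], [240.0, 200.0, 260.0]])
pts2 = np.array([[310.0, 352.0, 322.0], [240.0, 200.0, 260.0]])

# Triangulate each correspondence into a homogeneous 3D point.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
map_points = (X_h[:3] / X_h[3]).T  # dehomogenize: one row per map point
print(map_points)  # sparse point-cloud entries for the map
</syntaxhighlight>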
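Given a sparse map, localization is commonly posed as a Perspective-n-Point (PnP) problem: estimate the camera pose from correspondences between 3D map points and their 2D observations. The sketch below fabricates a synthetic map and its projections, then recovers the known pose with OpenCV's RANSAC-based PnP solver; all numeric values are arbitrary.

<syntaxhighlight lang="python">
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Synthetic map: 50 landmarks scattered in front of the camera.
landmarks = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 8.0], (50, 3))

# Ground-truth pose the solver should recover (hypothetical values).
rvec_true = np.array([0.05, -0.1, 0.02])
tvec_true = np.array([0.3, 0.0, 0.5])
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])

# Observations: the landmarks projected into the current frame.
obs, _ = cv2.projectPoints(landmarks, rvec_true, tvec_true, K, None)

# Localization: recover the pose from 3D-2D correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(landmarks, obs, K, None)
print("recovered rvec:", rvec.ravel(), "tvec:", tvec.ravel())
</syntaxhighlight>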
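Loop closure is easiest to see in a toy pose graph. The sketch below accumulates noisy 2D odometry around a square path, then adds a single loop-closure constraint stating that the device recognized its starting point; re-solving the graph by linear least squares spreads the accumulated drift across the trajectory. This linear, position-only formulation is a deliberate simplification of full pose-graph SLAM, which also optimizes rotations.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: a square loop of 16 unit steps returning to the start.
steps = [(1, 0)] * 4 + [(0, 1)] * 4 + [(-1, 0)] * 4 + [(0, -1)] * 4
odom = np.array(steps, float) + rng.normal(0, 0.05, (16, 2))  # noisy odometry

n = len(odom) + 1          # poses x_0 .. x_16
m = 1 + len(odom) + 1      # anchor + odometry + loop closure
A = np.zeros((m, n))
B = np.zeros((m, 2))

A[0, 0] = 1                                # anchor: x_0 = (0, 0)
for i, u in enumerate(odom):               # odometry: x_{i+1} - x_i = u_i
    A[1 + i, i], A[1 + i, i + 1] = -1, 1
    B[1 + i] = u
A[-1, 0], A[-1, -1] = -1, 1                # loop closure: x_16 - x_0 = (0, 0)

X, *_ = np.linalg.lstsq(A, B, rcond=None)
print("drift before closure:", odom.sum(axis=0))
print("end pose after closure:", X[-1])    # pulled back toward the origin
</syntaxhighlight>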
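Finally, a one-dimensional [[Kalman filter]] illustrates the idea behind visual-inertial fusion: high-rate IMU accelerations drive the prediction step, while lower-rate camera position fixes correct it. The motion, noise levels, and update rates below are arbitrary assumptions; real VIO estimates a full 6-DoF pose along with IMU biases.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
dt = 0.005                      # IMU at 200 Hz
true_acc = 0.2                  # constant acceleration (hypothetical motion)

# State [position, velocity]; 1D constant-acceleration model.
x = np.zeros(2)
P = np.eye(2)
F = np.array([[1, dt], [0, 1]])
B = np.array([0.5 * dt**2, dt])
Q = 1e-4 * np.eye(2)            # process noise (tuning assumption)
H = np.array([[1.0, 0.0]])      # camera measures position only
R = np.array([[0.01]])          # camera noise (tuning assumption)

pos_true = vel_true = 0.0
for k in range(400):            # simulate 2 seconds of motion
    pos_true += vel_true * dt + 0.5 * true_acc * dt**2
    vel_true += true_acc * dt

    # Predict with a noisy IMU reading at every step (high rate).
    imu = true_acc + rng.normal(0, 0.05)
    x = F @ x + B * imu
    P = F @ P @ F.T + Q

    # Correct with a camera position fix every 10th step (low rate).
    if k % 10 == 0:
        z = pos_true + rng.normal(0, 0.1)
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

print("estimated [pos, vel]:", x, "true:", [pos_true, vel_true])
</syntaxhighlight>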