Spatial computing
The concept of blending computation with physical space has roots in multiple fields. While the term "spatial computing" appeared in academic literature on geographic information systems (GIS) in the mid-1980s<ref name="HandwikiHistory">HandWiki. "Engineering:Spatial computing - History" (Accessed Apr 2025). (Cites a 1985 paper on geography education)</ref>, its modern meaning, centered on human-scale interaction, emerged later. Influential precursors include [[Ivan Sutherland]]'s work on [[Sketchpad]] and the first head-mounted displays in the 1960s, and [[Mark Weiser]]'s vision of [[ubiquitous computing]] at Xerox PARC in 1991, which imagined computers woven into the fabric of everyday life.<ref name="HandwikiHistory"/>
In the early 1990s, researchers at the University of Washington’s Human Interface Technology Lab (HIT Lab), led by VR pioneer [[Thomas A. Furness III]], explored advanced 3D interfaces. A spin-off company, Worldesign Inc., founded by Dr. Robert Jacobson, used "Spatial Computing" to describe human interaction within immersive, room-scale 3D environments, demonstrating concepts such as a virtual reconstruction of the Giza Plateau in 1993.<ref name="HandwikiHistory"/><ref name="VentureBeatJacobson">Dean Takahashi (June 2023). "With Vision Pro launched, companies must talk about XR, nausea and gender." VentureBeat. (“…‘spatial computing’ - a term that was actually coined in the early 90s by Dr. Bob Jacobson, founder of Worldesign…”)</ref> The 1997 volume "Spatial Computing: Issues in Vision, Multimedia and Visualization Technologies" by T. Caelli and H. Bunke brought the term into wider academic use.<ref name="HandwikiHistory"/>
The term gained significant traction following [[Simon Greenwold]]'s 2003 Master's thesis at the [[MIT Media Lab]].<ref name="GreenwoldThesis"/> Greenwold defined it as "human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces," emphasizing machines becoming "fuller partners in our work and play."<ref name="TechTargetGreenwoldQuote"/> This coincided with growing research in [[context-aware computing]] and ambient interfaces.
Spatial computing typically involves several key components working together:
* '''Machine Perception of Space:''' Devices must understand the physical environment in 3D. This involves technologies like [[Simultaneous Localization and Mapping]] (SLAM) to track the device's position and orientation while building a map of the space (see the first sketch below).<ref name="DurrantWhyteSLAM"/> [[Depth sensor]]s (like [[LiDAR]] or Time-of-Flight cameras) and [[RGB camera]]s capture geometric and visual information. [[Computer vision]] algorithms, often powered by [[artificial intelligence]] (AI), interpret this data to recognize surfaces, objects (for example walls, tables, chairs), people, and potentially understand scene semantics.<ref name="CogentSLAM"/><ref name="TechTargetWhatIs"/>
* '''Persistence and Context:''' Digital objects or information placed within the spatial environment can maintain their position and state relative to the physical world, even when the user looks away or leaves and returns (spatial anchors; see the second sketch below). The system uses its understanding of spatial context to anchor digital elements appropriately and realistically, potentially enabling occlusion (virtual objects appearing behind real ones) and physics interactions.<ref name="HandwikiHistory"/>
* '''Natural User Interaction:''' Input moves beyond the [[keyboard]] and [[mouse]]. Common interaction methods include [[hand tracking]] (recognizing hand shapes and gestures), [[eye tracking]] (using gaze as a pointer or input trigger), [[voice command]]s, and sometimes specialized controllers (see the third sketch below). The goal is intuitive interaction that mimics how humans interact with the physical world, making the computer interface feel "invisible."<ref name="PCMagWhatIs"/><ref name="Microsoft HoloLens"/>
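To make the perception component concrete, the short Python sketch below shows the pose arithmetic at its core: SLAM estimates a six-degree-of-freedom device pose, and a depth-sensor observation is mapped through that pose into a shared world frame. The function names and bare NumPy representation are illustrative assumptions; real applications obtain poses from platform runtimes such as OpenXR, ARKit, or ARCore rather than computing them by hand.

<syntaxhighlight lang="python">
# Minimal sketch (not any SDK's real API) of how a spatial computing
# system can represent a tracked device pose and map a sensor
# observation into world coordinates.
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid-body transform (device -> world) from the 3x3
    rotation and 3-vector translation that SLAM estimates."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

def to_world(pose: np.ndarray, point_in_device: np.ndarray) -> np.ndarray:
    """Express a point observed in device coordinates (e.g. a depth
    sample) in the world frame maintained by the SLAM map."""
    p = np.append(point_in_device, 1.0)  # homogeneous coordinates
    return (pose @ p)[:3]

# A depth sample 2 m in front of the device, with the device sitting
# at (1, 0, 0) in the world and not rotated:
pose = make_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
print(to_world(pose, np.array([0.0, 0.0, -2.0])))  # -> [ 1.  0. -2.]
</syntaxhighlight>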
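Persistence can be sketched the same way. The toy code below saves an anchor's world-frame pose under a stable identifier and restores it in a later session, which is the essence of a spatial anchor: the virtual object reappears in the same physical spot. The file format and function names are invented for illustration; production systems (ARKit world maps, ARCore Cloud Anchors, and similar) serialize and relocalize against rich map data, not a bare pose.

<syntaxhighlight lang="python">
# Hypothetical spatial-anchor persistence: store a 4x4 world-frame pose
# under a stable id so a virtual object can be re-placed next session.
import json

def save_anchor(path: str, anchor_id: str, pose: list[list[float]]) -> None:
    """Persist an anchor's pose (row-major 4x4 matrix, world frame)."""
    with open(path, "w") as f:
        json.dump({"id": anchor_id, "pose": pose}, f)

def load_anchor(path: str) -> tuple[str, list[list[float]]]:
    """Restore the anchor so the renderer can re-place its object."""
    with open(path) as f:
        data = json.load(f)
    return data["id"], data["pose"]

# Anchor a virtual lamp 1.2 m up and 2 m in front of the map origin:
lamp_pose = [[1, 0, 0, 0.0], [0, 1, 0, 1.2], [0, 0, 1, -2.0], [0, 0, 0, 1]]
save_anchor("lamp_anchor.json", "lamp-01", lamp_pose)
anchor_id, restored = load_anchor("lamp_anchor.json")
assert restored == lamp_pose  # same physical placement next session
</syntaxhighlight>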
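Natural input, finally, can be modeled as a stream of events that carry the recognized gesture, gaze target, or voice command instead of key presses and mouse coordinates. The event kinds and fields below are assumptions made up for this sketch and do not correspond to any specific SDK.

<syntaxhighlight lang="python">
# Illustrative input loop: hand, gaze, and voice events act on the
# scene object they target, replacing mouse/keyboard input.
from dataclasses import dataclass

@dataclass
class InputEvent:
    kind: str          # "pinch", "gaze", or "voice" (invented taxonomy)
    target_id: str     # scene object under the user's gaze or hand ray
    payload: str = ""  # e.g. the recognized voice command

def handle_event(event: InputEvent, scene: dict) -> None:
    obj = scene.get(event.target_id)
    if obj is None:
        return
    if event.kind == "pinch":       # pinch gesture selects, like a click
        obj["selected"] = True
    elif event.kind == "gaze":      # gaze highlights, like mouse hover
        obj["highlighted"] = True
    elif event.kind == "voice" and event.payload == "delete":
        scene.pop(event.target_id)  # voice command removes the object

scene = {"lamp": {"selected": False, "highlighted": False}}
handle_event(InputEvent("pinch", "lamp"), scene)
print(scene["lamp"]["selected"])  # True
</syntaxhighlight>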