'''[[Spatial computing]]''' is a 3-D computing paradigm that merges digital and physical space, letting computers and humans interact in a natural, immersive manner. It extends technologies such as [[virtual reality]] (VR), [[augmented reality]] (AR) and [[mixed reality]] (MR) by anchoring digital content to real-world locations and objects so that virtual elements appear to share the user’s environment. Companies including [[Apple]], [[Microsoft]], [[Google]], [[Meta Platforms|Meta]] and [[Magic Leap]] market devices and platforms that they position as “spatial computers,” presenting the field as the next major era of computing after personal and mobile computing.<ref>Alexander Gillis & George Lawton (February 2024), “What is spatial computing?”, ''TechTarget''.</ref> Apple’s [[Apple Vision Pro]] headset (announced 2023) was introduced explicitly as “a spatial computer,” and CEO [[Tim Cook]] called its debut “an entirely new era of spatial computing.”<ref>Filipe Espósito, “Tim Cook compares Vision Pro launch to iPhone launch in memo to employees,” ''9to5Mac'', 2 February 2024.</ref>
== Definition ==
MIT researcher '''Simon Greenwold''' defined spatial computing in 2003 as “human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.”<ref>Simon Greenwold, ''Spatial Computing'' (MIT Media Lab master’s thesis), 2003.</ref> A spatial computing system must therefore
* sense the 3-D environment,
* place and maintain virtual content in that environment, and
* support natural user interaction (hands, voice, eye-gaze, etc.).


The term overlaps with [[extended reality]] (XR) but stresses the computer’s awareness of and participation in the surrounding physical space rather than merely displaying virtual images.
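
The sketch below is illustrative only (it uses hypothetical names and the Python [[NumPy]] library, not any real spatial-computing SDK). It shows the core idea behind Greenwold’s definition: virtual content is stored relative to a ''referent'' in the real world (an anchor pose), so it stays fixed in the room as the device moves and only the camera’s view of it changes.

<syntaxhighlight lang="python">
# Illustrative sketch: anchoring virtual content to a real-world pose.
# All names are hypothetical; real SDKs expose the same idea differently.
import numpy as np

def make_pose(rotation, translation):
    """Build a 4x4 rigid-body transform from a 3x3 rotation and a 3-vector."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# World pose of a real object (e.g. a tabletop) estimated by the tracking system.
table_anchor = make_pose(np.eye(3), np.array([0.0, 0.75, -1.2]))

# Virtual content is stored *relative to the anchor*, not to the screen.
virtual_offset = make_pose(np.eye(3), np.array([0.1, 0.02, 0.0]))

def world_to_camera(camera_pose, point_world):
    """Express a homogeneous world-space point in the camera's frame."""
    return np.linalg.inv(camera_pose) @ point_world

# As the headset moves, only camera_pose changes; the virtual object's
# world position (anchor * offset) stays glued to the real table.
camera_pose = make_pose(np.eye(3), np.array([0.0, 1.6, 0.0]))
virtual_world = table_anchor @ virtual_offset @ np.array([0.0, 0.0, 0.0, 1.0])
print(world_to_camera(camera_pose, virtual_world))
</syntaxhighlight>

Production platforms such as [[ARKit]], [[ARCore]] and [[OpenXR]] expose the same anchor-and-pose idea through their own APIs, with very different interfaces.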
 
== Core technologies ==
* '''Sensors & spatial mapping''' – RGB cameras, depth sensors (e.g., [[LiDAR]]), and [[inertial measurement unit|IMU]]s feed algorithms such as [[Simultaneous localization and mapping|SLAM]] that build a live 3-D map of the user’s surroundings.<ref>Cogent Infotech, “Spatial Computing: The Next Frontier in Digital Transformation,” 1 January 2025.</ref>
* '''[[Computer vision]] & object recognition''' – AI models detect surfaces, objects and people so that virtual elements occlude and collide correctly.
* '''Human–machine interaction''' – [[Hand tracking]], gesture recognition, eye-tracking and voice commands provide intuitive control; spatial audio delivers sound from the correct direction.
* '''[[Artificial intelligence|AI]]''' – Deep-learning models accelerate vision, speech and scene understanding in real time.
* '''Cloud & edge computing''' – Heavy rendering and multi-user sharing rely on remote servers and the emerging “AR cloud.”<ref>NVIDIA, “What Is Spatial Computing?” (Glossary).</ref>
 
== History ==
* '''1980s–1990s''' – Early use of “spatial computing” in GIS papers; [[Mark Weiser]]’s [[ubiquitous computing]] vision; Worldesign Inc. and the University of Washington HIT Lab popularize the term for immersive 3-D VR theatres.<ref>Dean Takahashi, “With Vision Pro launched, companies must talk about XR, nausea and gender,” ''VentureBeat'', June 2023.</ref>
* '''2003''' – Greenwold formalises the definition at MIT.
* '''2010s''' – Milestones include Microsoft [[Kinect]] (2010), [[Google Glass]] (2013), Project Tango (2014), [[Microsoft HoloLens]] (2015), [[HTC Vive]] & room-scale VR (2016), [[ARKit]]/[[ARCore]] (2017) and Magic Leap One (2018).
* '''2020s''' – Meta’s Quest line adds colour-passthrough MR; Apple unveils Vision Pro (2023), bringing the term “spatial computing” into mainstream use.
 
== Relationship to other paradigms ==
{| class="wikitable"
! Paradigm !! Main idea !! Key difference from spatial computing
|-
| [[Ubiquitous computing]] || Computers embedded everywhere || May lack 3-D awareness or graphics
|-
| [[Ambient computing]] || Calm, background assistance || Often screenless; spatial computing foregrounds 3-D visuals
|-
| [[Context-aware computing]] || Apps adapt to user context || Spatial computing focuses on geometric context & AR/VR
|}
 
== Applications ==
* '''Entertainment & gaming''' – Room-scale VR, AR games like ''Pokémon Go'', volumetric concerts.
* '''Productivity''' – Virtual multi-monitor workspaces; collaborative 3-D design reviews across HoloLens, Vision Pro or Quest devices.
* '''Healthcare''' – Surgical guidance overlays, VR rehearsal, physical-therapy gamification.
* '''Education & training''' – AR field trips, VR science labs, industrial step-by-step instructions.
* '''Manufacturing & maintenance''' – Hands-free work instructions and IoT dashboards pinned to equipment.
* '''Retail & marketing''' – “Try-before-you-buy” AR furniture, virtual fashion mirrors, location-based promotions.
 
== Critiques and terminology ==
Analysts note that “spatial computing” is sometimes used loosely or as marketing jargon.<ref>Shira Ovide, “Apple’s Vision Pro is ‘spatial computing.’ Nobody knows what it means,” ''The Washington Post'', 2 February 2024.</ref> Because it overlaps with XR, metaverse and ambient-computing concepts, even experts disagree on precise boundaries.<ref>Cathy Hackl, “What leaders need to know about spatial computing,” ''Harvard Business Review'', 10 November 2023.</ref> Skeptics also point to bulky hardware, high prices and social-privacy concerns as barriers to widespread adoption.
 
== References ==
<references />