Spatial computing
{{see also|Terms|Technical Terms}}
[[File:spatial computing1.jpg|350px|thumb|[[Apple Vision Pro]]]]
'''[[Spatial computing]]''' is a [[term]] describing the paradigm where [[human-computer interaction]] (HCI) moves beyond traditional desktop or mobile screens, allowing digital information and processes to be perceived and manipulated as if they exist within [[3D space|three-dimensional physical space]].<ref name="GreenwoldThesis">Greenwold, Simon A. "Spatial Computing". MIT Master's Thesis, June 2003. [https://dspace.mit.edu/handle/1721.1/87460 Link]</ref> It involves machines understanding and interacting with the geometry and semantics of the surrounding environment, enabling users to interact with digital content using natural modalities like gestures, gaze, and voice, often overlaying this content onto their view of the real world. In essence, spatial computing extends technologies like [[virtual reality]] (VR), [[augmented reality]] (AR), and [[mixed reality]] (MR) by anchoring digital content to real-world locations and objects, so that virtual elements can be perceived as part of the surrounding environment<ref name="TechTargetWhatIs">Alexander Gillis & George Lawton (Feb 2024). "What is spatial computing?" ''TechTarget''. ("Companies including Apple, Google, Magic Leap, Meta and Microsoft offer spatial computing devices for consumer audiences.")</ref>, collectively sometimes referred to as [[Extended Reality]] (XR). This approach allows users to intuitively manipulate virtual objects as if they were real, and lets computers understand and respond to the user’s physical context, aiming to make the computer interface invisible by leveraging innate 3D human senses and movements.<ref name="PCMagWhatIs">Jessie Will (June 6, 2023). "What Is Spatial Computing? Apple Vision Pro and the Next Wave of Tech." ''PCMag''. (Describes spatial computing as blending digital/physical, using natural inputs)</ref>
The concept of blending computation with physical space has roots in multiple fields. While the term "spatial computing" appeared in academic literature related to geographic information systems (GIS) in the mid-1980s<ref name="HandwikiHistory">HandWiki. "Engineering:Spatial computing - History" (Accessed Apr 2025). (Cites a 1985 paper on geography education)</ref>, its modern meaning related to human-scale interaction emerged later. Influential precursors include [[Ivan Sutherland]]'s work on [[Sketchpad]] and the first head-mounted displays in the 1960s, and [[Mark Weiser]]'s vision of [[ubiquitous computing]] at Xerox PARC in 1991, which imagined computers woven into the fabric of everyday life.<ref name="HandwikiHistory"/>
In the early 1990s, researchers at the University of Washington’s Human Interface Technology Lab (HIT Lab), led by VR pioneer [[Thomas A. Furness III]], explored advanced 3D interfaces. A spin-off company, Worldesign Inc., founded by Dr. Robert Jacobson, used "Spatial Computing" to describe human interaction within immersive 3D environments at room scale, demonstrating concepts like a virtual Giza Plateau reconstruction in 1993.<ref name="HandwikiHistory"/><ref name="VentureBeatJacobson">Dean Takahashi (June 2023). "With Vision Pro launched, companies must talk about XR, nausea and gender." ''VentureBeat''. (“…‘spatial computing’ - a term that was actually coined in the early 90s by Dr. Bob Jacobson, founder of Worldesign…”)</ref> The 1997 publication "Spatial Computing: Issues in Vision, Multimedia and Visualization Technologies" by T. Caelli and H. Bunke gave the term further academic currency.<ref name="HandwikiHistory"/>
The term gained significant traction following [[Simon Greenwold]]'s 2003 Master's thesis at the [[MIT Media Lab]].<ref name="GreenwoldThesis"/> Greenwold defined it as "human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces," emphasizing machines becoming "fuller partners in our work and play."<ref name="TechTargetGreenwoldQuote"/> This coincided with growing research in [[context-aware computing]] and ambient interfaces.
Commercial developments accelerated in the 2010s:
Spatial computing typically involves several key components working together:
* '''Machine Perception of Space:''' Devices must understand the physical environment in 3D. This involves technologies like [[Simultaneous Localization and Mapping]] (SLAM) to track the device's position and orientation while building a map of the space.<ref name="DurrantWhyteSLAM"/> [[Depth sensor]]s (like [[LiDAR]] or Time-of-Flight cameras) and [[RGB camera]]s capture geometric and visual information. [[Computer vision]] algorithms, often powered by [[artificial intelligence]] (AI), interpret this data to recognize surfaces, objects (for example walls, tables, chairs), people, and potentially understand scene semantics.<ref name="CogentSLAM"/><ref name="TechTargetWhatIs"/>
* '''Persistence and Context:''' Digital objects or information placed within the spatial environment can maintain their position and state relative to the physical world, even when the user looks away or leaves and returns (spatial anchors). The system uses its understanding of spatial context to anchor digital elements appropriately and realistically, potentially enabling occlusion (virtual objects appearing behind real ones) and physics interactions.<ref name="HandwikiHistory"/>
* '''Natural User Interaction:''' Input moves beyond the [[keyboard]] and [[mouse]]. Common interaction methods include [[Hand tracking]] (recognizing hand shapes and gestures), [[Eye tracking]] (using gaze as a pointer or input trigger), [[Voice command]]s, and sometimes specialized controllers. The goal is intuitive interaction that mimics how humans interact with the physical world, making the computer interface feel "invisible."<ref name="PCMagWhatIs"/><ref name="Microsoft HoloLens"/>
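The tracking and anchoring described above can be sketched with a toy example: the device pose is a rigid transform (rotation plus translation) estimated by SLAM, and a spatial anchor is simply a position stored in world coordinates that is re-expressed in the device's frame on every update. A minimal illustration in Python (all names and frame conventions here are illustrative, not any particular SDK):

```python
import math

def mat_vec(m, v):
    # Apply a 3x3 matrix to a 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def rot_y(theta):
    # Rotation about the vertical (y) axis, e.g. the user turning in place.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

class SpatialAnchor:
    """A fixed world-space position for a piece of virtual content."""
    def __init__(self, world_pos):
        self.world_pos = world_pos

    def in_device_frame(self, device_rot, device_pos):
        # world -> device: subtract the device position, then rotate by the
        # inverse of the device orientation (transpose, for a rotation matrix).
        rel = [self.world_pos[i] - device_pos[i] for i in range(3)]
        return mat_vec(transpose(device_rot), rel)

# Anchor virtual content 2 m in front of the world origin.
anchor = SpatialAnchor([0.0, 0.0, -2.0])

# Frame 1: device at the origin, facing down -z: content is dead ahead.
p1 = anchor.in_device_frame(rot_y(0.0), [0.0, 0.0, 0.0])

# Frame 2: the user steps 1 m to the right; the anchor stays put in the
# world, so it now appears 1 m to the left in device coordinates.
p2 = anchor.in_device_frame(rot_y(0.0), [1.0, 0.0, 0.0])
print(p1, p2)
```

Real systems estimate the pose from fused camera and IMU data at high frequency; the coordinate bookkeeping, however, is essentially this world-to-device transform applied per frame.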
* '''Displays:''' High-resolution, high-refresh-rate micro-displays ([[Micro-OLED]], [[MicroLED]]) for rendering sharp images. [[Waveguide (optics)|Waveguides]] or other novel optics are used in optical see-through AR glasses. Wide [[Field of view (computer vision)|field-of-view]] (FOV) lenses are common in VR/MR headsets.
* '''Processing Units:''' Powerful, energy-efficient [[System on a chip|Systems-on-Chip]] (SoCs) with strong CPUs, GPUs, and often dedicated AI/[[Neural processing unit|NPU]]s or co-processors (like Apple's R1 chip<ref name="VisionProAnnounce"/>) handle complex sensor fusion, computer vision tasks, and real-time [[3D rendering]] on-device.
* '''Input Devices:''' Beyond integrated tracking (hand, eye, voice), some systems use handheld [[Controller (computing)|controllers]] (for example Meta Quest controllers) providing buttons, joysticks, and [[haptic feedback]].
=== Software ===
* '''[[Spatial mapping]] Algorithms:''' Primarily SLAM and related techniques (for example visual-inertial odometry) to create real-time 3D environmental maps and track device pose.<ref name="DurrantWhyteSLAM"/>
* '''[[Computer vision]] & [[Artificial intelligence|AI]]/[[Machine learning|ML]]:''' Algorithms for object recognition, [[Gesture recognition|gesture detection]], scene understanding, [[semantic segmentation]], user intent prediction, and optimizing rendering.<ref name="TechTargetWhatIs"/>
* '''[[Rendering engine|Rendering Engines]]:''' Tools like [[Unity (game engine)|Unity]] and [[Unreal Engine]] provide frameworks for developing 3D environments, handling physics, and supporting AR/VR application development.<ref name="UnityRef"/>
* '''[[Operating system|Operating Systems]] & [[Software development kit|SDKs]]:''' Specialized OSs (for example Apple [[visionOS]], [[Windows Holographic]], [[Android]] variants) manage spatial tasks. SDKs (for example [[ARKit]], [[ARCore]], [[OpenXR]], MRTK) provide APIs for developers to build spatial applications.
* '''[[Cloud computing|Cloud]] and [[Edge computing]]:''' Used to offload heavy computation (rendering, AI processing, large-scale mapping), enable collaborative multi-user experiences (for example shared spatial anchors, "AR Cloud" concepts), and stream content.<ref name="NvidiaSpatialCloud"/>
* '''Connectivity:''' High-bandwidth, low-latency wireless like [[Wi-Fi 6E]] and [[5G]] are crucial for tetherless experiences and cloud/edge reliance.
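One concrete operation such SDKs expose is the "hit test": placing a virtual object on a detected surface reduces to intersecting a ray (from the camera through the tapped pixel) with a detected plane. A simplified, self-contained sketch of the underlying math (not the actual ARKit/ARCore APIs, which return richer results such as anchors and tracking state):

```python
def hit_test(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect a ray with an (infinite) detected plane.
    Returns the world-space hit point, or None if the ray is parallel
    to the plane or the plane lies behind the ray origin."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane
    t = dot([p - o for p, o in zip(plane_point, ray_origin)], plane_normal) / denom
    if t < 0:
        return None  # intersection is behind the viewer
    return [o + t * d for o, d in zip(ray_origin, ray_dir)]

# Detected floor plane at y = 0; camera 1.6 m up, ray pointing down-forward.
hit = hit_test(
    ray_origin=[0.0, 1.6, 0.0],
    ray_dir=[0.0, -1.0, -1.0],   # unnormalized is fine for this sketch
    plane_point=[0.0, 0.0, 0.0],
    plane_normal=[0.0, 1.0, 0.0],
)
print(hit)  # a point on the floor in front of the camera
```

The returned point can then be stored as a spatial anchor so the placed object stays fixed to the floor as the device moves.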
Spatial computing is a foundational concept enabling advanced forms of VR, AR, and MR (often grouped under the umbrella term [[Extended Reality|XR]]). While closely related and sometimes used interchangeably in marketing, there are nuances:
* '''[[Virtual Reality]] (VR):''' Creates a fully immersive digital environment replacing the user's real-world view. Spatial computing principles apply ''within'' this virtual space for tracking user movement (room-scale VR), environmental awareness (for example safety boundaries based on real walls), and interacting with virtual objects using tracked hands or controllers.
* '''[[Augmented Reality]] (AR):''' Overlays digital information onto the real world, typically via smartphones, tablets, or simpler smart glasses. Interaction is often limited to screen taps or simple gestures. Mobile AR uses spatial computing for plane detection and tracking but often lacks deep environmental understanding.
* '''[[Mixed Reality]] (MR):''' A more advanced form of AR where digital objects are integrated more realistically into the physical environment, appearing anchored to and potentially interacting with real surfaces and objects. Users can interact with both physical and virtual elements simultaneously. MR heavily relies on sophisticated spatial computing for real-time mapping, understanding, occlusion, and interaction. Headsets like HoloLens, Magic Leap, and passthrough devices like Vision Pro and Quest 3 are often categorized as MR.
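The occlusion behaviour that distinguishes MR can be illustrated as a per-pixel depth comparison: a virtual pixel is drawn only where it is closer to the camera than the real surface reported by the depth sensor. A deliberately simplified sketch (production systems do this on the GPU against a depth buffer, with filtering of noisy sensor depth):

```python
def composite(virtual_depth, real_depth, virtual_color, camera_feed):
    """Per-pixel occlusion test for video-passthrough MR.
    A virtual pixel is shown only where it is closer to the camera than
    the real surface measured by the depth sensor (None = no virtual
    content at that pixel)."""
    out = []
    for vd, rd, vc, cc in zip(virtual_depth, real_depth, virtual_color, camera_feed):
        if vd is not None and vd < rd:
            out.append(vc)   # virtual object in front of the real surface
        else:
            out.append(cc)   # real world occludes, or no virtual content
    return out

# One scanline: a virtual cube at 1.0 m, a real wall at 2.0 m, and a real
# pillar at 0.5 m in the middle, which should occlude the cube.
virtual_depth = [None, 1.0, 1.0, 1.0, None]
real_depth    = [2.0,  2.0, 0.5, 2.0, 2.0]
frame = composite(virtual_depth, real_depth, ["V"] * 5, ["R"] * 5)
print(frame)  # ['R', 'V', 'R', 'V', 'R']
```

Without this depth test, virtual content would always render on top of the camera feed, breaking the illusion that it occupies the room.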
Spatial computing builds upon and overlaps with several earlier computing paradigms:
* '''[[Ubiquitous computing]] (Pervasive Computing):''' Envisions computers embedded everywhere, becoming invisible parts of daily life (Mark Weiser's vision). Spatial computing shares the goal of moving computation beyond the desktop, but specifically focuses on 3D spatial awareness and interaction, whereas ubiquitous computing is broader (for example smart home devices). Wearable spatial devices like AR glasses align with the ubiquitous vision.<ref name="HandwikiHistory"/>
* '''[[Ambient computing]]:''' Often used interchangeably with ubiquitous computing, emphasizing calm, background operation responsive to user presence, often without traditional screens (for example smart speakers, automated lighting). Spatial computing can be ambient (for example AR glasses providing subtle cues), but often involves explicit visual overlays, contrasting with ambient computing's typical emphasis on screenlessness.<ref name="ArgoDesign Medium"/>
* '''[[Context-aware computing]]:''' Systems that adapt based on current context (location, time, user activity). Spatial computing is inherently context-aware, focusing specifically on real-time ''spatial'' context (geometry, pose, environment). While any context-aware app uses context (for example GPS location), spatial computing requires understanding and interaction within the 3D physical environment.<ref name="HandwikiHistory"/>
In summary, spatial computing systems are typically context-aware and can be part of ubiquitous/ambient computing scenarios. Their differentiator is the requirement for real-time 3D spatial understanding and interaction, blending digital content directly into the user's perceived physical space.
* '''Healthcare:''' [[Surgical planning]] using 3D patient models, AR overlays during surgery for navigation<ref name="ChenAR Surgery"/>, immersive medical training simulations, [[Physical therapy|rehabilitation]] exercises using AR/VR, visualizing complex medical data (MRI/CT scans) in 3D.<ref name="SpatialHealthcare"/>
* '''Education and Training:''' Immersive learning experiences (virtual field trips, science labs), visualizing complex concepts (molecules, historical events) in 3D, complex task training (aircraft maintenance, emergency response) with AR guidance.<ref name="BaccaAR Education"/>
* '''Collaboration and Communication:''' Virtual meetings with spatial presence ([[avatar]]s in shared spaces), remote collaboration on 3D projects, shared digital workspaces (for example virtual whiteboards, multiple virtual monitors).<ref name="Spatial Collaboration"/>
* '''Retail and E-commerce:''' Virtually trying on clothes or accessories (AR mirrors), placing virtual furniture or appliances in a room using mobile AR apps before purchase.<ref name="IKEA"/>
* '''Entertainment and Gaming:''' Highly immersive VR games with room-scale tracking, location-based AR games blending virtual elements with the real world, interactive spatial storytelling, spatial viewing of 360°/[[Volumetric video|volumetric]] content.<ref name="PokemonGoRef"/>
* '''Navigation and Information Access:''' Contextual information overlaid on the real world (for example AR directions in streets or airports, information about landmarks), indoor navigation aids.
* '''Architecture and Construction:''' Visualizing architectural designs on-site using AR, virtual walkthroughs of buildings in VR before construction.<ref name="WangAR Construction"/>
* '''Convergence:''' Further blending with [[Internet of Things|IoT]], [[Cloud computing]], [[Edge computing]], and potentially forming key infrastructure for concepts like the [[Metaverse]].
* '''Accessibility:''' Lower price points over time driving wider consumer and enterprise adoption.
* '''Enhanced Interaction:''' Advances in [[Brain-computer interface|brain-computer interfaces]] or sophisticated sensor-based inputs (for example EMG wristbands<ref name="MetaEMG"/>) could offer new ways to interact spatially.
Technology leaders like Tim Cook see it as profoundly changing human-computer interaction.<ref name="9to5MacCookMemo"/> Futurists like Cathy Hackl frame it as the next computing wave enabling new forms of communication and machine intelligence.<ref name="HacklIndependent"/> Microsoft emphasizes productivity gains,<ref name="KipmanMR"/> while Meta focuses on social connection in the metaverse. The long-term vision often involves seamlessly blending digital information and interaction into our everyday perception of the physical world.
<references>
<ref name="GreenwoldThesis">Greenwold, Simon A. "Spatial Computing". MIT Master's Thesis, June 2003. [https://dspace.mit.edu/handle/1721.1/87460 Link]</ref>
<ref name="TechTargetGreenwoldQuote">Alexander Gillis & George Lawton (Feb 7, 2024). "What is spatial computing?" ''TechTarget''. https://www.techtarget.com/searchcio/definition/spatial-computing</ref>
<ref name="Microsoft HoloLens">Microsoft. (2016). "HoloLens: Mixed Reality." Retrieved from https://www.microsoft.com/en-us/hololens</ref>
<ref name="Magic Leap One">Magic Leap (Jan 23, 2025). "Magic Leap 1 End-of-Life notice." Magic Leap Care Knowledge Base. https://care.magicleap.com/hc/en-us/articles/360047675792</ref>
<ref name="ARKit">Apple Developer. "ARKit." Retrieved from https://developer.apple.com/augmented-reality/arkit/</ref>
<ref name="ARCore">Google Developers. "ARCore." Retrieved from https://developers.google.com/ar</ref>
<ref name="OculusQuest">Meta Quest. Retrieved from https://www.meta.com/quest/</ref>
<ref name="VisionProAnnounce">Apple Newsroom. "Introducing Apple Vision Pro: Apple’s first spatial computer." June 5, 2023. [https://www.apple.com/newsroom/2023/06/introducing-apple-vision-pro/ Link]</ref>
<ref name="9to5MacCookMemo">Filipe Espósito (Feb 2, 2024). "Tim Cook compares Vision Pro launch to iPhone launch in memo to employees." ''9to5Mac''. https://9to5mac.com/2024/02/02/tim-cook-vision-pro-launch-memo/</ref>
<ref name="CookSpatialWWDC">Apple Newsroom (June 5, 2023). "Introducing Apple Vision Pro: Apple’s first spatial computer." https://www.apple.com/newsroom/2023/06/introducing-apple-vision-pro/</ref>
<ref name="DurrantWhyteSLAM">Durrant-Whyte, H. & Bailey, T. (2006). "Simultaneous localization and mapping: Part I." ''IEEE Robotics & Automation Magazine'', 13(2), 99-110. https://doi.org/10.1109/MRA.2006.1638022</ref>
<ref name="CogentSLAM">Cogent Infotech (Jan 1, 2025). "Spatial Computing: The Next Frontier in Digital Transformation." https://www.cogentinfo.com/resources/spatial-computing-driving-the-next-wave-of-digital-transformation</ref>
<ref name="UnityRef">Unity Technologies. (2023). "Unity Engine." Retrieved from https://unity.com/</ref>
<ref name="NvidiaSpatialCloud">NVIDIA Glossary (2025). "What Is Spatial Computing?" https://www.nvidia.com/en-us/glossary/spatial-computing/</ref>
<ref name="ArgoDesign Medium">Jon Kolko (Sept 1, 2022). "Spatial Computing: A Primer." ''Medium'' (argodesign). https://argodesign.medium.com/spatial-computing-a-primer-b7df9da6f663</ref>
<ref name="SpatialDesign">Forbes Technology Council (June 8, 2021). "How Spatial Computing Can Change Life and Work." ''Forbes''. https://www.forbes.com/sites/forbestechcouncil/2021/06/08/how-spatial-computing-can-change-life-and-work/</ref>
<ref name="SpatialHealthcare">Adam Stone (Apr 12, 2024). "Spatial Computing: What Healthcare IT Leaders Need to Know." ''HealthTech Magazine''. https://healthtechmagazine.net/article/2024/04/spatial-computing-healthcare-perfcon</ref>
<ref name="ChenAR Surgery">Chen, L., et al. (2020). "Augmented reality in surgical navigation: A review." ''International Journal of Computer Assisted Radiology and Surgery'', 15(8), 1357-1367. doi:10.1007/s11548-020-02192-5</ref>
<ref name="BaccaAR Education">Bacca, J., et al. (2014). "Augmented reality trends in education: A systematic review." ''Educational Technology & Society'', 17(4), 133-149.</ref>
<ref name="LaValleVRBook">LaValle, S. M. (2020). ''Virtual Reality''. Cambridge University Press.</ref>
<ref name="VisionProPrice">Apple. (2023). "Apple Vision Pro Pricing." Retrieved from https://www.apple.com/shop/buy-vision/apple-vision-pro</ref>
<ref name="SpatialPrivacy">Electronic Frontier Foundation (Dec 29, 2022). "A Pivotal Year for the Metaverse and Extended Reality: 2022 in Review." https://www.eff.org/deeplinks/2022/12/pivotal-year-metaverse-and-extended-reality</ref>
<ref name="WaPoAmbiguity">Shira Ovide (Feb 2, 2024). "Apple’s Vision Pro is ‘spatial computing.’ Nobody knows what it means." ''The Washington Post''. https://www.washingtonpost.com/technology/2024/02/02/apple-vision-pro-spatial-computing/</ref>
<ref name="BuzzwordCritique">Ben Thompson (June 6, 2023). "Apple Vision." ''Stratechery''. https://stratechery.com/2023/apple-vision/</ref>
<ref name="HacklNotSynonym">Cathy Hackl (Apr 15, 2024). "What Is Spatial Computing and What Is the Role of AI in this New Computing Paradigm." ''ShortTake Blog'', Shorty Awards. https://shortyawards.com/shorttake/what-is-spatial-computing-and-what-is-the-role-of-ai-in-this-new-computing-paradigm/</ref>
<ref name="Qualcomm5GXR">Qualcomm (2025). "Extended Reality (XR)." https://www.qualcomm.com/research/extended-reality</ref>
<ref name="AbovitzTrueAR">Rony Abovitz (Oct 16, 2023). "The State Of Play In Spatial Computing/XR In 2024." ''Medium''. https://medium.com/@rabovitz/the-state-of-play-in-spatial-computing-xr-in-2024-00f0622400da</ref>
<ref name="HacklIndependent">Associated Press (June 5, 2023). "Why Apple is pushing the term ‘spatial computing’ along with its new Vision Pro headset." ''The Independent''. https://www.independent.co.uk/news/world/americas/apple-ap-san-francisco-tim-cook-zoom-b2489758.html</ref>
<ref name="KipmanMR">Lauren Goode (Mar 2, 2021). "Strap on a HoloLens and Step Into the AR Conference Room." ''WIRED''. https://www.wired.com/story/hololens-mesh-microsoft-ignite-2021/</ref>
<ref name="MetaEMG">Adario Strange (Mar 9, 2021). "Facebook Finally Explains Its Mysterious Wrist Wearable." ''WIRED''. https://www.wired.com/story/facebook-wrist-wearable-human-computer-interactions</ref>
</references>