<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://vrarwiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ruthalas</id>
	<title>VR &amp; AR Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://vrarwiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ruthalas"/>
	<link rel="alternate" type="text/html" href="https://vrarwiki.com/wiki/Special:Contributions/Ruthalas"/>
	<updated>2026-04-19T00:34:14Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5164</id>
		<title>Markerless, inside-out tracking</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5164"/>
		<updated>2015-06-04T15:23:45Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: Spelling&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Marker-less, Inside-Out Tracking is a composite term derived from two separate concepts and refers to a method of tracking objects in three-dimensional space.&lt;br /&gt;
&lt;br /&gt;
[[markerless tracking|&#039;Marker-less&#039;]] refers to the lack of [[fiducial markers]] used in this type of tracking, while [[inside-out tracking|&#039;Inside-Out&#039;]] refers to the method by which data is gathered for the tracking.&lt;br /&gt;
&lt;br /&gt;
[[File:F2Ak4iE.jpg|thumbnail|A room covered in fiducial markers for inside-out tracking, at [[Valve Corporation]]&amp;lt;ref&amp;gt;YouTube, Video: &#039;Steam Dev Days&#039;, on Channel: Valve, Published on Feb 11, 2014&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
==Marker-less Tracking== &lt;br /&gt;
Because of the difficult computations required for software to interpret live camera input, tracking solutions that rely on cameras sometimes place [[fiducial markers]] (which may look like QR codes) in view of the tracking camera. This gives the software a known pattern to look for, which makes computation simpler and faster.&lt;br /&gt;
&lt;br /&gt;
Thus the term &#039;marker-less&#039; refers to a system robust enough that it does not need printed markers to aid in its interpretation of the three-dimensional space.&lt;br /&gt;
&lt;br /&gt;
==Inside-Out Tracking==&lt;br /&gt;
Tracking systems that make use of a camera may be organized into two main branches, &#039;inside-out&#039; and &#039;outside-in&#039; tracking. Both terms refer to the placement of the tracking camera itself, with reference to what it is tracking.&lt;br /&gt;
&lt;br /&gt;
In an [[inside-out tracking|&#039;inside-out&#039; system]], the tracking camera is placed within the item being tracked (for our purposes, like a [[Virtual_Reality#Devices|head mounted display]]), from which vantage point it looks &#039;&#039;out&#039;&#039; at the world around it. It uses its changing perspective on the outside world to note changes in position.&lt;br /&gt;
&lt;br /&gt;
In an [[outside-in tracking|&#039;outside-in&#039; system]], one or more tracking cameras are placed around the space within which the tracked object will move, and they use their changing view of the object itself to measure its changes in position.&lt;br /&gt;
&lt;br /&gt;
==Relative Merits==&lt;br /&gt;
&lt;br /&gt;
Both systems have merits; &#039;inside-out&#039; tracking is notable because it can require no outside equipment, which is ideal for a portable device. Unfortunately, in the case of a headset, it places the burden of computation on the headset itself, a burden which is exacerbated when no fiducial markers are used.&lt;br /&gt;
&lt;br /&gt;
&#039;Outside-in&#039; tracking can be less computationally demanding, and can make use of multiple cameras to make results more stable and consistent. This comes in part from the reduced chance of [[occlusion]]. Unfortunately it requires a controlled environment and more extensive equipment.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5133</id>
		<title>Markerless, inside-out tracking</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5133"/>
		<updated>2015-06-02T20:45:57Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: References and Category&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Marker-less, Inside-Out Tracking is a composite term derived from two separate concepts, and refers to a method of tracking objects in three dimensional space. &lt;br /&gt;
&lt;br /&gt;
&#039;Marker-less&#039; refers to the lack of [[fiducial markers]] used in this type of tracking, while &#039;Inside-Out&#039; refers to the method by which data is gathered for the tracking.&lt;br /&gt;
&lt;br /&gt;
[[File:F2Ak4iE.jpg|thumbnail|A room covered in fiducial markers for inside-out tracking, at [[Valve Corporation]]&amp;lt;ref&amp;gt;YouTube, Video: &#039;Steam Dev Days&#039;, on Channel: Valve, Published on Feb 11, 2014&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
==Marker-less Tracking== &lt;br /&gt;
Because of the difficult computations required for software to interpret live camera input, tracking solutions that rely on cameras sometimes place [[fiducial markers]] (which may look like QR codes) in view of the tracking camera. This gives the software a known pattern to look for, which makes computation simpler and faster.&lt;br /&gt;
&lt;br /&gt;
Thus the term, &#039;marker-less&#039; refers to a system that is robust enough that it does not need the aid of printed markers to aid in its interpretation of the three dimensional space.&lt;br /&gt;
&lt;br /&gt;
==Inside-Out Tracking==&lt;br /&gt;
Tracking systems that make use of a camera may be organized into two main branches, &#039;inside-out&#039; and &#039;outside-in&#039; tracking. Both terms refer to the placement of the tracking camera itself, with reference to what it is tracking.&lt;br /&gt;
&lt;br /&gt;
In an &#039;inside-out&#039; system, the tracking camera is placed within the item being tracked (for our purposes, like a [[Virtual_Reality#Devices|head mounted display]]), from which vantage point it looks &#039;&#039;out&#039;&#039; at the world around it. It uses its changing perspective on the outside world to note changes in position.&lt;br /&gt;
&lt;br /&gt;
In an &#039;outside-in&#039; system, one or more tracking cameras are placed around the space within which the tracked object will move, and they use their changing view of the object itself to measure its changes in position.&lt;br /&gt;
&lt;br /&gt;
==Relative Merits==&lt;br /&gt;
&lt;br /&gt;
Both systems have merits; &#039;inside-out&#039; tracking is notable because it can require no outside equipment, which is ideal for a portable device. Unfortunately, in the case of a headset, it places the burden of computation on the headset itself. (A burden which is exacerbated when no fiducial markers are used.)&lt;br /&gt;
&lt;br /&gt;
&#039;Outside-in&#039; tracking can be less computationally demanding, and can make use of multiple cameras to make results more stable and consistent. This comes in part from the reduced chance of [[occlusion]]. Unfortunately it requires a controlled environment and more extensive equipment.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5132</id>
		<title>Markerless, inside-out tracking</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5132"/>
		<updated>2015-06-02T20:45:28Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: Spelling and heading&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Marker-less, Inside-Out Tracking is a composite term derived from two separate concepts, and refers to a method of tracking objects in three dimensional space. &lt;br /&gt;
&lt;br /&gt;
&#039;Marker-less&#039; refers to the lack of [[fiducial markers]] used in this type of tracking, while &#039;Inside-Out&#039; refers to the method by which data is gathered for the tracking.&lt;br /&gt;
&lt;br /&gt;
[[File:F2Ak4iE.jpg|thumbnail|A room covered in fiducial markers for inside-out tracking, at [[Valve Corporation]]&amp;lt;ref&amp;gt;YouTube, Video: &#039;Steam Dev Days&#039;, on Channel: Valve, Published on Feb 11, 2014&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
==Marker-less Tracking== &lt;br /&gt;
Because of the difficult computations required for software to interpret live camera input, tracking solutions that rely on cameras sometimes place [[fiducial markers]] (which may look like QR codes) in view of the tracking camera. This gives the software a known pattern to look for, which makes computation simpler and faster.&lt;br /&gt;
&lt;br /&gt;
Thus the term, &#039;marker-less&#039; refers to a system that is robust enough that it does not need the aid of printed markers to aid in its interpretation of the three dimensional space.&lt;br /&gt;
&lt;br /&gt;
==Inside-Out Tracking==&lt;br /&gt;
Tracking systems that make use of a camera may be organized into two main branches, &#039;inside-out&#039; and &#039;outside-in&#039; tracking. Both terms refer to the placement of the tracking camera itself, with reference to what it is tracking.&lt;br /&gt;
&lt;br /&gt;
In an &#039;inside-out&#039; system, the tracking camera is placed within the item being tracked (for our purposes, like a [[Virtual_Reality#Devices|head mounted display]]), from which vantage point it looks &#039;&#039;out&#039;&#039; at the world around it. It uses its changing perspective on the outside world to note changes in position.&lt;br /&gt;
&lt;br /&gt;
In an &#039;outside-in&#039; system, one or more tracking cameras are placed around the space within which the tracked object will move, and they use their changing view of the object itself to measure its changes in position.&lt;br /&gt;
&lt;br /&gt;
==Relative Merits==&lt;br /&gt;
&lt;br /&gt;
Both systems have merits; &#039;inside-out&#039; tracking is notable because it can require no outside equipment, which is ideal for a portable device. Unfortunately, in the case of a headset, it places the burden of computation on the headset itself. (A burden which is exacerbated when no fiducial markers are used.)&lt;br /&gt;
&lt;br /&gt;
&#039;Outside-in&#039; tracking can be less computationally demanding, and can make use of multiple cameras to make results more stable and consistent. This comes in part from the reduced chance of [[occlusion]]. Unfortunately it requires a controlled environment and more extensive equipment.&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5131</id>
		<title>Markerless, inside-out tracking</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5131"/>
		<updated>2015-06-02T20:44:29Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Marker-less, Inside-Out Tracking is a composite term derived from two separate concepts, and refers to a method of tracking objects in three dimensional space. &lt;br /&gt;
&lt;br /&gt;
&#039;Marker-less&#039; refers to the lack of [[fiducial markers]] used in this type of tracking, while &#039;Inside-Out&#039; refers to the method by which data is gathered for the tracking.&lt;br /&gt;
&lt;br /&gt;
[[File:F2Ak4iE.jpg|thumbnail|A room covered in fiducial markers for inside-out tracking, at [[Valve Corporation]]&amp;lt;ref&amp;gt;YouTube, Video: &#039;Steam Dev Days&#039;, on Channel: Valve, Published on Feb 11, 2014&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
==Marker-less Tracking== &lt;br /&gt;
Because of the difficult computations required for software to interpret live camera input, tracking solutions that rely on cameras sometimes place [[fiducial markers]] (which may look like QR codes) in view of the tracking camera. This gives the software a known pattern to look for, which makes computation simpler and faster.&lt;br /&gt;
&lt;br /&gt;
Thus the term, &#039;marker-less&#039; refers to a system that is robust enough that it does not need the aid of printed markers to aid in its interpretation of the three dimensional space.&lt;br /&gt;
&lt;br /&gt;
==Inside-Out Tracking==&lt;br /&gt;
Tracking systems that make use of a camera may be organized into two main branches, &#039;inside-out&#039; and &#039;outside-in&#039; tracking. Both terms refer to the placement of the tracking camera itself, with reference to what it is tracking.&lt;br /&gt;
&lt;br /&gt;
In an &#039;inside-out&#039; system, the tracking camera is placed within the item being tracked (for our purposes, like a [[Virtual_Reality#Devices|head mounted display]]), from which vantage point it looks &#039;&#039;out&#039;&#039; at the world around it. It uses its changing perspective on the outside world to note changes in position.&lt;br /&gt;
&lt;br /&gt;
In an &#039;outside-in&#039; system, one or more tracking cameras are placed around the space within which the tracked object will move, and they use their changing view of the object itself to measure its changes in position.&lt;br /&gt;
&lt;br /&gt;
Both systems have merits; &#039;inside-out&#039; tracking is notable because it can require no outside equipment, which is ideal for a portable device. Unfortunately, in the case of a headset, it places the burden of computation on the headset itself. (A burden which is exacerbated when no fiducial markers are used.)&lt;br /&gt;
&lt;br /&gt;
&#039;Outside-in&#039; tracking can be less computationally demanding, and can make use of multiple cameras to make results more stable and consistent. This comes in part from the reduced chance of [[occlusion]]. Unfortunately it requires a controlled environment and more extensive equipment.&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5130</id>
		<title>Markerless, inside-out tracking</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5130"/>
		<updated>2015-06-02T20:44:12Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Marker-less, Inside-Out Tracking is a composite term derived from two separate concepts, and refers to a method of tracking objects in three dimensional space. &lt;br /&gt;
&lt;br /&gt;
&#039;Marker-less&#039; refers to the lack of [[fiducial markers]] used in this type of tracking, while &#039;Inside-Out&#039; refers to the method by which data is gathered for the tracking.&lt;br /&gt;
&lt;br /&gt;
[[File:F2Ak4iE.jpg|thumbnail|A room covered in fiducial markers for inside-out tracking, at [[Valve Corporation]]&amp;lt;ref&amp;gt;YouTube, Video: &#039;Steam Dev Days&#039;, on Channel: Valve, Published on Feb 11, 2014&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
==Marker-less Tracking== &lt;br /&gt;
Because of the difficult computations required for software to interpret live camera input, tracking solutions that rely on cameras sometimes place [[fiducial markers]] (which may look like QR codes) in view of the tracking camera. This gives the software a known pattern to look for, which makes computation simpler and faster.&lt;br /&gt;
&lt;br /&gt;
Thus the term, &#039;marker-less&#039; refers to a system that is robust enough that it does not need the aid of printed markers to aid in its interpretation of the three dimensional space.&lt;br /&gt;
&lt;br /&gt;
==Inside-Out Tracking==&lt;br /&gt;
Tracking systems that make use of a camera may be organized into two main branches, &#039;inside-out&#039; and &#039;outside-in&#039; tracking. Both terms refer to the placement of the tracking camera itself, with reference to what it is tracking.&lt;br /&gt;
&lt;br /&gt;
In an &#039;inside-out&#039; system, the tracking camera is placed within the item being tracked (for our purposes, like a [[Virtual_Reality#Devices|head mounted display]]), from which vantage point it looks &#039;&#039;out&#039;&#039; at the world around it. It uses its changing perspective on the outside world to note changes in position.&lt;br /&gt;
&lt;br /&gt;
In an &#039;outside-in&#039; system, one or more tracking cameras are placed around the space within which the tracked object will move, and they use their changing view of the object itself to measure its changes in position.&lt;br /&gt;
&lt;br /&gt;
Both systems have merits; &#039;inside-out&#039; tracking is notable because it can require no outside equipment, which is ideal for a portable device. Unfortunately, in the case of a headset, it places the burden of computation on the headset itself. (A burden which is exacerbated when no fiducial markers are used.)&lt;br /&gt;
&lt;br /&gt;
&#039;Outside-in&#039; tracking can be less computationally demanding, and can make use of multiple cameras to make results more stable and consistent. This comes in part from the reduced chance of [[occlusion]]. Unfortunately it requires a controlled environment and more extensive equipment.&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:F2Ak4iE.jpg&amp;diff=5129</id>
		<title>File:F2Ak4iE.jpg</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:F2Ak4iE.jpg&amp;diff=5129"/>
		<updated>2015-06-02T20:36:43Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5128</id>
		<title>Markerless, inside-out tracking</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Markerless,_inside-out_tracking&amp;diff=5128"/>
		<updated>2015-06-02T20:33:33Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: Added summary and explanation of both portions of the term&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Marker-less, Inside-Out Tracking is a composite term derived from two separate concepts, and refers to a method of tracking objects in three dimensional space. &lt;br /&gt;
&lt;br /&gt;
&#039;Marker-less&#039; refers to the lack of [[fiducial markers]] used in this type of tracking, while &#039;Inside-Out&#039; refers to the method by which data is gathered for the tracking.&lt;br /&gt;
&lt;br /&gt;
==Marker-less Tracking==&lt;br /&gt;
Because of the difficult computations required for software to interpret live camera input, tracking solutions that rely on cameras sometimes place [[fiducial markers]] (which may look like QR codes) in view of the tracking camera. This gives the software a known pattern to look for, which makes computation simpler and faster.&lt;br /&gt;
&lt;br /&gt;
Thus the term, &#039;marker-less&#039; refers to a system that is robust enough that it does not need the aid of printed markers to aid in its interpretation of the three dimensional space.&lt;br /&gt;
&lt;br /&gt;
==Inside-Out Tracking==&lt;br /&gt;
Tracking systems that make use of a camera may be organized into two main branches, &#039;inside-out&#039; and &#039;outside-in&#039; tracking. Both terms refer to the placement of the tracking camera itself, with reference to what it is tracking.&lt;br /&gt;
&lt;br /&gt;
In an &#039;inside-out&#039; system, the tracking camera is placed within the item being tracked (for our purposes, like a [[VR HMDs|head mounted display]]), from which vantage point it looks &#039;&#039;out&#039;&#039; at the world around it. It uses its changing perspective on the outside world to note changes in position.&lt;br /&gt;
&lt;br /&gt;
In an &#039;outside-in&#039; system, one or more tracking cameras are placed around the space within which the tracked object will move, and they use their changing view of the object itself to measure its changes in position.&lt;br /&gt;
&lt;br /&gt;
Both systems have merits; &#039;inside-out&#039; tracking is notable because it can require no outside equipment, which is ideal for a portable device. Unfortunately, in the case of a headset, it places the burden of computation on the headset itself. (A burden which is exacerbated when no fiducial markers are used.)&lt;br /&gt;
&lt;br /&gt;
&#039;Outside-in&#039; tracking can be less computationally demanding, and can make use of multiple cameras to make results more stable and consistent. This comes in part from the reduced chance of [[occlusion]]. Unfortunately it requires a controlled environment and more extensive equipment.&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User_talk:Xinreality&amp;diff=5121</id>
		<title>User talk:Xinreality</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User_talk:Xinreality&amp;diff=5121"/>
		<updated>2015-06-02T20:04:55Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: Initial Salutations&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;say something here!&lt;br /&gt;
&lt;br /&gt;
==Ruthalas==&lt;br /&gt;
&lt;br /&gt;
Thank you for your kind words!&lt;br /&gt;
&lt;br /&gt;
I&#039;ll be wandering around filling in pages on subjects I have some passing understanding when I can.&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Chaperone&amp;diff=5112</id>
		<title>Chaperone</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Chaperone&amp;diff=5112"/>
		<updated>2015-06-02T18:44:13Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: Added category&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Stock-vector.jpg|thumbnail|Concept Drawing of Chaperone&#039;s Visual Grid]]&lt;br /&gt;
The Chaperone system is a utility designed by Valve to be used with their [[Virtual_Reality#Devices|Head Mounted Display]], the [[HTC Vive]]. Once set up, it keeps track of where a user is in relation to the physical walls around them, and if necessary, shows a blue grid&amp;lt;ref&amp;gt;http://www.tested.com/tech/concepts/504521-htc-vive-vs-oculus-crescent-bay-my-10-vr-takeaways/&amp;lt;/ref&amp;gt; within the user&#039;s virtual space to notify them that they are in close proximity to a physical barrier. The [[HTC Vive]] provides tracking within an approximately 15 foot by 15 foot area, and the Chaperone system provides the user with confidence that they will not collide with physical barriers as they experience their virtual content.&lt;br /&gt;
&lt;br /&gt;
==Purpose==&lt;br /&gt;
The main purpose of the Chaperone system is to warn the user when they approach a physical barrier, to which they are blind because of the [[HTC Vive]] headset they are wearing. This will ideally prevent collisions and minimize accidents. This may help [[virtual reality]] experiences be more immersive because the observer trusts that they are safe to move around in the environment.&lt;br /&gt;
&lt;br /&gt;
A secondary purpose for the Chaperone system is to allow games to interact with the user in a unique way. Because the Chaperone system has information about the user&#039;s environment, virtual applications can react to the user&#039;s surroundings. It could, for example, generate a location that matches the orientation and layout of the user&#039;s room. Conversely, the system could use techniques like overlapping spaces or [[directed walking]]&amp;lt;ref&amp;gt;http://ict.usc.edu/pubs/Impossible%20Spaces-%20Maximizing%20Natural%20Walking%20in%20Virtual%20Environments%20with%20Self-Overlapping%20Architecture.pdf&amp;lt;/ref&amp;gt; to make traversable virtual environments that seem much larger than the user&#039;s physical space by distorting the user&#039;s perception of distance and rotational displacement.&lt;br /&gt;
&lt;br /&gt;
==Current Limitations==&lt;br /&gt;
At the time of writing, the setup procedure for the Chaperone system involves manually delineating the corners of the physical space with the [[HTC Vive]]&#039;s controllers.&lt;br /&gt;
&lt;br /&gt;
One other limitation of the Chaperone system is that it is currently limited to mapping the floor and walls of a space. This means that if the user&#039;s space includes a couch it would not be mapped and would present an invisible obstacle (it could, however, be defined as the limiting &#039;wall&#039; on that side of the space).&lt;br /&gt;
&lt;br /&gt;
Finally, the physical model built by the Chaperone system is static. If any portion of the setup changes, such as moved furniture, it would not be reflected in the model and therefore not be used to guide the user.&lt;br /&gt;
&lt;br /&gt;
==Likely Additions==&lt;br /&gt;
According to comments by the [[Lighthouse]] developers at conventions and on Twitter&amp;lt;ref&amp;gt;https://twitter.com/vk2zay/status/573909197949009920&amp;lt;/ref&amp;gt;, as well as the emphasis on mapping in its trademark filing&amp;lt;ref&amp;gt;http://tsdr.uspto.gov/#caseNumber=86558185&amp;amp;caseType=SERIAL_NO&amp;amp;searchType=statusSearch&amp;lt;/ref&amp;gt;, it seems likely that the [[HTC Vive]] will eventually use a stereo pair of cameras or a depth camera to aid in the &amp;quot;detection and measurement&amp;quot; of the user&#039;s surroundings. This might be a one-time calibration, or it could provide constantly updating information to capture changes in the user&#039;s environment. Gathering data in this fashion would also likely allow the recognition of furniture and other small objects.&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
Valve Corporation filed a trademark application entitled &#039;CHAPERONE&#039; on March 9, 2015&amp;lt;ref&amp;gt;http://tsdr.uspto.gov/#caseNumber=86558185&amp;amp;caseType=SERIAL_NO&amp;amp;searchType=statusSearch&amp;lt;/ref&amp;gt;. It contains this description of the Chaperone system:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Hardware and software, sensors, and beacons for the detection and measurement of physical objects and the representation of such objects in virtual reality environments; Devices used for the detection and measurement of physical objects and the representation of such objects in virtual reality environments; Electronic apparatus for the detection and measurement of physical objects and the representation of such objects in virtual reality environments&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Chaperone&amp;diff=5111</id>
		<title>Chaperone</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Chaperone&amp;diff=5111"/>
		<updated>2015-06-02T18:42:37Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Stock-vector.jpg|thumbnail|Concept Drawing of Chaperone&#039;s Visual Grid]]&lt;br /&gt;
The Chaperone system is a utility designed by Valve to be used with their [[Virtual_Reality#Devices|Head Mounted Display]], the [[HTC Vive]]. Once set up, it keeps track of where a user is in relation to the physical walls around them, and if necessary, shows a blue grid&amp;lt;ref&amp;gt;http://www.tested.com/tech/concepts/504521-htc-vive-vs-oculus-crescent-bay-my-10-vr-takeaways/&amp;lt;/ref&amp;gt; within the user&#039;s virtual space to notify them that they are in close proximity to a physical barrier. The [[HTC Vive]] provides tracking within an approximately 15 foot by 15 foot area, and the Chaperone system provides the user with confidence that they will not collide with physical barriers as they experience their virtual content.&lt;br /&gt;
&lt;br /&gt;
==Purpose==&lt;br /&gt;
The main purpose of the Chaperone system is to warn the user when they approach a physical barrier, to which they are blind because of the [[HTC Vive]] headset they are wearing. This will ideally prevent collisions and minimize accidents. This may help [[virtual reality]] experiences be more immersive because the observer trusts that they are safe to move around in the environment.&lt;br /&gt;
&lt;br /&gt;
A secondary purpose for the Chaperone system is to allow games to interact with the user in a unique way. Because the Chaperone system has information about the user&#039;s environment, virtual applications can react to the user&#039;s surroundings. It could, for example, generate a location that matches the orientation and layout of the user&#039;s room. Conversely, the system could use techniques like overlapping spaces or [[directed walking]]&amp;lt;ref&amp;gt;http://ict.usc.edu/pubs/Impossible%20Spaces-%20Maximizing%20Natural%20Walking%20in%20Virtual%20Environments%20with%20Self-Overlapping%20Architecture.pdf&amp;lt;/ref&amp;gt; to make traversable virtual environments that seem much larger than the user&#039;s physical space by distorting the user&#039;s perception of distance and rotational displacement.&lt;br /&gt;
&lt;br /&gt;
==Current Limitations==&lt;br /&gt;
At the time of writing, the setup procedure for the Chaperone system involves manually delineating the corners of the physical space with the [[HTC Vive]]&#039;s controllers.&lt;br /&gt;
&lt;br /&gt;
One other limitation of the Chaperone system is that it is currently limited to mapping the floor and walls of a space. This means that if the user&#039;s space includes a couch it would not be mapped and would present an invisible obstacle (it could, however, be defined as the limiting &#039;wall&#039; on that side of the space).&lt;br /&gt;
&lt;br /&gt;
Finally, the physical model built by the Chaperone system is static. If any portion of the setup changes, such as moved furniture, it would not be reflected in the model and therefore not be used to guide the user.&lt;br /&gt;
&lt;br /&gt;
==Likely Additions==&lt;br /&gt;
According to comments by the [[Lighthouse]] developers at conventions and on Twitter&amp;lt;ref&amp;gt;https://twitter.com/vk2zay/status/573909197949009920&amp;lt;/ref&amp;gt;, as well as the emphasis on mapping in its trademark filing&amp;lt;ref&amp;gt;http://tsdr.uspto.gov/#caseNumber=86558185&amp;amp;caseType=SERIAL_NO&amp;amp;searchType=statusSearch&amp;lt;/ref&amp;gt;, it seems likely that the [[HTC Vive]] will eventually use a stereo pair of cameras or a depth camera to aid in the &amp;quot;detection and measurement&amp;quot; of the user&#039;s surroundings. This might be a one-time calibration, or it could provide constantly updating information to capture changes in the user&#039;s environment. Gathering data in this fashion would also likely allow the recognition of furniture and other small objects.&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
Valve Corporation filed a trademark application entitled &#039;CHAPERONE&#039; on March 9, 2015&amp;lt;ref&amp;gt;http://tsdr.uspto.gov/#caseNumber=86558185&amp;amp;caseType=SERIAL_NO&amp;amp;searchType=statusSearch&amp;lt;/ref&amp;gt;. It contains this description of the Chaperone system:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Hardware and software, sensors, and beacons for the detection and measurement of physical objects and the representation of such objects in virtual reality environments; Devices used for the detection and measurement of physical objects and the representation of such objects in virtual reality environments; Electronic apparatus for the detection and measurement of physical objects and the representation of such objects in virtual reality environments&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Chaperone&amp;diff=5110</id>
		<title>Chaperone</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Chaperone&amp;diff=5110"/>
		<updated>2015-06-02T18:41:27Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: Created and added image.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Stock-vector.jpg|thumbnail|Concept Drawing of Chaperone&#039;s Visual Grid]]&lt;br /&gt;
The Chaperone system is a utility designed by Valve to be used with their [[Virtual_Reality#Devices|Head Mounted Display]], the [[HTC Vive]]. Once set up, it keeps track of where a user is in relation to the physical walls around them and, if necessary, shows a blue grid&amp;lt;ref&amp;gt;http://www.tested.com/tech/concepts/504521-htc-vive-vs-oculus-crescent-bay-my-10-vr-takeaways/&amp;lt;/ref&amp;gt; within the user&#039;s virtual space to notify them that they are in close proximity to a physical barrier. The [[HTC Vive]] provides tracking within an approximately 15 foot by 15 foot area, and the Chaperone system provides the user with confidence that they will not collide with physical barriers as they experience their virtual content.&lt;br /&gt;
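The proximity warning described above amounts to a distance test against the mapped boundaries. A minimal sketch, assuming a simple rectangular play area and an illustrative 0.5 m warning threshold (the actual Chaperone implementation is not public, and these function names are hypothetical):

```python
# Illustrative sketch of a Chaperone-style proximity check.
# Assumes a rectangular play area with one corner at the origin;
# the function names and the 0.5 m threshold are hypothetical.

def wall_distances(x, y, width, depth):
    """Distances from the user at (x, y) to each of the four walls."""
    return [x, width - x, y, depth - y]

def should_show_grid(x, y, width, depth, threshold=0.5):
    """True when the user is within `threshold` meters of any wall."""
    return threshold > min(wall_distances(x, y, width, depth))
```

For a roughly 4.5 m by 4.5 m (15 foot) area, a user standing 0.2 m from a wall would trigger the grid, while a user at the center would not.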
&lt;br /&gt;
==Purpose==&lt;br /&gt;
The main purpose of the Chaperone system is to warn the user when they approach a physical barrier, to which they are blind because of the headset they are wearing. This should ideally prevent collisions and minimize accidents. It may also make [[virtual reality]] experiences more immersive, because the observer trusts that they are safe to move around in the environment.&lt;br /&gt;
&lt;br /&gt;
A secondary purpose of the Chaperone system is to allow games to interact with the user in a unique way. Because the Chaperone system has information about the user&#039;s environment, virtual applications can react to the user&#039;s surroundings. An application could, for example, generate a location that matches the orientation and layout of the user&#039;s room. Conversely, the system could use techniques like overlapping spaces or [[directed walking]]&amp;lt;ref&amp;gt;http://ict.usc.edu/pubs/Impossible%20Spaces-%20Maximizing%20Natural%20Walking%20in%20Virtual%20Environments%20with%20Self-Overlapping%20Architecture.pdf&amp;lt;/ref&amp;gt; to create traversable virtual environments that seem much larger than the user&#039;s physical space by distorting the user&#039;s perception of distance and rotational displacement.&lt;br /&gt;
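The rotational-displacement idea can be illustrated with a rotation gain, where the virtual camera turns slightly faster than the user turns physically. A hedged sketch only; the gain value and function name are assumptions for illustration, not a description of any shipped system:

```python
# Illustrative redirected-walking rotation gain: the virtual scene
# rotates slightly more than the user does, so a 300-degree physical
# turn can cover a full 360-degree virtual turn. The gain value is
# hypothetical; real systems keep gains below perceptual thresholds.

def virtual_yaw(physical_yaw_deg, gain=1.2):
    """Map accumulated physical head yaw to amplified virtual yaw."""
    return physical_yaw_deg * gain
```

With a gain of 1.2, a 300 degree physical rotation maps to a full 360 degree virtual rotation, letting a virtual space feel larger than the physical room.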
&lt;br /&gt;
==Current Limitations==&lt;br /&gt;
At the time of writing, the setup procedure for the Chaperone system involves manually delineating the corners of the physical space with the [[HTC Vive]]&#039;s controllers.&lt;br /&gt;
&lt;br /&gt;
Another limitation is that the Chaperone system currently maps only the floor and walls of a space. This means that if the user&#039;s space includes a couch, the couch would not be mapped and would present an invisible obstacle (it could, however, be defined as the limiting &#039;wall&#039; on that side of the space).&lt;br /&gt;
&lt;br /&gt;
Finally, the physical model built by the Chaperone system is static. If any portion of the setup changes, such as furniture being moved, the change would not be reflected in the model and therefore could not guide the user.&lt;br /&gt;
&lt;br /&gt;
==Likely Additions==&lt;br /&gt;
According to comments by the [[Lighthouse]] developers at conventions and on Twitter&amp;lt;ref&amp;gt;https://twitter.com/vk2zay/status/573909197949009920&amp;lt;/ref&amp;gt;, as well as the emphasis on mapping in its trademark application&amp;lt;ref&amp;gt;http://tsdr.uspto.gov/#caseNumber=86558185&amp;amp;caseType=SERIAL_NO&amp;amp;searchType=statusSearch&amp;lt;/ref&amp;gt;, it seems likely that the [[HTC Vive]] will eventually use a stereo pair of cameras or a depth camera to aid in the &amp;quot;detection and measurement&amp;quot; of the user&#039;s surroundings. This might be a one-time calibration, or it could provide constantly updating information to capture changes in the user&#039;s environment. Gathering data in this fashion would also likely allow the recognition of furniture and other small objects.&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
Valve Corporation filed a trademark application for &#039;CHAPERONE&#039; on March 9, 2015&amp;lt;ref&amp;gt;http://tsdr.uspto.gov/#caseNumber=86558185&amp;amp;caseType=SERIAL_NO&amp;amp;searchType=statusSearch&amp;lt;/ref&amp;gt;. The filing contains this description of the Chaperone system:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Hardware and software, sensors, and beacons for the detection and measurement of physical objects and the representation of such objects in virtual reality environments; Devices used for the detection and measurement of physical objects and the representation of such objects in virtual reality environments; Electronic apparatus for the detection and measurement of physical objects and the representation of such objects in virtual reality environments&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Stock-vector.jpg&amp;diff=5109</id>
		<title>File:Stock-vector.jpg</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Stock-vector.jpg&amp;diff=5109"/>
		<updated>2015-06-02T18:40:22Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Stock-vector-wireframe-grid-room-space-interior-for-design-and-decoration-vector-illustration-150750170b.jpg&amp;diff=5108</id>
		<title>File:Stock-vector-wireframe-grid-room-space-interior-for-design-and-decoration-vector-illustration-150750170b.jpg</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Stock-vector-wireframe-grid-room-space-interior-for-design-and-decoration-vector-illustration-150750170b.jpg&amp;diff=5108"/>
		<updated>2015-06-02T18:37:19Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Chaperone&amp;diff=5107</id>
		<title>Chaperone</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Chaperone&amp;diff=5107"/>
		<updated>2015-06-02T18:22:07Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: Filled in basic page content- Purpose, Limitations, Likely Additions, History&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Chaperone system is a utility designed by Valve to be used with their [[Virtual_Reality#Devices|Head Mounted Display]], the [[HTC Vive]]. Once set up, it keeps track of where a user is in relation to the physical walls around them and, if necessary, shows a blue grid&amp;lt;ref&amp;gt;http://www.tested.com/tech/concepts/504521-htc-vive-vs-oculus-crescent-bay-my-10-vr-takeaways/&amp;lt;/ref&amp;gt; within the user&#039;s virtual space to notify them that they are in close proximity to a physical barrier. The [[HTC Vive]] provides tracking within an approximately 15 foot by 15 foot area, and the Chaperone system provides the user with confidence that they will not collide with physical barriers as they experience their virtual content.&lt;br /&gt;
&lt;br /&gt;
==Purpose==&lt;br /&gt;
The main purpose of the Chaperone system is to warn the user when they approach a physical barrier, to which they are blind because of the headset they are wearing. This should ideally prevent collisions and minimize accidents. It may also make [[virtual reality]] experiences more immersive, because the observer trusts that they are safe to move around in the environment.&lt;br /&gt;
&lt;br /&gt;
A secondary purpose of the Chaperone system is to allow games to interact with the user in a unique way. Because the Chaperone system has information about the user&#039;s environment, virtual applications can react to the user&#039;s surroundings. An application could, for example, generate a location that matches the orientation and layout of the user&#039;s room. Conversely, the system could use techniques like overlapping spaces or [[directed walking]]&amp;lt;ref&amp;gt;http://ict.usc.edu/pubs/Impossible%20Spaces-%20Maximizing%20Natural%20Walking%20in%20Virtual%20Environments%20with%20Self-Overlapping%20Architecture.pdf&amp;lt;/ref&amp;gt; to create traversable virtual environments that seem much larger than the user&#039;s physical space by distorting the user&#039;s perception of distance and rotational displacement.&lt;br /&gt;
&lt;br /&gt;
==Current Limitations==&lt;br /&gt;
At the time of writing, the setup procedure for the Chaperone system involves manually delineating the corners of the physical space with the [[HTC Vive]]&#039;s controllers.&lt;br /&gt;
&lt;br /&gt;
Another limitation is that the Chaperone system currently maps only the floor and walls of a space. This means that if the user&#039;s space includes a couch, the couch would not be mapped and would present an invisible obstacle (it could, however, be defined as the limiting &#039;wall&#039; on that side of the space).&lt;br /&gt;
&lt;br /&gt;
Finally, the physical model built by the Chaperone system is static. If any portion of the setup changes, such as furniture being moved, the change would not be reflected in the model and therefore could not guide the user.&lt;br /&gt;
&lt;br /&gt;
==Likely Additions==&lt;br /&gt;
According to comments by the [[Lighthouse]] developers at conventions and on Twitter&amp;lt;ref&amp;gt;https://twitter.com/vk2zay/status/573909197949009920&amp;lt;/ref&amp;gt;, as well as the emphasis on mapping in its trademark application&amp;lt;ref&amp;gt;http://tsdr.uspto.gov/#caseNumber=86558185&amp;amp;caseType=SERIAL_NO&amp;amp;searchType=statusSearch&amp;lt;/ref&amp;gt;, it seems likely that the [[HTC Vive]] will eventually use a stereo pair of cameras or a depth camera to aid in the &amp;quot;detection and measurement&amp;quot; of the user&#039;s surroundings. This might be a one-time calibration, or it could provide constantly updating information to capture changes in the user&#039;s environment. Gathering data in this fashion would also likely allow the recognition of furniture and other small objects.&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
Valve Corporation filed a trademark application for &#039;CHAPERONE&#039; on March 9, 2015&amp;lt;ref&amp;gt;http://tsdr.uspto.gov/#caseNumber=86558185&amp;amp;caseType=SERIAL_NO&amp;amp;searchType=statusSearch&amp;lt;/ref&amp;gt;. The filing contains this description of the Chaperone system:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Hardware and software, sensors, and beacons for the detection and measurement of physical objects and the representation of such objects in virtual reality environments; Devices used for the detection and measurement of physical objects and the representation of such objects in virtual reality environments; Electronic apparatus for the detection and measurement of physical objects and the representation of such objects in virtual reality environments&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User_talk:Ruthalas&amp;diff=5106</id>
		<title>User talk:Ruthalas</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User_talk:Ruthalas&amp;diff=5106"/>
		<updated>2015-06-02T15:39:09Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: /* Thank you! */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to XinReality!==&lt;br /&gt;
Hello Ruthalas, welcome to XinReality. Thank you for creating the page on [[Uncanny valley]]. Let me know if you need anything -[[User:Xinreality|Xinreality]] ([[User talk:Xinreality|talk]]) 19:01, 1 June 2015 (PDT)&lt;br /&gt;
&lt;br /&gt;
==Thank you!==&lt;br /&gt;
(I have no idea if I am responding in the appropriate fashion.)&lt;br /&gt;
&lt;br /&gt;
Thanks Xin, my pleasure. Let me know if I need to modify anything!&lt;br /&gt;
&lt;br /&gt;
--[[User:Ruthalas|Ruthalas]] ([[User talk:Ruthalas|talk]]) 08:38, 2 June 2015 (PDT)&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User_talk:Ruthalas&amp;diff=5105</id>
		<title>User talk:Ruthalas</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User_talk:Ruthalas&amp;diff=5105"/>
		<updated>2015-06-02T15:38:59Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: /* Thank you! */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to XinReality!==&lt;br /&gt;
Hello Ruthalas, welcome to XinReality. Thank you for creating the page on [[Uncanny valley]]. Let me know if you need anything -[[User:Xinreality|Xinreality]] ([[User talk:Xinreality|talk]]) 19:01, 1 June 2015 (PDT)&lt;br /&gt;
&lt;br /&gt;
==Thank you!==&lt;br /&gt;
(I have no idea if I am responding in the appropriate fashion.)&lt;br /&gt;
&lt;br /&gt;
Thanks Xin, my pleasure. Let me know if I need to modify anything!&lt;br /&gt;
--[[User:Ruthalas|Ruthalas]] ([[User talk:Ruthalas|talk]]) 08:38, 2 June 2015 (PDT)&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User_talk:Ruthalas&amp;diff=5104</id>
		<title>User talk:Ruthalas</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User_talk:Ruthalas&amp;diff=5104"/>
		<updated>2015-06-02T15:38:25Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to XinReality!==&lt;br /&gt;
Hello Ruthalas, welcome to XinReality. Thank you for creating the page on [[Uncanny valley]]. Let me know if you need anything -[[User:Xinreality|Xinreality]] ([[User talk:Xinreality|talk]]) 19:01, 1 June 2015 (PDT)&lt;br /&gt;
&lt;br /&gt;
==Thank you!==&lt;br /&gt;
(I have no idea if I am responding in the appropriate fashion.)&lt;br /&gt;
Thanks Xin, my pleasure. Let me know if I need to modify anything!&lt;br /&gt;
--[[User:Ruthalas|Ruthalas]] ([[User talk:Ruthalas|talk]]) 08:38, 2 June 2015 (PDT)&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User_talk:Ruthalas&amp;diff=5103</id>
		<title>User talk:Ruthalas</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User_talk:Ruthalas&amp;diff=5103"/>
		<updated>2015-06-02T15:28:22Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: /* Welcome to XinReality! */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to XinReality!==&lt;br /&gt;
Hello Ruthalas, welcome to XinReality. Thank you for creating the page on [[Uncanny valley]]. Let me know if you need anything -[[User:Xinreality|Xinreality]] ([[User talk:Xinreality|talk]]) 19:01, 1 June 2015 (PDT)&lt;br /&gt;
&lt;br /&gt;
==Thank you!==&lt;br /&gt;
(I have no idea if I am responding in the appropriate fashion.)&lt;br /&gt;
&lt;br /&gt;
Thanks Xin, my pleasure. Let me know if I need to modify anything!&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Uncanny_valley&amp;diff=5091</id>
		<title>Uncanny valley</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Uncanny_valley&amp;diff=5091"/>
		<updated>2015-06-01T20:52:45Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: Corrected image placement.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Uncanny-valley.jpg|framed|Fig.1 The Uncanny Valley Graph &amp;lt;ref&amp;gt;http://www.johnmckenziehypnotherapist.co.uk/uncanny-valley/&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
The term &#039;uncanny valley&#039; refers to a situation in which something bears a realistic human appearance (or movement), yet appears uncanny or repulsive to observers. The hypothesis is that as a subject&#039;s appearance becomes more realistic, it tends to engender progressively more familiarity in the viewer, except for a small interval just before the likeness is perfect, wherein the subject instead appears repulsive.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;valley&amp;quot; refers to the dip that occurs if the concept is plotted as a graph of familiarity (y-axis) as a likeness becomes more realistic (x-axis). (See Fig.1)&lt;br /&gt;
&lt;br /&gt;
==Source==&lt;br /&gt;
The term was coined by robotics professor Masahiro Mori in 1970&amp;lt;ref&amp;gt;Kawaguchi, Judit (10 March 2011). &amp;quot;Robocon founder Dr. Masahiro Mori&amp;quot;. Words To Live By. Japan Times. p. 11.&amp;lt;/ref&amp;gt;, and first appeared in print in the 1978 book &#039;&#039;Robots: Fact, Fiction, and Prediction&#039;&#039; by Jasia Reichardt&amp;lt;ref&amp;gt;&amp;quot;An Uncanny Mind: Masahiro Mori on the Uncanny Valley and Beyond&amp;quot;. IEEE Spectrum. 12 June 2012&amp;lt;/ref&amp;gt;. The term has gained popularity with the advent of 3D animation, where it is often used to describe computer-generated characters that are intended to look perfectly lifelike but fall short and look disturbing instead.&lt;br /&gt;
&lt;br /&gt;
==Relevance to Virtual Reality==&lt;br /&gt;
The concept of the uncanny valley is particularly relevant to [[virtual reality]] because the 3D graphics that are a staple of virtual environments often exhibit the phenomenon when lifelike avatars or characters are used. While cinematic 3D renderings have largely progressed past the point of evoking revulsion, the computational constraints of real-time rendering for virtual reality mandate less complex results, which makes it difficult to create characters that are sufficiently lifelike to move past the uncanny valley.&lt;br /&gt;
&lt;br /&gt;
==Possible Solutions==&lt;br /&gt;
Until real-time rendering for stereo output can computationally support the complex simulations needed to present a photo-realistic human, the best solution available to developers of virtual environments is to stay to the left of the uncanny valley. By using characters and avatars with exaggerated or stylized features, a developer can create functional characters that avoid being &#039;not-quite-perfect&#039; and unsettling to observers.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Uncanny_valley&amp;diff=5090</id>
		<title>Uncanny valley</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Uncanny_valley&amp;diff=5090"/>
		<updated>2015-06-01T18:13:17Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Uncanny-valley.jpg|framed|Fig.1 The Uncanny Valley Graph &amp;lt;ref&amp;gt;http://www.johnmckenziehypnotherapist.co.uk/uncanny-valley/&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
The term &#039;uncanny valley&#039; refers to a situation in which something bears realistic human appearance (or movement), yet appears uncanny or repulsive to observers. The hypothesis is, that as a subject&#039;s appearance becomes more realistic it will tend to engender progressively more familiarity in the viewer- except for a small period just before the likeness is perfect, wherein the subject instead appears repulsive. &lt;br /&gt;
&lt;br /&gt;
The &amp;quot;valley&amp;quot; refers to the dip that occurs if the concept is plotted as a graph of familiarity (y-axis) as a likeness becomes more realistic (x-axis). (See Fig.1)&lt;br /&gt;
&lt;br /&gt;
==Source==&lt;br /&gt;
The term was coined by robotics professor Masahiro Mori in 1970&amp;lt;ref&amp;gt;Kawaguchi, Judit (10 March 2011). &amp;quot;Robocon founder Dr. Masahiro Mori&amp;quot;. Words To Live By. Japan Times. p. 11.&amp;lt;/ref&amp;gt;, and first appeared in print in the 1978 book &#039;&#039;Robots: Fact, Fiction, and Prediction&#039;&#039; by Jasia Reichardt&amp;lt;ref&amp;gt;&amp;quot;An Uncanny Mind: Masahiro Mori on the Uncanny Valley and Beyond&amp;quot;. IEEE Spectrum. 12 June 2012&amp;lt;/ref&amp;gt;. The term has gained popularity with the advent of 3D animation, where it is often used to describe computer-generated characters that are intended to look perfectly lifelike but fall short and look disturbing instead.&lt;br /&gt;
&lt;br /&gt;
==Relevance to Virtual Reality==&lt;br /&gt;
The concept of the uncanny valley is particularly relevant to virtual reality because the 3D graphics that are a staple of virtual environments often exhibit the phenomenon when lifelike avatars or characters are used. While cinematic 3D renderings have largely progressed past the point of evoking revulsion, the computational constraints of real-time rendering for virtual reality mandate less complex results, which makes it difficult to create characters that are sufficiently lifelike to move past the uncanny valley.&lt;br /&gt;
&lt;br /&gt;
==Possible Solutions==&lt;br /&gt;
Until real-time rendering for stereo output can computationally support the complex simulations needed to present a photo-realistic human, the best solution available to developers of virtual environments is to stay to the left of the uncanny valley. By using characters and avatars with exaggerated or stylized features, a developer can create functional characters that avoid being &#039;not-quite-perfect&#039; and unsettling to observers.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Uncanny_valley&amp;diff=5089</id>
		<title>Uncanny valley</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Uncanny_valley&amp;diff=5089"/>
		<updated>2015-06-01T18:12:35Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: Added basic explanation, source, relevance, and possible solutions (First edit, feel free to critique and modify heavily. Just let me know what I am doing wrong.)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The term &#039;uncanny valley&#039; refers to a situation in which something bears a realistic human appearance (or movement), yet appears uncanny or repulsive to observers. The hypothesis is that as a subject&#039;s appearance becomes more realistic, it tends to engender progressively more familiarity in the viewer, except for a small interval just before the likeness is perfect, wherein the subject instead appears repulsive.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;valley&amp;quot; refers to the dip that occurs if the concept is plotted as a graph of familiarity (y-axis) as a likeness becomes more realistic (x-axis). &lt;br /&gt;
&lt;br /&gt;
[[File:Uncanny-valley.jpg|framed|The Uncanny Valley Graph &amp;lt;ref&amp;gt;http://www.johnmckenziehypnotherapist.co.uk/uncanny-valley/&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
==Source==&lt;br /&gt;
The term was coined by robotics professor Masahiro Mori in 1970&amp;lt;ref&amp;gt;Kawaguchi, Judit (10 March 2011). &amp;quot;Robocon founder Dr. Masahiro Mori&amp;quot;. Words To Live By. Japan Times. p. 11.&amp;lt;/ref&amp;gt;, and first appeared in print in the 1978 book &#039;&#039;Robots: Fact, Fiction, and Prediction&#039;&#039; by Jasia Reichardt&amp;lt;ref&amp;gt;&amp;quot;An Uncanny Mind: Masahiro Mori on the Uncanny Valley and Beyond&amp;quot;. IEEE Spectrum. 12 June 2012&amp;lt;/ref&amp;gt;. The term has gained popularity with the advent of 3D animation, where it is often used to describe computer-generated characters that are intended to look perfectly lifelike but fall short and look disturbing instead.&lt;br /&gt;
&lt;br /&gt;
==Relevance to Virtual Reality==&lt;br /&gt;
The concept of the uncanny valley is particularly relevant to virtual reality because the 3D graphics that are a staple of virtual environments often exhibit the phenomenon when lifelike avatars or characters are used. While cinematic 3D renderings have largely progressed past the point of evoking revulsion, the computational constraints of real-time rendering for virtual reality mandate less complex results, which makes it difficult to create characters that are sufficiently lifelike to move past the uncanny valley.&lt;br /&gt;
&lt;br /&gt;
==Possible Solutions==&lt;br /&gt;
Until real-time rendering for stereo output can computationally support the complex simulations needed to present a photo-realistic human, the best solution available to developers of virtual environments is to stay to the left of the uncanny valley. By using characters and avatars with exaggerated or stylized features, a developer can create functional characters that avoid being &#039;not-quite-perfect&#039; and unsettling to observers.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Uncanny-valley.jpg&amp;diff=5088</id>
		<title>File:Uncanny-valley.jpg</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Uncanny-valley.jpg&amp;diff=5088"/>
		<updated>2015-06-01T18:08:21Z</updated>

		<summary type="html">&lt;p&gt;Ruthalas: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Ruthalas</name></author>
	</entry>
</feed>