
Virtual Reality (VR) vs Augmented Reality (AR): What’s the Difference?

Reality is merely an illusion, albeit a very persistent one! Can you guess who said that? If you thought Albert Einstein, you are absolutely correct! (And if you didn’t, do brush up on your general knowledge!) Einstein was probably not thinking about the myriad ways that modern reality can be twisted when he said this, but the quote certainly fits. Both Virtual Reality (VR) and Augmented Reality (AR) can be used to change our reality.


They can expand our vision or even transport us to wonderful new places, until the only question that remains is “What is real and what is not?”. However, Virtual Reality (VR) and Augmented Reality (AR) are not exactly the same (if they were, they wouldn’t have different names!). So this article first examines Virtual Reality and Augmented Reality individually, and through that understanding answers the question “What’s the difference between Virtual Reality (VR) and Augmented Reality (AR)?”.

What is Virtual Reality?

Imagine really traveling through the Pokemon World! This world is lush green and populated with small towns full of identical-looking nurses and police officers! You run around collecting Pokemon and getting occasional electric shocks from Pikachu! This Pokemon World can truly exist for you through Virtual Reality. In other words, Virtual Reality uses technology to create a simulated environment (a Pokemon environment in this case!). This simulated environment can be totally different from the reality of this world, and yet you can perceive it as reality. So Virtual Reality is really just that: a “virtual reality” that you can move around in and experience as if you were really there.

This is stated quite succinctly by Palmer Luckey, founder of Oculus:

Why shouldn’t people be able to teleport wherever they want?

You can view Virtual Reality using a VR headset such as the Oculus Rift S, PlayStation VR, etc. Another option is simply using your phone with specially designed VR apps along with Google Cardboard, Daydream View, etc.

What is Augmented Reality?

Imagine traveling through the real world… Yes, you do it every day, but how about with the addition of Pokemon! You can run around catching not-real Pokemon using your mobile phone and enjoy the presence (and shocks!) of Pikachu while still remaining in the real world. This can be achieved using Augmented Reality (and has actually been achieved by Pokemon Go!). So Augmented Reality basically involves using technology to create an “augmented” version of reality by superimposing digital information over the real world. This can be done with AR apps on smartphones that use the camera to display the real world and superimpose extra information, like text and images, onto that world. The importance of Augmented Reality cannot be overstated.
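The superimposing step described above, blending digital pixels over a camera frame, can be sketched in a few lines. This is only an illustration of the compositing idea: the arrays, sizes, and alpha mask here are made up, and a real AR app adds tracking so the overlay sticks to objects as the camera moves.

```python
import numpy as np

def overlay(frame, layer, alpha_mask):
    """Blend a digital layer onto a camera frame.

    frame, layer: H x W x 3 uint8 arrays (camera image, digital content)
    alpha_mask:   H x W floats in [0, 1], per-pixel opacity of the layer
    """
    a = alpha_mask[..., None]                 # broadcast over RGB channels
    blended = frame * (1.0 - a) + layer * a   # classic alpha compositing
    return blended.astype(np.uint8)

# Tiny synthetic example: a black "camera frame" and an all-white
# digital layer that is visible only at the top-left pixel.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
layer = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4))
mask[0, 0] = 1.0

result = overlay(frame, layer, mask)
print(result[0, 0])  # -> [255 255 255]  (overlay applied)
print(result[1, 1])  # -> [0 0 0]        (camera frame untouched)
```

Everywhere the mask is zero, the user simply sees the camera image; where it is non-zero, the digital content shows through.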

According to Tim Cook, CEO of Apple Inc.

“I think that a significant portion of the population of developed countries, and eventually all countries, will have AR experiences every day, almost like eating three meals a day. It will become that much a part of you.”

Virtual Reality(VR) vs Augmented Reality(AR)

With Virtual Reality, you can actually experience the Pokemon World. With Augmented Reality, on the other hand, you can experience parts of the Pokemon World in the real world. That’s the major difference between the two realities! Virtual Reality allows for a fully immersive experience, but it is quite restrictive as it requires a headset. In other words, you can experience a different world using Virtual Reality, but you are totally cut off from the real world for that to happen. Augmented Reality, by contrast, allows more freedom, as your normal world view is merely enhanced and not replaced. Augmented Reality is also easier to market than Virtual Reality, since it requires no special headset but only a smartphone (which most of us already have!). This is why Augmented Reality is projected to be more relevant than Virtual Reality in the long run.

“I’m excited about augmented reality because unlike virtual reality, which closes the world out, AR allows individuals to be present in the world but hopefully allows an improvement on what’s happening presently.”

Let’s look at the key differences between VR and AR.

  • Environment: Virtual Reality creates a fully immersive digital experience that simulates a real or imaginary world, while Augmented Reality overlays digital information onto the real world.
  • Hardware: VR generally requires a headset or a similar device to immerse the user in the digital world, whereas AR can be delivered through smartphones or tablets with the help of AR apps.
  • Awareness: The user is isolated from the real world while in VR but remains aware of the real world while experiencing AR.
  • Complexity: VR requires powerful hardware and software to create a realistic experience; AR requires relatively simple technology.
  • Examples: PlayStation VR, Samsung Gear VR, and HTC Vive for VR; Pokemon GO, Google Maps AR, and the IKEA app for AR.

But overall, both Virtual Reality and Augmented Reality are hot technologies right now, becoming more and more popular (and better!) with time. So be sure to enjoy this persistent illusion that is a new reality for modern times!

In summary, Virtual Reality (VR) immerses users in digital environments, while Augmented Reality (AR) overlays digital elements onto the real world. Both technologies are reshaping how we interact with our surroundings, offering unique experiences and innovative applications. As AR becomes more integrated into our daily lives, and VR continues to evolve, the future holds endless possibilities for these transformative technologies.

Must Read:

  • Top 7 Modern-Day Applications of Augmented Reality (AR)
  • Top Industries using Virtual Reality
  • The illusions in Virtual Reality

What is the difference between Virtual Reality (VR) and Augmented Reality (AR)?

VR creates a fully immersive digital environment, while AR overlays digital information onto the real world.

Is VR or AR more relevant in the long run?

AR is projected to be more relevant in the long run due to its ease of use and integration into everyday devices like smartphones.

What are some challenges faced by VR and AR technologies?

Challenges include hardware limitations, user experience issues, and privacy concerns.

How are VR and AR impacting industries like education and healthcare?

VR and AR are revolutionizing education and healthcare by providing immersive learning experiences and innovative medical applications.

What are the future trends for VR and AR?

Future trends include improved hardware, enhanced user experiences, and greater integration into everyday life.


Virtual Reality Versus Augmented Reality Essay


Advantages of Virtual Reality

Disadvantages of Virtual Reality

Comparison Between Virtual Reality and Augmented Reality

Virtual Reality (VR) refers to a high-end user computer interface involving real-time interactions and stimulations that use several sensorial channels which include visual, auditory, tactile, smell and taste. Virtual Reality should not just be taken as a high-end user interface or a medium.

This is because it includes applications that help provide solutions to problems in different areas, for instance the military, medicine, and engineering. The ability of a given application to remedy certain challenges depends on human imagination (Burdea & Coiffet, 2003).

On the other hand, Augmented Reality (AR) aims at supplementing the real world with a virtual one instead of replacing it altogether. To achieve this, Augmented Reality makes use of computer-generated objects that appear to coexist with the real world (Klopfer, 2008). Many researchers are interested in Augmented Reality for different reasons.

These reasons include enhancing perception of and interaction with the real world, and improving the performance of various real-world tasks. Augmented Reality can also be applied in many areas, such as medical practice, commerce, engineering, design and inspection, entertainment, and the military. AR systems can be classified based on display, tracking, and application viewpoint.

According to Yeon Ma and Choi (2007), there are quite a number of positive implications associated with virtual reality. For instance, VR can be used in the medical field for simulated surgery, and it can be used to train medical students and new doctors.

The use of flight simulators in the military can provide realistic, advanced scenarios during training. Yeon Ma and Choi (2007) agree that in businesses and corporations, Virtual Reality provides a convenient form of communication and at the same time facilitates faster collection of data.

Certain stereoscopic displays and computer screens are used to display virtual reality environments. Headphones and speakers can also be used to boost simulation of the environment (Burdea & Coiffet, 2003). In fact, this amounts to one of the merits of a virtual reality environment.

Moreover, advanced virtual environments can now incorporate force feedback systems that provide tactile information. This haptic mode of virtual reality is used mainly in gaming applications, and the medical field has also benefited greatly from it (Burdea & Coiffet, 2003).
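The force-feedback idea can be illustrated with the penalty method commonly used in haptic rendering: when the tracked tool tip penetrates a virtual surface, the device pushes back with a spring-like force proportional to the penetration depth. The function name, stiffness value, and units below are hypothetical, chosen only to show the principle:

```python
def contact_force(penetration_mm, stiffness_n_per_mm=0.5):
    """Penalty-based haptic rendering sketch (hypothetical values).

    When the tracked tool tip penetrates a virtual surface, push back
    with a spring-like force proportional to penetration depth:
    F = k * x.  No contact means no force.
    """
    if penetration_mm <= 0:          # tool is still above the surface
        return 0.0
    return stiffness_n_per_mm * penetration_mm

print(contact_force(-2.0))  # -> 0.0  (no contact, no force)
print(contact_force(4.0))   # -> 2.0  (4 mm deep at k = 0.5 N/mm)
```

A real haptic device runs this kind of loop hundreds of times per second, which is what makes a virtual surface feel solid to the user.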

Another merit of a virtual reality setup is that individuals in remote locations can achieve a virtual presence with each other through telexistence and telepresence. A wired glove, or the ordinary mouse and keyboard of a computer, can serve as virtual artifacts to enable remote communication between two or more parties.

In a virtual reality setup, the new environment can be made to appear like the real world. Alternatively, it can be significantly altered to resemble the world with slight differences; a case example of this type of virtual reality is Virtual Reality games (Burdea & Coiffet, 2003).

The main disadvantage of Virtual Reality concerns the technology needed to deliver a natural, immersive experience. For a relatively long period, such efforts have remained unsuccessful. Some of the systems that allow articulated presence or provide the expected feedback are at times clumsy, which increases the chances of problems when using the system.

Another disadvantage of Virtual Reality relates to the negative social impacts of immersive environments and the psychological effects that result from prolonged usage (Yeon Ma & Choi, 2007).

It has also proved cumbersome to develop a high-fidelity virtual reality environment. Factors that limit this possibility include communication bandwidth, image resolution, and processing power.

Differences between Virtual Reality and Augmented Reality are based on the system's level of immersion. A major difference between the two is that a Virtual Reality system aims at a fully immersive virtual environment built from computer-generated elements.

This is the environment where the user performs his or her task. On the other hand, an Augmented Reality aims at combining both the virtual and real world. This is mainly aimed at assisting a given user to perform a task from a physical setting (Johnson & Sasse, 1999).

Another difference between the two is that Virtual Reality usually limits the physical movement of the user, whereas Augmented Reality requires the system to be portable especially when dealing with the outdoor augmented reality systems.

However, it is pertinent to note that Virtual Reality and Augmented Reality share some common features. For example, both feature three-dimensional imagery and interactivity, and they can be applied in similar fields (Yeon Ma & Choi, 2007).

Burdea, G., & Coiffet, P. (2003). Virtual Reality technology. Hoboken, N.J: J. Wiley Interscience.

Johnson, C., & Sasse, M. A. (1999). International Conference on Human-Computer Interaction & Interact: Human-computer interaction. Amsterdam: IOS Press.

Klopfer, E. (2008). Augmented Learning: Research and Design of Mobile Educational Games. New York: MIT Press.

Yeon Ma, J., & Choi, J. S. (2007). The Virtuality and Reality of Augmented Reality. London: Academy Publisher.


IvyPanda. (2019, June 14). Virtual Reality Versus Augmented Reality. https://ivypanda.com/essays/virtual-reality-versus-augmented-reality/



Virtual Reality vs Augmented Reality: Comparative Analysis

Dive into the world of virtual reality vs augmented reality. Understand the distinctions between these immersive technologies shaping our digital experiences.


In recent years, technological advancements have given rise to immersive experiences that blur the line between the real and virtual worlds. Augmented reality (AR) and virtual reality (VR) have emerged as two prominent technologies revolutionizing a wide range of industries and transforming how we interact with and consume digital content. As these technologies continue to progress in their development cycles and gain more popularity, it’s essential to understand what each of these technologies are, how they function, their differences and how they are applied in the real world.

What is AR?

Augmented reality integrates digital content into the real-world environment, enhancing the user's perception and interaction with their surroundings in real time. AR technology overlays computer-generated elements such as images, videos or 3D models onto the user's view of the physical world. This seamless integration opens possibilities for creative and practical applications while enhancing the user’s interaction with the physical world around them.

The primary goal of AR is to provide contextual information and enrich the user's experience by augmenting physical reality with virtual elements. AR technology prioritizes interaction with the physical world, helps identify real-world issues and offers solutions with digital overlaying.

What is VR?

Virtual reality (VR) is a fully-simulated environment that immerses users in a computer-generated, three-dimensional world, completely replacing the awareness of the real-world environment. VR technology often involves the use of specialized headsets, which track the user's movements and provide a fully immersive experience.

The primary objective of VR is to create a sense of presence, transporting users to a different reality and allowing them to interact with the virtual environment. VR focuses on engaging multiple senses such as vision, hearing, and touch, to foster an immersive experience where users can interact with new virtual worlds and simulate real-world situations.
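The head tracking described above can be sketched as a rotation: each frame, the runtime reads the headset's reported orientation and re-aims the virtual camera so the rendered scene stays locked to the user's head. The function below is a hypothetical, minimal illustration that handles only yaw (turning the head left or right):

```python
import math

def rotate_yaw(view, yaw_deg):
    """Rotate a 3-D view vector about the vertical (y) axis.

    A VR runtime does something similar every frame: it reads the
    headset's orientation and re-points the virtual camera so the
    scene appears fixed while the user's head moves.
    """
    yaw = math.radians(yaw_deg)
    x, y, z = view
    return (x * math.cos(yaw) + z * math.sin(yaw),
            y,
            -x * math.sin(yaw) + z * math.cos(yaw))

forward = (0.0, 0.0, -1.0)           # camera looking straight ahead
print(rotate_yaw(forward, 90.0))     # head turned 90 degrees: roughly (-1, 0, 0)
```

Real headsets track full 6-degree-of-freedom pose (three rotations plus position), but the principle is the same: sensor input continuously drives the virtual camera.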

Key Differences between AR and VR

AR and VR are often referenced interchangeably. However, a few key differences separate these technologies:

  • User Experience: AR blends virtual content with the real world, enhancing the user's perception of reality in the physical world. VR completely immerses users in a simulated environment, totally disconnecting them from the physical world.
  • Interaction With the Environment: AR enables users to interact with virtual and physical objects in their immediate surroundings. VR restricts interaction to only the virtual environment by requiring specialized controllers or hand-tracking devices to interact.
  • Level of Immersion: AR provides a partially-immersive experience, allowing users to perceive virtual and real-world elements simultaneously. VR offers a fully-immersive experience, placing users in a virtual environment with minimal external stimuli.
  • Hardware Requirements: AR experiences can be accessed through smartphones, tablets or specialized AR glasses. VR experiences can often only be accessed through a dedicated VR headset with controllers and other sense-tracking devices.

Common Uses of AR and VR

While AR and VR have several differences, their uses, functionalities, and roles do often coincide with one another, offering many similar solutions and experiences in various industries. Some of the most common uses of and industries leveraging these technologies include:

  • Gaming and Entertainment: Popular games and apps like Pokémon Go and Snapchat leverage AR technology that layers digital imagery over the real world. VR is often found in virtual theme park rides and simulations and is used for interactive gaming and storytelling.
  • Education: AR applications overlay informative content onto real-world objects, fostering a learning experience tailored towards student engagement. VR creates interactive learning experiences, such as a fully virtual environment in which, for example, a historical site can be studied, and it enables remote collaboration between students.
  • Retail: Retail customers can use AR applications to visualize products when shopping online, which improves decision-making and enhances the customer experience. VR allows brands to showcase their products and services interactively to help improve customer engagement and brand awareness.
  • Healthcare: Surgical planning and training AR applications help healthcare professionals by overlaying medical imaging data onto patients for real-time guidance. VR healthcare applications create virtual environments to distract patients from pain or anxiety and can help with other healthcare treatments like exposure therapy and rehabilitation.

AR and VR technologies are often found in similar industries, but with different roles and functionalities. Companies and organizations are responsible for identifying which technology is most useful and relevant to their processes and goals before integrating them into their business model.

Industry Outlook

Economic and cultural disruption from the coronavirus pandemic has helped propel AR and VR technologies and products into the market. According to Statista, the global extended reality (XR) market, which includes augmented reality, virtual reality and mixed reality (MR), reached $29.26 billion in 2022 and is anticipated to rise to over $100 billion by 2026.

Businesses have steadily been incorporating AR and VR technologies into their business models to optimize their operations. Apple recently announced its Apple Vision Pro, which is poised to be the next high-profile competitor to other XR products like the Meta Quest 2 and the PSVR 2. According to Global Data, the gaming and entertainment industry is predicted to be the biggest demand generator for AR and VR technology in the global market.

Earn an Education in Creative Technology

The expected growth in demand for AR and VR technologies and experiences means companies will need people with the skills to help develop and improve these products and services. A new breed of technology professionals will need to align themselves with the creative technology needs of the future. SMU’s M.A. in Creative Technology is a new program that combines creative and design disciplines with core and emerging digital technologies to generate innovative solutions that are growing in demand across industries. This program merges the arts with technology to help develop creative tech-focused individuals who desire a deeper understanding of AR and VR.

Begin your academic journey towards AR and VR today by empowering yourself with a new approach to creative technology education. Apply today and experience the SMU difference.


Augmented Reality (AR) vs. Virtual Reality (VR): Key Differences

December 19, 2022

Augmented Reality (AR) and Virtual Reality (VR) are both technologies that encompass different applications but allow users to interact with digital content in an environment they could not otherwise access or experience.

Forecasts for 2023 estimated that VR and AR users in the United States would exceed 110 million. An impressive figure indeed.

Yet what exactly are AR and VR, and what is the difference between them? Some use the terms interchangeably because of their similar elements and applications.

This article details the AR vs. VR showdown, their advantages and disadvantages, and their applications.


What Is Augmented Reality (AR)?

AR is a technology that takes what we see in the real world and superimposes digital content (images, video, sounds, etc.) onto it in real-time. The purpose of AR is to use the physical world as a canvas and overlay computer-generated elements on top of it.

This technology is currently used for various purposes in gaming, entertainment, education, and other fields. AR lets users interact with real-world objects without necessarily requiring a headset or goggles, operating on mobile devices or wearable technologies like Microsoft’s HoloLens 2 or the Lenovo ThinkReality A3.

For example, AR can help with navigation by providing visible directions through a device screen. So, there is no need to glue one’s eyes to a map while walking around and exploring new cities and sites.

What Is Virtual Reality (VR)?

VR is a digital simulation of a 3D environment that can be interacted with and viewed with a VR headset. VR is the next generation of how people experience the online world since it allows them to immerse themselves in a virtual environment.


VR immerses the user in a virtual environment to explore 3D worlds from the user’s perspective, engage with VR apps, watch movies on a virtual screen, and more. VR software runs on a PC or laptop, but powerful gaming hardware is needed to drive VR headsets.

VR first gained traction in the gaming and entertainment industries, and VR headsets are currently available for platforms such as PlayStation 4 and Xbox One consoles. Over the past decade, however, VR applications have expanded to other fields, such as staff training.

AR vs. VR: What’s the Difference?

The two technologies carry different functions, creating distinct experiences for users. So, what is the difference between virtual reality and augmented reality?

  • Objective: VR is a computer-generated replica of a three-dimensional environment; the user wears a headset and is immersed in the virtual world, while AR directly overlays digital, visible, and interactive information onto the physical world.
  • Immersion: VR provides an immersive and interactive experience that some users find overly realistic, while AR provides information, such as directions or guidance, without a completely immersive experience for the user. 
  • Experience: In contrast to AR, which can be viewed by everyone around since it layers digital data onto the user’s current environment, VR is a purely subjective experience as it requires wearing goggles or a headset to cut off the user from their immediate surroundings. 
  • Motion Tracking: Another difference between AR and VR is motion tracking. VR motion-tracking sensors are mounted on the user’s body or headset, whereas AR relies on the device’s own sensors for position and orientation.
  • Bandwidth: VR devices are generally tethered to computers and require a large amount of bandwidth to generate graphics. AR devices are mobile and need comparatively less bandwidth for graphics generation.

Advantages and Disadvantages of Augmented Reality

There are numerous opportunities for AR applications , such as in education, including virtual visits to museums; classroom instruction that requires students to use their imagination; lessons that offer hands-on activities; and multimedia experiences such as tutorials and interactive games. 

Since AR-presented information is accessed in various ways via wearable devices (e.g., Google Glass), smartphones, or tablets, it is more accessible and easier to develop for. Statista estimates that there will be 1.7 billion AR user devices worldwide by 2024.


However, AR brings new challenges regarding data security since AR-based applications require permission to access a user’s phone camera, sensors, location, etc. 

These requirements pose privacy concerns, as a study published on ScienceDirect, a portal for scientific and medical research papers, concluded after examining privacy issues in AR-based applications.

Lastly, extensive and excessive use of AR-based applications may result in health risks such as eye strain, fatigue, and dizziness, which can adversely impact an individual’s life.

Advantages and Disadvantages of Virtual Reality

As more people opt to use virtual reality to experience an event or even as a form of entertainment, the advantages of VR technology will continue to grow. VR allows users to indulge in experiences they cannot undergo in real life, such as flying, underwater dives, travelogue views of different places, and more.

VR has even made its way into healthcare, from improving surgical procedures to training nurses on managing certain medical conditions. Doctors are even using VR instead of traditional physical therapy exercises as a refreshing option. 

However, the major disadvantage of VR may be that it is still radically new. Long-term implications are still unknown, and with increased reliance on this technology, these implications could become more significant. 

For instance, VR training cannot replace actual training, despite its reduced risks and vast benefits, especially in high-risk or meticulous professions.

In addition, VR costs can be high, and its applications are limited. Only some people have the hardware for VR, and those who do often cannot use it to its full potential due to high latency between what is shown on the screen and the motion picked up by the headset.

Applications of Augmented Reality

AR has been extensively used in various sectors, including education, training, marketing, and architecture. It is also evident how many companies have already utilized AR to boost work efficiency and better meet customers’ demands through better quality service delivery or product offerings. 

Here are some innovative AR-based applications:

  • Apple Measure: An iOS mobile application that automatically measures objects or distances.
  • IKEA Place: An iOS mobile application for virtually furnishing a house before buying any furniture, to see what it might look like in real life.
  • AUGmentecture: An AR tool for architects that creates a 3D-modeled virtual visualization of a project on a tablet device.

Applications of Virtual Reality

VR has been revolutionizing the way we experience the world. From simply viewing something to experiencing it, VR is an exciting new technology. VR applications are endless, including gaming, education, healthcare, and entertainment experiences such as movies and concerts.

There are also more practical applications, such as manufacturing training programs and architectural walkthroughs.


Here are some exciting VR-based applications:

  • Samsung 837X: A fully immersive retail experience that has been making headlines worldwide. It is a three-story, 11,000-square-foot shop in New York City offering interactive digital experiences and services.
  • Etsy Virtual House: A virtual reality experience developed by Etsy that allows buyers to walk through a virtual house and view items for sale by numerous artists.
  • Space Explorers (The ISS Experience): An Oculus-powered production that gives people a taste of what it would be like to explore deep space.

What Is Mixed Reality (MR)?

MR is a term used to describe different technologies that interact with the user in the real world, as opposed to those on a screen. MR allows people to interact with live environments and objects through the use of various forms of multimedia, such as holograms, 3D images, and sounds.

One could compare MR to VR, but MR is more about reconstructing the physical world than creating an imaginary one.

MR combines VR and AR, hence mixed reality. Rather than being immersed in purely made-up images, users see their physical environment and the interactive digital content through their device’s screen or while wearing special glasses. Unlike VR, MR is not a true virtual reality as it does not completely block out the user’s surroundings or environment.

Role of Virtual Reality and Augmented Reality in the Metaverse

The Metaverse envisions a world where VR and AR merge to create the ultimate immersive virtual experience. It is no longer AR vs. VR but a joint immersive world: a simulated environment created with software such as 3D modeling tools and experienced through VR headsets.

Due to the headsets' built-in eye tracking, these environments are interactive, meaning that users can manipulate objects using their hands and eye movements. In the Metaverse, you can experience a digital environment (VR) with a real-time digital overlay of information (AR) when logging in from a VR headset.

Other tech giants, such as Google and Microsoft, have pursued similar projects since Meta announced its Metaverse plans, and smaller companies have been keen to follow. For instance, the Decentraland Foundation developed Decentraland (MANA), a Metaverse for crypto-enthusiast gamers.

AR and VR have gained notable recognition and proved to be two of the most powerful technologies of recent years. Having developed in significant strides over the past decades, with applications expanding at an incredible rate, they are no longer a novelty but an industry in their own right.

Wareable

Virtual reality v augmented reality: Which is the future?

Sophie Charara

Fancy spending time in a better version of the real world or new, artificial worlds?

They’re yin and yang, two cutting-edge technologies that could change the world, but currently involve dorky hardware and are the subject of fascination for the world’s most influential people in technology.

But while AR and VR have a lot in common, they could also lead us down totally different paths in entertainment, gaming, communication and more. So which will win?

Augmented reality overlays virtual 3D graphics onto our real world, augmenting the way we see our everyday life and bringing us more information.

Read this: Augmented reality explained

Virtual reality, however, immerses us in totally new, synthetic worlds with 360 degree views and little to no sensory input from the room your body is actually in.

So how do they actually stack up? Let’s take a look.

VR v AR: The players


Both AR and VR are quick to find converts, but the two complementary yet contradictory visions of the future also tend to put people into two camps.

Microsoft was one of the early players to invest heavily in AR with HoloLens and its ecosystem. The Redmond company continues to work with the likes of NASA and other companies, like car manufacturers, to find ways to implement AR into everyday life in industry.

One of the biggest players in tech is Apple, and the Cupertino company has also found itself enamored with AR, even though it has yet to show a product. Sure, it’s interested in VR and thinks it could go somewhere, but the thing that really gets the company excited is AR. Back in 2016, CEO Tim Cook called it a “core technology,” saying that he expects it to be a big technology, bigger than VR, in the future. “Virtual reality sort of encloses and immerses the person into an experience that can be really cool,” Cook said, “but probably has a lower commercial interest over time. Less people will be interested in that.”

Read this: Everything you need to know about Microsoft HoloLens

There’s also Snap, which has gradually dipped its toes into AR with filters and, as speculated, perhaps an upcoming pair of smartglasses. However, while Apple and Microsoft are the big names in AR right now, the augmented reality tech that has the industry going nuts (and throwing cash around) is Magic Leap.

Company CEO Rony Abovitz is a frequent critic of VR as a way forward in both entertainment and gaming, going so far as to call it dangerous. In an Ask Me Anything on Reddit, Abovitz discussed the differences between VR and AR, mostly that VR headsets immerse users in an artificial world while AR incorporates objects and environments from the real world.

“There are a class of devices (see-through and non-see-through) called stereoscopic 3D,” he said. “We at Magic Leap believe these inputs into the eye-brain system are incorrect and can cause a spectrum of temporary and/or permanent neurologic deficits.”


“Our philosophy as a company (and my personal view) is to ‘leave no footprints’ in the brain. The brain is very neuroplastic and there is no doubt that near-eye stereoscopic 3D systems have the potential to cause neurologic change.”

Now that Palmer Luckey has left Oculus, the biggest public cheerleader of VR is Mark Zuckerberg, who bought Oculus for Facebook for $2 billion. “We’re working on VR because I think it’s the next major computing and communication platform after phones,” he said in 2016. “We’ll have the power to share our full sensory and emotional experience with people wherever we’d like.”

Read this: Meet Apple’s AR dream team

Zuckerberg believes in VR so much that he is willing to invest $3 billion to get the technology where it needs to be, because he clearly doesn’t think it’s good enough yet. But while Zuck is all in on VR, he’s a little less enthused by AR, mentioning only that he expects it to go mainstream around 2022 because the technology just isn’t there yet.

Google seems to be taking the same path. The company, right now, is all in on developing its Daydream VR platform. It wanted to make VR extremely accessible with Cardboard, but Daydream’s goal is a little grander: to make mobile VR great. It started with the Daydream View headset, and it’ll continue with standalone headsets from the likes of HTC and Lenovo, as well as technologies to make desktop-quality graphics a signature of mobile VR. Its Project Tango AR platform will remain on smartphones and tablets for the time being, but there’s plenty of potential to bring it to headsets and smartglasses in the future.

VR v AR: The experience


AR is exciting but we don’t have many ways to experience it. Google Glass was underwhelming, HoloLens has field-of-view problems (and still hasn’t launched in a consumer capacity), Snap has filters, Magic Leap is silent, and the rest of it is done on our phones, like Pokemon Go or Google’s Project Tango. Plus, nobody has come close to building AR into glasses we’d actually wear on a day-to-day basis.

Like voice-controlled tech, AR has plenty of pop culture inspiration but no one will get on board until it works every time, all the time. Oh, and until it actually looks cool. Of the two technologies, AR is the one that we are supposed to use on our average day, venturing out of the house in public in front of other people.

VR, on the other hand, is an experience that’s a little easier to understand. You put on a headset and get transported to another world, with two of your senses cut off from reality, tricking your mind into thinking you’re someplace you’re not. Now, the hardware isn’t perfect just yet. There need to be higher-resolution displays, lower latency, more immersive ways to feel your VR content, and eye tracking, which can be used to display better graphics and make AI characters in virtual worlds treat you like a person in the real world would.

Read this: The race to mixed reality

Yes, we’ve seen and heard stories of folks taking Samsung Gear VR onto the train, but let’s face it: VR has begun life as an at-home gaming peripheral. For that user, it can be a pretty magical thing right now.


While AR will eventually be neatly tucked into the sides of your sports sunnies, VR is always going to have to enclose your eyes and ears with lenses, displays and headphones to work.

AR glasses will come with some social etiquette guidelines, as companies will have to figure out ways to make sure people realize you’re connected while you’re looking at them (kind of like Spectacles’ light). That was one of the major public concerns with Google Glass. VR can’t disappear as easily as AR glasses, though, since it quickly becomes obvious that someone is ‘plugging in’ to a virtual world for a session. However, there are upcoming devices like Windows Mixed Reality headsets, which will use pass-through cameras to see the real world in addition to the virtual.

In general, AR specs are likely to be lighter and more comfortable, letting you combine the real world with the virtual, while VR headsets are bigger, bulkier and more immersive, cutting you off from reality.

As for prices, it’s all a big crapshoot right now. You can get a VR headset for as low as , and you can get premium experiences like PlayStation VR , Oculus Rift or HTC Vive from to . Good luck trying to get in on AR though, with HoloLens dev kits running at a whopping .

VR v AR: The potential


For either VR or AR to get mass-market appeal, it needs to resonate not only with early adopters but also with regular folks who just want to play some games or check their email. But what about the other uses of VR and AR? How do they fit into our world as a whole?

Both VR and AR have been touted as keys to training medical students by replacing textbooks. Medical students can use either to work on digital cadavers or dummies that can easily, and cheaply, be reset for constant reuse by hundreds or thousands of students. Additionally, VR can be used to create digital labs that allow students to get the hands-on experience they need without the cost associated with physical labs.

VR can also be used for emotional needs. You can use VR to distract someone from pain, but you can also use it to create empathy, putting people in situations that they might not be familiar with. On top of that, you can use it to recreate your memories, though that comes with a whole series of ethical questions humanity might not be ready for.

And of course, we’ve also seen both VR and AR in projects such as First Life at the Natural History Museum , Parthenon sculptures and Bronze Age exhibitions at the British Museum . Public spaces are only beginning to explore how the technology can be used to improve experiences. Look no further than Disney, who is interested in bringing AR experiences to its theme parks.


The potential future of AR, however, is all about augmentation. The technology can keep hands free, giving workers the benefit of additional, useful information as they work. Imagine construction or factory workers having instant access to blueprints in their field of view as they look at the site. Or NASA engineers and astronauts seeing important schematics laid on top of critical mission systems.

Then there are uses we’ve barely thought about yet. For instance, UC Berkeley is exploring AR as a way to communicate with robots. The problem with machines right now is that we have to turn to secondary screens to get all our information. It’s all tucked away in apps and phones and computers. But what if anyone could look at a Roomba or a drone and instantly see its battery level, current task, and where it’s going next? Wouldn’t that solve a whole bunch of problems people have with automation and robots?

VR v AR: Which is the future?


I can’t help you with that, Dave. And that’s because this really comes down to what humanity wants as a whole. Do they want a future based in, well, reality, gaining additional, useful information to make their daily lives easier? Or do they want a constructed, artificial reality that is cut off from what we now refer to as reality?

It’s deep stuff, but it’s the fundamental divide between AR and VR, and what makes their yin and yang so interesting to ponder. The solution, however, may be explained by one thing: We’re social creatures. Currently, AR seems to be better equipped to handle the social needs of humans, largely because it’s augmenting our current world, not trying to replace it.

This is why people working in VR, like Facebook, are working so hard to make VR more social. Google’s Daydream platform allows you to watch videos together in virtual reality, for example. Facebook recently launched its Spaces app, an attempt to make it fun to hang out with your friends in VR.

What is possible is that each of them better defines its role. AR could turn into an everyday help for the common person, helping them make better decisions about food, transportation, people and more. VR, on the other hand, could turn into an entertainment activity, whether it be gaming or experiencing great storytelling. In that case, well, the answer to which tech is the future would be both.


Sophie was Wareable's associate editor. She joined the team from Stuff magazine where she was an in-house reviewer. For three and a half years, she tested every smartphone, tablet, and robot vacuum that mattered.

A fan of thoughtful design, innovative apps, and that Spike Jonze film, she is currently wondering how many fitness tracker reviews it will take to get her fit. Current bet: 19.

Sophie has also written for a host of sites, including Metro, the Evening Standard, the Times, the Telegraph, Little White Lies, the Press Association and the Debrief.

She now works for Wired.


ORIGINAL RESEARCH article

The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature

Pietro Cipresso*

  • 1 Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano, Milan, Italy
  • 2 Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
  • 3 Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, Spain

The recent appearance of low-cost virtual reality (VR) technologies – like the Oculus Rift, the HTC Vive, and the Sony PlayStation VR – and mixed reality interfaces (MRITF) – like the HoloLens – is attracting the attention of users and researchers, suggesting it may be the next major stepping stone in technological innovation. However, the history of VR technology is longer than it may seem: the concept of VR was formulated in the 1960s and the first commercial VR tools appeared in the late 1980s. For this reason, during the last 20 years, hundreds of researchers have explored the processes, effects, and applications of this technology, producing thousands of scientific papers. What is the outcome of this significant research effort? This paper aims to answer that question by exploring, using advanced scientometric techniques, the existing research corpus in the field. We collected all the existing articles about VR in the Web of Science Core Collection scientific database; the resultant dataset contained 21,667 records for VR and 9,944 for augmented reality (AR). Each bibliographic record contained various fields, such as author, title, abstract, country, and all the references (needed for the citation analysis). The network and cluster analysis of the literature showed a composite panorama characterized by change and evolution over time. Indeed, whereas until five years ago work on VR appeared in both conference proceedings and journals, more recently journals have become the main medium of communication. Similarly, while computer science was at first the leading research field, clinical areas have since grown, as has the number of countries involved in VR research. The present work discusses how the use of VR in its main areas of application has evolved and changed over time, with an emphasis on VR's expected future capacities, growth, and challenges. We conclude by considering the disruptive contribution that VR/AR/MRITF could make in scientific fields, as well as in human communication and interaction, as already happened with the advent of mobile phones, by increasing the use and development of scientific applications (e.g., in clinical areas) and by modifying social communication and interaction among people.

Introduction

In the last 5 years, virtual reality (VR) and augmented reality (AR) have attracted the interest of investors and the general public, especially after Mark Zuckerberg bought Oculus for two billion dollars (Luckerson, 2014; Castelvecchi, 2016). Currently, many other companies, such as Sony, Samsung, HTC, and Google, are making huge investments in VR and AR (Korolov, 2014; Ebert, 2015; Castelvecchi, 2016). However, while VR has been used in research for more than 25 years, with thousands of papers and many researchers forming a strong, interdisciplinary community, AR has a more recent application history (Burdea and Coiffet, 2003; Kim, 2005; Bohil et al., 2011; Cipresso and Serino, 2014; Wexelblat, 2014). The study of VR originated in the computer graphics field and has since been extended to several disciplines (Sutherland, 1965, 1968; Mazuryk and Gervautz, 1996; Choi et al., 2015). Currently, video games supported by VR tools are more popular than in the past, and they represent valuable work-related tools for neuroscientists, psychologists, biologists, and other researchers as well. Navigation studies are a good example: complex experiments can be run in a laboratory using VR, whereas without VR researchers would have to go directly into the field, with limited possibilities for intervention. The importance of navigation studies for the functional understanding of human memory in dementia has long been a topic of significant interest, and, in 2014, the Nobel Prize in Physiology or Medicine was awarded to John M. O'Keefe, May-Britt Moser, and Edvard I. Moser for their discoveries of nerve cells in the brain that enable a sense of place and navigation. Journals and magazines have spread this knowledge by writing about "the brain's GPS," which gives a clear idea of the mechanism. A huge number of studies have been conducted in clinical settings using VR (Bohil et al., 2011; Serino et al., 2014), and Nobel laureate Edvard I. Moser has commented on the use of VR (Minderer et al., 2016), highlighting its importance for research and clinical practice. Moreover, the availability of free tools for experimental and computational VR use has made the technology accessible to almost any field (Riva et al., 2011; Cipresso, 2015; Brown and Green, 2016; Cipresso et al., 2016).

Augmented reality is a more recent technology than VR and shows an interdisciplinary application framework in which, nowadays, education and learning seem to be the most active fields of research. Indeed, AR can support learning, for example by improving content understanding and memory retention, as well as learning motivation. However, while VR benefits from clearer and more established fields of application and research areas, AR is still emerging on the scientific scene.

In this article, we present a systematic and computational analysis of the emerging interdisciplinary VR and AR fields in terms of various co-citation networks in order to explore the evolution of the intellectual structure of this knowledge domain over time.
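As a rough illustration of the kind of co-citation computation underlying such an analysis, the toy sketch below counts how often pairs of references are cited together. The paper IDs and reference lists are hypothetical stand-ins for real bibliographic records, not data from the study.

```python
# Toy co-citation count: two references are co-cited when they appear
# together in the same citing paper's reference list.
from itertools import combinations
from collections import Counter

# Hypothetical corpus: each citing paper maps to the references it cites.
corpus = {
    "paper_A": ["Sutherland1965", "Slater2009", "Azuma2001"],
    "paper_B": ["Sutherland1965", "Slater2009"],
    "paper_C": ["Azuma2001", "Milgram1994"],
}

cocitations = Counter()
for refs in corpus.values():
    # Every unordered pair of references cited together counts once.
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

# The most frequently co-cited pairs mark tightly linked research fronts.
print(cocitations.most_common(1))
```

In a full analysis these pair counts become weighted edges of a network, which is then partitioned with a clustering algorithm to reveal the intellectual structure of the field.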

Virtual Reality Concepts and Features

The concept of VR can be traced to the mid-1960s, when Ivan Sutherland, in a pivotal manuscript, described VR as a window through which a user perceives the virtual world as if it looked, felt, and sounded real, and in which the user could act realistically (Sutherland, 1965).

Since then, several definitions have been formulated according to the application area: for example, Fuchs and Bishop (1992) defined VR as “real-time interactive graphics with 3D models, combined with a display technology that gives the user the immersion in the model world and direct manipulation” (Fuchs and Bishop, 1992); Gigante (1993) described VR as “The illusion of participation in a synthetic environment rather than external observation of such an environment. VR relies on a 3D, stereoscopic head-tracked displays, hand/body tracking and binaural sound. VR is an immersive, multi-sensory experience” (Gigante, 1993); and “Virtual reality refers to immersive, interactive, multi-sensory, viewer-centered, 3D computer generated environments and the combination of technologies required building environments” (Cruz-Neira, 1993).

As we can see, these definitions, although different, highlight three common features of VR systems: immersion, the perception of being present in an environment, and interaction with that environment (Biocca, 1997; Lombard and Ditton, 1997; Loomis et al., 1999; Heeter, 2000; Biocca et al., 2001; Bailenson et al., 2006; Skalski and Tamborini, 2007; Andersen and Thorpe, 2009; Slater, 2009; Sundar et al., 2010). Specifically, immersion concerns the number of senses stimulated, the possibilities for interaction, and how closely the simulated stimuli resemble reality; it depends on the properties of the technological system used to isolate the user from reality (Slater, 2009).

The degree of immersion depends on which of three types of VR system is provided to the user:

• Non-immersive systems are the simplest and cheapest type of VR applications that use desktops to reproduce images of the world.

• Immersive systems provide a complete simulated experience thanks to several sensory output devices, such as head-mounted displays (HMDs) that enhance the stereoscopic view of the environment through the movement of the user’s head, as well as audio and haptic devices.

• Semi-immersive systems, such as Fish Tank VR, fall between the two types above. They provide a stereo image of a three-dimensional (3D) scene viewed on a monitor using a perspective projection coupled to the head position of the observer (Ware et al., 1993).

More immersive systems have been shown to produce an experience closer to reality, giving users the illusion of technological non-mediation and the feeling of “being in,” or being present in, the virtual environment (Lombard and Ditton, 1997). Furthermore, more immersive systems can add several sensory outputs so that interactions and actions are perceived as real (Loomis et al., 1999; Heeter, 2000; Biocca et al., 2001).
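The three system types above can be sketched as a small lookup table; the rank values and device examples are illustrative assumptions for the sketch, not a standard scale from the literature.

```python
# Toy taxonomy of VR system types by rough immersion rank (assumed values).
IMMERSION = {
    "non-immersive": {"rank": 1, "devices": ["desktop monitor"]},
    "semi-immersive": {"rank": 2, "devices": ["Fish Tank VR"]},
    "immersive": {"rank": 3, "devices": ["HMD", "CAVE"]},
}

def more_immersive(a: str, b: str) -> str:
    """Return whichever of two system types offers the higher immersion rank."""
    return a if IMMERSION[a]["rank"] >= IMMERSION[b]["rank"] else b

print(more_immersive("semi-immersive", "immersive"))
```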

Finally, the user’s VR experience can be characterized by measuring levels of presence, realism, and reality. Presence is a complex psychological feeling of “being there” in VR that involves the sensation and perception of physical presence, as well as the possibility to interact and react as if the user were in the real world (Heeter, 1992). Similarly, the level of realism corresponds to the user’s expectations about the stimuli and the experience (Baños et al., 2000, 2009). If the presented stimuli are similar to reality, the user’s expectations in VR will be congruent with their expectations of reality, enhancing the VR experience. Likewise, the more realistic the interaction with the virtual stimuli, the higher the level of realism of the user’s behavior (Baños et al., 2000, 2009).

From Virtual to Augmented Reality

Looking chronologically at VR and AR developments, the first 3D immersive simulator dates to 1962, when Morton Heilig created Sensorama, a simulated experience of a motorcycle ride through Brooklyn featuring several sensory impressions (audio, olfactory, and haptic stimuli, including wind) to provide a realistic experience (Heilig, 1962). In the same years, Ivan Sutherland developed The Ultimate Display which, beyond sound, smell, and haptic feedback, included interactive graphics that Sensorama did not provide. Furthermore, Philco developed the first HMD, which, together with Sutherland’s The Sword of Damocles, could update virtual images by tracking the user’s head position and orientation (Sutherland, 1965). In the 1970s, the University of North Carolina built GROPE, the first force-feedback system, and Myron Krueger created VIDEOPLACE, an “artificial reality” in which users’ body silhouettes were captured by cameras and projected on a screen (Krueger et al., 1985), allowing two or more users to interact in the 2D virtual space. In 1982, the US Air Force created the first flight simulator, the Visually Coupled Airborne Systems Simulator (VCASS), in which the pilot could control the pathway and targets through an HMD. The 1980s were generally the years in which the first commercial devices began to emerge: in 1985, the VPL company commercialized the DataGlove, a glove equipped with sensors able to measure finger flexion, orientation, and position, and to identify hand gestures. Another example is the EyePhone, created in 1988 by VPL, an HMD system for completely immersing the user in a virtual world. At the end of the 1980s, Fake Space Labs created the Binocular Omni-Orientation Monitor (BOOM), a complex system combining a stereoscopic display, which provided a broad, movable virtual environment, with a mechanical tracking arm.

Furthermore, the BOOM offered a more stable image and responded more quickly to movement than HMD devices. Thanks to the BOOM and the DataGlove, the NASA Ames Research Center developed the Virtual Wind Tunnel to study and manipulate airflow around a virtual airplane or spacecraft. In 1992, the Electronic Visualization Laboratory of the University of Illinois created the CAVE Automatic Virtual Environment, an immersive VR system composed of projectors directed at three or more walls of a room.

More recently, many video game companies have improved the development and quality of VR devices, such as the Oculus Rift and HTC Vive, which provide a wider field of view and lower latency. In addition, current HMDs can be combined with other tracking systems, such as eye tracking (FOVE) and motion and orientation sensors (e.g., Razer Hydra, Oculus Touch, or HTC Vive controllers).

Meanwhile, at the beginning of the 1990s, the Boeing Corporation created the first prototype AR system, used to show employees how to set up a wiring tool (Carmigniani et al., 2011). Around the same time, Rosenberg and Feiner developed an AR fixture for maintenance assistance, showing that operator performance was enhanced by adding virtual information to the fixture being repaired (Rosenberg, 1993). In 1993, Loomis and colleagues produced an AR GPS-based system to assist the blind in navigation by adding spatial audio information (Loomis et al., 1998). Also in 1993, Julie Martin developed “Dancing in Cyberspace,” an AR theater in which actors interacted with virtual objects in real time (Cathy, 2011). A few years later, Feiner et al. (1997) developed the first Mobile AR System (MARS), able to overlay virtual information about tourist buildings (Feiner et al., 1997). Since then, several applications have been developed: in 2000, Thomas et al. created ARQuake, a mobile AR video game; in 2008, Wikitude was created, which, through the mobile camera, the internet, and GPS, could add information about the user’s environment (Perry, 2008). In 2009, other AR applications, such as AR Toolkit and SiteLens, were developed to add virtual information to the user’s physical surroundings. In 2011, Total Immersion developed D’Fusion, an AR system for designing projects (Maurugeon, 2011). Finally, Google released Google Glass in 2013 and Microsoft released HoloLens in 2015, and their usability has begun to be tested in several fields of application.

Virtual Reality Technologies

Technologically, the devices used in virtual environments play an important role in creating successful virtual experiences. According to the literature, input and output devices can be distinguished (Burdea et al., 1996; Burdea and Coiffet, 2003). Input devices are those that allow the user to communicate with the virtual environment; they range from a simple joystick or keyboard to a glove that captures finger movements or a tracker that captures postures. In more detail, the keyboard, mouse, trackball, and joystick are easy-to-use desktop input devices that let the user issue continuous and discrete commands or movements to the environment. Other input devices include tracking devices such as bend-sensing gloves that capture hand movements, postures, and gestures; pinch gloves that detect finger movements; and trackers that follow the user’s movements in the physical world and translate them into the virtual environment.

By contrast, output devices allow the user to see, hear, smell, or touch everything that happens in the virtual environment. As mentioned above, visual devices span a wide range of possibilities, from the simplest or least immersive (a computer monitor) to the most immersive, such as VR glasses, helmets, HMDs, or CAVE systems.

Furthermore, auditory (speaker) and haptic output devices can stimulate the body’s senses, providing a more realistic virtual experience. For example, haptic devices can convey touch sensations and forces to the user.

Virtual Reality Applications

Since its appearance, VR has been used in many fields, such as gaming (Zyda, 2005; Meldrum et al., 2012), military training (Alexander et al., 2017), architectural design (Song et al., 2017), education (Englund et al., 2017), learning and social skills training (Schmidt et al., 2017), and simulations of surgical procedures (Gallagher et al., 2005); assistance for the elderly and psychological treatment are other fields in which VR is growing strongly (Freeman et al., 2017; Neri et al., 2017). A recent and extensive review by Slater and Sanchez-Vives (2016) reported the main evidence on VR applications, including weaknesses and advantages, across several research areas, such as science, education, training, physical training, social phenomena, and moral behavior, and noted that VR could be used in other fields such as travel, meetings, collaboration, industry, news, and entertainment. Furthermore, another review published this year by Freeman et al. (2017) focused on VR in mental health, showing the efficacy of VR in assessing and treating psychological disorders such as anxiety, schizophrenia, depression, and eating disorders.

VR can serve as a stimulus in many ways, replacing real stimuli and recreating, with high realism, experiences that would be impossible in the real world. This is why VR is widely used in research on new ways of applying psychological treatment or training, for example, for problems arising from phobias (agoraphobia, fear of flying, etc.) ( Botella et al., 2017 ). It is also used simply to improve traditional systems of motor rehabilitation ( Llorens et al., 2014 ; Borrego et al., 2016 ), developing games that make the tasks more engaging. In more detail, within psychological treatment, Virtual Reality Exposure Therapy (VRET) has shown its efficacy, allowing patients to gradually face feared stimuli or stressful situations in a safe environment where their psychological and physiological reactions can be controlled by the therapist ( Botella et al., 2017 ).

Augmented Reality Concept

Milgram and Kishino (1994) conceptualized the Virtual-Reality Continuum, which comprises four systems: the real environment, augmented reality (AR), augmented virtuality, and the virtual environment. AR can be defined as a newer technological system in which virtual objects are added to the real world in real-time during the user's experience. Per Azuma et al. (2001) , an AR system should: (1) combine real and virtual objects in a real environment; (2) run interactively and in real-time; and (3) register real and virtual objects with each other. Furthermore, even if AR experiences may seem different from VR ones, the quality of an AR experience can be assessed similarly. Indeed, as in VR, the feeling of presence, the level of realism, and the degree of reality are the main features that can be considered indicators of the quality of AR experiences. The more realistic the experience is perceived to be, and the greater the congruence between the user's expectations and the interaction inside the AR environment, the stronger the perception of "being there," both physically and at the cognitive and emotional level. The feeling of presence, in both AR and VR environments, is important for eliciting behaviors like the real ones ( Botella et al., 2005 ; Juan et al., 2005 ; Bretón-López et al., 2010 ; Wrzesien et al., 2013 ).

Augmented Reality Technologies

Technologically, AR systems, however varied, share three common components: a geospatial datum for the virtual object, such as a visual marker; a surface on which to project virtual elements to the user; and adequate processing power for graphics, animation, and merging of images, such as a PC and a monitor ( Carmigniani et al., 2011 ). To run, an AR system must also include a camera able to track the user's movement for merging the virtual objects, and a visual display, such as glasses through which the user can see the virtual objects overlaid on the physical world. To date, two display systems exist: video see-through (VST) and optical see-through (OST) AR systems ( Botella et al., 2005 ; Juan et al., 2005 , 2007 ). The first shows virtual objects to the user by capturing the real objects/scenes with a camera, overlaying the virtual objects, and projecting the result on a video display or monitor, while the second merges the virtual objects onto a transparent surface, such as glasses, through which the user sees the added elements. The main difference between the two systems is latency: an OST system may require more time to display the virtual objects than a VST system, generating a time lag between the user's actions and the system's detection and rendering of them.

Augmented Reality Applications

Although AR is a more recent technology than VR, it has been investigated and used in several research areas such as architecture ( Lin and Hsu, 2017 ), maintenance ( Schwald and De Laval, 2003 ), entertainment ( Ozbek et al., 2004 ), education ( Nincarean et al., 2013 ; Bacca et al., 2014 ; Akçayır and Akçayır, 2017 ), medicine ( De Buck et al., 2005 ), and psychological treatments ( Juan et al., 2005 ; Botella et al., 2005 , 2010 ; Bretón-López et al., 2010 ; Wrzesien et al., 2011a , b , 2013 ; see the review by Chicchi Giglioli et al., 2015 ). In more detail, in education, several AR applications have been developed in recent years, showing the positive effects of this technology in supporting learning, such as increased content understanding and memory retention, as well as improved learning motivation ( Radu, 2012 , 2014 ). For example, Ibáñez et al. (2014) developed an AR application for learning electromagnetism concepts, in which students could use AR batteries, magnets, and cables on real surfaces, and the system gave students real-time feedback about the correctness of their performance, thereby improving academic success and motivation ( Di Serio et al., 2013 ). More deeply, AR systems offer the possibility of learning by visualizing and acting on complex phenomena that students traditionally study only theoretically, without the possibility to see and test them in the real world ( Chien et al., 2010 ; Chen et al., 2011 ).

In psychological health as well, research on AR is increasing, showing its efficacy above all in the treatment of psychological disorders (see the reviews by Baus and Bouchard, 2014 ; Chicchi Giglioli et al., 2015 ). For example, in the treatment of anxiety disorders, such as phobias, AR exposure therapy (ARET) showed its efficacy in one-session treatment, maintaining its positive impact at a follow-up 1 or 3 months later. Like VRET, ARET provides a safe and ecological environment where any kind of stimulus is possible, allowing the therapist to keep control over the situation experienced by the patients, gradually generating situations of fear or stress. Indeed, in situations of fear, such as phobias of small animals, AR applications allow the therapist, in accordance with the patient's anxiety, to gradually expose the patient to the feared animals, adding new animals during the session, enlarging them, or increasing their speed. The various studies showed that AR activates the patient's anxiety at the beginning of the session, which then decreases after 1 h of exposure. After the session, patients were not only able to better manage their fear of the animals and their anxiety, but were also able to approach, interact with, and kill real feared animals.

Materials and Methods

Data collection.

The input data for the analyses were retrieved from the scientific database Web of Science Core Collection ( Falagas et al., 2008 ); the search terms used were "Virtual Reality" and "Augmented Reality," covering papers published during the whole timespan of the database.

The Web of Science Core Collection is composed of the following indexes.

Citation Indexes:
- Science Citation Index Expanded (SCI-EXPANDED), 1970–present
- Social Sciences Citation Index (SSCI), 1970–present
- Arts and Humanities Citation Index (A&HCI), 1975–present
- Conference Proceedings Citation Index – Science (CPCI-S), 1990–present
- Conference Proceedings Citation Index – Social Science & Humanities (CPCI-SSH), 1990–present
- Book Citation Index – Science (BKCI-S), 2009–present
- Book Citation Index – Social Sciences & Humanities (BKCI-SSH), 2009–present
- Emerging Sources Citation Index (ESCI), 2015–present

Chemical Indexes:
- Current Chemical Reactions (CCR-EXPANDED), 2009–present (includes Institut National de la Propriete Industrielle structure data back to 1840)
- Index Chemicus (IC), 2009–present

The resultant dataset contained a total of 21,667 records for VR and 9,944 records for AR. Each bibliographic record contained various fields, such as author, title, abstract, and all of the references (needed for the citation analysis). The research tool used to visualize the networks was CiteSpace v.4.0.R5 SE (32-bit) ( Chen, 2006 ) running under Java Runtime v.8 update 91 (build 1.8.0_91-b15). Statistical analyses were conducted using Stata MP-Parallel Edition, Release 14.0, StataCorp LP. Additional information can be found in Supplementary Data Sheet 1 .

The betweenness centrality of a node in a network measures the extent to which the node is part of paths that connect an arbitrary pair of nodes in the network ( Freeman, 1977 ; Brandes, 2001 ; Chen, 2006 ).
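The computation behind this index follows the cited Brandes (2001) accumulation scheme. The sketch below is a minimal Python implementation for unweighted, undirected graphs; the toy country network at the end is invented for illustration and is not the study's data.

```python
from collections import deque

def betweenness(graph):
    """Brandes (2001): shortest-path betweenness for an unweighted,
    undirected graph given as {node: [neighbors]}."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # BFS from s, recording shortest-path counts (sigma) and predecessors.
        pred = {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in graph}
        for w in reversed(order):
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}  # undirected: each pair counted twice

# Toy co-authorship network (invented for illustration).
graph = {
    "USA": ["England", "Germany", "China"],
    "England": ["USA", "Germany"],
    "Germany": ["USA", "England", "Italy"],
    "China": ["USA"],
    "Italy": ["Germany", "Australia"],
    "Australia": ["Italy"],
}
bc = betweenness(graph)
# "Germany" bridges the Italy/Australia branch and the USA cluster,
# so it receives the highest score in this toy network.
```

A node with high betweenness thus acts as a "broker" between otherwise distant parts of the network, which is why the figures below scale node size by this index.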

Structural metrics include betweenness centrality, modularity, and silhouette. Temporal and hybrid metrics include citation burstness and novelty. All the algorithms are detailed in Chen et al. (2010) .

The analysis of the literature on VR shows a complex panorama. At first sight, according to the document-type statistics from the Web of Science (WoS), proceedings papers were used extensively as outcomes of research, comprising almost 48% of the total (10,392 proceedings papers), with a similar number of articles on the subject, about 47% of the total (10,199 articles). However, if we consider only the last 5 years (7,755 records, about 36% of the total), the situation changes, with about 57% articles (4,445) and about 33% proceedings papers (2,578). Thus, it is clear that the VR field has evolved in areas other than the purely technological.

Regarding subject categories, nodes and edges are computed as co-occurring subject categories from the Web of Science "Category" field across all the articles.
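Concretely, such a co-occurrence network can be built by pairing the categories listed on each record. A minimal Python sketch follows; the category lists are invented stand-ins, not records from the dataset:

```python
from collections import Counter
from itertools import combinations

# Hypothetical WoS "Category" fields for three records (illustrative only).
records = [
    ["Computer Science", "Engineering"],
    ["Computer Science", "Neurosciences"],
    ["Computer Science", "Engineering", "Rehabilitation"],
]

# An edge joins every pair of categories that co-occur on a record;
# the edge weight counts the records in which the pair appears together.
edges = Counter()
for cats in records:
    for pair in combinations(sorted(set(cats)), 2):
        edges[pair] += 1
```

The resulting weighted edge list is exactly the kind of network that tools like CiteSpace render, with heavier edges drawn between categories that frequently appear on the same papers.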

According to the subject category statistics from the WoS, computer science is the leading category, followed by engineering, and, together, they account for 15,341 articles, which make up about 71% of the total production. However, if we consider just the last 5 years, these categories reach only about 55%, with a total of 4,284 articles (Table 1 and Figure 1 ).


TABLE 1. Category statistics from the WoS for the entire period and the last 5 years.


FIGURE 1. Category from the WoS: network for the last 5 years.

This evidence is very interesting, since it highlights that VR is thriving as a new technology, with huge interest in its hardware and software components. However, compared with the past, we are witnessing an increasing number of applications, especially in the medical area. In particular, note the inclusion of rehabilitation and clinical neurology in the top-10 list of categories (about 10% of the total production in the last 5 years). It is also interesting that neuroscience and neurology, considered together, have increased from about 12% to about 18.6% over the last 5 years. In contrast, historic areas such as automation and control systems, imaging science and photographic technology, and robotics, which had accounted for about 14.5% of all articles ever produced, were not even in the top 10 for the last 5 years, each accounting for less than 4%.

Regarding countries, nodes and edges are computed as networks of co-author countries. Multiple occurrences of a country in the same paper are counted once.

The countries most involved in VR research account for about 47% of the total output (10,200 articles altogether). Of these 10,200 articles, the United States, China, England, and Germany published 4,921, 2,384, 1,497, and 1,398, respectively. The situation remains the same if we look only at the articles published over the last 5 years. However, VR contributions also came from all over the globe, with Japan, Canada, Italy, France, Spain, South Korea, and the Netherlands taking positions of prominence, as shown in Figure 2 .


FIGURE 2. Country network (node dimension represents centrality).

Network analysis was conducted to calculate and to represent the centrality index ( Freeman, 1977 ; Brandes, 2001 ), i.e., the dimension of the node in Figure 2 . The top-ranked country, with a centrality index of 0.26, was the United States (2011), and England was second, with a centrality index of 0.25. The third, fourth, and fifth countries were Germany, Italy, and Australia, with centrality indices of 0.15, 0.15, and 0.14, respectively.

Regarding institutions, nodes and edges are computed as networks of co-author institutions (Figure 3 ).


FIGURE 3. Network of institutions: the dimensions of the nodes represent centrality.

The top-level institutions in VR were in the United States, where three universities were ranked as the top three in the world for published articles; these were the University of Illinois (159), the University of Southern California (147), and the University of Washington (146). The United States also had the eighth-ranked university, Iowa State University (116). The second country in the ranking was Canada, with the University of Toronto ranked fifth with 125 articles and McGill University ranked 10th with 103 articles.

Other countries in the top-10 list were the Netherlands, with the Delft University of Technology ranked fourth with 129 articles; Italy, with IRCCS Istituto Auxologico Italiano ranked sixth with 125 published articles (the same number of publications as the fifth-ranked institution); England, ranked seventh with 125 articles from the University of London's Imperial College of Science, Technology, and Medicine; and China, with the Chinese Academy of Sciences ranked ninth with 104 publications. Italy's Istituto Auxologico Italiano was the only non-university institution in the top-10 list for VR research (Figure 3 ).

Regarding journals, nodes and edges are computed as journal co-citation networks among the journals in the corresponding field.

The top-ranked journals by citations in VR are Presence: Teleoperators & Virtual Environments, with 2,689 citations, and CyberPsychology & Behavior (Cyberpsychol BEHAV), with 1,884 citations. Looking at the last 5 years, however, the former increased its citations, but the latter had a far more significant increase, from about 70% to about 90%, i.e., from 1,029 to 1,147 citations.

After the top two journals, IEEE Computer Graphics and Applications ( IEEE Comput Graph) and Advanced Health Telematics and Telemedicine ( St HEAL T) were both left out of the top-10 list based on the last 5 years. The data for the last 5 years also resulted in the inclusion of three journals in the top-10 list: Experimental Brain Research ( Exp BRAIN RES) (625 citations), Archives of Physical Medicine and Rehabilitation ( Arch PHYS MED REHAB) (622 citations), and PLoS ONE (619 citations), which highlights the categories of rehabilitation, clinical neurology, and neuroscience and neurology. The journal co-citation analysis is reported in Figure 4 , which clearly shows four distinct clusters.


FIGURE 4. Co-citation network of journals: the dimensions of the nodes represent centrality. Full list of official abbreviations of WoS journals can be found here: https://images.webofknowledge.com/images/help/WOS/A_abrvjt.html .

Network analysis was conducted to calculate and to represent the centrality index, i.e., the dimensions of the nodes in Figure 4 . The top-ranked item by centrality was Cyberpsychol BEHAV, with a centrality index of 0.29. The second-ranked item was Arch PHYS MED REHAB, with a centrality index of 0.23. The third was Behaviour Research and Therapy (Behav RES THER), with a centrality index of 0.15. The fourth was BRAIN, with a centrality index of 0.14. The fifth was Exp BRAIN RES, with a centrality index of 0.11.

Who’s Who in VR Research

Authors are the heart and brain of research; their role in a field is to define the past, present, and future of their disciplines and to make the significant breakthroughs from which new ideas arise (Figure 5 ).


FIGURE 5. Network of authors’ numbers of publications: the dimensions of the nodes represent the centrality index, and the dimensions of the characters represent the author’s rank.

Virtual reality research is very young and changing with time, but the top-10 authors in this field have made fundamental contributions as pioneers of VR, taking it beyond a mere technological development. The purpose of the following highlights is not to rank researchers; rather, it is to identify the most active researchers in order to understand where the field is going and how they plan for it to get there.

The top-ranked author is Riva G, with 180 publications. The second-ranked author is Rizzo A, with 101 publications. The third is Darzi A, with 97 publications. The fourth is Aggarwal R, with 94 publications. The six authors following these four are Slater M, Alcaniz M, Botella C, Wiederhold BK, Kim SI, and Gutierrez-Maldonado J, with 90, 90, 85, 75, 59, and 54 publications, respectively (Figure 6 ).


FIGURE 6. Authors’ co-citation network: the dimensions of the nodes represent centrality index, and the dimensions of the characters represent the author’s rank. The 10 authors that appear on the top-10 list are considered to be the pioneers of VR research.

Considering the last 5 years, the situation remains similar, with three new entries in the top-10 list, i.e., Muhlberger A, Cipresso P, and Ahmed K ranked 7th, 8th, and 10th, respectively.

The authors' publication-count network shows the most active authors in VR research. Another relevant analysis for our focus on VR research is to identify the most-cited authors in the field.

For this purpose, the authors' co-citation analysis highlights the authors in terms of their impact on the literature over the entire time span of the field ( White and Griffith, 1981 ; González-Teruel et al., 2015 ; Bu et al., 2016 ). The idea is to focus on the dynamic nature of the community of authors who contribute to the research.

Normally, authors with higher numbers of citations tend to be the scholars who drive the fundamental research and who make the most meaningful impacts on the evolution and development of the field. In the following, we identified the most-cited pioneers in the field of VR Research.

The top-ranked author by citation count is Gallagher (2001), with 694 citations. Second is Seymour (2004), with 668 citations. Third is Slater (1999), with 649 citations. Fourth is Grantcharov (2003), with 563 citations. Fifth is Riva (1999), with 546 citations. Sixth is Aggarwal (2006), with 505 citations. Seventh is Satava (1994), with 477 citations. Eighth is Witmer (2002), with 454 citations. Ninth is Rothbaum (1996), with 448 citations. Tenth is Cruz-Neira (1995), with 416 citations.

Citation Network and Cluster Analyses for VR

Another analysis that can be used is the analysis of document co-citation, which allows us to focus on the highly-cited documents that generally are also the most influential in the domain ( Small, 1973 ; González-Teruel et al., 2015 ; Orosz et al., 2016 ).
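The counting behind document co-citation is straightforward to sketch: two documents are co-cited whenever they appear together in one paper's reference list. A minimal Python illustration follows; the document labels are invented stand-ins, not the actual records:

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Document co-citation (Small, 1973): count how often each pair of
    documents appears together in a citing paper's reference list."""
    pairs = Counter()
    for refs in reference_lists:
        for pair in combinations(sorted(set(refs)), 2):
            pairs[pair] += 1
    return pairs

# Hypothetical reference lists of three citing papers (labels illustrative).
citing = [
    ["Seymour2002", "Grantcharov2004", "Gallagher2005"],
    ["Seymour2002", "Grantcharov2004"],
    ["Holden2005", "Seymour2002"],
]
pairs = cocitation_counts(citing)
strongest = max(pairs, key=pairs.get)   # the most frequently co-cited pair
```

Documents that are repeatedly co-cited end up strongly linked in the network, which is what lets the cluster analysis below group them into coherent research themes.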

The top-ranked article by citation count is Seymour (2002) in Cluster #0, with 317 citations. The second is Grantcharov (2004) in Cluster #0, with 286 citations. The third is Holden (2005) in Cluster #2, with 179 citations. The fourth is Gallagher et al. (2005) in Cluster #0, with 171 citations. The fifth is Ahlberg (2007) in Cluster #0, with 142 citations. The sixth is Parsons (2008) in Cluster #4, with 136 citations. The seventh is Powers (2008) in Cluster #4, with 134 citations. The eighth is Aggarwal (2007) in Cluster #0, with 121 citations. The ninth is Reznick (2006) in Cluster #0, with 121 citations. The tenth is Munz (2004) in Cluster #0, with 117 citations.

The network of document co-citations is visually complex (Figure 7 ) because it includes thousands of articles and the links among them. However, this analysis is very important because it can be used to identify the possible conglomerates of knowledge in the area, which is essential for a deep understanding of the area. Thus, for this purpose, a cluster analysis was conducted ( Chen et al., 2010 ; González-Teruel et al., 2015 ; Klavans and Boyack, 2015 ). Figure 8 shows the clusters, which are identified with the two algorithms in Table 2 .


FIGURE 7. Network of document co-citations: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the numbers represent the strengths of the links. It is possible to identify four historical phases (colors: blue, green, yellow, and red) from past VR research to the current research.


FIGURE 8. Document co-citation network by cluster: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the red labels report the name of each cluster with a short description produced with the mutual-information algorithm; the clusters are identified by colored polygons.


TABLE 2. Cluster ID and silhouettes as identified with two algorithms ( Chen et al., 2010 ).

The identified clusters highlight distinct parts of the literature of VR research, making the interdisciplinary nature of this field clear and visible. However, the dynamics that identify the past, present, and future of VR research are not yet clear from the clusters alone. We therefore analysed the relationships between these clusters and the temporal dimension of each article. The results are synthesized in Figure 9 . It is clear that cluster #0 (laparoscopic skill), cluster #2 (gaming and rehabilitation), cluster #4 (therapy), and cluster #14 (surgery) are the most popular areas of VR research. (See Figure 9 and Table 2 to identify the clusters.) From Figure 9 , it is also possible to identify the first phases of laparoscopic skill (cluster #6) and therapy (cluster #7). More generally, it is possible to identify four historical phases (colors: blue, green, yellow, and red) from past VR research to the current research.


FIGURE 9. Network of document co-citations: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the red labels on the right-hand side report the number of each cluster, as in Table 2 , with a short description extracted accordingly.

Using the burst-citation algorithm, we identified the top 486 references with the strongest citation bursts. A citation burst is an indicator of a highly active area of research: it is the detection of a burst event, which can last for multiple years or a single year, and it provides evidence that a particular publication is associated with a surge of citations. Burst detection was based on Kleinberg's algorithm ( Kleinberg, 2002 , 2003 ). The top-ranked document by bursts is Seymour (2002) in Cluster #0, with a burst strength of 88.93. The second is Grantcharov (2004) in Cluster #0, with 51.40. The third is Saposnik (2010) in Cluster #2, with 40.84. The fourth is Rothbaum (1995) in Cluster #7, with 38.94. The fifth is Holden (2005) in Cluster #2, with 37.52. The sixth is Scott (2000) in Cluster #0, with 33.39. The seventh is Saposnik (2011) in Cluster #2, with 33.33. The eighth is Burdea et al. (1996) in Cluster #3, with 32.42. The ninth is Burdea and Coiffet (2003) in Cluster #22, with 31.30. The 10th is Taffinder (1998) in Cluster #6, with 30.96 (Table 3 ).


TABLE 3. Cluster ID and references of burst article.
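Kleinberg's burst detection models citation activity as an automaton that switches between a base-rate state and an elevated "burst" state, paying a cost for each switch. The sketch below is a simplified two-state variant with Poisson emissions, found by Viterbi search; it is a hedged illustration in the spirit of Kleinberg (2002), not the exact batched formulation used by CiteSpace, and the toy counts are invented.

```python
import math

def burst_states(counts, s=2.0, gamma=1.0):
    """Label each yearly count as base (0) or burst (1) via Viterbi over a
    two-state automaton: state 0 emits at the overall mean rate, state 1 at
    s times that rate, and switching 0 -> 1 costs gamma."""
    base = sum(counts) / len(counts)            # assumes a nonzero mean
    rates = [base, s * base]

    def nll(rate, k):                           # negative log Poisson likelihood
        return rate - k * math.log(rate)        # constant log(k!) term dropped

    INF = float("inf")
    cost, back = [0.0, INF], []                 # the automaton starts in state 0
    for k in counts:
        new, ptr = [INF, INF], [0, 0]
        for j in (0, 1):                        # next state
            for i in (0, 1):                    # previous state
                trans = gamma if (i == 0 and j == 1) else 0.0
                c = cost[i] + trans + nll(rates[j], k)
                if c < new[j]:
                    new[j], ptr[j] = c, i
        cost, back = new, back + [ptr]
    # Backtrack the cheapest state sequence.
    j = 0 if cost[0] <= cost[1] else 1
    states = [j]
    for ptr in reversed(back[1:]):
        j = ptr[j]
        states.append(j)
    return list(reversed(states))
```

A larger s moves the burst rate further from the base rate, and a larger gamma makes the labeling more conservative, mirroring the roles these parameters play in Kleinberg's formulation.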

Citation Network and Cluster Analyses for AR

Looking at the Augmented Reality scenario, the top-ranked item by citation count is Azuma (1997) in Cluster #0, with 231 citations. The second is Azuma et al. (2001) in Cluster #0, with 220 citations. The third is Van Krevelen (2010) in Cluster #5, with 207 citations. The fourth is Lowe (2004) in Cluster #1, with 157 citations. The fifth is Wu (2013) in Cluster #4, with 144 citations. The sixth is Dunleavy (2009) in Cluster #4, with 122 citations. The seventh is Zhou (2008) in Cluster #5, with 118 citations. The eighth is Bay (2008) in Cluster #1, with 117 citations. The ninth is Newcombe (2011) in Cluster #1, with 109 citations. The tenth is Carmigniani et al. (2011) in Cluster #5, with 104 citations.

The network of document co-citations is visually complex (Figure 10 ) because it includes thousands of articles and the links among them. However, this analysis is very important because it can be used to identify the possible conglomerates of knowledge in the area, which is essential for a deep understanding of the area. Thus, for this purpose, a cluster analysis was conducted ( Chen et al., 2010 ; González-Teruel et al., 2015 ; Klavans and Boyack, 2015 ). Figure 11 shows the clusters, which are identified with the two algorithms in Table 3 .


FIGURE 10. Network of document co-citations: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the numbers represent the strengths of the links. It is possible to identify four historical phases (colors: blue, green, yellow, and red) from past AR research to the current research.


FIGURE 11. Document co-citation network by cluster: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the red labels report the name of each cluster with a short description produced with the mutual-information algorithm; the clusters are identified by colored polygons.

The identified clusters highlight distinct parts of the literature of AR research, making the interdisciplinary nature of this field clear and visible. However, the dynamics that identify the past, present, and future of AR research are not yet clear from the clusters alone. We therefore analysed the relationships between these clusters and the temporal dimension of each article. The results are synthesized in Figure 12 . It is clear that cluster #1 (tracking), cluster #4 (education), and cluster #5 (virtual city environment) are the current areas of AR research. (See Figure 12 and Table 3 to identify the clusters.) It is possible to identify four historical phases (colors: blue, green, yellow, and red) from past AR research to the current research.


FIGURE 12. Network of document co-citations: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the red labels on the right-hand side report the number of each cluster, as in Table 2 , with a short description extracted accordingly.

Using the burst-citation algorithm, we identified the top 394 references with the strongest citation bursts. As above, a citation burst indicates a highly active area of research and provides evidence that a particular publication is associated with a surge of citations; the burst detection was based on Kleinberg's algorithm ( Kleinberg, 2002 , 2003 ). The top-ranked document by bursts is Azuma (1997) in Cluster #0, with a burst strength of 101.64. The second is Azuma et al. (2001) in Cluster #0, with 84.23. The third is Lowe (2004) in Cluster #1, with 64.07. The fourth is Van Krevelen (2010) in Cluster #5, with 50.99. The fifth is Wu (2013) in Cluster #4, with 47.23. The sixth is Hartley (2000) in Cluster #0, with 37.71. The seventh is Dunleavy (2009) in Cluster #4, with 33.22. The eighth is Kato (1999) in Cluster #0, with 32.16. The ninth is Newcombe (2011) in Cluster #1, with 29.72. The 10th is Feiner (1993) in Cluster #8, with 29.46 (Table 4 ).


TABLE 4. Cluster ID and silhouettes as identified with two algorithms ( Chen et al., 2010 ).

Our findings have profound implications for two reasons. First, the present work highlighted the evolution and development of VR and AR research and provided a clear perspective based on solid data and computational analyses. Second, our findings on VR made it profoundly clear that the clinical dimension is one of the most investigated ever and seems to be increasing in both quantitative and qualitative terms, while the field also includes technological development and articles in computer science, engineering, and allied sciences.

Figure 9 clarifies the past, present, and future of VR research. The outset of VR research brought a clearly identifiable development in interfaces for children and medicine, routine use and behavioral assessment, special effects, systems perspectives, and tutorials. This pioneering era evolved into what we can identify as the development era, because it was the period in which VR was used in experiments associated with new technological impulses. Not surprisingly, this was exactly concomitant with the new-economy era, in which significant investments were made in information technology; it was also the era of the so-called 'dot-com bubble' of the late 1990s. The confluence of pioneering techniques into ergonomic studies within this development era was used to develop the first effective clinical systems for surgery, telemedicine, and human spatial navigation, and the first phase of the development of therapy and laparoscopic skills. With the new millennium, VR research switched strongly toward what we can call the clinical-VR era, with its strong emphasis on rehabilitation, neurosurgery, and a new phase of therapy and laparoscopic skills. The number of applications and articles published in the last 5 years is in line with the new technological developments that we are experiencing at the hardware level, for example, with so many new HMDs, and at the software level, with an increasing number of independent programmers and VR communities.

Finally, Figure 12 identifies clusters of the literature of AR research, making the interdisciplinary nature of this field clear and visible. The dynamics that identify the past, present, and future of AR research are not yet clear, but analyzing the relationships between these clusters and the temporal dimension of each article shows that tracking, education, and virtual city environments are the current areas of AR research. AR is a new technology that is showing its efficacy in different research fields, providing a novel way to gather behavioral data and to support learning, training, and clinical treatments.

Looking at the scientific literature of the last few years, it might appear that most developments in VR and AR studies have focused on clinical aspects. However, the reality is more complex, so this perception should be clarified. Although researchers publish studies on the use of VR in clinical settings, each study depends on the technologies available. Industrial development in VR and AR has changed a lot in the last 10 years. In the past, development involved mainly hardware solutions, while nowadays the main efforts pertain to the software side of developing virtual solutions. Hardware has become a commodity that is often available at low cost. Software, on the other hand, needs to be customized each time, for each experiment, and this requires huge development efforts. Researchers in AR and VR today need to be able to adapt software in their labs.

Virtual reality and AR developments in this new clinical era rely on computer science, and vice versa. The future of VR and AR is becoming more technological than before, and each day new solutions and products come to market. From both software and hardware perspectives, the future of AR and VR depends on huge innovations in all fields. The gap between the past and the future of AR and VR research concerns the "realism" that was the key aspect in the past versus the "interaction" that is the key aspect now. The first 30 years of VR and AR consisted of continuous research on better resolution and improved perception. Now that researchers have achieved great resolution, they need to focus on making VR as realistic as possible, which is not simple: a real experience implies realistic interaction, not just great resolution. Interactions can be improved in countless ways through new developments at the hardware and software levels.

Interaction in AR and VR is going to be "embodied," with implications for neuroscientists who are designing new solutions to be implemented into current systems ( Blanke et al., 2015 ; Riva, 2018 ; Riva et al., 2018 ). For example, using the hands with a contactless device (i.e., without gloves) makes interaction in virtual environments more natural. The Leap Motion device 1 allows users to work with their hands in VR without gloves or markers. This simple, low-cost device lets VR users interact with virtual objects and the surrounding environment in a naturalistic way. When the technology becomes transparent, users experience an increased sense of being in the virtual environment (the so-called sense of presence).
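As a concrete illustration of contactless hand interaction, the sketch below detects a "pinch" grab gesture from tracked fingertip coordinates. It is a minimal, device-agnostic Python example: the coordinates, units, and threshold are illustrative assumptions, not the Leap Motion SDK's actual API.

```python
import math

def is_pinch(thumb_tip, index_tip, threshold_mm=25.0):
    """Return True when the thumb and index fingertips are close
    enough together to count as a 'pinch' grab gesture."""
    return math.dist(thumb_tip, index_tip) < threshold_mm

# Fingertip positions (x, y, z) in millimetres, of the kind a hand
# tracker such as the Leap Motion reports each frame (values made up).
thumb = (10.0, 200.0, 30.0)
index = (18.0, 210.0, 35.0)
print(is_pinch(thumb, index))  # True: the fingertips are ~14 mm apart
```

In a real application this check would run every frame on live tracking data, toggling a grab state on the virtual object nearest the pinch point.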

Other forms of interaction are possible and are being developed continuously. For example, tactile and haptic devices can provide continuous feedback to users, intensifying the experience by adding components such as the feeling of touch and the physical weight of virtual objects through force feedback. Another low-cost technology that facilitates interaction is the motion tracking system, such as the Microsoft Kinect. Such systems track users' bodies, allowing them to interact with virtual environments through body movements and gestures. Most HMDs use an embedded system to track the HMD's position and rotation, as well as controllers that are generally placed in the user's hands. This tracking allows a great degree of interaction and improves the overall virtual experience.
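One common way such force feedback is computed is penalty-based haptic rendering: the device pushes back with a force proportional to how far the user's proxy point has penetrated a virtual surface (a simple spring law, F = k·x). The sketch below is a generic illustration; the stiffness value is an assumption, not taken from any particular device.

```python
def contact_force(penetration_m, stiffness_n_per_m=800.0):
    """Penalty-based haptic rendering: return a restoring force (N)
    proportional to how far the user's proxy point has sunk into a
    virtual surface (F = k * x). Zero when there is no contact."""
    return stiffness_n_per_m * max(0.0, penetration_m)

print(contact_force(0.005))   # 4.0 -> a 5 mm penetration yields 4 N
print(contact_force(-0.010))  # 0.0 -> no contact, no force
```

Real haptic loops run this kind of computation at around 1 kHz and often add damping terms, but the spring term above is the core of the "physical weight and touch" effect described in the text.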

A final emerging approach is the use of digital technologies to simulate not only the external world but also the internal bodily signals ( Azevedo et al., 2017 ; Riva et al., 2017 ): interoception, proprioception and vestibular input. For example, Riva et al. (2017) recently introduced the concept of “sonoception” ( www.sonoception.com ), a novel non-invasive technological paradigm based on wearable acoustic and vibrotactile transducers able to alter internal bodily signals. This approach allowed the development of an interoceptive stimulator that is both able to assess interoceptive time perception in clinical patients ( Di Lernia et al., 2018b ) and to enhance heart rate variability (the short-term vagally mediated component—rMSSD) through the modulation of the subjects’ parasympathetic system ( Di Lernia et al., 2018a ).
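The rMSSD index mentioned above has a precise definition: the root mean square of the successive differences between adjacent RR (interbeat) intervals. A minimal computation, with made-up example intervals, might look like this:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences between adjacent
    RR (interbeat) intervals, a standard index of short-term,
    vagally mediated heart rate variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# RR intervals in milliseconds (illustrative values, not real data)
rr = [800, 810, 790, 805, 795]
print(round(rmssd(rr), 2))  # 14.36
```

Higher rMSSD values indicate greater beat-to-beat variability, which is why enhancing it via parasympathetic modulation, as in the Di Lernia et al. (2018a) stimulator, is a meaningful physiological target.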

In this scenario, it is clear that the future of VR and AR research does not lie solely in clinical applications, although the implications for patients are huge. The continuous development of VR and AR technologies is the result of research in computer science, engineering, and allied sciences. There are three reasons why a "clinical era" emerged from our analyses.

First, all clinical research on VR and AR also includes technological developments, and new technological discoveries are published in clinical or technological journals with clinical samples as the main subject. As noted in our research, the main journals that publish numerous articles on technological developments tested with both healthy participants and patients include Presence: Teleoperators & Virtual Environments, Cyberpsychology & Behavior (Cyberpsychol Behav), and IEEE Computer Graphics and Applications (IEEE Comput Graph). Clearly, researchers in psychology, neuroscience, medicine, and the behavioral sciences in general have been investigating whether the technological developments of VR and AR are effective for users, indicating that clinical behavioral research has incorporated large parts of computer science and engineering.

The second aspect to consider is industrial development. Once a new technology is envisioned and created, it goes through a patent application; once the patent is filed, the technology may be released to the market and only eventually submitted for journal publication. Moreover, much VR and AR research that proposes the development of a technology moves directly from presenting a prototype to receiving a patent and introducing the product to the market, without ever publishing the findings in a scientific paper. Hence, if a new technology is developed for the industrial or consumer market rather than for clinical purposes, the research behind it may never appear in a scientific paper.
Although our manuscript considered published research, we have to acknowledge that several studies are never published at all. The third reason why our analyses highlighted a "clinical era" is that we considered the articles on VR and AR indexed in the Web of Knowledge database, our source of references; in this article, "research" refers to the records in that database. This is, of course, a limitation of our study, since several other databases are of great value to the scientific community, such as the IEEE Xplore Digital Library, the ACM Digital Library, and many others. Generally, however, the most important articles from journals in those databases are also included in the Web of Knowledge database; hence, we are convinced that our study considered the top-level publications in computer science and engineering. Accordingly, we believe this limitation is mitigated by the large number of articles referenced in our research.

Considering all these aspects, it is clear that the clinical applications, behavioral aspects, and technological developments of VR and AR research form a situation far more complex than that of the old platforms in use before the wide diffusion of HMDs and related solutions. We think this work can provide a clearer vision for stakeholders, offering evidence of the current research frontiers and the challenges expected in the future, and highlighting the connections and implications of this research across fields such as clinical practice, behavioral science, industry, entertainment, and education.

Author Contributions

PC and GR conceived the idea. PC performed the data extraction and the computational analyses and wrote the first draft of the article. IG revised the introduction, adding important information to the article. PC, IG, MR, and GR provided important input on the article's rationale, revised the article, and approved its final version.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The reviewer GC declared a shared affiliation, with no collaboration, with the authors PC and GR to the handling Editor at the time of the review.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2018.02086/full#supplementary-material

  • ^ https://www.leapmotion.com/

Akçayır, M., and Akçayır, G. (2017). Advantages and challenges associated with augmented reality for education: a systematic review of the literature. Educ. Res. Rev. 20, 1–11. doi: 10.1016/j.edurev.2016.11.002

Alexander, T., Westhoven, M., and Conradi, J. (2017). “Virtual environments for competency-oriented education and training,” in Advances in Human Factors, Business Management, Training and Education , (Berlin: Springer International Publishing), 23–29. doi: 10.1007/978-3-319-42070-7_3

Andersen, S. M., and Thorpe, J. S. (2009). An IF–THEN theory of personality: significant others and the relational self. J. Res. Pers. 43, 163–170. doi: 10.1016/j.jrp.2008.12.040

Azevedo, R. T., Bennett, N., Bilicki, A., Hooper, J., Markopoulou, F., and Tsakiris, M. (2017). The calming effect of a new wearable device during the anticipation of public speech. Sci. Rep. 7:2285. doi: 10.1038/s41598-017-02274-2

Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., and MacIntyre, B. (2001). Recent advances in augmented reality. IEEE Comp. Graph. Appl. 21, 34–47. doi: 10.1109/38.963459

Bacca, J., Baldiris, S., Fabregat, R., and Graf, S. (2014). Augmented reality trends in education: a systematic review of research and applications. J. Educ. Technol. Soc. 17, 133.

Bailenson, J. N., Yee, N., Merget, D., and Schroeder, R. (2006). The effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. Presence 15, 359–372. doi: 10.1162/pres.15.4.359

Baños, R. M., Botella, C., Garcia-Palacios, A., Villa, H., Perpiñá, C., and Alcaniz, M. (2000). Presence and reality judgment in virtual environments: a unitary construct? Cyberpsychol. Behav. 3, 327–335. doi: 10.1089/10949310050078760

Baños, R., Botella, C., García-Palacios, A., Villa, H., Perpiñá, C., and Gallardo, M. (2009). Psychological variables and reality judgment in virtual environments: the roles of absorption and dissociation. Cyberpsychol. Behav. 2, 143–148. doi: 10.1089/cpb.1999.2.143

Baus, O., and Bouchard, S. (2014). Moving from virtual reality exposure-based therapy to augmented reality exposure-based therapy: a review. Front. Hum. Neurosci. 8:112. doi: 10.3389/fnhum.2014.00112

Biocca, F. (1997). The cyborg’s dilemma: progressive embodiment in virtual environments. J. Comput. Mediat. Commun. 3. doi: 10.1111/j.1083-6101.1997

Biocca, F., Harms, C., and Gregg, J. (2001). “The networked minds measure of social presence: pilot test of the factor structure and concurrent validity,” in 4th Annual International Workshop on Presence , Philadelphia, PA, 1–9.

Blanke, O., Slater, M., and Serino, A. (2015). Behavioral, neural, and computational principles of bodily self-consciousness. Neuron 88, 145–166. doi: 10.1016/j.neuron.2015.09.029

Bohil, C. J., Alicea, B., and Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nat. Rev. Neurosci. 12:752. doi: 10.1038/nrn3122

Borrego, A., Latorre, J., Llorens, R., Alcañiz, M., and Noé, E. (2016). Feasibility of a walking virtual reality system for rehabilitation: objective and subjective parameters. J. Neuroeng. Rehabil. 13:68. doi: 10.1186/s12984-016-0174-1

Botella, C., Bretón-López, J., Quero, S., Baños, R. M., and García-Palacios, A. (2010). Treating cockroach phobia with augmented reality. Behav. Ther. 41, 401–413. doi: 10.1016/j.beth.2009.07.002

Botella, C., Fernández-Álvarez, J., Guillén, V., García-Palacios, A., and Baños, R. (2017). Recent progress in virtual reality exposure therapy for phobias: a systematic review. Curr. Psychiatry Rep. 19:42. doi: 10.1007/s11920-017-0788-4

Botella, C. M., Juan, M. C., Baños, R. M., Alcañiz, M., Guillén, V., and Rey, B. (2005). Mixing realities? An application of augmented reality for the treatment of cockroach phobia. Cyberpsychol. Behav. 8, 162–171. doi: 10.1089/cpb.2005.8.162

Brandes, U. (2001). A faster algorithm for betweenness centrality. J. Math. Sociol. 25, 163–177. doi: 10.1080/0022250X.2001.9990249

Bretón-López, J., Quero, S., Botella, C., García-Palacios, A., Baños, R. M., and Alcañiz, M. (2010). An augmented reality system validation for the treatment of cockroach phobia. Cyberpsychol. Behav. Soc. Netw. 13, 705–710. doi: 10.1089/cyber.2009.0170

Brown, A., and Green, T. (2016). Virtual reality: low-cost tools and resources for the classroom. TechTrends 60, 517–519. doi: 10.1007/s11528-016-0102-z

Bu, Y., Liu, T. Y., and Huang, W. B. (2016). MACA: a modified author co-citation analysis method combined with general descriptive metadata of citations. Scientometrics 108, 143–166. doi: 10.1007/s11192-016-1959-5

Burdea, G., Richard, P., and Coiffet, P. (1996). Multimodal virtual reality: input-output devices, system integration, and human factors. Int. J. Hum. Compu. Interact. 8, 5–24. doi: 10.1080/10447319609526138

Burdea, G. C., and Coiffet, P. (2003). Virtual Reality Technology , Vol. 1, Hoboken, NJ: John Wiley & Sons.

Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., and Ivkovic, M. (2011). Augmented reality technologies, systems and applications. Multimed. Tools Appl. 51, 341–377. doi: 10.1007/s11042-010-0660-6

Castelvecchi, D. (2016). Low-cost headsets boost virtual reality’s lab appeal. Nature 533, 153–154. doi: 10.1038/533153a

Cathy (2011). The History of Augmented Reality. The Optical Vision Site. Available at: http://www.theopticalvisionsite.com/history-of-eyewear/the-history-of-augmented-reality/#.UelAUmeAOyA

Chen, C. (2006). CiteSpace II: detecting and visualizing emerging trends and transient patterns in scientific literature. J. Assoc. Inform. Sci. Technol. 57, 359–377. doi: 10.1002/asi.20317

Chen, C., Ibekwe-SanJuan, F., and Hou, J. (2010). The structure and dynamics of cocitation clusters: a multiple-perspective cocitation analysis. J. Assoc. Inform. Sci. Technol. 61, 1386–1409. doi: 10.1002/asi.21309

Chen, Y. C., Chi, H. L., Hung, W. H., and Kang, S. C. (2011). Use of tangible and augmented reality models in engineering graphics courses. J. Prof. Issues Eng. Educ. Pract. 137, 267–276. doi: 10.1061/(ASCE)EI.1943-5541.0000078

Chicchi Giglioli, I. A., Pallavicini, F., Pedroli, E., Serino, S., and Riva, G. (2015). Augmented reality: a brand new challenge for the assessment and treatment of psychological disorders. Comput. Math. Methods Med. 2015:862942. doi: 10.1155/2015/862942

Chien, C. H., Chen, C. H., and Jeng, T. S. (2010). “An interactive augmented reality system for learning anatomy structure,” in Proceedings of the International Multiconference of Engineers and Computer Scientists , Vol. 1, (Hong Kong: International Association of Engineers), 17–19.

Choi, S., Jung, K., and Noh, S. D. (2015). Virtual reality applications in manufacturing industries: past research, present findings, and future directions. Concurr. Eng. 23, 40–63. doi: 10.1177/1063293X14568814

Cipresso, P. (2015). Modeling behavior dynamics using computational psychometrics within virtual worlds. Front. Psychol. 6:1725. doi: 10.3389/fpsyg.2015.01725

Cipresso, P., and Serino, S. (2014). Virtual Reality: Technologies, Medical Applications and Challenges. Hauppauge, NY: Nova Science Publishers, Inc.

Cipresso, P., Serino, S., and Riva, G. (2016). Psychometric assessment and behavioral experiments using a free virtual reality platform and computational science. BMC Med. Inform. Decis. Mak. 16:37. doi: 10.1186/s12911-016-0276-5

Cruz-Neira, C. (1993). “Virtual reality overview,” in SIGGRAPH 93 Course Notes 21st International Conference on Computer Graphics and Interactive Techniques, Orange County Convention Center , Orlando, FL.

De Buck, S., Maes, F., Ector, J., Bogaert, J., Dymarkowski, S., Heidbuchel, H., et al. (2005). An augmented reality system for patient-specific guidance of cardiac catheter ablation procedures. IEEE Trans. Med. Imaging 24, 1512–1524. doi: 10.1109/TMI.2005.857661

Di Lernia, D., Cipresso, P., Pedroli, E., and Riva, G. (2018a). Toward an embodied medicine: a portable device with programmable interoceptive stimulation for heart rate variability enhancement. Sensors (Basel) 18:2469. doi: 10.3390/s18082469

Di Lernia, D., Serino, S., Pezzulo, G., Pedroli, E., Cipresso, P., and Riva, G. (2018b). Feel the time. Time perception as a function of interoceptive processing. Front. Hum. Neurosci. 12:74. doi: 10.3389/fnhum.2018.00074

Di Serio, Á., Ibáñez, M. B., and Kloos, C. D. (2013). Impact of an augmented reality system on students’ motivation for a visual art course. Comput. Educ. 68, 586–596. doi: 10.1016/j.compedu.2012.03.002

Ebert, C. (2015). Looking into the future. IEEE Softw. 32, 92–97. doi: 10.1109/MS.2015.142

Englund, C., Olofsson, A. D., and Price, L. (2017). Teaching with technology in higher education: understanding conceptual change and development in practice. High. Educ. Res. Dev. 36, 73–87. doi: 10.1080/07294360.2016.1171300

Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., and Pappas, G. (2008). Comparison of pubmed, scopus, web of science, and Google scholar: strengths and weaknesses. FASEB J. 22, 338–342. doi: 10.1096/fj.07-9492LSF

Feiner, S., MacIntyre, B., Hollerer, T., and Webster, A. (1997). “A touring machine: prototyping 3D mobile augmented reality systems for exploring the urban environment,” in Digest of Papers. First International Symposium on Wearable Computers , (Cambridge, MA: IEEE), 74–81. doi: 10.1109/ISWC.1997.629922

Freeman, D., Reeve, S., Robinson, A., Ehlers, A., Clark, D., Spanlang, B., et al. (2017). Virtual reality in the assessment, understanding, and treatment of mental health disorders. Psychol. Med. 47, 2393–2400. doi: 10.1017/S003329171700040X

Freeman, L. C. (1977). A set of measures of centrality based on betweenness. Sociometry 40, 35–41. doi: 10.2307/3033543

Fuchs, H., and Bishop, G. (1992). Research Directions in Virtual Environments. Chapel Hill, NC: University of North Carolina at Chapel Hill.

Gallagher, A. G., Ritter, E. M., Champion, H., Higgins, G., Fried, M. P., Moses, G., et al. (2005). Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann. Surg. 241:364. doi: 10.1097/01.sla.0000151982.85062.80

Gigante, M. A. (1993). Virtual reality: definitions, history and applications. Virtual Real. Syst. 3–14. doi: 10.1016/B978-0-12-227748-1.50009-3

González-Teruel, A., González-Alcaide, G., Barrios, M., and Abad-García, M. F. (2015). Mapping recent information behavior research: an analysis of co-authorship and co-citation networks. Scientometrics 103, 687–705. doi: 10.1007/s11192-015-1548-z

Heeter, C. (1992). Being there: the subjective experience of presence. Presence 1, 262–271. doi: 10.1162/pres.1992.1.2.262

Heeter, C. (2000). Interactivity in the context of designed experiences. J. Interact. Adv. 1, 3–14. doi: 10.1080/15252019.2000.10722040

Heilig, M. (1962). Sensorama simulator. U.S. Patent No. 3,050,870. Virginia: United States Patent and Trademark Office.

Ibáñez, M. B., Di Serio, Á., Villarán, D., and Kloos, C. D. (2014). Experimenting with electromagnetism using augmented reality: impact on flow student experience and educational effectiveness. Comput. Educ. 71, 1–13. doi: 10.1016/j.compedu.2013.09.004

Juan, M. C., Alcañiz, M., Calatrava, J., Zaragozá, I., Baños, R., and Botella, C. (2007). “An optical see-through augmented reality system for the treatment of phobia to small animals,” in Virtual Reality, HCII 2007 Lecture Notes in Computer Science , Vol. 4563, ed. R. Schumaker (Berlin: Springer), 651–659.

Juan, M. C., Alcaniz, M., Monserrat, C., Botella, C., Baños, R. M., and Guerrero, B. (2005). Using augmented reality to treat phobias. IEEE Comput. Graph. Appl. 25, 31–37. doi: 10.1109/MCG.2005.143

Kim, G. J. (2005). A SWOT analysis of the field of virtual reality rehabilitation and therapy. Presence 14, 119–146. doi: 10.1162/1054746053967094

Klavans, R., and Boyack, K. W. (2015). Which type of citation analysis generates the most accurate taxonomy of scientific and technical knowledge? J. Assoc. Inform. Sci. Technol. 68, 984–998. doi: 10.1002/asi.23734

Kleinberg, J. (2002). “Bursty and hierarchical structure in streams,” in Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , Edmonton, AB. doi: 10.1145/775047.775061

Kleinberg, J. (2003). Bursty and hierarchical structure in streams. Data Min. Knowl. Discov. 7, 373–397. doi: 10.1023/A:1024940629314

Korolov, M. (2014). The real risks of virtual reality. Risk Manag. 61, 20–24.

Krueger, M. W., Gionfriddo, T., and Hinrichsen, K. (1985). “Videoplace—an artificial reality,” in Proceedings of the ACM SIGCHI Bulletin , Vol. 16, New York, NY: ACM, 35–40. doi: 10.1145/317456.317463

Lin, C. H., and Hsu, P. H. (2017). “Integrating procedural modelling process and immersive VR environment for architectural design education,” in MATEC Web of Conferences , Vol. 104, Les Ulis: EDP Sciences. doi: 10.1051/matecconf/201710403007

Llorens, R., Noé, E., Ferri, J., and Alcañiz, M. (2014). Virtual reality-based telerehabilitation program for balance recovery. A pilot study in hemiparetic individuals with acquired brain injury. Brain Inj. 28:169.

Lombard, M., and Ditton, T. (1997). At the heart of it all: the concept of presence. J. Comput. Mediat. Commun. 3. doi: 10.1111/j.1083-6101.1997.tb00072.x

Loomis, J. M., Blascovich, J. J., and Beall, A. C. (1999). Immersive virtual environment technology as a basic research tool in psychology. Behav. Res. Methods Instr. Comput. 31, 557–564. doi: 10.3758/BF03200735

Loomis, J. M., Golledge, R. G., and Klatzky, R. L. (1998). Navigation system for the blind: auditory display modes and guidance. Presence 7, 193–203. doi: 10.1162/105474698565677

Luckerson, V. (2014). Facebook Buying Oculus Virtual-Reality Company for $2 Billion. Available at: http://time.com/37842/facebook-oculus-rift

Maurugeon, G. (2011). New D’Fusion Supports iPhone4S and 3xDSMax 2012. Available at: http://www.t-immersion.com/blog/2011-12-07/augmented-reality-dfusion-iphone-3dsmax

Mazuryk, T., and Gervautz, M. (1996). Virtual Reality-History, Applications, Technology and Future. Vienna: Institute of Computer Graphics Vienna University of Technology.

Meldrum, D., Glennon, A., Herdman, S., Murray, D., and McConn-Walsh, R. (2012). Virtual reality rehabilitation of balance: assessment of the usability of the nintendo Wii ® fit plus. Disabil. Rehabil. 7, 205–210. doi: 10.3109/17483107.2011.616922

Milgram, P., and Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Trans. Inform. Syst. 77, 1321–1329.

Minderer, M., Harvey, C. D., Donato, F., and Moser, E. I. (2016). Neuroscience: virtual reality explored. Nature 533, 324–325. doi: 10.1038/nature17899

Neri, S. G., Cardoso, J. R., Cruz, L., Lima, R. M., de Oliveira, R. J., Iversen, M. D., et al. (2017). Do virtual reality games improve mobility skills and balance measurements in community-dwelling older adults? Systematic review and meta-analysis. Clin. Rehabil. 31, 1292–1304. doi: 10.1177/0269215517694677

Nincarean, D., Alia, M. B., Halim, N. D. A., and Rahman, M. H. A. (2013). Mobile augmented reality: the potential for education. Procedia Soc. Behav. Sci. 103, 657–664. doi: 10.1016/j.sbspro.2013.10.385

Orosz, K., Farkas, I. J., and Pollner, P. (2016). Quantifying the changing role of past publications. Scientometrics 108, 829–853. doi: 10.1007/s11192-016-1971-9

Ozbek, C. S., Giesler, B., and Dillmann, R. (2004). “Jedi training: playful evaluation of head-mounted augmented reality display systems,” in Proceedings of SPIE. The International Society for Optical Engineering , Vol. 5291, eds R. A. Norwood, M. Eich, and M. G. Kuzyk (Denver, CO), 454–463.

Perry, S. (2008). Wikitude: Android App with Augmented Reality: Mind Blow-Ing. Digital Lifestyles.

Radu, I. (2012). “Why should my students use AR? A comparative review of the educational impacts of augmented-reality,” in Mixed and Augmented Reality (ISMAR), 2012 IEEE International Symposium on , (IEEE), 313–314. doi: 10.1109/ISMAR.2012.6402590

Radu, I. (2014). Augmented reality in education: a meta-review and cross-media analysis. Pers. Ubiquitous Comput. 18, 1533–1543. doi: 10.1007/s00779-013-0747-y

Riva, G. (2018). The neuroscience of body memory: From the self through the space to the others. Cortex 104, 241–260. doi: 10.1016/j.cortex.2017.07.013

Riva, G., Gaggioli, A., Grassi, A., Raspelli, S., Cipresso, P., Pallavicini, F., et al. (2011). NeuroVR 2-A free virtual reality platform for the assessment and treatment in behavioral health care. Stud. Health Technol. Inform. 163, 493–495.

Riva, G., Serino, S., Di Lernia, D., Pavone, E. F., and Dakanalis, A. (2017). Embodied medicine: mens sana in corpore virtuale sano. Front. Hum. Neurosci. 11:120. doi: 10.3389/fnhum.2017.00120

Riva, G., Wiederhold, B. K., and Mantovani, F. (2018). Neuroscience of virtual reality: from virtual exposure to embodied medicine. Cyberpsychol. Behav. Soc. Netw. doi: 10.1089/cyber.2017.29099.gri [Epub ahead of print].

Rosenberg, L. (1993). “The use of virtual fixtures to enhance telemanipulation with time delay,” in Proceedings of the ASME Winter Anual Meeting on Advances in Robotics, Mechatronics, and Haptic Interfaces , Vol. 49, (New Orleans, LA), 29–36.

Schmidt, M., Beck, D., Glaser, N., and Schmidt, C. (2017). “A prototype immersive, multi-user 3D virtual learning environment for individuals with autism to learn social and life skills: a virtuoso DBR update,” in International Conference on Immersive Learning , Cham: Springer, 185–188. doi: 10.1007/978-3-319-60633-0_15

Schwald, B., and De Laval, B. (2003). An augmented reality system for training and assistance to maintenance in the industrial context. J. WSCG 11.

Serino, S., Cipresso, P., Morganti, F., and Riva, G. (2014). The role of egocentric and allocentric abilities in Alzheimer’s disease: a systematic review. Ageing Res. Rev. 16, 32–44. doi: 10.1016/j.arr.2014.04.004

Skalski, P., and Tamborini, R. (2007). The role of social presence in interactive agent-based persuasion. Media Psychol. 10, 385–413. doi: 10.1080/15213260701533102

Slater, M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364, 3549–3557. doi: 10.1098/rstb.2009.0138

Slater, M., and Sanchez-Vives, M. V. (2016). Enhancing our lives with immersive virtual reality. Front. Robot. AI 3:74. doi: 10.3389/frobt.2016.00074

Small, H. (1973). Co-citation in the scientific literature: a new measure of the relationship between two documents. J. Assoc. Inform. Sci. Technol. 24, 265–269. doi: 10.1002/asi.4630240406

Song, H., Chen, F., Peng, Q., Zhang, J., and Gu, P. (2017). Improvement of user experience using virtual reality in open-architecture product design. Proc. Inst. Mech. Eng. B J. Eng. Manufact. 232.

Sundar, S. S., Xu, Q., and Bellur, S. (2010). “Designing interactivity in media interfaces: a communications perspective,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems , (Boston, MA: ACM), 2247–2256. doi: 10.1145/1753326.1753666

Sutherland, I. E. (1965). The Ultimate Display. Multimedia: From Wagner to Virtual Reality. New York, NY: Norton.

Sutherland, I. E. (1968). “A head-mounted three dimensional display,” in Proceedings of the December 9-11, 1968, Fall Joint Computer Conference, Part I , (ACM), 757–764. doi: 10.1145/1476589.1476686

Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M., et al. (2000). “ARQuake: an outdoor/indoor augmented reality first person application,” in Digest of Papers. Fourth International Symposium on Wearable Computers , (Atlanta, GA: IEEE), 139–146. doi: 10.1109/ISWC.2000.888480

Ware, C., Arthur, K., and Booth, K. S. (1993). “Fish tank virtual reality,” in Proceedings of the INTERACT’93 and CHI’93 Conference on Human Factors in Computing Systems , (Amsterdam: ACM), 37–42. doi: 10.1145/169059.169066

Wexelblat, A. (ed.) (2014). Virtual Reality: Applications and Explorations. Cambridge, MA: Academic Press.

White, H. D., and Griffith, B. C. (1981). Author cocitation: a literature measure of intellectual structure. J. Assoc. Inform. Sci. Technol. 32, 163–171. doi: 10.1002/asi.4630320302

Wrzesien, M., Alcañiz, M., Botella, C., Burkhardt, J. M., Bretón-López, J., Ortega, M., et al. (2013). The therapeutic lamp: treating small-animal phobias. IEEE Comput. Graph. Appl. 33, 80–86. doi: 10.1109/MCG.2013.12

Wrzesien, M., Burkhardt, J. M., Alcañiz, M., and Botella, C. (2011a). How technology influences the therapeutic process: a comparative field evaluation of augmented reality and in vivo exposure therapy for phobia of small animals. Hum. Comput. Interact. 2011, 523–540.

Wrzesien, M., Burkhardt, J. M., Alcañiz Raya, M., and Botella, C. (2011b). “Mixing psychology and HCI in evaluation of augmented reality mental health technology,” in CHI’11 Extended Abstracts on Human Factors in Computing Systems , (Vancouver, BC: ACM), 2119–2124.

Zyda, M. (2005). From visual simulation to virtual reality to games. Computer 38, 25–32. doi: 10.1109/MC.2005.297

Keywords : virtual reality, augmented reality, quantitative psychology, measurement, psychometrics, scientometrics, computational psychometrics, mathematical psychology

Citation: Cipresso P, Giglioli IAC, Raya MA and Riva G (2018) The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature. Front. Psychol. 9:2086. doi: 10.3389/fpsyg.2018.02086

Received: 14 December 2017; Accepted: 10 October 2018; Published: 06 November 2018.

Copyright © 2018 Cipresso, Giglioli, Raya and Riva. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Pietro Cipresso, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

AR vs. VR vs. MR vs. XR: What's the Difference?

Augmented, virtual, and mixed reality experiences all fall under the umbrella of ‘extended reality,’ but each differs in very specific ways

Overall Findings

If you've heard of immersive video games, virtual travel, or AR shopping, then you've no doubt run into labels like augmented reality (AR), virtual reality (VR), mixed reality (MR), and extended reality (XR). We've reviewed these terms in depth to learn what they mean and how they differ so you can have some clarity about which one is right for you.

AR: Virtual elements overlaid on the real world. Works through a headset or smartphone. You can view your physical surroundings at the same time. Can be used for free through mobile apps.
VR: A fully virtual experience. Works through a headset. You view only the virtual world. There are free apps, but they require a headset.
MR: Anchored virtual elements that can interact with the real world. Usually works through a headset. You can view your physical surroundings at the same time. Can be used for free through mobile apps.
XR: An umbrella term covering AR, VR, and MR.

Extended reality is a blanket term that refers to a group of technologies: augmented reality, virtual reality, and mixed reality. Therefore, a virtual reality headset and an augmented reality headset, for example, while different, are both considered extended reality technologies.

These perception-changing technologies deal with virtual elements, meaning an onboard computer generates all the objects. In VR, the CGI fully covers your vision, so you're immersed in a totally fake world. AR and MR use computer-generated images, too, but since the point is to also see your surroundings, those elements don't take over your whole vision. Instead, your physical environment is enhanced or changed in some way.

You can use some AR and MR implementations from a standard smartphone without needing headgear, but VR requires a full headset.

The rest of this article doesn't include XR in the comparison tables because it's a term used to describe the other three. It's akin to comparing computer hardware with a mouse, keyboard, and webcam.

Technology: VR Blocks Your Vision, AR/MR Doesn't

  • AR: Superimposes virtual elements on real-world items.
  • VR: Shows only virtual elements.
  • MR: Superimposes virtual elements on real-world items.

Before digging into these XR technologies, we first need to make a clear distinction between how they work. For now, you need to know that there's a single factor separating AR and MR from VR: VR totally blocks your vision.

Virtual reality is built to hide everything but computer-generated images. Augmented reality and mixed reality are built to show you the real world and the virtual world.

It works this way because AR and MR, as you'll read below, are designed to enhance and change what you're already doing and seeing around you, while VR is designed to replace reality with something completely fake.

Availability: AR Can Run Straight From a Phone

  • AR: Can work through a smartphone; often free when used from a phone.
  • VR: Requires a headset; there are free apps, but only after you have the hardware.
  • MR: Usually requires a headset; most useful after purchasing one, though it's not always necessary.

In terms of widespread availability, AR (and sometimes MR) is already in use on smartphones all over the world. By just holding your phone in front of you, you can experience things like live language translations, filters that change how your face looks in real time, and 3D models anchored in space.

Contrast this with virtual reality, which is available only through a headset, and it's clear how easily accessible AR and some MR implementations are with nothing more than a smartphone. Most mixed reality experiences are delivered through a headset, too, but with the line between these terms blurred so much, you could say some form of MR is also possible with just a phone.

Furthermore, there are plenty of free AR/MR apps, so no additional investment is needed to experience those XR types, which can't be said for virtual reality.

Immersion: VR Is the Clear Winner, MR Is Close

  • AR: You see both real and virtual elements; view simulated elements in the real world.
  • VR: Everything you see is virtual; view and interact with fully simulated objects in a fake world.
  • MR: You see both real and virtual elements; view and interact with fully simulated objects in the real world.

An immersive experience is meant to simulate a different reality, ideally making you forget that you're even in it. VR is the only extended reality method of achieving that because you're completely engrossed in the simulated world. If you walk around the room with your real legs, you won't know where you're going because you can't see anything but what the computer is generating (though some VR experiences digitize real obstacles for safety reasons).

However, if you think of immersion as an altered perception, where your environment is simply different from what it usually is, then MR is a close second because there's a level of interaction between the virtual and real elements, something that AR doesn't permit.

Mixed reality objects can be anchored in real space, meaning you can physically walk around them and often interact with them as if they were real. It creates a solid bridge between a completely real and a completely virtual environment.
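
A minimal sketch of the idea behind such spatial anchors (no real AR SDK; the coordinate conventions and function names here are illustrative assumptions): an anchored object stores a fixed world-space position, and each frame that position is re-expressed in the moving headset's coordinate frame, so the object appears to stay put while you walk around it.

```python
import math

def world_to_camera(point, cam_pos, cam_yaw):
    """Express a world-space point (x, y, z) in a camera frame located at
    cam_pos with heading cam_yaw (radians, rotation about the up axis)."""
    dx = point[0] - cam_pos[0]
    dz = point[2] - cam_pos[2]
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    # Rotate the world-space offset into the camera's local axes.
    return (c * dx - s * dz, point[1] - cam_pos[1], s * dx + c * dz)

# Virtual object pinned at a fixed spot in the room (world space).
anchor = (2.0, 0.0, 5.0)

# As the camera moves and turns, the anchor's camera-space coordinates change...
print(world_to_camera(anchor, cam_pos=(0.0, 0.0, 0.0), cam_yaw=0.0))
print(world_to_camera(anchor, cam_pos=(1.0, 0.0, 1.0), cam_yaw=math.pi / 2))
# ...but its world-space position never does. That constancy is what makes the
# object feel "anchored" as you physically walk around it.
```

Real MR systems add tracking and surface detection on top of this, but the core bookkeeping is the same: the object lives in world coordinates, not screen coordinates.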

Applications: VR/MR Excel in Education, VR in Entertainment

  • AR: Guided navigation; training exercises; real-time diagnostics; gaming and shopping.
  • VR: Fully immersive gameplay; training exercises; real-time, avatar-based socializing; virtual movie theaters and other entertainment.
  • MR: 3D asset collaboration; training exercises; semi-immersive gameplay; enhanced marketing.

There are lots of applications for all three XR types, and many of them bleed into the others.

AR includes your real surroundings, so it's useful for presenting critical information in the real world, like overlaying a hospital patient's vitals or X-ray details on the body for precision surgery. MR is similar, though less helpful in a scenario like that; it's more beneficial for 'performing' the surgery on virtual objects, something that might be set up during a trainee surgeon's practice phase.

Arguably more relevant to the masses are entertainment and gaming. AR, VR, and MR create fun experiences in their own unique ways, but the deepest immersion level can be achieved only through virtual reality. With VR, an entire movie theater can be erected just for you, and realistic first-person shooter video games and virtual tourism are best enjoyed with a headset and no distractions from the outside world.

AR and MR can drastically change how we shop by letting us do all sorts of neat reality-bending tasks, like trying on clothes, testing how furniture will fit in a room, and viewing customer ratings on top of in-store products. Even VR can provide a fully simulated shopping mall for you to browse through with just a headset.

Final Verdict: They All Have Their Place

All three of these extended reality types are useful, so the one you choose depends entirely on what you want to accomplish. AR and MR are built for truly mixing real and imaginary elements, with the latter having an edge over the former by leaning deeper into the actual mixing of realities. VR doesn't let you view the real world around you, but that's the whole point; it excels in that you're fully immersed in a digital reality that you can enjoy alone or with friends.

If escapism and rich, life-like experiences like gaming are what you're after, you can't go wrong with VR. While MR is nearly synonymous with AR, its advantage is that it feels more real than augmented reality because you can interact with virtual elements that stay where they are regardless of how you view them. AR, however, is much more ubiquitous, available in some form on nearly all modern smartphones, often for free.

Windows Mixed Reality is a virtual- and augmented-reality framework built into Windows 10 and 11. The main hardware it uses is HoloLens 2, a set of AR glasses. WMR also runs Windows-compatible VR games like Beat Saber.

The most common way you'll see businesses use augmented reality is in mobile shopping apps. For example, you might be able to use AR to see how a piece of furniture or art will look in your home. Businesses may also use AR internally for training (e.g., projecting assembly instructions onto a workspace or identifying different work tools).



Virtual Reality vs. Augmented Reality

Anderson Romanhuk

Augmented reality and virtual reality are among the most commonly confused technologies. Both are earning a lot of media attention and promising tremendous growth. So what is the difference between the two?

Related Papers

ikhun mamur


Parisa Mehran , Mehrasa Alizadeh

  • What Is MAVR?
  • How to Bring MAVR to Your Classroom
  • How We Made Augmented Reality a Reality at Osaka University
  • What Inspired Us to Form the JALT MAVR SIG
  • How to Get Involved with the JALT MAVR SIG

Class assignment

Md. Didarul Islam

This paper briefly addresses the phenomenon of Virtual Reality and Augmented Reality in today's world. Obstacles and challenges of incorporating these two new technological tools are discussed in the context of Bangladesh. The results of this study show that there are many challenges to implementing these two technologies at the individual and institutional levels in countries like Bangladesh.

Progress Solved

Natasha Comeau

Allison Cohen and I explore the emerging trends within virtual and augmented reality, including how these technologies are transforming the entertainment, health, education, and manufacturing sectors and the inherent risks that come along with a world full of new realities.

Handbook of Research on K-12 Online and Blended Learning (2nd ed.)Publisher: ETC PressEditors: Kennedy, K, Ferdig, R.E

Enrico Gandolfi

This chapter provides a wide overview of Augmented Reality (AR) and Immersive Virtual Reality (IVR) in education. Even though their role in K-12 online learning and blended environments is still at an early stage, significant efforts have been made to frame their core affordances and constraints, and potential future developments are outlined. Therefore, in the following pages AR and IVR are introduced along with significant research and highlights from scholars and practitioners. Furthermore, a reflection about current challenges and next steps in terms of policies and integration is provided. Additionally, suggestions to help inform further investigations and inquiries are shared. Despite high costs, inadequate pedagogies, and continuously developing technology, these tools can provide a significant opportunity for immersion and will play a key role in future educational settings; therefore, scholars and practitioners need to be properly involved and trained.

Dr.G. Suseendran

Asian Research Publishing Network (ARPN)

NABEEL ALI , MOHAMMED NASSER AL-MHIQANI

Human-Computer Interaction (HCI) is concerned with how humans work with computers and how technologies can accommodate the needs of users to meet their goals. The early phases of Virtual Reality (VR) often involved head-mounted computers in which users immersed themselves, as they could act, perceive, and interact with a three-dimensional world. This paper has two objectives: to conduct a detailed review of VR trends (previous, current, and forthcoming) and to highlight the applications and obstacles that affect each trend. Reliable survey data was obtained from sources such as ISI, Scopus, Springer, IEEE, and Google Scholar, as well as websites. The main contributions of this work are: (1) analysis and summary of VR trends in the past, present, and future, (2) details of the technical limitations of each trend and explication of their applications, technological requirements, and currently available solutions, (3) illustration of the direction, developments, issues, and challenges for each trend (previous, current, and future), (4) identification of the direction and important trends that require more comprehensive studies by future researchers.

Ivan De Boi

This work presents our vision and work-in-progress on a new platform for immersive virtual and augmented reality (AR) training. ImmersiMed is aimed at medical educational and professional institutions for educating nurses, doctors, and other medical personnel. ImmersiMed is created with multi-platform support and extensibility in mind. By creating consistent experiences across different platforms and applications, ImmersiMed intends to increase simulation availability. Furthermore, it is expected to improve the quality of training, prepare students better for more advanced tasks, and boost confidence in their abilities. Tools for educators are being provided so new scenarios can be added without the intervention of costly content creators or programmers. This article addresses how ImmersiMed's mixed-platform approach can ease the transition from basic school training to real-world applications by starting from a virtual reality simulation and gradually letting the student move on to guided AR in the real world. By explaining the idea of a single development platform for multiple applications using different technologies, and by providing tools for educators to create their own scenarios, ImmersiMed will improve training quality and availability at low training and simulation costs.

jeff brice , Robin Avni

The spring of 2016 may well be remembered as that mystical moment when the concept of augmented reality crossed over into the collective consumer consciousness through the PokéStops and “Gyms” of Pokémon Go, a location-based game developed by Niantic. For a select group of college students, that same period brought an unprecedented opportunity to create their own augmented reality experiences when the Microsoft HoloLens team approached the Design Department of the Cornish College of the Arts, an accredited arts college located in the Pacific Northwest, with a unique and collaborative opportunity. The HoloLens team offered pre-release exposure to their mixed reality HoloLens technology and the creative opportunity for a select group of innovative student talent to generate unique, mixed reality experiences through design, dance, and theatrical performance. This innovative collaboration resulted in the creation of two select groups of creative content: live performances recorded on a 360-degree soundstage and rendered as holographic, volumetric videos, as well as illustrative and animated work created with the use of 2D and 3D renderings produced for viewing through the HoloLens. Jeff Brice, Chair of Design, and Robin Avni, Assistant Professor of User Experience, supervised the mixed reality creative project. Both are full-time Cornish faculty with creative, research, and professional technology experience. The Microsoft team was led by Ben Porter, Director of Business Strategy for HoloLens.

Eric Hawkinson , Parisa Mehran , Mehrasa Alizadeh

Mixed, Augmented, and Virtual Realities (MAVR) is not a new concept or area of study, but it is an area that is beginning to be implemented at a larger scale in many other fields. Environments that employ these tools and concepts are being applied to medicine, engineering, and education. There are many working in this area connected to language education in Japan; the authors and many others are working to form a new JALT Special Interest Group, the MAVR SIG. The following is a primer to the current state of the research into MAVR and a discussion of where the field may be headed. Please contact the authors if you are interested in getting involved in the MAVR SIG.



Augmented Reality and Virtual Reality: Transforming Realities, Empowering Experiences

Augmented Reality (AR) and Virtual Reality (VR) are two innovative technologies that have the power to reshape how we interact with the digital and physical worlds. AR overlays digital information onto the real world, while VR immerses users into entirely virtual environments. Together, they offer unique and powerful experiences with far-reaching implications across various fields.

Augmented Reality enhances our perception of reality by superimposing digital elements onto the physical environment. AR applications have seen widespread adoption in fields like education, gaming, and retail. For example, educational AR apps enable students to explore interactive 3D models, enhancing their understanding of complex subjects. In gaming, AR games like Pokemon GO have taken the world by storm, blending the virtual and real worlds seamlessly.

On the other hand, Virtual Reality transports users to entirely different realms, providing immersive experiences that go beyond what is possible in the physical world. VR technology has made significant strides in fields like entertainment, training, and therapy. VR gaming provides players with unparalleled immersion, giving them a sense of presence in virtual worlds. In training and simulation, VR allows for safe and realistic practice in high-risk environments, such as flight simulations for pilots or medical training for surgeons.

Additionally, VR is proving to be a valuable tool in therapeutic settings. It has been used to treat phobias, post-traumatic stress disorder (PTSD), and anxiety by exposing patients to controlled virtual environments, facilitating gradual desensitization and healing.

While AR and VR have brought about transformative experiences, they are not without challenges. Technical limitations, such as the need for powerful hardware and the potential for motion sickness in VR, have been areas of concern. As the technology continues to evolve, these challenges are gradually being addressed, making AR and VR more accessible and user-friendly.

Moreover, ethical considerations come into play as AR and VR become more prevalent in our lives. Ensuring that these technologies are used responsibly and respectfully, especially concerning user privacy and data security, is of paramount importance.

Looking ahead, AR and VR hold tremendous potential for enhancing human experiences, communication, and understanding. From revolutionizing education to revolutionizing entertainment and beyond, these technologies will continue to shape the way we perceive and interact with the world around us.


Virtual, mixed, and augmented reality: a systematic review for immersive systems research

  • Original Article
  • Published: 03 January 2021
  • Volume 25 , pages 773–799, ( 2021 )


  • Matthew J. Liberatore (ORCID: orcid.org/0000-0002-5741-6723)
  • William P. Wagner


Immersive systems can be used to capture new data, create new experiences, and provide new insights by generating virtual elements of physical and imagined worlds. Immersive systems are seeing increased application across a broad array of fields. However, in many situations it is unknown if an immersive application performs as well or better than the existing application in accomplishing a specific task. The purpose of this study is to conduct a systematic review of the literature that addresses the performance of immersive systems. This review assesses those applications where experiments, tests, or clinical trials have been performed to evaluate the proposed application. This research addresses a broad range of application areas and considers studies that compared one or more immersive systems with a control group or evaluated performance data for the immersive system pre- and post-test. The results identify those applications that have been successfully tested and also delineate areas of future research where more data may be needed to assess the effectiveness of proposed applications.



Availability of data and material

Results of the systematic literature search are available.

Code availability

Code used for the literature search is included as "Appendix."


McLay R, Baird A, Webb-Murphy J, Deal W, Tran L, Anson H, Klam W, Johnston S (2017) A randomized, head-to-head study of virtual reality exposure therapy for posttraumatic stress disorder. Cyberpsychol Behav Soc Netw 20(4):218–224

McMahan A (2003) Immersion, engagement and presence: a method for analyzing 3-D video games. In: Wolf M, Perron B (eds) The video game theory reader, chap 3. Routledge, New York, pp 67–86

Meng F, Zhang W, Yang R (2014) The development of a panorama manifestation virtual reality system for navigation and a usability comparison with a desktop system. Behav Inf Technol 33(2):133–143

Merel T (2017) The reality of VR/AR growth. Tech Crunch. https://techcrunch.com/2017/01/11/the-reality-of-vrar-growth/ . Accessed 1 Aug 2020

Michaliszyn D, Marchand A, Bouchard S, Martel M, Poirier-Bisson J (2010) A randomized, controlled clinical trial of in virtuo and in vivo exposure for spider phobia. Cyberpsychol Behav Soc Netw 13(6):689–695

Milgram P, Kishino F (1994) A taxonomy of mixed reality visual displays. IEICE Trans Inf Syst E77-D(12):1321–1329

Montero-López E, Santos-Ruiz A, García-Ríos M, Rodríguez-Blázquez R, Pérez-García M, Peralta-Ramírez M (2016) A virtual reality approach to the Trier Social Stress Test: contrasting two distinct protocols. Behav Res Methods 48(1):223–232

Motraghi T, Seim R, Meyer E, Morissette S (2014) Virtual reality exposure therapy for the treatment of posttraumatic stress disorder: a methodological review using CONSORT guidelines. J Clin Psychol 70(3):197–208

Muhanna M (2015) Virtual reality and the CAVE: taxonomy, interaction challenges and research directions. J King Saud Univ—Comput Inf Sci 27(3):344–361

Murcia-Lopez M, Steed A (2018) A comparison of virtual and physical training transfer of bimanual assembly tasks. IEEE Trans Vis Comput Gr 24(4):1574–1583

Narayan M, Waugh L, Zhang X, Bafna P, Bowman D (2005) Quantifying the benefits of immersion for collaboration in virtual environments. In: Proceedings of the ACM symposium on virtual reality software and technology, Monterey, California, USA, 7–9 November

Neguţ A, Matu S, Sava F, David D (2016) Task difficulty of virtual reality-based assessment tools compared to classical paper-and-pencil or computerized measures: a meta-analytic approach. Comput Hum Behav 54:414–424

Ng Y-L, Ma F, Ho F, Ip P, Fu K-W (2019) Effectiveness of virtual and augmented reality-enhanced exercise on physical activity, psychological outcomes, and physical performance: a systematic review and meta-analysis of randomized controlled trials. Comput Hum Behav 99:278–291

Nilsson S, Johansson B, Jonsson A (2010) Cross-organizational collaboration supported by augmented reality. IEEE Trans Visual Comput Graphics 17(10):1380–1392

Nilsson N, Nordahl R, Serafin S (2016) Immersion revisited: a review of existing definitions of immersion and their relation to different theories of presence. Hum Technol 12(2):108–134

Okoli C, Schabram K (2010) A guide to conducting a systematic literature review of information systems research. Sprouts: working papers on information systems, vol 10, no. 26. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1954824 . Accessed 1 Aug 2020

Oleksy T, Wnuk A (2016) Augmented places: an impact of embodied historical experience on attitudes towards places. Comput Hum Behav 57:11–16

Paré G, Jaana M, Sicotte C (2007) Systematic review of home telemonitoring for chronic diseases: the evidence base. J Am Med Inform Assoc 14(3):269–277. https://doi.org/10.1197/jamia.M2270

Paré G, Trudel M-C, Jaana M, Kitsiou S (2015) Synthesizing information systems knowledge: a typology of literature reviews. Inf Manag 52(2):183–199

Paré G, Tate M, Johnstone D, Kitsiou S (2016) Contextualizing the twin concepts of systematicity and transparency in information systems literature reviews. Eur J Inf Syst 25(6):493–508

Parsons T, Gaggioli A, Riva G (2017) Virtual reality research in social neuroscience. Brain Sci 7(4):42

Parveau M, Adda M (2018) 3iVClass: a new classification method for virtual, augmented and mixed realities. Procedia Comput Sci 141:263–270. https://doi.org/10.1016/j.procs.2018.10.180

Perret J, Vander Poorten E (2018) Touching virtual reality: a review of haptic gloves. In: ACTUATOR 2018: 16th international conference on new actuators, June 25–27, pp 270–274

Pickering C, Byrne J (2013) The benefits of publishing systematic quantitative literature reviews for PhD candidates and other early-career researchers. High Educ Res Dev 33(3):534–548. https://doi.org/10.1080/07294360.2013.841651

Pickering C, Grignon J, Steven R, Guitart D, Byrne J (2015) Publishing not perishing: how research students transition from novice to knowledgeable using systematic quantitative literature reviews. Stud High Educ 40(10):1756–1769. https://doi.org/10.1080/03075079.2014.914907

Piskorz J, Czub M (2014) Distraction of attention with the use of virtual reality. Influence of the level of game complexity on the level of experienced pain. Pol Psychol Bull 45(4):480–487

Regenbrecht H, Schubert T (2002) Real and illusory interactions enhance presence in virtual environments. Presence: Teleoper Virtual Environ 11(4):425–434

Reger G, Koenen-Woods P, Zetocha K, Smolenski D, Holloway K, Rothbaum B, Gahm G (2016) Randomized controlled trial of prolonged exposure using imaginal exposure vs. virtual reality exposure in active duty soldiers with deployment-related posttraumatic stress disorder (PTSD). J Consul Clin Psychol 84(11):946–959

Rehman U, Cao S (2019) Comparative evaluation of augmented reality-based assistance for procedural tasks: a simulated control room study. Behav Inf Technol. https://doi.org/10.1080/0144929X.2019.1660805

Repetto C, Gaggioli A, Pallavicini F, Cipresso P, Raspelli S, Riva G (2013) Virtual reality and mobile phones in the treatment of generalized anxiety disorders: a phase-2 clinical trial. Pers Ubiquit Comput 17(2):253–260

Riva G, Waterworth JA (2003) Presence and the self: a cognitive neuroscience approach. Presence-Connect, 3(3)

Rodríguez C, Areces D, Garcia T, Cueli M, González Castro P (2018) Comparison between two continuous performance tests for identifying ADHD: traditional vs. virtual reality. Int J Clin Health Psychol 18:254–263

Ronchi E, Mayorga D, Lovreglio R, Wahlqvist J, Nilsson D (2019) Mobile-powered head-mounted displays versus cave automatic virtual environment experiments for evacuation research. Comput Anim Virtual Worlds 30(6):e1873. https://doi.org/10.1002/cav.1873

Rowe F (2014) What literature review is not: diversity, boundaries, and recommendations. Eur J Inf Syst 23(3):241–255

Sacks R, Perlman A, Barak R (2013) Construction safety training using immersive virtual reality. Constr Manag Econ 31(9):1005–1017. https://doi.org/10.1080/01446193.2013.828844

Sadowsky W, Stanney K (2002) Measuring and managing presence in virtual environments. In: Stanney K (ed) Handbook of virtual environments technology. Lawrence Erlbaum Associates, Mahway, pp 791–806

Schoonheim M, Heyden R, Wiecha JM, Henden T (2014) Use of a virtual world computer environment for international distance education: lessons from a pilot project using second life. BMC Med Educ. https://doi.org/10.1186/1472-6920-14-36

Schroeder R (1996) Possible worlds: the social dynamic of virtual reality technologies. Westview Press, Boulder

Schryen G (2015) Writing qualitative IS literature reviews—guidelines for synthesis, interpretation and guidance of research. Commun Assoc Inf Syst 37:286–325

MathSciNet   Google Scholar  

Schryen G, Benlian A, Rowe F, Shirley G, Larsen K, Petter S, Wagner G, Haag S, Yasasin E (2017) Literature reviews in IS research: what can be learnt from the past and other fields? Commun Assoc Inf Syst. https://doi.org/10.17705/1CAIS.04130

Schubert T, Friedmann F, Regenbrecht H (2001) The experience of presence: factor analytic insights. Teleoper Virtual Environ 10(3):266–281

Shu Y, Huang YZ, Chang SH, Chen MY (2019) Do virtual reality head-mounted displays make a difference? A comparison of presence and self-efficacy between head-mounted displays and desktop computer-facilitated virtual environments. Virtual Real 23(4):437–446

Slater M (2003) A note on presence terminology. Presence Connect 3(3):1–5

Slater M, Wilbur S (1997) A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments. Presence: Teleoper Virtual Environ 6(6):603–616

Slater M, Usoh M, Steed A (1994) Depth of presence in virtual environments. Presence: Teleoper Virtual Environ 3(2):130–144

Smink A, Frowijn S, van Reijmersdal E, van Noort G, Neijens P (2019) Try online before you buy: how does shopping with augmented reality affect brand responses and personal data disclosure. Electron Commer Res Appl 35:100854

Solomon B (2014) Facebook buys oculus, virtual reality gaming startup, for $2 billion. https://www.forbes.com/sites/briansolomon/2014/03/25/facebook-buys-oculus-virtual-reality-gaming-startup-for-2-billion/#d8d8b7024984 . Accessed 1 Aug 2020

Suh A, Prophet J (2018) The state of immersive technology research: a literature analysis. Comput Hum Behav 86:77–90

Suso-Ribera C, Fernández-Álvarez J, García-Palacios A, Hoffman HG, Bretón-López J, Banos RM, Botella C (2019) Virtual reality, augmented reality, and in vivo exposure therapy: a preliminary comparison of treatment efficacy in small animal phobia. Cyberpsychol Behav Soc Netw 22(1):31–38

Tang Y, Au K, Lau H, Ho G, Wu G (2020) Evaluating the effectiveness of learning design with mixed reality (MR) in higher education. Virtual Real. https://doi.org/10.1007/s10055-020-00427-9

Teel E, Gay M, Johnson B, Slobounov S (2016) Determining sensitivity/specificity of virtual reality-based neuropsychological tool for detecting residual abnormalities following sport-related concussion. Neuropsychology 30(4):474–483

Templier M, Paré G (2018) Transparency in literature reviews: an assessment of reporting practices across review types and genres in top IS journals. Eur J Inf Syst 27(5):503–550. https://doi.org/10.1080/0960085X.2017.1398880

Thompson C (2017) Stereographs were the original virtual reality. Smithsonian Magazine . https://www.smithsonianmag.com/innovation/sterographs-original-virtual-reality-180964771/ . Accessed 1 Aug 2020

Thompson T, Steffert T, Steed A, Gruzelier J (2011) A randomized controlled trial of the effects of hypnosis with 3-d virtual reality animation on tiredness, mood, and salivary cortisol. Int J Clin Exp Hypn 59(1):122–142

Turk V (2016) Face electrodes let you taste and chew in virtual reality. https://www.newscientist.com/article/2111371-face-electrodes-let-you-taste-and-chew-in-virtual-reality/ . Accessed 1 Aug 2020

UQO Cyberpsychology Lab. Presence Questionnaire. (2002). http://w3.uqo.ca/cyberpsy/wp-content/uploads/2019/04/QEP_vf.pdf . Accessed August 1, 2020

Valtchanov D, Barton KR, Ellard C (2010) Restorative effects of virtual nature settings. Cyberpsychol Behav Soc Netw 13(5):503–512

Van Baren J, IJsselsteijn W (2004) Measuring presence: a guide to current measurement approaches. http://www8.informatik.umu.se/~jwworth/PresenceMeasurement.pdf . Accessed 1 Aug 2020

Van Kerrebroeck H, Brengman M, Willems K (2017) When brands come to life: experimental research on the vividness effect of virtual reality in transformational marketing communications. Virtual Real 21(4):177–191

vom Brocke J, Simons A, Riemer K, Niehaves B, Plattfaut R, Cleven A (2015) Standing on the shoulders of giants: challenges and recommendations of literature search in information systems research. Commun Assoc Inf Syst 37:205–224

Webster J, Watson RT (2002) Analyzing the past to prepare for the future: writing a literature review. MIS Q 26(2):xiii–xxiii

Wechsler TF, Mühlberger A, Kümpers F (2019) Inferiority or even superiority of virtual reality exposure therapy in phobias?—A systematic review and quantitative meta-analysis on randomized controlled trials specifically comparing the efficacy of virtual reality exposure to gold standard in vivo exposure in agoraphobia, specific phobia and social phobia. Front Psychol 10:1758. https://doi.org/10.3389/fpsyg.2019.01758

Westerfield G, Mitrovic A, Billinghurst M (2015) Intelligent augmented reality training for motherboard assembly. Int J Artif Intell Educ 25(1):157–172

Wiederhold M, Crisci M, Patel V, Nonaka M, Wiederhold B (2019) Physiological monitoring during augmented reality exercise confirms advantages to health and well-being. Cyberpsychol Behav Soc Netw 22(2):122–126

Wilkerson W, Avstreih D, Gruppen L, Beier K-P, Woolliscroft J (2008) Using immersive simulation for training first responders for mass casualty incidents. Acad Emerg Med 15(11):1152–1159. https://doi.org/10.1111/j.1553-2712.2008.00223.x

Wissmath B, Weibel D, Mast F (2010) Measuring presence with verbal versus pictorial scales: a comparison between online- and ex post- ratings. Virtual Real 14(1):43–53

Witmer B, Singer M (1998) Measuring presence in virtual environments: a presence questionnaire. Presence: Teleoper Virtual Environ 7(3):225–240

Witmer B, Jerome C, Singer M (2005) The factor structure of the presence questionnaire. Presence 14(3):298–312

Yang S, Xiong G (2019) Try it on! Contingency effects of virtual fitting rooms. J Manag Inf Syst 36(3):789–822

Yang Z, Shi J, Jiang W, Sui Y, Wu Y, Ma S, Li H (2019) Influences of augmented reality assistance on performance and cognitive loads in different stages of assembly task. Front Psychol 10:1703. https://doi.org/10.3389/fpsyg.2019.01703

Yoo SC, Drumwright M (2018) Nonprofit fundraising with virtual reality. Nonprofit Manag Leadersh 29(1):11–27


Author information

Authors and affiliations

Department of Management and Operations, Villanova School of Business, Villanova University, Villanova, PA, 19085, USA

Matthew J. Liberatore

Department of Accounting and Information Systems, Villanova School of Business, Villanova University, Villanova, PA, 19085, USA

William P. Wagner


Contributions

Both authors contributed equally to this research.

Corresponding author

Correspondence to Matthew J. Liberatore.

Ethics declarations

Conflicts of interest.

The authors declare that they have no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Keywords used in Scopus literature search

((TITLE (“immersive system” OR “virtual reality” OR “mixed reality” OR “augmented reality”) OR ABS (“immersive system” OR “virtual reality” OR “mixed reality” OR “augmented reality”))) AND ((TITLE (“experiment” OR “trial” OR “test”) OR ABS (“experiment” OR “trial” OR “test”))) AND (LIMIT-TO (SRCTYPE, “j”)) AND (EXCLUDE (SUBJAREA, “MEDI”) OR EXCLUDE (SUBJAREA, “MATH”) OR EXCLUDE (SUBJAREA, “PHYS”) OR EXCLUDE (SUBJAREA, “NEUR”) OR EXCLUDE (SUBJAREA, “MATE”) OR EXCLUDE (SUBJAREA, “BIOC”) OR EXCLUDE (SUBJAREA, “AGRI”) OR EXCLUDE (SUBJAREA, “EART”) OR EXCLUDE (SUBJAREA, “ENVI”) OR EXCLUDE (SUBJAREA, “CENG”) OR EXCLUDE (SUBJAREA, “IMMU”) OR EXCLUDE (SUBJAREA, “DENT”) OR EXCLUDE (SUBJAREA, “PHAR”) OR EXCLUDE (SUBJAREA, “VETE”) OR EXCLUDE (SUBJAREA, “Undefined”) OR EXCLUDE (SUBJAREA, “ENER”) OR EXCLUDE (SUBJAREA, “CHEM”) OR EXCLUDE (SUBJAREA, “ENGI”)) AND (EXCLUDE (PUBYEAR, 2009) OR EXCLUDE (PUBYEAR, 2008) OR EXCLUDE (PUBYEAR, 2007) OR EXCLUDE (PUBYEAR, 2006) OR EXCLUDE (PUBYEAR, 2005) OR EXCLUDE (PUBYEAR, 2004) OR EXCLUDE (PUBYEAR, 2003) OR EXCLUDE (PUBYEAR, 2002) OR EXCLUDE (PUBYEAR, 2001) OR EXCLUDE (PUBYEAR, 2000) OR EXCLUDE (PUBYEAR, 1999) OR EXCLUDE (PUBYEAR, 1998) OR EXCLUDE (PUBYEAR, 1997) OR EXCLUDE (PUBYEAR, 1996) OR EXCLUDE (PUBYEAR, 1995) OR EXCLUDE (PUBYEAR, 1994) OR EXCLUDE (PUBYEAR, 1993) OR EXCLUDE (PUBYEAR, 1992) OR EXCLUDE (PUBYEAR, 1991) OR EXCLUDE (PUBYEAR, 1984)) AND (EXCLUDE (EXACTSRCTITLE, “Computers And Education”) OR EXCLUDE (EXACTSRCTITLE, “Journal Of Advanced Oxidation Technologies”) OR EXCLUDE (EXACTSRCTITLE, “Computer Applications In Engineering Education”) OR EXCLUDE (EXACTSRCTITLE, “International Journal Of Online Engineering”) OR EXCLUDE (EXACTSRCTITLE, “Journal Of Computing In Civil Engineering”) OR EXCLUDE (EXACTSRCTITLE, “Turkish Online Journal Of Educational Technology”) OR EXCLUDE (EXACTSRCTITLE, “International Journal Of recent Technology And Engineering”) OR EXCLUDE (EXACTSRCTITLE, “Advanced Engineering Informatics”) OR EXCLUDE 
(EXACTSRCTITLE, “Computers In Education Journal”) OR EXCLUDE (EXACTSRCTITLE, “Journal Of Computing And Information Science In Engineering”) OR EXCLUDE (EXACTSRCTITLE, “AES Journal Of The Audio Engineering Society”) OR EXCLUDE (EXACTSRCTITLE, “Advances In Mechanical Engineering”) OR EXCLUDE (EXACTSRCTITLE, “IEEE Transactions On Biomedical Engineering”) OR EXCLUDE (EXACTSRCTITLE, “International Journal Of Multimedia And Ubiquitous Engineering”) OR EXCLUDE (EXACTSRCTITLE, “Journal Of Telecommunication Electronic And Computer Engineering”) OR EXCLUDE (EXACTSRCTITLE, “British Journal Of Educational Technology”) OR EXCLUDE (EXACTSRCTITLE, “Educational Technology And Society”) OR EXCLUDE (EXACTSRCTITLE, “Journal Of Advanced Research In Dynamical And Control Systems”) OR EXCLUDE (EXACTSRCTITLE, “Journal Of Medical And Biological Engineering”) OR EXCLUDE (EXACTSRCTITLE, “Journal Of Professional Issues In Engineering Education And Practice”) OR EXCLUDE (EXACTSRCTITLE, “Journal Of Science Education And Technology”) OR EXCLUDE (EXACTSRCTITLE, “IEEE Journal On Emerging And Selected Topics In Circuits And Systems”) OR EXCLUDE (EXACTSRCTITLE, “International Journal Of Innovative Technology And Exploring Engineering”) OR EXCLUDE (EXACTSRCTITLE, “Journal Of Educational Computing Research”) OR EXCLUDE (EXACTSRCTITLE, “Annals Of Biomedical Engineering”) OR EXCLUDE (EXACTSRCTITLE, “IEEE Transactions On Circuits And Systems For Video Technology”) OR EXCLUDE (EXACTSRCTITLE, “Interactive Technology And Smart Education”) OR EXCLUDE (EXACTSRCTITLE, “International Journal Of Applied Engineering Research”) OR EXCLUDE (EXACTSRCTITLE, “Asia Pacific Education Researcher”)) AND (LIMIT-TO (LANGUAGE, “English”)) AND (LIMIT-TO (DOCTYPE, “ar”) OR LIMIT-TO (DOCTYPE, “re”)).


About this article

Liberatore, M.J., Wagner, W.P. Virtual, mixed, and augmented reality: a systematic review for immersive systems research. Virtual Reality 25, 773–799 (2021). https://doi.org/10.1007/s10055-020-00492-0


Received: 02 August 2020

Accepted: 27 November 2020

Published: 03 January 2021

Issue Date: September 2021

DOI: https://doi.org/10.1007/s10055-020-00492-0


Keywords

  • Immersive systems
  • Virtual reality
  • Augmented reality
  • Mixed reality
  • Empirical research
  • Systematic review
  • Review Article
  • Open access
  • Published: 25 October 2021

Augmented reality and virtual reality displays: emerging technologies and future perspectives

  • Jianghao Xiong 1 ,
  • En-Lin Hsiang 1 ,
  • Ziqian He 1 ,
  • Tao Zhan   ORCID: orcid.org/0000-0001-5511-6666 1 &
  • Shin-Tson Wu   ORCID: orcid.org/0000-0002-0943-0440 1  

Light: Science & Applications volume 10, Article number: 216 (2021)


With rapid advances in high-speed communication and computation, augmented reality (AR) and virtual reality (VR) are emerging as next-generation display platforms for deeper human-digital interactions. Nonetheless, simultaneously matching the exceptional performance of human vision and keeping the near-eye display module compact and lightweight imposes unprecedented challenges on optical engineering. Fortunately, recent progress in holographic optical elements (HOEs) and lithography-enabled devices provides innovative ways to tackle these obstacles in AR and VR that are otherwise difficult with traditional optics. In this review, we begin by introducing the basic structures of AR and VR headsets, and then describe the operation principles of various HOEs and lithography-enabled devices. Their properties are analyzed in detail, including the strong wavelength and incident-angle selectivity and multiplexing ability of volume HOEs; the polarization dependency and active switching of liquid crystal HOEs; the fabrication and properties of micro-LEDs (light-emitting diodes); and the large design freedom of metasurfaces. Afterwards, we discuss how these devices help enhance AR and VR performance, with detailed descriptions and analysis of some state-of-the-art architectures. Finally, we cast a perspective on potential developments and research directions of these photonic devices for future AR and VR displays.


Introduction

Recent advances in high-speed communication and miniature mobile computing platforms have escalated a strong demand for deeper human-digital interactions beyond traditional flat panel displays. Augmented reality (AR) and virtual reality (VR) headsets 1,2 are emerging as next-generation interactive displays with the ability to provide vivid three-dimensional (3D) visual experiences. Their useful applications include education, healthcare, engineering, and gaming, just to name a few 3,4,5. VR embraces a totally immersive experience, while AR promotes interaction between the user, digital content, and the real world, displaying virtual images while retaining see-through capability. In terms of display performance, AR and VR face several common challenges in satisfying the demanding requirements of human vision, including field of view (FoV), eyebox, angular resolution, dynamic range, and correct depth cues. Another pressing demand, although not directly related to optical performance, is ergonomics. To provide a user-friendly wearing experience, AR and VR devices should be lightweight and ideally have a compact, glasses-like form factor. These requirements, nonetheless, often entail tradeoffs with one another, which makes the design of high-performance AR/VR glasses and headsets particularly challenging.

In the 1990s, AR/VR experienced its first boom, which quickly subsided due to the lack of eligible hardware and digital content 6. Over the past decade, the concept of immersive displays was revisited and received a new round of excitement. Emerging technologies like holography and lithography have greatly reshaped AR/VR display systems. In this article, we first review the basic requirements of AR/VR displays and their associated challenges. Then, we briefly describe the properties of two emerging technologies: holographic optical elements (HOEs) and lithography-based devices (Fig. 1). Next, we introduce VR and AR systems separately because of their different device structures and requirements. For the immersive VR system, we discuss the major challenges and how these emerging technologies help mitigate them. For the see-through AR system, we first review the present status of light engines and introduce some architectures for optical combiners. Performance summaries of microdisplay light engines and optical combiners are provided, serving as a comprehensive overview of current AR display systems.

Figure 1: The left side illustrates HOEs and lithography-based devices. The right side shows the challenges in VR and the architectures in AR, and how the emerging technologies can be applied

Key parameters of AR and VR displays

AR and VR displays face several common challenges in satisfying the demanding requirements of human vision, such as FoV, eyebox, angular resolution, dynamic range, and correct depth cues. These requirements often exhibit tradeoffs with one another. Before diving into the detailed relations, it is beneficial to review the basic definitions of these display parameters.

Definition of parameters

Take a VR system (Fig. 2a) as an example. The light emitted from the display module is projected into a FoV, which can be translated to the size of the image perceived by the viewer. For reference, the horizontal FoV of human vision can be as large as 160° for monocular vision and 120° for overlapped binocular vision 6. The intersection area of the ray bundles forms the exit pupil, which is usually correlated with another parameter called the eyebox. The eyebox defines the region within which the whole image FoV can be viewed without vignetting. It therefore generally manifests a 3D geometry 7, whose volume strongly depends on the exit pupil size. A larger eyebox offers more tolerance to accommodate users' diverse interpupillary distances (IPDs) and wiggling of the headset in use. Angular resolution, defined by dividing the total resolution of the display panel by the FoV, measures the sharpness of the perceived image. For reference, a human visual acuity of 20/20 amounts to 1 arcmin angular resolution, or 60 pixels per degree (PPD), which is considered a common goal for AR and VR displays.

Another important feature of a 3D display is the depth cue. A vergence cue can be induced by displaying two separate images to the left eye and the right eye. But the fixed depth of the displayed image often mismatches the actual depth of the intended 3D image, which leads to incorrect accommodation cues. This mismatch causes the so-called vergence-accommodation conflict (VAC), which will be discussed in detail later. One important observation is that the VAC issue may be more serious in AR than in VR, because the image in an AR display is directly superimposed onto the real world, which presents correct depth cues. The image contrast depends on the display panel and stray light. To achieve a high dynamic range, the display panel should exhibit high brightness, a low dark level, and more than 10 bits of gray levels. Nowadays, the display brightness of a typical VR headset is about 150–200 cd/m² (or nits).
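Since angular resolution is simply the display resolution divided by the FoV, the numbers above are easy to check. The sketch below uses illustrative panel resolutions (they are assumptions, not figures from the text) to show how far a typical panel falls short of the 60 PPD goal:

```python
def pixels_per_degree(pixels, fov_deg):
    """Angular resolution as defined above: display resolution / FoV.

    Assumes pixels are spread uniformly across the field of view; real
    optics distort this, so treat the result as a first-order estimate.
    """
    return pixels / fov_deg

# Hypothetical per-eye panel: 2160 pixels across a 100-degree FoV
print(pixels_per_degree(2160, 100))   # 21.6 PPD

# Matching 20/20 acuity (1 arcmin per pixel = 60 PPD) over the same FoV
print(pixels_per_degree(6000, 100))   # 60.0 PPD
```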

Figure 2: a Schematic of a VR display defining FoV, exit pupil, eyebox, angular resolution, and accommodation cue mismatch. b Sketch of an AR display illustrating ACR

Figure 2b depicts a generic structure of an AR display. The definitions of the above parameters remain the same. One major difference is the influence of ambient light on the image contrast. For a see-through AR display, the ambient contrast ratio (ACR) 8 is commonly used to quantify the image contrast:

ACR = (L_on + T · L_am) / (L_off + T · L_am)

where L_on (L_off) represents the on-state (off-state) luminance (unit: nit), L_am is the ambient luminance, and T is the see-through transmittance. In general, ambient light is measured in illuminance (lux). For convenience of comparison, we convert illuminance to luminance by dividing by a factor of π, assuming a Lambertian emission profile. In a normal living room, the illuminance is about 100 lux (i.e., L_am ≈ 30 nits), while under typical office lighting, L_am ≈ 150 nits. Outdoors, L_am ≈ 300 nits on an overcast day and L_am ≈ 3000 nits on a sunny day. For AR displays, the minimum ACR should be 3:1 for recognizable images, 5:1 for adequate readability, and ≥10:1 for outstanding readability. As a simple estimate that ignores optical losses, achieving ACR = 10:1 on a sunny day (~3000 nits) requires the display to deliver a brightness of at least 30,000 nits. This imposes big challenges in finding a high-brightness microdisplay and designing a low-loss optical combiner.
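Taking the standard ambient-contrast-ratio form implied by these definitions, ACR = (L_on + T·L_am) / (L_off + T·L_am), the brightness estimate can be reproduced numerically. The snippet is a sketch under the idealized assumptions noted in its comments:

```python
import math

def ambient_contrast_ratio(L_on, L_off, L_am, T):
    # ACR = (L_on + T * L_am) / (L_off + T * L_am); luminances in nits,
    # T is the see-through transmittance of the combiner
    return (L_on + T * L_am) / (L_off + T * L_am)

def lux_to_nits(lux):
    # Illuminance -> luminance for a Lambertian emission profile (divide by pi)
    return lux / math.pi

print(round(lux_to_nits(100), 1))   # a ~100-lux living room is ~31.8 nits

# Idealized case (T = 1, no dark level, no optical losses) on a sunny day
# (~3000 nits ambient): 27,000 nits already gives ACR = 10, consistent with
# the ~30,000-nit estimate once real losses are folded back in
print(ambient_contrast_ratio(27_000, 0, 3_000, 1.0))   # 10.0
```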

Tradeoffs and potential solutions

Next, let us briefly review the tradeoffs mentioned earlier. To begin with, a larger FoV leads to a lower angular resolution for a given display resolution. In theory, overcoming this tradeoff only requires a higher-resolution display source, along with high-quality optics to support the corresponding modulation transfer function (MTF). Attaining 60 PPD across a 100° FoV requires a 6K resolution for each eye. This may be realizable in VR headsets because a large display panel, say 2–3 inches, can still accommodate a high resolution at acceptable manufacturing cost. However, for a glasses-like wearable AR display, the conflict between small display size and high resolution becomes obvious, as further shrinking the pixel size of a microdisplay is challenging.
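The 6K figure follows directly from FoV × PPD. Translating it into pixel pitch shows why the microdisplay case is the hard one; the panel sizes below are illustrative assumptions, not values from the text:

```python
def required_pixels(fov_deg, ppd=60):
    # Per-axis pixel count needed to sustain `ppd` across the full FoV
    return int(fov_deg * ppd)

def pixel_pitch_um(panel_width_inch, pixels):
    # Pixel pitch (micrometers) implied by fitting `pixels` across the panel
    return panel_width_inch * 25_400 / pixels

n = required_pixels(100)                  # 6000 -> the "6K per eye" above
print(round(pixel_pitch_um(2.5, n), 1))   # ~10.6 um on an assumed 2.5-inch VR panel
print(round(pixel_pitch_um(0.5, n), 1))   # ~2.1 um on an assumed 0.5-inch microdisplay
```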

To circumvent this issue, the concept of the foveated display has been proposed 9,10,11,12,13. The idea is based on the fact that the human eye only has high visual acuity in the central fovea region, which accounts for about 10° of FoV. If the high-resolution image is projected only onto the fovea while the peripheral image remains at low resolution, then a microdisplay with 2K resolution can satisfy the need. Regarding implementation, a straightforward way is to optically combine two display sources 9,10,11: one for the foveal and one for the peripheral FoV. This approach can be regarded as spatial multiplexing of displays. Alternatively, time multiplexing can be adopted by temporally changing the optical path to produce different magnification factors for the corresponding FoV 12. Finally, another approach without multiplexing is to use a specially designed lens with intentional distortion to achieve a non-uniform resolution density 13. Aside from implementing foveation, another great challenge is to dynamically steer the foveated region as the viewer's eye moves. This task is strongly related to pupil steering, which will be discussed in detail later.
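A rough per-axis pixel budget illustrates why roughly 2K can suffice. The peripheral density used below (15 PPD) is an illustrative assumption, not a value from the text:

```python
def foveated_pixels(fov_deg, fovea_deg=10, fovea_ppd=60, periph_ppd=15):
    # Full 60-PPD acuity only over the ~10-degree fovea; everywhere else an
    # assumed, lower density (periph_ppd) is enough for peripheral vision
    return fovea_deg * fovea_ppd + (fov_deg - fovea_deg) * periph_ppd

print(foveated_pixels(100))   # 600 + 1350 = 1950 pixels -> fits a "2K" display
print(100 * 60)               # 6000 pixels for uniform full-acuity coverage
```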

A larger eyebox or FoV usually decreases the image brightness, which often lowers the ambient contrast ratio (ACR). This is exactly the case for a waveguide AR system with exit pupil expansion (EPE) operating under strong ambient light. To improve the ACR, one approach is to dynamically adjust the transmittance with a tunable dimmer 14 , 15 . Another solution is to directly boost the image brightness with a high-luminance microdisplay and efficient combiner optics. Details of this topic will be discussed in the light engine section.
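The effect of dimming on ACR can be illustrated with a common simplified model for a see-through display, ACR = (L_image + T·L_ambient) / (T·L_ambient), where T is the see-through transmittance. The formula and luminance values below are an illustrative sketch, not taken from a specific reference:

```python
# Simplified ambient contrast ratio (ACR) model for a see-through AR display.
# Dimming the ambient background (lower T) raises ACR for a fixed image brightness.

def acr(l_image: float, l_ambient: float, transmittance: float) -> float:
    background = transmittance * l_ambient
    return (l_image + background) / background

# A 1000-nit image against 10,000-nit daylight:
print(acr(1000, 10_000, 0.8))   # nearly washed out
print(acr(1000, 10_000, 0.1))   # dimming to 10% transmittance roughly doubles ACR
```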

Another tradeoff between FoV and eyebox in geometric optical systems results from the conservation of etendue (or optical invariant). Increasing the system etendue requires larger optics, which in turn compromises the form factor. Finally, to address the VAC issue, the display system needs to generate a proper accommodation cue, which often requires the modulation of image depth or wavefront, neither of which can be easily achieved in a traditional geometric optical system. While remarkable progress has been made in adopting freeform surfaces 16 , 17 , 18 , further advancing AR and VR systems requires additional novel optics with a higher degree of freedom in structure design and light modulation. Moreover, the employed optics should be thin and lightweight. To mitigate the above-mentioned challenges, diffractive optics is a strong contender. Unlike geometric optics, which relies on curved surfaces to refract or reflect light, diffractive optics only requires a thin layer of several micrometers to establish efficient light diffraction. Two major types of diffractive optics are HOEs based on wavefront recording and lithographically defined devices like surface relief gratings (SRGs). While SRGs offer large design freedom in the local grating geometry, a recent publication 19 indicates that the combination of HOE and freeform optics also offers great potential for arbitrary wavefront generation. Furthermore, advances in lithography have also enabled optical metasurfaces beyond diffractive and refractive optics, as well as miniature display panels like micro-LEDs (light-emitting diodes). These devices hold the potential to boost the performance of current AR/VR displays while keeping a lightweight and compact form factor.
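The etendue constraint can be sketched numerically. In a lossless system, the etendue demanded at the eyebox (aperture × angular extent) cannot exceed what the optics supplies; the 1D small-form G ≈ a · 2 sin(θ_half) below, and all the numbers, are illustrative:

```python
import math

# 1D etendue (optical invariant) sketch: G = aperture * 2*sin(half_angle).
# A lossless optic cannot deliver more etendue at the eyebox than it accepts,
# so a large eyebox AND a large FoV force the optics itself to grow.

def etendue_1d(aperture_mm: float, half_angle_deg: float) -> float:
    return aperture_mm * 2 * math.sin(math.radians(half_angle_deg))

need = etendue_1d(10, 50)   # 10-mm eyebox, +/-50 deg FoV
have = etendue_1d(5, 50)    # 5-mm-aperture optic, same angular range
print(need, have, need <= have)   # the smaller optic cannot supply it
```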

Formation and properties of HOEs

HOE generally refers to a recorded hologram that reproduces the original light wavefront. The concept of holography was proposed by Dennis Gabor 20 ; it refers to the process of recording a wavefront in a medium (hologram) and later reconstructing it with a reference beam. Early holography used intensity-sensitive recording materials like silver halide emulsion, dichromated gelatin, and photopolymer 21 . Among them, photopolymer stands out due to its easy fabrication and ability to capture high-fidelity patterns 22 , 23 . It has therefore found extensive applications like holographic data storage 23 and display 24 , 25 . Photopolymer HOEs (PPHOEs) have a relatively small refractive index modulation and therefore exhibit strong selectivity in wavelength and incident angle. Another feature of PPHOEs is that several holograms can be recorded into a photopolymer film by consecutive exposures. Later, liquid-crystal holographic optical elements (LCHOEs) based on photoalignment polarization holography were also developed 25 , 26 . Due to the inherent anisotropy of liquid crystals, LCHOEs are extremely sensitive to the polarization state of the input light. This feature, combined with the polarization modulation ability of liquid crystal devices, offers a new possibility for dynamic wavefront modulation in display systems.

The formation of a PPHOE is illustrated in Fig. 3a . When exposed to an interference field with high- and low-intensity fringes, monomers tend to move toward the bright fringes due to the higher local monomer-consumption rate. As a result, the density and refractive index are slightly larger in the bright regions. Note that the index modulation δ n here is defined as the difference between the maximum and minimum refractive indices, which may be twice the value in other definitions 27 . The index modulation δ n is typically in the range of 0–0.06. To understand the optical properties of PPHOEs, we simulate a transmissive grating and a reflective grating using rigorous coupled-wave analysis (RCWA) 28 , 29 and plot the results in Fig. 3b . Details of the grating configurations can be found in Table S1 . The reason for simulating only gratings is that, for a general HOE, the local region can be treated as a grating; observations of gratings therefore offer general insight into HOEs. For a transmissive grating, the angular bandwidth (efficiency > 80%) is around 5° ( λ  = 550 nm), while the spectral band is relatively broad, with a bandwidth around 175 nm (7° incidence). For a reflective grating, the spectral band is narrow, with a bandwidth around 10 nm. The angular bandwidth varies with wavelength, ranging from 2° to 20°. The strong selectivity of PPHOEs in wavelength and incident angle is directly related to their small δ n , which can be adjusted by controlling the exposure dosage.
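A simpler analytic stand-in for RCWA is Kogelnik's coupled-wave theory, which gives first-order Bragg efficiencies η = sin²ν for transmission and tanh²ν for reflection volume gratings, with ν = πδn·d/(λ cosθ). The parameters below are illustrative, not the Table S1 configurations:

```python
import math

# Kogelnik coupled-wave estimate of volume-grating efficiency at the Bragg
# condition -- a simple analytic counterpart to the rigorous RCWA in the text.

def eta_transmission(dn, d_um, wavelength_um=0.55, cos_theta=1.0):
    nu = math.pi * dn * d_um / (wavelength_um * cos_theta)
    return math.sin(nu) ** 2

def eta_reflection(dn, d_um, wavelength_um=0.55, cos_theta=1.0):
    nu = math.pi * dn * d_um / (wavelength_um * cos_theta)
    return math.tanh(nu) ** 2

# A small index modulation (dn ~ 0.03) needs a thick film (~9 um) to reach
# near-unity transmission efficiency, consistent with PPHOE's strong selectivity:
print(eta_transmission(0.03, 9.2))
print(eta_reflection(0.03, 9.2))
```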

figure 3

a Schematic of the formation of PPHOE. Simulated efficiency plots for b1 transmissive and b2 reflective PPHOEs. c Working principle of multiplexed PPHOE. d Formation and molecular configurations of LCHOEs. Simulated efficiency plots for e1 transmissive and e2 reflective LCHOEs. f Illustration of polarization dependency of LCHOEs

A distinctive feature of PPHOE is the ability to multiplex several holograms into one film sample. If the exposure dosage of a recording process is controlled so that the monomers are not completely depleted in the first exposure, the remaining monomers can continue to form another hologram in the following recording process. Because the total amount of monomer is fixed, there is usually an efficiency tradeoff between multiplexed holograms. The final film sample would exhibit the wavefront modulation functions of multiple holograms (Fig. 3c ).

Liquid crystals have also been used to form HOEs. LCHOEs can generally be categorized into volume-recording and surface-alignment types. Volume-recording LCHOEs are based either on early polarization holography recordings with azo-polymers 30 , 31 , or on holographic polymer-dispersed liquid crystals (HPDLCs) 32 , 33 formed by liquid-crystal-doped photopolymer. Surface-alignment LCHOEs are based on photoalignment polarization holography (PAPH) 34 . The first step is to record the desired polarization pattern in a thin photoalignment layer, and the second step is to use it to align the bulk liquid crystal 25 , 35 . Due to the simple fabrication process, high efficiency, and low scattering afforded by the liquid crystal’s self-assembling nature, surface-alignment LCHOEs based on PAPH have recently attracted increasing interest in applications like near-eye displays. Here, we shall focus on this surface-alignment type and, for simplicity, refer to it as LCHOE hereafter.

The formation of LCHOEs is illustrated in Fig. 3d . The information of the wavefront and the local diffraction pattern is recorded in a thin photoalignment layer. The bulk liquid crystal deposited on the photoalignment layer, depending on whether it is a nematic liquid crystal or a cholesteric liquid crystal (CLC), forms a transmissive or a reflective LCHOE. In a transmissive LCHOE, the bulk nematic liquid crystal molecules generally follow the pattern of the bottom alignment layer. The smallest allowable pattern period is governed by the liquid crystal distortion free energy model, which predicts that the pattern period should generally be larger than the sample thickness 36 , 37 . This results in a maximum diffraction angle under 20°. On the other hand, in a reflective LCHOE 38 , 39 , the bulk CLC molecules form a stable helical structure, which is tilted to match the k -vector of the bottom pattern. The structure exhibits a very low distortion free energy 40 , 41 and can accommodate a pattern period small enough to diffract light into the total internal reflection (TIR) regime of a glass substrate.
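The ~20° ceiling follows from the grating equation, sinθ = λ/Λ for first-order diffraction at normal incidence. The LC layer thickness of 1.6 µm below is an assumed, illustrative value chosen to show the scale:

```python
import math

# Grating-equation check of the diffraction-angle limit for transmissive
# LCHOEs: if the pattern period cannot be smaller than the LC layer
# thickness (assumed ~1.6 um here), the first-order angle stays modest.

def diffraction_angle_deg(wavelength_um: float, period_um: float) -> float:
    return math.degrees(math.asin(wavelength_um / period_um))

print(diffraction_angle_deg(0.55, 1.6))   # about 20 degrees
```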

The diffraction properties of LCHOEs are shown in Fig. 3e . The maximum refractive index modulation of an LCHOE is equal to the liquid crystal birefringence (Δ n ), which may vary from 0.04 to 0.5 depending on the molecular conjugation 42 , 43 . The birefringence used in our simulation is Δ n  = 0.15. Compared to PPHOEs, the angular and spectral bandwidths are significantly larger for both transmissive and reflective LCHOEs. For a transmissive LCHOE, the angular bandwidth is around 20° ( λ  = 550 nm), while the spectral bandwidth is around 300 nm (7° incidence). For a reflective LCHOE, the spectral bandwidth is around 80 nm and the angular bandwidth can vary from 15° to 50°, depending on the wavelength.

The anisotropic nature of liquid crystal leads to the LCHOE’s unique polarization-dependent response to incident light. As depicted in Fig. 3f , for a transmissive LCHOE the accumulated phase is opposite for the conjugated left-handed circular polarization (LCP) and right-handed circular polarization (RCP) states, leading to reversed diffraction directions. For a reflective LCHOE, the polarization dependency is similar to that of a normal CLC: for circular polarization with the same handedness as the helical structure of the CLC, the diffraction is strong; for the opposite circular polarization, the diffraction is negligible.

Another distinctive property of liquid crystal is its dynamic response to an external voltage. The LC reorientation can be controlled with a relatively low voltage (<10 V rms ) and the response time is on the order of milliseconds, depending mainly on the LC viscosity and layer thickness. Methods to dynamically control LCHOEs can be categorized as active addressing and passive addressing, which can be achieved by either directly switching the LCHOE or modulating the polarization state with an active waveplate. Detailed addressing methods will be described in the VAC section.

Lithography-enabled devices

Lithography technologies are used to create arbitrary patterns on wafers, which lays the foundation of the modern integrated circuit industry 44 . Photolithography is suitable for mass production, while electron/ion beam lithography is usually used to create photomasks for photolithography or to write structures with nanometer-scale feature sizes. Recent advances in lithography have enabled engineered structures like optical metasurfaces 45 , SRGs 46 , and micro-LED displays 47 . Metasurfaces exhibit remarkable design freedom by varying the shape of meta-atoms, which can be utilized to achieve novel functions like achromatic focusing 48 and beam steering 49 . Similarly, SRGs offer large design freedom by manipulating the geometry of local grating regions to realize the desired optical properties. Micro-LEDs, on the other hand, exhibit several unique features, such as ultrahigh peak brightness, small aperture ratio, excellent stability, and nanosecond response time. As a result, micro-LEDs are promising candidates for AR and VR systems, offering high ACR and the high frame rates needed to suppress motion blur. In the following sections, we briefly review the fabrication and properties of micro-LEDs and of optical modulators like metasurfaces and SRGs.

Fabrication and properties of micro-LEDs

LEDs with a chip size larger than 300 μm have been widely used in solid-state lighting and public information displays. Recently, micro-LEDs with chip sizes <5 μm have been demonstrated 50 . The first micro-LED disc, with a diameter of about 12 µm, was demonstrated in 2000 51 . After that, a single-color (blue or green) LED microdisplay was demonstrated in 2012 52 . The high peak brightness, fast response time, true dark state, and long lifetime of micro-LEDs are attractive for display applications. Many companies have since released micro-LED prototypes or products, ranging from large-size TVs to small-size microdisplays for AR/VR applications 53 , 54 . Here, we focus on micro-LEDs for near-eye display applications. Regarding fabrication, the metal-organic chemical vapor deposition (MOCVD) method is used to grow the AlGaInP epitaxial layer on a GaAs substrate for red LEDs, and GaN epitaxial layers on a sapphire substrate for green and blue LEDs. Next, a photolithography process is applied to define the mesas and deposit electrodes. To drive the LED array, the fabricated micro-LEDs are transferred to a CMOS (complementary metal-oxide-semiconductor) driver board. For a small-size (<2 inches) microdisplay used in AR or VR, the precision of the pick-and-place transfer process can hardly meet the high resolution density (>1000 pixels per inch) requirement. Thus, the main approach to assembling LED chips with driving circuits is flip-chip bonding 50 , 55 , 56 , 57 , as Fig. 4a depicts. In flip-chip bonding, the mesas and electrode pads are defined and deposited before the transfer process, while metal bonding balls are preprocessed on the CMOS substrate. After that, a thermo-compression method is used to bond the two wafers together. However, due to the thermal mismatch between the LED chip and the driving board, as the pixel size decreases, the misalignment between the LED chip and the metal bonding ball on the CMOS substrate becomes serious.
In addition, the common n-GaN layer may cause optical crosstalk between pixels, which degrades the image quality. To overcome these issues, the LED epitaxial layer can first be metal-bonded to the silicon driver board, followed by a photolithography process to define the LED mesas and electrodes. Without the need for an alignment process, the pixel size can be reduced to <5 µm 50 .

figure 4

a Illustration of flip-chip bonding technology. b Simulated IQE-LED size relations for red and blue LEDs based on ABC model. c Comparison of EQE of different LED sizes with and without KOH and ALD side wall treatment. d Angular emission profiles of LEDs with different sizes. Metasurfaces based on e resonance-tuning, f non-resonance tuning and g combination of both. h Replication master and i replicated SRG based on nanoimprint lithography. Reproduced from a ref. 55 with permission from AIP Publishing, b ref. 61 with permission from PNAS, c ref. 66 with permission from IOP Publishing, d ref. 67 with permission from AIP Publishing, e ref. 69 with permission from OSA Publishing f ref. 48 with permission from AAAS g ref. 70 with permission from AAAS and h , i ref. 85 with permission from OSA Publishing

In addition to the manufacturing process, the electrical and optical characteristics of an LED also depend on the chip size. Generally, due to Shockley-Read-Hall (SRH) non-radiative recombination at the sidewalls of the active area, a smaller LED chip size results in a lower internal quantum efficiency (IQE), and the peak-IQE driving point moves toward a higher current density due to the increased ratio of sidewall surface to active volume 58 , 59 , 60 . In addition, compared to GaN-based green and blue LEDs, AlGaInP-based red LEDs, with their larger surface recombination and carrier diffusion length, suffer a more severe efficiency drop 61 , 62 . Figure 4b shows the simulated IQE drop in relation to the LED chip size for blue and red LEDs based on the ABC model 63 . To alleviate the efficiency drop caused by sidewall defects, depositing passivation materials by atomic layer deposition (ALD) or plasma-enhanced chemical vapor deposition (PECVD) has proven helpful for both GaN- and AlGaInP-based LEDs 64 , 65 . In addition, applying a KOH (potassium hydroxide) treatment after ALD can further reduce the EQE drop of micro-LEDs 66 (Fig. 4c ). Small-size LEDs also exhibit some advantages, such as a higher light extraction efficiency (LEE): compared to a 100-µm LED, the LEE of a 2-µm LED increases from 12.2 to 25.1% 67 . Moreover, the radiation pattern of a micro-LED is more directional than that of a large-size LED (Fig. 4d ), which helps to improve the lens collection efficiency in AR/VR display systems.
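The size dependence of the ABC model can be sketched as IQE = Bn²/(An + Bn² + Cn³), with the SRH coefficient A growing as the perimeter-to-area ratio increases. The coefficients below are order-of-magnitude textbook values, not fitted to the data in Fig. 4b:

```python
# ABC-model sketch of the IQE drop for small LEDs. Sidewall SRH recombination
# is modeled by letting the A coefficient grow as chip size shrinks.
# All coefficients are illustrative order-of-magnitude values.

def iqe(n, a, b=1e-11, c=1e-30):
    """IQE = radiative / total recombination at carrier density n (cm^-3)."""
    return b * n**2 / (a * n + b * n**2 + c * n**3)

def a_coeff(size_um, a_bulk=1e7, s_contrib=5e7):
    """SRH coefficient rising with the sidewall-to-volume ratio ~ 1/size."""
    return a_bulk + s_contrib / size_um

for size in (100, 10, 2):           # chip size in micrometers
    print(size, iqe(1e18, a_coeff(size)))   # IQE falls as the chip shrinks
```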

Metasurfaces and SRGs

Thanks to advances in lithography technology, low-loss dielectric metasurfaces working in the visible band have recently emerged as a platform for wavefront shaping 45 , 48 , 68 . They consist of an array of subwavelength-spaced structures with individually engineered wavelength-dependent polarization/phase/amplitude responses. In general, the light modulation mechanisms can be classified into resonant tuning 69 (Fig. 4e ), non-resonant tuning 48 (Fig. 4f ), and a combination of both 70 (Fig. 4g ). In comparison with non-resonant tuning (based on the geometric phase and/or the dynamic propagation phase), resonant tuning (such as Fabry–Pérot resonance, Mie resonance, etc.) is usually associated with a narrower operating bandwidth and a smaller out-of-plane aspect ratio (height/width) of the nanostructures. As a result, resonant structures are easier to fabricate but more sensitive to fabrication tolerances. For both types, materials with a higher refractive index and lower absorption loss help reduce the aspect ratio of the nanostructures and improve the device efficiency. To this end, titanium dioxide (TiO 2 ) and gallium nitride (GaN) are the major choices for operation across the entire visible band 68 , 71 . While small metasurfaces (diameter <1 mm) are usually fabricated in the lab via electron-beam lithography or focused ion beam milling, mass producibility is the key to their practical adoption. Deep ultraviolet (UV) photolithography has proven feasible for reproducing centimeter-size metalenses with decent imaging performance, although it requires multiple etching steps 72 . Interestingly, the recently developed UV nanoimprint lithography based on a high-index nanocomposite takes only a single step and can achieve an aspect ratio larger than 10, showing great promise for high-volume production 73 .

The arbitrary wavefront shaping capability and the thinness of metasurfaces have aroused strong research interest in novel AR/VR prototypes with improved performance. Lee et al. employed nanoimprint lithography to fabricate a centimeter-size, geometric-phase metalens eyepiece for full-color AR displays 74 . By tailoring its polarization conversion efficiency and stacking it with a circular polarizer, the virtual image can be superimposed on the surrounding scene. The large numerical aperture (NA ~ 0.5) of the metalens eyepiece enables a wide FoV (>76°) that is difficult to obtain with conventional optics. However, the geometric-phase metalens is intrinsically a diffractive lens and thus suffers from strong chromatic aberrations. To overcome this issue, an achromatic lens can be designed by simultaneously engineering the group delay and the group delay dispersion 75 , 76 , which will be described in detail later. Other novel and/or improved near-eye display architectures include metasurface-based contact-lens-type AR 77 , achromatic-metalens-array-enabled integral-imaging light field displays 78 , wide-FoV lightguide AR with polarization-dependent metagratings 79 , and off-axis projection-type AR with an aberration-corrected metasurface combiner 80 , 81 , 82 . Nevertheless, judging from existing AR/VR prototypes, metasurfaces still face a strong tradeoff among numerical aperture (for metalenses), chromatic aberration, monochromatic aberration, efficiency, aperture size, and fabrication complexity.

On the other hand, SRGs are diffractive gratings that have been researched for decades as input/output couplers of waveguides 83 , 84 . Their surface is composed of corrugated microstructures, and different shapes, including binary, blazed, slanted, and even analog profiles, can be designed. The parameters of the corrugated microstructures are determined by the target diffraction order, the operating spectral bandwidth, and the angular bandwidth. Compared to metasurfaces, SRGs have a much larger feature size and thus can be fabricated via UV photolithography and subsequent etching. They are usually replicated by nanoimprint lithography with appropriate heating and surface treatment. According to a report published a decade ago, SRGs with a height of 300 nm and a slant angle of up to 50° can be faithfully replicated with high yield and reproducibility 85 (Fig. 4h, i ).

Challenges and solutions of VR displays

The fully immersive nature of VR headsets leads to a relatively fixed configuration in which the display panel is placed in front of the viewer’s eye with imaging optics in between. Regarding system performance, although inadequate angular resolution still exists in some current VR headsets, improvements in display panel resolution through advanced fabrication processes are expected to solve this issue progressively. Therefore, in the following discussion, we mainly focus on two major challenges: form factor and 3D cue generation.

Form factor

Compact and lightweight near-eye displays are essential for a comfortable user experience and therefore highly desirable in VR headsets. Current mainstream VR headsets usually have a considerably larger volume than eyeglasses, and most of that volume is simply empty. This is because a certain distance is required between the display panel and the viewing optics, usually close to the focal length of the lens system, as illustrated in Fig. 5a . Conventional VR headsets employ a transmissive lens with a ~4 cm focal length to offer a large FoV and eyebox. Fresnel lenses are thinner than conventional ones, but the distance required between the lens and the panel does not change significantly. In addition, the diffraction artifacts and stray light caused by the Fresnel grooves can degrade the image quality, or MTF. Although the resolution density, quantified in pixels per inch (PPI), of current VR headsets is still limited, the Fresnel lens will eventually cease to be an ideal solution once high-PPI displays become available. The strong chromatic aberration of a Fresnel singlet should also be compensated if a high-quality imaging system is preferred.
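The coupling between focal length, panel size, and FoV in this basic layout follows FoV = 2·atan(w/2f) for a panel of width w at the focal plane of a lens with focal length f. The 9-cm panel width below is an illustrative value:

```python
import math

# FoV of the basic VR layout in Fig. 5a: a panel at the focal plane of a
# magnifier lens subtends FoV = 2 * atan(panel_half_width / focal_length).
# This is why the headset depth is pinned near the focal length.

def fov_deg(panel_width_mm: float, focal_mm: float) -> float:
    return 2 * math.degrees(math.atan(panel_width_mm / 2 / focal_mm))

# A ~4-cm focal length with a ~9-cm-wide panel yields a ~97 deg horizontal FoV:
print(fov_deg(90, 40))
```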

figure 5

a Schematic of a basic VR optical configuration. b Achromatic metalens used as VR eyepiece. c VR based on curved display and lenslet array. d Basic working principle of a VR display based on pancake optics. e VR with pancake optics and Fresnel lens array. f VR with pancake optics based on purely HOEs. Reprinted from b ref. 87 under the Creative Commons Attribution 4.0 License. Adapted from c ref. 88 with permission from IEEE, e ref. 91 and f ref. 92 under the Creative Commons Attribution 4.0 License

It is tempting to replace the refractive elements with a single thin diffractive lens like a transmissive LCHOE. However, the diffractive nature of such a lens will result in serious color aberrations. Interestingly, metalenses can fulfil this objective without color issues. To understand how metalenses achieve achromatic focus, let us first take a glance at the general lens phase profile \(\Phi (\omega ,r)\) expanded as a Taylor series 75 :

$$\Phi \left( {\omega ,r} \right) = \varphi _0\left( \omega \right) - \frac{\omega }{c}\left( {\sqrt {r^2 + F\left( \omega \right)^2} - F\left( \omega \right)} \right) \approx \left. \Phi \right|_{\omega _0} + \left. {\frac{{\partial \Phi }}{{\partial \omega }}} \right|_{\omega _0}\left( {\omega - \omega _0} \right) + \frac{1}{2}\left. {\frac{{\partial ^2\Phi }}{{\partial \omega ^2}}} \right|_{\omega _0}\left( {\omega - \omega _0} \right)^2$$

where \(\varphi _0(\omega )\) is the phase at the lens center, \(F\left( \omega \right)\) is the focal length as a function of frequency ω , r is the radial coordinate, and \(\omega _0\) is the central operation frequency. To realize achromatic focus, \(\partial F{{{\mathrm{/}}}}\partial \omega\) should be zero. With a designed focal length, the group delay \(\partial \Phi (\omega ,r){{{\mathrm{/}}}}\partial \omega\) and the group delay dispersion \(\partial ^2\Phi (\omega ,r){{{\mathrm{/}}}}\partial \omega ^2\) can be determined, and \(\varphi _0(\omega )\) is an auxiliary degree of freedom in the phase profile design. In the design of an achromatic metalens, the group delay is a function of the radial coordinate and monotonically increases with the metalens radius. Many designs have shown that the group delay has a limited variation range 75 , 76 , 78 , 86 . According to Shrestha et al. 86 , there is an inevitable tradeoff among the maximum radius of the metalens, the NA, and the operation bandwidth. Thus, the reported achromatic metalenses in the visible usually have a limited lens aperture (e.g., diameter < 250 μm) and NA (e.g., <0.2). Such a tradeoff is undesirable in VR displays, as the eyepiece favors a large clear aperture (inch size) and a reasonably high NA (>0.3) to maintain a wide FoV and a reasonable eye relief 74 .
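The scaling of the group delay demand with lens radius can be checked from the hyperbolic phase profile, which gives GD(r) = (1/c)(√(r² + F²) − F) for a fixed design focal length. The specific radii and focal lengths below are illustrative:

```python
import math

# Group delay an achromatic metalens must supply at radius r for focal
# length F: GD(r) = (sqrt(r^2 + F^2) - F) / c. Meta-atom libraries only
# cover a limited GD range, which caps the achievable radius at a given NA.

C = 3e8  # speed of light, m/s

def group_delay_fs(r_um: float, focal_um: float) -> float:
    r, f = r_um * 1e-6, focal_um * 1e-6
    return (math.sqrt(r**2 + f**2) - f) / C * 1e15  # femtoseconds

# A 125-um-radius lens at NA ~ 0.2 (F ~ 612 um) needs only tens of fs:
print(group_delay_fs(125, 612))
# Scaling the radius to 1 mm at the same NA multiplies the demand ~8x:
print(group_delay_fs(1000, 4900))
```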

To overcome this limitation, Li et al. 87 proposed a novel zone lens method. Unlike the traditional phase Fresnel lens, where the zones are determined by phase resets, the new approach divides the zones by group delay resets. In this way, the lens aperture and NA can be greatly enlarged, and the group delay limit is bypassed. A notable side effect of this design is the phase discontinuity at zone boundaries, which contributes to higher-order focusing. Therefore, significant effort has been devoted to finding the optimal zone transition locations and minimizing the phase discontinuities. Using this method, they demonstrated an impressive 2-mm-diameter metalens with NA = 0.7 and nearly diffraction-limited focusing for the design wavelengths (488, 532, 658 nm) (Fig. 5b ). The metalens consists of 681 zones and works across the visible band from 470 to 670 nm, though the focusing efficiency is on the order of 10%. This is a great starting point for the achromatic metalens to be employed as a compact, chromatic-aberration-free eyepiece in near-eye displays. Future challenges are to further increase the aperture size, correct the off-axis aberrations, and improve the optical efficiency.

Besides replacing the refractive lens with an achromatic metalens, another way to reduce system focal length without decreasing NA is to use a lenslet array 88 . As depicted in Fig. 5c , both the lenslet array and display panel adopt a curved structure. With the latest flexible OLED panel, the display can be easily curved in one dimension. The system exhibits a large diagonal FoV of 180° with an eyebox of 19 by 12 mm. The geometry of each lenslet is optimized separately to achieve an overall performance with high image quality and reduced distortions.

Aside from shortening the system focal length, another way to reduce the total track is to fold the optical path. Recently, polarization-based folded lenses, also known as pancake optics, have been under active development for VR applications 89 , 90 . Figure 5d depicts the structure of an exemplary singlet pancake VR lens system. Pancake lenses can offer better imaging performance with a compact form factor, since there are more degrees of freedom in the design and the actual light path is folded thrice. By using a reflective surface with positive power, the field curvature of the positive refractive lenses can be compensated. In addition, the reflective surface has no chromatic aberration while contributing considerable optical power to the system. Therefore, the optical power of the refractive lenses can be smaller, resulting in an even weaker chromatic aberration. Compared to Fresnel lenses, pancake lenses have smooth surfaces and far fewer diffraction artifacts and stray light. However, such a pancake lens design is not perfect either; its major shortcoming is low light efficiency. With two incidences of light on the half mirror, the maximum system efficiency is limited to 25% for a polarized input and 12.5% for an unpolarized input. Moreover, due to the multiple surfaces in the system, stray light caused by surface reflections and polarization leakage may lead to apparent ghost images. As a result, a catadioptric pancake VR headset usually manifests darker imagery and lower contrast than the corresponding dioptric VR.
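The 25%/12.5% bounds follow directly from the two encounters with the 50/50 half mirror, one in transmission and one in reflection. A minimal sketch with ideal components:

```python
# Throughput ceiling of a polarization-folded (pancake) lens: light meets
# the 50/50 half mirror twice -- once transmitted, once reflected -- so at
# most half of half survives. An unpolarized input loses another factor of 2
# at the initial polarizer. Ideal, lossless components assumed.

def pancake_efficiency(polarized_input: bool = True) -> float:
    half_mirror = 0.5 * 0.5   # one transmission x one reflection
    return half_mirror if polarized_input else half_mirror * 0.5

print(pancake_efficiency(True))    # 0.25
print(pancake_efficiency(False))   # 0.125
```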

Interestingly, the lenslet and pancake optics can be combined to further reduce the system form factor. Bang et al. 91 demonstrated a compact VR system with pancake optics and a Fresnel lenslet array. The pancake optics serves to fold the optical path between the display panel and the lenslet array (Fig. 5e ). Another Fresnel lens is used to collect the light from the lenslet array. The system has a decent horizontal FoV of 102° and an eyebox of 8 mm. However, a certain degree of image discontinuity and crosstalk is still present, which can be improved with further optimization of the Fresnel lens and the lenslet array.

Going one step further, replacing all conventional optics in a catadioptric VR headset with holographic optics can make the whole system even thinner. Maimone and Wang demonstrated such a lightweight, high-resolution, and ultra-compact VR optical system using purely HOEs 92 . This holographic VR optics was made possible by combining several innovative optical components, including a reflective PPHOE, a reflective LCHOE, and a PPHOE-based directional backlight with laser illumination, as shown in Fig. 5f . Since all the optical power is provided by HOEs with negligible weight and volume, the total physical thickness can be reduced to <10 mm. Also, unlike conventional bulk optics, the optical power of an HOE is independent of its thickness and subject only to the recording process. Another advantage of holographic optical devices is that they can be engineered to offer distinct phase profiles for different wavelengths and angles of incidence, adding extra degrees of freedom in the optical design for better imaging performance. Although only a single-color backlight has been demonstrated, such a PPHOE has the potential to achieve a full-color laser backlight through its multiplexing ability. The PPHOE and LCHOE in the pancake optics can also be optimized at different wavelengths to achieve high-quality full-color images.

Vergence-accommodation conflict

Conventional VR displays suffer from VAC, a common issue for stereoscopic 3D displays 93 . In current VR display modules, the distance between the display panel and the viewing optics is fixed, which means the VR imagery is displayed at a single depth. However, the image contents are generated by parallax rendering in three dimensions, offering distinct images to the two eyes. This approach provides a proper stimulus to vergence but completely ignores the accommodation cue, leading to the well-known VAC that can cause an uncomfortable user experience. Since the beginning of this century, numerous methods have been proposed to solve this critical issue. Methods to produce an accommodation cue include multifocal/varifocal displays 94 , holographic displays 95 , and integral imaging displays 96 . Alternatively, eliminating the accommodation cue with a Maxwellian-view display 93 also helps to mitigate the VAC. However, holographic displays and Maxwellian-view displays generally require a totally different optical architecture than current VR systems; they are therefore more suitable for AR displays, which will be discussed later. Integral imaging, on the other hand, has an inherent tradeoff between view number and resolution. For current VR headsets pursuing a high resolution to match human visual acuity, it may not be an appealing solution. Therefore, multifocal/varifocal displays that rely on depth modulation are a relatively practical and effective solution for VR headsets. Regarding the working mechanism, multifocal displays present multiple images at different depths to imitate the original 3D scene, whereas varifocal displays show only one image in each time frame, with the image depth matched to the viewer’s vergence depth. Nonetheless, obtaining the viewer’s vergence depth in advance requires an additional eye-tracking module.
Despite different operation principles, a varifocal display can often be converted to a multifocal display as long as the varifocal module has enough modulation bandwidth to support multiple depths in a time frame.

To achieve depth modulation in a VR system, traditional liquid lenses 97 , 98 with tunable focus suffer from small apertures and large aberrations. The Alvarez lens 99 is another tunable-focus solution, but it requires mechanical adjustment, which adds to the system volume and complexity. In comparison, transmissive LCHOEs with polarization dependency can achieve focus adjustment with electronic driving, and their ultra-thinness also satisfies the small-form-factor requirement of VR headsets. The diffractive behavior of transmissive LCHOEs is often interpreted through the mechanism of the Pancharatnam-Berry phase (also known as the geometric phase) 100 . They are therefore often called Pancharatnam-Berry optical elements (PBOEs), and the corresponding lens component is referred to as a Pancharatnam-Berry lens (PBL).

Two main approaches are used to switch the focus of a PBL: active addressing and passive addressing. In active addressing, the PBL itself (made of LC) can be switched by an applied voltage (Fig. 6a ). The optical power of the liquid crystal PBLs can be turned on and off by controlling the voltage. Stacking multiple active PBLs can produce 2^N depths, where N is the number of PBLs. The drawback of using active PBLs, however, is the limited spectral bandwidth since their diffraction efficiency is usually optimized at a single wavelength. In passive addressing, the depth modulation is achieved through changing the polarization state of input light by a switchable half-wave plate (HWP) (Fig. 6b ). The focal length can therefore be switched thanks to the polarization sensitivity of PBLs. Although this approach has a slightly more complicated structure, the overall performance can be better than that of the active one, because the PBLs made of liquid crystal polymer can be designed to manifest high efficiency within the entire visible spectrum 101 , 102 .
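The 2^N scaling of stacked active PBLs can be sketched in a few lines: each lens contributes its optical power when switched on and nothing when off, so the combined powers enumerate all on/off states. The diopter values below are hypothetical illustrations, not figures from the text:

```python
from itertools import product

def achievable_powers(pbl_diopters):
    """Enumerate the combined optical power (in diopters) of every
    on/off state of a stack of active PBLs. N lenses -> 2^N states."""
    states = []
    for switches in product([0, 1], repeat=len(pbl_diopters)):
        total = sum(s * p for s, p in zip(switches, pbl_diopters))
        states.append(total)
    return sorted(states)

# hypothetical stack: powers in a binary-like progression so that
# all 2^3 = 8 combined powers are distinct
powers = achievable_powers([0.5, 1.0, 2.0])
print(len(powers))  # 8
```

Choosing the individual powers in a binary progression keeps all 2^N combined states distinct, which is the useful case for depth switching.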

figure 6

Working principles of a depth switching PBL module based on a active addressing and b passive addressing. c A four-depth multifocal display based on time multiplexing. d A two-depth multifocal display based on polarization multiplexing. Reproduced from c ref. 103 with permission from OSA Publishing and d ref. 104 with permission from OSA Publishing

With the PBL module, multifocal displays can be built using a time-multiplexing technique. Zhan et al. 103 demonstrated a four-depth multifocal display using two actively switchable liquid crystal PBLs (Fig. 6c ). The display is synchronized with the PBL module, which lowers the frame rate by the number of depths. Alternatively, multifocal displays can also be achieved by polarization multiplexing, as demonstrated by Tan et al. 104 . The basic principle is to adjust the polarization state of local pixels so that the image content on the two focal planes of a PBL can be arbitrarily controlled (Fig. 6d ). The advantage of polarization multiplexing is that it does not sacrifice the frame rate, but it can only support two planes because only two orthogonal polarization states are available. Still, it can be combined with time multiplexing to reduce the frame rate sacrifice by half. Naturally, varifocal displays can also be built with a PBL module. A fast-response 64-depth varifocal module with six PBLs has been demonstrated 105 .
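The frame-rate cost of the multiplexing schemes above can be summarized in a small helper. This is a sketch; the 240 Hz panel rate is an assumed example, not a figure from the text:

```python
def effective_frame_rate(panel_hz, depths, polarization_planes=1):
    """Per-image frame rate when `depths` focal planes are presented.
    Time multiplexing divides the panel rate by the number of depth
    slots; polarization multiplexing shows up to two depths at once,
    halving the number of time-multiplexed slots."""
    assert 1 <= polarization_planes <= 2, "only two orthogonal polarization states exist"
    time_slots = -(-depths // polarization_planes)  # ceiling division
    return panel_hz / time_slots

print(effective_frame_rate(240, 4))     # time multiplexing only: 60.0
print(effective_frame_rate(240, 4, 2))  # combined with polarization: 120.0
```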

The compact structure of the PBL module leads to a natural solution: integrating it with the above-mentioned pancake optics. A compact VR headset with dynamic depth modulation to solve the VAC is therefore possible in practice. Still, due to the inherent diffractive nature of PBLs, the PBL module faces the issue of chromatic dispersion of focal length. Compensating for the different focal depths of the RGB colors may require additional digital corrections in image rendering.

Architectures of AR displays

Unlike VR displays with a relatively fixed optical configuration, there exists a vast number of architectures for AR displays. Therefore, instead of following the narrative of tackling different challenges, a more appropriate way to review AR displays is to introduce each architecture separately and discuss its associated engineering challenges. An AR display usually consists of a light engine and an optical combiner. The light engine serves as the display image source, while the combiner delivers the displayed images to the viewer’s eye and, in the meantime, transmits the environment light. Some performance parameters like frame rate and power consumption are mainly determined by the light engine. Parameters like FoV, eyebox and MTF are primarily dependent on the combiner optics. Moreover, attributes like image brightness, overall efficiency, and form factor are influenced by both the light engine and the combiner. In this section, we will first discuss the light engine, where the latest advances in micro-LED microdisplays are reviewed and compared with existing microdisplay systems. Then, we will introduce two main types of combiners: the free-space combiner and the waveguide combiner.

Light engine

The light engine determines several essential properties of the AR system like image brightness, power consumption, frame rate, and basic etendue. Several types of microdisplays have been used in AR, including micro-LED, micro-organic-light-emitting-diode (micro-OLED), liquid-crystal-on-silicon (LCoS), digital micromirror device (DMD), and laser beam scanning (LBS) based on micro-electromechanical systems (MEMS). We will first describe the working principles of these devices and then analyze their performance. For those who are more interested in final performance parameters than details, Table 1 provides a comprehensive summary.

Working principles

Micro-LED and micro-OLED are self-emissive display devices. They are usually more compact than LCoS and DMD because no illumination optics is required. The fundamentally different material systems of LED and OLED lead to different approaches for achieving full-color displays. Due to the “green gap” in LEDs, red LEDs are manufactured on a different semiconductor material from green and blue LEDs. Therefore, achieving full-color display in high-resolution-density microdisplays is quite a challenge for micro-LEDs. Among the several solutions under research, two main approaches stand out. The first is to combine three separate red, green and blue (RGB) micro-LED microdisplay panels 106 . Three single-color micro-LED microdisplays are manufactured separately through flip-chip transfer technology. Then, the projected images from the three microdisplay panels are integrated by a trichroic prism (Fig. 7a ).

figure 7

a RGB micro-LED microdisplays combined by a trichroic prism. b QD-based micro-LED microdisplay. c Micro-OLED display with 4032 PPI. Working principles of d LCoS, e DMD, and f MEMS-LBS display modules. Reprinted from a ref. 106 with permission from IEEE, b ref. 108 with permission from Chinese Laser Press, c ref. 121 with permission from John Wiley and Sons, d ref. 124 with permission from Springer Nature, e ref. 126 with permission from Springer and f ref. 128 under the Creative Commons Attribution 4.0 License

Another solution is to assemble color-conversion materials like quantum dots (QDs) on top of blue or ultraviolet (UV) micro-LEDs 107 , 108 , 109 (Fig. 7b ). The quantum dot color filter (QDCF) on top of the micro-LED array is mainly fabricated by inkjet printing or photolithography 110 , 111 . However, the display performance of color-conversion micro-LED displays is restricted by the low color-conversion efficiency, blue light leakage, and color crosstalk. Extensive efforts have been made to improve the QD-micro-LED performance. To boost the QD conversion efficiency, structure designs like nanorings 112 and nanoholes 113 , 114 have been proposed, which utilize the Förster resonance energy transfer mechanism to transfer excessive excitons in the LED active region to the QDs. To prevent blue light leakage, methods using color filters or reflectors like distributed Bragg reflectors (DBRs) 115 and CLC films 116 on top of the QDCF have been proposed. Compared to color filters that absorb blue light, DBR and CLC films help recycle the leaked blue light to further excite the QDs. Other methods to achieve full-color micro-LED displays, like vertically stacked RGB micro-LED arrays 61 , 117 , 118 and monolithic wavelength-tunable nanowire LEDs 119 , are also under investigation.

Micro-OLED displays can be generally categorized into RGB OLED and white OLED (WOLED). RGB OLED displays have separate sub-pixel structures and optical cavities, which resonate at the desired wavelengths of the RGB channels, respectively. To deposit organic materials onto the separated RGB sub-pixels, a fine metal mask (FMM) that defines the deposition area is required. However, high-resolution RGB OLED microdisplays still face challenges due to the shadow effect during the deposition process through the FMM. To overcome this limitation, a silicon nitride film with a small shadow effect has been proposed as a mask for high-resolution deposition above 2000 PPI (9.3 µm) 120 .

WOLED displays use color filters to generate color images. Without the process of depositing patterned organic materials, a high resolution density up to 4000 PPI has been achieved 121 (Fig. 7c ). However, compared to RGB OLED, the color filters in WOLED absorb about 70% of the emitted light, which limits the maximum brightness of the microdisplay. To improve the efficiency and peak brightness of WOLED microdisplays, in 2019 Sony proposed applying newly designed cathodes (InZnO) and microlens arrays to OLED microdisplays, which increased the peak brightness from 1600 nits to 5000 nits 120 . In addition, OLEDWORKs has proposed a multi-stacked OLED 122 with optimized microcavities whose emission spectra match the transmission bands of the color filters. The multi-stacked OLED shows a higher luminous efficiency (cd/A), but also requires a higher driving voltage. Recently, by using meta-mirrors as bottom reflective anodes, patterned microcavities with more than 10,000 PPI have been obtained 123 . The high-resolution meta-mirrors generate different reflection phases in the RGB sub-pixels to achieve the desired resonant wavelengths. The narrow emission spectra from the microcavities help to reduce the loss from color filters or even eliminate the need for color filters.

LCoS and DMD are light-modulating displays that generate images by controlling the reflection of each pixel. For LCoS, the light modulation is achieved by manipulating the polarization state of the output light through independently controlling the liquid crystal reorientation in each pixel 124 , 125 (Fig. 7d ). Both phase-only and amplitude modulators have been employed. DMD is an amplitude modulation device. The modulation is achieved through controlling the tilt angle of bi-stable micromirrors 126 (Fig. 7e ). To generate an image, both LCoS and DMD rely on illumination systems, with LEDs or lasers as the light source. For LCoS, the generation of color images can be realized either by RGB color filters on the LCoS (with white LEDs) or by color-sequential addressing (with RGB LEDs or lasers). However, LCoS requires a linearly polarized light source. For an unpolarized LED light source, a polarization recycling system 127 is usually implemented to improve the optical efficiency. For a single-panel DMD, the color image is mainly obtained through color-sequential addressing. In addition, DMD does not require polarized light, so it generally exhibits a higher efficiency than LCoS if an unpolarized light source is employed.

MEMS-based LBS 128 , 129 utilizes micromirrors to directly scan RGB laser beams to form two-dimensional (2D) images (Fig. 7f ). Different gray levels are achieved by pulse width modulation (PWM) of the employed laser diodes. In practice, 2D scanning can be achieved either through a 2D scanning mirror or two 1D scanning mirrors with an additional focusing lens after the first mirror. The small size of the MEMS mirror offers a very attractive form factor. At the same time, the output image has a large depth-of-focus (DoF), which is ideal for projection displays. One shortcoming, though, is that the small system etendue often hinders its applications in some traditional display systems.

Comparison of light engine performance

There are several important parameters for a light engine, including image resolution, brightness, frame rate, contrast ratio, and form factor. The resolution requirement (>2K) is similar for all types of light engines. The improvement of resolution is usually accomplished through the manufacturing process. Thus, here we shall focus on the other three parameters.

Image brightness usually refers to the measured luminance of a light-emitting object. This measurement, however, may not be accurate for a light engine because the light from the engine only forms an intermediate image, which is not directly viewed by the user. On the other hand, focusing solely on the brightness of a light engine could be misleading for a wearable display system like AR. Nowadays, data projectors with thousands of lumens are available, but their power consumption is too high for a battery-powered wearable AR display. Therefore, a more appropriate way to evaluate a light engine’s brightness is to use the luminous efficacy (lm/W), measured by dividing the final output luminous flux (lm) by the input electric power (W). For a self-emissive device like micro-LED or micro-OLED, the luminous efficacy is directly determined by the device itself. However, for LCoS and DMD, the overall luminous efficacy should take into consideration the light source luminous efficacy, the efficiency of the illumination optics, and the efficiency of the employed spatial light modulator (SLM). For a MEMS LBS engine, the efficiency of the MEMS mirror can be considered unity, so the luminous efficacy basically equals that of the employed laser sources.

As mentioned earlier, each light engine has a different scheme for generating color images. Therefore, we separately list the luminous efficacy of each scheme for a more inclusive comparison. For micro-LEDs, the situation is more complicated because the EQE depends on the chip size. Based on previous studies 130 , 131 , 132 , 133 , we separately calculate the luminous efficacy for RGB micro-LEDs with chip size ≈ 20 µm. For the scheme of directly combining RGB micro-LEDs, the luminous efficacy is around 5 lm/W. For QD conversion with blue micro-LEDs, the luminous efficacy is around 10 lm/W under the assumption of 100% color conversion efficiency, which has been demonstrated using structure engineering 114 . For micro-OLEDs, the calculated luminous efficacy is about 4–8 lm/W 120 , 122 . However, the lifetime and EQE of blue OLED materials depend on the driving current. Continuously displaying an image with brightness higher than 10,000 nits may dramatically shorten the device lifetime. The reason we compare the light engines at 10,000 nits is that it is highly desirable to obtain 1000 nits for the displayed image in order to keep an ACR > 3:1 with a typical AR combiner whose optical efficiency is lower than 10%.

For an LCoS engine using a white LED as light source, the typical optical efficiency of the whole engine is around 10% 127 , 134 . Then the engine luminous efficacy is estimated to be 12 lm/W with a 120 lm/W white LED source. For a color sequential LCoS using RGB LEDs, the absorption loss from color filters is eliminated, but the luminous efficacy of RGB LED source is also decreased to about 30 lm/W due to lower efficiency of red and green LEDs and higher driving current 135 . Therefore, the final luminous efficacy of the color sequential LCoS engine is also around 10 lm/W. If RGB linearly polarized lasers are employed instead of LEDs, then the LCoS engine efficiency can be quite high due to the high degree of collimation. The luminous efficacy of RGB laser source is around 40 lm/W 136 . Therefore, the laser-based LCoS engine is estimated to have a luminous efficacy of 32 lm/W, assuming the engine optical efficiency is 80%. For a DMD engine with RGB LEDs as light source, the optical efficiency is around 50% 137 , 138 , which leads to a luminous efficacy of 15 lm/W. By switching to laser light sources, the situation is similar to LCoS, with the luminous efficacy of about 32 lm/W. Finally, for MEMS-based LBS engine, there is basically no loss from the optics so that the final luminous efficacy is 40 lm/W. Detailed calculations of luminous efficacy can be found in Supplementary Information .
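All of these engine-level estimates follow the same cascade: the source efficacy multiplied by the overall optical efficiency of the engine. A minimal sketch reproducing the numbers quoted above:

```python
def engine_efficacy(source_lm_per_w, optical_efficiency):
    """Luminous efficacy (lm/W) of a light engine: the source efficacy
    attenuated by the overall optical efficiency of the engine."""
    return source_lm_per_w * optical_efficiency

# values quoted in the text
print(round(engine_efficacy(120, 0.10), 1))  # white-LED LCoS: 12.0
print(round(engine_efficacy(40, 0.80), 1))   # laser LCoS or DMD: 32.0
print(round(engine_efficacy(40, 1.00), 1))   # MEMS-LBS, lossless optics: 40.0
```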

Another aspect of a light engine is the frame rate, which determines the volume of information it can deliver in a unit time. A high volume of information is vital for the construction of a 3D light field to solve the VAC issue. For micro-LEDs, the device response time is around several nanoseconds, which allows for visible light communication with bandwidth up to 1.5 Gbit/s 139 . For an OLED microdisplay, a fast OLED with ~200 MHz bandwidth has been demonstrated 140 . Therefore, the limitation on frame rate lies in the driving circuits for both micro-LED and OLED. Another fact concerning the driving circuit is the tradeoff between resolution and frame rate, as a higher-resolution panel means more scanning lines in each frame. So far, an OLED display with a 480 Hz frame rate has been demonstrated 141 . For LCoS, the frame rate is mainly limited by the LC response time. Depending on the LC material used, the response time is around 1 ms for nematic LC or 200 µs for ferroelectric LC (FLC) 125 . Nematic LC allows analog driving, which accommodates gray levels, typically with 8-bit depth. FLC is bistable, so PWM is used to generate gray levels. DMD is also a binary device. Its frame rate can reach 30 kHz, which is mainly constrained by the response time of the micromirrors. For MEMS-based LBS, the frame rate is limited by the scanning frequency of the MEMS mirrors. A frame rate of 60 Hz with around 1 K resolution already requires a resonance frequency of around 50 kHz, with a Q-factor up to 145,000 128 . A higher frame rate or resolution requires a higher Q-factor and larger laser modulation bandwidth, which may be challenging.

Form factor is another crucial aspect for the light engines of near-eye displays. For self-emissive displays, both micro-OLEDs and QD-based micro-LEDs can achieve full color with a single panel. Thus, they are quite compact. A micro-LED display with separate RGB panels naturally has a larger form factor. In applications requiring a direct-view full-color panel, the extra combining optics may also increase the volume. It needs to be pointed out, however, that the combining optics may not be necessary for some applications like waveguide displays, because the EPE process renders the system insensitive to the spatial positions of the input RGB images. Therefore, the form factor of using three RGB micro-LED panels is medium. For LCoS and DMD with RGB LEDs as the light source, the form factor would be larger due to the illumination optics. Still, if a lower luminous efficacy can be accepted, then a smaller form factor can be achieved by using simpler optics 142 . If RGB lasers are used, the collimation optics can be eliminated, which greatly reduces the form factor 143 . For MEMS-LBS, the form factor can be extremely compact due to the tiny size of the MEMS mirror and laser module.

Finally, contrast ratio (CR) also plays an important role affecting the observed images 8 . Micro-LEDs and micro-OLEDs are self-emissive, so their CR can be >10^6:1. For a laser beam scanner, the CR can also reach 10^6:1 because the laser can be turned off completely in the dark state. On the other hand, LCoS and DMD are reflective displays, and their CR is around 2000:1 to 5000:1 144 , 145 . It is worth pointing out that the CR of a display engine plays a significant role only in dark ambient conditions. As the ambient brightness increases, the ACR is mainly governed by the display’s peak brightness, as previously discussed.

The performance parameters of different light engines are summarized in Table 1 . Micro-LEDs and micro-OLEDs have similar levels of luminous efficacy. But micro-OLEDs still face the burn-in and lifetime issue when driven at a high current, which hinders their use as a high-brightness image source to some extent. Micro-LEDs are still under active development, and improvements in luminous efficacy from maturing fabrication processes can be expected. Both devices have nanosecond response times and can potentially achieve a high frame rate with a well-designed integrated circuit. The frame rate of the driving circuit ultimately determines the motion picture response time 146 . Their self-emissive feature also leads to a small form factor and high contrast ratio. LCoS and DMD engines have similar performance in luminous efficacy, form factor, and contrast ratio. In terms of light modulation, DMD can provide a higher 1-bit frame rate, while LCoS can offer both phase and amplitude modulation. MEMS-based LBS exhibits the highest luminous efficacy so far. It also exhibits an excellent form factor and contrast ratio, but the presently demonstrated 60-Hz frame rate (limited by the MEMS mirrors) could cause image flickering.

Free-space combiners

The term ‘free-space’ generally refers to the case where light propagates freely in space, as opposed to a waveguide that traps light through TIR. Regarding the combiner, it can be a partial mirror, as commonly used in AR systems based on traditional geometric optics. Alternatively, the combiner can also be a reflective HOE. The strong chromatic dispersion of HOEs necessitates the use of a laser source, which usually leads to a Maxwellian-type system.

Traditional geometric designs

Several systems based on geometric optics are illustrated in Fig. 8 . The simplest design uses a single freeform half-mirror 6 , 147 to directly collimate the displayed images to the viewer’s eye (Fig. 8a ). This design can achieve a large FoV (up to 90°) 147 , but the limited design freedom with a single freeform surface leads to image distortions, also called pupil swim 6 . The placement of the half-mirror also results in a relatively bulky form factor. Another design using so-called birdbath optics 6 , 148 is shown in Fig. 8b . Compared to the single-combiner design, the birdbath design has extra optics on the display side, which provides space for aberration correction. The integration of a beam splitter provides a folded optical path, which reduces the form factor to some extent. Another way to fold the optical path is to use a TIR prism. Cheng et al. 149 designed a freeform TIR-prism combiner (Fig. 8c ) offering a diagonal FoV of 54° and an exit pupil diameter of 8 mm. All the surfaces are freeform, which offers excellent image quality. To cancel the optical power for the transmitted environmental light, a compensator is added to the TIR prism. The whole system has a well-balanced performance between FoV, eyebox, and form factor. To free up the space in front of the viewer’s eye, relay optics can be used to form an intermediate image near the combiner 150 , 151 , as illustrated in Fig. 8d . Although this design offers more optical surfaces for aberration correction, the extra lenses also add to the system weight and form factor.

figure 8

a Single freeform surface as the combiner. b Birdbath optics with a beam splitter and a half mirror. c Freeform TIR prism with a compensator. d Relay optics with a half mirror. Adapted from c ref. 149 with permission from OSA Publishing and d ref. 151 with permission from OSA Publishing

Regarding approaches to solve the VAC issue, the most straightforward way is to integrate a tunable lens into the optical path, like a liquid lens 152 or an Alvarez lens 99 , to form a varifocal system. Alternatively, integral imaging 153 , 154 can also be used, by replacing the original display panel with the central depth plane of an integral imaging module. Integral imaging can also be combined with the varifocal approach to overcome the tradeoff between resolution and depth of field (DoF) 155 , 156 , 157 . However, the inherent tradeoff between resolution and view number still exists in this case.

Overall, AR displays based on traditional geometric optics have a relatively simple design with a decent FoV (~60°) and eyebox (8 mm) 158 . They also exhibit a reasonable efficiency. To measure the efficiency of an AR combiner, an appropriate metric is to divide the output luminance (unit: nit) by the input luminous flux (unit: lm), which we denote as the combiner efficiency. For a fixed input luminous flux, the output luminance, or image brightness, is related to the FoV and exit pupil of the combiner system. If we assume no light waste in the combiner system, then the maximum combiner efficiency for a typical diagonal FoV of 60° and exit pupil (10 mm square) is around 17,000 nit/lm (Eq. S2 ). To estimate the combiner efficiency of geometric combiners, we assume 50% half-mirror transmittance and 50% efficiency for the other optics. Then the final combiner efficiency is about 4200 nit/lm, which is a high value in comparison with waveguide combiners. Nonetheless, further shrinking the system size or improving the system performance ultimately encounters the etendue conservation issue. In addition, it is hard for AR systems with traditional geometric optics to achieve a configuration resembling normal flat glasses because the half-mirror has to be tilted to some extent.
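The ~4200 nit/lm figure follows directly from cascading the assumed losses onto the ideal limit. A minimal sketch using only the values quoted above (17,000 nit/lm lossless limit from Eq. S2, 50% half-mirror transmittance, 50% for the remaining optics):

```python
def combiner_efficiency(ideal_nit_per_lm, *loss_factors):
    """Combiner efficiency (nit/lm): the lossless limit for a given
    FoV and exit pupil, multiplied by each loss factor in the path."""
    eff = ideal_nit_per_lm
    for factor in loss_factors:
        eff *= factor
    return eff

# geometric combiner: 17,000 nit/lm ideal limit (60° FoV, 10 mm pupil),
# 50% half-mirror transmittance, 50% for the remaining optics
print(combiner_efficiency(17_000, 0.5, 0.5))  # 4250.0, i.e. about 4200 nit/lm
```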

Maxwellian-type systems

The Maxwellian view, proposed by James Clerk Maxwell (1860), refers to imaging a point light source in the eye pupil 159 . If the light beam is modulated in the imaging process, a corresponding image can be formed on the retina (Fig. 9a ). Because the point source is much smaller than the eye pupil, the image is always in focus on the retina irrespective of the focus of the eye lens. For applications in AR displays, the point source is usually a laser with narrow angular and spectral bandwidths. LED light sources can also be used to build a Maxwellian system by adding an angular filtering module 160 . Regarding the combiner, although in theory a half-mirror can also be used, HOEs are generally preferred because they offer an off-axis configuration that places the combiner in a position similar to that of eyeglasses. In addition, HOEs have a lower reflection of environment light, which provides a more natural appearance of the user behind the display.

figure 9

a Schematic of the working principle of Maxwellian displays. Maxwellian displays based on b SLM and laser diode light source and c MEMS-LBS with a steering mirror as additional modulation method. Generation of depth cues by d computational digital holography and e scanning of steering mirror to produce multiple views. Adapted from b, d ref. 143 and c, e ref. 167 under the Creative Commons Attribution 4.0 License

To modulate the light, an SLM like LCoS or DMD can be placed in the light path, as shown in Fig. 9b . Alternatively, an LBS system can also be used (Fig. 9c ), where the intensity modulation occurs in the laser diode itself. Besides operation in the normal Maxwellian view, both implementations offer additional degrees of freedom for light modulation.

For an SLM-based system, there are several options to arrange the SLM pixels 143 , 161 . Maimone et al. 143 demonstrated a Maxwellian AR display with two modes offering either a large-DoF Maxwellian view or a holographic view (Fig. 9d ), the latter often referred to as computer-generated holography (CGH) 162 . To show an always-in-focus image with a large DoF, the image can be directly displayed on an amplitude SLM, or via amplitude encoding for a phase-only SLM 163 . Alternatively, if a 3D scene with correct depth cues is to be presented, then optimization algorithms for CGH can be used to generate a hologram for the SLM. The generated holographic image exhibits the natural focus-and-blur effect like a real 3D object (Fig. 9d ). To better understand this feature, we need to again exploit the concept of etendue. The laser light source can be considered to have a very small etendue due to its excellent collimation. Therefore, the system etendue is provided by the SLM. The micron-sized pixel pitch of the SLM offers a certain maximum diffraction angle, which, multiplied by the SLM size, equals the system etendue. By varying the display content on the SLM, the final exit pupil size can be changed accordingly. In the case of a large-DoF Maxwellian view, the exit pupil size is small, accompanied by a large FoV. For the holographic display mode, the reduced DoF requires a larger exit pupil with dimensions close to the eye pupil, but the FoV is reduced accordingly due to etendue conservation. Another common concern with CGH is the computation time. Achieving a real-time CGH rendering flow with excellent image quality is quite a challenge. Fortunately, with recent advances in algorithms 164 and the introduction of convolutional neural networks (CNNs) 165 , 166 , this issue is being solved at an encouraging pace. Lately, Liang et al. 166 demonstrated a real-time CGH synthesis pipeline with high image quality.
The pipeline comprises an efficient CNN model to generate a complex hologram from a 3D scene and an improved encoding algorithm to convert the complex hologram to a phase-only one. An impressive frame rate of 60 Hz has been achieved on a desktop computing unit.
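As a concrete, simplified illustration of what CGH optimization entails (the classic Gerchberg-Saxton iteration, not the CNN pipeline of ref. 166), the loop below searches for a phase-only hologram whose far-field amplitude approximates a target image; the 64 × 64 square target is an arbitrary toy example:

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    """Minimal Gerchberg-Saxton loop: find a phase-only hologram whose
    far-field (FFT) amplitude approximates `target_amp`."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2 * np.pi, target_amp.shape)
    for _ in range(iterations):
        far = np.fft.fft2(np.exp(1j * phase))          # propagate to image plane
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)                       # back to hologram plane
        phase = np.angle(near)                         # keep phase only (phase-only SLM)
    return phase

# toy target: bright square on a dark background
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
holo = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * holo)))  # reconstructed amplitude (up to scale)
```

Each iteration propagates the phase-only field to the image plane, replaces its amplitude with the target while keeping the computed phase, and propagates back; only the phase is retained, matching what a phase-only SLM can display.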

For an LBS-based system, additional modulation can be achieved by integrating a steering module, as demonstrated by Jang et al. 167 . The steering mirror can shift the focal point (viewpoint) within the eye pupil, thereby effectively expanding the system etendue. When the steering process is fast and the image content is updated simultaneously, correct 3D cues can be generated, as shown in Fig. 9e . However, there exists a tradeoff between the number of viewpoints and the final image frame rate, because the total frames are equally divided among the viewpoints. Boosting the frame rate of MEMS-LBS systems by the number of views (e.g., 3 by 3) may be challenging.
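This viewpoint/frame-rate tradeoff is simple arithmetic; a sketch using the 60 Hz MEMS frame rate mentioned earlier and a hypothetical 3-by-3 view grid:

```python
def per_view_frame_rate(total_hz, views_x, views_y):
    """Total frames divided equally among a grid of viewpoints:
    each viewpoint refreshes at total / (views_x * views_y)."""
    return total_hz / (views_x * views_y)

# 3-by-3 viewpoints cut a 60 Hz stream to ~6.7 Hz per view;
# keeping 60 Hz per view would demand a 540 Hz source
print(round(per_view_frame_rate(60, 3, 3), 1))  # 6.7
print(per_view_frame_rate(540, 3, 3))           # 60.0
```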

Maxwellian-type systems offer several advantages. The system efficiency is usually very high because nearly all the light is delivered into the viewer’s eye. The system FoV is determined by the f /# of the combiner, and a large FoV (~80° in horizontal) can be achieved 143 . The VAC issue can be mitigated with an infinite-DoF image that removes the accommodation cue, or completely solved by generating a true 3D scene as discussed above. Despite these advantages, one major weakness of Maxwellian-type systems is the tiny exit pupil, or eyebox. A small deviation of the eye pupil location from the viewpoint results in the complete disappearance of the image. Therefore, expanding the eyebox is considered one of the most important challenges for Maxwellian-type systems.

Pupil duplication and steering

Methods to expand the eyebox can be generally categorized into pupil duplication 168 , 169 , 170 , 171 , 172 and pupil steering 9 , 13 , 167 , 173 . Pupil duplication simply generates multiple viewpoints to cover a large area. In contrast, pupil steering dynamically shifts the viewpoint position depending on the pupil location. Before reviewing detailed implementations of these two methods, it is worth discussing some of their general features. The multiple viewpoints in pupil duplication usually mean equally dividing the total light intensity. In each time frame, however, it is preferable that only one viewpoint enters the user’s eye pupil to avoid ghost images. This requirement therefore results in a reduced total light efficiency, while also requiring the viewpoint separation to be larger than the pupil diameter. At the same time, the separation should not be too large, to avoid gaps between viewpoints. Considering that the human pupil diameter changes in response to ambient illuminance, the design of the viewpoint separation needs special attention. Pupil steering, on the other hand, only produces one viewpoint in each time frame. It is therefore more light-efficient and free from ghost images. But determining the viewpoint position requires information about the eye pupil location, which demands a real-time eye-tracking module 9 . Another observation is that pupil steering can accommodate multiple viewpoints by its nature. Therefore, a pupil steering system can often be easily converted to a pupil duplication system by simultaneously generating all available viewpoints.
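The separation constraint described above can be made concrete with a toy 1D model. This is a sketch; the 4 mm pupil and the separations below are assumed illustrative values, not from the text:

```python
def viewpoints_in_pupil(pupil_center_mm, pupil_diameter_mm, separation_mm, n_views=5):
    """Count viewpoints of a 1D duplication array that fall inside the
    eye pupil. Exactly 1 is ideal; >1 risks ghost images (separation
    smaller than the pupil), 0 means the image disappears (a gap)."""
    half = pupil_diameter_mm / 2
    centers = [(i - n_views // 2) * separation_mm for i in range(n_views)]
    return sum(abs(c - pupil_center_mm) <= half for c in centers)

# 4 mm pupil, 5 mm separation: one viewpoint when centered,
# but a gap appears when the eye shifts between viewpoints
print(viewpoints_in_pupil(0.0, 4.0, 5.0))  # 1
print(viewpoints_in_pupil(2.5, 4.0, 5.0))  # 0 (gap)
print(viewpoints_in_pupil(1.5, 4.0, 3.0))  # 2 (ghost risk)
```

The model shows why the design is delicate: a separation large enough to avoid ghosts at the largest pupil size necessarily leaves gaps at the smallest.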

To generate multiple viewpoints, one can focus on modulating the incident light or the combiner. Recall that a viewpoint is the image of the light source. Duplicating or shifting the light source therefore achieves pupil duplication or steering accordingly, as illustrated in Fig. 10a . Several schemes of light modulation are depicted in Fig. 10b–e . An array of light sources can be generated with multiple laser diodes (Fig. 10b ). Turning on all of the sources, or only one of them, achieves pupil duplication or steering, respectively. A light source array can also be produced by projecting light on an array-type PPHOE 168 (Fig. 10c ). Apart from direct adjustment of the light sources, modulating light along the path can also effectively steer or duplicate the light sources. Using a mechanical steering mirror, the beam can be deflected 167 (Fig. 10d ), which is equivalent to shifting the light source position. Other devices like a grating or beam splitter can also serve as ray deflectors/splitters 170 , 171 (Fig. 10e ).

figure 10

a Schematic of duplicating (or shifting) the viewpoint by modulation of the incident light. Light modulation by b multiple laser diodes, c HOE lens array, d steering mirror and e grating or beam splitters. f Pupil duplication with multiplexed PPHOE. g Pupil steering with LCHOE. Reproduced from c ref. 168 under the Creative Commons Attribution 4.0 License, e ref. 169 with permission from OSA Publishing, f ref. 171 with permission from OSA Publishing and g ref. 173 with permission from OSA Publishing

Nonetheless, one problem of the light-source duplication/shifting methods for pupil duplication/steering is that the aberrations in peripheral viewpoints are often severe 168,173. The HOE combiner is usually recorded at one incident angle. For other incident angles with large deviations, considerable aberrations will occur, especially in an off-axis configuration. To solve this problem, the modulation can instead be applied to the combiner. While mechanical shifting of the combiner 9 can achieve continuous pupil steering, its integration into an AR display with a small form factor remains a challenge. Alternatively, the versatile functions of HOEs offer possible solutions for combiner modulation. Kim and Park 169 demonstrated a pupil duplication system with a multiplexed PPHOE (Fig. 10f). Wavefronts of several viewpoints can be recorded into one PPHOE sample. Three viewpoints with a separation of 3 mm were achieved. However, a slight degree of ghost image and gap can be observed in the viewpoint transition. For a PPHOE to achieve pupil steering, the multiplexed PPHOE needs to record different focal points with different incident angles. If each hologram has no angular crosstalk, then with an additional device to change the light incident angle, the viewpoint can be steered. Alternatively, Xiong et al. 173 demonstrated a pupil steering system with LCHOEs in a simpler configuration (Fig. 10g). The polarization-sensitive nature of the LCHOE makes it possible to control which LCHOE functions by using a polarization converter (PC). When the PC is off, the incident RCP light is focused by the right-handed LCHOE. When the PC is turned on, the RCP light is first converted to LCP light and passes through the right-handed LCHOE. It is then focused by the left-handed LCHOE into another viewpoint. Adding more viewpoints requires stacking more pairs of PCs and LCHOEs, which can be achieved in a compact manner with thin glass substrates. In addition, realizing pupil duplication only requires stacking multiple low-efficiency LCHOEs. For both PPHOEs and LCHOEs, because the hologram for each viewpoint is recorded independently, the aberrations can be eliminated.
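The polarization-routing logic of the LCHOE pupil-steering scheme above can be captured in a small truth-table model. This is a conceptual sketch of the behavior described in the text (ref. 173), not an optical simulation: each LCHOE focuses only light of its own circular handedness and transmits the orthogonal state, and the polarization converter (PC) swaps RCP and LCP when switched on.

```python
# Toy model of viewpoint selection by a polarization converter (PC)
# followed by a right-handed and a left-handed LCHOE in series.

def steer(pc_on, incident="RCP"):
    pol = incident
    if pc_on:                              # PC swaps circular handedness
        pol = "LCP" if pol == "RCP" else "RCP"
    if pol == "RCP":                       # right-handed LCHOE focuses RCP
        return "viewpoint 1 (right-handed LCHOE)"
    # LCP passes the right-handed LCHOE and is focused by the left-handed one
    return "viewpoint 2 (left-handed LCHOE)"

print(steer(pc_on=False))   # viewpoint 1 (right-handed LCHOE)
print(steer(pc_on=True))    # viewpoint 2 (left-handed LCHOE)
```

Each additional PC/LCHOE pair adds one more branch to this routing, which is why stacking pairs extends the number of addressable viewpoints.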

Regarding the system performance, in theory the FoV is not limited and can reach a large value, such as 80° in the horizontal direction 143. The definition of the eyebox is different from that of traditional imaging systems. For a single viewpoint, it has the same size as the eye pupil diameter. But thanks to the viewpoint steering/duplication capability, the total system eyebox can be expanded accordingly. The combiner efficiency for pupil-steering systems can reach 47,000 nit/lm for a FoV of 80° by 80° and a pupil diameter of 4 mm (Eq. S2). At such a high brightness level, eye safety could be a concern 174. For a pupil duplication system, the combiner efficiency is divided by the number of viewpoints. With a 4-by-4 viewpoint array, it can still reach 3000 nit/lm. Despite the potential gain of pupil duplication/steering, the situation becomes much more complicated when the rotation of the eyeball is considered 175. A perfect pupil-steering system requires 5D steering, which poses a challenge for practical implementation.
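The pupil-duplication efficiency figure quoted above follows from simple division: since each duplicated viewpoint carries an equal share of the light, the single-viewpoint combiner efficiency is divided by the number of viewpoints.

```python
# Checking the arithmetic behind the quoted figures: 47,000 nit/lm for
# pupil steering, split over a 4-by-4 viewpoint array for duplication.

single_viewpoint = 47_000            # nit/lm, pupil-steering value from the text
viewpoints = 4 * 4                   # 4-by-4 viewpoint array
duplicated = single_viewpoint / viewpoints
print(f"{duplicated:.0f} nit/lm")    # ~2900 nit/lm, consistent with the ~3000 quoted
```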

Pin-light systems

Recently, another type of display closely related to the Maxwellian view, called pin-light display 148,176, has been proposed. The general working principle of a pin-light display is illustrated in Fig. 11a. Each pin-light source is a Maxwellian view with a large DoF. When the eye pupil is no longer placed near the source point as in the Maxwellian view, each image source can only form an elemental view with a small FoV on the retina. However, if the image source array is arranged in a proper form, the elemental views can be integrated together to form a large FoV. Depending on the specific optical architecture, pin-light displays can take different forms of implementation. In the initial feasibility demonstration, Maimone et al. 176 used a side-lit waveguide plate as the point light source (Fig. 11b). The light inside the waveguide plate is extracted by etched divots, forming a pin-light source array. A transmissive SLM (LCD) is placed behind the waveguide plate to modulate the light intensity and form the image. The display has an impressive FoV of 110° thanks to the large scattering angle range. However, the direct placement of an LCD before the eye brings issues of insufficient resolution density and diffraction of the background light.

figure 11

a Schematic drawing of the working principle of pin-light display. b Pin-light display utilizing a pin-light source and a transmissive SLM. c An example of pin-mirror display with a birdbath optics. d SWD system with LBS image source and off-axis lens array. Reprinted from b ref. 176 under the Creative Commons Attribution 4.0 License and d ref. 180 with permission from OSA Publishing

To avoid these issues, architectures using pin-mirrors 177,178,179 have been proposed. In these systems, the final combiner is an array of tiny mirrors 178,179 or gratings 177, in contrast to their counterparts using large-area combiners. An exemplary system with a birdbath design is depicted in Fig. 11c. In this case, the pin-mirrors replace the original beam splitter in the birdbath and can thus shrink the system volume, while at the same time providing large-DoF pin-light images. Nonetheless, such a system may still face the etendue conservation issue. Meanwhile, the pin-mirrors cannot be too small, in order to prevent degradation of the resolution density due to diffraction. Therefore, their influence on the see-through background should also be considered in the system design.

To overcome the etendue conservation issue and improve the see-through quality, Xiong et al. 180 proposed another type of pin-light system exploiting the etendue expansion property of a waveguide, which is also referred to as a scanning waveguide display (SWD). As illustrated in Fig. 11d, the system uses an LBS as the image source. The collimated scanned laser rays are trapped in the waveguide and encounter an array of off-axis lenses. Upon each encounter, the lens out-couples the laser rays and forms a pin-light source. The SWD has the merits of good see-through quality and large etendue. A large FoV of 100° was demonstrated with the help of an ultra-low f/# lens array based on an LCHOE. However, some issues like insufficient image resolution density and image non-uniformity remain to be overcome. Further improvement of the system may require optimization of the Gaussian beam profile and an additional EPE module 180.

Overall, pin-light systems inherit the large DoF from the Maxwellian view. With an adequate number of pin-light sources, the FoV and eyebox can be expanded accordingly. Nonetheless, despite the different forms of implementation, a common issue of pin-light systems is the image uniformity. The overlapped region of elemental views has a higher light intensity than the non-overlapped region, which becomes even more complicated considering the dynamic change of pupil size. In theory, the displayed image can be pre-processed to compensate for the optical non-uniformity. But that would require knowledge of the precise pupil location (and possibly size) and therefore an accurate eye-tracking module 176. Regarding the system performance, pin-mirror systems modified from other free-space systems generally share a similar FoV and eyebox with the original systems. The combiner efficiency may be lower due to the small size of the pin-mirrors. The SWD, on the other hand, shares the large FoV and DoF with the Maxwellian view, and the large eyebox with waveguide combiners. The combiner efficiency may also be lower due to the EPE process.

Waveguide combiner

Besides free-space combiners, another common architecture in AR displays is the waveguide combiner. The term 'waveguide' indicates that the light is trapped in a substrate by the TIR process. One distinctive feature of a waveguide combiner is the EPE process, which effectively enlarges the system etendue. In the EPE process, a portion of the trapped light is coupled out of the waveguide at each TIR bounce. The effective eyebox is therefore enlarged. According to the features of the couplers, we divide waveguide combiners into two types: diffractive and achromatic, as described in the following.

Diffractive waveguides

As the name implies, diffractive-type waveguides use diffractive elements as couplers. The in-coupler is usually a diffraction grating, and the out-coupler in most cases is also a grating with the same period as the in-coupler, but it can also be an off-axis lens with a small curvature to generate an image with finite depth. Three major diffractive couplers have been developed: SRGs, photopolymer gratings (PPGs), and liquid crystal gratings (grating-type LCHOEs; also known as polarization volume gratings (PVGs)). Some general protocols for coupler design are that the in-coupler should have a relatively high efficiency and the out-coupler should have a uniform light output. A uniform light output usually requires a low-efficiency coupler, with extra degrees of freedom for local modulation of the coupling efficiency. Both the in-coupler and out-coupler should have an adequate angular bandwidth to accommodate a reasonable FoV. In addition, the out-coupler should also be optimized to avoid undesired diffractions, including the outward diffraction of TIR light and the diffraction of environment light into the user's eyes, which are referred to as light leakage and rainbow, respectively. Suppression of these unwanted diffractions should be considered in the optimization process of the waveguide design, along with performance parameters like efficiency and uniformity.

The basic working principles of diffractive waveguide-based AR systems are illustrated in Fig. 12. For the SRG-based waveguides 6,8 (Fig. 12a), the in-coupler can be a transmissive type or a reflective type 181,182. The grating geometry can be optimized for coupling efficiency with a large degree of freedom 183. For the out-coupler, a reflective SRG with a large slant angle to suppress the transmission orders is preferred 184. In addition, a uniform light output usually requires a gradient efficiency distribution in order to compensate for the decreased light intensity in the out-coupling process. This can be achieved by varying the local grating configurations like height and duty cycle 6. For the PPG-based waveguides 185 (Fig. 12b), the small angular bandwidth of a high-efficiency transmissive PPG prohibits its use as an in-coupler. Therefore, both the in-coupler and out-coupler are usually reflective types. The gradient efficiency can be achieved by space-variant exposure to control the local index modulation 186 or by local Bragg slant angle variation through freeform exposure 19. Due to the relatively small angular bandwidth of PPGs, achieving a decent FoV usually requires stacking two 187 or three 188 PPGs together for a single color. The PVG-based waveguides 189 (Fig. 12c) also prefer reflective PVGs as in-couplers, because transmissive PVGs are much more difficult to fabricate due to the LC alignment issue. In addition, the angular bandwidth of transmissive PVGs in the Bragg regime is also not large enough to support a decent FoV 29. For the out-coupler, the angular bandwidth of a single reflective PVG can usually support a reasonable FoV. To obtain a uniform light output, a polarization management layer 190 consisting of an LC layer with spatially variant orientations can be utilized. It offers an additional degree of freedom to control the polarization state of the TIR light. The diffraction efficiency can therefore be locally controlled thanks to the strong polarization sensitivity of the PVG.

figure 12

Schematics of waveguide combiners based on a SRGs, b PPGs and c PVGs. Reprinted from a ref. 85 with permission from OSA Publishing, b ref. 185 with permission from John Wiley and Sons and c ref. 189 with permission from OSA Publishing

The above discussion describes the basic working principle of 1D EPE. Nonetheless, for a 1D EPE to produce a large eyebox, the exit pupil in the unexpanded direction of the original image should be large. This poses design challenges for light engines. Therefore, a 2D EPE is favored for practical applications. To extend the EPE to two dimensions, two consecutive 1D EPEs can be used 191, as depicted in Fig. 13a. The first 1D EPE occurs at the turning grating, where the light is duplicated in the y direction and then turned into the x direction. The light rays then encounter the out-coupler and are expanded in the x direction. To better understand the 2D EPE process, the k-vector diagram (Fig. 13b) can be used. For light propagating in air with wavenumber k0, its possible k values in the x and y directions (kx and ky) fall within the circle with radius k0. When the light is trapped in TIR, kx and ky are outside the circle with radius k0 and inside the circle with radius nk0, where n is the refractive index of the substrate. kx and ky stay unchanged in the TIR process and are only changed in each diffraction process. The central red box in Fig. 13b indicates the possible k values within the system FoV. At the in-coupler, the grating k-vector is added to the k values, shifting them into the TIR region. The turning grating then applies another k-vector and shifts the k values to near the x axis. Finally, the k values are shifted by the out-coupler and return to the free propagation region in air. One observation is that the size of the red box is mostly limited by the width of the TIR band. To accommodate a larger FoV, the outer boundary of the TIR band needs to be expanded, which amounts to increasing the waveguide refractive index. Another important fact is that when kx and ky are near the outer boundary, the uniformity of the output light becomes worse. This is because the light propagation angle in the waveguide is then near 90°. The spatial distance between two consecutive TIRs becomes so large that the out-coupled beams are spatially separated to an unacceptable degree. The range of usable k values for practical applications is therefore further shrunk by this fact.
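The link between the TIR-band width and the FoV can be sketched with a one-dimensional version of the k-diagram argument: after the grating k-vector is added, the in-coupled FoV (normalized kx = sin θ) must fit inside the TIR band between 1 and n (in units of k0), so the maximum FoV satisfies 2 sin(FoV/2) < n − 1. This is a simplified 1D estimate, not a full design analysis.

```python
import math

# Back-of-envelope limit on the symmetric 1D FoV that fits the TIR band
# of a waveguide with refractive index n, per the k-diagram argument.

def max_fov_deg(n):
    """Largest symmetric 1D FoV (degrees, in air) fitting the TIR band."""
    half_sin = (n - 1) / 2          # from 2*sin(FoV/2) < n - 1
    if half_sin >= 1:
        return 180.0
    return 2 * math.degrees(math.asin(half_sin))

for n in (1.5, 1.8, 2.0):
    print(f"n = {n}: max FoV ~ {max_fov_deg(n):.0f} deg")
```

The trend matches the text: a standard-index substrate supports only a modest FoV, while raising the index toward 2.0 pushes the 1D limit toward 60°, consistent with current diffractive waveguides achieving roughly 50°.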

figure 13

a Schematic of 2D EPE based on two consecutive 1D EPEs. Gray/black arrows indicate light in air/TIR. Black dots denote TIRs. b k-diagram of the two-1D-EPE scheme. c Schematic of 2D EPE with a 2D hexagonal grating. d k-diagram of the 2D-grating scheme

Aside from two consecutive 1D EPEs, 2D EPE can also be directly implemented with a 2D grating 192. An example using a hexagonal grating is depicted in Fig. 13c. The hexagonal grating can provide k-vectors in six directions. In the k-diagram (Fig. 13d), after the in-coupling, the k values are distributed into six regions due to multiple diffractions. The out-coupling occurs simultaneously with the pupil expansion. Besides a concise out-coupler configuration, the 2D EPE scheme offers more degrees of design freedom than two 1D EPEs, because the local grating parameters can be adjusted in a 2D manner. The higher design freedom has the potential to reach a better output light uniformity, but at the cost of a higher computation demand for optimization. Furthermore, the unslanted grating geometry usually leads to large light leakage and possibly low efficiency. Adding slant to the geometry helps alleviate the issue, but the associated fabrication may be more challenging.

Finally, we discuss the generation of full-color images. One important point to clarify is that although diffraction gratings are used here, the final image generally has no color dispersion, even with a broadband light source like an LED. This can be easily understood in the 1D EPE scheme: the in-coupler and out-coupler have opposite k-vectors, which cancel each other's color dispersion. In the 2D EPE schemes, the k-vectors always form a closed loop from the in-coupled light to the out-coupled light, so the color dispersion likewise vanishes. The issue of using a single waveguide for full-color images actually lies in the FoV and light uniformity. The divergence of propagation angles for different colors results in varied out-coupling situations for each color. To be more specific, if the red and blue channels use the same in-coupler, the propagation angle for the red light is larger than that of the blue light. The red light in the peripheral FoV is therefore more susceptible to the large-angle non-uniformity issue mentioned above. To acquire a decent FoV and light uniformity, usually two or three layers of waveguides with different grating pitches are adopted.
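The dispersion-cancellation argument above can be verified with a one-line calculation: the normalized grating shift λ/Λ is wavelength-dependent, so each color propagates at a different TIR angle inside the waveguide, but because the out-coupler subtracts exactly what the in-coupler added, the exit angle in air is color-independent. The grating pitch and angle below are illustrative values.

```python
import math

# Exit angle in air after in-coupling and out-coupling by gratings with
# opposite k-vectors. kx is normalized to k0, so each grating shifts kx
# by +/- (wavelength / pitch); the shifts cancel for every wavelength.

def out_angle_deg(in_angle_deg, wavelength_nm, pitch_nm):
    kx = math.sin(math.radians(in_angle_deg))
    kx += wavelength_nm / pitch_nm     # in-coupler: +Kg (color-dependent)
    # ... light propagates by TIR; kx is unchanged ...
    kx -= wavelength_nm / pitch_nm     # out-coupler: -Kg cancels the shift
    return math.degrees(math.asin(kx))

for wl in (450, 530, 620):             # blue, green, red
    print(wl, out_angle_deg(10.0, wl, 380))   # same 10 degrees for every color
```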

Regarding the system performance, the eyebox is generally large enough (~10 mm) to accommodate different users' IPDs and alignment shifts during operation. A parameter of significant concern for a waveguide combiner is its FoV. From the k-vector analysis, we can conclude that the theoretical upper limit is determined by the waveguide refractive index. But the light/color uniformity also influences the effective FoV, beyond which the degradation of image quality becomes unacceptable. Current diffractive waveguide combiners generally achieve a FoV of about 50°. To further increase the FoV, a straightforward method is to use a waveguide with a higher refractive index. Another is to tile the FoV through direct stacking of multiple waveguides or by using polarization-sensitive couplers 79,193. As for the optical efficiency, a typical value for a diffractive waveguide combiner is around 50–200 nit/lm 6,189. In addition, waveguide combiners adopting grating out-couplers generate an image with a fixed depth at infinity. This leads to the VAC issue. To tackle VAC in waveguide architectures, the most practical way is to generate multiple depths and use a varifocal or multifocal driving scheme, similar to those mentioned for the VR systems. But adding more depths usually means stacking multiple layers of waveguides together 194. Considering the additional waveguide layers for the RGB colors, the final waveguide thickness would undoubtedly increase.

Other parameters specific to waveguides include light leakage, see-through ghost, and rainbow. Light leakage refers to out-coupled light that goes outwards to the environment, as depicted in Fig. 14a. Aside from the decreased efficiency, the leakage also brings the drawbacks of an unnatural "bright-eye" appearance of the user and a privacy issue. Optimization of the grating structure, like the geometry of an SRG, may reduce the leakage. See-through ghost is formed by consecutive in-coupling and out-couplings caused by the out-coupler grating, as sketched in Fig. 14b. After this process, a real object with finite depth may produce a ghost image shifted in both FoV and depth. Generally, an out-coupler with higher efficiency suffers more from see-through ghost. Rainbow is caused by the diffraction of environment light into the user's eye, as sketched in Fig. 14c. Color dispersion occurs in this case because there is no cancellation of the k-vector. Using the k-diagram, we can obtain a deeper insight into the formation of rainbow. Here, we take the EPE structure in Fig. 13a as an example. As depicted in Fig. 14d, after diffraction by the turning grating and the out-coupler grating, the k values are distributed in two circles that are shifted from the origin by the grating k-vectors. Some diffracted light can enter the see-through FoV and form a rainbow. To reduce rainbow, a straightforward way is to use a higher-index substrate. With a higher refractive index, the outer boundary of the k-diagram is expanded, which can accommodate larger grating k-vectors. The enlarged k-vectors would therefore "push" these two circles outwards, leading to a decreased overlap with the see-through FoV. Alternatively, an optimized grating structure can also help reduce the rainbow effect by suppressing the unwanted diffraction.
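The rainbow mechanism is the complement of the dispersion-cancellation case: environment light is diffracted only once, so the wavelength-dependent grating shift is not cancelled and each color exits at a different angle. A minimal sketch, with an illustrative grating pitch and incidence angle:

```python
import math

# Exit angle of environment light after a single diffraction by the
# out-coupler grating (normalized kx = sin(theta), shift = wavelength/pitch).
# With no second grating to cancel the shift, the angle is color-dependent.

def rainbow_angle_deg(in_angle_deg, wavelength_nm, pitch_nm):
    kx = math.sin(math.radians(in_angle_deg)) - wavelength_nm / pitch_nm
    return math.degrees(math.asin(kx))

for wl in (450, 530, 620):             # blue, green, red
    print(wl, round(rainbow_angle_deg(60.0, wl, 380), 1))   # angles spread apart
```

The spread of exit angles across the visible spectrum is what the user perceives as a rainbow streak in the see-through view.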

figure 14

Sketches of formations of a light leakage, b see-through ghost and c rainbow. d Analysis of rainbow formation with k-diagram

Achromatic waveguide

Achromatic waveguide combiners use achromatic elements as couplers. They have the advantage of realizing full-color images with a single waveguide. A typical example of an achromatic element is a mirror. A waveguide with partial mirrors as the out-coupler is often referred to as a geometric waveguide 6,195, as depicted in Fig. 15a. The in-coupler in this case is usually a prism, to avoid the unnecessary color dispersion that a diffractive element would introduce. The mirrors couple out the TIR light consecutively to produce a large eyebox, similar to a diffractive waveguide. Thanks to the excellent optical properties of mirrors, a geometric waveguide usually exhibits superior image quality, in terms of MTF and color uniformity, compared with its diffractive counterparts. Still, the spatially discontinuous configuration of the mirrors also results in gaps in the eyebox, which may be alleviated by using a dual-layer structure 196. Wang et al. designed a geometric waveguide display with five partial mirrors (Fig. 15b). It exhibits a remarkable FoV of 50° by 30° (Fig. 15c) and an exit pupil of 4 mm with a 1D EPE. To achieve 2D EPE, architectures similar to Fig. 13a can be used by integrating a turning mirror array as the first 1D EPE module 197. Unfortunately, the k-vector diagrams in Fig. 13b, d cannot be used here, because the k values in the x-y plane are no longer conserved in the in-coupling and out-coupling processes. But some general conclusions remain valid, such as a higher refractive index leading to a larger FoV and a gradient out-coupling efficiency improving the light uniformity.

figure 15

a Schematic of the system configuration. b Geometric waveguide with five partial mirrors. c Image photos demonstrating system FoV. Adapted from b , c ref. 195 with permission from OSA Publishing

The fabrication process of a geometric waveguide involves coating mirrors on cut-apart pieces and integrating them back together, which may result in a high cost, especially for the 2D EPE architecture. Another way to implement an achromatic coupler is to use a multiplexed PPHOE 198,199 to mimic the behavior of a tilted mirror (Fig. 16a). To understand the working principle, we can use the diagram in Fig. 16b. The law of reflection states that the angle of reflection equals the angle of incidence. Translated into k-vector language, this means the mirror can apply a k-vector of any length along its surface normal direction. Since the k-vector length of the reflected light is always equal to that of the incident light, the k-vector triangle is isosceles, and a simple geometric deduction shows that this reproduces the law of reflection. The behavior of a general grating, however, is very different. For simplicity we only consider the main diffraction order. The grating can only apply a k-vector with a fixed kx, due to the basic diffraction law. For light with a different incident angle, it needs to apply a different kz to produce diffracted light with a k-vector length equal to that of the incident light. For a grating with a broad angular bandwidth like an SRG, the range of kz is wide, forming a lengthy vertical line in Fig. 16b. For a PPG with a narrow angular bandwidth, the line is short and resembles a dot. If multiple of these tiny dots are distributed along the oblique line corresponding to a mirror, then the multiplexed PPGs can imitate the behavior of a tilted mirror. Such a PPHOE is sometimes referred to as a skew-mirror 198. In theory, to better imitate the mirror, a large number of multiplexed PPGs is preferred, with each PPG having a small index modulation δn. But this poses a bigger challenge in device fabrication. Recently, Utsugi et al. demonstrated an impressive skew-mirror waveguide based on 54 multiplexed PPGs (Fig. 16c, d). The display exhibits an effective FoV of 35° by 36°. In the peripheral FoV, some non-uniformity still exists (Fig. 16e) due to the out-coupling gap, which is an inherent feature of flat-type out-couplers.
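The k-vector picture of a mirror described above can be checked directly: a mirror applies a k-vector along its surface normal whose length is set so that the reflected |k| equals the incident |k| (the isosceles-triangle condition), which reproduces the law of reflection for every incident angle. This is the behavior that the many multiplexed PPGs of a skew-mirror collectively approximate. A minimal 2D sketch:

```python
import math

# Mirror with its surface normal along z: the applied k-vector is
# (0, -2*kz), i.e., purely along the normal, and |k| is preserved.

def mirror_reflect(kx, kz):
    return kx, -kz

k0 = 1.0
for theta_deg in (20, 45, 70):                      # angle from the normal
    t = math.radians(theta_deg)
    kx, kz = k0 * math.sin(t), -k0 * math.cos(t)    # incident ray toward mirror
    rx, rz = mirror_reflect(kx, kz)
    assert abs(math.hypot(rx, rz) - k0) < 1e-12     # |k| unchanged
    print(theta_deg, round(math.degrees(math.atan2(rx, rz)), 6))  # equals theta
```

A single grating, by contrast, would apply one fixed kx shift and could only satisfy the |k|-preservation condition at one incidence angle, which is why many narrowband PPGs must be multiplexed along the mirror line in Fig. 16b.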

figure 16

a System configuration. b Diagram demonstrating how multiplexed PPGs resemble the behavior of a mirror. Photos showing c the system and d image. e Picture demonstrating effective system FoV. Adapted from c – e ref. 199 with permission from ITE

Finally, it is worth mentioning that metasurfaces also show promise for delivering achromatic gratings 200,201 for waveguide couplers, owing to their versatile wavefront shaping capability. The mechanism of achromatic gratings is similar to that of the achromatic lenses discussed previously. However, the development of achromatic metagratings is still in its infancy. Much effort is needed to improve the optical efficiency for in-coupling, control the higher diffraction orders to eliminate ghost images, and enable large-size designs for EPE.

Generally, achromatic waveguide combiners exhibit a FoV and eyebox comparable to diffractive combiners, but with a higher efficiency. For a partial-mirror combiner, the combiner efficiency is around 650 nit/lm 197 (2D EPE). For a skew-mirror combiner, although the efficiency of the multiplexed PPHOE is relatively low (~1.5%) 199, the final combiner efficiency of the 1D EPE system is still high (>3000 nit/lm) due to multiple out-couplings.

Table 2 summarizes the performance of different AR combiners. When combining the luminous efficacy in Table 1 and the combiner efficiency in Table 2, we can make a comprehensive estimate of the total luminance efficiency (nit/W) for different types of systems. Generally, Maxwellian-type combiners with pupil steering have the highest luminance efficiency when partnered with laser-based light engines like laser-backlit LCoS/DMD or MEMS-LBS. Geometric optical combiners have well-balanced image performance, but further shrinking the system size remains a challenge. Diffractive waveguides have a relatively low combiner efficiency, which can be remedied by an efficient light engine like MEMS-LBS. Further development of the coupler and EPE scheme would also improve the system efficiency and FoV. Achromatic waveguides have a decent combiner efficiency. The single-layer design also enables a smaller form factor. With advances in the fabrication process, they may become a strong contender to the presently widely used diffractive waveguides.
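The combination described above is a straightforward product of units. As an illustration only (the efficacy value below is a placeholder, not a number taken from Table 1):

```python
# Total luminance efficiency (nit/W) = light-engine luminous efficacy
# (lm/W, Table 1) x combiner efficiency (nit/lm, Table 2).

def total_luminance_efficiency(efficacy_lm_per_W, combiner_nit_per_lm):
    return efficacy_lm_per_W * combiner_nit_per_lm    # nit/W

# e.g., a hypothetical 5 lm/W light engine paired with the 650 nit/lm
# partial-mirror combiner quoted earlier in this section
print(total_luminance_efficiency(5.0, 650))           # 3250.0 nit/W
```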

Conclusions and perspectives

VR and AR carry high expectations to revolutionize the way we interact with the digital world. Accompanying these expectations are the engineering challenges of squeezing a high-performance display system into a tightly packed module for daily wear. Although etendue conservation constitutes a great obstacle on this path, remarkable progress with innovative optics and photonics continues to take place. Ultra-thin optical elements like PPHOEs and LCHOEs provide alternative solutions to traditional optics. Their unique features of multiplexing capability and polarization dependency further expand the possibilities of novel wavefront modulation. At the same time, nanoscale-engineered metasurfaces/SRGs provide large design freedom to achieve novel functions beyond conventional geometric optical devices. Newly emerged micro-LEDs open an opportunity for compact microdisplays with high peak brightness and good stability. Further advances in device engineering and manufacturing processes are expected to boost the performance of metasurfaces/SRGs and micro-LEDs for AR and VR applications.

Data availability

All data needed to evaluate the conclusions in the paper are present in the paper. Additional data related to this paper may be requested from the authors.

Cakmakci, O. & Rolland, J. Head-worn displays: a review. J. Disp. Technol. 2, 199–216 (2006).

Zhan, T. et al. Augmented reality and virtual reality displays: perspectives and challenges. iScience 23, 101397 (2020).

Rendon, A. A. et al. The effect of virtual reality gaming on dynamic balance in older adults. Age Ageing 41, 549–552 (2012).

Choi, S., Jung, K. & Noh, S. D. Virtual reality applications in manufacturing industries: past research, present findings, and future directions. Concurrent Eng. 23, 40–63 (2015).

Li, X. et al. A critical review of virtual and augmented reality (VR/AR) applications in construction safety. Autom. Constr. 86, 150–162 (2018).

Kress, B. C. Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets (Bellingham: SPIE Press, 2020).

Cholewiak, S. A. et al. A perceptual eyebox for near-eye displays. Opt. Express 28, 38008–38028 (2020).

Lee, Y. H., Zhan, T. & Wu, S. T. Prospects and challenges in augmented reality displays. Virtual Real. Intell. Hardw. 1, 10–20 (2019).

Kim, J. et al. Foveated AR: dynamically-foveated augmented reality display. ACM Trans. Graph. 38, 99 (2019).

Tan, G. J. et al. Foveated imaging for near-eye displays. Opt. Express 26, 25076–25085 (2018).

Lee, S. et al. Foveated near-eye display for mixed reality using liquid crystal photonics. Sci. Rep. 10, 16127 (2020).

Yoo, C. et al. Foveated display system based on a doublet geometric phase lens. Opt. Express 28, 23690–23702 (2020).

Akşit, K. et al. Manufacturing application-driven foveated near-eye displays. IEEE Trans. Vis. Computer Graph. 25, 1928–1939 (2019).

Zhu, R. D. et al. High-ambient-contrast augmented reality with a tunable transmittance liquid crystal film and a functional reflective polarizer. J. Soc. Inf. Disp. 24, 229–233 (2016).

Lincoln, P. et al. Scene-adaptive high dynamic range display for low latency augmented reality. In Proc. 21st ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (ACM, San Francisco, CA, 2017).

Duerr, F. & Thienpont, H. Freeform imaging systems: Fermat's principle unlocks "first time right" design. Light Sci. Appl. 10, 95 (2021).

Bauer, A., Schiesser, E. M. & Rolland, J. P. Starting geometry creation and design method for freeform optics. Nat. Commun. 9, 1756 (2018).

Rolland, J. P. et al. Freeform optics for imaging. Optica 8, 161–176 (2021).

Jang, C. et al. Design and fabrication of freeform holographic optical elements. ACM Trans. Graph. 39, 184 (2020).

Gabor, D. A new microscopic principle. Nature 161, 777–778 (1948).

Kostuk, R. K. Holography: Principles and Applications (Boca Raton: CRC Press, 2019).

Lawrence, J. R., O'Neill, F. T. & Sheridan, J. T. Photopolymer holographic recording material. Optik 112, 449–463 (2001).

Guo, J. X., Gleeson, M. R. & Sheridan, J. T. A review of the optimisation of photopolymer materials for holographic data storage. Phys. Res. Int. 2012, 803439 (2012).

Jang, C. et al. Recent progress in see-through three-dimensional displays using holographic optical elements [Invited]. Appl. Opt. 55, A71–A85 (2016).

Xiong, J. H. et al. Holographic optical elements for augmented reality: principles, present status, and future perspectives. Adv. Photonics Res. 2, 2000049 (2021).

Tabiryan, N. V. et al. Advances in transparent planar optics: enabling large aperture, ultrathin lenses. Adv. Optical Mater. 9, 2001692 (2021).

Zanutta, A. et al. Photopolymeric films with highly tunable refractive index modulation for high precision diffractive optics. Optical Mater. Express 6, 252–263 (2016).

Moharam, M. G. & Gaylord, T. K. Rigorous coupled-wave analysis of planar-grating diffraction. J. Optical Soc. Am. 71, 811–818 (1981).

Xiong, J. H. & Wu, S. T. Rigorous coupled-wave analysis of liquid crystal polarization gratings. Opt. Express 28, 35960–35971 (2020).

Xie, S., Natansohn, A. & Rochon, P. Recent developments in aromatic azo polymers research. Chem. Mater. 5, 403–411 (1993).

Shishido, A. Rewritable holograms based on azobenzene-containing liquid-crystalline polymers. Polym. J. 42, 525–533 (2010).

Bunning, T. J. et al. Holographic polymer-dispersed liquid crystals (H-PDLCs). Annu. Rev. Mater. Sci. 30, 83–115 (2000).

Liu, Y. J. & Sun, X. W. Holographic polymer-dispersed liquid crystals: materials, formation, and applications. Adv. Optoelectron. 2008, 684349 (2008).

Xiong, J. H. & Wu, S. T. Planar liquid crystal polarization optics for augmented reality and virtual reality: from fundamentals to applications. eLight 1, 3 (2021).

Yaroshchuk, O. & Reznikov, Y. Photoalignment of liquid crystals: basics and current trends. J. Mater. Chem. 22, 286–300 (2012).

Sarkissian, H. et al. Periodically aligned liquid crystal: potential application for projection displays. Mol. Cryst. Liq. Cryst. 451, 1–19 (2006).

Komanduri, R. K. & Escuti, M. J. Elastic continuum analysis of the liquid crystal polarization grating. Phys. Rev. E 76, 021701 (2007).

Kobashi, J., Yoshida, H. & Ozaki, M. Planar optics with patterned chiral liquid crystals. Nat. Photonics 10, 389–392 (2016).

Lee, Y. H., Yin, K. & Wu, S. T. Reflective polarization volume gratings for high efficiency waveguide-coupling augmented reality displays. Opt. Express 25, 27008–27014 (2017).

Lee, Y. H., He, Z. Q. & Wu, S. T. Optical properties of reflective liquid crystal polarization volume gratings. J. Optical Soc. Am. B 36, D9–D12 (2019).

Xiong, J. H., Chen, R. & Wu, S. T. Device simulation of liquid crystal polarization gratings. Opt. Express 27, 18102–18112 (2019).

Czapla, A. et al. Long-period fiber gratings with low-birefringence liquid crystal. Mol. Cryst. Liq. Cryst. 502, 65–76 (2009).

Dąbrowski, R., Kula, P. & Herman, J. High birefringence liquid crystals. Crystals 3, 443–482 (2013).

Mack, C. Fundamental Principles of Optical Lithography: The Science of Microfabrication (Chichester: John Wiley & Sons, 2007).

Genevet, P. et al. Recent advances in planar optics: from plasmonic to dielectric metasurfaces. Optica 4 , 139–152 (2017).

Guo, L. J. Nanoimprint lithography: methods and material requirements. Adv. Mater. 19 , 495–513 (2007).

Park, J. et al. Electrically driven mid-submicrometre pixelation of InGaN micro-light-emitting diode displays for augmented-reality glasses. Nat. Photonics 15 , 449–455 (2021).

Khorasaninejad, M. et al. Metalenses at visible wavelengths: diffraction-limited focusing and subwavelength resolution imaging. Science 352 , 1190–1194 (2016).

Li, S. Q. et al. Phase-only transmissive spatial light modulator based on tunable dielectric metasurface. Science 364 , 1087–1090 (2019).

Liang, K. L. et al. Advances in color-converted micro-LED arrays. Jpn. J. Appl. Phys. 60 , SA0802 (2020).

Jin, S. X. et al. GaN microdisk light emitting diodes. Appl. Phys. Lett. 76 , 631–633 (2000).

Day, J. et al. Full-scale self-emissive blue and green microdisplays based on GaN micro-LED arrays. In Proc. SPIE 8268, Quantum Sensing and Nanophotonic Devices IX (SPIE, San Francisco, California, United States, 2012).

Huang, Y. G. et al. Mini-LED, micro-LED and OLED displays: present status and future perspectives. Light.: Sci. Appl. 9 , 105 (2020).

Parbrook, P. J. et al. Micro-light emitting diode: from chips to applications. Laser Photonics Rev. 15 , 2000133 (2021).

Day, J. et al. III-Nitride full-scale high-resolution microdisplays. Appl. Phys. Lett. 99 , 031116 (2011).

Liu, Z. J. et al. 360 PPI flip-chip mounted active matrix addressable light emitting diode on silicon (LEDoS) micro-displays. J. Disp. Technol. 9 , 678–682 (2013).

Zhang, L. et al. Wafer-scale monolithic hybrid integration of Si-based IC and III–V epi-layers—A mass manufacturable approach for active matrix micro-LED micro-displays. J. Soc. Inf. Disp. 26 , 137–145 (2018).

Tian, P. F. et al. Size-dependent efficiency and efficiency droop of blue InGaN micro-light emitting diodes. Appl. Phys. Lett. 101 , 231110 (2012).

Olivier, F. et al. Shockley-Read-Hall and Auger non-radiative recombination in GaN based LEDs: a size effect study. Appl. Phys. Lett. 111 , 022104 (2017).

Konoplev, S. S., Bulashevich, K. A. & Karpov, S. Y. From large-size to micro-LEDs: scaling trends revealed by modeling. Phys. Status Solidi (A) 215 , 1700508 (2018).

Li, L. Z. et al. Transfer-printed, tandem microscale light-emitting diodes for full-color displays. Proc. Natl Acad. Sci. USA 118 , e2023436118 (2021).

Oh, J. T. et al. Light output performance of red AlGaInP-based light emitting diodes with different chip geometries and structures. Opt. Express 26 , 11194–11200 (2018).

Shen, Y. C. et al. Auger recombination in InGaN measured by photoluminescence. Appl. Phys. Lett. 91 , 141101 (2007).

Wong, M. S. et al. High efficiency of III-nitride micro-light-emitting diodes by sidewall passivation using atomic layer deposition. Opt. Express 26 , 21324–21331 (2018).

Han, S. C. et al. AlGaInP-based Micro-LED array with enhanced optoelectrical properties. Optical Mater. 114 , 110860 (2021).

Wong, M. S. et al. Size-independent peak efficiency of III-nitride micro-light-emitting-diodes using chemical treatment and sidewall passivation. Appl. Phys. Express 12 , 097004 (2019).

Ley, R. T. et al. Revealing the importance of light extraction efficiency in InGaN/GaN microLEDs via chemical treatment and dielectric passivation. Appl. Phys. Lett. 116 , 251104 (2020).

Moon, S. W. et al. Recent progress on ultrathin metalenses for flat optics. iScience 23 , 101877 (2020).

Arbabi, A. et al. Efficient dielectric metasurface collimating lenses for mid-infrared quantum cascade lasers. Opt. Express 23 , 33310–33317 (2015).

Yu, N. F. et al. Light propagation with phase discontinuities: generalized laws of reflection and refraction. Science 334 , 333–337 (2011).

Liang, H. W. et al. High performance metalenses: numerical aperture, aberrations, chromaticity, and trade-offs. Optica 6 , 1461–1470 (2019).

Park, J. S. et al. All-glass, large metalens at visible wavelength using deep-ultraviolet projection lithography. Nano Lett. 19 , 8673–8682 (2019).

Yoon, G. et al. Single-step manufacturing of hierarchical dielectric metalens in the visible. Nat. Commun. 11 , 2268 (2020).

Lee, G. Y. et al. Metasurface eyepiece for augmented reality. Nat. Commun. 9 , 4562 (2018).

Chen, W. T. et al. A broadband achromatic metalens for focusing and imaging in the visible. Nat. Nanotechnol. 13 , 220–226 (2018).

Wang, S. M. et al. A broadband achromatic metalens in the visible. Nat. Nanotechnol. 13 , 227–232 (2018).

Lan, S. F. et al. Metasurfaces for near-eye augmented reality. ACS Photonics 6 , 864–870 (2019).

Fan, Z. B. et al. A broadband achromatic metalens array for integral imaging in the visible. Light.: Sci. Appl. 8 , 67 (2019).

Shi, Z. J., Chen, W. T. & Capasso, F. Wide field-of-view waveguide displays enabled by polarization-dependent metagratings. In Proc. SPIE 10676, Digital Optics for Immersive Displays (SPIE, Strasbourg, France, 2018).

Hong, C. C., Colburn, S. & Majumdar, A. Flat metaform near-eye visor. Appl. Opt. 56 , 8822–8827 (2017).

Bayati, E. et al. Design of achromatic augmented reality visors based on composite metasurfaces. Appl. Opt. 60 , 844–850 (2021).

Nikolov, D. K. et al. Metaform optics: bridging nanophotonics and freeform optics. Sci. Adv. 7 , eabe5112 (2021).

Tamir, T. & Peng, S. T. Analysis and design of grating couplers. Appl. Phys. 14 , 235–254 (1977).

Miller, J. M. et al. Design and fabrication of binary slanted surface-relief gratings for a planar optical interconnection. Appl. Opt. 36 , 5717–5727 (1997).

Levola, T. & Laakkonen, P. Replicated slanted gratings with a high refractive index material for in and outcoupling of light. Opt. Express 15 , 2067–2074 (2007).

Shrestha, S. et al. Broadband achromatic dielectric metalenses. Light.: Sci. Appl. 7 , 85 (2018).

Li, Z. Y. et al. Meta-optics achieves RGB-achromatic focusing for virtual reality. Sci. Adv. 7 , eabe4458 (2021).

Ratcliff, J. et al. ThinVR: heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays. IEEE Trans. Vis. Computer Graph. 26 , 1981–1990 (2020).

Wong, T. L. et al. Folded optics with birefringent reflective polarizers. In Proc. SPIE 10335, Digital Optical Technologies 2017 (SPIE, Munich, Germany, 2017).

Li, Y. N. Q. et al. Broadband cholesteric liquid crystal lens for chromatic aberration correction in catadioptric virtual reality optics. Opt. Express 29 , 6011–6020 (2021).

Bang, K. et al. Lenslet VR: thin, flat and wide-FOV virtual reality display using fresnel lens and lenslet array. IEEE Trans. Vis. Computer Graph. 27 , 2545–2554 (2021).

Maimone, A. & Wang, J. R. Holographic optics for thin and lightweight virtual reality. ACM Trans. Graph. 39 , 67 (2020).

Kramida, G. Resolving the vergence-accommodation conflict in head-mounted displays. IEEE Trans. Vis. Computer Graph. 22 , 1912–1931 (2016).

Zhan, T. et al. Multifocal displays: review and prospect. PhotoniX 1 , 10 (2020).

Shimobaba, T., Kakue, T. & Ito, T. Review of fast algorithms and hardware implementations on computer holography. IEEE Trans. Ind. Inform. 12 , 1611–1622 (2016).

Xiao, X. et al. Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]. Appl. Opt. 52 , 546–560 (2013).

Kuiper, S. & Hendriks, B. H. W. Variable-focus liquid lens for miniature cameras. Appl. Phys. Lett. 85 , 1128–1130 (2004).

Liu, S. & Hua, H. Time-multiplexed dual-focal plane head-mounted display with a liquid lens. Opt. Lett. 34 , 1642–1644 (2009).

Wilson, A. & Hua, H. Design and demonstration of a vari-focal optical see-through head-mounted display using freeform Alvarez lenses. Opt. Express 27 , 15627–15637 (2019).

Zhan, T. et al. Pancharatnam-Berry optical elements for head-up and near-eye displays [Invited]. J. Optical Soc. Am. B 36 , D52–D65 (2019).

Oh, C. & Escuti, M. J. Achromatic diffraction from polarization gratings with high efficiency. Opt. Lett. 33 , 2287–2289 (2008).

Zou, J. Y. et al. Broadband wide-view Pancharatnam-Berry phase deflector. Opt. Express 28 , 4921–4927 (2020).

Zhan, T., Lee, Y. H. & Wu, S. T. High-resolution additive light field near-eye display by switchable Pancharatnam–Berry phase lenses. Opt. Express 26 , 4863–4872 (2018).

Tan, G. J. et al. Polarization-multiplexed multiplane display. Opt. Lett. 43 , 5651–5654 (2018).

Lanman, D. R. Display systems research at facebook reality labs (conference presentation). In Proc. SPIE 11310, Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) (SPIE, San Francisco, California, United States, 2020).

Liu, Z. J. et al. A novel BLU-free full-color LED projector using LED on silicon micro-displays. IEEE Photonics Technol. Lett. 25 , 2267–2270 (2013).

Han, H. V. et al. Resonant-enhanced full-color emission of quantum-dot-based micro LED display technology. Opt. Express 23 , 32504–32515 (2015).

Lin, H. Y. et al. Optical cross-talk reduction in a quantum-dot-based full-color micro-light-emitting-diode display by a lithographic-fabricated photoresist mold. Photonics Res. 5 , 411–416 (2017).

Liu, Z. J. et al. Micro-light-emitting diodes with quantum dots in display technology. Light.: Sci. Appl. 9 , 83 (2020).

Kim, H. M. et al. Ten micrometer pixel, quantum dots color conversion layer for high resolution and full color active matrix micro-LED display. J. Soc. Inf. Disp. 27 , 347–353 (2019).

Xuan, T. T. et al. Inkjet-printed quantum dot color conversion films for high-resolution and full-color micro light-emitting diode displays. J. Phys. Chem. Lett. 11 , 5184–5191 (2020).

Chen, S. W. H. et al. Full-color monolithic hybrid quantum dot nanoring micro light-emitting diodes with improved efficiency using atomic layer deposition and nonradiative resonant energy transfer. Photonics Res. 7 , 416–422 (2019).

Krishnan, C. et al. Hybrid photonic crystal light-emitting diode renders 123% color conversion effective quantum yield. Optica 3 , 503–509 (2016).

Kang, J. H. et al. RGB arrays for micro-light-emitting diode applications using nanoporous GaN embedded with quantum dots. ACS Applied Mater. Interfaces 12 , 30890–30895 (2020).

Chen, G. S. et al. Monolithic red/green/blue micro-LEDs with HBR and DBR structures. IEEE Photonics Technol. Lett. 30 , 262–265 (2018).

Hsiang, E. L. et al. Enhancing the efficiency of color conversion micro-LED display with a patterned cholesteric liquid crystal polymer film. Nanomaterials 10 , 2430 (2020).

Kang, C. M. et al. Hybrid full-color inorganic light-emitting diodes integrated on a single wafer using selective area growth and adhesive bonding. ACS Photonics 5 , 4413–4422 (2018).

Geum, D. M. et al. Strategy toward the fabrication of ultrahigh-resolution micro-LED displays by bonding-interface-engineered vertical stacking and surface passivation. Nanoscale 11 , 23139–23148 (2019).

Ra, Y. H. et al. Full-color single nanowire pixels for projection displays. Nano Lett. 16 , 4608–4615 (2016).

Motoyama, Y. et al. High-efficiency OLED microdisplay with microlens array. J. Soc. Inf. Disp. 27 , 354–360 (2019).

Fujii, T. et al. 4032 ppi High-resolution OLED microdisplay. J. Soc. Inf. Disp. 26 , 178–186 (2018).

Hamer, J. et al. High-performance OLED microdisplays made with multi-stack OLED formulations on CMOS backplanes. In Proc. SPIE 11473, Organic and Hybrid Light Emitting Materials and Devices XXIV . Online Only (SPIE, 2020).

Joo, W. J. et al. Metasurface-driven OLED displays beyond 10,000 pixels per inch. Science 370 , 459–463 (2020).

Vettese, D. Liquid crystal on silicon. Nat. Photonics 4 , 752–754 (2010).

Zhang, Z. C., You, Z. & Chu, D. P. Fundamentals of phase-only liquid crystal on silicon (LCOS) devices. Light.: Sci. Appl. 3 , e213 (2014).

Hornbeck, L. J. The DMD TM projection display chip: a MEMS-based technology. MRS Bull. 26 , 325–327 (2001).

Zhang, Q. et al. Polarization recycling method for light-pipe-based optical engine. Appl. Opt. 52 , 8827–8833 (2013).

Hofmann, U., Janes, J. & Quenzer, H. J. High-Q MEMS resonators for laser beam scanning displays. Micromachines 3 , 509–528 (2012).

Holmström, S. T. S., Baran, U. & Urey, H. MEMS laser scanners: a review. J. Microelectromechanical Syst. 23 , 259–275 (2014).

Bao, X. Z. et al. Design and fabrication of AlGaInP-based micro-light-emitting-diode array devices. Opt. Laser Technol. 78 , 34–41 (2016).

Olivier, F. et al. Influence of size-reduction on the performances of GaN-based micro-LEDs for display application. J. Lumin. 191 , 112–116 (2017).

Liu, Y. B. et al. High-brightness InGaN/GaN Micro-LEDs with secondary peak effect for displays. IEEE Electron Device Lett. 41 , 1380–1383 (2020).

Qi, L. H. et al. 848 ppi high-brightness active-matrix micro-LED micro-display using GaN-on-Si epi-wafers towards mass production. Opt. Express 29 , 10580–10591 (2021).

Chen, E. G. & Yu, F. H. Design of an elliptic spot illumination system in LED-based color filter-liquid-crystal-on-silicon pico projectors for mobile embedded projection. Appl. Opt. 51 , 3162–3170 (2012).

Darmon, D., McNeil, J. R. & Handschy, M. A. 70.1: LED-illuminated pico projector architectures. Soc. Inf. Disp. Int. Symp . Dig. Tech. Pap. 39 , 1070–1073 (2008).

Essaian, S. & Khaydarov, J. State of the art of compact green lasers for mobile projectors. Optical Rev. 19 , 400–404 (2012).

Sun, W. S. et al. Compact LED projector design with high uniformity and efficiency. Appl. Opt. 53 , H227–H232 (2014).

Sun, W. S., Chiang, Y. C. & Tsuei, C. H. Optical design for the DLP pocket projector using LED light source. Phys. Procedia 19 , 301–307 (2011).

Chen, S. W. H. et al. High-bandwidth green semipolar (20–21) InGaN/GaN micro light-emitting diodes for visible light communication. ACS Photonics 7 , 2228–2235 (2020).

Yoshida, K. et al. 245 MHz bandwidth organic light-emitting diodes used in a gigabit optical wireless data link. Nat. Commun. 11 , 1171 (2020).

Park, D. W. et al. 53.5: High-speed AMOLED pixel circuit and driving scheme. Soc. Inf. Disp. Int. Symp . Dig. Tech. Pap. 41 , 806–809 (2010).

Tan, L., Huang, H. C. & Kwok, H. S. 78.1: Ultra compact polarization recycling system for white light LED based pico-projection system. Soc. Inf. Disp. Int. Symp. Dig. Tech. Pap. 41 , 1159–1161 (2010).

Maimone, A., Georgiou, A. & Kollin, J. S. Holographic near-eye displays for virtual and augmented reality. ACM Trans. Graph. 36 , 85 (2017).

Pan, J. W. et al. Portable digital micromirror device projector using a prism. Appl. Opt. 46 , 5097–5102 (2007).

Huang, Y. et al. Liquid-crystal-on-silicon for augmented reality displays. Appl. Sci. 8 , 2366 (2018).

Peng, F. L. et al. Analytical equation for the motion picture response time of display devices. J. Appl. Phys. 121 , 023108 (2017).

Pulli, K. 11-2: invited paper: meta 2: immersive optical-see-through augmented reality. Soc. Inf. Disp. Int. Symp . Dig. Tech. Pap. 48 , 132–133 (2017).

Lee, B. & Jo, Y. in Advanced Display Technology: Next Generation Self-Emitting Displays (eds Kang, B., Han, C. W. & Jeong, J. K.) 307–328 (Springer, 2021).

Cheng, D. W. et al. Design of an optical see-through head-mounted display with a low f -number and large field of view using a freeform prism. Appl. Opt. 48 , 2655–2668 (2009).

Zheng, Z. R. et al. Design and fabrication of an off-axis see-through head-mounted display with an x–y polynomial surface. Appl. Opt. 49 , 3661–3668 (2010).

Wei, L. D. et al. Design and fabrication of a compact off-axis see-through head-mounted display using a freeform surface. Opt. Express 26 , 8550–8565 (2018).

Liu, S., Hua, H. & Cheng, D. W. A novel prototype for an optical see-through head-mounted display with addressable focus cues. IEEE Trans. Vis. Computer Graph. 16 , 381–393 (2010).

Hua, H. & Javidi, B. A 3D integral imaging optical see-through head-mounted display. Opt. Express 22 , 13484–13491 (2014).

Song, W. T. et al. Design of a light-field near-eye display using random pinholes. Opt. Express 27 , 23763–23774 (2019).

Wang, X. & Hua, H. Depth-enhanced head-mounted light field displays based on integral imaging. Opt. Lett. 46 , 985–988 (2021).

Huang, H. K. & Hua, H. Generalized methods and strategies for modeling and optimizing the optics of 3D head-mounted light field displays. Opt. Express 27 , 25154–25171 (2019).

Huang, H. K. & Hua, H. High-performance integral-imaging-based light field augmented reality display using freeform optics. Opt. Express 26 , 17578–17590 (2018).

Cheng, D. W. et al. Design and manufacture AR head-mounted displays: a review and outlook. Light.: Adv. Manuf. 2 , 24 (2021).

Google Scholar  

Westheimer, G. The Maxwellian view. Vis. Res. 6 , 669–682 (1966).

Do, H., Kim, Y. M. & Min, S. W. Focus-free head-mounted display based on Maxwellian view using retroreflector film. Appl. Opt. 58 , 2882–2889 (2019).

Park, J. H. & Kim, S. B. Optical see-through holographic near-eye-display with eyebox steering and depth of field control. Opt. Express 26 , 27076–27088 (2018).

Chang, C. L. et al. Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective. Optica 7 , 1563–1578 (2020).

Hsueh, C. K. & Sawchuk, A. A. Computer-generated double-phase holograms. Appl. Opt. 17 , 3874–3883 (1978).

Chakravarthula, P. et al. Wirtinger holography for near-eye displays. ACM Trans. Graph. 38 , 213 (2019).

Peng, Y. F. et al. Neural holography with camera-in-the-loop training. ACM Trans. Graph. 39 , 185 (2020).

Shi, L. et al. Towards real-time photorealistic 3D holography with deep neural networks. Nature 591 , 234–239 (2021).

Jang, C. et al. Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina. ACM Trans. Graph. 36 , 190 (2017).

Jang, C. et al. Holographic near-eye display with expanded eye-box. ACM Trans. Graph. 37 , 195 (2018).

Kim, S. B. & Park, J. H. Optical see-through Maxwellian near-to-eye display with an enlarged eyebox. Opt. Lett. 43 , 767–770 (2018).

Shrestha, P. K. et al. Accommodation-free head mounted display with comfortable 3D perception and an enlarged eye-box. Research 2019 , 9273723 (2019).

Lin, T. G. et al. Maxwellian near-eye display with an expanded eyebox. Opt. Express 28 , 38616–38625 (2020).

Jo, Y. et al. Eye-box extended retinal projection type near-eye display with multiple independent viewpoints [Invited]. Appl. Opt. 60 , A268–A276 (2021).

Xiong, J. H. et al. Aberration-free pupil steerable Maxwellian display for augmented reality with cholesteric liquid crystal holographic lenses. Opt. Lett. 46 , 1760–1763 (2021).

Viirre, E. et al. Laser safety analysis of a retinal scanning display system. J. Laser Appl. 9 , 253–260 (1997).

Ratnam, K. et al. Retinal image quality in near-eye pupil-steered systems. Opt. Express 27 , 38289–38311 (2019).

Maimone, A. et al. Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources. In Proc. ACM SIGGRAPH 2014 Emerging Technologies (ACM, Vancouver, Canada, 2014).

Jeong, J. et al. Holographically printed freeform mirror array for augmented reality near-eye display. IEEE Photonics Technol. Lett. 32 , 991–994 (2020).

Ha, J. & Kim, J. Augmented reality optics system with pin mirror. US Patent 10,989,922 (2021).

Park, S. G. Augmented and mixed reality optical see-through combiners based on plastic optics. Inf. Disp. 37 , 6–11 (2021).

Xiong, J. H. et al. Breaking the field-of-view limit in augmented reality with a scanning waveguide display. OSA Contin. 3 , 2730–2740 (2020).

Levola, T. 7.1: invited paper: novel diffractive optical components for near to eye displays. Soc. Inf. Disp. Int. Symp . Dig. Tech. Pap. 37 , 64–67 (2006).

Laakkonen, P. et al. High efficiency diffractive incouplers for light guides. In Proc. SPIE 6896, Integrated Optics: Devices, Materials, and Technologies XII . (SPIE, San Jose, California, United States, 2008).

Bai, B. F. et al. Optimization of nonbinary slanted surface-relief gratings as high-efficiency broadband couplers for light guides. Appl. Opt. 49 , 5454–5464 (2010).

Äyräs, P., Saarikko, P. & Levola, T. Exit pupil expander with a large field of view based on diffractive optics. J. Soc. Inf. Disp. 17 , 659–664 (2009).

Yoshida, T. et al. A plastic holographic waveguide combiner for light-weight and highly-transparent augmented reality glasses. J. Soc. Inf. Disp. 26 , 280–286 (2018).

Yu, C. et al. Highly efficient waveguide display with space-variant volume holographic gratings. Appl. Opt. 56 , 9390–9397 (2017).

Shi, X. L. et al. Design of a compact waveguide eyeglass with high efficiency by joining freeform surfaces and volume holographic gratings. J. Optical Soc. Am. A 38 , A19–A26 (2021).

Han, J. et al. Portable waveguide display system with a large field of view by integrating freeform elements and volume holograms. Opt. Express 23 , 3534–3549 (2015).

Weng, Y. S. et al. Liquid-crystal-based polarization volume grating applied for full-color waveguide displays. Opt. Lett. 43 , 5773–5776 (2018).

Lee, Y. H. et al. Compact see-through near-eye display with depth adaption. J. Soc. Inf. Disp. 26 , 64–70 (2018).

Tekolste, R. D. & Liu, V. K. Outcoupling grating for augmented reality system. US Patent 10,073,267 (2018).

Grey, D. & Talukdar, S. Exit pupil expanding diffractive optical waveguiding device. US Patent 10,073, 267 (2019).

Yoo, C. et al. Extended-viewing-angle waveguide near-eye display with a polarization-dependent steering combiner. Opt. Lett. 45 , 2870–2873 (2020).

Schowengerdt, B. T., Lin, D. & St. Hilaire, P. Multi-layer diffractive eyepiece with wavelength-selective reflector. US Patent 10,725,223 (2020).

Wang, Q. W. et al. Stray light and tolerance analysis of an ultrathin waveguide display. Appl. Opt. 54 , 8354–8362 (2015).

Wang, Q. W. et al. Design of an ultra-thin, wide-angle, stray-light-free near-eye display with a dual-layer geometrical waveguide. Opt. Express 28 , 35376–35394 (2020).

Frommer, A. Lumus: maximus: large FoV near to eye display for consumer AR glasses. In Proc. SPIE 11764, AVR21 Industry Talks II . Online Only (SPIE, 2021).

Ayres, M. R. et al. Skew mirrors, methods of use, and methods of manufacture. US Patent 10,180,520 (2019).

Utsugi, T. et al. Volume holographic waveguide using multiplex recording for head-mounted display. ITE Trans. Media Technol. Appl. 8 , 238–244 (2020).

Aieta, F. et al. Multiwavelength achromatic metasurfaces by dispersive phase compensation. Science 347 , 1342–1345 (2015).

Arbabi, E. et al. Controlling the sign of chromatic dispersion in diffractive optics with dielectric metasurfaces. Optica 4 , 625–632 (2017).

Download references

Acknowledgements

The authors are indebted to Goertek Electronics for the financial support and Guanjun Tan for helpful discussions.

Author information

Authors and Affiliations

College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA

Jianghao Xiong, En-Lin Hsiang, Ziqian He, Tao Zhan & Shin-Tson Wu


Contributions

J.X. conceived the idea and initiated the project. J.X. mainly wrote the manuscript and produced the figures. E.-L.H., Z.H., and T.Z. contributed to parts of the manuscript. S.W. supervised the project and edited the manuscript.

Corresponding author

Correspondence to Shin-Tson Wu .

Ethics declarations

Conflict of interest.

The authors declare no competing interests.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Xiong, J., Hsiang, EL., He, Z. et al. Augmented reality and virtual reality displays: emerging technologies and future perspectives. Light Sci Appl 10 , 216 (2021). https://doi.org/10.1038/s41377-021-00658-8


Received : 06 June 2021

Revised : 26 September 2021

Accepted : 04 October 2021

Published : 25 October 2021

DOI : https://doi.org/10.1038/s41377-021-00658-8




  • Open access
  • Published: 09 September 2024

Theoretical foundations and implications of augmented reality, virtual reality, and mixed reality for immersive learning in health professions education

  • Maryam Asoodar   ORCID: orcid.org/0000-0001-6044-6790 1 ,
  • Fatemeh Janesarvatan   ORCID: orcid.org/0000-0001-7152-386X 1 , 3 ,
  • Hao Yu   ORCID: orcid.org/0000-0003-0473-2914 1 &
  • Nynke de Jong   ORCID: orcid.org/0000-0002-0821-8018 1 , 2  

Advances in Simulation volume 9, Article number: 36 (2024)


Background

Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR) are emerging technologies that can create immersive learning environments for health professions education. However, there is a lack of systematic reviews on how these technologies are used, what benefits they offer, and what instructional design models or theories guide their use.

This scoping review aims to provide a global overview of the usage and potential benefits of AR/VR/MR tools for the education and training of students and professionals in the healthcare domain, and to investigate whether any instructional design models or theories have been applied when using these tools.

Methodology

A systematic search was conducted in several electronic databases to identify peer-reviewed studies published between 2015 and 2020, inclusive, that reported on the use of AR/VR/MR in health professions education. The selected studies were coded and analyzed according to various criteria, such as domains of healthcare, types of participants, types of study design and methodologies, rationales behind the use of AR/VR/MR, types of learning and behavioral outcomes, and findings of the studies. The Morrison et al. (John Wiley & Sons, 2010) model was used as a reference to map the instructional design aspects of the studies.

Results

A total of 184 studies were included in the review. The majority focused on the use of VR, followed by AR and MR. The predominant healthcare domains using these technologies were surgery and anatomy, and the most common participants were medical and nursing students. The most frequent study designs and methodologies were usability studies and randomized controlled trials. The rationales cited most often for using AR/VR/MR were to overcome the limitations of traditional methods, to provide immersive and realistic training, and to improve students’ motivation and engagement. The learning and behavioral outcomes assessed most often were cognitive and psychomotor skills. The majority of studies reported positive or partially positive effects of AR/VR/MR on learning outcomes. Only a few studies explicitly mentioned the use of instructional design models or theories to guide the design and implementation of AR/VR/MR interventions.

Discussion and conclusion

The review revealed that AR/VR/MR are promising tools for enhancing health professions education, especially for training surgical and anatomical skills. However, more rigorous, theory-based research is needed to investigate the optimal design and integration of these technologies in the curriculum, and to explore their impact on other domains of healthcare and on other types of learning outcomes, such as affective and collaborative skills. The review also suggested that the Morrison et al. (John Wiley & Sons, 2010) model can be a useful framework to inform the instructional design of AR/VR/MR interventions, as it covers various elements and factors that need to be considered in the design process.

Introduction

Health professions education is a dynamic and complex field that requires constant adaptation to the changing needs of society and the health care system [ 20 , 71 ]. One of the emerging trends in this field is the use of virtual technologies, such as augmented reality (AR), virtual reality (VR), and mixed reality (MR), to enhance the teaching and learning of various skills and competencies. These technologies offer the potential to create immersive, interactive, and realistic environments that can facilitate learning through feedback, reflection, and practice, while reducing the risks and costs associated with real-life scenarios. However, the effective integration of these technologies into health professions education depends on the sound application of instructional design principles and theories, as well as the evaluation of learning outcomes and impacts. This scoping review aims to provide a comprehensive overview of the current state of the art of using AR/VR/MR in health professions education, with a focus on the instructional design aspects and the learning and behavioral outcomes reported in the literature.

Current educational methods in health professions training encompass various approaches. These include problem-based learning [ 70 ], team-based learning [ 1 ], eLearning (Van Nuland et al. [ 19 ]), and simulation-based medical education (SBME) [ 19 ]. Recently, virtual technologies have emerged in alignment with educational trends. Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) are increasingly utilized not only in general education but also specifically in health professions education (Van Nuland et al. [ 19 ]). These technologies offer a range of potential strategies for comprehensive and practical training, contributing to safer patient care [ 19 ].

In the field of healthcare, diverse AR/VR/MR applications are already in use to train healthcare professionals, primarily assisting in surgical procedures for enhanced navigation and visualization [ 9 , 62 ]. These applications aim to facilitate learning through immersion, reflection, feedback, and practice, all while mitigating the inherent risks of real-life experiences. Simulators play a pivotal role in introducing novel teaching methods for complex medical content [ 16 , 21 , 27 , 29 , 35 ]. They allow repeated practice across a wide spectrum of medical disciplines ([ 39 , 59 ]; Peterson et al. [ 61 ]) and may address challenges encountered in traditional health training programs.

VR creates an artificial environment where users interact with computer-generated sights and sounds. It immerses them in a simulated world using devices like headsets and motion sensors [ 69 ]. AR overlays interactive digital content onto the real environment, adding an extra layer through which the user experiences an immersive, interactive setting [ 13 , 27 ]. In MR, elements of VR and AR are combined, and computer graphics interact with elements of the real world, allowing users to interact with both virtual and physical elements simultaneously [ 29 ]. Extended Reality (XR) serves as an umbrella term that unifies Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) into a single category, reducing public confusion [ 6 ].

In short, AR/VR/MR technologies create digital environments that closely resemble real-world features. These environments enable trainees to learn tasks safely, whether within the bounds of realism or in entirely new experiences beyond traditional constraints [ 41 ]. Notably, in healthcare, the use of computer-enhanced learning has led to positive outcomes such as improved patient safety, enhanced training experiences, and cost reduction [ 34 ].

Investigating prior research in the field of AR/VR/MR in healthcare is important, as this reveals the current state of the field and offers guidance to researchers who are seeking suitable topics to explore and to educationists who want to improve the teaching and learning at their institutes [ 34 ]. Currently, there is a lack of insight into the effective application of AR/VR/MR in health professions education in particular, and into their added value when grounded in instructional design models or theories, as most reviews have focused on the technological aspects of AR/VR/MR for medical education or on comparisons with other methods.

This review takes a global perspective to identify the usage and potential benefits of including AR/VR/MR tools for the education and training of students and professionals in the health domain. Technologies are constantly evolving, and there is a need for an overview of current trends in an educational context. No review, however, was found that studied whether and how instructional design theories or models guided the use of AR/VR/MR for teaching in health professions education to optimize complex learning within a recent time frame. An important aspect in this regard is the theoretical grounding on which the use of methods, technological or otherwise, is based. Already four decades ago, Reigeluth [ 65 ] argued for the grounding of instructional design in sound theoretical models, stating that instruction is often ineffective and that knowledge about instructional design needs to be taken into account in order to remedy this problem. In other words, in addition to focusing on what is taught, how it is taught is also of critical importance [ 65 ]. Unfortunately, interventions are often insufficiently or inconsistently grounded in such theoretical models ([ 38 ]; Reigeluth & Carr-Chellman [ 66 ]).

By now, numerous instructional design models exist that can serve as the basis for determining how content should be taught [ 32 ]. The model that is of particular interest to the topic of this review is the model proposed by Morrison et al. [ 55 ]. This model provides instructional designers with flexibility in determining the design steps to be taken and places significant emphasis on selecting the delivery mode, including considering technology's potential role (Obizoba et al. [ 58 ]).

Starting from the essential elements to be taken into account when planning instructional design (learners, objectives, methods and evaluation), the Morrison et al. [ 55 ] model stipulates a circular design process consisting of nine elements: instructional problems, learner characteristics, task analysis, instructional objectives, content sequencing, instructional strategies, designing the message, instructional delivery, and evaluation instruments (Fig.  1 ).

figure 1

Instructional Design by Morrison et al. [ 55 ]

In Table  1 , the elements of this model have been set alongside the ADDIE model, showing analyze, design, develop, implement and evaluate. The design of the Morrison et al. [ 55 ] model is purposefully circular, signaling flexibility in the order of elements to work on rather than prescribing a rigid linear process. Furthermore, the nine elements are considered to be interdependent ([ 3 ]; Obizoba et al. [ 58 ]). Placed around these nine elements are formative evaluation and revision, as well as planning, project management, summative evaluation and support services [ 55 ].

The purpose of the study

There are a number of review studies that explore the application of AR/VR/MR in healthcare education and training. These studies primarily concentrate on evaluating the effectiveness of these technologies in learning [ 10 ], comparing their effectiveness with conventional or other teaching methods (as studied by [ 45 ]), and examining the prevailing trends in this field (as reviewed by [ 31 ]). Currently, there is a lack of insight into the application of instructional design models or instructional theories when integrating AR/VR/MR into education, particularly in health professions education. The first objective of this scoping review is to identify the usage and the potential benefits of including AR/VR/MR tools for the education and training of students and professionals in the health domain. Therefore, we provide a global overview of how AR/VR/MR tools are applied in health professions education and training with regard to the distribution over time, domains, methodologies, rationales, outcomes, and findings. The second objective is to investigate whether any instructional design models or instructional theories have been applied when using these tools in designing education. We mapped the results based on the Morrison et al. [ 55 ] model. No other review was found that had considered instructional design theories or models guiding the use of AR/VR/MR for teaching in health professions education within a recent time frame. To fill that gap in the literature, we located and then analyzed all peer-reviewed studies in the databases mentioned in the methods section. The purpose is to present a review of the literature on how AR/VR/MR were used in healthcare educational settings from 2015 until 2020. Therefore, with regard to the use of AR/VR/MR in healthcare education and training, the following research questions (RQ) are addressed:

RQ1: What is the distribution over time of the selected studies?

RQ2: Which domains of healthcare and what types of participants are addressed?

RQ3: What types of (instructional) design/methodologies are used (instructional design aspects and educational theories), and how do they map onto the Morrison et al. [ 55 ] model?

RQ4: What is the rationale behind the exposure to AR/VR/MR?

RQ5: What types of learning and behavioral outcomes (based on Bloom's taxonomy) are encouraged?

RQ6: What are the findings of the selected studies?

In this study, we have conducted a scoping review following the framework proposed by Arksey and O'Malley [ 7 ]. The purpose of this scoping review is to map the existing literature on the topic and to identify key concepts, sources of evidence, and gaps in the research. The process began with identifying the research question, followed by identifying relevant studies through a comprehensive search of databases such as PubMed and Web of Science, as well as other publishers' platforms. An iterative selection process was used to determine the inclusion and exclusion criteria, and the selected studies were charted based on their key characteristics and findings. The results were then gathered, summarized, and reported.

This scoping review specifically aims to explore the benefits of using AR/VR/MR tools in health education and training. It also investigates the application of instructional design models or theories in designing education with these tools.

Databases searched

The electronic databases searched in this review were a set of databases accessible through Libsearch, the search engine available through our University library. The databases available through this search engine are: WorldCat.org, Web of Science, MEDLINE, SpringerLink, ScienceDirect, Wiley Online Library, Taylor and Francis Journals, ERIC, BMJ Journals, and Sage Journals.

Our research focused on papers published from 2015 through the end of 2020. We selected only peer-reviewed papers written in English.

Our data collection was completed before the COVID-19 outbreak, and due to the significant impact of the pandemic on the nature of studies conducted, we deliberately excluded papers published in 2021 and beyond. A preliminary review revealed that the methodologies of studies during this period underwent significant changes. This would have necessitated substantial modifications to our research questions. Consequently, we made the decision to confine our research to the year 2020.

Search terms

The databases were searched using key terms related to virtual, augmented and mixed-reality as well as terms for possible usage of these devices in medicine, health and bio-medical education. The following search string was used:

[("virtual reality" OR "augment* reality" OR "mixed reality") AND (health OR health science* OR medicine OR "medical science*" OR biomed* OR "biomed* science" OR "life science*")].

Search for education and training in medical, biomedical and health sciences

The search returned a large number of papers n  = 5629 (Fig.  2 ). This set was further screened by manually going through all titles and abstracts for relevant terminology like “AR, VR or MR,” “training,” “education,” “medical,” “biomedical,” and “health sciences”. Papers selected on this basis were collated and duplicates removed ( n  = 414).

figure 2

Flow chart showing the screening process
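The two-step manual screening described above (a keyword check on titles and abstracts, followed by duplicate removal) can be sketched in code. This is an illustrative sketch under our own assumptions (hypothetical record fields, a condensed term list, and DOI-based de-duplication), not the review's actual tooling:

```python
import re

# Condensed, hypothetical term list for title/abstract screening.
TERMS = ["ar", "vr", "mr", "virtual reality", "augmented reality",
         "mixed reality", "training", "education", "medical",
         "biomedical", "health sciences"]
# Word boundaries keep short acronyms like "ar" from matching inside words.
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, TERMS)) + r")\b",
                     re.IGNORECASE)

def is_relevant(record):
    """Keep a record if its title or abstract mentions any key term."""
    return bool(PATTERN.search(record["title"] + " " + record["abstract"]))

def screen(records):
    """Filter for relevance, then drop duplicates (here: by DOI)."""
    seen, kept = set(), []
    for rec in filter(is_relevant, records):
        if rec["doi"] not in seen:
            seen.add(rec["doi"])
            kept.append(rec)
    return kept
```

Applied to the full search result, a procedure of this kind would reduce the initial set (n = 5629) to the de-duplicated relevant set (n = 414), subject to the same manual judgment the authors describe.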

Selection of papers for inclusion in the review

To select the appropriate studies for inclusion in the review, the full papers ( n  = 414) and the additional papers ( n  = 20) retrieved via cross-referencing were screened and a number of further criteria were applied. Selected papers had to (a) include empirical evidence related to the use of AR/VR/MR in education and training, and (b) concern training in the field of medicine, biomedical sciences or health sciences. The PICOS (population, intervention, comparison, outcome, study design) framework [ 54 ] guided the inclusion and exclusion criteria of this study (Table  2 ).

Coding of selected papers

The papers selected on the basis of the inclusion criteria were coded. To summarize, papers were coded with respect to:

the publication year;

the type of participants addressed in the study;

which one of the AR/VR/MR was used for teaching/learning;

the country and continent where the first author of the paper was based;

behavioral outcomes based on Bloom’s taxonomy: cognitive, affective or psychomotor skills;

the domain of healthcare in which AR/VR/MR was used: neurosurgery, endoscopic surgery, etc.;

the type of (instructional) design/methodologies used (instructional design aspects and educational theories);

the rationale behind using AR/VR/MR for training: whether AR/VR/MR could offer an environment that overcomes current limitations, for example, limitations on teaching surgical steps, or on teaching and practicing psychomotor and cognitive skills;

variables related to the study: the research design used in the study, categorized as randomized controlled trial (RCT), quasi-experimental, survey, correlational or qualitative design; and

the findings of the selected studies.
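For illustration, the coding scheme above can be represented as a simple record type, one instance per included paper. The field names below are our own shorthand for the bullet points, not the authors' actual codebook:

```python
from dataclasses import dataclass

@dataclass
class CodedPaper:
    year: int            # publication year
    participants: str    # e.g. "medical students", "residents"
    technology: str      # "AR", "VR" or "MR"
    country: str         # country of the first author
    continent: str       # continent of the first author
    bloom_outcome: str   # "cognitive", "affective" or "psychomotor"
    domain: str          # e.g. "neurosurgery", "endoscopic surgery"
    design: str          # instructional design aspects / educational theories
    rationale: str       # reason for using AR/VR/MR
    study_design: str    # "RCT", "quasi-experimental", "survey", ...
    findings: str        # reported findings
```

A flat record type like this makes the later cross-tabulations (domain by population, design by technology mode) straightforward to compute.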

Quality of the studies

Papers were assessed according to the following criteria: (1) quality of research design: RCT, quasi-experimental controlled study, or pre-test/post-test design (an explicit research design had to be present, not just reports on a tool); (2) relevance of the aim of the study for using AR/VR/MR; and (3) findings of the study (did the findings of the paper really relate to education or some sort of learning? Were the participants really doing something to learn, rather than, for example, only testing the tool? Was it used to teach someone to do something?).

Consistency and reliability of coding

All authors took part in the identification, coding and quality coding of papers but, for consistency, one of the researchers (MA) oversaw all the coding. A first sample of articles was taken to discuss and align the coding. Subsequently, regular meetings were scheduled between the authors to discuss the papers and their coding.

The systematic search identified a total of 5629 articles (Fig.  2 ). After removing duplicates, 4999 articles were screened for relevance based on title and abstract. As a result, 4585 articles were excluded, leaving 414 articles for full-text review. A cross-reference search identified 20 more eligible articles. After full-text review, a total of 184 articles remained relevant for inclusion.
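Written out as arithmetic, the screening counts reported above are internally consistent:

```python
# Screening flow: identification through inclusion (counts from the text).
identified = 5629                 # articles returned by the search
after_dedup = 4999                # screened on title and abstract
excluded = 4585                   # removed at the title/abstract stage
full_text = after_dedup - excluded            # left for full-text review
cross_referenced = 20             # added via cross-reference search
reviewed_in_full = full_text + cross_referenced
included = 184                    # final set after full-text review
print(full_text, reviewed_in_full, included)
```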

figure 3

Distribution of the studies from 2015 until the end of 2020

Distribution of studies over time

Overall, the number of studies including AR/VR/MR in health education seems to be increasing. A total of 17 (9%) of the 184 articles included in our review were published in 2015; 24 (13%) were published in 2016, and 23 (12%) in 2017. In 2018, 35 (19%) articles were published, in 2019, 34 (18%), and in 2020 there were 51 (27%) articles. Figure  3 depicts a rise in the number of studies per year from 2015 until the end of 2020.
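The yearly counts above sum to the 184 included articles, and the quoted percentages follow directly from them (the text truncates rather than rounds, e.g. 51/184 ≈ 27.7% is reported as 27%):

```python
# Yearly counts of included studies, as reported in the text.
counts = {2015: 17, 2016: 24, 2017: 23, 2018: 35, 2019: 34, 2020: 51}
total = sum(counts.values())                       # 184
# Truncated percentage share per year, matching the figures in the text.
shares = {year: int(100 * n / total) for year, n in counts.items()}
```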

Domains of healthcare and types of participants

Most research studies primarily explored the application of AR/VR/MR technology in the medical field, specifically for training medical and nursing students in surgical procedures and anatomy courses. However, a limited number of studies investigated other healthcare domains. For instance, twelve studies specifically examined dentistry, while seven studies included biomedical and health sciences students alongside medical students. In the studies focusing on medicine, the majority of uses for AR/VR/MR in teaching was for training surgical skills (Fig.  4 ). Most of the surgeries were related to minimally invasive procedures, like endoscopy and laparoscopy. When counting all the research related to AR/VR/MR in surgery, which also included research in fields like endoscopy and laparoscopy, we ended up with 69 papers (Fig.  4 ). A second common use for AR/VR/MR in medical education was to teach anatomy ( n  = 31 papers, Fig.  4 ). The focus of these studies was on neuroanatomy, 3D learning structures, and improving visual ability in anatomical understanding.

figure 4

Domains of healthcare—categories mentioned here are not mutually exclusive; they can overlap and intersect with one another

In the comprehensive analysis of the studies included, a diverse spectrum of student levels is addressed. This encompasses bachelor students, master’s students, residents, and specialized continuous education. Notably, certain studies also delve into student training programs and multi-level training sessions, which involve a combination of students, residents, and expert specialists (Table  3 ).

The bubble chart in Fig.  5 links study domains and population. As evident, most studies are related to training residents’ surgery skills ( n  = 32) and to teaching anatomy to bachelor students ( n  = 24). The coded number of papers based on the domain and population can be found in Appendix 1, Table A. The reference to the codes can be found in Appendix 2.

figure 5

A visual representation of the study domains and population

Types of Study design/methodologies

For consistency, we took the terms AR, VR or MR used by the authors of the original papers to make our classification. As shown in Fig.  6 , the large majority of studies ( n  = 149; 81%) focused on VR, followed by AR ( n  = 25; 14%) and MR ( n  = 10; 5%).

figure 6

Distribution of research focus across VR, AR, and MR

We divided the articles into studies with qualitative, quantitative or mixed designs. The large majority of studies used a quantitative methodology ( n  = 152; 83%), followed by mixed-methods designs ( n  = 22; 12%); there were only a very small number of qualitative studies ( n  = 10; 5%) (Fig.  7 ).

figure 7

Distribution of research methodologies: quantitative, mixed-methods, and qualitative

Figure  8 shows that most studies focused on usability aspects of AR/VR/MR ( n  = 53, 29%). Their purpose was typically to see whether these tools could be used for a particular purpose, and mostly to check all the functions of the tool. The second most common study methodology was the randomized controlled trial (RCT) ( n  = 41, 22%).

figure 8

Types of study methodologies in percentages

To plot the study design against the mode of technology used, Table B in Appendix 1 was prepared. The references to the coded papers can be found in Appendix 2. Figure  9 shows that 123 papers used VR in quantitative study designs, 18 papers used AR in quantitative study designs, and 17 studies used VR in mixed-methods research designs.

figure 9

Distribution of study designs by technology mode

To plot the study methodology against the mode of technology used, Table C in Appendix 1 was prepared. The coded papers in Table C can be found in Appendix 2. Figure  10 shows that 40 studies used VR in usability studies, 34 studies used VR in RCT research methodologies, and there were 30 experimental studies with VR.

figure 10

Plotting distribution of study methodologies by technology mode

Instructional design aspects and educational theories used in these studies

Looking at instructional design and educational theories in combination with AR/VR/MR, we see that only 44 of the 184 studies mentioned the theories or instructional design models used to design their teaching and learning. Interestingly, some studies specifically investigated usability aspects of AR, VR, or MR in medical education but did not incorporate any explicit educational design theory. This underscores the need for intentional integration of instructional design principles and educational theories when implementing these immersive technologies in educational settings. Table  4 displays the different theories that some studies applied to their educational design; these theories were explicitly mentioned by the authors of the studies (Table  4 ). Among them, self-directed learning, competency-based learning, PBL, and evidence-based learning were most commonly used.

In Table  5 , we tried to link the existing theories to the underlying elements of an instructional design theory. Here the Morrison et al. [ 55 ] model was a good match. The purpose was to show how an instructional design model, in this case the different elements of the Morrison et al. [ 55 ] model, could be used as guidelines in designing courses with AR/VR/MR in medical education. We especially looked at the design element in the Morrison et al. [ 55 ] model, hoping to reveal some guidelines for including instructional design aspects when planning to use AR/VR/MR in medical education. While Table  5 clearly indicates that only a limited number of studies have taken instructional design elements into account, it is worth noting that a small subset of studies did indeed consider these aspects. For example, code 141 is a study by Chheang et al. [ 15 ], which relies on instructional strategies like problem-based learning, hoping that these strategies would open new directions for operating room training during surgery. We also see, in the study by Liaw et al. [ 47 ] (code 113), that VR has been used as an instructional strategy for collaborative learning across different healthcare courses and institutions in preparing future collaboration-ready workforces. Another example is the way VR is used in course design in relation to cognitive load: Vera et al. [ 75 ] (code 127) show that a certain VR operating tool, sensitive to residents' task load, can be integrated into the residency program and used as a new index to easily and rapidly assess task (over)load in healthcare scenarios. In another study (code 24), Küçük et al. [ 44 ] determined the effects of learning anatomy via mobile AR on medical students' academic achievement and cognitive load.

Rationale behind using AR/VR/MR in healthcare education

The predominant motivation behind incorporating AR/VR/MR (Augmented Reality, Virtual Reality, and Mixed Reality) in healthcare education was to address specific limitations. These common limitations included factors such as the absence of realism, the financial burden associated with maintaining real-life props, time constraints, the need to simulate complex scenarios, ensuring a safe and controlled practice environment, managing cognitive load, and facilitating repetitive training opportunities (Table  6 ). For example, VR was used as an alternative to plastic or cadaver models, which were described as lacking realism and as incurring high maintenance costs, respectively [ 1 , 8 ]. Furthermore, learners in the wider healthcare field often need many hours of practice to master a skill, and AR/VR/MR provide an efficient setting for such practice. In some specialties, VR was specifically used because it provided the possibility to set up highly complex scenarios at a low cost. Through the use of VR, these limitations could be overcome and practice could be provided in a safe, controlled setting [ 29 ]. In a similar vein, some studies mentioned that they used VR to reduce students' cognitive load [ 16 , 44 ] by manipulating some aspects of the task over others. The ability to manipulate aspects of the task can be useful for both training and assessment.

Another rationale was to improve students’ motivation [ 39 , 50 ] and/or self-directed learning [ 27 , 46 ]. As students are used to using digital technologies in almost all aspects of their lives, using these technologies in education was thought to have a positive impact on their perceptions. This rationale was often mentioned for teaching anatomy, which is a course that students often tend to find uninteresting [ 27 , 44 ].

Moreover, in the context of Augmented Reality (AR), technologies have been employed to enhance student engagement and observation beyond what is achievable under typical circumstances. For example, AR technologies were used to overlay information from other modalities (e.g., MRI) onto to-be-diagnosed images, making it easier to combine the information in order to locate abnormalities [ 12 ].

We plotted instructional design aspects against the rationale for using AR/VR/MR tools that each study considered for its study design or simulation design (Table  7 ). Since the rationale behind using a specific method or tool belongs to the analysis part of instructional design, we took the analysis section of the Morrison et al. [ 55 ] model. The purpose is to see how relying on the analysis section of an instructional design model can help with logically designing the rationale behind using a tool operated by AR/VR/MR in health education.

The available data show that some studies considered learner characteristics by having two groups with different knowledge levels (novice/expert) and comparing their performance ([ 22 ], code 19). Some provided immersive training as an instructional objective to improve face and content validity ([ 24 ], code 20). Others utilized simulation in order to improve students' motivation ([ 27 ], code 21). Some considered task analysis by providing tasks in different simulations ([ 28 , 39 ], codes 22, 23). In other studies, simulation was used for personalized and self-directed learning ([ 50 ], code 26), and some attempted to resolve the issues, difficulties and disadvantages of current methods ([ 53 ], code 28).

Types of learning and behavioral outcomes

The AR/VR/MR articles were divided into the different learning and behavioral domains. According to Bloom's revised taxonomy [ 5 ], three domains can be distinguished: the cognitive, affective and psychomotor domains. The cognitive domain refers to the mental processes needed to engage in (higher-order) thinking. The affective domain refers to the development of students' values and attitudes, while the psychomotor domain has to do with developing the physical skills required to execute a (professional) task [ 5 ]. Of the included studies, seventy-five used AR/VR/MR for teaching cognitive skills (41%, Fig.  11 ). Psychomotor skills were targeted in 53 studies (29%), and 5 studies (3%) focused on affective outcomes aiming at improving learners' confidence in surgery; especially training in neurosurgery, laparoscopy, orthopaedic surgery, endoscopy, sinus surgery, bone surgery, electro-surgery, and robotic surgery. It is also interesting that fifty-one studies (27%) utilized mixed skills training.

figure 11

Outcomes of the studies that used AR/VR/MR in healthcare education

The included studies in this review generally categorized an intervention as effective if the majority of the participants achieved significantly higher scores in tests (experiment/control, pre-posttest, exercises) compared to traditional instructional approaches, such as analogue surgery or ultrasound procedures (Table  8 ). Up to 56% of the studies were experimental studies (Fig.  12 ).

figure 12

Effectiveness of included studies

Some studies were considered partly effective (Table  8 ) when there were no significant differences in all participants' scores (19%, Fig.  12 ) (e.g., [ 17 , 35 ]; Van Nuland et al., 2016; [ 76 ]). Here, differences among the participating groups could be attributed to the level of training or expertise of the learners (e.g., [ 33 ]). Although in some of these studies students using the more traditional approaches performed at the same level as the students in the AR/VR/MR group, partial differences were reported in that learning with AR/VR/MR improved aspects like time efficiency or precision sensitivity (e.g., [ 52 , 64 , 73 , 74 ]).

Some studies did not report any effectiveness (3%, Fig.  12 ). The study by Llena et al. [ 49 ] showed that although students experienced the AR technology as favorable, no significant differences in learning were found between the group learning with AR and the group learning with traditional teaching methods. In the study by Huang et al. [ 40 ], no differences were found between students learning with a VR model and those learning with a traditional physical model.

There were also studies showing mixed results, with some but not all outcomes improving in the AR/VR/MR conditions (e.g., [ 68 ]). Other studies reported the positive effects of applying AR/VR/MR as a usable (e.g., [ 41 , 51 ]; Van Nuland et al., 2016) and feasible (e.g., [ 67 ]) tool for healthcare training (e.g., [ 47 , 72 , 75 , 76 ]). A few studies considered contextual factors like face/content validity (e.g., [ 30 , 63 ]), construct validity (e.g., [ 1 , 21 , 22 , 56 ]), study protocols [ 4 ], and accuracy (e.g., [ 12 , 43 , 60 ]).

Several studies reported on variables that impact the effectiveness of AR/VR/MR technologies. One commonly mentioned variable was level of expertise: learners/practitioners with more experience and/or years of training outperformed novices (e.g., [ 37 ]), and experience had a positive effect on skills acquisition when using these technologies (e.g., [ 44 ]). An exception to this was the study of Hudson et al. [ 42 ], in which nurses with more years of practice found it more difficult to use the technology. Furthermore, Lin et al. [ 48 ] reported an effect of gender, in which men tended to reach proficiency sooner than women when using a laparoscopic surgery simulator. Nickel et al. [ 57 ] further indicated that experiencing fun was also relevant for students' learning. In the study by Huber et al. [ 41 ], in which they investigated the use of VR to improve residents' surgical confidence, a correlation was found between confidence improvement and students' perceived utility of rehearsal. In the same study, the authors showed that the effect of the rehearsal on learners' confidence was further dependent on trainees' level of experience and on task difficulty. Finally, Chalhoub et al. [ 14 ] found that gamers had an advantage over non-gamers when using a 'smartphone game' to learn laparoscopic skills in the first learning session, although all participants improved in a similar manner.

In this comprehensive review of literature, we explored the application of AR/VR/MR technologies in the instruction of various stages of medical and health professions education. We identified six key research questions to guide our investigation: 1) the trend of studies over time, 2) the healthcare domains and participant types included in these studies, 3) the design methodologies and instructional design aspects/educational theories employed in these studies, 4) the benefits and underlying reasons for using AR/VR/MR in medical and health professions education, 5) the kinds of learning and behavioral outcomes promoted by the use of AR/VR/MR in this field, and 6) the results regarding these learning outcomes in studies that examine the use of these technologies in medical and health professions education.

In general, we observed a rising trend in the number of studies focusing on the application of AR/VR/MR in medical and health professions education. This suggests a consistent and growing interest in leveraging these technologies to enhance student learning across various healthcare disciplines. The primary use of these tools was found to be in teaching surgical skills to residents and anatomy skills to undergraduate students.

When examining the research methodologies employed to study the integration of AR/VR/MR, a notable finding was the predominant focus on quantitative methodology. However, given the limited number of participants in programs such as residency or professional training, qualitative methods could offer researchers the opportunity for a more comprehensive analysis of these tools’ usage and provide detailed insights into these complex learning situations [ 2 , 18 ].

It is interesting to note that the study of affective outcomes is often overlooked when integrating AR/VR/MR into health professions education. While studies are typically categorized based on cognitive, psychomotor, and affective outcomes, the majority focus on cognitive aspects, followed by psychomotor outcomes. Only a small number of studies explore the use of AR/VR/MR for teaching affective outcomes.

Usually, when AR/VR/MR is used in contexts related to emotions and affect, it serves psychological purposes for patients rather than instructional ones [26]. However, these technologies can be valuable in specific situations, such as targeting affective outcomes like empathy (e.g., [25]).

In the context of 21st-century multidisciplinary healthcare, prioritizing patient needs and addressing their concerns is crucial. Compassionate and appropriate communication within healthcare teams can build patient trust [ 23 ]. To foster interpersonal skills among healthcare providers, it’s important for health professions education programs to emphasize student competencies in the affective domain of learning [ 20 ]. Interestingly, despite its importance, this aspect is less explored compared to other applications of AR/VR/MR in health professions education.

In this review, we not only examined outcomes but also scrutinized the findings of the included studies. These findings were grouped into three categories: experimental design, usability studies, and contextual factors (Table 8). Interestingly, not all experimental studies demonstrated effective outcomes for the application of AR/VR/MR in medical and health professions education: some argued that display technologies did not significantly enhance learning across all or most outcome measures (e.g., [14, 17, 21, 35, 40, 49, 69, 76]).

This review also uncovered that only a handful of studies built their AR/VR/MR applications on specific instructional design models or theories, and few described how these applications can be incorporated into the teaching curriculum. As mentioned in the introduction, instructional design should be rooted in robust theoretical models: instruction is often ineffective when such knowledge is neglected, and drawing on it is essential to optimize complex learning. In other words, the focus should be not only on what is taught but also on how it is taught, which is of paramount importance [38, 66].

We suggest that several factors be considered when creating educational materials based on AR/VR/MR; in this review, we recommend the instructional design model by Morrison et al. [55]. When applying this model, it is crucial to consider the unique value that a virtual environment can add to students’ learning process when addressing instructional problems and strategies. For instance, AR/VR/MR can offer distinct advantages in scenarios where patient privacy is crucial [59] or where standardization is key [43, 67, 74].

Regarding learner characteristics, it is important that learners be at ease with technology in general and with its use for learning in particular. VR can provide a safe environment for both patients and students to practice essential skills (e.g., [8, 29, 33, 57, 60, 63]).

When considering task analysis, it is crucial to recognize that all students will perform the same task, which again raises the point of standardization: because all participants practice the same task, teachers can manage what everyone is learning. Tasks can be whole-task problems (e.g., students demonstrating that they can conduct a full consultation) [56] or part-tasks (e.g., surgical procedures) [43, 51, 67, 76]. As with the instructional problem mentioned earlier, it is important to define the objectives of the task before designing the teaching/learning methodologies and applications.

In terms of instructional objectives , it is a widely accepted practice in education to clearly define intended learning outcomes (ILOs) prior to designing learning and assessment tasks [ 11 ]. This principle holds true for the use of AR/VR/MR in health professions education. As previously mentioned, the application of these technologies should have a specific purpose, rather than being used merely for their “cool” factor or “motivating” qualities (e.g., [ 17 , 27 , 39 , 49 , 50 , 69 ]). The most common justifications found in the studies included in this review were to overcome certain limitations (such as lack of realism, high maintenance costs for real-life props, time constraints, practicing complex scenarios, providing a safe/controlled setting for practice, cognitive load, and the opportunity for repetitive training), to boost students’ motivation, or to enhance students’ observation skills and attentiveness beyond their usual capabilities.

Beyond integration, it is also crucial to consider where in the curriculum the technology will be most effective, which relates to content sequencing. This will depend on the course and curriculum content, as well as the intended learning outcomes (ILOs). In terms of assessment tools, these technologies can also be utilized for evaluation purposes; particularly in formative assessment, they can offer learning opportunities coupled with feedback for the users [36].

When discussing all the elements of the Morrison et al. [55] model, it is equally important to consider instructional delivery, particularly the necessary resources and support. For instance, teacher training is crucial: it cannot be assumed that teachers are inherently capable of utilizing the technology. This pertains not only to the technological aspects of the application (how does it operate?) but also to the pedagogical aspects (how should it be implemented in class, and how should students be guided?). With the insights from this research and the recommendations based on the Morrison et al. [55] model, an understanding of new training and practice methods will enable practitioners to choose from a wider range of training options.

Limitations

This review has several limitations. Firstly, we exclusively examined studies that incorporated an intervention and utilized AR/VR/MR to teach knowledge or skills to healthcare professions learners, excluding all theoretical papers. Theoretical papers may contain richer discussions of different educational models and theories, so future work might include a broader range of study types to paint a more complete picture.

Secondly, we limited ourselves to publications between 2015 and 2020, assuming that this would be the timeline when AR/VR/MR gained more popularity in the health education domain.

Thirdly, our study did not thoroughly investigate the limitations and barriers associated with utilizing AR/VR/MR technologies for educational purposes. When using these technologies in the classroom, it is necessary to acquire the required equipment and to store it safely, both in terms of physical storage of devices and cloud storage of data. Batteries may need to be charged and the equipment must be kept clean. Updates may sometimes be required, possibly at an inconvenient time (e.g., mid-session). The software may also have special requirements; for example, users might need to create an account, which must be arranged while taking data protection rules into account. The space in which instruction takes place should be considered as well: if students need to walk around, for example, this should be facilitated. Finally, it is worth mentioning that none of these limitations diminishes the value of this work; rather, they point to opportunities for further research that can strengthen this topic.

Conclusion and recommendations for future research

Two points stand out in the results of this review: a general lack of instructional design theories or models guiding the use of these technologies for teaching and learning, and the abundant use of these tools for teaching courses such as anatomy or for designing part-task practice routines in surgery, especially where they offer scalability and repeated practice. To address the lack of models and theories in course design with AR/VR/MR, we mapped our findings onto the instructional design model by Morrison et al. [55] to help guide further studies in applying an instructional design model when designing courses that include AR/VR/MR tools.

In general, when looking at the quality of the existing studies and applications, including the educational benefits of these technologies, further studies need to be conducted to gain better insight into the added value of including these expensive and sophisticated tools in our education [31]. The most common rationales found in the included studies referred to overcoming some sort of limitation (lack of realism, high maintenance costs for real-life props, time limitations, practicing highly complex scenarios, providing a safe/controlled setting for practice, cognitive load, and providing the possibility of repetitive training), enhancing students’ motivation, or improving students’ observation and attentiveness beyond their normal capabilities.

Availability of data and materials

All relevant data are available in the form of appendices.

Abbreviations

AR: Augmented Reality

VR: Virtual Reality

MR: Mixed Reality

UM: Maastricht University

PubMed: Public Medical Literature

ERIC: Educational Research Information Center

IEEE: Institute of Electrical and Electronics Engineers

Scientific Content on Public Access

EBSCO: Electronic Book Service Company

RQ: Research Question

ADDIE: Analysis, Design, Development, Implementation, Evaluation

PICOS: Population, Intervention, Comparison, Outcome, Study design

MEDLINE: Medical Literature Analysis and Retrieval System Online

Medical Journal

RCT: Randomized Controlled Trial

MRI: Magnetic Resonance Imaging

Abelson JS, Silverman E, Banfelder J, Naides A, Costa R, Dakin G. Virtual operating room for team training in surgery. The American Journal of Surgery. 2015;210(3):585–90. https://doi.org/10.1016/j.amjsurg.2015.01.024 .

Adams, A., & Cox, A. L. (2008). Questionnaires, in-depth interviews and focus groups (pp. 17–34). Cambridge University Press. http://oro.open.ac.uk/11909/

Akbulut, Y. Implications of two well-known models for instructional designers in distance education: Dick-Carey versus Morrison-Ross-Kemp. Turkish Online Journal of Distance Education. 2007;8(2), 62–68. https://doi.org/10.1.1.501.3625

Alismail, A., Thomas, J., Daher, N. S., Cohen, A., Almutairi, W., Terry, M. H.,  Tan, L. D. Augmented reality glasses improve adherence to evidence-based intubation practice. Adv Med Educ Pract. 2019;10, 279–286. https://doi.org/10.2147/AMEP.S201640

Anderson, L.W. (Ed.), Krathwohl, D. R. (Ed.), Airasian, P.W., Cruikshank, K.A., Mayer, R.E., Pintrich, P.R., Raths, J., & Wittrock, M.C. A taxonomy for learning, teaching, and assessing: A revision of Bloom's Taxonomy of Educational Objectives. Allyn & Bacon. 2001.  http://eduq.info/xmlui/handle/11515/18345

Andrews C, Southworth MK, Silva JN, Silva JR. Extended reality in medical practice. Curr Treat Options Cardiovasc Med. 2019;21:1–12.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Azarnoush, H., Alzhrani, G., Winkler-Schwartz, A., Alotaibi, F., Gelinas-Phaneuf, N., Pazos, V., ... & Del Maestro, R. F. Neurosurgical virtual reality simulation metrics to assess psychomotor skills during brain tumor resection. International journal of computer assisted radiology and surgery. 2015;10(5)603–618. https://doi.org/10.1007/s11548-014-1091-z

Barsom EZ, Graafland M, Schijven MP. Systematic review on the effectiveness of augmented reality applications in medical training. Surg Endosc. 2016;30(10):4174–83. https://doi.org/10.1007/s00464-016-4800-6 .

Barteit S, Lanfermann L, Bärnighausen T, Neuhann F, Beiersmann C. Augmented, mixed, and virtual reality-based head-mounted devices for medical education: systematic review. JMIR serious games. 2021;9(3): e29080.

Biggs J, Tang C. Teaching for Quality Learning at University: What the Student Does. 4th ed. McGraw-Hill Education; 2011.

Bourdel N, Collins T, Pizarro D, Bartoli A, Da Ines D, Perreira B, Canis M. Augmented reality in gynecologic surgery: evaluation of potential benefits for myomectomy in an experimental uterine model. Surg Endosc. 2017;31(1):456–61. https://doi.org/10.1007/s00464-016-4932-8 .

Carmigniani J, Furht B, Anisetti M, Ceravolo P, Damiani E, Ivkovic M. Augmented reality technologies, systems and applications. Multimedia Tools and Applications. 2011;51(1):341–77. https://doi.org/10.1007/s11042-010-0660-6 .

Chalhoub M, Khazzaka A, Sarkis R, Sleiman Z. The role of smartphone game applications in improving laparoscopic skills. Adv Med Educ Pract. 2018;9:541. https://doi.org/10.2147/AMEP.S162619 .

Chheang V, Fischer V, Buggenhagen H, Huber T, Huettl F, Kneist W, Hansen C. Toward interprofessional team training for surgeons and anesthesiologists using virtual reality. Int J Comput Assist Radiol Surg. 2020;15(12):2109–18. https://doi.org/10.1007/s11548-020-02276-y .

Chowriappa, A., Raza, S. J., Fazili, A., Field, E., Malito, C., Samarasekera, D.,  Eun, D. D. Augmented‐reality‐based skills training for robot‐assisted urethrovesical anastomosis: a multi‐institutional randomised controlled trial. BJU international. 2015;115(2)336–345. https://doi.org/10.1111/bju.12704

Courteille, O., Fahlstedt, M., Ho, J., Hedman, L., Fors, U., Von Holst, H., Möller, H.  Learning through a virtual patient vs. recorded lecture: a comparison of knowledge retention in a trauma case. International journal of medical education. 2018;9:86. https://doi.org/10.5116/ijme.5aa3.ccf2

Creswell, J. W., & Creswell, J. D. Research design: Qualitative, quantitative, and mixed methods approaches. Sage publications. 2017.  http://e-pedagogium.upol.cz/pdfs/epd/2016/04/08.pdf

Datta R, Upadhyay KK, Jaideep CN. Simulation and its role in medical education. Medical Journal Armed Forces India. 2012;68(2):167–72.

Delany C, Watkin D. A study of critical reflection in health professional education: ‘learning where others are coming from.’ Adv Health Sci Educ. 2009;14(3):411–29. https://doi.org/10.1007/s10459-008-9128-0 .

Dharmawardana, N., Ruthenbeck, G., Woods, C., Elmiyeh, B., Diment, L., Ooi, E. H., ... & Carney, A. S. Validation of virtual‐reality‐based simulations for endoscopic sinus surgery. Clinical Otolaryngology. 2015;40(6):569–579. https://doi.org/10.1111/coa.12414

Diment LE, Ruthenbeck GS, Dharmawardana N, Carney AS, Woods CM, Ooi EH, Reynolds KJ. Comparing surgical experience with performance on a sinus surgery simulator. ANZ J Surg. 2016;86(12):990–5. https://doi.org/10.1111/ans.13418 .

Donlan P. Developing affective domain learning in health professions education. J Allied Health. 2018;47(4):289–95.

Dorozhkin, D., Nemani, A., Roberts, K., Ahn, W., Halic, T., Dargar, S., ... De, S. Face and content validation of a Virtual Translumenal Endoscopic Surgery Trainer (VTEST™). Surgical endoscopy. 2016;30(12):5529–5536. https://doi.org/10.1007/s00464-016-4917-7

Dyer E, Swartzlander BJ, Gugliucci MR. Using virtual reality in medical education to teach empathy. Journal of the Medical Library Association: JMLA. 2018;106(4):498.

Emmelkamp PM, Meyerbröker K. Virtual reality therapy in mental health. Annu Rev Clin Psychol. 2021;17:495–519.

Ferrer-Torregrosa J, Jiménez-Rodríguez MÁ, Torralba-Estelles J, Garzón-Farinós F, Pérez-Bermejo M, Fernández-Ehrling N. Distance learning ects and flipped classroom in the anatomy learning: comparative study of the use of augmented reality, video and notes. BMC Med Educ. 2016;16(1):230. https://doi.org/10.1186/s12909-016-0757-3 .

Fischer, M., Fuerst, B., Lee, S. C., Fotouhi, J., Habert, S., Weidert, S., Navab, N. Preclinical usability study of multiple augmented reality concepts for K-wire placement. International journal of computer assisted radiology and surgery. 2016;11(6)1007–1014. https://doi.org/10.1007/s11548-016-1363-x

Freschi C, Parrini S, Dinelli N, Ferrari M, Ferrari V. Hybrid simulation using mixed reality for interventional ultrasound imaging training. Int J Comput Assist Radiol Surg. 2015;10(7):1109–15. https://doi.org/10.1007/s11548-014-1113-x .

Fucentese SF, Rahm S, Wieser K, Spillmann J, Harders M, Koch PP. Evaluation of a virtual-reality-based simulator using passive haptic feedback for knee arthroscopy. Knee Surg Sports Traumatol Arthrosc. 2015;23(4):1077–85. https://doi.org/10.1007/s00167-014-2888-6 .

Gerup J, Soerensen CB, Dieckmann P. Augmented reality and mixed reality for healthcare education beyond surgery: an integrative review. Int J Med Educ. 2020;11:1.

Göksu I, Özcan KV, Cakir R, Göktas Y. Content analysis of research trends in instructional design models: 1999–2014. J Learn Des. 2017;10(2):85–109.

Gomez PP, Willis RE, Van Sickle KR. Development of a virtual reality robotic surgical curriculum using the da Vinci Si surgical system. Surg Endosc. 2015;29(8):2171–9. https://doi.org/10.1007/s00464-014-3914-y .

Graafland M, Schraagen JMC, Schijven MP. Systematic review of validity of serious games for medical education and surgical skills training. Br J Surg. 2012;99(10):1322–30. https://doi.org/10.1002/bjs.8819 .

Grover, S. C., Garg, A., Scaffidi, M. A., Jeffrey, J. Y., Plener, I. S., Yong, E., Walsh, C. M. Impact of a simulation training curriculum on technical and nontechnical skills in colonoscopy: a randomized trial. Gastrointestinal endoscopy. 2015;82(6);1072–1079. https://doi.org/10.1016/j.gie.2015.04.008

Heeneman S, et al. The Impact of Programmatic Assessment on Student Learning: Theory Versus Practice. Med Educ. 2015;49(5):487–98. https://doi.org/10.1111/medu.12645 .

Holloway, T., Lorsch, Z. S., Chary, M. A., Sobotka, S., Moore, M. M., Costa, A. B., ... & Bederson, J. Operator experience determines performance in a simulated computer-based brain tumor resection task. International journal of computer assisted radiology and surgery. 2015;10(11):1853–1862. https://doi.org/10.1007/s11548-015-1160-y

Honebein, P. C., & Reigeluth, C. M. (2021). Making good design judgments via the instructional theory framework. Design for Learning . Edtechbooks. https://open.byu.edu/id/making_good_design?book_nav=true

Hu A, Shewokis PA, Ting K, Fung K. Motivation in computer-assisted instruction. Laryngoscope. 2016;126:S5–13. https://doi.org/10.1002/lary.26040 .

Huang, C. Y., Thomas, J. B., Alismail, A., Cohen, A., Almutairi, W., Daher, N. S., ... & Tan, L. D. The use of augmented reality glasses in central line simulation:“see one, simulate many, do one competently, and teach everyone”. Advances in medical education and practice. 2018;9:357. https://doi.org/10.2147/AMEP.S160704

Huber T, Paschold M, Hansen C, Wunderling T, Lang H, Kneist W. New dimensions in surgical training: immersive virtual reality laparoscopic simulation exhilarates surgical staff. Surg Endosc. 2017;31(11):4472–7. https://doi.org/10.1007/s00464-017-5500-6 .

Hudson K, Taylor LA, Kozachik SL, Shaefer SJ, Wilson ML. Second Life simulation as a strategy to enhance decision-making in diabetes care: a case study. J Clin Nurs. 2015;24(5–6):797–804. https://doi.org/10.1111/jocn.12709 .

Korzeniowski P, White RJ, Bello F. VCSim3: a VR simulator for cardiovascular interventions. Int J Comput Assist Radiol Surg. 2018;13(1):135–49. https://doi.org/10.1007/s11548-017-1679-1 .

Küçük S, Kapakin S, Göktaş Y. Learning anatomy via mobile augmented reality: effects on achievement and cognitive load. Anat Sci Educ. 2016;9(5):411–21. https://doi.org/10.1002/ase.1603 .

Kyaw, B. M., Saxena, N., Posadzki, P., Vseteckova, J., Nikolaou, C. K., George, P. P., ... Car, L. T. Virtual reality for health professions education: systematic review and meta-analysis by the digital health education collaboration. Journal of medical Internet research. 2019;21(1):e12959.

Levac D, Espy D, Fox E, Pradhan S, Deutsch JE. “Kinect-ing” with clinicians: A knowledge translation resource to support decision making about video game use in rehabilitation. Phys Ther. 2015;95(3):426–40. https://doi.org/10.2522/ptj.20130618 .

Liaw SY, Soh SL, Tan KK, Wu LT, Yap J, Chow YL, Wong LF. Design and evaluation of a 3D virtual environment for collaborative learning in interprofessional team care delivery. Nurse Educ Today. 2019;81:64–71. https://doi.org/10.1016/j.nedt.2019.06.012 .

Lin, D., Pena, G., Field, J., Altree, M., Marlow, N., Babidge, W., ... & Maddern, G. What are the demographic predictors in laparoscopic simulator performance?. ANZ journal of surgery. 2016;86(12):983–989. https://doi.org/10.1111/ans.12992

Llena C, Folguera S, Forner L, Rodríguez-Lozano FJ. Implementation of augmented reality in operative dentistry learning. Eur J Dent Educ. 2018;22(1):122–30. https://doi.org/10.1111/eje.12269 .

Ma M, Fallavollita P, Seelbach I, Von Der Heide AM, Euler E, Waschke J, Navab N. Personalized augmented reality for anatomy education. Clin Anat. 2016;29(4):446–53. https://doi.org/10.1002/ca.22675 .

Mathews, S., Brodman, M., D'Angelo, D., Chudnoff, S., McGovern, P., Kolev, T., ... & Kischak, P. Predictors of laparoscopic simulation performance among practicing obstetrician gynecologists. American journal of obstetrics and gynecology. 2017;217(5)596-e1. https://doi.org/10.1016/j.ajog.2017.07.002

Mathiowetz V, Yu CH, Quake-Rapp C. Comparison of a gross anatomy laboratory to online anatomy software for teaching anatomy. Anat Sci Educ. 2016;9(1):52–9. https://doi.org/10.1002/ase.1528 .

Medellín-Castillo HI, Govea-Valladares EH, Pérez-Guerrero CN, Gil-Valladares J, Lim T, Ritchie JM. The evaluation of a novel haptic-enabled virtual reality approach for computer-aided cephalometry. Comput Methods Programs Biomed. 2016;130:46–53. https://doi.org/10.1016/j.cmpb.2016.03.014 .

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14(1):1–10.

Morrison GR, Ross SM, Kemp JE, Kalman H. Designing effective instruction. 6th ed. New York: John Wiley & Sons; 2010.

Nayar, S. K., Musto, L., Fernandes, R., & Bharathan, R. (2018). Validation of a virtual reality laparoscopic appendicectomy simulator: a novel process using cognitive task analysis. Irish Journal of Medical Science (1971-). 2019:1–9. https://doi.org/10.1007/s11845-018-1931-x

Nickel F, Hendrie JD, Bruckner T, Kowalewski KF, Kenngott HG, Müller-Stich BP, Fischer L. Successful learning of surgical liver anatomy in a computer-based teaching module. Int J Comput Assist Radiol Surg. 2016;11(12):2295–301. https://doi.org/10.1007/s11548-016-1354-y .

Obizoba C. Instructional Design Models—Framework for Innovative Teaching and Learning Methodologies. Int J High Educ Manag. 2015;2(1):40–51. https://doi.org/10.24052/IJHEM/2015/v2/i1/3 .

Pan, X., Slater, M., Beacco, A., Navarro, X., Rivas, A. I. B., Swapp, D., ... & Delacroix, S. The responses of medical general practitioners to unreasonable patient demand for antibiotics-a study of medical ethics using immersive virtual reality. PloS one. 2019;11(2)e0146837. https://doi.org/10.1371/journal.pone.0146837

Perin, A., Galbiati, T. F., Gambatesa, E., Ayadi, R., Orena, E. F., Cuomo, V., … Group, E. N. S. S. Filling the gap between the OR and virtual simulation: a European study on a basic neurosurgical procedure. Acta Neurochir. 2018;160(11):2087–97. https://doi.org/10.1007/s00701-018-3676-8 .

Peterson DC, Mlynarczyk GS. Analysis of traditional versus three-dimensional augmented curriculum on anatomical learning outcome measures. Anat Sci Educ. 2016;9(6):529–36. https://doi.org/10.1002/ase.1612 .

Pottle J. Virtual reality and the transformation of medical education. Future Healthcare Journal. 2019;6(3):181. https://doi.org/10.7861/fhj.2019-0036 .

Rahm S, Germann M, Hingsammer A, Wieser K, Gerber C. Validation of a virtual reality-based simulator for shoulder arthroscopy. Knee Surg Sports Traumatol Arthrosc. 2016;24(5):1730–7. https://doi.org/10.1007/s00167-016-4022-4 .

Rasmussen SR, Konge L, Mikkelsen PT, Sørensen MS, Andersen SA. Notes from the field: Secondary task precision for cognitive load estimation during virtual reality surgical simulation training. Eval Health Prof. 2016;39(1):114–20. https://doi.org/10.1177/0163278715597962 .

Reigeluth CM. Instructional design theories and models: An overview of their current status. Routledge. 1983. https://doi.org/10.4324/9780203824283 .

Reigeluth, C. M., & Carr-Chellman, A. A. (Eds.). Instructional-design theories and models, volume III: Building a common knowledge base 2009, (Vol. 3). Routledge.

Sampogna G, Pugliese R, Elli M, Vanzulli A, Forgione A. Routine clinical application of virtual reality in abdominal surgery. Minim Invasive Ther Allied Technol. 2017;26(3):135–43. https://doi.org/10.1080/13645706.2016.1275016 .

Siebert, J. N., Ehrler, F., Gervaix, A., Haddad, K., Lacroix, L., Schrurs, P., ... & Manzano, S. Adherence to AHA guidelines when adapted for augmented reality glasses for assisted pediatric cardiopulmonary resuscitation: A randomized controlled trial. Journal of medical Internet research. 2017;19(5), e183. https://doi.org/10.2196/jmir.7379

Stepan, K., Zeiger, J., Hanchuk, S., Del Signore, A., Shrivastava, R., Govindaraj, S., & Iloreta, A. (2017, October). Immersive virtual reality as a teaching tool for neuroanatomy. In International forum of allergy & rhinology (Vol. 7, No. 10, pp. 1006–1013). https://doi.org/10.1002/alr.21986

Telang A. Problem-based learning in health professions education: an overview. Arch Med Health Sci. 2014;2(2):243.

Thibault GE. The future of health professions education: Emerging trends in the United States. FASEB BioAdvances. 2020;2(12):685–94. https://doi.org/10.1096/fba.2020-00061 .

Tran C, Toth-Pal E, Ekblad S, Fors U, Salminen H. A virtual patient model for students’ interprofessional learning in primary healthcare. PLoS ONE. 2020;15(9): e0238797. https://doi.org/10.1371/journal.pone.0238797 .

Valdis M, Chu MW, Schlachta C, Kiaii B. Evaluation of robotic cardiac surgery simulation training: a randomized controlled trial. J Thorac Cardiovasc Surg. 2016;151(6):1498–505. https://doi.org/10.1016/j.jtcvs.2016.02.016 .

Våpenstad, C., Hofstad, E. F., Bø, L. E., Kuhry, E., Johnsen, G., Mårvik, R., ... & Hernes, T. N. Lack of transfer of skills after virtual reality simulator training with haptic feedback. Minimally Invasive Therapy & Allied Technologies. 2017;26(6):346–354. https://doi.org/10.1080/13645706.2017.1319866

Vera J, Diaz-Piedra C, Jimenez R, Sanchez-Carrion JM, Di Stasi LL. Intraocular pressure increases after complex simulated surgical procedures in residents: an experimental study. Surg Endosc. 2019;33(1):216–24. https://doi.org/10.1007/s00464-018-6297-7 .

Wang, S., Parsons, M., Stone-McLean, J., Rogers, P., Boyd, S., Hoover, K., ... & Smith, A. Augmented reality as a telemedicine platform for remote procedural training. Sensors. 2017;17(10):2294. https://doi.org/10.3390/s17102294

Acknowledgements

Special thanks to Sanne Rovers for contributing to several brainstorming meetings.

Funding

There was no funding allocated to this research.

Author information

Authors and Affiliations

School of Health Professions Education, Department of Educational Development and Research, Faculty of Health, Medicine and Life sciences, Maastricht University, Universiteitssingel 60, Maastricht, 6229 MD, The Netherlands

Maryam Asoodar, Fatemeh Janesarvatan, Hao Yu & Nynke de Jong

Department of Health Services Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands

Nynke de Jong

School of Business and Economics, Educational Research and Development Maastricht University, Maastricht, The Netherlands

Fatemeh Janesarvatan

Contributions

MA took the lead in sketching the outline for this research through several meetings with NdJ. MA, NdJ, HY, and FJ contributed to analyzing the papers and narrowing down the search. MA and FJ wrote the paper. MA and HY designed the figures and tables.

Corresponding author

Correspondence to Maryam Asoodar .

Ethics declarations

Ethics approval and consent to participate

This is a literature review. We had no participants in this paper.

Consent for publication

This is a literature review. There were no participants.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Asoodar, M., Janesarvatan, F., Yu, H. et al. Theoretical foundations and implications of augmented reality, virtual reality, and mixed reality for immersive learning in health professions education. Adv Simul 9 , 36 (2024). https://doi.org/10.1186/s41077-024-00311-5

Received : 07 September 2023

Accepted : 29 August 2024

Published : 09 September 2024


Keywords

  • Immersive learning
  • Instructional design models or theories
  • Health professions education

Advances in Simulation

ISSN: 2059-0628

AR advertising

Looking at augmented reality (AR) in advertising

Chris Yu

Augmented reality is changing the game for advertisers. Immerse yourself in the wonders of AR advertising. 

(Updated September 2024)

Augmented reality (AR) is emerging as a groundbreaking technology in the advertising world – and not just for consumers; it also offers brands the opportunity to create immersive and interactive experiences for their audiences.

With the integration of Artificial Intelligence (AI) and machine learning , AR is set to revolutionize programmatic advertising and marketing strategy overall, providing a new dimension for brands to engage with consumers in a more impactful and memorable way. 

This innovative combination of AR and AI opens up endless possibilities for advertisers to create unique and personalized ad experiences that captivate and inspire.

What is Augmented Reality (AR)?

Augmented reality (AR) refers to technology that enhances the real-world environment by overlaying digital visual elements, sounds, and other sensory stimuli onto it. AR combines digital and physical worlds, enables real-time interaction, and accurately registers virtual and real objects in 3D.

Augmented Reality (AR) vs. Virtual Reality (VR)

AR uses a real-world setting, while VR is completely virtual. By integrating digital information into real-world tasks and situations, AR offers an effective way to create, organize, and deliver content in context. Unlike virtual reality (VR), where everything is entirely virtual, AR is designed to add digital or virtual elements over real-world views, typically with more limited interaction.

Put simply, AR users remain present and in control in the real world, while VR users move through an environment dictated entirely by the system. In addition, VR typically requires a headset, whereas AR can be accessed with a smartphone or other everyday digital devices.

Types of augmented reality advertising

In a technical sense, there are various types of AR with distinct applications and functionalities:

Marker-based AR

This type of AR makes use of a visual marker such as a QR code or fiducial marker. Users simply scan the marker with their mobile device to initiate the interactive experience. However, marker-based mobile AR advertising is limited in that it requires a device with built-in AR support or a dedicated app for the experience to function.
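The hand-off after a scan is straightforward: the marker encodes a payload (often a URL carrying campaign parameters), the device's camera stack decodes it, and the app uses the payload to load the right AR experience. A minimal sketch of that routing step, where the URL scheme and parameter names are invented for illustration:

```python
from urllib.parse import urlparse, parse_qs

def route_marker_payload(payload: str) -> str:
    """Map a decoded QR payload to the AR experience it should launch."""
    url = urlparse(payload)
    params = parse_qs(url.query)
    # Fall back to defaults when the marker carries no campaign parameters
    campaign = params.get("campaign", ["default"])[0]
    scene = params.get("scene", ["intro"])[0]
    return f"Launching AR scene '{scene}' for campaign '{campaign}'"

# Payload a scanner might decode from a printed marker (hypothetical URL)
decoded = "https://ar.example.com/view?campaign=spring_sale&scene=try_on"
print(route_marker_payload(decoded))  # Launching AR scene 'try_on' for campaign 'spring_sale'
```

In a real campaign the decoded URL would typically open a web AR page or deep-link into a branded app rather than print a string.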

Markerless AR

Markerless AR functions without the need for physical markers (such as QR codes) and instead, uses location-based data like GPS or mobile device accelerometers. Users scan their environment through a mobile app or website, and digital elements are projected onto surfaces such as floors and walls.

It identifies and monitors the user’s surroundings to superimpose virtual content based on spatial relationships and object positioning. This type of AR is commonly used in online shopping or in-game advertising . 

Projection-based AR

As the name suggests, this type of AR uses projectors to project 3D imagery or digital content onto flat surfaces such as walls and floors. It doesn't create fully immersive environments, but the holograms displayed are intended to captivate and engage audiences.

Projection-based AR is commonly used in in-person events such as store openings, pop-up shops, and movie screenings. 

Location-based AR

Location-based AR is a variant of markerless AR. It makes use of geographic data to deploy digital content at precise locations (similar to geotargeting ). The most common example of this is the mobile gaming app Pokémon Go which allows players to trigger various AR functionalities depending on where they are.

Location-based AR has also been used in retail settings to gamify the shopping experience, such as virtual scavenger hunts within boutiques that entice customers to explore and earn rewards.
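The trigger behind location-based AR is essentially a geofence check: compare the user's GPS position against the content's anchor point and unlock the experience within a set radius. A minimal sketch, with made-up coordinates and radius (real AR platforms handle this through their own location APIs):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def ar_content_unlocked(user_lat, user_lon, anchor_lat, anchor_lon, radius_m=50):
    """Trigger AR content only when the user is inside the geofence."""
    return haversine_m(user_lat, user_lon, anchor_lat, anchor_lon) <= radius_m

# Hypothetical anchor: a scavenger-hunt clue placed at a store entrance
store = (51.5138, -0.0984)
print(ar_content_unlocked(51.5139, -0.0985, *store))  # user ~13 m away -> True
print(ar_content_unlocked(51.5238, -0.0985, *store))  # user ~1.1 km away -> False
```

A production system would also debounce GPS jitter near the fence boundary, but the distance test above is the core of the trigger.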

Superimposition-based AR

This type of AR marketing replaces or enhances physical items with digital content. Specifically, it identifies objects or features in the user’s environment (such as book covers or product labels) and overlays digital information onto them. 

For example, in retail settings, this technology can guide customers and provide product information by placing virtual arrows onto the environment to direct shoppers to specific products. A device can reveal the virtual overlays with details such as price, features, and reviews.

Forms and applications of augmented reality marketing

In addition to the technical types of AR, augmented reality in marketing can also take various forms and applications :

  • AR OOH advertising: Augmented reality can be integrated with out-of-home (OOH) advertising . OOH advertising refers to any visual media presented outside of the home such as billboards, bus stop advertising boards, etc.
  • AR print marketing: Traditional print marketing such as postcards, flyers, and direct mail, can be more engaging and elevated by applying AR elements to it. 
  • AR product packaging: AR can also be utilized in various packaging such as adding QR codes to a packaged product that shows how-to videos or a thank-you message.
  • E-Commerce: Through appless AR or web AR, customers are able to view and experience products in 3D from the comfort of their own homes. Some websites offer a “virtual dressing room” or “virtual try-on” feature that lets customers try on products virtually.
  • AR events and showcases: Events, conferences, product launches and showrooms, and virtual walkthroughs that integrate AR elements can be used to elevate the user experience. Visitors get to have an immersive event experience through 2D video, 3D animation, and volumetric motion capture.
  • Other forms: In addition, there are other various forms of augmented reality advertising. For instance, Snapchat has a feature that uses AR-powered lenses and filters that lets you “try on” various products (such as eyewear or apparel). There are also virtual walkthroughs and showrooms, AR video ads, 3D product showcases, AR games (such as Niantic Wayfarer and the aforementioned Pokemon Go), and more.

Benefits of augmented reality in advertising

Studies show that a great AR ad campaign is typically more effective than traditional advertising techniques. In one study by Meta, 90% of brands found that campaigns combining traditional, business-as-usual tactics with AR achieved nearly three times the brand lift while costing 59% less on average.
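Taken together, those two figures imply a large efficiency gain. A back-of-the-envelope calculation (illustrative arithmetic only; the study's exact methodology isn't reproduced here):

```python
# Figures quoted from the Meta study above (illustrative arithmetic only)
brand_lift_multiplier = 3.0   # campaigns with AR saw ~3x the brand lift
cost_multiplier = 1 - 0.59    # and cost 59% less, i.e. ~0.41x the spend

# Relative brand lift per unit of spend vs. the baseline campaign
lift_per_dollar = brand_lift_multiplier / cost_multiplier
print(f"~{lift_per_dollar:.1f}x brand lift per dollar")  # ~7.3x
```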

In another study, from eMarketer and ARtillery Intelligence, US mobile advertising spending is set to keep growing, reaching up to $235.67 billion by 2025, while mobile AR advertising revenues worldwide are expected to grow to $39.81 billion by 2027.

This shows there’s a lot of room for potential and growth in the industry and AR advertising is expected to increase in investment and revenue in the coming years. 

(Chart: mobile augmented reality revenues worldwide, 2022–2027)

Investing in AR advertising provides a lot of value and advantages to campaigns. It provides better user engagement since it offers a more unique, personalized, and interactive experience for customers.

In fact, the Ericsson Emodo Primary Survey Research Study in 2021 found that 68% of respondents agreed or strongly agreed that engaging AR ad experiences reflected positively on the brand in question; 74% said ads with AR experiences would be more likely to capture their interest or attention than regular ads.

Some companies have also found that AR ads often perform better than display ads, which leads to higher conversion rates. Take, for example, Shopify, which found in its study that, on average, 3D AR ads generated 94% higher conversion rates than their static 2D counterparts.

Utilizing immersive AR experiences lets brands build deeper emotional ties with their customers and achieve better brand recall. By immersing the audience in the experience and encouraging active engagement, AR increases the likelihood that viewers remember the ads.

Studies show that AR experiences lead to 70% more memory encoding compared to traditional static ads.

Challenges of AR advertising

There are certain challenges, and limitations, to consider when utilizing AR in advertising strategies. For instance:

  • AR-enabled technology is required: One of the biggest barriers marketers face when implementing AR is the need for supporting technology. This could be a smartphone or tablet with a camera. In addition, most AR ad experiences rely on a mobile app to host their content. If your target consumer doesn’t even have the app or technology to view your ads in AR, then it would be a lost cause.
  • Hardware and network limitations: Your campaign is limited to what a user’s device can withstand. The graphics processing capacity, the battery consumption, the network connectivity, and other factors can be very demanding both from the advertiser and the user’s side. This means that your AR campaigns may not be as effective if there are any hardware and software limitations from either side.
  • Expertise required in building the software and ad: It takes technical expertise (such as coding and 3D rendering) to develop AR advertising campaigns. There are AR software development kits and tools that exist. However, they can be very costly. Even outsourcing this work to third-party companies or agencies can be costly if one doesn’t have the tools and resources to build AR campaigns themselves.

Examples of augmented reality ads

Coca-Cola launches “#TakeATaste” AR campaign

In September 2023, Coca-Cola UK had an innovative AR giveaway and a nationwide digital out-of-home (DOOH) campaign called “#TakeATaste” for its Coke Zero Sugar product in collaboration with Tesco group. 

Smartphone users could interact with some of London’s OOH screens (such as the Piccadilly Lights) and change the screens’ AR visuals in real time. They could also scan a QR code that awarded them both a digital bottle of Coke Zero on their phones and a voucher to claim the real thing in a nearby Tesco.

Toyota uses AR to give users a virtual test drive of its “Crown” line

Toyota collaborated with Yahoo Advertising to create an immersive AR experience in support of their launch of the 2023 Toyota Crown line. 

With the AR experience, users can delve into the car’s exterior and interior, taking a complete 360-degree tour around the vehicle, immersing themselves in the driver’s seat, and even virtually driving the car, all within the confines of their own garage or driveway.

To make the experience even more informative, educational highlights are incorporated to offer shoppers a better understanding of the Toyota Crown’s distinctive features.

In addition to the AR experience, Yahoo helps Toyota reach future Toyota Crown customers through additional digital touchpoints including Digital Out-of-Home (DOOH) advertising, banner displays, and leveraged Connected TV (CTV) pre-roll to reach consumers in their homes. 

The Weeknd holds an AR concert on TikTok

To promote his album “After Hours”, musician The Weeknd held a concert in 2021 on TikTok . The Weeknd became the first artist to use TikTok for an augmented reality concert. 

The concert offered a fully live digital experience with scenes changing with each song. Through AR, users could choose the scenes by voting and sharing comments which were displayed live in the surroundings.

Final thoughts

As AR  evolves and integrates into various industries (including advertising), it has the potential to revolutionize how brands engage with consumers.

By harnessing the power of AI and AR technology, advertisers can create personalized and engaging ad experiences that drive meaningful connections. This innovative combination of AR and AI also opens up endless possibilities for advertisers to create unique, experiential ads that inspire and captivate audiences.

AR has proven that it can play a vital role in the advertising realm. The question is: will it play a role in your advertising?


Pico 4 Ultra VS Meta Quest 3: the battle of the best mid-range VR headsets

Will the Quest 3 or Pico 4 Ultra come out ahead?

  • Price and Availability
  • Specs & Performance
  • Mixed reality
  • Which should you buy?

Meta Quest 3

The Meta Quest 3 is currently Meta's best VR headset and its clearest strength is its software support via Horizon OS. That's not to say the hardware is anything to sniff at, mind, with good full-color passthrough and a Snapdragon XR2 Gen 2 chipset that can handle the XR challenges you throw at it. It's also the cheaper of the two headsets.

  • Pros: vastly better graphics than Quest 2; improved mixed reality; incredible suite of software
  • Cons: pricier than the Quest 2 at launch; no eye-tracking; the design is good, but not yet perfect

The Pico 4 Ultra

The Pico 4 Ultra has come out swinging with better specs (at a higher price) and some features you won't find on Quest like its Motion Tracker add-ons which are perfect for fully immersive VR gaming. Unfortunately the hardware is let down by its software to some degree, but that doesn't mean you should instantly disregard this Pico headset.

  • Pros: Motion Tracker accessories bring foot tracking; great specs for the price; simple and intuitive
  • Cons: software support lacking major exclusives; no silicone facial interface in the box; best feature is a paid add-on

The Pico 4 Ultra VR headset is finally here, and it’s ready to take on the Meta Quest 3 in the mid-range standalone VR headset space. I’ve tested both headsets extensively, and in this guide I’ll tell you if you should go with the tried and tested Quest model, or choose Pico’s potential usurper of the VR throne.

At a glance the Pico 4 Ultra offers better specs at only a marginally higher cost, but Meta’s Quest 3 is backed up by plenty of heavy-hitting VR software exclusives. It’s a close fight, but there can only be one winner – and we’ll let you know which headset comes out on top for value, performance, the mixed-reality experience, software, and features, as well as overall.

For more details on either headset we’d recommend reading our in-depth Meta Quest 3 review and Pico 4 Ultra review – though if you care about spoilers don’t look at their scores as they’ll give the result away.

Pico 4 Ultra VS Meta Quest 3: Price and Availability

  • Meta Quest 3 is cheaper and more widely available
  • Pico 4 Ultra has better specs to justify a higher cost
  • Value verdict: Tie

The Meta Quest 3’s cheapest model costs $499.99 / £479.99 / AU$799.99, while the one and only Pico 4 Ultra model will set you back £529 (around $695 / AU$1,025). As you can see from the specs table below, that additional £50 from Pico nets you 4GB of extra RAM, 128GB of extra storage and, as you’ll see below, some exclusive tools like cameras for capturing spatial images and video.

So bang for your buck, hardware-wise the devices feel very even, though availability is in the Quest 3’s favor. That’s due to the Pico 4 Ultra only having launched in parts of Europe (including the UK) and Asia, while the Quest 3 is available in more regions – most importantly for our readers, the US and Australia.

If you’re in those countries you could theoretically buy the Pico 4 Ultra as an import, though that’ll likely cost you extra and can be a hassle.


Pico 4 Ultra VS Meta Quest 3: Specs & Performance

Spec                  Pico 4 Ultra                      Meta Quest 3
Weight                580g                              515g
Display               Two LCD screens                   Two LCD screens
Display resolution    2,160 x 2,160 pixels per eye      2,064 x 2,208 pixels per eye
FOV                   105 degrees                       110 degrees horizontal, 96 degrees vertical
Refresh rate          90Hz                              72Hz, 80Hz, 90Hz, 120Hz
Chipset               Qualcomm Snapdragon XR2 Gen 2     Qualcomm Snapdragon XR2 Gen 2
RAM                   12GB                              8GB
Storage               256GB                             128GB or 512GB
  • Pico 4 Ultra has equally good or better specs
  • No major apps seriously justify the performance boost yet
  • Specs and performance verdict: Pico 4 Ultra

Going by the above specs table, and as mentioned in the Price section, the Pico 4 Ultra edges ahead of the Meta Quest 3 if we look at the base model of the Quest 3 (the Ultra only has one option).

Pico’s new VR headset will net you 12GB of RAM versus 8GB for the Quest 3; 256GB of storage versus 128GB; and two LCD screens at 2,160 x 2,160 pixels per eye versus 2,064 x 2,208 pixels per eye, though the Meta Quest 3’s screens can hit a refresh rate of up to 120Hz rather than 90Hz.
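Those resolution figures are closer than they read; multiplying out the spec numbers above shows the per-eye pixel counts differ by only a couple of percent:

```python
# Per-eye pixel counts from the spec figures quoted above
pico_pixels = 2160 * 2160    # Pico 4 Ultra: 4,665,600 pixels per eye
quest_pixels = 2064 * 2208   # Meta Quest 3: 4,557,312 pixels per eye

diff_pct = (pico_pixels - quest_pixels) / quest_pixels * 100
print(f"Pico 4 Ultra renders {diff_pct:.1f}% more pixels per eye")  # 2.4%
```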

Both headsets boast the Qualcomm Snapdragon XR2 Gen 2 chipset, which is the current-gen standard for mid-range VR headsets. They do, however, run different operating systems on that chipset: Meta has HorizonOS while Pico has Pico OS, and while both are based on Android, HorizonOS tends to be the cleaner and better-optimized experience.

When it comes to performance the differences seem less stark when actually using the headsets.

For me, the two things helping to keep the Quest 3 feeling on a par here are Meta’s excellent HorizonOS optimizations which allow the Meta Quest 3 to efficiently squeeze out every bit of juice from its components to maximize performance – and the fact that the Pico 4 Ultra lacks meaningful exclusives that can fully leverage its superior specs.

That’s not to say the Pico’s extra RAM and storage won’t help (especially with multitasking, which allows up to eight windows to be open at once compared to the Quest 3's six), but the Quest 3 equally doesn’t struggle to run any of the VR or MR experiences I’ve tried. So, until the Pico 4 Ultra launches apps that prove otherwise, there’s a part of me that feels the extra RAM is there more for its bark than its bite.

Pico 4 Ultra VS Meta Quest 3: Mixed reality


  • Very close in quality
  • Neither offers true to life mixed reality
  • Mixed reality verdict: Pico 4 Ultra

Switching back and forth between the two headsets, and testing their passthrough performance in various settings and under different lighting conditions, I can report that there’s no standout winner here.

The Pico 4 Ultra does just push itself ahead with passthrough that’s a little more vibrant, though it does have more noticeable (yet still fairly inconsequential) distortion at the fringes of the screen. The Meta Quest 3’s passthrough is a little grainier too, though when you’re playing a mixed-reality game the differences aren’t super obvious. Neither experience comes close to looking like real life either.

Recent HorizonOS updates have delivered substantial improvements to the Quest 3’s mixed reality quality, and I suspect that if Pico brought the same levels of optimizations to its hardware the differences between the two models would be a lot more stark, with the Pico 4 Ultra more clearly in first place. It still takes the win, but I expected more from its dual 32MP sensor setup.

Pico 4 Ultra VS Meta Quest 3: Software

  • Meta Quest 3 exclusives aren't matched
  • Pico 4 Ultra software is better than the base Pico 4's was at launch
  • Software verdict: Meta Quest 3

So far the headsets have been neck-and-neck, with the Pico 4 Ultra edging out the Meta Quest 3 in a few areas. When it comes to software, however, the Meta Quest 3 is clearly the best option for people who want the most complete catalog of games and apps.

The Pico 4 Ultra does boast many of the best cross-platform VR and MR titles in its collection; however, Meta has a lot of Quest exclusives, and there are also some non-exclusive hits (available on other platforms like Steam or PSVR) that aren’t currently on Pico’s platform for one reason or another.

And the Quest exclusives are mega-hits. We’re talking Beat Saber, Resident Evil 4 VR, Assassin's Creed Nexus, Asgard’s Wrath 2, Batman: Arkham Shadow, Xbox Cloud Gaming, Just Dance VR. Pico’s only noteworthy exclusive is TikTok.

I know lots of people hate exclusives, but like them or not their presence here means that if you’re looking to play the best VR games you’ll miss out on several if you go for a Pico 4 Ultra and not a Meta Quest 3.

Pico 4 Ultra VS Meta Quest 3: Features


  • Pico can do everything Meta promises and more
  • Motion trackers are a game changer
  • Feature verdict: Pico 4 Ultra

The Pico 4 Ultra and Meta Quest 3 offer generally similar features, though Pico wins out here with two exclusive tools.

The more consequential of the two are its Motion Trackers, which track your feet for significantly more accurate full-body tracking. They’re a lot of fun, and Meta doesn’t have its own alternative. The only downside is that, unless you pick up a pair as part of the preorder bundle (or a similar deal in the future), the Motion Trackers will cost you £79 (for two). Pico says they’re supported by “20+” standalone experiences, which isn’t nothing but is only a small subset of its catalog.

The other unique feature is the Pico 4 Ultra’s camera for taking spatial photos and spatial videos. The image is the same quality as the passthrough (read: not great) and it’s just generally clunky to use. It’s a novelty, sure, but realistically it’s not something anyone will use regularly to capture stereoscopic content. The Pico 4 Ultra is too big, and the feature not conveniently accessible enough, to warrant carrying the headset around to capture impromptu moments in 3D.

Should you buy the Meta Quest 3 or Pico 4 Ultra?

The battle between the Meta Quest 3 and Pico 4 Ultra is close in most areas, with the two headsets either tying or the Pico 4 pulling slightly ahead in a few categories. However, the Meta Quest 3 not only wins the software category, it demolishes its rival in this respect – and this is why it’s the one to get.

Yes the Pico 4 Ultra has marginally more power and exclusive motion trackers, but it has nothing that truly takes advantage of that extra performance, and few experiences that leverage its unique features – and that power and those features come at a price premium over the Quest 3. Meta’s Quest 3, on the other hand, has a huge library of excellent exclusive VR and MR games and apps that Pico simply doesn’t have an answer for. It’s also the more widely available device.

So when weighing up which headset you’ll enjoy using more, I’m confident that for the vast majority of people it’ll be the Meta Quest 3. I’m not saying you should automatically dismiss the Pico 4 Ultra, but if you’re on the fence between the two I’d recommend that you go for Meta’s device for now.



Hamish is a Senior Staff Writer for TechRadar and you’ll see his name appearing on articles across nearly every topic on the site from smart home deals to speaker reviews to graphics card news and everything in between. He uses his broad range of knowledge to help explain the latest gadgets and if they’re a must-buy or a fad fueled by hype. Though his specialty is writing about everything going on in the world of virtual reality and augmented reality.



  • DOI: 10.1186/s13584-024-00634-8
  • Corpus ID: 272601115

Augmented reality- virtual reality wartime training of reserve prehospital teams: a pilot study

  • Arielle Kaim , Efrat Milman , +4 authors B. Adini
  • Published in Israel Journal of Health… 12 September 2024
  • Medicine, Engineering


COMMENTS

  1. Virtual Reality(VR) vs Augmented Reality(AR): What's the difference?

    But overall, both Virtual Reality and Augmented Reality are hot technologies currently and becoming more and more popular (and better!) with time. So be sure to enjoy this persistent illusion that is a new reality for modern times! Conclusion. In summary, Virtual Reality (VR) immerses users in digital environments, while Augmented Reality (AR) overlays digital elements onto the real world.

  2. Virtual Reality Versus Augmented Reality Essay

    Virtual Reality (VR) refers to a high-end user computer interface involving real-time interactions and simulations that use several sensorial channels, including visual, auditory, tactile, smell, and taste. Virtual Reality should not just be taken as a high-end user interface or a medium.

  3. Augmented Reality (AR) vs. Virtual Reality (VR): What's the ...

    The terms virtual reality and augmented reality get thrown around a lot. VR headsets, such as the Meta Quest 2 or the Valve Index, and AR apps and games, such as Pokemon Go, are popular. They ...

  4. Virtual Reality vs Augmented Reality: Comparative Analysis

    However, there are a few key differences that separate these technologies such as: User Experience: AR blends virtual content with the real world, enhancing the user's perception of reality in the physical world. VR completely immerses users in a simulated environment, totally disconnecting them from the physical world.