In Part 1 of this article we explored the current state of CGI, game, and contemporary VR systems. Here in Part 2 we look at the limits of human visual perception and show several of the methods we’re exploring to drive performance closer to them in VR systems of the future.

Guest Article by Dr. Morgan McGuire

Dr. Morgan McGuire is a scientist on the new experiences in AR and VR research team at NVIDIA. He’s contributed to the Skylanders, Call of Duty, Marvel Ultimate Alliance, and Titan Quest game series published by Activision and THQ. Morgan is the coauthor of The Graphics Codex and Computer Graphics: Principles & Practice. He holds faculty positions at the University of Waterloo and Williams College.

Note: Part 1 of this article provides important context for this discussion; consider reading it before proceeding.

Reinventing the Pipeline for the Future of VR

We derive our future VR specifications from the limits of human perception. There are different ways to measure these, but to make the perfect display you’d need roughly the equivalent of 200 HDTVs updating at 240 Hz. This equates to about 100,000 megapixels per second of graphics throughput.

Recall that modern VR is around 450 Mpix/sec today. This means we need a 200x increase in performance for future VR. But with factors like high dynamic range, variable focus, and current film standards for visual quality and lighting in play, the more realistic need is a 10,000x improvement… and we want this with only 1ms of latency.
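As a rough back-of-envelope check of those figures (assuming 1080p HDTVs; the exact numbers depend on how you define the perceptual limits):

```python
# Rough sanity check of the throughput targets above, assuming 1080p HDTVs.
hdtv_mpix = 1920 * 1080 / 1e6                 # ~2.07 Mpix per HDTV
target = 200 * hdtv_mpix * 240                # 200 HDTVs at 240 Hz
print(f"target throughput: ~{target:,.0f} Mpix/sec")   # ~99,500, i.e. ~100,000

current = 450                                 # today's VR, from Part 1
print(f"needed speedup:   ~{target / current:.0f}x")    # ~220x, on the order of 200x
```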

We could theoretically accomplish this by committing increasingly greater computing power, but brute force simply isn’t efficient or economical. Brute force won’t get us to pervasive use of VR. So, what techniques can we use to get there?

Rendering Algorithms

Foveated Rendering
Our first approach to performance is foveated rendering, a technique which reduces the quality of images in the user’s peripheral vision. It takes advantage of an aspect of human perception to generate an increase in performance without a perceptible loss in quality.

Because the eye itself only has high resolution right where you’re looking, in the fovea centralis region, a VR system can undetectably drop the resolution of peripheral pixels for a performance boost. It can’t just render at low resolution, though. The above images are wide field-of-view pictures shrunk down for display here in 2D. If you looked at the clock in VR, then the bulletin board on the left would be in the periphery. Just dropping resolution as in the top image produces blocky graphics and a change in visual contrast, which is detectable as motion or blurring in the corner of your eye. Our goal is to compute the exact enhancement needed to produce a low-resolution image whose blurring matches human perception and appears perfect in peripheral vision (Patney et al. and Sun et al.).
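As a rough illustration of the idea (and not the actual method from Patney et al. or Sun et al.), a foveated renderer might pick a per-tile shading rate from the angular distance between each screen tile and the tracked gaze direction; the thresholds below are placeholders, not perceptual constants:

```python
import math

def shading_rate(tile_dir, gaze_dir):
    """Return samples per pixel edge for a screen tile, based on its angular
    distance (eccentricity) from the gaze direction: 1.0 = full resolution,
    0.25 = one sample per 4x4 pixels in the far periphery."""
    dot = sum(a * b for a, b in zip(tile_dir, gaze_dir))
    eccentricity = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if eccentricity < 5.0:        # foveal region: keep every pixel
        return 1.0
    elif eccentricity < 20.0:     # near periphery: half rate
        return 0.5
    else:                         # far periphery: quarter rate, then filtered/enhanced
        return 0.25

# A tile 30 degrees away from where you're looking gets a quarter of the samples.
tilted = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
print(shading_rate(tilted, (0.0, 0.0, 1.0)))
```

The hard part, as described above, is filtering and contrast-enhancing those coarsely shaded regions so the reduction never reads as blur or flicker in the corner of your eye.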

Light Fields
To speed up realistic graphics for VR, we’re looking at rendering primitives beyond just today’s triangle meshes. In this collaboration with McGill and Stanford we’re using light fields to accelerate the lighting computations. Unlike today’s 2D light maps that paint lighting onto surfaces, these are 4D data structures that store the lighting at every position and direction in space.

They produce realistic reflections and shading on all surfaces in the scene and even dynamic characters. This is the next step of unifying the quality of ray tracing with the performance of environment probes and light maps.
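The data structures in the actual research are more sophisticated and compressed, but conceptually a 4D light field can be as simple as radiance indexed by a 2D position and a 2D direction; a toy sketch with placeholder data:

```python
import numpy as np

# Toy 4D light field: RGB radiance indexed by an (x, z) grid position and a
# quantized direction (theta, phi). Placeholder random data stands in for a
# precomputed lighting solution.
NX, NZ, NTHETA, NPHI = 16, 16, 8, 16
light_field = np.random.rand(NX, NZ, NTHETA, NPHI, 3)

def sample_lighting(pos_xz, dir_xyz):
    """Nearest-neighbor lookup of incoming light at a position and direction,
    so even a moving character can be shaded without baked 2D light maps."""
    x = int(np.clip(pos_xz[0] * NX, 0, NX - 1))
    z = int(np.clip(pos_xz[1] * NZ, 0, NZ - 1))
    theta = np.arccos(np.clip(dir_xyz[1], -1.0, 1.0))        # angle from 'up'
    phi = np.arctan2(dir_xyz[2], dir_xyz[0]) % (2 * np.pi)   # azimuth
    t = int(np.clip(theta / np.pi * NTHETA, 0, NTHETA - 1))
    p = int(np.clip(phi / (2 * np.pi) * NPHI, 0, NPHI - 1))
    return light_field[x, z, t, p]

print(sample_lighting((0.5, 0.5), (0.0, 1.0, 0.0)))  # light arriving from straight above
```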

Real-time Ray Tracing
What about true run-time ray tracing? The NVIDIA Volta GPU is the fastest ray tracing processor in the world, and its NVIDIA Pascal GPU siblings are the fastest consumer ones. At about 1 billion rays/second, Pascal is just about fast enough to replace the primary rasterizer or shadow maps for modern VR. If we unlock the pipeline with the kinds of changes I’ve just described, what can ray tracing do for future VR?

The answer is: ray tracing can do a lot for VR. When you’re tracing rays, you don’t need shadow maps at all, thereby eliminating a latency barrier. Ray tracing can also natively render red, green, and blue separately, and directly render barrel-distorted images for the lens. So, it avoids the need for the lens warp processing and the subsequent latency.

In fact, when ray tracing, you can completely eliminate the latency of rendering discrete frames of pixels so that there is no ‘frame rate’ in the classic sense. We can send each pixel directly to the display as soon as it is produced on the GPU. This is called ‘beam racing’ and eliminates the display synchronization. At that point, there are zero high-latency barriers within the graphics system.

Because there’s no flat projection plane as in rasterization, ray tracing also solves the field of view problem. Rasterization depends on preserving straight lines (such as the edges of triangles) from 3D to 2D. But the wide field of view needed for VR requires a fisheye projection from 3D to 2D that curves triangles around the display. Rasterizers break the image up into multiple planes to approximate this. With ray tracing, you can directly render even a full 360 degree field of view to a spherical screen if you want. Ray tracing also natively supports mixed primitives: triangles, light fields, points, voxels, and even text, allowing for greater flexibility when it comes to content optimization. We’re investigating ways to make all of those faster than traditional rendering for VR.
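To see why the projection problem disappears, compare ray generation for a flat projection plane (what rasterization assumes) with an equirectangular mapping that covers a full 360 degree field of view; a minimal sketch:

```python
import math

def planar_ray(u, v, fov_deg=90.0):
    """Ray through a flat projection plane; u, v in [-1, 1].
    This is the rasterizer's model, and it breaks down as the FOV nears 180 degrees."""
    half = math.tan(math.radians(fov_deg) / 2.0)
    d = (u * half, v * half, 1.0)
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)

def spherical_ray(u, v):
    """Ray for an equirectangular 'screen' spanning 360 x 180 degrees; u, v in [-1, 1].
    No projection plane is needed, so a ray tracer can render this directly."""
    azimuth, elevation = u * math.pi, v * math.pi / 2.0
    return (math.cos(elevation) * math.sin(azimuth),
            math.sin(elevation),
            math.cos(elevation) * math.cos(azimuth))

print(planar_ray(0.5, 0.0))     # fine for a narrow view
print(spherical_ray(1.0, 0.0))  # directly behind the viewer: trivial for a ray tracer
```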

In addition to all of the ways that ray tracing can accelerate VR rendering latency and throughput, a huge feature of ray tracing is what it can do for image quality. Recall from the beginning of this article that the image quality of film rendering is due to an algorithm called path tracing, which is an extension of ray tracing. If we switch to a ray-based renderer, we unlock a new level of image quality for VR.

Real-time Path Tracing
Although we can now ray trace in real time, there’s a big challenge for real-time path tracing. Path tracing is about 10,000x more computationally intensive than ray tracing. That’s why movies take minutes per frame to generate instead of milliseconds.

Under path tracing, the system first traces a ray from the camera to find the visible surface. It then casts another ray to the sun to see if that surface is in shadow. But there’s more illumination in a scene than what comes directly from the sun. Some light is indirect, having bounced off the ground or another surface. So, the path tracer then recursively casts another ray at random to sample the indirect lighting. That point also requires a shadow ray cast, and its own random indirect ray…the process continues until about 10 rays have been traced for each single path.
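A minimal sketch of that recursion, with trivial stand-in functions for the scene, the sun visibility test, and the bounce sampling (a real path tracer importance-samples materials and lights):

```python
import random

# Stand-in scene queries, just to make the structure runnable.
def trace(origin, direction):          # visibility ray: returns a fake hit or None
    hit_pos = tuple(o + d for o, d in zip(origin, direction))
    return {"position": hit_pos, "albedo": 0.5} if random.random() < 0.9 else None

def visible_to_sun(point):             # shadow ray toward the sun
    return random.random() < 0.7

def random_direction():                # random bounce direction (placeholder)
    return tuple(random.uniform(-1.0, 1.0) for _ in range(3))

def trace_path(origin, direction, depth=0, max_depth=4):
    """One path: each bounce costs a visibility ray plus a shadow ray, and then
    recurses for indirect light -- a few bounces adds up to ~10 rays per path."""
    hit = trace(origin, direction)                              # 1 ray
    if hit is None or depth >= max_depth:
        return 0.0
    direct = 1.0 if visible_to_sun(hit["position"]) else 0.0    # 1 shadow ray
    indirect = trace_path(hit["position"], random_direction(), depth + 1, max_depth)
    return hit["albedo"] * (direct + indirect)

# A single path per pixel is extremely noisy; film averages thousands per pixel.
print(trace_path((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```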

But if there’s only one or two paths at a pixel, the image is very noisy because of the random sampling process. It looks like this:

Film graphics solves this problem by tracing thousands of paths at each pixel. All of those paths at ten rays each are why path tracing is a net 10,000x more expensive than ray tracing alone.

To unlock path tracing image quality for VR, we need a way to sample only a few paths per pixel and still avoid the noise from random sampling. We think we can get there soon thanks to innovations like foveated rendering, which makes it possible to only pay for expensive paths in the center of the image, and denoising, which turns the grainy images directly into clear ones without tracing more rays.

We released three research papers this year towards solving the denoising problem. These are the result of collaborations with McGill University, the University of Montreal, Dartmouth College, Williams College, Stanford University, and the Karlsruhe Institute of Technology. These methods can turn a noisy, real-time path traced image like this:

Into a clean image like this:

Using only milliseconds of computation and no additional rays. Two of the methods use the image processing power of the GPU to achieve this. One uses the new AI processing power of NVIDIA GPUs. We trained a neural network for days on denoising, and it can now denoise images on its own in tens of milliseconds. We’re increasing the sophistication of that technique and training it more to bring the cost down. This is an exciting approach because it is one of several new methods we’ve discovered recently for using artificial intelligence in unexpected ways to enhance both the quality of computer graphics and the authoring process for creating new, animated 3D content to populate virtual worlds.
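The published reconstruction filters and the trained network are far more capable, but the basic trade (a little image-space filtering instead of thousands more rays) can be illustrated with a simple edge-aware blur over the noisy radiance, guided by a noise-free auxiliary buffer such as albedo or normals. This is only a toy stand-in, not the papers’ algorithms:

```python
import numpy as np

def toy_denoise(noisy, guide, radius=3, sigma=0.1):
    """Cross-bilateral-style filter: average noisy path-traced radiance over a
    window, weighting each neighbor by how similar it is in a noise-free guide
    buffer (e.g. albedo), so edges survive while the grain is smoothed out."""
    h, w = noisy.shape[:2]
    out = np.zeros_like(noisy)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            diff = guide[y0:y1, x0:x1] - guide[y, x]
            weights = np.exp(-(diff ** 2).sum(axis=-1) / (2.0 * sigma ** 2))
            out[y, x] = (weights[..., None] * noisy[y0:y1, x0:x1]).sum((0, 1)) / weights.sum()
    return out

noisy = np.random.rand(32, 32, 3)     # 1-sample-per-pixel render (placeholder noise)
guide = np.full((32, 32, 3), 0.5)     # clean albedo buffer for the same pixels
print(toy_denoise(noisy, guide).std(), "<", noisy.std())   # filtered image is smoother
```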

Computational Displays

The displays in today’s VR headsets are relatively simple output devices. The display itself does hardly any processing; it simply shows the data that is handed to it. And while that’s fine for things like TVs, monitors, and smartphones, there’s huge potential for improving the VR experience by making displays ‘smarter’ about not only what is being displayed but also the state of the observer. We’re exploring several methods of on-headset and even in-display processing to push the limits of VR.

Solving Vergence-Accommodation Disconnect
The first challenge for a VR display is the focus problem, which is technically called the ‘vergence-accommodation disconnect’. All of today’s VR and AR devices force you to focus about 1.5m away. That has two drawbacks:

  1. When you’re looking at a very distant or close up object in stereo VR, the point where your two eyes converge doesn’t match the point where they are focused (‘accommodated’). That disconnect creates discomfort and is one of the common complaints with modern VR.
  2. If you’re using augmented reality, then you are looking at points in the real world at real depths. The virtual imagery needs to match where you’re focusing or it will be too blurry to use. For example, you can’t read augmented map directions rendered at 1.5m when your eyes are focused 20m down the road while driving.

We created a prototype computational light field display that allows you to focus at any depth by presenting light from multiple angles. This display represents an important break with the past because computation is occurring directly in the display. We’re not sending mere images: we’re sending complex data that the display converts into the right form for your eye. Those tiny grids of images that look a bit like a bug’s view of the world have to be specially rendered for the display, which incorporates custom optics—a microlens array—to present them in the right way so that they look like the natural world.

That first light field display was from 2013. Next week, at the ACM SIGGRAPH Asia 2018 conference, we’re presenting a new holographic display that uses lasers and intensive computation to create light fields out of interfering wavefronts of light. It is harder to visualize the workings here, but it relies on the same underlying principles and can produce even better imagery.

We strongly believe that this kind of in-display computation is a key technology for the future. But light fields aren’t the only approach that we’ve taken for using computation to solve the focus problem. We’ve also created two forms of variable-focus, or ‘varifocal’ optics.

This display prototype projects the image using a laser onto a diffusing hologram. You look straight through the hologram and see its image as if it was in the distance when it reflects off a curved piece of glass:

We control the distance at which the image appears by moving either the hologram or the sunglass reflectors with tiny motors. We match the virtual object distance to the distance that you’re looking in the real world, so you can always focus perfectly naturally.

This approach requires two pieces of computation in the display: one tracks the user’s eye and the other computes the correct optics in order to render a dynamically pre-distorted image. As with most of our prototypes, the research version is much larger than what would become an eventual product. We use large components to facilitate research construction. These displays would look more like sunglasses when actually refined for real use.
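A hypothetical sketch of the eye-tracking piece: estimating the fixation distance from the two tracked gaze directions, which would then drive the focus motors. The real calibration and control loops are considerably more involved:

```python
import math

def vergence_distance(ipd_m, left_inward_deg, right_inward_deg):
    """Estimate how far away the user is looking from the inward rotation of
    each eye relative to straight ahead. Assumes a symmetric fixation point;
    purely illustrative."""
    half_vergence = math.radians((left_inward_deg + right_inward_deg) / 2.0)
    if half_vergence <= 0.0:
        return float("inf")                 # eyes parallel: focused at infinity
    return (ipd_m / 2.0) / math.tan(half_vergence)

# With a 64 mm IPD, about 1.2 degrees of inward rotation per eye is roughly 1.5 m.
d = vergence_distance(0.064, 1.22, 1.22)
print(f"fixation distance ~ {d:.2f} m -> drive the varifocal optics to {d:.2f} m")
```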

Here’s another varifocal prototype, this one created in collaboration with researchers at the University of North Carolina, the Max Planck Institute, and Saarland University. This is a flexible lens membrane. We use computer-controlled pneumatics to bend the lens as you change your focus so that it is always correct.

Hybrid Cloud Rendering
We have a variety of new approaches for solving the VR latency challenge. One of them, in collaboration with Williams College, leverages the full spread of GPU technology. To reduce the delay in rendering, we want to move the GPU as close as possible to the display. Using a Tegra mobile GPU, we can even put the GPU right on your body. But a mobile GPU has less processing power than a desktop GPU, and we want better graphics for VR than today’s games… so we team the Tegra with a discrete GeForce GPU across a wireless connection, or, even better, with a Tesla GPU in the cloud.

This allows a powerful GPU to compute the lighting information, which it then sends to the Tegra on your body to render final images. You get the benefit of reduced latency and power requirements while actually increasing image quality.
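A heavily simplified sketch of that split, with hypothetical functions standing in for the two devices: the remote GPU refreshes a coarse grid of lighting values at a low rate, while the on-body GPU shades final pixels every frame from the latest grid and the latest head pose:

```python
import numpy as np

def remote_update_lighting(grid_shape=(8, 8, 8, 3)):
    """Expensive lighting solve on the desktop/cloud GPU (placeholder: random
    irradiance probes on a coarse 3D grid). Runs a few times per second."""
    return np.random.rand(*grid_shape)

def local_render_sample(lighting_grid, position, albedo=0.8):
    """Cheap per-frame shading on the on-body GPU: sample the most recent
    lighting grid at a surface position and multiply by the surface color."""
    idx = tuple(int(np.clip(c * n, 0, n - 1))
                for c, n in zip(position, lighting_grid.shape[:3]))
    return albedo * lighting_grid[idx]

lighting = remote_update_lighting()        # arrives occasionally over the network
for t in range(3):                         # runs at display rate on the headset
    print(local_render_sample(lighting, (0.5, 0.5, 0.1 * t)))
```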

Reducing the Latency Baseline
Of course, you can’t push latency below the display’s frame time. If the display updates at 90 FPS, then it is impossible to have latency less than 11 ms in the worst case, because that’s how long the display waits between frames. So, how fast can we make the display?

We collaborated with scientists at the University of North Carolina to build a display that runs at sixteen thousand binary frames per second. Here’s a graph from a digital oscilloscope showing how well this works for the crucial case of a head turning. When you turn your head, latency in the screen update causes motion sickness.

In the graph, time is on the horizontal axis. When the top green line jumps, that is the time at which the person wearing the display turned their head. The yellow line is when the display updated. It jumps up to show the new image only 0.08ms later…that’s roughly 250 times better than the 20ms you can experience in the worst case on a commercial VR system today.
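For reference, the simple arithmetic behind those intervals, assuming evenly spaced display updates:

```python
# Worst-case wait between updates for a conventional and a binary-frame display.
for rate_hz in (90, 16_000):
    print(f"{rate_hz:>6} Hz -> up to {1000 / rate_hz:.3f} ms between updates")
# 90 Hz waits up to ~11.1 ms; 16,000 Hz waits only ~0.06 ms, consistent with the
# ~0.08 ms head-turn-to-photon latency measured above.
```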

The renderer can’t run at 16,000 fps, so this kind of display works by Time Warping the most recent image to match the current head position. We speed that Time Warp process up by running it directly on the head-mounted display. Here’s an image of our custom on-head processor prototype for this:

Unlike regular Time Warp which distorts the 2D image or the more advanced Space Warp that uses 2D images with depth, our method works on a full 3D data set as well. The picture on the far right shows a case where we’ve warped a full 3D scene in real-time. In this system, the display itself can keep updating while you walk around the scene, even when temporarily disconnected from the renderer. This allows us to run the renderer at a low rate to save power or increase image quality, and to produce low-latency graphics even when wirelessly tethered across a slow network.
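A minimal sketch of the rotation-only 2D variety for intuition (the on-display method described above additionally handles translation and full 3D data), assuming small head rotations and a simple pinhole camera:

```python
import numpy as np

def rotation_y(deg):
    """Rotation about the vertical axis, e.g. a head turn of `deg` degrees."""
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def timewarp_lookup(u, v, focal, delta_rotation):
    """Where in the last rendered frame to fetch the pixel that should now appear
    at screen position (u, v), given how the head has rotated since that frame.
    Rotation-only reprojection: cheap, and exact for distant geometry."""
    new_ray = np.array([u, v, focal])            # ray through the new pixel
    old_ray = delta_rotation.T @ new_ray         # the same ray in the old frame's view
    return (old_ray[0] / old_ray[2] * focal,
            old_ray[1] / old_ray[2] * focal)

# The head turned 2 degrees since the last frame: sample the old image slightly off-center.
print(timewarp_lookup(0.0, 0.0, focal=1.0, delta_rotation=rotation_y(2.0)))
```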

The Complete System

As a reminder, in Part 1 of this article we identified the rendering pipeline employed by today’s VR headsets:

Putting together all of the techniques just described, we can sketch out not just individual innovations but a completely new vision for building a VR system. This vision removes almost all of the synchronization barriers. It spreads computation out into the cloud and right onto the head-mounted display. Latency is reduced by 50-100x and images have cinematic quality. There’s a 100x perceived increase in resolution, but you only pay for pixels where you’re looking. You can focus naturally, at multiple depths.

We’re blasting binary images out of the display so fast that they are indistinguishable from reality. The system has proper focus accommodation, a wide field of view, low weight, and low latency…making it comfortable and fashionable enough to use all day.

By breaking ground in the areas of computational displays, varifocal optics, foveated rendering, denoising, light fields, binary frames and others, NVIDIA Research is innovating for a new system for virtual experiences. As systems become more comfortable, affordable and powerful, this will become the new interface to computing for everyone.

All of the methods that I’ve described can be found in deep technical detail on our website.

I encourage everyone to experience the great, early-adopter modern VR systems available today. I also encourage you to join us in looking to the bold future of pervasive AR/VR/MR for everyone, and recognize that revolutionary change is coming through this technology.

  • FSX76

    Very interesting and fascinating read.
    The future is looking bright – thanks Nvidia.

  • VRgameDevGirl

    Can’t wait to see where we are at in 5 years. 10=holodeck??? J/k

  • doug

    Please edit the latency graph’s “horizontal access” to “horizontal axis.”

    • benz145

      Good spot, fixing!

  • victor

    so glad nvidia is driving vr so strongly!

    • Graham J ⭐️

      I mostly agree, though as a GPU maker they don’t have a lot of incentive to strive for computational efficiency. Still, their research should help the industry as a whole.

      • Jerald Doerr

        I’d have to disagree on that one.. They absolutely benefit from their research.. without it they would most likely lose to AMD..

        • Graham J ⭐️

          I didn’t mean to suggest that they don’t benefit from their own research; certainly they do. But they do so by coming up with new uses for GPU power, which they of course sell.

        • Subash

          NVidia builds closed systems and is often a bottleneck for high-performance computing development. Ask anyone who has to package their binary blobs to be called from CUDA/cuDNN layers. This research will end up as another NVidia monopoly unless the VR pioneers at FB have already found an embedded solution.

  • Jerald Doerr

    Top of the line article! Thank you!

  • Duane Aakre

    It is an interesting read to see all the things required to achieve a VR image that is indistinguishable from reality. However, it seems like it will take a considerable amount of time to get there.

    As a lowly end user, I think a timeline of incremental steps we will probably see over the next five years in real consumer products would be a nice addendum to this article.

    Thanks, Duane

  • Brad Neuberg

    Incredibly exciting! NVIDIA really is a leader in researching these kinds of things rather than chasing other companies tail lights.

    • Subash

      Wait! Did you even know Oculus first broke the ice several years ago and gave the world its first usable consumer VR prototype? The concept of TimeWarp was invented for lower-power, lower-refresh systems (so was SpaceWarp). Nvidia is chasing the tail lights here.

  • anonymouse

    NVIDIA Corp loves to make programmers rewrite their code over and over just for increased amounts of realism. Who needs more realistic blood splattering everywhere? Bloodsuckers on the ecosystem!

  • oompah

    The real estate in front of eyes is sacred. It should not be used for processing or for heavy electronics. It should only be used to DISPLAY what has been processed in some other place.
    I expect that 2 optical fibers (one for each eye) should arrive at this sacred real estate and the minimal electronics (maybe vibrating micro mirrors) should project these to the retinas, similar to the electron gun of old (glass tube) TVs. In other words, the scene should be rendered elsewhere, maybe in a pocket, or at the belt, or at the back of the neck, or in a backpack, and then the output be carried over the 2 optical fibers terminating at the eyes.
    I consider the present HMDs of MS, Google, Vive, Oculus etc. as dinosaurs, already old tech, considering that a 7 yr old child can’t use one for more than half an hour, and even then he has to hold the HMD with his hands because it’s too heavy, which kills the realism.

    Guys more research required for ease of use & ergonomics , the tech looks good but its equipment should not be heavier than regular glasses.

    Got it , no , get lost

    • Mac

      Seems like they are considering every option as far as rendering location. They even talk about only rendering the timewarp on the headset with other computation being done off of the headset. Also, this: “As with most of our prototypes, the research version is much larger than what would become an eventual product. We use large components to facilitate research construction.”

  • Edward Morgan

    “This is called ‘beam racing’”
    No. It is not. Stop that.

    There isn’t even a beam to race in a modern VR headset. You heard a cool term from gaming history(The Atari era, in this case), and are co-opting it for your own purposes. You are DEFECATING ON HISTORY in a vain attempt to look cool, and it is not appreciated.

  • GPUGhost

    Everything sounds great except the cloud rendering. I know Nvidia love to sell their cloud rendering service but it simply won’t work because of network latency. Sending user input/head track to the cloud, rendering the image, then sending it back will always take too long. Their 16k fps display can’t solve network latency.

    • Fürjessy-Űrgamm Ákos

      Doesn’t matter, this round of VR spawned out of quixotism anyway (“free fov”).

  • daveinpublic

    Awesome! I wonder what they mean by “we want to move the GPU as close as possible to the display” and then immediately after “we team the Tegra with a discrete GeForce GPU across a wireless connection, or even better, to a Tesla GPU in the cloud”.. using a wireless connection seems like the GPU would be just as far from the display. Maybe they’re just saying they can add a mobile GPU in the headset for partial speed up.

  • Muhammad Jihad ✓ᵛᵉʳᶦᶠᶦᵉᵈ

    Wow, what a great read.

  • Muhammad Jihad ✓ᵛᵉʳᶦᶠᶦᵉᵈ

    I wonder what more can be done to reduce motion sickness. I’ve had the Rift DK1, DK2 and now the retail version, and while motion sickness is much reduced since the days of DK1, I still want to puke whenever I play a game where I move in the game in a way that doesn’t match what I’m really doing – such as walking – it’s nearly vomit inducing within a minute or two (sometimes faster). I know from going from DK1 to retail that performance improvements help a lot in non-movement situations (low frame rate games or laggy games make many more people sick than motion does), but is there an answer for locomotion in VR? I mean I get sick in real life when riding rides at amusement parks or riding as a passenger in a car when a woman is driving – so I’m guessing the issue is me and there isn’t a lot that can be done to eliminate the issue – only reduce it.

    • mcnbns

      What about when you’re a passenger and a man is driving?

      • Muhammad Jihad ✓ᵛᵉʳᶦᶠᶦᵉᵈ

        Then I’m normally OK.

        • brandon9271

          Your vestibular system is sexist. Lol

    • Raphael

      There won’t be any magic VR cure if you vomit easily offline as well. Multi-plane depth displays will reduce symptoms for some people and we could do with those vestibular headphones that probably won’t ever appear.

      • Muhammad Jihad ✓ᵛᵉʳᶦᶠᶦᵉᵈ

        Interesting, I had never heard about a headphone solution – but that makes sense if the tech exists.

    • Denny Unger

      Until we’re using “safe” galvanic stimulation to trick your inner ear into physically feeling artificial accelerations, you’ll be stuck with best practice methods for awhile. And by that I mean any experience that uses teleportation, vection portals and any method to obfuscate vestibular mismatch. The other popular method is anticipatory psychology, where a user can precisely manage and control the amount of perceived acceleration/rotation/direction (think Lone Echo) but that technique isn’t as effective for the majority.

      • Muhammad Jihad ✓ᵛᵉʳᶦᶠᶦᵉᵈ

        So do we yet have an understanding of why some people get motion sick and some do not? I listened to Buzz Aldrin years ago talk about it and from watching the huge number of people go through NASA training and seeing who got sick and who didn’t, he believes it’s somehow tied to the ‘sense’ of direction of people – people who have a very strong sense of direction – like the ones you can spin around and they can still tell you which way is north – he said they tended to get the most sick and thinks they’re linked. I’ve never read anything about that, but personally going through my list of family and friends, those that get sick on roller coasters and stuff, all of them have very good senses of direction – and most of those who don’t, didn’t get sick. It’s not 100%, but it’s an interesting little tidbit – and I’m not sure if it’s a helpful clue or not, but I do think he’s onto something.

        Also Oculus has Lone Echo for sale today so I’m going to try it. I generally don’t even bother with anything that they rate ‘moderate’ since that in every other case has meant getting sick, but with the number of people talking about Lone Echo handling motion differently, I’ll see what happens – plus I can return it anyway.

        • Patrick Hogenboom

          We use 2 senses for our balance, the eyes and the inner ear. How much each of those contributes to the result is different per person. In some people the eyes dominate; for them the (virtual) visual input is the truth and there is no conflict. For the people whose inner ear dominates, the visual input conflicts and they get sick.

          • Muhammad Jihad ✓ᵛᵉʳᶦᶠᶦᵉᵈ

            Interesting, I never heard it explained that way.

    • However good HMDs get, you will still need an affordable locomotion platform to prevent sim sickness caused by the visual-vestibular mismatch. Simply being able to stand, turn and move your legs overcomes most of the problems developers are unable to fix. In the meantime VR is getting a bad name because the software solutions don’t work.
      Sorry for the plug but that’s what we do and we’ve shipped them to 30 countries and counting.

      Incredible article btw. nVidia deserve every success

      • Muhammad Jihad ✓ᵛᵉʳᶦᶠᶦᵉᵈ

        Yeah, I do wonder if that would make a difference or not. I’m trying to get my mind around the idea of if I’m moving, in place, like on a treadmill, if the motion sickness would be lessened or not. Good point in any case, and I agree on the article, it’s really great.

        • I can only give you our empirical evidence that so many people try the ROVR and say it’s the first time they haven’t felt sick in VR. It doesn’t stop dizziness if you don’t have 6DOF or the tracking is slightly off, as when emulating the W key (because it can only be on or off), but overwhelmingly it seems that most sickness is caused by your vision not matching your inner ear when turning. That might be linked to why you feel sick IRL when spinning round but not going forward??
          Anyway I would definitely say it lessens sickness a great deal.

  • Rafael

    Denoising path tracing doesn’t seem realistic to me. Isn’t ray tracing enough? 3D light maps already give a good enough effect.

  • vivid

    Hard to read this article without daydreaming what future VR will look like!
    And we are so close!

  • Mark Rejhon

    240 Hz is not the final frontier. Real life doesn’t strobe, so for “strobefree ULMB” or “blurless sample-and-hold” you need much more than 240 Hz to achieve low persistence WITHOUT strobing/pulsing.

    Blur Busters was the world’s first mainstream website to test a 480 Hz display, see http://www.blurbusters.com/480hz
    We were able to tell the difference between 240Hz and 480Hz, despite the LCD limitations in pixel transition speed. It almost looked like “strobeless ULMB” — but not quite. (we need more than 1000fps @ 1000Hz for “strobeless ULMB” or “blurless sample-and-hold”)

    Currently, the Blur Busters Law is “1ms of persistence is 1 pixel of motion blurring during 1000 pixels/second”. MPRT == persistence == strobe length. VR head-turning can exceed 8000 pixels/second for an 8K headset turning slowly. Even 1ms of persistence (which requires 1000fps @ 1000 Hz to avoid pulsing/strobing) still produces 8 pixels of motion blurring when trying to read tiny text on walls while head-turning.

    To solve the problem of GPU horsepower, future reprojection/timewarp technologies will amplify 100fps -> 1000fps — we currently call it “Frame Rate Amplification Technology” (we have a thread on this in the Display Engineering forum of Blur Busters forums)

    I had a small contract with the Oculus Kickstarter on low-persistence research (about one year before they became a real headquarters / Facebook bought them), and I’m the author of a peer-reviewed conference paper — so even despite diminishing points of returns — 240Hz is far from final frontier.

  • Mark Rejhon

    Another thought experiment that Blur Busters coined several years ago (in Blur Busters Forums) is the “Holodeck Turing Test”.

    It is a blind test where someone is given one of two identical-looking headsets (one is essentially transparent ski goggles, the other is a VR headset) and asked whether the world they are seeing is real or virtual.

    In the lines of “Wow, I didn’t know I was wearing a VR headset, instead of wearing transparent ski goggles!”

    This is a fantastic article along these lines, although 240Hz is not the final frontier if you go with strobeless ULMB (low persistence without strobing) — the real number is far closer to 10,000Hz to eliminate all diminishing points of returns. That said, NVIDIA is fully aware of this — they tested 16,000 Hz augmented reality.

  • WOW, amazing! I knew about some techniques (like foveated rendering), but not about others. Now the question becomes: this research is great, but when will all these research ideas be implemented in a commercial device? 2020?

  • Theo Noetzli

    Why not render each eye separately with SLI? That would roughly halve the rendering cost for each graphics card. Of course there’s some overhead, but the gain is still big.

    • Nick Herrick

      Early on, VR SLI in the way you describe was thought to be a way to dramatically increase performance. However, that would be “brute forcing” it, and it’s almost always better to work smarter than to work harder. Since both eyes see mostly the same thing, there really is no reason to render a right-eye image and a left-eye image separately. A better way is to render one image, plus the slight differences for each eye. Look up single pass stereo.

      Another thing to think about. How many people have a high end gaming computer? Now, of those people, how many have a VR headset? Now, of that group, how many have SLI? Since devoting time and resources costs money, how much would you be willing to devote to that tiny niche?

      The ideal solution is to increase efficiency, not power.

      • Theo Noetzli

        Nick you’re right. I didn’t see it under this light. Need to get smarter ;-)

  • Graham J ⭐️

    Almost all of the power reduction of Pascal comes from the node shrink rather than architectural improvements.

    • polysix

      You don’t know for sure if it’s “only” because it also benefits them. Don’t be so cynical. I’m sure for this type of in-depth problem solving (much of which is striving for ultra realism while reducing GPU load) it takes more than just someone with $$$ signs in their eyes. Obviously money will always motivate, they want to be at the forefront and have the best GPU/tech for VR, but there has to be passion there too, and it seems they have it. I’m sure they could find ‘easier’ ways to make money than chasing what many called impossible just a few years back.

      • Graham J ⭐️

        What I know is that companies don’t spend cash researching technologies unless they believe there will be an ROI. It’s not cynicism, it’s just how business works. They wouldn’t be researching VR if doing so wouldn’t sell chips.

        They lost out on the big console GPU contracts and PC gaming isn’t that big so it makes sense they’d invest in a technology that demands bigger GPUs. That doesn’t mean they’re not passionate about it, but it doesn’t mean that passion is the reason either.

    • lenne 0816

      People seem to wholly ignore the sudden, soon, inevitable end of node shrinks; it’s an “omg so much better” echo chamber when in fact nothing has changed at all.

  • polysix

    Excellent research and potential solutions. Thank god large companies that matter are taking this seriously from a tech point of view and not letting it stagnate.

    Nice one Nvidia.

  • Sergey Navrozhin

    This research is really on point. All the described issues must be eliminated before we get a worthy VR/AR/MR HMD. Worthy in terms of technology that might finally change our habits (instead of being just a niche solution). A monitor wasn’t always there with a computer. They were merged to provide a better interface. This interface is now outdated. It’s flat, it’s immovable and doesn’t represent the real world, which we all live in.

  • Michael Hildebrand

    Notice there’s almost no mention of power consumption. Especially at the end, when he summarizes the dream state, it looks like he leaves that out, even while mentioning “comfortable and fashionable for all day”.

    Regardless, great article! I think a good chunk of the rendering talk went over my head, wish he made a video or something to go along with the article.

  • Khalil Vennie

    Wait, does this mean widespread Raytracing could be sooner than we think? I had no idea Volta was gonna be Raytracing optimized.