The newest graphical fad in the video games world is ray tracing. The concept is phenomenally simple: for each pixel on the image plane, the path of a ray of light is traced independently, accounting for how the beam interacts with whatever objects it meets along the way. Simple as the idea is, it is very hard for most computers to achieve in real time, owing to the sheer number of calculations needed to accurately trace each ray's path.
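To make the per-pixel idea concrete, here is a toy Python sketch (illustrative only, nothing like how a production renderer is written) that fires one ray per pixel at a single hard-coded sphere and marks the pixels whose rays hit it:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest intersection, or None on a miss."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit-length, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height):
    """Trace one ray per pixel toward a single sphere; '#' marks a hit."""
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            # Map the pixel to a point on an image plane at z = -1.
            px = 2 * (x + 0.5) / width - 1
            py = 1 - 2 * (y + 0.5) / height
            length = math.sqrt(px * px + py * py + 1)
            direction = (px / length, py / length, -1 / length)
            hit = ray_sphere_hit((0, 0, 0), direction, (0, 0, -3), 1.0)
            row += "#" if hit else "."
        rows.append(row)
    return rows
```

Even this miniature version shows where the cost lives: every pixel pays for its own intersection arithmetic, and a real scene replaces the one sphere with millions of triangles.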
The end result can be worth it, with rendered images looking fantastic. So far, common wisdom has held that ray tracing needs special hardware, such as RTX cores, to be implemented properly. However, Metro Exodus rendering programmer Ben Archard told Digital Foundry in an interview that there are other viable paths to the same end result, and since they are less expensive than specialized cores, they could mean we see the feature implemented in next generation consoles.
“It doesn’t really matter – be it dedicated hardware or just enough compute power to do it in shader units, I believe it would be viable. For the current generation – yes, multiple solutions is the way to go,” he said, noting that specialized hardware isn’t the only way to achieve ray tracing.
“In terms of the viability of ray tracing on next generation consoles, the hardware doesn’t have to be specifically RTX cores. Those cores aren’t the only thing that matters when it comes to ray tracing. They are fixed function hardware that speed up the calculations specifically relating to the BVH intersection tests. Those calculations can be done in standard compute if the compute cores are numerous and fast enough (which we believe they will be on the next gen consoles). In fact, any GPU that is running DX12 will be able to “run” DXR since DXR is just an extension of DX12.
“Other things that really affect how quickly you can do ray tracing are a really fast BVH generation algorithm, which will be handled by the core APIs; and really fast memory. The nasty thing that ray tracing does, as opposed to something like say SSAO, is randomly access memory. SSAO will grab a load of texel data from a local area in texture space, and because of the way those textures are stored there is a reasonably good chance that those texels will be quite close (or adjacent) in memory. Also, the SSAO for the next pixel over will work with pretty much the same set of samples. So, you have to load far less from memory because you can cache an awful lot of data.
“Working on data that is in cache speeds things up a ridiculous amount. Unfortunately, rays don’t really have this same level of coherence. They can randomly access just about any part of the set of geometry, and the ray for the next pixel could be grabbing data from an equally random location. So as much as specialised hardware to speed up the calculations of the ray intersections is important, fast compute cores and memory which lets you get at your bounding volume data quickly is also a viable path to doing real-time RT.”
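The BVH intersection tests Archard refers to largely come down to one repeated question: does a ray overlap an axis-aligned bounding box? Here is a minimal Python sketch of the standard "slab" test (names are illustrative; in practice this runs in shaders or in fixed-function RT hardware, not on a CPU):

```python
def ray_aabb_intersects(origin, inv_dir, box_min, box_max):
    """Slab test: does a ray (origin, reciprocal direction) overlap the box?

    This is the per-node check that a BVH traversal repeats huge numbers of
    times per frame -- the calculation RTX cores accelerate, and that plain
    compute cores can also perform given enough raw throughput.
    """
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        # Distances along the ray to the two slab planes on this axis.
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far
```

Dedicated cores bake exactly this kind of arithmetic into silicon; the alternative path he describes is simply doing the same test, many times over, in ordinary shader code.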
The long and short of it is that, given powerful enough hardware in next generation consoles, ray tracing should be achievable even without specialized cores present in the machines. Of course, developers will have to make trade-offs to get there, and it remains to be seen how many of them will attempt to do so.
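Archard's memory-coherence point can be caricatured in a few lines. SSAO-style sampling touches addresses in a tight neighbourhood of the current pixel, while ray traversal can touch bounding-volume data anywhere in the scene. This toy Python sketch (hypothetical names and sizes, not real renderer code) compares the span of memory each access pattern covers:

```python
import random

def ssao_sample_addresses(pixel_index, texture_width, kernel=4):
    """SSAO-style access: texels in a small neighbourhood of the pixel,
    so consecutive addresses land close together in memory."""
    return [pixel_index + dy * texture_width + dx
            for dy in range(kernel) for dx in range(kernel)]

def ray_sample_addresses(rng, scene_size, count=16):
    """Ray-tracing-style access: each ray can touch any node of the
    scene's bounding-volume data, so addresses are effectively random."""
    return [rng.randrange(scene_size) for _ in range(count)]

def address_spread(addresses):
    """Crude locality metric: the span of memory the accesses cover."""
    return max(addresses) - min(addresses)
```

For a 1024-texel-wide texture the SSAO pattern spans only a few kilobytes, while the random pattern ranges across the whole data set; the first fits in cache, the second keeps missing it, which is exactly why fast memory matters as much as fast intersection math.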