The Big Interview: Crytek Talks About CryEngine, DX12, Cloud And Optimizing Games For New Consoles

GamingBolt speaks to Crytek’s US Engine Business Development Manager Sean Tracy to learn all about CRYENGINE and the latest game development trends.

Crytek has always been at the forefront of developing games that push the underlying hardware and set new benchmarks for video game graphics. Over the years Crytek has mastered the art of squeezing every bit of power out of consoles and PCs; whether it be the PS3 and Xbox 360 or the new consoles, the German-based company surely knows how to push boundaries.

With the hype behind the PS4 and Xbox One finally settling down, GamingBolt got in touch with Crytek’s US Engine Business Development Manager Sean Tracy to learn more about the company’s plans for CRYENGINE. We were able to ask Sean a wide range of questions, from development optimization to the potential of cloud gaming. Check out Sean’s responses below.

Ravi Sinha: CRYENGINE has seen significant change since entering its fourth generation. This change was made to create an always-evolving engine for developers to use, but what else facilitated it? Did competition from the likes of Epic and Unreal Engine 4 spur you on?

Sean Tracy: CRYENGINE did indeed undergo significant changes and improvements when entering the fourth generation. It is a very important direction for us to maintain and deliver an always evolving engine to our own internal developers as well as to our licensees. As we’ve proven over the years, few companies take graphics as seriously as Crytek. Throughout this time we’ve dedicated ourselves to evolving the industry through art, design and engineering, turning our once proprietary CRYENGINE technology into a beast capable of pumping out some of the most convincing visuals ever run through silicon.

As you can see from our own ethos we are less spurred on by any competition and more by our own needs internally. We must, and will continue to, evolve the CRYENGINE to meet the high demands of our own developers and licensees with the principal focus on enabling them to achieve AAA high fidelity gaming experiences that gamers have become accustomed to when playing any game powered by the CRYENGINE.

The current state of video game rendering results in either strong diffuse lighting at very high contrast to break uniformity, or overly glossy surfaces with noisy, sparkling highlights. We wanted to instead achieve a far more believable and realistic CG experience, similar to what you would get in film and offline rendering.

Ravi Sinha: Crytek has always maintained that CRYENGINE is meant to be scalable and designed for next gen. Now that both consoles are out, what are your thoughts on the PS4’s unified and Xbox One’s embedded memory architecture?

Sean Tracy: We are delighted with the updates to the next-gen hardware but of course always want more! The unified architecture of the APUs allows us to easily leverage massive amounts of resources for all kinds of features, including rendering, physics, animation and more. Though the PS4 and Xbox One don’t offer an enormous jump over the previous generation in terms of raw processing power, the custom AMD APUs within both platforms represent a huge leap forward in terms of integration and capability.

We see this as a thin and very fast intercommunication layer between what used to be dealt with as separate chips (CPU and GPU). Practically this yields great results when used for re-projection techniques like reflections or occlusion, as well as for the massive compute shader calculations which we use for all our deferred lighting computation.

Ravi Sinha: We recently interviewed a developer who stated that it will take no time for artists to fill up 8GB of RAM. Are we going to see a shortage of RAM budget [again] in this new console cycle?

Sean Tracy: I would have to agree with the viewpoint that 8 gigs can easily be filled up, but also keep in mind that developers don’t necessarily even have access to all 8 gigs of it. For example, the Xbox One retains some of the RAM for OS purposes. Since technology, as Ray Kurzweil states, progresses exponentially, we will soon find that the computational requirements of games quickly hit the ceiling of a few gigs of RAM.

We already had to manage our memory usage quite intensely throughout Ryse, and this will surely be one of the limiting factors in this generation. As hardware gets stronger, the complexity of scenes and the dynamism within them can be increased. However, with that said, it’s not raw power alone that will allow for photorealistic graphics, but technology that intelligently scales and utilizes all that the hardware has to offer.

Ravi Sinha: The fourth generation of CRYENGINE was beautifully realized in Ryse: Son of Rome, which Crytek developed internally. However, it seemed to be a bigger leap in terms of cinematic presentation and post-processing rather than the jaw-dropping increase in visuals we saw when going from Far Cry to Crysis (and which currently distinguishes Crysis 3 from the competition). Can this be chalked up to the team still needing time to understand the Xbox One’s architecture or just getting used to what CRYENGINE has to offer?

Sean Tracy: Instead of adding a lot of additional rendering features, which was rampant even for us in the previous generation, we opted to spend most of our efforts on the consistency of the shading and lighting models, trying to unify them under a more physically based framework.

This allowed Ryse to look very different from previous and even competing titles.

The current state of video game rendering results in either strong diffuse lighting at very high contrast to break uniformity, or overly glossy surfaces with noisy, sparkling highlights. We wanted to instead achieve a far more believable and realistic CG experience, similar to what you would get in film and offline rendering. I would say that the technology, and the results of using this type of technology, are what you are likely noticing.

Our cinematic tools are directly integrated into CRYENGINE, which really facilitates things for our cinematic team, as they can test and iterate directly in engine with all the assets they will use for the final cutscene. As you give these artists more time to work and allow them to spend less time fighting against technology, you will get that much higher quality work from them.

Ravi Sinha: What can you tell us about some of CRYENGINE’s newest features, such as the lighting system and tessellation? How do physically based shading and pixel-accurate displacement mapping change the overall look and feel of a game?

Sean Tracy: The biggest addition to CRYENGINE was the introduction of a new rendering system: PBS, otherwise known as Physically Based Shading. Our advanced model simulates the interaction between light and materials using real physical laws: the law of conservation of energy and the Fresnel equations.

A good way to explain why we would move to something like this is to consider what photoreal really means when it comes to games. Photorealism has been achievable for quite some time; however, take a photo-referenced texture, move it in the world and move lights across it, and you’ll see that if it doesn’t react as the material would in the real world, then immersion, and photorealism, is broken.

To some, the benefits of Physically Based Shading are obvious. For the rest of us less technical types it is important to understand what the transition to physically based shading means and why it’s so important.

Replicating how light behaves in the real world leads to far more natural and believable results and ensures that materials look plausible regardless of the current lighting conditions (provided those conditions are within physical bounds), ultimately resulting in greater consistency across the board.

The first law to discuss is the law of conservation of energy in practice. It should be noted that this significantly simplifies material setup for content creators.

Basic Explanation:

• A smooth surface reflects the light source sharply and clearly
• As material roughness increases, the specular highlight becomes wider and less bright

A traditional shading model would likely have two parameters:

• Size
• Intensity

However, physical laws say that these are coupled:

• On rough surfaces light is distributed over a larger area
• It must thus be less intense at any single point

Not Trivial Math:

• Results in a normalization factor that can be applied to the BRDF of the shader (see the sketch after this list)

Roughness is the main factor:

• Controls both the size and the brightness of the highlight simultaneously
• In addition to the highlight, roughness has a significant impact on reflectance

Microscopic irregularities influence several shading aspects:

• Light rays reflected to different angles cause blurrier reflections on rougher surfaces
• CRYENGINE couples surface roughness with the per-pixel normal

Conceptually closely related:

• Roughness values are stored in the normal map alpha
• Normals define surface variation at the macro scale
• Roughness defines surface variation at the micro scale
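
To make that coupling concrete, here is a minimal illustrative snippet (a generic textbook formulation, not CRYENGINE’s actual shader code): in a normalized Blinn-Phong lobe, the same exponent that widens the highlight also scales its peak down, so the reflected energy stays bounded.

    #include <cmath>

    // Illustrative only: an energy-conserving (normalized) Blinn-Phong lobe.
    // A single exponent, derived from roughness, controls both the width of the
    // highlight and, via the (n + 2) / (2 * pi) factor, how bright its peak can be.
    float NormalizedBlinnPhong(float NdotH, float exponent)
    {
        const float kPi = 3.14159265f;
        float normalization = (exponent + 2.0f) / (2.0f * kPi);
        return normalization * std::pow(NdotH, exponent);
    }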

The second portion of PBS is the Index of Refraction and the Fresnel equations:

• The Index of Refraction (IOR) defines the amount of light reflected vs. refracted
• This is typically a commonly known physical number
• CRYENGINE converts this IOR to sRGB and stores it as the specular color
• Artists then pick the appropriate specular color from a table of common values

The Fresnel equations describe the reflection and refraction of light depending on the angle of incidence. In less sophisticated renderers, Fresnel bias, scale and factor are set per material by the artists. In CRYENGINE we no longer need to do this, as all materials react correctly to Fresnel since the engine calculates this automatically, further simplifying material setup.
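
As a rough sketch of that pipeline (the function names and values below are illustrative assumptions, not CRYENGINE’s actual implementation): the IOR is first converted to a reflectance at normal incidence (F0), which then drives Schlick’s widely used Fresnel approximation at runtime.

    #include <cmath>

    // Illustrative only: derive normal-incidence reflectance (F0) from an IOR,
    // then evaluate Schlick's Fresnel approximation for a given viewing angle.
    float IorToF0(float ior)            // e.g. water, ior ~= 1.33, gives F0 ~= 0.02
    {
        float f = (ior - 1.0f) / (ior + 1.0f);
        return f * f;
    }

    float FresnelSchlick(float f0, float cosTheta)   // cosTheta = dot(V, H), clamped to [0, 1]
    {
        return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
    }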

Ravi Sinha: As stated, Ryse was praised for its overall cinematic presentation. Could you break down the integrated cinematic tools that come with the new CRYENGINE and how they helped aid Ryse in achieving a distinct look among other next gen titles?

Sean Tracy: Ryse used some internally developed improvements focused on film and visualization. Using features like a physically based camera, with real values taken from actual cameras used in film productions, is just one of the many things that helped us achieve such a high-fidelity cinematic presentation.
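
As an illustration of what “real camera values” can mean in practice (this is standard photographic exposure math, not CRYENGINE’s actual camera code): aperture, shutter time and ISO can be combined into an exposure value that then scales rendered scene luminance.

    #include <cmath>

    // Illustrative only: compute EV100 from physical camera settings and turn it
    // into a linear exposure multiplier applied to rendered luminance.
    float ComputeEV100(float aperture, float shutterTime, float iso)
    {
        // EV100 = log2(N^2 / t) - log2(ISO / 100)
        return std::log2((aperture * aperture) / shutterTime) - std::log2(iso / 100.0f);
    }

    float ExposureFromEV100(float ev100)
    {
        // A common convention: max scene luminance ~= 1.2 * 2^EV100, exposure is its inverse.
        return 1.0f / (1.2f * std::pow(2.0f, ev100));
    }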

Our cinematic tools are directly integrated into CRYENGINE, which really facilitates things for our cinematic team, as they can test and iterate directly in engine with all the assets they will use for the final cutscene. As you give these artists more time to work and allow them to spend less time fighting against technology, you will get that much higher quality work from them.

Conceptually we’ve maintained the same cinematic toolset that was used in the Crysis games. It was really just a matter of adding some further workflow-improvement features, as well as supporting some new techniques, including Geom Caching. On that note, Geom Caching is one of the more obvious improvements, as it allows artists to simulate any type of animation or effect in their DCC tool, which is then read from a point cache streamed straight from disk. These caches are excellent for cutscenes where very complex events are happening and manually adding bones or other real-time simulation requirements is just not feasible.
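
As a very loose sketch of what point-cache playback can look like (the structures and function below are hypothetical, for illustration only, and are not CRYENGINE’s Geom Cache API): baked per-frame vertex positions are streamed from disk and interpolated at the current cutscene time.

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // One baked sample of the simulation, as exported from the DCC tool.
    struct GeomCacheFrame
    {
        float time;                    // sample time in seconds
        std::vector<Vec3> positions;   // baked vertex positions at that time
    };

    // Linearly interpolate vertex positions between two streamed frames.
    void SampleGeomCache(const GeomCacheFrame& a, const GeomCacheFrame& b,
                         float time, std::vector<Vec3>& out)
    {
        float t = (time - a.time) / (b.time - a.time);
        out.resize(a.positions.size());
        for (std::size_t i = 0; i < a.positions.size(); ++i)
        {
            out[i].x = a.positions[i].x + t * (b.positions[i].x - a.positions[i].x);
            out[i].y = a.positions[i].y + t * (b.positions[i].y - a.positions[i].y);
            out[i].z = a.positions[i].z + t * (b.positions[i].z - a.positions[i].z);
        }
    }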

In terms of Xbox One and PS4 we are finding the similarities nice; however, there are still drastic differences. A challenge for us right now is actually supporting all the previous AND next-gen platforms in a single code base.

Ravi Sinha: CRYENGINE supports not only the Xbox One and PC but PS4 and Wii U as well. What are the advantages of developing for such uniform, PC-like architectures these days with regards to next gen consoles?

Sean Tracy: In terms of Xbox One and PS4 we are finding the similarities nice; however, there are still drastic differences. A challenge for us right now is actually supporting all the previous AND next-gen platforms in a single code base. This is a very difficult thing technically, as there are version differences and different requirements for each platform, and this includes the Wii U as well. With all that said, however, we are doing very well in this newest generation, as we’ve maintained PC parity with the platforms.

Ravi Sinha: What kind of differences does optimizing CRYENGINE for the PS4 offer versus optimizing on the Xbox One or Wii U?

Sean Tracy: As we technically have only a single shipped game on Xbox One and none on PS4, I can’t go too deep in answering this. Optimizing CRYENGINE for the next gen really centered around bandwidth allocation and was one of the biggest reasons we switched to a new G-buffer layout, which allowed all light computations to be done in a compute shader. This significantly improved our performance. Further improvements, like pre-calculating sun shadows on some of the large shadow cascades, made a very big reduction in draw calls and thus increased performance further.

Rashid Sayed: Furthermore, several developers have been pretty straightforward about the differences between the PS4’s and Xbox One’s GPUs. What is Crytek’s take on this?

Sean Tracy: Again it’s a bit difficult to speak in terms of a comparison between the two platforms as Ryse was never run on a PS4. We have found though that testing the CRYENGINE on PS4 yields similar results.

Ravi Sinha: When Crysis and its subsequent games first debuted, there was concern that the current generation of gaming platforms wouldn’t be able to successfully showcase it. How does it feel now being in a new generation which can cater to all of your engine needs? How will you further push the boundaries of what CRYENGINE is capable of, even on the Xbox One and PS4?

Sean Tracy: I think there is no question that Crysis was ahead of the curve; people even today still use it to benchmark their PCs. However, let it be noted that, against most people’s expectations, we were able to successfully ship the Crysis 1 conversion for the last generation of consoles, meaning that we weren’t far from being able to do this. At Crytek we love to push the boundaries of what is possible, we have a saying internally, “disrespect the impossible”, and having Crysis 1 run on an Xbox 360 was surely something most people thought would be impossible.

The keys for the next gen lie in the fidelity and believability of characters and their animation, the environment and its destructibility and finally physically based rendering and shading.

We also expect to be pioneers in the compute space, as truly leveraging the full power of the next-generation platforms will require the mastery of GPGPU (General Purpose computing on GPUs). Luckily we’ve been preparing for this and have been “next-generation” for quite some time, and we are well on our way.

In Ryse our graphics engineers created a system called tiled shading to take advantage of the Xbox One. This splits the screen into tiles and generates a list of all the lights affecting each tile using a compute shader. It then culls lights by the min/max extents of the tile. We then loop over the light list for each tile and apply shading.

Ravi Sinha: The debate is still going on regarding which platform is more suited for outputting to 1080p/60 FPS. How long do you think it will be before more developers can properly exploit the power of next gen consoles, especially with regards to CRYENGINE?

Sean Tracy: CRYENGINE has classically been designed with a 30 FPS target. Keep in mind that running at 60 FPS means you have to fit your whole game, renderer and all, into only 16ms, versus 33ms for 30 FPS!
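
(For reference, the arithmetic behind those budgets is simply 1000 ms divided by the target frame rate: 1000 / 30 ≈ 33.3 ms per frame, while 1000 / 60 ≈ 16.7 ms.)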

Just to give you an idea of what this means in practice: the renderer for Crysis 3 alone required a bit more than 16ms, and that doesn’t include everything else that’s happening in the game.

You can surely achieve 60 FPS with CRYENGINE, but I can’t say with certainty that you could retain the amount of scene complexity that games like Ryse and Crysis currently have. As we fight to squeeze every millisecond out of the next-gen consoles, we usually don’t see huge value in sacrificing 50% of our computational time per frame, at least for the types of high-fidelity realistic scenes that we include in our games. We leverage every bit of computational time we can get so that we can deliver this directly to the gamer through unrivaled visuals.

Ravi Sinha: With the engine ever-evolving, is there any concern that some features might take longer to properly implement on next-gen hardware?

Sean Tracy: A great observation! This is a very real challenge that we face as an engine. We have a mandate to maintain parity between PC and our console versions. This means that though it might be relatively easy to accomplish something on PC we must find a more sophisticated solution for the console or risk not using the feature.

Thankfully, a huge amount of research in this generation is happening, and continues to happen, in compute shaders, which the platforms handle quite well. So we are in good shape as long as we keep in mind that all the features we develop must be identical, or at least have equivalents, on the console platforms.

Rashid Sayed: What are your thoughts on the Xbox One’s eSRAM and the current bottleneck it is posing to developers? Do you think this bottleneck will be patched in the future or will developers still need to produce workarounds?

Sean Tracy: I can’t speak to whether this will be patched or improved in the future, but as a developer I wouldn’t expect it. Game developers are masters at engineering workarounds to hardware limitations, and I think you’ll see unique and novel technology developed just for such a purpose.

Rashid Sayed: Having said that, the Xbox One’s eSRAM is definitely suited for tiled textures, which we know have potential. What kind of advantages does CRYENGINE bring when used with tiled textures?

Sean Tracy: CRYENGINE has a unique and novel solution for this, which shipped with Ryse. One of the problems when using deferred shading is that it’s very heavy on bandwidth usage and memory traffic. This gets exponentially worse as overlapping lights cause considerable amounts of redundant read and write operations.

In Ryse our graphics engineers created a system called tiled shading to take advantage of the Xbox One. This splits the screen into tiles and generates a list of all the lights affecting each tile using a compute shader. It then culls lights by the min/max extents of the tile. We then loop over the light list for each tile and apply shading.

In practice this made for the biggest bandwidth saving we could have hoped for, as the G-buffer is read just once and the shading results are written once at the end for each pixel. Only a single compute shader was used in Ryse for light culling and for executing the entire lighting and shading pipeline (with some small exceptions for complex surfaces like skin and hair).
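
To make the tile-culling step concrete, here is a much-simplified, serial CPU-side sketch of the same idea (the structures, names and overlap test are illustrative assumptions; the real implementation is a GPU compute shader working per screen tile and is not reproduced here):

    #include <vector>

    struct Light    { float x, y, radius; };             // screen-space light proxy
    struct TileRect { float minX, minY, maxX, maxY; };   // one screen tile

    // Conservative 2D overlap test between a tile and a light's area of influence.
    bool Affects(const TileRect& tile, const Light& l)
    {
        float cx = l.x < tile.minX ? tile.minX : (l.x > tile.maxX ? tile.maxX : l.x);
        float cy = l.y < tile.minY ? tile.minY : (l.y > tile.maxY ? tile.maxY : l.y);
        float dx = l.x - cx, dy = l.y - cy;
        return dx * dx + dy * dy <= l.radius * l.radius;
    }

    // Build the per-tile light list; shading then loops only over this list,
    // reading the G-buffer once and writing the lit result once per pixel.
    std::vector<int> CullLightsForTile(const TileRect& tile, const std::vector<Light>& lights)
    {
        std::vector<int> visible;
        for (int i = 0; i < static_cast<int>(lights.size()); ++i)
            if (Affects(tile, lights[i]))
                visible.push_back(i);
        return visible;
    }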

Microsoft’s announcement of DirectX12 is very interesting to us as we are always pushing the boundaries of what is possible on current and even next gen hardware.

Rashid Sayed: In my opinion, one of the high points of Crysis 3 was the AI. How are you taking it a step above in the latest iteration of CryEngine?

Sean Tracy: The AI in Crysis 3 was indeed quite good, so we haven’t massively updated the AI system but rather cleaned and re-factored the current system to make the creation of behaviors and navigation easier for designers. The underlying system continues to be very suitable. A recent and somewhat interesting improvement for CRYENGINE overall is support for that same Crysis 3 AI in a multiplayer environment, allowing for cooperative play opportunities when making games with CRYENGINE.

Rashid Sayed: Xbox One has an onboard audio processor. How does CRYENGINE utilize that? Furthermore, how does CRYENGINE tackle a situation where there is no audio processor, for example on the PlayStation 4?

Sean Tracy: The XMA and now the newer xWMA formats are proprietary Xbox codecs for compressed audio. The Xbox One basically has audio processing power set aside so that decoding doesn’t use the CPU; any other platform, such as the PS4 or Wii U, uses software decoding, depending wholly on the CPU. CRYENGINE technically doesn’t decode the audio itself, as we rely on middleware packages such as FMOD or Wwise for this process, which are fairly standard across the industry.

Rashid Sayed: Microsoft recently announced DirectX12 and its strong push towards cloud gaming. What are your thoughts on this and what kind of benefits will this give to CryEngine, should you decide to use them?

Sean Tracy: Microsoft’s announcement of DirectX12 is very interesting to us, as we are always pushing the boundaries of what is possible on current and even next-gen hardware. With cloud gaming and DirectX12 it is still very early days, and any of the benefits for developers would be pure conjecture at this point. However, it should be noted that there will clearly be a benefit to players, who will be able to access powerful machines and hardware through the cloud and experience high-end PC gaming on much less expensive setups, or perhaps even on mobiles and tablets.

This is a real force multiplier for developers. With DirectX12, any additional control given to developers, versus relying on manufacturer drivers, is in our opinion a good thing, so we hope that it brings with it some benefits for leveraging the power of the hardware in new and unique ways.

Rashid Sayed: What is the next stage of improvements/features that the latest iteration of CRYENGINE will offer?

Sean Tracy: I’ve touched on quite a few of the features we continue to elaborate on, and we will continue to push the boundaries on the compute side of things. Additionally, one of our main directions is getting CRYENGINE to run on many different platforms.

We’ve got some tricks still up our sleeves that we will reveal in the months and years ahead, and I expect that gamers will be delighted with the advancements we are dedicated to bringing to the industry and to real-time rendering and gaming in general.


  • JerkDaNERD7

    “Now that both the consoles are out, what are your thoughts on the PS4’s “unified” and Xbox One’s embedded memory architecture?”

    That’s an odd question, because both have unified memory…it’s an APU. Matter of fact, the XOne has a faster unified memory bus than the PS4.

    • MrSec84 .

      That’s not how it works, just because it’s an APU that doesn’t automatically make the memory set-up a unified one.

      Xbox One’s eSRAM & DDR3 are separate pools of memory, with separate buses.
      PS4 has one pool of RAM, one bus, which is why it’s a unified bus.

      Xbox One’s memory isn’t faster, 8GBs at 176GB/s is vastly superior to 8GBs at 68GB/s with 32MBs at 192GB/s, because of a little thing called flexibility.
      Faster access to the large pool of storage allows for way more freedom to developers.

    • xrobibn

      But you must know that the memory doesn’t move 176GB/s or 68GB/s 100% of the time; it has latency. So for CPU or GPGPU purposes, DDR3 is a lot faster than GDDR5, because this computational work needs less memory but quicker access.

      And maybe even in graphics; look at the texture problems in Trials Fusion on PS4.

    • Georges

      Trials Fusion texture streaming, pop-in on both consoles
      Eurogamer:
      The console releases also struggle with asset draw speed. Much like its predecessor, Trials Fusion uses a virtual texturing system, similar to id Software’s Rage, which involves wrapping the world’s geometry in one single giant texture and then streaming in segments as you drive to the right. This setup is ideal for Trials, where the user’s viewpoint is usually predictable, and the results are stunning for an early digital release on uncharted next-gen hardware. Texture quality and shadowing are largely identical across PC and next-gen, and from rocky mountain descents to fluid neon-tinged loops, the visual variety in Fusion is a welcome continuation of the trend started by 2009’s Trials Evolution. However, we do notice PS4 and Xbox One struggling to stream segments of the world fast enough from their hard disks, resulting in some pop-in.

      These streaming issues are especially glaring on PS4 during the Turbine Terror stage, where the blurred texture beneath our tyres fails to update until we’re far past it. Pop-in also flares up aggressively when restarting from various checkpoints on Xbox One, and here even shadow maps can be found snapping into view. In the reference PC version, however, pop-in goes almost unnoticed if you’re playing on a machine with an SSD to stream from.

      http://www.eurogamer.net/articles/digitalfoundry-2014-trials-fusion-face-off

      Also don’t forget YouTube compression, even more so for PS4 and its higher resolution.

    • ps4lol

      Trials Fusion texture streaming times improve with installing an SSD into the PS4.

    • ME3X12

      It’s blatantly obvious the X1 version has better looking textures and PS4 has more washed out looking textures. This has been the case in a lot of games where the X1 has better looking textures. So what good is 1080p when the game has more washed out looking textures? I’ll take the X1 better textures all day everyday.

      http://www.youtube.com/watch?v=FcklldryRhw

    • ps4lol

      Digital Foundry proves that PS4 multiplat games consistently run at higher res and/or framerate.

      Some use cherry picked screenshots from where a streamed texture was 0.01 seconds from fully loading and try to use it as false proof PS4 has worse textures. This is wrong and deceptive.

      And don’t even start with the delusional “sharp popping textures” from the crushed blacks and hideous upscaling sharpness/contrast filter that they ended up removing.

      Xbox One AAA multiplats (Watch Dogs, Witcher 3, CoD: Advanced Warfare) will run 720-900p for the lifetime of the system.

      PS4 could run Ryse, Forza, Dead Rising 3, or any Xbox exclusive at higher res/framerate/effects, as it has more powerful hardware.

      Factual PS4 Hardware Advantages: +6 CUs, +560 GFlops (44% greater), +16 ROPs, +6 ACEs/CQs, better GPGPU support, better performing CPU, and faster unified memory. PS4 OS may also have less overhead or reserves.

      Both Sony and MS have world class coders that will extract every bit of performance out of their consoles with their drivers/APIs/SDKs. The difference is PS4 simply has more powerful hardware to work with, so it will always stay ahead in graphics performance.

      Every console has a power budget that can be put towards resolution, framerate, or visual effects. PS4 has a higher total budget than Xbox. Therefore Forza 5 would run better on PS4, and Driveclub would run worse on Xbox.

      PS4 will have better performing games for the entire generation as it has more powerful hardware, you can’t overcome a hardware gap with better drivers/SDKs. Any game running on Xbox One can be run with better framerate/resolution/visual effects on PS4.

    • Lujo

      You’re the MisterX follower, dont ya?

      Nice try, dude. Xbone version of Trials has sharper texture @900p. WTF!!!!???

    • ps4lol

      Yes misterxmedia’s cultists are well known to invade this blog.

    • MrSec84 .

      I gave theoretical max GB/s for both consoles, so it’s a fair comparison.
      Both DDR3 & GDDR5 have comparable latency, from 10 to 12ns delay; vast latency differences are a fanboy invention.
      Highly parallel GPGPU tasks are actually more suited to GDDR5 because of its high bandwidth.
      As latency is a myth, what you’ve stated is a fallacy; PS4’s CPU actually handles single-core processing tasks faster, as was proven by a recent texture processing benchmark run on Xbox One, PS4 & also a specific PC.

      Actually Xbox One was the system to have worse texture pop-in problems, across entire tracks, PS4 was more localized to specific sections of certain tracks.
      This was confirmed on Eurogamer.

    • ps4lol

      Wrong

      http://gamingbolt.com/project-cars-uses-xbox-one-esram-for-deferred-render-targets-careful-use-mitigates-ps4s-unified-memory-advantage

      “Our engine uses a light pre-pass style rendering approach and after experimenting with a number of different variations we found it was more efficient to use eSRAM to hold the deferred render targets. Careful use of eSRAM like this for the various render stages mitigates some of the advantage that PS4 has with its faster unified GDDR5 memory,” he said to GamingBolt.

    • JerkDaNERD7

      Memory bus, keyword “bus”. ALL AMD APUs has a “heterogeneous system architecture” as is any SoC designed chip. The difference in both consoles own design is that XOne has a faster 30 gb/s bus for coherent memory than PS4’s 20 gb/s. But PS4 is a straight forward design with a faster RAM.

      Andy Tudor in that article was only stating that certain uses of the eSRAM would mitigate PS4 advantage in comparison to the conventional method of rendering because of different memory. So when it comes down to efficient bandwidth allocation, Xbox One is better designed hence the word balanced.

    • MrSec84 .

      False, only a few AMD APUs are HSA, because memory coherency is a must, you can’t have memory coherency when a processor has to access data from different areas of the system, they will rarely be useful at the same time, which is a key factor in HSA.

      If the key word is “bus”, then it doesn’t fit what Xbox One has, because it has “buses”.

      PS4’s design is far more efficient, because of the memory coherency, the fact that GDDR5 can be both written to & Read from within a single cycle, the overall size advantage of the faster memory pool, it all makes for a more efficient design.

      The Internal CPU to GPU buses in both consoles have absolutely nothing to do with the speed of both system’s memory pools.

      It’s a fallacy that 32MBs of eSRAM would mitigate the difference between Xbox One’s DDR3 & PS4’s GDDR5, because of the added flexibility such a large pool of fast GDDR5 brings.
      It’s the fact that PS4 can both store large files & swap out either small bits or whole chunks at once which means in real world use PS4’s memory is likely at least 2X faster than what Xbox One has.
      PS4 having all of it’s bandwidth in one place, actually means it’s the more balanced design as far as memory set-up goes.

    • JerkDaNERD7

      What?! All APUs are HSA. You’re confusing memory pool and whole system architectures, lol! HSA is NOT about programming memory bandwidth but programming heterogeneous parallel devices…which all APUs are LOL!

      Yes, XOne like PS4 has many buses…duh, lol! XOne has “buses” from the GPU going both to the CPU and eSRAM. Memory-“cache”-coherency is what I’m stating here with a memory bus coherency of 30 gb/s which IS the important factor here. This is what the big deal was with hUMA until that rumor was shot down because both consoles are based off the Jaguar architecture. Both naturally has their implementation. XOne being the sole emphasis use of the technology. Memory pool is a single large location of memory space. Memory coherency is the “real-time” access of cache memory for allocated use and it’s faster on XOne than PS4. Only thing that will truly benefit this is tiling technology which I’m sure you know of.

      The problem with XOne is not increased power overtime but waiting for it’s true implementation. Both consoles will improve overtime but not in power but developer workaround and techniques. So it’s fallacy to take your word over a developer of a graphically demanding game (Andy Tudor)

    • ps4lol

      Factual PS4 Hardware Advantages: +6 CUs, +560 GFlops (44% greater), +16 ROPs, +6 ACEs/CQs, better GPGPU support, better performing CPU, and faster unified memory. PS4 OS may also have less overhead or reserves.

      Both Sony and MS have world class coders that will extract every bit of performance out of their consoles with their drivers/APIs/SDKs. The difference is PS4 simply has more powerful hardware to work with, so it will always stay ahead in graphics performance.

      Every console has a power budget that can be put towards resolution, framerate, or visual effects. PS4 has a higher total budget than Xbox. Therefore Forza 5 would run better on PS4, and Driveclub would run worse on Xbox.

      PS4 will have better performing games for the entire generation as it has more powerful hardware, you can’t overcome a hardware gap with better drivers/SDKs. Any game running on Xbox One can be run with better framerate/resolution/visual effects on PS4.

      Even if Xbox had a far more powerful CPU and 10000GB of 10000 GB/s memory, it’s ability to render graphics is STILL limited by the weaker GPU. There’s no getting around the weaker GPU, there’s no free lunch.

      DDR3+ESRAM is still a size and bandwidth bottleneck and difficult to code for. The DMA registers help transfer data between DDR3 and ESRAM, they aren’t super special sauce.

      If they’re both running at the same resolution the Xbox version will have lower framerate and/or visual effects, or the PS4 hardware isn’t being pushed.

      Xbox has memory size and bandwidth bottlenecks, weak GPU and GPGPU, only 16 ROPs, and OS virtualization overhead that all degrade gaming performance. Take your pick.

      PS4’s large GPGPU advantage will widen as devs take advantage of it. It’s not just 2 to 8 ACEs (asynchronous compute engines), but the volatile bit, unified memory, and added bandwidth paths from CPU direct to GPU. Not only does PS4 have more CUs to do compute on, but it can do compute work more efficiently with less impact on rendering.

      Examples of GPGPU include Resogun’s voxels, Infamous’ particle system, The Order’s soft body, cloth, and object destruction physics, and MGS’s simulated weather. To port those to Xbox devs will need to reserve already limited CUs for compute or remove those features entirely (as in MGS). And last I checked, PS4’s CPU still performs better than Xboxes, for whatever reason.

      Exclusively 1080p 60 FPS games on PS4: MGS V, CoD Ghosts, FFXIV, Tomb Raider, MLB The Show 14, Resogun, Trials Fusion

      Digital Foundry proves that PS4 multiplat games consistently run at higher res and/or framerate.

      Some use cherry picked screenshots from where a streamed texture was 0.01 seconds from fully loading and try to use it as false proof PS4 has worse textures. This is wrong and deceptive.

      And don’t even start with the delusional “sharp popping textures” from the crushed blacks and hideous upscaling sharpness/contrast filter that they ended up removing.

      Xbox One AAA multiplats (Watch Dogs, Witcher 3, CoD: Advanced Warfare) will run 720-900p for the lifetime of the system.

      PS4 could run Ryse, Forza, Dead Rising 3, or any Xbox exclusive at higher res/framerate/effects, as it has more powerful hardware.

      PS4 has Kaveri-like APU/GPGPU/unified memory architecture devs can take advantage of to boost performance in the future. It’s not just “a standard PC in a box”.

    • MrSec84 .

      You miss the point entirely.
      Nice deflection away from the actual point I raised.

      The system having multiple buses is not the issue, in order for a device to be HSA compliant the CPU & GPU both need access to the same memory space, all memory space.
      Because XBox One’s CPU cannot directly read from or write to eSRAM this means the system is not memory coherent.
      Cache is only part of the story, it’s only a part of the data image, main memory is the full picture & why CPU & GPU having access to the same memory space is so important to HSA implementation.

      PS4’s CPU & GPU both having direct access to the one pool of RAM for everything actually means it is HSA compliant.
      The ability to both read & write within the same memory cycle takes this one step further.
      In other words PS4’s CPU & GPU can both see the full picture, at once & either can change that full picture as needed, this is the most efficient set-up & one of the reasons why PS4 will perform far better as time goes on.
      PS4’s design is built to take full advantage of the hardware it has, Xbox One’s isn’t.

      AMD said it themselves what makes a HSA device, the details are here:

      http://www.overclock.net/a/amd-fusion-and-the-hsa-revolution-what-is-hsa-and-why-it-will-be-revolutionary

      I’ll believe AMD themselves before I believe anyone else, since they invented it and they’re the ones designing the architecture.
      PS4 fits the diagrams for memory set-up, Xbox One doesn’t.

    • JerkDaNERD7

      All of that was just rumor. It was a slip up that was taken as fact and exaggerated on sites like these and N4G.

    • Lujo

      It is not a rumor, it was said straight from the horse’s mouth! Why did AMD say that they will not give any comment about Xbone vs. PS4 after that statement from an AMD employee? Go to AMD’s webpage and search for the hUMA architecture design; PS4 is built just like that.

    • JerkDaNERD7

      Dude, exactly what was stated. It was a mistake by a suit of the company NOT an AMD engineer. The console has ALREADY been checked through and developed for, THERE IS NO hUMA, because hUMA is ONLY compatible with the Kaveri GCN. Geez, this is old news…derp!

  • Team ICE

    Kudos to the Crytek team on X1’s audio processor advantage:
    “The Xbox One basically has audio processing power set aside so that decoding doesn’t use the CPU; any other platform, such as the PS4 or Wii U, uses software decoding, depending wholly on the CPU.”

    A year or two will be amazin’

    • ineedgames

      Don’t want all of that useless sound getting in the way of them awesome graphics.

    • ME3X12

      Whatever form of audio that’s being used in the PS4 it’s already known X1 has the better more sophisticated audio chip.

    • ps4lol

      LOL they disabled the awful upscaling filter already. Do you even own an Xbox? LOL

      Digital Foundry proves that PS4 multiplat games consistently run at higher res and/or framerate with the same texture data.

    • ME3X12

      They did not disable it, they just patched it; it still upscales. The way it was was a design choice by MS; they liked the super deep blacks and super texture detail it gave. I for one liked it also, but many seemed not to like it, so they patched it, and it has now softened the blacks, but it still looks good and still upscales everything to 1080p. You don’t know squat, obviously.

    • Lujo

      PS4 HAS AMD’S TRUEAUDIO DSP – Confirmed! Sorry, Xbone fans! Xbone doesn’t have better audio chip.

      http://www.neogaf.com/forum/showthread.php?t=715035

    • Cinnamon267

      They look “sharper” because there used to be a sharpening filter. It looked ugly so they removed it. You can’t be “sharper” when compared to native.

  • jent

    The line of questions clearly shows a bias. I am glad that the interviewee did a good job to deflect and redirect that bias.

  • incendy

    Good to hear that CryEngine will be taking better advantage of Parallel Computing in the future. Definitely the area that allows for the most performance gains in both next gen consoles. As for the engine being optimized for 30fps, I don’t like it. Basically saying our engine is sacrificing gameplay for graphics. I get that it is who they are, but it would definitely make me want to use another engine that prioritizes gameplay.

  • ME3X12

    Oh juicy, so much butthurt in here already. So what we take from this, folks, is: once again the X1 is clearly the way more forward-thinking system. The X1 has the faster, higher-clocked CPU and it has a dedicated audio chip that will take further stress off the CPU. Also, again, tiled resources are the future and already in a launch game; RYSE is the best looking game to date and it’s using a form of tiled texture shaders, and it’s already known the eSRAM has a huge advantage with this.

    Also, cloud gaming is real, it works and it’s coming, and DX12 is real, it’s awesome and it’s going to bring a massive performance boost to the CPU and allow the GPU to run far more efficiently in the X1. Good times are coming for the X1, and the X1 already has the best looking game to date in RYSE.

    Oh and by the way bu bu bu 1080p guess what bu bu bu RYSE 900p >>> and most other games have better looking textures on the X1 Thief, Trials Fusion, and BF4 prime examples

    • Georges

      Have you ever heard about youtube compression? XD

  • ME3X12

    In a few years we will all see that the X1 is the better designed more forward thinking system that will show it’s balance of forward thinking power. Right now PS4 has a brute force advantage based on doing things with older or current dev tools and methods. In a few years with next gen dev tools and DX12 the X1 is going to be a BEAST!!!

    • Georges

      That made me smile ;) ‘better designed more forward thinking system’ ;)

      http://www.eurogamer.net/articles/digitalfoundry-2014-why-xbox-ones-media-strategy-failed

  • qiplayer

    The whole Crysis 2 and Crysis Wars community hates Crytek because you are going to close the multiplayer. Many players won’t ever buy your products again. Be more responsible!!
    You never came to answer the community on the forums; lately Cevat Yerli said that they sold 1,500,000 copies on Humble Bundle and now you are going to close the game!!!!
    Shame on Crytek; you deserve to have gamers pirate your next products. We have no other choice and you don’t even answer, neither on your forums nor by petitions nor via email nor via phone.

  • GHz

    Great interview! AND…

    Ravi Sinha asked, “With the engine ever-evolving, is there any concern that some features might take longer to properly implement on next-gen hardware?”

    That’s the best question in the entire interview!!!

    I’ve pointed out this fact before and mostly got attacked for it…But anyways…

    Part of the answer to that question was, “This means that though it might be relatively easy to accomplish something on PC we must find a more sophisticated solution for the console or risk not using the feature.”

    Remember the Epic demo on PS4, and how, despite the GREAT job the PS4 did mimicking that non-interactive demo, they had to revert to the pre-baked lighting used in Unreal Engine 3 because the PS4 couldn’t do real-time GI within the demo? (Not saying that the PS4 can’t do GI, but in this case, at the time, it couldn’t.)

    Meanwhile Lionhead found “a more sophisticated solution” to implement GI in a full-fledged game for XB1.

    Lionhead’s solution was since then integrated into Unreal Engine 4.

    Here’s a quote on how Lionhead not only implemented a next-gen feature (GI), but improved upon it and made it even better to suit their needs.

    “Epic has previously researched a real-time GI method called SVOGI with promising results, but it was never quite fast enough. Of all the papers on the subject, Crytek’s Light Propagation Volumes seemed like the best solution for our game. It was fast, scalable and could also support reflections and secondary occlusion. However, this method was a DirectX 9 implementation, and we felt we could do even better with modern DirectX 11-class hardware and the power of compute shaders.

    We developed an implementation of Light Propagation Volumes, specifically targeted at modern GPUs (such as the one in Xbox One). Our method supports 2nd-Order Spherical Harmonics, which gives better directionality than the original method.”
    http://www.lionhead.com/blog/2014/april/17/dynamic-global-illumination-in-fable-legends/

    Not all devs are created equal. Some are just simply smarter than others.

    This is a great interview and it revealed plenty.
