PS4 And Xbox One Feature Modern CPUs; Having Faster Memory Isn’t Useful If The Processor Stalls

“Both machines feature very modern processor designs which include features such as out of order execution,” says David Miles, CTO of BabelFlux.

Posted on 2nd August, 2014 under News



GamingBolt got in touch with David Miles, the CTO of BabelFlux, the company behind NavPower, a path-finding middleware that has been used extensively in several AAA games. We asked David how exactly the middleware takes advantage of specific features of the new consoles, i.e. the unified GDDR5 memory of the PS4 and the Xbox One’s fast eSRAM, which has a theoretical bandwidth of 204 GB/s.

“Even though the memory bandwidth on both the PS4 and the Xbox One is fabulous, the critical thing for us is that both machines feature very modern processor designs which include features such as out of order execution,” David said to GamingBolt.

“That’s critical for anyone writing path-finding or AI systems, because it’s rare to be able to execute any significant portion of code without having to make a decision based on the data you’re traversing, and conditionally go off to execute a different piece of code. So having the fastest memory in the world isn’t useful if the processor stalls whenever you have to branch to another part of the code.”
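
To make the point concrete, here is a loose, hypothetical sketch of the kind of data-dependent branching a path-finding inner loop typically contains (this is not NavPower code, and the names are invented). Almost every line depends on data that was loaded a moment earlier, so a core that handles branches and out-of-order execution well spends far less time stalled:

    #include <cstdint>
    #include <vector>

    // Hypothetical grid-search step (invented names, not NavPower code).
    // Nearly every branch depends on data loaded a moment earlier, which is
    // why branch handling and out-of-order execution matter for path-finding.
    struct Node { float gScore; uint8_t flags; };      // bit 0 = blocked, bit 1 = closed

    void expandNeighbors(std::vector<Node>& nodes, int width, int current,
                         float currentG, std::vector<int>& frontier)
    {
        const int offsets[4] = { -1, +1, -width, +width };
        for (int d = 0; d < 4; ++d) {
            const int n = current + offsets[d];        // neighbour index (bounds checks elided)
            Node& node = nodes[n];                     // data-dependent load
            if (node.flags & 0x1) continue;            // branch: blocked cell, skip it
            if (node.flags & 0x2) continue;            // branch: already closed
            const float tentative = currentG + 1.0f;   // uniform edge cost
            if (tentative >= node.gScore) continue;    // branch: no improvement found
            node.gScore = tentative;                   // conditional write-back
            frontier.push_back(n);                     // conditionally grow the frontier
        }
    }

Which way each of those branches goes depends entirely on the map data being traversed, which is exactly the pattern Miles is describing.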

Let us know your thoughts in the comments section below and stay tuned for more news and updates.



  • d0x360

    I genuinely can’t wait for the day when developers really get a handle on using not only multiple cores but also OoOE. Sure, developers make use of extra cores these days, but we are really only starting to see benefits now despite having access to the technology for years. It’s a really complex architecture. You have to be careful making sure threads are all doing the right thing at the right time. We still don’t have engines properly splitting tasks efficiently (see the rough sketch below), but the PS3 and Cell actually taught developers a significant lesson on out of order execution.

    We really need a whole new language to take full advantage, but that’s years off. I hope someday we also find a way to move away from simple yes/no binary. It might sound odd, but having yes/no/maybe is the most likely path to true AI. 0, 1, 2. Or -1, 0, 1.
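
    As a rough, generic illustration of the kind of task splitting mentioned above (a sketch only, with invented names, not tied to any real engine), independent work items such as path requests can be fanned out across hardware threads like this:

        #include <algorithm>
        #include <atomic>
        #include <thread>
        #include <vector>

        // Sketch: fan independent work items (e.g. path requests) out across
        // hardware threads. Each worker pulls the next index from a shared
        // atomic counter, so no two threads ever process the same item.
        void processRequest(int /*index*/) { /* hypothetical per-item work */ }

        void runInParallel(int itemCount)
        {
            std::atomic<int> next{0};
            const unsigned threadCount = std::max(1u, std::thread::hardware_concurrency());
            std::vector<std::thread> workers;

            for (unsigned t = 0; t < threadCount; ++t) {
                workers.emplace_back([&] {
                    for (int i = next.fetch_add(1); i < itemCount; i = next.fetch_add(1))
                        processRequest(i);     // items are independent, so this is safe
                });
            }
            for (auto& w : workers)
                w.join();                      // wait for every worker to finish
        }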

    • hiawa23

      I am curious how the PS3 and the Cell taught devs anything. The only games that utilized it were the exclusives, which were much fewer than the total games released last gen. It seems the only devs who utilized it were the Sony teams.

    • d0x360

      Lessons learned transfer between studios, both first and 3rd party. If a first party dev does something really groundbreaking they roll it into the SDK, which enables 3rd parties to see what was done and build on it.

    • MrSec84 .

      FYI the Cell processor was an in-order chip design, so developers wouldn’t have learned anything about out of order execution from programming on it.

      It’s really this console generation that developers will be getting used to using out of order CPUs, since that’s what consoles have now.
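
      A small, generic illustration of that difference (a sketch, not console-specific code): in the loop below, an out-of-order core can keep running the independent checksum work while a cache-missing table load is still outstanding, whereas a strictly in-order core stalls at the first instruction that needs the loaded value.

          #include <cstdint>
          #include <vector>

          // Generic sketch: the table lookup may miss in cache. An out-of-order
          // core can keep running the independent checksum work while the miss is
          // outstanding; an in-order core stalls as soon as 'v' is needed.
          uint64_t sumWithChecksum(const std::vector<uint32_t>& table,
                                   const std::vector<uint32_t>& indices,
                                   uint32_t& checksum)
          {
              uint64_t sum = 0;
              for (uint32_t idx : indices) {
                  uint32_t v = table[idx];        // possibly cache-missing, data-dependent load
                  sum += v;                       // depends on the loaded value
                  checksum ^= idx * 2654435761u;  // independent work OoO hardware can overlap
              }
              return sum;
          }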

    • d0x360

      Yes it was, the PPC component, but the SPE design was a little different from how a normal CPU functions, so they learned a lot about multi threading, as well as executing instructions out of order and how to make them play nice.

    • MrSec84 .

      Technically multithreading has been around since the 1950s, and simultaneous multithreading has been in CPUs since 2002, so it’s not even like PS3 or 360 were the first platforms to use the technology.

      Multithreading and out of order execution are two different things entirely, but no, PS3 wasn’t an OoOE design; it was most definitely an in-order design, as was the 360.

      Now you could argue that PS3 & 360 having multithreading in their architecture meant it forced software developers to take advantage of the more parallel nature of the designs to get the most out of both systems, so that helped to evolve coding within the gaming industry. It wasn’t exactly some paradigm shift though, just a more commonplace feature of the hardware in both systems.

    • d0x360

      I know, and that’s what I’m saying. By having the game consoles go multicore it basically forced developers into using said cores. They had been around on PC for quite a while but developers pretty much ignored them.
      Also, further, by Sony going the totally custom and outside-the-box route with the design of Cell, whose PPC core was in order, the use of the SPEs worked (in terms of how you used them) like out of order.

      I know that saying that isn’t technically correct, but there isn’t really a better way to describe how they contribute in relation to the article at hand.

      I’m kind of sad Cell didn’t take off. The original vision was to embed a Cell Broadband Engine in basically everything, and via networking they could all contribute to computational tasks. So, for example, your home PC might need a boost in CPU resources, so it would wake up your toaster or fridge, or both… or more. Cell was fantastic at crunching numbers.

      It was such a fantastic idea, hopefully it gets more development and we see something like it in the future.

    • d0x360

      BTW thanks for jumping in and adding to the conversation. All good info and hopefully it inspires people to look into this stuff and learn about hardware

    • Anon

      I don’t know. I generally consider it the catalyst for the paradigm shift I’ve seen, but it’s perhaps more debatable than I’d have thought.

      I would say it’s fundamentally altered the way we design both low- and even high-level systems, especially with a look towards a more stream-processing-style approach to parallelism. Concerns over shared data access, memory re-ordering, atomicity, parallel splitting and merging, cache utilization, etc., are all the more important now in the realm of high-performance, general purpose hardware multithreading than in the single-core, software-multithreading approaches of days gone by. The code complexity ramps up, yes, but on the other side you weigh up just how much you stand to lose through heavy, serial code paths, and it just becomes a necessity to go wide somehow.

      The most trivial of stalls has the potential to cost so much in throughput now too, as the executing code can stall more than just its own core. A top-down rethink really was needed to stay thoughtful of the future. The SPEs did a great job of really making devs notice this and think about just how beneficial (and necessary) fine-grained, stream-processed tasks are/will be. It took time to really get on top of, and really nailed home the benefits of paradigms like ‘data-oriented design’ in this context (the layout sketch below gives a flavour of it). Well, that along with the processor-memory performance gap that shows no signs of breaking pattern.

      As far as going from ‘in-order execution’ to ‘out-of-order execution’ goes, there’s not much to get to grips with. Some may have even let off a sigh of relief when they found out the consoles were going with OoOE. Going in the reverse can be punishing for unsuspecting programmers though.
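
      For anyone curious what ‘data-oriented design’ looks like in practice, here is a minimal, generic sketch (an invented example, not from any particular engine). The structure-of-arrays layout keeps the fields a hot loop actually touches packed together, so far more useful data arrives with every cache line:

          #include <cstddef>
          #include <vector>

          // Array-of-structures: updating positions drags cold AI state through
          // the cache along with the fields the loop actually needs.
          struct EntityAoS {
              float px, py, pz;       // position (touched every frame)
              float vx, vy, vz;       // velocity (touched every frame)
              char  aiState[64];      // cold data (rarely touched)
          };

          // Structure-of-arrays: the hot loop streams through tightly packed
          // position/velocity arrays and never loads the cold AI state at all.
          struct EntitiesSoA {
              std::vector<float> px, py, pz;
              std::vector<float> vx, vy, vz;
              std::vector<char>  aiState;   // kept out of the hot path
          };

          void integrate(EntitiesSoA& e, float dt)
          {
              for (std::size_t i = 0; i < e.px.size(); ++i) {
                  e.px[i] += e.vx[i] * dt;  // contiguous, cache-friendly accesses
                  e.py[i] += e.vy[i] * dt;
                  e.pz[i] += e.vz[i] * dt;
              }
          }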

    • extermin8or2

      The SPEs required multi threading. Naughty Dog seemed to have the hang of utilising them to their max, so it’ll be interesting to see what they do with PS4, a more easily programmed-for architecture, and the multi threading will have to be used. I mean, look at the Uncharted 4 trailer; for the game to be anywhere near that (and that was claimed to be in-engine, captured from a PS4) they’ll have to use a lot of the resources available. (And the SPEs definitely had some element of out of order execution, I’m certain of this because I remember ND talking about it in a development video.)

    • d0x360

      Naughty Dog probably had the best handle on squeezing the performance out of the Cell. They really did a fantastic job. True technical wizardry.

    • demfax

      PS4 has GPGPU and hUMA/HSA features that Naughty Dog and other 1st party devs will take advantage of. There’s lots of room to grow.

      Games like Infamous Second Son already use GPGPU to run particle effects like smoke and neon.

    • d0x360

      All modern GPUs have those features… well, hUMA is AMD terminology, and it certainly helps that they also make CPUs.

      Makes integration easy, no messy business deals lol.

  • MrSec84 .

    This article is very misleading with the comment about eSRAM being extremely fast. In real world bandwidth GDDR5 eats it alive, with 172GB/s vs 150GB/s through eSRAM.

    GDDR5 in PS4 also beats DDR3, which only has a bandwidth of 55GB/s.

    204GB/s is the theoretical maximum for eSRAM, nowhere near what’s usable.

    Since bandwidths in XBox One can’t be combined (being that they’re independent pools of RAM) everyone should be comparing GDDR5 to eSRAM & DDR3.
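
    For reference, the theoretical peaks being argued about fall out of a simple bus-width × data-rate calculation. Below is a small sketch using the commonly reported figures (256-bit buses, GDDR5 at 5500 MT/s, DDR3 at 2133 MT/s); the real-world numbers quoted in this thread (172GB/s, ~150GB/s, 55GB/s) come from developer and Microsoft statements rather than from this arithmetic, and the 204GB/s eSRAM figure is Microsoft’s own quoted peak:

        #include <cstdio>

        // Theoretical peak bandwidth = bus width (bytes) * transfers per second.
        // The bus widths and data rates below are the commonly reported figures.
        double peakGBps(int busBits, double megaTransfersPerSec)
        {
            return (busBits / 8.0) * megaTransfersPerSec * 1e6 / 1e9;   // bytes/s -> GB/s
        }

        int main()
        {
            std::printf("PS4 GDDR5 (256-bit @ 5500 MT/s): %.0f GB/s\n", peakGBps(256, 5500));  // ~176
            std::printf("XB1 DDR3  (256-bit @ 2133 MT/s): %.0f GB/s\n", peakGBps(256, 2133));  // ~68
            // Xbox One's eSRAM peak (204 GB/s) is Microsoft's quoted figure for
            // read+write in the same cycle, not a simple bus-width calculation.
            return 0;
        }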

    • rudero

      Agree.
      It’s just media outlets and fanboys trying to beat down the significant disparity between consoles.
      This very topic was brought up in a fanboy article yesterday saying they are basically the same thing and that the added GDDR5 doesn’t really improve anything due to the GPU being unable to read it at the given speeds.
      It is crap, and the more media outlets do this the more I hate Microsoft and their shills.
      They’ve been doing crap like this forever with any competing products.

    • MrSec84 .

      Lol, I’m glad someone sees sense on this.
      It really makes me laugh when people try to act like an AMD GPU can’t handle that kind of bandwidth, when a 7850 has a bandwidth of 153.6GB/s. PS4’s got more CUs, TMUs & more ACEs than that, along with the CPU needing its own bandwidth, so it makes no logical sense that PS4 couldn’t take advantage of all 172GB/s of useable bandwidth.

    • Guest

      Hey, what do you know about the PS4’s buses? From what I understand, the CPU has a ~20GB/s bus between the CPU and GDDR5, then it also has the Onion and Onion+ buses which run at ~20GB/s when doing simultaneous reads and writes. Then there’s Garlic, the 176GB/s bus between the GPU and GDDR5. Is there anything I’m getting wrong there?

    • Psionicinversion

      It’s got absolutely nothing to do with the memory system. It’s the speed of the processor in how it handles OoOE. I’m not sure what it’s all about, but I assume it’s like driving down a road going to a destination, then suddenly deciding to turn off that road to go on another. The speed at which the CPU can process that request is what makes it quick. If the CPU chokes from too much data or can’t handle requests at a speedy rate, then it’s not good for it.

    • MrSec84 .

      Of course bandwidth is important, if you don’t have enough speed in the memory system then the CPU & GPU won’t be fed at all times, which is important in making the most out of the hardware.

      In both of these systems the CPU cores aren’t expected to handle all of the compute tasks, that’s why they both have weaker CPU power, but are superior on the GPU end.

      The memory system is important to the platform’s brains as a whole, not just the CPU.

      The CPU wouldn’t be using more than it can handle, unless tasks are poorly coded by developers or the APIs in both are poorly engineered.

    • Psionicinversion

      Yeah, but the bandwidth needed for such things is really low; the reason it has 176GB/sec is for texture transfer and that’s it really, you need a big pipe to shove that stuff down. Like DDR3’s 66GB/s max comes nowhere near to being fully used in PCs for gaming, but my GPU sure does need its 280GB/s of GDDR5 memory bandwidth.

      They’re counting on the compute units to offload CPU tasks because they went with a low power CPU architecture to save money, and to save on watts. Also it was a cost based decision.

    • Guest

      Modern CPUs have around 20 to 30GB/s of bandwidth, while the higher the bandwidth the better, PERIOD, for graphics, and not just textures; res, AA, AF, AO, even framerates, etc. all rely on bandwidth. Why do you think these new so-called “next gen” systems still only use weak AA solutions like FXAA? Because of their bandwidth being low.

    • Psionicinversion

      Yeah, but the bandwidth required for the path finding functions of AI, and needing to use out of order execution for branching stuff, won’t need to be very high, so it’s down to the speed of the CPU more than anything.

    • MrSec84 .

      Nope the reason it has all of that bandwidth is because of a variety of other things besides textures.

      Geometry eats up a lot of data, as does AI, physics & particle effects.
      If anything, in modern game design, with the way things will be coded going forward, it’s actually physics that will need the most memory space & fast movement of data within a system.

      You only have to think about the amount of objects interacting with one another in a gun fight, in a warzone within a game like Battlefield, parts of levels & materials interacting with each other, to realize just how much data that will use.

      Textures are a small part of that, hence why PS4’s GPU was modded for that focus in game creation, & having a larger GPU means more capabilities in handling all of that physics data, which subsequently means more bandwidth & the ability to store more data (it’s not even about just the bandwidth, but storage & access when needed, hence why PS4 has such flexibility; using GDDR5 was about being able to read & write to the large pool of memory too).

      The physics capabilities are why PS4 is designed the way it is, with a fast pool of RAM & the highly parallel nature of the APU.
      PS4 is modded more for GPGPU because of the future of game design being about physics & that’s predominantly why such a fast pool of large memory was used.

      Textures are a part of the story, because they’re part of what makes up the graphical image we see as gamers, but when partially resident textures can handle 32 terabytes worth of texture data & RAM only has to stream in the portions that you’re seeing, it becomes apparent that large amounts of bandwidth won’t be used for textures.
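
      To put the texture point in rough numbers, here is a back-of-the-envelope sketch with generic example figures (not taken from any specific game): a partially resident texture only needs the tiles that are actually visible to sit in memory, which is a small fraction of the full texture.

          #include <cstdio>

          // Back-of-the-envelope: RAM needed by a partially resident (tiled)
          // texture when only a fraction of its tiles are visible. The numbers
          // are generic examples, not taken from any particular game.
          int main()
          {
              const double texW = 16384, texH = 16384;   // a 16K x 16K texture
              const double bytesPerTexel = 4;            // RGBA8
              const double fullMB = texW * texH * bytesPerTexel / (1024 * 1024);  // 1024 MB

              const double tileKB = 64;                  // typical PRT tile size (64KB)
              const double residentFraction = 0.05;      // assume ~5% of tiles are on screen
              const double tileCount = fullMB * 1024 / tileKB;
              const double residentMB = tileCount * residentFraction * tileKB / 1024;  // ~51 MB

              std::printf("Full texture: %.0f MB, resident tiles: ~%.0f MB\n", fullMB, residentMB);
              return 0;
          }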

    • Psionicinversion

      Ah right, so more bandwidth = good. Star Citizen has a completely accurate physics model; every single thing you do, from a gun shot to manoeuvring a super carrier, has physics calculated and applied. That’s fine for the PU when it’s online, so even if the server is calculating the physics, does it calculate any part locally too?

      There’s a 50-mission single player too, which should have physics applied to everything you do as well, so the more bandwidth the better? The 390X, i.e. VI 2, should start using HBM; atm I read 1GB = 128GB/s, so if they’re 4-Hi, 4GB = 512GB/s. Would a game like that, so physics heavy, eat up all that bandwidth, so it should be able to perform faster?

    • Guest

      You’ve just shown that you do not know what you are talking about.

    • Psionicinversion

      The article is talking about a dev that does path finding AI; in what world does he need even 60GB/s of bandwidth, or even 10GB/s to the CPU, for stuff that is mostly done on the CPU? Sure, the data needs to reach the CPU in the first place, but it’s hardly going to need much room.

    • corvusmd

      Both max bandwidths are theoretical… so if you’re gonna compare them, compare both theoretical or both real world figures, not theory to real world.

    • MrSec84 .

      In PS4 the theoretical bandwidth of GDDR5 is 176GB/s, but the real world figure was confirmed by the Oddworld developers to be 172GB/s.
      172GB/s is achievable in PS4’s memory system.

      In Xbox One the maximum theoretical bandwidth is 204GB/s in eSRAM & 68GB/s in DDR3, but real world bandwidth was confirmed by Microsoft’s own engineers to be 150GB/s in eSRAM & 55GB/s in DDR3.

      I was comparing real world across all memory types.

    • ShowanW

      Of course the PS4 is more powerful… But please don’t use OddWorld as an example.

      The game isn’t demanding at all.

    • MrSec84 .

      I wasn’t using it for that reason, just because that developer confirmed the real world bandwidth of the system.

    • Cigi

      Sorry but no. The actual bandwidth is confirmed by Sony themselves to be 135GB/s. And you can actually call on all memory channels on the Xbox at once, that is DDR3, eSRAM and the Move Engines. And you can also read and write simultaneously.

    • demfax

      Wrong.

    • MrSec84 .

      Oddworld devs confirmed what they’d been able to achieve in real world bandwidth use on PS4:

      “It means we don’t have to worry so much about stuff, the fact that the memory operates at around 172GB/s is amazing, so we can swap stuff in and out as fast as we can without it really causing us much grief.”
      Read more at http://gamingbolt.com/oddworld-inhabitants-dev-on-ps4s-8gb-gddr5-ram-fact-that-memory-operates-at-172gbs-is-amazing#74Yw1BSx66bDiiwT.99

      FYI DDR3 can’t read and write at the same time, so that’s wrong; also, eSRAM isn’t accessible by the CPU, only the GPU.

      In PS4 both the CPU & GPU have access to the same pool of memory.
      Move engines aren’t a memory channel, they’re a tiny cache to move data between different components on the APU die.

      Even Mark Cerny (the architecture head of the PS4) stated the PS4 has 176GB/s to GDDR5 & that this is accessible by both CPU & GPU.

    • Cigi

      I am sorry to say – but you are just wrong.

      Yes they have access to the same pool – but not through the same bus (Onion vs Garlic)….

      Jenner wouldn’t go into details on the levels of bandwidth available for each bus owing to confidentiality agreements, but based on our information the GPU has full access to the 176GB/s bandwidth of the PS4’s GDDR5 via Garlic, while the Onion gets by with a significantly lower amount, somewhere in the 20GB/s region (this ExtremeTech analysis of the PS4 APU is a good read).

      http://www.eurogamer.net/articles/digitalfoundry-how-the-crew-was-ported-to-playstation-4

      Second thing is it implies a “GPU only” peak of only about 135 GB/s for PS4. This is probably similar to how Xbox One ESRAM is rated at 204 GB/s, but its architects said it’s more like 140-150 GB/s in real world usage. Any synthetic maximum won’t be reached in real code, and this gives us an idea where that falls on PS4.

      http://gamrconnect.vgchartz.com/thread.php?id=179167
      http://i.imgur.com/0LOwYux.jpg

    • kstuffs

      In the PS4, the CPU has only a 20 GB/s BW to the GDDR5. Only the GPU has a peak bandwidth of 176 GB/s. Sony is pretty tight-lipped about the Onion BW (CPU to GDDR5). I wonder why? Pretty vocal about the 176 GB/s ideal peak BW but totally secretive about the BW of CPU to GDDR5. If anything, MS has been pretty open about their architecture, detailing the BW of DDR3, eSRAM, GPU to DDR3, GPU to eSRAM, CPU to DDR3, and so on.

      http://www.eurogamer.net/articles/digitalfoundry-how-the-crew-was-ported-to-playstation-4
      http://www.extremetech.com/gaming/154924-secrets-of-the-ps4-heavily-modified-radeon-supercharged-apu-design/2

      Not only that, the actual possible BW from GPU to GDDR5 is about 135-140 GB/s (see slide 13 of the graph from Sony). http://develop.scee.net/files/presentations/gceurope2013/ParisGC2013Final.pdf

    • demfax

      20 GB/s is more than enough for the CPU to GDDR5 bandwidth.

      You are misinterpreting that slide and wrong.

      In essentially every way, the PS4’s hardware is more powerful than the XBO’s.

    • kstuffs

      I don’t think he confirmed anything about his code. He’s saying that the memory operates at 172 GB/s (which is likely a mistake for 176 GB/s), not that “we were able to measure 172 GB/s of BW on our actual code”. The person who interviewed him also did a poor job of asking a follow-on question, something like: do you mean 176 GB/s? Or even, is that the actual measured code BW?

      This is nothing but a repeat of the 176 GB/s total bandwidth (peak theoretical). His wording is ambiguous enough to have two interpretations (yours and mine).

    • MrSec84 .

      No, he made an outright statement about the speed the RAM operates at. If that speed was in fact theoretical he wouldn’t have used the statement “the fact that the memory operates at around 172GB/s is amazing”; there’s no room within his words to indicate anything besides bandwidth being around 172GB/s.

      The statement was made almost a year ago, no statement has been made to refute it, after all of this time, so nothing negates what Gilray said, memory operates at 172GB/s.

      Gilray’s words aren’t ambiguous in the slightest; it’s an outright statement with an obvious meaning, so it’s in no way ambiguous, it’s fact.

    • Michael Norris

      Wrong, it’s something like 156GB/s for the GPU and 20GB/s for the CPU. The GPU used in PS4 has a max bandwidth of 156GB/s. The GPU used in the Xbone has around 96GB/s; what hurts it is the DDR3, and there’s not enough eSRAM to really push for 1080p without some cutbacks compared to PS4.

    • Cigi

      No, you are wrong – look at the slide – this is from SONY themselves!
      Also, in regard to the eSRAM, the issue here is to use it as it was intended, as scratchpad memory – as MS states. This will solve the issues of 1080p. By the way, did you look at the new games coming for XO? 1080p on COD AW, Destiny etc., so there is no issue here any more. It is just a developer issue – not a hardware issue.

    • MrSec84 .

      No, Sony hasn’t confirmed any such thing; there’s no outright statement of 135GB/s being the real world maximum available to developers.

      Just Add Water confirmed that around 172GB/s is what they achieved out of the hardware, that’s a fact.

      In Xbox One developers can’t access eSRAM from the CPU, and Move Engines can’t make DDR3 capable of something it’s inherently incapable of doing; Xbox One’s 8GB pool can only either read or write within a given cycle, never both at the same time, hence why DDR3 loses in speed compared to GDDR5.

      Only eSRAM can read and write at the same time, & that’s only 32MBs, which is tiny. There’s also a rather limiting bandwidth of only 150GB/s, which isn’t a lot for such a small pool of memory, certainly not enough to make up for the 55GB/s available to the 8GBs of DDR3 in Xbox One.

      PS4 actually beats out both DDR3 & eSRAM with its 8GBs of GDDR5 at 172GB/s of real world bandwidth (confirmed, as I said, by the developers of Oddworld).

    • Starman

      propaganda ….

    • MrSec84 .

      Truth, keep trying.

    • Guest

      MrSec is right, 172GB/s is the real world bandwidth of the PS4 (176GB/s theoretically) and 150/55GB/s is the X1’s, as was confirmed by MS.

    • Michael Norris

      No, you are wrong, 176GB/s is the real world performance for GDDR5. MS pretty much lied about the 200GB/s+ figure; they counted it for both read and write and combined the DDR3 bandwidth.

    • Edonus

      Take it back to the pony stable. You are comparing the theoretical low of eSRAM to close to max GDDR5. If that doesn’t destroy your point enough, try this….. The GDDR5 176GB/s BW is based on 8GBs of RAM available. But they don’t use 8GBs, it’s somewhere around 5. While the 32MBs of eSRAM, which has a BW of 204GB/s max and 150GB/s low, is always used in its entirety because it is hooked directly to the GPU. Go ahead and spin that.

      Also, the engineers of the X1 already came out on record saying you can combine the BW of the eSRAM and DDR3 RAM because they can operate on different tasks at the same time. It’s not like you are either using one or the other.

    • Failz

      In other words, both consoles are slow. Get over it. Peasants vs peasants are the funniest things to see. What was stated in this article are facts, and eSRAM is incredibly fast, even faster than GDDR5 RAM when used correctly. It explains why MS chose to boost their CPU to 1.75GHz over the 1.6GHz that’s in the PS4. A GPU is nothing without a good CPU behind it, and both consoles lack in CPUs.

      Sucker Punch even admitted to it.
      http://www.dualshockers.com/2014/04/14/sucker-punch-seeking-more-ways-to-use-ps4s-ram-cpu-a-bottleneck-but-theres-room-for-improvement/

    • Paolo Napoli

      This is real speed GDDR5

    • MrSec84 .

      http://gamingbolt.com/oddworld-inhabitants-dev-on-ps4s-8gb-gddr5-ram-fact-that-memory-operates-at-172gbs-is-amazing

      “It means we don’t have to worry so much about stuff, the fact that the memory operates at around 172GB/s is amazing, so we can swap stuff in and out as fast as we can without it really causing us much grief.”

    • Paolo Napoli

      This is an official slide from Sony, sorry 😉

    • MrSec84 .

      Where in that slide does it say what the maximum real world bandwidth is?
      The answer is nowhere.

      It’s a display of the fraction of bandwidth the CPU uses compared to the GPU at a given point in time.

      I actually gave an example of what a developer stated was the operational speed of bandwidth in PS4.

    • kstuffs

      That’s why in a non-ideal world, you have resource contention. The slide is telling you that even with minimal CPU BW, the GPU maximum BW is still 135-140 GB/s. Unless of course you live in an ideal world, or the CPU is doing nothing.

    • demfax

      Main RAM contention between CPU and GPU is an issue on both consoles.

    • MrSec84 .

      No one claiming RAM access as 135GB/s has given a single quote validating their claims.

      The slides don’t outright state 135GB/s or 140GB/s as the maximum speed available to the GPU, the whole hardware setup or anything.

      As I said before, Just Add Water, the developers that made Oddworld for PS4, have said that bandwidth to RAM was around 172GB/s; that means the CPU, GPU and other on-die components are sharing that bandwidth, it’s what they achieved out of the console.

    • demfax

      That is an example showing CPU/GPU contention, not max GDDR5 bandwidth.

    • Cigi

      It is an example of real world scenarios – which should be the only thing that is interesting – not theoretical numbers, right? We play real world games – don’t you?

    • kstuffs

      You have to understand the CPU is doing nothing (i.e. brain dead).

    • Starman

      WRONG ….Another fanboy opinion …..or should I say propaganda.

    • MrSec84 .

      No it’s right, Xbox One numbers were given in Eurogamer’s architectural interview with the MS engineers.

      The Oddworld developers confirmed what they were able to achieve on PS4.

      There’s no propaganda, these are facts, try harder next time.

    • Cigi

      You are totally wrong. They are totally different concepts and can’t be compared like you do. Also, the CPU eats into the bandwidth of the GPU – and is also using the Onion bridge, which is 20GB/s – so the total bandwidth is nothing like they claim. And as shown below, Sony themselves also admit a real-time bandwidth of 135GB/s on the GPU.
      But the point is that the Xbox One has a lot of options to improve on this – as I expect we are beginning to see, with all newly announced games being 1080p and at minimum on par with the PS4!!!

      http://www.dualshockers.com/2014/04/04/microsoft-explains-why-the-xbox-ones-esram-is-a-huge-win-and-helps-reaching-1080p60-fps/

      DMA engines, Direct Memory Access, special registers that pull/push data around with their own queue system separate from the CPU, i.e. MS’s custom “MOVE ENGINES” on the X1 SoC, are what is being referred to here.

      They are saying they omitted the ability for the CPU to access the ESRAM because they have (four, iirc) move engines that can do this concurrently without the CPU being bothered. This is a massive advantage in real world bandwidth and real world latency numbers for the system’s combined ESRAM and DDR3.

      Basically, the difference being the PS4 has two direct channels to memory (CPU+GPU) that can write concurrently but both require the commands to be cached, while the Xbox has far more options (CPU to DDR3, GPU to DDR3, GPU to ESRAM, Move engines to ESRAM, Move engines to DDR3), of which some can read/write/copy concurrently.

      This is only a big deal if an engine is designed to access them. On the PS4, nothing needs to be done to get full speed, but that also means there is less potential for optimization; the result is an immediate benefit for PS4, and a delayed one for X1. Expect big gains in new engines and poorer performance in old engines that do not utilize the concurrent read/write/move. Exactly what we are seeing with launch multiplats on old engines, and much better performance on games designed specifically for X1 (Ryse and Forza).

    • demfax

      Unified GDDR5 > DDR3 + ESRAM for real world games performance.

  • This just further validates why MSFT went with a higher processor speed vs DDR5

    • Psionicinversion

      What are you talking about? DDR5 doesn’t exist… CPU speed means nothing if the GPU can’t pump out what it needs to.

    • corvusmd

      I agree, this SEEMS like it is leaning in MS’s favor (not that I am a dev or anything, but just based on what I have read). This is something that, it appears to me at least, MS can “solve” much more easily than Sony. Perhaps this is part of why, despite there being rather substantial differences in specs on paper… in real world applications the differences are minor at best.

    • Well, the PS4’s GPU strength is still a serious advantage. But in terms of efficiency and reducing bottlenecks, XB1 would seem to do a better job balancing the hardware – not that we can really tell the difference much, since everyone is primarily focusing on graphics comparisons.

    • Guest

      How is the X1 more “balanced” than the PS4? It’s got a serious bottleneck in both the DDR3 and the eSRAM (one being too slow and the other being too small), then it’s got serious deficiencies in the GPU in comparison to the PS4, and why only 16 ROPs, when everybody knows 32 ROPs is what’s needed for 1080p on an ATI card? Where is this better balance you speak of?

    • Matt

      Microsoft is putting all its effort into tiled resources… the lower latency RAM and small amount of eSRAM are tailored for the task.

    • Guest

      So tell me Matt, what are the eSRAM’s latency numbers then? Oh that’s right, you don’t know them, do you, just like you don’t know the DDR3 or the GDDR5 latency numbers. And if lower latency was so important to MS, then why did they go with 2133MHz DDR3 RAM when they could have stuck to their original plans and used the 1866MHz DDR3 RAM they were originally going to use, which has even lower latencies than the 2133MHz RAM they went with? Oh, that’s right, it’s because when it comes to graphics, bandwidth is what matters, not latency. It’s sad how MS has all you fools so fooled. They are teaching you false information, and you people are buying it as fact.

    • Matt

      eSRAM latency is ~0, as the GPU is reading/writing directly to it since it’s built on-die. No matter what frequency the DDR3 is running at, it will always read/write faster than GDDR5, but at the expense of limited data packet size. GDDR is slower but makes up for it by sending larger data packets, with the bandwidth mainly used to hold and transfer texture data. Using GDDR5 in conjunction with a CPU will cause the CPU to stall and idle as it waits for the data packets to arrive.

      With tiled resources the bandwidth becomes less of an advantage compared to being able to send and receive data faster.

    • Guest

      More or less 0, eh? Keep dreaming, kid. L1 is even closer than the bs eSRAM and even that doesn’t have ~0 latency, keep trying. And you honestly believe that a few ns between the X1’s DDR3 and GDDR5 will make a real difference? I’m done with you, you know nothing!

    • Matt

      Please tell me more than scoffing insults if you know more… I’m a PC gamer that’s keeping tabs on where the tech is going, alright. I hate your type… keep trolling and pretending. You’re not impressing anyone nor proving any points. You’re such a tool.

      Tiled resources are the future of XB1 and PC gaming. Try to talk yourself out of that. WINDOWS/DX = PC GAMING.

    • MrSec84 .

      Both consoles can use PRT, which is a tiled resources technique that all AMD GPUs can handle; only, because PS4’s large pool of memory is faster, it’s actually more capable in this area.

      32MBs of memory with a useable bandwidth of 150GB/s (numbers confirmed by Microsoft themselves) doesn’t beat 8GBs of memory with a useable bandwidth of 172GB/s (numbers confirmed by the creators of Oddworld for the PS4).

      It’s only eSRAM that’s perhaps a bit better in latency compared to GDDR5; DDR3 actually has comparable latency when used at the clock Xbox One has it set to.

      The complicated design of Xbox One’s chipset actually adds latency to the process of data management, whereas PS4 is a more HSA design, with both CPU & GPU having access to the same memory system.

    • demfax

      Tiled resources as some kind of Xbox savior is nonsense straw grasping.

    • demfax

      PS4 is more powerful and more balanced.

    • Guest

      Do you even understand any of this stuff? or do you just go around making up stuff to try and make yourself feel better. Pathetic really.

    • Dude, seriously, you just sound stupid quoting soundbite benchmarks you don’t even understand. Here, read this article from Digital Foundry from a while back where the XB1 hardware engineers explain their decision not to use GDDR5. Good luck understanding it.

      http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

    • Guest

      I’ve read that, and you’re the one that sounds stupid thinking I don’t understand. Good luck understanding it? It’s all elementary talk to me. You’re the one who doesn’t understand it and bought the propaganda. And if you weren’t such a gullible and naive idiot you’d know that’s bs, and the real reason they didn’t go with GDDR5 is because they didn’t want to pay for it and didn’t think they would be able to get 8GBs of it anyway. And MS themselves said they weren’t going for the most powerful hardware. That article was nothing more than PR damage control, to fool you, and you got fooled.

    • Have fun talking to yourself moron, I get better things to do jack@$$…

    • Guest

      Whats the matter, mad? Cant handle to truth so you have to resort to childish antics? You’re the moronic idiot to stupid to even know it. Later kid, have fun spewing more of your stupidities and nonsense.

      And P.S., it’s “got” not “get”, and yeah, you obviously have much better things to do with your time than post a bunch of comments on a game site, loser!

    • corvusmd

      Well “Guest” says that PS4 is better than X1, so I’ll sell my X1 tomorrow instead of believing actual devs. Shame I was having a ton of fun with it, it was doing all my heavy lifting.

      Oh and it’s…”What’s the matter? Mad?” ” Can’t handle THE truth, so you have to resort to childish antics?” Speaking of which….do you remember saying “Do you even understand any of this stuff? or do you just go around making up stuff to try and make yourself feel better. Pathetic really” before he even said a word?

      I know you’re a delusional fanboy that wants to jump all over everyone that doesn’t say PS4 is king, and that X1 is a good machine….but all you’re doing is showing your fanboy colors…I suggest you check it at the door.

    • Not mad, had plans. And it’s unbelievably clear how much of an @$$hole you are. If I met you in real life you’d have a bloody nose by now if not a few broken ribs, regardless of how old you are, which with your mouth and attitude probably signifies you’re in your teens if not your twenties.

      And if you couldn’t tell, your trolling here has caused multiple people to ignore you. Grow up and get a real life instead of taking whatever pissed you off in your day job out on people having civil discussions on a site.

    • Kamille
    • If you read the original article @ http://gamingbolt.com/substance-engine-increases-ps4-xbox-one-texture-generation-speed-to-14-mbs-12-mbs-respectively, that benchmark is focused on texture decompression only. I’d rather trust a review that is a little more robust on overall CPU capabilities.

    • Guest

      No, this does not validate why MS went with a higher CPU speed, and you should know the only reason MS did that was because they knew that people knew that the PS4 was more powerful, and they did this to try and offset some of that advantage and trick people into thinking there wasn’t as big of a difference. And show me the proof that the X1’s CPU performs better.

    • Psionicinversion

      Not really; their cooling solution can handle a higher increase in clock speed, so they may as well try to get as much out of the system as they can.

    • Guest

      Yes, you’re right, and their fan being bigger, along with more breathing room in the case, allowed them to do it. But why do you think they did it? It’s not because they were planning to, it’s because they knew how their system looked in comparison to the PS4 and resorted to drastic measures to try and close the gap in perception, and you should know that. So we are both right.

    • Psionicinversion

      Not really; if PS4’s cooling solution had allowed them to up the frequency safely then they’d have done it too. But I’d say it’s on the edge: with the APU and RAM sitting very close to the power brick, there’s a lot of heat being generated that needs to be dissipated, and you can’t expect everyone to put it in an open space to safely breathe. It could be in a TV unit, so they have to be careful.

    • Guest

      Umm, where did I say that Sony could up their speed? Are you seeing things? You may wanna have that checked out. And btw, Sony’s cooling system is actually more advanced than the X1’s, FACT!

    • Psionicinversion

      You didn’t, but can you actually read? I said MS upped their CPU because they could; if Sony could have upped the speed of their CPU or GPU they would have. Sony’s cooling is more advanced, but it still has a thermal limit where it can effectively get rid of heat; you up the speeds past what it can get rid of and the cooling solution becomes less effective till it flat out doesn’t work, so to say MS did it just for marketing is bs. You want the fastest system possible, especially as both are so incredibly weak, and that’s it.

    • demfax

      PS4’s CPU performs better than Xbox according to devs and benchmark tests.

    • I disagree. The PS4’s GPU is staggeringly more powerful, but not its CPU. The CPU on XB1 is 1.75GHz, on PS4 it’s 1.6GHz. Both are AMD Jaguars.

    • demfax

      Despite apparently being clocked 150MHz lower, PS4’s CPU performs better according to devs and benchmarks.

    • I tried a google search and haven’t seen such reports, can you fwd any?

    • demfax

      Substance Engine benchmark, and developer Matt on neogaf.

      http://www.neogaf.com/forum/showpost.php?p=94264594&postcount=50

    • Any Engine isn’t really a fair comparison since they’re more heavily based on the GPU. I’ll try looking up Matt’s profile on neogaf later but that seems an odd thing vs looking straight up clock speed capabilities on AMD’s website…

  • Dirkster_Dude

    A friend of mine has more money than sense, but he is also a gamer, so he spent a ridiculous amount of money on a new gaming PC. Every part of the PC, such as the CPU, motherboard and memory, was the best money could buy. Before he bought his PC he did research and found benchmark tests showing that while there was an improvement from having faster memory, it was less than 10% for him. The limiting factor was the CPU and system bus (motherboard), NOT the memory. He also asked about differences in the gaming consoles and how the PS4 is 50% more powerful. They said it looked great on paper, but the 50% is probably more like 25% at most because of other limitations no one likes to hear about. PS: He broke down and bought both a PS4 and an XB1. It is good to not have to compromise.

    • Psionicinversion

      The faster memory has actually been debunked as being kind of useless, because the higher the frequency the more latency it has, so there was about 1 fps difference. That was a test done by LinusTechTips on YouTube. So the bandwidth is higher but it’s slower in responding to requests, which evens it out. The CPU to GPU bus is slower and needs to be addressed really, but probably won’t be sorted for some time. Hope he used game benchmark tests and not synthetic, because while there may be a difference in synthetic tests, it’s negligible in a real world situation.

    • demfax

      In essentially every way, the PS4’s hardware is more powerful than the XBO’s.

    • Dirkster_Dude

      I would only agree that it is probably a better design, but the same thing was said about the PS3 vs. 360 and it made absolutely no difference in the end. All the exclusives so far for either the PS4 or XB1 are just as high quality as the exclusives for the opposing console. If it was truly more powerful in every way there would be more of a gap, especially at the start.

    • MrSec84 .

      This generation is very different from the last.
      Last gen you had PS3 with the weaker GPU, but a Cell processor, which was essentially a main core with 7 additional cores, which were basically shaders, and a split pool of memory.

      360 had the more straightforward design, 3 cores in the CPU, that were significantly weaker than PS3’s cell, but a more powerful GPU than the RSX in PS3 & one pool of main memory, with eDRAM buffer attached to the GPU.

      This gen you’ve got an APU in both PS4 & XB1, the same design of 8 AMD Jaguar cores; PS4 has a GPU with 2X the ROPs, 50% more shaders, 50% more texture mapping units, 4X the ACEs & 3X the real world bandwidth to its 8GBs of main memory (172GB/s vs 55GB/s), along with the same level of latency as Xbox One’s 8GBs of DDR3.

      ESRAM is also slower in real world bandwidth compared to GDDR5, 32MBs at 150GB/s doesn’t compare to 8GBs at 172GB/s.

      In common comparisons PS4 is handling games at higher resolutions, with more stable frame rates; just look at Watch Dogs, Battlefield 4, Thief, Assassin’s Creed, all look more crisp.

      This gen you have the same manufacturer of CPU & GPU in both consoles, but PS4 has a noticeably more powerful GPU & a faster, more straightforward memory pool.

      As things go on the gap can easily widen as developers use more features in PS4, as it has a lot more potential within its design than Xbox One (a rough back-of-the-envelope comparison of the GPU numbers is sketched below).
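
      For anyone who wants the arithmetic behind that GPU comparison, here is a back-of-the-envelope sketch using the widely reported CU counts and clocks (treat the figures as assumptions, not official vendor statements):

          #include <cstdio>

          // Back-of-the-envelope GPU compute comparison using the widely reported
          // specs: each GCN compute unit has 64 ALUs doing one fused multiply-add
          // (2 FLOPs) per cycle.
          double teraflops(int computeUnits, double clockGHz)
          {
              return computeUnits * 64 * 2 * clockGHz / 1000.0;   // GFLOPS -> TFLOPS
          }

          int main()
          {
              const double ps4 = teraflops(18, 0.800);   // ~1.84 TFLOPS
              const double xb1 = teraflops(12, 0.853);   // ~1.31 TFLOPS
              std::printf("PS4: %.2f TFLOPS, XB1: %.2f TFLOPS (~%.0f%% more)\n",
                          ps4, xb1, (ps4 / xb1 - 1.0) * 100.0);
              return 0;
          }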

  • Starman

    XB1 CPU is much faster than Sony’s CPU …

    • demfax

      Wrong.

    • MrSec84 .

      Now this is XBox One propaganda.

      PS4’s CPU benchmarks at over 16% faster than Xbox One’s, despite Xbox One’s CPU being clocked higher.

      PS4’s APU is much faster, has faster access to its large pool of memory & a more efficient design than what Xbox One has.


 
