PS4 API Can Go Down To The Metal; PS4-Specific Optimizations Not Available On PC Will Improve Performance Further

Confetti FX founder Wolfgang Engel talks about the PS4 API and compute shader optimizations.

Posted on 12th June, 2014 under News



With all the hype behind DirectX 12, the upcoming API update for the Xbox One and PC due in 2015, it would be foolish to think Sony is not working to improve its own API. We asked Confetti FX founder Wolfgang Engel how Sony's custom API for the PS4 will stack up against DX12.

“Sony’s own custom API is more low-level and definitely something that graphics programmers love. It gives you a lot of control. DirectX 12 will be a bit more abstract because it has to work with many different GPUs, while the PS4 API can go down to the metal,” he explained.
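
To make that distinction concrete, here is a purely illustrative sketch with invented types and packet words; it is not Sony's actual (NDA-covered) GNM API and not real DirectX code. A portable API routes every draw through a driver layer that validates and translates state for whatever GPU is installed, while a to-the-metal console API can let the game write the GPU's native command packets into a buffer itself:

```cuda
// Illustrative only: invented types and opcodes, not GNM and not DirectX.
// The point is where the translation work happens, not the function names.
#include <cstdint>
#include <vector>

// Portable-API style: the driver owns validation and translation, so the
// same call works on any GPU, at the cost of an extra layer per draw.
struct Driver {
    void setPipelineState(int stateId) { /* validate, translate for this GPU */ }
    void draw(int vertexCount)         { /* build hardware packets internally */ }
};

// Console to-the-metal style: the game appends the GPU's native packet
// words itself, because the hardware is a single known configuration.
struct CommandBuffer {
    std::vector<uint32_t> words;
    void emit(uint32_t opcode, uint32_t payload) {
        words.push_back(opcode);        // hypothetical packet format
        words.push_back(payload);
    }
};

int main() {
    Driver d;                           // portable path: driver in the loop
    d.setPipelineState(42);
    d.draw(3);

    CommandBuffer cb;                   // console path: no driver in the loop
    cb.emit(0x10 /* SET_STATE */, 42);  // invented opcodes, for illustration
    cb.emit(0x20 /* DRAW */, 3);
    // The buffer would then go straight to the GPU's command processor.
    return 0;
}
```

The trade-off is the one Engel describes: the second path offers more control and less overhead, but it is only safe because the hardware is one fixed, known configuration.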

It's important to note that Wolfgang is not criticizing DirectX; in fact, he thinks it will turn out to be great.

Wolfgang Engel was at GDC this year, where he spoke about Compute Shader Optimizations across three AMD GPUs: RADEON 6770, RADEON 7750 and RADEON 7850. Will those same optimization methods be applicable on the PlayStation 4?

“Yes, but there will be other PS4 specific optimizations, not available on PC, that will help increase performance even more,” explains Wolfgang.
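
Engel's GDC material isn't reproduced here, but the class of optimization in question can be sketched in CUDA, used as a stand-in for a compute shader; the kernels, block size and blur radius below are illustrative assumptions, not taken from his talk. The classic move is to stage data in fast on-chip shared memory (CUDA's __shared__, the analogue of GCN's LDS and HLSL's groupshared) so that neighbouring threads stop re-reading the same values from slow global memory:

```cuda
// Illustrative compute-shader optimization (hypothetical example, not from
// Engel's talk): a 1D box blur, first naive, then tiled through shared memory.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

#define RADIUS 4
#define BLOCK  256  // PS4-specific tuning would size this to GCN's 64-wide wavefronts

// Naive: every thread re-reads all of its neighbours from global memory.
__global__ void blurNaive(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < RADIUS || i >= n - RADIUS) return;
    float sum = 0.0f;
    for (int k = -RADIUS; k <= RADIUS; ++k)
        sum += in[i + k];                      // 9 global loads per thread
    out[i] = sum / (2 * RADIUS + 1);
}

// Optimized: the block cooperatively loads one tile (plus halo) into shared
// memory, then all neighbour reads hit fast on-chip storage instead.
__global__ void blurShared(const float* in, float* out, int n) {
    __shared__ float tile[BLOCK + 2 * RADIUS];
    int i   = blockIdx.x * blockDim.x + threadIdx.x;
    int lid = threadIdx.x + RADIUS;
    tile[lid] = (i < n) ? in[i] : 0.0f;
    if (threadIdx.x < RADIUS) {                // edge threads fetch the halo
        tile[lid - RADIUS] = (i >= RADIUS) ? in[i - RADIUS] : 0.0f;
        tile[lid + BLOCK]  = (i + BLOCK < n) ? in[i + BLOCK] : 0.0f;
    }
    __syncthreads();                           // tile is ready for everyone
    if (i < RADIUS || i >= n - RADIUS) return;
    float sum = 0.0f;
    for (int k = -RADIUS; k <= RADIUS; ++k)
        sum += tile[lid + k];                  // shared-memory reads only
    out[i] = sum / (2 * RADIUS + 1);
}

int main() {
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemcpy(in, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int blocks = (n + BLOCK - 1) / BLOCK;
    blurNaive <<<blocks, BLOCK>>>(in, out, n); // time these two against
    blurShared<<<blocks, BLOCK>>>(in, out, n); // each other per GPU
    cudaDeviceSynchronize();

    cudaMemcpy(host.data(), out, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[1000] = %f\n", host[1000]);    // expect 1.0 for constant input
    cudaFree(in); cudaFree(out);
    return 0;
}
```

Which variant wins, and by how much, differs across parts like the RADEON 6770/7750/7850 that Engel profiled; a fixed console target is what lets you go further still, for example hard-coding wavefront-sized groups and LDS layouts.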

What are your thoughts on this? Let us know in the comments section below.


Comments

  • d0x360

    The DirectX that runs on a console is different from DirectX on PC. It works the same way, but it's slimmed down and optimized for specific hardware. Developers don't have to use any prebuilt API if they don't want to; they can make their own. They just don't, because it's infinitely easier to use the tools built right into and supported by the SDK… tools like DirectX.

    • Sticky Notes

      This is true. The DirectX that runs on the Xbox One will be highly optimized. Still, it looks like the PS4 has the advantage, as its metal is literally more powerful.

    • d0x360

      Oh definitely. Numbers don't lie on that one. Regardless of which platform someone chooses, there are going to be beautiful games that are fun to play. That's why I bought both, and even if someone needs to wait for price cuts, every gamer should try to have both.

  • doom guy

    Here we go again.

  • Minecraft Greek

    I've heard that MS put special DX12 command sets into their chip that can't be accessed until DX12 is released, essentially leaving the PS4 with DX11+ commands. Even down to the metal, they couldn't replicate DX12 support like that.

    • Brian Murphy

      Uh, no. The PS4 doesn't use the actual DX API; it uses OpenGL and a DX variant. Completely different APIs. That having been said, you cannot just 'lock out' commands from hardware you have 100% access to.

      The limitations are API/SDK support, meaning that as time progresses and the SDKs get more advanced and refined, they can add any number of bells and whistles.

    • Psionicinversion

      It depends on whether parts of DX12 need hardware acceleration that isn't available in current chips, but most of it will be driver-related. If it does need hardware acceleration, and that acceleration isn't available in the PS4's chip, then it could do it in software, but that would be quite expensive performance-wise.

    • Guest

      Like what? What hardware acceleration would it have that the PS4 couldn't do in hardware, when they both have the same hardware (8-core Jaguar CPU and AMD GCN GPU)?

    • Psionicinversion

      They're both slightly modified. MS's is custom-built, as they licensed the tech, so it could have some kind of hardware acceleration for something new in DX12 that no one knows about. You don't know: the 360's Xenos GPU was the first GPU to have hardware tessellation, and no other GPU on the market had it at that point. So you never know.

    • Guest

      DX12 will activate the secret stacked misterxchips and make Xbox One 5+ teraflops.

    • YOUDIEMOFO

      HAHAHAHAHAH nice……

      Amazing how people truly think there is going to be sooooooo much more power and performance from these systems once these APIs get released.

      That stuff about the "secret chips" inside the Bone… people can really make themselves believe anything they want, can't they?

    • Guest

      They want to believe.

    • Brian Murphy

      Yeah, that's the thing: the PS4 and X1 APUs are from the same stock, just tweaked by Sony and Microsoft. Sony just crammed a shitload more shaders, CUs, and ACEs into the space where Microsoft put the ESRAM.

      Besides, what you're referring to would be hardware-level, not a software thing. Yes, they could do software acceleration, but Sony would be making that themselves; Microsoft wouldn't be 'locking' anything out, given Sony doesn't use DX.

    • Don Karam

      Microsoft seems to be giving the impression that when both the PlayStation 4 and Xbox One are utilised to their fullest extent, the Xbox One will finally outperform the PlayStation 4 computationally.

      It seems this was the case with the previous generation: the PlayStation 3 went from computationally slower to faster than its rival once both it and the Xbox 360 were utilised to their fullest extent.

      Does anyone know with certainty what will happen this generation regarding overall computational performance? I find it hard to believe that Microsoft would make such a stupid hardware design; there has to be an advantage to it. I can't believe AMD wouldn't have said something to Microsoft if this design had no redeeming qualities.

    • Psionicinversion

      The problem with that comparison is that the PS3 was technically more powerful than the 360 from the start anyway; they just had a nightmare with the SPUs. This time the X1's hardware is weaker, so they're trying to pick it up using cloud computation. TBH it's MS's fault for not realising the number of stream processors in it wasn't enough. If they were designing this five years ago, maybe they thought the hardware would be powerful enough, but that turned out not to be the case.

    • Brian Murphy

      I don't see what else you'd expect from any company, though. Do you think Microsoft (or Sony) would actually come out and admit their machine is less powerful? That's just not how PR works.

      The PS3 was always, on paper, more powerful than the 360. The problem was that the PS3 was difficult to tap into; it took time for devs to get comfortable with it, and for its APIs to become advanced enough to really utilize that power. This gen, the XB1 is actually more difficult to program for (ESRAM) and less powerful (also ESRAM). Virtually the opposite of last gen.

      This isn't to say the XB1 is garbage; it's not. It just doesn't have as much going for it as the PS4.

    • Guest

      PS3 was more powerful than 360, but a nightmare to code for. PS4 is both more powerful AND easier to code for. It’s a win/win for PS4.

    • Psionicinversion

      Yeah, I know they'd have to implement it in OpenGL, BUT take tessellation for example: you couldn't do that in software. I don't even think you can do it now, as it takes too much power to emulate and needs hardware acceleration. So if it's something like that, then Sony couldn't do it. MS built their APU themselves and licensed the tech, so I can't imagine they'd spend $3 billion making a $h1t APU. It doesn't make sense, unless they love flushing money down the toilet.

    • Brian Murphy

      Well, a correction here: neither Sony nor Microsoft 'made' their APU. They took existing AMD APUs and tweaked them, with AMD, to the specifications they desired.

      But you're also assuming Microsoft knew exactly what Sony was up to at the time, which doesn't sound plausible, since they appeared caught off guard when Sony announced 8GB of GDDR5.

      Most companies are very, very careful about non-disclosure agreements and keeping things secret. There'd be no way for Microsoft to know what Sony was up to unless AMD let them see the design for the APU… which would open AMD up to the most devastating lawsuits Sony could lob at them.

    • Psionicinversion

      Well, they probably knew Sony was going to go with GDDR5, but weren't counting on 8GB of it. MS was going to go with it too, but didn't think the prices would drop to where they are now.

    • Brian Murphy

      See, now that I don't believe. Microsoft knows how the price of RAM fluctuates; they've been in the PC industry for quite some time, so they understand how it works.

      I think they simply dropped the ball (it happens). They pulled a Sony: got confident in their position and allowed themselves to be overtaken by the competition.

      Seems unlikely, but it happens. Apple, Microsoft, Sony, Nintendo, Sega, etc. all do it from time to time.

    • Psionicinversion

      Well, there were production problems with GDDR5; it wasn't reaching high enough densities until very recently, which kept prices high. Too high for a budget games console.

    • jacksjus

      I read that MS was forced to ditch GDDR5 due to Kinect's expense. Now they've dropped Kinect. Let's face the facts and accept that they F'd up big time this gen. Don't be surprised if MS drops an X1-S version with new hardware. They are getting smoked in the market, and they cannot afford to be this far behind over the next few years. If things don't tighten up by the end of 2015, expect a newer X1 model.

    • Psionicinversion

      They won't release a new X1 model with different hardware, because that defeats the objective of owning a console, not to mention it'll render the old one obsolete. The best thing they can do is start planning their next one now.

    • LGK

      Where did you hear that?

    • Guest

      Misterxmedia's blog, a cult of delusional, extremely hateful fanboys who think there are secret stacked chips inside the Xbox that MS is hiding from the world.

    • Minecraft Greek

      I never said anything about secret cult chips. I just said that MS put in DX12 instructions for their upcoming API…

    • LGK

      I think you’re replying to the wrong person brah

    • Guest

      And I heard that Sony put special command sets into their chip that can't be accessed until they update their API (GNM), essentially leaving the X1 with DX12 commands while their API is equivalent to DX13+. Even down to the metal, they couldn't replicate Sony's API support like that.

    • Minecraft Greek

      OK Guest.

    • Guest

      Stop drinking the misterxcultist kool-aid.

    • chrisredfield31

      PS4 does not use DX at all, first and foremost.

    • Minecraft Greek

      No, I just mean that DX12 is compatible with features that chipsets don't use yet. If utilized, it could make the limited shader cores more useful. And no, I'm not talking about the stacked cultist chips that "Guest" keeps going on about. It's simply a rumor that MS customized the shaders a bit to make certain effects less costly on the hardware.

    • Guest

      Go back to misterxturd’s blog and stay there please.

  • Psionicinversion

    PC gets lower-level access plus massively improved GPUs = console death.

    • Guest

      Make sense, that was stupid.

    • Psionicinversion

      OK: DX12 gives low-level access, removing the CPU bottleneck and getting closer to the hardware… and in 2016, Pascal's unified memory architecture, although not as fast as a proper unified memory, will help speed things up A LOT, as long as it's implemented properly in the OS you're running (see the sketch below). Console advantages are disappearing, so you might be paying a lot more next time round to keep up with PCs.
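
      A hedged CUDA sketch of that unified-memory idea: managed memory already lets the CPU and GPU share one allocation with no explicit copies, and hardware generations like Pascal aim to make that sharing cheaper. The kernel below is a trivial placeholder:

```cuda
// Hedged sketch of the unified-memory idea: one allocation visible to both
// CPU and GPU, with no explicit cudaMemcpy in the application code.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void addOne(int* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;                    // GPU writes the shared buffer
}

int main() {
    const int n = 1024;
    int* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(int));  // one pointer, CPU and GPU
    for (int i = 0; i < n; ++i) data[i] = i;    // CPU fills it, no copy call

    addOne<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();                    // wait before the CPU reads

    printf("data[10] = %d\n", data[10]);        // CPU reads the result directly
    cudaFree(data);
    return 0;
}
```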

    • Guest

      Keep dreaming.

    • Lamanuwa

      Just take one chip, for example the R9 290X: you still have a dozen board manufacturers making OC versions, different memory arrangements, different memory speeds, and for some chips even different memory capacities and different bus implementations.

      This is what makes it harder for developers to get down to the metal.

      Now pair that with all kinds of CPUs, audio chips and NICs, and you have what we call a bloody mess.

    • Psionicinversion

      No, it doesn't, lol. AMD and Nvidia release reference boards which are mostly clocked exactly the same (they are on AMD's, at least; there might be slight variations in clock speed on Nvidia's, but nothing major), and that reference design is the target you aim at. Just because the boards are made by a dozen different manufacturers doesn't mean anything. Bus widths are usually scaled down on weaker GPUs, since they can't process data fast enough to need as much bandwidth, which is why you then choose medium or high settings etc. For memory config, target the standard one; higher memory capacities usually come out later anyway.

      Down to the metal has got nothing to do with all that. It's because DirectX basically uses a common feature set to get the job done, and is bloated and non-specific. With DX12 and Mantle you can code closer to the metal, i.e. use the enhancements made to specific architectures to make things faster, because no GPU is truly maxed out. Far from it: the inefficiencies and non-specific nature of DX hold it back. Even though DX12 supports Fermi, I think devs won't code for it; they'll code for Kepler and Maxwell (the GTX 6xx/7xx Nvidia cards), seeing as DX12 won't be out till next year they may as well ditch the old stuff, and for GCN 1/1.1/2 (7xxx/R9 290s and the next R9 3xxx series GPUs). The difference between the Nvidia set and the AMD set of architectures won't be massive, as later versions are built upon the previous one, so build for the core one, then add the extra support which gets activated when that card is detected (a sketch of that pattern follows at the end of this comment).

      NICs? Laughable; Windows handles that. Audio chips, are you having a laugh? Stereo/2.1/5.1/7.1, Dolby surround, DTS etc. are all supported by those chips, even Realtek onboard to some extent. The main difference with sound cards is audio clarity, and you only need a dedicated sound card if you've got a really good sound system, because a rubbish sound system won't benefit from the clearer signal a decent sound card puts through. All of that is handled by the sound card driver, not the devs, as audio stuff like that is pretty standardized.

      With AMD's TrueAudio, there are two plugins for it; one is Wwise (I forget the other), but it's middleware and is designed to be easy to use. It should only take 1-2 days to code in the plugin, then the plugin handles everything on its own and AMD's drivers handle the TrueAudio bit. Game devs' involvement is really small.

      With CPUs, well, even though they say a game engine like Frostbite scales to 8 cores, I don't think it really does much, TBH. That'll apply to consoles too, because if you can get 8 cores working on consoles you can get it working on a normal CPU, although an i7 may only use 4 or 5 cores, since they're about two times more powerful than a console's. There's only AMD and Intel, and both use standardized instruction sets; I don't think there are any instructions specific to either CPU these days. I could be wrong, but that's it.

      That's my take on it. I could be wrong in places, but coding to the metal will undeniably add to development time; the performance increases you can get may be worth it to push higher visuals, though.
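
      To make the "build for the core architecture, then add the extra support when that card is detected" idea concrete, here is a hedged sketch. CUDA is used as a stand-in (neither DX12 nor Mantle code is shown here), and both kernels are hypothetical placeholders:

```cuda
// Hedged sketch: detect the installed GPU's architecture and enable the
// newer code path only where it is supported. Both kernels are placeholders.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void kernelBaseline() { /* conservative path for older parts */ }
__global__ void kernelNewArch()  { /* would use newer-architecture features */ }

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);          // query the installed GPU
    printf("GPU: %s (SM %d.%d)\n", prop.name, prop.major, prop.minor);

    // Ship one binary; pick the fast path only when the hardware supports it.
    if (prop.major >= 5)                        // e.g. Maxwell and newer
        kernelNewArch<<<1, 64>>>();
    else
        kernelBaseline<<<1, 64>>>();
    cudaDeviceSynchronize();
    return 0;
}
```

      The point is the dispatch shape: one binary, a conservative baseline path, and faster architecture-specific paths switched on only after detection.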

    • Lamanuwa

      I get what you are trying to say. But here's the problem: when you say audio is handled by middleware, NICs by the OS API, etc., what you are really doing is proving my point. I didn't say you can't get closer to the metal on the PC side; I'm merely saying not as close as you can get if you were developing just for consoles.

      Think about this: the whole "down to the metal" approach came about at the advent of these next-gen consoles, mostly influenced by AMD and DICE. Why?

      Because no one puts that much effort into rewriting machine-level code for better optimisation unless there are plenty of profitable platform commonalities they can depend on.

      I agree that things are getting better, with reference cards and manufacturers having to stick to those minimum specs, but my point still stands, because you get many new reference cards over the period of a year, sometimes twice a year.

    • Psionicinversion

      Err, nope. To the metal has always been a console thing; that's why PS3 and 360 games look better and better as the years go on. Consoles have always had a low-level API and to-the-metal coding.

      With sound and NICs, the PS OS and Xbox OS handle that stuff via drivers; the game doesn't handle any of it, so it's still the same as a PC in that respect.

      AMD wanted DICE to multithread their engine because AMD CPUs are rubbish at single-threaded performance but good at multithreaded, and not many engines are/were truly multithreaded, so it needed to be done; otherwise they would have gotten even worse performance out of BF4.

      Manufacturers don't just stick to those minimum specs; they have to meet or exceed them. They can't go under spec. It's the same as Intel and AMD: they create reference motherboards so the hardware partners have something to build off of. You only get reference cards at the start of a new architecture, because the board is new to the manufacturers. The R9 290 had reference boards because the architecture and bus widths were different, so the manufacturers could build off that design. The R9 280 and below didn't have one, and neither did the 7xxx, because they're all fundamentally the same card as the 6xxx series. The 780 Ti might have had one, but AMD's GPUs run very hot with the standard blower whereas Nvidia's don't, so AMD needs custom coolers on them.

      But the point is, you target a 780; the reference spec of that is what you shoot for. It doesn't matter whether anything else is overclocked or not, because that performance isn't certain, but a reference speed is. That's what they target.

    • Lamanuwa

      You keep proving my point. Which is, to be clear:

      "Down-to-the-metal programming, in other words assembly-level instructions, can be implemented further and more extensively on consoles than on PC, purely because console hardware is fixed and stays fixed for a longer period of time."

      Again, I didn't say it can't be done on PC, just not as much as on fixed hardware.

    • Psionicinversion

      Building a PC is easy… writing code is obviously a lot harder, but even then, the right tools make it a lot easier.

      You know how Dead Rising 3 is coming to PC? With those kinds of X1 games we'll maybe see how easy it is to port. Forza Horizon 2's car models look like how they should have been at Forza 5's release, so I'll buy day one if the models are the same. Let's face it, a PC is 3-4 times better, so the original car models should be available.

    • Lamanuwa

      True. Keep up the good work!

    • Guest

      Yep, the paupers can’t accept that fact with their $400 plastic PoS.

    • Sammy

      Ummm.. last time I checked, the Xbox One was $400 and made of… wait for it…. plastic.

    • Guest

      You must have mental issues to literally hate a piece of plastic and people who like gaming on it, or anyone who remotely understands technology.

  • Manoj Varughese

    This is clearly evident from that Uncharted trailer.

  • Ahmz Cro

    DX12 will help the XB1, but it won't make the console more powerful than the hardware specs it was released with.

    • Failz

      Hardware with better software will run better; it doesn't matter how much of a hardware advantage the opposition has, if it hasn't got the software it will run worse. Infamous SS's creators already said the PS4's CPU is bottlenecked. What MS is working on with DX12 is removing that CPU bottleneck to allow better calculations. Like it or hate it, facts are facts.

    • Jecht_Sin

      And Naughty Dog, home of the ICE team, is laughing at you.

    • Guest

      The PS4's CPU performs better than the Xbox One's CPU.

      Sony 1st party devs will extract every bit of performance out of PS4 in software, keeping it in the lead.

  • Guest

    Well, here is that Confetti FX guy now saying the Sony API is even more down to the metal; in other words, better.

  • Zac

    “Compute Shader Optimizations across three AMD GPUs: RADEON 6770, RADEON 7750 and RADEON 7850”

    Sounds a lot like the Wii U, X1, and PS4 GPUs. Well, similar families, but not the same shader builds. I just wonder why he would emphasize those three.

  • Guest

    PauperStation lies and overhype that will underdeliver.

    • Guest

      8+ million PS4s and counting.

    • Psionicinversion

      fcuk derp baghead

  • Minecraft Greek

    DX12 isn't about being close to the metal; it's about not wasting resources, for one. In tech demos they showed open-world games running in HD with streaming textures using as little as 16MB of texture cache. At the same time, it divides work across all 8 CPU cores, versus typical approaches where everything runs on core 0 (a sketch of that pattern follows at the end of this comment). The efficiency of these technologies eliminates typical bottlenecks, even in heterogeneous memory systems like the Xbox One and PS4, both of which give the CPU shared access to RAM.

    On top of that, rumors are that certain instructions which require multiple shader cores to process on traditional GPUs (coded to the metal or not) can actually be done faster on a single shader, because MS asked AMD to put in some DX12-specific commands that make some new shader tech cost-free.
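
    As a hedged illustration of that multi-core submission pattern: the CommandList type below is invented, not the real D3D12 interface (though the real API likewise records per-thread command lists), but the shape is the same. Each worker thread records independently, and only the final submission to the GPU queue is serialized:

```cuda
// Conceptual sketch with invented types; the point is that command recording
// parallelizes across CPU cores while submission stays ordered.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct CommandList {                       // hypothetical stand-in
    std::vector<std::string> cmds;
    void record(const std::string& c) { cmds.push_back(c); }
};

int main() {
    const int workers = 8;                 // one per CPU core
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;

    // Recording scales across cores: no shared state, no locks.
    for (int t = 0; t < workers; ++t)
        threads.emplace_back([&lists, t] {
            lists[t].record("draw batch " + std::to_string(t));
        });
    for (auto& th : threads) th.join();

    // Submission stays ordered on one thread (the GPU queue is serial).
    for (int t = 0; t < workers; ++t)
        for (const auto& c : lists[t].cmds)
            printf("submit: %s\n", c.c_str());
    return 0;
}
```

    Recording is the expensive, parallelizable part; the single serial submit at the end is cheap, which is where the claimed core-scaling comes from.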

    • Jecht_Sin

      The tiled resources, the ones "streamed" in the 16MB, are part of DX11.2, and thus already there. They didn't do much, did they?

      http://blogs.windows.com/windows/b/extremewindows/archive/2014/01/30/directx-11-2-tiled-resources-enables-optimized-pc-gaming-experiences.aspx

    • Minecraft Greek

      So wait, you only want to talk about one of the things I wrote about, and not the distributed-computing efficiency or the possibility of a DX12 instruction set?

    • Guest

      Complete load of misterxnonsense.

      Anyone with technical knowledge knew the PS4's game graphics performance would be better for the entire generation as soon as the specs were official. That was well over a year ago. Apparently it's taking some people a LONG time to come to terms with reality. It's going to be a very long generation for you.

  • Minecraft Greek

    Fine fine, but both Sony and MS are gonna be all up in the clouds someday, and at that point local power won’t matter….dat cloud tho.


 
