<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>
	Comments on: Tech Analysis: Can The PS4 Pro Deliver &#8216;Native&#8217; 4K In Modern Games?	</title>
	<atom:link href="https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games/feed" rel="self" type="application/rss+xml" />
	<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games</link>
	<description>Get a Bolt of Gaming Now!</description>
	<lastBuildDate>Thu, 17 Nov 2016 17:23:00 +0000</lastBuildDate>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>
	<item>
		<title>
		By: InterTriplete		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-308426</link>

		<dc:creator><![CDATA[InterTriplete]]></dc:creator>
		<pubDate>Thu, 17 Nov 2016 17:23:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-308426</guid>

					<description><![CDATA[So, basically, the answer is NO, it can&#039;t. This is another $ony failure; the PS4 Pro is NOT &quot;pro&quot; at all, and it doesn&#039;t even have the 4K Blu-ray drive the Xbox One S has... What a shame. Another way to easily waste $400+, after having spent $400 two years ago for no exclusives, a shameful PSN, and none of the promised performance...]]></description>
			<content:encoded><![CDATA[<p>So, basically, the answer is NO, it can&#8217;t. This is another $ony failure; the PS4 Pro is NOT &#8220;pro&#8221; at all, and it doesn&#8217;t even have the 4K Blu-ray drive the Xbox One S has&#8230; What a shame. Another way to easily waste $400+, after having spent $400 two years ago for no exclusives, a shameful PSN, and none of the promised performance&#8230;</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Hvd		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-303311</link>

		<dc:creator><![CDATA[Hvd]]></dc:creator>
		<pubDate>Sun, 25 Sep 2016 04:24:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-303311</guid>

					<description><![CDATA[No, it can&#039;t. I have an R9 380X, and it takes an R9 390X or 290X to do 4K/30fps on PC. The PS4 Pro might have the same TFLOPs, but it&#039;s clocked slower than those video cards. There&#039;s also the five-year-old Jaguar CPU inside, compared to a newish CPU, which means a bottleneck.

My PC is an R9 380X, FX-6350, and 8 GB of DDR3, and the PS4 Pro is still weaker than this, and I have a mid-range PC. I can just wait until after the Xbox Scorpio launches, then get a GPU, and it will be stronger than both consoles.

The GPUs coming out after Vega will destroy these consoles when they launch. I can buy a mid-range GPU in two more years and it would destroy both consoles for only $250.

Consoles will always be behind a mid-range PC GPU.]]></description>
			<content:encoded><![CDATA[<p>No, it can&#8217;t. I have an R9 380X, and it takes an R9 390X or 290X to do 4K/30fps on PC. The PS4 Pro might have the same TFLOPs, but it&#8217;s clocked slower than those video cards. There&#8217;s also the five-year-old Jaguar CPU inside, compared to a newish CPU, which means a bottleneck.</p>
<p>My PC is an R9 380X, FX-6350, and 8 GB of DDR3, and the PS4 Pro is still weaker than this, and I have a mid-range PC. I can just wait until after the Xbox Scorpio launches, then get a GPU, and it will be stronger than both consoles.</p>
<p>The GPUs coming out after Vega will destroy these consoles when they launch. I can buy a mid-range GPU in two more years and it would destroy both consoles for only $250.</p>
<p>Consoles will always be behind a mid-range PC GPU.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: orion		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302679</link>

		<dc:creator><![CDATA[orion]]></dc:creator>
		<pubDate>Sat, 17 Sep 2016 19:40:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-302679</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302242&quot;&gt;James Fitzgerald&lt;/a&gt;.

Ahaha, damn right.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302242">James Fitzgerald</a>.</p>
<p>Ahaha, damn right.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Arjun Krishna Lal		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302578</link>

		<dc:creator><![CDATA[Arjun Krishna Lal]]></dc:creator>
		<pubDate>Fri, 16 Sep 2016 15:13:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-302578</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302576&quot;&gt;d0x360&lt;/a&gt;.

I&#039;d still think, though, that at least with AMD, what constitutes a flagship isn&#039;t as hard and fast as with NV. They could well be selling the 480 on a minimal margin or even as a loss-leader because they badly want to generate sales volume. The 480&#039;s in an awkward position because of this. If it were at 28nm, it&#039;d have been a much larger GPU than normal for a midranger, and similar in size to Hawaii/Grenada. Also, just my two cents, but wasn&#039;t the 390X/290X performance difference mostly due to better thermal management and that 50 MHz clockspeed bump? I haven&#039;t had a 290X, but I hear they throttled horribly. In contrast, the Strix 390X I had stuck very closely to its rated clockspeed. A 10-15 percent performance difference here could simply be because the 290X spends most of its time throttled down to 900-ish MHz. What&#039;s your experience? HardOCP apparently did a test along the lines I&#039;d mentioned: they underclocked the 390X&#039;s core and memory clocks to match a 290X, then compared it to a 290X: http://www.hardocp.com/article/2015/06/18/msi_r9_390x_gaming_8g_video_card_review/9#.V9wKqPB96Uk

There&#039;s a 1 FPS difference, which is close to the 2 percent figure I&#039;d mentioned earlier :P

As for the performance differences across revisions, it would be very interesting to see how that plays out with more DX12 titles coming out. Also, how are you getting Mankind Divided to run at a solid 60? I get horrible stutter every time I head outdoors, at 1080p.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302576">d0x360</a>.</p>
<p>I&#8217;d still think, though, that at least with AMD, what constitutes a flagship isn&#8217;t as hard and fast as with NV. They could well be selling the 480 on a minimal margin or even as a loss-leader because they badly want to generate sales volume. The 480&#8217;s in an awkward position because of this. If it were at 28nm, it&#8217;d have been a much larger GPU than normal for a midranger, and similar in size to Hawaii/Grenada. Also, just my two cents, but wasn&#8217;t the 390X/290X performance difference mostly due to better thermal management and that 50 MHz clockspeed bump? I haven&#8217;t had a 290X, but I hear they throttled horribly. In contrast, the Strix 390X I had stuck very closely to its rated clockspeed. A 10-15 percent performance difference here could simply be because the 290X spends most of its time throttled down to 900-ish MHz. What&#8217;s your experience? HardOCP apparently did a test along the lines I&#8217;d mentioned: they underclocked the 390X&#8217;s core and memory clocks to match a 290X, then compared it to a 290X: <a href="http://www.hardocp.com/article/2015/06/18/msi_r9_390x_gaming_8g_video_card_review/9#.V9wKqPB96Uk" rel="nofollow ugc">http://www.hardocp.com/article/2015/06/18/msi_r9_390x_gaming_8g_video_card_review/9#.V9wKqPB96Uk</a></p>
<p>There&#8217;s a 1 FPS difference, which is close to the 2 percent figure I&#8217;d mentioned earlier 😛 </p>
<p>As for the performance differences across revisions, it would be very interesting to see how that plays out with more DX12 titles coming out. Also, how are you getting Mankind Divided to run at a solid 60? I get horrible stutter every time I head outdoors, at 1080p.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: d0x360		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302576</link>

		<dc:creator><![CDATA[d0x360]]></dc:creator>
		<pubDate>Fri, 16 Sep 2016 14:39:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-302576</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302575&quot;&gt;Arjun Krishna Lal&lt;/a&gt;.

I understand what you are saying, but directly comparing performance between their GCN generations is extremely difficult.

The reason is that AMD doesn&#039;t do what Nvidia does, which is release a card at every part of the spectrum. How many flagship cards has AMD released in the last 4 years?

I&#039;d argue the answer is 2: the 290X and the Fury. Everything else has been more budget-friendly and more of a refined version of previous cards. They have used the newer revisions of GCN to get better performance out of older hardware. The 390 is essentially the 290 with a GCN revision, and that&#039;s all. It got its performance boost there.

Back to comparing generations and why it&#039;s difficult... it&#039;s because, like I said, AMD doesn&#039;t do what I would consider a flagship card every generation. So there isn&#039;t a new 290X or Fury X on a consistent basis that also corresponds to a new generation of GCN. It&#039;s hard to put my thoughts into words here because it&#039;s a complex question.

I think if you changed their sales model and they only (for example) made one card every year, and that card had a new version of GCN, you would be able to see the difference a little more easily, but since there is no like-for-like card, it&#039;s difficult.

Also, remember GCN is also about their focus on improving their API support. You notice that through each revision DX11 performance grows, where before AMD was way behind. They also have great support for DX12 and Vulkan and have had it for a very long time, much longer than Nvidia, which means older cards like the 2xx series will suddenly become relevant again with the new APIs. My 290X went from running Doom at 1080p with almost everything on ultra at 60fps with some slight dips... then Vulkan hit, and I was able to raise AA and tessellation, bring everything in-game to its highest setting, jump to 1440p, and I was seeing over 130fps.

The same goes for Mankind Divided. I was getting a nearly solid 60 at 1080p with everything maxed except textures, because above high it says you need more than 4 GB of VRAM, and setting it to even very high would drop me to 30... that&#039;s a 50% loss in performance under DX11.

Once the DX12 update went live, I was able to again bump the resolution to 1440p, but even better, I was able to set textures to ultra and still get a locked 60 with zero dips (vsync enabled). So GCN does mean something; it&#039;s just hard to quantify, especially across revisions.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302575">Arjun Krishna Lal</a>.</p>
<p>I understand what you are saying, but directly comparing performance between their GCN generations is extremely difficult.</p>
<p>The reason is that AMD doesn&#8217;t do what Nvidia does, which is release a card at every part of the spectrum. How many flagship cards has AMD released in the last 4 years?</p>
<p>I&#8217;d argue the answer is 2: the 290X and the Fury. Everything else has been more budget-friendly and more of a refined version of previous cards. They have used the newer revisions of GCN to get better performance out of older hardware. The 390 is essentially the 290 with a GCN revision, and that&#8217;s all. It got its performance boost there.</p>
<p>Back to comparing generations and why it&#8217;s difficult&#8230; it&#8217;s because, like I said, AMD doesn&#8217;t do what I would consider a flagship card every generation. So there isn&#8217;t a new 290X or Fury X on a consistent basis that also corresponds to a new generation of GCN. It&#8217;s hard to put my thoughts into words here because it&#8217;s a complex question.</p>
<p>I think if you changed their sales model and they only (for example) made one card every year, and that card had a new version of GCN, you would be able to see the difference a little more easily, but since there is no like-for-like card, it&#8217;s difficult.</p>
<p>Also, remember GCN is also about their focus on improving their API support. You notice that through each revision DX11 performance grows, where before AMD was way behind. They also have great support for DX12 and Vulkan and have had it for a very long time, much longer than Nvidia, which means older cards like the 2xx series will suddenly become relevant again with the new APIs. My 290X went from running Doom at 1080p with almost everything on ultra at 60fps with some slight dips&#8230; then Vulkan hit, and I was able to raise AA and tessellation, bring everything in-game to its highest setting, jump to 1440p, and I was seeing over 130fps.</p>
<p>The same goes for Mankind Divided. I was getting a nearly solid 60 at 1080p with everything maxed except textures, because above high it says you need more than 4 GB of VRAM, and setting it to even very high would drop me to 30&#8230; that&#8217;s a 50% loss in performance under DX11.</p>
<p>Once the DX12 update went live, I was able to again bump the resolution to 1440p, but even better, I was able to set textures to ultra and still get a locked 60 with zero dips (vsync enabled). So GCN does mean something; it&#8217;s just hard to quantify, especially across revisions.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Arjun Krishna Lal		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302575</link>

		<dc:creator><![CDATA[Arjun Krishna Lal]]></dc:creator>
		<pubDate>Fri, 16 Sep 2016 14:26:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-302575</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302568&quot;&gt;d0x360&lt;/a&gt;.

Well, yes. It&#039;s what AMD calls the various revisions of the core architecture it&#039;s deployed since 2012 with the 7970, succeeding the VLIW4 used in the 6xxx series. There are multiple revisions of GCN, and these revisions involve incremental improvements to the architecture, including to the underlying logic. What I was bringing into focus here is whether or not there is a tangible performance difference between GCN revisions. I&#039;ve had a 390X and a 480 (and a 380X) to test at various points in time. There is a direct correlation between clockspeed, the number of shader cores, and your framerate at the end of the day (duh). If all other factors are the same and you run two GPUs at the same clockspeed with the same number of shaders (or normalise for the number of shaders with a bit of division), then any difference in performance can be ascribed to gains in architectural efficiency. If I run both my 980 Ti and my old 780 Ti at 1058 MHz, I will get ~50 percent higher framerates with the 980 Ti, despite the fact that both have a near-identical number of shaders and nearly the same memory bandwidth. Why? Because of a substantial change in the underlying core architecture. Applying that same logic here: in shader-bound scenarios, the 480 equals the 390X when you normalise for the number of shader cores and clockspeed. If a Polaris part had 2816 cores and ran at the same clockspeed as the 390X/290X, it would perform almost exactly the same. If a Polaris part had 2048 cores and ran at the same clockspeeds, it would perform almost exactly the same as a 380X. The 380X performs identically to the 280X/7970 in shader-bound scenarios. What does that say? It means yes, you can make an apples-to-apples comparison between a 380X, a 390X, a 480, and whatever else has been in AMD&#039;s product portfolio for the past four and a half years. As for the 290X not being a $200 part: die size correlates to manufacturing cost. Because transistors are obviously smaller at the 14nm process than at 28nm, a die with the same number of cores will cost less, because it&#039;s smaller and therefore cheaper. Hawaii is twice the size of Polaris 10. In terms of economics, higher yields, experience (not having to spend hundreds of millions on designing a new architecture), and economies of scale could bring the marginal cost of one Hawaii GPU down a bit, but it will never be as cheap to produce as Polaris 10. That&#039;s why a 290X will never cost $200 unless it&#039;s old stock no one wants, and why they no longer manufacture Hawaii: it is obsolete from a price/performance perspective. Whether or not a card is a &quot;budget&quot; card or a flagship has nothing at all to do with how it performs in absolute terms. Newer budget parts obviously perform better than older parts in higher price brackets. It&#039;s also entirely up to the company to decide what&#039;s a budget product and what isn&#039;t. A OnePlus 3 costs roughly the same as many midrange or &quot;budget&quot; phones, but it features a premium build and top-of-the-line specs. OnePlus makes that work by using a just-in-time production model and focusing on online sales as opposed to conventional promotions. AMD is making the 480 work at $200 by swallowing slightly smaller margins per unit to reach a bigger market, so it&#039;s certainly fair to compare it to flagships from one or two years ago.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302568">d0x360</a>.</p>
<p>Well, yes. It&#8217;s what AMD calls the various revisions of the core architecture it&#8217;s deployed since 2012 with the 7970, succeeding the VLIW4 used in the 6xxx series. There are multiple revisions of GCN, and these revisions involve incremental improvements to the architecture, including to the underlying logic. What I was bringing into focus here is whether or not there is a tangible performance difference between GCN revisions. I&#8217;ve had a 390X and a 480 (and a 380X) to test at various points in time. There is a direct correlation between clockspeed, the number of shader cores, and your framerate at the end of the day (duh). If all other factors are the same and you run two GPUs at the same clockspeed with the same number of shaders (or normalise for the number of shaders with a bit of division), then any difference in performance can be ascribed to gains in architectural efficiency. If I run both my 980 Ti and my old 780 Ti at 1058 MHz, I will get ~50 percent higher framerates with the 980 Ti, despite the fact that both have a near-identical number of shaders and nearly the same memory bandwidth. Why? Because of a substantial change in the underlying core architecture. Applying that same logic here: in shader-bound scenarios, the 480 equals the 390X when you normalise for the number of shader cores and clockspeed. If a Polaris part had 2816 cores and ran at the same clockspeed as the 390X/290X, it would perform almost exactly the same. If a Polaris part had 2048 cores and ran at the same clockspeeds, it would perform almost exactly the same as a 380X. The 380X performs identically to the 280X/7970 in shader-bound scenarios. What does that say? It means yes, you can make an apples-to-apples comparison between a 380X, a 390X, a 480, and whatever else has been in AMD&#8217;s product portfolio for the past four and a half years. As for the 290X not being a $200 part: die size correlates to manufacturing cost. Because transistors are obviously smaller at the 14nm process than at 28nm, a die with the same number of cores will cost less, because it&#8217;s smaller and therefore cheaper. Hawaii is twice the size of Polaris 10. In terms of economics, higher yields, experience (not having to spend hundreds of millions on designing a new architecture), and economies of scale could bring the marginal cost of one Hawaii GPU down a bit, but it will never be as cheap to produce as Polaris 10. That&#8217;s why a 290X will never cost $200 unless it&#8217;s old stock no one wants, and why they no longer manufacture Hawaii: it is obsolete from a price/performance perspective. Whether or not a card is a &#8220;budget&#8221; card or a flagship has nothing at all to do with how it performs in absolute terms. Newer budget parts obviously perform better than older parts in higher price brackets. It&#8217;s also entirely up to the company to decide what&#8217;s a budget product and what isn&#8217;t. A OnePlus 3 costs roughly the same as many midrange or &#8220;budget&#8221; phones, but it features a premium build and top-of-the-line specs. OnePlus makes that work by using a just-in-time production model and focusing on online sales as opposed to conventional promotions. AMD is making the 480 work at $200 by swallowing slightly smaller margins per unit to reach a bigger market, so it&#8217;s certainly fair to compare it to flagships from one or two years ago.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: d0x360		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302568</link>

		<dc:creator><![CDATA[d0x360]]></dc:creator>
		<pubDate>Fri, 16 Sep 2016 13:06:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-302568</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302525&quot;&gt;Arjun Krishna Lal&lt;/a&gt;.

While I appreciate the effort, you aren&#039;t talking to the right person. The 480 is a budget card; the 390 is not.

A 290X can match the 480 in a lot of games, and the 290X still isn&#039;t $200 today.
GCN is essentially a way to name their fab process and instruction set. That&#039;s it. It&#039;s a marketing buzzword that actually means something.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302525">Arjun Krishna Lal</a>.</p>
<p>While I appreciate the effort, you aren&#8217;t talking to the right person. The 480 is a budget card; the 390 is not.</p>
<p>A 290X can match the 480 in a lot of games, and the 290X still isn&#8217;t $200 today.<br />
GCN is essentially a way to name their fab process and instruction set. That&#8217;s it. It&#8217;s a marketing buzzword that actually means something.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Arjun Krishna Lal		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302527</link>

		<dc:creator><![CDATA[Arjun Krishna Lal]]></dc:creator>
		<pubDate>Fri, 16 Sep 2016 05:45:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-302527</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302154&quot;&gt;Chuck&lt;/a&gt;.

Because I&#039;ve seen temporal reconstruction in action in plenty of other titles, notably Quantum Break. At 1080p, while the temporal upscale is noticeably better than regular 720p, it&#039;s very, very easy to notice that it&#039;s not running at native resolution. This is more or less what the Pro does: the fundamental limit is that if you&#039;re starting with a lower framebuffer (1440p), there&#039;s just no way you can make pixel data appear out of thin air.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302154">Chuck</a>.</p>
<p>Because I&#8217;ve seen temporal reconstruction in action in plenty of other titles, notably Quantum Break. At 1080p, while the temporal upscale is noticeably better than regular 720p, it&#8217;s very, very easy to notice that it&#8217;s not running at native resolution. This is more or less what the Pro does: the fundamental limit is that if you&#8217;re starting with a lower framebuffer (1440p), there&#8217;s just no way you can make pixel data appear out of thin air.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Arjun Krishna Lal		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302526</link>

		<dc:creator><![CDATA[Arjun Krishna Lal]]></dc:creator>
		<pubDate>Fri, 16 Sep 2016 05:42:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-302526</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302396&quot;&gt;Alex Proctor&lt;/a&gt;.

Because the target&#039;s 4K/30 in most cases. If you&#039;re reasonable with your settings (medium-low), that&#039;s something a 390X/RX 480 can manage.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302396">Alex Proctor</a>.</p>
<p>Because the target&#8217;s 4K/30 in most cases. If you&#8217;re reasonable with your settings (medium-low), that&#8217;s something a 390X/RX 480 can manage.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Arjun Krishna Lal		</title>
		<link>https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302525</link>

		<dc:creator><![CDATA[Arjun Krishna Lal]]></dc:creator>
		<pubDate>Fri, 16 Sep 2016 05:38:00 +0000</pubDate>
		<guid isPermaLink="false">http://gamingbolt.com/?p=277169#comment-302525</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302155&quot;&gt;d0x360&lt;/a&gt;.

Uh... AMD claims a roughly 15 percent architectural gain with GCN 4, but if you do the math, you find that this is a, well, hypothetical best case. Most benchmarks show the 480 as being a good deal slower than the 390X, more in line with the 390. Now, even supposing they performed identically, let&#039;s look at the math: in shader-bound scenarios, 2816 GCN cores running at 1050 MHz (in the 390X) perform roughly equal to 2304 GCN cores at 1266 MHz (in the 480). The clockspeed uplift alone (from 1050 to 1266 MHz) is 20 percent. To normalise GCN core performance clock for clock, multiply 2304 x 1.2: at the same clockspeed, 2764 GCN 4 cores would deliver the same shading performance as 2816 GCN cores. Even in a scenario where the RX 480 equals the 390X (which it doesn&#039;t, in most cases), that would translate into a roughly 2 percent increment in GCN performance due to architectural gains. Although the 480 has lower memory bandwidth (and one could claim that greater shading performance from supposedly more efficient GCN cores could offset this), AMD themselves claim that the effective bandwidth gain thanks to color compression is around 30 percent. This would largely negate the 390X&#039;s memory bandwidth advantage, at least at 1080p. As such, the comparison to a GCN 1.2 part stands. You&#039;d mentioned that GCN architectural improvements are noticeable from generation to generation, but here also, I think the benches say otherwise: the R9 380X is a Tonga (GCN 1.2) part with the same number of shaders as the 7970/280X, at more or less the same clockspeed, and slightly lower memory bandwidth. It performs slightly worse than the older 7970 based on GCN 1.0. It&#039;d also be inaccurate to see the Pro as performing in between the 470 and the 480. The 470 is a very slightly cut-down 480, running at slightly lower clocks (the AIB versions are actually above stock 480 clocks) but with 12 percent fewer shader cores; everything else is untouched. It hands in near-equal figures to the 4 GB RX 480. Clockspeed makes a huge difference here: the reference 470 is clocked roughly 26 percent above the PS4 Pro&#039;s 911 MHz. Even factoring in the 470&#039;s shader core deficit, we&#039;re still talking about the 470, not the 480, offering substantially more performance than the PS4 Pro&#039;s GPU. About VR, I&#039;m kinda wondering the same thing: the PS4 Pro is so much better suited to offering high-quality 1080/60 VR experiences than the original PS4. But Sony claims that ALL PS4 Pro content has to also work on the PS4. That means, at least for the foreseeable future, whatever VR content we see on the Pro will be constrained by having to support the original PS4.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://gamingbolt.com/tech-analysis-can-the-ps4-pro-deliver-native-4k-in-modern-games#comment-302155">d0x360</a>.</p>
<p>Uh&#8230; AMD claims a roughly 15 percent architectural gain with GCN 4, but if you do the math, you find that this is a, well, hypothetical best case. Most benchmarks show the 480 as being a good deal slower than the 390X, more in line with the 390. Now, even supposing they performed identically, let&#8217;s look at the math: in shader-bound scenarios, 2816 GCN cores running at 1050 MHz (in the 390X) perform roughly equal to 2304 GCN cores at 1266 MHz (in the 480). The clockspeed uplift alone (from 1050 to 1266 MHz) is 20 percent. To normalise GCN core performance clock for clock, multiply 2304 x 1.2: at the same clockspeed, 2764 GCN 4 cores would deliver the same shading performance as 2816 GCN cores. Even in a scenario where the RX 480 equals the 390X (which it doesn&#8217;t, in most cases), that would translate into a roughly 2 percent increment in GCN performance due to architectural gains. Although the 480 has lower memory bandwidth (and one could claim that greater shading performance from supposedly more efficient GCN cores could offset this), AMD themselves claim that the effective bandwidth gain thanks to color compression is around 30 percent. This would largely negate the 390X&#8217;s memory bandwidth advantage, at least at 1080p. As such, the comparison to a GCN 1.2 part stands. You&#8217;d mentioned that GCN architectural improvements are noticeable from generation to generation, but here also, I think the benches say otherwise: the R9 380X is a Tonga (GCN 1.2) part with the same number of shaders as the 7970/280X, at more or less the same clockspeed, and slightly lower memory bandwidth. It performs slightly worse than the older 7970 based on GCN 1.0. It&#8217;d also be inaccurate to see the Pro as performing in between the 470 and the 480. The 470 is a very slightly cut-down 480, running at slightly lower clocks (the AIB versions are actually above stock 480 clocks) but with 12 percent fewer shader cores; everything else is untouched. It hands in near-equal figures to the 4 GB RX 480. Clockspeed makes a huge difference here: the reference 470 is clocked roughly 26 percent above the PS4 Pro&#8217;s 911 MHz. Even factoring in the 470&#8217;s shader core deficit, we&#8217;re still talking about the 470, not the 480, offering substantially more performance than the PS4 Pro&#8217;s GPU. About VR, I&#8217;m kinda wondering the same thing: the PS4 Pro is so much better suited to offering high-quality 1080/60 VR experiences than the original PS4. But Sony claims that ALL PS4 Pro content has to also work on the PS4. That means, at least for the foreseeable future, whatever VR content we see on the Pro will be constrained by having to support the original PS4.</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
