How next-gen consoles can be faster than your high-end PC's GPU

Spoiler alert: Technology improves over time

That's going to be a running theme throughout this article. Technology is a moving target. It's as simple as that. The PS5 and XSX are built on a more advanced process node than Nvidia's current high-end GPUs. Great, simple as that. Nvidia's on last-gen tech, the consoles are on next-gen tech. Mystery solved, we can all go home. 

If only it were that easy. That explanation doesn't seem to satisfy people, especially the PC enthusiasts who just laugh at anyone claiming a console could ever hold a candle to their expensive PC. 

Disclaimer: Not just a console fanboy

Now I know where this is going to go. Many readers are likely on the verge of launching into a tangent about the many reasons why PC is better than console, and how I, as the writer of this article, am just a console fanboy defending my precious consoles. I am not saying consoles are the better choice. This is about the raw performance and specs of the consoles being better than those of the majority of gaming PCs. I'm not here to discuss the objectively superior peripheral support, modding, ability to tinker and customize, productivity apps, content creation, browser implementation, KB+M interface, royalty-free software, etc., all available on PC and not on consoles, and that's just scratching the surface. This is not a discussion about what to buy. It is merely a hardware analysis. 

Let me be very clear about this: I'm primarily a PC gamer. I do most of my gaming on PC and have invested a fair amount into it. My setup is decent, though not exceptional: 
  • Ryzen 7 2700X
  • GTX 1080ti
  • 32GB DDR4-3000
  • 1TB WD Black NVMe SSD, TLC, 3400MB/s read. 
  • ASUS ROG Swift PG279Q 165Hz g-sync IPS
Let me make another thing very clear: my PC is above average. It is also worse in almost every hardware metric than both the PS5 and the XSX. 


I can't credit this meme as I've no idea where it came from, but it makes a valid point that we're going to address here. To accept the performance coming from these next-gen consoles, two critical things need to happen: we need to understand that our current PC hardware is overpriced, and we need to understand that the consoles are as powerful as they are mainly because they benefit from a big leap that's about to happen in graphics tech, a leap that will not only benefit PC as well, but will actually arrive on PC before it comes to console. 

Now let's dive into it

Forget about the price of the 2080ti (and 2080)

One of the most common arguments I see is that there's no way a console can compare with the performance of a 2080ti because the 2080ti is $1200. It's $1200, the consoles will be MAYBE as high as $600, and the consoles have to include the rest of the system, while the 2080ti is just the graphics card. There's no way, the argument goes, that this is possible. 

What if I had told you in 2015 that a console was going to come out within a year that would perform nearly as well as the GTX Titan, released in 2013 for $999? Yet that's exactly what happened. The PS4 Pro is a 4.2 TFLOP console that performs in many cases better than the original Titan. This is because technology improves over time, and that Titan was overpriced for its time. In 2014 Nvidia released the GTX 970 with nearly Titan levels of performance for $329, and then in 2016 the RX 480 and GTX 1060 came out, in that same original-Titan performance tier, for around $250. After the price of that performance had dropped so dramatically, the PS4 Pro launched with GPU performance in the same class. Had someone told you two years after the $999 Titan launched that a console was coming with similar performance, you'd have been forgiven for thinking they were exaggerating. 

Now, fast forward to the 2080ti. I'm not saying that the consoles will be faster than the 2080ti. They almost certainly will not be. However, it is important to remember how massively overpriced the 20 series in general is. The 1080ti came out in 2017, over three years ago; it will be about 3.5 years old as of the release of the PS5. The 1080ti launched at a price of $699. 

When the 2070, 2080, and 2080ti launched, Nvidia made the choice not to improve cost per frame. There are two main reasons for this, and they go hand in hand. The first was lack of competition from AMD, who did not have a viable product faster than the 1080 and had none apparently on the horizon, which meant Nvidia had no incentive to reduce prices. The second is that Nvidia felt justified in keeping prices high because of the inclusion of ray tracing cores and Tensor cores, which they heavily advertised as game changers. In hindsight, we know ray tracing on these cards was almost always best left off even in the few supported titles, and DLSS made visuals worse until DLSS 2.0, which is still very limited in support. At the time, however, Nvidia played up their usefulness and effectively conned gamers into accepting no reduction in price for ultimately the same performance. 

When you compare the price of the 2080 to the price of the XSX and PS5, you're comparing against a price-per-performance metric that is three years old and out of date, and we're finally about to see it move and improve. 

Let's talk about process nodes. 

As time moves on, any given process node gets improved, and new, better ones are developed. This means processors with higher density: the same number of transistors on a smaller die, more transistors on the same die, or a combination of the two. Different companies offer different process nodes and even market them in different ways, with similar nodes carrying different nanometer numbers in their names. Fortunately, all the modern GPUs in question (Nvidia's GPUs, Radeon's 7nm GPUs, and the next-gen console chips) are manufactured by the same company: TSMC. This makes it a lot easier to compare process nodes, as you can use TSMC's own metrics. 

The last-gen 16/12nm process is very cheap

When Nvidia released the 20 series, the 7nm process was new and expensive. It was ready for prime time: the Apple A12 chip in the iPhone XS, built on TSMC's 7nm process, was released 8 days before the 20 series. Nvidia most likely passed it over because of the higher cost. The 20 series was on 12nm, which was nothing more than a minor variant of the 16nm process the 10 series was built on. The 20 series dies were notably larger than those of the 10 series, since they packed in more transistors without a significant density improvement, but the 16/12nm node had become much cheaper in the years it had been in production. It's possible and even likely that the 754mm^2 TU102 in the 2080ti, at only 60% larger, was no more expensive to produce than the 471mm^2 GP102 in the 1080ti. There's no reason to think that Nvidia could not be selling the 2080ti for $699 or less right now and still be turning a massive profit. Just in case I hadn't driven home the point: Nvidia is making bank on the massively overpriced 2080ti. 

Next-gen 7nm is also decreasing in cost and increasing density. 

7nm has been out for a couple of years now. I call it next-gen, but it's really very much current-gen, as AMD has been using it in Vega, Navi, and Ryzen products for over a year now. Let's have a look at how many more transistors can fit into the same space on 7nm. 

First, we have TSMC's marketing material. They say that 10nm is about a 2x increase in density compared to 16nm, and 7nm is then a 1.6x increase in density over 10nm, making 7nm roughly a 3.2x increase in density over 16nm. For 12nm to 7nm specifically, we can compare the transistor counts of the 815mm^2 V100 and the 826mm^2 A100, since they're similar in size: the V100 has 21.1 billion transistors, while the A100 has 54.2 billion. Yep, that's about a 2.53x improvement in density over the 12nm node currently used in the 2080 Super and 2080ti. It seems Moore's Law is not yet dead after all (regardless of what Tom may have to say about it haha). 
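If you want to check that density arithmetic yourself, here's a quick sketch using the published die sizes and transistor counts quoted above (treat the output as a rough ratio, not a precise process metric):

# Rough density comparison between TSMC 12nm (V100) and 7nm (A100),
# using the published die sizes and transistor counts quoted above.
v100_transistors = 21.1e9   # GV100, TSMC 12nm
v100_area_mm2 = 815
a100_transistors = 54.2e9   # GA100, TSMC 7nm
a100_area_mm2 = 826

v100_density = v100_transistors / v100_area_mm2   # transistors per mm^2
a100_density = a100_transistors / a100_area_mm2

print(f"12nm density: {v100_density / 1e6:.1f}M transistors/mm^2")  # ~25.9M
print(f" 7nm density: {a100_density / 1e6:.1f}M transistors/mm^2")  # ~65.6M
print(f"improvement:  {a100_density / v100_density:.2f}x")          # ~2.53x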

Now we have to remember... that 7nm node, which is 2.5x denser than what Nvidia is currently using, launched before Nvidia's current high-end GPUs. This only reinforces the fact that Nvidia's GPUs are on an old, tired node. As nodes mature, they get cheaper to manufacture and they perform better; it's common for a later sample of a particular processor to overclock better than an early one, as the process matures and binning improves. While that 7nm node was too expensive two years ago, it has come down in price quite a lot. That 7nm process is being used in the consoles, in Nvidia's next-generation Ampere, and in Radeon's next-gen RDNA2 cards, desktop GPUs using the same architecture found in the consoles. 

Let's talk TFLOPS and die sizes

I want to first point out that TFLOPS are not the ultimate indicator of performance. Here are a few examples: the Radeon Vega 56 had 10.5 TFLOPS while being very similar in performance to a 1070, which had only 7.8 TFLOPS; the Vega architecture was far more tuned for compute than gaming. The 2080 only has about 10 TFLOPS despite performing very comparably (slightly better, actually) to a 1080ti with 11.3 TFLOPS. TFLOPS is a simple calculation: the number of unified shader cores, times two floating-point operations per clock (a fused multiply-add), times the clock speed. It does not account for architecture, memory, or any other metric. It does, however, tend to give a rough general view of performance. 
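As a quick sanity check on that formula, here's the back-of-the-envelope math for a few of the cards mentioned in this post. The clock speeds are approximate boost clocks I'm plugging in for illustration, so treat the results as ballpark figures:

def peak_tflops(shader_cores: int, clock_ghz: float) -> float:
    # Each shader core can do 2 FP32 operations per clock (a fused multiply-add),
    # so peak TFLOPS = cores * 2 * clock (GHz) / 1000.
    return shader_cores * 2 * clock_ghz / 1000

# Approximate boost clocks; real cards vary, so these are ballpark numbers.
print(f"GTX 1080ti: {peak_tflops(3584, 1.58):.1f} TFLOPS")   # ~11.3
print(f"RTX 2080:   {peak_tflops(2944, 1.71):.1f} TFLOPS")   # ~10.1
print(f"RX 5700XT:  {peak_tflops(2560, 1.905):.2f} TFLOPS")  # ~9.75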

It's also important to note that while Radeon lagged massively in performance per TFLOP in the Vega era (a metric that doesn't depend entirely on architecture, but also varies with clock speed, RAM, and other factors), they've tightened it up drastically with Navi. The 2070 Super at 9.0 TFLOPS and the RX 5700XT at 9.75 TFLOPS have very similar performance. It's unlikely that we'll see a reduction in performance per TFLOP in Radeon's next-generation RDNA2 architecture present in the consoles, so the worst case, based on those numbers, is that the PS5 performs only a bit better than a 2070 Super and the XSX performs similarly to a 2080 Super. Best case, if performance per TFLOP improves further, the XSX could end up about midway between the 2080S and 2080ti, and the PS5 could match the 2080S. 
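To make that worst case concrete, here's a naive extrapolation that simply scales a 5700XT / 2070 Super by the TFLOPS ratio. The console numbers I'm plugging in are the announced peak figures (roughly 10.3 TFLOPS for the PS5 and 12.2 for the XSX), and the whole premise is that RDNA2 keeps at least RDNA's performance per TFLOP:

# Naive worst-case extrapolation: assume RDNA2 delivers at least the same
# performance per TFLOP as RDNA, then scale by the TFLOPS ratio.
rx_5700xt_tflops = 9.75                  # roughly 2070 Super performance
consoles = {"PS5": 10.3, "XSX": 12.2}    # announced peak TFLOPS (approximate)

for name, tflops in consoles.items():
    relative = tflops / rx_5700xt_tflops
    print(f"{name}: ~{relative:.2f}x a 5700XT / 2070 Super")
# PS5: ~1.06x (a bit better than a 2070 Super)
# XSX: ~1.25x (roughly 2080 Super territory)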

Looking at die sizes, we get similar reinforcement that the consoles should be in this class of performance. The Radeon VII performed about on par with a 1080ti/2080 at 331mm^2, and the 5700XT, at about the same performance as the 2070 Super, is 251mm^2. The XSX die, which has a more advanced architecture (RDNA2 vs. the RDNA of the 5700XT and the Vega of the VII), is about 360mm^2, with maybe ~80mm^2 used by the CPU, so we're looking at around a 280mm^2 GPU. 
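For reference, that die-size comparison boils down to the following. The ~80mm^2 CPU figure is my rough guess at the Zen 2 portion of the XSX SoC, not an official number:

# Die sizes quoted in the paragraph above, all on TSMC 7nm.
dies_mm2 = {
    "Radeon VII (Vega 20)": 331,    # ~1080ti/2080 class
    "RX 5700XT (Navi 10)": 251,     # ~2070 Super class
    "XSX SoC (RDNA2 + CPU)": 360,
}
cpu_estimate_mm2 = 80   # rough guess for the Zen 2 CPU portion of the SoC
xsx_gpu_mm2 = dies_mm2["XSX SoC (RDNA2 + CPU)"] - cpu_estimate_mm2
print(f"Estimated XSX GPU area: ~{xsx_gpu_mm2} mm^2")   # ~280 mm^2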

This performance is coming to PC first

I hear so many people comparing the next-gen consoles to current-gen PC hardware. The main issue with this (other than all the ones I've already highlighted) is one of perspective. The consoles are big news right now, but the reality is they're still 5 months away. Right NOW, we don't have that new GPU tech yet, so it's natural to compare the consoles to what we know. Right NOW it looks like the consoles are going to match or beat a $700 GPU, and maybe even hold a candle to that $1200 GPU. But in a few months, after the next generation of GPUs comes to PC and absolutely thrashes what we have now, we could be in a situation where today's $700 GPU is beaten by a $350 GPU. If a console then launches at $599 and is being compared to a $350 GPU, it's going to seem a LOT more reasonable. Note: I don't know what the prices of the coming GPUs will be. I hope they're reasonable, as they certainly 'can' be if Nvidia and AMD decide to be. 

Conclusion + TL;DR

Comparing the PS5 and XSX to Turing GPUs is as silly as comparing the PS4 Pro to Kepler (GTX 700 series), or the PS4 to Fermi (GTX 400 and 500 series). This is not a new story. Technology improves over time, and consoles tend to be competitive with or better than the PC GPUs that are a couple of years old, but not better than the GPUs that come out the same year as the consoles. Yes, the next-gen consoles are better than the average gaming PC right now, but by the time the consoles are actually released, the next massive wave of GPUs will have already hit the PC market, and the consoles' performance claims will make a LOT more sense. 

Thank you for reading!

As always, come visit me in Discordland! I've got a flourishing little community that's been rather active so far, with a lot of great tech discussions, and I'd love more people around to tell me why I'm wrong. 

Discord: https://discord.gg/CHfha8V
Patreon: https://www.patreon.com/MeyerTechRants


PS: This blog post is dedicated to a member of my Discord community; a conversation with them prompted me to write this just so I could better explain my position. Thanks, Spudarion! 
