Leakers revise RTX 5090 and RTX 5080 power draw to 575W and 360W respectively — Up to 27% higher than last-generation RTX 40 GPUs
Just in time to keep you cozy for winter.
Leaked power specifications for Nvidia's RTX 5090 and RTX 5080 GPUs allege up to a 27% increase in power draw compared to the last generation, though still less than initially expected, courtesy of kopite7kimi and hongxing2020 on X. We aren't sure whether these numbers correspond to TGP or TDP. However, they still offer a glimpse of what to expect from Nvidia's next-gen Blackwell GPUs, which are expected to be announced by Nvidia founder and CEO Jensen Huang during his January 6 keynote.
A few months ago, preliminary details suggested that the RTX 5090 and RTX 5080 would consume upwards of 600W and 400W of power, respectively. It seems the power ratings have been toned down, at least to some extent. Still, these figures represent a notable increase over both the last generation (Ada Lovelace) and the one before it (Ampere).
On a generation-to-generation basis, the RTX 5090 reportedly draws 27% more power than its predecessor, the RTX 4090. This delta grows to 64% if we widen the comparison to Ampere's RTX 3090. Likewise, the RTX 5080 sees a 12.5% jump in power consumption over both the RTX 4080 and the RTX 3080.
| GPU | TDP |
|---|---|
| RTX 5090 (Rumored) | 575W |
| RTX 5080 (Rumored) | 360W |
| RTX 4090 | 450W |
| RTX 4080 | 320W |
| RTX 3090 | 350W |
| RTX 3080 | 320W |
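The generational deltas quoted above follow directly from the table. A quick illustrative sketch, using the rumored and official ratings (note that 575W over 450W is strictly a 27.8% increase, which the leak rounds to 27%):

```python
# Rumored and official board power ratings (watts) from the table above.
tdp = {
    "RTX 5090": 575, "RTX 5080": 360,
    "RTX 4090": 450, "RTX 4080": 320,
    "RTX 3090": 350, "RTX 3080": 320,
}

def pct_increase(new: str, old: str) -> float:
    """Percentage increase of the new card's rating over the old card's."""
    return (tdp[new] / tdp[old] - 1) * 100

print(f"5090 vs 4090: {pct_increase('RTX 5090', 'RTX 4090'):.1f}%")  # 27.8%
print(f"5090 vs 3090: {pct_increase('RTX 5090', 'RTX 3090'):.1f}%")  # 64.3%
print(f"5080 vs 4080: {pct_increase('RTX 5080', 'RTX 4080'):.1f}%")  # 12.5%
```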
As it stands, the RTX 5090 is expected to deploy Nvidia's flagship GB202-300-A1 die with 21,760 CUDA cores (170 SMs) and 32GB of GDDR7 memory across a 512-bit interface. Its sibling, the RTX 5080, is rumored to offer 10,752 CUDA cores (84 SMs) on the GB203-400-A1 chip, paired with 16GB of GDDR7 memory running at 30 Gbps on a 256-bit memory bus.
Blackwell for data centers is fabricated on TSMC's custom 5nm-class 4NP node, which is said to be an enhanced version of the 4N process used for Ada Lovelace rather than a full-fledged node jump. This likely explains why Nvidia is pushing power limits to extract every last drop of performance. Still, 4NP reportedly delivers approximately 30% higher density than 4N, but we'll leave the exact transistor density details to Nvidia.
Blackwell is rumored to debut with the RTX 5090, RTX 5080, and RTX 5070 series, according to a leak from Zotac. Pricing and performance will most likely be detailed by Jensen Huang at Nvidia's forthcoming keynote in just a few days' time.
Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.
spongiemaster: These are peak power usage ratings. While they need to be accounted for, they don't tell us how much power the cards will typically draw. The 4090 was rated 100W higher than the 3090, yet could use less power when gaming.
https://tpucdn.com/review/nvidia-geforce-rtx-4090-founders-edition/images/power-gaming.png
Granted, there's no significant node improvement like there was with Ada. Reviews aren't far off.
pclaughton: I kept waiting for the industry to hit a correction and focus on efficiency for a while, but sadly, consumers are voting with their wallets and don't seem to care.
Gururu:
> pclaughton said: I kept waiting for the industry to hit a correction and focus on efficiency for a while, but sadly, consumers are voting with their wallets and don't seem to care.

It's a lot like cars in the 70s. The biggest, baddest, fastest cars were the least efficient and greatest waste generators. But still, you could buy a Datsun to meet your needs.
redgarl: If the power draw is so high, it's because Nvidia is having trouble meeting its performance targets.
Johannesbr:
> pclaughton said: I kept waiting for the industry to hit a correction and focus on efficiency for a while, but sadly, consumers are voting with their wallets and don't seem to care.

We should see the top-performing cards performing at the same level, or no more than 3-5 percent better than the previous generation, while using much less power. The chase for ever-increasing power consumption, when technology is giving us the chance to slow things down, is disappointing.
das_stig: With these power numbers, you have to start thinking NV is hitting a performance wall like Intel did in the old days: more power for little true real-world gain.

I think AMD is doing the right thing: concentrate on the low/mid tiers, AI, and power efficiency, then come back in a couple of years and surprise NV.
terroralpha:
> pclaughton said: I kept waiting for the industry to hit a correction and focus on efficiency for a while, but sadly, consumers are voting with their wallets and don't seem to care.

this is a bunch of horse s**t. you don't seem to know what the word "efficiency" means, or you're just a little slow.

GPUs are getting more efficient every launch. GPUs are able to perform more work for every watt consumed compared to the models they replace. The 4090 uses less power on average than the 3090 but runs OVER 50% faster. these efficiency gains are absolutely mind-blowing. it's exactly why nvidia can't satisfy demand and their stock has been skyrocketing.

at the same time, the top-end GPUs are bigger leaps than before. if you don't want a 575W GPU, then don't spend $2000+ on the biggest GPU money can buy.

https://tpucdn.com/review/nvidia-geforce-rtx-4090-founders-edition/images/power-gaming.png
https://tpucdn.com/review/nvidia-geforce-rtx-4090-founders-edition/images/relative-performance_3840-2160.png
jp7189:
> Johannesbr said: We should see the top performing cards performing at the same level or not more than 3-5 percent better than the previous generation but using much less power.

that's likely true on a direct core-to-core comparison, but with more cores, likely higher clock speeds, and gobs more memory bandwidth, it'll be hard to compare directly.

I expected GDDR7 + a 512-bit bus to take a big bite out of the power budget, but now I'm seeing rumors of GDDR7 in laptops, so maybe that stuff is more efficient than I think.

> das_stig said: With these power numbers you have to start thinking NV hitting a performance wall like Intel did in the old days, more power for little true real world gains.

Nah. No power wall in sight. A fully enabled AD102 (RTX 6000) at 300W is a very good performer. Contrast that with 1200W data-center GPU packages, and it's plain there's a very wide range of power targets available for whatever market they want to aim at.
JTWrenn: The numbers mean nothing until we see performance per watt. If they pushed power 20% higher on the top end but the cards perform 30% faster across the board, then that's an efficiency gain. Not a big one, but that is how efficiency works: it's not about less power, it's about performance per watt.

I think the big push will be DLSS 4, and comparing DLSS performance per watt of power draw. I think there will be a large jump there compared to non-DLSS performance, and long term that's where Nvidia's largest gains will go. With the first DLSS-based Nvidia console, the Switch 2, coming shortly, I think DLSS will see even wider adoption, and we will start to see it become the performance leader.

AI will slowly overshadow all of it and make performance comparisons that much harder.

To me, it's more a price-per-performance issue, as I don't think they will control that, but we can hope they go the Super route instead of the 4080 route.
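The performance-per-watt point above can be made concrete with a small sketch. The numbers here are the hypothetical ones from the comment (20% more power, 30% more performance), not benchmarks:

```python
def efficiency_gain(perf_gain_pct: float, power_gain_pct: float) -> float:
    """Relative change in performance per watt, given percentage changes
    in performance and in power draw versus the previous card."""
    perf_ratio = 1 + perf_gain_pct / 100
    power_ratio = 1 + power_gain_pct / 100
    return (perf_ratio / power_ratio - 1) * 100

# 30% faster at 20% more power is still a net perf/W improvement:
print(f"{efficiency_gain(30, 20):.1f}%")  # 8.3%
```

The point being that efficiency can improve even while absolute power draw rises, as long as performance rises faster.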