
My Recent GPU Experience (Gigabyte Nvidia RTX 4070 Ti 12GB)


Gnits
Senior Member
Joined: Mar 21, 2015
Messages: 2,359
Location: Dublin, Ireland
Lightroom Experience: Power User
Lightroom Version: Classic
Lightroom Version Number: Classic 12.2.1
Operating System: Windows 10
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

"This is a relatively long discussion. It is worth reading the discussion in sequence, as posted."

To assist people returning to this discussion, the following are direct links to frequently used posts.

Post #20 contains a link to a raw file to use, if you wish to compare the performance of your rig to others.
https://www.lightroomqueen.com/comm...e-nvidia-rtx-4070-ti-12gb.47572/#post-1315509

Post #30 contains a summary of Denoise performance stats for a range of GPUs and CPUs from readers of this thread.
https://www.lightroomqueen.com/comm...ia-rtx-4070-ti-12gb.47572/page-2#post-1315545

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I am posting this on the Lightroom Classic forum, as I think it is most relevant to Classic users.

With GPU prices softening and the release of the 4xxx range, I decided to purchase the Gigabyte Nvidia RTX 4070 Ti 12GB. I was aware before purchase that I needed a minimum 750 W PSU. I already had a recently purchased Corsair RM850X and plenty of spare Corsair-supplied compatible internal power cables.

The GPU was tricky to install because of its size and the need to fit an awkward support-bracket system. The GPU was supplied with a Y cable: the single end plugged into the GPU and the two other ends plugged into separate power cables (not supplied by Gigabyte), which were then plugged into the PSU. I used the Corsair cables supplied with my PSU.

I triple-checked that I had the polarity of all the cable connections correct: square pins fitted into square holes, rounded pins fitted into rounded holes, etc. When I powered up my PC... nothing happened. Nothing. No lights, no fans starting up, no status numbers on the internal motherboard display. Totally dead, and no clue as to what the problem might be. I feared the worst.

I completely disassembled my PC and removed the Corsair power supply. I then followed the Corsair instructions on how to test the PSU. All tests passed and all (28, I think) pin-outs had exactly the correct voltages.

I rebuilt my rig, this time reinstalling my old GPU. With much relief, it booted perfectly. I had been worried that many of my main components, such as the motherboard, processor and PSU, had been toasted.

I sent a query to Gigabyte requesting advice and troubleshooting steps, detailing the steps I had taken to test my PSU and my overall rig.

At the same time, I started researching any known issues with this range of GPUs. I was horrified to discover many incidents where GPUs, cables and other components had melted following installation of 4xxx GPUs. My heart sank. Many of the horror stories pointed to the supplied Y cable as the source. GPU makers pointed towards incorrectly seated motherboards, GPUs and/or cables. I can confirm that I had triple-checked my connections and listened in each case for the 'click' as firm connections were made.

I ordered a Corsair 600W PCIe 5.0 12VHPWR Type-4 PSU power cable directly from Corsair. It was shipped from France and would take a week.

Six days passed and two events occurred on the same day: 1) my Corsair 12VHPWR cable arrived, and 2) I received a response from Gigabyte. Essentially, Gigabyte told me to check my GPU (which I had already told Gigabyte I had done) or contact the supplier of the GPU (Amazon).

So, I installed the Corsair 12VHPWR cable I had just purchased, connecting the GPU to the power supply. There was no need for me to triple-check all the connections, as the connectors would only connect the correct way. I gingerly turned on power to the PSU and pressed the start button on the PC... the fan on the PSU slowly started to turn, lights appeared within the GPU, numbers started to appear on the motherboard LED display and the boot sequence started.

My PC…came to life… with the sound of the 'Hallelujah Chorus' reverberating in the background.



My Summary.

1. The response from Gigabyte was totally unacceptable.
2. In my opinion, the Y cable supplied by Gigabyte was inappropriate for the intended purpose.
3. Gigabyte's response was seriously underwhelming and totally ignored the elephant in the room, i.e. the Y power cable.

My advice to anyone installing a modern, fairly high-powered GPU is to triple-check the connections needed to support the GPU and procure a single, fit-for-purpose cable assembly which connects the GPU directly to the PSU, without any third-party connectors. Make sure you have a modern PSU of sufficient wattage, and make sure you have sufficient spare slots on your PSU to cater for twin connections if your GPU requires them.
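For anyone sizing a PSU for a card like this, a rough back-of-the-envelope check is below. It is only an illustrative sketch in Python: the ~285 W figure is roughly the rated board power of the 4070 Ti, while the other component numbers and the transient margin are assumptions, so always defer to the GPU and PSU makers' own recommendations.

# Rough PSU headroom estimate. All figures are assumptions for illustration only;
# defer to the GPU maker's official PSU recommendation for your exact card and CPU.
component_watts = {
    "gpu_rtx_4070_ti": 285,        # approx. rated total board power
    "cpu_under_load": 250,         # assumed high-end desktop CPU at full boost
    "motherboard_ram_fans": 75,    # assumed
    "drives_and_peripherals": 40,  # assumed
}
transient_margin = 1.3  # modern GPUs draw brief power spikes above their rated figure

steady_state = sum(component_watts.values())              # 650 W with these numbers
recommended_psu = round(steady_state * transient_margin)  # ~845 W

print(f"Steady-state estimate: {steady_state} W")
print(f"Suggested PSU rating:  {recommended_psu} W or more")

With these assumed numbers the sketch lands at roughly 850 W, which is consistent with pairing this card with an RM850X rather than the bare 750 W minimum.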

PS. I have noticed a very pleasing improvement in Lr responsiveness... very timely, as the third event which occurred on the same day was the release of a new Lr version, with the new GPU-focused noise reduction and masking features. Serendipity? 'Sweet'.

I am posting this story in the hope that others will not have to experience the pain and anguish I did, and to help avoid a potential catastrophic meltdown of lots of very expensive components. I know every config is different, but the bottom line is that modern GPUs are power hungry and need cables fit for purpose.

Here is the cable I purchased directly from Corsair. In my opinion, this should be supplied with GPUs which need dual PSU power slots.

[Image: CorsairPsuGPUCable.jpg]
 
Absolutely thrilled with my recent upgrade to the Gigabyte Nvidia RTX 4070 Ti 12GB! The performance is a game-changer, delivering stunning graphics and seamless multitasking. Kudos to Gigabyte and Nvidia for pushing the boundaries of gaming and creative possibilities. A true powerhouse for the modern digital enthusiast.
What did you upgrade from?
 
I was able to test the provided image with a 6800xt and 10850k. Time was between 23 and 25 seconds.

Hope this helps someone because I could barely find any direct comparisons between AMD and NVIDIA. Still not a perfect comparison since I'd rather masking performance be benchmarked... But that's understandably harder to get good numbers with.
 
Hope this helps someone because I could barely find any direct comparisons between AMD and NVIDIA.
There are plenty of websites with comparisons of AMD and Nvidia GPU cards, but they are for GAMING, not Adobe creative programs. Not very useful for us.
 
There are plenty of websites with comparisons of AMD and Nvidia GPU cards, but they are for GAMING, not Adobe creative programs. Not very useful for us.
This might help. One thing Puget often states is that the lead constantly flip-flops between the big players. So, I guess the message is to pick what you are comfortable with. There is one hitch for those using OMS gear: should you wish to use the AI Noise Reduction add-on to OM Workspace, it will work only with an Nvidia GPU.
https://www.pugetsystems.com/labs/a...c-amd-threadripper-7000-vs-intel-xeon-w-3400/
 
This might help. One thing Puget often states is that the lead constantly flip-flops between the big players. So, I guess the message is to pick what you are comfortable with. There is one hitch for those using OMS gear: should you wish to use the AI Noise Reduction add-on to OM Workspace, it will work only with an Nvidia GPU.
https://www.pugetsystems.com/labs/a...c-amd-threadripper-7000-vs-intel-xeon-w-3400/
I was hoping to rely more on Puget but their methodology did not align with our workflow and was misleading for us. At one point we tried a 3060 (non-Ti) and while some things improved, the general responsiveness of Lightroom was consistently lower. We tried using studio drivers, a complete wipe with DDU and reinstall, and even overclocking a bit (I wouldn't recommend it for a typical professional workflow but wanted to see if, in theory, we could have any improvement). The only thing that didn't make using Lightroom worse was using the GPU for display but not image processing, though that's a waste of money when you have integrated graphics that work fine.

All that said, I don't want this to seem like I think Puget is doing a bad job. The fact of the matter is that I could dig into their detailed results and methodology to understand why it doesn't align with one specific workflow. The only benchmark I could find that directly compared modern graphics cards in Lightroom/Photoshop used an approach from Procyon (?) and had the Intel A770 at the top... that card was a complete flop and I have no idea why because their benchmark only has a general overview. Luckily we live by a Microcenter which has an excellent return policy, and based on the results from this thread we're looking to pick up a 4070 over a 7800XT.

My graphics card buying hypothesis for Lightroom is this: Excluding AI Denoise, low-power graphics cards do not offer enough over integrated graphics to be worth their price. A low-powered graphics card may introduce enough overhead that it makes Lightroom/Photoshop perform worse. (We are working with 21MP photos; this may not be true for larger photos, where a lower-powered GPU may still be useful.) At a given price point, Nvidia and AMD both have competitive offerings, but based on these AI Denoise results, when using "AI" features Lightroom can make better use of Nvidia's specialized cores, giving them a slight edge in price to performance.

I wish I had more data to prove or disprove the above, but based on my experience and the limited data I could collect (mostly from this thread so thank you so much!) I think it's the best I have.
 
This is a fascinating post, but it seems that you are assuming more knowledge or understanding than most of us have. My questions below are not "challenging" you but trying to learn from your approach.

I was hoping to rely more on Puget but their methodology did not align with our workflow

Can you elaborate. All of us have (slightly) different workflows, which can change over time as we further develop our Lightroom (and Photoshop) skills.
and was misleading for us. At one point we tried a 3060 (non-Ti) and while some things improved, the general responsiveness of Lightroom was consistently lower. We tried using studio drivers, a complete wipe with DDU and reinstall, and even overclocking a bit (I wouldn't recommend it for a typical professional workflow but wanted to see if, in theory, we could have any improvement). The only thing that didn't make using Lightroom worse was using the GPU for display but not image processing, though that's a waste of money when you have integrated graphics that work fine.
It sounds like you are also an experienced Windows user and PC builder. Is that so?

All that said, I don't want this to seem like I think Puget is doing a bad job. The fact of the matter is that I could dig into their detailed results and methodology to understand why it doesn't align with one specific workflow.
Can you explain how any of us could "dig into" their detailed results and recalculate (?) the conclusions based on our own workflows? For me, my workflow is changing with the release of new AI-based features.

The only benchmark I could find that directly compared modern graphics cards in Lightroom/Photoshop used an approach from Procyon (?

Who? Link?
) and had the Intel A770 at the top... that card was a complete flop and I have no idea why because their benchmark only has a general overview. Luckily we live by a Microcenter which has an excellent return policy, and based on the results from this thread we're looking to pick up a 4070 over a 7800XT.

How did you conclude that you preferred the 4070?
My graphics card buying hypothesis for Lightroom is this: Excluding AI Denoise, low-power graphics cards do not offer enough over integrated graphics to be worth their price.
Not everyone has a CPU with built-in graphics.

A low-powered graphics card may introduce enough overhead that it makes Lightroom/Photoshop perform worse. (We are working with 21MP photos; this may not be true for larger photos, where a lower-powered GPU may still be useful.) At a given price point, Nvidia and AMD both have competitive offerings, but based on these AI Denoise results, when using "AI" features Lightroom can make better use of Nvidia's specialized cores, giving them a slight edge in price to performance.

How did you come to this conclusion?
I wish I had more data to prove or disprove the above, but based on my experience and the limited data I could collect (mostly from this thread so thank you so much!) I think it's the best I have.
That's the problem with benchmarking systems used for complex software with many different functions.

When I build a new system or do a significant upgrade, I always "overspec" my system so that as Lightroom (in this case) adds functionality that taxes the hardware, I have a few years of service built in. That's my choice, and I know that some people will disagree. I can appreciate how, for a Lightroom user who does not have good technical knowledge of PCs or Macs, all this can be confusing and even stressful. Perhaps resulting in "paralysis by analysis."
 
Hey Phil, jumping from A to C without stopping by B is my specialty (unfortunately) so I'm just happy you found it useful at all :LOL:. Happy to go over my reasoning, and I'll be even happier if you find any flaw in my logic. I'm a hobbyist when it comes to building computers so more experienced than most, but much much less experienced than some.

Can you elaborate. All of us have (slightly) different workflows, which can change over time as we further develop our Lightroom (and Photoshop) skills.
Can you explain how any of us could "dig into" their detailed results and recalculate (?) the conclusions based on our own workflows? For me, my workflow is changing with the release of new AI-based features.
I think these questions have pretty much the same answer. Thanks to how detailed Puget is, every filter, transformation, etc. for every graphics card they test has its own separate timing info. I don't know if it's worth the effort to "recalculate" a score, but I just look at the rows that might affect me and try to make an informed decision from there.

https://www.pugetsystems.com/pic_disp.php?id=64694
https://www.pugetsystems.com/pic_disp.php?id=65516
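To make "looking at the rows that might affect me" concrete, here is a minimal sketch in Python of how I think about it. The operation names and timings below are made-up placeholders, not Puget's published numbers; you would substitute the rows you actually care about from the tables linked above.

# Hypothetical example of re-weighting per-operation timings by your own workflow.
# All names and numbers are placeholders, NOT real benchmark data.
timings_s = {  # seconds per operation, per candidate GPU
    "gpu_a": {"export": 95.0, "ai_denoise": 25.0, "library_scrolling": 12.0},
    "gpu_b": {"export": 90.0, "ai_denoise": 55.0, "library_scrolling": 11.0},
}

# rough share of my waiting time spent in each operation (sums to 1.0)
workflow_weight = {"export": 0.2, "ai_denoise": 0.7, "library_scrolling": 0.1}

def weighted_seconds(gpu: str) -> float:
    """Workflow-weighted wait time; lower is better for this particular workflow."""
    return sum(timings_s[gpu][op] * w for op, w in workflow_weight.items())

for gpu in timings_s:
    print(f"{gpu}: {weighted_seconds(gpu):.1f} weighted seconds")

A card that wins on an overall score can still lose on a weighting like this if the operations you actually wait for are the ones it handles poorly.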

For Lightroom (first link), most of the waiting time I want to reduce is from AI masking, which is not benchmarked. The other thing I might care about is scrolling, but this seems to be limited by something other than raw GPU compute, because even a 3090 is not significantly faster than a 2060 in this area. Unfortunately, I haven't found any results for the iGPU I have (UHD 770), but based on its performance I'm guessing it's closer to that other limiting factor than the UHD 630 numbers would suggest.

I suspect Puget has recognized the importance of AI tools in Lightroom and Photoshop, and is not releasing new benchmarks of modern GPUs until they have a reliable test for them. Of course, this isn't just for the sake of giving us the best results... I'm sure they don't want to put effort into testing just to (possibly incorrectly) show that there is no need for an expensive top-of-the-line GPU that they can sell.

Who? Link?
https://youtu.be/nkh9VGCY8as?t=249
at 4:09

And I realized I phrased this completely incorrectly: I should have said top of the cards I'm considering, i.e. less than $600. The A770 is still below the 4080, 3080, and 4090 in this benchmark. I mentally dismissed those cards for being too expensive and forgot to clarify that in this post. For further explanation, I think this may be because the score is inflated by the A770 doing abnormally well in one test, even if its overall performance is lackluster. Puget has some interesting results from Topaz:

https://www.pugetsystems.com/labs/a...ance-analysis/#Topaz_AI_GPU_Benchmark_Results

For whatever reason, it seems that if all you need to do is sharpen images, Intel is multiple generations ahead of AMD/NVIDIA.

How did you conclude that you preferred the 4070?
Considering my positive experience with the 6800 XT, going with that would probably be the safest choice. There are a few reasons I've decided to go with the 4070, though.
1) The 4070 AI Denoise time that someone posted here is faster than what I got with the 6800 XT.
2) I want to buy the card at Microcenter, because they are very easy to deal with and return if something doesn't work out, but they do not have the 6800XT in stock.
3) The 7800 XT is barely an upgrade to the 6800XT, in some benchmarks it even seems to do worse. (https://www.techspot.com/review/2734-amd-radeon-7800-xt/) I know that's a gaming benchmark, but it establishes the rough performance class of the card.
4) AMD's Pro driver set is over a year old at this point. NVIDIA regularly updates theirs.

Not everyone has a CPU with built-in graphics.
A good point, and one I probably unfairly took for granted. When I initially spec'd the system AMD did not have integrated graphics and that was the main reason I went with Intel. At the time, I think that was a good choice, even though some of the newer AI features do seem to work better with a dGPU. I don't know if I'd make the same recommendation now, but given the prices of GPUs then it was the right choice.

How did you come to this conclusion?
This is definitely a fault of my thoughts meandering. My comparisons between the iGPU UHD 770 and the dGPU 3060 are not really conclusive. It's based on personal experience (and only one instance at that) and really more conjecture than conclusion. I called it a hypothesis because I'd love for it to be something that someone could test, but understandably it is probably not worth the time. That said, if someone were to ask about getting a new computer, I would probably suggest sticking with an iGPU for cost reasons unless they are willing to get at least something midrange, or really do need AI Denoise. I would also recommend overspeccing the power supply in case they want to add a GPU later, though (which is what I did for the current computer).
That's the problem with benchmarking systems used for complex software with many different functions.
100% this. I do use my computer for gaming (it's where the 6800XT is from) and I am used to having many sources of high quality benchmarks to make a choice buying a GPU. Having to go based on conjecture is not really what I'm used to.

When I build a new system or do a significant upgrade, I always "overspec" my system so that as Lightroom (in this case) adds functionality that taxes the hardware, I have a few years of service built in. That's my choice, and I know that some people will disagree. I can appreciate how, for a Lightroom user who does not have good technical knowledge of PCs or Macs, all this can be confusing and even stressful. Perhaps resulting in "paralysis by analysis."
I think that's a valid approach. As someone who likes to follow tech I personally like to get exactly what I need, while "overspecing" on expansion and power (never cheap out on power supplies!). This gives me the flexibility and a budget to expand into what I need later, and old hardware can always be resold so I think it comes out cheaper. If you don't want to deal with assessing your computer every year, I think it makes sense to overspec and not worry about it.
 
Unfortunately, I haven't found any results for the iGPU I have (UHD 770) but based on its performance I'm guessing its close to that other limiting factor than the UHD 630 they suggest.
I'm running an Intel i7 12700K CPU. It has UHD Graphics 770 iGPU. I chose that for the build over a year ago because I was well aware that LrC was not a big user of GPUs. I thought to save money by having an iGPU only. It worked great. I was so happy with my choice. Then came AI Denoise and the goal posts were moved substantially. It took over four minutes to Denoise a 20MP file. It eventually became untenable, and I installed an MSI GeForce RTX 4060 Ventus. It now takes just over 17 seconds, not twice as fast but 14 times as fast.
Why GeForce - because I sometimes use an AI Noise Reduction add-on to OM Workspace which requires Nvidia.
Why RTX - because RTX has tensor cores required for AI and other goodies like ray tracing. GTX has none of that.
Why the 40 series - for about the same speed, it is considerably more power efficient than the older 30 series.
Why the Ventus - it's MSI's budget line without the gaming bells and whistles.
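For anyone checking that speedup figure, the arithmetic is simply the old time divided by the new time; a quick Python sketch (approximating "over four minutes" as 240 seconds, since I didn't record the exact figure):

# Sanity check of the Denoise speedup, using the times reported above.
igpu_seconds = 4 * 60   # UHD 770 iGPU: "over four minutes", approximated as 240 s
dgpu_seconds = 17       # RTX 4060: "just over 17 seconds"
print(f"Denoise speedup: roughly {igpu_seconds / dgpu_seconds:.0f}x")  # ~14x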
The A770 is still below the 4080, 3080, and 4090 in this benchmark. I mentally dismissed those cards for being too expensive and forgot to clarify that in this post. For further explanation, I think this may be because the score is inflated by the A770 doing abnormally well in one test, even if its overall performance is lackluster. Puget has some interesting results from Topaz:
Until not long ago Intel Arc GPUs (A770) were effectively unusable for LrC. I believe that has been largely fixed but there are still issues. I expect that will be ironed out in time. However, the Intel GPUs are a bit more power hungry than the comparable Nvidias so it may mean a PSU upgrade. Intel is relatively new to GPUs compared to Nvidia so is still on the learning curve.
 
I think these questions have pretty much the same answer. Thanks to how detailed Puget is, every filter, transformation, etc. for every graphics card they test has its own separate timing info. I don't know if it's worth the effort to "recalculate" a score, but I just look at the rows that might affect me and try to make an informed decision from there.

https://www.pugetsystems.com/pic_disp.php?id=64694
https://www.pugetsystems.com/pic_disp.php?id=65516

Thanks. Now I can see the basis for some of your conclusions. That said, unless I'm actively shopping for a component, my eyes tend to glaze over when confronted by these detailed tables. Curiosity or a desire to "just keep up with trends" is not enough of a motivation.



For Lightroom (first link), most of the waiting time I want to reduce is from AI masking, which is not benchmarked. The other thing I might care about is scrolling, but this seems to be limited by something other than raw GPU compute, because even a 3090 is not significantly faster than a 2060 in this area. Unfortunately, I haven't found any results for the iGPU I have (UHD 770), but based on its performance I'm guessing it's closer to that other limiting factor than the UHD 630 numbers would suggest.

I suspect Puget has recognized the importance of AI tools in Lightroom and Photoshop, and is not releasing new benchmarks of modern GPUs until they have a reliable test for them. Of course, this isn't just for the sake of giving us the best results... I'm sure they don't want to put effort into testing just to (possibly incorrectly) show that there is no need for an expensive top-of-the-line GPU that they can sell.

Adobe seems to be actively exploiting their "Sensei" technology to roll out additional AI-based features with every dot release, every two months. If Puget Systems believes this to be the case, then they are probably waiting for a few more dot releases before they release an updated benchmark.

Note the meaning in Japanese for sensei. https://www.nhk.or.jp/lesson/english/questions/0004.html
https://youtu.be/nkh9VGCY8as?t=249
at 4:09

And I realized I phrased this completely incorrectly: I should have said top of the cards I'm considering, i.e. less than $600. The A770 is still below the 4080, 3080, and 4090 in this benchmark. I mentally dismissed those cards for being too expensive and forgot to clarify that in this post. For further explanation, I think this may be because the score is inflated by the A770 doing abnormally well in one test, even if its overall performance is lackluster.
The Procyon benchmark suite seemed interesting, but it's clearly not something for sale or monthly subscription to individual consumers. Too bad. :confused:

Puget has some interesting results from Topaz:

https://www.pugetsystems.com/labs/a...ance-analysis/#Topaz_AI_GPU_Benchmark_Results

For whatever reason, it seems that if all you need to do is sharpen images, Intel is multiple generations ahead of AMD/NVIDIA.

I have to wonder why. This card does look interesting, but until Intel has been in the market for a longer period of time, I would be hesitant to get one of their cards. Intel has such strategic challenges in their core business right now and with their new foundry business that I could easily imagine a top executive deciding to exit the GPU business.
Considering my positive experience with the 6800 XT, going with that would probably be the safest choice. There are a few reasons I've decided to go with the 4070, though.
1) The 4070 AI Denoise time that someone posted here is faster than what I got with the 6800 XT.
2) I want to buy the card at Microcenter, because they are very easy to deal with and return if something doesn't work out, but they do not have the 6800XT in stock.
3) The 7800 XT is barely an upgrade to the 6800XT, in some benchmarks it even seems to do worse. (https://www.techspot.com/review/2734-amd-radeon-7800-xt/) I know that's a gaming benchmark, but it establishes the rough performance class of the card.
4) AMD's Pro driver set is over a year old at this point. NVIDIA regularly updates theirs.


Good reasons, but I would be interested in an AMD vs. NVidia comparison using Pro Drivers. Would Puget Systems do that? Dunno. Would anyone else do that? Probably not.
This is definitely a fault of my thoughts meandering. My comparisons between the iGPU UHD 770 and the dGPU 3060 are not really conclusive. It's based on personal experience (and only one instance at that) and really more conjecture than conclusion. I called it a hypothesis because I'd love for it to be something that someone could test, but understandably it is probably not worth the time.

I doubt that anyone in this forum, even the gurus :) could or would do conclusive comparisons.
 
I'm just providing a quick update in case someone comes to this thread later. Overall, I'm extremely happy with the 4070. It feels comparable to the 6800XT, maybe slightly better, but without a head-to-head comparison, I hesitate to actually say which one is better. That said, if you can't notice a difference, does it even matter? Despite being a little more expensive, I'm happy with the 4070 even if it doesn't have any performance advantage since it has a longer warranty, is a little more power efficient, and receives more frequent updates. I'll update with the AI-denoise time when I get around to it.

The only thing that concerns me is the amount of VRAM Lightroom and Photoshop can use. I'm surprised 2D programs can consistently use 75% - 90% of the 12GB available to the 4070. That said it's possible (and I think likely, albeit without any supporting evidence) that this is just a red herring and Adobe may just be caching functions that might be used, and it doesn't require that much VRAM for any one operation.
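One way to test the caching theory would be to watch VRAM while a single Denoise runs and see whether the usage is sustained or just builds up and sits there. A minimal sketch in Python, assuming an Nvidia card with the standard nvidia-smi tool on the PATH and a single-GPU system; stop it with Ctrl-C:

# Sample GPU memory once a second (e.g. while a Denoise runs) to see whether the
# 75-90% VRAM figure is sustained working memory or a cache that simply never shrinks.
import subprocess
import time

QUERY = [
    "nvidia-smi",
    "--query-gpu=memory.used,memory.total",
    "--format=csv,noheader,nounits",
]

try:
    while True:
        used, total = subprocess.check_output(QUERY, text=True).strip().split(", ")
        print(f"VRAM: {used} MiB / {total} MiB ({100 * int(used) // int(total)}%)")
        time.sleep(1)
except KeyboardInterrupt:
    pass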
 
I'm surprised 2D programs can consistently use 75% - 90% of the 12GB available to the 4070.
Remember that LR relies on the GPU for all its AI masking, Denoise, and Lens Blur, and the AI models implementing these functions are much more resource intensive than the traditional graphics operations supporting image display. That's one of the reasons LR trips over bugs in graphics drivers that other applications, such as video editors and games, don't.

Adobe may just be caching functions that might be used, and it doesn't require that much VRAM for any one operation.
One of the computational bottlenecks is getting image data to and from main memory into the GPU, so the more image data and intermediate results can be stored in video memory, the better.
 
Data to date, updated 4/28/2023. I'll try to update this note as new info comes in. Corrections solicited, will leave in the order received for now so you can match up.

[Attachment 20988: summary table of Denoise times reported in this thread]

While I do not know if anyone else is still interested in these benchmarks, this is how I landed in this thread... :whistle:
To add an "all-time slow" to this list:
EPYC 7313P (8 Core VM out of 16), 64 GB RAM, Radeon Pro WX 4100
Estimated: 900 seconds
Actual: 457 seconds

On a sidenote: This was done after an update to the current GPU driver version. With an older GPU driver version, the estimate was about twice as long while the actual was about three times as long.

The old WX 4100 does a good job for me for anything but AI denoise.
When I read the other systems' speeds, I'm seriously considering buying a new GPU...
 
The old WX 4100 does a good job for me for anything but AI denoise.

Surprising, given the age of this GPU card.
When I read the other systems' speeds, I'm seriously considering buying a new GPU...
Can't hurt. Be prepared for some sticker shock. Spend a bit more than the "minimum" and you will get more useful years out of the new card. Adobe seems to be developing a lot of new AI-based features for DEVELOP that require significant GPU processing power.
 
While I do not know if anyone else is still interested in these benchmarks, this is how I landed in this thread... :whistle:
To add an "all-time slow" to this list:
EPYC 7313P (8 Core VM out of 16), 64 GB RAM, Radeon Pro WX 4100
Estimated: 900 seconds
Actual: 457 seconds

On a sidenote: This was done after an update to the current GPU driver version. With an older GPU driver version, the estimate was about twice as long while the actual was about three times as long.

The old WX 4100 does a good job for me for anything but AI denoise.
When I read the other systems' speeds, I'm seriously considering buying a new GPU...
For AI Denoise, in the AMD ecosystem, make sure your new GPU has AI Accelerators; the more the better. The WX 4100 has no AI Accelerators.
 
Hello,

Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
Windows 10
16 GB RAM
GPU: Radeon 7870
LR 13.1

Basket photo file: Denoise 9 min 53 sec
Canon R7 files: Denoise from 8 to 10 min
 
Hello,

Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
Windows 10
16 GB RAM
GPU: Radeon 7870
LR 13.1

Basket photo file: Denoise 9 min 53 sec
Canon R7 files: Denoise from 8 to 10 min
Pretty good for a 12-year-old computer, but...
 
Hello,

Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
Windows 10
16 GB RAM
GPU: Radeon 7870
LR 13.1

Basket photo file: Denoise 9 min 53 sec
Canon R7 files: Denoise from 8 to 10 min

Hello,

For information purposes, I have disabled GPU acceleration in Lightroom.
Basket photo file: Denoise is now 5 min 38 sec.

Either way, the GPU worked at ~100% and the CPU at ~15%, just as with GPU acceleration enabled.
 
Hello,

For information purposes, I have disabled GPU acceleration in Lightroom.
Basket photo file: Denoise is now 5 min 38 sec.

Either way, the GPU worked at ~100% and the CPU at ~15%, just as with GPU acceleration enabled.
As someone else mentioned before in this thread, the AI denoise ignores the GPU acceleration setting and always uses the GPU.
 