
My Recent GPU Experience (Gigabyte Nvidia RTX 4070 Ti 12GB)

Gnits

Senior Member
Joined
Mar 21, 2015
Messages
2,130
Location
Dublin, Ireland.
Lightroom Experience
Power User
Lightroom Version
Classic 12.2.1
Operating System
Windows 10
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

"This is a relatively long discussion. It is worth reading the discussion in sequence, as posted."

To assist people returning to this discussion, the following are direct links to frequently used posts.

Post #20 contains a link to a raw file to use, if you wish to compare the performance of your rig to others.
https://www.lightroomqueen.com/comm...e-nvidia-rtx-4070-ti-12gb.47572/#post-1315509

Post #30 contains a summary of Denoise performance stats for a range of GPUs and CPUs from readers of this post.
https://www.lightroomqueen.com/comm...ia-rtx-4070-ti-12gb.47572/page-2#post-1315545

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I am posting this on the Lightroom Classic forum, as I think it is most relevant to Classic users.

With GPU prices softening and the release of the 4xxx range, I decided to purchase the Gigabyte Nvidia RTX 4070 Ti 12GB. I was aware before purchase that I needed a minimum 750 W PSU. I already had a recently purchased Corsair RM850X and plenty of spare Corsair-supplied compatible internal power cables.

The GPU was tricky to install because of its size and the need to install an awkward bracket support system. The GPU was supplied with a Y cable: the single end plugged into the GPU and the two other ends plugged into separate power cables (not supplied by Gigabyte), which were then plugged into the PSU. I used the Corsair cables supplied with my PSU.

I triple-checked that I had the polarity of all the cable connections correct: square pins fitted into square holes, rounded pins into rounded holes, and so on. When I powered up my PC… nothing happened. Nothing. No lights, no fans starting up, no status numbers on the internal motherboard display. Totally dead... and no clue as to what the problem might be. I feared the worst.

I completely disassembled my PC and removed the Corsair power supply. I then followed the Corsair instructions on how to test the PSU. All tests passed, and all (28, I think) pin-outs had exactly the correct voltages.

I rebuilt my rig, this time reinstalling my old GPU. With much relief, it booted perfectly. I had been worried that many of my main components, such as the motherboard, processor and PSU, had been toasted.

I sent a query to Gigabyte requesting advice and troubleshooting steps, detailing the steps I had taken to test my PSU and my overall rig.

At the same time, I started researching any known issues with this range of GPUs. I was horrified to discover many incidents where GPUs, cables and other components had melted following installation of 4xxx GPUs. My heart sank. Many of the horror stories pointed to the supplied Y cable as the source. GPU makers pointed towards incorrectly seated motherboards, GPUs and/or cables. I can confirm that I had triple-checked my connections and listened in each case for the ‘click’ as firm connections were made.

I ordered a Corsair 600W PCIe 5.0 12VHPWR Type-4 PSU power cable directly from Corsair. It was being shipped from France and would take a week.

Six days passed and two events occurred on the same day: 1) my Corsair 12VHPWR cable arrived, and 2) I received a response from Gigabyte. Essentially, Gigabyte told me to check my GPU (which I had already told Gigabyte I had done) or contact the supplier of the GPU (Amazon).

So, I installed the Corsair 12VHPWR cable I had just purchased, connecting the GPU to the power supply. There was no need for me to triple-check all the connections, as the connectors would only connect the correct way. I gingerly turned on power to the PSU and pressed the start button on the PC… the fan on the PSU slowly started to turn… lights appeared within the GPU, numbers started to appear on the motherboard LED display, and the boot sequence started.

My PC…came to life… with the sound of the 'Hallelujah Chorus' reverberating in the background.



My Summary.

1. The response from Gigabyte was totally unacceptable.
2. In my opinion, the Y cable supplied by Gigabyte was inappropriate for the intended purpose.
3. Gigabyte's response was seriously underwhelming and totally ignored the elephant in the room, i.e. the Y power cable.

My advice to anyone installing a modern, fairly high-powered GPU is to triple-check the connections needed to support the GPU and procure a single, fit-for-purpose cable assembly which connects the GPU directly to the PSU, without any third-party connectors. Make sure you have a modern PSU of sufficient wattage, and make sure you have sufficient spare slots on your PSU to cater for twin connections if they are required to power your GPU.

PS. I have noticed a very pleasing improvement in Lr responsiveness… very timely, as the third event which occurred on the same day was the release of a new Lr version, with the new GPU-focused noise reduction and mask features. Serendipity? ‘Sweet’.

I am posting this story in the hope that others will not have to experience the pain and anguish I did, and to help avoid a potentially catastrophic meltdown of lots of very expensive components. I know every config is different... but the bottom line is that modern GPUs are power hungry and need cables fit for purpose.

Here is the cable I purchased directly from Corsair. In my opinion, this should be supplied with GPUs which need dual PSU power slots.

[Attached image: CorsairPsuGPUCable.jpg]
 
My system has an Intel i7-12700K CPU with 32GB of DDR5 RAM and an integrated GPU (UHD Graphics 770). It's plenty fast enough for LrC, including masking. However, with AI Denoise, the integrated GPU struggles at 4 min 30 sec on a 20MP file.
Like most of us, I would suggest, I'm making more and more use of Denoise. So, I bit the bullet and installed an MSI GeForce RTX 4060 Ventus 8GB. After some issues with having two GPUs, which were fixed in the BIOS, it now does Denoise on a 20MP file in just 18 sec. That's 15x faster. Very happy! Incidentally, it comes in white to match the motherboard, so it continues the stormtrooper theme :)
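
For clarity on that 15x figure, here is the trivial arithmetic as a quick Python sketch, using the times quoted above:

# Rough speedup calculation for the Denoise times quoted above:
# 4 min 30 sec on the integrated UHD 770 versus 18 sec on the RTX 4060.
integrated_seconds = 4 * 60 + 30   # 270 s on the iGPU
discrete_seconds = 18              # 18 s on the RTX 4060

speedup = integrated_seconds / discrete_seconds
print(f"Denoise speedup: {speedup:.0f}x")   # prints "Denoise speedup: 15x"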
 
Well done. I always felt that the 4060 might be the sweet spot in terms of price/performance. The 4060 family had been announced but was not available when I decided to upgrade. Do you notice an improvement in overall Lr Classic interoperability?

Upgrading to the Sony A7R V and its 60MB files has been a multiple whammy for me, in that it added a super extra load to normal post processing; but also, I discovered the super Depth of Field bracketing features, so now I also have to process 3-10 captured images per final single image output. I only use this feature selectively, but it shows that keeping on the right side of the performance curve is a constant battle.

I did not upgrade to the Sony A7R V to get 60MB files; it was a tax associated with getting all the other features I was looking for.
 
Do you notice an improvement in overall Lr Classic interoperability?
There is nothing particularly dramatic at this point. It always was fairly snappy. With burst shooting, import seems to take an inordinate amount of time. There was no improvement with the discrete GPU. According to Task Manager, the CPU does all the import duties.
 
I've set it up with the Creator Ready driver, which seemed the obvious choice. I'm thinking the precision of the Creator Ready driver is certainly important for CAD and graphic design, but not so much for photography. The Game Ready driver may give us a bit more speed. It's probably inconsequential, but I'm wondering if you have tried it?
 
I suspect I have set it up with the more conservative driver. I will check it tomorrow. I was so pleased to get the card and PC working without frying everything that I did not experiment too much. It is probably time for me to check the status of various driver updates. I am not dealing with large batches of images lately, so I have not had a reason to fine-tune. Lots of dark winter months ahead to work on optimising my overall workflow.
 
Not a problem.
Actually, my biggest issue after installing the RTX 4060 was having two GPUs, the integrated and the discrete. Articles I've read indicated that one can run both by changing the BIOS, but that there is no performance advantage. I tried that, but the system didn't like it at all. I had to set the BIOS to accept only one GPU. Luckily it chose the RTX 4060; Windows doesn't even see the other one. There may be a better way, but I'll leave that for another day.
 
I started with a GPU on the motherboard but also an early-model discrete GPU. I installed the 4070 GPU physically but ran the system off the early GPU, as I wanted to make sure I had a working screen attached. I then configured the 4070 and connected an HDMI cable to my screen. So, I still have two discrete GPUs installed but am only using the 4070.

As far as I recall, I made no changes to BIOS.

The reasons I started this thread were: a) the lack of documentation regarding the proper GPU power cables to use; b) my frustration with a totally unresponsive PC when I installed the 4070 GPU (because the supplied cables were not fit for purpose); and c) the lack of information on real-world performance metrics, especially in relation to use with Lr.

I suspect / hope the arrival of M3 based systems will shed further light on the evolving architecture of GPU based parallel processing.
 
Fixed my issue in the BIOS. Both GPUs are now enabled. If I find an issue with LrC and two GPUs, the integrated GPU can easily be disabled in Device Manager. I ran Denoise with both GPUs enabled; the internal GPU didn't even flinch, which is probably just as well.
Regarding cables, the PSU has two 6x2 outlets and the GPU has one 6x2 inlet. The PSU came with a 1-to-1 cable and a 2-into-1 cable. Bearing in mind your earlier comments, I took particular care. I read somewhere that using the 2-into-1 cable may spread the load in the PSU, but like you found, there is precious little documentation with the unit. I chose to use the 1-to-1 cable; it just seemed neater. All good so far.
 
And I am back again with another result, this time with a dedicated GPU, a Radeon RX 7800 XT. I did 10 test photos in a row and it took 4 min 20 seconds, i.e. 26 seconds per picture. It was a bit weird, as I wasn't expecting any pauses in the processing in between; looking at the GPU load, it seems there is a pause of about 2-3 seconds. So this GPU is slower than the RTX 4070. It seems on par with the 3060 Ti/4060 Ti.
 
On this issue of the Nvidia Studio Driver versus the Game Ready Driver, I've done some testing. I ran Denoise (at 50%) three times on 10 images at a time with both drivers. There was a 2% difference in time, which is well within my stopwatch button-pressing error. So, we can say that there is negligible difference in performance, at least as far as LrC is concerned. That surprised me; I would have thought the Game Ready Driver was faster. I then ran the same images using both drivers at 100% to see if there was any difference. Viewed at 800%, I can say that they were both pixel-perfect identical. That also surprised me. I was expecting perhaps some artifacts from the Game Ready Driver, but no. So, the upshot is, it doesn't matter which of the two driver types is used.
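
For anyone repeating the timing comparison, the arithmetic is simple enough to script. Here is a minimal sketch; the run times below are placeholders, not my actual measurements:

# Placeholder stopwatch times in seconds for three runs of Denoise on 10 images,
# one set per driver type. Substitute your own measurements.
studio_runs = [182.0, 180.5, 181.2]
game_ready_runs = [178.9, 184.1, 179.6]

studio_avg = sum(studio_runs) / len(studio_runs)
game_ready_avg = sum(game_ready_runs) / len(game_ready_runs)

# Express the gap as a percentage of the slower average.
diff_pct = abs(studio_avg - game_ready_avg) / max(studio_avg, game_ready_avg) * 100
print(f"Studio {studio_avg:.1f}s, Game Ready {game_ready_avg:.1f}s, difference {diff_pct:.1f}%")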
 
Great post.

By any chance, did you note how much space each driver set required on your system drive?
 
Great to have real world feedback.

On a parallel topic: on Windows, I turned off file/folder indexing as well as setting the default Explorer template to generic, and I have found my rig is far more responsive. It appears that Windows recognises folders which contain only images and applies a Photo-style template, which causes it to extract a pile of metadata from the images, all of which consumes chunks of CPU cycles. Explorer now works at warp speed, while previously I could watch a directory listing sometimes crawl down the screen. I am annoyed with myself, as I was aware of the basics of this from my early DOS days.
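
For anyone who prefers to script the template part of that change, something like the sketch below should do the same job; it writes the widely documented "AllFolders" registry value that forces the General items template. I haven't tested this exact snippet, so treat the key path as an assumption and back up the registry first.

# Sketch only: set Explorer's default folder template to "General items" for all
# folders by writing the commonly documented AllFolders registry value.
# The key path is the widely published one; back up the registry before trying this.
import winreg

KEY_PATH = (r"Software\Classes\Local Settings\Software\Microsoft"
            r"\Windows\Shell\Bags\AllFolders\Shell")

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    # "NotSpecified" corresponds to the "General items" template in Folder Options.
    winreg.SetValueEx(key, "FolderType", 0, winreg.REG_SZ, "NotSpecified")

print("Folder template set; restart Explorer for it to take effect.")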
 
I looked into the difference between Nvidia Studio and Game Ready drivers last year. It appears that there's a single sequence of driver versions. Game Ready versions are released frequently, several times a month on average. Once a month, Nvidia claims to do extra testing of the latest version with creative apps, and that version gets released as both a Studio and a Game Ready driver. You can see the sequence of versions via the Beta and Older Drivers search, e.g.

[Attached screenshot: Nvidia driver version search results]


The more frequent releases of Game Ready drivers allow Nvidia to push out patches for particular games quickly. But the received wisdom is that they have more bugs than Studio drivers, which get more testing on creative apps, which is why Nvidia and Adobe recommend Studio drivers for creative work.

So if a Studio and Game Ready driver have the same version number, they're exactly the same and should produce exactly the same results with identical performance.
 
Thanks for that. I always thought indexing should make searches faster and wondered why it took so interminably long.
To tell you the truth, I have no idea how drivers are installed and stored. I looked in Windows > System32 > DriverStore > FileRepository and found nothing of any substantial size. However, when downloading the drivers using the GeForce Experience app, it reported 546.4MB to download for the Game Ready Driver and 546.2MB for the Studio Driver.
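
If anyone wants to dig further, a rough sketch like the one below (my own, nothing official from Nvidia) would total up the package folders under FileRepository so the big ones stand out:

# Sketch: report the ten largest driver packages in the Windows driver store.
# Standard path on Windows; some folders may need an elevated prompt to read.
from pathlib import Path

repo = Path(r"C:\Windows\System32\DriverStore\FileRepository")

def folder_size(path: Path) -> int:
    # Sum the sizes of every readable file below the folder.
    total = 0
    for f in path.rglob("*"):
        try:
            if f.is_file():
                total += f.stat().st_size
        except OSError:
            pass
    return total

sizes = sorted(((folder_size(d), d.name) for d in repo.iterdir() if d.is_dir()),
               reverse=True)
for size, name in sizes[:10]:
    print(f"{size / 1_048_576:8.1f} MB  {name}")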
 
GeForce Experience, it reported 546.4MB to download for the Game Ready Driver and 546.2MB for the Studio Driver.
I think there is a lot of bloatware in those downloads, if you are not a gamer.
 
I think there is a lot of bloatware in those downloads, if you are not a gamer.
I don't doubt that with Nvidia, but I can't find where it's gone.
 
Good morning,
I'm new to the forum.
I have a question: my current configuration is a Ryzen 5 5600G + RTX 4060 8GB.
On a 20-megapixel CR3, denoising takes 17s.
My system runs at PCI Express 3.0, because the processor does not support PCI Express 4.0.
Do you think a processor supporting PCIe 4.0 would significantly improve denoising times? (Rough arithmetic below.)
Thanks for your help.
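
For scale, here is the rough arithmetic behind my question; the ~25MB compressed CR3 size and the x8 link width of the RTX 4060 are my own assumptions, and Denoise obviously moves more than just the raw file across the bus.

# Back-of-envelope: how long does it take just to move a raw file over the PCIe link?
# Assumes a ~25 MB compressed 20 MP CR3 and an x8 link width.
file_mb = 25
pcie3_x8_gb_s = 0.985 * 8    # ~7.9 GB/s usable bandwidth, PCIe 3.0 x8
pcie4_x8_gb_s = 1.969 * 8    # ~15.8 GB/s, PCIe 4.0 x8

for label, bw in [("PCIe 3.0 x8", pcie3_x8_gb_s), ("PCIe 4.0 x8", pcie4_x8_gb_s)]:
    transfer_ms = file_mb / 1024 / bw * 1000
    print(f"{label}: ~{transfer_ms:.1f} ms to transfer the file")
# Either way the transfer is a few milliseconds out of a 17 s Denoise, so the
# link generation alone may not be the whole story.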
 
Despite the warnings from the OP, Gnits, I fell into a similar trap. As mentioned before, with my integrated GPU, AI Denoise took a bit over 4 minutes, so I installed an MSI GeForce RTX 4060 Ventus 8G discrete GPU. All was going well; it was doing AI Denoise in 17-18 seconds. However, there was an issue with booting. It was irregular, needing multiple presses of the reset button to get a proper boot. At the suggestion of an MSI forum member, I upgraded the PSU from an EVGA 650W to a Corsair 850W unit, as calculations showed that 650W was barely enough and could be the cause of the booting issues. The Corsair didn't work. I took it to be faulty and brought it back. It tested fine in the shop. They were horrified to learn that, for convenience, I was still using some of the EVGA cables. A rookie mistake. As Gnits pointed out, the plugs and sockets are standardised and polarised, so what could go wrong? A lot, apparently. The plugs are standardised, but not the wiring between them. So, I went back sheepishly, changed the cables to Corsair and it powered up.
Unfortunately, the booting issue remained, so it wasn't the lack of power. After much toing and froing on the forum, changing things in and out, I moved the GPU from the top PCIe slot to the second slot. Problem solved. Both slots have the same specs, so it doesn't matter performance-wise. I suspect it wasn't a fault as such, but that the relative proximity of the slot to the CPU wasn't accounted for in the VBIOS, at least not for that GPU.
 
I read through this topic not long ago, and since it's still getting replies, I figure I'll ask here...

Windows 11 Pro, i9-12900 CPU, 64GB memory, Nvidia 4060 GPU; everything in the box is on fast M.2 drives, and images are on an 8TB Samsung SSD.
LRC is 13.0.2

I DIDN’T do any empirical testing before I replaced the AMD 5700XT that was in the system with the 4060 Ti, so I don’t KNOW how much faster, if any, things run… It’s just my perception, unreliable as that may be.

In the preferences, the “Use GPU for Exports” box is checked, and it SAYS “full graphics acceleration is enabled.”

I disabled the onboard GPU, so everything should be using the 4060. Is there any reason I can't re-enable it, given that everything important should be using the 4060 anyway? Or should I leave it off?

I'm trying to figure out whether the 4060 GPU is actually doing anything for processing, and when.

I exported 50 full-sized JPEGs. CPU stayed around 14-15% busy, GPU usage went up to about 15%, and GPU memory stayed around 5GB, maybe up to 5.5GB, but there didn’t seem to be any major speed improvement with the exports. Then again, I don't usually export 50 full-sized JPEGs.

I took 10 images and ran Denoise. GPU jumped to between 75 and 95%, CPU stayed at 1%, and GPU memory stayed at around 5GB. I don't use Denoise, but it was definitely significantly faster than when I've fiddled with it in the past.
It definitely seems to be working for Denoise.

I generated 50 1:1 previews. The CPU goes to 95-100% busy, the GPU never gets over 3% (1% most of the time), and memory usage barely went up, from 1GB to about 1.4GB. As with the exports, I can't say it's not faster, but if it is, it's not enough faster to be notable. I tried it a couple of times, restarting LRC in between, and the time didn't vary: about 68 seconds +/- half a second.

Lastly, I ran an image through Topaz Photo AI, and it put the GPU at 100%; CPU stayed around 10-12% until the save, when it went up. But in Photo AI it's significantly faster; I'd say anywhere between 2x and 4x faster.

Is there some specific way to see if the GPU is doing the 1:1 previews? Or it is simply "if it's doing the other stuff, it must be doing the previews"?

I've noticed that if something in LRC uses GPU memory, it goes up, but it never comes back down unless I exit and restart LRC.
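
For anyone wanting to log these numbers over time rather than watching a monitor window, a rough sketch like the one below polls nvidia-smi (which installs with the Nvidia driver) once a second while LrC runs a task. Only a sketch, but it should make spikes like the Denoise run easy to spot.

# Sketch: sample GPU utilisation and VRAM once a second while LrC runs a task.
# nvidia-smi installs alongside the Nvidia driver; stop the loop with Ctrl+C.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"]

try:
    while True:
        out = subprocess.run(QUERY, capture_output=True, text=True, check=True).stdout
        util, mem = [v.strip() for v in out.strip().splitlines()[0].split(",")]
        print(f"{time.strftime('%H:%M:%S')}  GPU {util}%  VRAM {mem} MiB")
        time.sleep(1)
except KeyboardInterrupt:
    pass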
 
By "onboard GPU", if you mean the integrated GPU (iGPU), the one that comes with the CPU chip on some CPUs, I've found you don't need to disable it. LrC simply ignores it, and it doesn't seem to interfere. It might have uses for other apps.
Regarding your discrete GPU (RTX 4060 Ti), it sounds like you are concerned that it doesn't do much. By design, LrC has never been a big user of GPUs. It's ironic, considering we are dealing with image processing, but that's how it is, or rather was. So, yes, for normal processing you'll see very little use of the GPU. However, times have changed: AI Denoise is heavily dependent on AI-capable GPUs such as the Nvidia RTX range. My RTX 4060 8G runs at close to 100% with Denoise, then, apart from a few blips, has a snooze, leaving import, 1:1 previews, export and everything else to the CPU.
 
Is there some specific way to see if the GPU is doing the 1:1 previews? Or it is simply "if it's doing the other stuff, it must be doing the previews"?

The GPU is NOT (yet) used for preview generation, as is the case with most Library functions.
 
Thanks. That matches what I'm seeing. Topaz tools use the GPU a lot, but I don't see much use when building a few hundred (or thousand) 1:1 previews, something I had inferred Lightroom would offload to the GPU based on the Lightroom requirements ("8GB of VRAM to enable GPU supported preview generation"):
  • GPU with DirectX 12 support
  • 4GB of VRAM for 4K or greater displays
  • 8GB of VRAM to enable GPU supported preview generation and export
So Adobe appears to say Lightroom DOES use the GPU, I'm not seeing much use of the GPU, and Jim Wilde says it ISN'T used... Does it or doesn't it?
 
So Adobe appears to say Lightroom DOES use the GPU, I'm not seeing much use of the GPU, and Jim Wilde says it ISN'T used... Does it or doesn't it?
It seems to me LrC uses what works best. If you have a powerful CPU and lots of RAM but a mediocre GPU, it will favour the CPU, particularly if "Use Graphics Processor" is set to Auto.
 