Ps and LrC Artificial intelligence, hardware and performance

Status
Not open for further replies.

thommy

Active Member
Joined
Aug 27, 2011
Messages
248
Location
Sweden
Lightroom Experience
Advanced
Lightroom Version
Classic
Lightroom Version Number
Lightroom Classic version: 11.3
Operating System
Windows 10
Hi

With more and more AI being used for processing images in both Photoshop and Lightroom, will it put higher demands on better GPUs and/or CPUs?
Or is the main processing being done in the cloud, where better internet bandwidth is of more importance?
Maybe a mix of both worlds?
One example in LrC: when you use the new Adaptive Presets or Select Sky, it shows an estimated time to perform the action.
What is that calculation based upon?
Any tips on how to best prepare for future AI implementations?

Thommy
 
My take is that hardware vendors add increased capacity, which enables software vendors to add new features that can now be practically delivered. In other words, don't get hung up on the term 'AI'. There is no measurable definition that separates AI from a clever new algorithm, and nothing says the new algorithm was not derived by machine learning. IMHO they are all just new features.
Any tips on how to best prepare for future AI implementations?
Just like any new platform purchase, leave yourself excess capacity that can be utilized as new features come out. That means the latest family of hardware components. For example, the latest release of PS now requires a minimum of 2GB of VRAM. The Nvidia card in my 2014 PC only has 1.5GB, so I can't upgrade. I don't upgrade working hardware until I have to. I am contemplating buying a new video card so I can upgrade, and I can only do that because I have a desktop with spare PCI slots. I'm not sure I'd be able to upgrade a laptop.
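That VRAM check is just a comparison against the app's stated minimum. As a minimal sketch (the 2GB Photoshop minimum and the 1.5GB card are the figures from this post; the function name is made up for illustration):

```python
# Illustration: compare a card's VRAM against an app's stated minimum.
# The 2 GB Photoshop minimum and 1.5 GB card are the figures from the post.
# On a real Nvidia system you could read total VRAM with, e.g.:
#   nvidia-smi --query-gpu=memory.total --format=csv

def meets_vram_minimum(vram_gb: float, required_gb: float) -> bool:
    """Return True if the installed VRAM meets the app's minimum."""
    return vram_gb >= required_gb

print(meets_vram_minimum(1.5, 2.0))  # the 2014 card in the post: False
print(meets_vram_minimum(8.0, 2.0))  # a current mid-range card: True
```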
 
You will have to ask Adobe; they have not shown their hand yet from what I have seen.
That said, MS and Apple have both made significant improvements in core OS-level libraries to assist in the use of AI, mostly by exposing the GPU for processing. So if Adobe is using a third-party library for the AI work (and I am too lazy to investigate) instead of re-inventing the wheel, you could expect increasing reliance on the GPU.
When I upgrade, if I have a space in my budget after defining the base system, it will go roughly 2/3 to GPU and 1/3 to CPU.
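That 2/3–1/3 split is simple arithmetic on whatever is left over after the base system; a quick sketch (the $1,500 leftover figure is hypothetical, not from this post):

```python
# Sketch of the 2/3 GPU / 1/3 CPU budget split described above.
# The 1500.0 leftover budget is a hypothetical figure for illustration.

def split_upgrade_budget(leftover: float) -> dict:
    """Split the budget left after the base system: 2/3 GPU, 1/3 CPU."""
    return {"gpu": round(leftover * 2 / 3, 2),
            "cpu": round(leftover * 1 / 3, 2)}

print(split_upgrade_budget(1500.0))  # {'gpu': 1000.0, 'cpu': 500.0}
```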

Tim
 
With more and more AI being used for processing images in both Photoshop and Lightroom, will it put higher demands on better GPUs and/or CPUs?
Or is the main processing being done in the cloud, where better internet bandwidth is of more importance?
Maybe a mix of both worlds?
It seems to be “all of the above,” although in different mixes depending on the software.

For example, in Photoshop, filters based on machine learning are in the Neural Filters category. Some of those filters tell you whether they are processing on device or in the cloud. From what I understand, one reason processing might be done in the cloud is when a developer decides that the ML model is not practical to distribute/install, such as being a very large model, so they keep it on their servers and send you the result. This is workable when the total time to upload, calculate, and download would be less than doing it on device.
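That tradeoff can be sketched as a simple timing comparison: run in the cloud only when upload + cloud compute + download beats on-device compute. All numbers and names below are hypothetical; this is not Adobe's actual logic.

```python
# Hypothetical sketch of the cloud-vs-device decision described above.
# None of this reflects Adobe's actual implementation.

def faster_location(image_mb: float, up_mbps: float, down_mbps: float,
                    cloud_s: float, device_s: float) -> str:
    """Return 'cloud' or 'device' for the lower estimated total time."""
    # Transfer time in seconds: megabytes * 8 bits/byte / megabits-per-second.
    transfer_s = image_mb * 8 / up_mbps + image_mb * 8 / down_mbps
    return "cloud" if transfer_s + cloud_s < device_s else "device"

# 50 MB image, 20 Mbps up / 100 Mbps down, 3 s cloud model vs 30 s on device:
print(faster_location(50, 20, 100, 3.0, 30.0))  # 24 s transfer + 3 s < 30 s -> 'cloud'
```

On a slow connection the same image flips the answer, which is presumably why some Neural Filters stay on device.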

[Image: Photoshop Neural Filter, device vs. cloud indicator]


will it put higher demands on better GPUs and/or CPUs?…Any tips on how to best prepare for future AI implementations?

Graphics and video became more practical when the GPU was added — hardware specialized to accelerate those calculations. More recently, processors added specialized hardware support to accelerate specific video codecs. Although the GPU is very important for AI, CPUs/SoCs now include coprocessors specialized for machine learning too. Apple Silicon SoCs include what they call a Neural Engine, and although I’m less familiar with PCs apparently the Tensor Cores in Nvidia graphics hardware are also valuable for accelerating AI/machine learning features.

[Image: M2 Neural Engine]


In the future, assuming accelerating machine learning features becomes a bigger priority, the CPU will probably become less important for that if ML hardware acceleration is available. What will probably become more important is which GPU and AI/ML coprocessors your computer has. But what will also be important is how well all components are balanced and integrated (e.g. memory bandwidth), so that different ML-based features can achieve what they need to regardless of what proportions of CPU, GPU, AI/ML processor, and cloud processing they use in their implementation.
 
Great explanation - thanks for the info.
I haven't noticed the progress bar in Photoshop, but it's great that they tell us where the processing is being done.

Thommy
 
I havent noticed the progress bar in Photoshop
I got the progress bar screenshot from the bottom of the Neural Filters window while it was running one of the filters that processes on device. The progress bar is small and short, so it's easy to miss on a big screen.
 