
Resolution with Generative Remove


JuergenGulbins
Does anybody know the maximum resolution with Generative Remove: is it 1024 x 1024 or 2048 x 2048? (with LrC 13.4)
 
It's 2048.
 
Are you sure, Paul? I thought it was 1024 x 1024 pixels.
 
Is that the maximum pixel dimensions of a replacement patch? So will a small replacement patch potentially have dimensions 2048 x 2048 (or 1024 x 1024)?
 
I'm not sure I understand the question. In fact I'm pretty sure I don't.

My understanding is that if I have an image that is 6000 x 8000 pixels and I use any of the "remove" tools (clone, heal, remove) to change some pixels in a "patch", the overall dimensions of the image remain 6000 x 8000. That means the tool replaced pixels in the patch on a one-to-one basis, retaining the original resolution. What am I missing?
 
My understanding is that the tool replaces pixels in the patch on a one-to-one basis, retaining the original resolution. What am I missing?

I think the 1024x1024 number in question is the size of the patch.
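To put the patch-size idea into numbers, here's a minimal Python sketch. The cap and the upscaling behaviour are the conjecture being discussed in this thread, not confirmed Adobe behaviour:

```python
# Hypothetical arithmetic only: if a generated patch is capped at
# MAX_PATCH pixels per side, a larger selection must be filled with an
# upscaled patch to keep the image's overall dimensions unchanged.

MAX_PATCH = 2048  # or 1024 -- the cap being debated in this thread

def upscale_factor(sel_w: int, sel_h: int, max_patch: int = MAX_PATCH) -> float:
    """How much a generated patch would need enlarging to cover a
    selection of sel_w x sel_h pixels."""
    return max(1.0, max(sel_w, sel_h) / max_patch)

print(upscale_factor(900, 700))    # 1.0 -- under the cap, no upscaling
print(upscale_factor(3000, 1800))  # ~1.46 -- this selection gets an upscaled patch
```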


 
Is that the maximum pixel dimensions of a replacement patch? So will a small replacement patch potentially have dimensions 2048 x 2048 (or 1024 x 1024)?
That's correct, it's the maximum pixel dimensions of the generative replacement patch (and I stand by 2048).

Try it using a high-res image, uncropped, and Gen AI to remove an object. It's good, but the resolution drop is noticeable, especially in a prominent position (I removed a guy next to a bride where they were the only two in the photo; I needed a really decent output and went back to old-fashioned Ps Clone to do it, as the quality just wasn't good enough!). It's great for lots of small patches though.
 
I checked this with Adobe just after release, and they confirmed that the Generative Remove outputs are a maximum of 2048px x 2048px. There had been some confusion, as Julieanne had said 1024px in one of her videos, which was recorded before it changed.
 
You may want to double check this with Rikk. I did a quick test to verify this, and the results are very disappointing; they even seem to contradict the 1024 size, let alone the 2048! It's pretty much impossible to select an exact size in Lightroom, but I think I selected something that is clearly bigger than 1024x1024 but smaller than 2048x2048 (as you can see in the upper left corner, the total image size is 7008 x 4672). The result has 'UPSCALED' written all over it, if you ask me.
[Screenshot: 2024-07-08 15-13-48.jpg]

[Screenshot: 2024-07-08 15-15-00.jpg]


So then, just as an extra test, I reset this and this time I selected a circle that is clearly within 1024x1024 pixels:
[Screenshot: 2024-07-08 15-18-24.jpg]


Look at the result now! It is better, but still clearly less detailed and less sharp than the original image, suggesting that even this might be upscaled (although I think it is not upscaled, but 'just bad'). I really don't know what to believe when it comes to the 'native' size of Generative Remove...
[Screenshot: 2024-07-08 15-18-47.jpg]
 
Hi Johan, I don't think any of the Generative... commands attempt to match grain or noise.
As a human you know that it's grass on either side and getting further away, so clone, patch or content-aware would probably work better.
I'd expect generative AI to get better quite quickly, but for now Photoshop's Remove tool is my go-to.
 
I know that, and I'm not talking about grain or noise. I am talking about real detail. Instead of grass stems I see a mushy green mass. And in the test with the large patch, I also see things that look like repetition patterns to me. That makes me wonder if what really happens with patches between 1024 and 2048 pixels is this: Firefly generates a patch of 1024 pixels maximum, but that patch is upscaled at the Adobe server (using AI technology) to 2048 pixels maximum, and that is what Lightroom Classic receives (if the patch that is needed is even bigger than 2048 pixels, Lightroom Classic will upscale it further). This would explain the difference I see in this test, while it would still be true that Generative Remove creates patches up to 2048 pixels.

BTW, this was just a test, to see if I can verify the 2048 pixels claim. It was not meant as a real attempt to remove something, so the option of using other tools is irrelevant.
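To make that hypothesis concrete, here's a small Python sketch of the conjectured two-stage pipeline. Both limits (1024 px for Firefly generation, 2048 px after server-side upscaling) are my conjecture, not documented behaviour:

```python
# Conjectured pipeline: Firefly generates at most FIREFLY_MAX pixels per
# side, the server AI-upscales that to at most SERVER_MAX, and Lightroom
# Classic interpolates further if the selection is bigger still.

FIREFLY_MAX = 1024   # conjectured native generation limit
SERVER_MAX = 2048    # conjectured limit after server-side AI upscaling

def patch_pipeline(selection_px: int) -> dict:
    """For a square selection of selection_px per side, report where the
    pixels would come from under the conjectured pipeline."""
    generated = min(selection_px, FIREFLY_MAX)
    after_server = min(selection_px, SERVER_MAX)
    return {
        "generated_px": generated,
        "server_upscale": after_server / generated,
        "client_upscale": selection_px / after_server,
    }

print(patch_pipeline(900))   # everything 1:1, matching the good small-patch result
print(patch_pipeline(1600))  # ~1.56x server upscale, which would explain the mush
print(patch_pipeline(3000))  # 2x server upscale plus ~1.46x more in Lightroom Classic
```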
 
I think it is not upscaled, but 'just bad'
agreed
I see a mushy green mass
LOL
I also see things that look like repetition patterns
repetition is not caused by upscaling

Firefly generates a patch of 1024 pixels maximum, but that patch is upscaled at the Adobe server (using AI technology) to 2048
Photoshop (Beta) in April 2024 offered "Generative Fill", "Generative Expand" and "Background Replace", followed by "Enhance Detail", which was described as pretty much that, but maybe using the SuperZoom Neural Filter tech rather than AI server tech.
I haven't tried it, but I assume "Enhance Detail" would be good at finding edges and filling in the gaps, and wouldn't really help with very low-detail mushy green.
 
repetition is not caused by upscaling
You don’t know that. Simple upscaling using interpolation does not cause repetition, but AI upscaling of AI generated content might under certain circumstances.

Photoshop (Beta) in April 2024 offered "Generative Fill", "Generative Expand" and "Background Replace", followed by "Enhance Detail"... maybe using the SuperZoom Neural Filter tech rather than AI server tech.
Yes, and I suspect that Lightroom Classic's Generative Remove might be using the same technology for patches that are larger than 1024 pixels. I find it a bit hard to believe that Photoshop would need this technology because 1024 pixels is still the limit of what Firefly generates, while Lightroom Classic would not need it because it supposedly gets twice the resolution Photoshop does.
 
There have been a number of examples posted in the long feedback thread in the Adobe forum (which I monitor carefully) where the details of the replacements look lower-resolution or mushy, e.g. grass and stonework looking fuzzy. But some of them weren't really noticeable (e.g. grass details) unless you did some moderate pixel-peeping and zooming in; viewing the image on a 27" display, the lower-resolution replacement didn't stand out. At least one was pretty noticeable.

In my own quick test of replacing stonework on a sidewalk, the lower resolution became noticeable for selections of about 2K x 2K, plus or minus.
 
In my own quick test of replacing stonework on a sidewalk, the lower resolution became noticeable for selections of about 2K x 2K, plus or minus.
I agree. In order to make the difference quite visible, I took screenshots at 200% zoom, so in practice it may still be quite acceptable. My point, however, was that the test shows a clear difference between Generative Remove of a patch smaller than 1024x1024 pixels and a patch (covering the same area) bigger than 1024x1024 but smaller than 2048x2048 pixels. That seems to contradict the claim that the maximum resolution with Generative Remove is 2048x2048 pixels.
 
I've read through the long feedback thread in the Adobe Forum; johnrellis, you have the patience of a saint!

My understanding is that the tool replaces pixels in the patch on a one-to-one basis, thus retaining the original resolution. What am I missing?

Califdan, when you wrote this, I was under the belief that that was how it worked; you were missing nothing.
However.
I've been experimenting with Generative Expand (Beta) within InDesign. AFAIK this also uses the 2048x2048 pixel AI from Firefly, so it may have some relevance to Adobe's other apps.
InDesign's Links panel gives you a lot of in-your-face information, for example the number of pixels in the placed image and its colour space.

My thought was that I could use this to generate bleed on one edge of a photo.
The first test produced slightly soft, mushy expansion compared to the image; I initially put this down to not matching texture and grain.
My camera captures 4000x6000 pixels; my rotated and cropped export was 3745 x 5618 pixels.
In InDesign the image is cropped again, but that doesn't make any difference to the generated image.
I noticed that my generated image was 2916x4000 pixels.
I pixel-peeped the generated image against the original jpeg. It's not just the expanded area that has been generated but the whole image, and the new image has less resolution (the long edge is now 4000 pixels). The best scenario here is that it has been downsampled, but even that is pretty damaging to the quality of a pixel-based image.
Cropping the image within InDesign makes zero difference; the generated image is always 4000 pixels and includes the cropped area.

The only way I could stop the downsample and retain the same quality for the bulk of the image was to first destructively crop to <2048x2048.
I destructively cropped to 1907x2047, expanded the frame to an exact square, and ran Generative Expand, which gave me 2047x2047 pixels and also very good results in the expanded area (no obvious mush or change in texture).
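For anyone who wants to check the arithmetic, here's a quick Python sketch of the numbers above (the 2048 px cap is the conjecture being tested):

```python
# Numbers from the test above; the 2048 px cap is conjecture.
orig_w, orig_h = 3745, 5618   # exported image placed in InDesign
ret_w, ret_h = 2916, 4000     # what Generative Expand handed back

print(orig_h / ret_h)  # ~1.40: the untouched part of the image came back downsampled
print(ret_h / 2048)    # ~1.95: consistent with a 2048 px result upscaled roughly 2x

# The workaround: cropping to 1907 x 2047 first keeps everything at or
# under the cap, so the returned 2047 x 2047 image needs no resampling.
```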

In that long feedback thread, lots of people are asking to use generative fill to fill the white bits in a Lightroom pano.
I know I'm throwing a lot of IFs and BUTs at it... but if AI takes your 8000-pixel image and gives you back 4000 pixels, and even that 4000 has been upscaled from 2048 or 2000 pixels... and for a pano that process could start at 16000 pixels.
So maybe the question is: do you always get mush from big megapixel images?

For pixel peepers, it may be that "select a square in Photoshop, generate, repeat" is the way to go for a while.

Attached jpegs: the top left window in both is AI-generated; the one to its right is original, as is everything to the right of that. If you pixel-peep at one of the white windows you can clearly see the difference in resolution between the crop from 4000 and image 26, which is 2047 pixels square and a good match to my original.

One final point is that generative AI jpegs are DeviceRGB (without a profile); they don't pick up the color profile from the original. For me there was enough of a colour shift in InDesign to see it: my original was AdobeRGB1998, and my InDesign document had an sRGB working space.
 

Attachments

  • GenAIImage_12 crop from 4000pixel.jpg (1.7 MB)
  • GenAIImage_26.jpeg (590.3 KB)
One final point is that generative AI jpegs are DeviceRGB (without a profile); they don't pick up the color profile from the original
There are a number of examples posted in the forum of removing objects against a blue sky, and the blue of the replacement doesn't match the rest of the sky. I wonder if this is related...
 
I pixel-peeped the generated image against the original jpeg. It's not just the expanded area that has been generated but the whole image, and the new image has less resolution (the long edge is now 4000 pixels)...
Cropping the image within InDesign makes zero difference; the generated image is always 4000 pixels and includes the cropped area.
This is pretty concerning. If I am following all this correctly, this has all been Lightroom (Classic?) focused.

I have been doing much of my work with these tools in Photoshop beta xx.
Should I be doing more "pixel peeping" to see exactly what I am getting size-wise? I have generally only been looking at final results on my monitor.
I should probably look at a photo after I have "resized" it - whatever that means if I have different-sized patches inside an image.

Jim
 
I would assume the color profile would be the same in Photoshop and Lightroom, since you are introducing pixels into an image and neither application supports more than one ICC color profile at a time (you can't have a layer using a separate ICC profile to the rest of the image, can you?). I'd expect the AI just works on the RGB numbers and doesn't consider ICC profiles at all.
In InDesign you can have any number of ICC profiles on the same page.

Photoshop, Lightroom, and InDesign are probably using AI similarly; I wouldn't expect them to be the same though.
I was surprised to see that my approx. 6000-pixel long edge became 4000 pixels. I can't believe that this happens in Photoshop or Lightroom.
I'm just wondering if the AI patches are sometimes bad because:
1. The whole image gets downscaled.
2. The AI generates new pixels.
3. The whole image gets upscaled.
4. Patched pixels are kept and the rest discarded.

In Photoshop you can definitely avoid this possibility by selecting squares of 1024 pixels.
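Here's a rough Python sketch of what that "select a square, generate, repeat" approach amounts to: tiling a fill region so no tile exceeds the conjectured native generation size (the 1024 px figure is just the number debated above):

```python
# Split a large fill region into tiles that each stay at or under the
# conjectured native generation size, so no individual tile is upscaled.
from typing import Iterator, Tuple

def tiles(x: int, y: int, w: int, h: int,
          max_side: int = 1024) -> Iterator[Tuple[int, int, int, int]]:
    """Yield (left, top, width, height) rectangles covering the region."""
    for top in range(y, y + h, max_side):
        for left in range(x, x + w, max_side):
            yield (left, top,
                   min(max_side, x + w - left),
                   min(max_side, y + h - top))

# A 3000 x 1800 px region becomes a 3 x 2 grid of selections:
for rect in tiles(0, 0, 3000, 1800):
    print(rect)
```

Each tile is generated independently, of course, so matching texture and seams between tiles is a separate problem.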
 
I think that bringing InDesign into the discussion does not help. InDesign is a completely different app, one that creates documents which can contain pictures with a different color profile for each image (but still not one picture with two profiles). Because the document is intended for print, I could imagine that it is no problem for InDesign to downscale an image if that image has way more pixels than needed for the final document, and it should not have to consult the user about that. Lightroom and Photoshop are different; they will not downscale an image all by themselves.

I would expect no color profile issues with Generative Remove in Lightroom, for the following reason. If you remove a patch of, say, 500x500 pixels, then Lightroom will send more than just that patch to Firefly, in order for Firefly to match the new pixels to the surrounding area of the patch. That is exactly why Generative Remove can create those unexpected results when you remove something next to a cropped edge. I expect Firefly to match the new pixels to the surrounding pixels using the RGB numbers, so it should be irrelevant to Firefly what those numbers represent in terms of color space (sRGB or AdobeRGB or MelissaRGB).
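As an illustration of that, here's a hypothetical Python sketch of how a context window around a patch might be computed. The 0.5 padding factor is made up for illustration; Adobe hasn't published how much surrounding area is actually sent:

```python
def context_window(patch, image_w, image_h, pad=0.5):
    """Expand a patch bounding box (left, top, w, h) by `pad` of the patch
    size on each side, clamped to the image, so the generator can match
    its surroundings. Near a cropped edge there is simply less context,
    which may be why the results there look odd."""
    left, top, w, h = patch
    px, py = int(w * pad), int(h * pad)
    x0, y0 = max(0, left - px), max(0, top - py)
    x1 = min(image_w, left + w + px)
    y1 = min(image_h, top + h + py)
    return (x0, y0, x1 - x0, y1 - y0)

# A 500x500 patch mid-frame in a 6000x4000 image gets a full 1000x1000
# context; the same patch flush against the left edge gets less.
print(context_window((2750, 1750, 500, 500), 6000, 4000))  # (2500, 1500, 1000, 1000)
print(context_window((0, 1750, 500, 500), 6000, 4000))     # (0, 1500, 750, 1000)
```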
 
InDesign is not downscaling the image; Generative Expand (Beta) is replacing the image with a downscaled version. InDesign would only downscale during export to PDF or print. Also, the image does not have way more pixels than needed: the crop of the picture frame in InDesign makes no difference to the AI, and you get the whole uncropped image returned.
The InDesign reference is not important. I also know that Generative Expand is entirely different to Lightroom's Generative Remove; I'm just curious as to why the results are similar for large-pixel images.
 