I've read through the long feedback thread on the Adobe forum; johnrellis, you have the patience of a saint!
I'm not sure I understand the question. In fact I'm pretty sure I don't.
My understanding is that if I have an image that is 6000 x 8000 pixels and I use any of the "remove" tools (clone, heal, remove) to change some pixels in a "patch", the overall dimensions of the image remain 6000 x 8000, which means the tool replaced pixels in the patch on a one-to-one basis, thus retaining the original resolution. What am I missing?
Califdan, when you wrote this I believed that was how it worked, and that you were missing nothing.
However.
I've been experimenting with Generative Expand (Beta) within InDesign. AFAIK this also uses the 2048 x 2048 pixel Firefly AI, so it may have some relevance to Adobe's other apps.
InDesign's Links Panel gives you a lot of in-your-face information, for example the pixel dimensions of the placed image and its colour space.
My thought was that I could use this to generate bleed on one edge of a photo.
The first test produced a slightly soft, mushy expansion compared to the image; I initially put this down to the generated area not matching the texture and grain.
My camera captures 4000 x 6000 pixels; my rotated and cropped export was 3745 x 5618 pixels.
In InDesign the image is cropped again, but that makes no difference to the generated image.
I noticed that my generated image was 2916 x 4000 pixels.
I pixel-peeped the generated image against the original jpeg. It's not just the expanded area that has been generated but the whole image, and the new image has lower resolution (the long edge is now 4000 pixels). The best scenario here is that it's been downsampled, but even that is pretty damaging to the quality of a pixel-based image.
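To put rough numbers on the loss, here's a back-of-envelope sketch from my test figures. This is just my arithmetic, not anything Adobe documents, and the 2048-pixel generation size is my own assumption about Firefly:

```python
# Back-of-envelope resolution loss from my test figures.
# The 2048 px generation size is an assumption, not documented behaviour.
export_w, export_h = 3745, 5618      # my rotated/cropped export
gen_w, gen_h = 2916, 4000            # what Generative Expand returned
firefly_edge = 2048                  # assumed native generation size

export_mp = export_w * export_h / 1e6
gen_mp = gen_w * gen_h / 1e6
long_edge_scale = gen_h / export_h       # how much the long edge shrank
assumed_upscale = gen_h / firefly_edge   # if 4000 was upscaled from 2048

print(f"export: {export_mp:.1f} MP, generated: {gen_mp:.1f} MP")
print(f"long edge scaled by {long_edge_scale:.2f}x")
print(f"possible upscale from {firefly_edge}: {assumed_upscale:.2f}x")
```

On my figures that's roughly 21 MP in, under 12 MP out, with the long edge at about 0.71x of the export.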
Cropping the image within InDesign makes zero difference; the generated image is always 4000 pixels on the long edge and includes the cropped area.
The only way I could stop the downsample and retain the same quality for the bulk of the image was to first destructively crop to under 2048 x 2048.
I destructively cropped to 1907 x 2047, expanded the frame to an exact square, and ran Generative Expand, which gave me 2047 x 2047 pixels and also very good results in the expanded area (no obvious mush or change in texture).
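If my 2048-limit guess holds, a check like this (a hypothetical helper of my own, nothing from Adobe) would tell you whether a planned crop plus expansion can stay at native generation resolution:

```python
# Hypothetical helper: does the crop plus the planned expansion fit inside
# the assumed 2048 x 2048 Firefly generation square? (My assumption only.)
def stays_native(width, height, pad_w=0, pad_h=0, limit=2048):
    return width + pad_w <= limit and height + pad_h <= limit

# My workaround: 1907 x 2047 crop, expanded 140 px to a 2047 x 2047 square
print(stays_native(1907, 2047, pad_w=140))   # True - no downsample observed
# My full 3745 x 5618 export fails the check
print(stays_native(3745, 5618))              # False
```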
In that long feedback thread, lots of people are asking to use Generative Fill to fill the white bits in a Lightroom pano.
I know I'm throwing a lot of IFs and BUTs at it... But if AI takes your 8000-pixel image and gives you back 4000 pixels, and even that 4000 has been upscaled from 2048 or 2000 pixels... And for a pano that process could start at 16000 pixels.
So the question may be: do you always get mush from big-megapixel images?
For pixel peepers, "select a square in Photoshop, generate, repeat" may be the workflow for a while.
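For the pano case, here's a rough tile count under the same 2048 assumption (again, my guess at the generation size, not anything documented):

```python
import math

# Rough count of 2048 px squares needed to cover one edge of an image
# at full resolution; 2048 is my assumed Firefly generation size.
def tiles_needed(edge_px, tile=2048):
    return math.ceil(edge_px / tile)

print(tiles_needed(16000))  # 8 squares along a 16000 px pano edge
print(tiles_needed(6000))   # 3 along my camera's 6000 px edge
```

So "select, generate, repeat" on a big pano could mean eight or more passes along the long edge alone.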
Attached jpegs: in both, the top-left window is AI generated; the one to its right is original, as is everything to the right of that. If you pixel-peep at one of the white windows you can clearly see the difference in resolution between the crop from 4000 and image26, which is 2047 pixels square and a good match to my original.
One final point: the generative AI jpegs are DeviceRGB (without a profile). They don't pick up the colour profile from the original, so for me there was enough of a colour shift in InDesign to see it. My original was AdobeRGB1998, and my InDesign document had an sRGB working space.