• Welcome to the Lightroom Queen Forums! We're a friendly bunch, so please feel free to register and join in the conversation. If you're not familiar with forums, you'll find step by step instructions on how to post your first thread under Help at the bottom of the page. You're also welcome to download our free Lightroom Quick Start eBooks and explore our other FAQ resources.

Is Sensei Search available with Lightroom Classic?

Status
Not open for further replies.

clee01l (Senior Member, Lightroom Guru; Houston, TX USA; Lightroom Classic 9.2.x on macOS 10.15 Catalina)
The AI search function in Lightroom (mobile) has great potential. Has it been added to LR Classic, and if it has, how do I enable it?
 
Can you describe Sensei Search? In Classic, Sensei is the feature used when applying Auto, but I'm not aware of a search option. I have never installed Lightroom (the cloud version) so I can't comment.
 
No, I don't believe it has been implemented in Classic.
 
Each image is analysed and automatically "tagged" with various words, which are then used by the search engine. So type "dog" into the search bar and you should get all pictures containing a dog (though of course you can further refine the search to narrow down what you might be looking for). It's by no means perfect, but it could be a great assist if you haven't ever keyworded your library.
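The tag-then-search idea described above can be sketched as a tiny inverted index. This is only an illustration of the general technique, not Adobe's implementation: the real Sensei tagger is a private model, so `predict_tags` here is a made-up stand-in.

```python
# Toy sketch of tag-then-search. predict_tags stands in for an AI
# tagger whose output would normally come from a vision model.
from collections import defaultdict

def predict_tags(image_name):
    """Stand-in for an AI tagger; returns made-up tags per image."""
    fake_model_output = {
        "IMG_001.jpg": ["dog", "grass"],
        "IMG_002.jpg": ["cat", "sofa"],
        "IMG_003.jpg": ["dog", "beach"],
    }
    return fake_model_output.get(image_name, [])

def build_index(image_names):
    """Tag every image once, ahead of time, and store an inverted index."""
    index = defaultdict(set)
    for name in image_names:
        for tag in predict_tags(name):
            index[tag].add(name)
    return index

def search(index, tag):
    """Run-time search is just a dictionary lookup - no model inference."""
    return sorted(index.get(tag, set()))

index = build_index(["IMG_001.jpg", "IMG_002.jpg", "IMG_003.jpg"])
print(search(index, "dog"))  # ['IMG_001.jpg', 'IMG_003.jpg']
```

Because the tagging happens once at import time rather than at search time, the search itself stays fast no matter how slow the tagger is.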
 
Can you describe Sensei Search? In Classic, Sensei is the feature used when applying Auto, but I'm not aware of a search option. I have never installed Lightroom (the cloud version) so I can't comment.
In Lightroom Mobile I can filter on Sensei phrases like "Flower" or "Water Lily", and Lightroom will find all of the images that contain what it thinks to be a flower or, more narrowly, a "Water Lily".
As Jim says, it has not been implemented in Classic.

Here's how I solved it. In Lightroom Classic I have a lot of images that are "Flowers". Some have not been keyworded. In Lightroom Mobile I can perform a Sensei Search filter and Lightroom Mobile will find all of the "Flowers". I can do the same for "Water Lily". Each of these Lightroom Mobile filters can be captured in an Album. The Album shows up in LR Classic as a Collection, and I can keyword it there.
 
The AI search function in Lightroom (mobile) has great potential. Has it been added to LR Classic, and if it has, how do I enable it?
It could not be added because this is something that works by auto-tagging when the images are uploaded to the cloud. Sensei runs in that cloud. So at best it could only work with synced images in Lightroom Classic. You can see this very well in Lightroom desktop if you search for a subject directly after you have added the images with that subject. Lightroom won't find them. Then repeat the search ten minutes or so later and it will find them.
 
Each image is analysed and automatically "tagged" with various words, which are then used by the search engine.
What is Adobe using to do the tagging of the images? As johnrellis above mentions, his plug-in uses Google (I use his plug-in).

For effective searches you need the tagging done ahead of time and stored, rather than performed at run-time, which would be too slow.
 
What is Adobe using to do the tagging of the images? As johnrellis above mentions, his plug-in uses Google (I use his plug-in).
They are using their own AI system called Sensei. :)

Adobe Sensei: machine learning and artificial intelligence

Everyone using Cloudy is helping train it. Here's a comment on their FAQ page:

Machine learning analysis on your content
Adobe uses automated systems to analyze your content using techniques such as machine learning in order to improve our apps and websites. This analysis may occur as the content is sent or received via an Adobe website, or when the content is stored on Adobe servers.
 
Everyone using Cloudy is helping train it.
Their various web pages do seem to imply that Sensei photo search may be trained using photos uploaded to the Lightroom Cloudy. But the web pages I've found don't specifically mention Sensei photo search. For example:
https://helpx.adobe.com/manage-account/using/machine-learning-faq.html
That page indicates that Adobe "may" analyze your uploaded "content".

In general, training machine learning requires labeled training sets, and for Sensei photo search that would mean large numbers of photos with accurate labels ("dog", "cat", "bridge", etc.). While keywords and captions attached to Cloudy photos might be good enough to serve that purpose, they also might be too noisy to be useful for high-quality training.

By the way, if anyone is concerned, the web page above gives information on how to opt out from having any of your uploaded content analyzed for machine-learning purposes.
 
I agree. I doubt that Sensei can be trained with uploaded photos if the user doesn't tell Sensei what is in the photo.
The interesting question is: if you take all the uploaded photos, billions of them, plus billions of keywords, captions and titles, and feed it all into an AI, would it learn?

Sure, some people will label a cat as a dog, or there will be 5 animals in the same photo and only 3 keywords, with no way to tell which is which, but the number of photos available to Adobe is unbelievably large.

That's the weird thing about deep learning. It's not necessarily about clean data, but about lots and lots of data.
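The "lots of noisy data" intuition can be shown with a deliberately crude simulation. The numbers below are made up purely for illustration: even when only 60% of user-supplied labels are correct, the correct label still dominates once you have many examples.

```python
# Toy illustration: with enough examples, the majority label survives
# a substantial amount of labeling noise. All numbers are made up.
import random
from collections import Counter

random.seed(0)
true_label = "dog"
wrong_labels = ["cat", "horse", "wolf"]

# Simulate 10,000 user-supplied labels where only 60% are correct
# and the rest are scattered across three wrong labels.
labels = [true_label if random.random() < 0.6 else random.choice(wrong_labels)
          for _ in range(10_000)]

counts = Counter(labels)
print(counts.most_common(1)[0][0])  # prints "dog"
```

Real training is of course far more involved than majority voting, but the same principle is why noisy crowd-sourced labels can still carry a usable signal.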
 
The interesting question is: if you take all the uploaded photos, billions of them, plus billions of keywords, captions and titles, and feed it all into an AI, would it learn?

Sure, some people will label a cat as a dog, or there will be 5 animals in the same photo and only 3 keywords, with no way to tell which is which, but the number of photos available to Adobe is unbelievably large.

That's the weird thing about deep learning. It's not necessarily about clean data, but about lots and lots of data.
The point is that most people who use Lightroom desktop will probably not label their photos at all. In that case even ten million photos won't train Sensei, and that is what I said.

And the other point is that you cannot tell Sensei it was wrong. If I search my online photos for 'elephant', Lightroom brings up about one hundred pictures that indeed contain elephants, and one picture that contains two rhinos. I can tell Sensei that this picture contains rhinos, but not that it does not contain elephants. So even after I added the keyword rhino, the picture is still found when I search for elephants.

By the way: I tried this again after I reported it more than a year ago, and the result is exactly the same. Sensei hasn't learned anything in this respect in the last year.
 
By the way: I tried this again after I reported it more than a year ago, and the result is exactly the same. Sensei hasn't learned anything in this respect in the last year.

I have noticed that over the last year Sensei has improved its ability to recognize extremely well-known landmarks. By "extremely" I mean almost-unique objects such as the Eiffel Tower that everyone on the planet should know, not non-descript buildings like Buckingham Palace. While Google should have the advantage of using StreetView to recognize many more spots, Sensei seems to have progressed from "complete inability" to "occasionally".
 
And the other point is that you cannot tell Sensei it was wrong.
I've noticed that if you do a search on LR Web, you get a message at the bottom of the screen asking "Was this search helpful? YES NO". If you click "NO" then you get an opportunity to supply feedback. I have no idea what happens with this feedback, but *maybe* it gets used to help them train the system?
 
I've noticed that if you do a search on LR Web, you get a message at the bottom of the screen asking "Was this search helpful? YES NO". If you click "NO" then you get an opportunity to supply feedback. I have no idea what happens with this feedback, but *maybe* it gets used to help them train the system?
Maybe, but I doubt it. Even more direct feedback did not lead to an improvement.
 
...non-descript buildings like Buckingham Palace. While Google should have the advantage of using StreetView to recognize many more spots, Sensei seems to have progressed from "complete inability" to "occasionally".
Google Cloud Vision does indeed recognize Buckingham Palace, from several different views:
(screenshot: Cloud Vision identifying Buckingham Palace)

But Cloud Vision also recognizes many locations (what it calls "landmarks") unrelated to Street View, from obvious ones:

Half Dome, Yosemite:
(screenshot)

to much less obvious places:

Green Cove, Cape Breton Highlands National Park:
(screenshot)

Vail Ski Resort, Colorado:
(screenshot)

to ones that are seemingly completely obscure:

Tower Bridge, London:
(screenshot)


Google is leveraging all the photos on Google Maps, which are tagged with coordinates, and those coordinates have associated labels from Google Maps. Years ago, Google bought a company that encouraged users to share geotagged photos, and now Google Maps directly encourages people to share photos. I don't know if Google's Terms of Service allow them to incorporate geotagged photos from Google Photos in the training of Cloud Vision.

In general, while Sensei seems reasonably good at recognizing generic objects ("elephants", "bridge"), it's hard to see how Adobe can compete with Google's access to labeled photos.
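For anyone who wants to reproduce the Cloud Vision landmark results shown above, a sketch along these lines should work with recent versions of Google's `google-cloud-vision` Python client. The `landmark_detection` convenience method is part of that client; `summarize_landmarks` and the file name are our own, and real GCP credentials are required to actually call the API.

```python
# Sketch of a Google Cloud Vision landmark lookup like the ones shown
# above. Requires the google-cloud-vision package and GCP credentials.
# summarize_landmarks is our own helper, not part of the API.

def summarize_landmarks(annotations):
    """Reduce landmark annotations to (name, confidence) pairs,
    highest confidence first."""
    return sorted(((a.description, round(a.score, 2)) for a in annotations),
                  key=lambda pair: -pair[1])

def detect_landmarks(path):
    # Imported here so the helper above works without the package installed.
    from google.cloud import vision
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.landmark_detection(image=image)
    return summarize_landmarks(response.landmark_annotations)

# Example usage (needs credentials and a local image file):
#   for name, score in detect_landmarks("half_dome.jpg"):
#       print(f"{name}: {score}")
```

The API returns a list of candidate landmarks with confidence scores (and often coordinates), which is roughly the information visible in the screenshots.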
 
That very much reflects my tests, John, as I'd thrown Buckingham Palace and similar obvious mountains at both. I don't know if Cloud Vision uses StreetView data, but the sheer amount of data available to Google must dwarf whatever Adobe can crunch.
 
Don't forget that GPS data is likely to be a part of the algorithm, so even small details or bigger features could be easily linked to names.

 
I don't know if Google Cloud Vision ever uses assigned GPS coordinates for helping to label photos, but in the examples I posted above, none had assigned GPS coordinates when they were submitted to Cloud Vision.
 
That very much reflects my tests, John, as I'd thrown Buckingham Palace and similar obvious mountains at both. I don't know if CloudVision uses StreetView data, but the sheer amount of data available to Google must dwarf whatever Adobe can crunch.
John, as we say in the United States, "You ain't just whistling Dixie." Google has economies of scale that will crush competitors in many categories (GPS devices, anyone?). How many high street bookstores still remain, thanks to Amazon? Whether or not that's a good outcome is open to debate.
 