
Catalog Backup to NAS - So Slooow!


mist_d

New Member
Joined
Sep 14, 2022
Messages
13
Lightroom Version Number
13
Operating System
macOS 14 Sonoma
Do you guys know why Lightroom Classic's catalog backup is so slow when backing up directly to a NAS shared folder?

I mean really slow compared to a local disk.

I am running a Synology 1522+ on a 10 Gbit network with 5 WD Red Pro HDDs (7200 rpm), connected to my M1 Max Mac Studio.

I have an SMB share with a dedicated folder for Lightroom catalog backups (the backup runs every time Lightroom exits).

If I point the backup destination at this mounted shared folder, the backup process is waaay slower than to a local disk.

I know it's not related to raw file transfer, since the NAS is fast (500+ MB/s) when copying files.

Somehow, for some reason, the backup process doesn’t like NASes that much :)

So, any ideas? :)

PS: I'd also love it if Adobe implemented an option to automatically delete backups older than X.
 
1. Are the network cards at both ends 10 Gbit cards? I assume from your details above that you are using a 10 Gbps switch / ports, but often there are cheap old 100 Mbps switches hidden/forgotten in obscure places... e.g. under a desk.

2. Internal spinning SATA 6 Gb/s drives typically max out at around 150 MB/s, often less, especially when writing.
3. Create a test folder with 100 GB of image files on your system and try copying and pasting it to a test folder on your NAS. Time it in seconds. Repeat a few times and see if you get consistent results.

What is the real-world measured transfer speed?

If you use a utility such as Beyond Compare or GoodSync to do this copy, you will get a visual picture of the status / progress of the copy, and you may get actual measurements of the transfer rate.


I put in a NAS many years ago, but was shocked by the actual real-world bandwidth and within 48 hours bought spinning SATA disks to put into my PC instead. The then brand-new Synology NAS was immediately relegated to being a backup destination.

I have been waiting years and years for 10 Gbit networks to become available at non-commercial (i.e. non rip-off) prices. I assume wireless has taken over as the network of choice for most, but I do see the possibilities of a 10 Gbps based NAS and will watch for positive real-world experiences.
 
The network is FULL 10 Gbit at both ends, wired with STP Cat 7 + STP Cat 6A short patch cords.
The Mac Studio has a 10 Gbit network card built in, and I use Synology's 10 Gbit add-on card + a 10 Gbit switch (QNAP QSW-2104-2T).
So, yes, 100% positive, full 10 Gbit between the two :)

WD Red Pro (6 TB) drives do about 220 MB/s individually.
In my NAS, using RAID 6 ("basic" RAID 6, not SHR-2) with 5 HDDs, I get a consistent ~600 MB/s write and ~900 MB/s read, using various disk speed tests across multiple runs.

[Screenshot: disk speed test results]

I also got a consistent 500+ MB/s transfer rate when copying bigger jobs (e.g. migrating my data to the NAS).

My Lightroom catalog backup zip archive is 545 MB.
If I copy it manually to the NAS (copy/paste the created zip file), it takes literally about 1 second. :)
Lightroom catalog backup to local disk: 14 seconds (full process)
Lightroom catalog backup to NAS (SMB share / 10 Gbit network): 1 minute 38 seconds (full process)
* Integrity test and catalog optimization were enabled in both scenarios.

Basically, when using the NAS as the destination for the catalog backup, it takes almost 10 times longer than on local disk.
Most of this time is spent on "Copying Catalog" (the biggest chunk), with "Compressing catalog backup" taking a bit less but still a long time.


PS: There is also a thread on Adobe's forum about this. But, you know... just "crickets" from Adobe. I was curious whether some of you guys had figured this out.
 
A good looking setup…

I cannot explain the Adobe delay, other than I suspect Adobe may have extra logic built in to handle network copies, or they are not using the best network routines for such copies.

While the scale of the difference is very high… it is still a short time.

Perhaps set up a GoodSync job to copy the catalog, either scheduled (say every night) or on demand or both, so the catalog backup is just an automated routine. At a frequency of your choice (say weekly or monthly), have Lr do the backup with all optimisations turned on.
 
Can you monitor network traffic and check for collisions? Maybe the network copy is doing a block comparison to ensure the file is copied over the LAN. Just pondering possible causes for the delay.
 
I monitored the network traffic during a catalog backup to the NAS and didn't see any collisions.
I think it comes down to how they implemented the network backup.

Let's assume that the archive creation with all the checks takes those 14 seconds when done locally. The archive copy, even with checksums enabled, should be pretty much instant on my network.

I think they do it rather inefficiently: sending the "naked" catalog in blobs (small parts) over the network (with checksum verification for those blobs) and then doing the integrity checks, optimization and packaging (zip creation) directly on the NAS.

It's not that long in my case, but it's still annoying to watch the backup process hang there compared to local disk, especially knowing that a simple copy takes literally 1 second.

Yes, until now I did the backups on a local drive and used Carbon Copy Cloner (similar to GoodSync) to copy them to the NAS.
 
I had forgotten that the Lr backup has a built-in zip/compression phase. This may be a factor.

I use dedicated backup software which backs up all my data files, including the catalog and images from 2023 and 2024 (Macrium Reflect, Windows only). This does everything for me in a single automated step (i.e. a full backup once a week, incremental backups every morning, and 4 generations of backups retained automatically). I just set and forget. I get an email at 5.30 or 6 pm confirming that the backup has been successful.

The key point I am making is that it is totally automated, very efficient, and takes up none of my time. I am not familiar with CCC, as it is a Mac-only product, so I'm not sure if it can be configured to copy the catalog files, compress the result, and automatically remove old copies on a defined backup cycle.

Maybe the best option is to have Lr do the backup to a local folder on exit and have CCC scheduled to sync this backup folder to your NAS as a nightly exercise.
 
I think they do it rather inefficiently: sending the "naked" catalog in blobs (small parts) over the network (with checksum verification for those blobs) and then doing the integrity checks, optimization and packaging (zip creation) directly on the NAS.
Have you done other bulk file transmissions, e.g. a file copy in either direction directly from macOS? Have you checked with Synology?

Not to get too, too geeky here, but if the transmission is only short "blob" TCP packets, that would be extremely inefficient. The way TCP/IP networking works, transmitted data must be acknowledged by the receiver, so the sender knows the data was received successfully. The Ethernet "Maximum Transmission Unit" is typically 1500 bytes, but there are other considerations. In any case, that is not just a small "blob."

I am assuming (but the IT guys in this forum might correct me) that Adobe applications do not get involved with the network "stack," but simply feed data into a "socket," which is then responsible for data transmission. All that said, there is clearly an issue.

I have no plans to get a NAS, but I'm sure others on this forum either have this problem or are contemplating getting a NAS. I don't know anything about macOS. Try this search: macos poor performance with synology nas
 
Have you done other bulk file transmissions, e.g. a file copy in either direction directly from macOS? Have you checked with Synology?
Yes, and it's blazing fast over SMB (10 Gbit network). For example, the catalog backup zip archive (545 MB) takes 1 second to copy.

If you configure Lightroom to back up the catalog directly over SMB to the mounted shared folder ... it takes literally 10 times longer than on local disk.
It's down to how Lightroom deals with backups over the network for sure, since the actual file transfer (zipped backup) takes literally 1 second (tested) using copy/paste.

When I analyzed the network traffic using Wireshark (a network protocol analyzer), I noticed something about "blobs" and the .lrcat file, but I might be wrong since I'm not that proficient in inspecting network packets :)

It's not macOS, nor Synology, since pretty much everything else works flawlessly: file transfers over SMB, file/folder syncing using third-party apps, file transfers in the browser using Synology's DSM, etc. All working perfectly fine, at the expected speed.

There is just a big performance degradation (10x worse) within Lightroom's catalog backup mechanism when using the NAS as the destination.

PS: For reference, here is a similar topic on Adobe's forum where people have been complaining about the same thing since 2020, without a fix yet:
https://community.adobe.com/t5/ligh...gs-at-quot-copying-catalog-quot/td-p/11700115
 
I have a NAS. It is configured as JBOD; no RAID for me. I do not send my LrC catalog backups to the NAS. I do use the NAS for Time Machine backups, Acronis backups, and a self-backup (1 large and 2 small volumes to one large volume). The problem I see from the OP is the use of RAID, which adds redundancy and parity checking. While these are good for data protection and data integrity, they do slow down write processes.

I have a 1 Gigabit network, which is slower than 10 Gigabit but faster than any WiFi connection to a 10 Gigabit network.
 
The problem I see from the OP is the use of RAID, which adds redundancy and parity checking. While these are good for data protection and data integrity, they do slow down write processes.
RAID actually increases performance compared to "classic" JBODs / individual HDDs, since you are "aggregating" the reads/writes of every drive. The more drives in the array, the better the performance.

The write penalty on RAID comes from the parity calculations, BUT this penalty is taken out of the TOTAL "aggregated" performance, not out of the individual drives' read/write transfer rate.

For example, my entire RAID 6 array of 5 HDDs over a 10 Gbit network performs kinda like a regular "local" SATA SSD on writes (500-600 MB/s) and way better (900 MB/s) on reads. So basically that's more than twice the "write" performance of any individual, locally connected HDD and more than 4 times the "read" performance.

The performance hit is rooted in the backup process and the way Lightroom deals with catalog backups directly on a network share. The very same backup process plummets (10x slower) when done directly to the NAS as the destination, and it is not merely a matter of file transfer.

It's nothing to do with the RAID array's performance; the RAID "accepts" and saves the backup file (zip archive) in 1 second. :)
Lightroom is doing some inefficient jiu-jitsu during this process.
 
It might be worth filing an official bug report with Adobe. You have good data to back up the performance issue. You may get a response from Adobe, or you may encourage Adobe to do something about it.

I think that 10Gbps networks will become increasingly important and this in turn will encourage more use of NAS.

Adobe will need to deal with it sooner or later. It must be very easy for Adobe to test out the scenario you have described.
 
RAID actually increases performance compared to "classic" JBODs / individual HDDs, since you are "aggregating" the reads/writes of every drive. [...]
My bottom line is that you never need RAID unless your business needs 7x24 uptime on data. I got burned once when a hardware RAID controller failed and the data was stored in a proprietary filesystem. I'll never trust my data to a proprietary RAID device, and of course a RAID controller might be a point of failure.
 
My bottom line is that you never need RAID unless your business needs 7x24 uptime on data. I got burned once when a hardware RAID controller failed and the data was stored in a proprietary filesystem. I'll never trust my data to a proprietary RAID device, and of course a RAID controller might be a point of failure.
True. RAID is mostly good for uptime, somewhat better performance than standard HDDs, huge storage capacities, and lastly some features that, together with modern file systems (like ZFS / BTRFS), ensure data integrity at rest (file corruption protection).

It's a really, really BAD idea to treat RAID as backup, since, as you mentioned, it is a single point of failure, no matter the RAID "flavor".

Yeah, hardware RAID is true evil, no kidding :)

If your hardware RAID controller dies, the data is TOAST, or only recoverable with the same model of controller.
The good part is that hardware RAID is pretty much obsolete nowadays. I wouldn't use it either, for the same reason.

Synology's RAID 6 is software RAID (they also have their proprietary SHR-2 / SHR-1 software RAID flavors, but I don't use those).
This RAID array can be mounted on any Linux system with a bit of juggling in an SSH terminal (you literally connect all the HDDs to a PC running Linux, reconstruct the array, and access all the data, no problem).

... but that is an "end-of-days" type of last resort; usually you have backups (on-site / off-site / cloud), or you might get another Synology NAS (although I hate the "vendor lock-in" idea) and restore all of your data.

There are some downsides for sure, but the benefits are really appealing to me:

- You have literally a small server that "works" for you while you sleep.
- You get automatic data-integrity protection, with checksums and self-healing "data scrubbing" that protect all your files from bit rot / corruption.
- RAID parity + checksums + a BTRFS / ZFS file system + a "data scrubbing" task that runs automatically every 3 months will ensure that your files are rock solid and bulletproof at rest. No corrupted files, ever. (This one was big for my decision; see the sketch after this list.)
- Your HDDs are automatically tested for potential failure (scheduled SMART tests run automatically).
- Your backups are done while you are asleep, even with your workstation off: backups on-site, off-site, to the cloud, etc.
- You get snapshots and file system versioning (although some backup software can do this on local SSDs), with versions that can also be replicated off-site.
- You can play around with some "self-hosted" concepts and ideas: having your own "Google Photos" / photography archive service that can be accessed all over your house, from multiple devices / by multiple users. You can even "expose" those self-hosted services to the internet if you are brave enough :) (fearing ransomware).
- Sometimes a NAS benefits the whole family: giving each family member their own "space" in a centralized household data storage "vault" with capacity that far exceeds single drives.

I agree though: sometimes this is way overkill and it's not a solution for everybody.
 