12-16TB external storage: (1) how fast is enough? (2) move from NAS to DAS. (3) bit rot issues?


Selwin
Active Member
Joined: Nov 28, 2010
Messages: 907
Location: The Netherlands
Lightroom Experience: Advanced
Lightroom Version: Classic
Lightroom Version Number: 10.1 (Mac Intel, soon Mac Silicon)
Operating System: macOS 10.15 Catalina; iOS
Hello all,
This is a question about which external (non-NAS) storage to get for safe and fast Lightroom operation:
Q1: How fast is fast enough for Lightroom?
Bear in mind that I will be upgrading my 2013 MacBook Pro to an Apple Silicon 16" MacBook Pro in 1-2 years.
Q2: What storage is safe against mishaps due to hard drive failure?
I practise the 3-2-1 backup strategy: 3 copies of the same data, 2 at home and 1 remote (rotating).

I'm moving away from NAS for my RAW photo storage. As a DS415play replacement, I almost pulled the trigger on a DS1621+ with six 8TB drives to accommodate my growing photo storage needs. Then, when I calculated its power draw and yearly energy consumption (around 800 kWh), I became conscious of my own ignorance and wasted ecological footprint. I will keep my 415play, which consumes far less energy, and I am considering upgrading it to SSDs to reduce consumption even more.

Anyway I'm going over my requirements again:
OLD requirements (for NAS):
- Photos available anywhere in the home (hence the NAS)
- 10GbE speed (I don't need full 1000MB/s speed, but I don't want to be limited to 100MB/s)
- 24TB total usable RAID6 (Synology SHR2) storage for photos, media library and NAS backups
- btrfs file system, allowing for enhanced (but still far from 100%!) bit rot / flipping / bad sector protection
From an ecological perspective (I could easily afford the NAS and the energy bill), I am dropping the new NAS idea.

NEW requirements (for ext drive):
- Photos available only at my main desk, allowing for external drives connected through TB3 / USB3
- Adequate speed for Lightroom: how fast does it need to be not to be a bottleneck?
- 16TB total usable storage (for photos only)
- "some" protection against bit rot / flipping / bad sector protection

What are you using for external drives?

So far I have eyed the following options:
G-Technology 14/28TB 2-bay drive: 14TB RAID1, TB3/USB-C, so it will connect directly to an Apple Silicon MacBook Pro
Promise Pegasus R4 4-bay drive: 12TB RAID5, TB3; might need to upgrade the drives to higher capacity

So I am open to your suggestions!
 
I don't speak Mac, but let me speak briefly to bit rot. Generally speaking, RAID (any RAID) is a poor defense against bit rot. Bit rot is usually defined as an undetected change in data; RAID can only defend against detected failures.

To defend against bit rot you need some sort of logical safety net above the physical storage, either in the file system or in an application layer. Some file systems - like zfs, btrfs and Microsoft's aborted foray with ReFS - do this by controlling the order of updates (to assist in recovery) and periodically computing a separate vertical checksum (over a file, as opposed to a disk sector) to notice unanticipated changes. Sadly these have not been widely adopted for home use (or even commercial use, frankly).

The need to defend against bit rot goes up as size goes up, due to statistics; all hardware error checks are probabilistic to some degree, aiming to detect errors at one in a million, billion, trillion -- whatever. Sadly, those checks often do not keep up with the growth in disk capacity. So your odds of ever seeing bit rot on 1TB are a lot lower than on 12TB. Still low. Very low, by the way. But never zero.

I do not know if there are storage systems for the Mac designed to use these file systems. Another approach is to do it above the application level -- use a program (maybe a backup program) that notices changes. For example, if you have a change in the checksum of a file but no update of the modification date, that's a red flag. These too have faded -- there was an ImageVerifier at one point (I think that's the name), and I wrote one for Windows for Lightroom, but none ever had much interest. Adobe implemented data checksums in DNG (but only for the image section, not the whole file). Lacking any other technique, that is worth considering, though using DNGs has other implications, not all good. I am not sure the downsides outweigh the bit-rot upsides; but if you are using a DNG workflow anyway, definitely use this and check regularly.
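If anyone wants to roll their own version of that red-flag check, here's a minimal Python sketch of the idea (not a polished tool; the manifest file name and the SHA-256 choice are just my assumptions): it records a hash and modification time for every file under a folder, and on later runs flags any file whose contents changed while its modification time did not.

```python
# Minimal sketch: record SHA-256 + mtime for every file in a JSON manifest,
# then flag files whose content hash changed even though the mtime did not
# (the classic bit-rot red flag). Manifest name is an arbitrary example.
import hashlib, json, os, sys

MANIFEST = "photo_manifest.json"  # hypothetical manifest file name

def sha256_of(path, chunk=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def scan(root):
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            manifest[path] = {"hash": sha256_of(path), "mtime": os.path.getmtime(path)}
    return manifest

def main(root):
    current = scan(root)
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as f:
            previous = json.load(f)
        for path, info in current.items():
            old = previous.get(path)
            if old and old["hash"] != info["hash"] and old["mtime"] == info["mtime"]:
                print("POSSIBLE BIT ROT:", path)
    with open(MANIFEST, "w") as f:
        json.dump(current, f, indent=2)

if __name__ == "__main__":
    main(sys.argv[1])  # e.g. python bitrot_check.py /Volumes/Photos
```

Run it against your RAW folder from time to time and keep the manifest somewhere safe; it's only an illustration of the principle, not a replacement for a proper backup tool.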

Hopefully people with Mac knowledge can comment on specific solutions. Heck, maybe Mac (which is a unix clone anyway) has a zfs or btrfs or similar file system.

All that said -- your FAR larger risk, in my opinion, is application or human error. A favorite historical footnote: in one Lightroom release, when you first ran it, it picked the alphabetically first folder on your drive and removed it. Not an image folder, just... a folder. Now, that got noticed quickly, but "stuff" happens, and a Lightroom release might one day corrupt or lose a bunch of photos -- how soon would you notice? Bit rot and RAID defenses do nothing here, zero. The only solution is LONG version-history backups. Your 3-2-1 is somewhat pointless without versioning. Be sure you have it. You need to be able to get any old version of an image file going back even years (bear in mind that in a raw workflow they should almost never change, so this is NOT a lot of additional storage). This is also great protection against human error.

Enough rambling... hopefully the Mac experts will show up soon.
 
For performance, the speed of the drive containing the LrC catalog is the main concern (depending on how patient you are). However, the speed of the drive where the images live only matters in a few situations:

1) when you import images
2) if LrC needs to create a larger preview (e.g. when you display an image in the Develop module and don't already have a 1:1 preview of it)
3) if you are using an image file type that stores XMP data inside the file (e.g. JPG, DNG), then whenever you save metadata to disk
4) when you export images

There may be a few others as well, but the takeaway is that LrC does a lot of read/write operations on the catalog and very few on the actual image files. So performance of the catalog drive matters more than performance of the image drive. I have both my catalog and images on a WD 4TB "My Passport" portable external HD and for the most part am OK with performance - but I'm not uber-critical of non-lightning-fast response times. If I do something in LrC that takes 3 or 4 seconds, it's not a big deal for me. However, if such things drive you crazy, then first put your catalog on a solid state drive (SSD) and your images on as fast a drive as you can, knowing full well that the catalog drive will have significantly more performance value than the images drive.
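If you want to sanity-check whether a given drive is "fast enough" for the image side, a quick and dirty approach is to time sequential reads of a few large raw files sitting on it. A rough Python sketch of that, under the assumption that you pass a handful of file paths on the command line (ballpark numbers only; OS caching will inflate repeat runs):

```python
# Very rough sequential-read test: pass it a few large raw files that live on
# the drive you want to measure. Use files you haven't just opened, otherwise
# the OS cache will make the drive look far faster than it is.
import sys, time

def read_throughput_mb_s(paths, chunk=8 * 1024 * 1024):
    total_bytes = 0
    start = time.perf_counter()
    for path in paths:
        with open(path, "rb") as f:
            while True:
                data = f.read(chunk)
                if not data:
                    break
                total_bytes += len(data)
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e6  # approximate MB/s

if __name__ == "__main__":
    print(f"~{read_throughput_mb_s(sys.argv[1:]):.0f} MB/s sequential read")
```

Anything comfortably above what a single camera file needs during import/preview building (a spinning disk typically manages 150-250 MB/s sequential) is usually plenty for the images themselves.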
 
Hey guys, thanks a bunch for replying so soon and so elaborately. I appreciate that a lot! Just a few quick points, as I see my OP was not complete:
1. My 3-2-1 (or actually 3-2-2) strategy involves versioning on #2, using a Mac backup app called Carbon Copy Cloner. I hope - I mean really hope - that it will in fact bump a file into the versioning chain if some of its bits get flipped but the date and size otherwise stay the same.
2. My current MBP has 512GB of internal SSD and I tested it against a USB3 SSD for the catalog: storing the catalog on the internal SSD is noticeably more responsive.
3. That is when I went all-in on previews on the local SSD and moved all of my images (even recent ones) to the NAS I currently still use.
4. I don't convert to DNG; I leave all my 5DsR and 5D4 images as CR2.
5. @Ferguson you are right that versioning is the only real saviour when it comes to any type of drive failure; otherwise we simply back up the corrupt files to our #2 and #3.
6. For my next MBP I intend to get 2TB of internal SSD storage. Such an SSD upgrade is a major premium at the Californian fruit company, but I think it will pay off. I tend to keep my laptops for 8-10 years.

Based on your replies I reckon a single relatively fast TB3 platter drive will do fine. I may get a RAID0 dual bay just to speed things up a little.

Thanks!
 
Great advice from the others. I’ll just add a few points…

Lightroom Classic speed of image access exists at multiple levels:
  • Original images. These can be on a fast hard disk; an SSD isn’t necessary because an image is fully read in just the first time. After that it’s cached and a preview generated. As you edit, the image’s cache is updated, and the preview re-rendered to match what the current edits look like. I don’t think Lightroom Classic goes back to the original file until its data is no longer present in the Camera Raw cache.
  • Camera Raw cache and previews. Because it’s the cache and previews that are continually churned while you edit, edits are more responsive if the previews and cache are on fast storage. This is where an SSD helps. Because previews are stored in the same folder as the catalog, storing the catalog on a fast volume like an SSD helps editing be more responsive. For the cache, you can set that location in Lightroom Classic > Preferences, Performance tab, Camera Raw Cache Settings, Location path.
  • Library grid scrolling: If a folder hasn’t been viewed recently (so its previews are not in RAM), speed depends on how quickly any pre-built image previews can be pulled off the volume, so again an SSD helps with the read time of those previews. If the previews are not yet built, then the constraints are the number and speed of CPU cores to build the previews.
  • Previews cached in RAM. If image or thumbnail previews happen to be cached into RAM (the fastest place to read from), then the more RAM Lightroom Classic has beyond 12GB or so, the bigger that RAM cache can be, and the fewer grid scrolling delays you’re supposed to have, in theory anyway.
The bottom line there is that it’s actually just the Camera Raw cache and the Lightroom Classic previews location that benefit most from being stored on the fastest volume you have, where an SSD would help. If you store them on the non-removable internal storage of a new Mac, that 2500MB+/sec internal NVMe SSD is likely to be the fastest storage you own by a fair margin. But the original images don’t have to be on a volume that fast.
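If it helps with planning how much internal SSD you need, here's a small Python sketch that totals up how much space the previews and the Camera Raw cache actually occupy. The two paths in it are only examples of where these might live; substitute the "... Previews.lrdata" folder that sits next to your own catalog and the cache location shown in your Preferences.

```python
# Quick check of how big the previews and Camera Raw cache actually are, to see
# whether they comfortably fit on the internal SSD. Both paths below are
# EXAMPLES/ASSUMPTIONS -- substitute your own catalog's "... Previews.lrdata"
# folder and the cache location shown in Preferences > Performance.
import os

def folder_size_gb(path):
    total = 0
    for dirpath, _, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # skip files that vanish or can't be read
    return total / 1e9

locations = {
    "Previews": os.path.expanduser("~/Pictures/Lightroom/My Catalog Previews.lrdata"),
    "Camera Raw cache": os.path.expanduser("~/Library/Caches/Adobe Camera Raw"),
}
for label, path in locations.items():
    if os.path.isdir(path):
        print(f"{label}: {folder_size_gb(path):.1f} GB")
    else:
        print(f"{label}: not found at {path}")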

Hopefully people with Mac knowledge can comment on specific solutions. Heck, maybe Mac (which is a unix clone anyway) has a zfs or btrfs or similar file system.
I am not a file system expert, but I think I read that Apple avoided zfs because of licensing issues. Instead Apple wrote their own APFS and I think it’s required for formatting the boot volume of an Apple Silicon Mac.

There should be a lot more flexibility in what format you use for external volumes. I don't use a NAS right now, but I assume its file system gets abstracted away over the network, so it doesn't have to be directly supported by the Mac either. Is that correct? If so, a NAS is where there is the most potential for a Mac to store images on a file system that's more resistant to bit rot. Then it comes down to which file systems are supported by the particular NAS you bought.

Another approach is for you to do it above the application level -- use a program (maybe a backup program) that notices changes. For example, if you have a change in the checksum of a file but no update of modification date, that's a red flag.

Selwin mentioned using Carbon Copy Cloner. I use that for bootable backups, and it has an option in Advanced Settings that might help here: Find and Replace Corrupted Files. The first time you enable it, it displays a warning that says, among other things:
The “Find and Replace corrupted files” option causes CCC to re-read every file on the source and destination, calculate a checksum, then use that checksum to determine if each file should be copied. We recommend using this option on weekly or monthly backups.

[Screenshot: Carbon Copy Cloner's "Find and Replace Corrupted Files" option]


The reason CCC recommends using this option on weekly or monthly backups is that it makes a backup take much longer.

Normally, utilities like CCC figure out what's changed by comparing the file system directories (indexes) of the source and destination, not the actual files. That's how an incremental backup can be fast: just look at the directories, see if the Mac file system record is different than in the last backup, and if it is, back up that file. But checking the directory doesn't reveal whether any bits got flipped in the file data itself. The CCC option above re-reads and checksums every file, which can catch a flipped bit, but that level of thoroughness makes a backup of a large volume take forever. I have not used that option much yet; I think I might if the time comes when I can afford large, fast SSDs for backup. With large hard disks, that option just takes too long.
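For the curious, that slow-but-thorough pass amounts to something like the following Python sketch (paths are placeholders, and a real tool like CCC does far more): hash every file that exists on both the source and the backup and report any pair whose contents differ.

```python
# Sketch of a full-content comparison between a source tree and its backup:
# hash each file that exists on both sides and report content mismatches.
# This is the expensive operation that a normal incremental backup skips.
import hashlib, os, sys

def digest(path, chunk=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def compare(source, backup):
    for dirpath, _, files in os.walk(source):
        for name in files:
            src = os.path.join(dirpath, name)
            dst = os.path.join(backup, os.path.relpath(src, source))
            if not os.path.exists(dst):
                continue  # missing on the backup; a normal sync handles that
            if digest(src) != digest(dst):
                print("CONTENT MISMATCH:", src)

if __name__ == "__main__":
    compare(sys.argv[1], sys.argv[2])  # e.g. /Volumes/Photos /Volumes/Backup/Photos
```

Because every byte on both volumes has to be read, the run time is bounded by the slower drive's throughput, which is exactly why CCC suggests doing it weekly or monthly rather than on every backup.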

I’m hoping that a future Apple update to APFS brings it fast file integrity protection, so that we don’t have to find corrupted files through time-consuming bit-level comparisons by a third party utility.
 
Lightroom Classic speed of image access exists at multiple levels:
  • Original images. These can be on a fast hard disk; an SSD isn’t necessary because an image is fully read in just the first time. After that it’s cached and a preview generated. As you edit, the image’s cache is updated, and the preview re-rendered to match what the current edits look like. I don’t think Lightroom Classic goes back to the original file until its data is no longer present in the Camera Raw cache.
  • Camera Raw cache and previews. Because it’s the cache and previews that are continually churned while you edit, edits are more responsive if the previews and cache are on fast storage. This is where an SSD helps. Because previews are stored in the same folder as the catalog, storing the catalog on a fast volume like an SSD helps editing be more responsive. For the cache, you can set that location in Lightroom Classic > Preferences, Performance tab, Camera Raw Cache Settings, Location path.
A bit more information on a couple of points:

Original Images: unless the user has set Preferences to use Smart Previews for editing, the original images are always read whenever the user opens an image in Develop. Yes, with raw files the Camera Raw cache entry (or Fast Load Data in the case of DNGs) is loaded first, and that's what is initially presented to the user. However, the full raw/DNG file is silently read and converted in the background, and when completed it replaces the initial small cache entry in the system cache (if you watch the histogram carefully you would likely see a very small change after a few seconds when that swap takes place). Victoria has flow-charted the Develop loading sequence on page 464 of the Missing FAQ book. Because this reading of the original image happens in the background, the underlying point that an SSD isn't necessary for original image storage remains true (I keep mine on a USB3-connected G-Tech external drive, with no issues).

Camera Raw cache and previews: obviously there's a lot of churn on the system cache while editing, but none on the Camera Raw cache (I don't think those entries are ever updated after the initial creation), and there's not much churn on the library previews during editing (or more accurately there wasn't the last time I checked this, some years ago). IIRC, the library preview is effectively discarded as soon as you apply any edit (as it's immediately out of date), although it's not initially deleted, just in case the user uses Undo. A new preview file is created, but initially it only contains the smaller thumbnails that are needed to update the Navigator and Filmstrip during editing. The Standard preview component of the preview pyramid isn't created until the user moves onto another image or switches back to Library. I agree that having the catalog and previews on an SSD helps general catalog and Library responsiveness, but as the previews aren't used at all during editing, their location on a fast drive won't specifically help editing be more responsive.
 
Jim, that is interesting (and complex to me) stuff. I have a post elsewhere on here re an external SSD, but my iMac does have a Fusion Drive. Would the Fusion Drive be fast enough to make transferring the catalogue to an external SSD redundant?
 
Using a NAS for images does indeed allow you to pick any file system you want (it's even very easy to build your own NAS box; for years I ran zfs on a backup NAS). The downside of using a NAS for production is that they tend not to be as reliable, due to the networking aspect, especially as so many people prefer wireless over wired. But if wired, with an adequate switch/router setup, they can be reasonably reliable for image (not catalog) storage. I would tend to prefer them for backup, however. You can always compare primary to backup to detect bit rot in the primary.
 
Thanks so much for all your elaborate replies. I'm on the go at the moment, so I will read them properly when I have time. Just wanted to let you know I just ordered an 8TB dual-bay external drive for RAW file storage, plus two 10TB single drives for rotating backups. This is a more economical option and wastes far fewer of the earth's resources. I'll use the advice above to tweak performance in my setup.
 
I used a NAS for a while, but gave up when I discovered how slow the connection was. I repurposed my NAS for backup over the network and reverted to SATA spinning disks for images and an SSD for the OS, apps and the LR catalog/cache files.

I have just built a new PC... using a Ryzen 3900X, 64GB of fast memory and two very fast M.2 (PCIe 4) drives for the OS and apps, and I now store my images on a Thunderbolt 3 external enclosure. I'm still really at the commissioning stage and cannot report on performance yet, but I see glimpses of promise.

I was thinking that I would have to contemplate installing a 10Gb network and a 10Gb NAS solution when I needed further capacity, but I suspect that 4 or 8 bay versions of the following will give me plenty of options downstream.
https://www.owcdigital.com/products/thunderbay-8

Word of caution: while laptops may have USB-C ports, they may not be Thunderbolt 3 ports, which you want for maximum bandwidth.
Most desktop motherboards do not natively support Thunderbolt, but the Asus 570 Creator (which I use) natively supports Thunderbolt 3, has built-in 10Gb network ports, supports multiple M.2 PCIe 4 drives and has very good heat management. The speed of the disks used in an enclosure has a big impact on write performance (i.e. maybe don't be tempted to reuse older disks from an old system; use the latest-gen SATA drives and check the specs for throughput).

My NAS is asleep when not in use and wakes up when requested to do so to complete my automated backups.
 
Yea... that's my worry... Thunderbolt 3 offers a lot more bandwidth promise. Glad to see stuff like the OWC ThunderBay.
 
Selwin,

My backup strategy involves using a relatively sophisticated (for home use, anyway) backup program, which supports versioning and backs up all my data, photos, etc. to a separate hard drive. I use a new backup hard drive each year, and I have been saving my backup hard drives for almost 10 years now. It's not perfect, but if I do encounter bit rot in 2020, I can always try retrieving the same file from 2019, 2018, etc.
 
My understanding is that (some?) Synology NAS units do have automatic bit rot detection and recovery. See e.g.:
 
Thank you for linking to this article. For my situation it's not relevant anymore, as I've decided to use external hard drives now, which is a cheaper and better solution in my situation.
However, I have read about btrfs before and it's supposed to be great. The only caveat is that -- or so I have been told -- it will only detect anomalies in files that are accessed. In other words: files that you don't touch for years will not go through a detection process. Therefore, as stated above, multiple backups appear to be the only 100%-ish fail-safe way to be able to revert to a backup in case your master drive has a deficient file.
Someone mentioned he has dedicated backup drives for every year, thus creating multiple independent cold backups of the same files.
 
Thank you for linking to this article. For my situation it's not relevant anymore, as I've decided to use external hard drives now, which is a cheaper and better solution in my situation.
However, I have read about btrfs before and it's supposed to be great. The only caveat is that -- or so I have been told -- it will only detect anomalies in files that are accessed.
I have not used btrfs, but zfs has a process you can set up to run periodically that does a scan to validate all files. I would assume the same sort of thing is possible there. Any detection of unforced errors (i.e. ones that just happen outside of access, the "rot" part) is going to require access to notice them.
 
That might be possible with btrfs as well. My old Synology is still on EXT4 so it doesn't support it. And I don't even want to think about the time it would take for the old lady to work through 8 TB worth of pics every month or so...
Anyway I am now checking out the CCC backup options as suggested above. I think that will be a great way to add some extra surveillance.
 
That might be possible with btrfs as well. My old Synology is still on EXT4 so it doesn't support it. And I don't even want to think about the time it would take for the old lady to work through 8 TB worth of pics every month or so...
Anyway I am now checking out the CCC backup options as suggested above. I think that will be a great way to add some extra surveillance.
But that's the point of intelligent subsystems: you do not have to think about it. Almost all production (not home) RAID systems will periodically do a surface scan, prioritized (or not) below normal activity, to detect gross errors (e.g. an unreadable block on one drive, which is then auto-recovered from a redundant copy). The zfs scan was like that, same idea; it just happened in the background and you only found out if you looked or if there was an uncorrectable problem (I can't recall how you found that out without looking, or if you could; probably some syslog-type alert).

It's one reason I remain so frustrated with Lightroom in this regard -- they do all sorts of background work, and do it nicely without you as a user needing to know anything about it. But not validation or other data security; that is still left as a complete exercise for the user (they do offer catalog integrity checking, but only if you manually OK it, and the same goes for DNG validation).
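One way to make that kind of validation bearable on a big library, without any vendor support, is to spread it out: only re-check a slice of the files on each run, so a full pass completes over a month or so. A rough Python illustration of the idea (not a Lightroom or CCC feature; the check() callback is a stand-in for re-hashing against a stored checksum list like the one sketched earlier):

```python
# Illustration only: spread a full verification pass over ~30 runs by hashing
# each file path into one of 30 buckets and only re-checking tonight's bucket.
# check() is a placeholder for "re-hash the file and compare to a manifest".
import datetime, hashlib, os, sys

SLICES = 30  # roughly one month of nightly runs covers the whole library

def bucket_of(path):
    return int(hashlib.md5(path.encode("utf-8")).hexdigest(), 16) % SLICES

def verify_slice(root, slice_index, check):
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if bucket_of(path) == slice_index:
                check(path)

if __name__ == "__main__":
    tonight = datetime.date.today().day % SLICES
    verify_slice(sys.argv[1], tonight, check=lambda p: print("would verify:", p))
```

That's essentially what a scheduled scrub does inside zfs or a production RAID controller, just done crudely at the file level.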
 
Thank you for linking to this article. For my situation it's not relevant anymore, as I've decided to use external hard drives now, which is a cheaper and better solution in my situation.
However, I have read about btrfs before and it's supposed to be great. The only caveat is that -- or so I have been told -- it will only detect anomalies in files that are accessed. In other words: files that you don't touch for years will not go through a detection process. Therefore, as stated above, multiple backups appear to be the only 100%-ish fail-safe way to be able to revert to a backup in case your master drive has a deficient file.
Someone mentioned he has dedicated backup drives for every year, thus creating multiple independent cold backups of the same files.
My understanding is that Synology supports a scheduled task that scans the whole drive periodically. What it does if it finds an error it can't fix, I don't know.
 
Okay people, my old Synology NAS has bit rot. This is a warning to those who think their data is safe as long as the dashboard shows "System Integrity - healthy". The theory - as also explained above - is that disk faults (bit rot) have crept in gradually without the controller noticing, because it doesn't examine every sector periodically. The reason this particular system finally noticed is that I did a complete overhaul of the storage pool, so every sector that new data is now written to passes through the controller.
Ye be warned: if you have an older NAS, don't think you're safe just because the dashboard shows all green ;)
 
What are you using for external drives?

So far I have eyed the following options:
G technology 14/28TB 2 bay drive. 14TB RAID1, TB3/USB-C so it will connect directly to a Silicon Macbook Pro
Promise Pegasus R4 4 bay drive: 12TB RAID5. TB3, might need to update the drives to higher capacity
A lot was already said, so I just want to address one point: RAID 1 or 5. RAID is only useful to guarantee high uptime of your storage system. In my case, I decided that having a solid local backup system is good enough, as the damage from a disk failure is really low. In a drive-failure scenario you only have to retrieve the most current files to be productive again - which is really fast on a local system. The recovery of the rest will take some time, but that would not be critical. Ditching RAID gives you significantly more storage space, which I value much more. Your needs might vary, but I think it is still worth a thought.

edit: as for external drives, I settled on WD drives. They have given me solid performance over the years - usually becoming obsolete due to storage size, but never due to failure.
 
Hello all,
This is a question about which external (non-NAS) storage to get for safe and fast Lightroom operation:
Q1: How fast is fast enough for Lightroom?
Q2: What storage is safe against mishaps due to hard drive failure?
...
What are you using for external drives?
So I am open to your suggestions!
Huge loaded question. Do a lot of research. I'm biased: I don't like NAS and really don't like any RAID, period. I like having all my photo files from my lifetime on one drive, and right now they are on an internal 8TB SATA SSD (which was 800 bucks 6 months ago, but the price is going down). My operating system and LR are of course on an M.2 PCIe SSD. I "back up" (sync) that 8TB SSD to 2 internal 8TB spinning HDDs and also to 3 external spinning HDDs that I keep in various places. If that 8TB SSD fails, I have an exact duplicate of that drive, which only has data on it. I don't even have to restore in that case. It's all there if that primary 8TB fails.
Now, that is if you are not a pro and can afford to be down for a little bit (a couple of hours) as you get up and running again from the synced (backup) disk and get the LR catalog linked to it, or the files copied back onto another primary SSD in the PC.
But that is the best no-NAS, no-RAID solution, and it's great if you are not an earning pro and are just doing your own thing off a PC and laptop. I have about 6TB of image files. That is a hell of a lot of images...
 
Huge loaded question. Do a lot of research. I'm biased. I don't like NAS and really don't like any RAID period. ...
Now that is if you are not a pro and can afford to be down for a little bit (couple of hours) as you get up and running again from the synced (backup) disk and get the LR catalog linked to it or the files copied back onto another SSD primary drive in the PC.
But that is the best no-NAS, no-RAID solution and is great if you are not an earning pro and just doing your own thing off a PC and laptop. I have about 6TB of image files. That is a hell of a lot of images....

I agree that RAID can create more problems than it solves, and very few people or shops require a 24x7x365 uptime solution. Too often, people mistake RAID for backup, and it is not. Also, the proprietary RAID controller and proprietary file system present an unrecoverable single point of failure.

Using a NAS is fine; it was developed to share data across a network with multiple computers. But if yours is the only computer accessing the data on the NAS, then you have throttled your data flow to Ethernet speeds. A bus-mounted drive or a TB3-connected EHD will always be a faster data connection than Ethernet.
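To put rough numbers on that difference, here's a tiny back-of-the-envelope sketch; the throughput figures are typical real-world ballpark values I'd assume, not specifications:

```python
# Back-of-the-envelope only: rough practical throughput figures (not
# guarantees) and the resulting time to copy a 16 TB image library.
LIBRARY_TB = 16
links_mb_per_s = {
    "Gigabit Ethernet": 110,    # ~1 Gbit/s link in practice
    "10GbE": 1000,
    "USB 3.0 (5 Gbps)": 400,
    "Thunderbolt 3": 2500,      # typical external NVMe; 40 Gbit/s is the bus limit
}
for name, mb_per_s in links_mb_per_s.items():
    hours = LIBRARY_TB * 1e6 / mb_per_s / 3600
    print(f"{name:>18}: ~{hours:.1f} hours to copy {LIBRARY_TB} TB")
```

In other words, a full restore or migration over Gigabit Ethernet is measured in days, while a TB3-connected drive does the same job in a couple of hours; day-to-day Lightroom use is far less demanding, but the gap is the same.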


 