"Bits" is always relative to something, but people tend to say "bits" as a shorthand without context, which creates confusion. There are more distinctions to be aware of here.
Apple seems insistent on an upgrade from 14 to the latest 64-bit version of the OS.
There never was a 14-bit macOS. The original Mac (1984) ran Classic Mac OS on a 32-bit processor with a 16-bit data bus, and by 1988 the Mac ran on a processor that was 32 bits throughout. The new Apple Silicon processors are 64-bit only, so partly to prepare the Mac software base for that transition, a few years ago Apple started requiring all Mac software to be 64-bit.
As the others said, none of that applies to the number of bits making up an image file, which is a totally different context. Even here there has often been confusion when people talk about image specs, because to be precisely understood, you have to be clear about whether you’re talking about the number of bits in total, or about the number of bits per channel.
For example, people started talking about “32-bit images” over 20 years ago. But those are not the same as the 32-bit images we talk about today. Back then, it was about having four channels of 8 bits each (red, green, blue, and alpha for transparency). 4 x 8 = 32. Today we would think of that as an 8 bits per channel image. When we talk about 32-bit images today, like the ones Photoshop and Lightroom use for HDR, we actually mean 32 bits per channel, not in total. Also, some people talk about “48-bit scanners” but that actually means a scanner that saves 16 bits of data in each of three channels (red, green, blue). That’s really 16 bits per channel; 16 x 3 = 48.
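If it helps to see that arithmetic in one place, here’s a quick Python sketch (the numbers are just the examples above, not a spec for any particular file format):

```python
# Two ways "bits" gets counted for an image:
# total bits per pixel = bits per channel x number of channels.

def total_bits(bits_per_channel: int, channels: int) -> int:
    """Total bits per pixel from bits per channel and channel count."""
    return bits_per_channel * channels

# The old "32-bit image": four 8-bit channels (red, green, blue, alpha)
print(total_bits(8, 4))    # 32 -- today we'd call this 8 bits per channel

# A "48-bit scanner": three 16-bit channels (red, green, blue)
print(total_bits(16, 3))   # 48 -- really 16 bits per channel

# A modern HDR "32-bit image" in Photoshop/Lightroom: 32 bits per channel
print(total_bits(32, 3))   # 96 bits per pixel in total
```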
To avoid that kind of confusion when discussing bits and multi-channel images, it’s best to say “bits per channel” and not just “bits.” That’s consistent with the Image > Mode submenu in Photoshop.
You did get me wondering, and since I shoot Nikon, I found this article
which states “RAW files are 16-bit files whereas JPEGs are 8-bit files.”
The linked article is actually not sufficiently clear on this. Not all raw files are 16 bits. Early raw-capable cameras saved 10- or 12-bit raw files, hundreds of models from the last few years save 12- or 14-bit raw files, and currently a few high-end models save 16-bit raw files. Even though it’s a Nikon article, it sort of muddies the differences between the bit depth of the sensor, the bit depth of the raw file saved by the camera, and the bit depth of the non-raw file format the raw file may be converted to. For example, you can shoot with a 14-bit sensor set to save a 12-bit raw file, and when that’s converted to Photoshop or TIFF format it can become an 8- or 16-bits-per-channel RGB file.
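To make that chain of bit depths concrete, here’s a deliberately simplified Python sketch; a real raw converter also demosaics, white-balances, and tone-maps the data, so treat this as illustration only:

```python
import numpy as np

# A 14-bit sensor reading quantized down to a 12-bit raw file:
# dropping the 2 lowest bits maps the 0..16383 range onto 0..4095.
sensor_14bit = np.array([0, 5000, 16383], dtype=np.uint16)  # 14-bit range
raw_12bit = sensor_14bit >> 2                               # 12-bit range

# That 12-bit raw data rescaled into a 16-bits-per-channel file (0..65535)...
out_16bit = (raw_12bit.astype(np.uint32) * 65535 // 4095).astype(np.uint16)

# ...or quantized down to 8 bits per channel (0..255).
out_8bit = (raw_12bit.astype(np.uint32) * 255 // 4095).astype(np.uint8)

print(raw_12bit, out_16bit, out_8bit)
```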
Not even all Nikon cameras save 16-bit raw files. The Nikon D1X (2001) was one of those early models that saved 12-bit raw files, and the D850 (2017) can save 12- or 14-bit raw files.
If you noticed that I wasn’t saying “bits per channel” in this last section about raw files, even though I just said we should, that’s because a raw file has only one channel of data (it hasn’t yet been demosaiced into three RGB channels).
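Here’s a tiny Python sketch of what that means for the data layout; the “demosaic” step is just a placeholder that copies the mosaic into three planes, not a real demosaicing algorithm:

```python
import numpy as np

# Hypothetical 4x4 sensor: a raw file stores ONE value per photosite,
# arranged in a color filter mosaic, so the data is a single 2D plane.
raw_mosaic = np.random.randint(0, 4096, size=(4, 4), dtype=np.uint16)
print(raw_mosaic.shape)   # (4, 4) -- one channel of data

# Only after demosaicing does each pixel get three full RGB values.
# Crude stand-in for a real demosaic: replicate the plane three times.
rgb = np.stack([raw_mosaic] * 3, axis=-1)
print(rgb.shape)          # (4, 4, 3) -- now "bits per channel" applies
```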
The answer to the original question is that Lightroom Classic 11 doesn’t change the types of files that you can import, compared to earlier versions.