BDanielMayfield wrote: I find this fascinating, since telescopes and how they work have been of interest to me for many years. So, if I'm following you correctly Chris, the limiting factor isn't the optics, but it's the data storage capacity per pixel? What is the max photon count of today's data collectors?
All professional cameras use CCD sensors, except for some special applications. Most amateur astroimagers also use cameras with CCD sensors, although CMOS technology is improving, and such sensors are sometimes used by amateurs (especially those imaging with DSLRs). I'll talk about CCDs here, although there are many parallels with the way CMOS detectors work.
Each pixel occupies an area of silicon (and a volume as well, although generally the area is the important parameter). When a photon hits the top surface of the pixel, it produces an electron, which is stored in the volume of that pixel as a charge. The number of electrons that can be stored depends on the volume of silicon available (and since the pixel is fairly planar, that really means the area). So as a rule, the bigger the pixel the better, and professional cameras often have very large pixels (20-50 um is common for some cameras; amateur cameras usually have 5-10 um pixels).
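To put rough numbers on that area scaling (the full-well density here is just an assumed ballpark for illustration, not a figure for any particular sensor), a quick sketch in Python:

```python
# Back-of-the-envelope: full-well capacity scales roughly with pixel area.
# The density figure below is an assumed ballpark, not a spec for any real sensor.
FULL_WELL_DENSITY = 1_000  # assumed electrons per square micron of pixel area

for pixel_size_um in (5, 9, 24):  # small amateur, large amateur, professional
    full_well = FULL_WELL_DENSITY * pixel_size_um ** 2
    print(f"{pixel_size_um:3d} um pixel -> roughly {full_well:,} e- full well")
```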
Good sensors can store 100,000 or more electrons per pixel. But when the camera is read out, some noise is introduced, typically on the order of 10 electrons. So the true dynamic range might be around 1:10,000. Signal below 10 electrons is buried in noise; signal over 100,000 electrons is saturated (clipped). This dynamic range can be represented digitally by 13-14 bits.
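As a quick sanity check on that bit count, using the same assumed full well and read noise:

```python
import math

full_well = 100_000   # assumed full-well capacity, electrons
read_noise = 10       # assumed readout noise, electrons RMS

dynamic_range = full_well / read_noise   # ~10,000:1
bits = math.log2(dynamic_range)          # ~13.3, so 13-14 bits
print(f"Dynamic range ~ {dynamic_range:,.0f}:1, about {bits:.1f} bits")
```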
An emission nebula might contain stars with an apparent magnitude of 4, and the faintest nebulosity will be close to the sky background, mag 22 or so. That's a dynamic range of around 1:15 million (about 24 bits). So there is a huge difference between the dynamic range of our best cameras and the actual dynamic range of the regions we image.
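The conversion from a magnitude difference to a brightness ratio is 10^(Δm/2.5). With the illustrative magnitudes above (mag 4 stars against a mag 22 sky background), the arithmetic works out like this:

```python
import math

m_bright = 4.0    # apparent magnitude of the brightest stars (illustrative)
m_faint = 22.0    # sky-background magnitude (illustrative)

delta_m = m_faint - m_bright          # 18 magnitudes
flux_ratio = 10 ** (delta_m / 2.5)    # ~1.6e7, i.e. roughly 1:15 million
bits = math.log2(flux_ratio)          # ~24 bits
print(f"Brightness ratio ~ {flux_ratio:,.0f}:1, about {bits:.0f} bits")
```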
There are tricks that can be used. When the intent is largely aesthetic, images can be collected at several different exposure times and then combined with clever masking. Images can also be collected with short exposures and then added together. The signal adds linearly, while the noise adds in quadrature (it grows only as the square root of the number of frames), so the S/N increases... but not as fast as we'd like. The real problem is the previously mentioned readout noise, which keeps piling up with each subimage. So we'd like lots of short images to avoid overexposure, but one single long image to get the lowest readout noise.
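To make that trade-off concrete, here's a toy S/N calculation; the signal rate, read noise, and total time are just assumed values for illustration:

```python
import math

signal_rate = 2.0    # assumed sky + object signal, electrons per second per pixel
read_noise = 10.0    # assumed readout noise, electrons RMS per readout
total_time = 3600.0  # one hour of total integration, split into subexposures

def snr(n_subs):
    """S/N for total_time split into n_subs equal subexposures.

    Signal grows linearly with total exposure time; shot noise adds in
    quadrature (its variance equals the signal); readout noise is paid
    once per subexposure. Dark current etc. are ignored in this toy model.
    """
    signal = signal_rate * total_time
    shot_variance = signal                      # Poisson: variance = mean
    read_variance = n_subs * read_noise ** 2    # one readout per subexposure
    return signal / math.sqrt(shot_variance + read_variance)

for n in (1, 10, 60, 360):
    print(f"{n:4d} subexposure(s): S/N ~ {snr(n):.1f}")
```

With these made-up numbers, a single one-hour exposure gives an S/N of about 84, while 360 ten-second subs give only about 35, which is exactly the tension described above.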
Terrestrial photographers might recognize a similarity between the techniques described above and high dynamic range (HDR) imaging, which works the same way: combining images taken at different exposures to create an image with more dynamic range than the camera sensor is intrinsically capable of delivering.
BTW, the bloating of stars is mostly an optical issue. Diffraction makes all the stars appear to have a finite diameter, rather than being true point sources. The brighter the star, the farther out along that diffraction pattern we can see light above the background. Some bloating may also be caused by internal reflections in optical elements ahead of the sensor, and by scattering in the sensor itself.