apodman wrote: First, an ideal case: I take my pictures from way out in space, so there is (1) no ambient light bias and (2) no atmospheric scattering. My optics are perfect, so there is (3) no diffraction effect. My CCD is very advanced, so there is (4) no CCD bias and (5) no digital noise. My subject is (6) only distant stars that are each represented by a single pixel. My resolution (e.g., pixels per arc second) is fine enough to leave (7) more than 1 pixel of space between stars. (8) Every pixel in my field remains perfectly fixed through my time exposure, and/or I am able to align every pixel perfectly when stacking multiple frames.
I'd get rid of (4). Any bias (e.g. dark current) can be subtracted, and any noise that the bias introduces (e.g. dark current noise) can be lumped in with (5). I'd also toss (6), as resolution doesn't matter to the argument. Above all, I'd allow the system to have some noise, and eliminate (3), since even with perfect optics you have diffraction. The other items are subject to arbitrary reduction, simply by engineering. Diffraction and noise are here to stay, and I see no point in creating an artificial world here.
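To put a little flesh on "any bias can be subtracted", here is a minimal Python sketch (my own illustration, with made-up array sizes and count levels, not anything from the posts above) of building a master dark from closed-shutter frames and subtracting it from a light frame. The fixed pattern comes out; the random part of the dark signal stays behind, which is exactly why it gets lumped in with (5).

```python
import numpy as np

def master_dark(dark_frames):
    """Average a stack of frames taken with the shutter closed.

    Averaging keeps the fixed dark-current/bias pattern and beats down
    the random part, so the result estimates the fixed pattern alone.
    """
    return np.mean(np.stack(dark_frames), axis=0)

def calibrate(light_frame, dark_frames):
    """Subtract the fixed dark/bias pattern from a light frame.

    The random dark-current noise in the light frame cannot be removed
    this way; it stays lumped in with the other noise sources.
    """
    return light_frame - master_dark(dark_frames)

# Toy example: a 4x4 sensor with a fixed dark pattern of 50-100 counts.
rng = np.random.default_rng(0)
pattern = rng.uniform(50, 100, size=(4, 4))            # fixed dark/bias pattern
darks = [pattern + rng.normal(0, 2, (4, 4)) for _ in range(20)]
light = pattern + 500 + rng.normal(0, 2, (4, 4))       # uniform "sky" of ~500 counts
print(calibrate(light, darks).round())                 # ~500 everywhere, noise remains
```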
If you had a perfect sensor, it would never saturate. That is, each pixel would just keep counting photons. With such a sensor, there is no maximum exposure. The longer you image, the better your S/N gets, and the better your detail gets as a result. In practice, sensors do saturate at some point, meaning there is a maximum possible exposure before you lose data. That's where stacking comes in, since stacked images can be arbitrarily deep. As a rule, with stacked images, you care about total exposure time, and again, the greater the time, the greater the information content.
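To make the "longer total exposure, better S/N" point concrete, here is a toy Poisson photon-counting simulation in Python (all numbers are assumed for illustration, not taken from this thread). With pure photon noise, S/N grows roughly as the square root of the total exposure time, and summing many short frames behaves like one long frame.

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 5.0  # photons per second landing on one pixel (assumed value)

def snr_single(exposure_s, trials=20000):
    """S/N of one exposure: mean counts / std-dev of counts (Poisson photons)."""
    counts = rng.poisson(rate * exposure_s, size=trials)
    return counts.mean() / counts.std()

def snr_stacked(n_frames, frame_s, trials=20000):
    """S/N of n_frames short exposures summed pixel-by-pixel."""
    counts = rng.poisson(rate * frame_s, size=(trials, n_frames)).sum(axis=1)
    return counts.mean() / counts.std()

for t in (1, 4, 16, 64):
    print(f"{t:3d} s single exposure: S/N ~ {snr_single(t):.1f}")
# Stacking 64 one-second frames gives essentially the same S/N as one 64 s frame.
print(f"64 x 1 s stacked:        S/N ~ {snr_stacked(64, 1):.1f}")
```

The sketch ignores per-frame read noise, which is the main reason one long exposure can still beat a stack of very short ones in a real camera.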
The blurring effects you include, (2) atmospheric scattering and (8) tracking error, as well as atmospheric refractive effects (seeing), don't in practice have any impact on the above except for very short exposures. That is, for any exposure longer than a few seconds, the blurring is the same. High-resolution images are possible by making many (hundreds or thousands) of very fast exposures, selecting the relatively few that beat the seeing, and then stacking enough of them together to get a long enough total exposure time for good S/N. That is usually called lucky imaging, and it is only useful for bright objects: the Sun, Moon, and planets.
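For what it's worth, here is a rough sketch of the selection-and-stack step in lucky imaging (my own toy version: it assumes the short exposures are already registered to each other, and it uses plain frame variance as the sharpness metric, which is only one of several reasonable choices):

```python
import numpy as np

def lucky_stack(frames, keep_fraction=0.05):
    """Keep only the sharpest frames and average them.

    Sharpness proxy: variance of the frame. A frame blurred by bad
    seeing spreads light over more pixels, which lowers its variance;
    the "lucky" frames that beat the seeing score highest.
    Assumes the frames are already aligned to each other.
    """
    frames = np.stack(frames)
    sharpness = frames.var(axis=(1, 2))
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(sharpness)[-n_keep:]
    return frames[best].mean(axis=0)

# Usage (hypothetical): result = lucky_stack(short_exposures, keep_fraction=0.02)
```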
But in a typical real case, for example, you can find a group of 3 or 4 stars only a few pixels apart with a dimmer star in the area between them. Factor (2) randomizes the distribution of scattered light around the central pixel of each star throughout the exposure, or from one frame to another. At some relative dimness of the central star, at some closeness in pixels of the stars to each other, and at some length of exposure or number of stacked frames, must that star not become indiscernible from the sum of all the other effects?
No, that doesn't happen. Outside the lucky imaging example above, the longer you expose, the better the S/N. Even though the actual star images are, in principle, infinitely wide, and you can expose long enough that their outer wings grow huge and overlap each other, their peaks are still spatially separate. Pick your display settings properly, and you'll see the same distinct stars.
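As a toy check of that claim, here is a 1-D Python simulation (star positions, fluxes, blur width, and sky level are all assumed values, not anyone's data): a faint star sits between two brighter ones, each frame carries photon noise, and the faint star's extra counts at its position stand out more clearly, not less, as more frames are averaged.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(64)

def star(center, flux, sigma=2.5):
    """1-D Gaussian blur profile for a star (a stand-in for seeing/scattering)."""
    return flux * np.exp(-0.5 * ((x - center) / sigma) ** 2)

sky = 20.0
bright = star(28, 2000) + star(36, 1500)   # two bright stars a few pixels apart
faint = star(32, 60)                       # much dimmer star between them
with_faint = bright + faint + sky
without_faint = bright + sky

def stacked(scene, n_frames):
    """Average n noisy frames of a scene (photon/Poisson noise only)."""
    return rng.poisson(scene, size=(n_frames, x.size)).mean(axis=0)

for n in (1, 100, 10000):
    excess = stacked(with_faint, n)[32] - stacked(without_faint, n)[32]
    print(f"{n:5d} frames: excess counts at the faint star ~ {excess:7.1f} "
          f"(true value {faint[32]:.0f})")
```

Comparing against a star-free copy of the scene is just a convenient stand-in for modelling and subtracting the bright stars' overlapping wings; the point is that the noise in the stacked average shrinks like 1/sqrt(N) while the faint star's contribution stays fixed.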