Ann wrote: What I do wonder is what is gained from a scientific point of view from producing an image from three different filters where one filter is totally contained inside another filter.
Actually, there can be more information present in that case than when you have two filters that don't overlap at all. In one case you have additive data, in the other subtractive (very much like additive and subtractive color). Either way, different wavelengths of light produce different ratios between the two channels, right? I think that you'll agree, looking at the transmission curve for the F606W filter, that it's basically clear. If you look through it, you'll see everything with a very slight yellow cast, due to the cutoff of deep blue and violet (filters like this are sometimes called "minus violet", and are popular with visual observers because they eliminate most chromatic aberration when using a telescope). If you look at a deep red source through this filter, you'll see it as red- the light is passed. If you look at it through the overlapping F555W filter, you won't see anything, because that filter blocks red. So by using this pair of filters, you've narrowed the spectral range of the source... which is the purpose of filters.
There is a technique for producing color images from just luminance and two color channels (actually, broadcast color TV does something like this). If you have an unfiltered (luminance) image, along with red and blue channels (perhaps because you were doing photometry, and those were good filter choices), you can make an RGB image by assigning the red channel to the red data, the blue channel to the blue data, and the green channel to (L - R - B). Green is synthesized by removing the red and blue from the luminance.
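A minimal sketch of that synthesis (the pixel values are hypothetical, and a real pipeline would first align the frames and calibrate the filter responses before subtracting):

```python
import numpy as np

# Hypothetical, already-aligned exposures of the same field, in counts:
# L is the unfiltered (luminance) frame, R and B the red and blue frames.
L = np.array([[900., 450.], [300., 600.]])
R = np.array([[400., 100.], [100., 200.]])
B = np.array([[200., 150.], [100., 250.]])

# Synthesize green by removing red and blue from luminance, clipping at
# zero since any negative result is just noise, not physical flux.
G = np.clip(L - R - B, 0.0, None)

rgb = np.dstack([R, G, B])  # stack the three channels into an RGB image
print(G)
```

The same subtraction works for any channel that is fully contained inside the luminance band, which is the point of the overlapping-filter designs discussed below.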
So, Chris, this is my question. I have noticed that Hubble/ESA images are usually produced with no blue filter. Usually these images are produced using only two filters, an infrared one centered at 814 nm and a visual one centered at 606 nm. To me that last filter sounds as if its aim was to capture yellow-orange light, but you tell me that this is a clear filter, aimed at capturing all optical light, like old black-and-white photography.
We tend to think of "color" from the trichromatic human experience. But in the broadest sense, color just conveys some sort of spectral information, and the simplest way to do that is with a bichromatic system (which is much more common in the animal world than trichromacy). The Hubble camera can only take an image through one filter at a time, and observing time is valuable. So the fewer filters you use, the better. I think a great deal of effort went into planning the characteristics of the filters that are available- considering particularly the ways that the data can be used by adding and subtracting different components to synthesize others. Does that make sense?
But I still can't understand the reason for adding a third filter which is completely contained inside the "clear" filter. You know very well, Chris, that I would have wanted a blue filter that was not contained inside the clear filter for aesthetic reasons. But we are not talking aesthetics here.
Well, in this case it is obvious that there was no blue data to work with. But you don't necessarily need a dedicated blue channel. Using the same technique I described above, you could have data from a clear filter (typically with an IR cut), and separate red and green channels which fully overlap that, and then synthesize blue from (L-R-G). That gives you the same information as if you had used a blue filter, but it saves one entire exposure.
It seems to me that you would most probably get better scientific results by using a short-wave filter that was not entirely contained within the clear filter. It would, or so I think, be easier to tell the difference between blue stragglers and main sequence stars that way.
I don't know that the intent of the study was to identify blue stragglers. However, even if that was a goal, the filters used might be ideal for this purpose. I don't really know- stellar spectroscopy isn't my area of expertise. But here's an important point: when you make an image, the intensity you measure for any source has a signal component- the number of photons you actually record- and a noise component- the uncertainty on that value. Measuring light sources follows the rules of Poisson statistics, which means that the uncertainty is equal to the square root of the signal. If you count 100 photons, you only know that the true intensity of the source is 100±10; your signal-to-noise ratio (S/N) is 10. If you collect 10,000 photons, you know the actual source intensity is 10,000±100; your S/N is 100- much better. So in astronomical imaging, a primary goal is always to collect as many photons as possible. One way to do this is to increase the collection area, which is one reason for making big telescopes. But filters work by excluding photons (that's what "filter" means!). The narrower the bandwidth of the filter, the fewer photons you record, and the worse your S/N. Just look at the APOD images that have narrowband data, and you'll see that those channels typically involve
hours of exposure time, because that's what you need to get enough photons when your filter is only 6nm wide. RGB exposures are typically much shorter, because those broad filters allow more light to be collected in the same time.
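The Poisson arithmetic above fits in a few lines (Python here purely for illustration):

```python
import math

# Poisson statistics: for N recorded photons the uncertainty is sqrt(N),
# so the signal-to-noise ratio is N / sqrt(N) = sqrt(N).
def snr(photons):
    return photons / math.sqrt(photons)

print(snr(100))     # 10.0
print(snr(10_000))  # 100.0
```

Note that doubling the S/N requires four times the photons, which is why those 6 nm narrowband channels need hours where broadband RGB needs minutes.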
The scientific value of the HST camera data is closely linked to its S/N. So whenever possible, I think that observing programs are planned around using the widest-passband filters possible (a review of the
complete filter list shows that there are quite a few wide filters available). By choosing them carefully, high S/N data can be collected, and some narrowband (although lower S/N) data can still be synthesized from the overlap regions- without requiring additional dedicated exposures.
I was surprised at this:
These filters might be good for distinguishing typical blue stragglers (which will be strong in the G and B channels) from other main sequence stars (many of which will be stronger in the G channel relative to the B channel), from young stars (which will be comparatively strong in the R channel, with my channel nomenclature reflecting the display device, not the filters used).
I'm not absolutely sure what you are saying here, so correct me if I'm wrong. I haven't questioned the use of an infrared filter, so I'm still going to assume that the infrared filter is going to be used. So, okay, Chris. Are you saying that young stars, regardless of spectral class, will be comparatively strong in the R channel, but not in the I channel? So we can distinguish a B8V main sequence star from a 12,000 K horizontal branch star from the fact that the main sequence star will be noticeably brighter than the horizontal branch star in the R channel, but not in the I channel?
Don't read too much into my example: it was hypothetical. My only point was that different classes of stars will show different signal ratios through the filters that were used, and from this a clever data analyst can probably figure out all sorts of useful things about what classes are present. I think a blackbody curve can be fully specified by three samples taken at different wavelengths, which means an unambiguous temperature can be determined. The choice of sample wavelengths used may be very odd from a visual standpoint, and may not even make an accurate visual reconstruction possible. But that is often not the goal, so the filters are chosen based on other criteria.
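The idea that a few samples pin down a blackbody temperature can be sketched like this. The sample wavelengths, the 6000 K "observed" source, and the brute-force grid search are all my own illustrative choices, not anything from the actual observing program:

```python
import math

def planck(wavelength_nm, T):
    """Planck spectral radiance (arbitrary units) at a wavelength in nm
    and a temperature in kelvins."""
    h, c, k = 6.626e-34, 3.0e8, 1.381e-23
    lam = wavelength_nm * 1e-9
    return 1.0 / (lam**5 * (math.exp(h * c / (lam * k * T)) - 1.0))

# Three sample wavelengths loosely inspired by the filters discussed
# (illustrative values, not the real filter pivot wavelengths).
bands = [555.0, 606.0, 814.0]

# "Observed" fluxes from a 6000 K blackbody with an unknown overall scale.
true_T, scale = 6000.0, 3.7e-12
obs = [scale * planck(w, true_T) for w in bands]

def fit_temperature(obs, bands, t_grid):
    # The flux *ratios* depend only on T (the scale cancels), so search
    # for the temperature whose predicted ratios best match the observed.
    best_T, best_err = None, float("inf")
    for T in t_grid:
        model = [planck(w, T) for w in bands]
        err = sum((obs[i] / obs[0] - model[i] / model[0]) ** 2
                  for i in (1, 2))
        if err < best_err:
            best_T, best_err = T, err
    return best_T

print(fit_temperature(obs, bands, range(3000, 12001, 10)))  # 6000
```

Real stars aren't perfect blackbodies, of course, so this is only the cartoon version of what the data analysts do with the band ratios.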
My impression - and I will be grateful to you if you explain to me why I am wrong! - is that the people at ESA who use Hubble to take their pictures are somehow almost perversely unwilling to look at the universe through a blue filter. They want to avoid doing it because they have an aversion to it.
There is exactly one broadband blue filter available on the camera- a Johnson B filter, and there are no narrowband blue filters. I think this reflects the fact that the blue part of the spectrum is not all that valuable scientifically. Or, more precisely, there isn't much information to be obtained by imaging that part of the spectrum that can't be obtained as well by looking elsewhere. So this filter is just not used very often by most researchers- from ESA or anywhere else.