Re: APOD: Jupiter and Ring in Infrared from Webb (2022 Jul 20)
Posted: Sun Jul 24, 2022 7:14 pm
Sorry. The FITS header shows FILTER: F322W2 and PUPIL: F323N. I think both were used.
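For anyone who wants to verify this, here is a minimal sketch of reading those header keywords with astropy; the file name is a hypothetical stand-in for whatever JWST product you downloaded:

    from astropy.io import fits

    # Hypothetical file name; substitute the actual JWST product.
    with fits.open("jw_nircam_jupiter_i2d.fits") as hdul:
        hdr = hdul[0].header
        # JWST products record the filter-wheel and pupil-wheel
        # elements in separate keywords.
        print("FILTER:", hdr.get("FILTER"))  # e.g. F322W2 (filter wheel)
        print("PUPIL:", hdr.get("PUPIL"))    # e.g. F323N (pupil wheel)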
VictorBorun wrote: ↑Thu Jul 28, 2022 6:01 pm
I thought there might be 2 filter wheels, like 2 layers with light passing through a filter in each wheel, but no, there is only one wheel:
"We operated the Filter Wheel Assembly first, cycling it through all eight of its positions in both forward and reverse directions. Those eight filter wheel positions include five long-pass order-separation filters, two finite-band target acquisition filters, and an 'opaque' position."

There are four wheels, two each for the long wavelength channel and the short wavelength channel (which have different optical paths). They are often used in combination. For instance, when the F323N filter (long wavelength pupil wheel) is used, it is typically paired with the F322W2 filter (long wavelength filter wheel) for blocking purposes. This image was constructed from two data channels, one imaged through the F323N/F322W2 pair, and the other through the F212N filter (short wavelength filter wheel).
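To make the pairing concrete: light passes through both wheels in series, so the effective bandpass is the product of the two transmission curves. A minimal sketch, using toy transmission profiles rather than the real NIRCam throughput tables:

    import numpy as np

    wl = np.linspace(2.0, 4.5, 2000)                   # wavelength in microns
    f322w2 = ((wl > 2.4) & (wl < 4.0)).astype(float)   # toy wide blocking filter
    f323n = np.exp(-0.5 * ((wl - 3.237) / 0.02) ** 2)  # toy narrow-band filter

    # In series, the combined throughput is the elementwise product:
    pair = f322w2 * f323n
    print("peak of combined bandpass: %.3f microns" % wl[np.argmax(pair)])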
Judy/Geckzilla, who processed the JWST data, confirmed the following on https://geckzilla.com/:
This page, https://jwst-docs.stsci.edu/jwst-near-i ... am-filters, shows the wavelengths involved for these filters:
Red (screen): NIRCam F322W2-F323N (this is not a subtraction function; both filters were used at the same time)
Blue: NIRCam F212N
Background is a grayscale combination of both filters. There were gaps in the data that had to be filled in using either filter to complete the other.
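One plausible reading of that gap-filling step, sketched with hypothetical stand-in arrays for the two calibrated frames (NaN marking missing data):

    import numpy as np

    long_ch = np.random.rand(64, 64)   # stand-in for the F322W2-F323N frame
    short_ch = np.random.rand(64, 64)  # stand-in for the F212N frame
    long_ch[:, 40:] = np.nan           # pretend part of this frame is missing

    # Average where both channels have data; where one channel is missing,
    # nanmean falls back to the surviving channel.
    background = np.nanmean(np.dstack([long_ch, short_ch]), axis=2)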
Infrared waves have longer wavelengths than visible light and can pass through dense regions of gas and dust in space with less scattering and absorption.
It seems to me that the gap noted on the right side of the image is due to IR waves passing through the dust and gases that are present, so that they are not being scattered/reflected or absorbed. The gap probably does not exist in visible light.

Why the evening clouds gap, 670 km at the equator?
Why the bright polar caps?
sallyseaver wrote: ↑Sat Jul 30, 2022 11:33 am
F322W2-F323N: 2.5 to 4.1 microns
F212N: 2.1 to 2.15 microns
This page from NASA, https://science.nasa.gov/ems/07_infraredwaves, has an interesting discussion related to interpreting infrared [IR] imaging. It has some images showing a calibration between temperature and colors in a dog image. This Jupiter image is not calibrated to specific temperatures, but we can get a sense of hot and cold. Here is my reasoning. Consider the image of the dog at the NASA page noted above. The light areas of the IR image correspond to areas of the dog where the temperature is high and IR energy is reflected (not absorbed); dark areas of the IR image correspond to areas of the dog where the temperature is not high and IR energy is absorbed.

The dog image is single channel (that is, grayscale data) mapped to a pseudocolor palette. It is in a wavelength band that represents energy being emitted from the dog, not reflected in any way. Intensity has been mapped to color, so whether brighter is hotter is purely dependent upon the chosen palette.
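That "brighter is hotter depends on the palette" point is easy to demonstrate: the same single-channel data shown under a colormap and under its reverse carries identical information but gives opposite visual impressions. A sketch with stand-in data:

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.random.rand(64, 64)          # stand-in for calibrated IR counts
    fig, axes = plt.subplots(1, 2)
    axes[0].imshow(data, cmap="inferno")   # high values render bright
    axes[1].imshow(data, cmap="inferno_r") # same values render dark
    plt.show()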
Chris, thank you for straightening me out about the temperature interpretation of the IR imaging. I know about black body radiation, of course, but I obviously need more understanding of how this meshes with the usual absorption and reflection of incident EM waves in the IR range.
The Jupiter image is constructed from multichannel data assigned to a false color palette. Again, we are not seeing reflected IR but emitted IR. In general, for any single channel, the signal strength increases with temperature. But a longer wavelength will show cooler temperatures, so as soon as you combine two or more wavelength channels, you can no longer assume that what is brightest in the image is also the warmest. You need to consider the intensity in each channel separately to make any unambiguous assessment of temperature.
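Plugging numbers into the Planck function shows why: at these wavelengths a cooler region imaged in the longer-wavelength channel can out-shine a warmer region imaged in the shorter-wavelength channel. The temperatures below are illustrative picks, not measured Jupiter values:

    import numpy as np

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

    def planck(wl_m, T):
        """Blackbody spectral radiance at wavelength wl_m (m), temperature T (K)."""
        return (2 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * kB * T))

    print(planck(3.23e-6, 130.0))  # cooler region, ~3.2 micron channel
    print(planck(2.12e-6, 160.0))  # warmer region, ~2.1 micron channel: dimmer!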
VictorBorun wrote: ↑Tue Aug 02, 2022 4:32 pm
the rgb coding is inverse:
2.0 μm ↦ red
2.14 μm ↦ green
2.16 μm ↦ blue
If we change the coding to
2.0 μm ↦ blue
2.14 μm ↦ green
2.16 μm ↦ red
[attachment: Sharpening up Jupiter.jpg]

Which demonstrates exactly why monotonic coding can be a very bad idea. The first mapping shows MUCH more detail than the second.
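The two codings are just the same three frames stacked in opposite channel orders; no data changes, only the perceived contrast. A sketch with hypothetical stand-in frames:

    import numpy as np

    im_200 = np.random.rand(64, 64)   # stand-ins for registered, normalized
    im_214 = np.random.rand(64, 64)   # frames at 2.0, 2.14 and 2.16 microns
    im_216 = np.random.rand(64, 64)

    rgb_a = np.dstack([im_200, im_214, im_216])  # 2.0->R, 2.14->G, 2.16->B
    rgb_b = np.dstack([im_216, im_214, im_200])  # 2.16->R, 2.14->G, 2.0->B

    # Identical information, opposite channel order:
    assert np.allclose(rgb_a, rgb_b[..., ::-1])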
Chris Peterson wrote: ↑Tue Aug 02, 2022 4:45 pm
Which demonstrates exactly why monotonic coding can be a very bad idea. The first mapping shows MUCH more detail than the second.

I'm probably showing my ignorance again, but isn't the second encoding still monotonic, just opposite in direction to the first?
johnnydeep wrote: ↑Tue Aug 02, 2022 7:00 pm
Isn't the second encoding still monotonic, just opposite in direction to the first?

Yes. Not the best word choice, but a hard thing to describe without a lot of words. We have been somewhat consistently using it in several discussions to refer to a mapping where there is a one-to-one correspondence between the input and output wavelengths. (Still not great wording, but hopefully you get the drift.)
Chris Peterson wrote: ↑Tue Aug 02, 2022 7:22 pm
We have been somewhat consistently using it in several discussions to refer to a mapping where there is a one-to-one correspondence between the input and output wavelengths.

I would think any true one-to-one mapping of input to output wavelengths would preserve all detail, though it could be less visible if the output range is compressed versus the input range. On the other hand, a many-to-one mapping would clearly lose info/detail, and a one-to-many mapping might show false details that don't exist in reality. Or am I misunderstanding yet again?
johnnydeep wrote: ↑Wed Aug 03, 2022 1:00 pm
I would think any true one-to-one mapping of input to output wavelengths would preserve all detail, though it could be less visible if the output range is compressed versus the input range.

I just mean a mapping where longer wavelengths in the source correspond to longer ones in the final image. And that does not necessarily result in the clearest image.
Chris Peterson wrote: ↑Wed Aug 03, 2022 1:09 pm
I just mean a mapping where longer wavelengths in the source correspond to longer ones in the final image. And that does not necessarily result in the clearest image.

Alright. So the details are still present (i.e., not totally lost), just obscured to our eyes due to the mapping choice.
johnnydeep wrote: ↑Wed Aug 03, 2022 1:54 pm
So the details are still present (i.e., not totally lost), just obscured to our eyes due to the mapping choice.

Well, if you have three or fewer channels going in, and you map them directly to some combination of red, green, and blue, nothing is lost. But, as you say, details will be more or less visible to our eyes depending on the order of the mapping. If you have more than three input channels, or channels that are mapped to mixes of RGB (e.g. a channel mapped to yellow), then you lose information in the final image.
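One way to see the lossless/lossy distinction: the channel-to-primary assignment is a 3x3 linear map, and the information survives exactly when that matrix is invertible. A sketch, where the "mixed" case is the mapped-to-yellow example above:

    import numpy as np

    # Columns are input channels, rows are display primaries (R, G, B).
    direct = np.eye(3)               # ch1->R, ch2->G, ch3->B
    mixed = np.array([[1., 0., 1.],  # ch3 contributes to R...
                      [0., 1., 1.],  # ...and to G, i.e. to yellow
                      [0., 0., 0.]]) # nothing drives B

    print(np.linalg.matrix_rank(direct))  # 3: invertible, nothing lost
    print(np.linalg.matrix_rank(mixed))   # 2: channels cannot be separated again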
Chris Peterson wrote: ↑Wed Aug 03, 2022 2:05 pm
If you have more than three input channels, or channels that are mapped to mixes of RGB (e.g. a channel mapped to yellow), then you lose information in the final image.

Ok, got it.
Chris Peterson wrote: ↑Wed Aug 03, 2022 2:05 pm
...channels that are mapped to mixes of RGB (e.g. a channel mapped to yellow)...

I know it probably doesn't come up much, but what about CMY (or CMYK)? Would you have information loss with that?
bystander wrote: ↑Wed Aug 03, 2022 2:31 pm
What about CMY (or CMYK)? Would you have information loss with that?

I don't know of any display devices that use CMY. Subtractive color schemes like CMY are used in printing. They also have a smaller gamut, so there is more possibility for information loss. But in principle, the same thing applies to any mapping. The main point for not losing information is that the input channels be mapped to native output channels, not to mixes of those channels.
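For what it's worth, the arithmetic part of the simplest RGB-to-CMY mapping is just complementation and is fully invertible; the practical loss mentioned above comes from the smaller printable gamut, not from the math. A sketch:

    import numpy as np

    rgb = np.random.rand(4, 4, 3)   # stand-in image, values in 0..1
    cmy = 1.0 - rgb                 # RGB -> CMY
    back = 1.0 - cmy                # CMY -> RGB round trip
    assert np.allclose(rgb, back)   # complementation loses nothing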
Chris Peterson wrote: ↑Thu Jul 28, 2022 6:24 pm
When the F323N filter (long wavelength pupil wheel) is used, it is typically paired with the F322W2 filter (long wavelength filter wheel) for blocking purposes.

I finally got what you were saying all along: one of the 2 near-IR wheels has no clear filter, so it must be set to one of the wide filters as a surrogate for a clear position in order to let the light pass.