CCD sensitivity
Here's a concern about pixel sensitivity. I guess each pixel on every CCD has a different sensitivity, and presumably this variation should be at the ~1 sigma level.
But the following plot for Polaris (UMi) shows a >6 sigma difference between different pixels (especially between zenith angles 27.8-28). Is this anything to worry about?
The plot shows the data for 4 consecutive nights overlapped (counts vs. zenith angle).
http://www.phy.mtu.edu/~vpshetti/nsl/umi.jpg
Tilvi
Michigan Tech. University, MI.
Tilvi,
Yes, different pixels have different sensitivities. One way to discern this is to look at NSL frames of fog or uniform clouds. Then one knows that all the pixels in an area are uniformly illuminated, so that any differences between pixels above random counting errors can be attributed to differential pixel sensitivity.
The bad news is that pixel sensitivity can change over time. Many telescopes therefore take an image of a uniformly illuminated surface at the beginning and end of an observing run so as to be able to correct for differential pixel sensitivity. Things like this have been discussed with respect to CONCAM images, but so far the data are uncorrected for it. If you or someone can come up with a good scheme to do this, then please tell us. Until then, we are confined to looking for brightness fluctuations at a higher level.
- RJN
Dr Nemiroff,RJN wrote:Tilvi,
Yes, different pixels have different sensitivities. One way to discern this is to look at NSL frames of fog or uniform clouds. Then one knows that all the pixels in an area are uniformly illuminated, so that any differences between pixels above random counting errors can be attributed to differential pixel sensitivity.
The bad news is that pixel sensitivity can change over time. Many telescopes therefore take an image of a uniformly illuminated surface at the beginning and end of an observing run so as to be able to correct for differential pixel sensitivity. Things like this have been discussed with respect to CONCAM images, but so far the data are uncorrected for it. If you or someone can come up with a good scheme to do this, then please tell us. Until then, we are confined to looking for brightness fluctuations at a higher level.
- RJN
If we can get some short-exposure frames (say 1 sec, which would avoid the stars being captured on the CCD) for a clear night, we could then take the ratio with the light frames. I suppose, since the CCD's sensitivity won't change over a few weeks' time, that this frame can be used for at least a month and should give us fairly good results with errors <2 sigma.
Here's a PDF document giving some details about the sensitivity gradient on a CCD.
http://www.ugastro.berkeley.edu/~cdang/files/lab3.pdf
Tilvi
Michigan Tech. University, MI.
Here are two plots showing how pixel sensitivity can affect the photometry data, even up to a level of 0.2 (delta m), for the same star. But since these are systematic errors, we can avoid them; e.g., this error can be corrected by doing a flat-field correction.RJN wrote:Tilvi,
Yes, different pixels have different sensitivities. One way to discern this is to look at NSL frames of fog or uniform clouds. Then one knows that all the pixels in an area are uniformly illuminated, so that any differences between pixels above random counting errors can be attributed to differential pixel sensitivity.
- RJN
The first figure shows two plots (with a change of scale, but the same data) and their ratio, while the second figure shows the change of delta m when there is a change in pixel sensitivity by 200 counts.
The squared data points show the addition of 200 counts and the respective change in delta m.
Delta m = -2.5 * log10(S1/S2)
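To get a feel for the numbers, here is a quick check in Python (a sketch; the count levels are illustrative, not actual CONCAM counts, and I use the standard sign convention delta m = -2.5 log10(S1/S2)):

```python
import math

def delta_m(s1, s2):
    """Magnitude difference between two flux measurements (in counts)."""
    return -2.5 * math.log10(s1 / s2)

# A 200-count sensitivity shift matters more for fainter stars:
# on a ~10,000-count star it moves delta m by ~0.02 mag,
# on a ~1,000-count star it is already ~0.2 mag.
print(round(abs(delta_m(10200, 10000)), 3))  # 0.021
print(round(abs(delta_m(1200, 1000)), 3))    # 0.198
```

So the ~0.2 delta m effect in the plots is exactly what a 200-count shift does at the ~1,000-count level.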
A flat-field frame of 1-2 sec exposure on a clear night should work (?).
Suggestions?
Tilvi
Michigan Tech. University, MI.
Tilvi,
Interesting plots! I think pixel sensitivity is combined there with the stars' point spread function (PSF) spreading out onto the five brightest pixels in a different way. Theoretically, each of the pixels could be uniformly sensitive and still C5 would change as the star's PSF moves across the CCD grid. So these two effects should be separated, if possible.
Also, the rate of stellar photons hitting the CCD is always higher, in certain pixels, than the rate of background light hitting the CCD. So shorter exposures should not be effective in determining relative pixel sensitivity. I still therefore think clouds and/or fog are the best way to calibrate pixel sensitivity.
- RJN
Dr RJN,RJN wrote:Tilvi,
Interesting plots! I think pixel sensitivity is combined there with the stars' point spread function (PSF) spreading out onto the five brightest pixels in a different way. Theoretically, each of the pixels could be uniformly sensitive and still C5 would change as the star's PSF moves across the CCD grid. So these two effects should be separated, if possible.
Also, the rate of stellar photons hitting the CCD is always higher, in certain pixels, than the rate of background light hitting the CCD. So shorter exposures should not be effective in determining relative pixel sensitivity. I still therefore think clouds and/or fog are the best way to calibrate pixel sensitivity.
- RJN
That makes sense. But still, if we take a flat field with a uniformly illuminated screen, we should be able to rectify this systematic error. And since this correction will apply to each and every pixel, we need not worry about how the PSF behaves for C5 or C9.
I think taking a flat field with clouds/fog won't help, since CONCAM sees the whole sky. This might work for telescopes, and for telescopes flat fielding is most of the time done by capturing images of the sky at dusk (dawn) with short exposures.
http://www.starlink.rl.ac.uk/star/docs/ ... ode15.html
I think unless we correct for these types of systematic errors, where there are uncertainties of 0.2 or greater in delta m, CONCAM would see all the stars as constant, other than very bright stars such as Spica (which saturates most of the time).
Tilvi
Michigan Tech. University, MI.
Tilvi,
It is really good that you are studying flat-fielding and the link you gave was excellent. We should use the information there as a base for what we do for flat-fielding future CONCAM images.
I agree that flat-fielding will reduce the systematic errors on CONCAM images. I think we have already demonstrated 0.1 dm (1 sigma) for bright stars, but we can always do better. You are right that the smaller the dm we can see, the more stars we can see that will show this variability.
That page you linked to does not mention using short exposures for flat fielding. I suspect my reasoning given previously is the reason.
We cannot do sky or dome flats with CONCAMs. I think clouds and fog are our only hope. Although CONCAMs see the whole sky, the pixel to pixel sensitivity variability is on a small scale. So we can develop an algorithm that corrects for pixel sensitivity only compared to those pixels that are nearby, say the surrounding 8 pixels, surrounding 15, or even surrounding nearest 24 pixels.
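The local comparison could be sketched like this (hypothetical Python with numpy; the function name, the 3x3 window, and the fog-frame example are my own, not WOLF code):

```python
import numpy as np

def local_sensitivity_map(frame, half=1):
    """Relative pixel sensitivity: each pixel divided by the median of its
    (2*half+1)^2 neighbourhood (half=1 compares to the surrounding 8 pixels)."""
    padded = np.pad(frame, half, mode="edge")
    h, w = frame.shape
    shifts = [padded[dy:dy + h, dx:dx + w]
              for dy in range(2 * half + 1)
              for dx in range(2 * half + 1)]
    local_median = np.median(np.stack(shifts), axis=0)
    return frame / local_median

# On a uniformly lit (fog) frame, this map IS the small-scale sensitivity;
# dividing science frames by it flattens the pixel-to-pixel response.
fog = np.full((5, 5), 100.0)
fog[2, 2] = 110.0            # one pixel 10% more sensitive
sens = local_sensitivity_map(fog)
print(sens[2, 2])            # ~1.1
```

Because everything is relative to nearby pixels, the large-scale sky gradient across the whole fisheye frame drops out automatically.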
Perhaps WOLF can recognize when a cloud or fog is being imaged, and update its latest flat-field map in the regions where the cloud or fog is determined to create a nearly uniformly lit area. Keeping a "latest flat field map" for each CONCAM might be studied to see how much better the images are than without flat fielding. I think it could well reduce systematic error dramatically, as you indicated.
As I think about it, perhaps CONCAMs can keep a "best dark frame" as well, instead of relying on a completely new one every 8 frames. WOLF could then use the new one every 8th frame to augment its "best one", not simply replace it. That would get rid of dark-frame cosmic rays and again improve photometry.
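The "best dark" idea could be sketched like this (hypothetical Python; the DarkStack name and the 5-frame depth are my own choices, not the CONCAM pipeline):

```python
import numpy as np
from collections import deque

class DarkStack:
    """Keep the last N dark frames and expose their per-pixel median as the
    'best dark'; a cosmic-ray hit in any single dark gets voted out."""
    def __init__(self, maxlen=5):
        self.frames = deque(maxlen=maxlen)

    def add(self, dark):
        self.frames.append(np.asarray(dark, dtype=float))

    def best(self):
        return np.median(np.stack(list(self.frames)), axis=0)

stack = DarkStack()
for _ in range(4):
    stack.add(np.full((3, 3), 10.0))
hit = np.full((3, 3), 10.0)
hit[1, 1] = 5000.0           # cosmic ray in the newest dark
stack.add(hit)
print(stack.best()[1, 1])    # 10.0 -> the spike is rejected
```

A per-pixel median over a handful of recent darks is robust: replacing the dark outright would have subtracted that 5000-count spike from the next 8 science frames.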
- RJN
-
- Ensign
- Posts: 78
- Joined: Tue Jul 27, 2004 1:45 pm
- Location: Back at Tel Aviv University after a sabbatical
Flat-fielding CONCAM images
Indeed, the accepted mode for obtaining flat-field (FF) images is to take the twilight sky. Another option, used mainly by amateur observers, is to image an illuminated white screen that is close to the telescope and thus very out of focus. This will not work for CONCAMs. The most 'modern' way to FF is to use "super-flats" (SF).
An SF is obtained by combining images obtained throughout the night. When these are in pixel coordinates, the location of a certain star will change during the night because of the daily motion. Most of the time a pixel will image "sky". Therefore, here is the recipe to obtain an SF:
1. Take all the night images and eliminate cosmic rays.
2. Do a median filter among the pixels at the same pixel coordinates. This will select the most representative value this pixel had, i.e., the sky.
3. Use this image as the FF, i.e., divide all the images by a normalized version of the median.
The SF is better than twilight flats because it has the same spectral composition as the night images and some pixel-to-pixel variance is spectral.
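The recipe above can be sketched in Python with numpy (a toy example with a drifting star; the function names, frame sizes, and count levels are illustrative, not the actual pipeline):

```python
import numpy as np

def super_flat(night_frames):
    """Step 2: per-pixel median across a night's frames. Since the stars
    drift through pixel coordinates, the median picks the sky at each pixel."""
    sf = np.median(np.stack(night_frames), axis=0)
    # Step 3: normalize so that dividing by the SF preserves the flux scale.
    return sf / np.median(sf)

def apply_flat(frame, sf):
    return frame / sf

# Sky at 50 counts, one 20%-more-sensitive column, and a star that visits a
# different pixel each frame (cosmic rays assumed already removed, step 1):
frames = []
for i in range(7):
    f = np.full((4, 4), 50.0)
    f[:, 3] *= 1.2               # more sensitive column
    f[i % 4, i % 4] += 500.0     # drifting star
    frames.append(f)
sf = super_flat(frames)
flat_frame = apply_flat(frames[0], sf)
print(flat_frame[0, 3])          # 50.0 -> column sensitivity divided out
```

The median rejects the star because it only occupies any given pixel in a minority of the frames, which is exactly why the SF works without a uniformly lit screen.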
Noah
I like the SF method. It might have problems near the poles, though, as stars don't move much there and a single star might dominate the entire night. Like Polaris, for example. Anyway, we might try several of these ideas out in a "flat frame Olympics" and see what works best in practice. (This would be similar to the "photometry Olympics" that decided on the C5 brightness estimator, and the "background Olympics", which decided on a background method but then was eclipsed when Lior used his own approach with WOLF.)
The last frame from Kitt Peak last night might be the type of frame that is useful for generating flat fields. It is not completely uniform, but perhaps flatness can be computed locally and/or the frame-wide gradients can be taken out by a spherical harmonic fit. Here is the image. - RJN
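A sketch of the gradient-removal idea (here a low-order 2-D polynomial fit as a simple stand-in for the spherical-harmonic fit; the function name and numbers are illustrative):

```python
import numpy as np

def remove_gradient(frame, order=2):
    """Fit a low-order 2-D polynomial surface to the frame by least squares
    and divide it out, leaving only the small-scale pixel structure."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w
    y = yy.ravel() / h
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, frame.ravel(), rcond=None)
    model = (A @ coeffs).reshape(h, w)
    return frame / model

# A smooth gradient plus one 10%-hotter pixel: after the fit, the frame-wide
# gradient divides out and only the hot pixel stands out above ~1.0.
yy, xx = np.mgrid[0:16, 0:16]
frame = 100.0 + 1.0 * xx + 1.0 * yy
frame[8, 8] *= 1.1
corrected = remove_gradient(frame)
```

The idea is the same as with true spherical harmonics: the smooth basis can absorb the frame-wide illumination pattern but not single-pixel sensitivity differences, so the two effects separate cleanly.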
Flat field response
Bob:
I guess that the frame you showed was taken during a foggy night when light was reaching the CONCAM from a nearby, high-optical-depth layer. This would make an ideal flat field because in such a situation the illumination of the lens is uniform. The "limb darkening" is, I believe, due to the reduction of the effective lens aperture presented to the CCD by regions of the sky that are seen by the CONCAM at an oblique angle [cos(diameter) effect]. In a real sky exposure I believe that while this kind of vignetting would take place, the sky would be brighter because one would see more atmospheric emission near the horizon and also because each pixel would include more square kilometers of the layer where the atmospheric emission takes place.
Noah
Re: Flat field response
I am still wondering if images such as the one posted by Dr. Nemiroff can be used for flat fielding, because to me it seems to be the effect of either the lens or the atmosphere, and not the actual pixel sensitivity gradient. The circular portion shows the brightness due to the Moon (?).nbrosch wrote:Bob:
I guess that the frame you showed was taken during a foggy night when light was reaching the CONCAM from a nearby, high-optical-depth layer. This would make an ideal flat field because in such a situation the illumination of the lens is uniform. The "limb darkening" is, I believe, due to the reduction of the effective lens aperture presented to the CCD by regions of the sky that are seen by the CONCAM at an oblique angle [cos(diameter) effect]. In a real sky exposure I believe that while this kind of vignetting would take place, the sky would be bright because one would see more atmospheric emission near the horizon and also because each pixel would include more square kilometers of the layer where the atmospheric emission takes place.
Noah
Tilvi
Michigan Tech. University, MI.
-
- Ensign
- Posts: 78
- Joined: Tue Jul 27, 2004 1:45 pm
- Location: Back at Tel Aviv University after a sabbatical
Re: Flat field response
Tilvi:tilvi wrote:
I am still wondering if images such as the one posted by Dr. Nemiroff can be used for flat fielding, because to me it seems to be the effect of either the lens or the atmosphere, and not the actual pixel sensitivity gradient. The circular portion shows the brightness due to the Moon (?).
I do not know the exact circumstances under which the image was taken. Assuming this was during heavy fog, even if the full Moon was up in the sky, there should be no way to derive its location from the image. This is because fog is extremely optically thick, and the light that reaches the CONCAM lens has been "scrambled" by being scattered many times by the water droplets. You can check this by searching for the peak intensity of images taken during heavy fog and a full Moon; if the peak moves with the Moon, I am wrong.
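The peak-intensity check could be sketched like this (hypothetical Python with numpy; the box-summed argmax and the toy "Moon" frames are my own, not real CONCAM data):

```python
import numpy as np

def peak_positions(frames, box=3):
    """Per-frame location of the brightest box x box region, a crude test of
    whether the intensity peak tracks the Moon through fog frames."""
    positions = []
    for f in frames:
        # Box sums via an integral image, to suppress single hot pixels.
        c = np.cumsum(np.cumsum(f, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        s = c[box:, box:] - c[:-box, box:] - c[box:, :-box] + c[:-box, :-box]
        positions.append(np.unravel_index(np.argmax(s), s.shape))
    return positions

# A bright spot drifting one pixel per frame: the peak follows it.
frames = []
for k in range(3):
    f = np.full((8, 8), 10.0)
    f[2, 2 + k] = 40.0          # "Moon" glow drifting across the frame
    frames.append(f)
positions = peak_positions(frames)
print([tuple(map(int, p)) for p in positions])  # [(0, 0), (0, 1), (0, 2)]
```

If the fog really scrambles the light, the sequence of peak positions over a night should wander randomly instead of drifting with the Moon's track.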
Noah
Re: Flat field response
Dr Noah,nbrosch wrote:Tilvi:
I do not know the exact circumstances under which the image was taken. Assuming this was during heavy fog, even if the full Moon was up in the sky, there should be no way to derive its location from the image. This is because fog is extremely optically thick, and the light that reaches the CONCAM lens has been "scrambled" by being scattered many times by the water droplets. You can check this by searching for the peak intensity of images taken during heavy fog and a full Moon; if the peak moves with the Moon, I am wrong.
Noah
That makes sense; I am just trying to convince myself. I have gone through this night's movie file. The intensity doesn't seem to move continually, but it does change location. And this happens, I guess, because the fog is not thick enough to make the moonlight appear uniform over the whole lens. Here's the .gif file:
http://nightskylive.net/kp/kp040904/movie-kp040904.gif
Here's another file from RH:
http://nightskylive.net/rh/rh040307/movie-rh040307.gif
Tilvi
Michigan Tech. University, MI.
Foggy flats
Tilvi, in both cases (KP & RH) it does not seem that the fog is very thick. This is because in the KP movie it is possible to see some nearby instruments in some frames and in both movies some frames have striations, probably clouds seen through the fog.
You might want to try some artificial fog. One way is to use a theatrical "fog machine", if you can borrow one. Another is to pour some liquid nitrogen near the CONCAM. Be careful; this might be dangerous. You will have to experiment to see when you manage to create a thick enough fog.
Another way of creating uniform illumination is to use a device called an "integrating sphere". This is a spherical body coated inside with a matte white finish. Illumination is inserted through a port on the side of the sphere, and the device to be tested is mounted at a port 90 degrees away. This ensures that no direct light can enter the device being tested and that the arriving light has been "randomized" by multiple scattering.
If you cannot get access to an integrating sphere, you can make your own "integrating can" by using a largish metal cylinder, cutting 90-degree ports in the cylinder, and painting the inside with white matte paint. This should work pretty well, I believe.
Cheers,
Noah