by Chris Peterson » Fri Mar 01, 2013 3:28 pm
MargaritaMc wrote:Wikipedia tells me that the visible spectrum for humans is 390 - 700nm, so this image extends to either side of what our eyes can see. Is that correct?
I'm still battling with understanding how astro-photography gets the images that it does.
Edit: - I read the numbers incorrectly. COULD anyone explain how this image is made? :roll:
An image, any image, is really just a big array of numbers. Every pixel, under the hood, is one or more numerical values. With a multispectral image like today's (that is, one with data collected at different wavelengths, typically using filters that pass only specific wavelength ranges), each pixel is represented by a single numerical value for each wavelength observed. These values correspond to intensity: the more signal acquired in that wavelength range, the larger the number.
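To make that concrete, here's a minimal sketch in Python/NumPy. The dimensions and channel count are made up for illustration, not taken from today's image:
[code]
import numpy as np

# Hypothetical multispectral image: a 512 x 512 pixel grid with
# 4 wavelength channels. Every pixel is just a list of numbers,
# one intensity value per observed wavelength band.
height, width, n_channels = 512, 512, 4
data = np.random.rand(height, width, n_channels).astype(np.float32)

# One pixel: four intensity values, one per filter. More signal
# in a given wavelength range means a larger number.
print(data[100, 200])   # e.g. [0.42 0.91 0.07 0.55]
[/code]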
Almost all display devices are based on a color mapping system using red, green, and blue as primaries (printing methods usually use cyan, magenta, and yellow). So turning the data collected by a camera into an image for our eyes is simply a matter of mapping the numeric values at each pixel into a color, i.e. a specific ratio of red, green, and blue. There are many ways of doing this.

For an image with only a single value per pixel, the most common approach is to assign that value directly to the red, green, and blue channels, which gives a grayscale image. For data with two values per pixel, it is common to map the values to two of the primaries and construct the third from the sum or difference of the two. For data with three values per pixel, the most common strategy is simply to assign each value directly to its own primary.

When there are more than three spectral data channels, as in today's APOD, some sort of mathematical transformation is applied, which converts multiple input values to three output values, one for each primary (see the sketch below). For instance, you could take the value from the longest wavelength channel and apply it directly to red. You could take the next shorter and assign it to "yellow", meaning a mix of red and green. The next could be just green. Next, cyan, by mixing some into green and some into blue. And so forth. There are many ways of doing this; if you do a search on color theory, there are many good articles online.
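Here's a minimal sketch of that multichannel case in Python/NumPy, following the red/yellow/green/cyan/blue scheme above. The five channels, their ordering, and the mixing weights are assumptions for illustration; a real processing pipeline would choose them to taste:
[code]
import numpy as np

# Hypothetical 5-channel image, channels ordered longest to
# shortest wavelength, intensities already scaled to 0..1.
h, w = 256, 256
cube = np.random.rand(h, w, 5).astype(np.float32)

# Each row maps one spectral channel to (R, G, B), following the
# hue assignments described above.
mix = np.array([
    [1.0, 0.0, 0.0],   # longest wavelength -> red
    [0.5, 0.5, 0.0],   # next -> "yellow" (red + green)
    [0.0, 1.0, 0.0],   # middle -> green
    [0.0, 0.5, 0.5],   # next -> cyan (green + blue)
    [0.0, 0.0, 1.0],   # shortest -> blue
], dtype=np.float32)

# The matrix multiply is the "mathematical transformation": five
# input values per pixel in, three output values (one per display
# primary) out. Clip to keep the result in the displayable range.
rgb = np.clip(cube @ mix, 0.0, 1.0)
print(rgb.shape)   # (256, 256, 3)
[/code]
Each row of the matrix simply decides how much of one spectral channel lands in each display primary; changing those weights gives a different, equally valid color rendering of the same data.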
[quote="MargaritaMc"]Wikipedia tells me that the visible spectrum for humans is 390 - 700nm, so this image extends to either side of what our eyes can see. Is that correct?
I'm still battling with understanding how astro-photography gets the images that it does.
Edit: - I read the numbers incorrectly. COULD anyone explain how this image is made? :roll: [/quote]
An image- any image- is really just a big array of numbers. Every pixel, under the hood, is one or more numerical values. With multispectral images like today's- that is, an image with data collected at different wavelengths (typically using filters that pass only specific wavelength ranges), each pixel is represented by a single numerical value for each wavelength observed. These values correspond to intensity- the more signal acquired at that wavelength range, the larger the number.
Almost all display devices are based on a color mapping system using red, green, and blue as primaries (printing methods usually use cyan, magenta, and yellow). So turning the data collected by a camera into an image for our eyes is simply a matter of mapping the numeric values at each pixel into a color, i.e. a specific ratio of red, green, and blue. There are many ways of doing this. For an image with only a single value per pixel, the most common is to assign that value directly to the red, green, and blue channels, which gives a grayscale image. For data with two values per pixel, it is common to map the values to two of the primaries, and construct the third from the sum or difference of the two values. For data with three values per pixel, the most common strategy is simply to directly assign each to its own primary. When there are more than 3 spectral data channels, as in today's APOD, some sort of mathematical transformation is applied, which converts multiple input values to three output values, one for each primary. For instance, you could take the value from the longest wavelength channel, and apply it directly to red. You could take the next shorter, and assign it to "yellow", meaning a mix of red and green. The next could be just green. Next, cyan, by mixing some into green and some into blue. And so forth. There are many ways of doing this- if you do a search on color theory, there are many good articles online.