Sputnick wrote:
psyched wrote: I just wish a photograph was a photograph instead of an artist's conception, which this one obviously is.
It's easy to veer off from discussing things to discussing names of things. What does it matter if the thing in question is called a photograph or a simulacrumgraph or something else? Well, I suppose it can matter when names of things are applied precisely. One would want to distinguish between a photograph and a painting because they were produced differently. These days the term "image" is used more and more frequently, I guess because there are more different ways of producing images than there were in the past. "Image" is a broader term than photograph or painting, for example, and a less precise one in denoting the means of creating the image. "Image" seems to refer more to the impression on the retina of the observer than to the means of creation of the image.
A rose by any other name would smell as sweet.
Our senses don't normally see in the infrared or UV, and if they did perhaps the image in question would look to us somewhat like it appears on apod .. and art of course can imitate reality .. but I prefer to see things as they are to our normal senses. If we begin to alter colours, what stops us from altering squares to circles. If someone wants to present an image on apod which has been digitally altered, then they should at least present it as such. Perhaps your 'rose' quotation is a good example, because with humans altering roses the sweet scent has gone out of most of them.
There are several posts in this thread about the relationship between 'things as they are to our normal senses' (and similar) and (most of) the APOD images; I'm choosing this one to quote pretty much at random.
Let's remind ourselves that even ordinary film photography does not reproduce things as they are to our normal sight. A careful, quantitative comparison of the response curves of human rods and cones (plus how the brain interprets their signals) with a typical colour film camera and commercial developing process will show, objectively, that there are many differences. Some are subtle, some imperceptible, and some obvious; for example, colours are more saturated in normal colour film photography. Then there's the 'red eye' in flash photography, especially in photographs from older cameras that worked with just one bright flash - my guess is that Sputnick's album of family snaps includes some (flash) photos with 'red eye' and some without.
"
If someone wants to present an image on apod which has been digitally altered, then they should at least present it as such". It's worth taking some time to think about this a bit.
For starters, as far as I know, every APOD is a .jpg file, and every image that has been 'JPEG processed' has been digitally altered.
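To make that concrete, here is a toy sketch (plain Python, purely illustrative - not actual JPEG code) of the lossy quantization step at the heart of JPEG compression: transform coefficients are divided by a quantization step and rounded, so the decoded values generally differ from the originals.

```python
# Toy illustration of JPEG-style lossy quantization (not real JPEG code).
# JPEG divides each DCT coefficient by a quantization step and rounds;
# decoding multiplies back, so fine detail is irreversibly lost.

def quantize(coeffs, step):
    """Quantize then dequantize: round(c / step) * step loses information."""
    return [round(c / step) * step for c in coeffs]

original = [52, 61, 7, -3, 15, 0, -8, 2]   # pretend DCT coefficients
decoded = quantize(original, step=10)

print(original)  # [52, 61, 7, -3, 15, 0, -8, 2]
print(decoded)   # [50, 60, 10, 0, 20, 0, -10, 0]
```

Every save as .jpg applies a step like this, which is why any JPEG has, by definition, been digitally altered relative to the raw data.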
Second, 'digitally altered' in respect of images from instruments (cameras, detectors, etc) aboard spacecraft, such as the HST or MESSENGER, is kinda meaningless, because the images are digital from the get-go and getting them to a ground station necessarily involves alteration.
Third, HST images all have a kind of 'red eye' effect: cosmic ray hits. When a cosmic ray hits a CCD pixel, it causes the well to saturate; if you present the image without digitally altering it to remove these, you'll see lots of annoying white points all over it at random (not just HST images of course, almost all detectors on spacecraft have to deal with this kind of noise).
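One standard way to remove such hits, sketched very roughly: take several exposures of the same field and combine them with a per-pixel median. A cosmic ray saturates a pixel in one frame but almost never the same pixel in the others, so the median rejects the outlier. (Toy Python with invented pixel values; real pipelines such as HST's use more sophisticated rejection.)

```python
from statistics import median

# Three toy exposures of the same 4-pixel strip. Frame 2 has a cosmic-ray
# hit that saturated pixel index 1 (value 65535 on a 16-bit detector).
frames = [
    [100, 102, 98, 101],
    [101, 65535, 97, 100],   # cosmic ray hit at index 1
    [99, 103, 99, 102],
]

# Per-pixel median across frames rejects the single-frame outlier.
combined = [median(pixels) for pixels in zip(*frames)]
print(combined)  # [100, 103, 98, 101]
```

Note that this, too, is a 'digital alteration' - and without it the image would be peppered with white points.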
No doubt Sputnick (and emc?) does not count these three kinds of digital alteration, but why not?
A great many APOD images are from instruments on spacecraft (example, example, example) or contain a component therefrom (example); some are taken by ground-based telescopes that record parts of the electromagnetic (EM) spectrum we humans cannot detect (example). In these cases the APODs may be accurately described as visual representations of data.
One more thing, composites. Take two APOD examples, NGC 1132 and Cen A. Both deep sky objects can be detected in far more parts of the EM spectrum than just the few wavebands represented in these images - gamma, UV, IR, microwave, ... so these images are visual representations of just a small part of the whole, awesome picture. And just as we need to get above the Earth's atmosphere to 'see' in the x-ray band (and gamma rays, and UV, and (most of the) IR, and ...), there is emission from these objects that we cannot see from our vantage point in this part of the Milky Way galaxy, because it's blocked by the interstellar medium (UV blueward of the Lyman limit, radio redward of the local plasma frequency).
OK, two more things; composites part 2: APOD 7 Oct 2008. It's certainly taken in the visual waveband, but it is nothing like what you would see with your own eyes, even if they had the resolution, integration time, and contrast of the HST. Why? Because this composite has been created from data taken using several narrow-band filters - it samples only a tiny part of the visual waveband that your eyes are sensitive to.
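As a rough sketch of how such a narrow-band composite is assembled (toy Python with invented pixel values; the channel assignment follows the common 'Hubble palette' - [S II] to red, H-alpha to green, [O III] to blue - and real reduction also involves calibration, alignment, and nonlinear stretching):

```python
# Sketch of assembling a false-colour composite from narrow-band frames.
# "Hubble palette" mapping: [S II] -> R, H-alpha -> G, [O III] -> B.
# Pixel values are invented for illustration.

def normalize(frame):
    """Scale a frame to 0..255 by its own maximum (a simple linear stretch)."""
    peak = max(frame) or 1
    return [round(255 * v / peak) for v in frame]

sii    = [10, 40, 20]   # [S II] ~672 nm frame
halpha = [80, 160, 40]  # H-alpha 656 nm frame
oiii   = [5, 10, 50]    # [O III] ~501 nm frame

# Each output pixel is an (R, G, B) triple built from the three frames.
rgb = list(zip(normalize(sii), normalize(halpha), normalize(oiii)))
print(rgb)
```

The colours you see in such an image encode which emission line dominates at each point, not what your eye would register - a visual representation of data, exactly as argued above.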