Below are some beautiful photographs taken with the Hubble Space Telescope. How does one take an astronomy science image? How is taking a photograph with the Hubble Space Telescope, or another astronomy telescope, similar to taking an image with a regular digital camera? And how is collecting images with scientific telescopes very different from taking a photograph with an ordinary digital camera? In this exercise, we will walk through some of the steps that scientists use to create a clean, detailed image from what begins as a large stack of dirty-looking snapshots.

Image source: http://nasa.gov

A digital photograph, whether taken with a store-bought camera or a sophisticated camera on the Hubble Space Telescope, works by recording photons that land on the camera's digital chip. We may think of photons as individual bits of light. The photons that a camera records may originate from a distant galaxy, a classroom ceiling light, or numerous other places. The figure below shows two cameras taking snapshots.

Image source: http://quest.nasa.gov

The sports photographer's snapshot and the Hubble Telescope's snapshot might result in the following photographs. While they might look very different, both were taken through the same process: a digital camera's detector recording incoming photons.

Image sources: http://nfltouchdown.com; http://nasa.gov

Cameras on the Hubble Space Telescope and other advanced observatories use digital camera detectors just like a sports photographer does. However, unlike a snapshot from a sporting event, an image from the Hubble Space Telescope often requires extensive processing, which can involve combining several hundred images into a single final picture.

Consider the Hubble picture shown on the left below. It shows a dusty disk around a star; planets are forming inside the disk. In this image, light from the star at the center has been blocked out through computer image processing, making it easier to see the ring-shaped disk without the glare of the central star interfering. For this display, we added an artificial star-shaped symbol in the middle, just to show you the position of the central star.

Image sources: http://nasa.gov (left); Hubble MAST archive (right)

What is the image shown above on the right? Believe it or not, it is an image of the exact same star and disk, taken with the exact same camera on the Hubble Space Telescope. The image on the left was actually created from the image on the right, along with many tens of similar images. Only through careful computer processing of the image on the right did the star's surrounding ring-shaped disk become visible. To understand how scientists can generate detailed images from what begins as a blurry burst of light, let us consider the following example.


The following snapshot of the Orion constellation was taken with a digital camera attached to a telescope. To take this photo, the camera's shutter opened for a short period of time, about half a second, recorded the incoming photons, and then closed to stop more photons from landing. The photons recorded by the camera's detector during this half second produced the following image.

If the camera's shutter instead stayed open for 2 seconds collecting photons, rather than just a half second, what would the image most likely look like?

Image source: http://reztycira.blogspot.com



Since the shutter is kept open longer, the detector has more time to collect photons, like a bucket collecting water drops in a rainstorm. By spending more time collecting photons, the camera is better able to detect light from the faintest stars. Try using the simulation below to see how using different exposure times lets us see fainter and fainter stars.
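The bucket analogy can be made concrete with a small simulation. The sketch below, using made-up photon rates for three stars (the numbers are purely illustrative), models photon arrivals as a Poisson process: the longer the shutter stays open, the more photons each pixel collects, and the more clearly a faint star rises above the background.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical photon arrival rates (photons per second) for three stars
# of decreasing brightness -- illustrative values, not real measurements.
star_rates = {"bright": 500.0, "medium": 50.0, "faint": 5.0}
sky_rate = 2.0  # assumed background photons per second per pixel

def snapshot(rate, exposure_s):
    """Simulate the photon count recorded in one pixel: photon arrivals
    follow Poisson statistics, so counts grow with exposure time."""
    return rng.poisson((rate + sky_rate) * exposure_s)

for exposure in (0.5, 2.0, 8.0):
    counts = {name: snapshot(rate, exposure) for name, rate in star_rates.items()}
    print(f"{exposure:4.1f} s exposure -> {counts}")
```

With a half-second exposure the faint star's count is barely above the background, but at longer exposures it becomes unmistakable, which is exactly what the simulation above demonstrates.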



Is it possible to add images the same way we add numbers? What does it mean to add one image to another? Why would adding, or combining, images be useful? Let us first consider addition in another situation. Imagine that you have the following plate of food.

Can we add plates of food similar to the way we add numbers?

Which plate of food would result from the above addition?



Instead of a plate of food, consider the following snapshot of a distant galaxy, taken with another space telescope, NASA's Galaxy Evolution Explorer. Rather than food on a plate, a digital image can be thought of as photons of light on a digital detector. The more photons we add to the "plate", the brighter the image on the detector gets.

Imagine that we took three snapshots of the distant galaxy, and then added them with a software program, the same way we added the plates of food.

What would the final image look like?

Image source: http://nasa.gov



The above example shows how scientists can take many snapshots of the night sky, and then use software to combine the images into a single high quality image. Scientists can even add up hundreds of images, sometimes taken on different nights, and sometimes even taken years apart!
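In software, "adding images" is just pixel-by-pixel addition of arrays. The following sketch, built on a tiny synthetic frame (all values invented for illustration), shows why stacking helps: the star's signal grows in proportion to the number of frames, while random noise grows only with its square root, so the star stands out more clearly in the stack.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny synthetic "sky": one faint star on a noisy background.
def take_snapshot(shape=(32, 32), star_xy=(16, 16), star_flux=4.0, noise=3.0):
    frame = rng.normal(loc=0.0, scale=noise, size=shape)  # detector noise
    frame[star_xy] += star_flux                           # faint star signal
    return frame

snapshots = [take_snapshot() for _ in range(100)]

single = snapshots[0]
stacked = np.sum(snapshots, axis=0)  # adding images pixel by pixel

def snr(frame, star_xy=(16, 16)):
    """Signal-to-noise ratio: star pixel divided by background scatter."""
    background = np.delete(frame.ravel(),
                           np.ravel_multi_index(star_xy, frame.shape))
    return frame[star_xy] / background.std()

print(f"single-frame SNR: {snr(single):.1f}")
print(f"stacked SNR:      {snr(stacked):.1f}")
```

In the single frame the star is buried in the noise; in the 100-frame stack its signal-to-noise ratio is roughly ten times higher, matching the sqrt(N) improvement described above.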

Adding images isn't the only way that scientists can improve the sensitivity of images. Modern digital detectors are very powerful at recording photons, but they are not perfect. The following example shows faint stars and galaxies imaged with a camera on a powerful telescope.

Image source: http://nasa.gov

Unfortunately, the above image was taken with a detector that has some defects. These defects reside at different places on the detector and look very similar to stars. How can we tell the difference between a faint star and a detector defect? To find out, let us take a second image with an identical exposure time, but with the camera shutter closed, so that no photons from the sky can reach the detector. If the detector works perfectly, the image should look completely black, since no outside light is reaching it. If the detector has defects, the image will not look completely black. Below is the new shutter-closed image next to the original science image.

Comparing the two images above, is there any way that you could distinguish between real stars, and points of light that look like stars, but are really just defects in the detector? Write your answer below.

Which of the following three stars are true stars, and which are defects in the detector? For each labeled star, select whether it is a star or a defect.



Stars A and C are false stars, coming from defects in the chip. Star B is a true star. We know that stars A and C can't be real because they appear even when the camera shutter is closed, and no photons from the night sky can reach the detector. A true star, like Star B, disappears when the camera shutter is closed.

The shutter-closed image is what we call a calibration dark. Is there a way that we can use this calibration dark to improve the quality of the sky image? Below we try an experiment. Just like we added images previously, we can also subtract images:

Which of the following images would result from the above subtraction?



Image C would result from subtracting the two images. This is because we took an image that included both stars and detector defects, and then subtracted out the detector defect signals. The result is an image of the night sky that is free of detector defects.
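Dark subtraction, too, is just pixel-by-pixel arithmetic. Here is a minimal sketch on a toy one-dimensional "image" (all pixel values are invented): the shutter-open frame contains stars plus hot-pixel defects, the shutter-closed calibration dark contains only the defects, and subtracting one from the other leaves only the real stars.

```python
import numpy as np

# Toy 1-D "image" of pixel counts: two real stars plus two hot-pixel
# defects (illustrative values, not real data).
sky_signal = np.array([0, 9, 0, 0, 7, 0, 0, 0])  # real stars at pixels 1 and 4
defects    = np.array([0, 0, 0, 6, 0, 0, 8, 0])  # hot pixels at 3 and 6

science_image    = sky_signal + defects  # shutter open: stars + defects
calibration_dark = defects.copy()        # shutter closed: defects only

# Subtracting the dark removes the defect signal, leaving only real stars.
cleaned = science_image - calibration_dark
print(cleaned)  # -> [0 9 0 0 7 0 0 0]
```

In real pipelines many darks are averaged into a "master dark" before subtracting, so that the dark frame's own random noise does not add much noise to the science image.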

Can we subtract more complicated features to improve sensitivities in an image? Consider the following image, which was taken with NASA's Spitzer Space Telescope.

Image Credit: image generated by webpage authors with data from the Spitzer archive (archive.spitzer.caltech.edu).

The above is an image of a bright star. The points of light surrounding it are other stars far off in the background. With the light-collecting power of the Spitzer Telescope, the central star looks so bright that it fills up most of the field of view. Even though the central star looks big, it is far enough away that its actual width is as small as a single point in this image, similar to the points of starlight you see in the night sky with your own eyes. But the telescope detects so much light from the star that the star's glare spreads out over most of the image.

What if there were a faint planet around this star, located at the position indicated by the red arrow? Is there any way that we could see the planet amid the overwhelming glare of the bright starlight? Is it possible to somehow subtract the glare of the central star so that a faint planet becomes visible? Let us consider another image of a star, collected with the Spitzer Space Telescope.

Image Credit: image generated by webpage authors with data from the Spitzer archive (archive.spitzer.caltech.edu).

The image above shows another star of similar brightness. Notice that it looks similar to the star we examined previously, but without all of the background stars. The two stars look similar in shape because the telescope optics cause all point-like stars to take on the same characteristic wide shape on the detector, as long as they are similar in brightness.

Now comes the important part. What happens when we subtract the two previous images?

Which of the following images results from the above subtraction?



The correct answer is A. The smaller star shape remaining in the center left is leftover starlight that was not quite subtracted out. But you should now be able to see a number of faint objects near the central star that were previously lost in the bright glare. What happened to the faint planet signal we were trying to find near the star? Compare the before and after images to see.

As a result of the star glare subtraction, an object at the position of the red arrow is now clearly visible, as well as other faint points of light nearby. In this science image, the indicated point of light was actually another background star, instead of a real planet. But the same technique shown here is used with NASA images from both the Spitzer Space Telescope and the Hubble Space Telescope in order to image faint planets next to much brighter stars.
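The key idea, that two stars of similar brightness share the same glare shape, can be sketched in a few lines. In this toy model (a 1-D Gaussian glare profile with invented numbers), the target image contains a bright star plus a faint companion, the reference image contains only the glare, and subtracting the two cancels the glare exactly, leaving the companion.

```python
import numpy as np

# Toy model of reference-star subtraction: both stars share the same
# wide "glare" profile set by the telescope optics; one image also
# contains a faint companion. All numbers are illustrative.
x = np.arange(-10, 11, dtype=float)

def glare(center=0.0, amplitude=1000.0, width=4.0):
    return amplitude * np.exp(-0.5 * ((x - center) / width) ** 2)

target = glare()            # bright star...
target[x == 6] += 15.0      # ...plus a faint companion at x = 6

reference = glare()         # second star with the same glare shape

residual = target - reference  # glare cancels; the companion remains

peak = int(x[np.argmax(residual)])
print(f"companion recovered at x = {peak}")  # -> companion recovered at x = 6
```

Before subtraction the companion's 15 counts sit on top of more than 300 counts of glare at that position; afterward it is the brightest feature left in the image.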

Let us now return to the original images we saw from the Hubble Space Telescope.


The image on the left was indeed created from the image on the right. Some of the steps involved in this process included the following:

  1. Collect several tens of images like the image on the right.
  2. Use calibration images to remove detector defects.
  3. Add up individual images to create a final image.
  4. Subtract the light from the central star.

In this instance, the scientists also used software to create the red color. While this particular display does not focus on a planet, it reveals a ring that could never have been seen without removing the bright light of the central star.