Aperture Photometry
Much of the first several paragraphs on aperture photometry from ground-based telescopes was originally ruthlessly copied without permission from the document entitled "Photometry Using IRAF" by Lisa A. Wells, from 1994, found on the IRAF photometry documentation page.
There are many techniques involved in doing aperture photometry, and these methods vary from one astronomer to another. Some observers use large apertures for their measurements to account for seeing, tracking, and focus variations, while others use small apertures and apply aperture corrections. The sky algorithm used may vary according to the chip characteristics and the data. There are a number of ways to do the standard calibration, so be sure to observe standards in a way that is compatible with the calibration package you wish to use.
With space-based data, you (generally) do not have to worry about variations in seeing and focus, and the sky background is primarily a function of the direction in which you are looking. Moreover, space-based data usually arrive on your desktop already calibrated. So, space-based data are much easier to process than ground-based data. However, you still need to use care in selecting your photometry reduction parameters, because it is very easy to shoot yourself in the foot.
Some references on the theory and techniques of aperture photometry are
- Golay, M., "Introduction to Astronomical Photometry," D. Reidel Publishing, Dordrecht, Holland
- Hardie, Robert H., 1962, in "Stars and Stellar Systems," Vol. 2, "Astronomical Techniques", ed. W. A. Hiltner, University of Chicago Press, 178
- Harris, W. E., 1990, PASP, 102, 949
- Harris, W. E., FitzGerald, M. P., and Reed, B. C., 1981, PASP, 93, 507
- Howell, S. B., 1989, PASP, 101, 616
- Howell, S. B., (ed.), 1991, "Astronomical CCD Observing and Reduction Techniques", ASP Conf. Series, Vol. 23... in particular, see DaCosta, G., "Basic Photometry Techniques", page 90
- Philip, A. G. Davis, (ed.), 1979, "Problems of Calibration of Multicolor Photometric Systems," Dudley Observatory, Schenectady, New York
- Stetson, P. B., 1987, PASP, 99, 191
- Stetson, P. B., 1990, PASP, 102, 932
- Stetson, P. B., and Harris, W. E., 1988, AJ, 96, 909
The basic principle of aperture photometry is to sum up the observed flux within a given radius from the center of an object, then subtract the total contribution of the sky background within the same region, leaving only the flux from the object from which to calculate an instrumental magnitude. The aperture size is important, since seeing, tracking, and focus errors affect the amount of flux within the stellar profile. The noise grows linearly with radius while the stellar flux trails off in the wings of the profile: increasing the size of the aperture increases the Poisson shot noise of the background sky and picks up any flat-field errors that may be nearby. The signal-to-noise ratio of the flux measurement therefore reaches a maximum at an intermediate aperture radius, as shown by Howell (1989). The use of a smaller radius introduces the problem that the fraction of the total flux measured will vary for objects of different flux from image to image; aperture corrections must be used in this latter case.
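To make that tradeoff concrete, here is a small Python sketch (an illustration added here, not part of the original writeup, and all the numbers are made up) that computes the signal-to-noise ratio as a function of aperture radius for a Gaussian star on a uniform sky with some read noise; the S/N peaks at an intermediate radius, just as described above.
 import numpy as np
 
 # Illustrative (made-up) numbers: a Gaussian PSF, uniform sky, and read noise.
 total_flux = 10_000.0    # total source counts (electrons)
 sigma_psf = 2.0          # Gaussian PSF width (pixels); FWHM ~ 2.355 * sigma
 sky = 50.0               # sky level per pixel (electrons)
 read_noise = 5.0         # read noise per pixel (electrons, RMS)
 
 radii = np.arange(0.5, 15.0, 0.5)     # trial aperture radii (pixels)
 enclosed = total_flux * (1 - np.exp(-radii**2 / (2 * sigma_psf**2)))
 npix = np.pi * radii**2               # pixels inside each aperture
 noise = np.sqrt(enclosed + npix * (sky + read_noise**2))
 snr = enclosed / noise
 
 best = radii[np.argmax(snr)]
 print(f"S/N peaks at r ~ {best:.1f} px (about {best/sigma_psf:.1f} sigma of the PSF)")
Larger apertures catch more of the star's light but also more sky noise, which is exactly why the curve turns over.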
If you are working with ground-based data, you need to worry about extinction corrections to the data, so extinction stars need to be observed and reduced along with the object data. The extinction stars should be observed at airmasses spanning the range in airmass of the program objects. Color and zero-point corrections are often applied to the instrumental magnitudes as well, to put them on the standard system defined by a set of observed standard stars -- these same standard stars can also be used as the extinction stars. These stars should be chosen prior to observing so that their colors bracket those of the program objects -- a good rule of thumb is to have at least a 0.5 magnitude range in the colors of the standards to determine reasonable calibrations.
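As an illustration only (the measurements below are invented and this is not part of the original text), here is a short Python sketch of the usual linear calibration fit, m_std = m_inst + zeropoint - k*airmass + c*color, solved by least squares for a handful of standard-star observations.
 import numpy as np
 
 # Invented standard-star measurements: instrumental magnitude, airmass, color,
 # and the catalog (standard) magnitude for each star.
 m_inst = np.array([12.31, 13.02, 11.75, 12.88, 13.40, 12.10])
 airmass = np.array([1.05, 1.40, 1.80, 1.15, 2.00, 1.60])
 color = np.array([0.12, 0.55, 0.80, 0.35, 1.02, 0.65])   # e.g. B-V
 m_std = np.array([11.95, 12.60, 11.25, 12.50, 12.90, 11.70])
 
 # Fit m_std - m_inst = zp - k*airmass + c*color by linear least squares.
 A = np.column_stack([np.ones_like(m_inst), -airmass, color])
 coeffs, *_ = np.linalg.lstsq(A, m_std - m_inst, rcond=None)
 zp, k, c = coeffs
 print(f"zero point = {zp:.3f}, extinction k = {k:.3f} mag/airmass, color term = {c:.3f}")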
Once you have calibrated data, the basic series of steps for doing aperture photometry is as follows, with many different options and parameters for each step (a sketch of the whole sequence in code follows the list):
- Detect objects in the image, if doing this automatically (human eyes are good at this).
- Determine the center of each object.
- Determine the background -- i.e., determine what the signal would be in the aperture if the star were not there. (Usually this means defining an annulus around the object at some distance from it.)
- Sum up the light in the object, defining the size of the aperture to use and subtracting off the background.
- Apply aperture corrections, if necessary.
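Here is a rough Python sketch of that whole sequence using the photutils package. This is an illustration added here, not the method of any particular pipeline: the synthetic image, aperture sizes, and parameter values are all invented, and the calls assume a reasonably recent photutils.
 import numpy as np
 from astropy.stats import sigma_clipped_stats
 from photutils.detection import DAOStarFinder
 from photutils.aperture import (CircularAperture, CircularAnnulus,
                                 aperture_photometry)
 
 rng = np.random.default_rng(1)
 
 # Build a small synthetic image: flat noisy background plus a few Gaussian "stars".
 ny, nx = 200, 200
 yy, xx = np.mgrid[0:ny, 0:nx]
 data = rng.normal(100.0, 5.0, (ny, nx))
 for x0, y0, amp in [(50, 60, 300.0), (120, 80, 500.0), (160, 150, 200.0)]:
     data += amp * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * 2.0**2))
 
 # 1. Detect objects: look for peaks well above the (sigma-clipped) background.
 mean, median, std = sigma_clipped_stats(data, sigma=3.0)
 finder = DAOStarFinder(fwhm=4.0, threshold=5.0 * std)
 sources = finder(data - median)
 
 # 2. The finder returns centroids; use them as aperture centers.
 positions = np.transpose((sources['xcentroid'], sources['ycentroid']))
 
 # 3./4. Estimate the per-pixel background from a surrounding annulus and
 #       sum the light in a small aperture, subtracting that background.
 aperture = CircularAperture(positions, r=5.0)
 annulus = CircularAnnulus(positions, r_in=10.0, r_out=15.0)
 phot = aperture_photometry(data, [aperture, annulus])
 bkg_per_pix = phot['aperture_sum_1'] / annulus.area
 phot['flux'] = phot['aperture_sum_0'] - bkg_per_pix * aperture.area
 
 # 5. An aperture correction (a factor for the flux missed outside r=5 px)
 #    would be applied here if needed.
 print(phot['xcenter', 'ycenter', 'flux'])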
Detection of objects -- If you're doing photometry by hand, it's easy to recognize at least the brighter sources just using your own eye. But if you're doing photometry automatically (like with MOPEX), you need to teach the computer what to look for. It's probably easy to imagine the difference between finding bright, isolated point sources on a flat black background versus trying to find a wide range of point source brightnesses, some crowded, in amongst complicated nebulosity. If that's not clear, consider just this: you can have the computer look for peaks that are at least several times brighter than a dark background, but what happens when you raise the background level? In real data the noise in the background rises along with its level, so the sources are less easily distinguished above the background. MOPEX has many different parameters (check the online help files) that you can control to affect its ability to detect sources.
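As a tiny numerical illustration of that last point (again an added sketch, not from the original text): the Poisson noise of the background grows as the square root of its level, so the same source peak stands fewer sigma above a brighter sky.
 import numpy as np
 
 rng = np.random.default_rng(0)
 peak = 200.0    # fixed source peak, in counts above the sky
 
 # The same source over increasingly bright (Poisson-noisy) backgrounds:
 for sky in (10.0, 100.0, 1000.0, 10000.0):
     background = rng.poisson(sky, size=100_000).astype(float)
     noise = background.std()    # ~ sqrt(sky) for Poisson statistics
     print(f"sky = {sky:7.0f}  noise ~ {noise:6.1f}  "
           f"source peak is {peak / noise:6.1f} sigma above the background")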
Centroiding -- Centroiding is finding the center of the object. If your aperture is not centered exactly right on the object, you will derive incorrect photometry. (You can discover this empirically in APT by manually changing the center and watching how the photometry changes.) Usually the computer can centroid to within a small fraction of a pixel. This is easier for the computer (or a human) when the star is well-sampled, i.e., when many pixels define the point source. For images like those from IRAC-1, usually only 2 pixels define a detection, so the results are more sensitive to your ability to centroid. Neither MOPEX nor APT has many parameters that you can control to affect this algorithm in their respective routines.
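Here is a small sketch (again an added illustration, with an invented star and aperture size) of a simple center-of-mass centroid and of how a 1-pixel centering error biases the aperture sum.
 import numpy as np
 
 # A synthetic, well-sampled Gaussian star on a 41x41 pixel grid (no noise).
 yy, xx = np.mgrid[0:41, 0:41]
 x_true, y_true, sigma = 20.3, 19.7, 2.0
 star = np.exp(-((xx - x_true)**2 + (yy - y_true)**2) / (2 * sigma**2))
 
 # Center-of-mass centroid: recovers the true center to a small fraction of a pixel.
 x_c = (xx * star).sum() / star.sum()
 y_c = (yy * star).sum() / star.sum()
 print(f"true center ({x_true}, {y_true}); centroid ({x_c:.2f}, {y_c:.2f})")
 
 # Effect of miscentering: flux inside an r=3 px aperture, centered vs. offset by 1 px.
 def aperture_sum(x0, y0, r=3.0):
     return star[(xx - x0)**2 + (yy - y0)**2 <= r**2].sum()
 
 good = aperture_sum(x_c, y_c)
 bad = aperture_sum(x_c + 1.0, y_c)
 print(f"flux drops by {100 * (good - bad) / good:.1f}% for a 1-pixel centering error")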
PSF photometry
To come.