Photometry

From CoolWiki
Revision as of 18:42, 6 March 2008

Your local astronomy textbook or (if necessary) the Wikipedia entry for photometry is probably a better place to go for a longer general introduction.

Overview

After you have obtained an astronomical image of an object, the next thing you might want to do is quantitatively compare -- compare using numbers -- the brightnesses (or fluxes) of two objects. For example, if you have two measurements of a star at two different wavelengths, you might be able to tell from the image that one is brighter than the other, but brighter by how much? Half a magnitude? 5 magnitudes? (Wondering what a 'magnitude' is? See below.)

Photometry is the measurement of the brightness of an object. The brightness that is recorded on an electronic detector (or any kind of detector) is a combination of the brightness of the source plus the brightness of the background that the source is on.

The way astronomers measure the brightnesses of objects in a given bandpass (filter) is to total up all (or most) of the light detected from the object in that image. But, if you think for a bit about how you might do this, you can imagine that it gets complicated pretty quickly. How should we total up the light? Since the pixels are square, should we sum up all the light that hits within a box centered on the object? Or a circle, since the object's point-spread-function (the shape of the point source in the image) is (usually) circularly symmetric? Should we count fractional pixels, or just whole pixels? What if the thing whose flux we are interested in getting is located on a region of nebulosity... how should we subtract off the contribution from the background so that we get just the flux from the object in which we are interested? And what if two (or more) objects are very close to each other - how should we separate them?

Because of all of these decisions, every astronomer does things a little differently. (It’s kind of amazing that any two astronomers working on the same object and the same wavelength ever get the same answer.) Everyone does what they think is right, and usually they get the same answer - if one person uses a square aperture, and another person uses a circular aperture, they probably subtract off the background component differently too, and as a result they get the same answer for the total flux for the object.

Some definitions

The shape of the point-source pattern is called the “point spread function (PSF),” or sometimes (especially within the Spitzer universe) the "point response function (PRF)." (Technically these two things are subtly different, but never mind that for now.) The PSF is HUGE, and there is a lot of flux surprisingly far from the star that needs to be included. (There is more information on flux and brightness in the Units page.)

One can measure fluxes from point sources (like stars) in two ways: aperture photometry or PSF fitting.

  • Aperture photometry measures all of the flux within a (usually circular) aperture centered on the star, minus the flux in an annulus (doughnut shape) around the aperture. This is quick, but can lead to large errors (especially if the background is complicated), and is essentially impossible in crowded fields (where there are lots of stars close together). One must take into account fractional pixels within the aperture, which matters particularly when the units are MJy/sr, because the area of the pixel is implied in the "per steradian" part of that unit. (There is more information on these units in the Units page.) Usually one needs to apply an aperture correction to account for the 'missing' flux outside the aperture.
  • PSF fitting takes the basic shape of the PSF (you tell the computer which one you want it to use) and fits it to the point source, thereby accounting for the flux at large distances from the star, while ignoring complicated structure in the background and other nearby point sources.
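To make the aperture photometry recipe above concrete, here is a minimal sketch in Python with numpy. It uses a synthetic Gaussian "star" on a flat background rather than real Spitzer data, uses whole pixels only (a real tool would handle fractional pixels, as noted above), and applies no aperture correction; the function name is our own illustration, not part of any photometry package:

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Sum the flux in a circular aperture of radius r_ap centered on
    (x0, y0), then subtract the background, estimated as the median
    pixel value in an annulus between r_in and r_out, times the
    number of aperture pixels.  Whole pixels only."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    in_aperture = r <= r_ap
    in_annulus = (r >= r_in) & (r <= r_out)
    sky_per_pixel = np.median(image[in_annulus])
    return image[in_aperture].sum() - sky_per_pixel * in_aperture.sum()

# Fake star: a 2-D Gaussian (total flux 1000 counts, sigma = 2 pixels)
# sitting on a flat background of 5 counts/pixel.
yy, xx = np.indices((64, 64))
sigma, total = 2.0, 1000.0
star = total / (2 * np.pi * sigma**2) * np.exp(
    -((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * sigma**2))
image = star + 5.0

flux = aperture_photometry(image, 32, 32, r_ap=8, r_in=12, r_out=20)
# flux recovers roughly the input 1000 counts; the small shortfall is
# the 'missing' flux outside the aperture that an aperture correction
# would restore.
```

Note how the background subtraction term is what makes the answer depend on the choices discussed above: a different annulus, or a mean instead of a median, gives a (slightly) different flux.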

The software that the Spitzer Science Center has developed, MOPEX, does both aperture photometry and PRF fitting, and understands the units of Spitzer images. In practice with MOPEX, one needs to use a combination of aperture photometry for the brightest stars and PRF fitting for the rest. And, as of July 2007, one ought to use MOPEX aperture photometry for IRAC, and either aperture photometry or PRF fitting for MIPS - this has to do with how well-sampled the PRFs are in the various channels.

Real-life example

In the case of the center of the galaxy in one of the Spitzer projects, a great deal of the "background" is in fact from the surrounding host galaxy. But what we are interested in is only the light from the center and not from the rest of the galaxy. To account for this, we will determine how much light is coming from where the center of the galaxy is, and then compare it to how much light is coming from near the center of the galaxy. We assume that the "background" near the core of the galaxy is the same as the background right on the core of the galaxy, so by subtracting the two, we are left with only the brightness of the center of the galaxy.

Magnitudes

Astronomy sometimes uses funny units. For example, one fundamental unit of brightness that is used all the time is a unit defined by the ancient Greeks, specifically Hipparchus and Ptolemy. Please consult your local astronomy textbook for more details, but here are the most important parts:

  1. Note that the BRIGHTER the star, the SMALLER the magnitude. Thus, a magnitude 1 star is brighter than a magnitude 6 star.
  2. Note that magnitudes are LOGARITHMIC, such that a magnitude 1 star is REALLY A LOT brighter than a magnitude 6 star -- in fact, 100 times brighter.
  3. A magnitude is really a flux ratio. It is defined as follows, where M's are magnitudes and F's are fluxes: <math>M_1 - M_2 = 2.5 \times \log \left(\frac{F_2}{F_1}\right)</math>
  4. Apparent magnitudes are the brightness an object appears to us to have from here. Absolute magnitudes are the brightness an object would have, were it at a "standard" distance of 10 parsecs. (A parsec is 3.26 light years.)
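The definitions above fit in a few lines of Python. The function names here are our own illustration, not part of any astronomy package:

```python
import math

def delta_mag(f1, f2):
    """Magnitude difference M1 - M2 = 2.5 log10(F2/F1).
    The brighter source (larger flux) gets the SMALLER magnitude."""
    return 2.5 * math.log10(f2 / f1)

def distance_modulus(d_pc):
    """Apparent minus absolute magnitude (m - M) for a source at a
    distance of d_pc parsecs; zero at the standard 10 pc."""
    return 5 * math.log10(d_pc / 10.0)

# A source 100 times brighter is 5 magnitudes brighter (i.e. smaller):
print(delta_mag(100.0, 1.0))      # -5.0
# At the standard distance, apparent and absolute magnitudes agree:
print(distance_modulus(10.0))     # 0.0
```

The first call shows points 1-3 in action: a factor of 100 in flux is exactly 5 magnitudes, and the sign comes out negative because brighter means a smaller magnitude.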

Most often, the results of doing photometry on an astronomical image are reported in magnitudes.

For more information on units and fluxes, including specifically how they apply to Spitzer data, see this page on units.