IV. 2MASS Data Processing


10. Quality Assurance

Quality Assurance (hereafter "QA") is the final analysis ensuring that the 2MASS data meet Level 1 specifications. While data were still being collected at Mt. Hopkins and Cerro Tololo, QA was responsible for closing the loop with the observatory by determining which of the tiles could forever be checked off as "done," and which needed high or low priority re-scans. For reprocessing of data for the All-Sky Release, QA was responsible for assigning quality scores to all scans based on a consistent and uniform set of criteria across the sky. Many regions of the sky had multiple observations, so this uniform set of scores assured that the All-Sky Release Catalogs could be built from the very best scans of each region.

There are three steps in QA:

The philosophy behind the QA grading scheme is to provide a numerical score for each scan indicating the likelihood that those data meet the Level 1 specifications. The best score, quality=10, is given to scans that have a 100% chance of meeting these requirements. The worst score, quality=0, is reserved for scans known to fail the Level 1 requirements. For scans in between these two regimes, integral quality scores between 0 and 10 are assigned.

A scan's quality score is assessed from a number of diagnostics:

For readers wanting a more in-depth discussion of the quality diagnostics, the sections below describe the steps in Final Science QA. Figures linked to the discussion are all taken from a random night, 000129s, i.e., 2000 Jan 29 south. (For detailed descriptions of the QA diagnostic plots, the reader is referred to this subsection.)

a. Photometricity

The scatter in mean zero-points for the six individual measures in a calibration scan set was computed as a first diagnostic of the photometric stability. Figure 1 shows an example of the nightly photometric solutions (fits to the mean zero-points of each six-calibration-scan set as a function of UT; see IV.8.b), which were reviewed by eye for each night, providing a second check of the photometric stability. A third check -- statistics on the magnitude differences of SNR>20 stars falling in the region of overlap between adjacent scans -- was also used; Figure 2 shows the resulting average magnitude difference per scan pair as a function of UT, which was also checked by eye.
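As an illustration, the sketch below (Python, with hypothetical input arrays; the function name and example values are invented for illustration) computes the per-scan-pair statistics just described: the mean and scatter of the magnitude differences of stars matched between two overlapping scans.

    import numpy as np

    def overlap_mag_stats(mags_scan_a, mags_scan_b):
        """Mean and scatter of magnitude differences for SNR>20 stars
        matched between two adjacent, overlapping scans."""
        diffs = np.asarray(mags_scan_a) - np.asarray(mags_scan_b)
        return diffs.mean(), diffs.std()

    # A photometrically stable scan pair shows a mean difference near zero
    # and small scatter; clouds or calibration drift inflate both quantities.
    mean_diff, scatter = overlap_mag_stats([14.02, 13.51, 12.88],
                                           [14.05, 13.47, 12.90])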

Figure 1    Figure 2

Using the above diagnostics, a photometric quality factor (fct1) was computed for each photometric solution. (Sometimes nights were divided into separate intervals with independent solutions if, for example, a brief period of clouds interrupted data collection partway through the night.) This factor considers the number of calibration scan sets going into the night's photometric solution, the photometric dispersion in each calibration scan set, and the size of the photometric scatter in scan overlaps. It was computed via the formula fct1 = pfct1*pfct2*pfct3, where the three subfactors are described as follows:

It should be noted that some scans do not have overlapping scans taken on the same night, meaning that there are no stars in common with which to judge the stability of the photometry on a scan-to-scan basis. Here other indicators, such as the background plots and jump counters (discussed below), may indicate the presence of clouds. In these cases, fct1 can be further downgraded to 0.0 if the scan, or set of scans, is believed to be non-photometric.
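The sketch below illustrates only the structure of the fct1 calculation described above: a product of three subfactors, with a possible further downgrade to 0.0 for scans believed to be non-photometric. The threshold values are placeholders invented for illustration, not the actual pipeline cuts.

    def compute_fct1(n_cal_sets, cal_set_dispersion, overlap_scatter,
                     believed_nonphotometric=False):
        # pfct1: penalize nights with few calibration scan sets (placeholder cut)
        pfct1 = 1.0 if n_cal_sets >= 6 else 0.8
        # pfct2: penalize large zero-point dispersion in the cal sets (placeholder cut)
        pfct2 = 1.0 if cal_set_dispersion <= 0.03 else 0.5
        # pfct3: penalize large scatter in the scan-overlap differences (placeholder cut)
        pfct3 = 1.0 if overlap_scatter <= 0.05 else 0.5
        fct1 = pfct1 * pfct2 * pfct3
        # Scans believed non-photometric (e.g., clouds indicated by the
        # background plots or jump counters) can be downgraded all the way to 0.0.
        if believed_nonphotometric:
            fct1 = 0.0
        return fct1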

A suite of diagnostic plots, providing other internal checks of the photometricity, was also reviewed for each night's data:

Figure 3

These plots were added to final processing as additional checks of the photometry. Review of these plots provided a first characterization of the dataset at large, often suggesting more in-depth analysis of the data once they were loaded into the databases. These plots did not, however, directly affect the photometric scoring of scans, since none of the problems uncovered were severe enough to warrant additional downgrades.

b. Sensitivity/Backgrounds (Airglow)/Meteor Blanking

For each scan a photometric sensitivity parameter (hereafter "PSP"; see VI.2) was computed from a convolution of the seeing shape and background level. It correlates with the probability that a scan will meet the Level 1 specifications for sensitivity. The conversion of PSP value into an actual probability was slightly different for each detector, making this value observatory dependent. The northern camera had its H-band array replaced in mid-survey, so there was date dependence as well. These values were calculated automatically by the QA pipeline and converted into a sensitivity quality factor, fct2, as follows:

Table 1: Conversion of PSP values into fct2

Actual        North Ks PSP          North H PSP       South H PSP   South Ks PSP   fct2
Probability   (& H before 990701)   (after 990701)
>75%          <= 10.85              <= 9.0            <= 9.6        <= 10.6        1.0
50-75%        <= 11.11              <= 9.3            <= 9.8        <= 10.9        0.8
25-50%        <= 11.35              <= 9.5            <= 10.3       <= 11.7        0.5
0-25%         <= 11.85              <= 9.7            <= 11.7       <= 11.7        0.3
0%            >  11.85              >  9.7*           >  11.7       >  11.7        0.1

*These Northern H-band PSP values resulted in downgrades of only fct2=0.3, not 0.1.

It should be noted that under photometric conditions, the sensitivity at J-band always met Level-1 specifications and so was not a factor in the computation. Only the H- and Ks-band PSP values affected the probability. Figure 4 shows the PSP values versus scan number, which provided the QA reviewer a visual summary of the automatically-generated fct2 values.
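The Table 1 lookup can be expressed compactly. The sketch below (Python; the function and dictionary names are invented for illustration) encodes the tabulated thresholds, assuming the caller selects the appropriate camera/band row (Northern H-band data taken before 990701 use the Northern Ks thresholds); how the per-band values were combined into a single scan-level fct2 is not spelled out here, so the sketch returns a value per band.

    # Upper PSP limits for fct2 = 1.0, 0.8, 0.5 and 0.3, from Table 1.
    PSP_THRESHOLDS = {
        ("north", "Ks"): [10.85, 11.11, 11.35, 11.85],  # also north H before 990701
        ("north", "H"):  [9.0, 9.3, 9.5, 9.7],          # after 990701
        ("south", "H"):  [9.6, 9.8, 10.3, 11.7],
        ("south", "Ks"): [10.6, 10.9, 11.7, 11.7],
    }
    FCT2_VALUES = [1.0, 0.8, 0.5, 0.3]

    def psp_to_fct2(psp, site, band):
        """Convert a PSP value into the sensitivity quality factor fct2."""
        for limit, fct2 in zip(PSP_THRESHOLDS[(site, band)], FCT2_VALUES):
            if psp <= limit:
                return fct2
        # Above all limits: fct2 = 0.1, except Northern H band, which per the
        # table footnote is downgraded only to 0.3.
        return 0.3 if (site, band) == ("north", "H") else 0.1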

The QA reviewer also examined plots (Figure 5) of the frame background level per band. These plots were instrumental in showing the onset of clouds, but also alerted the reviewer to other problems, such as extreme airglow variations or transient sources entering the field of view.

Figure 4    Figure 5

A diagnostic known as Cnoise(4) was used to automatically flag scans with such dramatic airglow variations that residual structure remained in the image data. The Cnoise(4) statistic is the difference between the measured Atlas Image background noise (after modelling large-scale gradients and structure) and the theoretical noise expected from the overall background level. Of the three 2MASS bandpasses, the H band shows by far the largest effect from OH airglow variations, so the H-band Cnoise(4) value was used as the sole diagnostic for the airglow quality parameter, fct5. For H-band Cnoise(4) values < 4.5, the airglow quality factor remained at fct5=1.0; for values > 4.5, it was downgraded to fct5=0.1. This downgrade was overridden in cases where the logarithm of the scan's maximum source density (determined in subregions along the scan length) was greater than 4.2, or when a visual inspection of the image data by the QA reviewer showed no obvious problems caused by the airglow. QA reviewers were also asked to examine the image data for scans with 2.5 < Cnoise(4) < 4.5, to look for any problems not automatically receiving a downgrade.
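A minimal sketch of the fct5 logic just described, assuming the H-band Cnoise(4) value, the maximum log source density, and the reviewer's verdict are already in hand (variable and function names are illustrative, and the treatment of the exact 4.5 boundary is not specified in the text):

    def airglow_fct5(cnoise4_h, max_log_density, reviewer_found_no_problem=False):
        """Airglow quality factor from the H-band Cnoise(4) statistic."""
        if cnoise4_h <= 4.5:
            return 1.0
        # Strong residual airglow structure is downgraded to fct5 = 0.1, unless
        # the downgrade is overridden by a high source density or by a clean
        # visual inspection of the image data.
        if max_log_density > 4.2 or reviewer_found_no_problem:
            return 1.0
        return 0.1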

There were also concomitant diagnostics, known as "jump counters," that counted the number of frames in a scan where the frame background exceeded the average background of its adjacent frames by >0.5 times the root-sum-squared pixel noise. For scans with three or more H- or Ks-band jumps (out of 247 total frames), the QA pipeline automatically alerted the reviewer to examine the image data for problems. These counters were excellent diagnostics of extreme airglow variations, clouds, and electronic anomalies.
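The jump counter itself is simple to sketch (Python; names are illustrative), given the per-frame background levels and the root-sum-squared pixel noise:

    import numpy as np

    def count_background_jumps(frame_backgrounds, pixel_noise_rss):
        """Count frames whose background exceeds the average of the two
        adjacent frames by more than 0.5 times the RSS pixel noise."""
        bg = np.asarray(frame_backgrounds, dtype=float)
        jumps = 0
        for i in range(1, len(bg) - 1):
            neighbor_avg = 0.5 * (bg[i - 1] + bg[i + 1])
            if bg[i] - neighbor_avg > 0.5 * pixel_noise_rss:
                jumps += 1
        return jumps

    # Three or more H- or Ks-band jumps in a 247-frame scan triggered an
    # automatic alert for the QA reviewer to examine the image data.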

Finally, the automated QA pipeline produced images of each frame from which a transient source was removed. This transient source removal was aimed primarily at removing meteor streaks and satellite trails from the data frames, although other one-time sources, such as scattered light from bright stars near the array edge, were also eliminated. The images in this subsection showed every frame for which a transient source was detected and removed and the extent to which the image was blanked. QA reviewers examined all these images to monitor whether the blanked sources were indeed transient and whether their removal had a beneficial result. As a result of these (and other) visual inspections of the data, a (fortunately small) list of remaining anomalies (see II.4b) was amassed.

c. Seeing

For a scan to receive a high quality score, the seeing had to be within tolerance, point source images had to be round, and the final pipeline processing had to track the seeing on timescales shorter than or comparable to the variation timescale of the seeing itself. To this end, several quality diagnostics were developed.

Figure 6    Figure 7

d. Astrometry

Each QA review also included a check of several plots related to the astrometric quality of the data:

Occasionally, small astrometric anomalies were uncovered in some scans -- all of which were investigated in more detail outside the normal QA process -- but none of these problems was severe enough to warrant a downgrade to the final quality score.

Figure 8    Figure 9    Figure 10    Figure 11

e. Science Diagnostics

Another suite of plots served as astrophysical checks of each night's data. All plots were reviewed each night to ensure that there were no patterns/anomalies suggesting a non-astrophysical imprint on the data:

No problems were uncovered that resulted in the direct downgrading of scans.

f. Miscellaneous Diagnostics

To check that the flagging of minor planets was working correctly, the QA subsystem checked that any low-numbered asteroids (i.e., one of the first 500 asteroids discovered, as numbered by the IAU) were correctly correlated to a 2MASS source. Because all of these low-numbered asteroids are bright, they should correlate with a 2MASS source when a 2MASS scan covers their predicted positions. QA reviewers were asked to study the output of the correlations or non-correlations (usually there were only a few, if any, such asteroids on any given night), and no problems were seen. The only non-correlations occurred when the predicted asteroid positions were very close to a scan edge.
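This check amounts to a positional cross-match between the predicted asteroid positions and the extracted 2MASS sources. A minimal sketch follows (Python; the match radius is a hypothetical tolerance, and a flat-sky small-angle approximation is assumed):

    import numpy as np

    def uncorrelated_asteroids(asteroid_radec, source_radec, match_radius_arcsec=5.0):
        """Return indices of predicted asteroid positions (RA, Dec in degrees)
        with no 2MASS source within match_radius_arcsec."""
        sources = np.asarray(source_radec, dtype=float)
        missing = []
        for i, (ra0, dec0) in enumerate(asteroid_radec):
            dra = (sources[:, 0] - ra0) * np.cos(np.radians(dec0))
            ddec = sources[:, 1] - dec0
            sep_arcsec = np.hypot(dra, ddec) * 3600.0
            if sep_arcsec.min() > match_radius_arcsec:
                missing.append(i)
        return missing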

The final QA check was a monitor of the differences between the final processing and the preliminary processing for the incremental releases. Any differences in scoring were noted, and the reasons for the changes were documented and understood, before the final grades were approved:

Figure 12

g. Final Quality Scoring

The results of the above diagnostic checks were noted in a final summary form (see this page) by the QA reviewer for each night. These results were encapsulated into a final grade for each science scan. Each scan was scored using a base quality number of 10 multiplied by the minimum of the individual quality factors detailed above; that is, grade = 10 * min(fct1, fct2, fct3, fct4, fct5), where fct1 is allowed to range from 0.0 to 1.0, and all others from 0.1 to 1.0 only. The grade will therefore always be at least 1, unless the photometric quality factor fct1=0, in which case the grade is 0.
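Expressed as a short sketch (Python; the function name and example factor values are illustrative), the final grade is:

    def final_scan_grade(fct1, fct2, fct3, fct4, fct5):
        """Final quality grade: 10 times the minimum of the quality factors.
        fct1 ranges from 0.0 to 1.0; the others from 0.1 to 1.0, so the
        grade is at least 1 unless fct1 = 0."""
        return 10 * min(fct1, fct2, fct3, fct4, fct5)

    # Example: perfect photometry but marginal sensitivity (fct2 = 0.5)
    # yields a grade of 5.
    grade = final_scan_grade(1.0, 0.5, 1.0, 1.0, 1.0)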

The final step in the QA review process was the submission of this final summary form to the Principal Investigator, Michael Skrutskie, for an independent assessment of the diagnostics. At this point any disagreements with the scoring could be discussed (which was rarely required) before the night's scoring was declared official.

[Last Update: 2003 Mar 13, J.D. Kirkpatrick & R. Hurt]

