IPAC 2MASS Working Group Meeting #91 Minutes 4/02/96

Attendees: R. Beck, T. Chester, R. Cutri, T. Evans, J. Fowler, T. Jarrett, D. Kirkpatrick, G. Kopan, B. Light, H. McCallon


  1. Action Items From the PS CDR
  2. DAOFIND vs. Kernel Size
  3. Anti-Persistence in the Flattening Computation


  1. Action Items From the PS CDR -- The Action Items from the Point Source CDR were appended to last week's minutes. Since then, some corrections and additions have been made, and a new list has been distributed by R. Cutri. The questions of due dates and closure mechanisms were raised. All due dates may be considered to be three to five months from the present, with the highest priority going to the liens; prioritization of the remaining action items is left to the judgment of the cognizant engineers. The closure mechanism will be an email memo to R. Cutri with "cc: 2mass" explaining how the item was closed.

  2. DAOFIND vs. Kernel Size -- B. Light reported on a study he and T. Jarrett have done on how the size of the smoothing kernel applied to the coadded images affects the output of DAOFIND. Seven coadded images of a scan of M67 were generated: one with bilinear interpolation and six with "Weinberg smoothing kernels" with H values of 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6. DAOFIND was run with the same thresholds on the seven coadds, and the results were compared. Firm conclusions could not be drawn, for two reasons: the DAOFIND thresholds really need to be optimized for each value of H if comparisons are to be fair, and without reliable information on which detections are real and which are false, judgments about detections appearing or disappearing are potentially unreliable. It was decided that the overscanning information available for the M67 field would be used to develop a truth table, i.e., a list of detections that are almost certain to be real, spanning the range of magnitudes from the brightest down to the faintest that could be processed in a single scan. The DAOFIND thresholds will then be tuned for H values in a reduced range. An H value between 0.3 and 0.4 appears to be best for galaxy work, so this range will be examined. This work is being done to close PIXPHOT/FIND liens 1.b and 1.c on the list distributed by R. Cutri.
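
    The coupling between kernel width and detection threshold can be illustrated with a toy experiment. The sketch below is not the actual DAOFIND or Weinberg kernel code: a separable Gaussian kernel stands in for the Weinberg kernels, and a simple thresholded local-maximum finder stands in for DAOFIND. It shows why a fixed threshold gives unfair comparisons across H: wider smoothing suppresses noise peaks but also lowers the peak amplitude of real sources.

```python
# Illustrative sketch: smooth a synthetic coadd with kernels of
# increasing width and count threshold crossings. Kernel shape, the
# mapping of H to kernel width, and the peak finder are all assumptions.
import numpy as np

def gaussian_kernel(h, size=7):
    """Hypothetical stand-in for a 'Weinberg smoothing kernel' of width h."""
    x = np.arange(size) - size // 2
    k = np.exp(-0.5 * (x / max(h * size, 1e-6)) ** 2)
    return k / k.sum()

def smooth(image, h):
    """Separable smoothing: convolve rows, then columns."""
    k = gaussian_kernel(h)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def count_detections(image, threshold):
    """Count local maxima above threshold (a crude DAOFIND stand-in)."""
    n = 0
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            v = image[i, j]
            if v > threshold and v == image[i-1:i+2, j-1:j+2].max():
                n += 1
    return n

rng = np.random.default_rng(0)
coadd = rng.normal(0.0, 1.0, (64, 64))  # pure noise background
coadd[20, 20] += 20.0                   # one bright "star"
for h in (0.1, 0.3, 0.6):
    print(h, count_detections(smooth(coadd, h), 2.5))
```

    With a fixed threshold of 2.5, the star is recovered at small H but its smoothed peak falls below the threshold at large H, which is exactly why the thresholds must be re-tuned per H value before the coadds can be compared.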

  3. Anti-Persistence in the Flattening Computation -- A splinter group met after the main meeting to review ideas concerning the anti-persistence problem in the flattening computation (see the minutes of meeting number 89). The idea of identifying persistence contamination in the flattening stacks by deviations from linearity over the scan had been under development. A desire was also expressed to use more of the data frames in the stacks: for example, instead of 6 25-frame stacks, which use 58% of the data, 10 26-frame stacks would use 100% of the data. The lower amount was used in the early protopipeline work simply because more frequent changes in the flattening were not anticipated to be necessary, and the need to avoid unnecessary CPU usage was felt acutely. The stack parameters were set up early on, and in the absence of an obvious need to change them, as well as the presence of plenty of attention-demanding tasks, they were never changed.
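
    The coverage percentages quoted above follow directly from the 260-frame scan length (quoted later in these minutes) and can be checked with a one-line calculation:

```python
# Quick check of the stack-coverage figures: a 260-frame scan flattened
# with 6 stacks of 25 frames versus 10 stacks of 26 frames.
frames_per_scan = 260

def coverage(n_stacks, frames_per_stack):
    return n_stacks * frames_per_stack / frames_per_scan

print(f"{coverage(6, 25):.0%}")    # 6 x 25 = 150 of 260 frames -> 58%
print(f"{coverage(10, 26):.0%}")   # 10 x 26 = 260 of 260 frames -> 100%
```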

    The appearance of the anti-persistence effect and the refined baseline hardware plan make it reasonable to expand the raw data usage for the flattening computations, and the plan has evolved toward near-complete usage. Part of the puzzle involves the calibration scans, which are currently expected to be one degree long and composed of 42 frames. Flattening solutions for calibration scans are expected to be borrowed for survey scans through extremely dense regions such as the galactic plane, and two 21-frame stacks do not appear to be an advisable approach for flattening the calibration scans. Since only one stack seems feasible for these scans, it appears desirable to be able to use a single 42-frame stack. This would both simplify the borrowing for survey scans and apply maximum information to the calibration scans themselves.

    DFLAT uses a PARAMETER statement to dimension the stack arrays, so it should be easy to expand the maximum stack size from 25 to 42. This will make the program larger; it has been a 16 MB program and would become a 25 MB program. This seems acceptable, however, given the advantages expected and the large amount of memory anticipated for the production machines.

    Examination of some cases of anti-persistence has revealed that it is a more subtle effect than had been appreciated. A method based on deviations from linearity may not share the eye's high efficiency at spotting artifacts in the coadded images without producing a lot of false triggers. This method will still be implemented, by changing the DFLAT code to threshold in units of the trimmed-average standard deviation and to apply the corrective action to the sky offset correction frames rather than to the mask; in addition, however, it appears that the persistence contamination may be attacked more effectively by using the 42-frame stacks on the survey scans as well as the calibration scans (thereby also eliminating the temptation to prepare two versions of DFLAT).
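
    The linearity-deviation filter described above can be sketched as follows for a single pixel. This is an illustrative assumption about the mechanics, not the actual DFLAT implementation: a least-squares line is fit to the pixel's values across the frames of the stack, and frames whose residuals exceed a threshold expressed in units of a trimmed (robust) standard deviation are flagged.

```python
# Sketch of a linearity-deviation filter for one pixel's values across
# a stack. Function names, the least-squares fit, and the default
# threshold and trim depths are illustrative assumptions.
import numpy as np

def trimmed_std(values, trim):
    """Standard deviation after discarding the `trim` highest and lowest values."""
    s = np.sort(values)
    return s[trim:len(s) - trim].std()

def flag_nonlinear(values, threshold=5.0, trim=4):
    """Flag frames deviating from the pixel's linear trend over the scan."""
    frames = np.arange(len(values))
    slope, intercept = np.polyfit(frames, values, 1)   # linear trend over the scan
    residuals = values - (slope * frames + intercept)
    sigma = trimmed_std(residuals, trim)               # robust to the outliers themselves
    return np.abs(residuals) > threshold * sigma

# A slowly drifting pixel with one persistence-like excursion:
vals = np.linspace(100.0, 104.0, 42) + np.random.default_rng(1).normal(0, 0.1, 42)
vals[21] += 5.0
print(np.nonzero(flag_nonlinear(vals))[0])
```

    Using a trimmed estimate of the scatter is the point of thresholding "in units of the trimmed-average standard deviation": the contaminated frames themselves are excluded from the noise estimate, so they cannot inflate the threshold and hide themselves.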

    A 260-frame scan can be covered by six 42-frame stacks with only eight frames left out. The 42-frame stacks can trim the outer 12 values on each end to eliminate any persistence contamination, leaving 18 values per pixel for the trimmed averaging (more than in the protopipeline, hence lower flattening noise). The linearity-deviation filter can probably be thresholded at a higher level to catch any problems that get past the heavily trimmed averaging.
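
    The bookkeeping in that paragraph checks out directly:

```python
# Partition of a 260-frame survey scan into 42-frame stacks, and the
# depth of the trimmed average within each stack.
frames_per_scan = 260
stack_size = 42
n_stacks = frames_per_scan // stack_size             # 6 stacks
left_out = frames_per_scan - n_stacks * stack_size   # 8 frames unused
trim_each_end = 12
averaged = stack_size - 2 * trim_each_end            # 18 values averaged per pixel
print(n_stacks, left_out, averaged)                  # 6 8 18
```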

    Some final discussion of this subject involved the possibility of identifying persistence contamination in a stack by its appearance in the sorted pixel array. To form the trimmed average, the 42 values for an unmasked pixel in the stack will be sorted, and the high and low values will be discarded before the rest are averaged. With persistence contamination present, a recognizable signature is almost certain to appear in the sorted array, and some use could possibly be made of this information. Meanwhile, the approach will be to expect the larger stack size and more extensive trimming to solve the problem. This will be the baseline approach in the next version of the PIXCAL/DFLAT SDS, and an integrated processor will be delivered in the near term for testing with scans known to contain persistence problems.
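
    The trimmed averaging, and the kind of one-sided signature that persistence leaves in the sorted array, can be sketched as follows. The trim depth of 12 matches the plan above; the signature statistic is an illustrative assumption, not taken from DFLAT.

```python
# Sketch of trimmed averaging for one unmasked pixel in a 42-frame
# stack: sort the 42 values, discard the 12 highest and 12 lowest,
# and average the remaining 18. Persistence adds a one-sided bright
# tail that lands at the top of the sorted array.
import numpy as np

def trimmed_average(values, trim=12):
    s = np.sort(values)
    return s[trim:len(s) - trim].mean()

def top_tail_excess(values, trim=12):
    """Crude persistence signature: how much farther the discarded high
    tail sits above the trimmed core than the discarded low tail sits
    below it. Symmetric noise gives ~0; persistence gives a positive excess."""
    s = np.sort(values)
    core = s[trim:len(s) - trim]
    high = s[-trim:].mean() - core.mean()
    low = core.mean() - s[:trim].mean()
    return high - low

rng = np.random.default_rng(2)
clean = rng.normal(100.0, 1.0, 42)
contaminated = clean.copy()
contaminated[:5] += 10.0     # persistence ghost in 5 of the 42 frames
print(trimmed_average(clean), trimmed_average(contaminated))
print(top_tail_excess(clean), top_tail_excess(contaminated))
```

    Because only 5 of the 42 values are contaminated and 12 are discarded from the high end, the trimmed averages of the clean and contaminated stacks agree closely, while the tail-excess statistic separates them clearly; this is the sense in which the larger stacks and deeper trimming are expected to absorb the persistence problem.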