IPAC 2MASS Working Group Meeting #57 Minutes, 3/07/95
ATTENDEES: C. Beichman, T. Chester, T. Conrow, R. Cutri, D. Engler,
           T. Evans, J. Fowler, L. Fullmer, G. Kopan, B. Light,
           C. Lonsdale, H. McCallon, S. Wheelock, J. White
- Implementation Plan
- Effect of Seeing on Galaxy Processing
- Simulation Including Dead Pixels
- Protocam Pipeline Changes for April Data
- Diffraction Artifacts
- Quick Look Turnaround
- Time-Dependent PSF
- Perl-Script Multi-CPU Usage
- Implementation Plan --
C. Beichman reported that the 2MASS Implementation Plan has been sent to
NASA HQ and UMass, where its warm reception has been a source of optimism.
The plan involves about half of the IPAC staff and covers the period from
the present through the year 2002.
- Effect of Seeing on Galaxy Processing --
T. Chester reported that studies of the effect of seeing on galaxy
processing have shown no significant effect on star/galaxy discrimination
for total PSFs between 1 and 3 arcsec FWHM, provided a reasonably accurate
PSF model is employed. The loss of sensitivity with a 3-arcsec PSF is
negligible for galaxies, and shape-parameter calculations remain dominated
by the undersampling due to pixel size.
- Simulation Including Dead Pixels --
B. Light reported that tests involving the inclusion of dead pixels in
the Simulator (first discussed in last week's minutes) have now been extended
to use the mask generated by CFlat (previously only the input mask had been
used; CFlat turns off additional pixels based on responsivity anomalies).
The CFlat output mask has more dead pixels indicated around the periphery of
existing islands of masked pixels, increasing the local systematic effects.
In addition, artificial masks have been used. One artificial mask was
obtained by rotating the CFlat output mask through 90 degrees and ORing the
result to the unrotated mask. Another artificial mask was obtained by
turning off randomly selected pixels until 2% were masked off.
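As an illustration, the two artificial-mask constructions could be sketched
as follows. This is a hedged sketch in Python with NumPy, not the actual
Simulator code; the array shape and dead fraction in the example are
illustrative only.

```python
import numpy as np

def rotate_and_or_mask(mask):
    """Artificial mask: rotate the input mask 90 degrees and OR the
    result with the unrotated original (True = dead pixel)."""
    return mask | np.rot90(mask)

def random_mask(shape, dead_fraction=0.02, rng=None):
    """Artificial mask with a given fraction of randomly selected
    dead pixels; 2% matches the test described in the minutes."""
    rng = np.random.default_rng(rng)
    n = int(round(dead_fraction * shape[0] * shape[1]))
    mask = np.zeros(shape, dtype=bool)
    idx = rng.choice(shape[0] * shape[1], size=n, replace=False)
    mask.flat[idx] = True
    return mask
```
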
These tests showed that aperture photometry was significantly affected
by the higher fraction of dead pixels, but KAMPhot photometry was not, for
typical scans and the current operating parameters (e.g., aperture size,
thresholds, etc.). The slight degradation of KAMPhot results is still
overwhelmed by the effects of interpixel response structure (modelled as a
central hole of 0.075 arcsec radius and a square border of 0.075 arcsec
width, within which the responsivity is zero).
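For concreteness, the interpixel response model described above might be
expressed as follows. This is only a sketch; the 2.0-arcsec pixel size is
an assumed value, not taken from these minutes.

```python
def responsivity(x, y, pixel=2.0, hole_r=0.075, border_w=0.075):
    """Interpixel response model: responsivity is zero inside a central
    hole of radius hole_r (arcsec) and within a square border of width
    border_w at the pixel edge; unity elsewhere.  x, y are offsets in
    arcsec from the pixel center.  The pixel size is an assumption."""
    half = pixel / 2.0
    if x * x + y * y <= hole_r ** 2:
        return 0.0  # central dead hole
    if abs(x) >= half - border_w or abs(y) >= half - border_w:
        return 0.0  # dead border at the pixel edge
    return 1.0
```
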
- Protocam Pipeline Changes for April Data --
Changes to the protocam pipeline for the April data were discussed.
Several different areas were involved; they are described in the separate
subsections below.
- Diffraction Artifacts --
C. Beichman reported on a study of artifacts created by the
diffraction spikes caused by the telescope secondary mirror supports.
The rate at which such false point sources are produced is significant,
and automated removal of them within the pipeline is desirable. He will
provide guidelines for code changes to J. White, who will include them
in the code that removes ghosts and persistence objects.
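One plausible sketch of such spike filtering is shown below, purely for
illustration: the spike position angles, search radius, and lane half-width
are assumptions, not values from the study or from the guidelines to be
provided.

```python
import math

def near_spike(src, star, angles_deg=(0, 90, 180, 270),
               max_r=60.0, half_width=2.0):
    """Flag a detection as a possible diffraction-spike artifact if it
    lies close to one of the spike directions radiating from a bright
    star.  src and star are (x, y) positions; max_r and half_width are
    in the same (e.g. arcsec) units.  All thresholds are illustrative."""
    dx, dy = src[0] - star[0], src[1] - star[1]
    r = math.hypot(dx, dy)
    if r == 0 or r > max_r:
        return False
    theta = math.degrees(math.atan2(dy, dx)) % 360.0
    for a in angles_deg:
        d = abs((theta - a + 180.0) % 360.0 - 180.0)  # angular separation
        # perpendicular distance from the spike lane
        if d < 90.0 and r * math.sin(math.radians(d)) < half_width:
            return True
    return False
```
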
- Quick Look Turnaround --
R. Cutri pointed out that although the data-taking period is only a
few weeks and the pipeline is not optimized for fast turnaround, we
should nevertheless not overlook the possibility of getting information
regarding any anomalies found early on at IPAC to the observatory in
time to take corrective steps. This should be possible if the observations
to be taken and analyzed first are carefully planned so that the
types of observations most likely to profit from quick response are
obtained near the beginning of the observing period.
- Time-Dependent PSF --
C. Beichman requested that some effort be expended to provide
better tracking of PSF variations than has been done up to now. The
guidelines offered were that PSF time variations should be modelled at
least once per 24-hour period, certainly no more often than once per scan,
and without spatial variations over the focal plane. Several ways to do
this were discussed.
T. Chester reported that T. Jarrett has a program that examines
coadded images and (among other things) determines an effective PSF.
This could, in principle, be used to generate time-tagged PSF FWHM
values, but would require special code and modifications of the pipeline
to fit it in. This effort would not produce code portable into
the 2MAPPS pipeline, which goes against a guideline urged by J. Fowler
with respect to new software developed at this time.
G. Kopan and J. Fowler had already looked at the possibility of
getting PSF FWHM values for KAMPhot to use from the existing PSF and
PSFGRID programs. These can run on the aperture-photometry results from
DAOPhot that are used in the frame-offsets code, but best results have
been obtained in the past by using KAMPhot output for its significantly
better centroid results, without which the DAOPhot input would effectively
convolve the PSFs with a centroiding error distribution that
appears too difficult to deconvolve. In addition, PSF and PSFGRID have
been used only on scans ideal for PSF determination, i.e., high density
of point sources but unconfused. It is not clear that results from
sparse or confused scans would be usable by KAMPhot, and double-passing
the data would be required.
Since KAMPhot has code to compute its own PSF from the scan data it
is processing, it could conceivably be made to provide time-dependent
PSF modelling. The disadvantages of this approach are that it requires
very high CPU usage and cannot be trusted to produce stable estimates.
Code to make its PSF computation more stable would not be usable in
2MAPPS, wherein a separate method is planned.
The 2MAPPS design for PSF estimation could be attempted, but this
means rushing the development of that capability, which is not currently
scheduled to be underway at this time. The method involves the STATS
subsystem estimating the PSF FWHM from the same data it is using for
frame-offset computation. Since some statistically useful number of
point sources must be examined to get a stable estimate, the plan is to
output a time-tagged estimate once every N (TBD) point sources, since
this allows higher frequency time tracking of the PSF in dense regions,
and while the tracking frequency will be lower in sparser regions, a
smaller number of point sources will be affected. What STATS will actually
estimate is the parameter of a one-parameter seeing model describing the
optical PSF convolved with a seeing disk; the parameter is expected to be
derivable from the FWHM estimate, and the whole algorithm will be
constrained to keep it stable. This, however, is a lot to develop and
test before May.
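The once-per-N-sources bookkeeping might look like the following sketch.
The value of N and the use of a median are illustrative choices (N is TBD
in the plan above); the Gaussian FWHM-to-sigma constant is the standard
relation FWHM = 2*sqrt(2*ln 2)*sigma.

```python
import math
import statistics

# standard Gaussian relation between FWHM and sigma
FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def psf_tracker(measurements, n=50):
    """Emit one time-tagged seeing estimate per N point sources.
    `measurements` is an iterable of (time, fwhm) pairs; n=50 is an
    illustrative stand-in for the TBD N.  A median over each batch
    keeps the estimate stable against outliers."""
    buf = []
    for t, fwhm in measurements:
        buf.append((t, fwhm))
        if len(buf) == n:
            yield (buf[-1][0], statistics.median(f for _, f in buf))
            buf = []
```

In dense regions the batches fill quickly and the PSF is tracked at higher
time frequency; in sparse regions the tracking is coarser but fewer
sources are affected, as noted above.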
KAMPhot obtains its PSF model in a sufficiently modular way that
patching in various algorithms' results does not appear to involve
significant resources or risk. G. Kopan is currently working on the
STATS subsystem, but not the PSF-estimation part, only the frame-offset
determination part. Nevertheless, it was decided that he and B. Light
will pursue modifying his code to include the computation of a 90%-encircled
light value to be passed to KAMPhot for use in setting up its
PSF model. If this does not run into significantly threatening obstacles,
it will be used to obtain the desired PSF time dependence; otherwise, we
will have to rethink how to handle this concern.
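A minimal sketch of a 90%-encircled-light computation, assuming a
background-subtracted image array and a known centroid; this is not the
actual STATS or KAMPhot code, which would also handle masked pixels.

```python
import numpy as np

def encircled_light_radius(image, cx, cy, frac=0.90):
    """Radius (in pixels) enclosing `frac` of the total flux around
    (cx, cy).  Pixels are sorted by distance from the centroid and the
    cumulative flux is scanned for the target fraction."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - cx, yy - cy).ravel()
    f = image.ravel().astype(float)
    order = np.argsort(r)
    cum = np.cumsum(f[order])
    target = frac * cum[-1]
    return r[order][np.searchsorted(cum, target)]
```
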
J. Fowler pointed out that some form of PSF RTB (Regression Test
Baseline) will need to be set up so that we will know what the effects
of various code changes are. A subset of existing protocam scans will be
identified for this purpose. These will cover the needed variations in
density and confusion but will be kept small enough not to introduce
disk-space problems.
- Perl-Script Multi-CPU Usage --
The April data will be processed on karloff, which has just had its
CPUs upgraded from 50 MHz to 75 MHz (G. Kopan has run benchmarks that
show the throughput has indeed scaled upwards by 50%), and it is expected
that a second similar machine will be available also. In order to make
use of the multiple CPUs, it has been suggested that T. Conrow's
perl-script protocode be expanded to handle the running of entire
individual scans as client/server tasks (see last week's minutes).
This involves significant modifications to the pipeline and the
perl-script code, but a cursory inspection by T. Conrow, G. Kopan,
and J. White of what will be needed has indicated that it should be
possible with reasonable expenditure of effort. If successful, it
would constitute a major step forward in 2MAPPS pipeline design,
and so it will be pursued. If unforeseen obstacles arise to make it
impractical in the short term, then techniques developed by
R. Beck for use on separate sparcstations will be employed.
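In spirit, scan-level parallelism could be sketched with Python's
multiprocessing pool as below. The real scheme is a perl-driven
client/server arrangement; the function names here are hypothetical
stand-ins for a full pipeline invocation on one scan.

```python
from multiprocessing import Pool

def process_scan(scan_id):
    """Stand-in for running the entire pipeline on one scan; the real
    work would be a pipeline invocation driven by the perl scripts."""
    return (scan_id, "done")

def run_scans(scan_ids, ncpu=2):
    """Farm whole scans out across CPUs, one scan per worker, mirroring
    (in spirit only) the client/server scheme described above."""
    with Pool(processes=ncpu) as pool:
        return pool.map(process_scan, scan_ids)
```
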