Here are the questions and answers from the ISO SWS workshop held
at IPAC, 18-21 November 1997.

ISO SWS workshop questions during lectures.

I. Tuesday 18 November 1997 - Do Kester's Talk - The SWS Instrument

1. Where were the cross talk matrix values determined?
In the lab. Checked in orbit. Not seen to change much. Updated
once for each detector in the Performance Verification phase of the mission.


2. Theoretical resolving power. SWS01 AOT speed 4 degrades resolution by
a factor of 2. An extended source will then degrade the resolution by
another factor of 2?  

SWS01 scans degrade resolution due to low sampling of grating positions.
Observing an extended source will degrade the resolution further.
However, observing an extended source with SWS01 should not degrade
the resolution as badly as you might expect (i.e. not by a factor of 2
twice for every factor of ~2 increase in speed).
Sooner or later in-flight values will make their way into CAL19.
Because the instrumental profile for extended sources is broader than for 
point sources, the AOT1 "smearing" across this profile is less severe than
for point sources.


3. Does pipeline 6 include the theoretical resolving power correction?
No. 


4. How many kinds of glitch slopes can you see?
Glitches produce a step. If the step is large, the after-effects
may include tails.


5. Conversion from bits/sec to volts/sec.
The conversion factors are in the calibration file CAL5. See IDUM Sec. 2.10.

6. Does the slope change after a glitch?
Not normally, but strong glitches leave "tails".


7. Does the pipeline reduce data that was not optimized for other
detectors? 
The pipeline reduces all data from all detectors. 
Data that cannot be calibrated properly are carried along but are excluded
from the final AAR product.

All data for which a valid wavelength could be assigned (whether serendipitous
or not) and which have valid slopes (no overflow, no more than 3 glitches,
etc.) make it into the AAR, irrespective of whether the user wants them.

8. Is there any reason we should trust SPD data?
All SPDs are scientifically validated, regardless of signal level. This
is true for pipeline (OLP) version 5.0 and later. You would be wise to
reprocess ERD -> SPD for earlier versions to take more recent detector
calibrations into account, as well as updates to wavelength calibration.
Dark current subtraction is handled in the SPD -> AAR stage.


9. Is the gain set by source brightness?
Different gains can be set for different detectors.
You, the observer, had to set the gain for each observation (by entering
the expected flux level). If the gain level is wrong, it is because the
flux estimate you supplied was incorrect.


10. How certain are the cross talk corrections?
The cross talk corrections are valid for ~90% of all detector data.
The remaining ~10% includes cases such as band 4 output containing
strong particle hits.


11. Are errors propagated through the pipeline?
Errors are first computed from the fitting to the linearized signal
ramps, then propagated through the stages of dark current subtraction
and response calibration during the SPD -> AAR chain. The latter
computations are still problematic (memory, glitch tails, drift, for
example, not well accounted for), so the "ramp errors" are still the
most useful values for assigning statistical (not photometric) quality
to each datum. This can be complemented by examining the number of
valid samples used in each fit. These values are stored in the SPD tag
"offset" or in the AAR tag "tint".

Errors are determined from the slopes only; the standard deviations are
just goodness-of-fit numbers. They can be used for weighting purposes,
but must be used with care.
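As an illustration of such weighting (a sketch only: 'flux' and 'stdev' are
placeholders for the corresponding columns of your data, not actual IA
variables), an inverse-variance weighted mean in IDL could look like:

        w     = 1.0 / stdev^2                ; weights from the ramp errors
        wmean = total(w * flux) / total(w)   ; weighted mean flux
        werr  = sqrt(1.0 / total(w))         ; formal error on the weighted mean
        print, wmean, werr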


II. Wednesday 19 November 1997 Pat Morris' Talk - Calibration Issues

1. What is LVDT?
LVDT = Linear Variable Differential Transformer.
It tells us the angle of incidence on the grating.


2. What is the resolution limit for the LVDTs?
Somewhere between ~1/10 and ~1/8 of a resolution element, depending on
grating section.


3. At what point should we reprocess (i.e. start with ERD data) data?

If your data are processed with version 5.0 or later, then the SPD
is in good shape (scientifically valid).  There were, however,
recent updates to the grating wavelength calibration, and they could
improve your wavelengths only if (1) you have S02 or S06 data, and
(2) your data were obtained since roughly rev650, when the last
update was inserted in IA but not yet into the official pipeline.
In this case, remember to use CAL_SELECT and set CAL16_E to the "test"
level.  This improvement is only marginal (according to Do).

If your data are processed in versions earlier than 5.0, then
you should reprocess the SPD from the ERD in IA to take advantage
of better detector calibrations (including removal of obsolete or
buggy modules) and grating wavelength calibration.
Remember to copy your official pipeline SPD header into your
IA SPD header to get the proper ISO velocity-towards-target correction.
I.e.,
IA3 TEST> myspd=dspd(erd) ; or whatever steps you took to get a new spd
IA3 TEST> theirspd=read_fspd('swspXXXXXXXX.fits')
IA3 TEST> myspd.header=theirspd.header

Now, all SPDs and AARs processed or reprocessed with the official pipeline 
6.0 and later will have new reconstructed spacecraft pointing
information in the header.  The spacecraft star tracking system was
recalibrated in rev370.  THIS DOES NOT MEAN THAT YOU SHOULD REPROCESS
YOUR DATA IF OBTAINED BEFORE OR AFTER REV370.  You cannot obtain
reconstructed pointing in IA.  This can only be done in the official
pipeline (and at SIDT for testing purposes), and thus will be present
only in the OLP_6x products on your CD-ROM. Note that this is reference
information; nothing is actually done with it during ERD->SPD->AAR.
SIDT is working to validate the claimed current and reconstructed pointings,
and is developing an AAR-level flux correction tool that incorporates the
latest beam profiles.


4. Does the latest pipeline use the newest CAL files? Yes, except for the
recent wavelength calibration update.
The largest errors in wavelength calibration will be due to mispointing,
not the calibration itself. Also, we expect to put in new CAL13 and CAL25
files around Christmas, affecting some observations done since rev450.


5. Is there a task that tells you the resolution at a given wavelength?
Yes, it is called 'resolution', and it comes from an optical model of SWS.
Use in IDL:  ia3ws> print,resolution(wavelength,'band')
The resolution package will give you either the extended or the point source
resolution for the SWS01 AOT, and you can specify a speed. Look under help
in IDL for detailed usage of resolution.
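For example (the wavelength and band name here are purely illustrative; see
the online help for the keywords that select extended versus point source
and the SWS01 speed):

        ia3ws> print, resolution(15.0, '3A')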


6. Is there any improvement in SWS02 resolution?
SWS02 AOT hits every grating position over the specified wavelength
range. It is as good as it can get.


7. If you have a bright source and are able to get the best resolution
possible from pointing and CAL files, are super resolution techniques 
possible? 
Maybe in the post-ops phase.


8. What is a photometric check for the SWS instrument?
A photometric check measures the signal from an internal calibration lamp
with the shutter closed, for every detector except band 1.
It is used in the calculation of drift.


9. What does 'calibration check' mean?
See Schaeidt et al. 1995, A&A.
 

10. Are there GUIs to access the CAL files? No.


11. Are "bad" detectors (i.e. noisy detectors) flagged as bad in the pipeline?
No. Erratic detectors should be removed "by hand", as a detector that is
erratic may start behaving itself again without warning.


12. Is there anything we can do to correct for errors in the relative
spectral response function? No, there is nothing the user can do to
alter the RSRF correction, and rightly so: the user should not tinker
with the RSRF. The user can be made aware of possible RSRF artifacts
via the latest documentation (IDUM and/or web releases).


III. 19 November 1997 Do Kester's Demo Talk - De-tailing and deriving pulse shape

1. Does the pulse shape change with each reset? Sometimes in some detectors.
Most of the time it is reasonably stable.


2. Is there more than one time constant for each detector?
AC time constants are really constant. The time constant for pulse
shape might change (see question 1 in this section).


3. Can we alter the number of points a glitch throws out?
i.e. the software throws out 7 points. Can we throw out 10?
Yes, but you do not want to end up throwing out all of your data.
You have to use CAL03 and rerun DSPD. I would *not* advise it.


4. How are tails fit?
Using an empirical model.


IV. 19 November 1997 Adwin Boogert's Talk - SPD -> AAR

1. Does the most recent pipeline (OLP6.x) have the fscal procedure applied
to it?  No, a serious bug was discovered in 'fscal', and it is not applied
any longer in OLP6.0 and higher.  As a result, you should reprocess
all of your AOT 6 and 7 data from OLP5.x.


2. The pipeline is sometimes flagging good science data as reference data for 
S06 and is removing it from the AAR products. 
 The flagging went wrong in both pipeline 5 and 6. But in pipeline 6
only valid science data is transferred from SPD to AAR with the
extract_aar command, and thus erroneously flagged valid data is lost.
Use the /all keyword in extract_aar to avoid this. The (real)
reference scans will then still be in the AAR of course. Use the
`lines=..' option in `cleanstruct' to remove these.
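A hedged sketch of the extraction step (variable names are placeholders):

        IA3 TEST> aar = extract_aar(spd, /all)   ; keep the erroneously flagged data too

The real reference scans can then be removed with cleanstruct, as described
in question 8 below.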


3. What effect will subtracting the wrong dark have on the data?
 You will have a slope in the resulting data, which you can recognize
because the up and down scans will not match. If the dark is
seriously wrong, then residual RSRF features may be seen. For weak
signals, a wrong dark may result in negative fluxes. Use 'dark_inter'
to define a better dark.


4. How often are darks taken? 

Dark current internal checks (non-AOT) are taken on a weekly basis, but
the dark current changes with each observation.
There are dark measurements taken for each detector at
the beginning of each up scan, and (mostly) at the end of the down
scans. In between these dark observations, the dark in band 2 and 4
may change non-linearly due to memory effects, which can be a problem
for low flux levels (see answer 3).


5. Is the dark current subtraction arbitrary?
 Dark current subtraction is OK for band 1 and 3 data. For band 2 and
4 the problems described in answer 3 may occur. Negative fluxes will
occur only for very weak sources, with signals comparable to the dark
current. Overall slope differences between up and down scans may occur
also for brighter sources, up to a few 100 Jansky's (see for example
SWS-IDUM Fig. 5.5).

6. Do reference scans affect only bright sources?
 No, reference scans influence the dark current only in band 2 and 4,
and if the flux at the reference wavelength is different from the flux
level before the scan break, i.e. for data with a steep/changing
continuum (see Fig. 5.8 in the SWS-IDUM).


7. How are fringes fit?
 One can correct for wavelength calibration and resolution differences
between RSRF and SPD, using `resp_inter' instead of `respcal' in IA.
Otherwise one can use the `aarfringe' tool on AAR level, either in IA
or ISAP.  For details see the notes on the fringe demo and the online
help.


8. How do we remove reference scans from AAR products (if we have elected
to keep them in)?
 If you use the IA command `showstruct' you see there is a column in
the AAR structure called `LINE'. Now, the reference scans have another
line number than the data. You can remove specific lines using
`cleanstruct' in IA.
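A minimal sketch, assuming cleanstruct is called as a function and that the
reference scans turn out to have LINE value 99 (purely illustrative; inspect
your own AAR with showstruct first):

        IA3 TEST> showstruct, aar                      ; find the LINE value(s) of the reference scans
        IA3 TEST> aar = cleanstruct(aar, lines=[99])   ; remove those lines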


9. What are detector jumps?
 These are jumps in the dark current. One can discern 'single'
detector jumps, which show a decrease or increase in the dark current
in one detector over at least 10 seconds; it is not known what causes
them. Somewhat less frequent are the 'multiple' detector
jumps. They occur in several neighboring detectors in one scan
direction and are a sudden increase in dark current, which falls off
exponentially with time in a few seconds. They may be caused by cosmic
ray hits on the electronics. Both types of jumps should be corrected
for, using `dark_inter', or the affected data should be removed
with `cleanstruct' or `mask_inter'.


10. What is the recommended way to look at each scan?
 Use the plotaar or plotspd commands in IA, or look at your AAR in
ISAP. If needed, one can select the up and down scans, and put them in
different structures (either AAR or SPD). The IA commands to do this
are (for non-rebinned data):
        spd_down=select(spd,test_status(spd,/swdown))
        spd_up=select(spd,1 XOR test_status(spd,/swdown))
For rebinned data one can use the SDIR or LINE columns to select up and
down scans.
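A hedged sketch for the rebinned case, using the SDIR column (this assumes
select also accepts a boolean mask over the data array, as in the example
above, and that SDIR = 1 marks the up scans; check the IDUM for the actual
coding):

        aar_up   = select(aar, aar.data.sdir eq 1)
        aar_down = select(aar, aar.data.sdir ne 1)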

Also, on the SPD level the most convenient and powerful tool is the IA3
routine mask_inter.

11. What can you flatfield to?
 Flatfielding ('sws_flatfield') tries to correct for small calibration
differences between all the scans per AOT band. It is not easy to find
a correct reference spectrum to shift/multiply the different scans to.
Flatfield uses as default reference spectrum the mean of all the down
scans. Personally, I always use the rebinned up scans as reference to
the up scans, and the rebinned down scans as reference to the down
scans. This gives a good impression of the calibration uncertainties
(mainly dark current). One could also use spectra of other telescopes
(e.g. IRAS-LRS) as the reference spectrum for flatfielding.
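A hedged sketch of the default call (assuming sws_flatfield is invoked as a
function on the AAR; check the online help for how to supply your own
reference spectrum, e.g. the rebinned up or down scans mentioned above):

        IA3 TEST> aar_ff = sws_flatfield(aar)   ; default reference: mean of the down scans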
 
12. What command do you use to rebin the data?
 In IA it is 'aar_out=sws_rebin(aar_in, resolution=..,oversample=..)'.
It does the rebinning separately for up and down scans. The 'lines=0'
option can be used to combine both scan directions.
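An illustrative call (the resolution and oversample values are placeholders):

        IA3 TEST> aar_reb = sws_rebin(aar_in, resolution=500, oversample=1, lines=0)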

There is also sap_rebin in ISAP (GUI), and aarfilt (IA3) does some binning as well.


13. What can you rebin to? Anything?
You can define your own reference wavelength scale or use another dataset
as the reference.


V. 20 November 1997  Russ Shipman Demo
(Russ asked me to sit in on his demo and log the questions. Here they
are, Russ.)

1. What is the goal of this demo?

2. What kind of data are you looking at?

3. What is the approximate flux level of the source?

4. Where there are no data (e.g. band 1, aperture 4), are they flagged?

5. Can you work with SPD data in ISAP?

6. How do they decide where to cut out the data
(for removing a spike)?

7. Are there problems going back to the pipeline once you have removed some data?

8. Confusion over the definition of the word 'scan'.

9. The cleanstruct routine will remove nodata-flagged data.