Flux calibration involves (1) fitting a slope to the 24 Hz voltage detector samples and (2) converting this slope to a flux. This is, of course, complicated by non-linearities in the system, by glitches, and by how well the conversion from voltage/sec to flux is known. Parts 1 and 2 are handled by Derive-SPD and AA respectively.

At constant illumination the output of the SWS detectors can be approximated as a voltage changing linearly with time:

*V*(*t*) = *O* + *S* *t*

The rate of increase of this voltage (i.e. the slope *S*) depends on
the radiation falling onto the detector, the physical quantity of interest.
In Derive-SPD a slope and offset (*O*) are derived from the
24 Hz data for each reset interval. See section 8.4 for a
discussion of this fit and the errors on it.
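The per-ramp fit can be sketched as follows (a minimal illustration; the actual Derive-SPD processing also handles ramp linearization and glitch detection, and the function name here is ours):

```python
import numpy as np

def fit_ramp(samples, rate=24.0):
    """Fit offset O (V) and slope S (V/sec) to the samples of one
    reset interval, taken at the given sampling rate (24 Hz)."""
    t = np.arange(len(samples)) / rate           # sample times in seconds
    A = np.column_stack([np.ones_like(t), t])    # design matrix for O + S*t
    (O, S), residuals, *_ = np.linalg.lstsq(A, samples, rcond=None)
    # Estimate the slope error from the fit residuals.
    n = len(samples)
    sigma2 = residuals[0] / (n - 2) if residuals.size else 0.0
    S_err = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))
    return O, S, S_err
```

For a 1 second integration, `samples` would hold the 17 usable values that remain after the 7 reset-affected samples are dropped.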

In normal data frames all samples in a reset interval are used. For a 1 second
integration this time is 17/24 seconds: the first 7 samples are thrown away
as being affected by the reset, leaving 17 samples that can be used. For a two
second integration the time is (17+23)/24 seconds, as 1 sample is thrown away
in the last second due to the reset pulse. For an integration lasting *K*
seconds the effective integration time is therefore (17 + 24(*K*−2) + 23)/24 seconds
(17/24 s for *K* = 1): 17 samples from the first second, 24 from each
intermediate second and 23 from the last.
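The sample counting above can be captured in a small helper (the general formula is inferred from the 1 s and 2 s cases quoted, and the function name is ours):

```python
def effective_integration_time(k):
    """Usable integration time in seconds for a k-second integration
    sampled at 24 Hz: 17 samples survive the first second (7 lost to
    the reset), 24 each intermediate second, and 23 the last second
    (1 lost to the reset pulse)."""
    if k == 1:
        return 17 / 24
    return (17 + 24 * (k - 2) + 23) / 24
```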

The accuracy is directly estimated from the fit residuals which allow the
computation of the standard deviation of the derived photo-current. Obviously,
the accuracy depends not only on the intensity of the source (*I*), but also on
how well the ramps have been previously linearized and therefore on the
measurement error of the RC time constants. A statistical weight is
computed which is inversely proportional to the error on *I* and proportional
to the number of measurements between two detector resets. This weight will be
used by Auto-Analysis to compute the average photo-current for each ramp. It is
expected that this error will dominate all previously described ones.
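The stated weighting rule can be sketched as follows (names are ours, and the exact normalisation used by the pipeline is not given in the text):

```python
import numpy as np

def average_photo_current(currents, errors, n_samples):
    """Weighted average photo-current over a set of ramps, as used
    downstream by Auto-Analysis: each ramp is weighted proportionally
    to its number of samples between resets and inversely
    proportionally to the error on its current I."""
    currents = np.asarray(currents, dtype=float)
    w = np.asarray(n_samples, dtype=float) / np.asarray(errors, dtype=float)
    return np.sum(w * currents) / np.sum(w)
```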

If within a reset period a glitch (or any
other anomaly) is detected, knowledge of the offset level *O* is lost, and the
reset integration is stopped. The integration is subsequently continued after
the glitch until a reset pulse (or another glitch) is detected. If glitches have
occurred within a reset interval the slopes *S* of the different parts of the
reset interval are averaged together (weighted by the standard deviation
of those slopes).
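The recombination step might look as follows; inverse-variance weighting is assumed here, since the text only says the average is weighted by the standard deviations of the segment slopes:

```python
import numpy as np

def combine_segment_slopes(slopes, slope_errors):
    """Average the slopes of the glitch-free segments of one reset
    interval.  Each segment is weighted by the inverse variance of
    its fitted slope (an assumption about the exact weighting)."""
    w = 1.0 / np.asarray(slope_errors, dtype=float) ** 2
    return np.sum(w * np.asarray(slopes, dtype=float)) / np.sum(w)
```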

Currently the SWS flux calibration as performed in AA rests on the
assumption that the measured current slope *S* (in
V/sec) is a linear combination of the source flux *F* (in Jy),
the instrumental gain *R*(*t*) (in V/sec/Jy) and the dark current *D*(*t*)
(in V/sec):

*S*(*t*) = *R*(*t*) · *F* + *D*(*t*)    (7.2)

Note that in this equation it is implicitly assumed that *all* memory
effects (see section 5.5) can be neglected
or have been removed. A full treatment of these effects would result in some
sort of convolution integral for the right hand side of eqn. 7.2.

Following this equation, the actual source flux is reconstructed by first subtracting the dark current from the measured slopes and then dividing the result by the instrumental gain.
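In code form the reconstruction is simply (a sketch of the equation-level arithmetic, not pipeline code):

```python
def source_flux(slope, dark, gain):
    """Invert S = R*F + D for the source flux F (Jy), given the
    measured slope S (V/sec), the dark current D (V/sec) and the
    instrumental gain R (V/sec/Jy)."""
    return (slope - dark) / gain
```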

The instrumental gain is split into several (hopefully) orthogonal components:

*R*(*t*) = *G*(*t*) · *g* · *R*₀

Here *G*(*t*) contains all the gain variations occurring on the timescale
of the observation. *G*(*t*) is derived from the observation itself, from
reference scan and/or up-down scan
data (see sections 4.6.2, 8.3.5
and 8.3.6). In principle *G*(*t*) should be unity; in practice it
will vary around unity during an observation, and in OLP V6 it is set to 1.

The factor *g* accounts for long-term variations in the responsivity of the instrument, i.e. between different SWS observations. It is determined by comparing the instrument response with the internal calibrator switched on to the value expected for that response from calibration observations.

The conversion from V/sec to Jy is contained in *R*₀. It is taken from a calibration table (one for each AOT band), which in turn is derived from special calibration observations (section 8.3.8).
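Assuming the components combine multiplicatively (the symbol names here are illustrative), the full instrumental gain used in the flux reconstruction is:

```python
def instrumental_gain(G_t, g, R0):
    """Compose the instrumental gain from its components:
    G_t -- gain variation within the observation (set to 1 in OLP V6)
    g   -- long-term responsivity factor from the internal calibrator
    R0  -- V/sec-to-Jy conversion from the per-AOT-band calibration table
    A multiplicative combination is assumed."""
    return G_t * g * R0
```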

K. Leech, with contributions from the SWS Instrument Dedicated Team (SIDT) and the SWS Instrument Support Team (SIST)