UCL Department of Space and Climate Physics

SWIFT Data Processing

The following sub-sections summarise the data reduction performed on the Swift-UVOT images in order to populate the catalogue with sources and their physical properties. For each UVOT target observation there are many more serendipitous sources in the field of view, and the goal of this work is to detect as many of these sources as possible and compile them into a catalogue.

Raw Data

The UVOT usually uses the full 17 × 17 arcmin² field of view (FOV), with the pixels binned 2 × 2 to give a raw image of 1024 × 1024 pixels. The RAW image is built up on board and then telemetered to the ground. There are other modes: using a smaller window, or recording both the time and position of each photon individually (called 'event' mode), but only image or event data taken in the standard full-field mode with exposures longer than 10 s were used in this version of the catalogue. Event mode data taken with the full FOV were included by building the data into a raw image in the first stage of processing using the pointing history.

To construct the catalogue, the raw data provided by the HEASARC were processed through a purpose-built pipeline, based on the Swift UVOT FTOOLS available in HEASOFT (version 6.11). The pipeline is constructed as a series of processing engines, which advance the data through each stage of catalogue construction. For each observation dataset (identified by the ObsID number) and for each filter, all images are processed and then stacked to achieve maximum sensitivity prior to source detection. Thus within each ObsID one image per filter is searched for sources. In the final stage of catalogue production the individual source lists are merged together.

Raw images, and images which have been processed into sky coordinates, can be obtained from the Swift special archive.

Swift special archive for raw or sky processed data

Processing Engines

The catalogue processing uses a purpose-built pipeline, based on the FTOOLS designed for Swift UVOT by HEASARC. There are four levels, or 'engines', through which the data are processed in turn, starting with the raw data.

The process starts with a dataset defined by ObsID, in which there is a raw or event data file for each filter, plus supplementary files with pointing information etc. 

Engine 1

Creates raw images from data taken in event mode.

Locates bad pixels.

Removes the modulo-8 pattern introduced by the on-board centroiding algorithm.

Identifies any image artefacts (readout streaks, scattered light features etc.) and makes a quality map which accompanies the main image through the processing steps. The map is used in the final stages to flag sources with quality issues.

Engine 2

Rotates images into celestial coordinates.

Uses reference stars from the USNO-B1 catalogue to fine tune the sky coordinates.

Creates exposure maps corresponding to the sky images.

Engine 3

Stacks, for each ObsID, the different exposures for each filter.

Stacks corresponding quality maps and exposure maps.

Generates a stacked large scale sensitivity map.

Engine 4

Calculates background maps for each stacked image.

Detects sources whose count rates exceed a threshold above the background.

Matches the sources with the quality map made in stage 1 to add quality flags to the source list.

When all the data have passed through the four engines, the source lists are concatenated to form the source catalogue, and cross-correlated to identify sources which have been observed in more than one observation.

Bad Pixels

The positions of the bad pixels are recorded in the Calibration Database (CALDB). No attempt is made to correct for them, but instead sources which lie on bad pixels are flagged in the catalogue. The task UVOTBADPIX produces the bad-pixel maps for the current observation.

Most of the bad pixels lie at the edges and corners of the detector: 

Bad pixels on a flat field image

Red marks pixels which consistently give too high a count rate; black marks pixels giving too low a rate.

Image quality is ultimately propagated through the source detection process and attached to the sources within the six catalogue columns: "V QUALITY FLAG", "B QUALITY FLAG" etc. Sources with this flag will have additional unquantified uncertainties associated with their brightness and location. It is thus left to the investigator's discretion whether to include these sources or not. 

The association of sources with bad pixels is complicated by the on-board 'shift-and-add' process which adjusts the position of the detector window slightly to compensate for any change in pointing during the exposure, monitored by the position of tracking stars. Bad pixels therefore tend to shuffle locally across an exposure. This is taken into account on the ground by using the tracking history. One bad detector pixel can therefore affect several image pixels. 

Flatfielding or Large Scale Sensitivity

Due to the photon-counting nature of the detector, there is no CCD bias charge to subtract from the images. 

Changes in sensitivity across the detector are calibrated on large and small scales (and recorded in the CALDB). The large scale sensitivity (LSS) map is slightly different for each filter, and can give corrections of up to 8% at the edges. The stacking of raw images (which may have been taken at slightly different pointing positions) means that for an individual source the LSS correction has to be built up of several components: one for each raw image. This is performed using the task UVOTSKYLSS. 
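
As an illustration of how such a composite correction could be applied, the sketch below combines hypothetical per-exposure LSS factors at a source position into an exposure-time-weighted value and divides the measured count rate by it. This is a minimal sketch, not the UVOTSKYLSS implementation, and all numbers are invented.

    # Minimal sketch (invented numbers): combine per-exposure LSS factors at a
    # source position into an exposure-time-weighted value, then correct the
    # measured count rate. Illustrative only, not the UVOTSKYLSS code.
    lss_factors = [0.97, 0.95, 0.99]      # hypothetical LSS factors at the source position
    exposures = [850.0, 640.0, 910.0]     # corresponding exposure times (s)

    weighted_lss = sum(f * t for f, t in zip(lss_factors, exposures)) / sum(exposures)

    measured_rate = 1.20                  # counts/s measured on the stacked image
    corrected_rate = measured_rate / weighted_lss
    print(f"effective LSS = {weighted_lss:.3f}, corrected rate = {corrected_rate:.3f} c/s")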

Maps of LSS for all filters

The LSS map for each filter (including the white filter, which is not used in this catalogue).

The small scale sensitivity map is set to unity at present, so no correction for this was included in the catalogue.

The detector dark count (i.e. random photons detected when the shutter is closed) is very low (about 7 × 10⁻⁵ counts per second per pixel) and does not need to be corrected for.
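
As a rough check of how small this contribution is, the dark rate quoted above can be accumulated over a long exposure (illustrative numbers only):

    # Dark counts accumulated in one pixel during a long (1000 s) exposure,
    # using the rate quoted above; the result is a small fraction of a count.
    dark_rate = 7e-5     # counts per second per pixel
    exposure = 1000.0    # seconds
    print(dark_rate * exposure)   # 0.07 counts per pixel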

Fixed-pattern correction

All images are corrected for the mod-8 fixed pattern in the first stage of processing, using the FTOOL UVOTMODMAP.

To increase the resolution of the detector, each real CCD pixel is subdivided into 8 × 8 sub-pixels by use of a centroiding algorithm. The incoming photon ultimately results in a 'splash' of ~10⁷ photons on the CCD detector, spreading over approximately 3 × 3 pixels. By measuring the amount of charge in the pixels adjacent to the central one, it is possible to calculate whereabouts within that pixel the peak must have been located. A look-up table on board allows this to take place extremely quickly, but some residual patterning remains in the raw images.
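
The idea can be illustrated with a simple centre-of-mass centroid over a hypothetical 3 × 3 charge splash, mapped onto the 8 × 8 sub-pixel grid. The flight software uses an on-board look-up table rather than this calculation, so the sketch below is only conceptual and all charge values are invented.

    import numpy as np

    # Minimal sketch of sub-pixel centroiding over a 3x3 charge "splash"
    # (conceptual only; the flight software uses a look-up table).
    splash = np.array([[ 5., 20.,  8.],
                       [18., 60., 25.],
                       [ 6., 22.,  9.]])   # hypothetical charge in the 3x3 pixels

    total = splash.sum()
    dx = (splash * np.array([-1, 0, 1])).sum() / total      # offset in CCD pixels, -0.5..0.5
    dy = (splash * np.array([[-1], [0], [1]])).sum() / total

    # Map the fractional offset onto the 8x8 sub-pixel grid of the central pixel.
    subx = int(np.clip(np.floor((dx + 0.5) * 8), 0, 7))
    suby = int(np.clip(np.floor((dy + 0.5) * 8), 0, 7))
    print(dx, dy, subx, suby)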

The residual pattern can be removed by measuring the level of the pattern in the sky background and re-organising the photon positions slightly. Because there is no multiplication or division in the correction algorithm, the number of photons is conserved. The length scale of the pattern is larger than the point source FWHM (2-3 sub pixels), but smaller than the photometry apertures (10 sub pixels radius) used to generate the catalogue photometry. The correction will thus not affect the photometry, but may make the position of the source more accurate.

On the left is a raw image of the sky background; the right has had the mod8 pattern corrected. 

Bright sources suffering from strong coincidence loss (when two photons are recorded as a single event) have an additional fixed pattern around them that is not corrected by the mod-8 correction, because it has different characteristics from the background. In particular, a dark ring develops around a bright source. The measurement of the shape of the source might be affected; in addition, spurious sources are sometimes detected in the wings of the bright source, and the photometry of nearby sources can be affected.

The centre of this galaxy has a square pattern around it due to coincidence loss.

Source Detection

For each ObsID and for each filter, the images are stacked to achieve the highest sensitivity to faint sources, using UVOTIMSUM. One image per filter is searched for sources. 

The source detection and measurement are carried out within a modified version of the UVOT FTOOL called UVOTDETECT. This is based on the well-known source detection software SExtractor (Bertin & Arnouts 1996). UVOTDETECT was customised for this catalogue processing so that source detection is optimised even for the most problematic UVOT images, without manual intervention to change parameters.

Within UVOTDETECT a background map is constructed using either sigma-clipping, or, for low backgrounds, a specific algorithm based on Poisson statistics. The background is extrapolated outside the limits of the sky-rotated image. 

Groups of 8 or more connected pixels which are brighter than the background by more than 1.5 sigma are identified as sources. An 'Absolute' rather than 'Relative' detection threshold was used to maintain the same threshold across the image, the threshold being carefully optimised to minimise spurious sources while detecting as many real sources as possible. 
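
A minimal sketch of this detection criterion is shown below, using a simple connected-component search on a simulated image. The catalogue itself relies on UVOTDETECT/SExtractor; all values here are invented.

    import numpy as np
    from scipy import ndimage

    # Minimal sketch of the detection criterion: groups of >= 8 connected pixels
    # more than 1.5 sigma above the background (illustrative only).
    rng = np.random.default_rng(0)
    image = rng.poisson(5.0, size=(200, 200)).astype(float)   # fake background
    image[100:103, 100:103] += 30.0                           # fake source

    background = np.full_like(image, 5.0)     # the pipeline uses a background map
    sigma = np.sqrt(background)               # Poisson noise estimate

    mask = image > background + 1.5 * sigma
    labels, nlab = ndimage.label(mask)        # 8-connectivity could also be used
    sizes = np.bincount(labels.ravel())[1:]   # pixels per connected group
    detections = [i + 1 for i, s in enumerate(sizes) if s >= 8]
    print(f"{len(detections)} candidate source(s)")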

Since we are using stacked images, we have to supply the corresponding stacked exposure map. 

The background level is used to calculate a 3 sigma upper limit on any sources NOT detected. Because the background is measured for the whole stacked image, this upper limit is recorded in the SUMMARY table, as a value valid anywhere within the image.
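
For illustration, a 3-sigma upper limit can be estimated from the background level by assuming Poisson noise within the standard photometry aperture. The sketch below uses invented numbers and is not necessarily the exact recipe used to fill the SUMMARY table.

    import math

    # Illustrative 3-sigma count-rate upper limit from a background level,
    # assuming Poisson noise in a circular aperture (invented numbers).
    bkg_rate_per_pix = 0.01               # hypothetical background, counts/s/pixel
    exposure = 1500.0                     # seconds
    aperture_pixels = math.pi * 10.0**2   # 10-pixel (5 arcsec) radius aperture

    bkg_counts = bkg_rate_per_pix * exposure * aperture_pixels
    upper_limit_rate = 3.0 * math.sqrt(bkg_counts) / exposure
    print(f"3-sigma upper limit ~ {upper_limit_rate:.3f} counts/s")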

UVOTDETECT performs the coincidence-loss correction, the correction for the change in sensitivity with time, and finally the zero-point calibration, using information from the calibration database. The positions, dimensions and orientation of the sources are also measured, and they are characterised as either point or extended sources by comparison with the calibrated PSF. Extended sources have a flag set in the columns headed V_EXTENDED etc.

The count rates and magnitudes are calculated using the isophotal method, and sources are only retained if the signal-to-noise ratio is greater than 3. Later in the process only sources with a signal-to-noise ratio greater than 5 in at least one filter are retained in the combined source list. The significances are also recorded in the catalogue.

The information for setting quality flags for each source is picked up from the stacked quality map by the task UVOTFLAGQUAL, and the flag added in an extra column to the source list. 

Calibration database (CALDB) at Heasarc

Photometry, magnitudes and fluxes

UVOTDETECT performs the coincidence-loss correction, the correction for the reduction in sensitivity with time, and finally the zero-point calibration, as described in the papers listed below. The zero points are given in the description of the filters.

Comparing the catalogue processing with uvotsource

We can compare the magnitudes listed in the catalogue (obtained using UVOTDETECT) with those obtained directly using the FTOOL UVOTSOURCE that is normally used in data analysis and has been extensively tested for accuracy. In the figure below we have compared the magnitudes obtained in these two ways for the sources listed for OBSID 00020073001, for all 6 filters. The coloured points have a non-zero quality flag. 

In tests comparing the two methods we find that UVOTSOURCE tends to obtain a slightly fainter magnitude than the catalogue (by around a tenth of a magnitude). The discrepancy is due to a different method of background determination: UVOTSOURCE uses a local background region around each source, whereas UVOTDETECT uses a global background map. For the brighter sources, most of those not lying on the x = y line have a non-zero quality flag, i.e. there is some issue associated with the source that may mean a standard measurement is not appropriate. However, for the brightest sources (those nearing saturation) the catalogue can give a measurement that is too bright by a few tenths of a magnitude. This is because UVOTDETECT does not use the same aperture photometry method as UVOTSOURCE, but an isophotal method.

Figure 1: Magnitudes measured using UVOTSOURCE compared with those in the catalogue.

Conversion to Fluxes

The magnitudes are converted into fluxes using ratios given in the CALDB, calculated for a range of GRB models, and recorded in the catalogue for the convenience of the user, although it must be noted that these conversions are not suitable for all spectral types. A different set of flux conversion ratios is available in the CALDB document, calculated using a range of stellar types (valid for UVOT B−V > −0.36). For a quick fix: to convert from the fluxes in the catalogue to values derived for stellar spectral types, multiply the FLUX values in the catalogue by the factors given here:

Filter   Factor     Filter   Factor
v        0.998      uvw1     0.963
b        0.893      uvm2     0.884
u        0.940      uvw2     0.965

Table 1: Conversion factors to adjust the flux from GRB models to stellar models.
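
As a quick example of using Table 1, the catalogue FLUX values can simply be multiplied by the appropriate filter factor. The flux values below are invented and are in the catalogue flux units.

    # Apply the Table 1 factors to convert the catalogue (GRB-model) fluxes to
    # values appropriate for stellar spectra (the example fluxes are invented).
    grb_to_star = {"v": 0.998, "b": 0.893, "u": 0.940,
                   "uvw1": 0.963, "uvm2": 0.884, "uvw2": 0.965}

    catalogue_flux = {"u": 2.4e-16, "uvw1": 1.1e-16}   # hypothetical values, catalogue units
    star_flux = {f: flux * grb_to_star[f] for f, flux in catalogue_flux.items()}
    print(star_flux)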

CALDB document on counts to flux conversion
For even more flux conversions, please see Appendix A of Brown et al. 2010
Photometric calibration of the Swift UVOT by Poole et al. 2008
An updated UV calibration for the Swift/UVOT by Breeveld et al. 2011

Astrometry Correction

The raw images taken directly from UVOT have to be translated into sky coordinates (Right Ascension and Declination) using rotation and translation according to the direction of pointing of the spacecraft. Auxiliary data files supplied with each ObsID contain the spacecraft attitude information, as well as housekeeping files. The FTOOL SWIFTXFORM is used for this.

The same transformation has to be performed on the exposure map and the quality map because these start off in raw coordinates. 

Distortion

The distortion has been calibrated and a distortion map is read from the CALDB. 

Shift and Add

To avoid getting blurred images due to spacecraft drift during an exposure, five or more stars are selected at the start of the exposure by the on-board software, and each frame is shifted to match the position of those stars before the frame is added to the image as it is built up on board.

Aspect correction

Once the RAW image has been transformed into sky coordinates the FTOOL UVOTASPCORR is used to refine the aspect. This detects the brighter stars in the image and matches them against USNO B1.0 catalogue stars. If the aspect matching is successful the CRVAL1, CRVAL2 and ASPCORR keywords in the headers of the image files are updated, and these keywords are also copied into the sky-rotated quality maps.

If the aspect correction fails, the task can be run on the sky-rotated source map image, which is a simplified source image used for the quality map generation.

After making the aspect corrections the script calls the task UVOTEXPMAP to generate the exposure maps corresponding to the images. 

Quality Flagging

Any potential issues in the images that may affect the quality of the source photometry are flagged in the catalogue, to enable a user to make an informed decision about how reliable a result is.

The main product of the task UVOTFLAGQUAL is the quality map, one map being created to correspond to each raw image. The flagging of image artefacts is based on the detection of bright sources and uses a set of calibrated thresholds to set associated artefact flags in the quality map. When the images are rotated into sky coordinates and stacked for detection, the quality maps are also rotated and stacked so that the features can be propagated through to the final entry of the source in the catalogue. 

The types of quality issues are listed in the table on the quality flag statistics page in the Catalogue Properties section. Some issues (such as the source lying close to a bright object) have obvious meanings. A few might need further explanation: 

Readout streak: since there is no shutter, the detector is continually exposed, even while the CCD is read out by means of vertical frame transfer. Thus a bright source leaves a streak in the vertical direction during the transfer time. Occasionally UVOTDETECT detects spurious sources on the readout streaks, and in addition any faint source nearby may have its photometry affected.

Smoke rings and halo rings are out-of-focus images of bright sources caused by internal reflections within the detector window. Smoke rings are small (about 30 arcsec diameter) and are displaced radially from the bright source. They can produce spurious sources, or affect the photometry of real sources. Halo rings are larger and usually faint, but can affect the measurement of the background. They are only seen inside the central area of the detector; outside the central area they are truncated or vanish completely.

Mod-8 noise pattern is described in the section Fixed Pattern Correction. Sources with count rates approaching 1 count per image frame are subject to coincidence loss, which distorts the PSF and gives rise to a modulo-8 pattern in the region surrounding the source. The morphologies of such sources cannot be recovered and hence they are flagged during construction of the catalogue. The rate limit above which this flag is set is 0.6 counts per frame (~54 counts/s for a full-frame image; a quick numerical check of this conversion is given below).

Multiple exposure values occur near the edge of the image, where stacked images do not exactly line up, so that more than one exposure time can fall within the source region.
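
A quick check of the quoted rate limit, assuming a full-frame frame time of approximately 11 ms:

    # 0.6 counts per frame divided by the approximate full-frame frame time
    # (~0.011 s) gives roughly 54 counts/s.
    counts_per_frame = 0.6
    frame_time = 0.011          # seconds, approximate full-frame frame time
    print(counts_per_frame / frame_time)   # ~54.5 counts/s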

 

Figure 1: A rotated and stacked image with the corresponding rotated and stacked quality map. The blue indicates a readout streak, green is a smoke ring etc.

The task also produces a source map containing only the pixels corresponding to the bright source regions. This source map can be useful if the aspect correction on the main image fails.

Quality Flag Statistics

Any quality issue at the position of a source is assigned to the source as a flag and listed in the final columns of the SOURCES table. Each flag is a binary number whose bits correspond to the presence of an image artefact (see Table 1). A table of statistics is given below (Table 2).

Bit number   Reason                                                   Integer value
0            Cosmetic defects (BAD PIXELS) within the source region   1
1            Source on a READOUT STREAK                               2
2            Source on a "SMOKE RING"                                 4
3            Source on a DIFFRACTION SPIKE                            8
4            Source affected by MOD-8 noise pattern                   16
5            Source within a "HALO RING"                              32
6            Source near to a BRIGHT source                           64
7            MULTIPLE EXPOSURE values within photometry aperture      128
8            Source within an EXTENDED FEATURE                        256
Table 1: Quality flag values for the UVOTSSC source catalogue. 

An example image is shown below with detected sources identified: green for sources with no quality flag set, red for sources with a quality flag set. The number indicates the flag. Table 1 gives the meanings of the flags. Multiple flags are summed to give the final quality flag value.

e.g. If source contains one or more bad pixels (value 1) and lies on a read-out streak (value 2), the final quality flag value will be 3.
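
A small sketch of how a combined flag value can be decoded back into the individual Table 1 issues (the helper function below is hypothetical, not part of the catalogue software):

    # Decode a combined quality flag value into the individual Table 1 issues.
    FLAG_MEANINGS = {
        1: "bad pixels", 2: "readout streak", 4: "smoke ring",
        8: "diffraction spike", 16: "mod-8 noise", 32: "halo ring",
        64: "near bright source", 128: "multiple exposure values",
        256: "extended feature",
    }

    def decode_quality_flag(value):
        return [name for bit, name in FLAG_MEANINGS.items() if value & bit]

    print(decode_quality_flag(3))   # ['bad pixels', 'readout streak']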

Note that since the sources are detected on summed images, (after stacking the raw images in an ObsID), it is possible to have a flag assigned to the source that is not relevant to all of the images in the stack.

For the numbers of flags set, please see Table 2. For more examples see Quality Flagging in the Processing section.

Figure 1: Sources in a B filter image: the red circles identify the sources with a quality flag. The identity of the flag (or flags) is given by a number, which may be a sum of several flags.

 
Flag bit       V      B      U      UVW1   UVM2   UVW2
no flags set   84.8   86.3   88.2   90.0   85.0   88.4
0              0      0      0      0      0      0
1              7.5    3.4    4.0    3.0    3.7    2.4
2              0.9    1.8    0.4    0.5    0.8    0.4
3              1.8    0.4    0.6    0.08   0.03   0.02
4              1.0    1.8    0.9    0.3    0.05   0.08
5              0.2    0.07   0.2    0.3    0.1    0.2
6              2.3    4.7    2.4    0.8    1.4    0.8
7              3.5    3.1    3.2    3.6    7.1    4.2
8              0.7    1.0    1.7    2.9    3.7    4.6
Table 2 gives the statistics (in percentages) of sources bearing flags in the UVOTSSC source catalogue. Note that the columns do not add up to 100% because some sources have more than one flag set. 

Source Collation

When all the data have passed through the four engines, sources of low significance are removed and the source lists are concatenated to form the source catalogue. If the distance between two point sources is less than 1.5 arcsec, the detections are assumed to be associated with the same source. Sources observed in more than one observation and/or in more than one filter are thus identified, and all the sources are given unique source numbers. Those sources observed in the same ObsID but in more than one filter are listed on one line of the source list. The sources are sorted in the catalogue by RA.
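
The 1.5 arcsec association can be illustrated with a standard catalogue cross-match, for example using astropy. The sketch below uses invented positions and is not the actual collation code.

    from astropy.coordinates import SkyCoord
    import astropy.units as u

    # Illustrative cross-match: treat detections closer than 1.5 arcsec as the
    # same source (positions are invented; the real collation is more involved).
    list_a = SkyCoord(ra=[150.0012, 150.0100] * u.deg, dec=[2.2001, 2.2100] * u.deg)
    list_b = SkyCoord(ra=[150.0013, 150.0300] * u.deg, dec=[2.2002, 2.2300] * u.deg)

    idx, sep2d, _ = list_a.match_to_catalog_sky(list_b)
    matched = sep2d < 1.5 * u.arcsec
    for i, (j, ok) in enumerate(zip(idx, matched)):
        if ok:
            print(f"source {i} in list A matches source {j} in list B "
                  f"({sep2d[i].arcsec:.2f} arcsec)")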

Once all the unique sources have been identified the distances between the sources are calculated to enable the user to make an assessment of crowdedness or confusion. These are recorded up to a maximum of 30 arcsec. 

Other parameters for the catalogue also have to be determined at this stage, such as the number of filters the source was detected in. 

See also: 

Swift's low (90-minute) orbit and pointing constraints prevent it from observing any one target for very long, so Swift observing time is broken up into 'snapshots' of 5 to 45 minutes each. A set of snapshots makes up an observation 'segment'. Observation segments for different targets are interwoven to allow the most efficient use of the observing time.

Each snapshot might consist of several UVOT exposures using different filters. Usually the filter wheel is only rotated once per snapshot. The exposures can be anything from about 10 s to a few thousand seconds long. All the data from a given observation segment are given the same observation identification number (ObsID), an 11-digit identifier.
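
Conventionally the 11-digit ObsID is the 8-digit target identifier followed by the 3-digit observation segment number, so it can be split directly; the example below uses the ObsID quoted elsewhere on this page.

    # Split an 11-digit Swift ObsID into the target ID and segment number
    # (conventional interpretation of the identifier).
    obsid = "00020073001"
    target_id, segment = obsid[:8], obsid[8:]
    print(target_id, segment)   # 00020073 001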

The sources are calibrated using information stored in the Calibration Database (CalDB) and described in Poole et al. (2008) and Breeveld et al. (2010), with an update to the UV photometry in Breeveld et al. (2011).