Image Processing | Earth Observing Laboratory

29 OCT 1992
     ----------------------        NOTES         ----------------------

     All NOAA AVHRR and DMSP OLS image files in the Kuwait Data Archive
     have undergone pre-processing.  This consists of 

        (1) hand navigation of the image using the Sea Space
            Terascan software (see ABOUT HAND NAVIGATION below), 
        (2) calibration of the AVHRR IR data to brightness
            temperature,
        (3) possible "patching" of missing data lines -- not
            done very often, and
        (4) interpolation to a lat-long grid.  

     The images were all reinterpolated to a Cartesian lat-long grid.
     The corner points of our geographical analysis area are:

        NE Corner:  57 deg 15.0 min E, 33 deg 33.0 min N ? (was 30.0)
        SW Corner:  43 deg 45.0 min E, 21 deg 30.0 min N

     Within this domain the NOAA AVHRR data has been processed to a
     1200 x 1200 grid, and the DMSP OLS data (BOTH visible and IR) has been
     processed to a 2400 x 2400 grid.  This is a "rectangular" map projection, 
     essentially a simple lat-long grid, but with the grid spacings adjusted 
     so that the relative grid increments correspond to equal spacings in 
     km at the center of the image (roughly 550 m for the DMSP data, and 1 
     or 1.1 km for the NOAA data).   NOTE:  The sensor resolution for the 
     DMSP data is typically 500 m (at nadir) for the visible OLS data, but 
     only 2.5 km for the IR data.  The sensor resolution varies as a function
     of the distance from nadir.  In both cases, however, the data has been
     gridded (or over-gridded) to the same 500 m grid. 

     The center coordinate is at the center of the four adjacent pixels
     (since there are an even number of samples).  The nominal grid spacing
     in both the E-W direction (longitude difference between adjacent pixels)
     and in the N-S direction (latitude difference between adjacent pixels)
     is given below.
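     Assuming the corner coordinates quoted above are the centers of the
     corner pixels, and that indices run from the SW corner increasing north
     and east (both assumptions, not stated explicitly here), the
     pixel-center geometry can be sketched as:

```python
# Sketch of the pixel-center geometry (assumptions: corner coordinates
# are pixel centers; index 0 is the SW corner; indices increase N and E).

NX = 1200                          # NOAA grid; use 2400 for DMSP
SW_LON = 43 + 45.0 / 60.0          # 43.75 deg E
SW_LAT = 21 + 30.0 / 60.0          # 21.50 deg N
DLON = 13.50 / (NX - 1)
DLAT = 12.05 / (NX - 1)

def pixel_lonlat(col, row):
    """Longitude/latitude of a (possibly fractional) pixel position."""
    return (SW_LON + col * DLON, SW_LAT + row * DLAT)

# With 1200 samples there is no single middle pixel: the domain center
# falls midway between columns 599 and 600 (and rows likewise), i.e. at
# the center of four adjacent pixels, as the text states.
center = pixel_lonlat(599.5, 599.5)
```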

     The grid spacing for the NOAA data is:

        13.50 degrees / (1200-1 grid intervals) = 
             0.011259 degrees of longitude
        12.05 degrees / (1200-1 grid intervals) =
             0.010050 degrees of latitude

     The grid spacing for the DMSP data is:

        13.50 degrees / (2400-1 grid intervals) =
             0.005627 degrees of longitude
        12.05 degrees / (2400-1 grid intervals) =
             0.005023 degrees of latitude
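     As a cross-check, the spacings follow directly from the corner
     coordinates; a minimal sketch (not part of the archive software):

```python
# Derive the nominal grid spacings from the corner coordinates above.

NE = (57 + 15.0 / 60.0, 33 + 33.0 / 60.0)   # (lon E, lat N)
SW = (43 + 45.0 / 60.0, 21 + 30.0 / 60.0)

lon_span = NE[0] - SW[0]                    # 13.50 degrees
lat_span = NE[1] - SW[1]                    # 12.05 degrees

# N samples across the domain span N-1 grid intervals.
noaa_dlon = lon_span / (1200 - 1)           # ~0.011259 deg
noaa_dlat = lat_span / (1200 - 1)           # ~0.010050 deg
dmsp_dlon = lon_span / (2400 - 1)           # ~0.005627 deg
dmsp_dlat = lat_span / (2400 - 1)           # ~0.005023 deg
```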

     These can be converted to kilometer intervals using any earth model or
     geoid of choice.  Using the Smithsonian Meteorological Tables (Tables
     162 and 163) for the International Ellipsoid of 1924, the grid spacings
     are nominally:

        For the NOAA (1200 x 1200) grid:

        N-S "y" interval    1.113 km at 20-21 degrees N
                            1.114 km at 27-28 degrees N
                            1.115 km at 32-33 degrees N

        E-W "x" interval    1.171 km at 21 degrees N
                            1.118 km at 27 degrees N
                            1.052 km at 33 degrees N

        For the DMSP (2400 x 2400) grid:

        N-S "y" interval    0.556 km at 20-21 degrees N
                            0.557 km at 27-28 degrees N
                            0.557 km at 32-33 degrees N

        E-W "x" interval    0.585 km at 21 degrees N
                            0.558 km at 27 degrees N
                            0.526 km at 33 degrees N
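     A rough way to reproduce these intervals, using a simple spherical
     earth (radius 6371 km, an assumption) rather than the International
     Ellipsoid of 1924, so expect differences of a few meters:

```python
import math

# Spherical-earth approximation of the kilometer intervals above.

R_EARTH = 6371.0                            # mean earth radius, km (assumption)
KM_PER_DEG = math.pi * R_EARTH / 180.0      # ~111.19 km per degree of arc

def ns_interval_km(dlat_deg):
    """N-S spacing: one degree of latitude is ~constant on a sphere."""
    return dlat_deg * KM_PER_DEG

def ew_interval_km(dlon_deg, lat_deg):
    """E-W spacing shrinks with the cosine of the latitude."""
    return dlon_deg * KM_PER_DEG * math.cos(math.radians(lat_deg))
```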

     Users are cautioned that in general, the inaccuracies in the orbital
     elements, satellite reference time standards, roll-pitch-yaw estimates,
     sensor alignment, and the interactive hand navigation steps are likely
     to be much more significant than inaccuracies in an earth model.

     The data on the MSS has been kept in its native TDF (i.e. Sea
     Space) format.  This is essentially a variety of CDF.  The
     format consists of a 644 byte header, the data itself, and then
     a trailer (all in a single file).  The data can be accessed
     by simply skipping the header, reading the data (all channels
     sequential), and ignoring the CDF stuff at the end.  This is
     essentially what textract.exe does, but one channel at a time.

     The NOAA data is stored as "short int" variables (2 bytes or 16 bits)
     per pixel, in units of albedo x 100 and/or brightness temp x 100.
     Given a 1200 x 1200 array, this means that the channel 1 data will
     occupy 2,880,000 bytes (2 x 1200 x 1200 = 2,880,000), followed by
     channel 2 data, 3, 4, and 5 (for a total of 14,400,000 bytes).  The
     overall file sizes will be a bit bigger, since you have to add in the
     644 byte header and the variable length TDF trailer.
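     The layout described above can be read without any TDF library; the
     sketch below does so for one NOAA channel.  The byte order is NOT
     stated in these notes -- big-endian is an assumption, so verify one
     image against textract.exe output before trusting the values.

```python
import struct

# Pull one NOAA channel out of a TDF file: skip the 644 byte header,
# seek past the preceding channels (five 1200 x 1200 channels of 2-byte
# integers, stored sequentially), and ignore the trailer.

HEADER_BYTES = 644
NX = NY = 1200
CHANNEL_BYTES = 2 * NX * NY                 # 2,880,000 bytes per channel

def read_channel(path, channel):
    """Return channel 1..5 as a flat list of floats (counts / 100)."""
    with open(path, "rb") as f:
        f.seek(HEADER_BYTES + (channel - 1) * CHANNEL_BYTES)
        raw = f.read(CHANNEL_BYTES)
    counts = struct.unpack(">%dh" % (NX * NY), raw)   # big-endian assumed
    # Units: albedo x 100 or brightness temperature x 100, per the text.
    return [c / 100.0 for c in counts]
```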

     The same pattern applies to the DMSP data, with the exception that
     the data is stored as "byte" variables (8 bits). The high resolution
     visible data only maintains 64 gray levels (with a variable gain,
     hence not calibrated rigorously) in the data, but is still stored
     in byte format.  The low resolution IR data uses all 256 levels
     available to it, and can be converted to brightness temperature
     via the relation:

                 T  =  (I - 176.69)/2.125
     where T is in degrees C and I is the (integer) byte value.  For the
     DMSP files, this means that the file sizes will be 11,520,000 bytes
     (2400 x 2400 pixels x 2 channels x 1 byte = 11,520,000), plus a bit
     more for the 644 byte header and the variable length TDF trailer.
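     The byte-to-temperature relation above as a small helper (a sketch;
     the coefficients are copied verbatim from the relation quoted):

```python
# Convert a raw DMSP IR byte value to brightness temperature.

def dmsp_ir_to_celsius(count):
    """T = (I - 176.69) / 2.125, with T in degrees C, I in 0..255."""
    return (count - 176.69) / 2.125

# The 256 available levels span roughly -83 C to +37 C.
```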

     ---------------------    ABOUT RESOLUTION    ----------------------
     Why is the IR data LOW RESOLUTION when it uses 256 levels, and the
     visible OLS data HIGH RESOLUTION when it uses 64 levels?  Shouldn't
     this be reversed?

     High/low resolution is used in the context of sensor SPATIAL resolution.
     The 64 gray-level data has a ground resolution of 500 meters, while the
     256 level IR data has a reduced resolution of 2.5 kilometers.  The number
     of data bits is inversely related to the spatial resolution for two
     reasons:  (1) data transmission rates -- the 500 m resolution data
     generates so much data that the downlink could not keep up with more
     data bits per pixel, and (2) the more rapidly scanning sensor taking
     the 500 m resolution data does not have the dwell time to justify more
     significant digits.  

     ------------------    ABOUT HAND NAVIGATION    --------------------

     The nominal image positioning using the transmitted position, time,
     and attitude values is often in error by as much as 10 km.  The "hand"
     navigation step involves displaying the image in "satellite-sensor"
     coordinates (lines and samples), with a geometrical transformation of
     a coast-line data base as an overlay on the image.  In an iterative series 
     of steps, the reported time and attitude parameters can be adjusted to
     improve the registration of the image relative to the coast-line data base.
     There are many different ways to do this step.  In general, however,
     the final result is clearly better than the unadjusted registration,
     with accuracy often quoted as good to 1 (or more honestly 1-2) km.
     Not all areas of the image will be registered "perfectly".  Cloud
     cover, sensor quality, and position of the satellite track will all
     affect the quality of the navigation.   After the image is navigated,
     it is regridded into the "standard" lat-long grid used in the data base.  
     This permits scientists not interested in the preliminary processing 
     steps, or not having the appropriate software, to have access to the "full" 
     resolution multi-spectral data sets for all available imagery.