If you wish to create an attribute table to identify the different land cover classifications, an easy reference table is available under the PROCESS DESCRIPTION section.
Additional Use Constraints: None
Pre-Processing Steps: Each individual TM scene was georeferenced to 50 differentially processed Global Positioning System (GPS) ground control points in UTM NAD27 meters, to an RMSE of +/- 0.5 pixels.
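For reference, the RMSE figure quoted above can be reproduced from the control-point residuals after georeferencing. The sketch below is illustrative only; the residual arrays are hypothetical stand-ins for the actual GCP residuals, which are not distributed with this data set.

    import numpy as np

    # Hypothetical column (x) and row (y) residuals, in pixels, between the
    # georeferenced and surveyed positions of the 50 GPS ground control points.
    dx = np.random.normal(0.0, 0.35, size=50)
    dy = np.random.normal(0.0, 0.35, size=50)

    # Root-mean-square error over all control points.
    rmse = np.sqrt(np.mean(dx**2 + dy**2))
    print(f"GCP RMSE: {rmse:.2f} pixels")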
Ancillary data sets: Subsequent field work and the use of collateral data such as USGS maps, TIGER road data, and National Wetland Inventory data led to further refinements in the image classification.
Shoreline features can be extracted from Landsat images by detecting the land/water interface. However, care must be taken to avoid misinterpreting tidal differences as changes in shorelines, since the satellite images from which these land cover images are derived are acquired at different tidal stages, depending on when the satellite is overhead. The land cover classifications represent the instantaneous state of the shoreline at the moment of image acquisition.

C-CAP data are mapped at 1:100,000 scale with 22 standard classes constituting major landscape components. They are not jurisdictional (they cannot be used for permitting) and will not identify individual species. However, they are useful for identifying regional landscape patterns, major functional niches, environmental impact assessment, urban planning, and zoning applications. If you need change analysis data at this scale, C-CAP may be your only option. C-CAP is designed around a 1 to 5 year revisit cycle.

Land Cover is the complete human and natural landscape recorded as surface components - forest, water, wetlands, concrete, asphalt, etc. Land cover can be documented by analyzing spectral signatures of satellite and aerial imagery. Land Use is the documentation of human uses of the landscape - residential, commercial, agricultural, etc. Land use can be inferred, but not explicitly derived, from satellite and aerial imagery; there is no spectral basis for land use determination in satellite imagery. C-CAP data can be used to identify concrete and asphalt as land cover, but we can only infer that these materials denote a residential or commercial use.
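The land/water interface mentioned at the start of this section can be located in a classified C-CAP raster by flagging water pixels that border a non-water class. A minimal sketch, assuming the classification is held as a NumPy array and that Water is coded as class 18 (per the attribute list below):

    import numpy as np

    def shoreline_mask(classified, water_class=18):
        """Return a boolean mask of water pixels adjacent to non-water pixels."""
        water = classified == water_class
        # Replicate border values so image edges are not flagged as shoreline.
        padded = np.pad(water, 1, mode="edge")
        # A water pixel is shoreline if any 4-connected neighbor is land.
        neighbor_is_land = (
            ~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
            ~padded[1:-1, :-2] | ~padded[1:-1, 2:]
        )
        return water & neighbor_is_land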
Post-Processing Steps: These data were projected into UTM NAD83 zone 17 for general distribution.
Known Problems: None
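The reprojection described under Post-Processing Steps can be scripted with the GDAL Python bindings. This is a sketch only: the file names are placeholders, and EPSG:26917 is assumed as the target code for NAD83 / UTM zone 17N. Nearest-neighbour resampling is used so the categorical class codes are preserved.

    from osgeo import gdal

    # Warp the classification raster to NAD83 / UTM zone 17N (EPSG:26917).
    gdal.Warp(
        "sc_ccap_utm17_nad83.tif",   # placeholder output file name
        "sc_ccap_source.tif",        # placeholder input file name
        dstSRS="EPSG:26917",
        resampleAlg="near",          # keep class codes intact
    )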
Accuracy Results: This data set was found to be 89.4% accurate, with a Kappa coefficient of 0.879.
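Overall accuracy and the Kappa coefficient of agreement are both computed from an error (confusion) matrix of reference labels versus mapped labels. A minimal sketch with a hypothetical three-class matrix; the actual assessment used the full C-CAP class scheme:

    import numpy as np

    def accuracy_and_kappa(confusion):
        """Overall accuracy and Cohen's Kappa from a square confusion matrix
        (rows = reference classes, columns = mapped classes)."""
        n = confusion.sum()
        observed = np.trace(confusion) / n
        # Chance agreement from the row and column marginals.
        expected = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / n**2
        return observed, (observed - expected) / (1 - expected)

    # Hypothetical example data.
    cm = np.array([[50, 2, 1],
                   [3, 45, 2],
                   [1, 2, 44]])
    oa, kappa = accuracy_and_kappa(cm)
    print(f"Overall accuracy: {oa:.1%}, Kappa: {kappa:.3f}")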
Baseline Classification Process: The South Carolina land cover/change classification product was processed using an iterative classification approach. Landsat Thematic Mapper data for path/row/date(s) 15/36 19950105, 16/36 19950105, 16/37 19950105, and 16/38 19950105 were analyzed and mosaicked to create a land cover inventory for South Carolina. The mosaicked scene was classified, focusing first on separating major categories (e.g., water, forest, marsh, herbaceous upland, and developed) using standard supervised classification techniques. Numerous individual areas were chosen as training sites for the land cover classification. The mean and covariance statistics for these areas were passed to an isodata classification algorithm, which assigned each unknown pixel to the class in which it had the highest probability of membership.

Iterative unsupervised classifications were then performed on each major category individually by masking out all other major categories. With this type of unsupervised classification, the computer is allowed to query the multispectral properties of the masked scene using user-specified criteria and to identify X mutually exclusive clusters in N-dimensional feature space. By masking out all data but a single major category, the spectral variance is greatly reduced, thus decreasing classification errors. After several classification iterations of the masked data, final classification labels were assigned to the spectral clusters. Changes among major categories were permitted to occur even at this stage of processing.

Subsequent field work and the use of collateral data such as USGS maps, TIGER road data, and National Wetland Inventory data led to further refinements in the image classification. In small areas where land cover classes could not be separated spectrally, human pattern recognition was used to recode the data. A spatial filter was applied to the final classification data file.
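The masked, iterative unsupervised step described above can be illustrated by clustering only the pixels that belong to one major category. The sketch below is schematic: it substitutes k-means (via SciPy) for the ISODATA implementation actually used, and the band array, category mask, and cluster count are hypothetical.

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def cluster_within_mask(bands, mask, n_clusters=20):
        """Cluster only the pixels inside `mask` (one major category), so the
        reduced spectral variance yields cleaner spectral classes.

        bands : (n_bands, rows, cols) array of TM band values
        mask  : (rows, cols) boolean array selecting one major category
        """
        pixels = bands[:, mask].T.astype(float)        # (n_pixels, n_bands)
        _, labels = kmeans2(pixels, n_clusters, minit="++")
        clustered = np.zeros(mask.shape, dtype=np.int32)   # 0 = outside mask
        clustered[mask] = labels + 1                        # clusters 1..n
        return clustered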
Change Classification Process:
Landsat Thematic Mapper data for path/row(s) 15/36 19950105, 16/36 19950105, 16/37 19950105, and 16/38 19950105 were analyzed to arrive at a land cover classification for South Carolina. The change date land cover classification was derived in part from the baseline classification. Only the pixels in the January 5, 1995 image that had changed spectrally relative to the change date image were classified for the December 9, 1990 data file. All other pixels were simply replaced with the baseline image classification.
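The substitution described above amounts to a per-pixel selection between the two classifications. A minimal sketch, assuming the baseline classification, the newly classified candidate pixels, and the binary change mask are aligned NumPy arrays (names are illustrative):

    import numpy as np

    def build_change_date_classification(baseline, changed_labels, change_mask):
        """Keep the new label where the pixel spectrally changed;
        carry the baseline label everywhere else."""
        return np.where(change_mask, changed_labels, baseline)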
It is possible to identify the amount of change between two images simply by differencing the same band in two images which have previously been rectified to a common basemap. Image differencing involves subtracting the imagery of one date from that of another. The subtraction results in positive and negative values in areas of radiance change, and zero values in areas of no change, in a new 'change image'. The images are subtracted to produce a signed 16-bit result with pixel values ranging from -255 to 255; the results were then transformed into positive unsigned 16-bit values by adding a constant, c. The operation is expressed mathematically as:

Dijk = BVijk(1) - BVijk(2) + c

where:
Dijk = change pixel value
BVijk(1) = brightness value at time 1
BVijk(2) = brightness value at time 2
c = a constant (e.g., 255)
i = line number
j = column number
k = a single band (e.g., TM band 4)
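The differencing operation can be expressed directly on two co-registered band arrays. A minimal sketch, assuming 8-bit TM band inputs; the offset c = 255 keeps all values non-negative, as described above.

    import numpy as np

    def difference_image(band_t1, band_t2, c=255):
        """D = BV(t1) - BV(t2) + c, computed in a signed type so negative
        radiance changes are preserved before the constant offset is added."""
        d = band_t1.astype(np.int16) - band_t2.astype(np.int16) + c
        return d.astype(np.uint16)   # values fall in 0..510 for 8-bit inputs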
The 'change image' produced using image differencing usually yields a BV distribution that is approximately Gaussian in nature, where pixels of no BV change are distributed around the mean and pixels of change are found in the tails of the distribution. A threshold value was carefully chosen to identify spectral 'change' and 'no-change' pixels in the 'change image.' A 'change/no-change' mask was derived by performing image differencing on band 4 and on the Normalized Difference Vegetation Index (NDVI) of the two-date dataset, and recoding the result into a binary mask file. The 'change/no-change' mask was then overlaid onto the earlier date of imagery, and only those pixels which were detected as having spectrally changed were viewed as candidate pixels for categorical change.
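The band 4 and NDVI differencing used to derive the change/no-change mask can be sketched as follows. The threshold here is expressed in standard deviations from the mean of the difference image, which is one common way of isolating the tails of an approximately Gaussian distribution; the actual threshold for this data set was chosen by the analysts, and the array names are illustrative.

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from TM band 4 (NIR) and band 3 (red)."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / np.maximum(nir + red, 1e-6)

    def change_mask(diff, n_std=2.0):
        """Flag pixels in the tails of the difference distribution as 'change'."""
        mu, sigma = diff.mean(), diff.std()
        return np.abs(diff - mu) > n_std * sigma

    # Hypothetical use: combine band 4 and NDVI evidence into one binary mask.
    # band4_t1, band4_t2, red_t1, red_t2 are co-registered arrays for the two dates.
    # mask = change_mask(band4_t1.astype(float) - band4_t2.astype(float)) | \
    #        change_mask(ndvi(band4_t1, red_t1) - ndvi(band4_t2, red_t2))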
Change Detection Database
The change date and baseline land cover classifications were compared on a pixel by pixel basis using a change detection matrix. This traditional post-classification comparison yields 'from land cover class - to land cover class' change information. Many pixels with sufficient change to be included in the mask of candidate pixels in the spectral change process did not qualify as categorical land cover change. This method may reduce change detection errors (omission and commission) and provides detailed 'from-to' change class information. The technique reduces effort by allowing analysts to focus on the small amount of area that has changed between dates.
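The post-classification comparison reduces to a cross-tabulation of baseline ('from') class against change-date ('to') class for each pixel. A minimal sketch, assuming the 23 class codes (0-22) listed under Attributes below:

    import numpy as np

    def change_matrix(baseline, changed, n_classes=23):
        """Rows = baseline ('from') class, columns = change-date ('to') class."""
        pairs = baseline.astype(np.int64).ravel() * n_classes + changed.ravel()
        counts = np.bincount(pairs, minlength=n_classes * n_classes)
        return counts.reshape(n_classes, n_classes)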
Attributes
0  Background
1  Unclassified
2  High Intensity Developed
3  Low Intensity Developed
4  Cultivated Land
5  Grassland
6  Deciduous Forest
7  Evergreen Forest
8  Mixed Forest
9  Scrub/Shrub
10 Palustrine Forested Wetland
11 Palustrine Scrub/Shrub Wetland
12 Palustrine Emergent Wetland
13 Estuarine Forested Wetland
14 Estuarine Scrub/Shrub Wetland
15 Estuarine Emergent Wetland
16 Unconsolidated Shore
17 Bare Land
18 Water
19 Palustrine Aquatic Bed
20 Estuarine Aquatic Bed
21 Tundra
22 Snow/Ice
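When building the attribute table mentioned at the top of this section, the class codes above can be carried as a simple lookup. The dictionary below mirrors the codes listed here; the variable name is illustrative.

    # Class code -> class name lookup for the C-CAP attribute table.
    CCAP_CLASSES = {
        0: "Background", 1: "Unclassified", 2: "High Intensity Developed",
        3: "Low Intensity Developed", 4: "Cultivated Land", 5: "Grassland",
        6: "Deciduous Forest", 7: "Evergreen Forest", 8: "Mixed Forest",
        9: "Scrub/Shrub", 10: "Palustrine Forested Wetland",
        11: "Palustrine Scrub/Shrub Wetland", 12: "Palustrine Emergent Wetland",
        13: "Estuarine Forested Wetland", 14: "Estuarine Scrub/Shrub Wetland",
        15: "Estuarine Emergent Wetland", 16: "Unconsolidated Shore",
        17: "Bare Land", 18: "Water", 19: "Palustrine Aquatic Bed",
        20: "Estuarine Aquatic Bed", 21: "Tundra", 22: "Snow/Ice",
    }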