US20020104891A1 - Smart card for storage and retrieval of digitally compressed color images - Google Patents

Smart card for storage and retrieval of digitally compressed color images

Info

Publication number
US20020104891A1
Authority
US
United States
Prior art keywords
pixels
smart card
block
target pixel
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/836,116
Inventor
Anthony Otto
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wada Ayao
Original Assignee
Wada Ayao
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wada Ayao filed Critical Wada Ayao
Priority to US09/836,116
Assigned to WADA, AYAO. Assignment of assignors interest (see document for details). Assignors: OTTO, ANTHONY H.
Publication of US20020104891A1
Legal status: Abandoned

Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F7/00Mechanisms actuated by objects other than coins to free or to actuate vending, hiring, coin or paper currency dispensing or refunding apparatus
    • G07F7/08Mechanisms actuated by objects other than coins to free or to actuate vending, hiring, coin or paper currency dispensing or refunding apparatus by coded identity card or credit card or other personal identification means
    • G07F7/10Mechanisms actuated by objects other than coins to free or to actuate vending, hiring, coin or paper currency dispensing or refunding apparatus by coded identity card or credit card or other personal identification means together with a coded signal, e.g. in the form of personal identification information, like personal identification number [PIN] or biometric data
    • G07F7/1008Active credit-cards provided with means to personalise their use, e.g. with PIN-introduction/comparison system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/30Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/34Payment architectures, schemes or protocols characterised by the use of specific devices or networks using cards, e.g. integrated circuit [IC] cards or magnetic cards
    • G06Q20/346Cards serving only as information carrier of service
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/005Statistical coding, e.g. Huffman, run length coding

Definitions

  • the invention relates generally to information signal processing, and more particularly relates to a smart card containing a programmable microchip having a memory for storing digitally compressed color images, such as for storing a color identification photograph.
  • One technique that has been used in digital encoding of image information is known as run length encoding.
  • the scan lines of a video image are encoded as a value or set of values of the color content of a series of pixels along with the length of the sequence of pixels having that value, set of values, or range of values.
  • the values may be a measure of the amplitude of the video image signal, or other properties, such as luminance or chrominance.
  • Statistical encoding of frequent color values can also be used to reduce the number of bits required to digitally encode the color image data.
  • One basic encoding process is based upon the block truncation coding (BTC) algorithm published by Mitchell and Delp of Purdue University in 1979.
  • BTC (block truncation coding)
  • the basic BTC algorithm breaks an image into 4 ⁇ 4 blocks of pixels and calculates the first and second sample moments. Based upon an initial discriminator value, set to the first sample moment (the arithmetic mean), a selection map of those pixels lighter or darker than the discriminator value is determined, along with a count of the lighter pixels. From the first and second sample moments, the sample variance, and therefore, the standard deviation, can be calculated. The mean, standard deviation, and selection map are preserved for each block.
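
By way of illustration, the following is a minimal sketch of this basic BTC step for a single grayscale 4×4 block, assuming 8-bit pixel values; the function and variable names are illustrative and not taken from the patent.

```python
import math

def btc_encode_block(block):
    """Encode one 4x4 grayscale block (16 values, 0-255) in the basic BTC style:
    first/second sample moments -> mean and standard deviation, plus a 16-bit
    selection map of pixels lighter than the discriminator (the mean), along
    with a count of the lighter pixels."""
    n = len(block)
    m1 = sum(block) / n                    # first sample moment (arithmetic mean)
    m2 = sum(p * p for p in block) / n     # second sample moment
    std_dev = math.sqrt(max(m2 - m1 * m1, 0.0))
    selection_map, lighter = 0, 0
    for i, p in enumerate(block):
        if p >= m1:                        # lighter than (or equal to) the discriminator
            selection_map |= 1 << i
            lighter += 1
    return m1, std_dev, selection_map, lighter

# A block with a hard light/dark edge keeps its contrast in the standard deviation.
print(btc_encode_block([30] * 8 + [200] * 8))
```
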
  • the original BTC method is limited to grayscale images, so it would be desirable to extend the BTC method to include YCrCb full color. It would also be desirable to adapt the BTC method to handle delta values, allowing multi-level, or hierarchical, encoding and allowing encoding of differences between frames or from a specified background color.
  • the RGB color space, illustrated in FIG. 1, can be represented by a conventional three-dimensional Cartesian coordinate system, with each axis representing 0 to 100% of the red, green and blue values for the color value of the pixel.
  • a grayscale line can be described as running diagonally from black at 0% of each component to white at 100% of each. Since human vision can only discriminate a limited number of shades of color values, by selecting representative color values in such a color space, a limited number of color values can be used to approximate the actual color values of an image such that the human eye can not differentiate between the actual color values and the selected color values.
  • HVS (Hue-Value-Saturation)
  • Hue is defined as the particular color in the visible spectrum ranging from red through green and blue to violet.
  • Value is defined as the brightness level, ignoring color.
  • Saturation is defined as the intensity of the particular color or the absence of other shades in the mixture.
  • the HVS system can be represented by a generally cylindrical coordinate system with a polar base consisting of hue as the angle, and saturation as the radius.
  • the value or brightness component is represented as the altitude above the base.
  • the actual visible colors do not occupy the entire cylinder, but are approximately two cones, base to base, with their vertices at 0% and 100% on the value scale.
  • the base is tilted in this example because the maximum saturation for blue occurs at a much lower brightness than the maximum saturation of green.
  • in order to represent digitized NTSC/PAL video in a Cartesian coordinate system, the YCrCb color space is used. Because the smart card system of the invention operates in the YCrCb color space, the smart card system provides for a novel color space conversion from 15- or 24-bit RGB. Eight-bit grayscale images are also supported.
  • the chrominance components, Cr and Cb are two axes that correspond to the polar hue and saturation components in the HVS system.
  • the Y, or luminance, component corresponds to the brightness axis in the HVS graph.
  • Typical implementations of digitally representing color values in this fashion use floating point arithmetic (11 multiplications and 9 additions/subtractions per pixel) or 16-bit integer arithmetic (9 multiplications, 9 additions/subtractions and 3 divisions per pixel). Both of these methods are quite wasteful of computing power, particularly on smaller microcontrollers. There is thus a need for a system for representing color values of digitized images that takes advantage of the limitations of human vision in discriminating color in color images in order to reduce the software and hardware requirements, particularly for storage of such color images in smart cards and databases.
  • Smart cards are commonly approximately the same shape and size of a common credit card, and typically contain a programmable microchip, having a memory such as a read only memory, or a read/write memory. Information stored in the memory of the card can be detected by a card interface device such as a card reader or connector.
  • noise can seriously interfere with the efficiency of any image compression process, lossy or lossless, because a compression engine must use more unnecessary data to encode noise as if it were actual subject material. Since lossy compression tends to amplify noise creating more noticeable artifacts upon decompression, lossy compression processes therefore typically attempt to remove some of the noise prior to compressing the data. Such preprocessing filters must be used very carefully, because too little filtering will not have the desired result of improved compression performance, and too much filtering will make the decompressed image cartoon-like.
  • chromakeying is a process of replacing a uniform color background (usually blue) from behind a subject.
  • a common application of this process is a television weather reporter who appears to be standing in front of a map. In actuality, the reporter is standing in front of a blue wall while a computer generated map is replacing all of the blue pixels in the image being broadcast.
  • preprocessing filters can remove noise from the area surrounding a subject of interest in an image, subtle changes in lighting or shading can remain in the original image which can be eliminated by chromakeying.
  • it would be desirable to use a chromakey system in compression of color images for storage on smart cards and databases, in order to replace the background with a solid color and increase the visual quality of the compressed image. It would also be desirable to automate the chromakey process, to simplify it for the operator.
  • the present invention meets these and other needs.
  • the present invention provides for an improved contact-less smart card for digitally storing color images, such as color identification photographs, compressed into 512 to 2,048 bytes.
  • the smart card of the invention accepts rectangular images in 16-pixel increments ranging from 48 to 256 pixels on a side. The typical size is 96×96 pixels.
  • the smart card of the invention is designed for use in very low computational power implementations such as with 8-bit microcontrollers, possibly with an ASIC accelerator.
  • the present invention accordingly provides for a smart card containing a programmable microchip having a memory for storing a digitally compressed image containing image data consisting of a plurality of scan lines of pixels with scalar values, such as for an identification photograph, for storage of the image.
  • the image data is filtered by evaluating the scalar values of individual pixels in the image with respect to neighboring pixels, and statistically encoded by encoding the image data, by dividing the image into an array of blocks of pixels, and encoding each block of pixels into a fixed number of bits that represent the pixels in the block.
  • the image data is filtered by evaluating each individual pixel as a target pixel and a plurality of pixels in close proximity to the target pixel to determine an output value for the target pixel.
  • each individual pixel preferably is evaluated by evaluating a sequence of five pixels, including two pixels on either side of the target pixel and the target pixel itself, for each target pixel.
  • an average of the data for a window of the pixels immediately surrounding the target pixel is determined for those pixels surrounding the target pixel that are within a specified range of values, according to the following protocol: if all five pixels are within the specified range, the output target pixel is determined to be the average of the four pixels in a raster line, two on each side of the target pixel; if the two pixels on either side are within a specified range and both sides themselves are within the range, the filtered output target pixel data is determined to be the average of the two pixels on each side of the target pixel; if the two pixels on either side of the target pixel and the target pixel itself are within a specified range, and the other two pixels on the other side are not within the specified range, the output target pixel is determined to be the average of the two neighboring pixels closest in value to the target pixel values and that fall within the specified range; if the five pixels are all increasing or decreasing, or are within a specified range, the output target pixel is determined to be the average of the two pixels on whichever side is closest in value to the target pixel.
  • the image data is statistically encoded by dividing the image into an array of 4×4 squares of pixels, and encoding each 4×4 square of pixels into a fixed bit length block containing a central color value, color dispersion value, and a selection map that represent the sixteen pixels in the block.
  • each block contains a central color value and a color dispersion value
  • the image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, and the central color value of each block is set to the arithmetic mean from the first sample moment of the pixels in the block.
  • a second sample moment of the pixels in the block is determined, and the color dispersion value of each block is determined by determining the standard deviation from the first and second sample moments.
  • a first absolute moment is determined by determining an average of the difference between the pixel values and the first sample moment, wherein the color dispersion value is set to the first absolute moment.
  • the digital image data is converted to the YCrCb color space.
  • the digital color image data is converted to the YCrCb color space by converting the image data from the RGB color space.
  • lookup tables of selected color values are utilized for color space conversion, and in one preferred approach, the digital image data is converted to the YCrCb color space by utilizing nine 256-entry one-byte lookup tables containing the contribution that each R, G and B make towards the Y, Cr and Cb components.
  • the statistical encoding of the image data in another presently preferred aspect, is accomplished by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining the selection map from those pixels in the block having values lighter or darker than the first sample moment.
  • the image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining the selection map from those pixels in the block having values lighter or darker than the average of the lightest and darkest pixels in the block.
  • the statistical encoding of the image data is accomplished by determining a classification of each block, quantifying each block, and compressing each block by codebook compression using minimum redundancy, variable-length bit codes.
  • This classification involves classifying each block according to a plurality of categories, and in a presently preferred embodiment, classification comprises placing each block in one of four categories: null blocks exhibiting little or no change from the higher level or previous frame, uniform blocks having a standard deviation less than a predetermined threshold, uniform chroma blocks having a significant luminance component to the standard deviation but little chrominance deviation, and pattern blocks having significant data in both luminance and chrominance standard deviations.
  • the statistical encoding of the color image data can also further involve determining the number of bits to be preserved for each component of the block after each block is classified. In one presently preferred variant, this can also further involve selecting a quantizer defining the number of bits to be preserved for each component of the block according to the classification of the block to preserve a desired number of bits for the block. In another presently preferred variant, the number of bits for the Y and Cr/Cb components of the blocks to be preserved are determined independently for each classification. In a currently preferred option, all components of each block for pattern blocks are preserved. In another currently preferred option, the mean luminance and chrominance, standard deviation luminance, and a selection map for uniform chroma blocks are preserved.
  • a classification of each block is determined by matching the texture map of the block with one of a plurality of common pattern maps for uniform chroma and pattern classified blocks, and compressing each block by codebook compression can comprise selecting codes from multiple codebooks.
  • the digital image compression also involves a hierarchical, or multilevel, component. Each image is first divided into an array of 4×4 level-one blocks. Then 4×4 blocks of representative values from the level-one blocks are encoded into higher level, or level-two, blocks of central color values. Each level-two block describes a lower resolution 16×16 pixel image. The process continues to 64×64, 256×256, and even 1024×1024 pixel blocks. The number of levels is selected so that four to fifteen top level blocks remain in each dimension.
  • the compression system of the invention uses only two levels.
  • the image data is statistically encoded by dividing the image into an array of 4×4 squares of pixels, and multi-level encoding the central color values of each 4×4 square of lower level blocks.
  • multi-level encoding is repeated until from four to fifteen blocks remain on each axis of a top level of blocks.
  • the top level of blocks is reduced to residuals from a fixed background color.
  • each successive lower level block is reduced to the residuals from the encoded block on the level above.
  • the pixel values are reduced to the residuals from the encoded level one blocks.
  • a datastream is prepared for storage in level order.
  • the datastream is prepared for storage in block order.
  • Another currently preferred embodiment of the method of the invention further involves adding compressed residuals between input pixel data and level-one decoded blocks to thereby provide loss-less digital compression of the image.
  • the present invention also provides for a smart card for storage of a digitally compressed color image such as a color identification photograph, the color image containing color image data consisting of a plurality of scan lines of pixels with color values, wherein the color image data is filtered by evaluating the color values of individual pixels in the color image with respect to neighboring pixels, and statistically encoding the color image data by dividing the color image into an array of blocks of pixels, and encoding each block of pixels into a fixed number of bits that represent the pixels in the block.
  • the color image data is filtered by evaluating each individual pixel as a target pixel and a plurality of pixels in close proximity to the target pixel to determine an output value for the target pixel.
  • the color image data is filtered by evaluating a sequence of five pixels, including two pixels on either side of the target pixel and the target pixel itself, for each target pixel.
  • the color image data is filtered by determining an average of the data for a window of the pixels immediately surrounding the target pixel for those pixels surrounding the target pixel that are within a specified range of values, according to the following protocol: if all five pixels are within the specified range, the output target pixel is determined to be the average of the four pixels in a raster line, two on each side of the target pixel; if the two pixels on either side are within a specified range and both sides themselves are within the range, the filtered output target pixel data is determined to be the average of the two pixels on each side of the target pixel; if the two pixels on either side of the target pixel and the target pixel itself are within a specified range, and the other two pixels on the other side are not within the specified range, the output target pixel is determined to be the average of the two neighboring pixels closest in value to the target pixel values and that fall within the specified range; if the five pixels are all increasing or decreasing, or are within a specified range, the output target pixel is determined to be the average of the two pixels on whichever side is closest in value to the target pixel.
  • a currently preferred aspect of the smart card for a digitally compressed color image further involves the replacement of background in the image being compressed with a scalar value, in order to reduce noise in the image, and to increase the visual quality of the compressed image.
  • replacing the background in the image being compressed with a scalar value comprises setting an initial chromakey value and delta values.
  • the initial chromakey value and background scalar value are set by capturing one or more calibration images of the background, consisting substantially of background pixels, prior to capturing an image with a subject of interest in place, and determining the average and standard deviation of the one or more calibration images to set at least an initial chromakey scalar value and range.
  • the initial chromakey value and background scalar value are set by capturing an image with a subject of interest in place, and, beginning in the upper-left and upper-right corners of the image, collecting pixel data down and towards the center of the image until an edge or image boundary is encountered, and determining the average and standard deviation of those pixels to set at least an initial chromakey value and range.
  • the pixel data are collected from a plurality of images.
  • the initial chromakey value and background scalar value are set by manually specifying an initial chromakey value and range without respect to the properties of an individual image being captured prior to image capture.
  • Preferably replacement of the background in the image being compressed involves determining an initial chromakey mask of pixels in the input image that are near the chromakey value.
  • three delta components are used to describe a rectangular region in YCrCb color space.
  • one delta component describes a spherical region in YCrCb color space.
  • the three delta components can in an alternate preferred embodiment describe a hollow cylindrical segment in HSV color space.
  • a further preferred aspect of the smart card for a digitally compressed color image further involves the removal of artifacts from the initial chromakey mask.
  • the artifacts are removed by a) initially determining the background mask set of pixels; b) removing pixels from the mask set that have less than a predetermined threshold of neighboring pixels included in the mask set; c) adding pixels to the mask set that have more than a predetermined threshold of neighboring pixels included in the mask set; and repeating steps b) and c) a plurality of times.
  • artifacts are removed by applying a sliding linear filter of five pixels once horizontally and once vertically to adjust a plurality of target pixels of the initial chromakey mask, and adjusting each target pixel to be included in the chromakey mask if the target pixel is initially not included in the chromakey mask, the pair of pixels on either side of the target pixel are in the chromakey mask, and the target pixel is not near an edge; adjusting each target pixel to be included in the chromakey mask if the target pixel is initially not included in the chromakey mask, and the two adjacent pixels on either side of the target pixel are included in the chromakey mask; adjusting each target pixel to be included in the chromakey mask if the target pixel is initially not included in the chromakey mask, and if three of the adjacent pixels a distance of two or less pixels away from the target pixel are included in the chromakey mask.
  • the term “background” is used herein to identify the area around a subject in an image. A significant part of replacing the background in the color image being compressed with a solid color comprises determining an initial chromakey value and range of the colors in the background.
  • the term “background color” is used herein to mean a fixed color that is subtracted from each pixel in a specified background area of an image prior to level one encoding, and it can either be copied from a replacement color or supplied by an operator.
  • the terms “chromakey color” and “chromakey” refer particularly to the color that is the center of a specified area of colors that are to be replaced, generally calculated from the accumulated pixels in the area, or supplied by an operator.
  • the “replacement color” is a fixed color that is used to replace all pixels indicated in the final chromakey mask, and it can be either copied from the chromakey color, or supplied by an operator.
  • the step of calibrating comprises capturing at least one calibration image of the background prior to capturing an image with a subject of interest in place, consisting substantially of background pixels, and determining the average and standard deviation of the at least one calibration image to set at least an initial chromakey color and range.
  • the term “chromakey range” is used herein to refer to the amount that pixels can differ from the chromakey color and be included in the pixels to be replaced, and is also calculated from the accumulated pixels or is supplied by an operator.
  • Another preferred aspect of the smart card for a digitally compressed color image further comprises conversion of digital color image data to the YCrCb color space.
  • the conversion of digital color image data to the YCrCb color space involves conversion of the color image data from the RGB color space.
  • the digital color image data is converted to the YCrCb color space by utilizing lookup tables of selected color values for color space conversion, and in one preferred approach, the digital color image data is converted to the YCrCb color space by utilizing nine 256-entry one-byte lookup tables containing the contribution that each R, G and B make towards the Y, Cr and Cb components.
  • the color image data is statistically encoded by dividing the color image into an array of 4×4 squares of pixels, and encoding each 4×4 square of pixels into a fixed number of bits containing a central color value, color dispersion value, and a selection map that represent the sixteen pixels in the block.
  • each of the blocks contains a central color value and a color dispersion value
  • the statistical encoding of the image data involves determining a first sample moment of each block as the arithmetic mean of the pixels in the block, and the central color value of each block is set to the arithmetic mean from the first sample moment of the pixels in the block.
  • One presently preferred option of this embodiment involves determining a second sample moment of the pixels in the block, and determining the color dispersion value of each block by determining the standard deviation from the first and second sample moments.
  • Another presently preferred option of this embodiment involves determining a first absolute moment by determining an average of the difference between the pixel values and the first sample moment, and wherein the color dispersion value is set to the first absolute moment.
  • the image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining the selection map from those pixels in the block having values lighter or darker than the first sample moment.
  • the selection map can be determined from those pixels in the block having values lighter or darker than the average of the lightest and darkest pixels in the block.
  • Another presently preferred embodiment of the smart card for a digitally compressed color image provides for statistical encoding of the color image data by encoding two levels of blocks of each 4×4 square of pixels, with the two levels including level one blocks and level two blocks, and the level two blocks including central color values.
  • the level two blocks are reduced to residuals from a fixed background color
  • the level one blocks are reduced to residuals from decoded level two blocks.
  • the present invention also provides for a smart card for storing a digitally compressed color image, wherein the color image contains image data consisting of a plurality of scan lines of pixels with scalar values, wherein the image data is filtered by evaluating the scalar values of individual pixels in the image with respect to neighboring pixels, and wherein the image data is statistically encoded by dividing the image into an array of blocks of pixels, and encoding each block of pixels into a fixed number of bits that represent the pixels in the block by classifying each said block, quantifying each said block, and compressing each said block by codebook compression using minimum redundancy, variable-length bit codes.
  • each said block is classified according to a plurality of categories.
  • each of the blocks are classified in one of four categories: 1) null blocks exhibiting little or no change from the higher level or previous frame, 2) uniform blocks having a standard deviation less than a predetermined threshold, 3) uniform chroma blocks having a significant luminance component to the standard deviation, but little chrominance deviation, and 4) pattern blocks having significant data in both luminance and chrominance standard deviations.
  • the number of bits to be preserved can be determined for each component of the block after each said block is classified.
  • a quantizer can be selected defining the number of bits to be preserved for each component of the block according to the classification of the block to preserve a desired number of bits for the block.
  • the number of bits for the Y and Cr/Cb components of the blocks to be preserved are determined independently for each classification. All components of each block can be preserved for pattern blocks, and all components of a central color, the mean luminance and chrominance, standard deviation luminance, and a selection map can be preserved for uniform chroma blocks. In another option, all three components of the central color value can be preserved for uniform blocks.
  • one preferred implementation provides for recording the run length of null blocks without preserving components of the null blocks.
  • the smart card for a digitally compressed color image can further involve matching of the texture map of the block with one of a plurality of common pattern maps for uniform chroma and pattern classified blocks; and compression according to codes from multiple codebooks.
  • the invention also provides for a smart card for storing a digitally compressed datastream of image data, stored in block order, with the image data consisting of a plurality of scan lines of pixels with scalar values, wherein the image data is filtered by evaluation of the scalar values of individual pixels in the image with respect to neighboring pixels, and the image data is statistically encoded by dividing the image into an array of blocks of pixels and encoding each block of pixels into a fixed number of bits that represent the pixels in the block.
  • the datastream is prepared for storage in block order by selecting a block order to first process those portions of the image that are most important to facial identification.
  • the block order provides a circle group layout.
  • the corners of the image are truncated.
  • the block order provides an oval group layout, and in a preferred option, the corners of the image are truncated.
  • the block order can provide a bell group layout, and the corners of the image may also be truncated.
  • the datastream is prepared for storage in block order by dividing the blocks into groups.
  • the blocks can, for example, be divided into groups by assigning a portion of the maximum compressed bytes to each group.
  • the division of the blocks into groups can also involve adjusting quality-controlling thresholds upon completion of each group.
  • only level two block information is transmitted on the last block to be processed if the information is near a maximum limit of compressed bytes.
  • the compression of the image can also be repeated starting at a lower quality level if necessary to process the entire image into a maximum limit of compressed bytes.
  • FIG. 1 is a schematic representation of an RGB color space known in the prior art
  • FIG. 2 is a schematic representation of NTSC/PAL video color system in the HVS color space known in the prior art
  • FIG. 3 is a schematic representation of NTSC/PAL video color system in the YCrCb color space known in the prior art
  • FIG. 4 is a schematic diagram illustrating image acquisition for storage and use on a smart card
  • FIG. 5 is a schematic diagram of an overview of the compression of color image data for storage and use on a smart card
  • FIGS. 6 to 10 illustrate color image data preprocessing filter protocols for storage of color image data on a smart card
  • FIGS. 11A to 11D show a flow chart for the color image data preprocessing for storage of color image data on a smart card
  • FIG. 11E is a flow chart of the options for setting the chromakey color and range for storage of color image data on a smart card
  • FIG. 11F is a diagram illustrating the automatic chromakey process for storage of color image data on a smart card
  • FIGS. 12A to 12C show a flow chart for multilevel encoding of color image data for storage of color image data on a smart card
  • FIG. 13 is a flow chart illustrating the encoding of a bitstream for storage of color image data on a smart card
  • FIGS. 14A to 14E show a flow chart for codebook compression for storage of color image data on a smart card
  • FIGS. 15A to 15D show a flow chart for encoding pattern maps for storage of color image data on a smart card
  • FIG. 16A is a chart illustrating a 96×96 pixel image divided into four groups to provide adaptive compression
  • FIG. 16B is a chart illustrating the non-truncated and truncated circle, oval and bell shaped layouts for pixel blocks;
  • FIG. 17 shows a flow chart for encoding luminance or chrominance values by codebook lookup for storage of color image data on a smart card
  • FIG. 18 is a flow chart of adaptive compression
  • FIG. 19 is an illustration of the format of the data stream
  • FIGS. 20A, B, C and D are tables of inputs and corresponding edge filters
  • FIG. 21 is a flowchart illustrating post-processing spatial filtering
  • FIG. 22 is a schematic diagram of a smart card according to the invention.
  • BTC statistical encoding can be used to reduce the number of bits required to digitally encode image data
  • the BTC method is limited to simple encoding of grayscale images.
  • Digital color image compression methods typically use floating point arithmetic or 16-bit integer arithmetic, which are quite wasteful of computing power, particularly for encoding of color image data on smart cards and databases.
  • Noise can also seriously interfere with the efficiency of the image compression process, and although preprocessing filters can be used to remove noise, too much filtering can make the decompressed image cartoon-like, while too little filtering may not be sufficient to improve compression performance.
  • the present invention accordingly provides for a smart card for storage of digitally compressed color images such as color identification photographs, although the invention is equally applicable to gray scale images.
  • Digital color image data, typically from a video camera or an existing digitized photograph, are first converted from the RGB (Red-Green-Blue) color space to the YCrCb (Luminance-Chrominance) color space.
  • the step of converting digital color image data to the YCrCb color space comprises utilizing lookup tables of selected color values for color space conversion, and in one preferred approach, the step of converting digital color image data to the YCrCb color space comprises utilizing nine 256-entry one-byte lookup tables containing the contribution that each R, G and B make towards the Y, Cr and Cb components.
  • BCr[i] = 225 × 0.713 × 0.114 × i / 255 ≈ 0.072 × i  (6)
  • BCb[i] = 225 × 0.564 × 0.886 × i / 255 ≈ 0.441 × i  (9)
  • the table can be used to convert a pixel from RGB to YCrCb as follows:
  • This method requires 8304 bytes of constant ROM, six 8-bit additions and nine table lookups.
  • the nine table lookups might require a 16-bit addition each, but more likely, the microcontroller could handle the lookup through an opcode or built-in addressing mechanism.
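
The following is a hedged sketch of such a table-driven conversion. It assumes conventional BT.601-style weights with the +128 chrominance bias folded into the R tables so that each component needs only two additions (six per pixel, as noted above); the patent's exact scale factors (note the 225 factor in the equations above) and rounding are not reproduced here.

```python
# Nine 256-entry lookup tables, one per (R, G, B) x (Y, Cr, Cb) contribution.
KR, KG, KB = 0.299, 0.587, 0.114

RY  = [round(KR * i) for i in range(256)]
GY  = [round(KG * i) for i in range(256)]
BY  = [round(KB * i) for i in range(256)]

RCr = [round(128 + 0.713 * (1 - KR) * i) for i in range(256)]   # bias folded in
GCr = [round(-0.713 * KG * i) for i in range(256)]
BCr = [round(-0.713 * KB * i) for i in range(256)]

RCb = [round(128 - 0.564 * KR * i) for i in range(256)]         # bias folded in
GCb = [round(-0.564 * KG * i) for i in range(256)]
BCb = [round(0.564 * (1 - KB) * i) for i in range(256)]

def rgb_to_ycrcb(r, g, b):
    """Convert one pixel with nine table lookups and six additions."""
    y  = RY[r]  + GY[g]  + BY[b]
    cr = RCr[r] + GCr[g] + BCr[b]
    cb = RCb[r] + GCb[g] + BCb[b]
    return y, cr, cb

print(rgb_to_ycrcb(255, 0, 0))   # a saturated red pixel
```
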
  • the invention includes a unique preprocessing filter with three goals: 1) to reduce noise without removing important facial features, 2) to sharpen blurred edges, and 3) to remain computationally simple.
  • the preprocessing filter utilizes a five pixel window on a single scan line to determine the output value for the center pixel. For each target pixel, a sequence of five pixels, including 2 pixels on either side of the target pixel and the target pixel itself, are evaluated. Five cases are accounted for in the following discussion, which is directed only to the component of luminance, for simplicity. All three components (YCrCb) are included in the actual filters.
  • an average of the data for the pixels immediately surrounding the target pixel is taken, for those pixels surrounding the target pixel that are within a specified range of values. If all five pixels are within specified limits, the output is the average of the four pixels in a raster line (A, B, D, E), two on each side of the target (C). If the two pixels on either side are within a specified range and both sides themselves are within the range, the target pixel is treated as impulse noise. As is illustrated in FIG. 7, the filtered output target pixel data is the average of the four pixels (A, B, D, E), two on each side of the target pixel (C).
  • Referring to FIG. 8, if the two pixels on one side of the target pixel and the target pixel itself are within a specified range, and the two pixels on the other side are not within the specified range, the target pixel (C) is considered to be an edge pixel.
  • the output target pixel (C) is then the average of the two pixels (A, B or D, E) on the matching side. If the five pixels are all increasing or decreasing (or are within a small range to account for ringing or pre-emphasis typically found in analog video signals), the target pixel is considered to be in the midst of a blurred edge. As is shown in FIG. 9, the output target pixel is then the average of the two pixels (A, B) on whichever side is closest in value to the target pixel.
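
A rough sketch of the five-case window logic described above, applied to a single component of one scan line; the range threshold T, the tie-breaking, and the pass-through default are assumptions for illustration only.

```python
def filter_line(line, T=12):
    """Apply the five-pixel window filter to one scan line of one component.

    For each target pixel C with neighbours A, B (left) and D, E (right):
      - all five close together           -> average of A, B, D, E
      - A, B, D, E close but C an outlier -> impulse noise, average of A, B, D, E
      - one side matches C, the other not -> edge pixel, average of the matching side
      - monotone run of five              -> blurred edge, average of the closer side
      - otherwise                         -> pixel passed through unchanged
    """
    def close(*vals):
        return max(vals) - min(vals) <= T

    out = list(line)
    for i in range(2, len(line) - 2):
        a, b, c, d, e = line[i - 2:i + 3]
        if close(a, b, c, d, e):
            out[i] = (a + b + d + e) // 4
        elif close(a, b, d, e):                      # impulse noise at C
            out[i] = (a + b + d + e) // 4
        elif close(a, b, c) and not close(c, d, e):  # edge, left side matches
            out[i] = (a + b) // 2
        elif close(c, d, e) and not close(a, b, c):  # edge, right side matches
            out[i] = (d + e) // 2
        elif a <= b <= c <= d <= e or a >= b >= c >= d >= e:   # blurred edge
            left, right = (a + b) // 2, (d + e) // 2
            out[i] = left if abs(left - c) <= abs(right - c) else right
    return out

print(filter_line([10, 10, 200, 10, 10, 12, 60, 110, 160, 210, 210]))
```
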
  • FIGS. 11A to 11D illustrate the color image data preprocessing according to the method of the present invention.
  • Background in the image being compressed can be replaced with a scalar value, in order to reduce noise in the image, and to increase the visual quality of the compressed image.
  • the step of replacing background in the image being compressed with a scalar value can also involve setting an initial chromakey value and delta values.
  • Four methods, illustrated in the flow chart of FIG. 11E, are used to set the initial chromakey value and range: calibrated image, automatic, automatic-accumulated, and manual.
  • in the calibrated image method, prior to capturing an image with the subject of interest in place, one or more calibration images of the background, consisting substantially entirely of background pixels, are captured. The average and standard deviation of those entire images are determined, and are used to set at least an initial chromakey value and range.
  • an image is captured with the subject in place.
  • pixels are collected down and towards the center until an edge or image boundary is encountered.
  • the average and standard deviation of those pixels are calculated and used to set the initial chromakey value and range.
  • the selection of pixels is carried out as in the automatic chromakey process, but the background pixel data are collected across several images.
  • the average and standard deviation of those collected pixels are determined and used to set at least the initial chromakey value and range.
  • the initial chromakey value and range are specified without respect to the properties of an individual image being captured prior to image capture.
  • each pixel used for accumulating calibration data is converted to the YCrCb color space.
  • the Y, Cr, and Cb values and their squares are accumulated along with a count of the pixels accumulated.
  • the average pixel value is calculated by dividing the accumulated Y, Cr, and Cb values by the number of pixels accumulated. This average is used as the chromakey value. From the Y, Cr, and Cb values and their squares, the standard deviation of the accumulated pixels can be calculated. Separate coefficients, for Y and C can be specified that are multiplied by the standard deviation to become chromakey delta values specifying the variance from the chromakey values to determine the ranges for each of the Y, Cr, and Cb components.
  • the chromakey values and delta values or variances from the chromakey values used to determine the ranges can be “normalized” by removing the chrominance component of the chromakey value and increasing the chrominance components of the chromakey delta values. In other cases, the chromakey value can be adjusted so that the value plus or minus the delta values for each component does not cross zero.
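
As a sketch of this calibration step (the coefficient values kY and kC are placeholders, not values from the patent):

```python
import math

def calibrate_chromakey(pixels, kY=2.0, kC=2.0):
    """Accumulate YCrCb values and their squares over background pixels and
    derive the chromakey value (the per-component mean) and delta values
    (coefficient x standard deviation) for each component."""
    n = len(pixels)
    sums, sums_sq = [0, 0, 0], [0, 0, 0]
    for y, cr, cb in pixels:
        for k, v in enumerate((y, cr, cb)):
            sums[k] += v
            sums_sq[k] += v * v
    means = [s / n for s in sums]
    stds = [math.sqrt(max(sq / n - m * m, 0.0)) for sq, m in zip(sums_sq, means)]
    key = tuple(means)                               # chromakey value (Y, Cr, Cb)
    deltas = (kY * stds[0], kC * stds[1], kC * stds[2])
    return key, deltas

# Example: a mostly uniform blue-ish background patch
patch = [(60, 110, 180), (62, 112, 178), (59, 111, 181), (61, 109, 179)]
print(calibrate_chromakey(patch))
```
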
  • a mask of colors closely matching the chromakey value is created.
  • Three delta components are preferably used to describe a rectangular region in YCrCb color space.
  • the three delta components can in an alternate preferred embodiment describe a hollow cylindrical segment in HSV color space.
  • the differences between the Y, Cr, and Cb values of the pixels are compared with the Y, Cr, and Cb of the components of the chromakey values. If all three of the differences are within the Y, Cr, and Cb chromakey delta values, then a bit in the mask is set.
  • one delta component describes a spherical region in YCrCb color space.
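
For the rectangular-region case, the per-pixel mask test reduces to a component-wise comparison against the delta values; a minimal sketch follows (the spherical variant would instead compare a single distance against one delta).

```python
def in_chromakey_mask(pixel, key, deltas):
    """True if the pixel lies inside the rectangular YCrCb region defined by
    the chromakey value plus/minus the per-component delta values."""
    return all(abs(p - k) <= d for p, k, d in zip(pixel, key, deltas))

def build_mask(image, key, deltas):
    """image is a 2-D list of (Y, Cr, Cb) pixels; returns a 2-D list of mask bits."""
    return [[1 if in_chromakey_mask(px, key, deltas) else 0 for px in row]
            for row in image]
```
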
  • the method of the invention can further comprise the step of removing artifacts from the initial chromakey mask.
  • the initial chromakey mask typically contains three types of artifacts that must also be removed.
  • the term “chromakey mask” is used herein to mean the array of on/off bits that indicate whether a pixel is to be replaced in the chromakey process.
  • the first type of artifact arises from small areas of pixels in the background that are not included in the chromakey mask set of pixels replacing background pixels, but should be included in the mask set.
  • the second type of artifact arises from small areas of pixels that are included in the chromakey mask set, but that are actually part of the subject.
  • the third type of artifact arises from those pixels creating a halo effect around the subject, where the background and subject tend to blend for a few pixels around the boundary of the subject.
  • Erosion is the process of removing pixels from the mask that have few neighbors included in the mask, such as those pixels having less than a predetermined threshold number of adjacent pixels a given distance away; it is used to correct the second type of artifact.
  • Dilation is the process of adding pixels to the mask that have most of their neighboring pixels included in the mask, such as those pixels having more than a predetermined threshold number of adjacent pixels a given distance away included in the mask; it is used to correct the first type of artifact.
  • the step of removing artifacts comprises the steps of: a) initially determining the background mask set of pixels; b) removing pixels from the mask set that have less than a predetermined threshold of neighboring pixels included in the mask set; c) adding pixels to the mask set that have more than a predetermined threshold of neighboring pixels included in the mask set; and repeating steps b) and c) a plurality of times.
  • the third type of artifact can be corrected by utilizing more dilation passes than erosion passes.
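
A hedged sketch of the erosion and dilation passes, assuming an 8-connected neighbourhood and illustrative thresholds and pass counts (none of which are specified by the text above):

```python
def count_neighbors(mask, x, y):
    """Count 8-connected neighbours of (x, y) that are set in the mask."""
    h, w = len(mask), len(mask[0])
    return sum(mask[j][i]
               for j in range(max(0, y - 1), min(h, y + 2))
               for i in range(max(0, x - 1), min(w, x + 2))
               if (i, j) != (x, y))

def erode(mask, keep_threshold=3):
    """Remove mask pixels with fewer than keep_threshold set neighbours
    (corrects small false inclusions that are actually part of the subject)."""
    return [[bit if not bit or count_neighbors(mask, x, y) >= keep_threshold else 0
             for x, bit in enumerate(row)] for y, row in enumerate(mask)]

def dilate(mask, add_threshold=6):
    """Add pixels with more than add_threshold set neighbours to the mask
    (fills small holes in the background region)."""
    return [[1 if not bit and count_neighbors(mask, x, y) > add_threshold else bit
             for x, bit in enumerate(row)] for y, row in enumerate(mask)]

def clean_mask(mask, passes=3, extra_dilation=1):
    """Alternate erosion and dilation; extra dilation passes shrink the halo."""
    for _ in range(passes):
        mask = dilate(erode(mask))
    for _ in range(extra_dilation):
        mask = dilate(mask)
    return mask
```
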
  • a replacement color is substituted into the original image for each pixel that is “on” in the mask.
  • the application developer has the option of using the chromakey value color as the replacement color or specifying a fixed replacement color. Further, the developer can use the replacement color as the background color for the first level encoding step or specify another fixed value.
  • the currently preferred method typically uses a cube-shaped region bounded by the chromakey value, plus and minus the chromakey delta for determining a range for each component
  • the region may be replaced by a spherically-shaped region determined by a distance parameter from the chromakey value, or alternatively the chromakey calculations may be done in a HSV (Hue, Saturation, Value) color space which would result in a wedge-shaped region.
  • HSV (Hue, Saturation, Value)
  • the first portion of the actual process of compression typically involves dividing the image into an array of 4×4 squares of pixels, and encoding each 4×4 square of pixels into a fixed bit length block containing a central color value, color dispersion value, and a selection map that represent the sixteen pixels in the block. Then 4×4 blocks of representative values from the level-one blocks are encoded into higher level, or level-two, blocks of central color values. Each level-two block describes a lower resolution 16×16 pixel image.
  • the process continues to 64×64, 256×256, and even 1024×1024 pixel blocks.
  • the number of levels is selected so that four to fifteen top level blocks remain in each dimension.
  • the compression system of the invention uses only two levels.
  • the step of statistically encoding the image data comprises dividing the image into an array of 4×4 squares of pixels, and multi-level encoding the central color values of each 4×4 square of lower level blocks.
  • the step of multi-level encoding is repeated until from four to fifteen blocks remain on each axis of a top level of blocks.
  • the top level of blocks is reduced to residuals from a fixed background color.
  • each successive lower level block is reduced to the residuals from the encoded block on the level above.
  • the pixel values are reduced to the residuals from the encoded level one blocks.
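
A simplified sketch of this two-level, residual-based organisation, carrying only a single (luminance) value per block for clarity; the real encoder also carries chrominance, dispersion, and selection-map data.

```python
def mean4x4(values, bx, by, width):
    """Mean of the 4x4 group of entries whose top-left corner is (bx, by)."""
    total = sum(values[(by + j) * width + (bx + i)]
                for j in range(4) for i in range(4))
    return total // 16

def two_level_encode(pixels, width, height, background=128):
    """Level one: mean of each 4x4 pixel block.  Level two: mean of each 4x4
    group of level-one blocks, stored as residuals from a fixed background
    colour; level-one values are stored as residuals from their level-two block."""
    w1, h1 = width // 4, height // 4          # level-one grid
    level1 = [mean4x4(pixels, x * 4, y * 4, width)
              for y in range(h1) for x in range(w1)]
    w2, h2 = w1 // 4, h1 // 4                 # level-two grid (16x16 pixels each)
    level2 = [mean4x4(level1, x * 4, y * 4, w1)
              for y in range(h2) for x in range(w2)]
    level2_res = [v - background for v in level2]
    level1_res = [level1[y * w1 + x] - level2[(y // 4) * w2 + (x // 4)]
                  for y in range(h1) for x in range(w1)]
    return level2_res, level1_res

# A 96x96 image yields 24x24 level-one blocks and 6x6 level-two blocks.
img = [128] * (96 * 96)
l2, l1 = two_level_encode(img, 96, 96)
print(len(l2), len(l1))   # 36 576
```
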
  • each block contains a central color value and a color dispersion value
  • the step of statistically encoding the image data comprises determining a first sample moment of each block as the arithmetic mean of the pixels in the block, and the central color value of each block is set to the arithmetic mean from the first sample moment of the pixels in the block.
  • One presently preferred option of this embodiment involves determining a second sample moment of the pixels in the block, and determining the color dispersion value of each block by determining the standard deviation from the first and second sample moments.
  • a first absolute central moment can be determined, to quantify the dispersion around the central value, and the color dispersion value is set to the first absolute moment.
  • Another presently preferred option of this embodiment involves determining a first absolute moment by determining an average of the difference between the pixel values and the first sample moment, and wherein the color dispersion value is set to the first absolute moment.
  • a selection map of those pixels having color values less than or greater than a discriminator set to the first sample moment is determined, along with a count of the lighter pixels.
  • the step of statistically encoding the image data comprises determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining the selection map from those pixels in the block having values lighter or darker than the average of the lightest and darkest pixels in the block.
  • the sample variance and the standard deviation can thus be determined based upon the first and second sample moments.
  • the mean, standard deviation, and selection map are preserved for each block.
  • σY = sqrt( mean(Y²) − (mean(Y))² )
  • the selection map mi for each block is determined as is illustrated in FIGS. 12A to 12C, where:
  • each 4×4 block of pixels is collected into a 16 element buffer, in which the index ranges from 0 to 15.
  • the first and second moments are determined. Squares are preferably determined by table lookup using an 8-bit table of squares rather than by multiplication.
  • the mean and standard deviation are determined, using a square 12 function to determine the square of a 12-bit number based upon the same 8-bit table of squares above. The root function finds roots by binary search of the same 8-bit table of squares.
  • the selector map is determined from the mean luminance value mY for the selector.
  • the one bits in the map mark those pixels that are “darker” than the mean.
  • the signed differences are accumulated from the mean in each chrominance (Cr/Cb) channel. If the Cr channel decreases when the luminance increases, dCr is inverted. If the Cb channel decreases when the luminance increases, dCb is inverted.
  • values are normalized.
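
The integer arithmetic behind these steps can be sketched as follows; the helper names are illustrative stand-ins for the patent's table-of-squares and binary-search root routines, and the 12-bit squaring trick is replaced here by an ordinary multiply for brevity.

```python
# 8-bit table of squares, used both for squaring pixel values and, by binary
# search, for extracting integer square roots (no multiply/divide needed).
SQUARES = [i * i for i in range(256)]

def root(v):
    """Largest r in 0..255 with r*r <= v, found by binary search of SQUARES."""
    lo, hi = 0, 255
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if SQUARES[mid] <= v:
            lo = mid
        else:
            hi = mid - 1
    return lo

def block_mean_std(block):
    """First/second moments of a 4x4 block of 8-bit values via table lookups."""
    s1 = sum(block)                       # 16 * first moment
    s2 = sum(SQUARES[p] for p in block)   # 16 * second moment
    mean = s1 // 16
    variance = s2 // 16 - mean * mean     # a multiply here, for clarity only
    return mean, root(max(variance, 0))

print(block_mean_std([30] * 8 + [200] * 8))   # mean 115, std deviation 85
```
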
  • the second half of the compression process involves taking the fixed bit (8 and 16 bit) length blocks encoded by the previous multilevel encoding step, and compressing them using minimum redundancy, variable-length bit codes.
  • the basic process for compressing a single encoded block comprises three steps: classification, quantization, and codebook compression.
  • the parameters used for the particular block must be established.
  • the parameters specify tolerances for how blocks are classified, how many bits of which component in the encoded block will be preserved, and how precisely the selection map is preserved. Different quality parameters may be used for different levels. For adaptive compression, each region of the image will use a different parameter set.
  • Adaptive compression is the process of making certain areas of the image that are considered to be more important look better, and is accomplished in two basic parts.
  • the level two (L2) blocks of the image are divided into groups and a portion of the total compressed data is allocated to each group.
  • the compression thresholds are adjusted for each group to ensure that the image is compressed within the allocated space.
  • the image is typically divided into three or four groups.
  • the groups are generally laid out in concentric circles with the highest priority area in the center. Targets for the amount of compressed data used for encoding each of the groups are also determined. Usually the highest priority group has two to three times the bits per pixel of the lowest priority group.
  • The following example, illustrated in FIG. 16A, is of a 96×96-pixel image divided into four groups with an overall maximum of 1600 bytes of compressed data.
  • Group   Level Two Blocks   Target Group Bytes   Bytes per L2 Block   Bits per Pixel
    A       4                  400                  100                  3.13
    B       8                  400                  50                   1.56
    C       12                 400                  33.33                1.04
    D       12                 400                  ≤33.33               ≤1.04
    Total   36                 1600                 44.44                1.39
  • Truncation is bypassed on images having less than four blocks on either side.
  • Blocks associated with the first one-sixth of the distances are included in the first group.
  • the remainder of the first half of the distances are included in the second group.
  • the remaining blocks are included in the third group.
  • all blocks having a distance equal to the target are included in the target group.
  • one-fourth of the maximum compressed bytes are allocated to the first group, one-fourth to the second group, and the remaining one-half to the third group. These allocations are adjusted for the number of blocks that are actually included in the group.
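
A hedged sketch of the circle-layout grouping just described; the adjustment of the byte allocations for the actual number of blocks in each group is omitted.

```python
def circle_groups(cols, rows):
    """Assign level-two blocks to three concentric groups by distance from the
    image centre: the closest one-sixth of the distances form group 1, the rest
    of the closest half form group 2, and the remainder form group 3.  Ties on
    the boundary distance fall into the nearer group."""
    cx, cy = (cols - 1) / 2.0, (rows - 1) / 2.0
    dist = {(x, y): (x - cx) ** 2 + (y - cy) ** 2
            for y in range(rows) for x in range(cols)}
    ordered = sorted(dist.values())
    n = len(ordered)
    d1 = ordered[n // 6 - 1]       # distance closing the first sixth
    d2 = ordered[n // 2 - 1]       # distance closing the first half
    return {block: (1 if d <= d1 else 2 if d <= d2 else 3)
            for block, d in dist.items()}

def byte_targets(max_bytes):
    """Allocate 1/4, 1/4 and 1/2 of the byte budget to groups 1, 2 and 3."""
    return {1: max_bytes // 4, 2: max_bytes // 4, 3: max_bytes // 2}

g = circle_groups(6, 6)            # 6x6 level-two blocks for a 96x96 image
print(sorted(g.values()).count(1), byte_targets(g and 1600))
```
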
  • Blocks associated with the first ninth are included in the first group.
  • Blocks associated with the second and third ninth are assigned to the second group.
  • Blocks associated with the fourth and fifth ninth are assigned to the third group. All remaining blocks are included in the fourth group.
  • all blocks having a distance equal to the target are included in the earlier group.
  • one-fourth of the maximum compressed bytes are allocated to each group. These allocations are also adjusted for the number of blocks assigned to the groups.
  • a sample set of quality level values is shown in the table: Quality Map Map Level TYU TCU TYN TCN Error Group 9 0 0 0 0 0 4 8 0 0 1 1 1 4 7 1 1 1 1 1 4 6 1 1 2 2 1 4 5 2 2 2 2 1 4 4 2 2 3 3 3 3 3 3 3 3 2 3 2 3 3 3 3 3 3 1 3 3 4 4 4 2 0 4 4 4 4 2 2
  • the quality level is automatically decreased after processing each group of blocks, except when at the highest quality level. If at the end of a block group, the number of accumulated bytes of compressed data exceeds the target for that group (either as calculated above or specified by the application), the quality level is decreased a second step. In addition, when processing the last group, each block is checked against the remaining allocation. If a block is beyond the target, only the level two data is preserved. If the maximum bytes allowed is exceeded, the process is repeated starting one quality level lower. The flowchart in FIG. 17 illustrates this process.
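
The adjustment loop just described might be sketched roughly as follows; compress_group and the starting quality level are placeholders, and the per-block check against the remaining allocation in the last group is omitted.

```python
def adaptive_compress(groups, targets, max_bytes, compress_group, start_quality=8):
    """Compress the block groups in priority order, stepping the quality level
    down after each group (skipped at the highest level), stepping down once
    more when a group overruns its cumulative byte target, and restarting the
    whole image one level lower if the overall byte limit is exceeded."""
    for quality in range(start_quality, 0, -1):
        q, total = quality, 0
        for i, group in enumerate(groups):
            total += len(compress_group(group, q))   # compressed bytes for this group
            if q < 9:                                # routine step-down per group
                q -= 1
            if total > sum(targets[:i + 1]):
                q -= 1                               # extra step-down when over target
            q = max(q, 1)
        if total <= max_bytes:
            return quality                           # starting level that fits the budget
    raise ValueError("image does not fit the maximum compressed size")
```
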
  • the basic codebook compression process consists of three steps: First, blocks are classified into four categories—null, uniform, uniform chroma, and pattern. Second, the number of bits for the Y and Cr/Cb components may be reduced, differently for each classification. Third, for uniform chroma and pattern classified blocks the texture map is tested against three groups of simpler, more common “pattern maps.” Where the pattern map is sufficiently similar to the texture map from the encoder, it is used. Otherwise the entire 16-bit texture map is kept, as is described further below.
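
As an illustration of the first step, a hedged classification sketch; the thresholds stand in for the TYN/TCN and TYU/TCU quality parameters and are not the patent's values.

```python
def classify_block(res_y, res_c, std_y, std_cr, std_cb,
                   t_null=1, t_uni_y=2, t_uni_c=2):
    """Classify one encoded block from the residual of its central colour
    against the level above (res_y, res_c) and its component standard
    deviations, into one of the four categories."""
    flat_chroma = std_cr <= t_uni_c and std_cb <= t_uni_c
    if (flat_chroma and std_y <= t_uni_y
            and abs(res_y) <= t_null and abs(res_c) <= t_null):
        return "null"            # little or no change from the higher level / previous frame
    if flat_chroma and std_y <= t_uni_y:
        return "uniform"         # standard deviation below the uniform thresholds
    if flat_chroma:
        return "uniform_chroma"  # significant luminance deviation, flat chrominance
    return "pattern"             # significant deviation in both luminance and chrominance

print(classify_block(0, 0, 0, 0, 0),
      classify_block(3, 1, 14, 1, 1),
      classify_block(3, 2, 14, 9, 6))
```
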
  • Another aspect of the method of the invention currently preferably involves preparing a datastream for storage or transmission in level order.
  • the method of the invention involves preparing a datastream for storage or transmission in block order.
  • Another currently preferred embodiment of the method of the invention further involves the step of adding compressed residuals between input pixel data and level-one decoded blocks to thereby provide loss-less digital compression of the image.
  • blocks can be processed for storage or transmission of the datastream for decoding in either block order or level order.
  • in block order, each top-level block is processed, followed by the lower level blocks within that top-level block.
  • This method allows adaptive decompression or selective processing of top-level blocks.
  • in level order processing, all of the blocks of the top level are processed first, then each intermediate level, followed by the lowest level.
  • the step of decompressing comprises restoring the components of the blocks to the original number of bits based upon block classification.
  • the step of decompressing each block comprises decompressing each block by codebook decompression.
  • the step of preparing a datastream for storage or transmission in block order comprises selecting a block order to first process those portions of the image that are most important to facial identification.
  • the block order provides a circle group layout.
  • the corners of the image are truncated.
  • the block order provides an oval group layout, and in a preferred option, the corners of the image are truncated.
  • the block order can provide a bell group layout, and the corners of the image may also be truncated.
  • the step of preparing a datastream for storage or transmission in block order comprises dividing the blocks into groups.
  • the blocks can, for example, be divided into groups by assigning a portion of the maximum compressed bytes to each group.
  • the step of dividing the blocks into groups can also involve adjusting quality-controlling thresholds upon completion of each group.
  • only level two block information is transmitted on the last block to be processed if the information is near a maximum limit of compressed bytes.
  • the compression of the image can also be repeated starting at a lower quality level if necessary to process the entire image into a maximum limit of compressed bytes.
  • the state data is defined as the minimum information that must be the same on the encoding system and the decoding system so that a compressed data bitstream can be successfully decoded.
  • the state data consists of the following items: (1) base rows, (2) base columns, (3) quantizer for level one and level two, (4) codebook identifiers for each classification, luminance, chrominance, and group 3 maps, and (5) group layout identifier.
  • the state data parameters must be set the same (or have the same default values) on both the encoder and decoder.
  • the state data (or at least the variable items) must be kept with the compressed data. Otherwise, the compressed data would be unusable.
  • a quantizer value defines the values used for the b_YU, b_YP, b_CU, and b_CP bit count parameters, which are discussed further below.
  • a sample table of values for the quantizer and the corresponding bit count parameters is shown here:

      Quantizer   b_YU   b_YP   b_CU   b_CP
          1         4      4      4      4
          2         5      4      4      4
          3         5      5      4      4
          4         5      4      5      4
          5         5      5      4      4
          6         5      5      5      4
          7         6      4      4      4
         ...
         63         8      8      8      8
  • the level 2 blocks are sorted.
  • the same sort order algorithm must be used in the decoder.
  • the encoder and decoder will process the level two blocks in the same order.
  • the compressed data for each level two block will usually be followed by its sixteen level one blocks.
  • the level one blocks will be processed from the upper left corner across the top row, continuing from the left of the second row, and finishing in the lower right corner.
  • Two escape codes are defined. The first signals the end of compressed data. The other is used when level one data is to be skipped.
  • FIG. 19 illustrates the format of the data stream.
  • Four codebooks are used in the basic compression process, one each for block classification, luminance difference, chrominance difference, and group three pattern maps, as is described further below. Different applications will have different distributions of values to be compressed.
  • the system of statistical encoding known as Huffman coding is used for constructing variable bit length codebooks based upon the frequency of occurrence of each symbol. Ideally, a new set of codebooks would be constructed for each image and transmitted to the decoder; however, this is usually not practical.
  • the method of the present invention preferably includes several codebooks optimized for a variety of applications. Typically a single set of codebooks is used for an image, but if necessary, each set of parameters can specify different codebooks.
  • null blocks exhibit little or no change from the higher level or previous frame. Run lengths of one to eight null blocks are collected, and no other information is preserved. Uniform blocks have a relatively low standard deviation, being less than a predetermined threshold, and are therefore relatively uniform in their change in color from the higher level or previous frame. The mean values for all three components are preserved.
  • Uniform chroma blocks have a significant luminance component to the standard deviation, but little chrominance deviation.
  • the mean luminance and chrominance, standard deviation luminance, and a suitable selection map are preserved.
  • Pattern blocks have significant data in both luminance and chrominance standard deviations. All components of the block are preserved.
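  • A sketch of how these four categories could be assigned is given below in C. The threshold parameters and the treatment of the mean deltas are assumptions (the actual thresholds are the quality-level values discussed above); the sketch illustrates the decision order, not the patent's exact tests.

      typedef enum {
          BLOCK_NULL, BLOCK_UNIFORM, BLOCK_UNIFORM_CHROMA, BLOCK_PATTERN
      } block_class;

      /* dev_y, dev_cr, dev_cb: deviation of each component within the block;
         dmean_*: absolute change of the block mean from the higher level or
         previous frame; t_null, t_y, t_c: assumed threshold parameters. */
      block_class classify_block(int dmean_y, int dmean_cr, int dmean_cb,
                                 int dev_y, int dev_cr, int dev_cb,
                                 int t_null, int t_y, int t_c)
      {
          /* Null: little or no change from the higher level or previous frame. */
          if (dmean_y <= t_null && dmean_cr <= t_null && dmean_cb <= t_null &&
              dev_y <= t_y && dev_cr <= t_c && dev_cb <= t_c)
              return BLOCK_NULL;

          /* Uniform: low deviation in all components. */
          if (dev_y <= t_y && dev_cr <= t_c && dev_cb <= t_c)
              return BLOCK_UNIFORM;

          /* Uniform chroma: significant luminance deviation, little chrominance. */
          if (dev_cr <= t_c && dev_cb <= t_c)
              return BLOCK_UNIFORM_CHROMA;

          /* Pattern: significant deviation in both luminance and chrominance. */
          return BLOCK_PATTERN;
      }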
  • An additional classification, called an escape code, is also used to navigate the compressed bitstream.
  • the number of bits to be preserved for each component of the block is set as follows:

      Classification    Mean Y   Mean Cr   Mean Cb   Dev Y   Dev Cr   Dev Cb   Map
      Null                0        0         0         0       0        0       No
      Uniform            b_YU     b_CU      b_CU       0       0        0       No
      Uniform Chroma     b_YU     b_CU      b_CU      b_YP     0        0       Yes
      Pattern            b_YU     b_CU      b_CU      b_YP    b_CP     b_CP     Yes
  • the selection map is preserved along with the color data.
  • the run lengths of null blocks are recorded without preserving components of the null blocks.
  • Three groups of common selection maps are identified by the compression method of the invention. The first two groups are fixed while the application developer can select from several codebooks for the third group. If a suitable match cannot be found in the three groups, the entire texture map is preserved.
  • because each selection map implies its complement (listed as the implied maps in the table below), each map actually represents two.
  • Group   Members                            Implied Maps                       Encoding
        1   00FF H, 3333 H                     FF00 H, CCCC H                     3 bits
        2   0FFF H, 7777 H, 1111 H, 000F H     F000 H, 8888 H, EEEE H, FFF0 H     4 bits
        3   By Codebook                                                           typically 5 to 9 bits
        4   Actual Texture Map                                                    17 bits
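  • The group selection can be reduced to a handful of comparisons, as in the C sketch below. The group 1 and group 2 members are taken from the table above; codebook_contains is an assumed helper standing in for the group 3 codebook search, and exact equality is used here in place of the "sufficiently similar" test, whose metric is not specified in this passage.

      #include <stdint.h>

      extern int codebook_contains(uint16_t map);   /* assumed group-3 search */

      /* Returns the group (1 to 4) used to encode a 16-bit texture map; a map
         and its complement select the same entry. */
      int texture_map_group(uint16_t map)
      {
          static const uint16_t group1[2] = { 0x00FF, 0x3333 };
          static const uint16_t group2[4] = { 0x0FFF, 0x7777, 0x1111, 0x000F };
          uint16_t comp = (uint16_t)~map;            /* implied (complement) map */

          for (int i = 0; i < 2; i++)
              if (map == group1[i] || comp == group1[i])
                  return 1;                          /* encoded in 3 bits */
          for (int i = 0; i < 4; i++)
              if (map == group2[i] || comp == group2[i])
                  return 2;                          /* encoded in 4 bits */
          if (codebook_contains(map) || codebook_contains(comp))
              return 3;                              /* typically 5 to 9 bits */
          return 4;                                  /* entire texture map preserved */
      }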
  • Because the decoded colors from a block depend upon the number of one bits in the map, if a substituted map has a different number of one bits, the standard deviation components of the block are adjusted. For each individual block, the bitstream is written as is illustrated in FIG. 17.
  • the classification codebook contains twelve entries: eight run lengths of null blocks, one each for uniform, uniform chroma, and pattern blocks, plus an entry that precedes escape codes. Escape codes are dependent upon the implementation and can be used to signal the end of an image, end of a block run, skipping to a different block, and the like.
  • the luminance and chrominance codebooks contain the most often observed delta values—the luminance typically covering +25 to −25 and the chrominance from −6 to +6. For values that need to be coded but are not found in the selected codebook, an “other” entry at +128 is used, followed by the value itself, using the number of bits to which the value was quantized, as illustrated in FIG. 17.
  • Typical codebooks are shown in the following tables.
  • Sample Chrominance Codebook

      Value   Bits   Pattern
       −6      9     000000000
       −5      8     00000001
       −4      7     0000011
       −3      6     000010
       −2      4     0001
       −1      2     01
        0      1     1
        1      3     001
        2      6     000011
        3      7     0000001
        4      8     00000101
        5      8     00000100
        6     10     0000000011
  • Each of the four code books used in the compression process must be translated into a code book lookup for the decompression process.
  • the following classification code book is translated to a code book lookup:

      Block Classification Code Book
      Index   Code       Bits   Bit Pattern
        0     Escape      9     011100000
        1     Uniform     1     1
        2     UniChr      2     00
        3     Pattern     9     011100001
        4     Null        3     010
        5     NullRLE2    4     0110
        6     NullRLE3    6     011101
        7     NullRLE4    6     011111
        8     NullRLE5    7     0111100
        9     NullRLE6    7     0111001
       10     NullRLE7    8     01110001
       11     NullRLE8    7     0111101

      Block Classification Code Book Lookup
      Index   Link   Value   Code
        0      2
        1             1      Uniform
        2     22
        3     21
        4     20
        5     11
        6      8
        7             7      NullRLE4
        8     10
        9            11      NullRLE8
       10             8      NullRLE5
       11     13
       12             6      NullRLE3
       13     15
       14             9      NullRLE6
       15     17
       16            10      NullRLE7
  • For example, to decode the bit pattern 0111001, the first bit, a zero, is retrieved, so that the link of 2 at index zero is followed.
  • the next bit is retrieved, which is a one, so the index is incremented to three.
  • the third bit, a one, is retrieved, so that the index is incremented to four.
  • the fourth bit is retrieved, another one, so that the index is incremented to five.
  • the fifth bit, a zero, is retrieved, so that the index five is used to find a link of 11.
  • the sixth bit, a zero, is retrieved, so that the index 11 is used to find a link of 13.
  • the seventh bit, a one, is retrieved, so that the index is incremented to 14. At index 14 a value of 9 is found, which corresponds to NullRLE6.
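  • The walk-through implies a simple table-driven decoder: a zero bit follows the link stored at the current index, a one bit advances the index by one, and decoding stops when the entry reached holds a value rather than a link. A C sketch under those assumptions follows; the entry layout and get_bit are illustrative, not the patent's actual structures.

      #include <stdint.h>

      /* One lookup entry: either a link to another index (is_leaf == 0) or a
         decoded value such as a classification code (is_leaf == 1). */
      typedef struct {
          uint8_t is_leaf;
          uint8_t link_or_value;
      } lookup_entry;

      extern int get_bit(void);   /* assumed bitstream reader: returns 0 or 1 */

      /* Decodes one symbol: bit 0 follows the link at the current index,
         bit 1 increments the index, and a leaf entry ends the symbol. */
      int decode_symbol(const lookup_entry table[])
      {
          int index = 0;
          while (!table[index].is_leaf) {
              if (get_bit())
                  index += 1;
              else
                  index = table[index].link_or_value;
          }
          return table[index].link_or_value;
      }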
  • the step of decompressing preferably comprises determining a first color “a” and a second color “b” for each block of pixels, based upon the absolute central moment and selection map for each block of pixels, where “x” is the sample mean (or central color value, or arithmetic mean), “q” is the number of one bits in the selection map (those pixels darker than the sample mean), and “m” is the total number of bits in the selection map, according to the following formulas:
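  • The formulas themselves do not survive in the text above. Assuming the standard absolute-moment BTC relations, with x the sample mean, alpha the absolute central moment, m the map size, and q the count of one (darker) bits, a plausible reconstruction is sketched below in C; treat it as an assumption rather than the patent's exact arithmetic.

      /* Reconstructs the two block colors from the mean, the absolute central
         moment, and the selection map statistics, assuming:
             a = x - (m * alpha) / (2 * q)          (darker color, q one bits)
             b = x + (m * alpha) / (2 * (m - q))    (lighter color, m - q zero bits) */
      void decode_block_colors(double x, double alpha, int q, int m,
                               double *a, double *b)
      {
          *a = (q > 0)     ? x - (m * alpha) / (2.0 * q)       : x;
          *b = (m - q > 0) ? x + (m * alpha) / (2.0 * (m - q)) : x;
      }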
  • Some patterns for the selection map tend to describe gradient areas of the original image. It is also possible to utilize a map of coefficients to smooth the boundaries between the decoded “a” and “b” colors. For example, where:
  • Unadjusted Values        Coefficients                  Adjusted Values
      92  92  92 113        120% 100%  80%  80%            91  92  94 110
      92  92 113 113        100%  80%  80% 100%            92  94 110 113
      92 113 113 113         80%  80% 100% 120%            94 110 113 115
     113 113 113 113         80% 100% 120% 140%           110 113 115 118
  • the background color is retrieved from the beginning of the data stream.
  • the level two data is retrieved from the data stream and decoded and added to the background color.
  • no level one data is stored.
  • the resulting value for the level two data and background data is replicated for each of the sixteen pixels in that level one block.
  • sixteen level one blocks will be retrieved from the data stream, decoded, and then added to the pixel value from the level two block.
  • The post-processing filters comprise a depth filter (light, medium, or heavy), an edge filter (light, medium, or heavy), and a spatial filter (five-pixel light, medium, or adaptive), or a combination of any one from each of the categories.
  • the depth filters remove the “cartoon” look from areas of the image where a single uniform color has been decoded. Small variations in the luminance component are injected into the decoded image. Variations are selected randomly from the following sixteen values based upon the level of filtering specified:
  • for light filtering, the values typically are: −3 −2 −2 −1 −1 −1 0 0 0 0 1 1 1 2 2 3
  • for medium filtering, the values typically are: −6 −4 −3 −2 −2 −1 −1 −1 1 1 1 2 2 3 4 6
  • for heavy filtering, the values typically are: −8 −6 −5 −4 −3 −2 −1 −1 1 1 2 3 4 5 6 8
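  • A sketch of the depth filter in C is shown below. The light-filter table is taken from the values above; rand() stands in for whatever pseudo-random source an implementation would use, and clamping to the full 8-bit range is an assumption.

      #include <stdint.h>
      #include <stdlib.h>

      /* Sixteen luminance variations for the light depth filter. */
      static const int8_t light_depth[16] =
          { -3, -2, -2, -1, -1, -1, 0, 0, 0, 0, 1, 1, 1, 2, 2, 3 };

      /* Injects a small random luminance variation into one decoded pixel so
         that large uniformly colored areas lose the flat "cartoon" look. */
      uint8_t apply_depth_filter(uint8_t y)
      {
          int v = y + light_depth[rand() & 15];   /* pick one of the 16 values */
          if (v < 0)   v = 0;
          if (v > 255) v = 255;
          return (uint8_t)v;
      }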
  • the edge filters serve to mask the artifacts occurring at the edges of the four-by-four-pixel level one blocks. The pixels away from the edges are not affected.
  • the tables in FIGS. 20A to 20D illustrate the three levels of filtering where a horizontal block boundary occurs between the F-J and K-O rows and a vertical boundary between the B-V and C-W columns.
  • the spatial filters operate as conventional convolution filters.
  • a special set of convolution masks has been selected to make the filters easier to implement with less computing power.
  • in this special set, only five pixels are used (the central target pixel and one each above, below, left, and right), and the divisors are multiples of two, typically with the sum of the values of the pixels of the spatial filter being equal to the divisor.
  • the central target pixel is matched with the target pixel of the decompressed, decoded image, and the filtered value of the target pixel is determined as the sum of the products of the five pixels of the spatial filter and the corresponding pixels of the decompressed image with relation to the target pixel of the image, divided by the divisor.
  • the adaptive filter is a combination of the light and medium versions, in which the medium filter is used where the difference between the surrounding pixels is significant.
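  • A sketch of the five-pixel spatial filter in C follows. The weights (4, 1, 1, 1, 1) and divisor (8) are illustrative values chosen so that the weights sum to a divisor that is a multiple of two, as described above; the patent's actual light and medium masks are given in the figures and are not reproduced here.

      #include <stdint.h>

      /* Applies a five-pixel convolution (the target pixel plus the pixels
         above, below, left, and right) at (x, y).  The weights sum to the
         divisor, so overall brightness is preserved and the division can be
         done with a shift on a small microcontroller. */
      uint8_t spatial_filter(const uint8_t *img, int width, int height, int x, int y)
      {
          const int w_center = 4, w_side = 1, divisor = 8;

          if (x <= 0 || y <= 0 || x >= width - 1 || y >= height - 1)
              return img[y * width + x];        /* image border left untouched */

          int sum = w_center * img[y * width + x]
                  + w_side   * img[(y - 1) * width + x]
                  + w_side   * img[(y + 1) * width + x]
                  + w_side   * img[y * width + (x - 1)]
                  + w_side   * img[y * width + (x + 1)];
          return (uint8_t)(sum / divisor);
      }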
  • the flowchart of FIG. 21 shows one implementation of these filters.
  • Another presently preferred aspect of the method of the invention further involves the step of converting the YCrCb color space image data back to the original color image color space.
  • this can comprise converting the YCrCb color space image data back to the RGB color space, and this preferably involves utilizing lookup tables of selected color values for color space conversion. In a preferred aspect, five 256-entry lookup tables are utilized.
  • the present invention accordingly provides for a contactless IC smart card 30 for storing a digitally compressed image which contains image data consisting of a plurality of scan lines of pixels with scalar values.
  • the image data is filtered and encoded according to the invention as discussed hereinabove.
  • the smart card preferably includes an antenna 32 for receiving and transmitting data from a read/write device (not shown), a receiving circuit 34 for demodulating signals received by the antenna, a transmitting circuit 36 for modulating signals to be transmitted and driving the antenna, and an I/O control circuit 38 for serial/parallel conversion of the transmitting signals and reception signals.
  • the smart card also includes a CPU 40 for performing read/write operations on data, including the receiving and transmission of data, as well as data processing, a ROM 42 for storing a control program or the like to operate the CPU, a RAM 44 for storing the digitally compressed image data and results of processing, and a bus 46 for interconnecting the CPU, ROM, RAM and I/O control circuit.
  • An oscillator 48 connected to the CPU and smart card circuitry is also provided for generating an internal clock signal, and a power source 50 , such as a battery, provides power to the CPU and smart card circuitry.
  • a trigger signal line 52 may also be connected between the receiving circuit and the CPU for switching the smart card from a sleep state to an operating state by directly supplying a received trigger signal to the CPU from the receiving circuit.
  • the smart card of the invention is also applicable to grayscale images, and other monochromatic images and chromatic image systems with pixels having scalar values. It will be apparent from the foregoing that while particular forms of the invention have been illustrated and described, various modifications can be made without departing from the spirit and scope of the invention. Accordingly, it is not intended that the invention be limited, except as by the appended claims.

Abstract

A smart card having a memory is used for storing color identification image data that is digitally compressed from a photograph, using a relatively small amount of memory for the storage of the image.

Description

    RELATED APPLICATIONS
  • This is a continuation-in-part of Ser. No. 09/063,255 filed Apr. 20, 1998.[0001]
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0002]
  • The invention relates generally to information signal processing, and more particularly relates to a smart card containing a programmable microchip having a memory for storing digitally compressed color images, such as for storing a color identification photograph. [0003]
  • 2. Description of Related Art [0004]
  • One technique that has been used in digital encoding of image information is known as run length encoding. In this technique, the scan lines of a video image are encoded as a value or set of values of the color content of a series of pixels along with the length of the sequence of pixels having that value, set of values, or range of values. The values may be a measure of the amplitude of the video image signal, or other properties, such as luminance or chrominance. Statistical encoding of frequent color values can also be used to reduce the number of bits required to digitally encode the color image data. [0005]
  • One basic encoding process is based upon the block truncation coding (BTC) algorithm published by Mitchell and Delp of Purdue University in 1979. The basic BTC algorithm breaks an image into 4×4 blocks of pixels and calculates the first and second sample moments. Based upon an initial discriminator value, set to the first sample moment (the arithmetic mean), a selection map of those pixels lighter or darker than the discriminator value is determined, along with a count of the lighter pixels. From the first and second sample moments, the sample variance, and therefore, the standard deviation, can be calculated. The mean, standard deviation, and selection map are preserved for each block. However, the original BTC method is limited to a grayscale image, so that it would be desirable to extend the BTC method to include YCrCb full-color. It would also be desirable to adapt the BTC method to handle delta values, allowing multi-level, or hierarchical, encoding and allowing encoding of differences between frames or from a specified background color. [0006]
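  • For reference, the basic BTC statistics for one 4×4 grayscale block can be computed as in the following C sketch; the function and variable names are illustrative, and the convention that a pixel equal to the mean counts as “lighter” is an assumption.

      #include <math.h>
      #include <stdint.h>

      /* Computes the BTC parameters of a 4x4 block: the first sample moment
         (the arithmetic mean), the standard deviation derived from the first
         and second sample moments, a 16-bit selection map of pixels not darker
         than the discriminator, and the count of lighter pixels. */
      void btc_block(const uint8_t px[16], double *mean, double *std_dev,
                     uint16_t *selection_map, int *lighter_count)
      {
          double m1 = 0.0, m2 = 0.0;
          for (int i = 0; i < 16; i++) {
              m1 += px[i];
              m2 += (double)px[i] * px[i];
          }
          m1 /= 16.0;                       /* first sample moment */
          m2 /= 16.0;                       /* second sample moment */

          *mean = m1;
          *std_dev = sqrt(m2 - m1 * m1);    /* variance = m2 - m1^2 */

          uint16_t map = 0;
          int q = 0;
          for (int i = 0; i < 16; i++) {
              if ((double)px[i] >= m1) {    /* lighter than the discriminator */
                  map |= (uint16_t)(1u << i);
                  q++;
              }
          }
          *selection_map = map;
          *lighter_count = q;
      }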
  • The range of color values for any given pixel in a color image can be described, for example, as RGB color space, illustrated in FIG. 1, that can be represented by a conventional three-dimensional Cartesian coordinate system, with each axis representing 0 to 100% of the red, green and blue values for the color value of the pixel. A grayscale line can be described as running diagonally from black at 0% of each component to white at 100% of each. Since human vision can only discriminate a limited number of shades of color values, by selecting representative color values in such a color space, a limited number of color values can be used to approximate the actual color values of an image such that the human eye can not differentiate between the actual color values and the selected color values. [0007]
  • As is illustrated in FIGS. 2 and 3, human vision can be characterized by the Hue-Value-Saturation (HVS) color system. Hue is defined as the particular color in the visible spectrum ranging from red through green and blue to violet. Value is defined as the brightness level, ignoring color. Saturation is defined as the intensity of the particular color or the absence of other shades in the mixture. The HVS system can be represented by a generally cylindrical coordinate system with a polar base consisting of hue as the angle, and saturation as the radius. The value or brightness component is represented as the altitude above the base. The actual visible colors do not occupy the entire cylinder, but are approximately two cones, base to base with their vertices at 0% up to 100% on the value scale. The base is tilted in this example because the maximum saturation for blue occurs at a much lower brightness than the maximum saturation of green. [0008]
  • Referring to FIG. 2, in order to represent digitized NTSC/PAL video in a Cartesian coordinate system, the YCrCb color space is used. Because the smart card system of the invention operates in the YCrCb color space, the smart card system provides for a novel color space conversion from 15- or 24-bit RGB. Eight-bit grayscale images are also supported. Referring also to FIG. 3, the chrominance components, Cr and Cb, are two axes that correspond to the polar hue and saturation components in the HVS system. The Y, or luminance, component corresponds to the brightness axis in the HVS graph. This description does not account for the slight differences between YIQ and YUV for NTSC- and PAL-based encoding, which does not form a part of the invention. The following equations can be used to convert from RGB to the YCrCb color space:[0009]
  • Y=0.299R+0.587G+0.114B
  • Cr=0.713(0.701R−0.587G−0.114B)
  • Cb=0.564(−0.299R−0.587G+0.886B)
  • Typical implementations of digitally representing color values in this fashion use floating point arithmetic (11 multiplications and 9 additions/subtractions per pixel) or 16-bit integer arithmetic (9 multiplications, 9 additions/subtractions and 3 divisions per pixel). Both of these methods are quite wasteful of computing power, particularly on smaller microcontrollers. There is thus a need for a system for representing color values of digitized images that takes advantage of the limitations of human vision in discriminating color in color images in order to reduce the software and hardware requirements, particularly for storage of such color images in smart cards and databases. Smart cards are commonly approximately the same shape and size as a common credit card, and typically contain a programmable microchip, having a memory such as a read only memory, or a read/write memory. Information stored in the memory of the card can be detected by a card interface device such as a card reader or connector. [0010]
  • Unfortunately, noise can seriously interfere with the efficiency of any image compression process, lossy or lossless, because a compression engine must spend additional data encoding noise as if it were actual subject material. Because lossy compression tends to amplify noise, creating more noticeable artifacts upon decompression, lossy compression processes typically attempt to remove some of the noise prior to compressing the data. Such preprocessing filters must be used very carefully, because too little filtering will not have the desired result of improved compression performance, and too much filtering will make the decompressed image cartoon-like. [0011]
  • Another technique used for removing unwanted noise from color image data is chromakeying, which is a process of replacing a uniform color background (usually blue) from behind a subject. A common application of this process is a television weather reporter who appears to be standing in front of a map. In actuality, the reporter is standing in front of a blue wall while a computer generated map is replacing all of the blue pixels in the image being broadcast. [0012]
  • While preprocessing filters can remove noise from the area surrounding a subject of interest in an image, subtle changes in lighting or shading can remain in the original image which can be eliminated by chromakeying. There is thus a need to provide a chromakey system in compression of color images for storage on smart cards and databases, in order to replace the background with a solid color to increase the visual quality of the compressed image. It would also be desirable to automate the chromakey process and to simplify it for the operator. The present invention meets these and other needs. [0013]
  • SUMMARY OF THE INVENTION
  • The present invention provides for an improved contact-less smart card for digitally storing color images, such as color identification photographs, compressed into 512 to 2,048 bytes. The smart card of the invention accepts rectangular images in 16 pixel increments ranging from 48 to 256 pixels on a side. The typical size is 96×96 pixels. The smart card of the invention is designed for use in very low computational power implementations such as with 8-bit microcontrollers, possibly with an ASIC accelerator. [0014]
  • Briefly, and in general terms, the present invention accordingly provides for a smart card containing a programmable microchip having a memory for storing a digitally compressed image containing image data consisting of a plurality of scan lines of pixels with scalar values, such as for an identification photograph, for storage of the image. In a presently preferred aspect, the image data is filtered by evaluating the scalar values of individual pixels in the image with respect to neighboring pixels, and statistically encoded by encoding the image data, by dividing the image into an array of blocks of pixels, and encoding each block of pixels into a fixed number of bits that represent the pixels in the block. [0015]
  • In a presently preferred aspect, the image data is filtered by evaluating each individual pixel as a target pixel and a plurality of pixels in close proximity to the target pixel to determine an output value for the target pixel. Presently, each individual pixel preferably is evaluated by evaluating a sequence of five pixels, including two pixels on either side of the target pixel and the target pixel itself, for each target pixel. In one presently preferred approach, an average of the data for a window of the pixels immediately surrounding the target pixel is determined for those pixels surrounding the target pixel that are within a specified range of values, according to the following protocol: if all five pixels are within the specified range, the output target pixel is determined to be the average of the four pixels in a raster line, two on each side of the target pixel; if the two pixels on either side are within a specified range and both sides themselves are within the range, the filtered output target pixel data is determined to be the average of the two pixels on each side of the target pixel; if the two pixels on either side of the target pixel and the target pixel itself are within a specified range, and the other two pixels on the other side are not within the specified range, the output target pixel is determined to be the average of the two neighboring pixels closest in value to the target pixel values and that fall within the specified range; if the five pixels are all increasing or decreasing, or are within a specified range, the output target pixel is determined to be the average of two pixels on whichever side of the target pixel is closest in value to the target pixel; and if the five pixels in the window do not fit into any of the prior cases, the output target pixel is unchanged. [0016]
  • In another presently preferred aspect, the image data is statistically encoded by dividing the image into an array of 4×4 squares of pixels, and encoding each 4×4 square of pixels into a fixed bit length block containing a central color value, color dispersion value, and a selection map that represent the sixteen pixels in the block. [0017]
  • One currently preferred embodiment provides that each block contains a central color value and a color dispersion value, and the image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, and the central color value of each block is set to the arithmetic mean from the first sample moment of the pixels in the block. Preferably a second sample moment of the pixels in the block is determined, and the color dispersion value of each block is determined by determining the standard deviation from the first and second sample moments. In another presently preferred option of this embodiment, a first absolute moment is determined by determining an average of the difference between the pixel values and the first sample moment, wherein the color dispersion value is set to the first absolute moment. [0018]
  • In another presently preferred aspect, the digital image data is converted to the YCrCb color space. In one currently preferred approach, the digital color image data is converted to the YCrCb color space by converting the image data from the RGB color space. Preferably, lookup tables of selected color values are utilized for color space conversion, and in one preferred approach, the digital image data is converted to the YCrCb color space by utilizing nine 256-entry one-byte lookup tables containing the contribution that each R, G and B make towards the Y, Cr and Cb components. [0019]
  • The statistical encoding of the image data, in another presently preferred aspect, is accomplished by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining the selection map from those pixels in the block having values lighter or darker than the first sample moment. [0020]
  • In another presently preferred aspect, the image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining the selection map from those pixels in the block having values lighter or darker than the average of the lightest and darkest pixels in the block. [0021]
  • The statistical encoding of the image data, in another presently preferred embodiment, is accomplished by determining a classification of each block, quantifying each block, and compressing each block by codebook compression using minimum redundancy, variable-length bit codes. This classification of each block involves classifying each block according to a plurality of categories, and in a presently preferred embodiment, classification of each block comprises classifying each of the blocks in one of four categories: null blocks exhibiting little or no change from the higher level or previous frame, uniform blocks having a standard deviation less than a predetermined threshold, uniform chroma blocks having a significant luminance component to the standard deviation, but little chrominance deviation, and pattern blocks having significant data in both luminance and chrominance standard deviations. The statistical encoding of the color image data can also further involve determining the number of bits to be preserved for each component of the block after each block is classified. In one presently preferred variant, this can also further involve selecting a quantizer defining the number of bits to be preserved for each component of the block according to the classification of the block to preserve a desired number of bits for the block. In another presently preferred variant, the number of bits for the Y and Cr/Cb components of the blocks to be preserved are determined independently for each classification. In a currently preferred option, all components of each block for pattern blocks are preserved. In another currently preferred option, the mean luminance and chrominance, standard deviation luminance, and a selection map for uniform chroma blocks are preserved. In another currently preferred option, all three color components of the central color value for uniform blocks are preserved. In another currently preferred option, the run lengths of null blocks are recorded without preserving components of the null blocks. In another preferred embodiment, a classification of each block is determined by matching the texture map of the block with one of a plurality of common pattern maps for uniform chroma and pattern classified blocks, and compressing each block by codebook compression can comprise selecting codes from multiple codebooks. [0022]
  • The digital image compression also involves a hierarchical, or multilevel, component. Each image is first divided into an array of 4×4 level-one blocks. Then 4×4 blocks of representative values from the level-one blocks are encoded into higher level, or level-two, blocks of central color values. Each level two block describes a lower resolution 16×16 pixel image. The process continues to 64×64, 256×256, and even 1024×1024 pixel blocks. The number of levels is selected so that four to fifteen top level blocks in each dimension remain. The compression system of the invention uses only two levels. In a presently preferred aspect, the image data is statistically encoded by dividing the image into an array of 4×4 squares of pixels, and multi-level encoding the central color values of each 4×4 square of lower level blocks. In one currently preferred option, multi-level encoding is repeated until from four to fifteen blocks remain on each axis of a top level of blocks. In a preferred variation of this option, the top level of blocks is reduced to residuals from a fixed background color. In an alternate preferred option, each successive lower level block is reduced to the residuals from the encoded block on the level above. In another alternate preferred option, the pixel values are reduced to the residuals from the encoded level one blocks. [0023]
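  • A sketch of the two-level arrangement for one component is shown below in C. A 96×96 image is assumed for concreteness, the representative value of each block is taken to be its mean, and only the level-two reduction to residuals from a fixed background color is shown; the level-one residual step is analogous.

      #include <stdint.h>

      #define W 96   /* assumed image width  (a multiple of 16) */
      #define H 96   /* assumed image height (a multiple of 16) */

      /* Each level-one entry is the mean of a 4x4 square of pixels; each
         level-two entry is the mean of a 4x4 square of level-one entries
         (a 16x16 pixel area), reduced to a residual from the background. */
      void build_levels(uint8_t img[H][W],
                        int16_t level1[H / 4][W / 4],
                        int16_t level2[H / 16][W / 16],
                        int background)
      {
          for (int by = 0; by < H / 4; by++)
              for (int bx = 0; bx < W / 4; bx++) {
                  int sum = 0;
                  for (int y = 0; y < 4; y++)
                      for (int x = 0; x < 4; x++)
                          sum += img[by * 4 + y][bx * 4 + x];
                  level1[by][bx] = (int16_t)(sum / 16);
              }

          for (int by = 0; by < H / 16; by++)
              for (int bx = 0; bx < W / 16; bx++) {
                  int sum = 0;
                  for (int y = 0; y < 4; y++)
                      for (int x = 0; x < 4; x++)
                          sum += level1[by * 4 + y][bx * 4 + x];
                  level2[by][bx] = (int16_t)(sum / 16 - background);
              }
      }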
  • In another preferred aspect, a datastream is prepared for storage in level order. In an alternate preferred embodiment, the datastream is prepared for storage in block order. Another currently preferred embodiment of the method of the invention further involves adding compressed residuals between input pixel data and level-one decoded blocks to thereby provide loss-less digital compression of the image. [0024]
  • The present invention also provides for a smart card for storage of a digitally compressed color image such as a color identification photograph, the color image containing color image data consisting of a plurality of scan lines of pixels with color values, wherein the color image data is filtered by evaluating the color values of individual pixels in the color image with respect to neighboring pixels, and statistically encoding the color image data by dividing the color image into an array of blocks of pixels, and encoding each block of pixels into a fixed number of bits that represent the pixels in the block. In a presently preferred embodiment, the color image data is filtered by evaluating each individual pixel as a target pixel and a plurality of pixels in close proximity to the target pixel to determine an output value for the target pixel. In a presently preferred aspect, the color image data is filtered by evaluating a sequence of five pixels, including two pixels on either side of the target pixel and the target pixel itself, for each target pixel. [0025]
  • In another presently preferred aspect, the color image data is filtered by determining an average of the data for a window of the pixels immediately surrounding the target pixel for those pixels surrounding the target pixel that are within a specified range of values, according to the following protocol: if all five pixels are within the specified range, the output target pixel is determined to be the average of the four pixels in a raster line, two on each side of the target pixel; if the two pixels on either side are within a specified range and both sides themselves are within the range, the filtered output target pixel data is determined to be the average of the two pixels on each side of the target pixel; if the two pixels on either side of the target pixel and the target pixel itself are within a specified range, and the other two pixels on the other side are not within the specified range, the output target pixel is determined to be the average of the two neighboring pixels closest in value to the target pixel values and that fall within the specified range; if the five pixels are all increasing or decreasing, or are within a specified range, the output target pixel is determined to be the average of two pixels on whichever side of the target pixel is closest in value to the target pixel; and if the five pixels in the window do not fit into any of the prior cases, the output target pixel is unchanged. [0026]
  • A currently preferred aspect of the smart card for a digitally compressed color image further involves the replacement of background in the image being compressed with a scalar value, in order to reduce noise in the image, and to increase the visual quality of the compressed image. In a currently preferred embodiment, replacing the background in the image being compressed with a scalar value comprises setting an initial chromakey value and delta values. In a preferred aspect, the initial chromakey value and background scalar value are set by capturing one or more calibration images of the background, consisting substantially of background pixels, prior to capturing an image with the subject of interest in place, and determining the average and standard deviation of the one or more calibration images to set at least an initial chromakey scalar value and range. In another currently preferred aspect, the initial chromakey value and background scalar value are set by capturing an image with a subject of interest in place, and beginning in the upper-left and upper-right corners of the one or more calibration images, collecting pixel data down and towards the center of the image until an edge or image boundary is encountered, and determining the average and standard deviation of those pixels to set at least an initial chromakey value and range. In one preferred aspect, the pixel data are collected from a plurality of images. In another currently preferred aspect, the initial chromakey value and background scalar value are set by manually specifying an initial chromakey value and range without respect to the properties of an individual image being captured prior to image capture. Preferably replacement of the background in the image being compressed involves determining an initial chromakey mask of pixels in the input image that are near the chromakey value. In a currently preferred aspect, three delta components are used to describe a rectangular region in YCrCb color space. In another preferred aspect, one delta component describes a spherical region in YCrCb color space. The three delta components can in an alternate preferred embodiment describe a hollow cylindrical segment in HSV color space. [0027]
  • A further preferred aspect of the smart card for a digitally compressed color image further involves the removal of artifacts from the initial chromakey mask. In one presently preferred embodiment, the artifacts are removed by a) initially determining the background mask set of pixels; b) removing pixels from the mask set that have less than a predetermined threshold of neighboring pixels included in the mask set; c) adding pixels to the mask set that have more than a predetermined threshold of neighboring pixels included in the mask set; and repeating steps b) and c) a plurality of times. [0028]
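  • One pass of this neighbor-count cleanup might look like the C sketch below. The mask dimensions and the remove_thresh/add_thresh parameters are assumptions (the patent states that thresholds exist but not their values), and the remove and add steps are combined into a single pass here for brevity.

      #include <string.h>

      #define MW 96   /* assumed mask width  */
      #define MH 96   /* assumed mask height */

      /* One cleanup pass: mask pixels with fewer than remove_thresh of their
         eight neighbors in the mask are removed, and non-mask pixels with more
         than add_thresh neighbors in the mask are added.  The caller repeats
         this a plurality of times, as described above. */
      void cleanup_mask(unsigned char mask[MH][MW], int remove_thresh, int add_thresh)
      {
          static unsigned char next[MH][MW];
          memcpy(next, mask, sizeof(next));

          for (int y = 1; y < MH - 1; y++)
              for (int x = 1; x < MW - 1; x++) {
                  int n = 0;
                  for (int dy = -1; dy <= 1; dy++)
                      for (int dx = -1; dx <= 1; dx++)
                          if (dy != 0 || dx != 0)
                              n += mask[y + dy][x + dx];
                  if (mask[y][x] && n < remove_thresh)
                      next[y][x] = 0;     /* too few mask neighbors: remove */
                  else if (!mask[y][x] && n > add_thresh)
                      next[y][x] = 1;     /* mostly surrounded by mask: add */
              }
          memcpy(mask, next, sizeof(next));
      }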
  • In an alternate preferred aspect of the smart card for a digitally compressed color image, artifacts are removed by applying a sliding linear filter of five pixels once horizontally and once vertically to adjust a plurality of target pixels of the initial chromakey mask, and adjusting each target pixel to be included in the chromakey mask if the target pixel is initially not included in the chromakey mask, the pair of pixels on either side of the target pixel are in the chromakey mask, and the target pixel is not near an edge; adjusting each target pixel to be included in the chromakey mask if the target pixel is initially not included in the chromakey mask, and the two adjacent pixels on either side of the target pixel are included in the chromakey mask; adjusting each target pixel to be included in the chromakey mask if the target pixel is initially not included in the chromakey mask, and if three of the adjacent pixels a distance of two or less pixels away from the target pixel are included in the chromakey mask; adjusting each target pixel to be excluded from the chromakey mask if the target pixel is initially included in the chromakey mask, and if both pairs of pixels on either side of the target pixel are not included in the chromakey mask. [0029]
  • The term “background” is used herein to identify the area around a subject in an image. A significant part of replacing background in the color image being compressed with a solid color comprises determining an initial chromakey value and range of the colors in the background. The term “background color” is used herein to mean a fixed color that is subtracted from each pixel in a specified background area of an image prior to level one encoding, and can either be copied from a replacement color or supplied by an operator. The terms “chromakey color” and “chromakey” refer particularly to the color that is the center of a specified area of colors that are to be replaced, generally calculated from the accumulated pixels in the area, or supplied by an operator. The “replacement color” is a fixed color that is used to replace all pixels indicated in the final chromakey mask, and it can be either copied from the chromakey color, or supplied by an operator. In one presently preferred embodiment, the step of calibrating comprises capturing at least one calibration image of the background prior to capturing an image with a subject of interest in place, consisting substantially of background pixels, and determining the average and standard deviation of the at least one calibration image to set at least an initial chromakey color and range. The term “chromakey range” is used herein to refer to the amount that pixels can differ from the chromakey color and be included in the pixels to be replaced, and is also calculated from the accumulated pixels or is supplied by an operator. [0030]
  • Another preferred aspect of the smart card for a digitally compressed color image further comprises conversion of digital color image data to the YCrCb color space. In one currently preferred approach, the conversion of digital color image data to the YCrCb color space involves conversion of the color image data from the RGB color space. Preferably, the digital color image data is converted to the YCrCb color space by utilizing lookup tables of selected color values for color space conversion, and in one preferred approach, the digital color image data is converted to the YCrCb color space by utilizing nine 256-entry one-byte lookup tables containing the contribution that each R, G and B make towards the Y, Cr and Cb components. [0031]
  • In one presently preferred embodiment of the smart card for storing a digitally compressed color image such as for a color identification photograph, the color image data is statistically encoded by dividing the color image into an array of 4×4 squares of pixels, and encoding each 4×4 square of pixels into a fixed number of bits containing a central color value, color dispersion value, and a selection map that represent the sixteen pixels in the block. Typically, each of the blocks contains a central color value and a color dispersion value, and the statistical encoding of the image data involves determining a first sample moment of each block as the arithmetic mean of the pixels in the block, and the central color value of each block is set to the arithmetic mean from the first sample moment of the pixels in the block. One presently preferred option of this embodiment involves determining a second sample moment of the pixels in the block, and determining the color dispersion value of each block by determining the standard deviation from the first and second sample moments. Another presently preferred option of this embodiment involves determining a first absolute moment by determining an average of the difference between the pixel values and the first sample moment, and wherein the color dispersion value is set to the first absolute moment. [0032]
  • In another presently preferred aspect of the smart card for a digitally compressed color image, the image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining the selection map from those pixels in the block having values lighter or darker than the first sample moment. Alternatively, the selection map can be determined from those pixels in the block having values lighter or darker than the average of the lightest and darkest pixels in the block. [0033]
  • Another presently preferred embodiment of the smart card for a digitally compressed color image provides for statistical encoding of the color image data by encoding two levels of blocks of each 4×4 square of pixels, with the two levels including level one blocks and level two blocks, and the level two blocks including central color values. In one preferred aspect, the level two blocks are reduced to residuals from a fixed background color, and in another presently preferred aspect, the level one blocks are reduced to residuals from decoded level two blocks. [0034]
  • The present invention also provides for a smart card for storing a digitally compressed color image, wherein the color image contains image data consisting of a plurality of scan lines of pixels with scalar values, wherein the image data is filtered by evaluating the scalar values of individual pixels in the image with respect to neighboring pixels, and wherein the image data is statistically encoded by dividing the image into an array of blocks of pixels, and encoding each block of pixels into a fixed number of bits that represent the pixels in the block by classifying each said block, quantifying each said block, and compressing each said block by codebook compression using minimum redundancy, variable-length bit codes. [0035]
  • In one presently preferred embodiment of the smart card for a digitally compressed color image, each said block is classified according to a plurality of categories. In one preferred aspect, each of the blocks are classified in one of four categories: 1) null blocks exhibiting little or no change from the higher level or previous frame, 2) uniform blocks having a standard deviation less than a predetermined threshold, 3) uniform chroma blocks having a significant luminance component to the standard deviation, but little chrominance deviation, and 4) pattern blocks having significant data in both luminance and chrominance standard deviations. In one preferred option the number of bits to be preserved can be determined for each component of the block after each said block is classified. Additionally, a quantizer can be selected defining the number of bits to be preserved for each component of the block according to the classification of the block to preserve a desired number of bits for the block. In another presently preferred option, the number of bits for the Y and Cr/Cb components of the blocks to be preserved are determined independently for each classification. All components of each block can be preserved for pattern blocks, and all components of a central color, the mean luminance and chrominance, standard deviation luminance, and a selection map can be preserved for uniform chroma blocks. In another option, all three components of the central color value can be preserved for uniform blocks. Additionally, one preferred implementation provides for recording the run length of null blocks without preserving components of the null blocks. [0036]
  • The smart card for a digitally compressed color image can further involve matching of the texture map of the block with one of a plurality of common pattern maps for uniform chroma and pattern classified blocks; and compression according to codes from multiple codebooks. [0037]
  • The invention also provides for a smart card for storing a digitally compressed datastream of image data, stored in block order, with the image data consisting of a plurality of scan lines of pixels with scalar values, wherein the image data is filtered by evaluation of the scalar values of individual pixels in the image with respect to neighboring pixels, and the image data is statistically encoded by dividing the image into an array of blocks of pixels and encoding each block of pixels into a fixed number of bits that represent the pixels in the block. [0038]
  • In one presently preferred embodiment, the datastream is prepared for storage in block order by selecting a block order to first process those portions of the image that are most important to facial identification. In one preferred option, the block order provides a circle group layout. In another presently preferred option, the corners of the image are truncated. In one preferred aspect, the block order provides an oval group layout, and in a preferred option, the corners of the image are truncated. Alternatively, the block order can provide a bell group layout, and the corners of the image may also be truncated. [0039]
  • In another aspect, the datastream is prepared for storage in block order by dividing the blocks into groups. The blocks can, for example, be divided into groups by assigning a portion of the maximum compressed bytes to each group. In addition, the division of the blocks into groups can also involve adjusting quality-controlling thresholds upon completion of each group. In a preferred aspect, only level two block information is transmitted on the last block to be processed if the information is near a maximum limit of compressed bytes. The compression of the image can also be repeated starting at a lower quality level if necessary to process the entire image into a maximum limit of compressed bytes. [0040]
  • These and other aspects and advantages of the invention will become apparent from the following detailed description and the accompanying drawings, which illustrate by way of example the features of the invention.[0041]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic representation of an RGB color space known in the prior art; [0042]
  • FIG. 2 is a schematic representation of NTSC/PAL video color system in the HVS color space known in the prior art; [0043]
  • FIG. 3 is a schematic representation of NTSC/PAL video color system in the YCrCb color space known in the prior art; [0044]
  • FIG. 4 is a schematic diagram illustrating image acquisition for storage and use on a smart card; [0045]
  • FIG. 5 is a schematic diagram of an overview of the compression of color image data for storage and use on a smart card; [0046]
  • FIGS. 6 to 10 illustrate color image data preprocessing filter protocols for storage of color image data on a smart card; [0047]
  • FIGS. 11A to 11D show a flow chart for the color image data preprocessing for storage of color image data on a smart card; [0048]
  • FIG. 11E is a flow chart of the options for setting the chromakey color and range for storage of color image data on a smart card; [0049]
  • FIG. 11F is a diagram illustrating the automatic chromakey process for storage of color image data on a smart card; [0050]
  • FIGS. 12A to 12C show a flow chart for multilevel encoding of color image data for storage of color image data on a smart card; [0051]
  • FIG. 13 is a flow chart illustrating the encoding of a bitstream for storage of color image data on a smart card; [0052]
  • FIGS. 14A to 14E show a flow chart for codebook compression for storage of color image data on a smart card; [0053]
  • FIGS. 15A to 15D show a flow chart for encoding pattern maps for storage of color image data on a smart card; [0054]
  • FIG. 16A is a chart illustrating a 96×96 pixel image divided into four groups to provide adaptive compression; [0055]
  • FIG. 16B is a chart illustrating the non-truncated and truncated circle, oval and bell shaped layouts for pixel blocks; [0056]
  • FIG. 17 shows a flow chart for encoding luminance or chrominance values by codebook lookup for storage of color image data on a smart card; [0057]
  • FIG. 18 is a flow chart of adaptive compression; [0058]
  • FIG. 19 is an illustration of the format of the data stream; [0059]
  • FIGS. 20A, B, C and D are tables of inputs and corresponding edge filters; [0060]
  • FIG. 21 is a flowchart illustrating post-processing spatial filtering; and [0061]
  • FIG. 22 is a schematic diagram of a smart card according to the invention.[0062]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • While BTC statistical encoding can be used to reduce the number of bits required to digitally encode image data, the BTC method is limited to simple encoding of grayscale images. Digital color image compression methods typically use floating point arithmetic or 16-bit integer arithmetic, which are quite wasteful of computing power, particularly for encoding of color image data on smart cards and databases. Noise can also seriously interfere with the efficiency of the image compression process, and although preprocessing filters can be used to remove noise, too much filtering can make the decompressed image cartoon-like, while too little filtering may not be sufficient to improve compression performance. [0063]
  • As is illustrated in the drawings, which are presented for purposes of illustration and are not intended to limit the scope of the invention, the present invention accordingly provides for a smart card for storage of digitally compressed color images such as color identification photographs, although the invention is equally applicable to gray scale images. Digital color image data typically from a video camera or an existing digitized photograph are first converted from the RGB (Red-Green-Blue) color space to the YCrCb (Luminance-Chrominance) color space. Preferably, the step of converting digital color image data to the YCrCb color space comprises utilizing lookup tables of selected color values for color space conversion, and in one preferred approach, the step of converting digital color image data to the YCrCb color space comprises utilizing nine 256-entry one-byte lookup tables containing the contribution that each R, G and B make towards the Y, Cr and Cb components. [0064]
  • At (or prior to) compile time, nine 256-entry one-byte lookup tables of selected color values are prepared containing the contribution that each R, G and B make towards the Y, Cr and Cb components, for i=0 to 255, as follows:[0065]
  • RY[i]=224×0.299×i/255≈0.263×i  1)
  • GY[i]=224×0.587×i/255≈0.516×i  2)
  • BY[i]=224×0.114×i/255≈0.100×i  3)
  • RCr[i]=225×0.713×0.701×i/255≈0.441×i  4)
  • GCr[i]=225×0.713×−0.587×i/255≈−0.369×i  5)
  • BCr[i]=225×0.713×−0.114×i/255≈−0.072×i  6)
  • RCb[i]=225×0.564×−0.299×i/255≈−0.149×i  7)
  • GCb[i]=225×0.564×−0.587×i/255≈−0.292×i  8)
  • BCb[i]=225×0.564×0.886×i/255≈0.441×i  9)
  • Once completed, the tables can be used to convert a pixel from RGB to YCrCb as follows:[0066]
  • Y=RY[r]+GY[g]+BY[b]+16
  • Cr=RCr[r]+GCr[g]+BCr[b]
  • Cb=RCb[r]+GCb[g]+BCb[b]
  • This method requires 2304 bytes of constant ROM, six 8-bit additions and nine table lookups. The nine table lookups might require a 16-bit addition each, but more likely, the microcontroller could handle the lookup through an opcode or built-in addressing mechanism. [0067]
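  • The table construction and per-pixel conversion described above can be sketched in C as follows; the rounding by truncation, the signed one-byte storage for the chrominance contributions, and the function names are assumptions.

      #include <stdint.h>

      static uint8_t RY[256], GY[256], BY[256];
      static int8_t  RCr[256], GCr[256], BCr[256];
      static int8_t  RCb[256], GCb[256], BCb[256];

      /* Builds the nine 256-entry tables from the formulas above; floating
         point is needed only here, not during per-pixel conversion. */
      void build_color_tables(void)
      {
          for (int i = 0; i < 256; i++) {
              RY[i]  = (uint8_t)(224.0 * 0.299 * i / 255.0);
              GY[i]  = (uint8_t)(224.0 * 0.587 * i / 255.0);
              BY[i]  = (uint8_t)(224.0 * 0.114 * i / 255.0);
              RCr[i] = (int8_t)(225.0 * 0.713 *  0.701 * i / 255.0);
              GCr[i] = (int8_t)(225.0 * 0.713 * -0.587 * i / 255.0);
              BCr[i] = (int8_t)(225.0 * 0.713 * -0.114 * i / 255.0);
              RCb[i] = (int8_t)(225.0 * 0.564 * -0.299 * i / 255.0);
              GCb[i] = (int8_t)(225.0 * 0.564 * -0.587 * i / 255.0);
              BCb[i] = (int8_t)(225.0 * 0.564 *  0.886 * i / 255.0);
          }
      }

      /* Converts one RGB pixel to YCrCb using only lookups and additions. */
      void rgb_to_ycrcb(uint8_t r, uint8_t g, uint8_t b,
                        uint8_t *y, int *cr, int *cb)
      {
          *y  = (uint8_t)(RY[r] + GY[g] + BY[b] + 16);
          *cr = RCr[r] + GCr[g] + BCr[b];
          *cb = RCb[r] + GCb[g] + BCb[b];
      }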
  • In addition to conventional convolution filters, the invention includes a unique preprocessing filter with three goals: 1) reducing noise without removing important face features, 2) sharpening blurred edges, and 3) remaining computationally simple. The preprocessing filter utilizes a five pixel window on a single scan line to determine the output value for the center pixel. For each target pixel, a sequence of five pixels, including 2 pixels on either side of the target pixel and the target pixel itself, is evaluated. Five cases are accounted for in the following discussion, which is directed only to the component of luminance, for simplicity. All three components (YCrCb) are included in the actual filters. [0068]
  • Referring to FIG. 6, in order to filter data for an individual target pixel, an average of the data for the pixels immediately surrounding the target pixel is taken, for those pixels surrounding the target pixel that are within a specified range of values. If all five pixels are within specified limits, the output is the average of four pixels in a raster line (A, B, D, E), two on each side of the target (C). If the two pixels on either side are within a specified range and both sides themselves are within the range, the target pixel is treated as impulse noise. As is illustrated in FIG. 7, the filtered output target pixel data is the average of the four pixels (A, B, D, E) on each side of the target pixel (C). Referring to FIG. 8, if the two pixels on either side of the target pixel and the target pixel itself are within a specified range, the target pixel (C) is considered to be an edge pixel. The output target pixel (C) is the average of the two pixels (A, B or D, E) on the matching side. If the five pixels are all increasing or decreasing (or are within a small range to account for ringing or pre-emphasis typically found in analog video signals), the target pixel is considered to be in the midst of a blurred edge. As is shown in FIG. 9, the output target pixel is then the average of two pixels (A, B) on whichever side is closest in value to the target pixel. As is illustrated in FIG. 10, if the five pixels in the window do not fit into any of the prior cases, the target is treated as being in the midst of a busy area, and the output target pixel is unchanged. The flow charts of FIGS. 11A to 11D illustrate the color image data preprocessing according to the method of the present invention. [0069]
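  • One way to code the luminance branch of this five-pixel window is sketched below in C. The closeness threshold is left as a parameter because its value is not fixed in this passage, and the exact ordering and formulation of the case tests are assumptions; the actual filter applies the same logic to all three YCrCb components.

      /* Returns 1 when two samples are within the specified range. */
      static int close_to(int u, int v, int range)
      {
          int d = u - v;
          return (d < 0 ? -d : d) <= range;
      }

      /* Filters the target pixel C using its neighbors A, B (left) and D, E
         (right) on the same scan line. */
      int prefilter_pixel(int A, int B, int C, int D, int E, int range)
      {
          int left  = close_to(A, B, range);    /* left pair coherent  */
          int right = close_to(D, E, range);    /* right pair coherent */

          /* Case 1: all five pixels within the specified limits. */
          if (close_to(A, C, range) && close_to(B, C, range) &&
              close_to(D, C, range) && close_to(E, C, range))
              return (A + B + D + E) / 4;

          /* Case 2: both sides coherent and close to each other, but not to C:
             impulse noise, replaced by the average of the four neighbors. */
          if (left && right && close_to(B, D, range))
              return (A + B + D + E) / 4;

          /* Case 3: C matches one side only: edge pixel, average of that side. */
          if (left && close_to(B, C, range))
              return (A + B) / 2;
          if (right && close_to(D, C, range))
              return (D + E) / 2;

          /* Case 4: monotonic run: blurred edge, average of the pair whose
             values are closest to C. */
          if ((A <= B && B <= C && C <= D && D <= E) ||
              (A >= B && B >= C && C >= D && D >= E)) {
              int dl = C > B ? C - B : B - C;
              int dr = D > C ? D - C : C - D;
              return (dl <= dr) ? (A + B) / 2 : (D + E) / 2;
          }

          /* Case 5: busy area: the pixel is left unchanged. */
          return C;
      }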
  • Background in the image being compressed can be replaced with a scalar value, in order to reduce noise in the image, and to increase the visual quality of the compressed image. In a currently preferred embodiment, the step of replacing background in the image being compressed with a scalar value can also involve setting an initial chromakey value and delta values. [0070]
  • Methods illustrated in the flow chart of FIG. 11E are used to set the initial chromakey value and range: calibrated image, automatic, automatic-accumulated, and manual. In the chromakey calibrated image process, prior to capturing an image with the subject of interest in place, one or more calibration images of the background consisting substantially entirely of background pixels are captured. The average and standard deviation of those entire images are determined, and are used to set at least an initial chromakey value and range. [0071]
  • In the automatic chromakey calibration process of the invention, illustrated in FIG. 11F, an image is captured with the subject in place. Starting in the upper-left and upper-right corners of the image, pixels are collected downward and toward the center until an edge or image boundary is encountered. The average and standard deviation of those pixels are calculated and used to set the initial chromakey value and range. In the automatic-accumulated chromakey process, the selection of pixels is carried out as in the automatic chromakey process, but the background pixel data are collected across several images. The average and standard deviation of those collected pixels are determined and used to set at least the initial chromakey value and range. For manual calibration, the initial chromakey value and range are specified before image capture, without reference to the properties of the individual image being captured. [0072]
  • In the calibrated image, automatic, and automatic-accumulated chromakey options, each pixel used for accumulating calibration data is converted to the YCrCb color space. For each pixel, the Y, Cr, and Cb values and their squares are accumulated along with a count of the pixels accumulated. [0073]
  • The average pixel value is calculated by dividing the accumulated Y, Cr, and Cb values by the number of pixels accumulated. This average is used as the chromakey value. From the Y, Cr, and Cb values and their squares, the standard deviation of the accumulated pixels can be calculated. Separate coefficients for Y and C can be specified that are multiplied by the standard deviation to become the chromakey delta values, which specify the variance from the chromakey values used to determine the ranges for each of the Y, Cr, and Cb components. [0074]
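  • A minimal sketch of the chromakey calibration statistics follows: the Y, Cr, and Cb values and their squares are accumulated over the sampled background pixels, and the chromakey value and delta values are derived from the mean and standard deviation. The coefficient values shown are illustrative assumptions.

```python
# Sketch of chromakey calibration: accumulate component values and their
# squares, then derive the chromakey value (mean) and delta values
# (coefficient x standard deviation) for Y, Cr and Cb.
import math

def calibrate(pixels, y_coeff=2.0, c_coeff=2.5):
    n = len(pixels)
    sums = [0, 0, 0]
    sq_sums = [0, 0, 0]
    for ycc in pixels:                      # each pixel is a (Y, Cr, Cb) tuple
        for k in range(3):
            sums[k] += ycc[k]
            sq_sums[k] += ycc[k] * ycc[k]
    means = [s / n for s in sums]
    stdevs = [math.sqrt(max(q / n - m * m, 0.0)) for q, m in zip(sq_sums, means)]
    coeffs = (y_coeff, c_coeff, c_coeff)    # separate coefficients for Y and C
    deltas = [c * s for c, s in zip(coeffs, stdevs)]
    return means, deltas                    # chromakey value and delta values

key, delta = calibrate([(40, 120, 130), (42, 118, 131), (39, 121, 129)])
```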
  • For calculated chromakey values that have very high or very low luminance values, or have small chrominance values, the chromakey values and delta values or variances from the chromakey values used to determine the ranges can be “normalized” by removing the chrominance component of the chromakey value and increasing the chrominance components of the chromakey delta values. In other cases, the chromakey value can be adjusted so that the value plus or minus the delta values for each component does not cross zero. [0075]
  • Preferably, a mask of colors closely matching the chromakey value is created. Three delta components are preferably used to describe a rectangular region in YCrCb color space. In an alternate preferred embodiment, the three delta components can describe a hollow cylindrical segment in HSV color space. For each pixel, the differences between the Y, Cr, and Cb values of the pixel and the Y, Cr, and Cb components of the chromakey value are determined. If all three of the differences are within the Y, Cr, and Cb chromakey delta values, then a bit in the mask is set. In another preferred aspect, one delta component describes a spherical region in YCrCb color space. [0076]
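  • A minimal sketch of the rectangular-region (cube) mask test follows; the spherical and HSV variants mentioned above would replace the comparison with a distance or hue/saturation test.

```python
# Sketch of building the chromakey mask as a rectangular region in YCrCb
# space: a pixel's mask bit is set when all three component differences fall
# within the corresponding delta values.
def chromakey_mask(pixels, key, delta):
    mask = []
    for y, cr, cb in pixels:
        inside = (abs(y - key[0]) <= delta[0] and
                  abs(cr - key[1]) <= delta[1] and
                  abs(cb - key[2]) <= delta[2])
        mask.append(1 if inside else 0)
    return mask
```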
  • Additionally, the method of the invention can further comprise the step of removing artifacts from the initial chromakey mask. The initial chromakey mask typically contains three types of artifacts that must be removed. The term “chromakey mask” is used herein to mean the array of on/off bits that indicate whether a pixel is to be replaced in the chromakey process. The first type of artifact arises from small areas of pixels in the background that are not included in the chromakey mask set of pixels replacing background pixels, but should be included in the mask set. The second type of artifact arises from small areas of pixels that are included in the chromakey mask set, but that are actually part of the subject. The third type of artifact arises from those pixels creating a halo effect around the subject, where the background and subject tend to blend for a few pixels around the boundary of the subject. [0077]
  • A few passes of erosion and dilation are typically used to adjust the chromakey mask. Erosion is the process of removing pixels from the mask that have few neighbors included in the mask, such as those pixels having less than a predetermined threshold number of adjacent pixels a given distance away; it is used to correct the second type of artifact. Dilation is the process of adding pixels to the mask that have most of their neighboring pixels included in the mask, such as those pixels having more than a predetermined threshold number of adjacent pixels a given distance away included in the mask; it is used to correct the first type of artifact. In one presently preferred embodiment, the step of removing artifacts comprises the steps of: a) initially determining the background mask set of pixels; b) removing pixels from the mask set that have less than a predetermined threshold of neighboring pixels included in the mask set; c) adding pixels to the mask set that have more than a predetermined threshold of neighboring pixels included in the mask set; and repeating steps b) and c) a plurality of times. The third type of artifact can be corrected by utilizing more dilation passes than erosion passes. [0078]
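  • A minimal sketch of one erosion pass and one dilation pass over a two-dimensional mask is given below; the neighborhood (the eight immediate neighbors) and the thresholds are illustrative assumptions.

```python
# Sketch of erosion and dilation over a 2-D chromakey mask (lists of 0/1 rows).
def neighbours_on(mask, r, c):
    h, w = len(mask), len(mask[0])
    return sum(mask[i][j]
               for i in range(max(r - 1, 0), min(r + 2, h))
               for j in range(max(c - 1, 0), min(c + 2, w))
               if (i, j) != (r, c))

def erode(mask, keep_threshold=3):
    # remove mask pixels with too few neighbours in the mask (artifact type 2)
    return [[1 if mask[r][c] and neighbours_on(mask, r, c) >= keep_threshold else 0
             for c in range(len(mask[0]))] for r in range(len(mask))]

def dilate(mask, add_threshold=5):
    # add pixels whose neighbourhood is mostly in the mask (artifact type 1)
    return [[1 if mask[r][c] or neighbours_on(mask, r, c) >= add_threshold else 0
             for c in range(len(mask[0]))] for r in range(len(mask))]
```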
  • Another method has also been developed, operating on the same principles, to accomplish the same goals with much less computational power. According to this “chromakey cleanup” method, corrections are made first horizontally, then vertically, on the chromakey mask. In the chromakey cleanup process, the initial mask can be adjusted before pixel replacement actually begins. The process may be repeated if necessary, and typically two passes are used. A sliding window of five pixels is used, and the center pixel is adjusted according to the following table (also sketched in code after the table), where “On” indicates the pixel is included in the chromakey mask, “Off” indicates the pixel is not included in the chromakey mask, and “X” indicates the pixel can be included or not included in the chromakey mask, or can be near an edge or not, for specific cases not covered in the table of rules: [0079]
    Pixel position                         Near    New
    −2     −1     0      +1     +2         Edge    0
    Off    Off    Off    On     On         No      On
    On     On     Off    Off    Off        No      On
    Off    Off    On     Off    Off        X       Off
    X      On     Off    On     X          X       On
    On     Off    Off    On     On         X       On
    On     On     Off    Off    On         X       On
    X      X      On     X      X          X       On
    X      X      Off    X      X          X       Off
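  • The table above can be implemented directly as an ordered list of pattern rules, as in the following sketch. The representation of “near edge” as a caller-supplied flag per pixel is an assumption, since its detection is not detailed here.

```python
# Sketch of the sliding-window chromakey cleanup applied along one row of the
# mask. Each rule is (pattern for positions -2..+2, near-edge requirement,
# new centre value); None is the "X" wildcard and the first matching rule wins.
ON, OFF = 1, 0
RULES = [
    ((OFF, OFF, OFF, ON,  ON ),  False, ON ),
    ((ON,  ON,  OFF, OFF, OFF),  False, ON ),
    ((OFF, OFF, ON,  OFF, OFF),  None,  OFF),
    ((None, ON,  OFF, ON,  None), None, ON ),
    ((ON,  OFF, OFF, ON,  ON ),  None,  ON ),
    ((ON,  ON,  OFF, OFF, ON ),  None,  ON ),
    ((None, None, ON,  None, None), None, ON ),
    ((None, None, OFF, None, None), None, OFF),
]

def cleanup_pass(mask_row, near_edge_row):
    out = list(mask_row)
    for i in range(2, len(mask_row) - 2):
        window = tuple(mask_row[i - 2:i + 3])
        for pattern, edge_req, new_value in RULES:
            if (all(p is None or p == w for p, w in zip(pattern, window)) and
                    (edge_req is None or edge_req == near_edge_row[i])):
                out[i] = new_value
                break
    return out

row = [ON, ON, OFF, ON, ON, ON, ON]
print(cleanup_pass(row, [False] * len(row)))   # the isolated Off at position 2 is turned On
```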
  • Once these artifacts have been removed from the chromakey mask, a replacement color is substituted into the original image for each pixel that is “on” in the mask. The application developer has the option of using the chromakey value color as the replacement color or specifying a fixed replacement color. Further, the developer can use the replacement color as the background color for the first level encoding step or specifying another fixed value. [0080]
  • While the currently preferred method typically uses a cube-shaped region bounded by the chromakey value, plus and minus the chromakey delta for determining a range for each component, it should be recognized that the region may be replaced by a spherically-shaped region determined by a distance parameter from the chromakey value, or alternatively the chromakey calculations may be done in a HSV (Hue, Saturation, Value) color space which would result in a wedge-shaped region. [0081]
  • In the multilevel statistical encoding of gray scale or color image data according to the present invention, as illustrated in FIGS. 12A to 12C, 13 and 14A to 14E, the first portion of the actual process of compression typically involves dividing the image into an array of 4×4 squares of pixels, and encoding each 4×4 square of pixels into a fixed bit length block containing a central color value, a color dispersion value, and a selection map that represent the sixteen pixels in the block. Then 4×4 blocks of representative values from the level-one blocks are encoded into higher level, or level-two, blocks of central color values. Each level-two block describes a lower resolution 16×16 pixel area of the image. The process can continue to 64×64, 256×256, and even 1024×1024 pixel blocks. The number of levels is selected so that four to fifteen top level blocks in each dimension remain. The compression system of the invention uses only two levels. In a presently preferred aspect of the method of the invention, the step of statistically encoding the image data comprises dividing the image into an array of 4×4 squares of pixels, and multi-level encoding the central color values of each 4×4 square of lower level blocks. In one currently preferred option, the step of multi-level encoding is repeated until from four to fifteen blocks remain on each axis of a top level of blocks. In a preferred variation of this option, the top level of blocks is reduced to residuals from a fixed background color. In an alternate preferred option, each successive lower level block is reduced to the residuals from the encoded block on the level above. In another alternate preferred option, the pixel values are reduced to the residuals from the encoded level one blocks. [0082]
  • In the modified block truncation coding (BTC) algorithm, the image is divided into 4×4 blocks of pixels, and the first sample moment (the arithmetic mean) and the second sample moment are determined. In one currently preferred embodiment, each block contains a central color value and a color dispersion value, and the step of statistically encoding the image data comprises determining a first sample moment of each block as the arithmetic mean of the pixels in the block, and the central color value of each block is set to the arithmetic mean from the first sample moment of the pixels in the block. One presently preferred option of this embodiment involves determining a second sample moment of the pixels in the block, and determining the color dispersion value of each block by determining the standard deviation from the first and second sample moments. In another presently preferred alternate embodiment, instead of a second standard sample moment, a first absolute central moment can be determined, to quantify the dispersion around the central value, and the color dispersion value is set to the first absolute moment. Another presently preferred option of this embodiment involves determining the first absolute moment by determining an average of the difference between the pixel values and the first sample moment, and the color dispersion value is set to the first absolute moment. A selection map of those pixels having color values less than or greater than a discriminator set to the first sample moment is determined, along with a count of the lighter pixels. [0083]
  • In another presently preferred aspect of the method of the invention, the step of statistically encoding the image data comprises determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining the selection map from those pixels in the block having values lighter or darker than the average of the lightest and darkest pixels in the block. The sample variance and the standard deviation can thus be determined based upon the first and second sample moments. The mean, standard deviation, and selection map are preserved for each block. As adapted for YCrCb color, according to the method of the present invention, the first sample moment for each color component is thus determined according to the following equations: [0084]

$$\overline{Y} = \frac{1}{16}\sum_{i=1}^{16} Y_i \qquad \overline{Cr} = \frac{1}{16}\sum_{i=1}^{16} Cr_i \qquad \overline{Cb} = \frac{1}{16}\sum_{i=1}^{16} Cb_i$$
  • The second sample moment is determined according to the following equations: [0085]

$$\overline{Y^2} = \frac{1}{16}\sum_{i=1}^{16} (Y_i)^2 \qquad \overline{Cr^2} = \frac{1}{16}\sum_{i=1}^{16} (Cr_i)^2 \qquad \overline{Cb^2} = \frac{1}{16}\sum_{i=1}^{16} (Cb_i)^2$$
  • The standard deviation is determined according to the following equations: [0086]

$$\sigma_Y = \sqrt{\overline{Y^2} - (\overline{Y})^2} \qquad \sigma_{Cr} = \sqrt{\overline{Cr^2} - (\overline{Cr})^2} \qquad \sigma_{Cb} = \sqrt{\overline{Cb^2} - (\overline{Cb})^2}$$
  • The selection map $m_i$ for each block is determined as is illustrated in FIGS. 12A to 12C, where: [0087]

$$m_i = \bigl(Y_i < \overline{Y}\bigr), \quad i = 1 \ldots 16$$
  • Referring to FIG. 12A, each 4×4 block of pixels is collected into a 16 element buffer, in which the index ranges from 0 to 15. In the first step, the first and second moments are determined. Squares are preferably determined by table lookup using an 8-bit table of squares rather than by multiplication. In the second step, the mean and standard deviation are determined, using a square 12 function to determine the square of a 12-bit number based upon the same 8-bit table of squares above. The root function finds roots by binary search of the same 8-bit table of squares. In FIG. 15A, dY, dCr and dCb are the standard deviations for each component, and mY (mean luminance), mCr, and mCb are the arithmetic means. In the third step, illustrated in FIG. 15B, the selector map is determined from the mean luminance value mY for the selector. The one bits in the map mark those pixels that are “darker” than the mean. The signed differences are accumulated from the mean in each chrominance (Cr/Cb) channel. If the Cr channel decreases when the luminance increases, dCr is inverted. If the Cb channel decreases when the luminance increases, dCb is inverted. In the fourth step, illustrated in FIG. 15C, values are normalized. If the luminance of all pixels is equal to or slightly greater than the mean, all standard deviation values are zeroed. If all of the pixels are nearly equal, the standard deviations will all be zero, in which case the map is also zeroed. To reduce the number of possible maps from 65,536 to 32,768, if the MSB (most significant bit) map is set, the map is inverted and the dY, dCr, and dCb values are negated. [0088]
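  • A minimal sketch of the level-one block statistics for the luminance component is given below; floating-point arithmetic is used in place of the 8-bit table-of-squares and binary-search root described above, and the chrominance handling and signed-difference accumulation are omitted.

```python
# Sketch of level-one block encoding for a 4x4 block (luminance shown;
# Cr and Cb are handled the same way in the actual method).
import math

def encode_block_y(block16):                 # 16 luminance values
    mean = sum(block16) / 16.0               # first sample moment
    mean_sq = sum(v * v for v in block16) / 16.0   # second sample moment
    sigma = math.sqrt(max(mean_sq - mean * mean, 0.0))
    # selection map: one bits mark pixels darker than the mean
    sel_map = 0
    for i, v in enumerate(block16):
        if v < mean:
            sel_map |= 1 << i
    # normalization: if the MSB of the map is set, invert the map and negate sigma
    if sel_map & 0x8000:
        sel_map ^= 0xFFFF
        sigma = -sigma
    return round(mean), round(sigma), sel_map
```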
  • The second half of the compression process involves taking the fixed bit (8 and 16 bit) length blocks encoded by the previous multilevel encoding step, and compressing them using minimum redundancy, variable-length bit codes. [0089]
  • The basic process for compressing a single encoded block comprises three steps: classification, quantization, and codebook compression. Before beginning the compression steps, however, the parameters used for the particular block must be established. Several parameters are used to control the compression process. The parameters specify tolerances for how blocks are classified, how many bits of which component in the encoded block will be preserved, and how precisely the selection map is preserved. Different quality parameters may be used for different levels. For adaptive compression, each region of the image will use a different parameter set. [0090]
  • Adaptive compression is the process of making certain areas of the image that are considered to be more important look better, and is accomplished in two basic parts. In the first part, the level two (L2) blocks of the image are divided into groups and a portion of the total compressed data is allocated to each group. In the second part, the compression thresholds are adjusted for each group to ensure that the image is compressed within the allocated space. [0091]
  • For facial identification images, the image is typically divided into three or four groups. The groups are generally laid out in concentric circles with the highest priority area in the center. Targets for the amount of compressed data used for encoding each of the groups are also determined. Usually the highest priority group has two to three times the bits per pixel of the lowest priority group. [0092]
  • The following example, illustrated in FIG. 16A, is of a 96×96-pixel image divided into four groups with an overall maximum of 1600 bytes of compressed data. [0093]
    Group   Level Two Blocks   Target Group Bytes   Bytes per L2 Block   Bits per Pixel
    A        4                 400                  100                  3.13
    B        8                 400                  50                   1.56
    C       12                 400                  33.33                1.04
    D       12                 400                  <33.33               <1.04
    Total   36                 1600                 44.44                1.39
  • To adapt to different capture environments and differently shaped faces, three layouts are defined, “circle,” “oval,” and “bell,” as is illustrated in FIG. 16B. For facial identification images, the corners of the image typically carry the least amount of information. The application can also choose the most appropriate of the three group layouts and whether to truncate the corners. [0094]
  • If truncation is selected, one block is removed from each corner on sides with seven or fewer blocks, and two blocks are removed from each corner on sides with more than seven blocks. Truncation is bypassed on images having fewer than four blocks on either side. [0095]
  • To determine which blocks are considered to be within each group, first the sum of the distance between the center of each block and a set of control points is calculated. For the circle layout, only one control point is used in the center of the image. For the oval layout, three control points are used: one in the center of the image and one each one-fourth of the image height above and below the center. For the bell layout the three control points are (1) centered horizontally and one third down, (2) one-third from the left and one-third up, and (3) one-third from the right and one-third up. The calculated distances are then sorted. [0096]
  • For images with fewer than 25 blocks, only three groups are used. Blocks associated with the first one-sixth of the distances are included in the first group. The remainder of the first half of the distances are included in the second group. The remaining blocks are included in the third group. For each group, all blocks having a distance equal to the target (one-sixth or one-half) are included in the target group. In general, one-fourth of the maximum compressed bytes are allocated to the first group, one-fourth to the second group, and the remaining one-half to the third group. These allocations are adjusted for the number of blocks that are actually included in the group. [0097]
  • With more than 25 blocks, four groups are used. The distances are divided into ninths. Blocks associated with the first ninth are included in the first group. Blocks associated with the second and third ninths are assigned to the second group. Blocks associated with the fourth and fifth ninths are assigned to the third group. All remaining blocks are included in the fourth group. Again, all blocks having a distance equal to the target are included in the earlier group. In general, one-fourth of the maximum compressed bytes are allocated to each group. These allocations are also adjusted for the number of blocks assigned to the groups. [0098]
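  • A minimal sketch of the distance-based grouping for the circle layout (single control point, fewer than 25 blocks) follows; the rank-based cut points simplify the tie rule described above.

```python
# Sketch of assigning level-two blocks to priority groups by distance from
# the single control point of the circle layout, for images with fewer than
# 25 blocks (cuts at one-sixth and one-half of the sorted distances).
import math

def group_blocks(cols, rows):
    centre = ((cols - 1) / 2.0, (rows - 1) / 2.0)
    dist = {(c, r): math.hypot(c - centre[0], r - centre[1])
            for r in range(rows) for c in range(cols)}
    ordered = sorted(dist, key=dist.get)           # nearest blocks first
    n = len(ordered)
    groups = {}
    for rank, block in enumerate(ordered):
        if rank < n // 6:
            groups[block] = 0                      # highest priority group
        elif rank < n // 2:
            groups[block] = 1
        else:
            groups[block] = 2
    return groups

groups = group_blocks(4, 5)                        # 20 level-two blocks
print(sum(1 for g in groups.values() if g == 0))   # 3 blocks in the highest priority group
```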
  • In order to achieve the highest possible quality within the maximum compressed bytes constraint, an iterative process is used. Several sets of compression thresholds are defined, called quality levels. At the highest quality level, the thresholds are set to very low values. As the quality level decreases, the threshold values are gradually increased. The actual values for the thresholds are chosen so that as the quality level decreases, fewer bytes of compressed data are produced when they are applied to a typical facial identification image. A sample set of quality level values is shown in the table: [0099]
    Quality Level   TYU   TCU   TYN   TCN   Map Error   Map Group
    9                0     0     0     0     0           4
    8                0     0     1     1     1           4
    7                1     1     1     1     1           4
    6                1     1     2     2     1           4
    5                2     2     2     2     1           4
    4                2     2     3     3     2           3
    3                3     3     3     3     2           3
    2                3     3     3     3     3           3
    1                3     3     4     4     4           2
    0                4     4     4     4     4           2
  • During the compression process, the quality level is automatically decreased after processing each group of blocks, except when at the highest quality level. If at the end of a block group, the number of accumulated bytes of compressed data exceeds the target for that group (either as calculated above or specified by the application), the quality level is decreased a second step. In addition, when processing the last group, each block is checked against the remaining allocation. If a block is beyond the target, only the level two data is preserved. If the maximum bytes allowed is exceeded, the process is repeated starting one quality level lower. The flowchart in FIG. 17 illustrates this process. [0100]
  • The basic codebook compression process consists of three steps: First, blocks are classified into four categories—null, uniform, uniform chroma, and pattern. Second, the number of bits for the Y and Cr/Cb components may be reduced, differently for each classification. Third, for uniform chroma and pattern classified blocks the texture map is tested against three groups of simpler, more common “pattern maps.” Where the pattern map is sufficiently similar to the texture map from the encoder, it is used. Otherwise the entire 16-bit texture map is kept, as is described further below. [0101]
  • Another aspect of the method of the invention currently preferably involves preparing a datastream for storage or transmission in level order. In an alternate preferred embodiment, the method of the invention involves preparing a datastream for storage or transmission in block order. Another currently preferred embodiment of the method of the invention further involves the step of adding compressed residuals between input pixel data and level-one decoded blocks to thereby provide loss-less digital compression of the image. [0102]
  • For multilevel decompression, blocks can be processed for storage or transmission of the datastream for decoding in either block order or level order. In block order, for each top level block, a top-level block is processed followed by the lower level blocks within the top level block. This method allows adaptive decompression or selective processing of top-level blocks. In level order processing, all of the blocks of the top level are processed first, then each intermediate level, followed by the lowest level processing. In a presently preferred aspect, the step of decompressing comprises restoring the components of the blocks to the original number of bits based upon block classification. This method allows for progressive decoding of still images where a very low resolution image can be displayed when the top level is decoded, and progressively higher resolution images can be displayed as each intermediate level is decoded, and finally the full resolution image can be displayed after decoding the level one data as will be further explained below. In one presently preferred embodiment, the step of decompressing each block comprises decompressing each block by codebook decompression. [0103]
  • In one presently preferred embodiment, the step of preparing a datastream for storage or transmission in block order comprises selecting a block order to first process those portions of the image that are most important to facial identification. In one preferred option, the block order provides a circle group layout. In another presently preferred option, the corners of the image are truncated. In one preferred aspect, the block order provides an oval group layout, and in a preferred option, the corners of the image are truncated. Alternatively, the block order can provide a bell group layout, and the corners of the image may also be truncated. [0104]
  • In another aspect of the invention, the step of preparing a datastream for storage or transmission in block order comprises dividing the blocks into groups. The blocks can, for example, be divided into groups by assigning a portion of the maximum compressed bytes to each group. In addition, the step of dividing the blocks into groups can also involve adjusting quality-controlling thresholds upon completion of each group. In a preferred aspect, only level two block information is transmitted on the last block to be processed if the information is near a maximum limit of compressed bytes. The compression of the image can also be repeated starting at a lower quality level if necessary to process the entire image into a maximum limit of compressed bytes. [0105]
  • The state data is defined as the minimum information that must be the same on the encoding system and the decoding system so that a compressed data bitstream can be successfully decoded. The state data consists of the following items: (1) base rows, (2) base columns, (3) quantizer for level one and level two, (4) codebook identifiers for each classification, luminance, chrominance, and group 3 maps, and (5) group layout identifier. [0106]
  • Typically, an individual application will have very specific needs and constraints. All that is required is that the state data parameters be set the same (or have the same default values) on both the encoder and decoder. However, for applications where any element of the state data might vary from image to image, the state data (or at least the variable items) must be kept with the compressed data. Otherwise, the compressed data would be unusable. In addition, in a given application it may be desirable to store preferences, such as a combination of post filters, with the compressed data. [0107]
  • A quantizer value defines the values used for the bYU, bYP, bCU, and bCP bit count parameters, which are discussed further below. A sample table of values for the quantizer and the corresponding bit count parameters is shown here: [0108]
    Quantizer bYU bYP bCU bCP
    1 4 4 4 4
    2 5 4 4 4
    3 5 5 4 4
    4 5 4 5 4
    5 5 5 4 4
    6 5 5 5 4
    7 6 4 4 4
    . . .
    63  8 8 8 8
  • During the first part of the adaptive compression process, the level 2 blocks are sorted. The same sort order algorithm must be used in the decoder. Thus, for the same base rows, base columns, and group layout identifier (circle, oval, or bell; truncated or not), the encoder and decoder will process the level two blocks in the same order. [0109]
  • The compressed data for each level two block will usually be followed by its sixteen level one blocks. The level one blocks will be processed from the upper left corner across the top row, continuing from the left of the second row, and finishing in the lower right corner. Two escape codes are defined. The first signals the end of compressed data. The other is used when level one data is to be skipped. FIG. 19 illustrates the format of the data stream. [0110]
  • Four codebooks are used in the basic compression process, one each for block classification, luminance difference, chrominance difference, and group three pattern maps, as is described further below. Different applications will have different distributions of values to be compressed. [0111]
  • The system of statistical encoding known as Huffman coding is used for constructing variable bit length codebooks based upon the frequency of occurrence of each symbol. For the ultimate use of this technique, a new set of codebooks would need to be constructed for each image and transmitted to the decoder. However, this process is usually not practical. The method of the present invention preferably includes several codebooks optimized for a variety of applications. Typically a single set of codebooks is used for an image, but if necessary, each set of parameters can specify different codebooks. [0112]
  • Once the block data and parameters have been collected, the block is classified as null, uniform, uniform chroma, or pattern. Null blocks exhibit little or no change from the higher level or previous frame. Run lengths of one to eight null blocks are collected, and no other information is preserved. Uniform blocks have a relatively low standard deviation, being less than a predetermined threshold, and are therefore relatively uniform in their change in color from the higher level or previous frame. The mean values for all three components are preserved. [0113]
  • Uniform chroma blocks have a significant luminance component to the standard deviation, but little chrominance deviation. The mean luminance and chrominance, standard deviation luminance, and a suitable selection map are preserved. Pattern blocks have significant data in both luminance and chrominance standard deviations. All components of the block are preserved. An additional classification, called an escape code, is also used to navigate the compressed bitstream. [0114]
  • After the block is classified as null, uniform, uniform chroma, or pattern, the number of bits to be preserved for each component of the block is set as follows: [0115]
    Classification   Mean Y   Mean Cr   Mean Cb   σY    σCr   σCb   MAP
    Null             0        0         0         0     0     0     0
    Uniform          bYU      bCU       bCU       0     0     0     0
    Uniform Chroma   bYU      bCU       bCU       bYP   0     0     Yes
    Pattern          bYU      bCU       bCU       bYP   bCP   bCP   Yes
  • For uniform chroma and pattern blocks, the selection map is preserved along with the color data. The run length of null blocks is recorded without preserving components of the null blocks. Three groups of common selection maps are identified by the compression method of the invention. The first two groups are fixed, while the application developer can select from several codebooks for the third group. If a suitable match cannot be found in the three groups, the entire texture map is preserved. [0116]
  • The following notation is used when identifying selection maps: [0117]
    0 (MSB)  b14  b13  b12
    b11      b10  b9   b8
    b7       b6   b5   b4
    b3       b2   b1   b0
    For example:
      0 1 1 1
      0 1 0 1    =  7528H in hexadecimal notation
      0 0 1 0
      1 0 0 0
  • Since the selection map is normalized in the encoding step so that the MSB is zero, each map actually represents two maps: the stored map and its implied complement, as shown in the table below. [0118]
    Group   Members                      Implied Maps                 Encoding
    1       00FFH 3333H                  FF00H CCCCH                  3 bits
    2       0FFFH 7777H 1111H 000FH      F000H 8888H EEEEH FFF0H      4 bits
    3       By Codebook                                               typically 5 to 9 bits
    4       Actual Texture Map                                        17 bits
  • Since the decoded colors from a block depend upon the number of one bits in the map, if a substituted map has a different number of one bits, the standard deviation components of the block are adjusted. For each individual block, the bitstream is written as is illustrated in FIG. 17. [0119]
  • The classification codebook contains twelve entries: eight run lengths of null blocks, one each for uniform, uniform chroma, and pattern blocks, plus an entry for escape codes. Escape codes are dependent upon the implementation and can be used to signal the end of an image, the end of a block run, skipping to a different block, and the like. [0120]
  • The luminance and chrominance codebooks contain the most often observed delta values, the luminance typically covering +25 to −25 and the chrominance −6 to +6. For values that need to be coded but are not found in the selected codebook, an “other” entry at +128 is used, followed by the value itself, using the number of bits to which the value was quantized, as illustrated in FIG. 20. [0121]
  • Typical codebooks are shown in the following tables. [0122]
    Value Bits Pattern
    −4 3 000
    −8 4 0101
    −7 4 0100
    −6 4 1010
    −5 4 0110
    −3 4 1110
    3 4 1001
    4 4 0010
    −9 5 10111
    5 5 11010
    6 5 10110
    7 5 01111
    8 5 10000
    −11 6 001100
    −10 6 110110
    9 6 111101
    10 6 111110
    11 6 001111
    12 6 001110
    13 6 110010
    −16 7 0111011
    . . . . . . . . .
    −22 10 1111000111
    −21 10 1111111101
    −20 10 1111000110
    −18 10 1100001001
    −19 11 11111110101
    −17 12 111111101001
  • Sample Luminance Codebook [0123]
  • Sample Chrominance Codebook: [0124]
    Value Bits Pattern
    −6 9 000000000
    −5 8 00000001
    −4 7 0000011
    −3 6 000010
    −2 4 0001
    −1 2 01
    0 1 1
    1 3 001
    2 6 000011
    3 7 0000001
    4 8 00000101
    5 8 00000100
    6 10 0000000011
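  • A minimal sketch of encoding and decoding with the sample chrominance codebook above is given below. It uses simple prefix-string matching; the flat link/value lookup array described later in this section is a more memory-efficient equivalent suited to the smart card.

```python
# Sketch of variable-length coding with the sample chrominance codebook above.
CHROMA_CODES = {
    "000000000": -6, "00000001": -5, "0000011": -4, "000010": -3,
    "0001": -2, "01": -1, "1": 0, "001": 1, "000011": 2,
    "0000001": 3, "00000101": 4, "00000100": 5, "0000000011": 6,
}

def encode(values):
    table = {v: code for code, v in CHROMA_CODES.items()}
    return "".join(table[v] for v in values)

def decode(bits):
    values, current = [], ""
    for b in bits:
        current += b
        if current in CHROMA_CODES:        # prefix code: the first match is the symbol
            values.append(CHROMA_CODES[current])
            current = ""
    return values

assert decode(encode([0, -1, 2, 5])) == [0, -1, 2, 5]
```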
  • Each of the four code books used in the compression process (block classification, luminance, chrominance, and group three map) must be translated into a code book lookup for the decompression process. For example, the following classification code book is translated to a code book lookup: [0125]
    Block Classification Code Book
    Index Code Bit Pattern
    0 Escape 9 011100000
    1 Uniform 1 1
    2 UniChr 2 00
    3 Pattern 9 011100001
    4 Null 3 010
    5 NullRLE2 4 0110
    6 NullRLE3 6 011101
    7 NullRLE4 6 011111
    8 NullRLE5 7 0111100
    9 NullRLE6 7 0111001
    10 NullRLE7 8 01110001
    11 NullRLE8 7 0111101
    Block Classification Code Book Lookup
    Index Link Value Code
    0 2
    1 1 Uniform
    2 22
    3 21
    4 20
    5 11
    6 8
    7 7 NullRLE4
    8 10
    9 11 NullRLE8
    10 8 NullRLE5
    11 13
    12 6 NullRLE3
    13 15
    14 9 NullRLE6
    15 17
    16 10 NullRLE7
    17 19
    18 3 Pattern
    19 0 Escape
    20 5 NullRLE2
    21 4 Null
    22 2 UniChr
  • As bits from a code book entry are retrieved from the compressed image bit stream, the code book lookup is traversed until a node value is found. Each lookup process begins at index zero. If a zero is the next bit retrieved from the bit stream, this index is used. If a one is retrieved, the index is incremented by one. If a value is found at that index, the process is complete with that value. If a link is found, it is used as the new index. In the above example, Block Classification Code Book Lookup, to find a NullRLE6 (Pattern 0111001), the search begins at index zero. A zero is retrieved from the bitstream, so that index zero is used in the lookup, resulting in the finding of a link of two. The next bit is retrieved, which is a one, so the index is incremented to three. The third bit, a one, is retrieved, so that the index is incremented to four. The fourth bit is retrieved, another one, so that the index is incremented to five. The fifth bit, a zero, is retrieved, so that the index five is used to find a link of 11. The sixth bit, a zero, is retrieved, so that the index 11 is used to find a link of 13. The seventh bit, a one, is retrieved, so that the index is incremented to 14. At index 14 a value of 9 is found, which corresponds to NullRLE6. [0126]
  • During the encoding process for each block, the mean, standard deviation, and a selection map are preserved. Those values must now be turned back into sixteen pixels that represent the four-by-four square of original pixels or delta values. Two colors “a” and “b” can be determined from the mean $\overline{X}$, the standard deviation $\overline{\sigma}$, the number of one bits in the selection map $q$, and the total number of bits in the selection map (16) $m$. The following formulas are used for each of the Y, Cr, and Cb components of the “a” color and “b” color: [0127]

$$a = \overline{X} - \overline{\sigma}\sqrt{\frac{q}{m-q}} \qquad b = \overline{X} + \overline{\sigma}\sqrt{\frac{m-q}{q}}$$
  • Where a one occurs in the selection map, the “a” color is placed in the corresponding pixel position and a “b” color is placed for each zero in the selection map. To reduce the computation required at run-time, the multiplier involving “q” and “m” has been reduced to a lookup table of coefficients for the standard deviation value “σ”. [0128]
  • For blocks encoded with an absolute central moment and selection map, the step of decompressing preferably comprises determining a first color “a” and a second color “b” for each block of pixels, based upon the absolute central moment and selection map for each block of pixels, where “x” is the sample mean (or central color value, or arithmetic mean), the number of one bits in the selection map darker than the absolute central moment is “q”, and the total number of bits in the selection map is “m”, according to the following formulas: [0129]
  • $a = x - d/q$
  • $b = x + d/(m-q)$
  • where “d” is the absolute central moment, for each of the Y, Cr, and Cb components of the “a” color and “b” color. [0130]
  • Some patterns for the selection map tend to describe gradient areas of the original image. It is also possible to utilize a map of coefficients to smooth the boundaries between the decoded “a” and “b” colors. For example, where: [0131]
  • Mean=100 [0132]
  • Standard Deviation=−10 [0133]
  • Selection Map=0x137F [0134]
  • a=100−(−10)×1.291=113 [0135]
  • b=100+(−10)×0.775=92 [0136]
  • The unadjusted values, coefficients and adjusted values are shown in the following table: [0137]
    Unadjusted Values        Coefficients                Adjusted Values
     92   92   92  113       120% 100%  80%  80%          91   92   94  110
     92   92  113  113       100%  80%  80% 100%          92   94  110  113
     92  113  113  113        80%  80% 100% 120%          94  110  113  115
    113  113  113  113        80% 100% 120% 140%         110  113  115  118
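  • A minimal sketch of recovering the two colors “a” and “b” for one component of a block, reproducing the worked example above (mean 100, standard deviation −10, selection map 0x137F), is given below; the guard for a degenerate map is an added assumption.

```python
# Sketch of recovering the "a" and "b" colours for one component of a block
# from the mean, (possibly negated) standard deviation and selection map.
import math

def decode_colours(mean, sigma, sel_map, m=16):
    q = bin(sel_map).count("1")          # number of one bits in the map
    if q in (0, m):                      # degenerate map: single colour (assumed guard)
        return round(mean), round(mean)
    a = mean - sigma * math.sqrt(q / (m - q))
    b = mean + sigma * math.sqrt((m - q) / q)
    return round(a), round(b)

print(decode_colours(100, -10, 0x137F))  # -> (113, 92), as in the example above
```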
  • The background color is retrieved from the beginning of the data stream. For each level two block, the level two data is retrieved from the data stream and decoded and added to the background color. For some blocks no level one data is stored. In those blocks, the resulting value for the level two data and background data is replicated for each of the sixteen pixels in that level one block. Where present, sixteen level one blocks will be retrieved from the data stream, decoded, and then added to the pixel value from the level two block. [0138]
  • Three types of post processing filters can be used: a depth filter (light, medium, or heavy), an edge filter (light, medium, or heavy), and a spatial filter (five-pixel light, medium, or adaptive), or a combination of any one from each of the categories. The depth filters remove the “cartoon” look from areas of the image where a single uniform color has been decoded. Small variations in the luminance component are injected into the decoded image. Variations are selected randomly from one of the following sets of sixteen values, based upon the level of filtering specified (a code sketch follows the value tables): [0139]
  • For a light level of filtering, the values typically are: [0140]
    −3 −2 −2 −1 −1 −1 0 0 0 0 1 1 1 2 2 3
  • For a medium level of filtering, the values typically are: [0141]
    −6 −4 −3 −2 −2 −1 −1 −1 1 1 1 2 2 3 4 6
  • For a heavy level of filtering, the values typically are: [0142]
    −8 −6 −5 −4 −3 −2 −1 −1 1 1 2 3 4 5 6 8
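  • A minimal sketch of the depth filter follows, injecting a randomly selected variation from the light table above into each luminance value of a uniformly decoded area; clamping to the 0-255 range is an added assumption.

```python
# Sketch of the depth filter: add a small random luminance variation, drawn
# from one of the sixteen-value tables above, to each pixel of a uniform area.
import random

LIGHT = [-3, -2, -2, -1, -1, -1, 0, 0, 0, 0, 1, 1, 1, 2, 2, 3]

def depth_filter(luma_block, values=LIGHT):
    return [min(255, max(0, y + random.choice(values))) for y in luma_block]
```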
  • The edge filters serve to mask the artifacts occurring at the edges of the four-by-four-pixel level one blocks. The pixels away from the edges are not affected. The tables in FIGS. 19A to D illustrate the three levels of filtering where a horizontal block boundary occurs between the F-J and K-O rows and a vertical boundary between the B-V and C-W columns. [0143]
  • The spatial filters operate as conventional convolution filters. A special set of convolution masks has been selected to make the filters easier to implement with less computing power. In this special set, only five pixels are used (the central target pixel and one each above, below, left, and right) and the divisors are multiples of two, typically with the sum of the values of the pixels of the spatial filter being equal to the divisor. In implementing the spatial filter, the central target pixel is matched with the target pixel of the decompressed, decoded image, and the filtered value of the target pixel is determined as the sum of the products of the five pixels of the spatial filter and the corresponding pixels of the decompressed image with relation to the target pixel of the image, divided by the divisor. Other shapes for the spatial mask may also be suitable, such as a spherical mask, for example. The following charts show the convolution masks for the light and medium five-pixel spatial filters (a code sketch follows the charts): [0144]
    Light Spatial-5 Filter (divisor 8)
      0   +1    0
     +1   +4   +1
      0   +1    0
    Medium Spatial-5 Filter (multiplier ¼, i.e. divisor 4)
      0   +1    0
     +1    0   +1
      0   +1    0
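  • A minimal sketch of applying a five-pixel mask to one target pixel follows; the mask dictionaries mirror the light and medium charts above, with the divisor taken as the sum of the mask values as stated in the text.

```python
# Sketch of applying a five-pixel spatial mask to one interior target pixel:
# the sum of products of mask weights and neighbouring decoded values,
# divided by the divisor (equal to the sum of the mask values).
LIGHT_MASK  = {(0, 0): 4, (-1, 0): 1, (1, 0): 1, (0, -1): 1, (0, 1): 1}  # divisor 8
MEDIUM_MASK = {(0, 0): 0, (-1, 0): 1, (1, 0): 1, (0, -1): 1, (0, 1): 1}  # divisor 4

def spatial5(image, r, c, mask):
    divisor = sum(mask.values())
    total = sum(w * image[r + dr][c + dc] for (dr, dc), w in mask.items())
    return total // divisor

image = [[100, 100, 100], [100, 120, 100], [100, 100, 100]]
print(spatial5(image, 1, 1, LIGHT_MASK), spatial5(image, 1, 1, MEDIUM_MASK))  # 110 100
```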
  • The adaptive filter is a combination of the light and medium versions, in which the medium filter is used where the difference between the surrounding pixels is significant. The flowchart of FIG. 21 shows one implementation of these filters. [0145]
  • Another presently preferred aspect of the method of the invention further involves the step of converting the YCrCb color space image data back to the original color image color space. In a presently preferred variation, this can comprise converting the YCrCb color space image data back to the RGB color space, and this preferably involves utilizing lookup tables of selected color values for color space conversion. In a preferred aspect, five 256-entry lookup tables are utilized. [0146]
  • With reference to FIG. 22, the present invention accordingly provides for a contactless IC smart card 30 for storing a digitally compressed image which contains image data consisting of a plurality of scan lines of pixels with scalar values. The image data is filtered and encoded according to the invention as discussed hereinabove. The smart card preferably includes an antenna 32 for receiving and transmitting data from a read/write device (not shown), a receiving circuit 34 for demodulating signals received by the antenna, a transmitting circuit 36 for modulating signals to be transmitted and driving the antenna, and an I/O control circuit 38 for serial/parallel conversion of the transmitting signals and reception signals. The smart card also includes a CPU 40 for performing read/write operations on data, including the receiving and transmission of data, as well as data processing, a ROM 42 for storing a control program or the like to operate the CPU, a RAM 44 for storing the digitally compressed image data and results of processing, and a bus 46 for interconnecting the CPU, ROM, RAM and I/O control circuit. An oscillator 48 connected to the CPU and smart card circuitry is also provided for generating an internal clock signal, and a power source 50, such as a battery, provides power to the CPU and smart card circuitry. A trigger signal line 52 may also be connected between the receiving circuit and the CPU for switching the smart card from a sleep state to an operating state by directly supplying a received trigger signal to the CPU from the receiving circuit. [0147]
  • It should be readily apparent that the smart card of the invention is also applicable to grayscale images, and other monochromatic images and chromatic image systems with pixels having scalar values. It will be apparent from the foregoing that while particular forms of the invention have been illustrated and described, various modifications can be made without departing from the spirit and scope of the invention. Accordingly, it is not intended that the invention be limited, except as by the appended claims. [0148]

Claims (80)

What is claimed is:
1. A smart card for storing a digitally compressed image, the image containing image data consisting of a plurality of scan lines of pixels with scalar values, comprising:
image data filtered by evaluation of the scalar values of individual pixels in the image with respect to neighboring pixels, said image data being statistically encoded by dividing the image into an array of blocks of pixels, and each block of pixels being encoded into a fixed number of bits that represent the pixels in the block; and
a memory storing said image data.
2. The smart card of claim 1, wherein said memory comprises a programmable microchip.
3. The smart card of claim 1, wherein said image data is filtered by evaluating each said individual pixel as a target pixel and a plurality of pixels in close proximity to the target pixel to determine an output value for the target pixel.
4. The smart card of claim 3, wherein said image data is filtered by evaluating a sequence of five pixels, including two pixels on either side of the target pixel and the target pixel itself, for each said target pixel.
5. The smart card of claim 4, wherein said image data is filtered by determining an average of the data for a window of the pixels immediately surrounding the target pixel for those pixels surrounding the target pixel that are within a specified range of values, according to the following protocol: if all five pixels are within the specified range, the output target pixel is determined to be the average of the four pixels in a raster line, two on each side of the target pixel; if the two pixels on either side are within a specified range and both sides themselves are within the range, the filtered output target pixel data is determined to be the average of the two pixels on each side of the target pixel; if the two pixels on either side of the target pixel and the target pixel itself are within a specified range, and the other two pixels on the other side are not within the specified range, the output target pixel is determined to be the average of the two neighboring pixels closest in value to the target pixel values and that fall within the specified range; if the five pixels are all increasing or decreasing, or are within a specified range, the output target pixel is determined to be the average of two pixels on whichever side of the target pixel is closest in value to the target pixel; and if the five pixels in the window do not fit into any of the prior cases, the output target pixel is unchanged.
6. The smart card of claim 1, wherein said image data is statistically encoded by dividing the image into an array of 4×4 squares of pixels, and encoding each 4×4 square of pixels into a fixed bitlength block containing a central color value, color dispersion value, and a selection map that represent the sixteen pixels in the block.
7. The smart card of claim 1, wherein said image data is converted to the YCrCb color space.
8. The smart card of claim 7, wherein said image data is converted to the YCrCb color space by converting color image data from the RGB color space.
9. The smart card of claim 8, wherein said image data is converted to the YCrCb color space by lookup tables of selected color values for color space conversion.
10. The smart card of claim 9, wherein said image data is converted to the YCrCb color space by nine 256-entry one-byte lookup tables containing the contribution that each R, G and B make towards the Y, Cr and Cb components.
11. The smart card of claim 6, wherein each said block contains a central color value and a color dispersion value.
12. The smart card of claim 11, wherein said image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, and said central color value of each said block is set to the arithmetic mean from the first sample moment of the pixels in the block.
13. The smart card of claim 6, wherein said image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining said selection map from those pixels in the block having values lighter or darker than the first sample moment.
14. The smart card of claim 6, wherein said image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining said selection map from those pixels in the block having values lighter or darker than the average of the lightest and darkest pixels in the block.
15. The smart card of claim 1, wherein said image data is statistically encoded by dividing the image into an array of 4×4 squares of pixels, and multi-level encoding the central color values of each 4×4 square of lower level blocks.
16. The smart card of claim 15, wherein multi-level encoding is repeated until from four to fifteen blocks remain on each axis of a top level of blocks.
17. The smart card of claim 16, wherein said top level of blocks is reduced to residuals from a fixed background color.
18. The smart card of claim 15, wherein each successive lower level block is reduced to the residuals from the encoded block on the level above.
19. The smart card of claim 15, wherein the pixel values are reduced to the residuals from the encoded level one blocks.
20. The smart card of claim 1, wherein said image data is statistically encoded by determining a classification of each said block, quantifying each said block, and compressing each said block by codebook compression using minimum redundancy, variable-length bit codes.
21. The smart card of claim 20, wherein each said block is classified according to a plurality of categories.
22. The smart card of claim 20, wherein each said block is classified in one of four categories: null blocks exhibiting little or no change from the higher level or previous frame, uniform blocks having a standard deviation less than a predetermined threshold, uniform chroma blocks having a significant luminance component to the standard deviation, but little chrominance deviation, and pattern blocks having significant data in both luminance and chrominance standard deviations.
23. The smart card of claim 20, wherein the number of bits to be preserved is determined for each component of the block after each said block is classified.
24. The smart card of claim 23, wherein a quantizer defining the number of bits to be preserved is determined for each component of the block according to the classification of the block to preserve a desired number of bits for the block.
25. The smart card of claim 23, wherein the number of bits for the Y and Cr/Cb components of the blocks to be preserved are determined independently for each classification.
26. The smart card of claim 25, wherein all components of each said block are preserved for pattern blocks.
27. The smart card of claim 25, wherein the mean luminance and chrominance, standard deviation luminance, and a selection map are preserved for uniform chroma blocks.
28. The smart card of claim 25, wherein all three color components of the central color value are preserved for uniform blocks.
29. The smart card of claim 25, wherein the run length of null blocks is recorded without preserving components of the null blocks.
30. The smart card of claim 22, wherein the texture map of the block is matched with one of a plurality of common pattern maps for uniform chroma and pattern classified blocks.
31. The smart card of claim 1, wherein a datastream is stored in level order in the smart card.
32. The smart card of claim 1, wherein a datastream is stored in block order in the smart card.
33. The smart card of claim 1, wherein compressed residuals are added between input pixel data and level-one decoded blocks to thereby provide loss-less digital compression of the image.
34. A smart card for storing a digitally compressed color image, the color image containing color image data consisting of a plurality of scan lines of pixels with color values, comprising:
color image data filtered by evaluation of the color values of individual pixels in the color image with respect to neighboring pixels, said image data being statistically encoded by dividing the color image into an array of blocks of pixels, and each block of pixels being encoded into a fixed number of bits that represent the pixels in the block; and
a memory storing said color image data.
35. The smart card of claim 34, wherein said memory comprises a programmable microchip.
36. The smart card of claim 34, wherein said digital color image data is converted to the YCrCb color space.
37. The smart card of claim 36, wherein said digital color image data is converted to the YCrCb color space from the RGB color space.
38. The smart card of claim 36, wherein said digital color image data is converted to the YCrCb color space by lookup tables of selected color values for color space conversion.
39. The smart card of claim 38, wherein said digital color image data is converted to the YCrCb color space by nine 256-entry one-byte lookup tables containing the contribution that each R, G and B make towards the Y, Cr and Cb components.
40. The smart card of claim 35, wherein said digital color image data is filtered by evaluating each said individual pixel as a target pixel and a plurality of pixels in close proximity to the target pixel to determine an output value for the target pixel.
41. The smart card of claim 40, wherein said digital color image data is filtered by evaluating a sequence of five pixels, including two pixels on either side of the target pixel and the target pixel itself, for each said target pixel.
42. The smart card of claim 41, wherein said digital color image data is filtered by determining an average of the data for a window of the pixels immediately surrounding the target pixel for those pixels surrounding the target pixel that are within a specified range of values, according to the following protocol: if all five pixels are within the specified range, the output target pixel is determined to be the average of the four pixels in a raster line, two on each side of the target pixel; if the two pixels on either side are within a specified range and both sides themselves are within the range, the filtered output target pixel data is determined to be the average of the two pixels on each side of the target pixel; if the two pixels on either side of the target pixel and the target pixel itself are within a specified range, and the other two pixels on the other side are not within the specified range, the output target pixel is determined to be the average of the two neighboring pixels closest in value to the target pixel values and that fall within the specified range; if the five pixels are all increasing or decreasing, or are within a specified range, the output target pixel is determined to be the average of two pixels on whichever side of the target pixel is closest in value to the target pixel; and if the five pixels in the window do not fit into any of the prior cases, the output target pixel is unchanged.
43. The smart card of claim 35, wherein background in the image being compressed is replaced with a scalar value, in order to reduce noise in the image, and to increase the visual quality of the compressed image.
44. The smart card of claim 43, wherein background in the image being compressed is replaced with a scalar value by setting an initial chromakey mask and delta values.
45. The smart card of claim 44, wherein the initial chromakey mask is determined by the pixels in the input image that are near the chromakey value.
46. The smart card of claim 45, wherein three delta components describe a rectangular region in YCrCb color space.
47. The smart card of claim 45, wherein one delta component describes a spherical region in YCrCb color space.
48. The smart card of claim 45, wherein three delta components describe a hollow cylindrical segment in HSV color space.
49. The smart card of claim 44, wherein artifacts are removed from said initial chromakey mask.
50. The smart card of claim 49, wherein the artifacts are removed by
a) initially determining said background mask set of pixels;
b) removing pixels from said mask set that have less than a predetermined threshold of neighboring pixels included in said mask set;
c) adding pixels to said mask set that have more than a predetermined threshold of neighboring pixels included in said mask set; and
repeating steps b) and c) a plurality of times.
51. The smart card of claim 49, wherein the artifacts are removed by applying a sliding linear filter of five pixels once horizontally and once vertically to adjust a plurality of target pixels of said initial chromakey mask, and adjusting each target pixel to be included in the chromakey mask if the target pixel is initially not included in the chromakey mask, the pair of pixels on either side of the target pixel are in the chromakey mask, and the target pixel is not near an edge; adjusting each target pixel to be included in the chromakey mask if the target pixel is initially not included in the chromakey mask, and the two adjacent pixels on either side of the target pixel are included in the chromakey mask; adjusting each target pixel to be included in the chromakey mask if the target pixel is initially not included in the chromakey mask, and if three of the adjacent pixels a distance of two or less pixels away from the target pixel are included in the chromakey mask; adjusting each target pixel to be excluded from the chromakey mask if the target pixel is initially included in the chromakey mask, and if both pairs of pixels on either side of the target pixel are not included in the chromakey mask.
52. The smart card of claim 34, wherein said image data is statistically encoded by dividing the color image into an array of 4×4 squares of pixels, and encoding each 4×4 square of pixels into a fixed number of bits containing a central color value, color dispersion value, and a selection map that represent the sixteen pixels in the block.
53. The smart card of claim 52, wherein each said block contains a central color value and a color dispersion value.
54. The smart card of claim 53, wherein said image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, and said central color value of each said block is set to the arithmetic mean from the first sample moment of the pixels in the block.
55. The smart card of claim 53, wherein a second sample moment of the pixels in the block is determined, and said color dispersion value of each said block is determined by determining the standard deviation from said first and second sample moments.
56. The smart card of claim 53, wherein said image data is statistically encoded by determining a first absolute moment by determining an average of the difference between said pixel values and said first sample moment, and wherein said color dispersion value is set to said first absolute moment.
57. The smart card of claim 53, wherein said image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining said selection map from those pixels in the block having values lighter or darker than the first sample moment.
58. The smart card of claim 53, wherein said image data is statistically encoded by determining a first sample moment of each block as the arithmetic mean of the pixels in the block, determining a second sample moment of the pixels in the block, and determining said selection map from those pixels in the block having values lighter or darker than the average of the lightest and darkest pixels in the block.
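Claims 52 through 58 describe block-truncation-coding style statistics for each 4×4 square. The following Python sketch computes the first and second sample moments, a standard-deviation dispersion value (the claim 55 variant; claim 56 would use the first absolute moment instead), and a selection map relative to the mean (the claim 57 variant). The packing into a fixed bit budget is not shown, and the names and sample block are assumptions.

```python
# Per-block statistics in the spirit of claims 52-58, for one colour
# component of one 4x4 block.

def encode_block(block):
    """block: flat list of the 16 pixel values of a 4x4 square."""
    n = len(block)
    m1 = sum(block) / n                          # first sample moment (mean)
    m2 = sum(p * p for p in block) / n           # second sample moment
    dispersion = max(m2 - m1 * m1, 0.0) ** 0.5   # standard deviation (claim 55)
    selection_map = [1 if p > m1 else 0 for p in block]  # lighter/darker than mean
    return m1, dispersion, selection_map

block = [10, 12, 11, 13, 40, 42, 41, 43,
         10, 11, 12, 13, 40, 41, 42, 43]
print(encode_block(block))
```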
59. The smart card of claim 54, wherein said image data is statistically encoded by encoding two levels of blocks of each 4×4 square of pixels, said two levels including level one blocks and level two blocks, and wherein said level two blocks are encoded from the central color values of the level one blocks.
60. The smart card of claim 59, wherein said level two blocks are reduced to residuals from a fixed background color.
61. The smart card of claim 59, wherein said level one blocks are reduced to residuals from decoded level two blocks.
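As a rough illustration of the two-level arrangement of claims 59 through 61, the sketch below reduces level two values (the central colors of the level one blocks) to residuals from a fixed background color, and level one central colors to residuals from a decoded level two value. The background color and sample numbers are assumptions.

```python
# Illustrative residual computations for the two-level scheme of claims 59-61.
# BACKGROUND and the sample central colours are assumed values.

BACKGROUND = 128

def level_two_residuals(level_two_values, background=BACKGROUND):
    """Claim 60: level two blocks reduced to residuals from a fixed background colour."""
    return [v - background for v in level_two_values]

def level_one_residuals(level_one_centrals, decoded_level_two):
    """Claim 61: level one blocks reduced to residuals from the decoded level two value."""
    return [c - decoded_level_two for c in level_one_centrals]

centrals = [130, 126, 131, 129]            # central colours of four level one blocks
print(level_two_residuals(centrals))        # [2, -2, 3, 1]
print(level_one_residuals(centrals, 129))   # [1, -3, 2, 0]
```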
62. A smart card for storing a digitally compressed color image, the color image containing image data consisting of a plurality of scan lines of pixels with scalar values, comprising:
image data filtered by evaluation of the scalar values of individual pixels in the image with respect to neighboring pixels;
said image data being statistically encoded by dividing the image into an array of blocks of pixels, and each block of pixels being encoded into a fixed number of bits that represent the pixels in the block by classifying each said block, quantifying each said block, and compressing each said block by codebook compression using minimum redundancy, variable-length bit codes; and
a memory storing said color image data.
63. The smart card of claim 62, wherein each said block is classified according to a plurality of categories.
64. The smart card of claim 63, wherein each said block is classified in one of four categories: null blocks exhibiting little or no change from the higher level or previous frame, uniform blocks having a standard deviation less than a predetermined threshold, uniform chroma blocks having a significant luminance component to the standard deviation, but little chrominance deviation, and pattern blocks having significant data in both luminance and chrominance standard deviations.
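For concreteness, a hedged Python sketch of the four-way classification of claim 64 follows; the thresholds and the change measure are assumptions, since the claim only requires predetermined thresholds on the luminance and chrominance deviations.

```python
# Sketch of the four block categories of claim 64.  The thresholds and the
# `change_from_reference` measure are assumptions.

NULL_T, LUMA_T, CHROMA_T = 1.0, 2.0, 2.0

def classify_block(change_from_reference, y_std, cr_std, cb_std):
    chroma_std = max(cr_std, cb_std)
    if change_from_reference < NULL_T:
        return "null"            # little or no change from higher level / previous frame
    if y_std < LUMA_T and chroma_std < CHROMA_T:
        return "uniform"         # standard deviation below threshold in every component
    if chroma_std < CHROMA_T:
        return "uniform_chroma"  # significant luminance, little chrominance deviation
    return "pattern"             # significant luminance and chrominance deviation

print(classify_block(0.2, y_std=5.0, cr_std=0.5, cb_std=0.4))   # null
print(classify_block(3.0, y_std=5.0, cr_std=0.5, cb_std=0.4))   # uniform_chroma
```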
65. The smart card of claim 62, wherein the number of bits to be preserved for the Y and Cr/Cb components of the blocks is determined independently for each classification.
66. The smart card of claim 65, wherein all components of each said block are preserved for pattern blocks.
67. The smart card of claim 65, wherein all components of a central color, standard deviation luminance, and a selection map, are preserved for uniform chroma blocks.
68. The smart card of claim 65, wherein all three color components of the central color value are preserved for uniform blocks.
69. The smart card of claim 65, wherein the run length of null blocks is recorded without preserving components of the null blocks.
70. The smart card of claim 62, wherein the texture map of the block is matched with one of a plurality of common pattern maps for uniform chroma and pattern classified blocks.
71. The smart card of claim 62, wherein said step of compressing each said block by codebook compression comprises selecting codes from multiple codebooks.
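One way to read claim 71 is that a run of symbols may be coded with whichever of several prebuilt minimum redundancy codebooks yields the fewest bits. The sketch below shows that selection step with two invented codebooks; the tables, symbols, and selector width are assumptions.

```python
# Illustrative selection of codes from multiple codebooks (claim 71).
# The codebooks and symbols are invented; real tables would be minimum
# redundancy (Huffman-style) codes tuned to the block statistics.

CODEBOOKS = {
    0: {"null": "0", "uniform": "10", "uniform_chroma": "110", "pattern": "111"},
    1: {"null": "111", "uniform": "110", "uniform_chroma": "10", "pattern": "0"},
}

def encode_symbols(symbols):
    """Pick the codebook giving the shortest bitstream and prefix its index."""
    best_id, best_bits = min(
        ((cid, "".join(book[s] for s in symbols)) for cid, book in CODEBOOKS.items()),
        key=lambda item: len(item[1]),
    )
    return format(best_id, "b") + best_bits    # one selector bit + payload

print(encode_symbols(["pattern", "pattern", "uniform_chroma", "null"]))  # '10010111'
```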
72. A smart card for storage of a digitally compressed image, comprising:
a datastream of image data stored in block order, the image data consisting of a plurality of scan lines of pixels with scalar values, the image data filtered by evaluating the scalar values of individual pixels in the image with respect to neighboring pixels, and the image data statistically encoded by dividing the image into an array of blocks of pixels, and encoding each block of pixels into a fixed number of bits that represent the pixels in the block.
73. The smart card of claim 72, wherein said datastream of image data is stored in a block order to first process those portions of the image that are most important to facial identification.
74. The smart card of claim 72, wherein said block order provides a circle group layout.
75. The smart card of claim 74, wherein the corners of the image are truncated.
76. The smart card of claim 72, wherein said block order provides an oval group layout.
77. The smart card of claim 76, wherein the corners of the image are truncated.
78. The smart card of claim 72, wherein said block order provides a bell shaped group layout.
79. The smart card of claim 78, wherein the corners of the image are truncated.
80. The smart card of claim 72, wherein said blocks are divided into groups.
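To make the block ordering of claims 72 through 80 concrete, the following sketch emits a circle group layout: blocks are visited in order of distance from the image centre, and blocks beyond a truncation radius (the corners) are dropped. The grid size and radius are assumptions; oval and bell shaped layouts would only change the distance function.

```python
# Illustrative circle group layout for the block order of claims 72-80.
# Grid size and the truncation radius are assumed values.

def circle_block_order(blocks_wide, blocks_high, truncate_radius=None):
    """Return (bx, by) block coordinates, nearest to the image centre first."""
    cx, cy = (blocks_wide - 1) / 2.0, (blocks_high - 1) / 2.0
    order = []
    for by in range(blocks_high):
        for bx in range(blocks_wide):
            d = ((bx - cx) ** 2 + (by - cy) ** 2) ** 0.5
            if truncate_radius is None or d <= truncate_radius:
                order.append((d, bx, by))
    order.sort()                          # central (facial) blocks come first
    return [(bx, by) for _, bx, by in order]

# An 8x10 grid of 4x4 blocks with the corners truncated.
print(circle_block_order(8, 10, truncate_radius=5.5)[:10])
```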
US09/836,116 1998-04-20 2001-04-16 Smart card for storage and retrieval of digitally compressed color images Abandoned US20020104891A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/836,116 US20020104891A1 (en) 1998-04-20 2001-04-16 Smart card for storage and retrieval of digitally compressed color images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/063,255 US6244514B1 (en) 1998-04-20 1998-04-20 Smart card for storage and retrieval of digitally compressed color images
US09/836,116 US20020104891A1 (en) 1998-04-20 2001-04-16 Smart card for storage and retrieval of digitally compressed color images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/063,255 Continuation-In-Part US6244514B1 (en) 1998-04-20 1998-04-20 Smart card for storage and retrieval of digitally compressed color images

Publications (1)

Publication Number Publication Date
US20020104891A1 true US20020104891A1 (en) 2002-08-08

Family

ID=22047996

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/063,255 Expired - Fee Related US6244514B1 (en) 1998-04-20 1998-04-20 Smart card for storage and retrieval of digitally compressed color images
US09/836,116 Abandoned US20020104891A1 (en) 1998-04-20 2001-04-16 Smart card for storage and retrieval of digitally compressed color images

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/063,255 Expired - Fee Related US6244514B1 (en) 1998-04-20 1998-04-20 Smart card for storage and retrieval of digitally compressed color images

Country Status (3)

Country Link
US (2) US6244514B1 (en)
EP (1) EP0952544A3 (en)
JP (1) JP2000030024A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6601231B2 (en) * 2001-07-10 2003-07-29 Lacour Patrick Joseph Space classification for resolution enhancement techniques
US20030222152A1 (en) * 2002-05-28 2003-12-04 Boley George E.S. Pre-paid debit & credit card
US20040096106A1 (en) * 2002-09-18 2004-05-20 Marcello Demi Method and apparatus for contour tracking of an image through a class of non linear filters
US20040131263A1 (en) * 2002-10-18 2004-07-08 Hiroyuki Kawamoto Image processing apparatus
US20090167906A1 (en) * 2007-12-28 2009-07-02 Altek Corporation False color suppression method for digital image
US20090183249A1 (en) * 2008-01-11 2009-07-16 Microsoft Corporation Trusted storage and display
US7714747B2 (en) 1998-12-11 2010-05-11 Realtime Data Llc Data compression systems and methods
US20100166331A1 (en) * 2008-12-31 2010-07-01 Altek Corporation Method for beautifying human face in digital image
US7777651B2 (en) 2000-10-03 2010-08-17 Realtime Data Llc System and method for data feed acceleration and encryption
US20100320274A1 (en) * 2007-02-28 2010-12-23 Caedlap Aps Electronic Payment, Information, or ID Card with a Deformation Sensing Means
US8054879B2 (en) 2001-02-13 2011-11-08 Realtime Data Llc Bandwidth sensitive data compression and decompression
US8090936B2 (en) 2000-02-03 2012-01-03 Realtime Data, Llc Systems and methods for accelerated loading of operating systems and application programs
US20120237119A1 (en) * 2005-10-04 2012-09-20 Getty Images, Inc. System and method for searching digital images
US8275897B2 (en) 1999-03-11 2012-09-25 Realtime Data, Llc System and methods for accelerated data storage and retrieval
US8504710B2 (en) 1999-03-11 2013-08-06 Realtime Data Llc System and methods for accelerated data storage and retrieval
US8692695B2 (en) 2000-10-03 2014-04-08 Realtime Data, Llc Methods for encoding and decoding data
US20150191342A1 (en) * 2013-02-19 2015-07-09 Gojo Industries, Inc. Refill container labeling
US9143546B2 (en) * 2000-10-03 2015-09-22 Realtime Data Llc System and method for data feed acceleration and encryption
EP3035230A1 (en) 2014-12-19 2016-06-22 Cardlab ApS A method and an assembly for generating a magnetic field
US10095968B2 (en) 2014-12-19 2018-10-09 Cardlabs Aps Method and an assembly for generating a magnetic field and a method of manufacturing an assembly
US10558901B2 (en) 2015-04-17 2020-02-11 Cardlab Aps Device for outputting a magnetic field and a method of outputting a magnetic field
US20230217010A1 (en) * 2022-01-05 2023-07-06 Nanning Fulian Fugui Precision Industrial Co., Ltd. Video compression method and system

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6882738B2 (en) * 1994-03-17 2005-04-19 Digimarc Corporation Methods and tangible objects employing textured machine readable data
US6782115B2 (en) 1998-04-16 2004-08-24 Digimarc Corporation Watermark holograms
US6608911B2 (en) 2000-12-21 2003-08-19 Digimarc Corporation Digitally watermaking holograms for use with smart cards
US7215832B1 (en) * 1998-11-25 2007-05-08 Fujifilm Corporation Retrieval system and image processing apparatus
US7305104B2 (en) * 2000-04-21 2007-12-04 Digimarc Corporation Authentication of identification documents using digital watermarks
US6836564B2 (en) * 2000-04-28 2004-12-28 Denso Corporation Image data compressing method and apparatus which compress image data separately by modifying color
US6923378B2 (en) * 2000-12-22 2005-08-02 Digimarc Id Systems Identification card
US20030086591A1 (en) * 2001-11-07 2003-05-08 Rudy Simon Identity card and tracking system
PT1456810E (en) 2001-12-18 2011-07-25 L 1 Secure Credentialing Inc Multiple image security features for identification documents and methods of making same
US20030211296A1 (en) * 2002-05-10 2003-11-13 Robert Jones Identification card printed with jet inks and systems and methods of making same
WO2003055638A1 (en) 2001-12-24 2003-07-10 Digimarc Id Systems, Llc Laser etched security features for identification documents and methods of making same
US6843422B2 (en) * 2001-12-24 2005-01-18 Digimarc Corporation Contact smart cards having a document core, contactless smart cards including multi-layered structure, pet-based identification document, and methods of making same
US7694887B2 (en) 2001-12-24 2010-04-13 L-1 Secure Credentialing, Inc. Optically variable personalized indicia for identification documents
EP1459246B1 (en) 2001-12-24 2012-05-02 L-1 Secure Credentialing, Inc. Method for full color laser marking of id documents
EP1459239B1 (en) 2001-12-24 2012-04-04 L-1 Secure Credentialing, Inc. Covert variable information on id documents and methods of making same
US7728048B2 (en) 2002-12-20 2010-06-01 L-1 Secure Credentialing, Inc. Increasing thermal conductivity of host polymer used with laser engraving methods and compositions
CA2476895A1 (en) * 2002-02-19 2003-08-28 Digimarc Corporation Security methods employing drivers licenses and other documents
WO2003088144A2 (en) 2002-04-09 2003-10-23 Digimarc Id Systems, Llc Image processing techniques for printing identification cards and documents
US7824029B2 (en) 2002-05-10 2010-11-02 L-1 Secure Credentialing, Inc. Identification card printer-assembler for over the counter card issuing
US7804982B2 (en) 2002-11-26 2010-09-28 L-1 Secure Credentialing, Inc. Systems and methods for managing and detecting fraud in image databases used with identification documents
DE602004030434D1 (en) 2003-04-16 2011-01-20 L 1 Secure Credentialing Inc THREE-DIMENSIONAL DATA STORAGE
WO2005010684A2 (en) 2003-07-17 2005-02-03 Digimarc Corporation Uniquely linking security elements in identification documents
US7744002B2 (en) 2004-03-11 2010-06-29 L-1 Secure Credentialing, Inc. Tamper evident adhesive and identification document including same
EP1771816A1 (en) * 2004-06-29 2007-04-11 Kanzaki Specialty Papers Inc. A multifunction, direct thermal recording material
US7676066B2 (en) * 2004-11-18 2010-03-09 Microsoft Corporation System and method for selectively encoding a symbol code in a color space
US7494051B1 (en) * 2005-08-29 2009-02-24 Day Michael A Multi-functional electronic personal organizer
TWI316814B (en) * 2006-05-12 2009-11-01 Realtek Semiconductor Corp Device for reducing impulse noise and method thereof
CN102004324B (en) * 2010-10-19 2011-10-05 深圳超多维光电子有限公司 Grating, three-dimensional display device and three-dimensional display method
US10674045B2 (en) * 2017-05-31 2020-06-02 Google Llc Mutual noise estimation for videos

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5474623A (en) * 1977-11-28 1979-06-14 Nippon Telegr & Teleph Corp <Ntt> Coding processing system for video signal
US4205341A (en) * 1978-01-24 1980-05-27 Nippon Telegraph And Telephone Public Corporation Picture signal coding apparatus
US4319267A (en) * 1979-02-16 1982-03-09 Nippon Telegraph And Telephone Public Corporation Picture coding and/or decoding equipment
SE425704B (en) * 1981-03-18 1982-10-25 Loefberg Bo DATABERARE
US4580134A (en) * 1982-11-16 1986-04-01 Real Time Design, Inc. Color video system using data compression and decompression
US4743959A (en) * 1986-09-17 1988-05-10 Frederiksen Jeffrey E High resolution color video image acquisition and compression system
GB2223614A (en) 1988-08-30 1990-04-11 Gerald Victor Waring Identity verification
US5164831A (en) 1990-03-15 1992-11-17 Eastman Kodak Company Electronic still camera providing multi-format storage of full and reduced resolution images
JP2876258B2 (en) 1991-01-23 1999-03-31 株式会社リコー Digital electronic still camera
US5268963A (en) 1992-06-09 1993-12-07 Audio Digital Imaging Inc. System for encoding personalized identification for storage on memory storage devices
US5214699 * 1992-06-09 1993-05-25 Audio Digital Imaging Inc. System for decoding and displaying personalized identification stored on memory storage device
US5872864A (en) * 1992-09-25 1999-02-16 Olympus Optical Co., Ltd. Image processing apparatus for performing adaptive data processing in accordance with kind of image
US5623552A (en) * 1994-01-21 1997-04-22 Cardguard International, Inc. Self-authenticating identification card with fingerprint identification
JP3893480B2 (en) * 1994-09-28 2007-03-14 株式会社リコー Digital electronic camera
EP0721286A3 (en) * 1995-01-09 2000-07-26 Matsushita Electric Industrial Co., Ltd. Video signal decoding apparatus with artifact reduction
US6088391A (en) * 1996-05-28 2000-07-11 Lsi Logic Corporation Method and apparatus for segmenting memory to reduce the memory required for bidirectionally predictive-coded frames
US5883823A (en) * 1997-01-15 1999-03-16 Sun Microsystems, Inc. System and method of a fast inverse discrete cosine transform and video compression/decompression systems employing the same
JP3432392B2 (en) * 1997-04-07 2003-08-04 三菱電機株式会社 Image encoding device, image encoding method, and image storage / transmission device
US6088392A (en) * 1997-05-30 2000-07-11 Lucent Technologies Inc. Bit rate coder for differential quantization
US6356588B1 (en) * 1998-04-17 2002-03-12 Ayao Wada Method for digital compression of color images

Cited By (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7714747B2 (en) 1998-12-11 2010-05-11 Realtime Data Llc Data compression systems and methods
US10033405B2 (en) 1998-12-11 2018-07-24 Realtime Data Llc Data compression systems and method
US8502707B2 (en) 1998-12-11 2013-08-06 Realtime Data, Llc Data compression systems and methods
US8643513B2 (en) 1998-12-11 2014-02-04 Realtime Data Llc Data compression systems and methods
US8717203B2 (en) 1998-12-11 2014-05-06 Realtime Data, Llc Data compression systems and methods
US9054728B2 (en) 1998-12-11 2015-06-09 Realtime Data, Llc Data compression systems and methods
US8933825B2 (en) 1998-12-11 2015-01-13 Realtime Data Llc Data compression systems and methods
US9116908B2 (en) 1999-03-11 2015-08-25 Realtime Data Llc System and methods for accelerated data storage and retrieval
US8756332B2 (en) 1999-03-11 2014-06-17 Realtime Data Llc System and methods for accelerated data storage and retrieval
US8719438B2 (en) 1999-03-11 2014-05-06 Realtime Data Llc System and methods for accelerated data storage and retrieval
US10019458B2 (en) 1999-03-11 2018-07-10 Realtime Data Llc System and methods for accelerated data storage and retrieval
US8504710B2 (en) 1999-03-11 2013-08-06 Realtime Data Llc System and methods for accelerated data storage and retrieval
US8275897B2 (en) 1999-03-11 2012-09-25 Realtime Data, Llc System and methods for accelerated data storage and retrieval
US8880862B2 (en) 2000-02-03 2014-11-04 Realtime Data, Llc Systems and methods for accelerated loading of operating systems and application programs
US8090936B2 (en) 2000-02-03 2012-01-03 Realtime Data, Llc Systems and methods for accelerated loading of operating systems and application programs
US9792128B2 (en) 2000-02-03 2017-10-17 Realtime Data, Llc System and method for electrical boot-device-reset signals
US8112619B2 (en) 2000-02-03 2012-02-07 Realtime Data Llc Systems and methods for accelerated loading of operating systems and application programs
US9667751B2 (en) 2000-10-03 2017-05-30 Realtime Data, Llc Data feed acceleration
US8717204B2 (en) 2000-10-03 2014-05-06 Realtime Data Llc Methods for encoding and decoding data
US10419021B2 (en) 2000-10-03 2019-09-17 Realtime Data, Llc Systems and methods of data compression
US10284225B2 (en) 2000-10-03 2019-05-07 Realtime Data, Llc Systems and methods for data compression
US9141992B2 (en) 2000-10-03 2015-09-22 Realtime Data Llc Data feed acceleration
US9143546B2 (en) * 2000-10-03 2015-09-22 Realtime Data Llc System and method for data feed acceleration and encryption
US9967368B2 (en) 2000-10-03 2018-05-08 Realtime Data Llc Systems and methods for data block decompression
US8742958B2 (en) 2000-10-03 2014-06-03 Realtime Data Llc Methods for encoding and decoding data
US9859919B2 (en) 2000-10-03 2018-01-02 Realtime Data Llc System and method for data compression
US8723701B2 (en) 2000-10-03 2014-05-13 Realtime Data Llc Methods for encoding and decoding data
US8692695B2 (en) 2000-10-03 2014-04-08 Realtime Data, Llc Methods for encoding and decoding data
US7777651B2 (en) 2000-10-03 2010-08-17 Realtime Data Llc System and method for data feed acceleration and encryption
US8934535B2 (en) 2001-02-13 2015-01-13 Realtime Data Llc Systems and methods for video and audio data storage and distribution
US9762907B2 (en) 2001-02-13 2017-09-12 Realtime Adaptive Streaming, LLC System and methods for video and audio data distribution
US9769477B2 (en) 2001-02-13 2017-09-19 Realtime Adaptive Streaming, LLC Video data compression systems
US8553759B2 (en) 2001-02-13 2013-10-08 Realtime Data, Llc Bandwidth sensitive data compression and decompression
US8054879B2 (en) 2001-02-13 2011-11-08 Realtime Data Llc Bandwidth sensitive data compression and decompression
US8867610B2 (en) 2001-02-13 2014-10-21 Realtime Data Llc System and methods for video and audio data distribution
US10212417B2 (en) 2001-02-13 2019-02-19 Realtime Adaptive Streaming Llc Asymmetric data decompression systems
US8073047B2 (en) 2001-02-13 2011-12-06 Realtime Data, Llc Bandwidth sensitive data compression and decompression
US8929442B2 (en) 2001-02-13 2015-01-06 Realtime Data, Llc System and methods for video and audio data distribution
US6799313B2 (en) 2001-07-10 2004-09-28 Lacour Patrick Joseph Space classification for resolution enhancement techniques
US20030208742A1 (en) * 2001-07-10 2003-11-06 Lacour Patrick Joseph Space classification for resolution enhancement techniques
US6601231B2 (en) * 2001-07-10 2003-07-29 Lacour Patrick Joseph Space classification for resolution enhancement techniques
US20030222152A1 (en) * 2002-05-28 2003-12-04 Boley George E.S. Pre-paid debit & credit card
US20040096106A1 (en) * 2002-09-18 2004-05-20 Marcello Demi Method and apparatus for contour tracking of an image through a class of non linear filters
US7272241B2 (en) * 2002-09-18 2007-09-18 Consiglio Nazionale Delle Ricerche Method and apparatus for contour tracking of an image through a class of non linear filters
US20040131263A1 (en) * 2002-10-18 2004-07-08 Hiroyuki Kawamoto Image processing apparatus
US20120237119A1 (en) * 2005-10-04 2012-09-20 Getty Images, Inc. System and method for searching digital images
US8571329B2 (en) * 2005-10-04 2013-10-29 Getty Images, Inc. System and method for searching digital images
US8061622B2 (en) * 2007-02-28 2011-11-22 Cardlab Aps Electronic payment, information, or ID card with a deformation sensing means
US20100320274A1 (en) * 2007-02-28 2010-12-23 Caedlap Aps Electronic Payment, Information, or ID Card with a Deformation Sensing Means
US20090167906A1 (en) * 2007-12-28 2009-07-02 Altek Corporation False color suppression method for digital image
US7876364B2 (en) * 2007-12-28 2011-01-25 Altek Corporation False color suppression method for digital image
US20090183249A1 (en) * 2008-01-11 2009-07-16 Microsoft Corporation Trusted storage and display
US8914901B2 (en) * 2008-01-11 2014-12-16 Microsoft Corporation Trusted storage and display
US20100166331A1 (en) * 2008-12-31 2010-07-01 Altek Corporation Method for beautifying human face in digital image
US8326073B2 (en) * 2008-12-31 2012-12-04 Altek Corporation Method for beautifying human face in digital image
US9902606B2 (en) * 2013-02-19 2018-02-27 Gojo Industries, Inc. Refill container labeling
US20150191342A1 (en) * 2013-02-19 2015-07-09 Gojo Industries, Inc. Refill container labeling
US10095968B2 (en) 2014-12-19 2018-10-09 Cardlabs Aps Method and an assembly for generating a magnetic field and a method of manufacturing an assembly
EP3035230A1 (en) 2014-12-19 2016-06-22 Cardlab ApS A method and an assembly for generating a magnetic field
US10614351B2 (en) 2014-12-19 2020-04-07 Cardlab Aps Method and an assembly for generating a magnetic field and a method of manufacturing an assembly
US10558901B2 (en) 2015-04-17 2020-02-11 Cardlab Aps Device for outputting a magnetic field and a method of outputting a magnetic field
US20230217010A1 (en) * 2022-01-05 2023-07-06 Nanning Fulian Fugui Precision Industrial Co., Ltd. Video compression method and system
US11930162B2 (en) * 2022-01-05 2024-03-12 Nanning Fulian Fugui Precision Industrial Co., Ltd. Video compression method and system

Also Published As

Publication number Publication date
EP0952544A2 (en) 1999-10-27
US6244514B1 (en) 2001-06-12
JP2000030024A (en) 2000-01-28
EP0952544A3 (en) 2000-12-20

Similar Documents

Publication Publication Date Title
US20020104891A1 (en) Smart card for storage and retrieval of digitally compressed color images
US6356588B1 (en) Method for digital compression of color images
US5377018A (en) Video compression and decompression using block selection and subdivision
EP1285399B1 (en) Enhanced compression of gray-level images
US6836564B2 (en) Image data compressing method and apparatus which compress image data separately by modifying color
US6909811B1 (en) Image processing apparatus and method and storage medium storing steps realizing such method
US6016360A (en) Method and apparatus for encoding color image data
US6697529B2 (en) Data compression method and recording medium with data compression program recorded therein
EP1341384B1 (en) Image quality control apparatus and method
US8050506B2 (en) Image enhancement device
US6427025B1 (en) Image processing with selection between reversible and irreversible compression
US7106908B2 (en) Method and apparatus for selecting a format in which to re-encode a quantized image
EP0833518B1 (en) Compression of image data with associated cost data
EP0814613A1 (en) Digital image progressive transmission
EP1100049A2 (en) Method for digital compression of color images
US20040151395A1 (en) Encoding method and arrangement for images
EP1613093A2 (en) Area mapped compressed image bit budget monitor
KR100886192B1 (en) Method of image compression for video decoding based on motion compensation
US20040091173A1 (en) Method, apparatus and system for the spatial interpolation of color images and video sequences in real time
JP2802629B2 (en) Image data compression device and image processing device
US20090202165A1 (en) Image decoding method and image decoding apparatus
JPH09163164A (en) Image processing method and image processing unit
IL151741A (en) Enhanced compression of gray level images
JPH06152971A (en) Method and device for encoding picture data

Legal Events

Date Code Title Description
AS Assignment

Owner name: WADA, AYAO, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OTTO, ANTHONY H.;REEL/FRAME:012030/0625

Effective date: 20010714

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION