
US20010016065A1 - Image processing apparatus for discriminating image field of original document plural times and method therefor - Google Patents

Image processing apparatus for discriminating image field of original document plural times and method therefor

Info

Publication number
US20010016065A1
Authority
US
United States
Prior art keywords
image
field
density
discrimination
characteristic value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/136,929
Other versions
US6424742B2 (en)
Inventor
Naofumi Yamamoto
Haruko Kawakami
Gururaj Rao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Assigned to KABUSHIKI KAISHA TOSHIBA. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RAO, GURURAJ; KAWAKAMI, HARUKO; YAMAMOTO, NAOFUMI
Publication of US20010016065A1
Application granted
Publication of US6424742B2
Anticipated expiration
Legal status: Expired - Lifetime


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/16Image preprocessing
    • G06V30/162Quantising the image signal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/18Extraction of features or characteristics of the image
    • G06V30/18105Extraction of features or characteristics of the image related to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • G06V30/41Analysis of document content
    • G06V30/414Extracting the geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10008Still image; Photographic image from scanner, fax or copier
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30176Document
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Definitions

  • the present invention relates to a method of discriminating the attribute of an image, an image processing apparatus adopting the discrimination method and an image forming apparatus.
  • the latter method excels in gradation expression, which enables a photograph or the like to be reproduced smoothly.
  • however, the latter method is of inferior resolution.
  • although each modulation method is able to realize either the resolution or the gradation expressing characteristic, both of the foregoing requirements cannot be realized simultaneously when a recording operation is performed.
  • accordingly, an image field discrimination process is performed. That is, the image to be recorded is separated into fields in which the gradation expressing characteristic is of importance (for example, photographs) and fields in which the resolution is of importance (for example, characters or line drawings). The recording method is switched in accordance with the result of the discrimination.
  • as an image field discrimination method, a method is known which uses the difference in local density change between gradation fields and character fields, or the difference in their local patterns.
  • a method has been disclosed in Jpn. Pat. Appln. KOKAI Publication No. 58-3374. The method has steps of dividing an image into small blocks, and calculating the difference between a highest density and a lowest density in each block. Moreover, if the difference is larger than a threshold value, a discrimination is made that the subject block is a character image field. If the difference is smaller than the threshold value, a discrimination is made that the subject block is a gradation image field.
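  • as a rough illustration of such a block-based max-min scheme, the following Python sketch labels each block by comparing its density range with a threshold (the block size and threshold are assumed example values, not figures from the cited publication):

```python
import numpy as np

def block_max_min_discrimination(gray, block=8, threshold=60):
    """Label a block 'character' (1) when the max-min density difference
    inside it exceeds a threshold, otherwise 'gradation' (0), following the
    prior-art scheme summarized above. Block size and threshold are
    illustrative assumptions, not values from the cited publication."""
    h, w = gray.shape
    labels = np.zeros((h // block, w // block), dtype=np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            tile = gray[by * block:(by + 1) * block,
                        bx * block:(bx + 1) * block]
            if int(tile.max()) - int(tile.min()) > threshold:
                labels[by, bx] = 1  # character image field
    return labels
```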
  • the above-mentioned method is able to perform accurate discrimination if the image is composed of only continuous gradation images, such as photographs, and characters.
  • however, it suffers from a problem of unsatisfactory discrimination accuracy in a field in which the local density changes frequently, for example a dot image.
  • a gradation image having a sharp edge thereon is incorrectly discriminated as a character field.
  • a method has been disclosed in Jpn. Pat. Appln. KOKAI Publication No. 60-204177.
  • the foregoing method has steps of Laplacian-filtering an image; binarizing the image; and performing discrimination in accordance with a shape of, for example, a 4 × 4 pattern.
  • the above-mentioned method is able to discriminate even a dot image.
  • the above-mentioned method, however, has a problem in that an edge portion of a gradation image is frequently and incorrectly discriminated as a character image.
  • among the conventional image field discrimination methods, the methods using local density information suffer from a problem in that the discrimination accuracy deteriorates for edge portions of gradation images having a micro structure similar to that of characters and for rough dot image portions.
  • Another problem arises in that a character formed by thick lines, and the inside portions of such lines, cannot easily be discriminated as a character.
  • An object of the present invention is to provide an image field discrimination method which is capable of realizing both excellent discrimination accuracy and excellent position resolving power and with which the inside portion of a character can correctly be discriminated, an image processing method using the same and an image forming apparatus.
  • an image processing apparatus comprising: field separating means for separating an original image into plural types of fields in response to a first image signal of the supplied original image obtained at a first pixel density; characteristic value calculating means for calculating a characteristic value of the original image in response to a second image signal of the original image obtained at a second pixel density higher than the first pixel density; discrimination means for discriminating an image field of the original image in accordance with the characteristic value calculated by the characteristic value calculating means, so as to correspond to the type of field separated by the field separating means; and image processing means for performing predetermined image processing, corresponding to the result of the image field discrimination performed by the discrimination means, on the second image signal.
  • the present invention having the above-mentioned structure is different from a conventional method in which discrimination of an image field is performed only one time.
  • the present invention has the structure that discrimination of a field in response to a rough signal of an original image is performed. Moreover, a characteristic value indicated by a dense signal of the original image is obtained. Then, final discrimination of the field is, in each field indicated by the rough signal, performed in accordance with the characteristic value. As a result, further accurate discrimination of a field can be performed.
  • the conventional method has the structure that discrimination of a field is performed only one time in response to a dense signal.
  • the present invention arranged to first perform macro field discrimination in accordance with the rough signal is able to prevent incorrect discrimination that an edge in the foregoing photograph field is a character.
  • in the second process, the field is discriminated in accordance with the characteristic value indicated by the dense signal, so that a micro character field within a character field and a background field are discriminated from each other.
  • the character field and the background field can individually be detected. Since high contrast similar to that of a character field is not provided for the background field, that is, since low contrast is provided for the background field, generation of noise on the background can be prevented.
  • the field discrimination according to the present invention and arranged to perform two steps is able to prevent incorrect discrimination between a photograph field and a character field, detect a background field in a character field and thus realize a background field free from any noise.
  • a method according to the present invention enables a character field and a gradation field to be discriminated from each other with satisfactory accuracy for the above-mentioned reason.
  • the discrimination of an image field according to the present invention can favorably be employed in a digital color image forming apparatus. In this case, accurate discrimination of an image field can be performed and an image in a satisfactory state can be formed.
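  • as a summary of this two-step scheme, the following self-contained sketch first separates a coarse version of the image into fields and then applies a field-dependent threshold to a per-pixel density change value computed from the dense image; the block size, downsampling factor, thresholds and the simplification to only two field types are assumptions for illustration, not the patented implementation:

```python
import numpy as np

def two_pass_discrimination(density, block=16, down=3, dd_thr=None):
    """Minimal sketch of the two-step scheme described above.
    All numbers and the reduction to two field types are illustrative
    assumptions.  `density` is a 2-D float array, 0.0 = white, 1.0 = black."""
    if dd_thr is None:
        dd_thr = {"character": 0.25, "gradation": 0.6}

    # Pass 1 (macro): build a coarse image, as a prescan at a lower pixel
    # density would, and label each block as character-like or gradation-like.
    coarse = density[::down, ::down]
    nby, nbx = coarse.shape[0] // block, coarse.shape[1] // block
    field = np.empty((nby, nbx), dtype=object)
    for by in range(nby):
        for bx in range(nbx):
            tile = coarse[by*block:(by+1)*block, bx*block:(bx+1)*block]
            # mostly white pixels plus some very dark ones -> character field
            field[by, bx] = ("character"
                             if (tile < 0.2).mean() > 0.5 and tile.max() > 0.7
                             else "gradation")

    # Pass 2 (micro): per-pixel density change value on the full-density
    # image, thresholded with the value selected by the macro field type.
    h, w = density.shape
    out = np.zeros((h, w), dtype=np.uint8)          # 2 = character pixel
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = density[y-1:y+2, x-1:x+2]
            dd = nb.max() - nb.min()
            f = field[min((y // down) // block, nby - 1),
                      min((x // down) // block, nbx - 1)]
            out[y, x] = 2 if dd > dd_thr[f] else 0
    return out
```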
  • FIG. 1 is a block diagram showing the structure of an essential portion of a digital color copying machine according to a first embodiment of the present invention
  • FIG. 2 is a block diagram showing an example of the structure of an image field discrimination section
  • FIG. 3 is a graph showing characteristics of one-pixel modulation process and a two-pixel modulation process (the relationship between density signals and pulse widths);
  • FIG. 4 is a flow chart of an operation of the digital color copying machine shown in FIG. 1;
  • FIG. 5 is a histogram about brightness values in a general document
  • FIG. 6 is a flow chart of an example of a procedure which is performed by a macro discrimination section and in which the type of a field of pixels in which connection fields of different types overlap is discriminated;
  • FIG. 7 is a block diagram showing the structure of a micro discrimination section
  • FIG. 8 is a diagram showing a method which is employed by the micro discrimination section and in which an image field is discriminated;
  • FIG. 9 is a diagram showing an example of an image of an original
  • FIG. 10 is a diagram showing an example of a result of field separation of the image of the original document shown in FIG. 9;
  • FIG. 11 is a graph showing an example of distribution of characteristic values (density change values and average densities) in a usual character field
  • FIGS. 12A and 12B are graphs showing an example of distribution of characteristic values (density change values, average density values and chroma) in characters on a background;
  • FIG. 13 is a graph showing an example of distribution of characteristic values (density change values and average densities) in a continuous gradation field
  • FIG. 14 is a graph showing an example of distribution of characteristic values (density change values and average densities) in a dot image field
  • FIGS. 15A, 15B, 15 C and 15 D are diagrams showing examples of results of micro discrimination in each field of the image of the original document shown in FIG. 9 and separated in the macro discrimination section;
  • FIG. 16 is a diagram showing a final result of the discrimination
  • FIG. 17 is a diagram showing the structure of a micro discrimination section of a digital color copying machine according to a second embodiment of the present invention.
  • FIG. 18 is a diagram showing the structure of an essential portion of a digital color copying machine according to a third embodiment of the present invention.
  • FIG. 19 is a diagram showing the structure of a micro discrimination section according to a fourth embodiment of the present invention.
  • FIG. 20 is a diagram showing an image field discrimination method adapted to the micro discrimination section according to the fourth embodiment.
  • FIG. 1 is a diagram showing an example of the structure of an essential portion of an image forming apparatus (a digital color copying machine which is hereinafter simply called an image copying machine or a copying machine) having an image processing apparatus to which an image field discrimination method according to the present invention is applied.
  • the image copying machine incorporates an image input section 1001 , a color converting section 1002 , an image field discrimination section 1004 , a filtering section 1003 , a signal selector section 1005 , an inking process section 1006 , a gradation process section 1007 and an image recording section 1008 .
  • the image field discrimination method according to the present invention is applied to the image field discrimination section 1004 . Note that editing processes including an expansion/reduction process and trimming and masking processes not shown do not concern the present invention. Therefore, sections for performing the above-mentioned processes are disposed, for example, immediately posterior to the image input section 1001 .
  • the image input section 1001 reads an image of an original document so as to produce an output of a color image signal 1101 .
  • the color image signal 1101 indicates, for example, each reflectance of R, G and B of each pixel of the original document, the output of the color image signal 1101 being produced in the form of three time-sequential signals obtained by two-dimensionally scanning information of each pixel.
  • the number of read pixels per unit length is called a pixel density.
  • the read density in this embodiment is, for example, 600 dpi, that is, 600 pixels per 25.4 mm. Note that prescanning is performed at a low density of, for example, 200 dpi in a vertical direction (in a sub-scanning direction) as described later.
  • the color converting section 1002 converts the color image signal 1101 indicating the reflectance of RGB into a color image signal 1102 denoting the density of a coloring material (for example, YMC) to be recorded.
  • the RGB reflectances and the YMC densities usually have a very complicated non-linear relationship. Therefore, a 3D table lookup method, or a method in which a 1D table lookup and a 3 × 3 matrix are combined with each other, is employed to perform the foregoing converting process.
  • Specific structures of the foregoing methods are disclosed in, for example, Jpn. Pat. Appln. KOKAI Publication No. 1-055245 and Jpn. Pat. Appln. KOKAI Publication No. 61-007774.
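  • for illustration, a conversion in the "1D table lookup plus 3 × 3 matrix" style could look like the following sketch; the lookup curve and the matrix coefficients are invented example values rather than the conversion actually used by the apparatus:

```python
import numpy as np

# Gamma-like lookup curve and mixing matrix: invented example values only.
LUT = (1.0 - np.linspace(0.0, 1.0, 256)) ** 0.45       # reflectance -> rough density
MIX = np.array([[ 1.05, -0.03, -0.02],                  # Y from (B, G, R) densities
                [-0.04,  1.08, -0.04],                  # M
                [-0.02, -0.05,  1.07]])                 # C

def rgb_to_ymc(rgb):
    """rgb: (..., 3) uint8 reflectance image in R, G, B order;
    returns (..., 3) Y, M, C density estimates."""
    dens = LUT[rgb]                           # per-channel 1-D table lookup
    ymc0 = dens[..., [2, 1, 0]]               # complements: Y <- B, M <- G, C <- R
    return np.clip(ymc0 @ MIX.T, 0.0, None)   # 3 x 3 matrix mixes the channels
```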
  • the image field discrimination section 1004 discriminates the attribute of the pixel in the supplied (color) image signal 1102 to produce an output of an image field signal 1103 .
  • the attribute of a pixel includes three types which are “character”, “edge of gradation” and “smooth gradation”. Therefore, the image field signal 1103 is a signal having any one of values of the three types.
  • the image field discrimination section 1004 incorporates a macro discrimination section 1201 and a micro discrimination section 1202 .
  • the macro discrimination section 1201 incorporates an image separator section 1211 , an image memory 1212 , a CPU 1213 , a program memory 1214 and a field signal output section 1215 .
  • the micro discrimination section 1202 incorporates a characteristic value abstracting section 1311 for abstracting a plurality of (for example, three) characteristic values, an image field discrimination section 1312 for discriminating image fields of a plurality of (for example, five) types and a discrimination signal selector section 1313 .
  • the image field discrimination section 1004 is a section to which the image field discrimination method according to the present invention is applied. Therefore, the detailed structure and operation of the image field discrimination section 1004 will be described later.
  • the filtering section 1003 subjects the YMC color image signals 1102 to a plurality of filtering processes including a sharpening process and a smoothing process in parallel.
  • in this embodiment, three processes, that is, a strong edge emphasizing process, a weak edge emphasizing process and a smoothing filter process, are performed so as to produce the results of the processes as signals 1104 , 1105 and 1106 .
  • a copying machine usually treats a document image.
  • An image of the foregoing type contains character images and gradation images mixed therein. The character image must sharply be reproduced, while the gradation of the gradation image must smoothly be reproduced. Since industrial printers and marketed printers usually use dot images to express gradation, dot image components must be removed. Therefore, the signal selector section 1005 responds to an image field signal 1103 transmitted from the image field discrimination section 1004 to selectively switch outputs of the various filtering processes from the filtering section 1003 .
  • when the image field signal 1103 indicates a character, the YMC color image signal 1102 is subjected to the strong edge emphasizing filter, and the result of the emphasizing process is output as the signal 1104 to the following section.
  • when the image field signal 1103 indicates an edge of gradation, the result obtained by subjecting the YMC color image signal 1102 to the weak edge emphasizing filter is output as the signal 1105 to the following section.
  • when the image field signal 1103 indicates smooth gradation, the YMC color image signal 1102 is subjected to the smoothing filter, and the signal 1106 obtained by removing noise and dot image components is output to the following section.
  • the filtering section 1003 may be arranged to receive the image field signal 1103 so as to selectively switch the filtering processes of the plural types in response to the image field signal 1103 .
  • the signal selector section 1005 may be omitted from the structure.
  • the inking process section 1006 converts the filtered YMC color image signals into YMCK signals.
  • black can be expressed by superimposing coloring materials in YMC
  • a general color recording process is performed by using YMCK coloring materials including a black coloring material, because the black coloring material provides a higher density than that realized by stacking the YMC coloring materials and is a low-cost material.
  • the CMY density signals are expressed as C, M and Y, and the CMYK density signals to be transmitted are expressed as C′, M′, Y′ and K′.
  • K′ = k × min(C, M, Y)
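  • a minimal sketch of this black generation step follows; the under color removal subtraction of K′ from each colorant is a common companion rule and is an assumption here, since only the K′ formula is quoted above:

```python
import numpy as np

def inking(c, m, y, k_ratio=0.6):
    """Black generation as in the formula above: K' = k * min(C, M, Y).
    Subtracting K' from each colorant (under color removal) is a common
    companion step and an assumption here; the text quotes only K'."""
    k = k_ratio * np.minimum(np.minimum(c, m), y)
    return c - k, m - k, y - k, k
```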
  • the gradation process section 1007 will now be described.
  • time for which a laser beam is turned on/off is modulated to express an intermediate density.
  • the gradation process section 1007 performs the modulation process. Specifically, a pulse signal having a width in response to the density signal is generated. In response to the pulse signal, the laser beam is turned on/off.
  • the structure is arranged such that a method in which the pulse position is shifted forwards and a method in which the pulse position is shifted rearwards can be switched.
  • the modulation includes a two-pixel modulation method and a one-pixel modulation method.
  • the two-pixel modulation method is performed such that the pulse positions for odd-numbered pixels are shifted forwards and those for even-numbered pixels are shifted rearwards.
  • the one-pixel modulation method is performed such that all of pixels are shifted forwards so as to be recorded. Since the one-pixel modulation method has a structure that pulses are turned on/off at cycles in one pixel units, recording can be performed at a resolution in one pixel units.
  • the two-pixel modulation method arranged to have cycles in two pixel units encounters deterioration in the resolution as compared with the one-pixel modulation method.
  • since the pulse width for expressing the same density can be doubled, the stability of the density can be improved.
  • the gradation expressing characteristic can be improved as compared with the one-pixel modulation.
  • An example of the relationship between density signals and recordable gradations is shown in FIG. 3. Referring to FIG. 3, a curve 11 indicates the relationship between density signals and pulse widths realized in the case of the one-pixel modulation method. A curve 12 indicates the relationship between density signals and pulse widths realized in the case of the two-pixel modulation method.
  • the one-pixel modulation method is a method suitable to record a character image, while the two-pixel modulation method is a method suitable to record a gradation image.
  • selection of the two-pixel modulation process or the one-pixel modulation process is performed in response to the image field signal 1103 .
  • when the image field signal 1103 indicates a character, the one-pixel modulation process is selected.
  • when the image field signal 1103 indicates an edge of a gradation image or a smooth section, the two-pixel modulation process is selected.
  • an image of a gradation field can be expressed with smooth gradation and a multiplicity of gradation levels.
  • a sharp image of a character field can be recorded with a high resolution.
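  • the difference between the two modulation methods can be illustrated by computing the laser-on intervals for a scanline; the pulse anchoring below is an assumed convention, chosen so that the pulses of each pixel pair merge at their shared boundary in the two-pixel mode:

```python
def pulse_intervals(densities, mode="two-pixel"):
    """Laser-on intervals (start, end) in pixel-clock units for one scanline;
    densities lie in [0, 1] and the pulse width is proportional to the
    density signal.  The edge conventions are an illustrative assumption."""
    intervals = []
    for i, d in enumerate(densities):
        if mode == "one-pixel":
            intervals.append((i, i + d))               # every pulse at the leading edge
        elif i % 2 == 0:                               # first pixel of the pair
            intervals.append((i + 1 - d, i + 1))       # grow toward the shared boundary
        else:                                          # second pixel of the pair
            intervals.append((i, i + d))               # grow from the shared boundary
    return intervals
```

  • for example, pulse_intervals([0.5, 0.5], "two-pixel") yields [(0.5, 1), (1, 1.5)], that is, a single merged pulse of width one pixel centred on the pixel boundary, whereas the one-pixel mode yields two separate pulses of width 0.5, one per pixel.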
  • the image recording section 1008 is adapted to an electrophotography method.
  • the principle of the electrophotographic method will be described briefly. Initially, a laser beam or the like is modulated in response to an image density signal. Then, the modulated laser beam is applied to a photosensitive drum. An electric charge corresponding to a quantity of applied light is generated on the photosensitive surface of the photosensitive drum. Therefore, a laser beam is applied to scan the axial direction of the photosensitive drum to correspond to the scanning position of the image signal. Moreover, the photosensitive drum is rotated so as to be scanned. Thus, a two-dimensional charge distribution corresponding to the image signals is formed on the photosensitive drum.
  • the toner electrically charged by a developing unit is allowed to adhere to the surface of the photosensitive drum.
  • toner in a quantity corresponding to the potential is allowed to adhere to the surface so that an image is formed.
  • the toner on the photosensitive drum is transferred to the surface of recording paper through a transfer belt.
  • the toner is melted by a fixing unit so as to be fixed on the recording paper. The above-mentioned process is sequentially performed for each of the YMCK toner so that a full color image is recorded on the surface of the recording paper.
  • the copying machine shown in FIG. 1 will now be described with reference to the flow chart shown in FIG. 4.
  • the copying machine according to this embodiment is arranged to perform an operation for copying an image of an original document such that the image input section 1001 reads (scans) an image two times.
  • the scanning operation is performed at a high velocity.
  • the image is read at a rough density in the sub-scanning direction.
  • a read image signal is color-converted by the color converting section 1002 , and then supplied to the macro discrimination section 1201 of the image field discrimination section 1004 (steps S 1 and S 2 ).
  • the image separator section 1211 in the macro discrimination section 1201 converts the image signal into a plurality of characteristic value signals (step S 3 ).
  • the contents of one page of the original document are written on the image memory 1212 (step S 4 ).
  • the above-mentioned process is performed simultaneously with the operation of the image input section 1001 for scanning the original document.
  • the CPU 1213 performs a field separation process (step S 5 ).
  • a result of the field separation process is stored in the image memory 1212 (step S 6 ).
  • the image input section 1001 starts second scanning of the image (step S 7 ).
  • the second image scanning operation is performed such that the image is read at a low velocity.
  • the image signal read by the image input section 1001 is subjected to a color conversion process in the color converting section 1002 (step S 8 ), and then supplied to the image field discrimination section 1004 and the filtering section 1003 .
  • the image signal supplied to the image field discrimination section 1004 is supplied to the micro discrimination section 1202 so as to be subjected to the discrimination process (step S 9 ).
  • the field signal stored in the image memory 1212 is supplied through the field signal output section 1215 of the macro discrimination section 1201 .
  • the image field signal 1103 is transmitted from the discrimination signal selector section 1313 (step S 10 ).
  • the image signal transmitted from the color converting section 1002 is allowed to pass through the filtering section 1003 , the signal selector section 1005 , the inking process section 1006 and the gradation process section 1007 , and then transmitted to the image recording section 1008 .
  • Each of the signal selector section 1005 , the inking process section 1006 and the gradation process section 1007 selects a signal and a process in response to the image field signal 1103 supplied in synchronization with execution of each process (steps S 11 to S 14 ).
  • the macro discrimination section 1201 performs field separation in accordance with the major structure of the image.
  • an image of an original document is separated into the following five types of fields.
  • the usual character field is a field which has a white background and in which characters and graphics are written on the white background. A major portion of usual documents is included in this field.
  • the characters on a background field has a gradation background which exists behind the characters.
  • the characters on a background field is a field in which the background is divided in terms of colors to emphasize or classify characters, or in which characters are superimposed on a gradation image to make a description.
  • the former case is exemplified by a catalog, while the latter case is exemplified by a map.
  • the continuous gradation field is a field having gradation, such as a human figure or a background image, and realized by recording the gradation image by a continuous gradation method, such as a silver salt photograph method or a sublimation type transfer method. Even if a dot image printing method is employed, an image field in which the dot images have sufficiently high frequencies with which dot image components are eliminated from the image signal is included in the continuous gradation field.
  • the dot gradation field is a field of an image of a human figure or a background image.
  • the dot gradation field is a field in which dot image printing is performed to express a gradation image.
  • a large portion of images is basically classified into one of the above four fields. However, some images are not covered by the foregoing classification or are difficult to classify, for example an image produced by computer graphics, such as a character string with gradation applied over its entire field. A field of this type is classified as “the other field”.
  • the image separator section 1211 separates the color image signal 1102 transmitted from the color converting section 1002 into image data in a plurality of planes in accordance with the difference in the density of peripheral pixels and a state of chroma. Separated image data is sequentially stored in the image memory 1212 .
  • the image memory 1212 has a capacity corresponding to the number of the planes of the images. Thus, the separated image signals for one page are completely stored in the image memory 1212 .
  • the image separator section 1211 calculates brightness I and chroma S from the YMC color image signal 1102 in accordance with the following formula (10):
  • the brightness value I is a quantity indicating the density of an image.
  • the brightness value I is “0” in a case of a white image, while the same is “1” in a case of a black image.
  • the chroma S is “0” in the case of an achromatic color, and increases in proportion to the degree of chromatic color. Change in the brightness value I in the scanning direction is detected, and a field in which the brightness value I changes frequently is discriminated as a dot image. A histogram of the brightness value I in a certain field is then produced.
  • As shown in FIG. 5, the histogram of a usual original document has three peaks corresponding to the background density, the gradation density and the black density. The densities at the minima between these peaks are taken as threshold values th1 and th2. Pixels other than dot-image pixels are then discriminated as follows: pixels whose brightness value I is not larger than the threshold value th1 are discriminated as background pixels, those whose brightness value is larger than th1 and not larger than th2 are discriminated as gradation pixels, and those whose brightness value is larger than th2 are discriminated as character pixels. Moreover, a threshold value th3 is set for the chroma S.
  • Pixels having a chroma S smaller than the threshold value th3 and a brightness value I larger than the threshold value th2 are discriminated as black pixels.
  • the other pixels are discriminated as gray pixels. That is, in accordance with the rough image signals of the original image, the original image is classified into seven types: a character image (an image containing dense pixels), a gradation image (an image containing pixels having a low density similar to that of a photograph), a background image (an image containing pixels having a very low density similar to that of a background), a color image (an image containing colored pixels), a gray image (an image containing gray pixels), a black image (an image containing black pixels) and a dot image (an image whose density changes frequently and greatly, as in a dot image).
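  • a compact sketch of this plane separation follows; the numeric thresholds, and the rule that a chroma at or above th3 marks a color pixel, are illustrative assumptions:

```python
import numpy as np

def separate_planes(I, S, dot_mask, th1=0.15, th2=0.55, th3=0.2):
    """Plane separation sketch following the description above.
    I: brightness in [0, 1] (0 = white, 1 = black); S: chroma; dot_mask:
    boolean array of pixels already detected as dot image from the local
    brightness change.  Threshold values are illustrative assumptions.
    Returns one binary plane per attribute, as held in the image memory."""
    nd = ~dot_mask
    return {
        "dot":        dot_mask,
        "background": nd & (I <= th1),
        "gradation":  nd & (I > th1) & (I <= th2),
        "character":  nd & (I > th2),
        "black":      (S < th3) & (I > th2),
        "color":      S >= th3,                 # assumed complement of the th3 test
        "gray":       (S < th3) & (I <= th2),
    }
```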
  • Classified image data is temporarily stored in the image memory 1212 together with the result of the classification (the information about field separation). Note that each classified image is stored as a binary image indicating whether or not each pixel has the corresponding attribute.
  • the CPU 1213 performs a field discrimination process while referring to the contents of the separated image data stored in the image memory 1212 , so that the information about field separation is modified. The modified field separation information is then written on the image memory 1212 , for example for each pixel. That is, continuous pixels in the three image fields, that is, the character image, the gradation image and the dot image, are unified so that connection fields in rectangular units are produced. Then, the characteristic value of each connection field is calculated so that the type of the field is discriminated.
  • the types of the fields include, for example, the usual character field mainly containing characters and a photograph field which is a gradation image.
  • the position and size of the connection field (information of a connection field) and the type of the field are again stored in the image memory 1212 .
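  • the unification of continuous pixels into rectangular connection fields can be sketched with a standard connected-component labelling step; using scipy here is an implementation convenience, not the circuitry of the apparatus:

```python
from scipy import ndimage

def connection_fields(plane):
    """Unify continuous pixels of one binary plane (character, gradation or
    dot) into rectangular connection fields, as described above: connected
    pixels are labelled and each component is reduced to its bounding box."""
    labeled, n_fields = ndimage.label(plane)
    boxes = []
    for sl in ndimage.find_objects(labeled):
        ys, xs = sl
        boxes.append((xs.start, ys.start,               # left, top
                      xs.stop - xs.start,               # width
                      ys.stop - ys.start))              # height
    return boxes
```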
  • pixels at which connection fields of different types overlap are subjected to, for example, a process as shown in FIG. 6.
  • the type of the pixel is discriminated.
  • in step S 401 , it is checked whether the field has been discriminated as a photograph field.
  • in steps S 405 and S 406 , the field is discriminated as a dot gradation field if a dot image exists in the foregoing field; otherwise, the field is discriminated as a continuous gradation field.
  • in steps S 407 and S 408 , the field is discriminated as characters on a background.
  • gradation image data is deleted from the field containing a gradation image and from the field discriminated in step S 408 as a continuous gradation field (steps S 402 to S 404 ).
  • Information about field separation stored in the image memory 1212 is read by the field signal output section 1215 in synchronization with a second reading signal from the image input section 1001 so as to be transmitted as a field separation signal. Since the density of pixels indicated with the information about field separation in the image memory 1212 and the density of pixels indicated with the image signal from the image input section 1001 are different from each other, the density denoted by the field separation signal is converted so that the two pixel densities are matched with each other before transmittance.
  • the field separation signal is expressed in a 3-bit signal form.
  • the relationship between the values of the 3-bit signal and the fields is as follows:
    “0”: usual character field
    “1”: characters on a background
    “2”: continuous gradation field
    “3”: dot gradation field
    “4”: other fields
  • Another arrangement may be employed in which the field separation signal is formed into a 5-bit signal and the 5-bit signals represent the five fields, respectively.
  • the micro discrimination section 1202 discriminates the field by paying attention to a micro difference in the image.
  • the detailed structure of the micro discrimination section 1202 is shown in FIG. 7.
  • the micro discrimination section 1202 comprises a characteristic value abstracting section 1311 for abstracting three characteristic values, an image field discrimination section 1312 for discriminating five types of image fields and a discrimination signal selector section 1313 .
  • the characteristic value abstracting section 1311 incorporates a density calculation section 1311 d and, for abstracting the three characteristic values, a density change value abstracting section 1311 a, an average density abstracting section 1311 b and a chroma abstracting section 1311 c.
  • the density change value abstracting section 1311 a abstracts the degree of change in the density of a portion around a pixel of interest. Initially, the density calculation section 1311 d calculates density signal D from the YMC color image signal 1102 . The density is calculated such that a weighted linear sum of the YMC density signals is calculated as indicated with the following formula (2):
  • the density change value abstracting section 1311 a calculates the change in the density in a 3 pixel × 3 pixel block in the vicinity of the pixel of interest so as to transmit a density change value signal DD.
  • the density change value signal DD can be expressed by the following formula (3), in which Max (A 1 , A 2 , . . . , An) indicates a maximum value among A 1 , A 2 , . . . , An.
  • DD = Max ( . . . )
  • the density change may be abstracted by another method, such as a BAT method disclosed in Jpn. Pat. Appln. KOKOKU Publication No. 04-05305.
  • a formula for calculating the density change value for use in the BAT method is as shown in the following formula (4):
  • DD = Max (D 1 , D 2 , . . . , D 9 ) − Min (D 1 , D 2 , . . . , D 9 )  (4)
  • although this embodiment has a structure in which the reference field is a 3 pixel × 3 pixel range,
  • another field may be employed, for example a larger 4 pixel × 4 pixel or 5 pixel × 5 pixel field, or a non-square 3 pixel × 5 pixel field. If the reference field is enlarged, the accuracy in abstracting the characteristic value is usually improved; however, the size of the hardware is enlarged. Therefore, an appropriate size adaptable to the object and the required performance must be employed.
  • the average density abstracting section 1311 b abstracts the density of the pixel of interest. That is, the average density abstracting section 1311 b receives the density signal D transmitted from the density calculation section 1311 d and calculates an average value of the density signals in the 3 pixel × 3 pixel field in the vicinity of the pixel of interest. The result of the calculation is output as an average density signal DA.
  • the average density signal DA indicates the density of a portion around the pixel of interest.
  • the chroma abstracting section 1311 c abstracts the chroma of the pixel of interest.
  • a chroma signal DS expressed with the following formula (5) is generated:
  • the chroma signal DS indicates the chroma of the pixel of interest, that is, whether or not the pixel of interest is colored. In the case of an achromatic pixel, such as white, black or gray, DS is substantially “0”. In the case of red or blue, DS is substantially the maximum value “2”.
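  • taken together, the three characteristic values for one pixel of interest can be sketched as follows; the density weights and the max-minus-min chroma rule are assumptions and do not reproduce formulas (2), (3) and (5) of the text verbatim:

```python
import numpy as np

def characteristic_values(Y, M, C, wy=0.3, wm=0.4, wc=0.3):
    """Characteristic values for the pixel of interest at the centre of a
    3 x 3 block, following the description above.  The density weights and
    the chroma rule are assumptions.  Y, M, C: 3 x 3 arrays of colorant
    densities around the pixel of interest."""
    D = wy * Y + wm * M + wc * C           # density signal: weighted linear sum
    DD = D.max() - D.min()                 # density change value in the block
    DA = D.mean()                          # average density around the pixel
    centre = np.array([Y[1, 1], M[1, 1], C[1, 1]])
    DS = centre.max() - centre.min()       # chroma: ~0 for white, gray or black
    return DD, DA, DS
```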
  • the image field discrimination section 1312 will now be described.
  • the first to fifth image field discrimination sections 1312 a to 1312 e perform image field discrimination processes suitable to the five fields separated by the image separator section 1211 of the macro discrimination section 1201 .
  • the image field discrimination sections 1312 a to 1312 e receive characteristic value signals DD, DA and DS transmitted from the characteristic value abstracting sections 1311 a to 1311 c, discriminate an image field in response to the received signals and generate an image field signal DT.
  • the first to fifth image field discrimination sections 1312 a to 1312 e have different discrimination methods as will now be described with reference to FIG. 11.
  • the threshold value T3 is made to be a value larger than the threshold value T1.
  • the fourth image field discrimination section 1312 d is arranged to make the image field signal DT to always be a value “0”. That is, the fourth image field discrimination section 1312 d discriminates that all of fields are smooth gradation fields.
  • the threshold values T1 to T7 are predetermined discrimination threshold values which must be determined appropriately for the resolution characteristics of the input system and of the color conversion process section. Appropriate threshold values will be described later.
  • the discrimination signal selector section 1313 selects one of the five types of image field signals DT transmitted from the image field discrimination sections 1312 a to 1312 e in response to the field separation signal transmitted from the field signal output section 1215 of the macro discrimination section 1201. That is, when the field separation signal is “0”, the output signal from the first image field discrimination section 1312 a is selected. When the field separation signal is “1”, “2”, “3” or “4”, the output signal from the second, third, fourth or fifth image field discrimination section, respectively, is selected so as to be transmitted as the image field signal 1103.
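  • this selection can be sketched as follows; the per-field rules and numeric thresholds are illustrative assumptions (only the fourth section's constant output is stated above):

```python
def final_image_field_signal(dd, da, ds, field_separation):
    """All five image field discrimination sections compute their own image
    field signal DT (0 = smooth gradation, 1 = edge of gradation,
    2 = character) from the characteristic values, and the selector picks
    one result according to the field separation signal (0-4).  The rules
    and thresholds below are illustrative assumptions."""
    dt = (
        2 if dd > 0.35 or da > 0.60 else 0,                  # 0: usual character field
        2 if dd > 0.50 or (da > 0.55 and ds < 0.25) else 0,  # 1: characters on a background
        1 if dd > 0.45 else 0,                               # 2: continuous gradation field
        0,                                                   # 3: dot gradation field (always smooth)
        2 if dd > 0.35 else 0,                               # 4: other fields
    )
    return dt[field_separation]
```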
  • the two steps of the field discrimination according to the present invention are performed.
  • incorrect discrimination between a photograph field and a character field can be prevented.
  • a background field in the character field can be detected so that a background field free from any noise is realized.
  • a field 1601 is a field in which a sentence is written, the field 1601 being composed of black and red characters.
  • a field 1602 is a table field in which black characters and ruled lines are drawn on a light color background which is sectioned in terms of color.
  • a field 1603 is a field to which a gradation image realized by a silver salt photograph is pasted.
  • a field 1604 is a field in which a gradation image is recorded by a dither method, that is, a dot image modulation method.
  • the image field discrimination section 1004 performs the image field discrimination process.
  • the macro discrimination section 1201 performs the field separation in accordance with the macro structural characteristics of the image. Since a reference to a wide field is made when the discrimination is performed, the field separation into the above-mentioned classifications can be performed with significantly high accuracy.
  • An example of a field separation signal denoting a result of the process performed by the macro discrimination section 1201 is shown in FIG. 10.
  • In FIG. 10, fields having field separation signal values of “0”, “1”, “2” and “3” are expressed with white (the field 1601 ), diagonal lines facing lower left (the field 1602 ), diagonal lines facing lower right (the field 1603 ) and cross diagonal lines (the field 1604 ), respectively. Note that the illustrated example of the original document is free from “the other field”.
  • the micro discrimination section 1202 discriminates the usual character field into a character field and the other field. Moreover, the micro discrimination section 1202 discriminates the characters on a background into a character field and the other field (a background field). The discrimination is performed in pixel units. The discrimination method employed by the micro discrimination section 1202 will now be described such that a characteristic value distribution in each field is described.
  • characters are generally recorded on a white background. Characters in yellow or blue having very low densities and very fine characters having sizes of 5 points or smaller are sometimes recorded.
  • An example of two-dimensional distribution between the density change value signals DD in the usual character fields and average density signals DA is shown in FIG. 14. Characters are distributed in a field 1801 shown in FIG. 14, while portions other than the characters are distributed about a field 1802 which is the center. Therefore, a boundary line 1803 causes the character field and the field other than the characters to be discriminated. The position of the boundary line corresponds to the threshold values T1 and T2 for performing discrimination.
  • the characters on a background are placed on a thin background.
  • the background is sometimes formed with thin ink
  • the background is usually formed by dot image recording in the form of a usual printed matter. Since the visibility of the characters on the background would otherwise deteriorate excessively, light colors and small characters are not usually employed for the character portion.
  • deep colors, such as black or red, and thick characters having a size of 7 points or larger are used.
  • An example of the two-dimensional distribution in the foregoing region between the density change value signals DD and the average density signals DA is shown in FIG. 15A, and an example of the two-dimensional distribution between the average density signals DA and the chroma signals DS is shown in FIG. 15B.
  • black characters are distributed in a field 1901 , while color characters are distributed in a field 1902 .
  • a distribution field of pixels in the background portion is distributed in a field 1903 .
  • the character field and the field other than the character field can be discriminated from each other by means of the threshold values T4 and T5 indicated by a boundary line 1904, as shown in FIG. 15B. If the average density DA is high even though the density change value is “0”, as shown in FIG. 15A, the threshold value T3 indicated with a boundary line 1901 enables discrimination as a character to be performed. Therefore, the inside portion of a black character, which cannot easily be discriminated by the conventional method, can correctly be discriminated.
  • FIGS. 16 and 17 Examples of two-dimensional distribution between the density change value signals DD and average density signals DA in the continuous gradation field and the dot gradation field are shown in FIGS. 16 and 17, respectively.
  • the overall density change value DD is small in the continuous gradation field, while the same is somewhat large in edge portions.
  • the threshold value T6 for performing discrimination indicated with a boundary line 1103 is used to discriminate a gradation image field and edges of the gradation image field. Since gradation is expressed with dot images in the dot gradation field as shown in FIG. 17, the density change value DD is enlarged. Since removal of the dot image component from the dot gradation field causes the quality of the image to be improved, the image field signal DT is made to be “0” regardless of the characteristic value signals DD, DA and DS.
  • since the conventional discrimination process using only the micro characteristics does not employ the field separation process according to this embodiment, the threshold values for the discrimination cannot be switched to be adaptable to the five types of regions. Therefore, the conventional method has to use discrimination threshold values obtained from the same discrimination boundary, as indicated by the boundary lines 1810, 1910, 11010 and 11110 shown in FIGS. 14 to 17. As a result, fine characters in usual character fields and edge portions in gradation fields do not satisfy the expected result of discrimination, incorrect discrimination takes place and a satisfactory discrimination accuracy cannot be obtained.
  • This embodiment having the structure that the field separation is performed by the macro discrimination process followed by selecting a micro discrimination boundary suitable to each region enables accurate discrimination to be performed with high resolving power. That is, a result of discrimination (see FIG. 14) obtained from the first image field discrimination section 1312 a of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the usual character field (having a field separation signal “0”). A result of discrimination (see FIG. 15A to 15 D) obtained from the second image field discrimination section 1312 b of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the characters on a background (having a field separation signal “1”).
  • a result of discrimination (see FIG. 16) obtained from the third image field discrimination section 1312 c of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the continuous gradation field (having a field separation signal “2”).
  • a result of discrimination (see FIG. 17) obtained from the fourth image field discrimination section 1312 d of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the dot gradation field (having a field separation signal “3”).
  • a result of discrimination obtained from the fifth image field discrimination section 1312 e of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the other fields (having a field separation signal “4”).
  • Examples of discrimination of the image of the original document shown in FIG. 9, performed by the image field discrimination sections 1312 a to 1312 e of the micro discrimination section 1202, are schematically shown in FIGS. 15A, 15B, 15C and 15D. An example of the selection of the image field signal DT performed by the discrimination signal selector section 1313 in response to the field separation signal is shown in FIG. 16.
  • FIGS. 15A, 15B, 15 C, 15 D and 16 a region having the image field signal “2”, that is, a field discriminated as the character field is expressed in black. The other fields are expressed in white.
  • FIG. 15A shows a result of discrimination performed by the first image field discrimination section 1312 a, FIG. 15B shows a result of discrimination performed by the second image field discrimination section 1312 b, and FIGS. 15C and 15D show results of discrimination performed by the third and fourth image field discrimination sections 1312 c and 1312 d.
  • the image field signal DT does not realize an accurate discrimination in the fields other than the adapted field.
  • since the discrimination signal selector section 1313 selects only the image field signal adaptable to the field separation signal, which is the result of discrimination performed by the macro discrimination section 1201, an accurate result of discrimination can be obtained as the final image field signal.
  • Another example of the structure of the micro discrimination section 1202 is shown in FIG. 17. Note that the same elements as those shown in FIG. 7 are given the same reference numerals and only the different elements will now be described.
  • the micro discrimination section 1202 incorporates a density calculation section 1311 d, three characteristic value abstracting sections, that is, the density change value abstracting section 1311 a, average density abstracting section 1311 b and the chroma abstracting section 1311 c, three threshold value registers 1401 a to 1401 c, comparators 1402 a to 1402 c and a total discrimination section 1403 .
  • in the threshold value registers 1401 a to 1401 c, five discrimination threshold values corresponding to the five types of fields are stored. One of the threshold values is selected in response to the field separation signal supplied from the macro discrimination section 1201 to the threshold value registers 1401 a to 1401 c.
  • a threshold value signal transmitted from the selected threshold value register is subjected to a comparison with characteristic value signals DD, DA and DS in the comparators 1402 a to 1402 c. Results of the comparison are transmitted as binary comparison signals.
  • the comparison signals corresponding to the characteristic values are subjected to predetermined logical calculations in the total discrimination section 1403 so that a final image field signal 1404 is transmitted.
  • the total discrimination section 1403 makes a reference to both of the supplied comparison signal and the field separation signal so as to transmit an image field signal.
  • the structure in which the micro discrimination section 1202 is composed of a single total discrimination section and a set of threshold value registers enables a discrimination process similar to that of the first embodiment to be realized. Since this modification performs the discrimination processes in common, its flexibility is lower; however, the size of the circuit can be reduced as compared with the first embodiment.
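  • the register-and-comparator arrangement of this modification can be sketched as a threshold table indexed by the field separation signal; the table values and the combining logic are assumptions for illustration:

```python
THRESHOLD_TABLE = {
    # field separation signal: (DD threshold, DA threshold, DS threshold)
    0: (0.35, 0.60, 0.25),   # usual character field
    1: (0.50, 0.55, 0.25),   # characters on a background
    2: (0.45, 0.60, 0.25),   # continuous gradation field
    3: (9.99, 9.99, 9.99),   # dot gradation field: never exceeded
    4: (0.35, 0.60, 0.25),   # other fields
}

def total_discrimination(dd, da, ds, field_separation):
    """Compare the characteristic values against the threshold set selected
    by the field separation signal and combine the binary comparison results
    into the final image field signal (0 = smooth gradation, 1 = edge of
    gradation, 2 = character).  Table values and logic are assumptions."""
    t_dd, t_da, t_ds = THRESHOLD_TABLE[field_separation]
    c_dd, c_da, c_ds = dd > t_dd, da > t_da, ds > t_ds
    if field_separation == 2:                 # continuous gradation: edge vs smooth
        return 1 if c_dd else 0
    if c_dd or (c_da and not c_ds):           # sharp density change, or dense achromatic area
        return 2                              # character
    return 0                                  # smooth gradation
```

  • compared with the first-embodiment sketch further above, only the threshold set changes with the field type while the comparison and combining logic is shared, which mirrors the reduced circuit size (and reduced flexibility) noted above.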
  • FIG. 18 shows an example of the structure of an essential portion of the color copying machine according to a third embodiment.
  • the same elements as those shown in FIG. 1 are given the same reference numerals and only different elements will now be described.
  • a color image signal of an image of an original document read by the image input section 1001 is allowed to pass through the color converting section 1002 so as to be stored in a page memory 1411 .
  • the foregoing structure is a remarkable difference from the first embodiment. Then, the structure and operation of this embodiment will be described briefly.
  • an image of an original document is read by the image input section 1001 as RGB image signals.
  • the color converting section 1002 converts the RGB image signals into color image signals indicating densities in YMC.
  • the converted YMC color image signals 1102 for one page are stored in the page memory 1411 .
  • the YMC color image signal 1102 is also supplied to a macro discrimination section 1412 .
  • the macro discrimination section 1412 has a structure similar to that of the macro discrimination section 1201 of the image field discrimination section according to the first embodiment so as to perform a similar operation.
  • the YMC color image signal 1102 supplied to the macro discrimination section 1412 is separated into image data in a plurality of planes by the image separator section 1211 shown in FIG. 2. Separated image data is sequentially stored in the image memory 1212.
  • the CPU 1213 separates the fields while the CPU 1213 makes a reference to the contents of separated image data stored in the image memory 1212 so as to write a result of separation on the image memory 1212 .
  • the image signals stored in the page memory 1411 are sequentially read from the same.
  • information of field separation stored in the image memory 1212 is read through the field signal output section 1215 . Since the pixel density indicated by the field separation information in the image memory 1212 and pixel density indicated by the image signal in the page memory ( 1411 ) are different from each other, the density indicated by the field separation signal is converted so that the densities of the both signals are matched with each other.
  • the YMC color image signal transmitted from the page memory 1411 and the field separation signal transmitted from the macro discrimination section 1412 are supplied to a micro discrimination section 1413 .
  • the structure and operation of the micro discrimination section 1413 are similar to those of the micro discrimination section 1202 according to the first embodiment. That is, characteristic value signals DD, DA and DS are, by the three characteristic value abstracting sections, generated from the supplied YMC color image signals. Then, the five image field discrimination sections 1312 generate respective image field signals from the characteristic value signals. Finally, the discrimination signal selector section 1313 selects results of discrimination (image field signals) performed by the five image field discrimination sections 1312 in response to the field separation signal transmitted from the macro discrimination section 1412 so as to transmit a final image field signal 1103 .
  • the YMC color image signal transmitted from the page memory 1411 is allowed to pass through the filtering section 1003 , the signal selector section 1005 , the inking process section 1006 and the gradation process section 1007 so as to be recorded by the image recording section 1008 .
  • the signal selector section 1005 and the gradation process section 1007 switch the process thereof in response to the image field signal 1103 transmitted from the micro discrimination section 1413 . Since the foregoing switching operation is similar to that according to the first embodiment, the switching operation is omitted from description.
  • the third embodiment is able to perform an image process similar to the image process which can be performed by the first embodiment. Therefore, the image field signal 1103 which is an accurate signal similar to that obtainable from the first embodiment can be obtained.
  • a signal process suitable to the type of the image is selected, a character field can be reproduced with a high resolution. Moreover, a gradation image field can smoothly be reproduced.
  • since the third embodiment has the structure that the color image signal is stored in the page memory 1411 , the necessity of reading and scanning the image of an original document two times, as is required in the first embodiment, can be eliminated. Therefore, the same read signals are employed both for the macro discrimination and for recording the image of the original document, and an influence of a deviation of the reading position at each scanning operation need not be considered.
  • if the capacity of the page memory 1411 and so forth is enlarged to correspond to a plurality of pages, another original document can be read immediately after one original document has been read. Therefore, in a case where an automatic document feeder or the like is used to sequentially copy an original document composed of a plurality of pages, the sequential copying operation can be performed at a high velocity.
  • although the foregoing embodiments use the change in density, the average density and the chroma as the characteristic values which are abstracted by the micro discrimination section, the characteristic values are not limited to the foregoing factors. For example, the distribution of frequencies in a block or the consistency with a predetermined pattern may be employed as a characteristic value.
  • the image copying machine has the structure that the macro discrimination section 1201 uses a macro structural characteristic of an original image to separate a character field and a gradation image field from each other in response to a supplied rough image signal of the original document. Then, a result of the image field discrimination DT performed by the micro discrimination section 1202 in response to a dense image signal of the original image is selected in accordance with the result of the separation. Then, an image field signal denoting the result of the final image field discrimination is transmitted. In response to the image field signal, the processes which must be performed by the filtering section 1003 and the gradation process section 1007 are selectively switched.
  • a character field is subjected to an edge emphasizing process and a high-resolution recording process, while a gradation image field is subjected to a multi-gradation process.
  • an image in which both of a character field and a gradation field in an image of the original document are satisfactorily reproduced can be generated and/or recorded.
  • the image field discrimination method and the image processing apparatus according to this embodiment are able to discriminate an edge of a gradation image and an edge of a character from each other, which has been difficult for the conventional technique.
  • the method and apparatus according to this embodiment are able to discriminate a character placed on a dot image and the field around the character from each other.
  • the foregoing discrimination can accurately be performed with large resolving power.
  • a fourth embodiment will now be described which is a modification of the first embodiment of the present invention. Also this modification has a structure similar to that according to the first embodiment. This embodiment is different from the first embodiment in the structure of the micro discrimination section.
  • the structure of the micro discrimination section according to this modification is shown in FIG. 22.
  • the micro discrimination section incorporates a first segment characteristic abstracting section 2201 , a second segment characteristic abstracting section 2202 , a density abstracting section 2203 , a first image field discrimination section 2204 , a second image field discrimination section 2205 , a third image field discrimination section 2206 , a fourth image field discrimination section 2207 , a fifth image field discrimination section 2208 and a signal selector section 2209 .
  • the first segment characteristic abstracting section 2201 makes a reference to an image signal in a rectangular field having a size of 5 pixels×5 pixels in the vicinity of a pixel of interest so as to detect a segment component in a vertical or horizontal direction. Then, the first segment characteristic abstracting section 2201 produces an output of a first segment characteristic value signal (SX) 2251 .
  • the foregoing signal indicates a degree of the segment structure in the rectangular field. If a vertical or a horizontal segment structure exists, a signal having a large value is transmitted. If no segment structure exists or if the density is constant, a signal having a small value is transmitted.
  • the second segment characteristic abstracting section 2202 detects a segment component in a diagonal direction existing in the rectangular field in the vicinity of the pixel of interest. Then, the second segment characteristic abstracting section 2202 transmits a second segment characteristic value signal (SB) 2252 indicating a degree of the diagonal segment structure in the rectangular field.
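  • The following Python sketch illustrates one possible way of detecting such segment components in a 5 pixel×5 pixel field. The line-detection kernels, their normalization and the function names are assumptions made for illustration only; the actual calculation of the SX and SB signals is not reproduced here.

```python
import numpy as np

# Illustrative 5x5 line-detection kernels (assumed for this sketch).
KERNEL_V = np.zeros((5, 5))
KERNEL_V[:, 2] = 1.0
KERNEL_V -= KERNEL_V.mean()                           # zero mean: flat fields give 0
KERNEL_H = KERNEL_V.T
KERNEL_D1 = np.eye(5) - np.eye(5).mean()              # diagonal "\" line
KERNEL_D2 = np.fliplr(np.eye(5)) - np.eye(5).mean()   # diagonal "/" line

def segment_characteristics(window):
    """window: 5x5 array of densities around the pixel of interest.
    Returns (SX, SB): strength of vertical/horizontal and of diagonal
    segment structures; both are small for flat or structureless fields."""
    sx = max(abs(np.sum(window * KERNEL_V)), abs(np.sum(window * KERNEL_H)))
    sb = max(abs(np.sum(window * KERNEL_D1)), abs(np.sum(window * KERNEL_D2)))
    return sx, sb

# A window containing a vertical stroke responds strongly in SX and weakly in SB.
w = np.zeros((5, 5))
w[:, 2] = 1.0
print(segment_characteristics(w))
```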
  • the density abstracting section 2203 abstracts a density component (DD) 2254 in accordance with the following formula:
  • the image field discrimination sections will now be described.
  • the first to fifth image field discrimination sections perform image field discrimination processes adaptable to five types of fields separated by the field separator section, similarly to those of the first embodiment.
  • Each of the image field discrimination sections receives the characteristic value signal transmitted from the characteristic value abstracting section so as to discriminate the image field signal.
  • a discrimination process which is performed in each image field discrimination section is shown in FIG. 23.
  • This modification has a structure in which the micro discrimination section employs a discrimination method based on the linearity of an image element. Therefore, the accuracy of discriminating a character and a dot image from each other can be improved, and this modification is suitable when reproducibility of fine characters is required.
  • the fourth embodiment has the structure that the segment components in the main scanning direction and the sub-scanning direction are detected. Therefore, the background field can be detected even more accurately. That is, if the discrimination is performed simply in accordance with a change in the density or the like, there is apprehension that incorrect discrimination is performed such that a background in a constant color is discriminated as a character field. Since the distribution of the segments (edges) in the main scanning direction and the sub-scanning direction is detected, a character field can correctly be detected because a character image contains a multiplicity of segments (edge components).
  • the present invention is able to perform accurate image field discrimination of a supplied image with large resolving power.
  • a macro structure characteristic of an image is used to separate a character field and a gradation image field from each other. Then, a result of an image field discrimination suitable for each of the separated fields is selected so as to obtain a result of final image field discrimination.
  • discrimination between an edge of a gradation image and an edge of a character, and discrimination between a character placed on a dot image and the field around the character, which have been difficult for the conventional technique, can accurately be performed with large resolving power.
  • a character field is subjected to the edge emphasizing process and a high resolution recording process.
  • the gradation field is subjected to the multi-gradation process.
  • the processes are selectively performed.
  • images of both of the character field and the gradation field can satisfactorily be recorded.

Abstract

An image processing apparatus incorporating a field separating section for separating an original image into plural types of fields in response to a first image signal obtained at a rough density of the supplied original image, a characteristic value calculating section for calculating a characteristic value of the original image in response to a second image signal of the original image obtained at a density which is higher than the rough density, and a discrimination section for discriminating an image field of the original image in accordance with the characteristic value to correspond to the type of the field.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates to a method of discriminating the attribute of an image, an image processing apparatus adopting the discrimination method and an image forming apparatus. [0001]
  • Recently, when an image incorporating characters and gradation images mixed therein is treated, recording for obtaining a hard copy is an issue that needs to be resolved. An electrophotographic method is in common use as a method of recording a digital image. The foregoing method is only able to express a density of two to several levels per picture point to be recorded. Therefore, when a gradation image is expressed, a pulse width modulation method or the like must be employed. The pulse width modulation method is broadly divided into a one-pixel modulation method and a two-pixel modulation method in terms of the period of pulses in the pulse width modulation. Although the former method is able to clearly record characters because of its excellent resolution, it is inferior in gradation expressing characteristic. On the other hand, the latter method excels in gradation expressing characteristic, which enables a photograph or the like to be smoothly reproduced. However, the latter method is inferior in resolution. Although the modulation methods are able to realize either the resolution or the gradation expressing characteristic, both of the foregoing requirements cannot simultaneously be realized when a recording operation is performed. [0002]
  • To record an image such that both of the resolution and the gradation expressing characteristic are realized, an image field discrimination process is performed. That is, an image which must be recorded is discriminated into fields, for example, photographs, in which the gradation expressing characteristic is of importance and fields, for example, characters or line drawings, in which the resolution is of importance. In accordance with a result of the discrimination, the recording method is switched. [0003]
  • As an image field discrimination method, a method is known in which the difference in the change in local densities between gradation fields and character fields, or the difference in the local patterns, is used. As an example of the former method, a method has been disclosed in Jpn. Pat. Appln. KOKAI Publication No. 58-3374. The method has steps of dividing an image into small blocks and calculating the difference between the highest density and the lowest density in each block. If the difference is larger than a threshold value, a discrimination is made that the subject block is a character image field. If the difference is smaller than the threshold value, a discrimination is made that the subject block is a gradation image field. The above-mentioned method is able to perform accurate discrimination if the image is composed only of continuous gradation images, such as photographs, and characters. However, there arises a problem of unsatisfactory discrimination accuracy in a field, for example, a dot image, in which local density change frequently occurs. There arises another problem in that a gradation image having a sharp edge thereon is incorrectly discriminated as a character field. As an example of the latter method, a method has been disclosed in Jpn. Pat. Appln. KOKAI Publication No. 60-204177. The foregoing method has steps of Laplacian-filtering an image, binarizing the image, and performing discrimination in accordance with a shape of, for example, a 4×4 pattern. The above-mentioned method is able to discriminate even a dot image. However, the above-mentioned method also has a problem in that an edge portion on a gradation image is frequently incorrectly discriminated as a character image. [0004]
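  • For reference, the former block-based method can be sketched as follows. The block size and the threshold value are illustrative placeholders, not values taken from the cited publication.

```python
import numpy as np

def classify_blocks(image, block=8, threshold=0.3):
    """Classify each block of a grayscale image (values in [0, 1]) as a
    character field (True) or a gradation field (False) from the difference
    between the highest and lowest density in the block."""
    h, w = image.shape
    result = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            tile = image[by*block:(by+1)*block, bx*block:(bx+1)*block]
            result[by, bx] = (tile.max() - tile.min()) > threshold
    return result

# Example: a synthetic image with one high-contrast block and smooth blocks.
img = np.zeros((16, 16))
img[0:8, 0:8] = np.tile([0.0, 1.0], (8, 4))   # high local contrast ("character")
img[8:16, 8:16] = 0.5                          # flat area ("gradation")
print(classify_blocks(img))
```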
  • If the foregoing methods are combined with each other or if a method is furthermore employed in which correction is performed in accordance with results of discrimination of surrounding pixels by using a characteristic that an image field is constant over a somewhat wide area, the discrimination accuracy can be improved. However, the scale of the circuit results in limitation of reference fields to several pixels. Thus, a satisfactory discrimination accuracy cannot be realized. [0005]
  • As described above, the method using information of the local density among the conventional image field discrimination methods suffers from a problem in that the discrimination accuracy deteriorates at an edge portion of a gradation image having a micro structure similar to that of a character and at a rough dot image portion. Another problem arises in that a character formed of thick lines and the inside portion of such lines cannot easily be discriminated as a character. [0006]
  • The known method of macroscopically analyzing the structure of a document image, which is able to accurately discriminate existence of a character or a gradation image, suffers from unsatisfactory position-resolving power. Thus, accurate discrimination in pixel unit cannot easily be performed. [0007]
  • BRIEF SUMMARY OF THE INVENTION
  • An object of the present invention is to provide an image field discrimination method which is capable of realizing both excellent discrimination accuracy and excellent position resolving power and with which an inside portion of a character can correctly be discriminated, an image processing method using the same and an image forming apparatus. [0008]
  • According to one aspect of the present invention, there is provided an image processing apparatus comprising: field separating means for separating an original image into plural types of fields in response to a first image signal obtained at a first density of the supplied original image; characteristic value calculating means for calculating a characteristic value of the original image in response to a second image signal of the original image obtained at a second density which is higher than the first density; discrimination means for discriminating an image field of the original image in accordance with the characteristic value calculated by the characteristic value calculating means to correspond to the type of the field separated by the field separating means; and image processing means for performing, on the second image signal, a predetermined image process corresponding to a result of the discrimination of the image field performed by the discrimination means. [0009]
  • The present invention having the above-mentioned structure is different from a conventional method in which discrimination of an image field is performed only one time. The present invention has the structure that discrimination of a field is first performed in response to a rough signal of an original image. Moreover, a characteristic value indicated by a dense signal of the original image is obtained. Then, final discrimination of the field is performed, in each field indicated by the rough signal, in accordance with the characteristic value. As a result, more accurate discrimination of a field can be performed. [0010]
  • That is, the conventional method has the structure that discrimination of a field is performed only one time in response to a dense signal. Thus, incorrect discrimination of an edge in a photograph field as a character field can easily be made. However, the present invention arranged to first perform macro field discrimination in accordance with the rough signal is able to prevent incorrect discrimination that an edge in the foregoing photograph field is a character. Since the second process is performed such that the field is discriminated in accordance with the characteristic value indicated by the dense signal, a micro character field in a character field and a background field are discriminated from each other. Thus, the character field and the background field can individually be detected. Since high contrast similar to that of a character field is not provided for the background field, that is, since low contrast is provided for the background field, generation of noise on the background can be prevented. [0011]
  • Therefore, the field discrimination according to the present invention and arranged to perform two steps is able to prevent incorrect discrimination between a photograph field and a character field, detect a background field in a character field and thus realize a background field free from any noise. [0012]
  • Also a method according to the present invention enables both of a character field and a gradation field to satisfactorily accurately be discriminated from each other because of the above-mentioned reason. [0013]
  • The discrimination of an image field according to the present invention can favorably be employed in a digital color image forming apparatus. In this case, accurate discrimination of an image field can be performed and an image in a satisfactory state can be formed. [0014]
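  • The two-step discrimination described above can be summarized, at a very high level, by the following sketch. All function names are placeholders standing for the processing blocks described in the embodiments below; the sketch only illustrates the control flow, not an actual implementation.

```python
def copy_document(scan_rough, scan_fine, macro_separate, micro_rules, process):
    """Two-pass image field discrimination.

    scan_rough()      -> low-density image signal of the whole page
    scan_fine()       -> iterator of (pixel_position, dense_signal)
    macro_separate(x) -> map from pixel position to field type (0..4)
    micro_rules       -> one per-pixel discrimination rule per field type
    process(...)      -> recording process selected by the final field signal
    """
    # First pass: macro field separation on the rough signal.
    field_map = macro_separate(scan_rough())
    # Second pass: per-pixel micro discrimination on the dense signal,
    # using the rule that matches the macro result for that pixel.
    for position, dense_signal in scan_fine():
        field_type = field_map[position]
        image_field_signal = micro_rules[field_type](dense_signal)
        process(position, dense_signal, image_field_signal)
```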
  • Additional objects and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out hereinafter. [0015]
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate presently preferred embodiments of the invention, and together with the general description given above and the detailed description of the preferred embodiments given below, serve to explain the principles of the invention. [0016]
  • FIG. 1 is a block diagram showing the structure of an essential portion of a digital color copying machine according to a first embodiment of the present invention; [0017]
  • FIG. 2 is a block diagram showing an example of the structure of an image field discrimination section; [0018]
  • FIG. 3 is a graph showing characteristics of one-pixel modulation process and a two-pixel modulation process (the relationship between density signals and pulse widths); [0019]
  • FIG. 4 is a flow chart of an operation of the digital color copying machine shown in FIG. 1; [0020]
  • FIG. 5 is a histogram about brightness values in a general document; [0021]
  • FIG. 6 is a flow chart of an example of a procedure which is performed by a macro discrimination section and in which the type of a field of pixels in which connection fields of different types overlap is discriminated; [0022]
  • FIG. 7 is a block diagram showing the structure of a micro discrimination section; [0023]
  • FIG. 8 is a diagram showing a method which is employed by the micro discrimination section and in which an image field is discriminated; [0024]
  • FIG. 9 is a diagram showing an example of an image of an original; [0025]
  • FIG. 10 is a diagram showing an example of a result of field separation of the image of the original document shown in FIG. 9; [0026]
  • FIG. 11 is a graph showing an example of distribution of characteristic values (density change values and average densities) in a usual character field; [0027]
  • FIGS. 12A and 12B are graphs showing an example of distribution of characteristic values (density change values, average density values and chroma) in characters on a background; [0028]
  • FIG. 13 is a graph showing an example of distribution of characteristic values (density change values and average densities) in a continuous gradation field; [0029]
  • FIG. 14 is a graph showing an example of distribution of characteristic values (density change values and average densities) in a dot image field; [0030]
  • FIGS. 15A, 15B, 15C and 15D are diagrams showing examples of results of micro discrimination in each field of the image of the original document shown in FIG. 9 and separated in the macro discrimination section; [0031]
  • FIG. 16 is a diagram showing a final result of the discrimination; [0032]
  • FIG. 17 is a diagram showing the structure of a micro discrimination section of a digital color copying machine according to a second embodiment of the present invention; [0033]
  • FIG. 18 is a diagram showing the structure of an essential portion of a digital color copying machine according to a third embodiment of the present invention; [0034]
  • FIG. 19 is a diagram showing the structure of a micro discrimination section according to a fourth embodiment of the present invention; and [0035]
  • FIG. 20 is a diagram showing an image field discrimination method adapted to the micro discrimination section according to the fourth embodiment. [0036]
  • DETAILED DESCRIPTION OF THE INVENTION
  • Embodiments of the present invention will now be described with reference to the drawings. [0037]
  • First Embodiment [0038]
  • FIG. 1 is a diagram showing an example of the structure of an essential portion of an image forming apparatus (a digital color copying machine which is hereinafter simply called an image copying machine or a copying machine) having an image processing apparatus to which an image field discrimination method according to the present invention is applied. [0039]
  • The image copying machine incorporates an image input section 1001, a color converting section 1002, an image field discrimination section 1004, a filtering section 1003, a signal selector section 1005, an inking process section 1006, a gradation process section 1007 and an image recording section 1008. The image field discrimination method according to the present invention is applied to the image field discrimination section 1004. Note that editing processes including an expansion/reduction process and trimming and masking processes not shown do not concern the present invention. Therefore, sections for performing the above-mentioned processes are disposed, for example, immediately posterior to the image input section 1001. [0040]
  • The [0041] image input section 1001 reads an image of an original document so as to produce an output of a color image signal 1101. The color image signal 1101 indicates, for example, each reflectance of R, G and B of each pixel of the original document, the output of the color image signal 1101 being produced in the form of three time-sequential signals obtained by two-dimensionally scanning information of each pixel. At this time, the number of read pixels per unit length is called a pixel density. The read density in this embodiment is, for example, 600 dpi, that is, 600 pixels per 25.4 mm. Note that prescanning is performed at a low density of, for example, 200 dpi in a vertical direction (in a sub-scanning direction) as described later.
  • The color converting section 1002 converts the color image signal 1101 indicating the reflectance of RGB into a color image signal 1102 denoting the density of a coloring material (for example, YMC) to be recorded. The reflectance of RGB and the density of YMC usually hold a very complicated non-linear relationship. Therefore, a 3D table lookup method or a method in which a 1D table lookup and a 3×3 matrix are combined with each other is employed to perform the foregoing converting process. Specific structures of the foregoing methods are disclosed in, for example, Jpn. Pat. Appln. KOKAI Publication No. 1-055245 and Jpn. Pat. Appln. KOKAI Publication No. 61-007774. [0042]
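  • As one possible illustration of the latter approach (a 1D table lookup combined with a 3×3 matrix), the sketch below first converts each RGB reflectance to a density through a lookup table and then mixes the channels with a matrix. The table contents and the matrix coefficients are illustrative assumptions only; the actual conversion methods are described in the cited publications.

```python
import numpy as np

# 1D lookup table: reflectance (0..255) -> density, roughly -log10(reflectance),
# normalized to [0, 1]. Illustrative only.
LUT = -np.log10(np.clip(np.arange(256) / 255.0, 1e-3, 1.0))
LUT = np.clip(LUT / LUT.max(), 0.0, 1.0)

# Illustrative 3x3 color-mixing matrix (near-identity with small cross terms).
MATRIX = np.array([[1.00, -0.05, -0.05],   # C mainly from the R-channel density
                   [-0.05, 1.00, -0.05],   # M mainly from the G-channel density
                   [-0.05, -0.05, 1.00]])  # Y mainly from the B-channel density

def rgb_reflectance_to_ymc_density(rgb):
    """rgb: array-like of shape (..., 3) with values 0..255.
    Returns (C, M, Y) densities in roughly [0, 1]."""
    densities = LUT[np.asarray(rgb, dtype=int)]      # 1D table lookup per channel
    return np.clip(densities @ MATRIX.T, 0.0, 1.0)   # 3x3 matrix mixing

print(rgb_reflectance_to_ymc_density([255, 128, 0]))  # no blue -> mainly yellow density
```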
  • The image [0043] field discrimination section 1004 discriminates the attribute of the pixel in the supplied (color) image signal 1102 to produce an output of an image field signal 1103. In this embodiment, the attribute of a pixel includes three types which are “character”, “edge of gradation” and “smooth gradation”. Therefore, the image field signal 1103 is a signal having any one of values of the three types.
  • The schematic structure of the image [0044] field discrimination section 1004 will now be described. As shown in FIG. 5, the image field discrimination section 1004 incorporates a macro discrimination section 1201 and a micro discrimination section 1202. The macro discrimination section 1201 incorporates an image separator section 1211, an image memory 1212, a CPU 1213, a program memory 1214 and a field signal output section 1215. The micro discrimination section 1202 incorporates a characteristic value abstracting section 1311 for abstracting a plurality of (for example, three) characteristic values, an image field discrimination section 1312 for discriminating image fields of a plurality of (for example, five) types and a discrimination signal selector section 1313. The image field discrimination section 1004 is a section to which the image field discrimination method according to the present invention is applied. Therefore, the detailed structure and operation of the image field discrimination section 1004 will be described later.
  • The filtering section 1003 subjects the YMC color image signals 1102 to a plurality of filtering processes including a sharpening process and a smoothing process in parallel. In this embodiment, three processes including a strong edge emphasizing process, a weak edge emphasizing process and a smoothing filter process are performed so as to produce results of the processes as signals 1104, 1105 and 1106. [0045]
  • A copying machine usually treats a document image. An image of the foregoing type contains character images and gradation images mixed therein. The character image must sharply be reproduced, while the gradation of the gradation image must smoothly be reproduced. Since industrial printers and marketed printers usually use dot images to express gradation, dot image components must be removed. Therefore, the [0046] signal selector section 1005 responds to an image field signal 1103 transmitted from the image field discrimination section 1004 to selectively switch outputs of the various filtering processes from the filtering section 1003. When the image field signal 1103 indicates a character, the YMC color image signal 1102 is subjected to the strong edge emphasizing filter so as to produce an output of a result of the emphasizing process as the signal 1104 to a following section. When the image field signal 1103 indicates an edge of gradation, an output of a result obtained by subjecting the YMC color image signal 1102 to the weak edge emphasizing filter is produced as a signal 1105 to the following section. When the image field signal 1103 indicates smooth gradation, the YMC color image signal 1102 is subjected to the smoothing filter so as to produce an output of a signal 1106 obtained by removing noise and dot image components to the following section. As a result, the character image can sharply be reproduced and the gradation image can smoothly be reproduced.
  • Note that the [0047] filtering section 1003 may be arranged to receive the image field signal 1103 so as to selectively switch the filtering processes of the plural types in response to the image field signal 1103. In the foregoing case, the signal selector section 1005 may be omitted from the structure.
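  • The selection performed by the signal selector section 1005 can be pictured as in the following sketch. The three filter kernels stand in for the strong edge emphasizing, weak edge emphasizing and smoothing processes and are illustrative assumptions, not the actual filter coefficients; the image field signal values follow the convention DT=0 (smooth gradation), DT=1 (edge of gradation) and DT=2 (character) used later in this description.

```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative kernels (assumed): strong edge emphasis, weak edge emphasis, smoothing.
STRONG = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
WEAK = np.array([[0, -0.5, 0], [-0.5, 3, -0.5], [0, -0.5, 0]], dtype=float)
SMOOTH = np.full((3, 3), 1.0 / 9.0)

SMOOTH_GRADATION, GRADATION_EDGE, CHARACTER = 0, 1, 2

def select_filtered_signal(channel, field_signal):
    """channel: 2D array (one of Y, M, C); field_signal: per-pixel image field
    signal. For each pixel, the output of the filter matching its field signal
    is selected."""
    outputs = {
        CHARACTER: convolve(channel, STRONG),
        GRADATION_EDGE: convolve(channel, WEAK),
        SMOOTH_GRADATION: convolve(channel, SMOOTH),
    }
    result = np.empty_like(channel)
    for value, filtered in outputs.items():
        result[field_signal == value] = filtered[field_signal == value]
    return result

# Small demonstration: an edge region keeps the weak-edge output, the rest is smoothed.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
fs = np.full((6, 6), SMOOTH_GRADATION)
fs[:, 2:4] = GRADATION_EDGE
print(select_filtered_signal(img, fs))
```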
  • The inking process section 1006 converts the filtered YMC color image signals into YMCK signals. Although black can be expressed by superimposing coloring materials in YMC, a general color recording process is performed by using YMCK coloring materials including a black coloring material, because the black coloring material achieves a higher density than that realized by stacking YMC coloring materials and the black coloring material is a low cost material. [0048]
  • As a specific converting method, a UCR (Under Color Reduction) method and a GCR method are known and actually used. A calculating formula of the GCR method is expressed in the following formula (1). In formula (1), the CMY density signals are expressed as C, M and Y, and the CMYK density signals to be transmitted are expressed as C′, M′, Y′ and K′. [0049]
  • K′=k·min (C, M, Y)
  • C′=(C−K′)/(1−K′)
  • M′=(M−K′)/(1−K′)
  • Y′=(Y−K′)/(1−K′)  (1)
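  • Formula (1) can be transcribed directly as follows, assuming CMY density signals normalized to the range [0, 1] and a black generation ratio k; the guard against division by zero for solid black is an addition made for the sketch.

```python
def gcr(c, m, y, k_ratio=1.0):
    """Gray component replacement per formula (1): the black component K' is a
    fraction of the minimum of C, M and Y, and the remaining CMY components are
    rescaled accordingly. Inputs and outputs are densities in [0, 1]."""
    k = k_ratio * min(c, m, y)
    if k >= 1.0:                      # added guard: solid black would divide by zero
        return 0.0, 0.0, 0.0, 1.0
    scale = 1.0 / (1.0 - k)
    return (c - k) * scale, (m - k) * scale, (y - k) * scale, k

print(gcr(0.8, 0.6, 0.5))  # -> (C', M', Y', K') = (0.6, 0.2, 0.0, 0.5)
```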
  • The [0050] gradation process section 1007 will now be described. When an electrophotograph image is recorded, time for which a laser beam is turned on/off is modulated to express an intermediate density. The gradation process section 1007 performs the modulation process. Specifically, a pulse signal having a width in response to the density signal is generated. In response to the pulse signal, the laser beam is turned on/off. The structure is arranged such that a method in which the pulse position is shifted forwards and a method in which the pulse position is shifted rearwards can be switched.
  • The modulation includes a two-pixel modulation method and a one-pixel modulation method. The two-pixel modulation method is performed such that the pulse positions for odd-numbered pixels are shifted forwards and those for even-numbered pixels are shifted rearwards. On the other hand, the one-pixel modulation method is performed such that all of the pixels are shifted forwards so as to be recorded. Since the one-pixel modulation method has a structure in which pulses are turned on/off at cycles in one-pixel units, recording can be performed at a resolution in one-pixel units. On the other hand, the two-pixel modulation method, arranged to have cycles in two-pixel units, encounters deterioration in the resolution as compared with the one-pixel modulation method. However, since the pulse width for expressing the same density can be doubled, the stability of the density can be improved. Thus, the gradation expressing characteristic can be improved as compared with the one-pixel modulation. An example of the relationship between density signals and recordable gradations is shown in FIG. 6. Referring to FIG. 6, a curve 11 indicates the relationship between density signals and pulse widths realized in the case of the one-pixel modulation method. A curve 12 indicates the relationship between density signals and pulse widths realized in the case of the two-pixel modulation. The one-pixel modulation method is a method suitable to record a character image, while the two-pixel modulation method is a method suitable to record a gradation image. [0051]
  • In this embodiment, selection of the two-pixel modulation process or the one-pixel modulation process is performed in response to the [0052] image field signal 1103. Specifically, when the image field signal 1103 indicates a character, the one-pixel modulation process is selected. When the image field signal 1103 indicates an edge of a gradation image or a smooth section, the two-pixel modulation process is selected. As a result, an image of a gradation field can be expressed with smooth gradation and a multiplicity of gradation levels. A sharp image of a character field can be recorded with a high resolution.
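  • The switching between the one-pixel and two-pixel modulation methods in response to the image field signal 1103 can be sketched as follows. The mapping from density to pulse width is a simplification; only the selection and pulse-alignment logic follows the description above.

```python
SMOOTH_GRADATION, GRADATION_EDGE, CHARACTER = 0, 1, 2

def pulse_for_pixel(pixel_index, density, field_signal):
    """Return (pulse_width, alignment) for one pixel; density is in [0, 1].

    Character pixels use one-pixel modulation: every pulse is aligned to the
    front of its own pixel (resolution priority). Gradation pixels use
    two-pixel modulation: odd-numbered pixels are shifted forwards and
    even-numbered pixels rearwards so that neighbouring pulses join into one
    wider pulse (gradation stability priority)."""
    width = max(0.0, min(1.0, density))      # simplified density-to-width mapping
    if field_signal == CHARACTER:
        return width, "front"                # one-pixel modulation
    return width, "front" if pixel_index % 2 == 1 else "rear"  # two-pixel modulation

for i, (d, f) in enumerate([(0.9, CHARACTER), (0.4, SMOOTH_GRADATION),
                            (0.4, SMOOTH_GRADATION), (0.7, GRADATION_EDGE)]):
    print(i, pulse_for_pixel(i, d, f))
```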
  • The [0053] image recording section 1008 will now be described. In this embodiment, the image recording section 1008 is adapted to an electrophotography method. The principle of the electrophotographic method will be described briefly. Initially, a laser beam or the like is modulated in response to an image density signal. Then, the modulated laser beam is applied to a photosensitive drum. An electric charge corresponding to a quantity of applied light is generated on the photosensitive surface of the photosensitive drum. Therefore, a laser beam is applied to scan the axial direction of the photosensitive drum to correspond to the scanning position of the image signal. Moreover, the photosensitive drum is rotated so as to be scanned. Thus, a two-dimensional charge distribution corresponding to the image signals is formed on the photosensitive drum. Then, the toner electrically charged by a developing unit is allowed to adhere to the surface of the photosensitive drum. At this time, toner in a quantity corresponding to the potential is allowed to adhere to the surface so that an image is formed. Then, the toner on the photosensitive drum is transferred to the surface of recording paper through a transfer belt. Finally, the toner is melted by a fixing unit so as to be fixed on the recording paper. The above-mentioned process is sequentially performed for each of the YMCK toner so that a full color image is recorded on the surface of the recording paper.
  • Operation of Image Forming Apparatus [0054]
  • The operation of the copying machine shown in FIG. 1 will now be described with reference to a flow chart shown in FIG. 7. The copying machine according to this embodiment is arranged to perform an operation for copying an image of an original document such that the [0055] image input section 1001 reads (scans) an image two times.
  • When a first operation for scanning an image is performed, the scanning operation is performed at a high velocity. Thus, the image is read at a rough density in the sub-scanning direction. A read image signal is color-converted by the [0056] color converting section 1002, and then supplied to the macro discrimination section 1201 of the image field discrimination section 1004 (steps S1 and S2). The image separator section 1211 in the macro discrimination section 1201 converts the image signal into a plurality of characteristic value signals (step S3). Thus, the contents of one page of the original document are written on the image memory 1212 (step S4). The above-mentioned process is performed simultaneously with the operation of the image input section 1001 for scanning the original document. After the original document has been scanned and image information has been recorded in the image memory 1212, the CPU 1213 performs a field separation process (step S5). A result of the field separation process is stored in the image memory 1212 (step S6).
  • After the field separation process has been completed by the [0057] CPU 1213, the image input section 1001 starts second scanning of the image (step S7). The second image scanning operation is performed such that the image is read at a low velocity. The image signal read by the image input section 1001 is subjected to a color conversion process in the color converting section 1002 (step S8), and then supplied to the image field discrimination section 1004 and the filtering section 1003. The image signal supplied to the image field discrimination section 1004 is supplied to the micro discrimination section 1202 so as to be supplied to the discrimination process (step S9). In synchronization with the second image scanning process, the field signal stored in the image memory 1212 is supplied through the field signal output section 1215 of the macro discrimination section 1201. As a result, the image field signal 1103 is transmitted from the discrimination signal selector section 1313 (step S10).
  • On the other hand, the image signal transmitted from the [0058] color converting section 1002 is allowed to pass through the filtering section 1003, the signal selector section 1005, the inking process section 1006 and the gradation process section 1007, and then transmitted to the image recording section 1008. Each of the signal selector section 1005, the inking process section 1006 and the gradation process section 1007 selects a signal and a process in response to the image field signal 1103 supplied in synchronization with execution of each process (steps S11 to S14).
  • Description of Image Field Discrimination Section [0059]
  • The image [0060] field discrimination section 1004 shown in FIG. 2 will now be described.
  • Structure and Operation of Macro Discrimination Section [0061]
  • The [0062] macro discrimination section 1201 performs field separation in accordance with the major structure of the image. In this embodiment, an image of an original document is separated into the following five types of fields.
  • 1. Usual Character Field [0063]
  • 2. Characters on Background [0064]
  • 3. Continuous Gradation Field [0065]
  • 4. Dot Gradation Field [0066]
  • 5. Other Field [0067]
  • The usual character field is a field in which there is a white background and characters and graphics are written on the white background. A major portion of usual documents is included in the above-mentioned field. The characters on a background field has a gradation background which exists as the background of characters. The characters on a background field is a field in which the background is divided in terms of colors to emphasize or classify a character, or in which characters are superimposed on a gradation image to make a description. The former case is exemplified by a catalog, while the latter case is exemplified by a map. The continuous gradation field is a field having gradation, such as a human figure or a background image, and realized by recording the gradation image by a continuous gradation method, such as a silver salt photograph method or a sublimation type transfer method. Even if a dot image printing method is employed, an image field in which the dot images have sufficiently high frequencies such that the dot image components are eliminated from the image signal is included in the continuous gradation field. Similarly to the continuous gradation field, the dot gradation field is a field of an image of a human figure or a background image. The dot gradation field is a field in which dot image printing is performed to express a gradation image. [0068]
  • A large portion of images is basically classified into any one of the above four fields. Some images are not included in the foregoing classification or are difficult to classify. For example, an image produced by computer graphics, such as a character string having gradation given to the entire field thereof, applies to the foregoing case. A field of the foregoing type is classified as "the other field". [0069]
  • The structure and operation of the [0070] macro discrimination section 1201 will now be described. The image separator section 1211 separates the color image signal 1102 transmitted from the color converting section 1002 into image data in a plurality of planes in accordance with the difference in the density of peripheral pixels and a state of chroma. Separated image data is sequentially stored in the image memory 1212. In this embodiment, the image memory 1212 has a capacity corresponding to the number of the planes of the images. Thus, the separated image signals for one page are completely stored in the image memory 1212.
  • The image separator section 1211 calculates the brightness I and the chroma S from the YMC color image signal 1102 in accordance with the following formula (2): [0071]
  • I=(C+M+Y)/3
  • S=(C−M)²+(M−Y)²+(Y−C)²  (2)
  • The brightness value I is a quantity indicating the density of an image. The brightness value I is "0" in the case of a white image, while the same is "1" in the case of a black image. The chroma S is "0" in the case of an achromatic color, while the same is enlarged in proportion to the degree of chromatic color. Then, the change in the brightness value I in the scanning direction is detected, and a field in which the brightness value I changes frequently is discriminated as a dot image. Then, a histogram with respect to the brightness value I in a certain field is produced. As shown in FIG. 8, the histogram of a usual original document has three peaks, that is, a background density, a gradation density and a black density. The lowest points (valleys) between the foregoing peaks are made to be the threshold values th1 and th2. Pixels other than the dot images are discriminated such that pixels having a brightness value I which is not larger than the threshold value th1 are discriminated as background pixels, those having a brightness value I which is larger than the threshold value th1 and not larger than the threshold value th2 are discriminated as gradation pixels, and those having a brightness value I which is larger than the threshold value th2 are discriminated as character pixels. Moreover, a threshold value th3 for the chroma S is provided. Pixels having a chroma S which is smaller than the threshold value th3 and a brightness value I which is larger than the threshold value th2 are discriminated as black pixels. The other pixels are discriminated as gray pixels. That is, in accordance with the rough image signals of the original image, the original image is classified into seven types, which are a character image (an image containing dense pixels), a gradation image (an image containing pixels having a low density similar to that of a photograph), a background image (an image containing pixels having a very low density similar to that of a background), a color image (an image containing colored pixels), a gray image (an image containing gray pixels), a black image (an image containing black pixels) and a dot image (an image having a density which is frequently and greatly changed similarly to a dot image). Classified image data is temporarily stored in the image memory 1212 together with a result of the classification (information about the field separation). Note that each image is made to be a binary image indicating whether or not the image has the foregoing characteristic. [0072]
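  • The pixel classification described above can be transcribed as the following sketch. The threshold values th1, th2 and th3 are placeholders, the histogram analysis and the dot image detection are assumed to be performed elsewhere, and the returned class names are a simplified stand-in for the binary image planes stored in the image memory 1212.

```python
def brightness_and_chroma(c, m, y):
    """Formula (2): I = (C+M+Y)/3, S = (C-M)^2 + (M-Y)^2 + (Y-C)^2."""
    i = (c + m + y) / 3.0
    s = (c - m) ** 2 + (m - y) ** 2 + (y - c) ** 2
    return i, s

def classify_pixel(c, m, y, th1, th2, th3, is_dot=False):
    """Classify one pixel following the thresholds th1 < th2 on the brightness
    I and th3 on the chroma S. Dot detection (frequent change of I along the
    scanning direction) is assumed to be done elsewhere and passed in."""
    i, s = brightness_and_chroma(c, m, y)
    if is_dot:
        return ("dot", None)
    if i <= th1:
        density_class = "background"
    elif i <= th2:
        density_class = "gradation"
    else:
        density_class = "character"
    color_class = "black" if (s < th3 and i > th2) else "gray"
    return (density_class, color_class)

print(classify_pixel(0.9, 0.9, 0.9, th1=0.2, th2=0.7, th3=0.1))  # dark neutral pixel
```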
  • In accordance with a program code stored in the program memory (for example, a ROM) 1214, the CPU 1213 performs a field discrimination process while the CPU 1213 makes a reference to the contents of the separated image data stored in the image memory 1212, so that the information about the field separation is modified. Then, the modified information about the field separation for, e.g., each pixel is written on the image memory 1212. That is, continuous pixels in the three image fields, that is, the character image, the gradation image and the dot image, are unified so that connection fields in rectangular units are produced. Then, the characteristic value of each connection field is calculated so that the type of the field is discriminated. The types of the fields include, for example, the usual character field mainly containing characters and a photograph field which is a gradation image. The position and size of the connection field (information of a connection field) and the type of the field are again stored in the image memory 1212. Then, pixels having overlapping connection fields of different types are subjected to, for example, a process as shown in FIG. 9. Thus, in accordance with the information of a connection field and the type of the field, the type of the pixel is discriminated. [0073]
  • The procedure of the flow chart shown in FIG. 6 is performed such that a field discriminated as a photograph field (step S401) is discriminated as a dot gradation field if a dot image exists in the foregoing field (steps S405 and S406). If a gradation image exists, the field is discriminated as a continuous gradation field (steps S407 and S408). If neither the dot image nor the gradation image exists, the field is discriminated as characters on a background (steps S407 and S409). If the field is not the photograph field, gradation image data is deleted from the field containing a gradation image and from the field discriminated in step S408 as a continuous gradation field (steps S402 to S404). As a result of the above-mentioned process, at least four types of modified information of the field discrimination are stored in the image memory 1212. [0074]
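  • The type discrimination of a connection field (steps S401 to S409 above) can be transcribed as follows; representing a connection field by three boolean properties is an assumption made for illustration.

```python
def discriminate_field_type(is_photograph, contains_dot, contains_gradation):
    """Type discrimination for one connection field, following steps S401-S409:
    a photograph field becomes a dot gradation field if it contains a dot
    image, a continuous gradation field if it contains a gradation image, and
    otherwise characters on a background."""
    if is_photograph:                                  # step S401
        if contains_dot:
            return "dot gradation field"               # steps S405, S406
        if contains_gradation:
            return "continuous gradation field"        # steps S407, S408
        return "characters on background"              # steps S407, S409
    # Non-photograph field: the deletion of gradation image data performed in
    # steps S402 to S404 is not shown in this sketch.
    return "usual character field"

print(discriminate_field_type(True, False, True))    # -> continuous gradation field
print(discriminate_field_type(False, False, False))  # -> usual character field
```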
  • Information about field separation stored in the [0075] image memory 1212 is read by the field signal output section 1215 in synchronization with a second reading signal from the image input section 1001 so as to be transmitted as a field separation signal. Since the density of pixels indicated with the information about field separation in the image memory 1212 and the density of pixels indicated with the image signal from the image input section 1001 are different from each other, the density denoted by the field separation signal is converted so that the two pixel densities are matched with each other before transmittance.
  • In this embodiment, the field separation signal is expressed in a 3-bit signal form. The relationship between the values of the 3-bit signal and the fields is as follows: [0076]
    Value of Field Separation Signal    Field
    "0"                                 Usual Character Field
    "1"                                 Characters on Background
    "2"                                 Continuous Gradation Field
    "3"                                 Dot Gradation Field
    "4"                                 Other Fields
  • Another arrangement may be employed in which the field separation signal is formed into a 5-bit signal and the 5-bit signals represent the five fields, respectively. [0077]
  • Structure and Operation of Micro Discrimination Section [0078]
  • The [0079] micro discrimination section 1202 discriminates the field by paying attention to a micro difference in the image. The detailed structure of the micro discrimination section 1202 is shown in FIG. 10. The micro discrimination section 1202 comprises a characteristic value abstracting section 1311 for abstracting three characteristic values, an image field discrimination section 1312 for discriminating five types of image fields and a discrimination signal selector section 1313.
  • The characteristic value abstracting section 1311 incorporates a density calculation section 1311 d and, for abstracting the three characteristic values, a density change value abstracting section 1311 a, an average density abstracting section 1311 b and a chroma abstracting section 1311 c. [0080]
  • The density change value abstracting section 1311 a abstracts the degree of change in the density of a portion around a pixel of interest. Initially, the density calculation section 1311 d calculates the density signal D from the YMC color image signal 1102. The density is calculated such that a weighted linear sum of the YMC density signals is calculated as indicated with the following formula (3): [0081]
  • D=Ky·Y+Km·M+Kc·C  (3)
  • where Ky=0.25, Km=0.5 and Kc=0.25 [0082]
  • Then, the density change value abstracting section 1311 a calculates the change in the density in a 3 pixel×3 pixel block in the vicinity of the pixel of interest so as to transmit a density change value signal DD. Assuming that the densities of the pixels in the 3 pixel×3 pixel block are D1, D2, D3, . . . , D9, the density change value signal DD can be expressed by the following formula (4), in which Max (A1, A2, . . . , An) indicates a maximum value among A1, A2, . . . , An. [0083]
  • DD=Max (|D1-D9|, |D2-D8|, |D3-D7|, |D4-D6|)  (4)
  • Note that the density change may be abstracted by another method, such as a BAT method disclosed in Jpn. Pat. Appln. KOKOKU Publication No. 04-05305. A formula for calculating the density change value for use in the BAT method is as shown in the following formula (5): [0084]
  • DD=Max (D1, D2, . . . , D9)-Min (D1, D2, . . . , D9)  (5)
  • Although this embodiment has the structure that the reference field is a 3 pixel×3 pixel range, another field may be employed, for example, a larger field such as a 4 pixel×4 pixel field or a 5 pixel×5 pixel field, or a non-square 3 pixel×5 pixel field. If the reference field is enlarged, the accuracy in abstracting the characteristic value is usually improved. However, the size of the hardware is enlarged. Therefore, an appropriate size adaptable to an object and the required performance must be employed. [0085]
  • The average density abstracting section 1311 b abstracts the density of the pixel of interest. That is, the average density abstracting section 1311 b receives the density signal D transmitted from the density calculation section 1311 d so as to calculate an average value of the density signals in a 3 pixel×3 pixel field in the vicinity of the pixel of interest. Then, an output of a result of the calculation is produced as an average density signal DA. The average density signal DA indicates the density of a portion around the pixel of interest. [0086]
  • The chroma abstracting section 1311 c abstracts the chroma of the pixel of interest. In response to the YMC signals of the pixel of interest, a chroma signal DS expressed with the following formula (6) is generated: [0087]
  • DS=(C−M)²+(M−Y)²+(Y−C)²  (6)
  • The chroma signal DS indicates the chroma of the pixel of interest, that is, whether or not the pixel of interest is colored. In the case of an achromatic pixel, such as white, black or gray, DS is substantially "0". In the case of red or blue, DS is substantially a maximum value of "2". [0088]
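  • Formulas (3) to (6) can be transcribed directly; the sketch below computes the density change value DD, the average density DA and the chroma DS for a pixel of interest from its 3 pixel×3 pixel neighborhood.

```python
import numpy as np

KY, KM, KC = 0.25, 0.5, 0.25   # weights of formula (3)

def characteristic_values(y_blk, m_blk, c_blk):
    """y_blk, m_blk, c_blk: 3x3 arrays of YMC densities centred on the pixel
    of interest. Returns (DD, DA, DS)."""
    d = KY * y_blk + KM * m_blk + KC * c_blk          # formula (3)
    # Formula (4): D1..D9 are numbered row by row, so D1-D9, D2-D8, D3-D7 and
    # D4-D6 are the pairs of pixels facing each other across the centre.
    flat = d.flatten()
    dd = max(abs(flat[0] - flat[8]), abs(flat[1] - flat[7]),
             abs(flat[2] - flat[6]), abs(flat[3] - flat[5]))
    da = d.mean()                                     # average density DA
    # Formula (6): chroma of the pixel of interest.
    yc, mc, cc = y_blk[1, 1], m_blk[1, 1], c_blk[1, 1]
    ds = (cc - mc) ** 2 + (mc - yc) ** 2 + (yc - cc) ** 2
    return dd, da, ds

blk = np.ones((3, 3)) * 0.1
blk[:, 2] = 0.9                                       # an edge inside the block
print(characteristic_values(blk, blk, blk))
```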
  • The image field discrimination section 1312 will now be described. The first to fifth image field discrimination sections 1312 a to 1312 e perform image field discrimination processes suitable to the five fields separated by the image separator section 1211 of the macro discrimination section 1201. The image field discrimination sections 1312 a to 1312 e receive the characteristic value signals DD, DA and DS transmitted from the characteristic value abstracting sections 1311 a to 1311 c, discriminate an image field in response to the received signals and generate an image field signal DT. The generated image field signal is a 2-bit signal having the values "0", "1" and "2". As for the relationship between the values and the image fields, a smooth gradation field is expressed when DT=0. An edge of a gradation image field is expressed when DT=1. An inside portion of a character and an edge are expressed when DT=2. The first to fifth image field discrimination sections 1312 a to 1312 e have different discrimination methods, as will now be described with reference to FIG. 11. The first image field discrimination section 1312 a discriminates a point having a density change value DD which is larger than a threshold value T1 and a point having an average density which is larger than a threshold value T2 as a character field (DT=2). The second image field discrimination section 1312 b discriminates a point having a density change value DD which is larger than a threshold value T3 and a point having an average density which is higher than a threshold value T4 and a chroma DS which is lower than a threshold value T5 as a character field (DT=2). The threshold value T3 is made to be a value larger than the threshold value T1. The third image field discrimination section 1312 c discriminates a point having a density change value DD which is larger than a threshold value T6 as an edge (DT=1) of a gradation image field. The third image field discrimination section 1312 c discriminates the point as a smooth gradation image field (DT=0) in the other cases. The fourth image field discrimination section 1312 d is arranged to make the image field signal DT always be the value "0". That is, the fourth image field discrimination section 1312 d discriminates that all of the fields are smooth gradation fields. The fifth image field discrimination section 1312 e discriminates the point as an edge field (DT=1) if the density change value DD is larger than a threshold value T7 and discriminates the same as a gradation field (DT=0) if the density change value DD is smaller than the threshold value T7. Note that the threshold values T1 to T7 are predetermined threshold values for performing the discrimination, which must be determined appropriately in accordance with the resolution characteristic of the input system and that of the color conversion process section. Appropriate threshold values will be described later. [0089]
  • The discrimination signal selector section 1313 selects among the five types of image field signals DT transmitted from the image field discrimination sections 1312 a to 1312 e in response to the field separation signal transmitted from the field signal output section 1215 of the macro discrimination section 1201. That is, when the field separation signal is "0", an output signal from the first image field discrimination section 1312 a is selected. When the field separation signal is "1", "2", "3" or "4", an output signal from the second, third, fourth or fifth image field discrimination section, respectively, is selected so as to be transmitted as the image field signal 1103. [0090]
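  • The five discrimination rules and the selection by the field separation signal can be transcribed as in the following sketch. The threshold values T1 to T7 are placeholders, the way the individual conditions of the first and second rules are combined (here as alternatives) is an assumption, and points not discriminated as characters are mapped to the smooth gradation value DT=0 for simplicity.

```python
SMOOTH, EDGE, CHARACTER = 0, 1, 2   # values of the image field signal DT

# Placeholder thresholds T1..T7 (to be tuned to the input system); T3 > T1.
T1, T2, T3, T4, T5, T6, T7 = 0.3, 0.6, 0.5, 0.6, 0.1, 0.3, 0.3

def usual_character(dd, da, ds):          # field separation signal "0"
    return CHARACTER if (dd > T1 or da > T2) else SMOOTH

def characters_on_background(dd, da, ds): # "1"
    return CHARACTER if (dd > T3 or (da > T4 and ds < T5)) else SMOOTH

def continuous_gradation(dd, da, ds):     # "2"
    return EDGE if dd > T6 else SMOOTH

def dot_gradation(dd, da, ds):            # "3": always smooth gradation
    return SMOOTH

def other_field(dd, da, ds):              # "4"
    return EDGE if dd > T7 else SMOOTH

DISCRIMINATORS = [usual_character, characters_on_background,
                  continuous_gradation, dot_gradation, other_field]

def image_field_signal(field_separation_signal, dd, da, ds):
    """Select the discrimination rule matching the macro field separation
    signal (0..4) and return the final image field signal DT."""
    return DISCRIMINATORS[field_separation_signal](dd, da, ds)

print(image_field_signal(0, dd=0.5, da=0.2, ds=0.0))  # usual character field -> 2
```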
  • As a result, the two steps of the field discrimination according to the present invention are performed. Thus, incorrect discrimination between a photograph field and a character field can be prevented. Moreover, a background field in the character field can be detected so that a background field free from any noise is realized. [0091]
  • Specific Operation [0092]
  • The specific operation will now be described such that the original document shown in FIG. 9 is taken as an example. [0093]
  • The original document shown in FIG. 9 has an unnatural structure as compared with a usual document image, in order to simplify the description of the structure. A field 1601 is a field in which a sentence is written, the field 1601 being composed of black and red characters. A field 1602 is a table field in which black characters and ruled lines are drawn on a light color background which is sectioned in terms of color. A field 1603 is a field to which a gradation image realized by a silver salt photograph is pasted. A field 1604 is a field in which a gradation image is recorded by a dither method, that is, a dot image modulation method. [0094]
  • An operation will now be described with reference to FIG. 4, the operation being performed when the image of the original document is read by a scanner section not shown so that a copied image of the original document is transmitted from the [0095] printer section 2. As described above, the image of the original document shown in FIG. 9 is read by the image input section 1001 shown in FIG. 4, the image being read as an electric signal. Then, the electric signal is converted into the color image signal 1102 indicating the quantity of YMC toner in color converting section 1002.
  • In response to the YMC color image signal 1102, the image field discrimination section 1004 performs the image field discrimination process. The macro discrimination section 1201 performs the field separation in accordance with the discriminated characteristic of the structure. Since a reference to a wide field is made when the discrimination is performed, the field separation into the above-mentioned classifications can be performed significantly accurately. An example of a field separation signal denoting a result of the process performed by the macro discrimination section 1201 is shown in FIG. 10. In FIG. 10, fields having the respective field separation signal values of "0", "1", "2" and "3" are expressed with white (a field 1601), diagonal lines facing lower left positions (a field 1602), diagonal lines facing lower right positions (a field 1603) and cross diagonal lines (a field 1604). Note that the illustrated example of the original document is free from "the other field". [0096]
  • The [0097] micro discrimination section 1202 discriminates the usual character field into a character field and the other field. Moreover, the micro discrimination section 1202 discriminates the characters on a background into a character field and the other field (a background field). The discrimination is performed in pixel units. The discrimination method employed by the micro discrimination section 1202 will now be described such that a characteristic value distribution in each field is described.
  • In the usual character field, characters are generally recorded on a white background. Characters in yellow or blue having very low densities and very fine characters having sizes of 5 points or smaller are sometimes recorded. An example of the two-dimensional distribution between the density change value signals DD and the average density signals DA in the usual character field is shown in FIG. 14. Characters are distributed in a field 1801 shown in FIG. 14, while portions other than the characters are distributed about a field 1802 which is the center. Therefore, a boundary line 1803 causes the character field and the field other than the characters to be discriminated. The position of the boundary line corresponds to the threshold values T1 and T2 for performing discrimination. [0098]
  • In the characters on a background, the characters are placed on a thin background. Although the background is sometimes formed with thin ink, the background is usually formed by dot image recording in the form of a usual printed matter. Since the visibility of the characters on such a background would otherwise deteriorate excessively, thin colors and small characters are not usually employed in the character portion. In a major portion of the character portions, deep colors, such as black or red, and thick characters having a size of 7 points or larger are used. An example of the two-dimensional distribution in the foregoing region between the density change value signals DD and the average density signals DA is shown in FIG. 15A, and an example of the two-dimensional distribution between the average density signals DA and the chroma signals DS is shown in FIG. 15B. In FIG. 15B, black characters are distributed in a field 1901, while color characters are distributed in a field 1902. In FIG. 15A, the pixels of the background portion are distributed in a field 1903. At this time, the character field and the field other than the character field can be discriminated from each other by means of the threshold values T4 and T5 indicated by a boundary line 1904, as shown in FIG. 15B. If the average density DA is high despite the density change value being “0” as shown in FIG. 15A, the threshold value T3 indicated with a boundary line 1901 enables such a pixel to be discriminated as a character. Therefore, the inside portion of a black character, which cannot easily be discriminated by the conventional method, can be discriminated correctly, as in the sketch below. [0099]
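The following sketch shows one plausible reading of this rule; the assignment of T4 to the average density DA and T5 to the chroma DS, and the comparison directions, are assumptions, since only the boundary lines are named in the text.

```python
def discriminate_character_on_background(da, ds, t3, t4, t5):
    """Second image field discrimination section (characters on a background),
    a sketch under stated assumptions."""
    # FIG. 15B (DA-DS plane): boundary line 1904, assumed as T4 on the average
    # density and T5 on the chroma.  Black characters are dark and nearly
    # achromatic; color characters have a large chroma.
    char_by_color_plane = (da >= t4) or (ds >= t5)
    # FIG. 15A (DD-DA plane): even where the density change is about zero, an
    # average density above T3 marks the flat inside of a black character.
    char_by_flat_inside = (da >= t3)
    return 2 if (char_by_color_plane or char_by_flat_inside) else 0
```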
  • Examples of the two-dimensional distribution between the density change value signals DD and the average density signals DA in the continuous gradation field and the dot gradation field are shown in FIGS. 16 and 17, respectively. As shown in FIG. 16, the overall density change value DD is small in the continuous gradation field, while it is somewhat large in edge portions. In this embodiment, the threshold value T6 for performing the discrimination, indicated with a boundary line 1103, is used to discriminate the interior of a gradation image field from the edges of the gradation image field. Since gradation is expressed with dot images in the dot gradation field, as shown in FIG. 17, the density change value DD is large. Since removal of the dot image component from the dot gradation field causes the quality of the image to be improved, the image field signal DT is made to be “0” regardless of the characteristic value signals DD, DA and DS, as in the sketch below. [0100]
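Minimal sketches of the two gradation-field rules follow. Whether edge pixels of a continuous gradation image receive the character/edge value “2” is an assumption here; the forced “0” for the dot gradation field is stated in the text.

```python
def discriminate_continuous_gradation(dd, t6):
    """Third image field discrimination section (continuous gradation field):
    T6 separates the smooth interior from edge portions, which show a
    somewhat larger density change (labelling edges "2" is an assumption)."""
    return 2 if dd >= t6 else 0


def discriminate_dot_gradation(dd, da, ds):
    """Fourth image field discrimination section (dot gradation field): the
    image field signal DT is forced to "0" regardless of DD, DA and DS so
    that the dot component can be smoothed away."""
    return 0
```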
  • Since the conventional discrimination process using only the micro characteristics does not employ the field separation process according to this embodiment, the threshold values for the discrimination cannot be switched to be adaptable to the five types of regions. Therefore, the conventional method has had to use discrimination threshold values obtained from the same discrimination boundary, as indicated by boundary lines 1810, 1910, 11010 and 11110 shown in FIGS. 14 to 15D. As a result, fine characters in the usual character field and edges in a gradation field do not yield the expected result of discrimination. Therefore, incorrect discrimination takes place and a satisfactory discrimination accuracy cannot be obtained. [0101]
  • In this embodiment, the field separation is performed by the macro discrimination process, after which a micro discrimination boundary suitable to each region is selected; this enables accurate discrimination to be performed with high resolving power. That is, a result of discrimination (see FIG. 14) obtained from the first image field discrimination section 1312 a of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the usual character field (having a field separation signal “0”). A result of discrimination (see FIGS. 15A to 15D) obtained from the second image field discrimination section 1312 b of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the characters on a background (having a field separation signal “1”). A result of discrimination (see FIG. 16) obtained from the third image field discrimination section 1312 c of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the continuous gradation field (having a field separation signal “2”). A result of discrimination (see FIG. 17) obtained from the fourth image field discrimination section 1312 d of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the dot gradation field (having a field separation signal “3”). A result of discrimination obtained from the fifth image field discrimination section 1312 e of the micro discrimination section 1202 is selected for a region discriminated by the macro discrimination section 1201 as the other field (having a field separation signal “4”). [0102]
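Expressed as a sketch, the selection amounts to an index into the five per-pixel micro results; the list ordering below simply mirrors the field separation signal values quoted above.

```python
def select_image_field_signal(field_separation_signal, micro_results):
    """Discrimination signal selector section 1313, as a minimal sketch.
    `micro_results` is assumed to hold the five image field signals produced
    in parallel by the image field discrimination sections 1312a-1312e for
    the current pixel, indexed by the field separation signal value
    (0: usual characters, 1: characters on background, 2: continuous
    gradation, 3: dot gradation, 4: other)."""
    return micro_results[field_separation_signal]
```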
  • Examples of discrimination of an image of an original document as shown in FIG. 9, performed by the image field discrimination sections 1312 a to 1312 e of the micro discrimination section 1202, are schematically shown in FIGS. 15A, 15B, 15C and 15D. Examples of selection of the image field signal DT in the discrimination signal selector section 1313 in response to the field separation signal are shown in FIG. 19. In FIGS. 15A, 15B, 15C, 15D and 16, a region having the image field signal “2”, that is, a field discriminated as the character field, is expressed in black. The other fields are expressed in white. FIG. 15A shows a result of discrimination performed by the first image field discrimination section 1312 a, FIG. 15B shows a result of discrimination performed by the second image field discrimination section 1312 b, and FIGS. 15C and 15D show results of discrimination performed by the third and fourth image field discrimination sections 1312 c and 1312 d. As can be understood from a comparison among FIGS. 15A, 15B, 15C, 15D and 16, each image field signal DT does not provide an accurate discrimination in fields other than the field to which it is adapted. However, when the discrimination signal selector section 1313 selects only the image field signals adaptable to the field separation signals, which are the results of discrimination performed by the macro discrimination section 1201, an accurate result of discrimination can be obtained as the final image field signal. [0103]
  • Second Embodiment [0104]
  • A modification of the micro discrimination section according to the first embodiment will now be described. Another example of the structure of the micro discrimination section 1202 is shown in FIG. 20. Note that the same elements as those shown in FIG. 7 are given the same reference numerals and only different elements will now be described. [0105]
  • As shown in FIG. 20, the micro discrimination section 1202 incorporates a density calculation section 1311 d, three characteristic value abstracting sections, that is, the density change value abstracting section 1311 a, the average density abstracting section 1311 b and the chroma abstracting section 1311 c, three threshold value registers 1401 a to 1401 c, comparators 1402 a to 1402 c and a total discrimination section 1403. In each of the threshold value registers 1401 a to 1401 c, five discrimination threshold values corresponding to the five types of fields are stored. One of the threshold values is selected in response to the field separation signal supplied from the macro discrimination section 1201 to the threshold value registers 1401 a to 1401 c. The threshold value signal output from each selected threshold value register is compared with the corresponding characteristic value signal DD, DA or DS in the comparators 1402 a to 1402 c, and the results of the comparisons are output as binary comparison signals. [0106]
  • The comparison signals corresponding to the characteristic values are subjected to predetermined logical calculations in the total discrimination section 1403, so that a final image field signal 1404 is output. The total discrimination section 1403 refers to both the supplied comparison signals and the field separation signal so as to output the image field signal, as sketched below. [0107]
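A compact sketch of this variant follows. The five-entry tables standing in for the threshold value registers 1401 a to 1401 c hold placeholder numbers, and the AND/OR combination inside total_discrimination() is an assumption: the text only states that predetermined logical calculations are applied to the binary comparison signals.

```python
# Placeholder threshold tables, one entry per field separation signal 0..4
# (usual characters, characters on background, continuous gradation,
# dot gradation, other).  The actual register contents are not given.
THRESHOLDS_DD = [4, 6, 2, 255, 255]
THRESHOLDS_DA = [8, 12, 255, 255, 255]
THRESHOLDS_DS = [3, 5, 255, 255, 255]


def total_discrimination(cmp_dd, cmp_da, cmp_ds, field_separation_signal):
    """Combine the binary comparison signals into an image field signal,
    also referring to the field separation signal (assumed logic)."""
    if field_separation_signal == 3:       # dot gradation: always non-character
        return 0
    return 2 if (cmp_dd and cmp_da) or cmp_ds else 0


def micro_discriminate(dd, da, ds, field_separation_signal):
    """Select per-field thresholds, compare, then totalize the results."""
    cmp_dd = dd >= THRESHOLDS_DD[field_separation_signal]
    cmp_da = da >= THRESHOLDS_DA[field_separation_signal]
    cmp_ds = ds >= THRESHOLDS_DS[field_separation_signal]
    return total_discrimination(cmp_dd, cmp_da, cmp_ds, field_separation_signal)
```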
  • As described above, a structure in which the micro discrimination section 1202 is composed of a single total discrimination section together with threshold value registers enables a discrimination process similar to that according to the first embodiment to be realized. Since this modification has the structure in which the discrimination processes are performed in common, the flexibility is lowered. However, the size of the circuit can be reduced as compared with the first embodiment. [0108]
  • Third Embodiment [0109]
  • Another example of a color copying machine incorporating an image processing apparatus employing the image field discrimination method according to the present invention will now be described. FIG. 18 shows an example of the structure of an essential portion of the color copying machine according to a third embodiment. The same elements as those shown in FIG. 1 are given the same reference numerals and only different elements will now be described. In the structure shown in FIG. 21, a color image signal of an image of an original document read by the image input section 1001 is allowed to pass through the color converting section 1002 so as to be stored in a page memory 1411. The foregoing structure is a remarkable difference from the first embodiment. The structure and operation of this embodiment will now be described briefly. [0110]
  • Initially, an image of an original document is read by the image input section 1001 as RGB image signals. Then, the color converting section 1002 converts the RGB image signals into color image signals indicating densities in YMC. The converted YMC color image signals 1102 for one page are stored in the page memory 1411. On the other hand, the YMC color image signal 1102 is also supplied to a macro discrimination section 1412. The macro discrimination section 1412 has a structure similar to that of the macro discrimination section 1201 of the image field discrimination section according to the first embodiment and performs a similar operation. That is, the YMC color image signal 1102 supplied to the macro discrimination section 1412 is separated into image data in a plurality of planes by the image separator section 1211 shown in FIG. 5. The separated image data is sequentially stored in the image memory 1212. In accordance with the program code stored in the program memory 1214, the CPU 1213 separates the fields while referring to the contents of the separated image data stored in the image memory 1212, and writes the result of the separation to the image memory 1212. [0111]
  • After the discrimination process has been completed and all of the results of the field separation have been written to the image memory 1212, the image signals stored in the page memory 1411 are sequentially read out. In synchronization with the reading operation, the field separation information stored in the image memory 1212 is read through the field signal output section 1215. Since the pixel density of the field separation information in the image memory 1212 and the pixel density of the image signal in the page memory 1411 are different from each other, the density of the field separation signal is converted so that the densities of the two signals match, as in the sketch below. [0112]
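The conversion method for matching the two pixel densities is not named in the text; the sketch below assumes simple nearest-neighbor replication of each low-resolution field separation value up to the image resolution.

```python
import numpy as np

def match_field_separation_density(field_map, image_shape):
    """Upsample the low-resolution field separation map so its pixel density
    matches that of the image signal in the page memory (nearest-neighbor
    replication assumed for illustration)."""
    rows = np.linspace(0, field_map.shape[0] - 1, image_shape[0]).round().astype(int)
    cols = np.linspace(0, field_map.shape[1] - 1, image_shape[1]).round().astype(int)
    return field_map[np.ix_(rows, cols)]
```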
  • The YMC color image signal transmitted from the page memory 1411 and the field separation signal transmitted from the macro discrimination section 1412 are supplied to a micro discrimination section 1413. The structure and operation of the micro discrimination section 1413 are similar to those of the micro discrimination section 1202 according to the first embodiment. That is, the characteristic value signals DD, DA and DS are generated from the supplied YMC color image signals by the three characteristic value abstracting sections. Then, the five image field discrimination sections 1312 generate respective image field signals from the characteristic value signals. Finally, the discrimination signal selector section 1313 selects the results of discrimination (image field signals) performed by the five image field discrimination sections 1312 in response to the field separation signal transmitted from the macro discrimination section 1412, so as to output a final image field signal 1103. [0113]
  • The YMC color image signal transmitted from the page memory 1411 is allowed to pass through the filtering section 1003, the signal selector section 1005, the inking process section 1006 and the gradation process section 1007 so as to be recorded by the image recording section 1008. The signal selector section 1005 and the gradation process section 1007 switch their processes in response to the image field signal 1103 transmitted from the micro discrimination section 1413. Since the foregoing switching operation is similar to that according to the first embodiment, its description is omitted. [0114]
  • The third embodiment is also able to perform an image process similar to the image process which can be performed by the first embodiment. Therefore, an image field signal 1103 as accurate as that obtainable from the first embodiment can be obtained. When a signal process suitable to the type of the image is selected, a character field can be reproduced with a high resolution, and a gradation image field can be reproduced smoothly. [0115]
  • Since the third embodiment has the structure in which the color image signal is stored in the page memory 1411, the necessity of reading and scanning the image of the original document two times, as is required in the first embodiment, is eliminated. Therefore, the same signals obtained by reading the image of the original document can be employed both for the macro discrimination and for the image recording, and an influence of deviation of the reading position exerted at each scanning operation does not have to be considered. When the capacity of the page memory 1411 and so forth is enlarged to correspond to a plurality of pages, another original document can be read immediately after one original document has been read. Therefore, in a case where an automatic document feeder or the like is used to sequentially copy an original document composed of a plurality of pages, the sequential copying operation can be performed at a high speed. [0116]
  • Although the foregoing embodiments use the change in density, the average density and the chroma as the characteristic values abstracted by the micro discrimination section, the abstracted values are not limited to the foregoing factors. For example, the distribution of frequencies in a block or the consistency with a predetermined pattern may be employed as the characteristic value. [0117]
  • As described above, the image copying machine according to this embodiment has the structure in which the macro discrimination section 1201 uses a macro structural characteristic of an original image to separate a character field and a gradation image field from each other in response to a coarse image signal of the supplied original document. Then, the result of the image field discrimination DT of the original image, performed by the micro discrimination section 1202 in response to a fine image signal of the original image, that is adaptable to the result of the separation is selected, and an image field signal denoting the result of the final image field discrimination is output. In response to the image field signal, the processes which must be performed by the filtering section 1003 and the gradation process section 1007 are selectively switched. Thus, a character field is subjected to an edge emphasizing process and a high-resolution recording process, while a gradation image field is subjected to a multi-gradation process. As a result, an image in which both a character field and a gradation field of the original document are satisfactorily reproduced can be generated and/or recorded. That is, the image field discrimination method and the image processing apparatus according to this embodiment are able to discriminate an edge of a gradation image and an edge of a character from each other, which has been difficult for the conventional technique. Moreover, the method and apparatus according to this embodiment are able to discriminate a character placed on a dot image from the field around the character. Moreover, the foregoing discrimination can be performed accurately with large resolving power. [0118]
  • When an image is encoded, switching the encoding method by using the image field signal denoting the result of the final image field discrimination enables an image encoding method which is free from considerable distortion occurring due to the encoding process and which exhibits a high compression ratio to be realized. [0119]
  • Fourth Embodiment [0120]
  • A fourth embodiment will now be described which is a modification of the first embodiment of the present invention. This modification also has a structure similar to that according to the first embodiment; it differs from the first embodiment in the structure of the micro discrimination section. [0121]
  • The structure of the micro discrimination section according to this modification is shown in FIG. 22. The micro discrimination section incorporates a first segment characteristic abstracting section 2201, a second segment characteristic abstracting section 2202, a density abstracting section 2203, a first image field discrimination section 2204, a second image field discrimination section 2205, a third image field discrimination section 2206, a fourth image field discrimination section 2207, a fifth image field discrimination section 2208 and a signal selector section 2209. [0122]
  • The first segment characteristic abstracting section 2201 refers to the image signal in a rectangular field having a size of 5 pixels×5 pixels in the vicinity of a pixel of interest so as to detect a segment component in the vertical or horizontal direction. Then, the first segment characteristic abstracting section 2201 outputs a first segment characteristic value signal (SX) 2251. The foregoing signal indicates the degree of the segment structure in the rectangular field: if a vertical or a horizontal segment structure exists, a signal having a large value is output; if no segment structure exists or if the density is constant, a signal having a small value is output. [0123]
  • The second segment characteristic abstracting section 2202 detects a segment component in a diagonal direction existing in the rectangular field in the vicinity of the pixel of interest. Then, the second segment characteristic abstracting section 2202 outputs a second segment characteristic value signal (SB) 2252 indicating the degree of the diagonal segment structure in the rectangular field, as in the sketch below. [0124]
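The actual detection circuits are described in the Japanese application cited in the next paragraph; the sketch below is only an assumed illustration in which the strength of density transitions along the vertical and horizontal axes of the 5×5 window stands in for SX, and along the two diagonals for SB.

```python
import numpy as np

def segment_characteristics(window):
    """Illustrative first (SX) and second (SB) segment characteristic values
    for a 5x5 window around the pixel of interest (assumed measure: summed
    absolute density differences along each direction)."""
    window = np.asarray(window, dtype=float)
    horizontal = np.abs(np.diff(window, axis=1)).sum()  # responds to vertical strokes
    vertical = np.abs(np.diff(window, axis=0)).sum()    # responds to horizontal strokes
    diag1 = np.abs(np.diff(np.diag(window))).sum()
    diag2 = np.abs(np.diff(np.diag(np.fliplr(window)))).sum()
    sx = max(horizontal, vertical)  # large when a vertical/horizontal segment exists
    sb = max(diag1, diag2)          # large when a diagonal segment exists
    return sx, sb
```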
  • The structure and discrimination principle of the first and second segment characteristic abstracting sections are disclosed in Jpn. Pat. Appln. SHUTSUGAN Publication No. PH10-009480 (TITLE OF THE INVENTION: IMAGE PROCESSING APPARATUS). [0125]
  • The density abstracting section 2203 abstracts a density component (DD) 2254 in accordance with the following formula: [0126]
  • DD=Ky·Y+Km·M+Kc·C
  • where Ky=0.25, Km=0.5 and Kc=0.25 [0127]
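The formula translates directly into code; the coefficient values are the ones given above.

```python
def density_component(y, m, c, ky=0.25, km=0.5, kc=0.25):
    """Density component DD of the fourth-embodiment micro discrimination
    section: a weighted sum of the Y, M and C density signals."""
    return ky * y + km * m + kc * c
```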
  • The image field discrimination sections will now be described. The first to fifth image field discrimination sections perform image field discrimination processes adapted to the five types of fields separated by the field separator section, similarly to those of the first embodiment. Each of the image field discrimination sections receives the characteristic value signals transmitted from the characteristic value abstracting sections so as to discriminate the image field signal. The discrimination process performed in each image field discrimination section is shown in FIG. 23. [0128]
  • This modification has a structure in which the micro discrimination section employs a discrimination method based on the linearity of an image element. Therefore, the accuracy of discriminating a character and a dot image from each other can be improved, and this modification is suitable when reproducibility of fine characters is required. [0129]
  • That is, the fourth embodiment has the structure in which the segment components in the main scanning direction and the sub-scanning direction are detected. Therefore, the background field can be detected even more accurately. If discrimination is performed simply in accordance with the change in density or the like, there is a risk of incorrect discrimination in which a background of constant color is discriminated as a character field. Since the distribution of the segments (edges) in the main scanning direction and the sub-scanning direction is detected, and a character image contains a multiplicity of segments (edge components), a character field can be detected correctly. [0130]
  • As described above, the present invention is able to perform accurate image field discrimination of a supplied image with large resolving power. Moreover, a macro structural characteristic of an image is used to separate a character field and a gradation image field from each other. Then, a result of an image field discrimination suitable for each of the separated fields is selected so as to obtain the result of the final image field discrimination. Thus, the discrimination between an edge of a gradation image and an edge of a character, and that between a character placed on a dot image and the field around the character, which have been difficult for the conventional technique, can be performed accurately with large resolving power. [0131]
  • In accordance with the result of the image field discrimination, a character field is subjected to the edge emphasizing process and a high-resolution recording process, while the gradation field is subjected to the multi-gradation process. Since the processes are performed selectively, images of both the character field and the gradation field can be recorded satisfactorily. [0132]
  • When an image is encoded, the encoding method is switched in accordance with the result of the image field discrimination. Thus, image encoding can be performed with a high compression ratio and without considerable distortion occurring because of the encoding process. [0133]
  • Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents. [0134]

Claims (16)

1. An image processing apparatus comprising:
field separating means for separating an original image into plural types of fields in response to a first image signal obtained at a first density of the original image;
characteristic value calculating means for calculating a characteristic value of the original image in response to a second image signal of the original image obtained at a second density which is higher than the first density;
discrimination means for discriminating an image field of the original image in accordance with the characteristic value calculated by the characteristic value calculating means so as to correspond to the type of the field separated by the field separating means; and
image processing means for performing a predetermined image process, corresponding to a result of the discrimination of the image field performed by the discrimination means, on the second image signal.
2. An image processing apparatus according to
claim 1
, wherein each of the field separation means and the characteristic value calculating means further includes scanning means for scanning the original image at the first density and then scanning the same at the second density.
3. An image processing apparatus according to
claim 1
, wherein the field separating means includes
means for separating the original image into at least a usual character field, characters on a background, a continuous gradation field and a dot gradation field.
4. An image processing apparatus according to
claim 1
, wherein means is provided which deletes a gradation image existing in a field which has not been discriminated by the field separating means as a gradation field and a gradation image existing in a field discriminated as a continuous gradation field.
5. An image processing apparatus according to
claim 1
, wherein the characteristic value calculating means includes
characteristic value calculating means for calculating change in the density, an average density and chroma which are characteristic values of the original image in response to the second image signal of the original image obtained at the second density which is higher than the first density.
6. An image processing apparatus according to
claim 1
, wherein the characteristic value calculating means includes
characteristic value calculating means for calculating a characteristic value of the original image in each of unit regions having different lengths in a main scanning direction and a sub-scanning direction in response to the second image signal of the original image obtained at the second density which is higher than the first density.
7. An image processing apparatus according to
claim 1
, wherein the discrimination means includes
second discrimination means for discriminating the image field of the original image by means of comparing the characteristic value calculated by the characteristic value calculating means with a threshold value corresponding to the type of the field separated by the separating means.
8. An image processing apparatus according to
claim 1
, wherein the characteristic value calculating means includes
a second characteristic value calculating means for calculating a segment component of the original image in a main scanning direction and a sub-scanning direction as the characteristic value in response to the second image signal of the original image obtained at the second density which is higher than the first density.
9. An image processing method comprising the steps of:
a field separating step for separating an original image into plural types of fields in response to a first image signal obtained at a first density of the supplied original image;
a characteristic value calculating step for calculating a characteristic value of the original image in response to a second image signal of the original image obtained at a second density which is higher than the first density;
a discrimination step for discriminating an image field of the original image in accordance with the characteristic value calculated in the characteristic value calculating step to correspond to the type of the field separated in the field separating step; and
an image processing step for performing a predetermined image process, corresponding to a result of the discrimination of the image field performed in the discrimination step, on the second image signal.
10. An image processing method according to
claim 9
, wherein each of the field separation step and the characteristic value calculating step further includes a scanning step for scanning the original image at the first density and then scanning the same at the second density.
11. An image processing method according to
claim 9
, wherein the field separating step includes
a step for separating the original image into at least a usual character field, characters on a background, a continuous gradation field and a dot gradation field.
12. An image processing method according to
claim 9
, wherein a step is provided which deletes a gradation image existing in a field which has not been discriminated in the field separating step as a gradation field and a gradation image existing in a field discriminated as a continuous gradation field.
13. An image processing method according to
claim 9
, wherein the characteristic value calculating step includes
a characteristic value calculating step for calculating change in the density, an average density and chroma which are characteristic values of the original image in response to the second image signal of the original image obtained at the second density which is higher than the first density.
14. An image processing method according to
claim 9
, wherein the characteristic value calculating step includes
a characteristic value calculating step for calculating a characteristic value of the original image in each of unit regions having different lengths in a main scanning direction and a sub-scanning direction in response to the second image signal of the original image obtained at the second density which is higher than the first density.
15. An image processing method according to
claim 9
, wherein the discrimination step includes
a second discrimination step for discriminating the image field of the original image by means of comparing the characteristic value calculated by the characteristic value calculating step with a threshold value corresponding to the type of the field separated by the separating step.
16. An image processing method according to
claim 9
, wherein the characteristic value calculating step includes
a second characteristic value calculating step for calculating a segment component of the original image in a main scanning direction and a sub-scanning direction as the characteristic value in response to the second image signal of the original image obtained at the second density which is higher than the first density.
US09/136,929 1997-08-20 1998-08-20 Image processing apparatus for discriminating image field of original document plural times and method therefor Expired - Lifetime US6424742B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP9-223673 1997-08-20
JP22367397A JP3891654B2 (en) 1997-08-20 1997-08-20 Image forming apparatus

Publications (2)

Publication Number Publication Date
US20010016065A1 true US20010016065A1 (en) 2001-08-23
US6424742B2 US6424742B2 (en) 2002-07-23

Family

ID=16801861

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/136,929 Expired - Lifetime US6424742B2 (en) 1997-08-20 1998-08-20 Image processing apparatus for discriminating image field of original document plural times and method therefor

Country Status (5)

Country Link
US (1) US6424742B2 (en)
EP (1) EP0899685B1 (en)
JP (1) JP3891654B2 (en)
CN (1) CN1109315C (en)
DE (1) DE69808864T2 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030053087A1 (en) * 2001-09-19 2003-03-20 Hidekazu Sekizawa Image processing apparatus
US20090009529A1 (en) * 2007-06-26 2009-01-08 Microsoft Corporation Adaptive contextual filtering
US20090214108A1 (en) * 2008-02-26 2009-08-27 Jonathan Yen System and method for isolating near achromatic pixels of a digital image
US20090225370A1 (en) * 2008-03-04 2009-09-10 Makio Gotoh Image processing apparatus, image processing method, image forming apparatus, program, and recording medium
US20100254622A1 (en) * 2009-04-06 2010-10-07 Yaniv Kamay Methods for dynamically selecting compression method for graphics remoting
US20150281471A1 (en) * 2014-03-31 2015-10-01 Heidelberger Druckmaschinen Ag Method for the automatic parameterization of the error detection of an image inspection system
US20160219186A1 (en) * 2015-01-23 2016-07-28 Konica Minolta, Inc. Image processing device and image processing method
US20170316279A1 (en) * 2016-05-02 2017-11-02 Fuji Xerox Co., Ltd. Change degree deriving apparatus, change degree deriving method and non-transitory computer readable medium

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3711810B2 (en) 1999-10-13 2005-11-02 セイコーエプソン株式会社 Image conversion apparatus, storage medium, and image conversion method
JP3625160B2 (en) 1999-10-14 2005-03-02 シャープ株式会社 Image processing device
US20020040375A1 (en) * 2000-04-27 2002-04-04 Simon Richard A. Method of organizing digital images on a page
US7006253B2 (en) * 2001-04-30 2006-02-28 Kabushiki Kaisha Toshiba Image processing apparatus
JP2002354242A (en) 2001-05-25 2002-12-06 Ricoh Co Ltd Image processor, image reader, image forming device, and color copying machine
US7262884B2 (en) * 2002-11-22 2007-08-28 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method
JP2004266513A (en) 2003-02-28 2004-09-24 Canon Inc Method and apparatus for inputting/outputting image
US7365880B2 (en) 2003-04-04 2008-04-29 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method
US7506271B2 (en) * 2003-12-15 2009-03-17 Microsoft Corporation Multi-modal handwriting recognition correction
US7379594B2 (en) * 2004-01-28 2008-05-27 Sharp Laboratories Of America, Inc. Methods and systems for automatic detection of continuous-tone regions in document images
US20050207675A1 (en) * 2004-03-22 2005-09-22 Kabushiki Kaisha Toshiba Image processing apparatus
US20060072819A1 (en) * 2004-10-06 2006-04-06 Kabushiki Kaisha Toshiba Image forming apparatus and method
US9137417B2 (en) 2005-03-24 2015-09-15 Kofax, Inc. Systems and methods for processing video data
US9769354B2 (en) 2005-03-24 2017-09-19 Kofax, Inc. Systems and methods of processing scanned data
JP4522306B2 (en) 2005-04-08 2010-08-11 株式会社リコー Image processing apparatus, image processing apparatus control method, image recognition method, image forming apparatus, information processing apparatus, data processing method, and program
JP4671885B2 (en) * 2005-06-01 2011-04-20 株式会社リコー Image processing apparatus, program, and image processing method
US7978922B2 (en) * 2005-12-15 2011-07-12 Microsoft Corporation Compressing images in documents
US8102436B2 (en) * 2006-12-18 2012-01-24 Sony Corporation Image-capturing apparatus and method, recording apparatus and method, and reproducing apparatus and method
US8244031B2 (en) * 2007-04-13 2012-08-14 Kofax, Inc. System and method for identifying and classifying color regions from a digital image
US9349046B2 (en) 2009-02-10 2016-05-24 Kofax, Inc. Smart optical input/output (I/O) extension for context-dependent workflows
US9767354B2 (en) 2009-02-10 2017-09-19 Kofax, Inc. Global geographic information retrieval, validation, and normalization
US8774516B2 (en) 2009-02-10 2014-07-08 Kofax, Inc. Systems, methods and computer program products for determining document validity
US9576272B2 (en) 2009-02-10 2017-02-21 Kofax, Inc. Systems, methods and computer program products for determining document validity
US8958605B2 (en) 2009-02-10 2015-02-17 Kofax, Inc. Systems, methods and computer program products for determining document validity
WO2013009651A1 (en) * 2011-07-12 2013-01-17 Dolby Laboratories Licensing Corporation Method of adapting a source image content to a target display
JP2013051498A (en) 2011-08-30 2013-03-14 Canon Inc Image forming device
US9483794B2 (en) 2012-01-12 2016-11-01 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US9058515B1 (en) 2012-01-12 2015-06-16 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US10146795B2 (en) 2012-01-12 2018-12-04 Kofax, Inc. Systems and methods for mobile image capture and processing
US9058580B1 (en) 2012-01-12 2015-06-16 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US8989515B2 (en) 2012-01-12 2015-03-24 Kofax, Inc. Systems and methods for mobile image capture and processing
WO2014160426A1 (en) 2013-03-13 2014-10-02 Kofax, Inc. Classifying objects in digital images captured using mobile devices
US9355312B2 (en) 2013-03-13 2016-05-31 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
US9208536B2 (en) 2013-09-27 2015-12-08 Kofax, Inc. Systems and methods for three dimensional geometric reconstruction of captured image data
US20140316841A1 (en) 2013-04-23 2014-10-23 Kofax, Inc. Location-based workflows and services
DE202014011407U1 (en) 2013-05-03 2020-04-20 Kofax, Inc. Systems for recognizing and classifying objects in videos captured by mobile devices
WO2015073920A1 (en) 2013-11-15 2015-05-21 Kofax, Inc. Systems and methods for generating composite images of long documents using mobile video data
US9760788B2 (en) 2014-10-30 2017-09-12 Kofax, Inc. Mobile document detection and orientation based on reference object characteristics
JP2016110354A (en) 2014-12-05 2016-06-20 三星ディスプレイ株式會社Samsung Display Co.,Ltd. Image processor, image processing method, and program
US10242285B2 (en) 2015-07-20 2019-03-26 Kofax, Inc. Iterative recognition-guided thresholding and data extraction
US9779296B1 (en) 2016-04-01 2017-10-03 Kofax, Inc. Content-based detection and three dimensional geometric reconstruction of objects in image and video data
US10803350B2 (en) 2017-11-30 2020-10-13 Kofax, Inc. Object detection and image cropping using a multi-detector approach

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5954376A (en) * 1982-09-21 1984-03-29 Konishiroku Photo Ind Co Ltd Picture processing method
GB2153619B (en) * 1983-12-26 1988-01-20 Canon Kk Image processing apparatus
JPS6110360A (en) * 1984-06-26 1986-01-17 Canon Inc Picture processing device
US4741046A (en) * 1984-07-27 1988-04-26 Konishiroku Photo Industry Co., Ltd. Method of discriminating pictures
DE3881392T2 (en) * 1988-09-12 1993-10-21 Oce Nederland Bv System and method for automatic segmentation.
US5280367A (en) * 1991-05-28 1994-01-18 Hewlett-Packard Company Automatic separation of text from background in scanned images of complex documents
JP3276985B2 (en) * 1991-06-27 2002-04-22 ゼロックス・コーポレーション Image pixel processing method
US5956468A (en) * 1996-07-12 1999-09-21 Seiko Epson Corporation Document segmentation system
US5850474A (en) * 1996-07-26 1998-12-15 Xerox Corporation Apparatus and method for segmenting and classifying image data
US5987221A (en) * 1997-01-24 1999-11-16 Hewlett-Packard Company Encoded orphan pixels for discriminating halftone data from text and line art data

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6987587B2 (en) 2001-09-19 2006-01-17 Kabushiki Kaisha Toshiba Multiple recognition image processing apparatus
US20030053087A1 (en) * 2001-09-19 2003-03-20 Hidekazu Sekizawa Image processing apparatus
US7821524B2 (en) * 2007-06-26 2010-10-26 Microsoft Corporation Adaptive contextual filtering
US20090009529A1 (en) * 2007-06-26 2009-01-08 Microsoft Corporation Adaptive contextual filtering
US20090214108A1 (en) * 2008-02-26 2009-08-27 Jonathan Yen System and method for isolating near achromatic pixels of a digital image
US20090225370A1 (en) * 2008-03-04 2009-09-10 Makio Gotoh Image processing apparatus, image processing method, image forming apparatus, program, and recording medium
US8248659B2 (en) 2008-03-04 2012-08-21 Sharp Kabushiki Kaisha Image processing apparatus, image processing method, image forming apparatus, program, and recording medium
US20100254622A1 (en) * 2009-04-06 2010-10-07 Yaniv Kamay Methods for dynamically selecting compression method for graphics remoting
US9025898B2 (en) * 2009-04-06 2015-05-05 Red Hat Israel, Ltd. Dynamically selecting compression method for graphics remoting
US20150281471A1 (en) * 2014-03-31 2015-10-01 Heidelberger Druckmaschinen Ag Method for the automatic parameterization of the error detection of an image inspection system
US9762750B2 (en) * 2014-03-31 2017-09-12 Heidelberger Druckmaschinen Ag Method for the automatic parameterization of the error detection of an image inspection system
US20160219186A1 (en) * 2015-01-23 2016-07-28 Konica Minolta, Inc. Image processing device and image processing method
US9774764B2 (en) * 2015-01-23 2017-09-26 Konica Minolta, Inc. Image processing device and image processing method
US20170316279A1 (en) * 2016-05-02 2017-11-02 Fuji Xerox Co., Ltd. Change degree deriving apparatus, change degree deriving method and non-transitory computer readable medium
US10586126B2 (en) * 2016-05-02 2020-03-10 Fuji Xerox Co., Ltd. Change degree deriving apparatus, change degree deriving method and non-transitory computer readable medium

Also Published As

Publication number Publication date
JP3891654B2 (en) 2007-03-14
EP0899685A1 (en) 1999-03-03
DE69808864T2 (en) 2003-07-03
CN1213118A (en) 1999-04-07
CN1109315C (en) 2003-05-21
EP0899685B1 (en) 2002-10-23
US6424742B2 (en) 2002-07-23
DE69808864D1 (en) 2002-11-28
JPH1169150A (en) 1999-03-09

Similar Documents

Publication Publication Date Title
US6424742B2 (en) Image processing apparatus for discriminating image field of original document plural times and method therefor
US5134666A (en) Image separator for color image processing
JP3276985B2 (en) Image pixel processing method
US5477335A (en) Method and apparatus of copying of black text on documents using a color scanner
JPH03256178A (en) Picture/character area identifying system for image processor
JPH03208467A (en) Picture area identification system for picture processing unit
CN100477722C (en) Image processing apparatus, image forming apparatus, image reading process apparatus and image processing method
JP2002232708A (en) Image processing device, image forming device using the same, and image processing method
JP2009038529A (en) Image processing method, image processing device, image forming apparatus, image reading device, computer program, and record medium
US6775031B1 (en) Apparatus and method for processing images, image reading and image forming apparatuses equipped with the apparatus, and storage medium carrying programmed-data for processing images
JP3736535B2 (en) Document type identification device
JP3073837B2 (en) Image region separation device and image region separation method
JP2000036912A (en) Image processing method
JP3988970B2 (en) Image processing apparatus, image processing method, and storage medium
JP2629699B2 (en) Image area identification device
JP4260774B2 (en) Image processing apparatus, image forming apparatus, image processing method, image processing program, and recording medium
JP2557480B2 (en) Color image processor
JP3064896B2 (en) Image processing device
JP3581756B2 (en) Image processing device
JP4958626B2 (en) Image processing method, image processing apparatus, image forming apparatus, computer program, and recording medium
JP2696902B2 (en) Color image processing equipment
JP2803852B2 (en) Image processing device
JP2507927B2 (en) Image processing device
JPH10173930A (en) Image output device
EP1605684B1 (en) Method of processing a digital image in order to enhance the text portion of said image

Legal Events

Date Code Title Description
AS Assignment

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YAMAMOTO, NAOFUMI;KAWAKAMI, HARUKO;RAO, GURURAJ;REEL/FRAME:009410/0105;SIGNING DATES FROM 19980811 TO 19980812

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12