
AU2009251019A1 - Classifying pixels of image for compression - Google Patents


Info

Publication number
AU2009251019A1
Authority
AU
Australia
Prior art keywords
pixels
run
classifying
pixel
compression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2009251019A
Inventor
Dixon De Sheng Deng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2009251019A priority Critical patent/AU2009251019A1/en
Publication of AU2009251019A1 publication Critical patent/AU2009251019A1/en
Abandoned legal-status Critical Current

Landscapes

  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

Methods (550), apparatuses (500), and computer readable storage mediums for classifying pixels of an image for compression are disclosed. At least one run of pixels of the image is analysed (565) using a first classifying method to determine if the pixels are to be compressed using either a first compression method (575) or a second compression method (570). A check is performed to determine if a predetermined criterion associated with the first classifying method (565), which determines if the run of pixels is to be compressed by the first compression method (575), is satisfied. A second classifying method (560) for analysing at least one further run of pixels is enabled in response to the predetermined criterion being satisfied. The first classifying method (565) is more computationally intensive than the second classifying method (560).

Description

S&F Ref: 927049

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Dixon De Sheng Deng
Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Classifying pixels of image for compression

The following statement is a full description of this invention, including the best method of performing it known to me/us:

CLASSIFYING PIXELS OF IMAGE FOR COMPRESSION

TECHNICAL FIELD

The present invention relates generally to document compression and, in particular, to the compression of continuous-tone compound documents containing an arbitrary combination of text, graphics and photographic images.

BACKGROUND

Raster Image Processing is the process and the means of turning vector digital information, such as an Adobe® PostScript® file, into a high-resolution raster image. The Raster Image Processor (RIP) takes the digital information about fonts and graphics that describes the appearance of a document and translates the information into an image composed of individual pixels that an imaging device, such as a printer, can output. A number of types of Raster Image Processors are known in the art. These include frame-store RIPs, band-store RIPs and tile-order RIPs. In a tile-order RIP, the page is divided into square tiles. Each tile is fully rendered before the RIP starts rendering the next tile. With such a RIP, the RIP output compressor may start compression of a rendered tile as soon as the tile is available, provided that the compressor can accept data in tile order.

When choosing a compression method, using a method appropriate for the type of data being compressed is important.
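The tile-order traversal just described can be sketched as follows. This is an illustrative sketch only; the tile size, function names and the renderer/compressor hooks are assumptions, not taken from the patent.

```python
# Sketch of tile-order processing: the page is divided into square
# tiles, visited row by row, so a compressor can consume each tile as
# soon as it has been rendered.

TILE_SIZE = 64  # assumed tile edge length in pixels


def tiles_in_order(page_width, page_height):
    """Yield (x, y) origins of the square tiles covering the page,
    in the order a tile-order RIP would render them."""
    for y in range(0, page_height, TILE_SIZE):
        for x in range(0, page_width, TILE_SIZE):
            yield (x, y)


# A RIP output compressor accepting data in tile order would then do
# something like (render_tile and compress_tile are hypothetical):
#
#   for origin in tiles_in_order(w, h):
#       tile = render_tile(origin)
#       compress_tile(tile)   # starts as soon as the tile is available
```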
One method of applying a combination of compression methods is to analyse blocks of pixels (usually tiles) and choose the most appropriate method for each block. One way of doing this is to segment image portions using the length of pixel runs and contrast. The method designates pixels in a short pixel run with low contrast as lossy pixels, and long pixel runs or those with high contrast as lossless. Such a method may incorrectly designate thin vertical patterns in an image as lossy when the pattern could be more optimally represented losslessly using an edge-based lossless compression method.

Another method of hybrid image compression involves segmentation based on the size of regions of substantially identical colour. The method designates regions greater than a size threshold as visually significant and compresses these regions using a lossless edge-based compression method. Regions with sizes smaller than the size threshold are considered visually insignificant and compressed using a lossy method. The process of determining regions of substantially identical colour is computationally intensive, as pixel runs are stitched together to form the regions. For an image containing substantially more lossy regions than lossless regions, there is a large amount of wasted computational effort in identifying regions of substantially identical colour, only to designate them as lossy regions afterwards.

Yet another hybrid image compression method involves dynamically adjusting segmentation parameters and compression quality parameters to suit compression goals using compression rate feedback. Whilst this method controls the compression rate, segmentation is still carried out for each pixel in an image. Therefore, there is still wasted computational effort in segmenting pixels of an image when the image contains substantially more lossy regions.
Furthermore, the method alters compression parameters during image compression, which can result in areas of different visual quality in the image.

An image discrimination method involves two classifiers employed to segment an image into tonal and character regions. The first classifier utilizes a low-pass filter, a comparator and a depth-level judging circuit to coarsely segment an image into lossy and lossless regions. The second classifier utilizes a length or area metric to further analyse lossy regions to identify lossless regions incorrectly designated as lossy by the first classifier. The classifiers rely on parameters that are dependent on statistics calculated from the image to be classified. This requires two passes through the image data, which not only increases processing time, but also may not be practical in streamed RIPs, where substantial memory buffers would be required to store image data to allow for a second pass. Furthermore, the first classifier carries out computationally intensive operations for every pixel of an image, adding to processing time when the image contains substantially more lossy regions than lossless regions.

The above discussion relates to documents that form public knowledge by virtue of their publication. Such should not be interpreted as an admission that such documents form part of the common general knowledge in the art.

SUMMARY

In accordance with an aspect of the invention, there is provided a computer-implemented method of classifying pixels of an image for compression. At least one run of pixels of the image is analysed using a first classifying method to determine if the pixels are to be compressed using either a first compression method or a second compression method. A check is performed to determine if a predetermined criterion associated with the first classifying method, which determines if the run of pixels is to be compressed by the first compression method, is satisfied.
A second classifying method for analysing at least one further run of pixels is enabled in response to the predetermined criterion being satisfied. The first classifying method is more computationally intensive than the second classifying method.

The method may further comprise the step of analysing the at least one further run of pixels using the second classifying method.

The method may further comprise the step of generating from the image a number of pixel runs, each comprising at least one pixel.

The method may further comprise the step of compressing, using the first compression method, each pixel run determined by either the first or second classifying method to be compressed using the first compression method. The method may further comprise the step of compressing, using the second compression method, each pixel run determined by the first classifying method to be compressed using the second compression method. The method may further comprise the step of outputting to memory each compressed pixel run to provide a compressed image.

The method may further comprise the step of processing the image on a tile-by-tile basis, each tile comprising a portion of the pixels of the image, the pixel runs being generated from each tile.

The first compression method may be a lossy compression method and the second compression method may be a lossless compression method. The first classifying method may generate a colour palette and one or more edges from the pixel runs classified by the first classifying method for the lossless compression method.

The first classifying method may form a region of at least one run of pixels with a substantially similar colour and determine if the region is to be compressed using the first compression method or the second compression method.
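The region-forming idea in the preceding paragraph can be illustrated with a toy sketch. The scalar colour model, the similarity tolerance and the `(colour, length)` run representation are all assumptions for the example, not details from the specification.

```python
def form_regions(runs, tolerance=8):
    """Group consecutive (colour, length) pixel runs whose colours are
    substantially similar (differ by at most `tolerance`) into regions.
    Returns a list of regions, each a list of runs."""
    regions = []
    for colour, length in runs:
        # Compare against the colour of the last run in the last region.
        if regions and abs(regions[-1][-1][0] - colour) <= tolerance:
            regions[-1].append((colour, length))
        else:
            regions.append([(colour, length)])
    return regions
```

A classifier could then judge each region as a whole (for example by its total pixel count) rather than run by run.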
The first classifying method may determine the run of pixels is to be compressed using the lossy compression method if an edge in the run of pixels does not have high contrast. The second classifying method may determine the further run of pixels is to be compressed using the lossy compression method if the further run of pixels does not have high contrast. The second classifying method may determine a run of pixels is to be compressed using the lossy compression method if the further run of pixels is less than a predetermined length.

In accordance with another aspect of the invention, there is provided an apparatus for classifying pixels of an image for compression, comprising: a memory for storing data and a computer program; and a processor unit coupled to the memory for executing the computer program, the memory and the processor being configured to classify the pixels of the image for compression. The computer program comprises: a computer program code module for analysing at least one run of pixels of the image using a first classifying method to determine if the pixels are to be compressed using either a first compression method or a second compression method; a computer program code module for checking if a predetermined criterion associated with the first classifying method, which determines if the run of pixels is to be compressed by the first compression method, is satisfied; and a computer program code module for enabling a second classifying method for analysing at least one further run of pixels in response to the predetermined criterion being satisfied, the first classifying method being more computationally intensive than the second classifying method.

The apparatus may further comprise a computer program code module for analysing the at least one further run of pixels using the second classifying method.
The apparatus may further comprise a computer program code module for compressing, using the first compression method, each pixel run determined by either the first or second classifying method to be compressed using the first compression method.

The apparatus may further comprise a computer program code module for compressing, using the second compression method, each pixel run determined by the first classifying method to be compressed using the second compression method.

In accordance with yet another aspect of the invention, there is provided a computer readable storage medium having recorded therein a computer program for classifying pixels of an image for compression for execution by a processing unit. The computer program comprises: a computer program code module for analysing at least one run of pixels of the image using a first classifying method to determine if the pixels are to be compressed using either a first compression method or a second compression method; a computer program code module for checking if a predetermined criterion associated with the first classifying method, which determines if the run of pixels is to be compressed by the first compression method, is satisfied; and a computer program code module for enabling a second classifying method for analysing at least one further run of pixels in response to the predetermined criterion being satisfied, the first classifying method being more computationally intensive than the second classifying method.

The computer readable storage medium may further comprise a computer program code module for compressing, using the first compression method, each pixel run determined by either the first or second classifying method to be compressed using the first compression method.
The computer readable storage medium may further comprise a computer program code module for compressing, using the second compression method, each pixel run determined by the first classifying method to be compressed using the second compression method.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present invention are described hereinafter with reference to the drawings, in which:

Fig. 1 is a schematic block diagram showing a printing system for rendering and printing a document;
Fig. 2 is a schematic block diagram of the printing system of Fig. 1 having a client-based architecture;
Fig. 3 is a schematic block diagram of the printing system of Fig. 1 having a host-based architecture;
Fig. 4 is a block diagram depicting a tile;
Fig. 5A is a high-level flow diagram showing a process of classifying pixels of an image for compression using two classifying methods and compressing the image using two compression methods dependent upon classification of the pixels, where one classifying method is more computationally intensive than the other;
Fig. 5B is a schematic block diagram showing a hybrid compression apparatus, as used in the systems of Figs. 2 and 3, according to the preferred embodiment of the present invention;
Fig. 6 shows a flow diagram of the compression process carried out by the hybrid compression apparatus of Fig. 5B;
Fig. 7 is a flow diagram illustrating a pixel-run-generation process for a single pixel run on a scanline (or row) of a tile;
Fig. 8 is a block diagram showing the neighbourhood of pixels used to calculate contrast for a pixel run;
Fig. 9 is a flow diagram of a pixel-run pre-classification process performed by a Pre-Classifier;
Fig. 10 is a state transition diagram showing the various states of a Classifier;
Fig. 11 is a flow diagram of the palette generation process performed by a Palette Generator of the Classifier;
Fig.
12 is a flow diagram illustrating an edge generation process performed by an Edge Generator of the Classifier;
Fig. 13 is a block diagram showing examples of instances in which joining criteria are met;
Fig. 14 is a schematic flow diagram showing a method for assessing the visual significance of edges generated by the Edge Generator;
Fig. 15 is a block diagram showing an example of a tile being compressed; and
Figs. 16A and 16B are a schematic block diagram of a general-purpose computer system upon which the arrangements described can be practised.

DETAILED DESCRIPTION

Methods, apparatuses, and computer readable storage mediums for classifying pixels of an image for compression are disclosed. Further, methods, apparatuses, and computer readable storage mediums for compressing the image dependent upon the classification of the pixels are disclosed. In the following description, numerous specific details, including particular colour spaces, tile sizes, threshold parameters, and the like, are set forth. However, from this disclosure, it will be apparent to those skilled in the art that modifications and/or substitutions may be made without departing from the scope and spirit of the invention. In other circumstances, specific details may be omitted so as not to obscure the invention.

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.

Overview

Fig. 5A illustrates a method 550 of classifying pixels of an image for compression, as well as compressing the image dependent upon the pixels being classified. The method 550 may be computer implemented. A digital image comprising a number of pixels is provided as input to the method 550. Preferably, the image is processed in the method 550 as tiles.
The image is processed on a tile-by-tile basis, each tile comprising one or more pixels. The first step depicted in the flow diagram is step 560 and the second step is step 565. Step 565 utilises a first classifying method (a classifier), and step 560 utilises a second classifying method (a pre-classifier), as described hereinafter. The first classifying method is more computationally intensive than the second classifying method. The image tiles are provided to step 560. When the method 550 initially starts processing, step 560 simply generates pixel runs for each tile, without applying the second classifying method, which is disabled. Each pixel run comprises at least one pixel of the image.

The pixel runs are provided to the first classifying method in step 565, which is applied to at least one pixel run. The first classifying method is used to analyse the at least one pixel run to determine if the pixels are to be compressed using either a first (e.g., lossy) compression method or a second (e.g., lossless) compression method. Step 565 also checks if a predetermined criterion is satisfied relating to the first classifying method determining that a pixel run is to be compressed by the first (e.g., lossy) compression method. In response to the predetermined criterion being satisfied, the second classifying method in step 560 is enabled to analyse at least one further pixel run. Accordingly, the at least one further pixel run is analysed using the second classifying method.

If step 560 determines a pixel run is insignificant, the insignificant pixel run is provided to step 575. Step 575 applies a first (e.g., lossy) compression method to the insignificant pixel runs from step 560. Processing continues from step 575 at step 580, which outputs the compressed image. This may involve outputting to memory each compressed pixel run to provide the compressed image.
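The two-stage flow just described can be sketched as a small dispatch loop. This is a minimal illustration, not the patented implementation: the concrete enabling criterion (here, a streak of consecutive lossy verdicts), the classifier signatures and all thresholds are assumptions.

```python
def classify_runs(runs, classify, pre_classify, lossy_streak_needed=16):
    """Route each pixel run to 'lossy' or 'lossless'.

    The computationally intensive classifier (in the spirit of step 565)
    handles runs first. Once it has sent a streak of consecutive runs to
    the lossy path (standing in for the 'predetermined criterion'), the
    cheap pre-classifier (in the spirit of step 560) is enabled and
    filters obviously insignificant runs before they reach the
    intensive classifier."""
    decisions = []
    streak = 0
    pre_enabled = False
    for run in runs:
        if pre_enabled and pre_classify(run) == "lossy":
            decisions.append("lossy")   # insignificant: straight to lossy path
            continue
        verdict = classify(run)         # 'lossy' or 'lossless'
        decisions.append(verdict)
        streak = streak + 1 if verdict == "lossy" else 0
        if streak >= lossy_streak_needed:
            pre_enabled = True          # feedback enabling the pre-classifier
    return decisions
```

A pre-classifier in the spirit of the description might mark a run insignificant when it is short and low-contrast, e.g. `lambda run: "lossy" if run.length < 8 and run.contrast < 96 else "defer"` (both thresholds invented for illustration).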
Step 560 passes any pixel runs that cannot be determined to be insignificant to step 565. Step 565 applies the first classifying method to a pixel run. Step 565 provides classification status feedback to step 560, as described hereinafter. Step 565 preferably generates a colour palette and edges from the pixel runs. An edge consists of pixel runs. If step 565 determines an edge generated from pixel runs to be insignificant, the insignificant edge is passed to step 575 to apply lossy compression to all pixel runs of the insignificant edge. However, if step 565 determines an edge generated from pixel runs is significant, the significant edge is provided to step 570. In step 570, a second (e.g., lossless) compression method is applied to all pixel runs of a significant edge. The losslessly compressed pixel runs are provided to step 580 to form the compressed image output by step 580. After step 580, processing terminates.

Thus, the compressing step 575 can compress, using the lossy compression method, each pixel run determined by either the first or second classifying method of step 565 or 560 to be compressed using the lossy compression method. The compressing step 570 can compress, using the lossless compression method, each pixel run determined by the first classifying method of step 565 to be compressed using the lossless compression method. These and other aspects of the invention are described in greater detail hereinafter.

Even though Fig. 5A discloses the exemplary embodiment of selecting between a lossless compression method and a lossy compression method, the present invention is also applicable to a selection between two lossless compression methods or two lossy compression methods. Examples of different lossless compression methods that may be used include the Adaptive Binary Optimization (ABO), Portable Network Graphics (PNG) and Tagged Image File Format (TIFF) methods.
On the other hand, examples of applicable lossy compression methods include fractal and wavelet compression methods.

Printing System

Pixel data generated by a RIP can belong to a number of different region types, namely text, graphic, and image regions. Each region type has different characteristics and hence different compression requirements. Pixels forming characters, symbols and other glyphs are referred to as text regions. Pixels forming large regions of the same colour (as evident in many block diagrams, charts and clip art) are referred to as graphic regions. Text regions and graphic regions both contain a single colour per region and require transitions between regions to be defined accurately to maintain sharp edges. Image regions contain many colours per region, which vary more smoothly than the transitions between colours in graphic or text regions. These regions contain a large quantity of data due to the constantly changing colours within the region.

The principles of the arrangements described hereinafter have general applicability to image classification, as well as image compression. For ease of explanation, the arrangements are described with reference to image compression used in a colour raster image processing system. However, the aspects of the present invention are not limited to the described arrangements. For example, the aspects of the invention may have application to any arrangement utilising hybrid compression where memory and/or computational resources are limited. The aspects of the invention are computationally more efficient in not applying unnecessary computationally intensive operations, dependent upon the content of the image, or tiles of the image, being processed.

Fig. 1 shows a typical printing system 100, which includes a Personal Computer 101 connected to a Printer 105 through a Network 104.
The Personal Computer 101 and the Printer 105 each include at least one processor unit (not illustrated), a memory unit (not illustrated), and a Modulator-Demodulator (Modem) transceiver device for communicating to and from the Network 104. The Printer 105 further includes a Printer Engine 107. The Network 104 may be any connection, including a Universal Serial Bus (USB) or parallel port connection.

When a user of the Personal Computer 101 chooses to print a document to a physical medium using the Printer 105, there are a number of well-accepted stages in the process. Firstly, a Software Application 102 executing on the Personal Computer 101 generates data in the form of a page description language (PDL), such as Adobe® PostScript® or Hewlett-Packard's Printer Command Language (PCL), which describes objects to be printed. Secondly, a Host Print Manager 103 also executing on the Personal Computer 101 processes the PDL before transmitting the resulting data from the Personal Computer 101 via the Network 104 to the Printer 105. A Client Print Manager 106 in the Printer 105 performs further processing before the resulting data is provided to the Printer Engine 107 of the Printer 105, where the resulting data is printed on a physical medium.

The work done by the Host Print Manager 103 and the Client Print Manager 106 usually comprises job generation, raster image processing (RIP), RIP output compression, RIP output decompression and post-RIP processing. These tasks can be split between the Host Print Manager 103 and the Client Print Manager 106 in a number of different ways, depending on the type of architecture chosen.

The RIP is responsible for combining the many levels and objects that can exist in a typical print job into a 2-dimensional rasterised output. The output must be capable of defining the colour value for each pixel of the page area at the chosen resolution.
Due to the real-time requirements of a laser printer engine, the entire page in raster form is usually available for printing once Post RIP Processing starts. Post RIP Processing is the process of taking the rendered data, performing any final processing needed and feeding the data in real time to the Printer Engine 107. If all stages of the print process could guarantee real-time supply of data, a simple, single-pass system could operate, where data is pipelined through the system at constant speed just in time for printing. However, raster image processors do not always operate in real time due to the varying complexity of source data used for rendering.

In a typical laser print engine, post RIP processing must operate in real time as the page is fed through the Printer 105, otherwise the Printer Engine 107 stalls and the entire page needs to be printed again. To guarantee supply of data in real time, an entire page of RIP output must be buffered. The memory required to buffer an entire page of raw pixel data is cost-prohibitive. Therefore, RIP Output Compression is necessary to achieve adequate performance at a low cost. The decompression of the RIP output must also be performed in real time.

Compression methods can broadly be categorized as either lossy or lossless. One class of lossy RIP output compression algorithms is the pixel-based methods, such as JPEG and wavelet compression. These algorithms are able to achieve high compression ratios by discarding information that is not visually significant. This achieves good compression ratios and acceptable visual quality for natural images, but documents containing sharp transitions in colour, such as basic text or graphics, can suffer from the introduction of visible artefacts.

The majority of lossless compression algorithms can be broadly categorized as pixel-based or edge-based.
Lossless pixel-based methods such as JPEG-LS suffer from the same drawbacks as lossy pixel-based methods. As resolution and colour depth increase, these algorithms become prohibitively expensive in memory. The advantage of lossless compression is that the output is of high quality. This is important for text and graphic regions, where sharp transitions in colour must be maintained and the sort of artefacts caused by most lossy algorithms must be avoided. The disadvantage of lossless compression is that worst-case jobs cause the compressed size to be larger than the raw size.

Edge-based (vector-based) algorithms are generally lossless and therefore preserve sharp transitions in colour. Text regions and graphic regions contain a single colour and can therefore be represented efficiently using edge-based algorithms, since a large area of many pixels can be described with only a single edge. These algorithms are less affected by increases in resolution or bit depth, since the number of edges does not increase as the resolution increases. However, natural images do not compress well with edge-based algorithms and may even cause the compressed size to be larger than the raw size.

No lossy or lossless method alone produces a satisfactory outcome for the compression of RIP output, which can contain a wide variety of different requirements across a single page. A combination or hybrid of lossless and lossy methods is one way to achieve better results. The lossless method preserves the sharp colour transitions, while the lossy method provides strong compression of regions with many colours. This can be an effective method of gaining high compression while maintaining high quality. It requires some method of identifying which regions should be encoded losslessly and which should be encoded lossily.

Usually, lossless encoding is used for flat regions.
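Why flat content suits run/edge style lossless coding while photographic content does not can be seen with a toy run-length coder. This stands in for the edge-based algorithms discussed above and is not the patent's actual coder.

```python
def run_encode(scanline):
    """Collapse a scanline into (colour, length) records: one record
    suffices for a flat region, but smoothly varying photographic data
    produces roughly one record per pixel, expanding rather than
    compressing."""
    records = []
    for pixel in scanline:
        if records and records[-1][0] == pixel:
            records[-1] = (pixel, records[-1][1] + 1)
        else:
            records.append((pixel, 1))
    return records


flat = [7] * 64          # flat graphic region: a single colour
photo = list(range(64))  # smoothly varying colours, as in an image region
```

`run_encode(flat)` yields a single record, while `run_encode(photo)` yields 64, mirroring why edge-based coding can make natural images larger than their raw size.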
Flat regions are text or graphics regions typically comprising a single colour that is outlined by a sharp edge at page resolution.

Usually, lossy encoding is used for regions of pixels that form a photographic image. These image regions contain a wide variety of colours that typically do not change markedly from one pixel to the next. The boundary of an image region must still be retained accurately, since the human visual system treats the boundary between an image region and a flat region much the same as the boundary between two flat regions. However, the pixels within an image region do not need to be preserved exactly, since the human visual system is less sensitive to small variations in colour or luminance within an image region.

The delegation of tasks to either the Host Print Manager 103 or the Client Print Manager 106 depends on the type of architecture chosen. The two common architectures are client-based and host-based.

Fig. 2 shows in greater detail a client-based architecture 200 of the printing system 100 of Fig. 1, where the majority of the processing is performed by the Client Print Manager 106. The user of the Personal Computer 101 chooses to print a document, causing the Software Application 102 to create a PDL, which is sent to the Host Print Manager 103. A Job Generator 201 within the Host Print Manager 103 takes the PDL and organizes the PDL into a format that can be supplied to a RIP 203 in the Client Print Manager 106. From the Job Generator 201, the data is sent over the Network 104 to the Printer 105, which stores the data in a Job Memory 202 of the Client Print Manager 106. The data can be obtained from the Job Memory 202 and is then rendered by the RIP 203 to create a bitmap of pixels called the RIP Output. The RIP Output is compressed by a RIP Output Compressor 204 coupled to the RIP 203 and stored in a Compressed Memory 205.
Before the Printer Engine 107 requires the information, the data is decompressed by a RIP Output Decompressor 206 into Uncompressed Memory 207. This data is modified in a number of ways by the Post RIP Processor 208 to optimize the print quality on the Print Engine 107. Finally, the pixel data is supplied to the Print Engine 107.

Fig. 3 shows in greater detail a host-based architecture 300 of the printing system 100 of Fig. 1, where a large proportion of the processing has been shifted into the Host Print Manager 103. The user of the Personal Computer 101 chooses to print the document, causing the Software Application 102 to create a PDL, which is sent to the Host Print Manager 103. The Job Generator 201 processes the PDL and organizes the PDL into a format that can be supplied to the RIP 203. This data is stored in the Job Memory 202 before being rendered by the RIP 203 to create a bitmap of pixels called the RIP Output. The RIP Output is compressed by the RIP Output Compressor 204 and sent over the Network 104 to the Client Print Manager 106 in the Printer 105, to be stored in Compressed Memory 205. Before the Printer Engine 107 requires the information, the data is decompressed by the RIP Output Decompressor 206 and stored in Uncompressed Memory 207. From there, the Post RIP Processor 208 modifies the data in a number of ways to optimize the print quality on the Print Engine 107. Finally, the pixel data is sent to the Print Engine 107.

The embodiments of the present invention include a number of hybrid compression algorithms for compressing rasterised data in a single pass using a combination of lossless compression and lossy compression. The rasterised data is supplied to the compression algorithm in tile-by-tile order.

Computer Implementation

Figs. 16A and 16B depict a general-purpose computer system 1600, upon which the various arrangements described can be practiced.

As seen in Fig.
16A, the computer system 1600 includes: a computer module 1601; input devices such as a keyboard 1602, a mouse pointer device 1603, a scanner 1626, a camera 1627, and a microphone 1680; and output devices including a printer 1615, a display device 1614 and loudspeakers 1617. An external Modulator-Demodulator (Modem) transceiver device 1616 may be used by the computer module 1601 for communicating to and from a communications network 1620 via a connection 1621. The communications network 1620 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 1621 is a telephone line, the modem 1616 may be a traditional "dial-up" modem. Alternatively, where the connection 1621 is a high-capacity (e.g., cable) connection, the modem 1616 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 1620.

The computer module 1601 typically includes at least one processor unit 1605, and a memory unit 1606. For example, the memory unit 1606 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1601 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1607 that couples to the video display 1614, loudspeakers 1617 and microphone 1680; an I/O interface 1613 that couples to the keyboard 1602, mouse 1603, scanner 1626, camera 1627 and optionally a joystick or other human interface device (not illustrated); and an interface 1608 for the external modem 1616 and printer 1615. In some implementations, the modem 1616 may be incorporated within the computer module 1601, for example within the interface 1608.

The computer module 1601 also has a local network interface 1611, which permits coupling of the computer system 1600 via a connection 1623 to a local-area communications network 1622, known as a Local Area Network (LAN). As illustrated in Fig.
16A, the local communications network 1622 may also couple to the wide-area communications network 1620 via a connection 1624, which would typically include a so-called "firewall" device or device of similar functionality. The local network interface 1611 may comprise an Ethernet™ circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1611.

The I/O interfaces 1608 and 1613 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1609 are provided and typically include a hard disk drive (HDD) 1610. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1612 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1600.

The components 1605 to 1613 of the computer module 1601 typically communicate via an interconnected bus 1604 and in a manner that results in a conventional mode of operation of the computer system 1600 known to those in the relevant art. For example, the processor 1605 is coupled to the system bus 1604 using a connection 1618. Likewise, the memory 1606 and optical disk drive 1612 are coupled to the system bus 1604 by connections 1619. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.

The methods of the image classifying and compressing may be implemented using the computer system 1600, wherein the processes of Figs.
1 to 15 may be implemented as one or more software application programs 1633 executable within the computer system 1600. In particular, the steps of the methods of the image classifying and compressing are effected by instructions 1631 (see Fig. 16B) in the software 1633 that are carried out within the computer system 1600. The software instructions 1631 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the image classifying and compressing methods and a second part and the corresponding code modules manage a user interface between the first part and the user.

The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 1600 from the computer readable medium, and then executed by the computer system 1600. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 1600 preferably effects an apparatus for classifying pixels of an image and an apparatus for compressing the classified pixels of the image.

The software 1633 is typically stored in the HDD 1610 or the memory 1606. The software is loaded into the computer system 1600 from a computer readable medium, and executed by the computer system 1600. Thus, for example, the software 1633 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1625 that is read by the optical disk drive 1612. A computer readable medium having such software or computer program recorded on it is a computer program product.
In some instances, the application programs 1633 may be supplied to the user encoded on one or more CD-ROMs 1625 and read via the corresponding drive 1612, or alternatively may be read by the user from the networks 1620 or 1622. Still further, the software can also be loaded into the computer system 1600 from other computer readable media. Computer readable storage media refers to any storage medium that provides recorded instructions and/or data to the computer system 1600 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1601. Examples of computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1601 include radio or infra-red transmission channels, as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.

The second part of the application programs 1633 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1614.
Through manipulation of typically the keyboard 1602 and the mouse 1603, a user of the computer system 1600 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1617 and user voice commands input via the microphone 1680.

Fig. 16B is a detailed schematic block diagram of the processor 1605 and a "memory" 1634. The memory 1634 represents a logical aggregation of all the memory modules (including the HDD 1609 and semiconductor memory 1606) that can be accessed by the computer module 1601 in Fig. 16A.

When the computer module 1601 is initially powered up, a power-on self-test (POST) program 1650 executes. The POST program 1650 is typically stored in a ROM 1649 of the semiconductor memory 1606 of Fig. 16A. A hardware device such as the ROM 1649 storing software is sometimes referred to as firmware. The POST program 1650 examines hardware within the computer module 1601 to ensure proper functioning and typically checks the processor 1605, the memory 1634 (1609, 1606), and a basic input-output systems software (BIOS) module 1651, also typically stored in the ROM 1649, for correct operation. Once the POST program 1650 has run successfully, the BIOS 1651 activates the hard disk drive 1610 of Fig. 16A. Activation of the hard disk drive 1610 causes a bootstrap loader program 1652 that is resident on the hard disk drive 1610 to execute via the processor 1605. This loads an operating system 1653 into the RAM memory 1606, upon which the operating system 1653 commences operation. The operating system 1653 is a system level application,
The operating system 1653 is a system level application, 30 executable by the processor 1605, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface. n')7AA0 l)AAo7711 IN -18 The operating system 1653 manages the memory 1634 (1609, 1606) to ensure that each process or application running on the computer module 1601 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 5 1600 of Fig. 16A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1634 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1600 and how such is used. 10 As shown in Fig. 16B, the processor 1605 includes a number of functional modules including a control unit 1639, an arithmetic logic unit (ALU) 1640, and a local or internal memory 1648, sometimes called a cache memory. The cache memory 1648 typically includes a number of storage registers 1644 - 1646 in a register section. One or more internal busses 1641 functionally interconnect these 15 functional modules. The processor 1605 typically also has one or more interfaces 1642 for communicating with external devices via the system bus 1604, using a connection 1618. The memory 1634 is coupled to the bus 1604 using a connection 1619. The application program 1633 includes a sequence of instructions 1631 that 20 may include conditional branch and loop instructions. The program 1633 may also include data 1632 which is used in execution of the program 1633. The instructions 1631 and the data 1632 are stored in memory locations 1628, 1629, 1630 and 1635, 1636, 1637, respectively. 
Depending upon the relative size of the instructions 1631 and the memory locations 1628-1630, a particular instruction may be stored in a single memory location, as depicted by the instruction shown in the memory location 1630. Alternatively, an instruction may be segmented into a number of parts, each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1628 and 1629.

In general, the processor 1605 is given a set of instructions which are executed therein. The processor 1605 waits for a subsequent input, to which the processor 1605 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1602, 1603, data received from an external source across one of the networks 1620, 1622, data retrieved from one of the storage devices 1606, 1609, or data retrieved from a storage medium 1625 inserted into the corresponding reader 1612, all depicted in Fig. 16A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1634.

The disclosed image classifying and compressing arrangements use input variables 1654, which are stored in the memory 1634 in corresponding memory locations 1655, 1656, 1657. The image classifying and compressing arrangements produce output variables 1661, which are stored in the memory 1634 in corresponding memory locations 1662, 1663, 1664. Intermediate variables 1658 may be stored in memory locations 1659, 1660, 1666 and 1667.

Referring to the processor 1605 of Fig.
16B, the registers 1644, 1645, 1646, the arithmetic logic unit (ALU) 1640, and the control unit 1639 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 1633. Each fetch, decode, and execute cycle comprises:

(a) a fetch operation, which fetches or reads an instruction 1631 from a memory location 1628, 1629, 1630;

(b) a decode operation in which the control unit 1639 determines which instruction has been fetched; and

(c) an execute operation in which the control unit 1639 and/or the ALU 1640 execute the instruction.

Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1639 stores or writes a value to a memory location 1632.

Each step or sub-process in the processes of Figs. 1 to 15 is associated with one or more segments of the program 1633 and is performed by the register section 1644, 1645, 1646, the ALU 1640, and the control unit 1639 in the processor 1605 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1633.

The method of classifying pixels of an image for compression, as well as of compressing the image, may alternatively be implemented in dedicated hardware, such as one or more integrated circuits performing the functions or sub-functions of classifying pixels of an image for compression. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.

Exemplary Embodiment

The exemplary embodiment of the current invention compresses images as tiles.
For the purposes of this arrangement, a tile refers to a block of N-by-M pixels, where there are multiple blocks across the width of the page and multiple blocks down the length of the page. Tiles are disjoint and cover the page. A tile preferably comprises an integer number of 8x8 blocks of pixels. For example, for an A4 page at a printer resolution of 600 dpi, a suitable choice for tile dimensions is M = N = 64.

The position of a pixel, (X, Y) where X, Y are integers, within a tile is relative to the upper left hand corner of the tile. Y indexes the tile rows, whereas X indexes the offset of a pixel along a tile row. A tile row comprises the set of pixels that span the width of the tile. For example, Fig. 4 shows a tile comprising 64x64 pixels, where the first pixel in the first tile row occupies pixel position (0, 0) 401, whereas the last pixel in the first tile row occupies pixel position (63, 0) 403. Accordingly, the last pixel in the last tile row occupies position (63, 63) 404. Raster tile order refers to processing a tile pixel-by-pixel, tile-row-by-tile-row, in sequential order, starting with the first tile row and ending with the last row. Pixel values P(X, Y) within a tile refer to the colour value of pixel P, located at a position (X, Y). Where the dimensions of a page do not contain an integer number of tiles, the page is preferably padded to the requisite size. Typically, tiles are processed one by one, though they may also be processed in parallel.

The exemplary embodiment of the RIP Output Compressor 204 of Fig. 2 is a Hybrid Compressor 500, shown in Fig. 5B, that compresses pixels on a tile-by-tile basis. The Hybrid Compressor 500 comprises a Pixel Run Generator 502, a Pre-Classifier 507, a Classifier 504, a Bit Mask Generator 508, an Image Processor 516, an Edge Compressor 510 and a Compressed Data Memory Manager 519. The Classifier 504 comprises a Palette Generator 512, an Edge Generator 505, and an Edge Assessor 506.
The Image Processor 516 comprises a Bit Mask Processor 517 and an Image Compressor 518. The RIP 203 is coupled to the Pixel Run Generator 502, which provides pixel runs 503 as input to the Pre-Classifier 507.

The Pre-Classifier 507 provides pixel runs to the Classifier 504 and the Bit Mask Generator 508, dependent upon the results of pre-classification. In particular, the Pre-Classifier 507 is coupled to the Palette Generator 512 and the Edge Generator 505 of the Classifier 504. The Edge Generator 505 is coupled to the Edge Assessor 506. The Edge Assessor 506 is coupled to the Bit Mask Generator 508. The Palette Generator 512 updates or modifies a Colour Palette 513, which is output to the Edge Compressor 510, external to the Classifier 504. The Palette Generator 512 also updates or modifies the Classifier State 515, as does the Edge Assessor 506. The Classifier State 515 is provided as input to the Pre-Classifier 507 by the Classifier 504. The Edge Assessor 506 also updates or modifies the Edge Store 514, which is input to the Edge Compressor 510. The (losslessly) compressed data output by the Edge Compressor 510 is provided to the Compressed Data Memory Manager 519.

The Bit Mask Generator 508 provides bitmasks to the Image Processor 516, and in particular to the Bit Mask Processor 517 of the Image Processor 516. The Bit Mask Processor 517 and the RIP 203 are coupled to the Image Compressor 518. The Bit Mask Generator 508 only passes bitmasks to the Image Processor 516. The RIP provides image pixels to the Image Compressor 518 of the Image Processor 516 for lossy compression. The bitmask is processed by the Bit Mask Processor 517 to work out which pixels within a tile need to be compressed by the Image Compressor 518. The Image Compressor 518 provides lossily compressed data to the Compressed Data Memory Manager 519, which outputs the Compressed RIP Output to the Compressed Memory 205.
These elements of the Hybrid Compressor 500 are described in greater detail hereinafter.

Compression Flow

Fig. 6 shows a tile compression process 600 illustrating at a high level the operation of the Hybrid Compressor 500. The exemplary embodiment of the Hybrid Compressor 500 compresses raw pixel data provided by the RIP 203 to the Hybrid Compressor 500 on a tile-by-tile basis. Input to the Pixel Run Generator 502 is from the job memory buffer located in the RIP 203 containing the raw pixel data of a tile, arranged in tile raster order.

Starting on the first tile row within the tile with the pixel located at (0, 0), the Pixel Run Generator 502 executes a pixel-check step 601 in Fig. 6 to determine whether there are more pixels in the tile to process. After all pixels have been processed by the compression process 600, the pixel-check step 601 returns false (NO) and processing continues at compression step 606. In step 606, the tile data is compressed by the Edge Compressor 510 and the Image Processor 516 of Fig. 5B, as described in greater detail hereinafter. The compression process 600 then completes for the tile (Done).

If there are more pixels to process (YES) in decision step 601, the Pixel Run Generator 502 executes the generate-pixel-run step 602 of the process 600 to generate a run of pixels of substantially identical colour, i.e. identical within a predefined visual distinguishability tolerance. The generate-pixel-run step 602 is explained in more detail hereinafter. After a pixel run is generated by the generate-pixel-run step 602, the pixel run is passed from the Pixel Run Generator 502 to the Pre-Classifier 507, which carries out the pre-classification step 603 of Fig. 6.

In decision step 604, the Pre-Classifier 507 performs a check to determine if the pixel run needs classification.
If the classification-decision step 604 determines that the pixel run does not need classification (NO), the Pre-Classifier 507 hands control back to the Pixel Run Generator 502 at step 601. Otherwise, if the classification-decision step 604 determines that the pixel run needs classification (YES), the pixel run is passed from the Pre-Classifier 507 to the Classifier 504, which executes the pixel-run classifying step 605. After the Classifier 504 completes the classification step 605 for the pixel run, processing continues at step 601 and control is handed back to the Pixel Run Generator 502.

Pixel-run Generation

Referring to Fig. 5B, raw pixel data is supplied to the Pixel Run Generator 502 by the RIP 203. For the purposes of this description, pixel data is supplied by the RIP 203 on a tile-by-tile basis in tile raster order. In the exemplary embodiment, pixel values are represented by three channels representing a red component (R), a green component (G), and a blue component (B) (referred to as RGB), with 8 bits of precision per channel (referred to as bit depth). Other colour representations, such as one cyan, one magenta, one yellow and one black channel, could also be utilised, along with other bit depths.

Each pixel value within a tile is processed by the Pixel Run Generator 502 in tile raster order. In Fig. 5B, the output of the Pixel Run Generator 502 is pixel runs 503. For the purposes of this description, a pixel run comprises a run of consecutive pixels, wholly contained on one tile row, that have substantially identical colour values. Preferably, the colour values in a run are identical, i.e. the tolerance is zero. An example of pixel run generation is described hereinafter. The step 602, executed by the Pixel Run Generator 502 as part of the process 600 in Fig. 6, is described in detail with reference to Fig. 7, which shows the pixel-run-generator process 700 that is executed.
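As a concrete illustration of the pixel-run definition above, the following Python sketch groups one tile row into runs of substantially identical colour. It is not the patented implementation; the function name, the list-of-tuples representation and the tolerance parameter are illustrative assumptions.

```python
def generate_pixel_runs(tile_row, tolerance=0):
    """Split one tile row into runs of consecutive pixels whose colours
    are substantially identical (exactly identical when tolerance is 0).

    tile_row is a list of (R, G, B) tuples; each run is returned as a
    (colour, length) pair, in left-to-right order."""
    runs = []
    if not tile_row:
        return runs
    run_colour, run_length = tile_row[0], 1
    for pixel in tile_row[1:]:
        # Compare per colour channel against the colour that started the run.
        if all(abs(a - b) <= tolerance for a, b in zip(pixel, run_colour)):
            run_length += 1
        else:
            runs.append((run_colour, run_length))  # current run ends here
            run_colour, run_length = pixel, 1
    runs.append((run_colour, run_length))  # the end of the row ends the last run
    return runs
```

For example, a row of three black pixels followed by two red pixels yields `[((0, 0, 0), 3), ((255, 0, 0), 2)]`, mirroring how a run is wholly contained on one tile row and ends at the row boundary.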
On a tile-by-tile basis, the Pixel Run Generator 502 generates pixel runs. Pixel run information is stored in a suitable data structure, which contains at least a colour value for the pixel run and a pixel run length counter for storing the length of the pixel run.

Referring to Fig. 7, pixels are read in the get-pixel step 701 one by one. A pixel-similarity decision step 702 performs a check to determine whether or not the current pixel colour value P(X, Y) is identical (or substantially identical) to the previous pixel colour value P(X-1, Y). If the current pixel colour value P(X, Y) is identical (or substantially identical) to the previous pixel colour value P(X-1, Y) (YES), processing continues at decision step 703. In the check-end-of-line decision step 703, a check is made to determine whether or not the tile scanline (tile row) has ended. If the tile scanline has not yet ended (NO), processing continues at step 704. In the increment-length step 704, the pixel run length counter is incremented and processing returns to the get-pixel step 701, from where the next pixel in the tile scanline is processed.

If it is determined in the pixel-similarity step 702 that the current pixel colour value P(X, Y) is not identical to the previous pixel colour value P(X-1, Y) (NO), or if it is determined in step 703 that the tile scanline has ended (YES), the pixel run is ended and processing continues at step 705. In the calculate-contrast step 705, the contrast of the pixel run with neighbouring pixels is calculated. After the contrast has been calculated, the process 700 sends the pixel run to the Pre-Classifier 507 via the send-pixel-run step 706. This means that the Pixel Run Generator 502 needs only to store one pixel run at any given time during processing of the tile.

The calculate-contrast step 705 calculates the contrast of a pixel run using the magnitude of the colour differences between the pixel run and its surrounding pixels. Fig.
8 shows an example of a group of pixels 800 including a pixel run 801 consisting of a single pixel X. Contrast can be calculated as the sum of the magnitude of differences between the pixel run 801 and all its neighbouring pixels 802 to 809 for each colour channel. Alternatively, contrast can be calculated using a subset of its neighbouring pixels. In the exemplary embodiment, the calculate-contrast step 705 calculates two contrast values: ContrastL and ContrastR. ContrastL is calculated as the sum of the magnitude of the colour difference for each colour channel between the pixel run 801 and pixel 809, the pixel immediately to its left. ContrastR is calculated as the sum of the magnitude of the colour difference for each colour channel between the pixel run 801 and pixel 805, the pixel immediately to its right.

Pre-Classifier

The process 900 to pre-classify a pixel run from the Pixel Run Generator 502 is explained with the aid of the pre-classification flow chart depicted in Fig. 9. The Pre-Classifier 507 receives a pixel run from the Pixel Run Generator 502 in the read-pixel-run step 901. The state value of the Classifier 504, stored as Classifier State 515, is checked in the check-state step 902 to determine if the Classifier State 515 is equal to "Full Overflow". The state values of the Classifier 504 are explained in more detail hereinafter. If the Classifier 504 does not have state value "Full Overflow" (NO), this is an indication that the Classifier 504 has not determined that the current tile is likely to contain substantially more lossy regions than lossless regions, so the process 900 continues from the check-state step 902 to the pass-pixel-run step 905. In step 905, the pixel run is passed to the Classifier 504, and in particular to the Edge Generator 505 and the Palette Generator 512, for classification.
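The ContrastL/ContrastR calculation of the calculate-contrast step 705 described above can be sketched as follows. This is a minimal illustration; the neighbour colour values are hypothetical, not taken from Fig. 8.

```python
def contrast(colour_a, colour_b):
    """Sum of the per-channel magnitudes of difference between two RGB colours."""
    return sum(abs(a - b) for a, b in zip(colour_a, colour_b))

# ContrastL compares the run's colour with the pixel immediately to its left;
# ContrastR compares it with the pixel immediately to its right.
run_colour = (200, 200, 200)   # colour of the pixel run
left_pixel = (10, 10, 10)      # dark neighbour: a sharp edge on the left
right_pixel = (198, 201, 200)  # similar neighbour: visually smooth on the right
contrast_l = contrast(run_colour, left_pixel)
contrast_r = contrast(run_colour, right_pixel)
```

With these hypothetical values, `contrast_l` is 570 (a strong left edge) while `contrast_r` is only 3, illustrating how a run can sit at the boundary between a sharp edge and a smooth region.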
On the other hand, if the Classifier 504 has state value "Full Overflow" (YES), it is an indication that the Classifier 504 has detected that the current tile is likely to have substantially more lossy regions than lossless regions, and processing continues at decision step 903.

In steps 903 and 904, the Pre-Classifier 507 performs checks to determine whether or not the pixel run is likely to belong to a lossy region by examining the pixel run's length and its contrast. The compare-length decision step 903 is executed after the check-state decision step 902. In the compare-length step 903, the length of the pixel run is compared with a predetermined threshold value PRLThreshold to determine whether the pixel run is likely to be designated as belonging to a lossy region of the tile ("Is pixel-run length < PRLThreshold?"). Typically, a pixel run belonging to a lossy region has a short length, so the PRLThreshold value should be small (around 3 to 4). If a pixel run's length is greater than or equal to PRLThreshold, a determination cannot be made with high certainty that the pixel run is likely to belong to a lossy region (NO). Hence, the process 900 proceeds from the compare-length step 903 to the pass-pixel-run step 905, where the pixel run is given to the Classifier 504, after which the pre-classification process 900 finishes for the pixel run. However, if the pixel run's length is less than PRLThreshold (YES) at the compare-length step 903, a further check is performed at the check-contrast step 904.

In decision step 904, a check is made to determine if the contrast of the pixel run is high. In the exemplary embodiment, a pixel run is considered to be high contrast (YES) if its ContrastL value, as calculated by the calculate-contrast step 705, is higher than a predetermined threshold value Contrast_Threshold. Otherwise, the contrast is considered to be low contrast (NO).
However, there is an exception when the following three conditions are satisfied:

1. the pixel run's ContrastL value is above Contrast_Threshold; and

2. its ContrastR value is equal to or below Contrast_Threshold; and

3. the ContrastL value of the pixel run immediately to the left of the current pixel run on the same scanline is higher than Contrast_Threshold.

When the above three conditions are satisfied, the pixel run is the beginning of a low contrast region following a high contrast region, and the pixel run is considered to be low contrast (NO) by the check-contrast step 904.

A pixel run that is considered to be high contrast (YES) by the check-contrast step 904 is passed to the Classifier 504 via the pass-pixel-run step 905, because a determination cannot be made with high certainty that the pixel run is likely to belong to a lossy region. However, if the check-contrast step 904 considers a pixel run to be low contrast (NO), the pixel run is both short and low contrast, so the pixel run is likely to belong to a lossy region. Such a pixel run is not passed to the Classifier 504, but is given to the Bit Mask Generator 508 in the update-mask step 906. The Bit Mask Generator 508 maintains a tile bitmask that records the pixel locations of lossy regions within a tile. The update-mask step 906 updates the tile bitmask to reflect the positions of the pixels belonging to the pixel run judged to be low contrast by the check-contrast step 904. After the tile's bitmask is updated in step 906, the pre-classification process 900 finishes.

Classifier States

The Classifier State 515, which is checked by the check-state step 902, maintains the state of the Classifier 504 as a tile is being processed by the Hybrid Compressor 500. Fig. 10 is a state transition diagram showing the various states of the Classifier State 515 and the conditions under which the Classifier State 515 changes.
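Drawing together the pre-classification checks of steps 903 and 904 from the Pre-Classifier section above, including the three-condition exception, the decision can be sketched as follows. The threshold values are illustrative assumptions; the text suggests a PRLThreshold of around 3 to 4 but gives no value for Contrast_Threshold.

```python
PRL_THRESHOLD = 4        # assumed; the text suggests around 3 to 4
CONTRAST_THRESHOLD = 96  # assumed illustrative value

def pre_classify(length, contrast_l, contrast_r, left_run_contrast_l):
    """Steps 903-904: return True if the run is judged likely lossy
    (short and low contrast) and should go to the Bit Mask Generator,
    or False if it must be passed to the Classifier.

    left_run_contrast_l is the ContrastL of the pixel run immediately
    to the left of this run on the same scanline."""
    if length >= PRL_THRESHOLD:       # step 903: not short enough to decide
        return False
    high_contrast = contrast_l > CONTRAST_THRESHOLD
    # Exception: the start of a low-contrast region that immediately
    # follows a high-contrast region is treated as low contrast.
    if (high_contrast
            and contrast_r <= CONTRAST_THRESHOLD
            and left_run_contrast_l > CONTRAST_THRESHOLD):
        high_contrast = False
    return not high_contrast          # step 904: low contrast -> likely lossy
```

A short, low-contrast run is marked lossy; a long run, or a short run that is genuinely high contrast on both sides, is handed to the Classifier instead.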
The Classifier 504 begins with state value "No Overflow" 1001 at the beginning of each tile. As pixel runs are passed to the Classifier 504 to be processed by the Palette Generator 512, the Colour Palette 513 is filled with unique colours encountered in the tile. The palette generation process is explained in more detail hereinafter. The number of unique colours that can be stored in the Colour Palette 513 is predetermined and is usually limited to a small number. If the Colour Palette 513 cannot store all unique colours in a tile, this is one of two indications that there are likely to be substantially more lossy regions than lossless regions within the tile. So the Palette Generator 512 updates the state of the Classifier 504 from the state value "No Overflow" 1001 to the state value "Palette Overflowed" 1002, or from the state value "Edge Overflowed" 1003 to "Full Overflowed" 1004, depending on the state of the Classifier 504 when the Colour Palette 513 becomes full. Once the Classifier 504 is in the "Full Overflowed" 1004 state for a tile, the Classifier 504 remains in that state until the next tile is processed by the Hybrid Compressor 500.

Pixel runs are also processed by the Edge Generator 505, which generates pairs of edges that bound a region of pixels formed by connected pixel runs of substantially identical colour. The process carried out by the Edge Generator 505 is described hereinafter. The visual significance of these edge pairs is assessed by the Edge Assessor 506. The process carried out by the Edge Assessor 506 is described hereinafter. When the Edge Assessor 506 detects that the number of visually insignificant edge pairs exceeds a predetermined threshold ("Insignificant edge count > IECThreshold"), the tile is likely to contain many insignificant edge pairs. This is the other one of the two indications that the tile contains substantially more lossy regions than lossless regions.
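The state transitions of Fig. 10 can be sketched as a small state machine. This is a sketch only: the state names follow the text, while the dictionary-based encoding and function names are assumptions.

```python
# The four Classifier states of Fig. 10.
NO_OVERFLOW = "No Overflow"                # 1001
PALETTE_OVERFLOWED = "Palette Overflowed"  # 1002
EDGE_OVERFLOWED = "Edge Overflowed"        # 1003
FULL_OVERFLOWED = "Full Overflowed"        # 1004

def on_palette_overflow(state):
    """Transition applied by the Palette Generator 512 when the Colour
    Palette 513 cannot store all unique colours in the tile."""
    return {NO_OVERFLOW: PALETTE_OVERFLOWED,
            EDGE_OVERFLOWED: FULL_OVERFLOWED}.get(state, state)

def on_edge_overflow(state):
    """Transition applied by the Edge Assessor 506 when the insignificant
    edge count exceeds IECThreshold."""
    return {NO_OVERFLOW: EDGE_OVERFLOWED,
            PALETTE_OVERFLOWED: FULL_OVERFLOWED}.get(state, state)
```

Note that the two overflow indications, in either order, drive the state to "Full Overflowed", and any further overflow events leave it there until the next tile resets the state to "No Overflow".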
Thus, the Edge Assessor 506 changes the Classifier State 515 from the state value "No Overflow" 1001 to the state value "Edge Overflowed" 1003, or from the state value "Palette Overflowed" 1002 to the state value "Full Overflowed" 1004 in Fig. 10, depending on the state of the Classifier 504. The Classifier State 515 can only be "Full Overflowed" 1004 when both indications "Palette Overflowed" and "Edge Overflowed" of a tile containing substantially more lossy regions than lossless regions are present.

Palette Generation

The Palette Generator 512 is responsible for maintaining the Colour Palette 513 and updating the Classifier State 515 (Fig. 5B). The Colour Palette 513 is an array of distinct colours that are used in the tile and is reset on a per-tile basis. There are a finite number of colours that can be added to the colour palette before the Colour Palette 513 becomes full. The maximum number of array elements N available in the Colour Palette 513 (the palette size) ideally should be small (around 4 to 16) to reduce the processing overhead that occurs in a large colour palette due to the many colour comparisons that must be carried out. The maximum number of array elements N is also preferably a power of 2 to get the most efficient usage of the colour indices that indicate the position of a colour in the Colour Palette 513. Referring to Fig. 11, the process 1100 executed on the Palette Generator 512 for each pixel run is discussed hereinafter. As each new pixel run is received from the Pre-classifier 507, the Palette Generator 512 checks the Classifier State 515 in the palette-check-state decision step 1101. In step 1101, a check is made to determine if the Classifier State is equal to "Palette Overflowed" or "Full Overflowed".
If the Classifier State 515 is "Palette Overflowed" or "Full Overflowed" (YES), the Colour Palette 513 already cannot store all unique colours in the current tile, so the process 1100 terminates immediately. However, if the Classifier State 515 in step 1101 is neither "Palette Overflowed" nor "Full Overflowed" (NO), the process 1100 proceeds to the check-colour decision step 1102. In decision step 1102, the Colour Palette 513 is checked for a duplicate colour ("Is colour in colour palette?"). If the colour already exists in the Colour Palette 513 (YES), the process 1100 terminates immediately. However, if the check-colour step 1102 determines that no duplicate colour exists (NO), processing continues at decision step 1103.

In the check-palette step 1103, the number of colours currently stored in the Colour Palette 513 is examined to determine if the Colour Palette 513 is full. If the Colour Palette 513 is currently not full (NO), processing continues at step 1104. In the add-colour step 1104, the new colour is added to the Colour Palette 513, after which the process 1100 terminates. If the Colour Palette 513 is full (YES) in the check-palette step 1103, the new colour cannot be added and the Colour Palette 513 cannot store all unique colours in the current tile. Accordingly, the process 1100 proceeds to the change-palette-state step 1105, where the Classifier State 515 is updated according to the state transitions illustrated in Fig. 10 and described herein. After the Classifier State 515 is updated in step 1105, the process 1100 terminates.

Edge Generation

Referring to Fig. 5B, an Edge Generator 505 receives pixel runs 503 to generate regions of connected pixel runs that have substantially identical colour values within a tile. Edges mark out the boundaries between these neighbouring regions. Each region requires two edges to fully describe its boundary.
An 'enabling' edge outlines the left hand side of a region, and a 'disabling' edge outlines the right hand side of a region. For the purpose of this description, the enabling and disabling edges are referred to as an 'edge pair'.

The edge generator process 1200, executed by the Edge Generator 505, creates edge pairs that link pixel runs of identical colour value to one another, forming flat regions as described above. New edge pairs, as the edge pairs are created, are considered active until the edge pairs are precluded from continuing. Edge pairs are extended when a pixel run on the current scanline overlaps an active edge pair and meets the criteria for joining. For a pixel run to join an active edge pair, the pixel run must overlap (using 8-way connectedness, for example) an area that is currently spanned by an active edge pair and have an identical colour value to that associated with the edge pair. As described hereinafter, it is convenient to consider active edge pairs on the previous scanline when attempting to determine whether or not a pixel run joins any existing active edge pairs. Edge pairs are not permitted to cross other edge pairs; the flat regions that are described by edge pairs within a tile are disjoint. Edge pairs can be precluded from continuing in one of two ways: a pixel run on the next tile scanline spans across an active edge pair in such a way that the active edge is precluded from continuing, or the last scanline in the tile is processed and the tile ends. In the event the edge is prevented from continuing, the edge is considered 'resolved' and flagged as inactive.

Fig. 13 shows examples 1301 to 1306 of when the joining criteria are met according to the 8-way connectedness test. In each example, the pixel run and Active Edge Pair shown have substantially identical colour values.
Example 1301 shows an arrangement where a pixel run 1311 is connected to an Active Edge Pair 1310 because the disabling edge of the Active Edge Pair 1310 is located at the same x-position as the starting position of pixel run 1311. Example 1302 shows another arrangement where a pixel run 1321 is connected to an Active Edge Pair 1320 because the disabling edge of the Active Edge Pair 1320 is located between the starting x-position and ending x-position of the pixel run 1321. Example 1303 shows another arrangement where a pixel run 1331 is connected to an Active Edge Pair 1330 because the ending x-position of the pixel run 1331 is located between the enabling edge and disabling edge of the Active Edge Pair 1330. Example 1304 shows yet another arrangement where a pixel run 1341 is connected to an Active Edge Pair 1340 because the enabling edge of the Active Edge Pair 1340 is located at the same x-position as the ending position of pixel run 1341. Example 1305 shows another arrangement where pixel run 1351 is connected to an Active Edge Pair 1350 because the pixel run 1351 is wholly overlapped by the Active Edge Pair 1350. Example 1306 shows yet another arrangement where a pixel run 1361 is connected to an Active Edge Pair 1360 because the Active Edge Pair 1360 is wholly overlapped by pixel run 1361.

Referring to Fig. 12, the Edge Generator 505 executes the process 1200 until the entire tile has been processed. The process 1200 starts in read-pix-run step 1201, where the Pixel Run Generator 502 supplies the next pixel run. In check-first-line decision step 1202, a check is made to determine if the pixel run occurs on the first scanline of a tile. If decision step 1202 returns true (YES), the process 1200 proceeds to begin-edge step 1203, where a new edge pair is created and this edge pair is set to active. Following begin-edge step 1203, the process 1200 ends at an exit-generate step 1212.
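The joining criteria illustrated by the examples of Fig. 13 amount to an overlap-plus-colour test. The following Python sketch is illustrative only; it assumes that a pixel run and the span of an active edge pair are both represented as inclusive [start, end] pixel ranges, which may differ from the boundary convention used by the enabling and disabling edge positions in the description.

```python
def can_join(run_start, run_end, run_colour,
             pair_start, pair_end, pair_colour):
    """Sketch of the joining test: a pixel run on the current scanline may
    extend an active edge pair from the previous scanline if the two spans
    are 8-way connected and their colour values match.

    Spans are inclusive pixel ranges [start, end]; this representation is
    an assumption made for illustration.
    """
    if run_colour != pair_colour:
        # Colours must be substantially identical to join.
        return False
    # 8-way connectedness: the spans overlap or touch diagonally, i.e.
    # neither span starts more than one pixel past the other's end.
    return run_start <= pair_end + 1 and pair_start <= run_end + 1
```

Diagonal adjacency counts under 8-way connectedness, so a run starting one pixel past the edge pair's span still joins; a gap of two or more pixels, or a colour mismatch, does not.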
Alternatively, if it is determined in check-first-line step 1202 that the pixel run occurs on a subsequent tile row (NO), processing continues at decision step 1204. The pixel run is examined in check-connectedness decision step 1204 to determine whether or not the pixel run can join any existing active edge pairs. If step 1204 determines that the pixel run cannot join any of the existing overlapping active edge pairs (NO), the Edge Generator 505 proceeds to create-edge step 1205. In step 1205, a new edge pair is created and is set to active, after which the process 1200 proceeds to check-resolve decision step 1210. Alternatively, if the check-connectedness step 1204 determines that the pixel run can join an overlapping active edge pair (YES), the Edge Generator 505 proceeds to extend-edge step 1206, where the active edge pair is extended by that pixel run. Following the extend-edge step 1206, the Edge Generator 505 proceeds to a check-resolve step 1210.

In check-resolve step 1210, a check is made to determine whether the pixel run resolves any Active Edge Pairs. This involves checking if the pixel run extends past any of the remaining Active Edge Pairs on the previous scanline. If this is the case (YES), the Active Edge Pairs so affected are 'resolved', i.e. set as inactive and will not continue, in resolve-edge step 1211 before the process 1200 ends at the step 1212. Alternatively, if the pixel run does not resolve any active edge pairs (NO), the process 1200 ends at exit-generate step 1212.

For the purposes of this description, edge pairs generated by the Edge Generator 505 are ordered in increasing y, then increasing x direction, by the starting position of the enabling edge.

Edge Assessment

The Edge Assessor 506 assesses resolved edge pairs generated by the Edge Generator 505 for visual significance and determines whether an edge pair describes an image region or a flat region.
An edge pair that is deemed to be visually significant is considered to be valid, and a region that is described by a valid edge pair is considered a 'flat' region. Only edge pairs describing a flat region are losslessly compressed by the Edge Compressor 510, with the resulting compressed data bitstream passed to the Data Manager 519 for storage.

The Edge Assessor 506 uses a combination of the maximum width of the flat region defined by the edge pair and the edge pair length, as well as the contrast significance of the edge pair, to assess the visual significance of an edge pair. Fig. 14 shows an edge assessment process 1400 performed during the pixel-run classification step 605 of Fig. 6 by the Edge Assessor 506. Process 1400 considers the length of the edge pair in the y direction, stored in the edge pair data structure as edgelength, along with the maximum width of the flat region defined by the edge pair, stored in the edge pair data structure as maxwidth. Processing commences in calculate-significance step 1401, where these two values are added together to form a size significance metric M for edge pair assessment. Typically, for an edge pair to be accepted, the value of the size significance metric M should be greater than approximately 8-10% of the perimeter of the tile. For example, in a tile of dimensions 64x64 pixels, an acceptable threshold value of the metric M would be 20. Accordingly, check-significance step 1402 determines whether the size significance metric M for the edge pair is greater than the edge significance threshold ESThreshold. If the size significance metric M for the edge pair is determined to be greater than ESThreshold (YES), the 'valid' edge pair data is retained in retain-edge step 1404 before the process 1400 ends.
Otherwise, if the size significance metric M for the edge pair is less than or equal to the ESThreshold (NO) in step 1402, the n^-^~ A nl /^ A A -I f - - -32 edge pair is deemed insignificant by size, and the process 1400 proceeds to check edge-contrast step 1403. In decision step 1403, a check is made to determine whether the contrast of the edge pair is significant. In the exemplary embodiment, an edge pair is considered 5 contrast significant if more than half of the pixel runs making up the region bounded by the edge pair are judged to be high contrast by the check-contrast step 904 executed by the Pre-classifier 507. If step 1403 returns true (YES), processing continues at step 1404. Edge pairs judged to be significant by contrast are retained in the Edge Store 514 by the retain-edge step 1404 before the process 1400 ends. 10 Otherwise, if decision step 1403 returns false (NO), the edge pairs are insignificant by contrast (NO). The edge pairs are considered to be visually insignificant and are given to the edge-update-mask step 1405. The Bit Mask Generator 508 updates the tile bitmask in the edge-update-mask step 1405 to reflect the pixel positions of the region enclosed by the visually insignificant edge pair. The edge pair data is then 15 discarded in discard-edge step 1406, where the slot occupied by the edge pair in the edge pair data structure is freed and Edge Assessor 506 increments a count of the number of discarded edge pairs in the tile. In the check-edges decision step 1407, the number of discarded edge pairs is compared with a predetermined discarded edge count threshold DEC_ Threshold. The 20 DECThreshold determines the number of visually insignificant edge pairs that must be encountered in a tile before the tile is considered likely to contain substantially more lossy regions than lossless regions. Typically, the DEC_ Threshold is set to be half the value of tile width or height. 
For example, in a tile of dimensions 64x64 pixels, an acceptable DECThreshold would be 32. When the number of discarded edge pairs is less than DECThreshold (NO) in step 1407, the edge assessment process 1400 can terminate. However, when there are equal or more discarded edge pairs than DECThreshold (YES) in decision step 1407, this is an indication that the current tile contains a large number of flat regions that are visually insignificant. Therefore, processing continues at step 1408. The change-edge-state step 1408 updates the Classifier State 515 according to the state transitions described in Fig. 10, so that the Pre-Classifier 507 can take this indication into consideration when pre-classifying pixel runs. Following the change-edge-state step 1408, the process 1400 ends.

Edge Compression

After all pixel runs have been processed by the Classifier 504, the Edge Compressor 510 compresses the visually significant edge data stored in the Edge Store 514 using a lossless compression method, creating a lossless data bitstream that is passed to the Compressed Data Memory Manager 519 in Fig. 5B.

If the Colour Palette buffer 513 has not overflowed, depending on which method offers the best compression, there is an option to encode the colour data associated with each edge pair as a colour palette index or as a raw colour value. This decision is dependent on the bit depth of the colour, the maximum number of colours able to be stored in the Colour Palette buffer 513, and the number of edge pairs (or, in the case of no image pixels, enabling edges) contained within the tile. The decision is made based on the following inequality:

EN < (N x B) / (B - log2 N)    (1)

where EN is the number of visually significant edge pairs/enabling edges, B is the bit depth of the colour value of the flat pixels and N is the maximum number of colour values able to be stored in the Colour Palette 513.
The right side of inequality (1) is a constant, so the decision reduces to a simple comparison of the number of visually significant flat regions with a threshold. If inequality (1) is satisfied, encoding the colour value for an edge pair as a colour palette index is less efficient than encoding it as a raw colour value. Otherwise, the colour palette is written to the Compressed Data Memory Manager 519, followed by the colour palette index corresponding to each edge pair. The colour palette indices are written to the Compressed Data Memory Manager 519 in the same order as the edge pairs were generated in the tile. There is a one-to-one correspondence between the number of colour palette indices and the number of visually significant edge pairs.

Alternatively, if encoding the colour data as raw values is more efficient, the raw colour values corresponding to each edge pair are written to the Compressed Data Memory Manager 519. The raw colour values are written in the same order as the edge pairs were generated in the tile. There is again a one-to-one correspondence between the number of raw colour values and the number of visually significant edge pairs.

Since the edge pairs are tile based, a fixed number of bits can be used to code the (X, Y) start positions and the number of x-offsets. For example, in a tile size of 64x64 pixels, 6 bits are required to code the start positions and the number of x-offsets, i.e. edgelength. The offset data can be coded using an entropy-based encoding method similar to that used by JPEG for encoding DC coefficients. In this method, the edge offsets are represented by symbol pairs: symbol-1 and symbol-2. Symbol-1 represents the size information, and symbol-2 represents the amplitude information. Symbol-1 is encoded with a variable length code, generated from a suitable Huffman table. Symbol-2 is encoded as a 'variable-length integer' whose length in bits is stored in the preceding variable length code.
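Returning to the encoding decision, inequality (1) can be sketched as a single comparison. The closed form of the threshold used below — N x B bits to write the palette itself plus log2 N bits per palette index, weighed against B bits per raw colour value — is reconstructed from the surrounding description and should be treated as an assumption rather than the exact form used.

```python
import math

def use_raw_colours(num_edge_pairs, bit_depth, palette_size):
    """Sketch of the palette-index versus raw-colour decision.

    Writing palette indices costs palette_size * bit_depth bits for the
    palette plus log2(palette_size) bits per edge pair; writing raw
    colours costs bit_depth bits per edge pair. With few enough edge
    pairs (EN below the constant right-hand side of inequality (1)),
    the palette overhead dominates and raw colours are cheaper.
    """
    threshold = (palette_size * bit_depth) / (bit_depth - math.log2(palette_size))
    return num_edge_pairs < threshold
```

For example, with a 24-bit colour depth and a 4-entry palette the threshold is 96/22, roughly 4.4, so under these assumptions a tile with four or fewer visually significant edge pairs would use raw colour values.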
The Huffman codes and code lengths are specified externally and are known to both the compressor and the decompressor.

If a tile contains only visually significant edge pairs, the enabling edge of one flat region serves as the disabling edge of a neighbouring flat region. Therefore, in that case only the enabling edge in each edge pair needs to be compressed by the Edge Compressor 510.

Image Compression

The Image Processor 516 of Fig. 5B comprises a Bitmask Processor 517 and an Image Compressor 518.

If the bitmask contains all "true" values, the entire tile has been classified as image pixels by the Classifier 504 and is preserved in as high a quality as the image compression algorithm allows. The bitmask only contains all "true" values if all edge pairs in the tile have been assessed as visually insignificant by the Classifier 504. If the bitmask contains some "false" values, there are some non-image (flat) pixels that do not need to be compressed using the Image Processor 516. The non-image pixels are overwritten, or backfilled as described hereinafter, to make the task of the Image Processor 516 easier.

Image pixels passed to the Image Processor 516 may contain sharp edges where flat pixels are adjacent to image pixels. In an image compression algorithm such as JPEG, these sharp edges usually result in larger file sizes due to the increased magnitude of high frequency data. Sharp edges also cause visible artefacts known as "ringing". A method known as 'backfilling' is preferably utilised to minimise both file size and visible artefacts within the tile. For example, only the image pixels as indicated by the bitmask need to be preserved, while all other pixels within the tile can be modified, since their values are losslessly encoded by the Edge Compressor 510.
Image pixel values within a tile can be extrapolated, based on the colour values of neighbouring pixels, as needed, to the edges of the tile, overwriting any flat pixels. The preferred method of backfilling takes into account whether the non-image pixels are bounded on two sides, either horizontally or vertically, by image pixels. If so, the non-image pixels are one-dimensionally linearly interpolated from the bounding image pixels. Otherwise, the non-image pixels are extrapolated by replicating such pixels with their nearest neighbouring image pixel. This backfilling reduces the artefacts of the sharp edges that usually occur between blocks of pixels. Other methods of backfilling may also be utilised.

The Image Compressor 518 preferably utilises the JPEG scheme. JPEG utilises a discrete cosine transform (DCT) to transform integer pixel values to integer transform coefficients. The DCT typically operates on 8x8 pixel blocks on a colour channel-by-colour channel basis. The resulting transform coefficients are quantized and arranged in JPEG zig-zag order. The re-ordered coefficients are encoded losslessly using a suitable Huffman entropy encoder.

Example

Fig. 15 shows an example of a tile that is compressed in accordance with an embodiment of the present invention, and the related advantage is described hereinafter. An 8x8 pixel tile 1500 is shown. For the purposes of this example, the Hybrid Compressor 500 utilises a PRLThreshold value of 2, an ESThreshold value of 3, and a DECThreshold value of 4, and the number of unique colours that can be stored in the Colour Palette 513 is 4. The Classifier State 515 is set to "No Overflow" before processing of the tile 1500 begins. The Pixel Run Generator 502 generates six pixel runs 1501 to 1506 for the first scanline. Each of pixel runs 1501 to 1505 comprises one pixel, while pixel run 1506 comprises three pixels.
The Pre-Classifier process 900 receives each pixel run and passes the pixel runs to the edge generation process 1200 and the palette generation process 1100. The Colour Palette 513 can no longer store all unique colours for the tile 1500 after the Palette Generator 512 processes pixel run 1506, as there are five colours in the first scanline, so the Classifier State 515 becomes "Palette Overflowed". Each pixel run also generates an active edge pair by the Edge Generator 505. No edge pairs are assessed during processing of the first scanline since all edges are still active.

At the second scanline, pixel run 1507 extends the active edge pair 1515 started by pixel run 1503. The pixel runs 1518, 1519, 1520, 1521, and 1508 (each one pixel) on the second scanline begin new active edges because these pixel runs cannot extend edge pairs from the first scanline. After the pixel run 1508 is processed by the edge generation process 1200, the edge pairs formed by pixel runs 1501, 1502, 1504 and 1505 are all resolved by the edge generation process 1200, since the edge pairs can no longer be continued. Each of these resolved edge pairs has low contrast and an edge assessment size metric M of 2. So in the edge assessment process 1400, the four edge pairs formed by pixel runs 1501, 1502, 1504 and 1505 are discarded. Since the number of edges discarded is equal to DECThreshold, the Classifier State 515 is updated from "Palette Overflowed" to "Full Overflowed" by the edge assessment process 1400. Due to the change in Classifier State 515, the Pre-Classifier 507 begins to assess pixel-run length and contrast for all pixel runs in tile 1500 after pixel run 1508. Pixel run 1509 in the second scanline has a length of 2, so the Pre-classifier 507 judges the pixel run 1509 to need classification by the Classifier 504, where the Edge Generator 505 extends the edge 1516 started by pixel run 1506 to include the pixel run 1509.
In the third scanline, the pixel run 1510 has a length of 1, which is less than the PRLThreshold of 2. The pixel run 1510 also has low contrast against its neighbouring pixels; therefore the pixel run 1510 is not passed to the Classifier 504 by the Pre-Classifier 507 in the pre-classification process 900. Pixel run 1511 also has a length less than the PRLThreshold, but the pixel run 1511 is judged to be high contrast against its neighbouring pixels. Therefore, the pixel run 1511 is given to the Classifier 504, where the Edge Generator 505 extends edge 1515 to include pixel run 1511. The tile compression process 600 continues for the rest of the tile, where only pixel runs 1512 to 1514 are passed to the Classifier 504 by the Pre-classifier 507, because pixel runs 1512 and 1513 are high contrast and pixel run 1514 is not short.

The remaining pixel runs, depicted by 1517, are judged by the Pre-classifier 507 to be likely to belong to a lossy region and are therefore not classified by the Classifier 504. Not classifying the pixel runs in region 1517 significantly improves processing speed without affecting the compression quality of tile 1500, because all pixel runs in region 1517 would have been classified as belonging to visually insignificant edge pairs by the Edge Assessor 506 in the Classifier 504 and thus discarded as edges and designated as lossy regions.

After all pixel runs have been either pre-classified or classified, the tile 1500 is compressed in the tile compression step 606 using the Edge Compressor 510 to compress edge pairs 1515 and 1516 and using the Image Compressor 518 to compress the remainder of the tile.

Industrial Applicability

The arrangements described are applicable to the computer and data processing industries and particularly for the rendering of an image.

Methods, apparatuses, and computer readable storage mediums for classifying pixels of an image for compression have been described.
Further, methods, apparatuses, and computer readable storage mediums for compressing the image dependent upon the classification of the pixels have been described.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (20)

1. A computer-implemented method of classifying pixels of an image for compression, said method comprising the steps of:
analysing at least one run of pixels of said image using a first classifying method to determine if the pixels are to be compressed using either a first compression method or a second compression method;
checking if a predetermined criterion associated with said first classifying method, which determines if the run of pixels is to be compressed by said first compression method, is satisfied; and
enabling a second classifying method for analysing at least one further run of pixels in response to said predetermined criterion being satisfied, said first classifying method being more computationally intensive than said second classifying method.
2. The method as claimed in claim 1, comprising the step of analysing said at least one further run of pixels using said second classifying method.
3. The method as claimed in claim 1, comprising the step of generating from said image a plurality of pixel runs each comprising at least one pixel.
4. The method as claimed in claim 1, further comprising the step of compressing, using said first compression method, each pixel run determined by either said first or second classifying method to be compressed using said first compression method.
5. The method as claimed in claim 1, further comprising the step of compressing, using said second compression method, each pixel run determined by said first classifying method to be compressed using said second compression method.
6. The method as claimed in claim 4, further comprising the step of outputting to memory each compressed pixel run to provide a compressed image.
7. The method as claimed in claim 1, further comprising the step of processing said image on a tile-by-tile basis, each tile comprising a portion of said pixels of said image, said pixel runs being generated from each tile.
8. The method as claimed in claim 1, wherein said first compression method is a lossy compression method and said second compression method is a lossless compression method.
9. The method as claimed in claim 8, wherein said first classifying method generates a colour palette and one or more edges from the pixel runs classified by said first classifying method for the lossless compression method.
10. The method as claimed in claim 1, wherein said first classifying method forms a region of at least one run of pixels with a substantially similar colour and determines if the region is to be compressed using the first compression method or the second compression method.
11. The method as claimed in claim 8, wherein said first classifying method determines the run of pixels is to be compressed using said lossy compression method if an edge in the run of pixels does not have high contrast.
12. The method as claimed in claim 8, wherein said second classifying method determines the further run of pixels is to be compressed using said lossy compression method if said further run of pixels does not have high contrast.
13. The method as claimed in claim 8, wherein said second classifying method determines a run of pixels is to be compressed using said lossy compression method if said further run of pixels is less than a predetermined length.
14. An apparatus for classifying pixels of an image for compression, comprising:
a memory for storing data and a computer program; and
a processor unit coupled to the memory for executing a computer program, said memory and said processor configured to classify said pixels of said image for compression, the computer program comprising:
computer program code means for analysing at least one run of pixels of said image using a first classifying method to determine if the pixels are to be compressed using either a first compression method or a second compression method;
computer program code means for checking if a predetermined criterion associated with said first classifying method, which determines if the run of pixels is to be compressed by said first compression method, is satisfied; and
computer program code means for enabling a second classifying method for analysing at least one further run of pixels in response to said predetermined criterion being satisfied, said first classifying method being more computationally intensive than said second classifying method.
15. The apparatus as claimed in claim 14, comprising computer program code means for analysing said at least one further run of pixels using said second classifying method.
16. The apparatus as claimed in claim 14, further comprising computer program code means for compressing, using said first compression method, each pixel run determined by either said first or second classifying method to be compressed using said first compression method.
17. The apparatus as claimed in claim 14, further comprising computer program code means for compressing, using said second compression method, each pixel run determined by said first classifying method to be compressed using said second compression method.
18. A computer readable storage medium having recorded therein a computer program for classifying pixels of an image for compression for execution by a processing unit, the computer program comprising:
computer program code means for analysing at least one run of pixels of said image using a first classifying method to determine if the pixels are to be compressed using either a first compression method or a second compression method;
computer program code means for checking if a predetermined criterion associated with said first classifying method, which determines if the run of pixels is to be compressed by said first compression method, is satisfied; and
computer program code means for enabling a second classifying method for analysing at least one further run of pixels in response to said predetermined criterion being satisfied, said first classifying method being more computationally intensive than said second classifying method.
19. The computer readable storage medium as claimed in claim 18, further comprising computer program code means for compressing, using said first compression method, each pixel run determined by either said first or second classifying method to be compressed using said first compression method.
20. The computer readable storage medium as claimed in claim 18, further comprising computer program code means for compressing, using said second compression method, each pixel run determined by said first classifying method to be compressed using said second compression method. DATED this 17th Day of December 2009 CANON KABUSHIKI KAISHA Patent Attorneys for the Applicant SPRUSON & FERGUSON
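The claimed scheme can be illustrated with a minimal sketch: an expensive classifier decides per run of pixels between two compression methods, and once a predetermined criterion tied to the first method is satisfied, a cheaper classifier is enabled for subsequent runs. All names, the variance- and range-based tests, and the "N consecutive first-method runs" criterion are illustrative assumptions for this sketch; the patent's specification does not bind them to these particular choices.

```python
def expensive_classify(run):
    # Assumed stand-in for the first (computationally intensive)
    # classifying method: pick the "first" compression method when
    # the run's pixel values vary little, else the "second".
    mean = sum(run) / len(run)
    variance = sum((p - mean) ** 2 for p in run) / len(run)
    return "first" if variance < 10.0 else "second"


def cheap_classify(run):
    # Assumed stand-in for the second (cheaper) classifying method:
    # only compare the run's extreme pixel values.
    return "first" if max(run) - min(run) < 8 else "second"


def classify_runs(runs, switch_threshold=3):
    """Classify each pixel run as "first" or "second" compression.

    Starts with the expensive classifier; once it has chosen the
    first compression method for `switch_threshold` consecutive runs
    (the assumed predetermined criterion), the cheaper classifier is
    enabled for all further runs.
    """
    decisions = []
    consecutive_first = 0
    use_cheap = False
    for run in runs:
        if use_cheap:
            method = cheap_classify(run)
        else:
            method = expensive_classify(run)
            if method == "first":
                consecutive_first += 1
                if consecutive_first >= switch_threshold:
                    use_cheap = True
            else:
                consecutive_first = 0
        decisions.append(method)
    return decisions
```

For example, four flat runs followed by a high-contrast run trip the criterion after the third run, so the fourth and fifth runs are classified by the cheap method only. The design point is that the costly analysis pays for itself only while the outcome is still in doubt; once it keeps reaching the same answer, a cruder test suffices.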
AU2009251019A 2009-12-17 2009-12-17 Classifying pixels of image for compression Abandoned AU2009251019A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2009251019A AU2009251019A1 (en) 2009-12-17 2009-12-17 Classifying pixels of image for compression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2009251019A AU2009251019A1 (en) 2009-12-17 2009-12-17 Classifying pixels of image for compression

Publications (1)

Publication Number Publication Date
AU2009251019A1 true AU2009251019A1 (en) 2011-07-07

Family

ID=45465212

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2009251019A Abandoned AU2009251019A1 (en) 2009-12-17 2009-12-17 Classifying pixels of image for compression

Country Status (1)

Country Link
AU (1) AU2009251019A1 (en)

Similar Documents

Publication Publication Date Title
US8983185B2 (en) Image compression
US8411942B2 (en) Method and apparatus for hybrid image compression
US8086044B2 (en) Block-based iterative multi-pass data filling technique for compound document compression
JP4773678B2 (en) Document system
JP5132517B2 (en) Image processing apparatus and image processing method
US7343037B1 (en) Dynamic, locally-adaptive, lossless palettization of color and grayscale images
US7912300B2 (en) Image processing apparatus and control method therefor
US20030031371A1 (en) Image encoding apparatus and image decoding apparatus
US8503036B2 (en) System and method of improving image quality in digital image scanning and printing by reducing noise in output image data
JP2000175052A (en) Processing method and system for pixel map expression
JP2002027256A (en) Method for compressing digital document by controlling picture quality undergoing various compression rate constraint
JP2002218254A (en) Image compression method and image compression system
US10834414B2 (en) Transcode PCL delta-row compressed image to edges
US8600180B2 (en) Compression method selection for digital images
JP2000295113A (en) Huffman coded data compressor
US20100329551A1 (en) Image processing apparatus and image processing method
JP2008042685A (en) Image processor and processing method, computer program and computer readable storage medium
US8417041B2 (en) Resolution independent image degradation
Said Compression of compound images and video for enabling rich media in embedded systems
AU2009251019A1 (en) Classifying pixels of image for compression
JP4719924B2 (en) Image processing apparatus and image processing method
JP4047207B2 (en) Image coding apparatus, image coding method, and program
AU2014250724A1 (en) Image compression with adaptive subsampling
JP3867886B2 (en) Image encoding method, image encoding device, image decoding method, and image decoding device
JP4058370B2 (en) Image processing method, image processing apparatus, computer program, and computer-readable storage medium

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application