AU2005203541A1 - Multiple image consolidation in a printing system - Google Patents
Multiple image consolidation in a printing system
- Publication number: AU2005203541A1
- Authority
- AU
- Australia
- Prior art keywords
- bounding box
- group
- region
- regions
- edge
- Prior art date
- Legal status: Abandoned
Landscapes
- Record Information Processing For Printing (AREA)
Description
S&F Ref: 711885
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT Name and Address of Applicant: Actual Inventor(s): Address for Service: Invention Title: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan Edward James Iskenderian Vincent Groarke Timothy Jan Schmidt Spruson Ferguson St Martins Tower Level 31 Market Street Sydney NSW 2000 (CCN 3710000177) Multiple image consolidation in a printing system The following statement is a full description of this invention, including the best method of performing it known to me/us:- 5845c -1- MULTIPLE IMAGE CONSOLIDATION IN A PRINTING SYSTEM Field of the Invention The present invention relates generally to computer-based printer systems and, in particular, to inexpensive printer systems for high-speed printing.
Background A computer application typically provides a page to a device for printing and/or display in the form of a description of the page, with the description provided to device driver software of the device in a page description language (PDL), such as Adobe® PostScript® or Hewlett-Packard® PCL. The PDL provides descriptions of objects to be rendered onto the page, as opposed to a raster image of the page to be printed.
Equivalently, a set of descriptions of graphic objects may be provided in function calls to a graphics interface, such as the Graphical Device Interface (GDI) in the Microsoft WindowsTM operating system, or X11 in the UnixTM operating system. The page is typically rendered for printing and/or display by an object-based graphics system, also known as a Raster Image Processor (RIP).
A typical printer system comprises a host computer, such as a personal computer, connected to a printer by some interface. Example interfaces include a parallel port, Universal Serial Bus (USB), Ethernet or FirewireTM. In a typical office environment the host computer to printer connection may be over a 10/100BaseT Ethernet network that is shared with other users and equipment. In such cases the bandwidth of the network is not exclusively available for host computer to printer data transfer. For this reason it is desirable that the amount of data that is sent from the host computer to the printer, and any data and/or status information sent in the opposite direction, be kept to a minimum.
The actual time spent transmitting the description of the page from the host computer to the printer impacts on the overall printing time from a user's perspective. The choice of a particular PDL is therefore a crucial factor in minimising the time taken to transfer the page description from the host computer to the printer.
In a PDL-based printer the PDL file that describes the page is delivered over the interface from the host computer. Such a PDL-based printer system requires that the printer itself implement PDL interpretation in the course of generating the pixels for printing. PDL interpretation is a task that requires considerable software and/or hardware resources to perform in a reasonable time.
The advantage of such a system including a PDL-based printer is that the amount of data, that is the description in the PDL, which needs to be transferred over the interface is typically small compared to the corresponding pixel data subsequently generated within the printer. This is especially true as the resolution of the printed page increases. In addition, the overall print time of the system, defined roughly as the time from when the user commands the printing of the page to its final arrival out of the printer, is not particularly sensitive to reductions in interface bandwidth. This is because the majority of the overall printing time is consumed by the interpretation of the description in the PDL and the subsequent generation of pixels within the printer, as opposed to the transfer of the description from the host computer to the printer.
In a PDL-based system, a bitmap fill associated with an object has an associated affine transformation which describes the relationship between the source bitmap pixels and how the pixels are to appear on the printed page. This transform may represent a shift, scale, skew, or a rotation operation, or a combination of these operations. If this transformation is represented as its matrix inverse, which describes the printed page to source bitmap transformation, for each pixel in the printed page where the bitmap fill is active a renderer can determine the source pixel(s) which contribute to its value.
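The inverse-transform lookup described above can be sketched as follows. This is an illustrative sketch, not code from the specification; the function names and the 2×3 matrix representation of an affine transform are assumptions.

```python
# Sketch: mapping a printed-page pixel back to source-bitmap coordinates
# using the inverse of the fill's affine transform.

def invert_affine(m):
    """Invert a 2x3 affine transform ((a, b, tx), (c, d, ty))."""
    (a, b, tx), (c, d, ty) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("transform is not invertible")
    ia, ib = d / det, -b / det
    ic, id_ = -c / det, a / det
    # The translation must be inverted through the new linear part.
    itx = -(ia * tx + ib * ty)
    ity = -(ic * tx + id_ * ty)
    return ((ia, ib, itx), (ic, id_, ity))

def page_to_source(inv, x, y):
    """Apply the inverse transform to a page pixel (x, y)."""
    (a, b, tx), (c, d, ty) = inv
    return (a * x + b * y + tx, c * x + d * y + ty)

# A fill scaled 2x and shifted by (10, 20): page pixel (14, 26)
# originates from source pixel (2, 3).
forward = ((2, 0, 10), (0, 2, 20))
inv = invert_affine(forward)
print(page_to_source(inv, 14, 26))  # -> (2.0, 3.0)
```

For each pixel on the page where the fill is active, the renderer applies this inverse once to find the contributing source pixel(s).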
However, when a bitmap fill and its associated transform are inserted into the page description, no effort is typically made to remove pixels from the source bitmap which do not contribute to the printed output. An example of when pixels in the source bitmap will not contribute to the printed page is when an object with a bitmap fill is partially obscured by an opaque object with higher priority. As these unused pixels are not removed from the page description these pixels may contribute significantly, and yet redundantly, to the size of the display list to be transmitted to the printer.
In contrast to the system including a PDL-based printer, a system using a host-based printer system architecture divides the processing load between the host computer and the printer in a different manner. Host-based printer systems require the host computer, typically a personal computer, to fully generate pixel data at the resolution of the page to be printed. This pixel data is then compressed in a lossless or lossy fashion on the host computer and is delivered to the printer across the interface. Sometimes halftoning is also performed on this pixel data on the host computer to reduce the size of the pixel data. This approach is also known as the bitmap approach.
A significant advantage of this bitmap approach is that the printer need not be capable of PDL interpretation. By removing the task of PDL interpretation from the printer to the host computer, the complexity of the printer's role is greatly reduced when compared to that of the PDL-based printer. Since complexity usually translates into cost, the printer for the host-based system can generally be made more cheaply than one that needs to perform PDL interpretation for an equivalent printing speed and page quality.
The disadvantage of such host-based printing systems is the amount of data that needs to be delivered from the host computer to the printer across the interface. An A4 page at 600dpi resolution may require over one hundred megabytes of pixel data to be transferred across this interface when uncompressed. Compressing the pixel data alleviates the problem to some extent, particularly when the pixel data has already been halftoned. However, the data transfer still typically requires many megabytes of compressed pixel data. Apart from the time and memory resource required to process this data in the printer, considerable time is also consumed in simply waiting for these compressed pixels to make their way across the interface from the host computer to the printer. Consequently the host-based printer system is particularly sensitive to increases in page resolution and reduced interface data bandwidth.
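The "over one hundred megabytes" figure can be checked by simple arithmetic, assuming 24-bit (3 bytes per pixel) RGB data, which the text does not specify:

```python
# Checking the uncompressed-size claim for an A4 page at 600 dpi,
# assuming 24-bit RGB pixel data (3 bytes per pixel).
width_in, height_in = 8.27, 11.69   # A4 dimensions in inches
dpi = 600
bytes_per_pixel = 3

pixels = round(width_in * dpi) * round(height_in * dpi)
megabytes = pixels * bytes_per_pixel / 1e6
print(f"{pixels} pixels, {megabytes:.0f} MB uncompressed")
```

This gives roughly 35 million pixels and just over 100 MB, consistent with the claim; at 1200 dpi the figure quadruples, which is why the host-based approach is so sensitive to page resolution.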
A need therefore exists for a representation of a page to be rendered on a printer system that removes the requirement of the printer system to be capable of PDL interpretation without the representation consisting of a large amount of data. In particular, the representation should minimise the amount of redundant bitmap fill data contained within the page representation.
Summary It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
According to an aspect of the present disclosure, there is provided a method of adding a region to one of a plurality of groups of further regions, each region comprising one or more pixels, said method comprising the steps of: computing a bounding box for said region; computing a bounding box for each group of further regions; selecting the group from said plurality of groups of further regions whose bounding box and the bounding box for said region would meet a selection criterion if said region were to be added to said group; and adding said region to the selected group if the addition of said region to said selected group satisfies an adding criterion.
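The aspect above leaves the selection and adding criteria open. A minimal sketch, assuming the selected group is the one whose bounding box grows least when the region is added, and an adding criterion that caps the merged box's area (both criteria are assumptions for illustration):

```python
# Sketch of adding a region to one of a plurality of groups, under
# assumed criteria: select the group with minimal merged-box area;
# add only if the merged box stays under a size cap.

def union_box(a, b):
    """Union of two bounding boxes given as (x0, y0, x1, y1)."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def add_region(groups, region_box, max_area=10000):
    """groups: non-empty list of dicts {'box': (x0,y0,x1,y1), 'regions': [...]}."""
    # Selection criterion: smallest bounding box if the region were added.
    best = min(groups, key=lambda g: area(union_box(g["box"], region_box)))
    merged = union_box(best["box"], region_box)
    if area(merged) <= max_area:      # adding criterion
        best["box"] = merged
        best["regions"].append(region_box)
        return True
    return False                      # rejected; caller may start a new group

groups = [{"box": (0, 0, 10, 10), "regions": []},
          {"box": (100, 100, 110, 110), "regions": []}]
add_region(groups, (12, 0, 20, 10))
print(groups[0]["box"])  # -> (0, 0, 20, 10)
```

The nearby group absorbs the region because merging with the distant group would inflate that group's bounding box far more.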
According to another aspect of the present disclosure, there is provided a method of re-grouping a set of grouped regions, each region comprising one or more pixels, each group of regions in said set of grouped regions having a bounding box wholly enclosing all the regions in said group of regions, said method comprising the steps of: for each group of regions, removing any region from said group of regions whose bounding box does not overlap the bounding box of at least one other region in said group of regions; and for each removed region: selecting a group from said grouped regions whose bounding box and the bounding box of said removed region would meet a selection criterion if the removed region were to be added to said group; and adding said removed region to the selected group if the addition of said removed region satisfies an adding criterion.
According to yet another aspect of the present disclosure, there is provided a method of generating a page description from a list of graphical objects, each graphical object comprising one or more edges and at least one fill, said page description representing the visual appearance of said graphical objects when rendered, said method comprising the steps of: computing regions of overlap between said graphical objects; identifying said regions of overlap requiring representation as bitmaps at page resolution; for each identified region: computing a bounding box enclosing said identified region; and generating a page resolution bitmap within said bounding box representing the visual appearance of said identified region; and generating said page description utilising said page resolution bitmaps.
According to another aspect of the present disclosure, there is provided an apparatus for implementing any one of the aforementioned methods.
According to another aspect of the present disclosure there is provided a computer program for implementing any one of the methods described above.
Other aspects of the invention are also disclosed.
Brief Description of the Drawings Some aspects of the prior art and one or more embodiments of the present invention will now be described with reference to the drawings, in which: Figs. 1, 2 and 3 show schematic block diagrams of prior art pixel rendering systems for rendering computer graphic object images; Fig. 4 shows a schematic block diagram of the functional blocks of a pixel rendering apparatus forming part of the prior art pixel rendering system of Fig. 1; Fig. 5 illustrates schematically two overlapping objects for the purposes of illustrating how the prior art pixel rendering systems shown in Figs. 1 to 3 render a page; Fig. 6 shows a schematic block diagram of a pixel rendering system for rendering computer graphic object images according to the present invention; Fig. 7A shows a schematic block diagram of a controlling program in the pixel rendering system shown in Fig. 6 in more detail; Fig. 7B shows a schematic block diagram of a pre-processing module in more detail; Fig. 7C shows a schematic block diagram of a fill grouping module in more detail; Fig. 8A shows a page containing several regions for illustrating the formation of bounding boxes; Fig. 8B shows the page of Fig. 8A with groups created by a grouping algorithm; Figs. 9A to 9D show schematic flow diagrams of a method, performed by an edge tracking module, of processing the edges on a scanline; Figs. 10A to 10G show schematic flow diagrams of a method, performed by a bounding box grouping module, of merging groups of regions; Fig. 11 shows a schematic flow diagram of a method, performed by an edge association update module, of patching edge behaviour tables; Fig. 12 shows a schematic flow diagram of a method, performed by a pixel rendering apparatus, of rendering a page; Fig. 13 illustrates the functional blocks of the pixel rendering apparatus in the pixel rendering system shown in Fig. 6; Fig. 14A shows a schematic flow diagram of a method, performed by a bounding box grouping module, of cutting groups of bounding boxes to form non-overlapping groups of bounding boxes; Fig. 14B shows a schematic flow diagram of a method, performed by a bounding box grouping module, of relocating bounding boxes to vacant areas of other bounding boxes; Figs. 15A and 15B illustrate the results of the bounding box cutting method; and Figs. 16A and 16B illustrate the results of the bounding box relocation method.
Detailed Description Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
Aspects of prior art pixel rendering systems are described before describing embodiments of the invention. It is noted however that the discussions contained in the "Background" section and the prior art system described below relate to systems which form prior art through their respective publication and/or use. Such should not be interpreted as a representation by the present inventor(s) or patent applicant that such documents or systems in any way form part of the common general knowledge in the art.
Fig. 1 shows a schematic block diagram of a prior art pixel rendering system 100 for rendering computer graphic object images. The pixel rendering system 100 comprises a personal computer 110 connected to a printer system 160 through a network 150. The network 150 may be a typical network involving multiple personal computers, or may be a simple connection between a single personal computer and printer system 160.
The personal computer 110 comprises a host processor 120 for executing a software application 130, such as a word processor or graphical software application, and a controlling program 140, such as a driver on a WindowsTM, UnixTM, or MacintoshTM operating system.
The printer system 160 comprises a controller processor 170, memory 190, a pixel rendering apparatus 180, and a printer engine 195 coupled via a bus 175. The pixel rendering apparatus 180 is typically in the form of an ASIC card coupled via the bus 175 to the controller processor 170, and the printer engine 195. However, the pixel rendering apparatus 180 may also be implemented in software executed in the controller processor 170.
In the prior art pixel rendering system 100 each graphical object that is passed by the controlling program 140 to the pixel rendering apparatus 180 across the network 150 for processing is defined in terms of the following parameters: edges which describe the geometric shape of the graphical object; a fill which describes the colour, the opacity, the colour blend and/or the image to be painted within the shape of the object; and a level which describes whether an object should be painted above or behind other objects, and how colour and opacity from an object should be combined with colour and opacity from other overlapping objects.
Describing each of the parameters of the graphical object in more detail, and starting with the edges, the edges are used to describe the boundaries of an object. Each edge is provided as a sequence of segments in a monotonically increasing Y sequence.
Edge segments may be straight lines or may be any other type of curve, such as a Bezier curve. Edges may also be used to describe text objects by treating each glyph as a geometric shape whose boundaries are described by edges made up of straight-line segments. Edges are attributed a "direction" by the controlling program 140. This direction is a convention that the pixel rendering apparatus 180 uses to determine how an edge affects the activity of an object's level.
The fill is a description of the colour, pattern or image to be painted within the boundaries of the graphical object. The fill may be a flat fill representing a single colour, a blend representing a linearly varying colour, a bitmap image or a tiled (repeated) image. In each case the supplied fill is to be applied over the extent of the object. The controlling program 140 generates fill information for each object on a page that is used by the pixel rendering apparatus 180 during the rendering process.
Each object to be rendered is also assigned a level entry. An object's level entry indicates the relative viewing position of that object with respect to all other objects on a page.
In addition, each level entry created by the controlling program 140 also has a fill rule associated therewith, which may be "non-zero winding" or "odd-even fill". Each level has a fill-count, which is set to zero at the start of each scanline, during the rendering process. As the pixel rendering apparatus 180 processes a scanline working across the page, when a downwards heading edge is crossed, the fill-count of the level or levels associated with the edge is incremented. When an upwards heading edge is crossed, the fill-count of the level or levels associated with the edge is decremented. If a level has a non-zero winding fill rule, then the level is active if its associated fill-count is non-zero. If a level has an odd-even fill rule, then the level is active if its associated fill-count is odd.
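The two fill rules reduce to a simple test on the fill-count. A minimal sketch (the function name is illustrative):

```python
# Sketch: level activity under the two fill rules described above.
# The fill-count is incremented for each downward edge crossed and
# decremented for each upward edge.

def level_active(fill_count, rule):
    if rule == "non-zero-winding":
        return fill_count != 0
    if rule == "odd-even":
        return fill_count % 2 == 1
    raise ValueError(f"unknown fill rule: {rule}")

# Crossing two downward edges of the same level (e.g. a self-overlapping
# shape) gives a fill-count of 2: the rules then disagree.
print(level_active(2, "non-zero-winding"))  # -> True
print(level_active(2, "odd-even"))          # -> False
```

The self-overlap case is exactly where the choice of rule matters: non-zero winding keeps the overlap filled, odd-even leaves a hole.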
Yet further, the level entry defines the arithmetic or logical operation which specifies how the colour/opacity from this level is combined with the colour/opacity from other overlapping levels.
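The three parameters described above (edges, fill, level) can be summarised in a minimal data-structure sketch; the field names are illustrative, not from the specification:

```python
# Sketch of the per-object data described above: edges, a fill,
# and a level entry. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Edge:
    direction: str          # "down" or "up"
    segments: list          # segments in a monotonically increasing y sequence
    level_index: int        # the level(s) this edge activates/deactivates

@dataclass
class Level:
    z_order: int            # relative viewing position on the page
    fill_index: int         # index into the fill table
    fill_rule: str          # "non-zero-winding" or "odd-even"
    compositing_op: str     # how colour/opacity combines with other levels
    fill_count: int = 0     # reset to zero at the start of each scanline

@dataclass
class Fill:
    kind: str               # "flat", "blend", "bitmap" or "tiled"
    data: object            # e.g. an RGB tuple for a flat fill

square_fill = Fill("flat", (64, 64, 64))
square_level = Level(z_order=0, fill_index=0,
                     fill_rule="non-zero-winding", compositing_op="over")
print(square_level.fill_count)  # -> 0
```

This mirrors the split the renderer relies on: geometry in the edges, appearance in the fill table, and stacking/compositing behaviour in the level table.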
In the pixel rendering system 100, the software application 130 creates page-based documents where each page contains objects such as text, lines, fill regions, and image data. The software application 130 sends a page for printing, in the form of an application job, via a graphics application layer to the controlling program 140. In particular, the software application 130 calls sub-routines in a graphics application layer, such as GDI in WindowsTM, or X11 in UnixTM, which provide descriptions of the objects to be rendered onto the page, as opposed to a raster image to be printed.
The controlling program 140 receives the graphical objects from the application program 130, and constructs an instruction job containing a set of instructions, together with data representing graphical objects. In the present context a job represents one page of output. The job is then transferred to the controller processor 170, which controls the pixel rendering apparatus 180, for printing. In particular, a program executing on the controller processor 170 is responsible for receiving jobs from the controlling program 140, providing memory 190 for the pixel rendering apparatus 180, initialising the pixel rendering apparatus 180, supplying the pixel rendering apparatus 180 with the start location in memory 190 of the job, and instructing the pixel rendering apparatus 180 to start rendering the job.
The pixel rendering apparatus 180 then interprets the instructions and data in the job, and renders the page without further interaction from the controlling program 140.
The output of the pixel rendering apparatus 180 is colour pixel data, which can be used by the output stage of the printer engine 195. The pixel rendering apparatus 180 employs a pixel-based sequential rendering method.
The pixel rendering apparatus 180 of Fig. 1 is shown in more detail in Fig. 4.
The pixel rendering apparatus 180 comprises an instruction execution module 410; an edge tracking module 420; a priority determination module 430; a pixel generation module 440; a pixel compositing module 450; and a pixel output module 460 arranged in a pipeline. For each scanline the pixel rendering apparatus 180 processes each pixel in turn, considering the graphical objects that affect that pixel, to determine the final output for the pixel.
The instruction execution module 410 of the pixel rendering apparatus 180 reads and processes instructions from the instruction job that describes the pages to be printed and formats the instructions into information that is transferred to the other modules 420 to 460 within the pipeline.
One instruction is used to command the pixel rendering apparatus 180 to prepare a page of a particular dimension for printing. Another instruction indicates that an edge or set of edges with particular characteristics start at a particular scanline at a particular x-position. Yet another instruction specifies that the pixel rendering apparatus 180 should load a particular level into the appropriate internal memory region used to hold level information. Yet another instruction specifies that the pixel rendering apparatus 180 should load a particular fill into the appropriate internal memory region used to hold fill information within the printer system 160. Yet another instruction specifies that a particular data set (a compressed bitmap, for example) should be decompressed using a specified decompression algorithm. The data set to be decompressed must be available in the memory 190 of the printer system 160 before such an instruction is executed by the pixel rendering apparatus 180. Other instructions relevant to the rendering of a page are also decoded by the pixel rendering apparatus's instruction execution module 410.
When executing the instruction job the instruction execution module 410 processes the instructions sequentially. On encountering an instruction to load an edge or set of edges at a specific scanline the instruction execution module 410 extracts relevant information regarding the edge and passes this information onto the edge tracking module 420.
The edge tracking module 420 is responsible for determining the edges of those graphical objects in the instruction job that intersect the currently scanned pixel and passes this information onto the priority determination module 430. When creating the instruction job the controlling program 140 inserts instructions to start an edge or set of edges in the appropriate order and at the appropriate scanline such that the pixel rendering apparatus 180 maintains a correct list of active edges at every scanline of the page when rendering.
When the edge tracking module 420 receives edge information from the instruction execution module 410, the edge tracking module 420 updates an active edge list 470. The active edge list 470 contains a sorted list of the active edges on the current scanline being rendered by the pixel rendering apparatus 180. The active edge list 470 is updated by inserting the incoming edges into the active edge list 470 and sorting this list by each edge's current x-position.
Once a scanline has been rendered, each edge has its x-position updated by executing each edge's x-position update function. This function returns the edge's x-position for the next scanline render. In the case that an edge terminates before the next scanline, that edge is removed from the active edge list 470. When edges cross, the active edge list 470 is resorted.
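The per-scanline update just described (advance each edge's x-position, drop terminated edges, re-sort in case edges crossed) can be sketched as follows; the dictionary-based edge representation is an assumption for illustration:

```python
# Sketch of the active edge list update between scanlines.

def advance_scanline(active_edges, next_y):
    """active_edges: list of dicts with 'x', 'end_y', and an 'update'
    function returning the edge's x-position on scanline next_y."""
    survivors = []
    for edge in active_edges:
        if edge["end_y"] <= next_y:       # edge terminates before next scanline
            continue
        edge["x"] = edge["update"](next_y)
        survivors.append(edge)
    survivors.sort(key=lambda e: e["x"])  # re-sort; edges may have crossed
    return survivors

# Two straight edges that cross between scanline 0 and scanline 1.
edges = [{"x": 0.0, "end_y": 10, "update": lambda y: 0.0 + 2.0 * y},
         {"x": 1.0, "end_y": 10, "update": lambda y: 1.0 + 0.5 * y}]
edges = advance_scanline(edges, 1)
print([e["x"] for e in edges])  # -> [1.5, 2.0]
```

After the update the steeper edge has overtaken the shallower one, and the re-sort restores the left-to-right order the span processing depends on.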
For each edge that is stored in the active edge list 470 a level or set of levels will be added or removed from an active level list 490 that is maintained by the priority determination module 430. As the active edge list 470 is traversed, a level or set of levels are added to or removed from the active level list 490. The span of pixels between a pair of edges has therefore an active level list 490 that is used to control the compositing of the pixel contributions from each level in the active level list 490.
The priority determination module 430 is pre-loaded with a level table 445 from the instruction job. The level table 445 contains an entry for each graphical object consisting of its z-order position on the page, how the colour of the object should be combined with other objects (compositing operators), and other data. The priority determination module 430 is responsible for determining those objects, so called contributing objects, that make a visual contribution to the currently scanned pixel and passes that information onto the pixel generation module 440.
The information passed from the priority determination module 430 to the pixel generation module 440 includes an index that references the fill of a corresponding contributing object in a fill table 455. The pixel generation module 440 is pre-loaded with the fill table 455 from the instruction job. The fill table 455 contains an entry for each graphical object consisting of the graphical object's fill and other data. The pixel generation module 440, upon receiving information from the priority determination module 430, is responsible for referencing the index into the fill table 455, and using the 711885 -14data therein to generate colour and opacity information which is passed onto the pixel compositing module 450.
The pixel compositing module 450 is responsible for obtaining the final colour for the currently scanned pixel by compositing the fill colours in accordance with the compositing operators as indicated in the rendering instructions. For each level in the active level list 490 for a particular pixel span, the pixels corresponding to the level's object are determined by reference to the fill that is linked to that level. In the case of a simple flat fill, this reference is a trivial lookup of the colour of that object. However, for a bitmap fill a 2-D affine transformation is applied to the (x, y) location of each pixel in the span to determine the source data to be used to colour the pixel on the page, including some interpolation method if required. Similarly a non-trivial calculation is required for any other fill type that relates a pixel's location on the page to the fill data stored within the printer system's fill table 455, such as a linear gradient.
In order to determine the final colour of pixels within a span, the contributions of each level are composited together. Compositing is performed using the transparency of the contributing levels, the colours of the contributing levels and the raster operation specified for each level in the active level list 490. For a span where the fills of all the contributing levels for this span each reference a single colour, the compositing algorithm need only compute the resultant colour from compositing one pixel and then repeat this pixel for the whole span.
For a span where the fills of all the contributing levels for this span are of type bitmap or gradient for example, the compositing algorithm needs to compute the resultant colour from compositing all the pixels in the span in turn.
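The flat-fill shortcut versus per-pixel compositing can be sketched as follows. A simple source-over operator is used here for illustration; the actual raster operations come from the level table.

```python
# Sketch: span compositing with the flat-fill optimisation described
# above. Levels are given bottom-to-top as ((kind, data), alpha), where
# kind is "flat" (data = one colour) or "bitmap" (data = per-pixel colours).

def over(src, src_alpha, dst):
    """Simple source-over on RGB tuples (illustrative operator)."""
    return tuple(round(s * src_alpha + d * (1 - src_alpha))
                 for s, d in zip(src, dst))

def composite_span(levels, span_length):
    if all(kind == "flat" for (kind, _), _ in levels):
        # Flat fills only: composite one pixel, then repeat it for the span.
        pixel = (255, 255, 255)                  # white page background
        for (_, colour), alpha in levels:
            pixel = over(colour, alpha, pixel)
        return [pixel] * span_length
    # Otherwise every pixel in the span is composited individually.
    out = []
    for i in range(span_length):
        pixel = (255, 255, 255)
        for (kind, data), alpha in levels:
            colour = data if kind == "flat" else data[i]
            pixel = over(colour, alpha, pixel)
        out.append(pixel)
    return out

span = composite_span([(("flat", (0, 0, 0)), 0.5)], 4)
print(span[0])  # -> (128, 128, 128)
```

For an all-flat span the cost is one compositing evaluation regardless of span width, which is the optimisation the text describes.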
The final output colour of the pixel is passed onto the pixel output module 460, which is subsequently passed to the printer engine 195.
The pixel rendering apparatus 180 described above considers each output pixel fully, applying all required compositing operations before moving onto the next pixel in the raster output. This method is efficient in an ASIC implementation, but is not optimal in software. A variation on this pixel rendering apparatus 180 is possible which offers advantages when implemented in software. In this variation, the pixel rendering apparatus 180 applies one of the required compositing operations to all contributing pixels within a span (between two successive edges), before applying the next compositing operation. Once all required compositing operations have been so applied, the resulting pixel span has been fully processed and may be passed to the pixel output module 460. The next pixel span is then considered.
The distribution of the computational burden of rendering in the pixel rendering system 100 shown in Fig. 1 between the personal computer 110 and the printer system 160 sees the compositing of pixels being performed on the printer system 160 by the pixel rendering apparatus 180. Compositing in a span where there are N active levels is an order-N algorithm. Computing the contribution of each level in the active level list 490 for a page with many overlapping objects can be very time consuming using the process described above, since the contribution of each level must be calculated for each pixel span. If the fills linked to the active levels within a span are of type flat colour then only a single pixel within the span needs to be individually calculated using the compositing algorithm. All other pixels within the span will be identical. However, if the fills linked to the active levels within a span are of type bitmap, or another non-flat fill, then each pixel within the span needs to be individually calculated using the compositing algorithm.
In order for a laser beam printer running at a rated page per minute speed to render in real time, the processing load on the printer due to compositing must be quantified and catered for in specifying the resource requirements of the printer system 160. In particular, the processing power and memory resources of the printer system 160 must be such that at no time does the pixel rendering apparatus 180 fail to keep up with the consumption of pixels by the printer engine 195 when the laser printer is actually printing the page.
In addition to the compositing of pixels on the page, another task that is performed typically in the printer system 160 is that of Colour Space Conversion (CSC).
Compositing typically takes place in the RGB (Red Green Blue) colour space and hence a CSC is required following the step of compositing before pixels can be shipped to the printer engine 195 for printing onto the page. CSC between the RGB colour space typically used when compositing and the CMYK colour space required by the printer is a non-linear operation where typically each of the Cyan, Magenta, Yellow and Black components are functions of all three RGB components. CSC places a certain processing burden on the printer system 160 in order to be executed in a timely manner for real time printing on a laser beam printer.
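A naive RGB-to-CMYK conversion with black generation illustrates why each output channel depends on all three RGB inputs. Real printer CSC is table-driven and device-specific; this sketch is purely illustrative.

```python
# Sketch: naive RGB-to-CMYK conversion with black generation and
# under-colour removal. Channels are normalised to [0, 1].

def rgb_to_cmyk(r, g, b):
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)                 # black generation: K from all three inputs
    if k == 1.0:                     # pure black
        return (0.0, 0.0, 0.0, 1.0)
    # Under-colour removal: subtract the black component from C, M and Y.
    return tuple((x - k) / (1 - k) for x in (c, m, y)) + (k,)

print(rgb_to_cmyk(255, 0, 0))  # -> (0.0, 1.0, 1.0, 0.0)
```

Because K is derived from the minimum of C, M and Y, every CMYK component is a function of all three RGB values, matching the non-linearity described above.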
Yet another task that is performed typically in the printer system 160 is that of halftoning. Halftoning is applied to pixels that are at page resolution as a method of reducing the bit depth of data to be transferred to the printer engine 195, thereby reducing the size of the transferred data. In a typical halftoning implementation tables are used to reduce each colour channel from an 8-bit value to a 4-bit, 2-bit or 1-bit value.
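The table-driven bit-depth reduction described above amounts to a per-channel lookup. A minimal sketch, using a simple linear quantisation table (real halftone tables also encode screening behaviour):

```python
# Sketch: reducing an 8-bit colour channel to 4 bits via a 256-entry
# lookup table, as in the halftoning step described above.

table_4bit = [v * 15 // 255 for v in range(256)]   # linear quantisation

def halftone_channel(pixels, table):
    return [table[v] for v in pixels]

print(halftone_channel([0, 128, 255], table_4bit))  # -> [0, 7, 15]
```

Reducing each channel from 8 bits to 4 halves the data shipped to the printer engine 195; 2-bit and 1-bit tables reduce it further still.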
The operation of the pixel rendering system 100 in processing an instruction job is now described with reference to Fig. 5 wherein two graphical objects 510 and 520 on a page 500 are illustrated. The graphical objects are a triangle 510 with a light grey flat fill, and a square 520 with a dark grey flat fill. The triangle 510 is above the square 520, and also partially overlaps the square 520. The triangle 510 is partially transparent while the square 520 is fully opaque. The background of page 500 is white.
In the prior art pixel rendering system 100 the controlling program 140 constructs an instruction job wherein the triangle 510 is described by two edges 580 and 585. The first edge 580 is downwards heading, and consists of two segments 581 and 582. The second edge 585 is upwards heading and consists of a single segment. The square 520 is also described by two edges 590 and 595. The first edge 590 is downwards heading, and consists of one segment. The second edge 595 is upwards heading and also consists of one segment only.
The following table shows the edges that the controlling program 140 generates for the objects 510 and 520 in Fig. 5:

    Object        Edge    Direction    Number of Associated Segments    Level Index
    Square 520    590     Down         1                                0
                  595     Up           1                                0
    Triangle 510  580     Down         2                                1
                  585     Up           1                                1

Table 1: Edges created for objects in Fig. 5

The controlling program 140 generates two level entries in the level table 445 (Fig. 4) for the objects in Fig. 5 as follows:

    Level Index    Associated Fill Index    Other Data
    0              0 (Fa)
    1              1 (Fb)

Table 2: Level table created for objects in Fig. 5

Within the instruction job the controlling program 140 generates two fill entries in the fill table 455 (Fig. 4) as follows:

    Fill Index    Name    Type    Colour        Opacity    Other Data
    0             Fa      Flat    Dark grey     100%
    1             Fb      Flat    Light grey

Table 3: Fill table created for objects in Fig. 5

To describe how the pixel rendering apparatus 180 renders the objects on page 500, consider scanline 531 in Fig. 5. When rendering this scanline 531 no levels are active until segment 581 of edge 580 is encountered. For this reason the colour output by the pixel rendering apparatus 180 is white from the left of the page 500 up until edge 580.
At edge 580 level 1 is activated. Between edge 580 and edge 590 the active level list 490 contains only level 1. The colour defined by level 1's fill entry is light grey.
Therefore, a span of light grey pixels is output by the pixel rendering apparatus 180 for those pixels between edge 580 and edge 590.
At edge 590 level 0 is activated. Between edge 590 and edge 585 the active level list 490 contains both level 0 and level 1. The colour defined by level 0's fill entry is dark grey and the colour defined by level 1's fill entry is light grey. Therefore, a span of grey pixels is output by the pixel rendering apparatus 180 for the pixels between edge 590 and edge 585, that being the colour that results from compositing the two colours of the objects 510 and 520 using each object's respective fill entry to control the proportions of each object's colour.
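The compositing of the two flat fills can be sketched as a per-channel source-over blend in which the top object's opacity controls the proportions. The grey values and the 50% opacity below are hypothetical, since the actual opacity of the triangle's fill is not specified in Table 3.

```python
def composite_over(top, bottom, top_opacity):
    """Source-over compositing of two flat fills, per colour channel.
    `top_opacity` controls the proportion of the top object's colour."""
    return tuple(round(t * top_opacity + b * (1.0 - top_opacity))
                 for t, b in zip(top, bottom))

# Hypothetical grey levels for fills Fb (light grey, partially
# transparent triangle) and Fa (dark grey, opaque square).
light_grey = (200, 200, 200)
dark_grey = (64, 64, 64)
span_colour = composite_over(light_grey, dark_grey, 0.5)
```

With these assumed values the span between edges 590 and 585 would come out as the intermediate grey (132, 132, 132).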
At edge 585 level 1 is deactivated. Between edge 585 and edge 595 the active level list 490 contains only level 0. The colour defined by level 0's fill entry is dark grey.
Therefore, a span of dark grey pixels is output by the pixel rendering apparatus 180 for those pixels between edge 585 and edge 595.
At edge 595 level 0 is deactivated. Between edge 595 and the right hand edge of the page 500 the active level list 490 contains no levels. Therefore a span of white pixels is output by the pixel rendering apparatus 180 for those pixels between edge 595 and the right hand edge of the page 500.
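The walk-through above amounts to a left-to-right sweep of the scanline that maintains the set of active levels between consecutive edges. In the following sketch the edge x positions are hypothetical; levels 0 and 1 correspond to the square and the triangle respectively.

```python
def spans_for_scanline(events, page_width):
    """Sweep a scanline left to right. `events` is a sorted list of
    (x, level, delta) tuples where delta is +1 to activate a level
    and -1 to deactivate it. Returns (start_x, end_x, active_levels)
    spans, mirroring the walk-through of scanline 531."""
    spans, active, last_x = [], set(), 0
    for x, level, delta in events:
        spans.append((last_x, x, frozenset(active)))
        if delta > 0:
            active.add(level)       # edge activates its level
        else:
            active.discard(level)   # edge deactivates its level
        last_x = x
    spans.append((last_x, page_width, frozenset(active)))
    return spans

# Hypothetical x positions for edges 580, 590, 585 and 595.
spans = spans_for_scanline([(100, 1, 1), (180, 0, 1), (260, 1, -1), (340, 0, -1)], 600)
```

The resulting spans reproduce the sequence white, light grey, composited grey, dark grey, white described above.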
This example shows the basis on which the pixel rendering apparatus 180 works, and shows simple examples of a fill table 455 and a level table 445. In most print jobs the fill table 455 and the level table 445 are far more complex, and typically contain many entries. In complex print jobs the number of entries can be very large, necessitating methods to deal with such table sizes.
Due to the processing load that pixel compositing places on the pixel rendering apparatus 180, the pixel rendering system 100 is unsuitable for low-cost real-time printing on a laser beam printer. The reason is that, when a print system is designed to render in real time, the worst-case instruction job must be renderable within the time constraints imposed by the print engine speed if a full framestore is not to be employed in the printer system 160. Once the paper has started to roll past the toner drums, a laser beam printer may not be stopped to allow the raster image processing to catch up in the generation of pixel data.
In many print jobs (such as text-only jobs) there is usually no compositing to be performed and the pixel rendering system 100 is capable of real-time rendering without extravagant processing and memory resources in the printer system 160. In a text-only page the majority of processing in the pixel rendering apparatus 180 is consumed by edge tracking. In many simple cases where there are no overlapping objects it is not a cost-effective solution to provide a printer that is capable of multi-level compositing. The proportion of pages that require compositing will depend on the user's use profile. For the printing of documents that are primarily text the priority determination module 430 will in many cases be running only a trivial single-level resolution for each span. A printer with reduced resources (and hence cost) would be capable of rendering such pages in a similar time frame.
However, for very complicated jobs with hundreds of overlapping transparent bitmaps for example, the burden of compositing would require considerable processing and memory resources to be available on the printer system 160 to allow such a page to be rendered in real time. For high page rates and page resolutions, the cost of a printer that contains a real time pixel render engine capable of rendering such complicated pages could be prohibitive.
For this reason the provision of the function of compositing in the printer is not seen as providing a cost effective solution since such a printer would need to handle both the very simple and the very complicated cases with the same low-cost hardware and software resources.
As described earlier an object's boundaries are decomposed into edges by the controlling program 140. The relative viewing position of each object is explicitly coded in the level associated with an object's edges and an object's pixel colours are stored in a fill. This information is generated by the controlling program 140 for each object on the page. These edges, levels and fills are then encoded in an instruction job together with other relevant information to be rendered by the pixel rendering apparatus 180.
In some cases an object's edges will be completely obscured by an overlaying opaque object. An object that is fully obscured by an overlying object will not contribute anything to the pixels on a page. In such cases, the edge information, the level and the fill created by the controlling program 140 for the obscured object are redundant in the instruction job. However the controlling program 140 in the pixel rendering system 100 does not discriminate between such obscured objects and objects that are visible on a page, placing all edges, levels, and fills in the instruction job.
This limitation introduces redundancy in the instruction job created by the controlling program 140 when objects are fully obscured by other objects, leading to an unnecessary increase in the size of the instruction job. This limitation may also introduce redundancy in the instruction job created by the controlling program 140 when objects partially obscure other objects, again leading to an unnecessary increase in the size of the instruction job. This redundancy results in a reduction in speed of the subsequent processing of the instruction job. This redundancy also means that the pixel rendering apparatus 180 uses more memory resources than if such redundancy were removed before the instruction job was created.
Whether or not one object partially or fully obscures other objects depends on the relative positions of the edges of the objects. If the edges of the object should cross one another then the objects can be said to overlap. Furthermore, if edges of objects do not cross at any time, the objects either overlap fully or do not overlap at all. It is only by tracking the edges' relative positions that a decision can be made on whether objects overlap fully or not at all. When the number of objects on a page is large the algorithm to determine the existence of overlap of each object with all other objects on the page becomes very time consuming. An object by object approach is not feasible since it would involve the repeated tracking of each object's edges as it is compared with all the other objects in a page.
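One inexpensive, conservative pre-filter for this problem is to compare the objects' axis-aligned bounding boxes, since objects whose bounding boxes do not intersect cannot overlap. The sketch below is illustrative only, not the overlap algorithm of any particular system, and the (left, top, right, bottom) tuple layout is an assumption.

```python
def boxes_overlap(a, b):
    """Conservative overlap test on axis-aligned bounding boxes given
    as (left, top, right, bottom). If the boxes do not intersect the
    objects cannot overlap; if they do intersect, edge tracking is
    still needed to decide between full and partial overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
```

Because a bounding box comparison is O(1) per pair, it avoids the repeated edge tracking that makes the object-by-object approach infeasible for pages with many objects.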
Fig. 2 shows a schematic block diagram of another prior art pixel rendering system 200. The pixel rendering system 200 also comprises a personal computer 210 connected to a printer system 260 through a network 250. The personal computer 210 comprises a host processor 220 for executing a software application 230. The printer system 260 comprises a controller processor 270, memory 290, a pixel rendering apparatus 280, and a printer engine 295 coupled via a bus 275. A controlling program 240 executes on the controller processor 270.
Hence, the pixel rendering system 200 differs from the system 100 shown in Fig. 1 in that the controlling program 240 executes on the controller processor 270 of the printer system 260, instead of on the host processor 220, as is the case in system 100. In system 200 the software application 230 sends a high-level description of the page (for example a PDL file) to the controlling program 240 executing in the controller processor 270 of the printer system 260. The controlling program 240 performs the same task as its equivalent controlling program 140 in the system 100 shown in Fig. 1. In system 200 the load on the printer system 260 is much greater than the load on the printer system 160 of the system 100.
Fig. 3 shows a schematic block diagram of yet another prior art pixel rendering system 300. The pixel rendering system 300 also comprises a personal computer 310 connected to a printer system 360 through a network 350. The personal computer 310 comprises a host processor 320 for executing a software application 330, a controlling program 340, and a pixel rendering apparatus 380. The pixel rendering apparatus 380 is typically implemented in software executing on the host processor 320, but may also take the form of an ASIC chip executing as a co-processor to the host processor 320. The printer system 360 comprises a controller processor 370, memory 390, and a printer engine 395 coupled via a bus 375. In system 300 the controlling program 340 and the pixel rendering apparatus 380 combine to generate pixels that are delivered to the printer system 360 over the network 350.
The printer system 360 of the host-based printer system 300 may therefore be a candidate for a low-cost printer since it does not implement compositing in the printer system 360, instead requiring that the personal computer 310 implement this function. In the host-based pixel rendering system 300 the page to be printed is rasterised to a bitmap, compressed in the host processor 320 and shipped to the printer system 360 across the network 350.
In the host-based pixel rendering system 300 the main factor that needs to be considered in ensuring that the printer system 360 can maintain real-time printing is the time taken by the controller processor 370 to decompress the compressed bitmap that has been received from the host processor 320 and stored in the memory 390 of the printer system 360. If the bitmap decompression software can run fast enough in the controller processor 370 to ensure that the decompressed pixels are always available to the printer engine 395 when needed, then the printer system 360 is said to be able to print in real time.
Some remedial action may be taken in the overall host-based pixel rendering system 300 to ensure that real-time printing is guaranteed. For example, the resolution of the rasterised bitmap can be reduced for particularly complicated bitmaps that would otherwise not be possible to decompress in real time in the printer system 360. This resolution reduction has the net effect of reducing the amount of data that needs to be handled by the bitmap decompression software running in the controller processor 370, with a resulting speed-up in the rate of pixel generation. Reducing printer resolution reduces the quality of the printed image, a trade-off that must be considered when designing the printer.
Another method of guaranteeing real-time printing is to reduce the speed of the printer rollers. This has the net effect of reducing the rate of pixel consumption by the printer engine 395. The bitmap decompression software running in the printer system 360 can then take more time to decompress while still meeting the real-time deadline of the printer engine 395. While the quality of the page remains unaffected by this action, the printer's slowdown may not be desirable to the user.
In some existing networked host-based printers the controller processor 370 does not start decompressing pixels until such time as all the compressed pixel data has been received into the printer system 360 over the network 350. The reason for this is to ensure that network latency does not affect the real-time printing guarantee of the printer system 360. If pixel shipping to the printer engine 395 were to commence before the full compressed bitmap was received, then there is the possibility that the network 350 would fail before the full page was received. In this case the printer engine 395 would be starved of data and only a part of the page would be printed. Since the rollers in laser beam printers cannot be stopped once a page has begun to print, the page would be ejected only partially printed.
In choosing to postpone decompression of the compressed bitmap in the printer system 360 until such time as the complete compressed image is received from the host processor 320 over the network 350 the printer system 360 must be designed with enough memory 390 to contain the worst case compressed image size possible. The worst case compressed page size is specified and controlled by the controlling program 340 running in the host processor 320. Lossy compression, resolution reduction and other techniques may be implemented to ensure the worst case page will, when compressed, fit in the memory 390.
In addition, it is desirable that the size of the compressed pixel data be small in order to avoid creating a bottleneck in the network 350. This bottleneck is undesirable because it would adversely affect the overall response time of the network 350 for other users of the network 350 and also because it increases the overall print time, defined as the time between the moment the user commands a page print and the moment the printed page is actually ejected from the printer system 360.
The greatest advantage that the host-based pixel rendering system 300 has over the printer systems 100 and 200 is printer cost, because it does not need to implement the time-consuming and resource-hungry tasks of PDL interpretation, edge tracking, object compositing, colour space conversion and halftoning in the printer itself. However, as increased page resolutions and printing rates are demanded in the future, its design places too great a burden on the network bandwidth, memory requirements, and processing capabilities of the host system to remain a feasible solution.
Thus far, two contrasting printing architectures have been outlined. However, a third form exists whereby an intermediate display list (IDL) is generated on the host and then delivered to the printer. This IDL is, in general, bigger than the corresponding PDL format for the same page but in general is smaller than the typically compressed pixels for that page. By designing the format of the IDL appropriately, the complexity of the printer can be tuned for an optimum cost/performance trade-off in the overall printing system.
An IDL typically describes the objects on a page in terms of each object's boundary and other attributes of that object such as its fill, which describes the colour or colours of the pixels within the object, the relative viewing position of the object, and how the object's colours should be combined with the colours of other objects that overlap it.
The printer designed to work with an IDL must contain such software and/or hardware resources on board as to allow the decoding of the IDL and the generation of the rasterised pixel data that corresponds to the page. Since an IDL is generally significantly less complex than a full PDL, the resources needed in an IDL-based printer are consequently less than for the equivalent PDL printer. So, in general, an IDL-based printer is cheaper than the equivalent PDL printer.
Such an IDL approach has some additional advantages over both the PDL-based and host-based printer systems described above. Firstly, the flexibility in designing the format of an IDL means that computationally intensive tasks may be performed in that part of the printing system that is best suited to executing them. For example, a powerful host computer may be more suited to compositing pixels together than the printer itself.
A flexible IDL format allows the system designer to balance the processing load between the printer and the host computer by dynamically shifting the processing load from host to printer or vice versa depending on the complexity of the page to be printed.
There are many technical challenges in designing a printer system that successfully balances the load between the host and the printer for a wide range of page types. It is of the utmost importance that printing quality and printing speed (page rate) are not compromised in the pursuit of the ideal balance between host and printer. Issues such as cost, quality, software and hardware reuse and immunity to increasing page resolution all need to be addressed in the design of the ideal load-balancing printer system.
Fig. 6 shows a schematic block diagram of a pixel rendering system 600 for rendering computer graphic object images in accordance with the present invention. The pixel rendering system 600 comprises a personal computer 610 connected to a printer system 660 through a network 650. The network 650 may be a typical network involving multiple personal computers, or may be a simple connection between a single personal computer and printer system 660.
The personal computer 610 comprises a host processor 620 for executing a software application 630, such as a word processor or graphical software application, and a controlling program 640 in the form of a driver on a WindowsTM, UnixTM, or MacintoshTM operating system.
The printer system 660 comprises a controller processor 670, memory 690, a pixel rendering apparatus 680, and a printer engine 695 coupled via a bus 675. The pixel rendering apparatus 680 is in the form of an ASIC card coupled via the bus 675 to the controller processor 670, and the printer engine 695. However, the pixel rendering apparatus 680 may also be implemented in software executed in the controller processor 670.
In the pixel rendering system 600, the software application 630 creates page-based documents where each page contains objects such as text, lines, fill regions, and image data. The software application 630 sends a page for printing, in the form of an application job, via a graphics application layer to the controlling program 640. In particular, the software application 630 calls sub-routines in a graphics application layer, such as GDI in WindowsTM, or X-11 in UnixTM, which provide descriptions of the objects to be rendered onto the page, as opposed to a raster image to be printed.
The controlling program 640 receives the graphical objects from the application program 630, and decomposes the graphical objects into edges, levels and fills in the same manner as the controlling program 140 of the prior art pixel rendering system 100 (Fig. 1). These edges, levels and fills are called the first set of primitives. The fill may be a flat fill representing a single colour, a blend representing a linearly varying colour, a bitmap image or a tiled (repeated) image.
The controlling program 640 then further processes this first set of primitives to generate a second set of primitives that facilitates more flexibility in load balancing between the personal computer 610 and printer system 660.
Finally, an instruction job for forwarding to the controller processor 670 is constructed using the second set of primitives.
The job is then transferred for printing, via the network 650, to the controller processor 670, which controls the pixel rendering apparatus 680. In particular, a program executing on the controller processor 670 is responsible for receiving jobs from the controlling program 640, providing memory 690 for the pixel rendering apparatus 680, initialising the pixel rendering apparatus 680, supplying the pixel rendering apparatus 680 with the start location in memory 690 of the job, and instructing the pixel rendering apparatus 680 to start rendering the job.
The pixel rendering apparatus 680 then interprets the instructions and data in the job, and renders the page without further interaction from the controlling program 640.
The output of the pixel rendering apparatus 680 is colour pixel data, which may be used by the output stage of the printer engine 695.
While object-based descriptions are generally used to describe the content of a page to be printed, this choice does not always provide the flexibility to implement optimisations such as removing redundant information from a display list. A typical driver as found in the prior art personal computer 110 decomposes the objects one by one into primitives that, in the main, represent only that object. Certain optimisations are possible, for example the reuse of similar shapes and fill data between objects, but the benefit is often not significant.
Therefore, instead of using an object based description for the page, the pixel rendering system 600 uses the edges of objects as the basis for describing the page. One approach uses the concept of an edge behaviour table. The edge behaviour table specifies attributes associated with an edge.
In particular, the edge behaviour table contains one or more entries, each entry containing one or more attributes of the edge to which the entry belongs and also a lifetime for which the attribute or attributes for the edge are active. Table 4 shows an edge behaviour table that has N entries, where each entry has P attributes.
    Lifetime      Attribute 1    Attribute 2    ...    Attribute P
    Lifetime 1    Value 11       Value 12       ...    Value 1P
    Lifetime 2    Value 21       Value 22       ...    Value 2P
    ...
    Lifetime N    Value N1       Value N2       ...    Value NP

Table 4: Layout of the preferred implementation of an edge behaviour table

An attribute of an edge is data that is related to that edge. It may be a reference to data used to calculate the pixels following the edge on the page, or it may be a function that is used to update the (x, y) position of the edge from scanline to scanline. The attribute may, in fact, be any data that is relevant to the edge.
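The table layout above might be modelled in software along the following lines; this is a hypothetical sketch, and the field names (`lifetime`, `attributes`, `fill_index`) are assumptions rather than part of the described implementation.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class EdgeBehaviourEntry:
    """One row of Table 4: a lifetime plus the attribute values that
    are valid for that lifetime (relative form, in scanlines)."""
    lifetime: int
    attributes: Dict[str, Any]   # attribute name -> value, e.g. a fill index

# A hypothetical two-entry table for one edge: the fill referenced by
# the edge changes after 25 scanlines.
edge_behaviour_table = [
    EdgeBehaviourEntry(lifetime=25, attributes={"fill_index": 0}),
    EdgeBehaviourEntry(lifetime=10, attributes={"fill_index": 1}),
]
```

Each attribute slot can hold any edge-related data, matching the open-ended "Value" columns of Table 4.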
While not all possibilities for the attribute of an edge are explicitly described, one skilled in the art may derive other attributes that are related to an edge as part of a page description used to render a page.
Each edge's attribute also has a lifetime, which is typically expressed in scanlines. So, for example, the attributes in one entry of an edge's edge behaviour table that are valid for 25 scanlines would have a lifetime field that indicates this. Depending on the most suitable solution for the hardware or software platform on which the pixel rendering apparatus 680 is implemented, the lifetime of 25 scanlines may be represented in relative or absolute terms.
In the case of an absolute representation of the lifetime of an edge behaviour table entry, the lifetime is expressed as "from scanline M to scanline N inclusive", where M is the first scanline for which the entry's attributes are valid and N is the last scanline for which the entry's attributes are valid, each scanline value being offset from the start of the page.
In the case of a relative representation of the lifetime of an edge behaviour table entry, the lifetime would be expressed as "25 scanlines". Implied in this representation is that the entry's attributes are valid for 25 scanlines from the start of the edge or from the scanline following the last scanline of the previous entry.
Other expressions are also possible for the lifetime field of each entry in an edge behaviour table. For example an entry that means "for the remainder of the edge" may be used to imply that the corresponding attributes are valid for the remainder of the edge.
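As a sketch of the relationship between the two representations, the following hypothetical helper converts a sequence of relative lifetimes into the equivalent absolute inclusive scanline ranges; the function name and calling convention are assumptions for illustration.

```python
def relative_to_absolute(start_scanline, relative_lifetimes):
    """Convert relative lifetimes (counts of scanlines) into absolute
    inclusive (M, N) scanline ranges, as described in the text."""
    ranges, m = [], start_scanline
    for count in relative_lifetimes:
        n = m + count - 1    # last scanline for which the entry is valid
        ranges.append((m, n))
        m = n + 1            # next entry starts on the following scanline
    return ranges
```

For an edge starting on scanline 100 with entries valid for 25 and then 10 scanlines, this yields the absolute ranges (100, 124) and (125, 134).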
While not explicitly mentioned here, those skilled in the art would appreciate that other expressions may be used to code in the lifetime of an edge.
Another approach that may be used to replace an object based display list is one where each original edge on the page is split, if necessary, into separate edges where each separate edge extends over successive scanlines as indicated by a single entry in the edge behaviour table that would be constructed for the original edge. Each separate edge references the same (set of) attribute(s) over its whole extent. In order to calculate the points at which the original edge should be split the original edge is tracked and when the attribute(s) of the original edge change, a new separate edge is started at that location and the existing separate edge (if any) is terminated. Note that the resolution of the start/end points of the separate edges must be such that the edge tracking of the separate edges is identical to that of the original edge. This embodiment does not require an edge behaviour table or lifetime data.
Referring again to the pixel rendering system 600 in Fig. 6, the graphical objects on a page to be printed are passed from the software application 630 to be processed by the controlling program 640. The role of the controlling program 640 is to generate a display list that can be rendered by the pixel rendering apparatus 680. Fig. 7A shows a schematic block diagram of the controlling program 640 in more detail. The controlling program 640 comprises an objects decomposition driver 640a, a primitives processor 640b and an instruction job generator 640c.
The method employed by the controlling program 640 is to first decompose, using the objects decomposition driver 640a, the page objects passed from the software application 630 into a representation of edges, levels and fills. As noted above, these edges, levels and fills are called the first set of primitives, and are stored in store 640d.
Within the primitives processor 640b, the first set of primitives in store 640d is further processed to generate a second set of primitives placed in store 640e. The second set of primitives includes edges, an edge behaviour table for every edge and an aggregate fill table. The second set of primitives in store 640e is then further processed by the instruction job generator 640c which creates an instruction job 640f that can be rendered by the pixel rendering apparatus 680.
In the preferred implementation the primitives processor 640b includes a preprocessing module 700, a schematic block diagram of which is shown in Fig. 7B. The pre-processing module 700 comprises an instruction execution module 710; an edge tracking module 725; a priority determination module 735; a fill collation module 745; and an edge data creation module 755 arranged in a pipeline.
The instruction execution module 710 reads and processes instructions from the first set of primitives in store 640d and formats the instructions into messages that are transferred to the other modules within the pre-processing module 700. The edge tracking module 725 is responsible for determining the edges that bound the span of the currently scanned pixels using the active edge list 720 that the edge tracking module 725 maintains, and passes this information on to the priority determination module 735.
Fig. 9A shows a schematic flow diagram of a method 1500, performed by the edge tracking module 725, of processing the edges on a scanline. It is achieved by determining the active edges on each scanline, and from these active edges, determining the objects that contribute to each pixel on the scanline. The method 1500 determines the active edges on any scanline from the main edge list in the first set of primitives in store 640d. The main edge list contains all the edges to be rendered on the page, sorted in ascending order of their starting scanline (y-order), and the active edge list (ActiveEdgeList) 720 is a temporary list of edges that intersect the current scanline.
The method 1500 starts in step 1550 where a variable CurY is set to zero, and an active edge list 720 and a known fill compositing sequence table 740 are set to empty lists. A copy of the main edge list 736 is made for later inclusion in the second set of primitives 640e that will be used to construct the instruction job 640f. Then, in step 1551, the edge tracking module 725 reads an edge from the main edge list in store 640d. The edge tracking module 725 next determines in step 1552 whether all the edges in the main edge list have been processed, or whether the y-value of the currently-read edge, having variable name current_edge.y, is greater than the value stored in the variable CurY. If neither of these conditions is satisfied then the method 1500 proceeds to step 1553 where the edge tracking module 725 removes the current edge from the main edge list. The current edge is also merged into the active edge list. Edges in the active edge list are ordered by ascending x-value; i.e. the order along the scanline. Once the current edge is merged into the active edge list the method 1500 returns to step 1551 where the next edge is read from the main edge list.
If it is determined in step 1552 that either of the conditions is satisfied, then in step 1554 the edge tracking module 725 determines a number N of scanlines to pre-render. If all the edges in the main edge list have been processed then the number N is set to the number of scanlines remaining on the page, i.e. the difference between the page height and the current scanline CurY, as follows:

    N = PageHeight - CurY    (1)

If, however, there are still edges in the main edge list to process, then the number N is set to the number of scanlines between the current scanline CurY and the scanline on which the currently-read edge commences:

    N = current_edge.y - CurY    (2)

Once the number N of scanlines has been determined in step 1554, the active edge list 720 for N scanlines is passed to the priority determination module 735 for processing in step 1555. The processing of the N scanlines in step 1555 is described in more detail with reference to Fig. 9B. The edge tracking module 725 then updates the current scanline CurY in step 1556 using the equation:

    CurY = CurY + N    (3)

Next, in step 1557, the edge tracking module 725 determines whether the updated current scanline CurY is equal to the page height. If so, the method 1500 terminates in step 1558. Alternatively, if it is determined in step 1557 that the current scanline CurY is less than the page height then the method 1500 returns to step 1551 from where the next edge from the main edge list is processed.
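Equations (1) and (2) above can be sketched together as a single hypothetical helper; the name `scanlines_to_prerender` and the use of `None` to signal an exhausted main edge list are assumptions for illustration.

```python
def scanlines_to_prerender(page_height, cur_y, next_edge_y=None):
    """Number N of scanlines to pre-render. If the main edge list is
    exhausted (next_edge_y is None) render to the bottom of the page,
    per equation (1); otherwise render up to the scanline on which
    the next edge commences, per equation (2)."""
    if next_edge_y is None:
        return page_height - cur_y   # N = PageHeight - CurY       (1)
    return next_edge_y - cur_y       # N = current_edge.y - CurY   (2)
```

After processing, equation (3) simply advances the current scanline: CurY = CurY + N.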
Step 1555 of processing a scanline by the priority determination module 735 is now described in more detail with reference to Fig. 9B wherein a schematic flow diagram of step 1555 is shown. In particular, in step 1555 the active edge list 720 created by the edge tracking module 725 is used by the priority determination module 735 to update the set of active level information in the active level list 730. The step 1555 starts in an initialising sub-step 1561 wherein the priority determination module 735 sets a temporary active edge list (TempAEL) to an empty list and also sets the active level list (ActiveLevelList) 730 and a last active level list (LastActiveLevelList) to empty lists. Sub-step 1562 follows where the priority determination module 735 determines whether the active edge list 720 is empty, hence whether all edges in the active edge list 720 have been processed.
If it is determined that the active edge list 720 still contains entries then step 1555 continues to sub-step 1563 where the next edge along the scanline is read from the active edge list 720 and that edge is removed from the active edge list 720. Also, the level or levels pointed to by that edge are activated or deactivated as appropriate. If a level or levels are activated they are added to the active level list 730. Otherwise if the level or levels are deactivated they are removed from the active level list 730.
Then, in sub-step 1564, the active level list 730 is further processed by the fill collation module 745 in order to generate an active fill compositing sequence (ActiveFCS). Sub-step 1564 is described in more detail with reference to Fig. 9C. Sub-step 1565 follows sub-step 1564, where the generated active fill compositing sequence (ActiveFCS) is associated with the current edge on the present scanline by the edge data creation module 755. Sub-step 1565 is described in more detail with reference to Fig. 9D.
In step 1566 the active level list is copied into the last active level list, and the x-position of the edge (Edge.x) and the y-position of the edge (Edge.y) are updated for this edge for the next scanline.
In sub-step 1567 the priority determination module 735 then determines whether the edge expires, or in other words, terminates. If the edge terminates then the step 1555 returns to sub-step 1562. Alternatively, if it is determined in sub-step 1567 that the edge does not terminate then that edge is sorted into the temporary active edge list based on the updated x-position in sub-step 1568 before the step 1555 returns to sub-step 1562.
From sub-step 1562 any subsequent edges on the scanline are processed until it is determined in sub-step 1562 that the active edge list 720 is empty. Step 1555 then proceeds to sub-step 1570 where the temporary active edge list is copied into the active edge list 720. The priority determination module 735 then determines in sub-step 1571 whether more scanlines need to be processed. If not, then the step 1555 returns in sub-step 1572 back to step 1556 in Fig. 9A. Alternatively, if it is determined that more scanlines have to be processed, then the step 1555 returns to sub-step 1561.
Sub-step 1564 of processing the active level list 730 by the fill collation module 745 to calculate the active fill compositing sequence is now described in more detail with reference to Fig. 9C, wherein a schematic flow diagram of sub-step 1564 is shown. Sub-step 1564 starts in sub-step 1580 where a variable Lvl is set to the value NULL. The fill collation module 745 then in sub-step 1581 determines whether the active level list 730 is empty. If the active level list 730 is determined to be empty then, in step 1582, the fill collation module 745 sets the active fill compositing sequence variable ActiveFCS to WhiteBkGnd.
Alternatively, if it is determined in sub-step 1581 that the active level list 730 is not empty, then sub-step 1564 proceeds to sub-step 1584 where the active level list 730 is sorted by descending priority order such that the first level in the active level list 730 corresponds to the object that is closest to the viewer. The number of entries in the active level list is NumLevels. In sub-steps 1585, 1586, 1587 and 1588, the fill collation module 745 determines the highest level (Lvl) in the active level list 730 that is opaque when the objects are viewed from above. When such a level is found, the fill collation module 745 copies that opaque level and any levels above it into the active fill compositing sequence variable in sub-step 1589.
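The search for the topmost opaque level in sub-steps 1584 to 1589 can be sketched in Python as follows. The dictionary representation of a level and the field names are illustrative assumptions, not part of the specification:

```python
def active_fill_compositing_sequence(active_levels):
    """Return the topmost opaque level and all levels above it
    (sub-steps 1584-1589); a one-element WhiteBkGnd list stands in
    for the empty-list case of step 1582."""
    if not active_levels:
        return ["WhiteBkGnd"]
    # Sort by descending priority: first level is closest to the viewer.
    levels = sorted(active_levels, key=lambda l: l["priority"], reverse=True)
    for i, lvl in enumerate(levels):
        if lvl["opaque"]:
            return levels[: i + 1]  # the opaque level and everything above it
    return levels                   # no opaque level: all levels contribute
```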
From sub-step 1582 or 1589 processing continues to sub-step 1590 where the fill collation module 745 determines whether the active fill compositing sequence already exists in the set of known fill compositing sequences 740. If it is determined in sub-step 1590 that the active fill compositing sequence does not exist in the set of known fill compositing sequences 740, then the active fill compositing sequence is added to the set of known fill compositing sequences 740 in sub-step 1591.
A fill compositing sequence is an ordered list of contributing levels. Each contributing level has an associated fill that can be a bitmap, graphic, flat colour or other type of fill. The contributing levels within a fill compositing sequence are composited together by the instruction job generator 640c as part of the creation of the instruction job 640f. The result of compositing together the contributing levels in a fill compositing sequence is an opaque output fill that can be a bitmap, graphic, flat colour or other type of fill.
From sub-step 1591, or if it is determined in sub-step 1590 that the active fill compositing sequence already exists in the set of known fill compositing sequences 740, processing continues to sub-step 1592 where the fill collation module 745 determines the type of fill that will result from compositing the contributing levels in the active fill compositing sequence by examining the types of each fill in the compositing sequence.
The following list of rules describes the fill types resulting from compositing all possible pairs of different fill types.
A flat fill composited with white results in a flat fill;
A flat fill composited with a flat fill results in a flat fill;
A flat fill composited with a two point blend results in a two point blend;
A flat fill composited with a three point blend results in a bitmap at page resolution;
A flat fill composited with a bitmap results in a bitmap (but not at page resolution);
A two point blend composited with white results in a two point blend;
A two point blend composited with a two point blend may result in either a two point blend or a bitmap at page resolution;
A two point blend composited with a three point blend results in a bitmap at page resolution;
A two point blend composited with a bitmap results in a bitmap at page resolution;
A three point blend composited with white results in a bitmap at page resolution;
A three point blend composited with a three point blend results in a bitmap at page resolution;
A three point blend composited with a bitmap results in a bitmap at page resolution;
A bitmap composited with white results in a bitmap (but not at page resolution);
A bitmap composited with a bitmap results in a bitmap at page resolution; and
A page resolution bitmap composited with any other fill results in a page resolution bitmap.
The determination of the fill type of the active fill compositing sequence occurs by application of the above rules to successive pairs of fill types.
If the fill sequence consists of a single fill, and that fill is a bitmap, then in certain variants of the present embodiment the sequence is deemed to represent a bitmap at page resolution.
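The pairwise rules and their application to successive pairs can be sketched in Python as follows. The type names and the dictionary encoding are illustrative assumptions; where the rules permit two outcomes (two point blend over two point blend), the sketch arbitrarily keeps the two point blend:

```python
# Illustrative fill-type constants; these identifiers are not from the text.
FLAT, BLEND2, BLEND3, BITMAP, PAGE_BITMAP, WHITE = (
    "flat", "two_point_blend", "three_point_blend",
    "bitmap", "page_resolution_bitmap", "white")

# Each rule maps an unordered pair of fill types to the resultant type.
RULES = {
    frozenset({FLAT, WHITE}): FLAT,
    frozenset({FLAT}): FLAT,                  # flat composited with flat
    frozenset({FLAT, BLEND2}): BLEND2,
    frozenset({FLAT, BLEND3}): PAGE_BITMAP,
    frozenset({FLAT, BITMAP}): BITMAP,        # not at page resolution
    frozenset({BLEND2, WHITE}): BLEND2,
    frozenset({BLEND2}): BLEND2,              # may also be a page-resolution bitmap
    frozenset({BLEND2, BLEND3}): PAGE_BITMAP,
    frozenset({BLEND2, BITMAP}): PAGE_BITMAP,
    frozenset({BLEND3, WHITE}): PAGE_BITMAP,
    frozenset({BLEND3}): PAGE_BITMAP,
    frozenset({BLEND3, BITMAP}): PAGE_BITMAP,
    frozenset({BITMAP, WHITE}): BITMAP,       # not at page resolution
    frozenset({BITMAP}): PAGE_BITMAP,         # bitmap composited with bitmap
}

def composite_fill_type(a, b):
    """Resultant fill type of compositing one pair of fill types."""
    if PAGE_BITMAP in (a, b):
        return PAGE_BITMAP                    # page-resolution bitmap dominates
    return RULES[frozenset({a, b})]

def sequence_fill_type(fills, single_bitmap_is_page_res=True):
    """Apply the rules to successive pairs of fills; the single-bitmap
    case follows the variant behaviour described in the text."""
    if single_bitmap_is_page_res and fills == [BITMAP]:
        return PAGE_BITMAP
    result = fills[0]
    for fill in fills[1:]:
        result = composite_fill_type(result, fill)
    return result
```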
If it is determined in sub-step 1592 that the resultant fill is a bitmap at page resolution then processing continues to sub-step 1593 where the fill collation module 745 tracks a current bounding box for the active fill compositing sequence, the current bounding box being one of a set of one or more bounding boxes that enclose all those pixels corresponding to that active fill compositing sequence.
When an active fill compositing sequence that will result in a bitmap fill at page resolution is encountered by the fill collation module 745 for the first time, the fill collation module 745 creates a bounding box whose dimensions are one scanline high and at least as wide as the span of pixels following the edge currently being pre-processed. The stored parameters for the bounding box are the top-left corner and the bottom-right corner.
If the same active fill compositing sequence is encountered on the subsequent scanline then the fill collation module 745 calculates new parameters for the existing bounding box for that same active fill compositing sequence. These new parameters have values such that the new bounding box would enclose the pixels corresponding to that same active fill compositing sequence on this scanline and the previous scanline.
At this point, the fill collation module 745 determines whether it would be more efficient to close off the existing bounding box and to start a new bounding box, or to expand the existing bounding box to its new dimensions.
This decision can be based on several factors, one of which may be the calculation of usage metrics. A typical usage metric is calculated as follows: sum the total number of pixels associated with the active fill compositing sequence within the newly calculated dimensions of the existing bounding box; divide this number by the total number of pixels within the newly calculated dimensions of the existing bounding box; if this ratio is lower than a certain threshold, then the existing bounding box can be closed off and a new bounding box created for the span of pixels associated with the active fill compositing sequence on the current scanline.
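The usage metric above can be sketched in Python as follows. The box parameter order, the inclusive coordinates and the 0.25 threshold are illustrative assumptions; the text only refers to "a certain threshold":

```python
def usage_metric(fcs_pixels_in_box, box):
    """Pixels belonging to the active fill compositing sequence inside
    the newly calculated bounding box, divided by the box's total pixel
    count. Box parameters are (top, left, bottom, right), inclusive."""
    top, left, bottom, right = box
    total = (bottom - top + 1) * (right - left + 1)
    return fcs_pixels_in_box / total

def should_close_box(fcs_pixels_in_box, enlarged_box, threshold=0.25):
    """Close off the existing box and start a new one when the usage
    metric falls below the threshold (an assumed value)."""
    return usage_metric(fcs_pixels_in_box, enlarged_box) < threshold
```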
Other metrics may also be used to calculate the scanline on which an existing bounding box should be closed off and a new bounding box begun.
In this manner, when the page is pre-processed, each fill compositing sequence whose corresponding output fill is of bitmap type at page resolution has an associated set of one or more bounding boxes that together enclose all the pixels corresponding to that active fill compositing sequence.
Following the execution of sub-step 1593, or if it is determined in sub-step 1592 that the resultant fill is not a bitmap at page resolution, then sub-step 1564 returns in sub-step 1594 to sub-step 1565 in Fig. 9B.
Note that the creation and tracking of one or more bounding boxes has been described for a fill compositing sequence that results in a bitmap at page resolution when composited. However, in general one or more bounding boxes can be created and tracked for any fill compositing sequence.
Table 5 below shows an example of the format that the known fill compositing sequences 740 may take. The values in the example are for the page illustrated in Fig. 8A. The page illustrated in Fig. 8A contains a number of different fill compositing sequences 1000, 1010, 1020, 1030, 1040 and 1050, with each fill compositing sequence 1000, 1010, 1020, 1030, 1040 and 1050 being illustrated with a different fill pattern.
Also illustrated are bounding boxes 1001, 1002, 1031, 1032, 1041, 1051, and 1052 bounding the pixels of the fill compositing sequences.
Entry | Fill Compositing Sequence | Pointer to bounding boxes | Bounding boxes associated with each Fill Compositing Sequence
0 | 1000 | Non-NULL | 1001, 1002
1 | 1010 | NULL | None
2 | 1020 | NULL | None
3 | 1030 | Non-NULL | 1031, 1032
4 | 1040 | Non-NULL | 1041
5 | 1050 | Non-NULL | 1051, 1052

Table 5: Table of known fill compositing sequences 740

In Table 5, fill compositing sequences 1000, 1030, 1040 and 1050 have one or more bounding boxes associated with them, while fill compositing sequences 1010 and 1020 do not have any associated bounding boxes.
Sub-step 1565 (Fig. 9B) where the generated active fill compositing sequence (ActiveFCS) is associated with the current edge on the present scanline by the edge data creation module 755 is now described in more detail with reference to Fig. 9D where a schematic flow diagram of sub-step 1565 is shown.
In one implementation each edge has an edge behaviour table that contains two attributes. The first attribute (Aggregate Fill Index) is a reference to the active fill compositing sequence entry in the set of known fill compositing sequences 740. The second attribute (Bounding Box) is a reference to a bounding box that encloses the pixels associated with the active fill compositing sequence. Further attributes may be added.
Table 6 below shows an example of a preferred edge behaviour table representation for edge 1090 shown in Fig. 8A in accordance with such an implementation.
Lifetime | Aggregate Fill Index (Fill Compositing Sequence Ref) | Bounding Box (Bounding Box Ref)
Lifetime 1 | 1000 | 1001
Lifetime 2 | 1000 | 1002
Lifetime 3 | 1020 | NULL
Lifetime 4 | 1030 | 1031
Lifetime 5 | 1050 | 1051
Lifetime 6 | 1050 | 1052
Remainder | 1030 | 1032

Table 6: Edge Behaviour Table

Sub-step 1565 starts in sub-step 1520 where the edge data creation module 755 determines whether an edge behaviour table for the edge under consideration exists in the set of edge behaviour tables 760. In the case where such an edge behaviour table does not exist, one is created and added to the set of edge behaviour tables 760 in sub-step 1524 and a variable i is set to -1. In sub-step 1525 that follows sub-step 1524, an entry is added to the edge behaviour table at index i+1 (becoming index 0 for the first entry in this case).
Also in sub-step 1525 the Aggregate Fill Index attribute is set to be a reference to the active fill compositing sequence for this edge within the set of known fill compositing sequences 740 up to the current scanline, as determined in the sub-steps of Fig. 9C.
Finally in sub-step 1525 the lifetime field of the just-added edge behaviour table entry is initialised to 1. Sub-step 1565 then returns in sub-step 1526 to sub-step 1566 in Fig. 9B.
If it is determined in sub-step 1520 that an edge behaviour table for the edge under consideration already exists in the set of edge behaviour tables 760 then, in sub-step 1521, the variable i is set to be the index of the last entry in the edge's edge behaviour table. The edge data creation module 755 then determines in sub-step 1522 whether there has been any change from the previous scanline in the Aggregate Fill Index attribute of this edge. If it is determined that there have been no changes then the lifetime field of this entry is incremented in sub-step 1523 before sub-step 1565 returns in sub-step 1526.
Alternatively, if it is determined in sub-step 1522 that there has been a change from the previous scanline in the attribute of this edge then sub-step 1565 proceeds to sub-step 1525 where a new entry is added to the edge behaviour table that contains as its Aggregate Fill Index attribute a reference to the new fill compositing sequence. The lifetime field of the just-added edge behaviour table entry is also initialised to 1 and the Aggregate Fill Index attribute is set to be a reference to the active fill compositing sequence for this edge within the set of known fill compositing sequences 740 up to the current scanline, before sub-step 1565 returns in sub-step 1526.
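The per-scanline update of an edge behaviour table described in sub-steps 1520 to 1526 can be sketched in Python as follows. The list-of-dictionaries representation and the field names are illustrative assumptions; handling of per-bounding-box entries (as in Table 6) is omitted for brevity:

```python
def update_edge_behaviour(table, aggregate_fill_index, bounding_box_ref=None):
    """Update an edge behaviour table for one scanline: extend the
    lifetime of the last entry if the Aggregate Fill Index is unchanged,
    otherwise append a new entry with lifetime 1."""
    if table and table[-1]["AggregateFillIndex"] == aggregate_fill_index:
        table[-1]["Lifetime"] += 1          # no change: extend the lifetime
    else:
        table.append({"Lifetime": 1,        # new entry for the new sequence
                      "AggregateFillIndex": aggregate_fill_index,
                      "BoundingBox": bounding_box_ref})
    return table
```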
In this manner, when all the edges have been tracked down the page by the preprocessing module 700, the set of known fill compositing sequences 740 contains an entry for each unique combination of levels (and, by reference, the fills associated with these levels) that has been encountered on the page.
When all the edge tracking has been completed and the fill compositing sequences have been collected for a page in the pre-processing module 700, the controlling program 640 further processes this data in a fill grouping module 800. A schematic block diagram of a preferred implementation of the fill grouping module 800 is shown in Fig. 7C, and is an aspect of the primitives processor 640b (Fig. 7A). The fill grouping module 800 comprises a bounding box grouping module 820, an edge association update module 830 and a bitmap generation module 840 arranged in a pipeline.
A schematic flow diagram of a method 1600, performed by the bounding box grouping module 820, is shown in Fig. 10A. The method 1600 starts in step 1501 where the first fill compositing sequence in the set of known fill compositing sequences 740 is retrieved. Also, the active fill compositing sequence variable (ActiveFCS) is set to this first fill compositing sequence.
The bounding box grouping module 820 then determines in step 1502 whether the active fill compositing sequence contains a reference to one or more bounding boxes.
If it is determined that the active fill compositing sequence contains a reference to one or more bounding boxes then processing continues to step 1503 where these bounding boxes are inserted into groups, the grouping being recorded in the grouping table 860. Step 1503 is described in more detail below with reference to Fig. 10B. Following step 1503, or in the case where it is determined in step 1502 that the active fill compositing sequence does not contain a reference to bounding boxes, the bounding box grouping module 820 in step 1504 determines whether the active fill compositing sequence is the last in the set of known fill compositing sequences 740.
If the active fill compositing sequence is not the last in the set of known fill compositing sequences 740 then the active fill compositing sequence variable (ActiveFCS) is set to the next fill compositing sequence in the fill compositing sequence table in step 1505 before processing returns to step 1502.
If the active fill compositing sequence is the last in the set of known fill compositing sequences 740 then, in step 1506, the bounding box grouping module 820 executes an optional extract and regroup process on the groups in the grouping table 860.
This step 1506 is optional because, in the preferred implementation, the bounding box grouping module 820 collects statistics from the grouping table 860 to determine whether the grouping achieved up to this point is sufficient and no further modification of the groups in the grouping table is necessary. The statistics preferably consist of a Boolean variable indicating whether a bounding box within a group intersects any other bounding box(es) within the same group. Step 1506 is described in more detail with reference to Fig. 10D. The method 1600 then continues to step 1507 where groups are optionally merged. This step 1507 is optional because, in the preferred implementation, the bounding box grouping module 820 collects statistics, as described above, from the grouping table 860 to determine whether the grouping achieved up to this point is sufficient and no further modification of the groups in the grouping table is necessary. Step 1507 is described in more detail with reference to Fig. 10E. Following step 1507 the method 1600 proceeds to step 1508, where the bounding box groups are rearranged into further, non-overlapping groups by a cutting procedure. Step 1508 is described in more detail with reference to Fig. 14A. Following step 1508 the method 1600 proceeds to step 1509, where certain bounding box groups are relocated into vacant areas within another bounding box group. Step 1509 is described in more detail with reference to Fig. 14B.
The method 1600 then terminates in step 1510.
Alternative arrangements of the method 1600 may exclude any of steps 1506, 1507, 1508, and 1509. Each of these steps further reduces the redundancy of the page description, at the cost of extra computation by the controlling program 640. As a further alternative, steps 1501 to 1507 may be excluded, so that the method 1600 contains no grouping, but consists only of steps 1508 and 1509. In this alternative, for the purposes of steps 1508 and 1509, each bounding box may be regarded as forming its own group.
Fig. 10B shows a schematic flow diagram of step 1503 where the bounding box grouping module 820 inserts the bounding boxes for a particular active fill compositing sequence into groups. Step 1503 starts in sub-step 1530 where the bounding box grouping module 820 retrieves the first bounding box of the active fill compositing sequence and sets an active bounding box variable (ActiveBB) to this bounding box.
In sub-step 1531 the active bounding box is inserted into a group within the grouping table 860. Sub-step 1531 is described in more detail with reference to Fig. 10C. The bounding box grouping module 820 then determines in sub-step 1532 whether the active bounding box is the last one of the active fill compositing sequence. If it is determined that more bounding boxes remain for processing then, in sub-step 1533, the active bounding box variable is set to be the next bounding box of the active fill compositing sequence. Processing in step 1503 returns to sub-step 1531 where that next bounding box is inserted into a group within the grouping table 860.
If it is determined in sub-step 1532 that the active bounding box is the last one of the active fill compositing sequence, then the step 1503 returns in sub-step 1534 to step 1504 in Fig. 10A. Each group consists of one or more bounding boxes. Each bounding box within a group maintains a reference to the fill compositing sequence from which it originates.
Each group also contains dimension data that define a rectangular area that encloses all the bounding boxes within said group.
Each group that is created is stored in the grouping table 860. The grouping table 860 contains all the groups that are created by the bounding box grouping module 820. Table 7 shows an example of the format of the grouping table 860, containing groups that contain the bounding boxes shown in Table 5.

Group | Bounding Boxes in Group | Parameters of Group
1101 | 1001, 1002 | (Top1, Left1, Bottom1, Right1)
1102 | 1031, 1041, 1051 | (Top2, Left2, Bottom2, Right2)
1103 | 1052 | (Top3, Left3, Bottom3, Right3)
1104 | 1032 | (Top4, Left4, Bottom4, Right4)

Table 7: Grouping Table

Fig. 8B shows the bounds 1101, 1102, 1103 and 1104 of the groups generated for the bounding boxes shown in Fig. 8A. It is noted that this configuration of groups is only an example of one of many possible configurations of groups. The following description details the general approach taken in order to identify an efficient group configuration from any group of input candidate bounding boxes on a page.
Fig. 10C shows a schematic flow diagram of step 1531 (Fig. 10B) where the active bounding box is inserted into a group within the grouping table 860. Step 1531 starts in sub-step 900 where it is determined whether any groups exist in the grouping table 860. If one or more groups exist then, in step 902, a best-fit group from the grouping table 860 is identified which meets a selection criterion. Preferably, the selection criterion is met when, if the active bounding box were to be added to that best-fit group, the dimensions of the best-fit group would require the least enlargement compared to all the other existing groups in the grouping table 860, in order to fully enclose (i) the existing bounding boxes in the best-fit group and (ii) the active bounding box.
Adding a bounding box described by the parameters LeftB, RightB, TopB and BottomB to the bounding box of the group described by the parameters LeftG, RightG, TopG and BottomG results in the group having a bounding box described by the following parameters:

LeftG' = min(LeftG, LeftB)
RightG' = max(RightG, RightB)
TopG' = min(TopG, TopB)
BottomG' = max(BottomG, BottomB)

This best-fit group identification is achieved by performing a test insertion of the active bounding box into each of the groups in the grouping table 860. The group with the resultant least enlargement of either the group or the active bounding box is thereby identified as the best-fit group. Enlargement can be measured in absolute terms (number of pixels) or as a proportion of the original size.
Following the identification of the best-fit group from the groups in the grouping table 860 in step 902, processing continues to sub-step 903 where it is determined whether the active bounding box and the best-fit group fulfil an adding criterion. The adding criterion requires that the area of the best-fit group, following enlargement by the insertion of the active bounding box, be less than a predetermined multiple of (preferably twice) the sum of the area of the bounding box of that best-fit group preceding enlargement and the area of the active bounding box.
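The min/max update, the least-enlargement selection criterion of step 902 and the adding criterion of sub-step 903 can be sketched in Python as follows (function names and the (Left, Right, Top, Bottom) tuple encoding are illustrative assumptions):

```python
def area(box):
    """Area of a box given as (Left, Right, Top, Bottom)."""
    left, right, top, bottom = box
    return (right - left) * (bottom - top)

def union_box(g, b):
    """The min/max update: group bounds after a test insertion."""
    return (min(g[0], b[0]), max(g[1], b[1]), min(g[2], b[2]), max(g[3], b[3]))

def enlargement(group_box, box):
    """Absolute enlargement, in pixels, caused by inserting box."""
    return area(union_box(group_box, box)) - area(group_box)

def best_fit_index(group_boxes, box):
    """Index of the group whose test insertion needs the least
    enlargement (the selection criterion of step 902)."""
    return min(range(len(group_boxes)),
               key=lambda i: enlargement(group_boxes[i], box))

def fulfils_adding_criterion(group_box, box, multiple=2.0):
    """Sub-step 903: the enlarged area must be less than a predetermined
    multiple (preferably twice) of the two original areas combined."""
    return area(union_box(group_box, box)) < multiple * (area(group_box) + area(box))
```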
If it is determined in sub-step 900 that no groups exist in the grouping table 860, or in sub-step 903 that the active bounding box and the best-fit group do not fulfil the adding criterion, then a new (empty) group is created and assigned to be the best-fit group in sub-step 901. The newly created group is also added to the grouping table 860.
If it is determined in sub-step 903 that the active bounding box and the best-fit group do fulfil the adding criterion, or from sub-step 901, processing continues to sub-step 904 where the active bounding box is inserted into the best-fit group. This insertion consists of calculating the parameters of a rectangle that encloses all the bounding boxes in that group, including the active bounding box. The rectangle may be larger than the absolute minimum required to fully enclose the bounding boxes in that group.
In sub-step 905 it is then determined whether the number of bounding boxes in the best-fit group is more than a predetermined threshold number. This threshold number is set to be 50 in the preferred implementation. If there are not more than the threshold number of bounding boxes in the best-fit group then step 1531 returns in sub-step 910 to step 1532 in Fig. 10B. Alternatively, if it is determined in sub-step 905 that there are more than the threshold number of bounding boxes in the best-fit group then, in sub-step 906, two empty groups are created and added to the grouping table 860. In sub-step 907 that follows, a pair of bounding boxes from the best-fit group is identified such that this pair of bounding boxes, if inserted into an empty group, would create a group of maximum dimensions when compared to any other combination of two bounding boxes from the best-fit group.
Then, in sub-step 908, one of these two identified bounding boxes is inserted into one of the two newly created groups (called Group A for the purposes of the following explanation) and the other of the two identified bounding boxes is inserted into the other of the two newly created groups (called Group B for the purposes of the following explanation).
In sub-step 909 the remaining bounding boxes in the best-fit group are inserted into either Group A or Group B based on the selection criterion described earlier. Then the best-fit group is deleted from the grouping table 860.
Following sub-step 909, step 1531 returns in sub-step 910 to step 1532 in Fig. 10B.
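The splitting of an oversized group described in sub-steps 906 to 909 can be sketched in Python as follows. The box encoding (Left, Right, Top, Bottom) and all identifiers are illustrative assumptions:

```python
from itertools import combinations

def split_group(boxes):
    """Split an oversized group: pick the pair of boxes whose combined
    bounding box has maximum dimensions as seeds for two new groups
    (sub-steps 907-908), then distribute the remaining boxes to the
    seed group needing the least enlargement (sub-step 909)."""
    def area(b):
        left, right, top, bottom = b
        return (right - left) * (bottom - top)

    def union(a, b):
        return (min(a[0], b[0]), max(a[1], b[1]),
                min(a[2], b[2]), max(a[3], b[3]))

    seed_a, seed_b = max(combinations(boxes, 2),
                         key=lambda pair: area(union(*pair)))
    group_a, group_b = [seed_a], [seed_b]
    box_a, box_b = seed_a, seed_b
    for box in boxes:
        if box is seed_a or box is seed_b:
            continue
        grow_a = area(union(box_a, box)) - area(box_a)
        grow_b = area(union(box_b, box)) - area(box_b)
        if grow_a <= grow_b:
            group_a.append(box)
            box_a = union(box_a, box)
        else:
            group_b.append(box)
            box_b = union(box_b, box)
    return group_a, group_b
```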
Step 1506 in Fig. 10A is now described in more detail with reference to Fig. 10D, where a schematic flow diagram of step 1506 is shown. Step 1506 starts in sub-step 920 where the first group in the grouping table 860 is retrieved and is copied into the active group variable (ActiveGroup). It is then determined in sub-step 921 whether there are any bounding boxes in the active group that do not overlap with any other of the bounding boxes contained in the active group. This test excludes groups that only contain a single bounding box since that is a trivial case.
If there are one or more bounding boxes in the active group that do not overlap any other of the bounding boxes in the active group then, in sub-step 922, each bounding box that does not overlap any other bounding boxes within the active group is removed from the active group and is inserted into a single group of its own. This new group is then added to the grouping table 860.
From sub-step 922, or if it is determined in sub-step 921 that each bounding box within the active group overlaps at least one other bounding box in the active group, then it is determined in sub-step 923 whether the active group is the last group in the grouping table 860. If the active group is not the last group in the grouping table 860 then in sub-step 924 the active group variable is set to the next group in the grouping table 860.
Processing then returns to sub-step 921.
If it is determined in sub-step 923 that all the groups in the grouping table 860 have been processed then, in sub-step 925, the active group variable (ActiveGroup) is set to be the first group in the grouping table 860. Next, in sub-step 926, it is determined whether the active group contains only a single bounding box. If it is determined that the active group does contain only a single bounding box then, in sub-step 927, the active group is removed from the grouping table 860 and the active bounding box variable (ActiveBB) is set to the single bounding box. Sub-step 928 follows where the active bounding box from the active group is regrouped using an overlap-adding criterion. Sub-step 928 is described in more detail with reference to Fig. 10F.
From sub-step 928, or if it is determined in sub-step 926 that the active group does not contain only a single bounding box then, in sub-step 929, it is determined whether the active group is the last in the grouping table 860. If the active group is not the last bounding box in the grouping table then, in sub-step 930 the active group variable is set to the next group in the grouping table. Processing then returns to sub-step 926.
If it is determined in sub-step 929 that the active group is the last in the grouping table 860 then step 1506 returns in sub-step 931 to step 1507 in Fig. 10A.
In an alternative implementation, if it is determined in sub-step 923 that all the groups in the grouping table 860 have been processed then step 1506 directly proceeds to sub-step 931 where step 1506 returns to step 1507 in Fig. 10A. Fig. 10E shows a schematic flow diagram of step 1507 (Fig. 10A) in more detail.
Step 1507 starts in sub-step 950 where the grouping table 860 is sorted by group area, where the group area is the number of pixels enclosed within the group boundary, from largest to smallest group size. Also in sub-step 950 a temporary grouping table, called the merged grouping table, is created. The merged grouping table is initially empty.
Processing then continues in sub-step 951 where the active group variable (ActiveGroup) is set to be the first group in the grouping table 860. In sub-step 952 the active group is then inserted into the merged grouping table. Step 952 is described in more detail below with reference to Fig. 10G. Sub-step 953 follows where the active group is deleted from the grouping table 860. Next, in sub-step 954, it is determined whether the active group is the last group in the sorted grouping table 860. If more groups remain then processing continues to sub-step 955 where the active group variable is loaded with the next group in the grouping table 860 before step 1507 returns to sub-step 952.
However, if it is determined in sub-step 954 that the active group is the last group in the grouping table 860 then, in sub-step 956, the contents of the merged grouping table are copied into the grouping table 860 (which is empty at this stage). The merged grouping table is then deleted before step 1507 returns in sub-step 957 to step 1508 in Fig. 10A.
Fig. 10F shows a schematic flow diagram of step 928 (Fig. 10D) where the active bounding box from the active group is regrouped using an overlap-adding criterion.
Step 928 starts in sub-step 940 where a best-fit group from the grouping table 860 is identified which, if the single bounding box were to be added to said group, would require the least enlargement of either the group or the single bounding box compared to all the other existing groups in the list of groups, in order to fully enclose the existing bounding boxes in the group and the active bounding box.
It is then determined in sub-step 941 whether the active bounding box and the best-fit group fulfil an adding criterion and an overlap criterion. The adding criterion requires that the area of the bounding box of that best-fit group, following enlargement by the insertion of the active bounding box, be less than a predetermined multiple of (preferably twice) the sum of the area of the bounding box of the best-fit group preceding enlargement and the area of the active bounding box. The overlap criterion requires that the active bounding box overlap with at least one of the existing bounding boxes within the best-fit group.
If it is determined that the active bounding box should not be inserted into the best-fit group because the adding criterion or the overlap criterion mentioned above is not met, then processing continues to sub-step 942 where an empty group is created and assigned to be the best-fit group. This best-fit group is then added to the grouping table 860.
If it is determined in sub-step 941 that the active bounding box and the best-fit group fulfil the adding criterion and the overlap criterion, or from sub-step 942, processing continues to sub-step 943 where the active bounding box is inserted into the best-fit group. This insertion consists of calculating the parameters of a rectangle that encloses all the bounding boxes in that group, including the active bounding box. The rectangle may be larger than the absolute minimum required to fully enclose the bounding boxes in that group.
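The enclosing-rectangle calculation of sub-step 943 can be sketched as follows. The tuple layout (left, top, right, bottom) and the function name are assumptions, and this version computes the exact minimum rectangle, although the text above permits a larger one.

```python
# Illustrative sketch of sub-step 943: after adding a box to a group,
# the group's bounding box is a rectangle enclosing every member box.

def insert_box(group, box):
    """Append box (left, top, right, bottom) to group and return the
    rectangle enclosing all members, including the new box."""
    group.append(box)
    lefts, tops, rights, bottoms = zip(*group)
    return (min(lefts), min(tops), max(rights), max(bottoms))
```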
Step 928 then returns in sub-step 944 to step 929 in Fig. 10D. Fig. 10G shows a schematic flow diagram of step 952 in Fig. 10E where the active group is inserted into the merged grouping table. Step 952 starts in sub-step 960 where it is determined whether there are any groups in the merged grouping table. If no groups exist in the merged grouping table then, in sub-step 963, an empty group is created in the merged grouping table and used as the best-fit group.
However, if it is determined in sub-step 960 that at least one group already exists in the merged grouping table then, in sub-step 961, a best-fit group from the merged grouping table is identified which meets the selection criterion described above with reference to step 902.
This best-fit group identification is achieved by performing a test insertion of the active group into each of the groups in the merged grouping table. It is noted that inserting a first group into a second group is equivalent to inserting all the bounding boxes from the first group into the second group. The group with the resultant least enlargement is thereby identified as the best-fit group.
When the best-fit group has been identified in sub-step 961, processing continues to sub-step 962 where it is determined whether the active group and the best-fit group fulfil an adding criterion. The adding criterion requires that the area of the bounding box of the best-fit group, following enlargement by the insertion of the active group, be less than the sum of the area of the bounding box of the best-fit group preceding enlargement and the area of the bounding box of the active group.
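Sub-steps 961 and 962 can be sketched by test-inserting the active group's bounding box into each candidate and keeping the one that needs the least enlargement; measuring enlargement by the area of the resulting union is one plausible reading, and all names below are illustrative assumptions.

```python
# Illustrative sketch of sub-steps 961-962: best-fit by test insertion,
# followed by the stricter adding criterion (no area multiple here).

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def best_fit(active_box, group_boxes):
    """Return the index of the group whose bounding box grows least when
    active_box is test-inserted, or None if the adding criterion fails.
    Assumes group_boxes is non-empty; boxes are (left, top, right, bottom)."""
    idx = min(range(len(group_boxes)),
              key=lambda i: area(union(group_boxes[i], active_box)))
    best = group_boxes[idx]
    if area(union(best, active_box)) < area(best) + area(active_box):
        return idx
    return None
```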
If it is determined that the active group should not be inserted into the best-fit group because the adding criterion mentioned above is not met (the NO option of step 962), then processing continues to sub-step 963.
If it is determined in sub-step 962 that the active group and the best-fit group fulfil the adding criterion, or from sub-step 963, processing continues to sub-step 964 where the active group is inserted into the best-fit group in the merged grouping table. Step 952 then returns in sub-step 965 to step 953 in Fig. 10E.

Fig. 14A shows a schematic flow diagram of step 1508 in Fig. 10A where the bounding box groups are rearranged into further, non-overlapping groups by a cutting procedure. An illustration of the results of step 1508 may be seen in Figs. 15A and 15B. The illustration shows ungrouped bounding boxes, but the method is equally applicable to bounding box groups. Fig. 15A shows a first fill region 1710 and its bounding box 1701 and a second fill region 1720 and its bounding box 1702. The bounding box 1701 of the first region 1710 overlaps the bounding box 1702 for the second region 1720 in a region 1705. Therefore, their page resolution bitmap fills together contain an amount of redundancy, namely the overlap area 1705. To eliminate this redundancy, the two bounding boxes are replaced by two new, non-overlapping bounding boxes. Fig. 15B also shows the first region 1710 and the second region 1720. Bounding box 1701 of the first region and bounding box 1702 of the second region have been replaced by a bounding box 1801 which encloses only a portion of the first region 1710, and a bounding box 1802 which encloses the remaining portion of the first region 1710 and all of the second region 1720. The two new bounding boxes enclose all the pixels that will appear on the page.
Step 1508 starts in sub-step 1900 where a temporary grouping table, called the cut grouping table, is created. The cut grouping table is initially empty. Processing then continues in sub-step 1901 which determines whether there are any groups remaining in the grouping table. If any groups remain, execution proceeds to sub-step 1902 where the active group variable (ActiveGroup) is set to be the first group in the grouping table 860.
In the following sub-step 1903, the grouping table is searched to identify a group that overlaps the active group. If such a group can be identified, the overlapping group is retrieved and assigned in sub-step 1904 to the variable OverlapGroup. Next, in sub-step 1905, the 'important' scanlines for the pair formed by the active group and the overlap group are identified.
'Important' scanlines are those at which the outer horizontal bounds of the region formed by the union of the active group and the overlap group undergo a change, including changes to and from "no bounds" at the top and bottom of the union region. In the example of Figs. 15A and 15B, the important scanlines are the top scanline of bounding box 1701, the top scanline of bounding box 1702, and the bottom scanline of bounding box 1702. Note that the bottom scanline of bounding box 1701 is not an 'important' scanline as the outer horizontal bounds of the union region of bounding box 1701 and bounding box 1702 do not change at that scanline.
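One way to compute the 'important' scanlines for a pair of boxes is sketched below, using half-open [top, bottom) vertical extents; the representation and names are assumptions, not the specification's implementation.

```python
# Illustrative sketch of sub-step 1905 for two boxes (left, top, right,
# bottom): 'important' scanlines are those where the outer horizontal
# bounds of the union region change, including to/from "no bounds".

def important_scanlines(a, b):
    edges = sorted({a[1], a[3], b[1], b[3]})

    def bounds(y):
        # Outer horizontal bounds of the union at scanline y, or None.
        spans = [(r[0], r[2]) for r in (a, b) if r[1] <= y < r[3]]
        if not spans:
            return None
        return (min(l for l, _ in spans), max(r for _, r in spans))

    result, prev = [], None
    for y in edges:
        cur = bounds(y)
        if cur != prev:          # bounds change here: important scanline
            result.append(y)
        prev = cur
    return result
```

Run on boxes shaped like the Figs. 15A and 15B example, this reports the top of each box and the bottom of the lower box, but not the bottom of the upper box, matching the note above.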
At the next sub-step 1906, two or more new groups are created in the grouping table 860 from the active group and the overlap group by cutting the union region at the important scanlines. Between each consecutive pair of cuts, and horizontally bounded by the horizontal bounds of the union region between the pair, a new bounding box group is created. The contents of each new group are the bounding boxes, or bounding box portions, from the active group or the overlap group that lie within the union region between the pair of important scanlines. Each time a bounding box is cut into two portions, one portion on either side of an important scanline, the corresponding edge behaviour table entry for the corresponding edge also needs to be split into two entries at the important scanline.
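The cutting of sub-step 1906 can then be sketched as clipping every bounding box to the band between each consecutive pair of cuts; the band representation and names are assumptions, and the splitting of the corresponding edge behaviour table entries is omitted.

```python
# Illustrative sketch of sub-step 1906: each band between consecutive
# important scanlines becomes a new group holding the box portions
# (clipped vertically) that lie within that band.

def cut_into_bands(boxes, cuts):
    groups = []
    for top, bottom in zip(cuts, cuts[1:]):
        band = []
        for l, t, r, b in boxes:
            # Clip the box to the band; keep it if anything remains.
            ct, cb = max(t, top), min(b, bottom)
            if ct < cb:
                band.append((l, ct, r, cb))
        if band:
            groups.append(band)
    return groups
```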
Following sub-step 1906, in sub-step 1907 the active group and the overlap group are deleted from the grouping table 860. Step 1508 then returns to sub-step 1901.
If it is determined in sub-step 1903 that no group in the grouping table overlaps the active group, step 1508 proceeds to sub-step 1908 in which the active group is inserted into the cut grouping table, followed by sub-step 1909 in which the active group is deleted from the grouping table. Execution then returns to sub-step 1901.
If it is determined in sub-step 1901 that no groups remain in the grouping table 860 then, in sub-step 1910, the contents of the cut grouping table, which are by definition a set of non-overlapping groups, are copied into the (now empty) grouping table 860. The cut grouping table is then deleted before step 1508 returns in sub-step 1911 to step 1509 in Fig. 10A.

Fig. 14B shows a schematic flow diagram of step 1509 in Fig. 10A where bounding box groups are relocated to vacant areas of other bounding box groups. An illustration of the results of step 1509 may be seen in Figs. 16A and 16B. The illustration shows ungrouped bounding boxes, but the method is equally applicable to bounding box groups. Fig. 16A shows two bitmaps at page resolution, 2000 and 2010, bounded by bounding boxes 2005 and 2015 respectively. There is a large vacant area within bounding box 2015 that is not occupied by pixels from the bitmap 2010. Fig. 16B shows the same two bitmaps 2000 and 2010, with bounding box 2005 relocated so that bitmap 2000 lies within the vacant area of bounding box 2015. The amount of redundancy between bounding boxes 2005 and 2015 has clearly been reduced.
Step 1509 starts in sub-step 1912 where the active group variable (ActiveGroup) is set to be the largest group in the grouping table 860. In sub-step 1913, the grouping table is searched to identify an untested group whose bounding box is smaller in area than that of the active group and has not yet been relocated. Sub-step 1914 tests whether that group can be relocated within the active group without their respective filled regions overlapping. If sub-step 1914 returns in the negative, step 1509 proceeds to sub-step 1915, where step 1509 checks whether there is any untested group whose bounding box is smaller in area than that of the active group and has not yet been relocated. If so, execution returns to sub-step 1913. Otherwise, execution proceeds to sub-step 1918.
If sub-step 1914 returns in the affirmative, sub-step 1916 relocates the identified group within the vacant area of the active group. This is implemented by modifying a pointer associated with the relocated group so that the page resolution bitmap fills of the relocated group, once they are generated as described below, reside within the memory space of the bounding box of the active group.
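The fit test of sub-step 1914 could be sketched as a brute-force search for an offset at which the two filled regions never collide. The 1-bit mask representation and all names are assumptions; a real implementation would be far more selective than this exhaustive scan.

```python
# Illustrative sketch of the sub-step 1914 test: can the smaller group's
# filled region be placed inside the larger group's bounding box without
# overlapping the larger group's filled pixels?

def can_relocate(large_mask, small_mask):
    """Masks are lists of rows of 0/1. Return a (dy, dx) offset placing
    small_mask inside large_mask with no filled-pixel collision, or None
    if no such placement exists."""
    H, W = len(large_mask), len(large_mask[0])
    h, w = len(small_mask), len(small_mask[0])
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            if all(not (small_mask[y][x] and large_mask[y + dy][x + dx])
                   for y in range(h) for x in range(w)):
                return (dy, dx)
    return None
```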
Next, in sub-step 1918, it is determined whether any groups remain in the grouping table 860 that are smaller than ActiveGroup and as yet unrelocated. If more groups remain, then processing continues to sub-step 1920 where the active group variable is loaded with the next unrelocated group in the grouping table 860 smaller than ActiveGroup before step 1509 returns to sub-step 1913. However, if it is determined in sub-step 1918 that no groups remain in the grouping table 860 then step 1509 returns in sub-step 1922 to step 1510 in Fig. 10A.

Once a bounding box of a fill compositing sequence has been finally allocated to a group, a reference to that group is added to the bounding box in the table of known fill compositing sequences 740. The table of known fill compositing sequences 740 (as shown in Table 5 above) has the added group reference shown in Table 8 in parentheses after each bounding box.
Entry  Fill Compositing Sequence  Pointer to bounding boxes  Bounding boxes (with group references)
0      1000                       Non-NULL                   1001 (1101), 1002 (1101)
1      1010                       NULL                       None
2      1020                       NULL                       None
3      1030                       Non-NULL                   1031 (1102), 1032 (1104)
4      1040                       Non-NULL                   1041 (1102)
5      1050                       Non-NULL                   1051 (1102), 1052 (1103)

Table 8: Table of known fill compositing sequences with group references added

Upon completion of method 1600 (Fig. 10A) by the bounding box grouping module 820, the edge association update module 830 executes method 1800, a schematic flow diagram of which is shown in Fig. 11. The purpose of method 1800 is to patch the edge behaviour tables of each edge so that any entry having as an attribute a bounding box that has been subsumed into a group is changed to point to that group.
The method 1800 starts in step 1511 where the edge association update module 830 sequentially retrieves an edge behaviour table (EBT) from the set of edge behaviour tables 760.
The edge association update module 830 then determines in step 1512 whether all the edge behaviour tables have been processed. If all the edge behaviour tables have been processed, then the method 1800 ends in step 1519.
If it is determined in step 1512 that not all edge behaviour tables in the set of edge behaviour tables 760 have been processed, then each entry in the retrieved edge behaviour table is processed in turn.
Step 1513 determines whether all the entries in the current edge behaviour table have been processed. If all the entries in the current edge behaviour table have been processed then the method 1800 returns to step 1511 to retrieve the next edge behaviour table for processing.
If it is determined in step 1513 that some entries remain in the current edge behaviour table to be processed, then the edge association update module 830 determines in step 1514 whether the Bounding Box attribute in the current entry of the edge behaviour table points to a bounding box. If the Bounding Box attribute in the current entry of the edge behaviour table does not point to a bounding box, then the next entry in the edge behaviour table is retrieved in step 1518 before method 1800 returns to step 1513.
Alternatively, if it is determined in step 1514 that the Bounding Box attribute in the current entry of the edge behaviour table does point to a bounding box, then the edge association update module 830 in step 1517 uses the bounding box referenced in the edge behaviour table's current entry to look up the group into which that bounding box has been inserted using the grouping table 860, and replaces the Bounding Box attribute of the edge behaviour table entry with that group reference. This completes the patching of the current entry of the edge behaviour table.
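The per-entry patching of step 1517 amounts to a table lookup, sketched below with an assumed dictionary shape for the entries and the grouping table; these structures are illustrative, not the specification's own.

```python
# Illustrative sketch of step 1517: replace a bounding box reference in
# an edge behaviour table entry with the group that absorbed the box.

def patch_entries(entries, box_to_group):
    """entries: list of dicts with an optional 'bounding_box' key.
    box_to_group: mapping from bounding box id to group id."""
    for entry in entries:
        box = entry.get('bounding_box')
        if box is not None and box in box_to_group:
            entry['bounding_box'] = box_to_group[box]  # now a group reference
    return entries
```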
The method 1800 then continues to step 1518 where the next entry in the edge behaviour table is retrieved before the method 1800 returns to step 1513.
When all the groups have been created, the bitmap generation module 840 further processes each group in the grouping table 860. This processing takes the form of generating a bitmap at page resolution for each group by compositing the corresponding fill compositing sequence within each bounding box making up the group. The page resolution bitmap then contains all the pixels within the group bounding box that encloses the bounding boxes that form the group. All the page resolution bitmaps thus created are stored in the table of page resolution bitmaps 850 referenced by the fill compositing sequence table 740. Each entry in the table of page resolution bitmaps 850 is associated with a matrix representing the printed-page-to-bitmap affine transformation, as described above. Because each entry is already at page resolution, no rotation or scaling is needed, so the transformation is equivalent to a simple translation by the negative of the top-left corner of the group bounding box. The table of page resolution bitmaps 850 is then added to the second set of primitives 640e.
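Since each group bitmap is already at page resolution, the accompanying transform reduces to a translation by the negative of the group bounding box's top-left corner, sketched below with illustrative names.

```python
# Illustrative sketch of the page-to-bitmap transform described above:
# no rotation or scaling, only a translation.

def page_to_bitmap(page_x, page_y, group_top_left):
    """Map a page coordinate into the group's bitmap coordinate space."""
    gx, gy = group_top_left
    return (page_x - gx, page_y - gy)
```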
It is then the role of the instruction job generator 640c shown in Fig. 7A to generate the instruction job 640f from the second set of primitives 640e in Fig. 7A. The instruction job 640f thus created contains all the edges on the page, the associated edge behaviour tables and a table of fills as referenced by the edge behaviour table entries. The table of fills is derived from the table 740 of known fill compositing sequences as created by the primitives processor 640b and the table of page resolution bitmaps 850 referenced by the fill compositing sequences. In the job generation step the known fill compositing sequences which have resulted in bitmaps at page resolution are not included in the table of fills to be generated. Rather, the generated bitmaps at page resolution are included in the table of fills. Also, the edge behaviour table for each edge within the display list is updated as follows. Each edge behaviour table entry that references a bounding box group has its reference to a fill compositing sequence replaced with a reference to a bitmap fill from the table of page resolution bitmaps. The referenced bitmap fill is the bitmap fill which was derived from the bounding box group also referenced by the same edge behaviour table entry.
The instruction job 640f is transferred to the controller processor 670 of the printer system 660. The controller processor 670 initialises the pixel rendering apparatus 680, supplies the starting location of the instruction job 640f, and instructs the pixel rendering apparatus 680 to render the job.
The resulting instruction job 640f contains significantly less redundancy than would have been the case if the page resolution bitmaps had not been grouped into groups. Pixels in the page resolution bitmaps that do not contribute to the printed output are substantially fewer in number.
Fig. 12 shows a schematic flow diagram of a method 1400, performed by the pixel rendering apparatus 680, of rendering a page described by the instruction job 640f created by the instruction job generator 640c within the controlling program 640.
Fig. 13 shows a more detailed schematic block diagram of the pixel rendering apparatus 680 shown in Fig. 6. The pixel rendering apparatus 680 comprises an instruction execution module 1310; an edge tracking module 1320; an edge behaviour table decoding module 1330; a pixel generation module 1340; and a pixel output module 1350, arranged in a pipeline.
The instruction execution module 1310 reads and processes instructions from the instruction job and formats the instructions into messages that are transferred to the other modules 1320 to 1350 within the pipeline. The edge tracking module 1320 is responsible for determining the edges that bound the span of the currently scanned pixel using an active edge list 1370 that it maintains, and passes this information onto the edge behaviour table decoding module 1330. The edge behaviour table decoding module 1330 is pre-loaded with the edge behaviour tables 1375 from the instruction job. The edge behaviour table decoding module 1330 uses the corresponding edge behaviour table to determine whether or not the edge is hidden on a scanline and, if not, what fill table entry is required to generate the pixels on that scanline following each edge.
The pixel generation module 1340 is pre-loaded with a fill table 1380 from the instruction job 640f. The pixel generation module 1340, upon receiving a message from the edge behaviour table decoding module 1330, is responsible for retrieving the data in the appropriate aggregate fill table entry, and using that data to generate a colour for inclusion in the message forwarded to the pixel output module 1350, from which it is subsequently passed to the printer engine 695.
Referring again to Fig. 12, the method 1400 starts at step 1402 where the edge tracking module 1320 initialises the active edge list 1370 and the hidden edge list to be null-terminated empty lists, and sets the current scanline to the first scanline on the page.
The edge tracking module 1320 then determines in step 1404 whether the current scanline is beyond the end of the page. If it is determined that the current scanline is in fact beyond the end of the page, then the method 1400 terminates at step 1405.
Alternatively, if it is determined that the current scanline has not yet passed the end of the page then, in step 1407, the edge tracking module 1320 sets the last active fill to the aggregate fill corresponding to the background colour (typically white).
Following step 1407, in step 1409 the edge tracking module 1320 determines whether the hidden edge list is empty. If the hidden edge list is empty then the edge tracking module 1320 in step 1430 inserts any new edges (ie. those starting on the current scanline) into the active edge list 1370. The edge tracking module 1320 then determines whether the active edge list 1370 is empty in step 1432. If the active edge list 1370 is empty then, in step 1436, the pixel generation module 1340 generates pixels for the entire current scanline using the last active fill. The current scanline is incremented in step 1437 before execution in method 1400 returns to step 1404.
Referring again to step 1432, if the active edge list 1370 is determined not to be empty, then the edge tracking module 1320 at step 1434 loads the first edge in the active edge list 1370 into a variable this_edge. Execution proceeds to step 1439 where the pixel generation module 1340 renders pixels from the left edge of the page to the x-position of the edge loaded into the variable this_edge using the last active fill. Step 1440 follows where the edge tracking module 1320 determines whether the value of the variable this_edge is NULL, ie. whether the end of the active edge list 1370 has been reached. If it is determined in step 1440 that the value of the variable this_edge is not NULL, the edge behaviour table decoding module 1330 in step 1442 then uses the current scanline to locate the active entry in this_edge's edge behaviour table in the edge behaviour tables 1375 by comparing the current scanline with the lifetime field of each entry. Once the active entry has been located, the edge behaviour table decoding module 1330 retrieves the "hidden" attribute of the active entry in step 1444. The method 1400 then proceeds to step 1446 where the edge behaviour table decoding module 1330 determines whether the "hidden" attribute is true, indicating that the edge loaded into the variable this_edge is hidden on the current scanline. If the "hidden" attribute is determined to be true in step 1446, the edge behaviour table decoding module 1330 then determines in step 1477 whether the currently active entry in the edge's edge behaviour table has lifetime data equal to "for the remainder of the edge". If the current active entry in the edge's edge behaviour table does not have such a lifetime data entry, the edge tracking module 1320 then in step 1480 inserts the edge in the variable this_edge into the hidden edge list.
If it is determined in step 1477 that the current active entry in the edge's edge behaviour table does have a lifetime data entry equal to "for the remainder of the edge", or following step 1480, the edge tracking module 1320 in step 1482 removes the variable this_edge from the active edge list 1370.
If it was determined in step 1446 that the edge in the variable this_edge is not hidden, then the edge behaviour table decoding module 1330 in step 1448 retrieves the "aggregate fill index" attribute of the active entry for variable this_edge from the edge's edge behaviour table, and in step 1450 sets the last active fill to this value.
Following either step 1450 or step 1482, the pixel generation module 1340 in step 1484 determines whether the edge in the variable this_edge is the last non-NULL edge in the active edge list 1370, ie. whether the next edge in the active edge list 1370 is NULL. If it is determined that the next edge in the active edge list 1370 is not NULL, the pixel generation module 1340 in step 1486 renders pixels for the span between the x-position of the edge in the variable this_edge and the x-position of the next edge in the active edge list 1370 using the entry in the fill table 1380 indicated by the last active fill.
Alternatively, if it is determined in step 1484 that the next edge in the active edge list 1370 is in fact NULL, then the pixel generation module 1340 in step 1488 renders pixels for the span between the x-position of the edge in the variable this_edge and the right hand edge of the page using the entry in the fill table 1380 indicated by the last active fill.
The amount of compositing required at steps 1486 and 1488 depends on the operation performed by the primitives processor 640b when the primitives processor 640b created the indicated fill table entry. If the primitives processor 640b performed the compositing, which was the case at least for the page resolution bitmap fills, the required fill is simply copied from the indicated fill table entry by the pixel generation module 1340. Otherwise, the pixel generation module 1340 performs the compositing from the fills and operations contained in the indicated fill table entry. Note that the page resolution bitmap fills are treated like any other bitmap fills in that the accompanying transformation, which happens to be a simple translation, is applied to obtain the generated pixel.
Following either step 1486 or 1488, in step 1490 the edge tracking module 1320 sets the variable this_edge to the next edge in the active edge list 1370, and execution returns to step 1440.
If it is determined in step 1440 that the value of the variable this_edge is NULL, indicating the end of the active edge list 1370 has been reached, then method 1400 proceeds to step 1455 where the edge tracking module 1320 sets the variable this_edge to the first entry in the active edge list 1370. In step 1457 that follows, the edge tracking module 1320 determines whether the value of the variable this_edge is NULL, ie. whether the end of the active edge list 1370 has been reached. If it is determined that the end of the active edge list 1370 has not been reached, the edge tracking module 1320 then in step 1459 determines whether the edge in the variable this_edge terminates on the current scanline. If it is determined that the edge in the variable this_edge does terminate on the current scanline, then in step 1470 the edge tracking module 1320 removes this_edge from the active edge list 1370. In the case where it is determined in step 1459 that the edge in the variable this_edge does not terminate on the current scanline, the edge tracking module 1320 updates the x-position of the variable this_edge for the next scanline in step 1463.
Following either step 1463 or step 1470, in step 1472 the edge tracking module 1320 sets the variable this_edge to the next entry in the active edge list 1370, and the method 1400 returns control to step 1457.
If it is determined in step 1457 that the value of the variable this_edge is NULL, indicating that the end of the active edge list 1370 has been reached, the edge tracking module 1320 in step 1466 sorts the active edge list based on the x-position of each active edge on the next scanline. Execution of the method 1400 then returns to step 1404 via step 1437.
Referring again to step 1409, if the edge tracking module 1320 determines that the hidden edge list is not empty, then in step 1412 the edge tracking module 1320 sets the variable this_edge to the first edge in the hidden edge list. In the following step 1415, the edge tracking module 1320 determines whether the value of the variable this_edge is NULL, indicating that the end of the hidden edge list has been reached. If the end of the hidden edge list has been reached, then the method 1400 continues to step 1430. If it is determined in step 1415 that the value of the variable this_edge is not NULL, that is, the end of the hidden edge list has not been reached, the edge behaviour table decoding module 1330 in step 1417 uses the current scanline to locate the active entry in this_edge's edge behaviour table by comparing the current scanline with the lifetime field of each entry. Once the active entry has been located, the edge behaviour table decoding module 1330 retrieves the "hidden" attribute of the active entry in step 1420.
Execution proceeds to step 1422 where the edge behaviour table decoding module 1330 determines whether the "hidden" attribute is true, indicating that the edge in the variable this_edge is still hidden on the current scanline. If the edge in the variable this_edge is no longer hidden, the edge tracking module 1320 in step 1424 calculates the x-position on the current scanline of the edge in the variable this_edge. Also, the edge tracking module 1320 in step 1426 inserts the edge in the variable this_edge into the active edge list 1370 based on its calculated x-position, and removes the edge in the variable this_edge from the hidden edge list. If it is determined in step 1422 that the edge in the variable this_edge is still hidden, or following step 1426, the edge behaviour table decoding module 1330 then sets the variable this_edge to the next entry in the hidden edge list before the method 1400 returns to step 1415.
The effect of steps 1477 to 1482 and 1417 to 1426 is that an edge that starts as, or becomes, hidden is discarded unless it is known to become un-hidden at a later scanline. In the latter case, the edge is added to the hidden edge list, where its x-position is not tracked between scanlines and the edge cannot affect the last active fill variable. Once the edge becomes un-hidden, the edge rejoins the active edge list in the correct location according to its newly updated x-position. This saves both tracking and sorting computation on the active edge list 1370 by restricting these operations to non-hidden edges on any scanline.
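The span-filling idea behind steps 1434 to 1488 can be reduced to a highly simplified sketch: spans between consecutive active edges take the last active fill, with hidden-edge handling and the edge behaviour tables omitted. All names below are assumptions, not the specification's implementation.

```python
# Highly simplified sketch of one scanline of method 1400.

def render_scanline(active_edges, page_width, background):
    """active_edges: list of (x, fill) sorted by x-position.
    Returns page_width fill values for one scanline."""
    out = []
    last_fill, x0 = background, 0
    for x, fill in active_edges:
        out.extend([last_fill] * (x - x0))  # span up to this edge
        last_fill, x0 = fill, x             # edge activates a new fill
    out.extend([last_fill] * (page_width - x0))  # span to the right edge
    return out
```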
Referring again to system 600 (Fig. 6), by describing the operation of the pixel rendering system 300 in detail, computer programs are also implicitly disclosed, in that it would be apparent to the person skilled in the art that the individual steps described herein are to be put into effect by computer code. The computer programs are not intended to be limited to any particular control flow.
Such computer programs may be stored on any computer readable medium. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a processor.
The computer programs when loaded and executed on such processors results in the respective component parts of system 600 described herein.
It is also noted that the foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.
Claims (27)
1. A method of adding a region to one of a plurality of groups of further regions, each region comprising one or more pixels, said method comprising the steps of:
computing a bounding box for said region;
computing a bounding box for each group of further regions;
selecting the group from said plurality of groups of further regions whose bounding box and the bounding box for said region would meet a selection criterion if said region were to be added to said group; and
adding said region to the selected group if the addition of said region to said selected group satisfies an adding criterion.
2. The method according to claim 1, wherein said region comprises a bitmap image.
3. The method according to claim 1 or 2, wherein said selection criterion selects the group which, if said region was to be added to said group, would result in the least enlargement of either said bounding box of said selected group or said bounding box of said region.
4. The method according to any one of claims 1 to 3, wherein said adding criterion is satisfied if the area of the bounding box of said selected group after the addition of said region is less than a predetermined multiple of the sum of the area of the bounding box of said selected group before the addition and the area of said bounding box for said region.
5. A method according to any one of claims 1 to 4, comprising the further steps of:
determining whether the selected group contains more than a predetermined number of regions, and if so:
identifying the most widely separated pair of regions from said group;
forming a first subgroup containing one region from said pair and a second subgroup containing the other region from said pair; and
adding the remaining regions from said group to either said first subgroup or said second subgroup, dependent on said selection criterion.
6. A method according to claim 5, wherein said identifying step comprises the sub-steps of:
computing, for each pair of regions in said group, the dimensions of a bounding box enclosing both regions of the pair; and
selecting the pair of regions yielding the largest bounding box dimensions.
7. A method according to claim 5 or 6, wherein said adding the remaining regions step comprises the sub-steps, for each of said remaining regions, of: selecting the subgroup whose bounding box and the bounding box for said remaining region would meet said selection criterion if said remaining region were to be added to said subgroup; and adding said remaining region to the selected subgroup.
8. A method according to any one of claims 1 to 7, further comprising the step of re-grouping a set of grouped regions, said step of re-grouping comprising the sub-steps of, for each group of regions in said set of grouped regions: selecting a further group from said set of grouped regions whose bounding box and the bounding box of said group under consideration would meet said selection criterion if said group under consideration were to be added to said further group; and adding said group under consideration to the selected further group if the addition of said group under consideration satisfies a further adding criterion.
9. The method according to claim 8, wherein said further adding criterion is satisfied if the area of the bounding box of said selected further group after the addition of said group under consideration is less than a second predetermined multiple of the sum of the area of the bounding box of said selected further group before the addition and the area of the bounding box for said group under consideration.

10. A method of re-grouping a set of grouped regions, each region comprising one or more pixels, each group of regions in said set of grouped regions having a bounding box wholly enclosing all the regions in said group of regions, said method comprising the steps of: for each group of regions, removing any region from said group of regions whose bounding box does not overlap the bounding box of at least one other region in said group of regions; and for each removed region: selecting a group from said grouped regions whose bounding box and the bounding box of said removed region would meet a selection criterion if the removed region were to be added to said group; and adding said removed region to the selected group if the addition of said removed region satisfies an adding criterion.
11. A method according to claim 10, wherein said adding criterion is satisfied if: the area of the bounding box of said selected group after the addition of said removed region is less than a predetermined multiple of the sum of the area of the bounding box of said selected group and the area of said bounding box of said removed region; and said bounding box of said removed region overlaps the bounding box of at least one region in said selected group.
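The overlap test that drives the re-grouping of claims 10 and 11 reduces to a standard axis-aligned box intersection check, sketched below. Boxes are assumed to be `(x0, y0, x1, y1)` tuples and the function names are illustrative, not taken from the specification.

```python
def overlaps(a, b):
    """True if bounding boxes a and b (x0, y0, x1, y1) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def isolated(region, group):
    """Claim 10's removal test: a region is isolated if its bounding box
    overlaps no other region's bounding box in its group."""
    return not any(overlaps(region, other) for other in group if other is not region)
```

Isolated regions are then re-offered to the remaining groups under the selection and adding criteria, so a stray region ends up with neighbours it actually touches rather than inflating a distant group's box.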
12. A method of generating a page description from a list of graphical objects, each graphical object comprising one or more edges and at least one fill, said page description representing the visual appearance of said graphical objects when rendered to a page, said method comprising the steps of: identifying regions of said page requiring representation as bitmaps at page resolution; computing a bounding box enclosing each said identified region; generating a page resolution bitmap within each said bounding box representing the visual appearance of the enclosed identified region; and generating said page description utilising said page resolution bitmaps.
13. The method according to claim 12, wherein each bounding box corresponding to one of said identified regions satisfies a usage criterion, said usage criterion being satisfied if the ratio of the size of the intersection of said identified region and said bounding box to the size of said bounding box exceeds a predetermined threshold.
14. The method according to claim 12 or 13, wherein each region requiring representation as bitmaps at page resolution is identified by determining whether the fills of the objects contributing to said region include at least one of the following combinations: a flat colour and a three-point blend; a two-point blend and a three-point blend; a two-point blend and a bitmap; a two-point blend and a two-point blend; a three-point blend and a further three-point blend; a three-point blend and a bitmap; a bitmap and a further bitmap; and a page resolution bitmap and any other fill.

15. The method according to any one of claims 12 to 14, further comprising the step, before said step of generating said page resolution bitmaps, of grouping said bounding boxes into one or more groups.
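The fill-pair test of claim 14 can be expressed as a lookup over unordered pairs of fill types. This is a sketch only: the fill-type strings (`"flat"`, `"blend2"`, `"blend3"`, `"bitmap"`, `"page_bitmap"`) and function name are assumptions, not identifiers from the specification.

```python
# Combinations of contributing fill types that force the region to be
# flattened to a page resolution bitmap (per claim 14).
BITMAP_COMBINATIONS = {
    frozenset({"flat", "blend3"}),
    frozenset({"blend2", "blend3"}),
    frozenset({"blend2", "bitmap"}),
    frozenset({"blend2"}),            # two-point blend over two-point blend
    frozenset({"blend3"}),            # three-point blend over three-point blend
    frozenset({"blend3", "bitmap"}),
    frozenset({"bitmap"}),            # bitmap over bitmap
}

def needs_page_resolution_bitmap(fill_a, fill_b):
    """True if a region to which fills fill_a and fill_b both contribute
    must be represented as a bitmap at page resolution."""
    if "page_bitmap" in (fill_a, fill_b):   # page resolution bitmap with any fill
        return True
    return frozenset({fill_a, fill_b}) in BITMAP_COMBINATIONS
```

Using `frozenset` makes the test order-independent and lets the degenerate same-type pairs (blend over blend, bitmap over bitmap) fall out of a single-element set.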
16. The method according to claim 15, wherein said grouping comprises the sub-steps, for each said bounding box, of: selecting the group or other bounding box whose bounding box and the bounding box under consideration would meet a selection criterion if said bounding box under consideration were to be added to said group or other bounding box; and adding said bounding box under consideration to the selected group or other bounding box if the addition of said bounding box to the selected group or other bounding box satisfies an adding criterion.
17. The method according to claim 16, wherein said selection criterion selects the group or other bounding box which, if said bounding box under consideration were to be added to said group or other bounding box, would result in the least enlargement of either said bounding box under consideration, or the bounding box of said selected group or other bounding box.
18. A method according to claim 16 or 17, comprising the further steps of, when selecting a group in said selection step: determining whether the selected group contains more than a predetermined number of regions, and if so: identifying the most widely separated pair of regions from said group; forming a first subgroup containing one region from said pair and a second subgroup containing the other region from said pair; adding the remaining regions from said group to either said first subgroup or said second subgroup, dependent on said selection criterion.
19. A method according to claim 18, wherein said identifying step comprises the sub-steps of: computing, for each of said pair of regions, the dimensions of a bounding box enclosing each region of said pair of regions; and selecting the pair of regions yielding the largest bounding box dimensions.

20. A method according to claim 18 or 19, wherein said adding the remaining regions step comprises the sub-steps, for each of said remaining regions, of: selecting the subgroup whose bounding box and the bounding box for said remaining region would meet said selection criterion if said remaining region were to be added to said subgroup; and adding said remaining region to the selected subgroup.
21. A method according to any one of claims 15 to 20, further comprising the step of re-grouping a set of grouped regions, each group of regions in said set of grouped regions having a bounding box wholly enclosing all the regions in said group of regions, said step of re-grouping comprising the sub-steps of, for each group of regions in said set of grouped regions: selecting a further group from said set of grouped regions whose bounding box and the bounding box of said group under consideration would meet a selection criterion if said group under consideration were to be added to said further group; and adding said group under consideration to the selected further group if the addition of said group under consideration satisfies an adding criterion.
22. The method according to claim 21, wherein said adding criterion is satisfied if the area of the bounding box of said selected further group after the addition of said group under consideration is less than a predetermined multiple of the sum of the area of the bounding box of said selected further group before the addition and the area of the bounding box for said group under consideration.
23. A method according to claim 15, further comprising the steps of: for each group of bounding boxes, removing any bounding box from said group whose bounding box does not overlap the bounding box of at least one other bounding box in said group; and for each removed bounding box: selecting a group whose bounding box and said removed bounding box would meet a selection criterion if the removed bounding box were to be added to said group; and adding said removed bounding box to the selected group if the addition of said removed bounding box satisfies an adding criterion.
24. A method according to claim 23, wherein said adding criterion is satisfied if: the area of the bounding box of said selected group after the addition of said removed bounding box is less than a predetermined multiple of the sum of the area of the bounding box of said selected group and the area of said removed bounding box; and said removed bounding box overlaps at least one bounding box in said selected group.
25. A method according to claim 12, further comprising the step, before said step of generating said page resolution bitmaps, of cutting said bounding boxes into a plurality of non-overlapping bounding boxes enclosing the same regions as the original set of bounding boxes.
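One way to realise the cutting step of claim 25 for a single pair of overlapping boxes is to carve one box into at most four rectangles around the other. This is a sketch under assumed conventions, not the specification's algorithm: boxes are `(x0, y0, x1, y1)` tuples and the function name is hypothetical.

```python
def cut_around(box, hole):
    """Split `box` into at most four non-overlapping pieces that together
    cover box minus its intersection with `hole`; no piece overlaps `hole`."""
    x0, y0, x1, y1 = box
    hx0, hy0, hx1, hy1 = hole
    pieces = []
    if y0 < hy0:
        pieces.append((x0, y0, x1, min(y1, hy0)))          # strip above the hole
    if hy1 < y1:
        pieces.append((x0, max(y0, hy1), x1, y1))          # strip below the hole
    my0, my1 = max(y0, hy0), min(y1, hy1)                  # middle band
    if my0 < my1:
        if x0 < hx0:
            pieces.append((x0, my0, min(x1, hx0), my1))    # left of the hole
        if hx1 < x1:
            pieces.append((max(x0, hx1), my0, x1, my1))    # right of the hole
    return pieces
```

Applying this pairwise over a set of overlapping bounding boxes yields a non-overlapping cover of the same regions, at the cost of more, smaller boxes.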
26. A method according to claim 12, further comprising the step, before said step of generating said page resolution bitmaps, of relocating at least one said region within the portion of a bounding box not occupied by its corresponding enclosed region.
27. Apparatus for adding a region to one of a plurality of groups of further regions, each region comprising one or more pixels, said apparatus comprising: means for computing a bounding box for said region; means for computing a bounding box for each group of further regions; means for selecting the group from said plurality of groups of further regions whose bounding box and the bounding box for said region would meet a selection criterion if said region were to be added to said group; and means for adding said region to the selected group if the addition of said region to said selected group satisfies an adding criterion.
28. Apparatus for re-grouping a set of grouped regions, each region comprising one or more pixels, each group of regions in said set of grouped regions having a bounding box wholly enclosing all the regions in said group of regions, said apparatus comprising: means for, for each group of regions, removing any region from said group of regions whose bounding box does not overlap the bounding box of at least one other region in said group of regions; and means for, for each removed region: selecting a group from said grouped regions whose bounding box and the bounding box of said removed region would meet a selection criterion if the removed region were to be added to said group; and adding said removed region to the selected group if the addition of said removed region satisfies an adding criterion.
29. Apparatus for generating a page description from a list of graphical objects, each graphical object comprising one or more edges and at least one fill, said page description representing the visual appearance of said graphical objects when rendered to a page, said apparatus comprising: means for identifying regions of said page requiring representation as bitmaps at page resolution; means for computing a bounding box enclosing each said identified region; means for generating a page resolution bitmap within each said bounding box representing the visual appearance of the enclosed identified region; and means for generating said page description utilising said page resolution bitmaps.

30. A computer program product including a computer readable medium having recorded thereon a computer program for adding a region to one of a plurality of groups of further regions, each region comprising one or more pixels, said program comprising: code for computing a bounding box for said region; code for computing a bounding box for each group of further regions; code for selecting the group from said plurality of groups of further regions whose bounding box and the bounding box for said region would meet a selection criterion if said region were to be added to said group; and code for adding said region to the selected group if the addition of said region to said selected group satisfies an adding criterion.
31. A computer program product including a computer readable medium having recorded thereon a computer program for re-grouping a set of grouped regions, each region comprising one or more pixels, each group of regions in said set of grouped regions having a bounding box wholly enclosing all the regions in said group of regions, said program comprising: code for, for each group of regions, removing any region from said group of regions whose bounding box does not overlap the bounding box of at least one other region in said group of regions; and code for, for each removed region: selecting a group from said grouped regions whose bounding box and the bounding box of said removed region would meet a selection criterion if the removed region were to be added to said group; and adding said removed region to the selected group if the addition of said removed region satisfies an adding criterion.

32. A computer program product including a computer readable medium having recorded thereon a computer program for generating a page description from a list of graphical objects, each graphical object comprising one or more edges and at least one fill, said page description representing the visual appearance of said graphical objects when rendered, said program comprising: code for identifying regions of said page requiring representation as bitmaps at page resolution; code for computing a bounding box enclosing each said identified region; code for generating a page resolution bitmap within each said bounding box representing the visual appearance of the enclosed identified region; and code for generating said page description utilising said page resolution bitmaps.

33. A method substantially as described herein with reference to any one of Figs. 6 to 16 of the accompanying drawings.

34. Apparatus substantially as described herein with reference to any one of Figs. 6 to 16 of the accompanying drawings.
35. A computer program substantially as described herein with reference to any one of Figs. 6 to 16 of the accompanying drawings.

DATED this 9th Day of August 2005
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2005203541A AU2005203541A1 (en) | 2005-08-09 | 2005-08-09 | Multiple image consolidation in a printing system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2005203541A AU2005203541A1 (en) | 2005-08-09 | 2005-08-09 | Multiple image consolidation in a printing system |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2005203541A1 true AU2005203541A1 (en) | 2007-03-01 |
Family
ID=37846264
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2005203541A Abandoned AU2005203541A1 (en) | 2005-08-09 | 2005-08-09 | Multiple image consolidation in a printing system |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU2005203541A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9459819B2 (en) | 2010-11-03 | 2016-10-04 | Canon Kabushiki Kaisha | Method, apparatus and system for associating an intermediate fill with a plurality of objects |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7561303B2 (en) | Caching and optimisation of compositing | |
EP1577838B1 (en) | A method of rendering graphical objects | |
US7978196B2 (en) | Efficient rendering of page descriptions | |
EP1962224B1 (en) | Applying traps to a printed page specified in a page description language format | |
CA2221752C (en) | Method and apparatus for reducing storage requirements for display data | |
JP4299034B2 (en) | Raster mixed file | |
US6781600B2 (en) | Shape processor | |
JP3454552B2 (en) | Method and system for generating data for generating a page representation on a page | |
US5852679A (en) | Image processing apparatus and method | |
US8723884B2 (en) | Scan converting a set of vector edges to a set of pixel aligned edges | |
US7167259B2 (en) | System and method for merging line work objects using tokenization and selective compression | |
WO2007064851A2 (en) | System to print artwork containing transparency | |
JP2010505162A (en) | Lattice type processing method and apparatus for transparent pages | |
US6429950B1 (en) | Method and apparatus for applying object characterization pixel tags to image data in a digital imaging device | |
JP2013505854A (en) | How to create a printable raster image file | |
US20090091564A1 (en) | System and method for rendering electronic documents having overlapping primitives | |
KR100477777B1 (en) | Method, system, program, and data structure for generating raster objects | |
US20060285144A1 (en) | Efficient Implementation of Raster Operations Flow | |
AU2005203541A1 (en) | Multiple image consolidation in a printing system | |
US20040246510A1 (en) | Methods and systems for use of a gradient operator | |
AU2004240230A1 (en) | Caching and optimisation of compositing | |
AU2004237908A1 (en) | Apparatus for printing overlapping objects | |
JP4467715B2 (en) | Image output control apparatus and method | |
AU2006200899A1 (en) | Efficient rendering of page descriptions | |
AU2008264239A1 (en) | Text processing in a region based printing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MK1 | Application lapsed section 142(2)(a) - no request for examination in relevant period |