
CN117456052A - Graphic editing method and device based on canvas - Google Patents

Graphic editing method and device based on canvas

Info

Publication number
CN117456052A
Authority
CN
China
Prior art keywords
target
canvas
size
design
elements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311176462.4A
Other languages
Chinese (zh)
Inventor
舒娟
杨智枭
寇明钰
黄治
胡小洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing China Tobacco Industry Co Ltd
Original Assignee
Chongqing China Tobacco Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing China Tobacco Industry Co Ltd filed Critical Chongqing China Tobacco Industry Co Ltd
Priority to CN202311176462.4A
Publication of CN117456052A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 - Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a canvas-based graphic editing method and apparatus. The method comprises the following steps: obtaining design elements, where the design elements comprise pattern elements and annotation texts corresponding to the pattern elements; creating, on a preset canvas, a number of layers equal to the number of design elements; placing each design element on its corresponding layer on the canvas; and adjusting the sizes of the design elements and the positions of the corresponding layers based on a preset adjustment strategy to obtain a first target image, in which no two pattern elements intersect and no two annotation texts intersect.

Description

Graphic editing method and device based on canvas
Technical Field
The invention relates to the technical field of image processing, in particular to a graphic editing method and device based on canvas.
Background
Pattern design is one of the design categories common in the design industry. It is applied to scenarios such as product packaging and print advertising. Currently, when a designer designs a pattern on a canvas, the generated image usually lacks annotations explaining the product's design concept; in addition, design elements (such as icons) tend to be concentrated in a single layer, which makes flexible editing of the picture inconvenient.
Disclosure of Invention
In view of the foregoing, an object of an embodiment of the present application is to provide a graphic editing method and apparatus based on canvas, which can solve the problems of inflexibility in image editing and inconvenience in adding design annotations.
To achieve this technical purpose, the technical solution adopted by the present application is as follows:
in a first aspect, an embodiment of the present application provides a canvas-based graphics editing method, where the method includes:
obtaining design elements, where the design elements comprise pattern elements and annotation texts corresponding to the pattern elements;
creating, on a preset canvas, a number of layers equal to the number of design elements;
placing each of the design elements on its corresponding layer on the canvas;
and adjusting the sizes of the design elements and the positions of the corresponding layers based on a preset adjustment strategy to obtain a first target image, in which no two pattern elements intersect and no two annotation texts intersect.
With reference to the first aspect, in some optional embodiments, the method further includes:
adjusting, in the canvas, the gradient color of a first specified pattern element in the first target image, and/or generating an outline stroke for a second specified pattern element in the first target image, based on a received first operation instruction, to obtain an updated first target image.
With reference to the first aspect, in some optional embodiments, the method further includes:
setting all annotation texts in the first target image to full transparency based on a received second operation instruction, to obtain a second target image.
With reference to the first aspect, in some optional embodiments, the method further includes:
and exporting the first target image and the second target image to a specified folder in a specified image format.
With reference to the first aspect, in some optional embodiments, adjusting the size of the design element and the position of the corresponding layer based on a preset adjustment policy, to obtain a first target image includes:
estimating a first target size occupied by drawing all pattern elements in the canvas in a non-intersecting manner and a first target position of all pattern elements in the canvas based on each pattern element in the design elements, a first initial size of each pattern element and an editable size corresponding to the canvas, wherein the first target size is smaller than or equal to the editable size;
determining a first target scaling corresponding to each pattern element based on the first target size, the position of each pattern element in the canvas, and the first initial size;
moving the pattern elements in the design elements to the first target positions in the corresponding layers of the canvas, and performing scaling operation on the corresponding pattern elements in the design elements based on the first target scaling;
estimating a second target size occupied by drawing all annotation texts in the canvas in a non-intersecting manner and a second target position of all annotation texts in the canvas based on each annotation text in the design element, a second initial size of each annotation text and an editable size corresponding to the canvas, wherein the first target position of each pattern element is associated with the corresponding second target position of each annotation text, and the second target size is smaller than or equal to the editable size;
determining a second target scaling corresponding to each annotation text based on the second target size, the position of each annotation text in the canvas, and the second initial size;
and moving the annotation text in the design element to the second target position in the corresponding layer of the canvas, and performing scaling operation on the corresponding annotation text in the design element based on the second target scaling to obtain the first target image.
In a second aspect, embodiments of the present application further provide a graphic editing apparatus based on canvas, where the apparatus includes:
an obtaining unit, configured to obtain a design element, where the design element includes a pattern element and an annotation text corresponding to the pattern element;
the creation unit is used for creating layers corresponding to the number of the design elements on a preset canvas;
the setting unit is used for respectively setting all the design elements on corresponding layers on the canvas;
the adjustment unit is used for adjusting the sizes of the design elements and the positions of the corresponding layers based on a preset adjustment strategy to obtain a first target image, in which no two pattern elements intersect and no two annotation texts intersect.
With reference to the second aspect, in some optional embodiments, the adjusting unit is further configured to:
adjust, in the canvas, the gradient color of a first specified pattern element in the first target image, and/or generate an outline stroke for a second specified pattern element in the first target image, based on a received first operation instruction, to obtain an updated first target image.
With reference to the second aspect, in some optional embodiments, the adjusting unit is further configured to:
set all annotation texts in the first target image to full transparency based on a received second operation instruction, to obtain a second target image.
With reference to the second aspect, in some optional embodiments, the apparatus further includes a deriving unit configured to:
and exporting the first target image and the second target image to a specified folder in a specified image format.
With reference to the second aspect, in some optional embodiments, the adjusting unit is further configured to:
estimating a first target size occupied by drawing all pattern elements in the canvas in a non-intersecting manner and a first target position of all pattern elements in the canvas based on each pattern element in the design elements, a first initial size of each pattern element and an editable size corresponding to the canvas, wherein the first target size is smaller than or equal to the editable size;
determining a first target scaling corresponding to each pattern element based on the first target size, the position of each pattern element in the canvas, and the first initial size;
moving the pattern elements in the design elements to the first target positions in the corresponding layers of the canvas, and performing scaling operation on the corresponding pattern elements in the design elements based on the first target scaling;
estimating a second target size occupied by drawing all annotation texts in the canvas in a non-intersecting manner and a second target position of all annotation texts in the canvas based on each annotation text in the design element, a second initial size of each annotation text and an editable size corresponding to the canvas, wherein the first target position of each pattern element is associated with the corresponding second target position of each annotation text, and the second target size is smaller than or equal to the editable size;
determining a second target scaling corresponding to each annotation text based on the second target size, the position of each annotation text in the canvas, and the second initial size;
and moving the annotation text in the design element to the second target position in the corresponding layer of the canvas, and performing scaling operation on the corresponding annotation text in the design element based on the second target scaling to obtain the first target image.
The invention adopting the technical scheme has the following advantages:
In the technical solution provided by the present application, a number of layers equal to the number of design elements is created on a preset canvas, and each design element is then placed on its corresponding layer, which facilitates flexible, layer-based editing of the image. The sizes of the design elements and the positions of the corresponding layers are adjusted based on a preset adjustment strategy to obtain a first target image in which no two pattern elements intersect and no two annotation texts intersect. In this way, annotation text can be flexibly added to the image, and automatic image generation with adaptive layout can be achieved.
Drawings
The present application may be further illustrated by the non-limiting examples given in the accompanying drawings. It is to be understood that the following drawings illustrate only certain embodiments of the present application and are therefore not to be considered limiting of its scope, since a person of ordinary skill in the art may derive other relevant drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a graphic editing method based on canvas according to an embodiment of the present application.
Fig. 2 is a block diagram of a canvas-based graphic editing apparatus provided in an embodiment of the present application.
Reference numerals: 200 - graphic editing apparatus; 210 - obtaining unit; 220 - creation unit; 230 - setting unit; 240 - adjustment unit.
Detailed Description
The present application will be described in detail below with reference to the drawings and specific embodiments. It should be noted that, in the drawings and in the description, similar or identical parts use the same reference numerals, and implementations not shown or described are of a form known to those of ordinary skill in the art. In the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Referring to Fig. 1, the present application provides a canvas-based graphic editing method, which can be applied to an electronic device. The electronic device may include a processing module and a storage module. The storage module stores a computer program which, when executed by the processing module, enables the electronic device to perform the steps of the graphic editing method.
The electronic device may be, but is not limited to, a personal computer, a server, and the like. The graphic editing method can be applied to print advertisement design and to the design of appearance patterns for product packaging (for example, the appearance pattern of a cigarette pack). The graphic editing method may include the following steps:
step 110, obtaining design elements, wherein the design elements comprise pattern elements and annotation texts corresponding to the pattern elements;
step 120, creating layers corresponding to the number of the design elements on a preset canvas;
step 130, setting all the design elements on corresponding layers on the canvas respectively;
step 140, adjusting the sizes of the design elements and the positions of the corresponding layers based on a preset adjustment strategy to obtain a first target image, in which no two pattern elements intersect and no two annotation texts intersect.
The steps of the graphic editing method will be described in detail as follows:
before executing step 110, the designer may prepare a large number of icons and annotation text for the icons in advance as pattern elements and annotation text in the design elements. A number of icons and annotation text for the icons may form a database.
When a picture needs to be designed, that is, in step 110, the designer may use the electronic device to select pattern elements, together with the annotation texts corresponding to those pattern elements, from the database as the design elements. The number of design elements selected may be determined flexibly according to the actual situation and is not particularly limited here.
In step 120, the number of layers created on the canvas is the same as the number of design elements. For example, if the total number of pattern elements and annotation texts in the design elements is M, M layers are created on the canvas, where M is an integer greater than 1.
In step 130, the electronic device may draw all pattern elements and annotation texts of the design elements on different layers of the canvas, where each pattern element occupies its own layer and each annotation text occupies its own layer. That is, a single layer usually contains only one pattern element or one annotation text. This facilitates subsequent flexible editing of the image on a per-layer basis; for example, the transparency of all content in a layer can be adjusted independently of the other layers.
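The following TypeScript sketch illustrates this one-layer-per-element arrangement using the browser Canvas API. The DesignElement, Layer, and createLayers names are assumptions introduced here for illustration, not identifiers taken from the patent.

```typescript
// One offscreen canvas per design element, so each pattern element or
// annotation text can be edited independently of the others.

interface DesignElement {
  kind: "pattern" | "annotation";
  draw: (ctx: CanvasRenderingContext2D) => void; // how the element paints itself
}

interface Layer {
  element: DesignElement;
  canvas: HTMLCanvasElement; // dedicated layer canvas
  x: number;                 // layer position on the main canvas
  y: number;
  scale: number;
  alpha: number;             // 1 = opaque, 0 = fully transparent
}

function createLayers(elements: DesignElement[], width: number, height: number): Layer[] {
  // Steps 120/130: create as many layers as there are design elements,
  // then draw each element onto its own layer.
  return elements.map((element) => {
    const canvas = document.createElement("canvas");
    canvas.width = width;
    canvas.height = height;
    const ctx = canvas.getContext("2d")!;
    element.draw(ctx);
    return { element, canvas, x: 0, y: 0, scale: 1, alpha: 1 };
  });
}
```

Keeping position, scale, and alpha on the layer object is what later makes per-layer dragging, scaling, and transparency changes straightforward.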
It should be noted that the electronic device may initially place the pattern elements and their corresponding annotation texts at arbitrary positions on their layers in the canvas, and then adjust their positions within the layers and scale their sizes in step 140.
In step 140, the preset adjustment strategy may be determined flexibly according to the actual situation. In general, the preset adjustment strategy needs to satisfy the following conditions (a minimal sketch of the underlying intersection test follows this list):
in the canvas, no two pattern elements intersect after scaling and/or translation;
in the canvas, no two annotation texts intersect after scaling and/or translation.
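A minimal sketch of the intersection test such a strategy can rely on, assuming each element is approximated by its axis-aligned bounding box; the Rect type and intersects helper are assumptions for illustration.

```typescript
// Two axis-aligned bounding boxes intersect only if they overlap on both axes.

interface Rect {
  x: number;      // top-left corner
  y: number;
  width: number;
  height: number;
}

function intersects(a: Rect, b: Rect): boolean {
  return (
    a.x < b.x + b.width &&
    b.x < a.x + a.width &&
    a.y < b.y + b.height &&
    b.y < a.y + a.height
  );
}

// The strategy is satisfied when intersects(r1, r2) is false for every pair of
// pattern-element rectangles and every pair of annotation-text rectangles.
```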
In this embodiment, step 140 may include:
step 141, estimating a first target size occupied by drawing all pattern elements in the canvas in a non-intersecting manner and a first target position of all pattern elements in the canvas based on each pattern element in the design elements, the first initial size of each pattern element and the editable size corresponding to the canvas, wherein the first target size is smaller than or equal to the editable size;
step 142, determining a first target scaling corresponding to each pattern element based on the first target size, the position of each pattern element in the canvas and the first initial size;
step 143, moving the pattern elements in the design elements to the first target positions in the corresponding layers of the canvas, and performing scaling operation on the corresponding pattern elements in the design elements based on the first target scaling;
step 144, based on each annotation text in the design element, a second initial size of each annotation text, and an editable size corresponding to the canvas, estimating a second target size occupied by drawing all the annotation texts in the canvas in a non-intersecting manner, and a second target position of all the annotation texts in the canvas, wherein the first target position of each pattern element is associated with the corresponding second target position of each annotation text, and the second target size is smaller than or equal to the editable size;
step 145, determining a second target scaling corresponding to each annotation text based on the second target size, the position of each annotation text in the canvas and the second initial size;
and 146, moving the annotation text in the design element to the second target position in the corresponding layer of the canvas, and performing scaling operation on the corresponding annotation text in the design element based on the second target scaling to obtain the first target image.
In step 141, the editable size corresponding to the canvas may be set flexibly by the designer according to the actual situation and is not particularly limited here. Within the editable area of the canvas, the pattern elements can be selected one by one and drawn onto randomly selected layers. If the currently placed pattern element overlaps a pattern element previously placed on another layer, the current element is translated within the editable area until the two no longer overlap. If no blank region remains in the editable area in which the translated element could avoid overlapping the previously placed elements, the current element and/or the previously placed elements can be shrunk so that the current element fits into the editable area without overlap. In this way, the first target size of the region occupied by the pattern elements drawn without overlap in the editable area, and the target positions of the pattern elements within the editable area, can be estimated.
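A rough sketch of this place, translate, and shrink loop, reusing the Rect and intersects helpers assumed above; the retry limit and the 0.8 shrink factor are illustrative assumptions, not values from the patent.

```typescript
// Greedy placement: take elements one by one, translate to a free spot inside
// the editable area, and shrink when no free spot can be found.

interface Placed extends Rect {
  scale: number; // accumulated scaling ratio relative to the initial size
}

function placeWithoutOverlap(sizes: Rect[], editable: Rect, maxTries = 200): Placed[] {
  const placed: Placed[] = [];
  for (const size of sizes) {
    const rect: Placed = { ...size, scale: 1 };
    let tries = 0;
    // Translate within the editable area until the element overlaps nothing.
    while (placed.some((p) => intersects(p, rect)) && tries < maxTries) {
      rect.x = editable.x + Math.random() * Math.max(0, editable.width - rect.width);
      rect.y = editable.y + Math.random() * Math.max(0, editable.height - rect.height);
      tries++;
      // If no blank region is left, shrink the current element and keep trying.
      if (tries % 50 === 0) {
        rect.scale *= 0.8;
        rect.width *= 0.8;
        rect.height *= 0.8;
      }
    }
    placed.push(rect);
  }
  return placed; // target positions plus per-element target scaling ratios
}
```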
In step 142, the first target scaling ratio may be understood as the ratio by which a pattern element is scaled from its first initial size to its first target size.
In step 144, the scaling and translation of the annotation texts are similar to those of the pattern elements and are not repeated here. The difference is that, when translated or scaled, an annotation text stays around the perimeter of its associated pattern element; that is, the distance between the annotation text and the associated pattern element is kept at or below a specified distance so that the text does not drift too far from its element. This distance may refer to the distance between the center of the annotation text's text box and the center of the pattern element, and the specified distance may be determined flexibly according to the actual situation and is not particularly limited here.
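A minimal sketch of this constraint, again assuming the Rect type above: if the center of the annotation's text box drifts beyond the specified distance from the center of its pattern element, it is pulled back along the line joining the two centers.

```typescript
function center(r: Rect): { cx: number; cy: number } {
  return { cx: r.x + r.width / 2, cy: r.y + r.height / 2 };
}

function clampToElement(text: Rect, element: Rect, maxDistance: number): Rect {
  const t = center(text);
  const e = center(element);
  const dx = t.cx - e.cx;
  const dy = t.cy - e.cy;
  const dist = Math.hypot(dx, dy);
  if (dist <= maxDistance || dist === 0) return text; // already close enough
  // Pull the text box back along the line joining the two centers.
  const k = maxDistance / dist;
  return {
    ...text,
    x: e.cx + dx * k - text.width / 2,
    y: e.cy + dy * k - text.height / 2,
  };
}
```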
In step 145, the second target scaling ratio is the ratio by which an annotation text is scaled from its second initial size to its second target size.
Understandably, the first target image is obtained after the positions and sizes of the pattern elements and annotation texts have been adjusted. In the first target image, no two pattern elements intersect and no two annotation texts intersect, although a region in which a pattern element and an annotation text overlap may still exist.
In this embodiment, the method may further include:
adjusting, in the canvas, the gradient color of a first specified pattern element in the first target image, and/or generating an outline stroke for a second specified pattern element in the first target image, based on a received first operation instruction, to obtain an updated first target image.
It is understandable that the first specified pattern element and the second specified pattern element can be selected flexibly according to the actual situation. According to actual needs, the designer can adjust one or more of the gradient color, the stroke line width, and the stroke color of any pattern element in the design elements, thereby achieving custom editing of the pattern. The operation instruction for adjusting the gradient color, stroke width, and stroke color is the first operation instruction.
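A minimal Canvas 2D sketch of such an adjustment, assuming the Layer and Rect types above and drawing the element as a plain rectangle purely for illustration; createLinearGradient, addColorStop, strokeStyle, and lineWidth are standard Canvas 2D API members.

```typescript
// Redraw a pattern element's layer with an adjusted gradient fill and/or an
// outline stroke (the "first operation instruction").

function applyGradientAndStroke(
  layer: Layer,
  rect: Rect,
  gradientStops: Array<[number, string]>, // e.g. [[0, "#ff0000"], [1, "#0000ff"]]
  strokeColor?: string,
  strokeWidth?: number
): void {
  const ctx = layer.canvas.getContext("2d")!;
  ctx.clearRect(0, 0, layer.canvas.width, layer.canvas.height);

  // Gradient fill across the element's bounding box.
  const gradient = ctx.createLinearGradient(rect.x, rect.y, rect.x + rect.width, rect.y + rect.height);
  for (const [offset, color] of gradientStops) gradient.addColorStop(offset, color);
  ctx.fillStyle = gradient;
  ctx.fillRect(rect.x, rect.y, rect.width, rect.height);

  // Optional outline stroke with adjustable width and color.
  if (strokeColor && strokeWidth) {
    ctx.strokeStyle = strokeColor;
    ctx.lineWidth = strokeWidth;
    ctx.strokeRect(rect.x, rect.y, rect.width, rect.height);
  }
}
```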
In addition, the designer can freely typeset, drag layers, and use other editing functions, while icons and text are constrained so that they cannot exceed the editable area.
In this embodiment, the method may further include:
setting all annotation texts in the first target image to full transparency based on a received second operation instruction, to obtain a second target image.
It is understood that the operation instruction concerning transparency is the second operation instruction. Through the operation interface of the electronic device, the designer can adjust the attributes of the layers containing the annotation texts so that their text becomes fully transparent, that is, 100% transparent, which is equivalent to deleting or hiding the annotation texts in the visual presentation of the image.
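A minimal compositing sketch, assuming the Layer type above: annotation layers are drawn with an alpha of 0 (effectively skipped), so the same layer stack yields either the first or the second target image.

```typescript
function renderTargetImage(layers: Layer[], output: HTMLCanvasElement, hideAnnotations: boolean): void {
  const ctx = output.getContext("2d")!;
  ctx.clearRect(0, 0, output.width, output.height);
  for (const layer of layers) {
    const alpha = hideAnnotations && layer.element.kind === "annotation" ? 0 : layer.alpha;
    if (alpha === 0) continue; // a fully transparent layer contributes nothing
    ctx.globalAlpha = alpha;
    ctx.drawImage(
      layer.canvas,
      layer.x,
      layer.y,
      layer.canvas.width * layer.scale,
      layer.canvas.height * layer.scale
    );
  }
  ctx.globalAlpha = 1;
}

// renderTargetImage(layers, canvas, false) -> first target image (annotations visible)
// renderTargetImage(layers, canvas, true)  -> second target image (annotations hidden)
```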
The second target image corresponds to the image presented after the annotation texts in the first target image are hidden; the second target image can be understood as the appearance pattern, advertising picture, or similar artwork actually printed on the product packaging.
In this embodiment, the electronic device may output the first target image and the second target image at the same time. The annotation texts on the first target image help others understand the designer's design concept, while the second target image is the appearance pattern actually presented on the product or advertisement, which facilitates review and cross-checking against the first target image.
In this embodiment, the method may further include:
and exporting the first target image and the second target image to a specified folder in a specified image format.
The specified image format may be, but is not limited to, JPEG, TIFF, RAW, PNG, and the like. The specified folder can be created and set flexibly according to the actual situation. The designer can select the appropriate image format for exporting the image files and export them to the desired folder, which improves the designer's user experience.
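A minimal export sketch. Note that the Canvas API natively encodes PNG and JPEG (and WebP in some browsers); TIFF or RAW output would need an additional encoder, and a web page cannot write into an arbitrary local folder, so the exported file goes to the browser's download location unless the export is handled server-side. The function and file names here are illustrative assumptions.

```typescript
function exportImage(
  canvas: HTMLCanvasElement,
  fileName: string,
  mimeType: "image/png" | "image/jpeg" = "image/png"
): void {
  canvas.toBlob((blob) => {
    if (!blob) return;
    const url = URL.createObjectURL(blob);
    const link = document.createElement("a");
    link.href = url;
    link.download = fileName; // e.g. "first-target-image.png"
    link.click();             // triggers the browser download
    URL.revokeObjectURL(url);
  }, mimeType);
}
```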
As an example, when the method is applied to the design of the appearance pattern of a cigarette pack, a designer may implement a picture-editor function in the browser of an electronic device using the Canvas API. The Canvas API is based on the canvas element introduced in HTML5 for generating images in real time on a web page; it can manipulate image content and supports scripted, client-side drawing. Using the electronic device, the designer may highlight or zoom the annotation texts and pattern elements on the first target image, present them with zoom animations, and annotate each design element. In the design and development stage of the appearance pattern, this makes it convenient to edit the constituent elements of the cigarette pack independently; in the sales stage, it helps consumers understand the design concept of the product's appearance pattern and the culture and emotion that the design elements represent.
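A minimal sketch of wiring such an editor to a page, assuming a canvas element with id "editor" and the Layer and renderTargetImage helpers sketched above; it lets the designer drag a layer and re-renders after each move.

```typescript
function initEditor(layers: Layer[]): void {
  const canvas = document.getElementById("editor") as HTMLCanvasElement; // assumed id
  let dragging: Layer | null = null;

  canvas.addEventListener("pointerdown", (e) => {
    // Pick the topmost layer under the pointer (hit test against layer bounds).
    dragging = [...layers].reverse().find(
      (l) =>
        e.offsetX >= l.x && e.offsetX <= l.x + l.canvas.width * l.scale &&
        e.offsetY >= l.y && e.offsetY <= l.y + l.canvas.height * l.scale
    ) ?? null;
  });

  canvas.addEventListener("pointermove", (e) => {
    if (!dragging) return;
    dragging.x += e.movementX;
    dragging.y += e.movementY;
    renderTargetImage(layers, canvas, false); // redraw the first target image
  });

  canvas.addEventListener("pointerup", () => (dragging = null));

  renderTargetImage(layers, canvas, false); // initial render
}
```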
Referring to Fig. 2, the present application further provides a canvas-based graphic editing apparatus 200. The graphic editing apparatus 200 includes at least one software function module that may be stored in the storage module in the form of software or firmware, or built into the operating system (OS) of the electronic device. The processing module is configured to execute the executable modules stored in the storage module, such as the software function modules and computer programs included in the graphic editing apparatus 200.
The functions of the units included in the graphic editing apparatus 200 may be as follows:
an obtaining unit 210, configured to obtain a design element, where the design element includes a pattern element and an annotation text corresponding to the pattern element;
a creating unit 220, configured to create, on a preset canvas, layers corresponding to the number of design elements;
a setting unit 230, configured to set all the design elements on corresponding layers on the canvas respectively;
the adjusting unit 240 is configured to adjust the size of the design element and the position of the corresponding layer based on a preset adjustment policy, so as to obtain a first target image, where in the first target image, any two pattern elements do not intersect, and any two annotation texts do not intersect.
Optionally, the adjusting unit 240 is further configured to: adjust, in the canvas, the gradient color of a first specified pattern element in the first target image, and/or generate an outline stroke for a second specified pattern element in the first target image, based on a received first operation instruction, to obtain an updated first target image.
Optionally, the adjusting unit 240 is further configured to: set all annotation texts in the first target image to full transparency based on a received second operation instruction, to obtain a second target image.
Optionally, the graphic editing apparatus 200 further comprises a deriving unit for: and exporting the first target image and the second target image to a specified folder in a specified image format.
Optionally, the adjusting unit 240 is further configured to:
estimating a first target size occupied by drawing all pattern elements in the canvas in a non-intersecting manner and a first target position of all pattern elements in the canvas based on each pattern element in the design elements, a first initial size of each pattern element and an editable size corresponding to the canvas, wherein the first target size is smaller than or equal to the editable size;
determining a first target scaling corresponding to each pattern element based on the first target size, the position of each pattern element in the canvas, and the first initial size;
moving the pattern elements in the design elements to the first target positions in the corresponding layers of the canvas, and performing scaling operation on the corresponding pattern elements in the design elements based on the first target scaling;
estimating a second target size occupied by drawing all annotation texts in the canvas in a non-intersecting manner and a second target position of all annotation texts in the canvas based on each annotation text in the design element, a second initial size of each annotation text and an editable size corresponding to the canvas, wherein the first target position of each pattern element is associated with the corresponding second target position of each annotation text, and the second target size is smaller than or equal to the editable size;
determining a second target scaling corresponding to each annotation text based on the second target size, the position of each annotation text in the canvas, and the second initial size;
and moving the annotation text in the design element to the second target position in the corresponding layer of the canvas, and performing scaling operation on the corresponding annotation text in the design element based on the second target scaling to obtain the first target image.
In this embodiment, the processing module may be an integrated circuit chip with signal processing capability, and may be a general purpose processor. For example, the processor may be a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application.
The storage module may be, but is not limited to, a random access memory, a read-only memory, a programmable read-only memory, an erasable programmable read-only memory, an electrically erasable programmable read-only memory, and the like. In this embodiment, the storage module may be configured to store the design elements, the canvas, the preset adjustment strategy, the target images, and the like. Of course, the storage module may also be used to store a program, which the processing module executes after receiving an execution instruction.
It should be noted that, for convenience and brevity of description, specific working processes of the electronic device described above may refer to corresponding processes of each step in the foregoing method, and will not be described in detail herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, system, and method may also be implemented in other manners. The apparatus, system, and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of such blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions. In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, each module may exist alone, or two or more modules may be integrated to form a single part.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application, and various modifications and variations may be suggested to one skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (10)

1. A canvas-based graphic editing method, the method comprising:
obtaining a design element, wherein the design element comprises a pattern element and an annotation text corresponding to the pattern element;
creating, on a preset canvas, a number of layers equal to the number of the design elements;
placing each of the design elements on its corresponding layer on the canvas;
and adjusting the sizes of the design elements and the positions of the corresponding layers based on a preset adjustment strategy to obtain a first target image, in which no two pattern elements intersect and no two annotation texts intersect.
2. The method according to claim 1, wherein the method further comprises:
adjusting, in the canvas, the gradient color of a first specified pattern element in the first target image, and/or generating an outline stroke for a second specified pattern element in the first target image, based on a received first operation instruction, to obtain an updated first target image.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
setting all annotation texts in the first target image to full transparency based on a received second operation instruction, to obtain a second target image.
4. A method according to claim 3, characterized in that the method further comprises:
and exporting the first target image and the second target image to a specified folder in a specified image format.
5. The method of claim 1, wherein adjusting the size of the design element and the position of the corresponding layer based on a preset adjustment strategy to obtain a first target image comprises:
estimating a first target size occupied by drawing all pattern elements in the canvas in a non-intersecting manner and a first target position of all pattern elements in the canvas based on each pattern element in the design elements, a first initial size of each pattern element and an editable size corresponding to the canvas, wherein the first target size is smaller than or equal to the editable size;
determining a first target scaling corresponding to each pattern element based on the first target size, the position of each pattern element in the canvas, and the first initial size;
moving the pattern elements in the design elements to the first target positions in the corresponding layers of the canvas, and performing scaling operation on the corresponding pattern elements in the design elements based on the first target scaling;
estimating a second target size occupied by drawing all annotation texts in the canvas in a non-intersecting manner and a second target position of all annotation texts in the canvas based on each annotation text in the design element, a second initial size of each annotation text and an editable size corresponding to the canvas, wherein the first target position of each pattern element is associated with the corresponding second target position of each annotation text, and the second target size is smaller than or equal to the editable size;
determining a second target scaling corresponding to each annotation text based on the second target size, the position of each annotation text in the canvas, and the second initial size;
and moving the annotation text in the design element to the second target position in the corresponding layer of the canvas, and performing scaling operation on the corresponding annotation text in the design element based on the second target scaling to obtain the first target image.
6. A canvas-based graphic editing apparatus, the apparatus comprising:
an obtaining unit, configured to obtain a design element, where the design element includes a pattern element and an annotation text corresponding to the pattern element;
the creation unit is used for creating layers corresponding to the number of the design elements on a preset canvas;
the setting unit is used for respectively setting all the design elements on corresponding layers on the canvas;
the adjustment unit is used for adjusting the sizes of the design elements and the positions of the corresponding layers based on a preset adjustment strategy to obtain a first target image, in which no two pattern elements intersect and no two annotation texts intersect.
7. The apparatus of claim 6, wherein the adjustment unit is further configured to:
adjusting, in the canvas, the gradient color of a first specified pattern element in the first target image, and/or generating an outline stroke for a second specified pattern element in the first target image, based on a received first operation instruction, to obtain an updated first target image.
8. The apparatus according to claim 6 or 7, wherein the adjustment unit is further configured to:
setting all annotation texts in the first target image to full transparency based on a received second operation instruction, to obtain a second target image.
9. The apparatus according to claim 8, further comprising a deriving unit for:
and exporting the first target image and the second target image to a specified folder in a specified image format.
10. The apparatus of claim 6, wherein the adjustment unit is further configured to:
estimating a first target size occupied by drawing all pattern elements in the canvas in a non-intersecting manner and a first target position of all pattern elements in the canvas based on each pattern element in the design elements, a first initial size of each pattern element and an editable size corresponding to the canvas, wherein the first target size is smaller than or equal to the editable size;
determining a first target scaling corresponding to each pattern element based on the first target size, the position of each pattern element in the canvas, and the first initial size;
moving the pattern elements in the design elements to the first target positions in the corresponding layers of the canvas, and performing scaling operation on the corresponding pattern elements in the design elements based on the first target scaling;
estimating a second target size occupied by drawing all annotation texts in the canvas in a non-intersecting manner and a second target position of all annotation texts in the canvas based on each annotation text in the design element, a second initial size of each annotation text and an editable size corresponding to the canvas, wherein the first target position of each pattern element is associated with the corresponding second target position of each annotation text, and the second target size is smaller than or equal to the editable size;
determining a second target scaling corresponding to each annotation text based on the second target size, the position of each annotation text in the canvas, and the second initial size;
and moving the annotation text in the design element to the second target position in the corresponding layer of the canvas, and performing scaling operation on the corresponding annotation text in the design element based on the second target scaling to obtain the first target image.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311176462.4A CN117456052A (en) 2023-09-12 2023-09-12 Graphic editing method and device based on canvas

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311176462.4A CN117456052A (en) 2023-09-12 2023-09-12 Graphic editing method and device based on canvas

Publications (1)

Publication Number Publication Date
CN117456052A 2024-01-26

Family

ID=89588095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311176462.4A Pending CN117456052A (en) 2023-09-12 2023-09-12 Graphic editing method and device based on canvas

Country Status (1)

Country Link
CN (1) CN117456052A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118395935A (en) * 2024-06-27 2024-07-26 杭州广立微电子股份有限公司 Standard cell size adjusting method and device and computer equipment



Legal Events

Date Code Title Description
PB01 Publication