CN111724455A - Image processing method and electronic device
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
Abstract
The application discloses an image processing method and an electronic device, and belongs to the field of communications technologies. The method includes: receiving a first input on a first image to be modified in a text modification interface; selecting a target area of the first image in response to the first input; acquiring target modification content; receiving a second input on the text modification interface; and in response to the second input, performing text modification at a position associated with the target area according to the target modification content, to obtain a modified second image. With the method and the device, the text content in an image can be modified through simple input operations, improving the user's image processing experience.
Description
Technical Field
The present application relates to the field of communications technologies, and in particular, to an image processing method and an electronic device.
Background
Text and pictures are important information carriers and are widely used in daily life. For example, document pictures account for a large proportion of all pictures in a mobile phone album. Compared with paper documents, documents in electronic picture format are easy to carry, copy, transmit, manage and view, and are therefore widely used.
To meet users' content modification needs, mobile phone image editors and various retouching applications provide many picture modification functions. However, the modification functions in existing image editors are mostly intended to enhance the aesthetics of an image. When a user needs to modify the text content in a picture, a large amount of manual editing such as erasing and adding text is often required; the operation steps are cumbersome, the modification traces are obvious, and the user's modification expectations cannot be met.
Disclosure of Invention
Embodiments of the present application aim to provide an image processing method and an electronic device, which can solve the problem that existing approaches to modifying text content in a picture involve complex and cumbersome operation steps.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an image processing method, including:
receiving a first input on a first image to be modified in a text modification interface;
selecting a target area of the first image in response to the first input;
acquiring target modification content;
receiving a second input on the text modification interface;
and in response to the second input, performing text modification at a position associated with the target area according to the target modification content, to obtain a modified second image.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, including:
a first receiving module, configured to receive a first input on a first image to be modified in a text modification interface;
a region selection module, configured to select a target area of the first image in response to the first input;
an acquisition module, configured to acquire target modification content;
a second receiving module, configured to receive a second input on the text modification interface;
and a text modification module, configured to, in response to the second input, perform text modification at the position associated with the target area according to the target modification content, to obtain a modified second image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method according to the first aspect.
In a fourth aspect, an embodiment of the present application further provides a readable storage medium storing a program or instructions, where the program or instructions, when executed by a processor, implement the steps of the image processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the image processing method according to the first aspect.
In the embodiments of the present application, a first input on a first image to be modified is received in a text modification interface; a target area of the first image is determined in response to the first input; target modification content is acquired; a second input on the text modification interface is received; and in response to the second input, text modification is performed at the position associated with the target area according to the target modification content, to obtain a modified second image.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an image editing interface of a first image to be modified according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a text modification interface according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a text erasing operation and its effect in the text modification interface according to an embodiment of the present application;
Fig. 5 is a first schematic diagram of a text replacement operation and its effect in the text modification interface according to an embodiment of the present application;
Fig. 6 is a second schematic diagram of a text replacement operation and its effect in the text modification interface according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a text adding operation and its effect in the text modification interface according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Moreover, the objects distinguished by "first", "second", and the like are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The image processing method provided by the embodiments of the present application is described in detail below through specific embodiments and application scenarios thereof, with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image processing method provided in an embodiment of the present application. The implementation of the method is described in detail below with reference to this figure.
Step 101: receive a first input on a first image to be modified in a text modification interface.
Here, before this step is performed, in a case where an image editing interface of the first image is displayed, the text modification interface displaying the first image to be modified is entered through a user input on a control with a text modification function in the image editing interface.
Specifically, a text modification function may be added to an image editor of a preset application program to modify text in an image displayed by the preset application program.
It should be noted that the preset application program may be the album of the electronic device, various retouching applications, an image editor, and the like, which is not limited herein.
For example, a text modification function is added to the image editor in the album of the electronic device and is provided in the image editing interface through a control with the text modification function, as shown in fig. 2; the text modification interface, shown in fig. 3, is entered by clicking the "text modification" selection control.
After the text modification interface is entered, the text in the first image may be automatically detected using a single-character detection technique, so as to determine the position and size of each character in the first image.
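Purely as an illustration (the patent does not prescribe any particular detector or data structure), the following Python sketch shows one way the output of such a single-character detection step could be represented; `CharBox` and `detect_characters` are assumptions introduced here, and any OCR or text-detection backend that returns per-character boxes could fill this role.

```python
# Hypothetical representation of the single-character detection result; the description
# only requires that the position and size of each character be obtained.
from dataclasses import dataclass
from typing import List

@dataclass
class CharBox:
    x: int   # top-left x of the character's detection box, in pixels
    y: int   # top-left y of the character's detection box, in pixels
    w: int   # box width, later reused as the character's font width
    h: int   # box height, later reused as the character's font height

def detect_characters(image) -> List[CharBox]:
    """Assumed detector interface: returns one box per character in the image."""
    raise NotImplementedError("plug in a single-character detection backend here")
```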
In this step, the first input is a preset input. Optionally, the first input may include, but is not limited to, at least one of a click input, a press input, a long press input, a pinch input, a drag input, a slide input, and a swipe input, that is, the first input may be one of the above-mentioned inputs, or may also be a combined operation of two or more of the above-mentioned inputs.
Preferably, the first input is a click input or a combination of a click input and a slide input.
For example, the user selects a single character to be modified by tapping it with a finger, and selects multiple consecutive characters by tapping and then sliding in a preset direction.
Step 102: select a target area of the first image in response to the first input.
In this step, the target area is generally a text area of the first image selected by the first input.
It should be noted that, to prevent a wrong target area from being selected due to an operation error, the user may further confirm the target area after selecting it, by tapping a text area selection confirmation control in the text modification interface, such as the "modification" control in fig. 3.
Step 103: acquire target modification content.
Step 104: receive a second input on the text modification interface.
in this step, the second input is a preset input. Optionally, the second input may include, but is not limited to, at least one of a click input, a press input, a long press input, a pinch input, a drag input, a slide input, and a swipe input, that is, the second input may be one of the above-mentioned inputs, or may also be a combined operation of two or more of the above-mentioned inputs.
Specifically, the second input is received on a text modification confirmation control (e.g., the "confirm" control in the left diagram of fig. 4) in the text modification interface. Preferably, the second input is a click input.
Step 105: in response to the second input, perform text modification at the position associated with the target area according to the target modification content, to obtain a modified second image.
In this step, the text modification includes at least one of text erasure, text replacement, and text addition.
The position associated with the target area includes one of the following positions:
the location of the target area;
a position between the target area and a third area.
Here, the third area may be a text area adjacent to the target area. With reference to fig. 4, the position between the target area and the third area may be understood as the position immediately before or after the area where the target text "Li Ming" is located.
In the image processing method, a first input on a first image to be modified is received in a text modification interface; a target area of the first image is determined in response to the first input; target modification content is acquired; a second input on the text modification interface is received; and in response to the second input, text modification is performed at the position associated with the target area according to the target modification content, to obtain a modified second image.
As an optional implementation manner, in the embodiment of the present application, after step 102 the method further includes:
displaying a text edit box on the text modification interface.
In this step, after the target area of the first image is selected, a text edit box is displayed on the text modification interface, as shown in the left diagram of fig. 4. Specifically, after the area where "Li Ming" is located is selected as the target area, a text edit box is displayed below the first image to be modified, so that the user can enter the replacement text in the text edit box.
Correspondingly, step 103 may specifically include:
acquiring the target modification content in the text edit box.
As an optional implementation manner, step 105 of the method in the embodiment of the present application may specifically include:
in a case where there is no target modification content, erasing the text corresponding to the position of the target area, to obtain a modified second image.
It should be explained that, in this implementation, "no target modification content" means that the acquired target modification content is empty.
In an example, when the user's modification requirement is only text erasure, that is, only existing text needs to be erased and no new text needs to be filled in, as shown in the left diagram of fig. 4, the selected target area is the area where "Li Ming" is located; the user does not need to enter any text in the text edit box below the text modification interface and directly taps the "confirm" control, so that the text erasure function is achieved, as shown in the right diagram of fig. 4.
Here, erasing the text corresponding to the position of the target area may specifically include:
performing binarization processing on the image in the target area to determine the text part and the background part;
and replacing each pixel of the text part with the pixel value of its nearest background pixel.
Through this processing, the text corresponding to the position of the target area can be erased; after erasure, the target area looks as if the text part had never existed, so the modification leaves no visible trace.
Of course, the text part and the background part in the target area may instead be identified by an image recognition technique; the text part is then removed by matting to obtain a blank part; finally, the blank part is filled according to the background part by an image inpainting technique, to obtain a pure background area without the text part.
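A minimal sketch of the binarization-based erasing described above, assuming OpenCV and NumPy (the patent itself names no library): Otsu thresholding separates character pixels from background, and each character pixel is then overwritten with the value of its nearest background pixel.

```python
import cv2
import numpy as np

def erase_text(image_bgr: np.ndarray, box) -> np.ndarray:
    """box = (x, y, w, h): the selected target area inside the first image."""
    x, y, w, h = box
    out = image_bgr.copy()
    roi = out[y:y + h, x:x + w]               # view into the copy, edited in place
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Otsu picks the threshold automatically; with THRESH_BINARY_INV, dark strokes on a
    # lighter background become the foreground mask (255 = character pixel).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    bg_coords = np.column_stack(np.where(mask == 0))   # (row, col) of background pixels
    if bg_coords.size == 0:                            # degenerate box: nothing to sample
        return out
    # For every character pixel, find the label of its nearest background (zero) pixel.
    _, labels = cv2.distanceTransformWithLabels(
        mask, cv2.DIST_L2, 3, labelType=cv2.DIST_LABEL_PIXEL)
    lut = np.zeros((int(labels.max()) + 1, 2), dtype=np.int64)
    lut[labels[mask == 0]] = bg_coords                 # label -> background coordinate
    fg = np.where(mask == 255)
    nearest = lut[labels[fg]]
    roi[fg] = roi[nearest[:, 0], nearest[:, 1]]        # copy nearest background colour
    return out
```

Calling `erase_text(img, (x, y, w, h))` returns a copy of the image with the selected characters blended into the surrounding background; applying `cv2.inpaint` over the same mask would be one way to realize the matting-and-filling alternative mentioned above.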
As another optional implementation manner, step 105 of the method in the embodiment of the present application may specifically include:
in a case where the target modification content includes first text and the number of characters of the first text is the same as the number of characters of the text corresponding to the position of the target area, replacing the text corresponding to the position of the target area with the first text, to obtain a modified second image.
In an example, as shown in the left diagram of fig. 5, the target modification content acquired in the text edit box is "Xiao Hong", that is, the first text is "Xiao Hong", and its number of characters is the same as that of the text "Li Ming" at the position of the target area; the text "Li Ming" corresponding to the position of the target area is therefore replaced with "Xiao Hong" to obtain the modified second image, as shown in the right diagram of fig. 5.
Specifically, in this implementation, step 105, that is, replacing the text corresponding to the position of the target area with the first text to obtain the modified second image, may specifically include:
acquiring feature information of the text corresponding to the position of the target area, where the feature information includes: the font, the font size, the font color, and the spacing between adjacent characters;
In this step, the font of the text in the target area can be recognized through a font recognition technique; the font size, namely the height and width of a character, is determined according to the size of the single-character detection box; the font color is determined according to the average RGB value of the character pixels; and the character spacing is determined according to the distance between the detection boxes of adjacent single characters.
Erasing characters corresponding to the position of the target area;
here, the specific implementation of erasing the text corresponding to the position of the target area has been described in the foregoing implementation manner, and is not described here again.
And correspondingly adding the target modified content (namely the first character) to the target area subjected to character erasing processing based on the characteristic information to obtain a modified second image.
This step may specifically include:
performing font processing on each character of the first text according to the font, font size and font color of the text corresponding to the position of the target area, so that the font, font size and font color of each character of the first text are the same as those of the text corresponding to the position of the target area;
and pasting the font-processed first text into the target area according to the spacing between adjacent characters at the position of the target area.
Through the above processing, not only is the text replaced, but a nearly traceless modification effect is also achieved.
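The feature extraction and the pasting described above can be sketched as follows, assuming NumPy and Pillow; the font classifier is stubbed out (the patent refers to it only generically as a font recognition technique), the colour is averaged over the whole character boxes for brevity rather than over the stroke pixels alone, and the font file name is an assumption standing in for whatever font was recognized.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def classify_font(image, boxes) -> str:
    """Stub for the assumed font recognition technique (returns a font name)."""
    return "SimSun"  # placeholder assumption

def extract_text_features(image: np.ndarray, boxes) -> dict:
    """boxes: list of (x, y, w, h) single-character detection boxes in the target area."""
    char_w = int(np.mean([w for _, _, w, _ in boxes]))   # font width from the boxes
    char_h = int(np.mean([h for _, _, _, h in boxes]))   # font height from the boxes
    # Font colour: average value over the character boxes (a fuller version would
    # average only over the binarized stroke pixels).
    pixels = np.concatenate([image[y:y + h, x:x + w].reshape(-1, 3)
                             for x, y, w, h in boxes])
    color = tuple(int(c) for c in pixels.mean(axis=0))
    # Character spacing: gap between adjacent single-character detection boxes.
    gaps = [boxes[i + 1][0] - (boxes[i][0] + boxes[i][2]) for i in range(len(boxes) - 1)]
    spacing = int(np.mean(gaps)) if gaps else 0
    return {"font": classify_font(image, boxes), "size": (char_w, char_h),
            "color": color, "spacing": spacing}

def render_text(erased: Image.Image, box, new_text: str, features: dict) -> Image.Image:
    """Paste new_text over the erased target area using the extracted features."""
    x, y, _, _ = box
    char_w, char_h = features["size"]
    # Assumed mapping from the recognized font name to a font file on disk.
    font = ImageFont.truetype("simsun.ttc", char_h)
    draw = ImageDraw.Draw(erased)
    for i, ch in enumerate(new_text):
        # Advance by one character width plus the measured inter-character spacing,
        # so the pasted text keeps the layout of the original target area.
        draw.text((x + i * (char_w + features["spacing"]), y),
                  ch, font=font, fill=features["color"])
    return erased
```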
It should be noted that, after one modification operation is completed, the modified image is displayed on the screen, and the user may repeat the above steps until the desired second image is obtained. During modification, the previous modification can be undone by tapping an undo control (the left arrow in fig. 5) above the first image, and the last undone modification can be restored by tapping a redo control (the right arrow in fig. 5) above the first image.
As a further optional implementation manner, step 105 of the method in the embodiment of the present application may specifically include:
in a case where the target modification content includes second text and the number of characters N of the second text is less than the number of characters M of the text corresponding to the position of the target area, replacing the characters at the i-th to (i+N-1)-th character positions in the target area with the second text, and erasing the characters at the remaining M-N character positions in the target area, to obtain a modified second image, where 1 ≤ i ≤ M-(N-1), and M, N and i are positive integers.
Here, the value of i may be preset, for example, i = 1; alternatively, a character position selection box may pop up after the user taps the confirmation control in the text modification interface, and the value of i is determined by the user's selection. This is not specifically limited herein.
In an example, as shown in the left diagram of fig. 6, the target modification content acquired in the text edit box is "Wang", that is, the second text is "Wang", whose number of characters (1) is less than the number of characters (2) of the text "Li Ming" at the position of the target area; the character at the 1st character position in the target area is therefore replaced with "Wang", and the character at the remaining 1 character position, namely "Ming", is erased, so that the modified second image is obtained, as shown in the right diagram of fig. 6.
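The index bookkeeping for this case can be sketched as follows (plain Python, using the 1-based positions of the text above); the function name and return shape are illustrative only, and `i` defaults to 1, matching the example of fig. 6.

```python
def plan_partial_replacement(M: int, new_text: str, i: int = 1):
    """M: number of original character positions in the target area;
    new_text: the second text (N characters, N < M); i: starting position."""
    N = len(new_text)
    assert 1 <= i <= M - (N - 1), "i must satisfy 1 <= i <= M-(N-1)"
    # Positions i .. i+N-1 receive the new characters, in order.
    placements = {i + k: ch for k, ch in enumerate(new_text)}
    # The remaining M-N positions are simply erased and left blank.
    erased_positions = [p for p in range(1, M + 1) if p not in placements]
    return placements, erased_positions
```

For the example of fig. 6 (M = 2, a one-character second text, i = 1), position 1 receives the new character and position 2 is erased.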
Specifically, in this implementation, step 105, that is, replacing the characters at the i-th to (i+N-1)-th character positions in the target area with the second text and erasing the characters at the remaining M-N character positions in the target area to obtain the modified second image, may specifically include:
acquiring feature information of the text corresponding to the position of the target area, where the feature information includes: the font, the font size, the font color, and the spacing between adjacent characters;
erasing the text corresponding to the position of the target area;
and adding the target modification content (namely the second text) to the erased target area based on the feature information, to obtain a modified second image.
Specifically, the second text is added at the i-th to (i+N-1)-th character positions in the erased target area.
This step may specifically include:
performing font processing on each character of the second text according to the font, font size and font color of the text corresponding to the position of the target area, so that the font, font size and font color of each character of the second text are the same as those of the text corresponding to the position of the target area; and pasting the font-processed second text at the i-th to (i+N-1)-th character positions in the target area according to the spacing between adjacent characters at the position of the target area.
As still another optional implementation manner, step 105 of the method in the embodiment of the present application may specifically include:
in a case where the target modification content includes third text and the number of characters S of the third text is greater than the number of characters T of the text corresponding to the position of the target area, determining, based on the font size of the text in the target area and the spacing between adjacent characters, the number of characters that can be accommodated in a first blank area, where the first blank area is a blank area between the target area and a first area, and the first area is a text area adjacent to the target area;
For this case, suppose for example that the target area originally contains 3 characters and the target modification content contains 5 characters. For the target modification content to be laid out over the target area and the blank area with the original font size and the original spacing between adjacent characters, without affecting the text display of the text area adjacent to the target area, the first blank area needs to accommodate at least 2 characters, that is, at least S-T characters. Otherwise, to perform the replacement without affecting the text display of the adjacent text area, at least one of the font size and the inter-character spacing of the text included in the target modification content needs to be reduced, that is, the font size, the inter-character spacing, or both are reduced. Specifically, the following two cases are distinguished.
Case one: if the number of characters that can be accommodated in the first blank area is less than S-T, replacing the text corresponding to the position of the target area with the third text based on a first font size and/or a first inter-character spacing, to obtain a modified second image, where the first font size is smaller than the font size of the text in the target area, and the first inter-character spacing is smaller than the spacing between adjacent characters in the target area.
Specifically, the text corresponding to the position of the target area is replaced with the third text based on the first font size together with the font, the font color and the inter-character spacing in the feature information of the text corresponding to the position of the target area, to obtain a modified second image; or
the text corresponding to the position of the target area is replaced with the third text based on the first inter-character spacing together with the font, the font size and the font color in the feature information of the text corresponding to the position of the target area, to obtain a modified second image; or
the text corresponding to the position of the target area is replaced with the third text based on the first font size and the first inter-character spacing together with the font and the font color in the feature information of the text corresponding to the position of the target area, to obtain a modified second image.
Case two: if the number of characters that can be accommodated in the first blank area is greater than or equal to S-T, the text corresponding to the position of the target area is replaced with the first character of the third text and the T-1 consecutive characters that follow it, and the remaining S-T characters of the third text are sequentially added to the first blank area, to obtain a modified second image, where S and T are positive integers.
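The capacity check behind these two cases can be sketched as follows; `blank_width` (the pixel width of the first blank area) and the layout formula are assumptions, since the description only states that the count is derived from the font size and the inter-character spacing.

```python
def blank_capacity(blank_width: int, char_w: int, spacing: int) -> int:
    """Number of extra characters that fit into a blank area of blank_width pixels,
    assuming each character occupies char_w pixels plus spacing pixels of gap."""
    return max(0, (blank_width + spacing) // (char_w + spacing))

def plan_longer_replacement(S: int, T: int, blank_width: int, char_w: int, spacing: int):
    """S: characters in the third text, T: characters in the target area (S > T)."""
    if blank_capacity(blank_width, char_w, spacing) >= S - T:
        # Case two: T characters go into the target area, the remaining S-T
        # characters spill into the first blank area at the original size/spacing.
        return {"reduce_size_or_spacing": False, "in_target": T, "in_blank": S - T}
    # Case one: the whole third text must fit into the target area, so the font
    # size and/or the inter-character spacing is reduced.
    return {"reduce_size_or_spacing": True, "in_target": S, "in_blank": 0}
```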
As still another optional implementation manner, step 105 of the method in the embodiment of the present application may specifically include:
in a case where the target modification content includes fourth text and the number of characters of the fourth text is W, determining, based on the font size of the text in the target area and the spacing between adjacent characters, the number Z of characters that can be accommodated in a second blank area, where the second blank area is a blank area between the target area and a second area, and the second area is a text area adjacent to the target area;
if W ≤ Z, adding the fourth text to the second blank area to obtain a modified second image, where W and Z are positive integers.
In this situation, W ≤ Z means that the number W of characters included in the target modification content is less than or equal to the number Z of characters that can be accommodated in the second blank area; the target modification content can therefore be added directly to the second blank area using the font, font size, font color and inter-character spacing of the original text in the target area, without affecting the display of the text in the text area adjacent to the target area.
When the target area contains only one character, the first character of the fourth text is the same as the character in the target area, and 2 ≤ W ≤ Z+1, the characters of the fourth text other than the first character are added to the second blank area, to obtain a modified second image.
As an example, as shown in the left diagram of fig. 7, the target modification content acquired in the text edit box is "Ming Liang", that is, the fourth text is "Ming Liang", and its first character "Ming" is the same as the character "Ming" at the position of the target area; therefore "Liang", the character of the fourth text other than the first character, is added to the second blank area, and the modified second image is obtained, as shown in the right diagram of fig. 7. In this way, the positions at which the characters of the fourth text other than the first character are to be added can be determined from the acquired target modification content, the modification type is determined to be text addition, and a corresponding text modification instruction, namely a text addition instruction, is issued.
If W > Z, the text corresponding to the position of the target area is erased; and the text corresponding to the position of the target area and the fourth text are added, based on a second font size and/or a second inter-character spacing, to the erased target area and the second blank area, to obtain a modified second image, where the second font size is smaller than the font size of the text in the target area, and the second inter-character spacing is smaller than the spacing between adjacent characters in the target area.
Specifically, the text corresponding to the position of the target area and the fourth text are added to the erased target area and the second blank area based on the second font size together with the font, the font color and the inter-character spacing in the feature information of the text corresponding to the position of the target area, to obtain a modified second image; or
the text corresponding to the position of the target area and the fourth text are added to the erased target area and the second blank area based on the second inter-character spacing together with the font, the font size and the font color in the feature information of the text corresponding to the position of the target area, to obtain a modified second image; or
the text corresponding to the position of the target area and the fourth text are added to the erased target area and the second blank area based on the second font size and the second inter-character spacing together with the font and the font color in the feature information of the text corresponding to the position of the target area, to obtain a modified second image.
In this case, W > Z, that is, the number W of characters included in the target modification content is greater than the number Z of characters that can be accommodated in the second blank area; if the target modification content were added directly to the second blank area with the font size and inter-character spacing of the original text in the target area, the display of the text in the adjacent text area would be affected, so the target modification content cannot be added directly to the second blank area.
Therefore, in order not to affect the display of the text in the text area adjacent to the target area, at least one of the font size and the inter-character spacing of the text included in the target modification content needs to be reduced, that is, the font size, the inter-character spacing, or both are reduced. Meanwhile, considering the display effect of the text in the final target area, for example so that the characters appear uniform and visually pleasing, the above steps are performed: the text corresponding to the position of the target area is erased; and the text corresponding to the position of the target area and the fourth text are added, based on the second font size and/or the second inter-character spacing, to the erased target area and the second blank area, to obtain the modified second image, where the second font size is smaller than the font size of the text in the target area, and the second inter-character spacing is smaller than the spacing between adjacent characters in the target area.
When the target area contains only one character, the first character of the fourth text is the same as the character in the target area, and W > Z+1, the text corresponding to the position of the target area is erased; and the text corresponding to the position of the target area and the fourth text are added, based on the second font size and/or the second inter-character spacing, to the erased target area and the second blank area, to obtain a modified second image, where the second font size is smaller than the font size of the text in the target area, and the second inter-character spacing is smaller than the spacing between adjacent characters in the target area.
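The decision logic for text addition can be sketched as follows, with Z obtained in the same way as `blank_capacity` in the earlier sketch; how the re-rendered text is laid out when it is shrunk is left out, since the description only requires a smaller font size and/or spacing.

```python
def plan_addition(target_text: str, fourth_text: str, Z: int):
    """target_text: original characters in the target area; Z: capacity of the
    second blank area; returns what to append or whether to erase and re-render."""
    W = len(fourth_text)
    shares_first = len(target_text) == 1 and fourth_text[:1] == target_text
    if shares_first and 2 <= W <= Z + 1:
        # Only the characters after the shared first character go into the blank area.
        return {"action": "append_to_blank", "text": fourth_text[1:]}
    if W <= Z:
        return {"action": "append_to_blank", "text": fourth_text}
    # W > Z: erase the original text, then re-render it together with the fourth text
    # over the target area and the blank area at a reduced font size and/or spacing.
    return {"action": "erase_and_rerender", "text": target_text + fourth_text}
```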
With the image processing method, a first input on a first image to be modified is received in a text modification interface; a target area of the first image is determined in response to the first input; target modification content is acquired; a second input on the text modification interface is received; and in response to the second input, text modification is performed at the position associated with the target area according to the target modification content, to obtain a modified second image.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiment of the present application, an image processing apparatus that executes an image processing method is taken as an example, and the image processing apparatus provided in the embodiment of the present application is described.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus 800 may include:
a first receiving module 801, configured to receive a first input of a first image to be modified in a text modification interface;
a region selection module 802, configured to select a target region of the first image in response to the first input;
an obtaining module 803, configured to obtain target modification content;
a second receiving module 804, configured to receive a second input to the text modification interface;
and a text modification module 805, configured to, in response to the second input, perform text modification on a position associated with the target area according to the target modification content, so as to obtain a modified second image.
Optionally, the image processing apparatus 800 further includes:
a display module, configured to display a text edit box on the text modification interface;
the obtaining module 803 includes:
a first acquisition unit, configured to acquire the target modification content in the text edit box.
Optionally, the text modification module 805 includes:
a first modification unit, configured to, in a case where there is no target modification content, erase the text corresponding to the position of the target area, to obtain a modified second image.
Optionally, the text modification module 805 includes:
a second modification unit, configured to, in a case where the target modification content includes first text and the number of characters of the first text is the same as the number of characters of the text corresponding to the position of the target area, replace the text corresponding to the position of the target area with the first text, to obtain a modified second image.
Optionally, the text modification module 805 includes:
a third modification unit, configured to, in a case where the target modification content includes second text and the number of characters N of the second text is less than the number of characters M of the text corresponding to the position of the target area, replace the characters at the i-th to (i+N-1)-th character positions in the target area with the second text, and erase the characters at the remaining M-N character positions in the target area, to obtain a modified second image, where 1 ≤ i ≤ M-(N-1), and M, N and i are positive integers.
Optionally, the text modification module 805 includes:
a first processing unit, configured to, in a case where the target modification content includes third text and the number of characters S of the third text is greater than the number of characters T of the text corresponding to the position of the target area, determine, based on the font size of the text in the target area and the spacing between adjacent characters, the number of characters that can be accommodated in a first blank area, where the first blank area is a blank area between the target area and a first area, and the first area is a text area adjacent to the target area;
a fourth modification unit, configured to, in a case where the number of characters that can be accommodated in the first blank area is less than S-T, replace, based on a first font size and/or a first inter-character spacing, the text corresponding to the position of the target area with the third text, to obtain a modified second image, where the first font size is smaller than the font size of the text in the target area, and the first inter-character spacing is smaller than the spacing between adjacent characters in the target area;
a fifth modification unit, configured to, in a case where the number of characters that can be accommodated in the first blank area is greater than or equal to S-T, replace the text corresponding to the position of the target area with the first character of the third text and the T-1 consecutive characters that follow it, and sequentially add the remaining S-T characters of the third text to the first blank area, to obtain a modified second image, where S and T are positive integers.
Optionally, the text modification module 805 includes:
a second processing unit, configured to, in a case where the target modification content includes fourth text and the number of characters of the fourth text is W, determine, based on the font size of the text in the target area and the spacing between adjacent characters, the number Z of characters that can be accommodated in a second blank area, where the second blank area is a blank area between the target area and a second area, and the second area is a text area adjacent to the target area;
a sixth modification unit, configured to, in a case where W ≤ Z, add the fourth text to the second blank area, to obtain a modified second image, where W and Z are positive integers;
a seventh modification unit, configured to, in a case where W > Z, erase the text corresponding to the position of the target area, and add, based on a second font size and/or a second inter-character spacing, the text corresponding to the position of the target area and the fourth text to the erased target area and the second blank area, to obtain a modified second image, where the second font size is smaller than the font size of the text in the target area, and the second inter-character spacing is smaller than the spacing between adjacent characters in the target area.
Optionally, the text modification module 805 includes:
a second acquisition unit, configured to acquire feature information of the text corresponding to the position of the target area, where the feature information includes a font, a font size, a font color, and the spacing between adjacent characters;
an erasing unit, configured to erase the text corresponding to the position of the target area;
and an eighth modification unit, configured to add the first text to the erased target area based on the feature information, to obtain a modified second image.
In the image processing apparatus, the first receiving module receives a first input on a first image to be modified in a text modification interface; the region selection module determines a target area of the first image in response to the first input; the acquisition module acquires target modification content; the second receiving module receives a second input on the text modification interface; and the text modification module, in response to the second input, performs text modification at the position associated with the target area according to the target modification content, to obtain a modified second image.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a Network Attached Storage (NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not limited in particular.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 7, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 901, a memory 902, and a program or an instruction stored in the memory 902 and executable on the processor 901, where the program or the instruction is executed by the processor 901 to implement each process of the foregoing image processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic hardware structure diagram of an electronic device implementing various embodiments of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and a power supply 1011.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (e.g., a battery) for supplying power to the components; the power source may be logically connected to the processor 1010 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not repeated here.
The user input unit 1007 is configured to receive a first input on a first image to be modified in the text modification interface; the processor 1010 is configured to select a target area of the first image in response to the first input and to acquire target modification content; the user input unit 1007 is further configured to receive a second input on the text modification interface; and the processor 1010 is further configured to, in response to the second input, perform text modification at the position associated with the target area according to the target modification content, to obtain a modified second image.
In the embodiments of the present application, the text content in an image can be modified through simple input operations, improving the user's image processing experience.
Optionally, the display unit 1006 is configured to display a text edit box on the text modification interface; correspondingly, the processor 1010 is further configured to acquire the target modification content in the text edit box.
Optionally, the processor 1010 is further configured to, in a case where there is no target modification content, erase the text corresponding to the position of the target area, to obtain a modified second image.
Optionally, the processor 1010 is further configured to, in a case where the target modification content includes first text and the number of characters of the first text is the same as the number of characters of the text corresponding to the position of the target area, replace the text corresponding to the position of the target area with the first text, to obtain a modified second image.
Optionally, the processor 1010 is further configured to, in a case where the target modification content includes second text and the number of characters N of the second text is less than the number of characters M of the text corresponding to the position of the target area, replace the characters at the i-th to (i+N-1)-th character positions in the target area with the second text, and erase the characters at the remaining M-N character positions in the target area, to obtain a modified second image, where 1 ≤ i ≤ M-(N-1), and M, N and i are positive integers.
Optionally, the processor 1010 is further configured to:
in a case where the target modification content includes third text and the number of characters S of the third text is greater than the number of characters T of the text corresponding to the position of the target area, determine, based on the font size of the text in the target area and the spacing between adjacent characters, the number of characters that can be accommodated in a first blank area, where the first blank area is a blank area between the target area and a first area, and the first area is a text area adjacent to the target area;
if the number of characters that can be accommodated in the first blank area is less than S-T, replace the text corresponding to the position of the target area with the third text based on a first font size and/or a first inter-character spacing, to obtain a modified second image, where the first font size is smaller than the font size of the text in the target area, and the first inter-character spacing is smaller than the spacing between adjacent characters in the target area;
if the number of characters that can be accommodated in the first blank area is greater than or equal to S-T, replace the text corresponding to the position of the target area with the first character of the third text and the T-1 consecutive characters that follow it, and sequentially add the remaining S-T characters of the third text to the first blank area, to obtain a modified second image, where S and T are positive integers.
Optionally, the processor 1010 is further configured to:
in a case where the target modification content includes fourth text and the number of characters of the fourth text is W, determine, based on the font size of the text in the target area and the spacing between adjacent characters, the number Z of characters that can be accommodated in a second blank area, where the second blank area is a blank area between the target area and a second area, and the second area is a text area adjacent to the target area;
if W ≤ Z, add the fourth text to the second blank area to obtain a modified second image, where W and Z are positive integers;
if W > Z, erase the text corresponding to the position of the target area, and add, based on a second font size and/or a second inter-character spacing, the text corresponding to the position of the target area and the fourth text to the erased target area and the second blank area, to obtain a modified second image, where the second font size is smaller than the font size of the text in the target area, and the second inter-character spacing is smaller than the spacing between adjacent characters in the target area.
Optionally, the processor 1010 is further configured to:
acquire feature information of the text corresponding to the position of the target area, where the feature information includes: the font, the font size, the font color, and the spacing between adjacent characters;
erase the text corresponding to the position of the target area;
and add the first text to the erased target area based on the feature information, to obtain a modified second image.
In the embodiments of the present application, the text content in an image can be modified through simple input operations, improving the user's image processing experience.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the graphics processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, and may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present application may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for causing a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to perform the methods described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (18)
1. An image processing method, comprising:
receiving a first input of a first image to be modified in a character modification interface;
selecting a target area of the first image in response to the first input;
acquiring target modification content;
receiving a second input to the character modification interface;
and in response to the second input, performing character modification on the position associated with the target area according to the target modification content to obtain a modified second image.
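By way of illustration only (the sketch below is not part of the claim language), the step of selecting a target area in response to the first input can be pictured as mapping the rectangle traced by the user onto recognized character boxes. The CharBox type, the sample coordinates, and the helper name select_target_area are hypothetical stand-ins for an OCR result rather than anything specified by the application.

```python
# Illustrative sketch only: one way the "select a target area in response to
# the first input" step could map a user-drawn rectangle onto recognized
# character boxes. CharBox and the sample data are hypothetical stand-ins.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CharBox:
    ch: str   # recognized character
    x: int    # left edge in pixels
    y: int    # top edge in pixels
    w: int    # width in pixels
    h: int    # height in pixels

def select_target_area(chars: List[CharBox],
                       selection: Tuple[int, int, int, int]) -> List[int]:
    """Return indices of the characters whose centers fall inside the
    selection rectangle (x0, y0, x1, y1) traced by the first input."""
    x0, y0, x1, y1 = selection
    hit = []
    for i, c in enumerate(chars):
        cx, cy = c.x + c.w / 2, c.y + c.h / 2
        if x0 <= cx <= x1 and y0 <= cy <= y1:
            hit.append(i)
    return hit

if __name__ == "__main__":
    line = [CharBox(ch, 10 + 20 * i, 40, 18, 24) for i, ch in enumerate("2020-06-15")]
    # First input: a slide gesture covering roughly the last five characters.
    print(select_target_area(line, (110, 30, 220, 80)))   # -> [5, 6, 7, 8, 9]
```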
2. The method according to claim 1, wherein after selecting a target area of the first image in response to the first input, the method further comprises:
displaying a text edit box on the character modification interface;
the acquiring of the target modification content comprises:
acquiring the target modification content in the text edit box.
3. The method according to claim 1, wherein the performing character modification on the position associated with the target area according to the target modification content to obtain a modified second image comprises:
in a case where the target modification content does not exist, erasing the characters corresponding to the position of the target area to obtain a modified second image.
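As an illustrative sketch of the erasing recited in claim 3 (not part of the claim language), the snippet below fills the bounding box of the target area with a background colour sampled just outside it, using Pillow. The claim does not prescribe how the erased pixels are reconstructed, so the colour-sampling strategy and the example box coordinates are assumptions; an inpainting step could equally be used.

```python
# Illustrative sketch only: erase the characters in the target area by painting
# their bounding box with a background colour sampled next to the area.
from PIL import Image, ImageDraw

def erase_region(img: Image.Image, box: tuple, sample_offset: int = 3) -> Image.Image:
    """Fill `box` (x0, y0, x1, y1) with a colour sampled just left of it."""
    x0, y0, x1, y1 = box
    sample_x = max(0, x0 - sample_offset)
    background = img.getpixel((sample_x, (y0 + y1) // 2))
    out = img.copy()
    ImageDraw.Draw(out).rectangle(box, fill=background)
    return out

if __name__ == "__main__":
    page = Image.new("RGB", (200, 60), "white")
    ImageDraw.Draw(page).text((20, 20), "hello world", fill="black")
    # Erase the characters roughly covering "world".
    cleaned = erase_region(page, (55, 15, 120, 40))
```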
4. The method according to claim 1, wherein the performing character modification on the position associated with the target area according to the target modification content to obtain a modified second image comprises:
in a case where the target modification content comprises first characters and the number of the first characters is the same as the number of the characters corresponding to the position of the target area, replacing the characters corresponding to the position of the target area with the first characters to obtain a modified second image.
5. The method according to claim 1, wherein the performing character modification on the position associated with the target area according to the target modification content to obtain a modified second image comprises:
in a case where the target modification content comprises second characters and the number N of the second characters is less than the number M of the characters corresponding to the position of the target area, replacing the characters at the i-th to (i+N-1)-th character positions in the target area with the second characters, and erasing the characters at the remaining M-N character positions in the target area to obtain a modified second image, wherein i is greater than or equal to 1 and less than or equal to M-(N-1), and M, N, and i are positive integers.
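Purely as an illustration of the slot-wise behaviour recited in claims 4 and 5 (equal-length and shorter replacements), the following sketch models the target area as a list of character slots. The list-based data model and the 0-based index are simplifying assumptions and are not part of the claim language.

```python
# Illustrative sketch only: when the replacement has as many characters as the
# target area, every slot is overwritten; when it is shorter (N < M), slots
# i .. i+N-1 receive the new characters and the remaining M-N slots are erased.
from typing import List, Optional

def replace_in_slots(target: List[str], new_text: str, i: int = 0) -> List[Optional[str]]:
    m, n = len(target), len(new_text)
    if n > m:
        raise ValueError("longer replacements are handled separately (see claim 6)")
    if not (0 <= i <= m - n):
        raise ValueError("need 0 <= i <= M-N (the claim states this 1-based as 1 <= i <= M-(N-1))")
    out: List[Optional[str]] = [None] * m          # None marks an erased slot
    out[i:i + n] = list(new_text)
    return out

print(replace_in_slots(list("2020-06-15"), "2021-07-01"))   # equal length: full replace
print(replace_in_slots(list("2020-06-15"), "2021", i=0))    # shorter: remaining slots erased
```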
6. The method according to claim 1, wherein the performing character modification on the position associated with the target area according to the target modification content to obtain a modified second image comprises:
in a case where the target modification content comprises third characters and the number S of the third characters is greater than the number T of the characters corresponding to the position of the target area, determining the number of characters that can be accommodated in a first blank area based on the font size of the characters in the target area and the spacing between adjacent characters, wherein the first blank area is a blank area between the target area and a first area, and the first area is a character area adjacent to the target area;
if the number of characters that can be accommodated in the first blank area is less than S-T, replacing the characters corresponding to the position of the target area with the third characters based on a first font size and/or a first inter-character spacing to obtain a modified second image, wherein the first font size is smaller than the font size of the characters in the target area, and the first inter-character spacing is smaller than the spacing between adjacent characters in the target area;
if the number of characters that can be accommodated in the first blank area is greater than or equal to S-T, replacing the characters corresponding to the position of the target area with the first T characters of the third characters, and sequentially adding the remaining S-T characters of the third characters to the first blank area to obtain a modified second image, wherein S and T are positive integers.
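The two branches of claim 6 can be sketched as follows: if the trailing blank gap can absorb the S-T overflow characters, the extra characters spill into the gap; otherwise the font size and/or spacing are reduced. The fixed-width character-cell geometry and the proportional shrink rule below are assumptions for illustration only and are not recited in the claim; in practice the capacity check would be driven by pixel measurements of the recognized characters.

```python
# Illustrative sketch only: overflow handling when the new text is longer than
# the target area (claim 6). Fixed-width character cells are assumed.
from dataclasses import dataclass

@dataclass
class Layout:
    font_size: int   # pixel height, used here to approximate character width
    spacing: int     # gap between adjacent characters in pixels

def chars_fitting_in_gap(gap_px: int, layout: Layout) -> int:
    cell = layout.font_size + layout.spacing     # approximate advance per character
    return max(0, gap_px // cell)

def fit_longer_text(new_text: str, slots: int, gap_px: int, layout: Layout):
    overflow = len(new_text) - slots             # S - T in the claim
    assert overflow > 0
    if chars_fitting_in_gap(gap_px, layout) >= overflow:
        # Branch 2: keep the layout, spill the extra characters into the gap.
        return new_text[:slots], new_text[slots:], layout
    # Branch 1: no room in the gap, so reduce font size and/or spacing.
    reduced = Layout(font_size=int(layout.font_size * slots / len(new_text)),
                     spacing=max(1, layout.spacing - 1))
    return new_text, "", reduced

print(fit_longer_text("2020-06-15 Mon", slots=10, gap_px=90, layout=Layout(16, 4)))
print(fit_longer_text("2020-06-15 Mon", slots=10, gap_px=30, layout=Layout(16, 4)))
```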
7. The method according to claim 1, wherein the performing character modification on the position associated with the target area according to the target modification content to obtain a modified second image comprises:
in a case where the target modification content comprises fourth characters and the number of the fourth characters is W, determining the number Z of characters that can be accommodated in a second blank area based on the font size of the characters in the target area and the spacing between adjacent characters, wherein the second blank area is a blank area between the target area and a second area, and the second area is a character area adjacent to the target area;
if W is less than or equal to Z, adding the fourth characters to the second blank area to obtain a modified second image, wherein W and Z are positive integers;
if W is greater than Z, erasing the characters corresponding to the position of the target area, and adding the characters corresponding to the position of the target area together with the fourth characters to the target area and the second blank area after the character erasing processing, based on a second font size and/or a second inter-character spacing, to obtain a modified second image, wherein the second font size is smaller than the font size of the characters in the target area, and the second inter-character spacing is smaller than the spacing between adjacent characters in the target area.
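Similarly, a minimal sketch of claim 7 (illustrative only): the fourth characters are appended after the existing text if the blank gap can hold them (W ≤ Z); otherwise the existing characters and the addition are re-laid out at a reduced font size and spacing. The fixed-width cell model and the specific shrink rule are assumptions, not part of the claim.

```python
# Illustrative sketch only: appending new text after the target area (claim 7).
def append_text(existing: str, addition: str, gap_px: int,
                font_size: int, spacing: int):
    cell = font_size + spacing
    capacity = gap_px // cell                    # Z in the claim
    if len(addition) <= capacity:                # W <= Z: just use the gap
        return existing + addition, font_size, spacing
    # W > Z: erase and re-render existing + addition within the original extent.
    total_px = len(existing) * cell + gap_px
    new_cell = total_px // (len(existing) + len(addition))
    new_spacing = max(1, spacing - 1)
    new_font = max(1, new_cell - new_spacing)
    return existing + addition, new_font, new_spacing

print(append_text("Invoice No. 0042", " (paid)", gap_px=160, font_size=16, spacing=4))
print(append_text("Invoice No. 0042", " (paid)", gap_px=40, font_size=16, spacing=4))
```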
8. The method according to claim 4 or 5, wherein the performing character modification on the position associated with the target area according to the target modification content to obtain a modified second image comprises:
acquiring feature information of the characters corresponding to the position of the target area, wherein the feature information comprises a font, a font size, a font color, and a character spacing between adjacent characters;
erasing characters corresponding to the position of the target area;
and adding the target modification content correspondingly to the target area subjected to the character erasing processing based on the feature information, to obtain a modified second image.
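A sketch of the erase-and-redraw step in claim 8 using Pillow is shown below (illustrative only); it assumes the feature information (font file, font size, colour, character spacing) has already been estimated by an earlier step, and the font path in the commented example is a hypothetical placeholder that varies per system.

```python
# Illustrative sketch only: redraw the new text with the same font, size,
# colour and spacing as the characters it replaces (claim 8).
from PIL import Image, ImageDraw, ImageFont

def redraw_text(img: Image.Image, box: tuple, new_text: str,
                font_path: str, font_size: int, color: tuple,
                char_spacing: int) -> Image.Image:
    x0, y0, x1, y1 = box
    out = img.copy()
    draw = ImageDraw.Draw(out)
    # Erase the original characters by filling with a nearby background colour.
    draw.rectangle(box, fill=img.getpixel((max(0, x0 - 3), (y0 + y1) // 2)))
    font = ImageFont.truetype(font_path, font_size)
    x = x0
    for ch in new_text:                                  # draw character by character
        draw.text((x, y0), ch, font=font, fill=color)    # so the spacing is preserved
        x += font.getbbox(ch)[2] + char_spacing
    return out

# Example usage (font path is an assumption):
# page = Image.open("document.png")
# fixed = redraw_text(page, (55, 15, 180, 40), "2021-07-01",
#                     "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 22,
#                     (20, 20, 20), 2)
```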
9. An image processing apparatus characterized by comprising:
a first receiving module, configured to receive a first input of a first image to be modified in a character modification interface;
a region selection module, configured to select a target area of the first image in response to the first input;
an acquisition module, configured to acquire target modification content;
a second receiving module, configured to receive a second input to the character modification interface;
and a character modification module, configured to perform, in response to the second input, character modification on the position associated with the target area according to the target modification content to obtain a modified second image.
10. The apparatus according to claim 9, wherein the image processing apparatus further comprises:
a display module, configured to display a text edit box on the character modification interface;
the acquisition module comprises:
a first acquisition unit, configured to acquire the target modification content in the text edit box.
11. The apparatus according to claim 9, wherein the character modification module comprises:
a first modifying unit, configured to erase the characters corresponding to the position of the target area in a case where the target modification content does not exist, to obtain a modified second image.
12. The apparatus according to claim 9, wherein the character modification module comprises:
a second modifying unit, configured to, in a case where the target modification content comprises first characters and the number of the first characters is the same as the number of the characters corresponding to the position of the target area, replace the characters corresponding to the position of the target area with the first characters to obtain a modified second image.
13. The apparatus according to claim 9, wherein the character modification module comprises:
a third modifying unit, configured to, in a case where the target modification content comprises second characters and the number N of the second characters is less than the number M of the characters corresponding to the position of the target area, replace the characters at the i-th to (i+N-1)-th character positions in the target area with the second characters and erase the characters at the remaining M-N character positions in the target area to obtain a modified second image, wherein i is greater than or equal to 1 and less than or equal to M-(N-1), and M, N, and i are positive integers.
14. The apparatus according to claim 9, wherein the character modification module comprises:
a first processing unit, configured to, in a case where the target modification content comprises third characters and the number S of the third characters is greater than the number T of the characters corresponding to the position of the target area, determine the number of characters that can be accommodated in a first blank area based on the font size of the characters in the target area and the spacing between adjacent characters, wherein the first blank area is a blank area between the target area and a first area, and the first area is a character area adjacent to the target area;
a fourth modifying unit, configured to, in a case where the number of characters that can be accommodated in the first blank area is less than S-T, replace the characters corresponding to the position of the target area with the third characters based on a first font size and/or a first inter-character spacing to obtain a modified second image, wherein the first font size is smaller than the font size of the characters in the target area, and the first inter-character spacing is smaller than the spacing between adjacent characters in the target area;
a fifth modifying unit, configured to, in a case where the number of characters that can be accommodated in the first blank area is greater than or equal to S-T, replace the characters corresponding to the position of the target area with the first T characters of the third characters and sequentially add the remaining S-T characters of the third characters to the first blank area to obtain a modified second image, wherein S and T are positive integers.
15. The apparatus according to claim 9, wherein the character modification module comprises:
a second processing unit, configured to, in a case where the target modification content comprises fourth characters and the number of the fourth characters is W, determine the number Z of characters that can be accommodated in a second blank area based on the font size of the characters in the target area and the spacing between adjacent characters, wherein the second blank area is a blank area between the target area and a second area, and the second area is a character area adjacent to the target area;
a sixth modifying unit, configured to, in a case where W is less than or equal to Z, add the fourth characters to the second blank area to obtain a modified second image, wherein W and Z are positive integers;
a seventh modifying unit, configured to, in a case where W is greater than Z, erase the characters corresponding to the position of the target area, and add the characters corresponding to the position of the target area together with the fourth characters to the target area and the second blank area after the character erasing processing, based on a second font size and/or a second inter-character spacing, to obtain a modified second image, wherein the second font size is smaller than the font size of the characters in the target area, and the second inter-character spacing is smaller than the spacing between adjacent characters in the target area.
16. The apparatus according to claim 12 or 13, wherein the character modification module comprises:
a second acquisition unit, configured to acquire feature information of the characters corresponding to the position of the target area, wherein the feature information comprises a font, a font size, a font color, and a character spacing between adjacent characters;
an erasing unit, configured to erase the characters corresponding to the position of the target area;
and an eighth modifying unit, configured to add the target modification content correspondingly to the target area subjected to the character erasing processing based on the feature information, to obtain a modified second image.
17. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or the instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 8.
18. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010543467.6A CN111724455A (en) | 2020-06-15 | 2020-06-15 | Image processing method and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111724455A (en) | 2020-09-29 |
Family ID: 72566950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010543467.6A Pending CN111724455A (en) | 2020-06-15 | 2020-06-15 | Image processing method and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111724455A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5548700A (en) * | 1989-12-29 | 1996-08-20 | Xerox Corporation | Editing text in an image |
JPH1196157A (en) * | 1997-09-25 | 1999-04-09 | Sharp Corp | Document processor, and computer-readable recording medium where document processing program is recorded |
JP2001273509A (en) * | 2000-03-28 | 2001-10-05 | Toshiba Corp | Method and device for editing document picture |
US20150052439A1 (en) * | 2013-08-19 | 2015-02-19 | Kodak Alaris Inc. | Context sensitive adaptable user interface |
CN104715497A (en) * | 2014-12-30 | 2015-06-17 | 上海孩子国科教设备有限公司 | Data replacement method and system |
CN105184838A (en) * | 2015-09-21 | 2015-12-23 | 深圳市金立通信设备有限公司 | Picture processing method and terminal |
CN105654532A (en) * | 2015-12-24 | 2016-06-08 | Tcl集团股份有限公司 | Photo photographing and processing method and system |
CN106293462A (en) * | 2016-08-04 | 2017-01-04 | 广州视睿电子科技有限公司 | Character display method and device |
CN107741816A (en) * | 2017-10-27 | 2018-02-27 | 咪咕动漫有限公司 | A kind of processing method of image information, device and storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112288835A (en) * | 2020-10-29 | 2021-01-29 | 维沃移动通信有限公司 | Image text extraction method and device and electronic equipment |
CN113093960A (en) * | 2021-04-16 | 2021-07-09 | 南京维沃软件技术有限公司 | Image editing method, editing device, electronic device and readable storage medium |
CN113093960B (en) * | 2021-04-16 | 2022-08-02 | 南京维沃软件技术有限公司 | Image editing method, editing device, electronic device and readable storage medium |
Similar Documents
Publication | Title |
---|---|
CN107977155B (en) | Handwriting recognition method, device, equipment and storage medium | |
US12135864B2 (en) | Screen capture method and apparatus, and electronic device | |
CN111724455A (en) | Image processing method and electronic device | |
CN113313027A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111985203A (en) | Document processing method, document processing device and electronic equipment | |
CN114518827A (en) | Text processing method and device | |
CN113283220A (en) | Note recording method, device and equipment and readable storage medium | |
CN112306320A (en) | Page display method, device, equipment and medium | |
CN111930245A (en) | Character input control method and device and electronic equipment | |
CN111985183A (en) | Character input method and device and electronic equipment | |
CN113805709B (en) | Information input method and device | |
WO2023284640A1 (en) | Picture processing method and electronic device | |
CN113794943B (en) | Video cover setting method and device, electronic equipment and storage medium | |
CN112698771B (en) | Display control method, device, electronic equipment and storage medium | |
CN115421632A (en) | Message display method and device, electronic equipment and storage medium | |
CN111796736B (en) | Application sharing method and device and electronic equipment | |
CN113807058A (en) | Text display method and text display device | |
CN113807057A (en) | Method and device for editing characters | |
CN112288835A (en) | Image text extraction method and device and electronic equipment | |
CN112732100A (en) | Information processing method and device and electronic equipment | |
CN114442881A (en) | Information display method and device, electronic equipment and readable storage medium | |
CN114518859A (en) | Display control method, display control device, electronic equipment and storage medium | |
CN113238694A (en) | Interface comparison method and device and mobile terminal | |
CN113010815A (en) | Display method and electronic device | |
CN112764551A (en) | Vocabulary display method and device and electronic equipment |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |