CN109492451B - Coded image identification method and mobile terminal - Google Patents
Coded image identification method and mobile terminal
- Publication number
- CN109492451B CN109492451B CN201811279527.7A CN201811279527A CN109492451B CN 109492451 B CN109492451 B CN 109492451B CN 201811279527 A CN201811279527 A CN 201811279527A CN 109492451 B CN109492451 B CN 109492451B
- Authority
- CN
- China
- Prior art keywords
- image
- coded
- target
- information
- encoded
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/146—Methods for optical code recognition the method including quality enhancement steps
- G06K7/1465—Methods for optical code recognition the method including quality enhancement steps using several successive scans of the optical code
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Toxicology (AREA)
- Health & Medical Sciences (AREA)
- Electromagnetism (AREA)
- General Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Mobile Radio Communication Systems (AREA)
- Telephone Function (AREA)
Abstract
The invention provides a coded image identification method and a mobile terminal, belonging to the technical field of mobile terminals. The mobile terminal identifies an input first coded image. If identification of the first coded image fails, the mobile terminal stores the first coded image and identifies a second coded image input within a preset time length, which increases the probability that a target operation can be executed. Further, if identification of the second coded image also fails, the mobile terminal stores the second coded image, generates a target coded image based on the stored second coded image and the first coded image, and finally executes the target operation based on the target coded image. By generating the target coded image, the information in the first coded image and the second coded image is combined, further increasing the probability that the target operation can be executed.
Description
Technical Field
Embodiments of the invention relate to the technical field of mobile terminals, and in particular to a coded image identification method and a mobile terminal.
Background
Since an image can carry a variety of information, applications often embed the information required for performing different operations into an image by means of encoding, thereby generating an encoded image, for example a two-dimensional code image. A user can then perform the corresponding operation based on the information carried in the encoded image; specifically, the user scans the image to obtain the information, and the operation is performed based on that information.
However, an encoded image may be damaged. For example, the two-dimensional code image on a shared device that is used for the device opening operation may be defaced. The information obtained from the damaged encoded image is then insufficient for performing the corresponding operation, so the operation cannot be carried out.
Disclosure of Invention
The invention provides a coded image identification method and a mobile terminal, which aim to solve the problem that a target operation cannot be executed because an encoded image cannot be identified.
In order to solve the technical problem, the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a coded image identification method applied to a mobile terminal. The method may include:
recognizing an input first encoded image;
if identification of the first encoded image fails, storing the first encoded image, and identifying a second encoded image input within a preset time length;
if identification of the second encoded image fails, storing the second encoded image;
generating a target encoded image based on the stored second encoded image and the first encoded image;
performing a target operation based on the target encoded image.
In a second aspect, an embodiment of the present invention provides a mobile terminal, where the mobile terminal may include:
the first identification module is used for identifying an input first encoded image;
the second identification module is used for storing the first encoded image and identifying a second encoded image input within a preset time length if identification of the first encoded image fails;
the storage module is used for storing the second encoded image if identification of the second encoded image fails;
a generating module for generating a target encoded image based on the stored second encoded image and the first encoded image;
and the execution module is used for executing a target operation based on the target encoded image.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the encoded image recognition method according to the first aspect.
In a fourth aspect, the embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the encoded image recognition method according to the first aspect.
In the embodiment of the invention, the mobile terminal identifies an input first coded image. If identification of the first coded image fails, the mobile terminal stores the first coded image and identifies a second coded image input within a preset time length, which increases the probability that the target operation can be executed. Further, if identification of the second coded image fails, the mobile terminal stores the second coded image, generates a target coded image based on the stored second coded image and the first coded image, and finally executes the target operation based on the target coded image. By generating the target coded image, the information in the first coded image and the second coded image is combined, further increasing the probability that the target operation can be executed.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for identifying a coded image according to an embodiment of the present invention;
FIG. 2-1 is a flow chart illustrating steps of another method for identifying a coded image according to an embodiment of the present invention;
FIG. 2-2 is a schematic diagram of a first encoded image according to an embodiment of the present invention;
FIG. 2-3 is a schematic diagram of a second encoded image according to an embodiment of the present invention;
FIG. 3 is a block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 4 is a block diagram of another mobile terminal according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a flowchart of the steps of a coded image identification method according to an embodiment of the present invention. As shown in FIG. 1, the method may be applied to a mobile terminal and may include the following steps.
Step 101: recognizing an input first encoded image.
In the embodiment of the invention, when a user of the mobile terminal needs to identify an encoded image, the user may click or double-click the encoded image identification icon to start the encoded image scanning function of the mobile terminal. The user may then control the camera of the mobile terminal to capture the first encoded image, so that the first encoded image is input into the mobile terminal. Accordingly, the mobile terminal identifies the input first encoded image, that is, it acquires the information in the first encoded image and executes the corresponding target operation based on that information. The first encoded image may be an encoded image provided on a target device for performing a target operation on the target device, for example, a two-dimensional code image provided on a shared bicycle for unlocking the bicycle.
Step 102: if identification of the first encoded image fails, storing the first encoded image, and identifying a second encoded image input within a preset time length.
In practical applications, the first encoded image set on the target device may be damaged, for example defaced deliberately or scratched during use. The first encoded image may therefore suffer from missing or insufficient information, that is, the amount of information contained in the first encoded image is smaller than the amount of information required for performing the target operation. In that case, the operation of identifying the first encoded image fails, and the mobile terminal cannot perform the target operation on the target device based on the information contained in the first encoded image. Accordingly, in order to improve the probability that the target operation can be performed, if the user still wants to carry out the target operation after identification of the first encoded image fails, the user will usually input a second encoded image within a certain time. Therefore, in this step, the mobile terminal identifies the encoded image input by the user within a preset time length as the second encoded image for the target operation. The preset time length may be the maximum interval between scanning operations when a user performs the target operation by successively scanning multiple encoded images, and this maximum interval may be obtained in advance by testing multiple users. Specifically, the second encoded image may be an encoded image that has the same content as the first encoded image but is located at a different position, for example, two encoded images respectively disposed at the head and the tail of a shared bicycle, where the image the user inputs first is the first encoded image and the image input later is the second encoded image. Further, to avoid the case where the target operation cannot be executed because identification of the second encoded image also fails, in this step the mobile terminal stores the first encoded image when its identification fails, so as to preserve the partial information carried in the first encoded image.
Step 103: if identification of the second encoded image fails, storing the second encoded image.
In practical applications, the second encoded image set on the target device may also be damaged, for example defaced deliberately or scratched during use. The second encoded image may therefore also suffer from missing or insufficient information, that is, the amount of information contained in the second encoded image is smaller than the amount of information required for performing the target operation, so identification of the second encoded image by the mobile terminal fails and the mobile terminal cannot perform the target operation on the target device based on the second encoded image. Further, since the damaged area of the first encoded image may differ from the damaged area of the second encoded image, the information missing from the first encoded image differs from the information missing from the second encoded image; in other words, the first encoded image may contain the information missing from the second encoded image, and the second encoded image may contain the information missing from the first encoded image. Therefore, in this step, the mobile terminal stores the second encoded image when its identification fails, so as to preserve the partial information carried in the second encoded image.
Step 104: generating a target encoded image based on the stored second encoded image and the first encoded image.
In this step, the mobile terminal combines the second encoded image and the first encoded image to generate the target encoded image, thereby integrating the information contained in the first encoded image with the information contained in the second encoded image and maximizing the amount of information available for performing the target operation.
Step 105: performing a target operation based on the target encoded image.
In the embodiment of the present invention, since the target encoded image is generated by combining the second encoded image and the first encoded image, the information it contains is generally richer and more complete, and is therefore more likely to satisfy the information required for performing the target operation. In this step, the mobile terminal performs the target operation based on the target encoded image, which further increases the probability that the target operation can be executed.
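To make the flow of steps 101 to 105 concrete, a minimal Python sketch of the overall control flow is given below. The scan_image, decode, merge, and perform_target_operation helpers and the 30-second window are illustrative assumptions; the embodiment does not prescribe a particular implementation.

```python
import time

PRESET_DURATION_S = 30  # assumed maximum interval between two scan operations


def identify_and_execute(scan_image, decode, merge, perform_target_operation):
    """Sketch of steps 101-105. scan_image() returns the next captured frame or
    None, decode() returns the payload string or None on failure, and merge()
    combines two partially damaged encoded images into a target encoded image."""
    first = scan_image()                          # step 101: identify the first encoded image
    payload = decode(first)
    if payload is not None:
        return perform_target_operation(payload)

    stored_first = first                          # step 102: store the first image
    deadline = time.time() + PRESET_DURATION_S
    while time.time() < deadline:                 # identify a second image within the preset duration
        second = scan_image()
        if second is None:
            continue
        payload = decode(second)
        if payload is not None:                   # the second image alone is sufficient
            return perform_target_operation(payload)
        target = merge(stored_first, second)      # steps 103-104: store the second image and merge
        payload = decode(target)
        if payload is not None:                   # step 105: execute based on the target encoded image
            return perform_target_operation(payload)
    return None                                   # the target operation could not be executed
```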
In summary, according to the coded image identification method provided by the embodiment of the present invention, the mobile terminal identifies an input first coded image; if identification of the first coded image fails, it stores the first coded image and identifies a second coded image input within a preset time length, which increases the probability that the target operation can be executed. Further, if identification of the second coded image fails, it stores the second coded image, generates a target coded image based on the stored second coded image and the first coded image, and finally executes the target operation based on the target coded image. By generating the target coded image, the information in the first coded image and the second coded image is combined, further increasing the probability that the target operation can be executed.
FIG. 2-1 is a flowchart illustrating the steps of another coded image identification method according to an embodiment of the present invention. As shown in FIG. 2-1, the method may be applied to a mobile terminal and may include the following steps.
Step 201: recognizing an input first encoded image.
Specifically, the implementation manner of this step is similar to that of step 101, and accordingly, reference may be made to step 101, which is not described herein again in this embodiment of the present invention.
Step 202: if identification of the first encoded image fails, storing the first encoded image, and identifying a second encoded image input within a preset time length.
In order that the user can learn in time that the information contained in the first encoded image is insufficient, that is, that identification of the first encoded image has failed, and then promptly input a second encoded image to the mobile terminal, in the embodiment of the invention the mobile terminal may also display second reminding information after the first encoded image is stored and before the second encoded image is identified. The second reminding information may be, for example, text information whose content is "the target operation cannot be executed through this encoded image, please scan another available encoded image", or other types of information with other contents, such as voice information or video information, which is not limited in the embodiment of the present invention.
Step 203: if identification of the second encoded image fails, storing the second encoded image.
Specifically, the implementation manner of this step is similar to that of step 103, and accordingly, reference may be made to step 103, which is not described herein again in this embodiment of the present invention.
Step 204: determining a first effective area image based on the first encoded image, and determining a second effective area image based on the second encoded image.
Specifically, this step can be realized by the following substeps (1) to (4):
Substep (1): adjusting the shape of the first encoded image and the shape of the second encoded image to a preset shape by using a preset image correction algorithm.
In this step, the preset image correction algorithm may be an affine transformation. An affine transformation consists of a non-singular linear transformation followed by a translation; in the finite-dimensional case it corresponds to multiplying a vector by a matrix and adding a translation vector, that is, a linear mapping from two-dimensional coordinates to two-dimensional coordinates. An affine transformation preserves the straightness of a two-dimensional figure (a straight line remains a straight line after the transformation) and its parallelism (the relative positional relationship between straight lines is unchanged, parallel lines remain parallel, and the order of points on a line does not change), and three pairs of non-collinear corresponding points determine a unique affine transformation. When performing the correction, the first encoded image and the second encoded image may each be translated, scaled, and rotated based on the affine transformation so that their shapes are adjusted to the preset shape. The preset shape may be a square set in advance to simplify the processing of the images in subsequent steps; of course, other preset shapes may also be used.
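As an illustration of substep (1), the following sketch uses OpenCV's affine transformation to warp an encoded image onto a preset square. The PRESET_SIZE value and the assumption that three non-collinear corner points (for example, the finder patterns of a two-dimensional code) have already been detected are illustrative choices, not requirements of the embodiment.

```python
import cv2
import numpy as np

PRESET_SIZE = 400  # assumed side length of the preset square shape


def correct_to_preset_shape(image, corner_points):
    """Warp an encoded image so that three detected corner points map onto a
    preset square. corner_points holds three non-collinear (x, y) pairs, which
    uniquely determine the affine transformation."""
    src = np.float32(corner_points)
    dst = np.float32([[0, 0], [PRESET_SIZE - 1, 0], [0, PRESET_SIZE - 1]])
    matrix = cv2.getAffineTransform(src, dst)   # translation, scaling and rotation in one matrix
    return cv2.warpAffine(image, matrix, (PRESET_SIZE, PRESET_SIZE))
```

In practice the three corner points would come from the code's position-detection patterns, but any three non-collinear reference points detected in both images would serve equally well.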
Substep (2): respectively determining a first effective region in the first encoded image and a second effective region in the second encoded image by using a preset image detection algorithm.
In this step, the first effective region in the first encoded image is the undamaged region of the first encoded image that still carries information, and the second effective region in the second encoded image is likewise the undamaged region that still carries information. In practical applications, after an encoded image is damaged, the content of the damaged area tends to become blurred; therefore, in this step, the mobile terminal can obtain the first effective region by detecting the regular region with clear edges in the first encoded image, and obtain the second effective region by detecting the regular region with clear edges in the second encoded image.
Specifically, the grayscale images corresponding to the first encoded image and the second encoded image can be obtained by performing graying processing on them. Graying converts a color image into a grayscale image, and the resulting grayscale image still reflects the overall and local distribution of chromaticity and luminance levels of the whole image. In the embodiment of the invention, graying reduces the amount of computation needed for the subsequent processing of the first encoded image and the second encoded image, and thus reduces the consumption of terminal system resources. Then, the edge maps of the first encoded image and the second encoded image may be calculated based on a preset edge operator, such as the Sobel operator or the Laplacian operator, and the effective regions in the two images may be determined respectively based on their edge maps. Of course, in practical applications, the effective regions may also be determined in other ways. For example, because the content of a damaged area tends to become blurred, the texture features of the undamaged region of an encoded image usually differ considerably from those of the damaged region; the mobile terminal may therefore extract texture features at various positions in the first encoded image and determine the first effective region based on them, and likewise extract texture features at various positions in the second encoded image and determine the second effective region based on them.
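A possible realization of substep (2), using graying followed by a Sobel edge map, is sketched below. The block size and edge threshold are assumed values chosen for illustration; a texture-based criterion could be substituted as described above.

```python
import cv2
import numpy as np


def effective_region_mask(image, block=16, edge_threshold=40.0):
    """Estimate the undamaged (effective) region of a corrected encoded image.
    Damaged areas tend to be blurred and have a weak edge response, so a block
    is marked effective when its mean Sobel gradient magnitude is high enough."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # graying reduces later computation
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal edge map
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical edge map
    magnitude = cv2.magnitude(gx, gy)

    mask = np.zeros(gray.shape, dtype=np.uint8)
    height, width = gray.shape
    for y in range(0, height, block):
        for x in range(0, width, block):
            patch = magnitude[y:y + block, x:x + block]
            if patch.mean() > edge_threshold:        # clear, regular edges -> effective region
                mask[y:y + block, x:x + block] = 255
    return mask
```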
Substep (3): setting the pixel value of each pixel point outside the first effective region in the first encoded image to a preset value, to obtain the first effective area image.
In this step, the preset value may be set in advance according to actual requirements; for example, it may be 0 or 255, which is not limited in the embodiment of the present invention. FIG. 2-2 is a schematic diagram of a first encoded image according to an embodiment of the present invention. As shown in FIG. 2-2, the area a covered by oblique lines represents the first effective region, so the mobile terminal may set the pixel value of each pixel point in the area other than area a, that is, area b, to the preset value; after the setting is completed, the first effective area image is obtained.
Substep (4): setting the pixel value of each pixel point outside the second effective region in the second encoded image to the preset value, to obtain the second effective area image.
For example, FIG. 2-3 is a schematic diagram of a second encoded image according to an embodiment of the present invention. As shown in FIG. 2-3, the area c covered by oblique lines represents the second effective region, so the mobile terminal may set the pixel value of each pixel point in the area other than area c, that is, area d, to the preset value; after the setting is completed, the second effective area image is obtained.
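Substeps (3) and (4) then reduce to a simple masking operation; the sketch below assumes a preset value of 0, although, as noted above, 255 would work equally well.

```python
PRESET_VALUE = 0  # assumed preset value for pixels outside the effective region


def to_effective_area_image(corrected_image, effective_mask):
    """Set every pixel outside the effective region to the preset value.
    corrected_image and effective_mask come from the sketches above and have
    the same height and width."""
    result = corrected_image.copy()
    result[effective_mask == 0] = PRESET_VALUE   # area b / area d in FIGS. 2-2 and 2-3
    return result
```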
Step 205: combining the first effective area image with the second effective area image to generate a target encoded image.
Specifically, this step can be realized by the following substeps (5) to (7):
Substep (5): determining one of the first effective area image and the second effective area image as the reference image, and determining the other as the comparison image.
Specifically, the mobile terminal may use the first effective area image as the reference image and the second effective area image as the comparison image, or use the second effective area image as the reference image and the first effective area image as the comparison image; the embodiment of the present invention does not limit this.
Substep (6): for each reference pixel point in the reference image, if the pixel value of the reference pixel point is different from the pixel value of the comparison pixel point at the same position in the comparison image and the pixel value of the reference pixel point is the preset value, adjusting the pixel value of the reference pixel point to the pixel value of the comparison pixel point at the same position in the comparison image.
In this step, the pixel points of the reference image are referred to as reference pixel points, and the pixel points of the comparison image are referred to as comparison pixel points. Specifically, the mobile terminal compares, in turn, the pixel value of each reference pixel point in the reference image with the pixel value of the comparison pixel point at the same position in the comparison image. If the two pixel values are the same, either both pixel points carry information, or both are damaged pixel points that carry no information; in either case, the reference pixel point need not be processed.
Further, if the pixel value of the reference pixel point differs from the pixel value of the comparison pixel point at the same position, it is further judged whether the pixel value of the reference pixel point is the preset value. If it is not the preset value, the reference pixel point carries information and need not be processed. If it is the preset value, the reference pixel point is a damaged pixel point that carries no information while the comparison pixel point at the same position does carry information; in this case, the pixel value of the reference pixel point is adjusted to the pixel value of the comparison pixel point at the same position in the comparison image, so as to enrich the information carried in the reference image.
Substep (7): determining the adjusted reference image as the target encoded image.
Correspondingly, after all reference pixel points have been traversed and all those that need adjusting have been adjusted, the adjusted reference image can be determined as the target encoded image; the information carried in the target encoded image is then all of the information carried in the first encoded image and the second encoded image.
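A compact NumPy sketch of substeps (5) to (7) is shown below. It vectorises the per-pixel comparison described above instead of looping explicitly, with the preset value again assumed to be 0; the behaviour is the same.

```python
import numpy as np


def merge_effective_area_images(reference, comparison, preset_value=0):
    """Wherever the reference pixel differs from the comparison pixel and the
    reference pixel equals the preset value (i.e. it is a damaged placeholder),
    copy the comparison pixel into the reference; the adjusted reference image
    is the target encoded image."""
    target = reference.copy()
    differs = target != comparison
    is_placeholder = target == preset_value
    replace = differs & is_placeholder
    target[replace] = comparison[replace]
    return target
```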
Step 206: performing a target operation based on the target encoded image.
Specifically, in this step, the mobile terminal may first parse the target encoded image to obtain the information it contains. For example, each black cell in the target encoded image may be parsed as 1 and each white cell as 0, and the results ordered according to a preset sequence to obtain a character string; this character string is the information contained in the target encoded image. Then, if the amount of information contained in the target encoded image is not less than the amount of information required for performing the target operation, the mobile terminal may perform the target operation according to the information contained in the target encoded image.
For example, the mobile terminal may determine whether the length of the parsed character string matches a preset length; if the length of the character string is not less than the preset length, the information contained in the target encoded image is considered sufficient, and the target operation may be performed according to it. It should be noted that, in an actual application scenario, multiple encoded images may be set for one target operation, that is, several second encoded images may exist. Therefore, in the embodiment of the present invention, when the target operation cannot be executed based on the target encoded image, another second encoded image input by the user may be identified, and if that identification also fails, the other second encoded image may be combined with the target encoded image by the above method to continue the synthesis, so as to further improve the probability that the target operation can be executed.
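The parsing and length check of step 206 might be sketched as follows. The sample_modules helper and the preset length of 128 are hypothetical placeholders, since the embodiment does not specify how the cells are sampled or what the preset length is.

```python
def parse_and_execute(target_image, sample_modules, perform_target_operation,
                      preset_length=128):
    """Sketch of step 206: sample_modules() is assumed to return the cells of
    the target encoded image in the preset order as booleans (black = True).
    Returns True if the target operation was executed."""
    bits = ['1' if black else '0' for black in sample_modules(target_image)]
    payload = ''.join(bits)                  # character string carried by the target image
    if len(payload) >= preset_length:        # information is sufficient
        perform_target_operation(payload)
        return True
    # Information is insufficient: show the first reminding information instead.
    print("Information insufficient, the target operation cannot be executed.")
    return False
```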
In summary, in the other coded image identification method provided by the embodiment of the present invention, the mobile terminal identifies an input first encoded image; if identification of the first encoded image fails, it stores the first encoded image and identifies a second encoded image input within a preset time length, which increases the probability that the target operation can be executed. Further, if identification of the second encoded image fails, the mobile terminal determines a first effective area image based on the first encoded image and a second effective area image based on the second encoded image, and combines the two to generate a target encoded image, so that all of the information carried in the first encoded image and the second encoded image is carried in the target encoded image. Finally, the target operation is executed based on the target encoded image, further increasing the probability that the target operation can be executed.
Fig. 3 is a block diagram of a mobile terminal according to an embodiment of the present invention, and as shown in fig. 3, the mobile terminal 30 may include:
the first recognition module 301 is used for recognizing the input first coded image.
A second identifying module 302, configured to store the first encoded image and identify a second encoded image input within a preset time length if identification of the first encoded image fails.
A storage module 303, configured to store the second encoded image if identification of the second encoded image fails.
A generating module 304, configured to generate a target encoded image based on the stored second encoded image and the first encoded image.
An execution module 305 for executing a target operation based on the target encoded image.
In summary, the mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiment of FIG. 1, and details are not repeated here to avoid repetition. The mobile terminal identifies an input first coded image; if identification of the first coded image fails, it stores the first coded image and identifies a second coded image input within a preset time length, which increases the probability that the target operation can be executed. Further, if identification of the second coded image fails, it stores the second coded image, generates a target coded image based on the stored second coded image and the first coded image, and finally executes the target operation based on the target coded image. By generating the target coded image, the information in the first coded image and the second coded image is combined, further increasing the probability that the target operation can be executed.
FIG. 4 is a block diagram of another mobile terminal according to an embodiment of the present invention, and as shown in FIG. 4, the mobile terminal 40 may include:
the first recognition module 401 is configured to recognize an input first encoded image.
A second identifying module 402, configured to store the first encoded image and identify a second encoded image input within a preset time length if identification of the first encoded image fails.
A storage module 403, configured to store the second encoded image if identification of the second encoded image fails.
A generating module 404, configured to generate a target encoded image based on the stored second encoded image and the first encoded image.
An execution module 405 for executing a target operation based on the target encoded image.
Optionally, the generating module 404 includes:
a determination sub-module 4041 is configured to determine a first effective area image based on the first encoded image and a second effective area image based on the second encoded image.
The generating sub-module 4042 is configured to combine the first effective region image and the second effective region image to generate a target encoded image.
Optionally, the determining sub-module 4041 is configured to:
and adjusting the shape of the first coded image and the shape of the second coded image to a preset shape by utilizing a preset image rectification algorithm.
And respectively determining a first effective region in the first coded image and a second effective region in the second coded image by using a preset image detection algorithm.
And setting the pixel value of each pixel point in the first coding image except the first effective region as a preset value to obtain the first effective region image.
And setting the pixel value of each pixel point in the second coding image except the second effective region as the preset value to obtain the second effective region image.
Optionally, the generating sub-module 4042 is configured to:
and determining one of the first effective area image and the second effective area image as a reference image, and determining the other image as a contrast image.
And for each reference pixel point in the reference image, if the pixel value of the reference pixel point is different from the pixel value of the comparison pixel point at the same position in the comparison image, and the pixel value of the reference pixel point is a preset value, adjusting the pixel value of the reference pixel point to be the pixel value of the comparison pixel point at the same position in the comparison image.
And determining the adjusted reference image as a target coding image.
Optionally, the executing module 405 is configured to:
analyzing the target encoded image to acquire the information contained in the target encoded image;
if the amount of information contained in the target encoded image is not less than the amount of information required for executing the target operation, executing the target operation according to the information contained in the target encoded image;
if the amount of information contained in the target encoded image is less than the amount of information required for executing the target operation, displaying first reminding information;
the first reminding information is used for reminding a user that the information is insufficient and the target operation cannot be executed.
Optionally, the mobile terminal 40 further includes:
a display module, used for displaying second reminding information; the second reminding information is used for reminding the user to input a second encoded image.
In summary, the mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiment of FIG. 2-1, and details are not repeated here to avoid repetition. The mobile terminal identifies an input first coded image; if identification of the first coded image fails, it stores the first coded image and identifies a second coded image input within a preset time length, which increases the probability that the target operation can be executed. Further, if identification of the second coded image fails, the mobile terminal determines a first effective area image based on the first coded image and a second effective area image based on the second coded image, and combines the two to generate a target coded image, so that all of the information carried in the first coded image and the second coded image is carried in the target coded image. Finally, the target operation is executed based on the target coded image, further increasing the probability that the target operation can be executed.
Fig. 5 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 500 includes, but is not limited to: radio frequency unit 501, network module 502, audio output unit 503, input unit 504, sensor 505, display unit 506, user input unit 507, interface unit 508, memory 509, processor 510, and power supply 511. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 5 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to: recognize an input first encoded image; if identification of the first encoded image fails, store the first encoded image and identify a second encoded image input within a preset time length; if identification of the second encoded image fails, store the second encoded image; generate a target encoded image based on the stored second encoded image and the first encoded image; and perform a target operation based on the target encoded image.
In summary, the mobile terminal identifies an input first encoded image; if identification of the first encoded image fails, it stores the first encoded image and identifies a second encoded image input within a preset time length, which increases the probability that the target operation can be executed. Further, if identification of the second encoded image fails, it stores the second encoded image, generates a target encoded image based on the stored second encoded image and the first encoded image, and finally performs the target operation based on the target encoded image. By generating the target encoded image, the information in the first encoded image and the second encoded image is combined, further increasing the probability that the target operation can be executed.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during messaging or a call; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 502, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and may be capable of processing such sounds into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501 and output.
The mobile terminal 500 also includes at least one sensor 505, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the mobile terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 5, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 500 or may be used to transmit data between the mobile terminal 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the mobile terminal. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The mobile terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When the computer program is executed by the processor 510, the processes of the above coded image recognition method embodiments are implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the processes of the above coded image recognition method embodiments are implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. The term "comprising" specifies the presence of the stated features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (12)
1. A coded image recognition method applied to a mobile terminal, characterized by comprising:
recognizing an input first encoded image;
if identification of the first encoded image fails, storing the first encoded image, and identifying a second encoded image input within a preset time length;
if identification of the second encoded image fails, storing the second encoded image;
generating a target encoded image based on the stored second encoded image and the first encoded image;
performing a target operation based on the target encoded image;
wherein the generating a target encoded image based on the stored second encoded image and the first encoded image comprises:
determining a first effective area image based on the first encoded image, and determining a second effective area image based on the second encoded image;
combining the first effective area image with the second effective area image to generate the target encoded image;
wherein the second encoded image is an encoded image that has the same content as the first encoded image but is located at a different position, and the information missing from the second encoded image is different from the information missing from the first encoded image.
2. The method of claim 1, wherein the determining a first effective area image based on the first encoded image and determining a second effective area image based on the second encoded image comprises:
adjusting the shape of the first coded image and the shape of the second coded image to a preset shape by using a preset image correction algorithm;
respectively determining a first effective region in the first coded image and a second effective region in the second coded image by using a preset image detection algorithm;
setting the pixel value of each pixel point outside the first effective region in the first coded image to a preset value to obtain the first effective area image;
and setting the pixel value of each pixel point outside the second effective region in the second coded image to the preset value to obtain the second effective area image.
3. The method of claim 2, wherein the combining the first effective area image with the second effective area image to generate a target encoded image comprises:
determining one of the first effective area image and the second effective area image as a reference image, and determining the other as a comparison image;
for each reference pixel point in the reference image, if the pixel value of the reference pixel point is different from the pixel value of the comparison pixel point at the same position in the comparison image and the pixel value of the reference pixel point is the preset value, adjusting the pixel value of the reference pixel point to the pixel value of the comparison pixel point at the same position in the comparison image;
and determining the adjusted reference image as the target encoded image.
4. The method of claim 1, wherein the performing the target operation based on the target encoded image comprises:
analyzing the target encoded image to acquire the information contained in the target encoded image;
if the amount of information contained in the target encoded image is not less than the amount of information required for executing the target operation, executing the target operation according to the information contained in the target encoded image;
if the amount of information contained in the target encoded image is less than the amount of information required for executing the target operation, displaying first reminding information;
the first reminding information is used for reminding a user that the information is insufficient and the target operation cannot be executed.
5. The method of claim 1, wherein after storing the first encoded image, the method further comprises:
displaying second reminding information, wherein the second reminding information is used for reminding the user to input a second encoded image.
6. A mobile terminal, characterized in that the mobile terminal comprises:
the first identification module is used for identifying an input first coded image;
the second identification module is used for storing the first coded image and identifying a second coded image input within a preset time length if identification of the first coded image fails;
the storage module is used for storing the second coded image if identification of the second coded image fails;
a generating module, configured to generate a target encoded image based on the stored second encoded image and the first encoded image;
an execution module to execute a target operation based on the target encoded image;
the generation module comprises:
a determination sub-module for determining a first effective area image based on the first encoded image and a second effective area image based on the second encoded image;
a generation submodule, configured to combine the first effective region image with the second effective region image to generate a target encoded image;
wherein the second coded image is a coded image that has the same content as the first coded image but is located at a different position, and the information missing from the second coded image is different from the information missing from the first coded image.
7. The mobile terminal of claim 6, wherein the determining submodule is configured to:
adjusting the shape of the first coded image and the shape of the second coded image to a preset shape by using a preset image correction algorithm;
respectively determining a first effective region in the first coded image and a second effective region in the second coded image by using a preset image detection algorithm;
setting the pixel value of each pixel point outside the first effective region in the first coded image to a preset value to obtain the first effective area image;
and setting the pixel value of each pixel point outside the second effective region in the second coded image to the preset value to obtain the second effective area image.
8. The mobile terminal of claim 6, wherein the generation submodule is configured to:
determining one of the first effective area image and the second effective area image as a reference image, and determining the other as a comparison image;
for each reference pixel point in the reference image, if the pixel value of the reference pixel point is different from the pixel value of the comparison pixel point at the same position in the comparison image and the pixel value of the reference pixel point is the preset value, adjusting the pixel value of the reference pixel point to the pixel value of the comparison pixel point at the same position in the comparison image;
and determining the adjusted reference image as the target encoded image.
9. The mobile terminal of claim 6, wherein the execution module is configured to:
analyzing the target encoded image to acquire the information contained in the target encoded image;
if the information quantity of the information contained in the target encoded image is not less than the information quantity of the information required for executing the target operation, executing the target operation according to the information contained in the target encoded image;
if the information quantity of the information contained in the target encoded image is less than the information quantity of the information required for executing the target operation, displaying first reminding information;
wherein the first reminding information is used for reminding the user that the information is insufficient and the target operation cannot be executed.
10. The mobile terminal of claim 6, wherein the mobile terminal further comprises:
the display module is used for displaying second reminding information, wherein the second reminding information is used for reminding the user to input a second encoded image.
11. A mobile terminal, characterized in that it comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the coded image identification method according to any one of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the coded image identification method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811279527.7A CN109492451B (en) | 2018-10-30 | 2018-10-30 | Coded image identification method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109492451A CN109492451A (en) | 2019-03-19 |
CN109492451B (en) | 2022-08-16 |
Family
ID=65693245
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811279527.7A (CN109492451B, Active) | 2018-10-30 | 2018-10-30 | Coded image identification method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109492451B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112507988B (en) * | 2021-02-04 | 2021-05-25 | 联仁健康医疗大数据科技股份有限公司 | Image processing method and device, storage medium and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11110476A (en) * | 1997-10-08 | 1999-04-23 | Fujitsu Ltd | Issuing and reading devices of recording carrier recording digital code signal and recording carrier |
JP2006252010A (en) * | 2005-03-09 | 2006-09-21 | Masahiro Kutogi | It article and method for manufacturing the same |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3676937B2 (en) * | 1999-01-25 | 2005-07-27 | 大日本印刷株式会社 | Image processing system, image transmission system, and recording medium |
US7028861B2 (en) * | 2003-12-16 | 2006-04-18 | Joseph S. Kanfer | Electronically keyed dispensing systems and related methods of installation and use |
US20060101177A1 (en) * | 2004-10-05 | 2006-05-11 | Plustek Inc. | System with Universal Serial Bus (USB) host functions and its processing methods |
JP2008099134A (en) * | 2006-10-13 | 2008-04-24 | Fuji Xerox Co Ltd | Data decoding apparatus and program |
CN100511271C (en) * | 2006-11-16 | 2009-07-08 | 深圳市天朗时代科技有限公司 | Two-dimensional decoding method |
US7920717B2 (en) * | 2007-02-20 | 2011-04-05 | Microsoft Corporation | Pixel extraction and replacement |
JP4902569B2 (en) * | 2008-02-19 | 2012-03-21 | キヤノン株式会社 | Image coding apparatus and control method thereof |
JP2010028309A (en) * | 2008-07-16 | 2010-02-04 | Canon Inc | Apparatus, method, program, and storage medium |
US8677227B2 (en) * | 2010-08-25 | 2014-03-18 | Royal Institution for the Advancement of Learning / McGill University | Method and system for decoding |
CN102625103A (en) * | 2012-03-23 | 2012-08-01 | 李卫伟 | Method and device for decoding |
CN103369331B (en) * | 2012-03-27 | 2016-12-21 | 北京数码视讯科技股份有限公司 | The complementing method of image cavity and device and the treating method and apparatus of video image |
US10055643B2 (en) * | 2014-09-19 | 2018-08-21 | Bendix Commercial Vehicle Systems Llc | Advanced blending of stitched images for 3D object reproduction |
CN104778491B (en) * | 2014-10-13 | 2017-11-07 | 刘整 | For the image code of information processing and generation with parsing its apparatus and method |
CN105989175B (en) * | 2015-03-05 | 2019-11-22 | 深圳市腾讯计算机系统有限公司 | Image display method and apparatus |
CN104657700B (en) * | 2015-03-25 | 2017-07-25 | 广州宽度信息技术有限公司 | A kind of Quick Response Code resistant to damage coding/decoding method |
CN106156684B (en) * | 2016-06-30 | 2019-01-18 | 南京理工大学 | A kind of two-dimensional code identification method and device |
CN107992780B (en) * | 2017-10-31 | 2021-05-28 | 维沃移动通信有限公司 | Code identification method and mobile terminal |
CN107818283A (en) * | 2017-11-02 | 2018-03-20 | 深圳天珑无线科技有限公司 | Quick Response Code image pickup method, mobile terminal and computer-readable recording medium |
CN108021839B (en) * | 2017-12-08 | 2020-10-23 | 博众精工科技股份有限公司 | Error correction reading method and system for QR (quick response) code |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant