CN108182457B - Method and apparatus for generating information
- Publication number: CN108182457B (application CN201810088070.5A)
- Authority: CN (China)
- Prior art keywords: image, matching, pixel point, point, region
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
The embodiments of the present application disclose a method and apparatus for generating information. One embodiment of the method comprises: matching each first feature point in a first feature point set with each second feature point in a second feature point set to obtain a set of matching feature point pairs; determining the density with which the second feature points contained in the matching feature point pair set are distributed over image regions of a second image, and determining the image region corresponding to the greatest of the determined densities as a matching-feature-point dense region; determining the second feature points contained in the dense region; determining the set of matching feature point pairs containing the determined second feature points as a corrected matching feature point pair set; and generating a first matching result for the target pixel point based on the corrected matching feature point pair set and the target pixel point. The method and apparatus improve the accuracy with which the matching result of a target pixel point is determined.
Description
Technical Field
The embodiments of the present application relate to the field of computer technology, in particular to the field of the internet, and more particularly to a method and apparatus for generating information.
Background
In the prior art, when a website is tested for compatibility across different devices, each device often has to be tested independently. Alternatively, the position of a target point of a website page on the screen of one device may be determined from its position on the screen of another device, so that the compatibility test can be driven by an automated test tool or program.
At present, methods for determining the corresponding point of a target point on the screen of another device mainly include the following two: one is positioning based on the attributes of a page control (such as its id or name); the other is matching based on a single image feature, for example an image feature obtained by the Scale-Invariant Feature Transform (SIFT).
Disclosure of Invention
The embodiment of the application provides a method and a device for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, the method including: matching each first feature point in a first feature point set with each second feature point in a second feature point set to obtain a set of matching feature point pairs, where the first feature points are feature points of a target image region included in a first image, the target image region contains a target pixel point, and the second feature points are feature points of a second image; determining the density with which the second feature points contained in the matching feature point pair set are distributed over image regions of the second image, and determining the image region corresponding to the greatest of the determined densities as a matching-feature-point dense region; determining the second feature points contained in the dense region; determining the set of matching feature point pairs containing the determined second feature points as a corrected matching feature point pair set; and generating a first matching result for the target pixel point based on the corrected matching feature point pair set and the target pixel point.
In some embodiments, the method further comprises: taking each pixel point in a neighborhood of the target pixel point in the first image as a first seed pixel point, and determining, with a region growing algorithm, an aggregation region of first seed pixel points that meets a preset screening condition as a first region; taking each pixel point in the second image as a second seed pixel point, and determining, with the region growing algorithm, an aggregation region of second seed pixel points that meets the preset screening condition as a second region; determining a second region meeting at least one of the following matching conditions as a second region matching the first region: the difference between the filling degree of the second region and that of the first region is smaller than a preset filling-degree threshold; the difference between the aspect ratio of the second region and that of the first region is smaller than a preset aspect-ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; determining the combination of the first region and a second region matching it as a matching region pair; and generating a second matching result for the target pixel point based on the matching region pair and the target pixel point.
In some embodiments, the preset screening condition comprises at least one of: the product of a first preset spacing value, the height of the first image, and the width of the first image is less than the number of pixels in the aggregation region; the width of the aggregation region is smaller than the product of the width of the first image and a second preset spacing value; the height of the aggregation region is smaller than the product of the height of the first image and the second preset spacing value.
In some embodiments, a first word set is presented in a neighborhood of the target pixel point and a second word set is presented in the second image; and the method further comprises: for each first word in the first word set, determining a second word in the second word set that matches it, and determining the combination of the first word and its matching second word as a matching word pair; and generating a third matching result for the target pixel point based on the matching word pairs and the target pixel point.
In some embodiments, determining a second word that matches the first word comprises: determining the four-corner code of the first word; determining the similarity between each second word in the second word set and the first word; and determining a second word that has the same four-corner code as the first word and/or the greatest similarity as the second word matching the first word.
In some embodiments, the method further comprises: performing a template matching operation on the second image using the neighborhood of the target pixel point, determining the similarity between each image region of the second image and the neighborhood, and determining the image region of the second image with the greatest of the determined similarities as a matching image region; determining a selected pixel point in the neighborhood and its matching pixel point in the matching image region; and generating a fourth matching result for the target pixel point based on the selected pixel point, the matching pixel point, and the target pixel point.
In some embodiments, the method further comprises: generating a final matching result based on the generated matching results.
In some embodiments, the first image is an image displayed by the first electronic device when the target web page is presented on the first electronic device, and the second image is an image displayed by the second electronic device when the target web page is presented on the second electronic device.
In some embodiments, the method further comprises: performing a compatibility test on the website associated with the target web page based on the final matching result.
In a second aspect, an embodiment of the present application provides an apparatus for generating information, the apparatus including: a matching unit configured to match each first feature point in a first feature point set with each second feature point in a second feature point set to obtain a set of matching feature point pairs, where the first feature points are feature points of a target image region included in a first image, the target image region contains a target pixel point, and the second feature points are feature points of a second image; a first determining unit configured to determine the density with which the second feature points contained in the matching feature point pair set are distributed over image regions of the second image, and to determine the image region corresponding to the greatest of the determined densities as a matching-feature-point dense region; a second determining unit configured to determine the second feature points contained in the dense region; a third determining unit configured to determine the set of matching feature point pairs containing the determined second feature points as a corrected matching feature point pair set; and a first generating unit configured to generate a first matching result for the target pixel point based on the corrected matching feature point pair set and the target pixel point.
In some embodiments, the apparatus further comprises: a fourth determining unit configured to take each pixel point in a neighborhood of the target pixel point in the first image as a first seed pixel point and to determine, with a region growing algorithm, an aggregation region of first seed pixel points that meets a preset screening condition as a first region; a fifth determining unit configured to take each pixel point in the second image as a second seed pixel point and to determine, with the region growing algorithm, an aggregation region of second seed pixel points that meets the preset screening condition as a second region; a sixth determining unit configured to determine a second region meeting at least one of the following matching conditions as a second region matching the first region: the difference between the filling degree of the second region and that of the first region is smaller than a preset filling-degree threshold; the difference between the aspect ratio of the second region and that of the first region is smaller than a preset aspect-ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; a seventh determining unit configured to determine the combination of the first region and a second region matching it as a matching region pair; and a second generating unit configured to generate a second matching result for the target pixel point based on the matching region pair and the target pixel point.
In some embodiments, the preset screening condition includes at least one of: the product of the first preset spacing value, the height of the first image, and the width of the first image is less than the number of pixels in the aggregation region; the width of the aggregation region is smaller than the product of the width of the first image and the second preset spacing value; the height of the aggregation region is smaller than the product of the height of the first image and the second preset spacing value.
In some embodiments, a first word set is presented in a neighborhood of the target pixel point and a second word set is presented in the second image; and the apparatus further comprises: an eighth determining unit configured to determine, for each first word in the first word set, a second word in the second word set that matches it, and to determine the combination of the first word and its matching second word as a matching word pair; and a third generating unit configured to generate a third matching result for the target pixel point based on the matching word pairs and the target pixel point.
In some embodiments, the eighth determining unit includes: a first determining module configured to determine the four-corner code of the first word; a second determining module configured to determine the similarity between each second word in the second word set and the first word; and a third determining module configured to determine a second word that has the same four-corner code as the first word and/or the greatest similarity as the second word matching the first word.
In some embodiments, the apparatus further comprises: a ninth determining unit configured to perform a template matching operation on the second image using the neighborhood of the target pixel point, determine the similarity between each image region of the second image and the neighborhood, and determine the image region of the second image with the greatest of the determined similarities as a matching image region; a tenth determining unit configured to determine a selected pixel point in the neighborhood and its matching pixel point in the matching image region; and a fourth generating unit configured to generate a fourth matching result for the target pixel point based on the selected pixel point, the matching pixel point, and the target pixel point.
In some embodiments, the apparatus further comprises: a fifth generating unit configured to generate a final matching result based on the generated matching results.
In some embodiments, the first image is an image displayed by the first electronic device when the target web page is presented on the first electronic device, and the second image is an image displayed by the second electronic device when the target web page is presented on the second electronic device.
In some embodiments, the apparatus further comprises: a testing unit configured to perform a compatibility test on the website associated with the target web page based on the final matching result.
In a third aspect, an embodiment of the present application provides an electronic device for generating information, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of the embodiments of the method for generating information as described above.
In a fourth aspect, the present application provides a computer-readable storage medium for generating information, on which a computer program is stored, which when executed by a processor implements the method of any one of the embodiments of the method for generating information as described above.
With the method and apparatus for generating information provided by the embodiments of the present application, a set of matching feature point pairs is obtained by matching each first feature point in a first feature point set with each second feature point in a second feature point set; the density with which the second feature points contained in that set are distributed over image regions of the second image is determined, and the image region corresponding to the greatest density is taken as the matching-feature-point dense region; the second feature points contained in the dense region are determined; the matching feature point pairs containing those second feature points are retained as the corrected matching feature point pair set; and finally a first matching result for the target pixel point is generated from the corrected set and the target pixel point. This improves the accuracy with which the matching result of the target pixel point is determined.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for generating information according to the present application;
FIG. 3A' is a schematic illustration of the first image in a first application scenario of the method for generating information according to the present application;
FIG. 3A'' is a schematic illustration of the second image in the first application scenario;
FIG. 3B' is a schematic diagram of the position of the target pixel point in the first application scenario;
FIG. 3B'' is a schematic diagram of determining the position of the corresponding point of the target pixel point in the first application scenario;
FIG. 3C' is a schematic illustration of the first image in a second application scenario;
FIG. 3C'' is a schematic illustration of the second image in the second application scenario;
FIG. 3D' is a schematic diagram of the position of the target pixel point in the second application scenario;
FIG. 3D'' is a schematic diagram of determining the position of the corresponding point of the target pixel point in the second application scenario;
FIG. 3E' is a schematic illustration of the first image in a third application scenario;
FIG. 3E'' is a schematic illustration of the second image in the third application scenario;
FIG. 3F' is a schematic illustration of the first image in a fourth application scenario;
FIG. 3F'' is a schematic illustration of the second image in the fourth application scenario;
FIG. 4 is a flow diagram of yet another embodiment of a method for generating information according to the present application;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for generating information according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein merely illustrate the relevant invention and do not restrict it. It should also be noted that, for convenience of description, only the portions related to the relevant invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating information or the apparatus for generating information of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background web server providing support for web pages displayed on the terminal devices 101, 102, 103. The background web server can analyze and process the received data and feed back the processing result to the terminal equipment.
The server 105 may also be a background information processing server that acquires images displayed on the terminal apparatuses 101, 102, 103. The background information processing server may perform processing such as corresponding point matching on the received or acquired pictures from different terminal devices, and feed back a processing result (e.g., matching result information) to the terminal device.
It should be noted that, in practice, the method for generating information provided by the embodiments of the present application often needs to be executed by a relatively high-performance electronic device, and the apparatus for generating information likewise needs to be implemented on one. Since servers tend to offer higher performance than terminal devices, the method is generally performed by the server 105, and the apparatus for generating information is accordingly disposed in the server 105. However, when the performance of a terminal device satisfies the execution conditions of the method or the deployment conditions of the apparatus, the method may also be executed by, and the apparatus disposed in, the terminal devices 101, 102, 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating information in accordance with the present application is shown. The method for generating information comprises the following steps:
Step 201: matching each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a set of matching feature point pairs.

In this embodiment, the electronic device on which the method for generating information runs (for example, the server or a terminal device shown in FIG. 1) may match each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a set of matching feature point pairs. The first feature points are feature points of a target image region included in the first image, the target image region contains the target pixel point, and the second feature points are feature points of the second image. The shape and size of the target image region may be set in advance. For example, the target image region may be a circle centered on the target pixel point (or another pixel point in the first image) with a radius of, say, 0.5 times the width of the first image; or it may be a square centered on the target pixel point (or another pixel point) with a side length of, say, 0.5 times the width of the first image. The feature points may be pixel points in an image that represent the image's color and texture information.
In practice, the electronic device described above may perform this step as follows:
First, the electronic device may extract SURF (Speeded-Up Robust Features) feature points of the target image region containing the target pixel point on the first image to obtain a first SURF feature point set, and calculate the feature vector of each first SURF feature point to obtain a first feature vector set.
Then, the electronic device may extract SURF feature points of the second image to obtain a second SURF feature point set, and calculate a feature vector of each second SURF feature point in the second SURF feature point set to obtain a second feature vector set.
Then, the electronic device may determine, for each first SURF feature point in the first SURF feature point set (denoted point A), the second SURF feature point in the second SURF feature point set with the minimum distance (e.g., Euclidean distance, Manhattan distance) to point A (denoted point B1), and the second SURF feature point with the next-smallest distance to point A (denoted point B2). The distance between points A and B1 is denoted L1; the distance between points A and B2 is denoted L2.

Subsequently, the electronic device may calculate the ratio of L1 to L2 and, if the ratio is smaller than a preset threshold, determine point B1 as the matching feature point of point A and the combination of points A and B1 as a matching feature point pair. The threshold can be used to characterize the degree of similarity between point A and point B1.

Finally, the electronic device may determine the matching feature point of each first SURF feature point in this way, obtaining the set of matching feature point pairs.
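The matching step above can be condensed into a short sketch. The following is a minimal, non-authoritative Python sketch assuming OpenCV with the contrib module providing SURF (cv2.xfeatures2d.SURF_create); the ratio threshold of 0.75 stands in for the preset threshold and is an assumption, and ORB with a Hamming-norm matcher could be substituted where SURF is unavailable.

```python
import cv2

def match_feature_points(target_region, second_image, ratio_threshold=0.75):
    """Match first feature points (from the target image region) against
    second feature points (from the whole second image)."""
    surf = cv2.xfeatures2d.SURF_create()
    kp1, des1 = surf.detectAndCompute(target_region, None)   # first feature points
    kp2, des2 = surf.detectAndCompute(second_image, None)    # second feature points

    # For each point A, find the nearest neighbor B1 and second-nearest B2.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)

    # Keep (A, B1) as a matching feature point pair only if L1/L2 is below
    # the threshold, as described above.
    pairs = []
    for m, n in knn:
        if m.distance < ratio_threshold * n.distance:
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs
```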
See, by way of example, FIG. 3A' and FIG. 3A''. In FIG. 3A', the first image 301 includes the target image region 3010, which contains the target pixel point 3013. The server (i.e., the electronic device above) matches each first feature point (e.g., first feature points 3011, 3012, 3014) in the first feature point set (i.e., the feature points contained in the target image region 3010) with each second feature point in the second feature point set, where the second feature points are those contained in the second image 302 of FIG. 3A''. Following the steps above, it determines that feature point 3021 is the matching feature point of 3011, feature point 3022 of 3012, and feature point 3024 of 3014. On this basis, the electronic device obtains the set of matching feature point pairs.
Optionally, the electronic device may instead directly determine the second SURF feature point with the greatest similarity to a first SURF feature point as its matching feature point (without comparing the ratio of the minimum distance to the second-minimum distance against a preset threshold). The similarity may be characterized by, for example, the standardized Euclidean distance or the Hamming distance.
Step 202: determining the density with which the second feature points contained in the matching feature point pair set are distributed over image regions of the second image, and determining the image region corresponding to the greatest of the determined densities as the matching-feature-point dense region.

In this embodiment, based on the matching feature point pair set obtained in step 201, the electronic device may determine the density with which the second feature points contained in the set are distributed over image regions of the second image, and take the image region of greatest density as the matching-feature-point dense region. The shape and size of the image regions of the second image may be preset. Density may be characterized by the number of second feature points contained per unit area of an image region.
As an example, please continue to refer to FIG. 3A' and FIG. 3A''. Here, the server slides a rectangular target frame of the same size as the target image region 3010 over the second image 302, choosing an initial position and counting, after each move, the number of second feature points inside the image region framed by the target frame. Finally, the server determines the image region 3020 containing the largest number of second feature points as the matching-feature-point dense region.
Step 203: determining the second feature points contained in the matching-feature-point dense region.
In this embodiment, based on the matching feature point dense region determined in step 202, the electronic device may determine the second feature point included in the matching feature point dense region.
As an example, please refer to FIG. 3A' and FIG. 3A''. The server determines the second feature points 3022, 3023, and 3024 contained in the matching-feature-point dense region 3020.
Step 204: determining the set of matching feature point pairs containing the determined second feature points as the corrected matching feature point pair set.
In this embodiment, the electronic device may determine the set of matching feature point pairs that contain one of the determined second feature points as the corrected matching feature point pair set.

As an example, please refer to FIG. 3A' and FIG. 3A''. The server determines the set of matching feature point pairs containing the second feature points 3022, 3023, and 3024 as the corrected matching feature point pair set.
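Steps 202 through 204 can likewise be sketched. The sketch below is a simplified rendering under stated assumptions: the candidate image regions are equal-sized axis-aligned windows slid over the second image (so the densest window is the one containing the most matched second feature points), and the step size of 8 pixels is an arbitrary choice.

```python
def correct_matching_pairs(pairs, window_w, window_h, step=8):
    """pairs: list of ((x1, y1), (x2, y2)) matching feature point pairs.
    Returns the corrected pair set: the pairs whose second feature point
    lies in the matching-feature-point dense region."""
    second_pts = [p2 for _, p2 in pairs]
    max_x = max(x for x, _ in second_pts)
    max_y = max(y for _, y in second_pts)

    # Slide a window over the second image and count the matched second
    # feature points it frames; the window with the most points is the
    # dense region (steps 202 and 203).
    best, best_count = (0.0, 0.0), -1
    y = 0.0
    while y <= max_y:
        x = 0.0
        while x <= max_x:
            count = sum(1 for px, py in second_pts
                        if x <= px < x + window_w and y <= py < y + window_h)
            if count > best_count:
                best, best_count = (x, y), count
            x += step
        y += step

    # Step 204: keep only the pairs whose second point falls in that region.
    wx, wy = best
    return [(p1, (px, py)) for p1, (px, py) in pairs
            if wx <= px < wx + window_w and wy <= py < wy + window_h]
```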
Step 205: generating a first matching result for the target pixel point based on the corrected matching feature point pair set and the target pixel point.

In this embodiment, based on the corrected matching feature point pair set determined in step 204 and the target pixel point, the electronic device may generate a first matching result for the target pixel point. The first matching result may be information on the position of the corresponding point (i.e., matching point) of the target pixel point in the second image, or information characterizing whether the second image contains a corresponding point of the target pixel point at all.
As an example, this step may be performed as follows:
First, the electronic device may determine the position of each first feature point of the corrected matching feature point pair set in the first image (position set A), and the position of each second feature point in the second image (position set B).

Then, the electronic device can determine the midpoint of the positions in position set A (hereinafter midpoint position A): for example, its abscissa may be the average of the abscissas of the positions in set A, and its ordinate the average of their ordinates. Similarly, the electronic device may determine midpoint position B for position set B.

Next, the electronic device can determine the position of the target pixel point.

Finally, according to the position of the target pixel point relative to midpoint position A, the electronic device may determine the pixel point on the second image with the same position relative to midpoint position B as the corresponding point (i.e., matching point) of the target pixel point, and thereby generate the first matching result, which may be the position information of that corresponding point.
Optionally, the electronic device may instead take the set of first feature points contained in the corrected matching feature point pair set as set A, and the set of second feature points as set B. For each first feature point in set A, its neighborhood (an image region of preset shape and size) is taken as a feature region; when the feature regions generated by several points overlap, their union is treated as a single feature region. In this way the feature region set Ra of set A is generated, and the feature region set Rb of set B is generated in the same manner. If the target pixel point lies in some feature region rA in Ra, the position of the corresponding point of the target pixel point is taken to lie correspondingly within rA's matching feature region rB in Rb, and the first matching result of the target pixel point is generated on that basis.
For example, please refer to FIG. 3B' and FIG. 3B''. The first image 303 includes the target pixel point 3031. Following the steps above, the server determines that the target pixel point 3031 lies in the feature region 3032 (i.e., feature region rA) and that the feature region 3042 in the second image 304 is the matching feature region of 3032. The feature region 3032 (a rectangle) has length Wa and width Ha; the distance from the target pixel point 3031 to the length side of 3032 is Ta, and its distance to the width side is La. The server likewise determines that the feature region 3042 (a rectangle) has length Wb and width Hb. The server can then locate the corresponding point of the target pixel point 3031 by computing Lb and Tb, where Lb is the distance from the corresponding point to the length side of the feature region 3042 and Tb is its distance to the width side. As an example, the server may use the following formulas:
Lb = Wb * La / Wa
Tb = Hb * Ta / Ha
thus, the server can generate the position information of the corresponding point 3041 of the target pixel point 3031. In the illustration, the position information of the corresponding point 3041 is the first matching result.
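As a worked illustration of the two formulas above (a sketch only, with made-up numbers): a point a quarter of the way across rA stays a quarter of the way across rB.

```python
def map_within_region(La, Ta, Wa, Ha, Wb, Hb):
    """Rescale the target pixel point's offsets inside feature region rA
    into the matching feature region rB (Lb = Wb*La/Wa, Tb = Hb*Ta/Ha)."""
    return Wb * La / Wa, Hb * Ta / Ha

# e.g. with rA of size 100x40 and rB of size 200x80:
# map_within_region(La=25, Ta=10, Wa=100, Ha=40, Wb=200, Hb=80) -> (50.0, 20.0)
```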
It can be understood that, when the similarity between the corresponding point and the target pixel point is smaller than a preset similarity threshold (for example, 0.9), the first matching result may be "no corresponding point exists", information characterizing the similarity between the corresponding point and the target pixel point, or other information.
The method provided by the above embodiment of the present application obtains a set of matching feature point pairs by matching each first feature point in the first feature point set with each second feature point in the second feature point set; determines the density with which the second feature points contained in that set are distributed over image regions of the second image, taking the region of greatest density as the matching-feature-point dense region; determines the second feature points contained in the dense region; retains the matching feature point pairs containing those second feature points as the corrected matching feature point pair set; and generates the first matching result of the target pixel point from the corrected set and the target pixel point. This improves the accuracy with which the matching result of the target pixel point is determined.
In some optional implementations of this embodiment, the method further includes: taking each pixel point in the neighborhood of the target pixel point in the first image as a first seed pixel point, and determining, with a region growing algorithm, an aggregation region of first seed pixel points that meets a preset screening condition as a first region; taking each pixel point in the second image as a second seed pixel point, and determining, with the region growing algorithm, an aggregation region of second seed pixel points that meets the preset screening condition as a second region; determining a second region meeting at least one of the following matching conditions as a second region matching the first region: the difference between the filling degree of the second region and that of the first region is smaller than a preset filling-degree threshold; the difference between the aspect ratio of the second region and that of the first region is smaller than a preset aspect-ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; determining the combination of the first region and a second region matching it as a matching region pair; and generating a second matching result for the target pixel point based on the matching region pair and the target pixel point.
The neighborhood of the target pixel point in the first image is an image region containing the target pixel point whose shape and size are preset; for example, it may be a rectangular or square image region centered on the target pixel point. The second matching result may be information on the position of the corresponding point (i.e., matching point) of the target pixel point in the second image, or information characterizing whether the second image contains such a corresponding point. The preset screening condition is a condition, set in advance, for screening the aggregation regions to obtain the first region. It will be appreciated that the aggregation regions obtained with the region growing algorithm may contain noise-induced regions (whose area is too small) or large background or blank regions (whose area is too large), which do not help matching; the preset screening condition serves to reject these regions. Methods for determining the similarity between the first region and the second region include, but are not limited to, computing image similarity from feature points and computing image similarity from histograms.
Illustratively, the filling degree of an image region (whether a first or second region) may be determined as follows: first determine the product of the region's length and width (both measured in pixels); then determine the ratio of the actual number of pixels in the region to that product, and take this ratio as the region's filling degree.
The region growing algorithm is one of the algorithms of image segmentation. Its basic idea is to group pixels with similar properties into regions. In the embodiments of the present application, the region growing algorithm may proceed as follows: first, each pixel point in the neighborhood in the first image is taken as a seed pixel point; then, pixels around a seed pixel with the same or similar properties (for example, the same color) are merged into the region where the seed pixel lies; the newly added pixels in turn act as seeds and keep growing outward until no further qualifying pixel can be included, which yields an aggregation region.
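A minimal sketch of this growing procedure, together with the filling-degree computation described earlier, might look as follows. It assumes a grayscale image given as a 2-D list, 4-connectivity, and an absolute intensity tolerance as the similarity criterion; a production implementation would more likely use cv2.floodFill or scipy.ndimage.label.

```python
def grow_region(image, seed, tolerance=0):
    """Merge the seed pixel with surrounding pixels of the same or similar
    value; returns the aggregation region as a set of (row, col) pixels."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, frontier = set(), [seed]
    while frontier:
        r, c = frontier.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - seed_val) > tolerance:
            continue  # not similar enough to be merged
        region.add((r, c))
        # newly merged pixels act as seeds and keep growing outward
        frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region

def filling_degree(region):
    """Ratio of the actual pixel count to the bounding-box area."""
    rs = [r for r, _ in region]
    cs = [c for _, c in region]
    box = (max(rs) - min(rs) + 1) * (max(cs) - min(cs) + 1)
    return len(region) / box
```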
In practice, the preset filling-degree threshold, the preset aspect-ratio threshold, and the preset first similarity threshold may be determined or adjusted according to the experience of skilled practitioners and/or with reference to the accuracy of the second matching result (for example, the accuracy of second matching results in historical data).
As an example, please refer to FIG. 3C' and FIG. 3C''. The server determines the position of the target pixel point 3111 on the first image 311 and determines the image region 3110 as its neighborhood. The server then takes each pixel point in the neighborhood 3110 as a first seed pixel point and, using the region growing algorithm, determines the aggregation region of first seed pixel points that meets the preset screening condition as the first region, finally obtaining the first region 3112. Similarly, the server obtains second regions in the second image 312 and, by the matching conditions, determines the second region 3122 that matches the first region 3112. On this basis, the server uses the relative position of the first region 3112 and the target pixel point 3111 (for example, starting from the center of the first region 3112, move up 10 pixel points and then left 10 pixel points to reach the target pixel point 3111) to generate the position of the corresponding point 3121 (for example, starting from the center of the second region 3122, move up 10 pixel points and then left 10 pixel points to reach the corresponding point 3121), i.e., to generate the second matching result of the target pixel point.
Optionally, the electronic device may instead generate the second matching result of the target pixel point as follows:
For example, please refer to FIG. 3D' and FIG. 3D''. The first image 313 includes the target pixel point 3131. The pixel point 3130 is a pixel point in the first region determined by the server using the above steps (e.g., the center point of the first region or another pixel point); the pixel point 3140 is the pixel point, determined by the server in the second image 314 using the above steps, in the second region that matches the first region. Note that the relative position of the pixel point 3140 in the second region is consistent with the relative position of the pixel point 3130 in the first region. In the drawing, W_a is the width of the first image 313 and H_a is its length; W_b is the width of the second image 314 and H_b is its length. As an example, the coordinates of the position of the target pixel point 3131 may be written (q_x, q_y), where the abscissa is the distance from the point to the length side of the first image 313 and the ordinate is its distance to the width side; the coordinates of the position of the pixel point 3130 may likewise be written (ma_x, ma_y), and the coordinates of the position of the pixel point 3140, measured against the second image 314, may be written (mb_x, mb_y). The server can then determine the position of the corresponding point 3141 of the target pixel point 3131 with the following formulas:

t_x = mb_x - W_b / W_a * (ma_x - q_x)
t_y = mb_y - H_b / H_a * (ma_y - q_y)

where t_x is the distance from the corresponding point 3141 to the length side of the second image 314, and t_y is its distance to the width side. The server thereby generates the second matching result of the target pixel point 3131, which may be the information on the position of the corresponding point 3141.
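The offset mapping above recurs twice more below (for the word-matching and template-matching branches), so a single sketch covers all three. Names are illustrative; q is the target pixel point, ma a reference point in the first image, and mb its match in the second image.

```python
def map_corresponding_point(q, ma, mb, first_size, second_size):
    """t_x = mb_x - W_b/W_a * (ma_x - q_x); analogously for t_y."""
    Wa, Ha = first_size     # width and length of the first image
    Wb, Hb = second_size    # width and length of the second image
    tx = mb[0] - Wb / Wa * (ma[0] - q[0])
    ty = mb[1] - Hb / Ha * (ma[1] - q[1])
    return tx, ty
```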
It is understood that, after the second region matched with the first region is scaled to the same size as the first region, the second matching result may also be generated by computing the similarity between the first region and the second region and comparing it with a preset similarity threshold (e.g., 0.99). For example, when the computed similarity is smaller than the threshold, the second matching result may be "no corresponding point exists".
It should be noted that comparing the second matching result with the first matching result helps generate a more accurate matching result for the target pixel point.
In some optional implementations of this embodiment, the preset screening condition includes at least one of: the product of the first preset spacing value, the height of the first image, and the width of the first image is less than the number of pixels in the aggregation region; the width of the aggregation region is smaller than the product of the width of the first image and the second preset spacing value; the height of the aggregation region is smaller than the product of the height of the first image and the second preset spacing value.
The aggregation region may be rectangular or circular. The first and second preset spacing values are values, set in advance, that characterize the spacing between sub-images within the first image (e.g., controls in the image rendered when a page is displayed on the device, extracted with a matting technique). In practice, they may be set by skilled practitioners according to experience; for example, the first preset spacing value may be 0.01 and the second 0.3.
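A sketch of this screening condition as a predicate (an assumption-laden reading: all three sub-conditions are applied together, although the text requires only at least one; the 0.01 and 0.3 defaults are the example values above):

```python
def passes_screening(region_w, region_h, pixel_count, img_w, img_h,
                     p1=0.01, p2=0.3):
    """p1, p2: first and second preset spacing values."""
    return (p1 * img_h * img_w < pixel_count      # not a tiny noise region
            and region_w < p2 * img_w             # not an over-wide background
            and region_h < p2 * img_h)            # not an over-tall background
```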
It should be noted that screening the aggregation regions with the preset screening condition to obtain the first region helps generate a more accurate matching result for the target pixel point.
In some optional implementations of this embodiment, a first word set is presented in the neighborhood of the target pixel point and a second word set is presented in the second image; and the method further comprises: for each first word in the first word set, determining a second word in the second word set that matches it, and determining the combination of the first word and its matching second word as a matching word pair; and generating a third matching result for the target pixel point based on the matching word pairs and the target pixel point.
The first words and second words may be words that can be directly copied, or words fused into the image (e.g., words that cannot be directly copied). A second word matching a first word may include, but is not limited to: a second word whose rendered color is consistent with the first word's; a second word whose font size is consistent; a second word whose font is consistent.
As an example, the above steps may be performed as follows:
Referring to FIG. 3E' and FIG. 3E'', the electronic device may first recognize, by OCR (Optical Character Recognition), the word information in the neighborhood 3210 of the target pixel point 3211 in the first image 321 and in the entire second image 322 (yielding the first word set and the second word set), and determine the coordinate position of each word.
Then, the electronic device can segment the first word set and the second word set into words, for example by spatial distance: characters whose spacing is smaller than a preset spacing threshold are treated as belonging to the same word, and otherwise as different words. In the illustration, the first word set includes "hello"; the second word set includes "hello", "hello".
Then, for each first word in the first word set, the electronic device may determine the second word in the second word set that matches it (e.g., a word whose color, size, and font are consistent with the first word's), and determine the combination of the two as a matching word pair.
Finally, the electronic device may generate the third matching result of the target pixel point 3211 from the positions of the first and second words of a matching word pair and the position of the target pixel point 3211, obtaining the position of the corresponding point 3221 of the target pixel point 3211 (as shown in FIG. 3E' and FIG. 3E'').
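A rough sketch of this word-matching branch, assuming pytesseract is available for the OCR step; grouping characters into words by spacing is simplified here to the word boxes Tesseract already returns, and matching by exact recognized text stands in for the color/size/font consistency checks.

```python
import pytesseract
from pytesseract import Output

def extract_words(image):
    """Return (text, center_x, center_y) for each word OCR finds."""
    data = pytesseract.image_to_data(image, output_type=Output.DICT)
    words = []
    for text, left, top, w, h in zip(data["text"], data["left"], data["top"],
                                     data["width"], data["height"]):
        if text.strip():
            words.append((text, left + w / 2.0, top + h / 2.0))
    return words

def match_word_pairs(first_words, second_words):
    """Pair each first word with identically recognized second words."""
    return [(fw, sw) for fw in first_words
            for sw in second_words if fw[0] == sw[0]]
```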
It is to be understood that, when there is no second word matching the first word, the third matching result may be information such as "no match".
It should be noted that the electronic device may write the coordinates of the position of the first word (e.g., its center) as (ma_x, ma_y), where the abscissa is the distance from the position to the length side of the first image and the ordinate its distance to the width side; the coordinates of the position of the target pixel point as (q_x, q_y); and the coordinates of the position of the second word matching the first word, measured against the second image, as (mb_x, mb_y). The server can then determine the position of the corresponding point of the target pixel point with the following formulas:

t_x = mb_x - W_b / W_a * (ma_x - q_x)
t_y = mb_y - H_b / H_a * (ma_y - q_y)

where W_a and H_a are the width and length of the first image, W_b and H_b the width and length of the second image, t_x is the distance from the corresponding point to the length side of the second image, and t_y its distance to the width side. The server thereby generates the third matching result of the target pixel point: the information on the position of its corresponding point.
It should be noted that, when the first image and the second image contain words, generating the third matching result by finding the second word matching each first word helps generate a more accurate matching result for the target pixel point.
In some optional implementations of this embodiment, determining a second word that matches the first word includes: determining the four-corner code of the first word; determining the similarity between each second word in the second word set and the first word; and determining a second word that has the same four-corner code as the first word and/or the greatest similarity as the second word matching the first word. The similarity can be determined by computing, for example, Euclidean distances between feature vectors of feature points of the images.
It should be noted that, because of low screen resolution and small fonts, the recognition results of OCR have a certain error rate. When a character is misrecognized, however, the four-corner code of the recognized character is still, with high probability, consistent with that of the original character. Using the four-corner codes of characters (e.g., Chinese characters) as a matching basis can therefore greatly improve matching accuracy.
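A sketch of this four-corner fallback; four_corner is an assumed lookup (character to its four-corner code, from a dictionary resource not defined here), and similarity an assumed word-similarity function:

```python
def words_match(first_word, second_word, four_corner, similarity,
                threshold=0.8):
    """Match if the four-corner codes agree character by character,
    or if the similarity exceeds a threshold (an example value)."""
    same_code = (len(first_word) == len(second_word)
                 and all(four_corner(a) == four_corner(b)
                         for a, b in zip(first_word, second_word)))
    return same_code or similarity(first_word, second_word) > threshold
```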
In some optional implementations of this embodiment, the method further includes: performing a template matching operation on the second image using the neighborhood of the target pixel point, determining the similarity between each image region of the second image and the neighborhood, and determining the image region of the second image with the greatest of the determined similarities as the matching image region; determining a selected pixel point in the neighborhood and its matching pixel point in the matching image region; and generating a fourth matching result for the target pixel point based on the selected pixel point, the matching pixel point, and the target pixel point.
The selected pixel point can be any pixel point in the neighborhood, and the matching pixel point is its corresponding pixel point in the matching image region. For example, when the neighborhood is a rectangular region, the selected pixel point may be the center point of the neighborhood and the matching pixel point the center point of the matching image region.
Here, the template matching operation is a well-known operation widely studied by those skilled in the art of image processing, and will not be described in detail here.
It is understood that the fourth matching result may be generated by calculating the similarity between each image region of the second image and the neighborhood, and comparing the largest calculated similarity with a preset similarity threshold (e.g., 0.99). For example, when the calculated similarity is smaller than the preset similarity threshold, the fourth matching result may be "no corresponding point exists".
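A minimal sketch of this operation with OpenCV, assuming normalized cross-correlation as the similarity measure (the embodiment does not fix a particular measure), follows:

```python
import cv2

def fourth_matching_result(second_image, neighborhood, threshold=0.99):
    """Template-match the neighborhood of the target pixel point against the
    second image; return the top-left corner of the matching image region,
    or None for the result "no corresponding point exists"."""
    scores = cv2.matchTemplate(second_image, neighborhood, cv2.TM_CCOEFF_NORMED)
    _, max_similarity, _, best_top_left = cv2.minMaxLoc(scores)
    if max_similarity < threshold:
        return None  # no image region is similar enough to the neighborhood
    return best_top_left  # (x, y) of the matching image region
```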
For example, please refer to fig. 3F' and fig. 3F''. In the illustration, the first image 331 includes the target pixel point 3311, and the server determines the image region 3310 as a neighborhood of the target pixel point 3311. The server then performs template matching on the second image: it determines the similarity between each image region of the second image 332 and the neighborhood 3310, and determines the image region 3320 with the maximum similarity among the determined similarities as the matching image region. Next, the server determines the selected pixel point 3312 in the neighborhood and determines the matching pixel point 3322 of the selected pixel point 3312 in the matching image region. Finally, the server generates the position information of the corresponding point 3321 of the target pixel point 3311 based on the selected pixel point 3312, the matching pixel point 3322, and the target pixel point 3311. The position information of the corresponding point 3321 is the fourth matching result.
It will be appreciated that the electronic device may denote the coordinates of the position of the selected pixel point as (ma_x, ma_y), where the abscissa is the distance from the position to the length (one edge) of the first image and the ordinate is the distance from the position to the width (the other edge) of the first image; the coordinates of the position of the target pixel point, defined in the same way, may be denoted as (q_x, q_y); and the coordinates of the position of the matching pixel point of the selected pixel point, defined analogously with respect to the second image, may be denoted as (mb_x, mb_y). The server may then determine the position of the corresponding point of the target pixel point using the same formulas as for the third matching result:
t_x = mb_x – W_b/W_a * (ma_x – q_x)
t_y = mb_y – H_b/H_a * (ma_y – q_y)
where W_a is the width of the first image and H_a is the length of the first image; W_b is the width of the second image and H_b is the length of the second image. t_x characterizes the distance from the corresponding point to the length (one edge) of the second image, and t_y characterizes the distance from the corresponding point to the width (the other edge) of the second image. The server thereby generates a fourth matching result of the target pixel point (the information of the position of the corresponding point of the target pixel point).
It should be noted that comparing the fourth matching result obtained by the template matching method with matching results such as the first matching result is helpful for determining a more accurate position of the corresponding point of the target pixel point.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for generating information is shown. The flow 400 of the method for generating information comprises the steps of:
In this embodiment, step 401 is substantially the same as step 201 in the corresponding embodiment of fig. 2, and is not described here again.
It should be noted that, in this embodiment, the first feature point is a feature point of a target image region included in the first image, the target image region includes a target pixel point, and the second feature point is a feature point of the second image. The first image is an image displayed by the first electronic device when the target webpage is presented on the first electronic device, and the second image is an image displayed by the second electronic device when the target webpage is presented on the second electronic device.
In this embodiment, step 402 is substantially the same as step 202 in the corresponding embodiment of fig. 2, and is not described herein again.
In step 403, a second feature point included in the dense region of matched feature points is determined.
In this embodiment, step 403 is substantially the same as step 203 in the corresponding embodiment of fig. 2, and is not described herein again.
In this embodiment, step 404 is substantially the same as step 204 in the corresponding embodiment of fig. 2, and is not described herein again.
In this embodiment, step 405 is substantially the same as step 205 in the corresponding embodiment of fig. 2, and is not described herein again.
Step 406, generating a final matching result based on the generated matching results.
In this embodiment, the electronic device may further generate a final matching result based on the obtained matching results. The matching results include the first matching result and, depending on which of them have been generated, at least one of the following: the second matching result, the third matching result, and the fourth matching result. It is to be understood that, before this step is performed, if only the first matching result has been generated (and the second, third, and fourth matching results have not), the generated matching results include only the first matching result; if only the first and fourth matching results have been generated (and the second and third have not), the generated matching results include only the first and fourth matching results.
It will be appreciated that there are various implementations for generating the final matching result based on the generated matching results. For example, suppose the generated matching results include a first, a second, a third, and a fourth matching result, where the first matching result is "corresponding point coordinates (100, 100)", the second is "corresponding point coordinates (101, 101)", the third is "corresponding point coordinates (100, 100)", and the fourth is "corresponding point coordinates (99, 99)". The electronic device may determine the matching result with the largest number of occurrences as the final matching result (here, "corresponding point coordinates (100, 100)"); it may also determine the combined information of the generated matching results as the final matching result (here, "corresponding point coordinates (100, 100), (101, 101), (99, 99), (100, 100)").
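A minimal sketch of the majority-vote variant in Python; the helper name is an assumption:

```python
from collections import Counter

def final_matching_result(matching_results):
    """Choose the matching result with the largest number of occurrences;
    with equal counts the first-generated result wins, because Counter
    preserves insertion order."""
    return Counter(matching_results).most_common(1)[0][0]

print(final_matching_result([(100, 100), (101, 101), (100, 100), (99, 99)]))
# -> (100, 100)
```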
In this embodiment, the electronic device may further perform a compatibility test on a website related to the target webpage based on the final matching result. The compatibility test may include, but is not limited to: browser compatibility testing, screen size and resolution compatibility testing, operating system compatibility testing, and compatibility testing across device models.
Illustratively, after the corresponding point in the second image of the target pixel point of the first image is determined through the final matching result, the electronic device may perform synchronized operations on the first electronic device and the second electronic device. For example, an input box in the first image may be clicked and text entered while the same text is entered in the same input box in the second image; it can then be determined whether the display of the text is abnormal on the first electronic device or the second electronic device, and so on.
It can be understood that when the final matching result represents that the corresponding point of the target pixel point does not exist in the second image, it indicates that the website may have a compatibility problem.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for generating information in this embodiment highlights the steps of generating a final matching result based on the obtained matching results and performing a compatibility test on the website. The scheme described in this embodiment can thus introduce more matching schemes, further improving the accuracy of determining the matching result of the target pixel point and improving the efficiency of website compatibility testing.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for generating information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for generating information of the present embodiment includes: a matching unit 501, a first determining unit 502, a second determining unit 503, a third determining unit 504, and a first generating unit 505. The matching unit 501 is configured to match each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a matching feature point pair set, where the first feature point is a feature point of a target image region included in the first image, the target image region includes target pixel points, and the second feature point is a feature point of the second image; the first determining unit 502 is configured to determine the densities at which the second feature points included in the matching feature point pair set are distributed in the image region of the second image, and determine an image region corresponding to the greatest density among the determined densities as a matching feature point dense region; the second determination unit 503 is configured to determine a second feature point included in the matching feature point dense region; the third determining unit 504 is configured to determine a set of matching characteristic point pairs including the determined second characteristic point in the set of matching characteristic point pairs as a modified set of matching characteristic point pairs; the first generating unit 505 is configured to generate a first matching result of the target pixel point based on the corrected matching feature point pair set and the target pixel point.
In this embodiment, the matching unit 501 of the apparatus 500 for generating information may match each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a matching feature point pair set. The first feature point is a feature point of a target image region included in the first image, the target image region includes the target pixel point, and the second feature point is a feature point of the second image. The shape and size of the target image region may be set in advance. For example, the target image region may be a circle, e.g. a circle centered on the target pixel point (or another pixel point in the first image) with a radius of 0.5 (or another value) times the width of the first image; or the target image region may be a rectangle, e.g. a square centered on the target pixel point (or another pixel point in the first image) with a side length of 0.5 (or another value) times the width of the first image; and the like.
In this embodiment, based on the matching feature point pair set obtained by the matching unit 501, the first determining unit 502 may determine the distribution density of the second feature points included in the matching feature point pair set in the image region of the second image, and determine the image region corresponding to the maximum density among the determined densities as the matching feature point dense region. Wherein the shape and size of the image area of the second image may be preset. The density may be characterized as the number of second feature points included per unit area of the image region.
In this embodiment, the second determining unit 503 may determine the second feature point included in the matching feature point dense region based on the matching feature point dense region obtained by the first determining unit 502.
In this embodiment, the third determining unit 504 may determine, as the corrected matching feature point pair set, a set of matching feature point pairs including the determined second feature point from among the matching feature point pair sets, based on the second feature point determined by the second determining unit 503.
In this embodiment, the first generating unit 505 may generate a first matching result of the target pixel point based on the corrected matching feature point pair set and the target pixel point obtained by the third determining unit 504.
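Taken together, the five units describe one pipeline. A minimal sketch in Python with OpenCV, assuming ORB features and a uniform grid of equally sized image regions over the second image (the embodiment prescribes neither the feature detector nor the region layout):

```python
import cv2
import numpy as np

def corrected_match_pairs(first_image, second_image, grid=(8, 8)):
    """Match feature points, find the matching feature point dense region,
    and keep only the pairs whose second feature point falls inside it."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(first_image, None)   # first feature points
    kp2, des2 = orb.detectAndCompute(second_image, None)  # second feature points
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)  # matching feature point pair set

    rows, cols = grid
    h, w = second_image.shape[:2]

    def region_of(match):
        x, y = kp2[match.trainIdx].pt
        return (min(int(y * rows / h), rows - 1),
                min(int(x * cols / w), cols - 1))

    # All grid cells have equal area, so the cell containing the most second
    # feature points is the one with the maximum density: the dense region.
    counts = np.zeros(grid, dtype=int)
    for m in matches:
        counts[region_of(m)] += 1
    dense_region = np.unravel_index(int(counts.argmax()), counts.shape)

    # Corrected matching feature point pair set.
    return [m for m in matches if region_of(m) == dense_region]
```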
In some optional implementations of this embodiment, the apparatus further includes: a fourth determining unit (not shown in the figure) configured to use each pixel point in a neighborhood of the target pixel point in the first image as a first seed pixel point, and determine, by using a region growing algorithm, an aggregation region of first seed pixel points meeting a preset screening condition as a first region; a fifth determining unit (not shown in the figure) configured to use each pixel point in the second image as a second seed pixel point, and determine, by using a region growing algorithm, an aggregation region of second seed pixel points meeting the preset screening condition as a second region; a sixth determining unit (not shown in the figure) configured to determine a second region meeting at least one of the following matching conditions as a second region matching the first region: the difference between the filling degree of the second region and the filling degree of the first region is smaller than a preset filling degree threshold; the difference between the aspect ratio of the second region and the aspect ratio of the first region is smaller than a preset aspect ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; a seventh determining unit configured to determine a combination of the first region and a second region matched with the first region as a matching region pair; and a second generating unit configured to generate a second matching result of the target pixel point based on the matching region pair and the target pixel point.
The neighborhood of the target pixel point in the first image is an image region containing the target pixel point, whose shape and size are preset. For example, the neighborhood may be a rectangular or square image region centered on the target pixel point. The second matching result may be information of the position of the corresponding point (i.e., the matching point) of the target pixel point in the second image, or information characterizing whether the second image includes a corresponding point of the target pixel point. The preset screening condition is a condition, set in advance, for screening the aggregation regions to obtain the first region. It will be appreciated that the aggregation regions obtained using the region growing algorithm may contain regions caused by noise (which appear as too-small areas) or large background or blank regions (which appear as too-large areas) that are not helpful for shape matching. The preset screening condition is used to reject such regions.
In some optional implementations of this embodiment, the preset screening condition includes at least one of: the product of the first preset distance value, the height of the first image and the width of the first image is less than the number of pixels in the aggregation area; the width of the aggregation area is smaller than the product of the width of the first image and a second preset distance value; the height of the aggregation area is smaller than the product of the height of the first image and the second preset distance value.
The aggregation region may be a rectangular region. The first preset distance value and the second preset distance value may be preset values characterizing the spacing between sub-images within the first image (e.g., images of controls of a page rendered by a device). In practice, the first preset distance value and the second preset distance value may be set by a skilled person according to experience; for example, the first preset distance value may be 0.01 and the second preset distance value may be 0.3.
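For illustration, the screening condition and the region matching conditions above can be sketched in Python. The sketch applies all three screening sub-conditions together although the embodiment allows any subset; the threshold values in regions_match are assumptions, and pixel_count may simply be width × height for rectangular aggregation regions:

```python
def passes_screening(region_w, region_h, pixel_count, image_w, image_h,
                     d1=0.01, d2=0.3):
    """Preset screening condition; d1 and d2 are the first and second preset
    distance values from the example above."""
    return (d1 * image_h * image_w < pixel_count  # rejects tiny noise regions
            and region_w < d2 * image_w           # rejects overly wide regions
            and region_h < d2 * image_h)          # rejects overly tall regions

def regions_match(first, second, similarity,
                  fill_threshold=0.1, ratio_threshold=0.2,
                  similarity_threshold=0.8):
    """At least one of the matching conditions must hold; each region is a
    dict with keys 'w', 'h' and 'fill' (filling degree)."""
    return (abs(second['fill'] - first['fill']) < fill_threshold
            or abs(second['w'] / second['h'] - first['w'] / first['h'])
            < ratio_threshold
            or similarity(first, second) > similarity_threshold)
```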
In some optional implementations of this embodiment, a first vocabulary set is presented in a neighborhood of the target pixel point, and a second vocabulary set is presented in the second image; and the above apparatus further comprises: an eighth determining unit (not shown in the figures) is configured to determine, for each first word in the first set of words, a second word in the second set of words that matches the first word, and determine a combination of the first word and the second word that matches the first word as a matching word pair; the third generating unit (not shown in the figure) is configured to generate a third matching result of the target pixel point based on the matching vocabulary pairs and the target pixel point.
The first vocabulary and the second vocabulary may be text that can be directly copied, or words fused into the image (e.g., words that cannot be directly copied). The second vocabulary that matches the first vocabulary may include, but is not limited to: a second vocabulary consistent with the color presented by the first vocabulary; a second vocabulary consistent with the font size presented by the first vocabulary; a second vocabulary consistent with the font presented by the first vocabulary.
In some optional implementations of this embodiment, the eighth determining unit includes: a first determining module (not shown) configured to determine the four corner number of the first vocabulary; a second determining module (not shown in the figure) is configured to determine similarity between each second vocabulary in the second vocabulary set and the first vocabulary; a third determining module (not shown) is configured to determine a second vocabulary with the same four corner numbers and/or the largest similarity as the first vocabulary as a second vocabulary matching the first vocabulary.
It should be noted that, because of low display resolutions and small fonts on electronic devices, the recognition results of OCR technology have a certain error rate. Even when a character is misrecognized, the four-corner code of the recognized character is consistent with that of the original character with high probability. Therefore, using the four-corner number of characters (such as Chinese characters) as the matching basis can greatly improve the matching accuracy.
In some optional implementations of this embodiment, the apparatus further includes: a ninth determining unit (not shown in the figure) is configured to perform template matching operation on the second image by using a neighborhood of the target pixel point, determine similarity between an image region of the second image and the neighborhood, and determine an image region of the second image with the maximum similarity in the determined similarity as a matching image region; a tenth determining unit (not shown in the figure) is configured to determine a selected pixel point in the neighborhood and determine a matching pixel point of the selected pixel point in the matching image region; the fourth generating unit (not shown in the figure) is configured to generate a fourth matching result of the target pixel point based on the selected pixel point, the matched pixel point and the target pixel point.
The selected pixel point can be any one pixel point in the neighborhood, and the matched pixel point can be a pixel point corresponding to the selected pixel point in the matched image area. For example, when the neighborhood is a rectangular region, the selected pixel point may be a center point of the neighborhood, and the matching pixel point may be a center point of the matching image region.
In some optional implementations of this embodiment, the apparatus further includes: a fifth generating unit (not shown in the figure) is configured to generate a final matching result based on the generated matching result.
The fifth generating unit generates a final matching result based on the obtained matching results. The matching results include the first matching result and, depending on which of them have been generated, at least one of the following: the second matching result, the third matching result, and the fourth matching result. It will be appreciated that, when this step is performed, if only the first matching result has been generated (and the second, third, and fourth matching results have not), the generated matching results include only the first matching result; if only the first and fourth matching results have been generated (and the second and third have not), the generated matching results include only the first and fourth matching results.
In some optional implementations of this embodiment, the first image is an image displayed by the first electronic device when the target webpage is presented on the first electronic device, and the second image is an image displayed by the second electronic device when the target webpage is presented on the second electronic device.
In some optional implementations of this embodiment, the apparatus further includes: the testing unit (not shown in the figure) is configured to perform a compatibility test on the website related to the target webpage based on the final matching result.
The compatibility test may include, but is not limited to: browser compatibility testing (whether the program under test runs normally on different browsers and whether its functions can be used normally), screen size and resolution compatibility testing (whether the program under test displays normally at different resolutions), operating system compatibility testing (whether the program under test runs normally under different operating systems, whether its functions can be used normally, whether the display is correct, and the like), and compatibility testing across device models (for example, whether the program runs normally on mainstream devices without crashing).
In the apparatus provided in the above embodiment of the present application, the matching unit 501 matches each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a matching feature point pair set; the first determining unit 502 then determines the densities at which the second feature points included in the matching feature point pair set are distributed in the image regions of the second image and determines the image region corresponding to the maximum density among the determined densities as the matching feature point dense region; the second determining unit 503 then determines the second feature points included in the matching feature point dense region; the third determining unit 504 then determines the set of matching feature point pairs containing the determined second feature points as the corrected matching feature point pair set; and finally the first generating unit 505 generates a first matching result of the target pixel point based on the corrected matching feature point pair set and the target pixel point, thereby improving the accuracy of determining the matching result of the target pixel point.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a matching unit, a first determining unit, a second determining unit, a third determining unit, and a first generating unit. The names of these units do not constitute a limitation on the units themselves in some cases; for example, the first generating unit may also be described as "a unit that generates a first matching result of a target pixel point".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: matching each first characteristic point in the first characteristic point set with each second characteristic point in the second characteristic point set to obtain a matched characteristic point pair set, wherein the first characteristic points are characteristic points of a target image area included in the first image, the target image area includes target pixel points, and the second characteristic points are characteristic points of the second image; determining the distribution density of second feature points contained in the matching feature point pair set in an image area of a second image, and determining an image area corresponding to the maximum density in the determined densities as a matching feature point dense area; determining a second feature point contained in the dense region of the matched feature points; determining a set of matching characteristic point pairs containing the determined second characteristic point in the matching characteristic point pair set as a corrected matching characteristic point pair set; and generating a first matching result of the target pixel point based on the corrected matching feature point pair set and the target pixel point.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (20)
1. A method for generating information, comprising:
matching each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a matched feature point pair set, wherein the first feature points are feature points of a target image region included in the first image, the target image region includes target pixel points, and the second feature points are feature points of the second image;
determining the distribution density of second feature points in the matching feature point pair set in the image area of the second image, and determining the image area corresponding to the maximum density in the determined densities as a matching feature point dense area;
determining a second feature point included in the dense region of matched feature points;
determining a set of matching characteristic point pairs containing the determined second characteristic point in the matching characteristic point pair set as a corrected matching characteristic point pair set;
generating a first matching result of the target pixel point based on the corrected matching feature point pair set and the target pixel point;
determining a matching image region matched with the neighborhood of the target pixel point from the second image, and generating a fourth matching result of the target pixel point based on the selected pixel point in the neighborhood, the matching pixel point of the selected pixel point determined in the matching image region, and the target pixel point.
2. The method of claim 1, wherein the method further comprises:
taking each pixel point in the neighborhood of the target pixel point in the first image as a first seed pixel point, and determining an aggregation region of first seed pixel points meeting a preset screening condition as a first region by adopting a region growing algorithm;
taking each pixel point in the second image as a second seed pixel point, and determining an aggregation region of second seed pixel points meeting the preset screening condition as a second region by adopting a region growing algorithm;
determining a second area meeting at least one of the following matching conditions as a second area matched with the first area: the difference between the filling degree of the second area and the filling degree of the first area is smaller than a preset filling degree threshold value; the difference between the aspect ratio of the second area and the aspect ratio of the first area is smaller than a preset aspect ratio threshold value; the similarity between the second area and the first area is greater than a preset first similarity threshold;
determining a combination of the first region and a second region matched with the first region as a matched region pair;
and generating a second matching result of the target pixel point based on the matching region pair and the target pixel point.
3. The method of claim 2, wherein the preset screening conditions include at least one of:
the product of the first preset distance value, the height of the first image and the width of the first image is less than the number of pixels in the aggregation area;
the width of the aggregation area is smaller than the product of the width of the first image and a second preset distance value;
the height of the aggregation area is smaller than the product of the height of the first image and the second preset distance value.
4. The method of one of claims 1-3, wherein a neighborhood of the target pixel point is presented with a first set of words and the second image is presented with a second set of words; and
the method further comprises the following steps:
for each first word in the first word set, determining a second word matched with the first word in the second word set, and determining a combination of the first word and the second word matched with the first word as a matched word pair;
and generating a third matching result of the target pixel point based on the matching vocabulary pairs and the target pixel point.
5. The method of claim 4, wherein said determining a second vocabulary that matches the first vocabulary comprises:
determining the four-corner number of the first vocabulary;
determining the similarity between each second vocabulary in the second vocabulary set and the first vocabulary;
and determining a second vocabulary which has the same four-corner number as the first vocabulary and/or the largest similarity as the second vocabulary matched with the first vocabulary.
6. The method of claim 1, wherein the determining a matching image region from the second image that matches a neighborhood of the target pixel point and generating a fourth matching result for the target pixel point based on a selected pixel point within the neighborhood, a matching pixel point of the selected pixel point determined in the matching image region, and the target pixel point comprises:
performing template matching operation on the second image by utilizing the neighborhood of the target pixel point, determining the similarity between the image area of the second image and the neighborhood, and determining the image area of the second image with the maximum similarity in the determined similarity as a matched image area;
determining selected pixel points in the neighborhood, and determining matching pixel points of the selected pixel points in the matching image region;
and generating a fourth matching result of the target pixel point based on the selected pixel point, the matching pixel point and the target pixel point.
7. The method of claim 1, wherein the method further comprises:
based on the generated matching result, a final matching result is generated.
8. The method of claim 7, wherein the first image is an image displayed by a first electronic device when a target web page is presented on the first electronic device, and the second image is an image displayed by a second electronic device when the target web page is presented on the second electronic device.
9. The method of claim 8, wherein the method further comprises:
and performing compatibility test on the website related to the target webpage based on the final matching result.
10. An apparatus for generating information, comprising:
the matching unit is configured to match each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a matching feature point pair set, wherein the first feature points are feature points of a target image region included in the first image, the target image region includes target pixel points, and the second feature points are feature points of the second image;
a first determining unit configured to determine the densities of the second feature points included in the matching feature point pair set distributed in the image region of the second image, and determine an image region corresponding to the largest density among the determined densities as a matching feature point dense region;
a second determining unit configured to determine a second feature point included in the matching feature point dense region;
a third determining unit, configured to determine a set of matching characteristic point pairs including the determined second characteristic point in the set of matching characteristic point pairs as a modified set of matching characteristic point pairs;
the first generating unit is configured to generate a first matching result of the target pixel point based on the corrected matching feature point pair set and the target pixel point;
and the fourth generation module is configured to determine a matching image region matched with the neighborhood of the target pixel point from the second image, and generate a fourth matching result of the target pixel point based on the selected pixel point in the neighborhood, the matching pixel point of the selected pixel point determined in the matching image region, and the target pixel point.
11. The apparatus of claim 10, wherein the apparatus further comprises:
a fourth determining unit, configured to take each pixel point in a neighborhood of the target pixel point in the first image as a first seed pixel point, and determine, by using a region growing algorithm, an aggregation region of first seed pixel points meeting a preset screening condition as a first region;
a fifth determining unit, configured to take each pixel point in the second image as a second seed pixel point, and determine, by using a region growing algorithm, an aggregation region of second seed pixel points meeting the preset screening condition as a second region;
a sixth determining unit, configured to determine a second area meeting at least one of the following matching conditions as a second area matching the first area: the difference between the filling degree of the second area and the filling degree of the first area is smaller than a preset filling degree threshold value; the difference between the aspect ratio of the second area and the aspect ratio of the first area is smaller than a preset aspect ratio threshold value; the similarity between the second area and the first area is greater than a preset first similarity threshold;
a seventh determining unit configured to determine a combination of the first region and a second region matched with the first region as a matching region pair;
and the second generating unit is configured to generate a second matching result of the target pixel point based on the matching region pair and the target pixel point.
12. The apparatus of claim 11, wherein the preset screening condition comprises at least one of:
the product of the first preset distance value, the height of the first image and the width of the first image is less than the number of pixels in the aggregation area;
the width of the aggregation area is smaller than the product of the width of the first image and a second preset distance value;
the height of the aggregation area is smaller than the product of the height of the first image and the second preset distance value.
13. The apparatus of one of claims 10-12, wherein a neighborhood of the target pixel point is presented with a first set of words and the second image is presented with a second set of words; and
the device further comprises:
an eighth determining unit, configured to determine, for each first word in the first word set, a second word in the second word set that matches the first word, and determine a combination of the first word and the second word that matches the first word as a matching word pair;
and the third generating unit is configured to generate a third matching result of the target pixel point based on the matching vocabulary pair and the target pixel point.
14. The apparatus of claim 13, wherein the eighth determining unit comprises:
a first determining module configured to determine four corner numbers of the first vocabulary;
a second determining module configured to determine similarity between each second vocabulary in the second vocabulary set and the first vocabulary;
and the third determining module is configured to determine a second vocabulary which has the same four-corner number as the first vocabulary and/or the largest similarity as the second vocabulary matched with the first vocabulary.
15. The apparatus of claim 10, wherein the fourth generating means comprises:
a ninth determining unit, configured to perform template matching operation on the second image by using a neighborhood of the target pixel point, determine similarity between an image region of the second image and the neighborhood, and determine an image region of the second image with the largest similarity among the determined similarities as a matching image region;
a tenth determining unit configured to determine a selected pixel point in the neighborhood and determine a matching pixel point of the selected pixel point in the matching image region;
and the fourth generating unit is configured to generate a fourth matching result of the target pixel point based on the selected pixel point, the matching pixel point and the target pixel point.
16. The apparatus of claim 10, wherein the apparatus further comprises:
a fifth generating unit configured to generate a final matching result based on the generated matching result.
17. The apparatus of claim 16, wherein the first image is an image displayed by a first electronic device when a target web page is presented on the first electronic device, and the second image is an image displayed by a second electronic device when the target web page is presented on the second electronic device.
18. The apparatus of claim 17, wherein the apparatus further comprises:
and the testing unit is configured to perform compatibility testing on the website related to the target webpage based on the final matching result.
19. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
20. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810088070.5A CN108182457B (en) | 2018-01-30 | 2018-01-30 | Method and apparatus for generating information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810088070.5A CN108182457B (en) | 2018-01-30 | 2018-01-30 | Method and apparatus for generating information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108182457A CN108182457A (en) | 2018-06-19 |
CN108182457B true CN108182457B (en) | 2022-01-28 |
Family
ID=62551752
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810088070.5A Active CN108182457B (en) | 2018-01-30 | 2018-01-30 | Method and apparatus for generating information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108182457B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6707305B2 (en) * | 2018-02-06 | 2020-06-10 | 日本電信電話株式会社 | Content determination device, content determination method, and program |
CN110648382B (en) * | 2019-09-30 | 2023-02-24 | 北京百度网讯科技有限公司 | Image generation method and device |
CN111079730B (en) * | 2019-11-20 | 2023-12-22 | 北京云聚智慧科技有限公司 | Method for determining area of sample graph in interface graph and electronic equipment |
CN112569591B (en) * | 2021-03-01 | 2021-05-18 | 腾讯科技(深圳)有限公司 | Data processing method, device and equipment and readable storage medium |
CN117351438B (en) * | 2023-10-24 | 2024-06-04 | 武汉无线飞翔科技有限公司 | Real-time vehicle position tracking method and system based on image recognition |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034235A (en) * | 2010-11-03 | 2011-04-27 | 山西大学 | Rotary model-based fisheye image quasi dense corresponding point matching diffusion method |
CN103020945A (en) * | 2011-09-21 | 2013-04-03 | 中国科学院电子学研究所 | Remote sensing image registration method of multi-source sensor |
CN103473565A (en) * | 2013-08-23 | 2013-12-25 | 华为技术有限公司 | Image matching method and device |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034235A (en) * | 2010-11-03 | 2011-04-27 | 山西大学 | Rotary model-based fisheye image quasi dense corresponding point matching diffusion method |
CN103020945A (en) * | 2011-09-21 | 2013-04-03 | 中国科学院电子学研究所 | Remote sensing image registration method of multi-source sensor |
CN103473565A (en) * | 2013-08-23 | 2013-12-25 | 华为技术有限公司 | Image matching method and device |
Non-Patent Citations (2)
Title |
---|
Dense Non-Rigid Point-Matching Using Random Projections; Raffay Hamid et al.; 2013 IEEE Conference on Computer Vision and Pattern Recognition; 2013-06-23; pp. 2914-2921 *
Research on Dense Matching Method Based on UAV Images (基于无人机图像的密集匹配方法研究); Cai Longzhou et al.; Science of Surveying and Mapping (《测绘科学》); 2013-09-30; Vol. 38, No. 5; pp. 126-128, 132 *
Also Published As
Publication number | Publication date |
---|---|
CN108182457A (en) | 2018-06-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |